WorldWideScience

Sample records for searching biomedical databases

  1. Sagace: A web-based search engine for biomedical databases in Japan

    Directory of Open Access Journals (Sweden)

    Morita Mizuki

    2012-10-01

Full Text Available Abstract Background In the big data era, biomedical research continues to generate a large amount of data, and the generated information is often stored in a database and made publicly available. Although combining data from multiple databases should accelerate further studies, the current number of life sciences databases is too large to grasp the features and contents of each database. Findings We have developed Sagace, a web-based search engine that enables users to retrieve information from a range of biological databases (such as gene expression profiles and proteomics data) and biological resource banks (such as mouse models of disease and cell lines). With Sagace, users can search more than 300 databases in Japan. Sagace offers features tailored to biomedical research, including manually tuned ranking, faceted navigation to refine search results, and rich snippets constructed with retrieved metadata for each database entry. Conclusions Sagace will be valuable for experts who are involved in biomedical research and drug development in both academia and industry. Sagace is freely available at http://sagace.nibio.go.jp/en/.

  2. Keyword Search in Databases

    CERN Document Server

    Yu, Jeffrey Xu; Chang, Lijun

    2009-01-01

    It has become highly desirable to provide users with flexible ways to query/search information over databases as simple as keyword search like Google search. This book surveys the recent developments on keyword search over databases, and focuses on finding structural information among objects in a database using a set of keywords. Such structural information to be returned can be either trees or subgraphs representing how the objects, that contain the required keywords, are interconnected in a relational database or in an XML database. The structural keyword search is completely different from

  3. On-line biomedical databases-the best source for quick search of the scientific information in the biomedicine.

    Science.gov (United States)

    Masic, Izet; Milinovic, Katarina

    2012-06-01

Most medical journals now have an electronic version available over public networks. Although printed and electronic versions may exist in parallel, the two forms need not be published simultaneously. The electronic version of a journal can be published a few weeks before the printed form and need not have identical content. The electronic form of a journal may include extensions that the printed form cannot contain, such as animations or 3D displays, and may offer the full text (mostly in PDF or XML format), only the table of contents, or a summary. Access to the full text is usually not free and can be obtained only if the institution (library or host) enters into an access agreement. Many medical journals, however, provide free access to some articles, or to the complete content after a certain time (6 months or a year). Network archives such as HighWire Press and FreeMedicalJournals.com help in finding such journals. PubMed and PubMed Central deserve special mention as the first public digital archives collecting freely available medical literature; they operate within the system of the National Library of Medicine in Bethesda (USA). There are also so-called online medical journals published only in electronic form, which can be searched through online databases. In this paper the authors briefly describe about 30 databases and give short instructions on how to access and search the papers published in indexed medical journals.

  4. Relational Databases and Biomedical Big Data.

    Science.gov (United States)

    de Silva, N H Nisansa D

    2017-01-01

In various biomedical applications that collect, handle, and manipulate data, the amounts of data tend to build up and venture into the range identified as big data. In such cases, a design decision has to be made as to what type of database should be used to handle the data. According to past research, the default and classical solution in the biomedical domain has more often than not been the relational database. While this was the norm for a long while, there is an evident trend to move away from relational databases in favor of other types and paradigms of databases. However, it is still of paramount importance to understand the interrelation between biomedical big data and relational databases. This chapter reviews the pros and cons of using relational databases to store biomedical big data, as discussed and applied in previous research.

  5. Search Databases and Statistics

    DEFF Research Database (Denmark)

    Refsgaard, Jan C; Munk, Stephanie; Jensen, Lars J

    2016-01-01

... having strengths and weaknesses that must be considered for the individual needs. These are reviewed in this chapter. Equally critical for generating highly confident output datasets is the application of sound statistical criteria to limit the inclusion of incorrect peptide identifications from database searches. Additionally, careful filtering and use of appropriate statistical tests on the output datasets affects the quality of all downstream analyses and interpretation of the data. Our considerations and general practices on these aspects of phosphoproteomics data processing are presented here.
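
    As an illustration of the kind of statistical filtering referred to above, the sketch below applies a simple target-decoy false discovery rate (FDR) cutoff to ranked peptide-spectrum matches. The scoring scheme, the function name filter_psms_by_fdr and the synthetic data are assumptions made for illustration; they are not the chapter's exact procedure.

```python
# Minimal target-decoy FDR filtering sketch for peptide-spectrum matches (PSMs).
# Illustrative only; scores and thresholds are synthetic.

def filter_psms_by_fdr(psms, fdr_threshold=0.01):
    """psms: list of (score, is_decoy) tuples, higher score = better match.
    Accepts PSMs from the top down until the running FDR estimate
    (decoys / targets) first exceeds the threshold; returns accepted targets."""
    ranked = sorted(psms, key=lambda p: p[0], reverse=True)
    accepted, targets, decoys = [], 0, 0
    for score, is_decoy in ranked:
        decoys += is_decoy
        targets += not is_decoy
        if decoys / max(targets, 1) > fdr_threshold:
            break
        accepted.append((score, is_decoy))
    return [p for p in accepted if not p[1]]

if __name__ == "__main__":
    import random
    random.seed(0)
    # Synthetic example: target PSMs tend to score higher than decoy PSMs
    psms = [(random.gauss(10, 2), False) for _ in range(900)] + \
           [(random.gauss(6, 2), True) for _ in range(900)]
    print(len(filter_psms_by_fdr(psms)), "target PSMs pass 1% FDR")
```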

  6. PubData: search engine for bioinformatics databases worldwide

    OpenAIRE

    Vand, Kasra; Wahlestedt, Thor; Khomtchouk, Kelly; Sayed, Mohammed; Wahlestedt, Claes; Khomtchouk, Bohdan

    2016-01-01

    We propose a search engine and file retrieval system for all bioinformatics databases worldwide. PubData searches biomedical data in a user-friendly fashion similar to how PubMed searches biomedical literature. PubData is built on novel network programming, natural language processing, and artificial intelligence algorithms that can patch into the file transfer protocol servers of any user-specified bioinformatics database, query its contents, retrieve files for download, and adapt to the use...

  7. NBIC: Search Ballast Report Database

    Science.gov (United States)

The Smithsonian Environmental Research Center and the US Coast Guard have developed an online database of ballast water reports that can be queried through the NBIC website. Data are accessible for all coastal states; Great Lakes records have been incorporated into the NBIC database as of August 2004. Information on data availability ...

  8. BIOMedical Search Engine Framework: Lightweight and customized implementation of domain-specific biomedical search engines.

    Science.gov (United States)

    Jácome, Alberto G; Fdez-Riverola, Florentino; Lourenço, Anália

    2016-07-01

Text mining and semantic analysis approaches can be applied to the construction of biomedical domain-specific search engines and provide an attractive alternative to create personalized and enhanced search experiences. Therefore, this work introduces the new open-source BIOMedical Search Engine Framework for the fast and lightweight development of domain-specific search engines. The rationale behind this framework is to incorporate core features typically available in search engine frameworks with flexible and extensible technologies to retrieve biomedical documents, annotate meaningful domain concepts, and develop highly customized Web search interfaces. The BIOMedical Search Engine Framework integrates taggers for major biomedical concepts, such as diseases, drugs, genes, proteins, compounds and organisms, and enables the use of domain-specific controlled vocabulary. Technologies from the Typesafe Reactive Platform, the AngularJS JavaScript framework and the Bootstrap HTML/CSS framework support the customization of the domain-oriented search application. Moreover, the RESTful API of the BIOMedical Search Engine Framework allows the integration of the search engine into existing systems or complete personalization of the web interface. The construction of the Smart Drug Search is described as a proof-of-concept of the BIOMedical Search Engine Framework. This public search engine catalogs scientific literature about antimicrobial resistance, microbial virulence and related topics. The keyword-based queries of the users are transformed into concepts and search results are presented and ranked accordingly. The semantic graph view portrays all the concepts found in the results, and the researcher may look into the relevance of different concepts, the strength of direct relations, and non-trivial, indirect relations. The number of occurrences of the concept shows its importance to the query, and the frequency of concept co-occurrence is indicative of biological relations

  9. Biomedical databases: protecting privacy and promoting research.

    Science.gov (United States)

    Wylie, Jean E; Mineau, Geraldine P

    2003-03-01

    When combined with medical information, large electronic databases of information that identify individuals provide superlative resources for genetic, epidemiology and other biomedical research. Such research resources increasingly need to balance the protection of privacy and confidentiality with the promotion of research. Models that do not allow the use of such individual-identifying information constrain research; models that involve commercial interests raise concerns about what type of access is acceptable. Researchers, individuals representing the public interest and those developing regulatory guidelines must be involved in an ongoing dialogue to identify practical models.

  10. Electronic biomedical literature search for budding researcher.

    Science.gov (United States)

Thakre, Subhash B; Thakre, Sushama S; Thakre, Amol D

    2013-09-01

Searching for specific and well-defined literature related to the subject of interest is the foremost step in research. Once we are familiar with the topic or subject, we can frame an appropriate research question, which is the basis for the study objectives and hypothesis. The Internet provides quick access to an overabundance of medical literature, in the form of primary, secondary and tertiary literature. It is accessible through journals, databases, dictionaries, textbooks, indexes, and e-journals, thereby allowing access to more varied, individualised, and systematic educational opportunities. A web search engine is a tool designed to search for information on the World Wide Web, which may be in the form of web pages, images, information, and other types of files. Search engines for internet-based searches of the medical literature include Google, Google Scholar, Scirus, Yahoo, etc., and databases include MEDLINE, PubMed, MEDLARS, etc. Several web libraries (National Library of Medicine, Cochrane, Web of Science, Medical Matrix, Emory Libraries) have been developed as meta-sites, providing useful links to health resources globally. A researcher must keep in mind the strengths and limitations of a particular search engine or database while searching for a particular type of data. Knowledge about the types of literature, the levels of evidence, and the features of a search engine, such as its user interface, ease of access, reputable content, and period of time covered, allows their optimal use and maximal utility in the field of medicine. Literature search is a dynamic and interactive process; there is no one way to conduct a search and there are many variables involved. It is suggested that a systematic literature search that uses the available electronic resources effectively is more likely to produce quality research.

  11. Database searches for qualitative research

    OpenAIRE

    Evans, David

    2002-01-01

    Interest in the role of qualitative research in evidence-based health care is growing. However, the methods currently used to identify quantitative research do not translate easily to qualitative research. This paper highlights some of the difficulties during searches of electronic databases for qualitative research. These difficulties relate to the descriptive nature of the titles used in some qualitative studies, the variable information provided in abstracts, and the differences in the ind...

  12. Database citation in full text biomedical articles.

    Science.gov (United States)

    Kafkas, Şenay; Kim, Jee-Hyub; McEntyre, Johanna R

    2013-01-01

    Molecular biology and literature databases represent essential infrastructure for life science research. Effective integration of these data resources requires that there are structured cross-references at the level of individual articles and biological records. Here, we describe the current patterns of how database entries are cited in research articles, based on analysis of the full text Open Access articles available from Europe PMC. Focusing on citation of entries in the European Nucleotide Archive (ENA), UniProt and Protein Data Bank, Europe (PDBe), we demonstrate that text mining doubles the number of structured annotations of database record citations supplied in journal articles by publishers. Many thousands of new literature-database relationships are found by text mining, since these relationships are also not present in the set of articles cited by database records. We recommend that structured annotation of database records in articles is extended to other databases, such as ArrayExpress and Pfam, entries from which are also cited widely in the literature. The very high precision and high-throughput of this text-mining pipeline makes this activity possible both accurately and at low cost, which will allow the development of new integrated data services.

  13. Database Search Engines: Paradigms, Challenges and Solutions.

    Science.gov (United States)

    Verheggen, Kenneth; Martens, Lennart; Berven, Frode S; Barsnes, Harald; Vaudel, Marc

    2016-01-01

    The first step in identifying proteins from mass spectrometry based shotgun proteomics data is to infer peptides from tandem mass spectra, a task generally achieved using database search engines. In this chapter, the basic principles of database search engines are introduced with a focus on open source software, and the use of database search engines is demonstrated using the freely available SearchGUI interface. This chapter also discusses how to tackle general issues related to sequence database searching and shows how to minimize their impact.

  14. Ontological interpretation of biomedical database content.

    Science.gov (United States)

    Santana da Silva, Filipe; Jansen, Ludger; Freitas, Fred; Schulz, Stefan

    2017-06-26

    Biological databases store data about laboratory experiments, together with semantic annotations, in order to support data aggregation and retrieval. The exact meaning of such annotations in the context of a database record is often ambiguous. We address this problem by grounding implicit and explicit database content in a formal-ontological framework. By using a typical extract from the databases UniProt and Ensembl, annotated with content from GO, PR, ChEBI and NCBI Taxonomy, we created four ontological models (in OWL), which generate explicit, distinct interpretations under the BioTopLite2 (BTL2) upper-level ontology. The first three models interpret database entries as individuals (IND), defined classes (SUBC), and classes with dispositions (DISP), respectively; the fourth model (HYBR) is a combination of SUBC and DISP. For the evaluation of these four models, we consider (i) database content retrieval, using ontologies as query vocabulary; (ii) information completeness; and, (iii) DL complexity and decidability. The models were tested under these criteria against four competency questions (CQs). IND does not raise any ontological claim, besides asserting the existence of sample individuals and relations among them. Modelling patterns have to be created for each type of annotation referent. SUBC is interpreted regarding maximally fine-grained defined subclasses under the classes referred to by the data. DISP attempts to extract truly ontological statements from the database records, claiming the existence of dispositions. HYBR is a hybrid of SUBC and DISP and is more parsimonious regarding expressiveness and query answering complexity. For each of the four models, the four CQs were submitted as DL queries. This shows the ability to retrieve individuals with IND, and classes in SUBC and HYBR. DISP does not retrieve anything because the axioms with disposition are embedded in General Class Inclusion (GCI) statements. Ambiguity of biological database content is

  15. Quantum search of a real unstructured database

    Science.gov (United States)

    Broda, Bogusław

    2016-02-01

    A simple circuit implementation of the oracle for Grover's quantum search of a real unstructured classical database is proposed. The oracle contains a kind of quantumly accessible classical memory, which stores the database.
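
    For background, the quadratic speedup that motivates such oracle constructions follows from the standard Grover analysis (textbook background, not a result specific to this paper): for a database of N items containing M marked entries, the optimal number of oracle queries is approximately

```latex
% Standard Grover iteration count (background, not taken from the cited paper)
k \approx \left\lfloor \frac{\pi}{4}\sqrt{\frac{N}{M}} \right\rfloor
% compared with the O(N) queries needed by a classical unstructured search.
```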

  16. Discovering gene annotations in biomedical text databases

    Directory of Open Access Journals (Sweden)

    Ozsoyoglu Gultekin

    2008-03-01

Full Text Available Abstract Background Genes and gene products are frequently annotated with Gene Ontology concepts based on the evidence provided in genomics articles. Manually locating and curating information about a genomic entity from the biomedical literature requires vast amounts of human effort. Hence, there is clearly a need for automated computational tools to annotate genes and gene products with Gene Ontology concepts by computationally capturing the related knowledge embedded in textual data. Results In this article, we present an automated genomic entity annotation system, GEANN, which extracts information about the characteristics of genes and gene products in article abstracts from PubMed, and translates the discovered knowledge into Gene Ontology (GO) concepts, a widely-used standardized vocabulary of genomic traits. GEANN utilizes textual "extraction patterns" and a semantic matching framework to locate phrases matching a pattern and produce Gene Ontology annotations for genes and gene products. In our experiments, GEANN reached a precision level of 78% at a recall level of 61%. On a select set of Gene Ontology concepts, GEANN either outperforms or is comparable to two other automated annotation studies. Use of WordNet for semantic pattern matching improves the precision and recall by 24% and 15%, respectively, and the improvement due to semantic pattern matching becomes more apparent as the Gene Ontology terms become more general. Conclusion GEANN is useful for two distinct purposes: (i) automating the annotation of genomic entities with Gene Ontology concepts, and (ii) providing existing annotations with additional "evidence articles" from the literature. The use of textual extraction patterns that are constructed based on the existing annotations achieves high precision. The semantic pattern matching framework provides a more flexible pattern matching scheme with respect to "exact matching", with the advantage of locating approximate

  17. African Journal of Biomedical Research: Advanced Search

    African Journals Online (AJOL)

    Search tips: Search terms are case-insensitive; Common words are ignored; By default only articles containing all terms in the query are returned (i.e., AND is implied); Combine multiple words with OR to find articles containing either term; e.g., education OR research; Use parentheses to create more complex queries; e.g., ...

  18. Annals of Biomedical Sciences: Advanced Search

    African Journals Online (AJOL)

    Search tips: Search terms are case-insensitive; Common words are ignored; By default only articles containing all terms in the query are returned (i.e., AND is implied); Combine multiple words with OR to find articles containing either term; e.g., education OR research; Use parentheses to create more complex queries; e.g., ...

  19. Egyptian Journal of Biomedical Sciences: Advanced Search

    African Journals Online (AJOL)

    Search tips: Search terms are case-insensitive; Common words are ignored; By default only articles containing all terms in the query are returned (i.e., AND is implied); Combine multiple words with OR to find articles containing either term; e.g., education OR research; Use parentheses to create more complex queries; e.g., ...

  20. Interactive searching of facial image databases

    Science.gov (United States)

    Nicholls, Robert A.; Shepherd, John W.; Shepherd, Jean

    1995-09-01

A set of psychological facial descriptors has been devised to enable computerized searching of criminal photograph albums. The descriptors have been used to encode image databases of up to twelve thousand images. Using a system called FACES, the databases are searched by translating a witness' verbal description into corresponding facial descriptors. Trials of FACES have shown that this coding scheme is more productive and efficient than searching traditional photograph albums. An alternative method of searching the encoded database using a genetic algorithm is currently being tested. The genetic search method does not require the witness to verbalize a description of the target but merely to indicate a degree of similarity between the target and a limited selection of images from the database. The major drawback of FACES is that it requires manual encoding of images. Research is being undertaken to automate the process; however, it will require an algorithm which can predict human descriptive values. Alternatives to human-derived coding schemes exist using statistical classifications of images. Since databases encoded using statistical classifiers do not have an obvious direct mapping to human-derived descriptors, a search method which does not require the entry of human descriptors is required. A genetic search algorithm is being tested for such a purpose.
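
    A minimal sketch of how such a similarity-feedback genetic search might iterate over an encoded descriptor database is given below. The descriptor representation, population size, crossover and mutation operators, and the rate_similarity callback standing in for the witness are all illustrative assumptions, not the FACES implementation. Each generation keeps the two candidates the witness rates most similar, recombines them, and snaps the offspring back to real database images.

```python
# Illustrative genetic search over encoded facial descriptors driven by witness feedback.
import random

def genetic_search(database, rate_similarity, pop_size=6, generations=20):
    """database: list of equal-length numeric descriptor vectors.
    rate_similarity(vector) -> similarity score supplied interactively by the witness."""
    population = random.sample(database, pop_size)
    for _ in range(generations):
        ranked = sorted(population, key=rate_similarity, reverse=True)
        parents = ranked[:2]                       # candidates rated most similar so far
        children = []
        while len(children) < pop_size - len(parents):
            child = [random.choice(pair) for pair in zip(*parents)]   # uniform crossover
            if random.random() < 0.2:                                 # occasional mutation
                i = random.randrange(len(child))
                child[i] = random.choice(database)[i]
            # snap the synthetic child back to the closest real image in the database
            children.append(min(database,
                                key=lambda v: sum((x - y) ** 2 for x, y in zip(v, child))))
        population = parents + children
    return max(population, key=rate_similarity)
```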

  1. Where to search top-K biomedical ontologies?

    Science.gov (United States)

    Oliveira, Daniela; Butt, Anila Sahar; Haller, Armin; Rebholz-Schuhmann, Dietrich; Sahay, Ratnesh

    2018-03-20

    Searching for precise terms and terminological definitions in the biomedical data space is problematic, as researchers find overlapping, closely related and even equivalent concepts in a single or multiple ontologies. Search engines that retrieve ontological resources often suggest an extensive list of search results for a given input term, which leads to the tedious task of selecting the best-fit ontological resource (class or property) for the input term and reduces user confidence in the retrieval engines. A systematic evaluation of these search engines is necessary to understand their strengths and weaknesses in different search requirements. We have implemented seven comparable Information Retrieval ranking algorithms to search through ontologies and compared them against four search engines for ontologies. Free-text queries have been performed, the outcomes have been judged by experts and the ranking algorithms and search engines have been evaluated against the expert-based ground truth (GT). In addition, we propose a probabilistic GT that is developed automatically to provide deeper insights and confidence to the expert-based GT as well as evaluating a broader range of search queries. The main outcome of this work is the identification of key search factors for biomedical ontologies together with search requirements and a set of recommendations that will help biomedical experts and ontology engineers to select the best-suited retrieval mechanism in their search scenarios. We expect that this evaluation will allow researchers and practitioners to apply the current search techniques more reliably and that it will help them to select the right solution for their daily work. The source code (of seven ranking algorithms), ground truths and experimental results are available at https://github.com/danielapoliveira/bioont-search-benchmark.

  2. Fast Structural Search in Phylogenetic Databases

    Directory of Open Access Journals (Sweden)

    William H. Piel

    2005-01-01

Full Text Available As the size of phylogenetic databases grows, the need for efficiently searching these databases arises. Thanks to previous and ongoing research, searching by attribute value and by text has become commonplace in these databases. However, searching by topological or physical structure, especially for large databases and especially for approximate matches, is still an art. We propose structural search techniques that, given a query or pattern tree P and a database of phylogenies D, find trees in D that are sufficiently close to P. The “closeness” is a measure of the topological relationships in P that are found to be the same or similar in a tree in D. We develop a filtering technique that accelerates searches and present algorithms for rooted and unrooted trees where the trees can be weighted or unweighted. Experimental results on comparing the similarity measure with existing tree metrics and on evaluating the efficiency of the search techniques demonstrate that the proposed approach is promising.

  3. The comparative recall of Google Scholar versus PubMed in identical searches for biomedical systematic reviews: a review of searches used in systematic reviews.

    NARCIS (Netherlands)

    W.M. Bramer (Wichor); D. Giustini (Dean); B.M.R. Kramer (Bianca); P.F. Anderson (Patricia)

    2013-01-01

The usefulness of Google Scholar (GS) as a bibliographic database for biomedical systematic review (SR) searching is a subject of current interest and debate in research circles. Recent research has suggested GS might even be used alone in SR searching. This assertion is challenged here

  4. BOSS: context-enhanced search for biomedical objects

    Directory of Open Access Journals (Sweden)

    Choi Jaehoon

    2012-04-01

Full Text Available Abstract Background There exist many academic search solutions, and most of them can be placed at either end of a spectrum: general-purpose search and domain-specific "deep" search systems. The general-purpose search systems, such as PubMed, offer a flexible query interface but churn out a list of matching documents that users have to go through in order to find the answers to their queries. On the other hand, the "deep" search systems, such as PPI Finder and iHOP, return precompiled results in a structured way. Their results, however, are often found only within some predefined contexts. In order to alleviate these problems, we introduce a new search engine, BOSS, the Biomedical Object Search System. Methods Unlike conventional search systems, BOSS indexes segments rather than documents. A segment refers to a Maximal Coherent Semantic Unit (MCSU), such as a phrase, clause or sentence, that is semantically coherent in the given context (e.g., biomedical objects or their relations). For a user query, BOSS finds all matching segments, identifies the objects appearing in those segments, and aggregates the segments for each object. Finally, it returns the ranked list of the objects along with their matching segments. Results The working prototype of BOSS is available at http://boss.korea.ac.kr. The current version of BOSS has indexed abstracts of more than 20 million articles published during the last 16 years, from 1996 to 2011, across all science disciplines. Conclusion BOSS fills the gap between the two ends of the spectrum by allowing users to pose context-free queries and by returning a structured set of results. Furthermore, BOSS exhibits good scalability, just as with conventional document search engines, because it is designed to use a standard document-indexing model with minimal modifications. Considering these features, BOSS notches up the technological level of traditional solutions for search on biomedical information.
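
    The sketch below illustrates the segment-oriented retrieval idea described above: documents are split into sentence-level segments, segments that match the query are grouped by the biomedical objects they mention, and objects are ranked by their aggregated matches. The toy object dictionary, tokenization and ranking are illustrative assumptions, not the BOSS tagger or index.

```python
# Toy segment-based object search: match segments, group by mentioned objects, rank objects.
from collections import defaultdict
import re

OBJECTS = {"tp53", "brca1", "aspirin"}   # hypothetical dictionary of biomedical objects

def segments(doc):
    """Split a document into sentence-level segments (stand-in for MCSU detection)."""
    return [s.strip() for s in re.split(r"[.!?]", doc) if s.strip()]

def search(docs, query):
    q_terms = set(query.lower().split())
    hits = defaultdict(list)                          # object -> matching segments
    for doc in docs:
        for seg in segments(doc):
            tokens = set(re.findall(r"\w+", seg.lower()))
            if q_terms & tokens:                      # the segment matches the query
                for obj in tokens & OBJECTS:          # objects mentioned in the segment
                    hits[obj].append(seg)
    # rank objects by how many matching segments mention them
    return sorted(hits.items(), key=lambda kv: len(kv[1]), reverse=True)

if __name__ == "__main__":
    docs = ["TP53 mutations confer resistance to aspirin in some tumours. BRCA1 is unrelated here."]
    print(search(docs, "aspirin resistance"))
```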

  5. Phonetic search methods for large speech databases

    CERN Document Server

    Moyal, Ami; Tetariy, Ella; Gishri, Michal

    2013-01-01

“Phonetic Search Methods for Large Databases” focuses on Keyword Spotting (KWS) within large speech databases. The brief begins by outlining the challenges associated with Keyword Spotting within large speech databases using dynamic keyword vocabularies. It then continues by highlighting the various market segments in need of KWS solutions, as well as the specific requirements of each market segment. The work also includes a detailed description of the complexity of the task and the different methods that are used, including the advantages and disadvantages of each method and an in-depth comparison. The main focus is on the Phonetic Search method and its efficient implementation. This includes a literature review of the various methods used for the efficient implementation of Phonetic Search Keyword Spotting, with an emphasis on the authors’ own research, which entails a comparative analysis of the Phonetic Search method including algorithmic details. This brief is useful for resea...

  6. Biomedical journals and databases in Russia and Russian language in the former Soviet Union and beyond

    Directory of Open Access Journals (Sweden)

    Danishevskiy Kirill D

    2008-09-01

Full Text Available Abstract In the 20th century, Russian biomedical science declined from the blossoming of its early years to a dire state. Through the first decades of the USSR, it was transformed to suit the ideological requirements of a totalitarian state and the biased directives of communist leaders. Later, depressing economic conditions and isolation from the international research community further impeded its development. Contemporary Russia has inherited a system of medical education quite different from the West, as well as counterproductive regulations for the allocation of research funding. The methodology of medical and epidemiological research in Russia is largely outdated. Epidemiology continues to focus on infectious disease, and the results of the best studies tend to be published in international periodicals. MEDLINE continues to be the best database for searching Russian biomedical publications, despite only a small proportion being indexed. The database of the Moscow Central Medical Library is the largest national database of medical periodicals, but it does not provide abstracts or full subject heading codes, and it does not cover even the entire collection of the Library. New databases and catalogs (e.g. Panteleimon) that have appeared recently are incomplete and do not enable effective searching.

  7. BEST: Next-Generation Biomedical Entity Search Tool for Knowledge Discovery from Biomedical Literature.

    Directory of Open Access Journals (Sweden)

    Sunwon Lee

    Full Text Available As the volume of publications rapidly increases, searching for relevant information from the literature becomes more challenging. To complement standard search engines such as PubMed, it is desirable to have an advanced search tool that directly returns relevant biomedical entities such as targets, drugs, and mutations rather than a long list of articles. Some existing tools submit a query to PubMed and process retrieved abstracts to extract information at query time, resulting in a slow response time and limited coverage of only a fraction of the PubMed corpus. Other tools preprocess the PubMed corpus to speed up the response time; however, they are not constantly updated, and thus produce outdated results. Further, most existing tools cannot process sophisticated queries such as searches for mutations that co-occur with query terms in the literature. To address these problems, we introduce BEST, a biomedical entity search tool. BEST returns, as a result, a list of 10 different types of biomedical entities including genes, diseases, drugs, targets, transcription factors, miRNAs, and mutations that are relevant to a user's query. To the best of our knowledge, BEST is the only system that processes free text queries and returns up-to-date results in real time including mutation information in the results. BEST is freely accessible at http://best.korea.ac.kr.

  8. An overview of biomedical literature search on the World Wide Web in the third millennium.

    Science.gov (United States)

    Kumar, Prince; Goel, Roshni; Jain, Chandni; Kumar, Ashish; Parashar, Abhishek; Gond, Ajay Ratan

    2012-06-01

    Complete access to the existing pool of biomedical literature and the ability to "hit" upon the exact information of the relevant specialty are becoming essential elements of academic and clinical expertise. With the rapid expansion of the literature database, it is almost impossible to keep up to date with every innovation. Using the Internet, however, most people can freely access this literature at any time, from almost anywhere. This paper highlights the use of the Internet in obtaining valuable biomedical research information, which is mostly available from journals, databases, textbooks and e-journals in the form of web pages, text materials, images, and so on. The authors present an overview of web-based resources for biomedical researchers, providing information about Internet search engines (e.g., Google), web-based bibliographic databases (e.g., PubMed, IndMed) and how to use them, and other online biomedical resources that can assist clinicians in reaching well-informed clinical decisions.

  9. WGDB: Wood Gene Database with search interface.

    Science.gov (United States)

    Goyal, Neha; Ginwal, H S

    2014-01-01

Wood quality can be defined in terms of a particular end use involving several traits. Over the last fifteen years researchers have assessed wood quality traits in forest trees. Wood quality has been categorized into cell wall biochemical traits and fibre properties, including the microfibril angle, density and stiffness, in loblolly pine [1]. A user-friendly, open-access database named Wood Gene Database (WGDB) has been developed for describing wood genes together with protein information and published research articles. It contains 720 wood genes from species including Pinus and deodar, and the fast-growing trees poplar and Eucalyptus. WGDB is designed to encompass the majority of publicly accessible genes coding for cellulose, hemicellulose and lignin in tree species that are responsive to wood formation and quality. It is an interactive platform for collecting, managing and searching specific wood genes; it also enables data mining of genomic information, specifically in Arabidopsis thaliana, Populus trichocarpa, Eucalyptus grandis, Pinus taeda, Pinus radiata, Cedrus deodara and Cedrus atlantica. For user convenience, the database is cross-linked with the public databases NCBI, EMBL and Dendrome and with the Google search engine to make it more informative, and it provides the bioinformatics tools BLAST and COBALT. The database is freely available at www.wgdb.in.

  10. Protein structure database search and evolutionary classification.

    Science.gov (United States)

    Yang, Jinn-Moon; Tung, Chi-Hua

    2006-01-01

    As more protein structures become available and structural genomics efforts provide structural models in a genome-wide strategy, there is a growing need for fast and accurate methods for discovering homologous proteins and evolutionary classifications of newly determined structures. We have developed 3D-BLAST, in part, to address these issues. 3D-BLAST is as fast as BLAST and calculates the statistical significance (E-value) of an alignment to indicate the reliability of the prediction. Using this method, we first identified 23 states of the structural alphabet that represent pattern profiles of the backbone fragments and then used them to represent protein structure databases as structural alphabet sequence databases (SADB). Our method enhanced BLAST as a search method, using a new structural alphabet substitution matrix (SASM) to find the longest common substructures with high-scoring structured segment pairs from an SADB database. Using personal computers with Intel Pentium4 (2.8 GHz) processors, our method searched more than 10 000 protein structures in 1.3 s and achieved a good agreement with search results from detailed structure alignment methods. [3D-BLAST is available at http://3d-blast.life.nctu.edu.tw].

  11. Audio stream classification for multimedia database search

    Science.gov (United States)

    Artese, M.; Bianco, S.; Gagliardi, I.; Gasparini, F.

    2013-03-01

Search and retrieval of huge archives of multimedia data is a challenging task. A classification step is often used to reduce the number of entries on which to perform the subsequent search. In particular, when new entries are continuously added to the database, a fast classification based on simple threshold evaluation is desirable. In this work we present a CART-based (Classification And Regression Tree [1]) classification framework for audio streams belonging to multimedia databases. The database considered is the Archive of Ethnography and Social History (AESS) [2], which is mainly composed of popular songs and other audio records describing popular traditions handed down generation by generation, such as traditional fairs and customs. The peculiarities of this database are that it is continuously updated, the audio recordings are acquired in unconstrained environments, and it is difficult for a non-expert human user to create the ground-truth labels. In our experiments, half of all the available audio files were randomly extracted and used as the training set. The remaining ones were used as the test set. The classifier was trained to distinguish among three different classes: speech, music, and song. All the audio files in the dataset had previously been manually labeled into the three classes defined above by domain experts.
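
    A minimal sketch of such a three-class, threshold-based audio classifier is given below, using scikit-learn's CART implementation. The random placeholder features, the 50/50 train/test split and all parameter values are assumptions standing in for the real audio descriptors and expert labels used in the paper.

```python
# CART-style three-class audio classification sketch with placeholder features.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
CLASSES = ["speech", "music", "song"]

# Placeholder feature matrix: 300 clips x 20 descriptors, with random labels;
# in practice these would be real audio features extracted from the recordings.
X = rng.normal(size=(300, 20))
y = rng.choice(CLASSES, size=300)

# Half of the files for training, half for testing, as in the experimental setup above
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

clf = DecisionTreeClassifier(max_depth=5, random_state=0)   # simple threshold-based tree
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```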

  12. Textpresso Central: a customizable platform for searching, text mining, viewing, and curating biomedical literature.

    Science.gov (United States)

    Müller, H-M; Van Auken, K M; Li, Y; Sternberg, P W

    2018-03-09

    The biomedical literature continues to grow at a rapid pace, making the challenge of knowledge retrieval and extraction ever greater. Tools that provide a means to search and mine the full text of literature thus represent an important way by which the efficiency of these processes can be improved. We describe the next generation of the Textpresso information retrieval system, Textpresso Central (TPC). TPC builds on the strengths of the original system by expanding the full text corpus to include the PubMed Central Open Access Subset (PMC OA), as well as the WormBase C. elegans bibliography. In addition, TPC allows users to create a customized corpus by uploading and processing documents of their choosing. TPC is UIMA compliant, to facilitate compatibility with external processing modules, and takes advantage of Lucene indexing and search technology for efficient handling of millions of full text documents. Like Textpresso, TPC searches can be performed using keywords and/or categories (semantically related groups of terms), but to provide better context for interpreting and validating queries, search results may now be viewed as highlighted passages in the context of full text. To facilitate biocuration efforts, TPC also allows users to select text spans from the full text and annotate them, create customized curation forms for any data type, and send resulting annotations to external curation databases. As an example of such a curation form, we describe integration of TPC with the Noctua curation tool developed by the Gene Ontology (GO) Consortium. Textpresso Central is an online literature search and curation platform that enables biocurators and biomedical researchers to search and mine the full text of literature by integrating keyword and category searches with viewing search results in the context of the full text. It also allows users to create customized curation interfaces, use those interfaces to make annotations linked to supporting evidence statements

  13. The comparative recall of Google Scholar versus PubMed in identical searches for biomedical systematic reviews: a review of searches used in systematic reviews

    OpenAIRE

    Bramer, Wichor M; Giustini, Dean; Kramer, Bianca MR; Anderson, PF

    2013-01-01

    Background The usefulness of Google Scholar (GS) as a bibliographic database for biomedical systematic review (SR) searching is a subject of current interest and debate in research circles. Recent research has suggested GS might even be used alone in SR searching. This assertion is challenged here by testing whether GS can locate all studies included in 21 previously published SRs. Second, it examines the recall of GS, taking into account the maximum number of items that can be viewed, and te...

  14. Searching the online biomedical literature from developing countries

    African Journals Online (AJOL)

    Administrator

    This commentary highlights popular research literature databases and the use of the internet to obtain valuable research information. These literature retrieval methods include the use of the popular. PubMed as well as internet search engines. Specific websites catering to developing countries' information and journals' ...

  15. Searching the online biomedical literature from developing countries ...

    African Journals Online (AJOL)

    This commentary highlights popular research literature databases and the use of the internet to obtain valuable research information. These literature retrieval methods include the use of the popular PubMed as well as internet search engines. Specific websites catering to developing countries' information and journals' ...

  16. Search pattern of databases by the undergraduate students of ...

    African Journals Online (AJOL)

The main objective of this study is to assess the awareness and search pattern of databases in order to determine the extent to which users are aware of and search for databases, by examining the relationship between their awareness and search patterns of databases and their information literacy skills. The methodology ...

  17. Winnowing sequences from a database search.

    Science.gov (United States)

    Berman, P; Zhang, Z; Wolf, Y I; Koonin, E V; Miller, W

    2000-01-01

    In database searches for sequence similarity, matches to a distinct sequence region (e.g., protein domain) are frequently obscured by numerous matches to another region of the same sequence. In order to cope with this problem, algorithms are developed to discard redundant matches. One model for this problem begins with a list of intervals, each with an associated score; each interval gives the range of positions in the query sequence that align to a database sequence, and the score is that of the alignment. If interval I is contained in interval J, and I's score is less than J's, then I is said to be dominated by J. The problem is then to identify each interval that is dominated by at least K other intervals, where K is a given level of "tolerable redundancy." An algorithm is developed to solve the problem in O(N log N) time and O(N*) space, where N is the number of intervals and N* is a precisely defined value that never exceeds N and is frequently much smaller. This criterion for discarding database hits has been implemented in the Blast program, as illustrated herein with examples. Several variations and extensions of this approach are also described.
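
    The dominance criterion described above translates directly into code. The sketch below is a straightforward quadratic implementation for illustration; the paper's algorithm computes the same result in O(N log N) time and O(N*) space.

```python
# Direct (quadratic) implementation of the interval-dominance winnowing criterion.

def winnow(intervals, k):
    """intervals: list of (start, end, score) for hits against one query sequence.
    Keeps every interval dominated by fewer than k other intervals."""
    kept = []
    for i, (s1, e1, sc1) in enumerate(intervals):
        dominators = 0
        for j, (s2, e2, sc2) in enumerate(intervals):
            if i == j:
                continue
            # interval i is dominated by j if it is contained in j and has a lower score
            if s2 <= s1 and e1 <= e2 and sc1 < sc2:
                dominators += 1
        if dominators < k:
            kept.append((s1, e1, sc1))
    return kept

if __name__ == "__main__":
    hits = [(10, 90, 300), (12, 80, 120), (15, 70, 110), (200, 260, 95)]
    # With k=1, the two low-scoring matches nested inside (10, 90) are discarded
    print(winnow(hits, k=1))   # -> [(10, 90, 300), (200, 260, 95)]
```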

  18. Design and implementation of a biomedical image database (BDIM).

    Science.gov (United States)

    Aubry, F; Badaoui, S; Kaplan, H; Di Paola, R

    1988-01-01

We developed a biomedical image database (BDIM) which proposes a standardized representation of value arrays, such as images and curves, and of their associated parameters, independently of their acquisition mode, in order to make their transmission and processing easier. It includes three kinds of user-oriented interactions. The network concept was kept as a constraint so that the BDIM could be incorporated into a distributed structure, and we maintained compatibility with the ACR/NEMA communication protocol. The management of arrays and their associated parameters involves two distinct bases of objects, linked together via a gateway. The first one manages arrays according to their storage mode: long-term storage on optionally on-line mass storage devices and, for consultation, partial copies of long-term stored arrays on hard disk. The second one manages the associated parameters and the gateway by means of the relational DBMS ORACLE. Parameters are grouped into relations, some of which agree with the groups defined by the ACR/NEMA. The other relations describe objects resulting from processing of the initial objects. These new objects are not described by the ACR/NEMA, but they can be inserted as shadow groups of the ACR/NEMA description. The relations describing the storage, together with their pathnames, constitute the gateway. ORACLE distributed tools and the two-level storage technique will allow the integration of the BDIM into a distributed structure. The query and array retrieval module (for single arrays or sequences) has access to the relations via a layer that includes a dictionary managed by ORACLE. This dictionary translates ACR/NEMA objects into objects that can be handled by the DBMS. (ABSTRACT TRUNCATED AT 250 WORDS)

  19. PubMed and beyond: a survey of web tools for searching biomedical literature

    Science.gov (United States)

    Lu, Zhiyong

    2011-01-01

    The past decade has witnessed the modern advances of high-throughput technology and rapid growth of research capacity in producing large-scale biological data, both of which were concomitant with an exponential growth of biomedical literature. This wealth of scholarly knowledge is of significant importance for researchers in making scientific discoveries and healthcare professionals in managing health-related matters. However, the acquisition of such information is becoming increasingly difficult due to its large volume and rapid growth. In response, the National Center for Biotechnology Information (NCBI) is continuously making changes to its PubMed Web service for improvement. Meanwhile, different entities have devoted themselves to developing Web tools for helping users quickly and efficiently search and retrieve relevant publications. These practices, together with maturity in the field of text mining, have led to an increase in the number and quality of various Web tools that provide comparable literature search service to PubMed. In this study, we review 28 such tools, highlight their respective innovations, compare them to the PubMed system and one another, and discuss directions for future development. Furthermore, we have built a website dedicated to tracking existing systems and future advances in the field of biomedical literature search. Taken together, our work serves information seekers in choosing tools for their needs and service providers and developers in keeping current in the field. Database URL: http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/search PMID:21245076

  20. PubMed and beyond: a survey of web tools for searching biomedical literature.

    Science.gov (United States)

    Lu, Zhiyong

    2011-01-01

    The past decade has witnessed the modern advances of high-throughput technology and rapid growth of research capacity in producing large-scale biological data, both of which were concomitant with an exponential growth of biomedical literature. This wealth of scholarly knowledge is of significant importance for researchers in making scientific discoveries and healthcare professionals in managing health-related matters. However, the acquisition of such information is becoming increasingly difficult due to its large volume and rapid growth. In response, the National Center for Biotechnology Information (NCBI) is continuously making changes to its PubMed Web service for improvement. Meanwhile, different entities have devoted themselves to developing Web tools for helping users quickly and efficiently search and retrieve relevant publications. These practices, together with maturity in the field of text mining, have led to an increase in the number and quality of various Web tools that provide comparable literature search service to PubMed. In this study, we review 28 such tools, highlight their respective innovations, compare them to the PubMed system and one another, and discuss directions for future development. Furthermore, we have built a website dedicated to tracking existing systems and future advances in the field of biomedical literature search. Taken together, our work serves information seekers in choosing tools for their needs and service providers and developers in keeping current in the field. Database URL: http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/search.

  1. WAIS Searching of the Current Contents Database

    Science.gov (United States)

    Banholzer, P.; Grabenstein, M. E.

The Homer E. Newell Memorial Library of NASA's Goddard Space Flight Center is developing capabilities to permit Goddard personnel to access electronic resources of the Library via the Internet. The Library's support services contractor, Maxima Corporation, and their subcontractor, SANAD Support Technologies, have recently developed a World Wide Web home page (http://www-library.gsfc.nasa.gov) to provide the primary means of access. The first searchable database to be made available through the home page to Goddard employees is Current Contents, from the Institute for Scientific Information (ISI). The initial implementation covers articles from the last few months of 1992 to the present. These records are augmented with abstracts and references, and often are more robust than equivalent records in bibliographic databases that currently serve the astronomical community. Maxima/SANAD selected Wais Incorporated's WAIS product with which to build the interface to Current Contents. This system allows access from Macintosh, IBM PC, and Unix hosts, which is an important feature for Goddard's multiplatform environment. The forms interface is structured to allow both fielded (author, article title, journal name, id number, keyword, subject term, and citation) and unfielded WAIS searches. The system allows a user to: retrieve individual journal article records; retrieve the table of contents of specific issues of journals; connect to articles with similar subject terms or keywords; connect to other issues of the same journal in the same year; and browse journal issues from an alphabetical list of indexed journal names.

  2. KaBOB: ontology-based semantic integration of biomedical databases.

    Science.gov (United States)

    Livingston, Kevin M; Bada, Michael; Baumgartner, William A; Hunter, Lawrence E

    2015-04-23

    The ability to query many independent biological databases using a common ontology-based semantic model would facilitate deeper integration and more effective utilization of these diverse and rapidly growing resources. Despite ongoing work moving toward shared data formats and linked identifiers, significant problems persist in semantic data integration in order to establish shared identity and shared meaning across heterogeneous biomedical data sources. We present five processes for semantic data integration that, when applied collectively, solve seven key problems. These processes include making explicit the differences between biomedical concepts and database records, aggregating sets of identifiers denoting the same biomedical concepts across data sources, and using declaratively represented forward-chaining rules to take information that is variably represented in source databases and integrating it into a consistent biomedical representation. We demonstrate these processes and solutions by presenting KaBOB (the Knowledge Base Of Biomedicine), a knowledge base of semantically integrated data from 18 prominent biomedical databases using common representations grounded in Open Biomedical Ontologies. An instance of KaBOB with data about humans and seven major model organisms can be built using on the order of 500 million RDF triples. All source code for building KaBOB is available under an open-source license. KaBOB is an integrated knowledge base of biomedical data representationally based in prominent, actively maintained Open Biomedical Ontologies, thus enabling queries of the underlying data in terms of biomedical concepts (e.g., genes and gene products, interactions and processes) rather than features of source-specific data schemas or file formats. KaBOB resolves many of the issues that routinely plague biomedical researchers intending to work with data from multiple data sources and provides a platform for ongoing data integration and development and for

  3. Development and Evaluation of Thesauri-Based Bibliographic Biomedical Search Engine

    Science.gov (United States)

    Alghoson, Abdullah

    2017-01-01

    Due to the large volume and exponential growth of biomedical documents (e.g., books, journal articles), it has become increasingly challenging for biomedical search engines to retrieve relevant documents based on users' search queries. Part of the challenge is the matching mechanism of free-text indexing that performs matching based on…

  4. Supporting inter-topic entity search for biomedical Linked Data based on heterogeneous relationships.

    Science.gov (United States)

    Zong, Nansu; Lee, Sungin; Ahn, Jinhyun; Kim, Hong-Gee

    2017-08-01

A keyword-based entity search restricts the search space based on the search preference. When the given keywords and preferences are not related to the same biomedical topic, existing biomedical Linked Data search engines fail to deliver satisfactory results. This research aims to tackle this issue by supporting an inter-topic search: improving search when the inputs, keywords and preferences, fall under different topics. This study developed an effective algorithm in which the relations between biomedical entities are used in tandem with a keyword-based entity search, Siren. The algorithm, PERank, which is an adaptation of Personalized PageRank (PPR), uses a pair of inputs, (1) search preferences and (2) entities from a keyword-based entity search with a keyword query, to formalize the search results on-the-fly based on the index of the precomputed Individual Personalized PageRank Vectors (IPPVs). Our experiments were performed over ten linked life datasets for two query sets, one with keyword-preference topic correspondence (intra-topic search) and the other without (inter-topic search). The experiments showed that the proposed method achieved better search results, for example a 14% increase in precision for the inter-topic search over the baseline keyword-based search engine. The proposed method improved keyword-based biomedical entity search by supporting the inter-topic search, without affecting the intra-topic search, based on the relations between different entities. Copyright © 2017 Elsevier Ltd. All rights reserved.
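
    As background for the Personalized PageRank adaptation mentioned above, here is a minimal generic PPR computation by power iteration over a toy entity graph; it is not the PERank algorithm and does not use precomputed IPPVs.

```python
# Generic Personalized PageRank by power iteration (illustrative background only).

def personalized_pagerank(graph, preference, alpha=0.85, iters=50):
    """graph: {node: [neighbours]}; preference: {node: weight} restart distribution."""
    nodes = list(graph)
    total = sum(preference.get(n, 0.0) for n in nodes) or 1.0
    restart = {n: preference.get(n, 0.0) / total for n in nodes}
    rank = dict(restart)
    for _ in range(iters):
        new = {n: (1 - alpha) * restart[n] for n in nodes}
        for n in nodes:
            out = graph[n]
            if not out:
                continue                      # dangling node: its mass is simply dropped here
            share = alpha * rank[n] / len(out)
            for m in out:
                new[m] += share
        rank = new
    return sorted(rank.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    g = {"geneA": ["diseaseX"], "diseaseX": ["drugY", "geneA"], "drugY": ["diseaseX"]}
    # Restart distribution biased toward the preferred topic; entities ranked by PPR score
    print(personalized_pagerank(g, preference={"drugY": 1.0}))
```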

  5. Semantic similarity measure in biomedical domain leverage web search engine.

    Science.gov (United States)

    Chen, Chi-Huang; Hsieh, Sheau-Ling; Weng, Yung-Ching; Chang, Wen-Yung; Lai, Feipei

    2010-01-01

Semantic similarity measures play an essential role in Information Retrieval and Natural Language Processing. In this paper we propose a page-count-based semantic similarity measure and apply it in biomedical domains. Previous research in semantic-web-related applications has deployed various semantic similarity measures. Despite the usefulness of these measurements in those applications, measuring the semantic similarity between two terms remains a challenging task. The proposed method exploits page counts returned by a web search engine. We define various similarity scores for two given terms P and Q, using the page counts for the queries P, Q, and P AND Q. Moreover, we propose a novel approach to compute semantic similarity using lexico-syntactic patterns with page counts. These different similarity scores are integrated by adapting support vector machines, to leverage the robustness of semantic similarity measures. Experimental results on two datasets achieve correlation coefficients of 0.798 on the dataset provided by A. Hliaoutakis, 0.705 on the dataset provided by T. Pedersen with physician scores, and 0.496 on the dataset provided by T. Pedersen et al. with expert scores.
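
    To make the page-count idea concrete, the sketch below computes a few standard page-count similarity scores (Jaccard, Dice and a PMI-style score) from hit counts for P, Q and "P AND Q". These particular formulas and the hypothetical counts are common illustrations of the approach, not necessarily the exact scores that the paper integrates with its support vector machine.

```python
# Standard page-count similarity scores from web hit counts (illustrative formulas).
import math

def page_count_scores(count_p, count_q, count_pq, total_pages=1e10):
    """count_p, count_q: hit counts for P and Q; count_pq: hit count for "P AND Q"."""
    if count_pq == 0:
        return {"jaccard": 0.0, "dice": 0.0, "pmi": 0.0}
    jaccard = count_pq / (count_p + count_q - count_pq)
    dice = 2 * count_pq / (count_p + count_q)
    pmi = math.log2((count_pq * total_pages) / (count_p * count_q))
    return {"jaccard": jaccard, "dice": dice, "pmi": pmi}

if __name__ == "__main__":
    # Hypothetical hit counts for two biomedical terms and their conjunction
    print(page_count_scores(count_p=120_000, count_q=80_000, count_pq=15_000))
```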

  6. Searching the ASRS Database Using QUORUM Keyword Search, Phrase Search, Phrase Generation, and Phrase Discovery

    Science.gov (United States)

    McGreevy, Michael W.; Connors, Mary M. (Technical Monitor)

    2001-01-01

    To support Search Requests and Quick Responses at the Aviation Safety Reporting System (ASRS), four new QUORUM methods have been developed: keyword search, phrase search, phrase generation, and phrase discovery. These methods build upon the core QUORUM methods of text analysis, modeling, and relevance-ranking. QUORUM keyword search retrieves ASRS incident narratives that contain one or more user-specified keywords in typical or selected contexts, and ranks the narratives on their relevance to the keywords in context. QUORUM phrase search retrieves narratives that contain one or more user-specified phrases, and ranks the narratives on their relevance to the phrases. QUORUM phrase generation produces a list of phrases from the ASRS database that contain a user-specified word or phrase. QUORUM phrase discovery finds phrases that are related to topics of interest. Phrase generation and phrase discovery are particularly useful for finding query phrases for input to QUORUM phrase search. The presentation of the new QUORUM methods includes: a brief review of the underlying core QUORUM methods; an overview of the new methods; numerous, concrete examples of ASRS database searches using the new methods; discussion of related methods; and, in the appendices, detailed descriptions of the new methods.

  7. The comparative recall of Google Scholar versus PubMed in identical searches for biomedical systematic reviews: a review of searches used in systematic reviews.

    Science.gov (United States)

    Bramer, Wichor M; Giustini, Dean; Kramer, Bianca Mr; Anderson, Pf

    2013-12-23

    The usefulness of Google Scholar (GS) as a bibliographic database for biomedical systematic review (SR) searching is a subject of current interest and debate in research circles. Recent research has suggested GS might even be used alone in SR searching. This assertion is challenged here by testing whether GS can locate all studies included in 21 previously published SRs. Second, it examines the recall of GS, taking into account the maximum number of items that can be viewed, and tests whether more complete searches created by an information specialist will improve recall compared to the searches used in the 21 published SRs. The authors identified 21 biomedical SRs that had used GS and PubMed as information sources and reported their use of identical, reproducible search strategies in both databases. These search strategies were rerun in GS and PubMed, and analyzed as to their coverage and recall. Efforts were made to improve searches that underperformed in each database. GS's overall coverage was higher than PubMed's (98% versus 91%) and overall recall was higher in GS: 80% of the references included in the 21 SRs were returned by the original searches in GS versus 68% in PubMed. Only 72% of the included references could be used as they were listed among the first 1,000 hits (the maximum number shown). Practical precision (the number of included references retrieved in the first 1,000, divided by 1,000) was on average 1.9%, which is only slightly lower than in other published SRs. Improving searches with the lowest recall resulted in an increase in recall from 48% to 66% in GS and, in PubMed, from 60% to 85%. Although its coverage and precision are acceptable, GS, because of its incomplete recall, should not be used as a single source in SR searching. A specialized, curated medical database such as PubMed provides experienced searchers with tools and functionality that help improve recall, and numerous options in order to optimize precision. Searches for SRs should be
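
    The coverage, recall and practical-precision figures above are simple proportions; the sketch below recomputes such metrics for one hypothetical systematic review, with all counts invented for illustration.

      # Hypothetical numbers for a single systematic review's rerun search.
      included_refs = 50      # references included in the published SR
      covered       = 49      # included references present anywhere in the database
      retrieved     = 40      # included references returned by the rerun search
      in_first_1000 = 36      # included references among the first 1,000 hits shown

      coverage            = covered / included_refs
      recall              = retrieved / included_refs
      practical_precision = in_first_1000 / 1000     # as defined in the abstract

      print(f"coverage={coverage:.0%}  recall={recall:.0%}  "
            f"practical precision={practical_precision:.1%}")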

  8. Searching the PASCAL database - A user's perspective

    Science.gov (United States)

    Jack, Robert F.

    1989-01-01

    The operation of PASCAL, a bibliographic data base covering broad subject areas in science and technology, is discussed. The data base includes information from about 1973 to the present, including topics in engineering, chemistry, physics, earth science, environmental science, biology, psychology, and medicine. Data from 1986 to the present may be searched using DIALOG. The procedures and classification codes for searching PASCAL are presented. Examples of citations retrieved from the data base are given and suggestions are made concerning when to use PASCAL.

  9. An integrated biomedical knowledge extraction and analysis platform: using federated search and document clustering technology.

    Science.gov (United States)

    Taylor, Donald P

    2007-01-01

    High content screening (HCS) requires time-consuming and often complex iterative information retrieval and assessment approaches to optimally conduct drug discovery programs and biomedical research. Pre- and post-HCS experimentation both require the retrieval of information from public as well as proprietary literature in addition to structured information assets such as compound libraries and projects databases. Unfortunately, this information is typically scattered across a plethora of proprietary bioinformatics tools and databases and public domain sources. Consequently, single search requests must be presented to each information repository, forcing the results to be manually integrated for a meaningful result set. Furthermore, these bioinformatics tools and data repositories are becoming increasingly complex to use; typically they fail to allow for more natural query interfaces. Vivisimo has developed an enterprise software platform to bridge disparate silos of information. The platform automatically categorizes search results into descriptive folders without the use of taxonomies to drive the categorization. A new approach to information retrieval for HCS experimentation is proposed.

  10. Two Search Techniques within a Human Pedigree Database

    OpenAIRE

    Gersting, J. M.; Conneally, P. M.; Rogers, K.

    1982-01-01

    This paper presents the basic features of two search techniques from MEGADATS-2 (MEdical Genetics Acquisition and DAta Transfer System), a system for collecting, storing, retrieving and plotting human family pedigrees. The individual search provides a quick method for locating an individual in the pedigree database. This search uses a modified soundex coding and an inverted file structure based on a composite key. The navigational search uses a set of pedigree traversal operations (individual...
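
    The individual search described above relies on a soundex-style phonetic code. Purely as an illustration, the sketch below implements standard American Soundex in Python; MEGADATS-2 uses a modified variant whose details are not given in this record.

      def soundex(name):
          """Standard American Soundex code (the MEGADATS-2 modification is not shown)."""
          codes = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
                   **dict.fromkeys("DT", "3"), "L": "4",
                   **dict.fromkeys("MN", "5"), "R": "6"}
          name = "".join(c for c in name.upper() if c.isalpha())
          if not name:
              return ""
          first, out, prev = name[0], [], codes.get(name[0], "")
          for c in name[1:]:
              code = codes.get(c, "")
              if code and code != prev:
                  out.append(code)
              if c not in "HW":            # H and W do not reset the previous code
                  prev = code
          return (first + "".join(out) + "000")[:4]

      print(soundex("Connelly"), soundex("Conneally"))   # both map to C540, despite spelling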

  11. An improved rank based disease prediction using web navigation patterns on bio-medical databases

    Directory of Open Access Journals (Sweden)

    P. Dhanalakshmi

    2017-12-01

    Full Text Available Applying machine learning techniques to on-line biomedical databases is a challenging task, as this data is collected from a large number of sources and is multi-dimensional. Also, retrieval of relevant documents from a large repository such as a gene document collection takes more processing time and yields an increased false positive rate. Generally, the extraction of biomedical documents is based on the stream of prior observations of gene parameters taken at different time periods. Traditional web usage models such as Markov, Bayesian and clustering models are limited in their ability to analyze user navigation patterns and session identification in online biomedical databases. Moreover, most document ranking models for biomedical databases are sensitive to sparsity and outliers. In this paper, a novel user recommendation system was implemented to predict the top-ranked biomedical documents using the disease type, gene entities and user navigation patterns. In this recommendation system, dynamic session identification, dynamic user identification and document ranking techniques were used to extract the highly relevant disease documents from the online PubMed repository. To verify the performance of the proposed model, the true positive rate and runtime of the model were compared with those of traditional static models such as Bayesian and Fuzzy rank. Experimental results show that the performance of the proposed ranking model is better than that of the traditional models.

  12. For 481 biomedical open access journals, articles are not searchable in the Directory of Open Access Journals nor in conventional biomedical databases.

    Science.gov (United States)

    Liljekvist, Mads Svane; Andresen, Kristoffer; Pommergaard, Hans-Christian; Rosenberg, Jacob

    2015-01-01

    Background. Open access (OA) journals allow access to research papers free of charge to the reader. Traditionally, biomedical researchers use databases like MEDLINE and EMBASE to discover new advances. However, biomedical OA journals might not fulfill such databases' criteria, hindering dissemination. The Directory of Open Access Journals (DOAJ) is a database exclusively listing OA journals. The aim of this study was to investigate DOAJ's coverage of biomedical OA journals compared with the conventional biomedical databases. Methods. Information on all journals listed in four conventional biomedical databases (MEDLINE, PubMed Central, EMBASE and SCOPUS) and DOAJ was gathered. Journals were included if they were (1) actively publishing, (2) full OA, (3) prospectively indexed in one or more databases, and (4) of biomedical subject. Impact factor and journal language were also collected. DOAJ was compared with conventional databases regarding the proportion of journals covered, along with their impact factor and publishing language. The proportion of journals with articles indexed by DOAJ was determined. Results. In total, 3,236 biomedical OA journals were included in the study. Of the included journals, 86.7% were listed in DOAJ. Combined, the conventional biomedical databases listed 75.0% of the journals; 18.7% in MEDLINE; 36.5% in PubMed Central; 51.5% in SCOPUS and 50.6% in EMBASE. Of the journals in DOAJ, 88.7% published in English and 20.6% had received an impact factor for 2012 compared with 93.5% and 26.0%, respectively, for journals in the conventional biomedical databases. A subset of 51.1% and 48.5% of the journals in DOAJ had articles indexed from 2012 and 2013, respectively. Of journals exclusively listed in DOAJ, one journal had received an impact factor for 2012, and 59.6% of the journals had no content from 2013 indexed in DOAJ. Conclusions. DOAJ is the most complete registry of biomedical OA journals compared with five conventional biomedical databases

  13. Using SQL Databases for Sequence Similarity Searching and Analysis.

    Science.gov (United States)

    Pearson, William R; Mackey, Aaron J

    2017-09-13

    Relational databases can integrate diverse types of information and manage large sets of similarity search results, greatly simplifying genome-scale analyses. By focusing on taxonomic subsets of sequences, relational databases can reduce the size and redundancy of sequence libraries and improve the statistical significance of homologs. In addition, by loading similarity search results into a relational database, it becomes possible to explore and summarize the relationships between all of the proteins in an organism and those in other biological kingdoms. This unit describes how to use relational databases to improve the efficiency of sequence similarity searching and demonstrates various large-scale genomic analyses of homology-related data. It also describes the installation and use of a simple protein sequence database, seqdb_demo, which is used as a basis for the other protocols. The unit also introduces search_demo, a database that stores sequence similarity search results. The search_demo database is then used to explore the evolutionary relationships between E. coli proteins and proteins in other organisms in a large-scale comparative genomic analysis. © 2017 by John Wiley & Sons, Inc. Copyright © 2017 John Wiley & Sons, Inc.
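
    A minimal sketch of the same idea, assuming SQLite as the relational engine and an invented table of similarity-search hits; this is not the seqdb_demo or search_demo schema described in the unit.

      import sqlite3

      # Store similarity-search hits relationally and summarise them with SQL.
      # Table layout and rows are illustrative only.
      con = sqlite3.connect(":memory:")
      con.execute("""CREATE TABLE hits (
                       query TEXT, subject TEXT, taxon TEXT,
                       evalue REAL, bitscore REAL)""")
      con.executemany("INSERT INTO hits VALUES (?,?,?,?,?)", [
          ("protA", "hit1", "Vibrio cholerae",  1e-80,  250.0),
          ("protA", "hit2", "Escherichia coli", 1e-120, 360.0),
          ("protB", "hit3", "Aquifex aeolicus", 2e-35,  140.0),
      ])

      # Best hit per query protein, restricted to a taxonomic subset.
      for row in con.execute("""SELECT query, subject, MAX(bitscore)
                                FROM hits
                                WHERE taxon != 'Escherichia coli'
                                GROUP BY query"""):
          print(row)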

  14. Method and electronic database search engine for exposing the content of an electronic database

    NARCIS (Netherlands)

    Stappers, P.J.

    2000-01-01

    The invention relates to an electronic database search engine comprising an electronic memory device suitable for storing and releasing elements from the database, a display unit, a user interface for selecting and displaying at least one element from the database on the display unit, and control

  15. Effective Image Database Search via Dimensionality Reduction

    DEFF Research Database (Denmark)

    Dahl, Anders Bjorholm; Aanæs, Henrik

    2008-01-01

    Image search using the bag-of-words image representation is investigated further in this paper. This approach has shown promising results for large scale image collections making it relevant for Internet applications. The steps involved in the bag-of-words approach are feature extraction, vocabulary building, and searching with a query image. It is important to keep the computational cost low through all steps. In this paper we focus on the efficiency of the technique. To do that we substantially reduce the dimensionality of the features by the use of PCA and addition of color. Building of the visual vocabulary is typically done using k-means. We investigate a clustering algorithm based on the leader follower principle (LF-clustering), in which the number of clusters is not fixed. The adaptive nature of LF-clustering is shown to improve the quality of the visual vocabulary using this...
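
    A minimal sketch of the leader-follower principle mentioned above, assuming Euclidean distance, a fixed threshold and randomly generated stand-in feature vectors; it is not the authors' exact algorithm.

      import numpy as np

      def leader_follower(features, threshold, lr=0.05):
          """Single-pass leader-follower clustering; the number of clusters is not fixed."""
          leaders = []
          for x in features:
              if leaders:
                  dists = [np.linalg.norm(x - m) for m in leaders]
                  i = int(np.argmin(dists))
                  if dists[i] < threshold:
                      leaders[i] += lr * (x - leaders[i])   # follower: nudge the leader
                      continue
              leaders.append(x.astype(float))               # otherwise found a new cluster
          return np.array(leaders)

      rng = np.random.default_rng(0)
      descriptors = rng.normal(size=(500, 32))              # stand-in for PCA-reduced features
      vocabulary = leader_follower(descriptors, threshold=7.0)
      print(len(vocabulary), "visual words")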

  16. [Biomedical information on the internet using search engines. A one-year trial].

    Science.gov (United States)

    Corrao, Salvatore; Leone, Francesco; Arnone, Sabrina

    2004-01-01

    The internet is a communication medium and content distributor that provides information in a general sense, but it can be of great utility for the search and retrieval of biomedical information. Search engines are a great help in rapidly finding information on the net. However, we do not know whether general search engines and meta-search engines are reliable for finding useful and validated biomedical information. The aim of our study was to verify the reproducibility of a search by keywords (pediatric or evidence) using 9 international search engines and 1 meta-search engine at baseline and after a one-year period. We analysed the first 20 citations output by each search. We evaluated the formal quality of web sites and their domain extensions. Moreover, we compared the output of each search at the start of this study and after a one-year period, and we considered the number of web sites cited again as a criterion of reliability. We found some interesting results that are reported throughout the text. Our findings point out the extreme dynamicity of information on the Web and, for this reason, we advise great caution when using search and meta-search engines as tools for searching and retrieving reliable biomedical information. On the other hand, some search and meta-search engines can be very useful as a first step in a search, for defining a search better and for finding institutional web sites. This paper allows a more conscious approach to the internet biomedical information universe.

  17. Simplified validation of borderline hits of database searches

    OpenAIRE

    Thomas, Henrik; Shevchenko, Andrej

    2008-01-01

    Along with unequivocal hits produced by matching multiple MS/MS spectra to database sequences, LC-MS/MS analysis often yields a large number of hits of borderline statistical confidence. To simplify their validation, we propose to use rapid de novo interpretation of all acquired MS/MS spectra and, with the help of a simple software tool, display the candidate sequences together with each database search hit. We demonstrate that comparing hit database sequences and independent de novo interpre...

  18. International biomedical law in search for its normative status.

    Science.gov (United States)

    Krajewska, Atina

    2012-01-01

    The broad and multifaceted problem of global health law and global health governance has been attracting increasing attention in the last few decades. The global community has failed to establish an international legal regime that deals comprehensively with the 'technological revolution'. The latter has posed complex questions to regions of the world with widely differing cultural perspectives. At the same time, an increasing number of governmental and non-state actors have become significantly involved in the sector. They use legal, political, and other forms of decision-making that result in regulatory instruments of contrasting normative status. Law created in this heterogeneous environment has been said to be fragmented, inconsistent, and exacerbating uncertainties. Therefore, claims have been made that a centralised and institutionalised system would help address the problems of transparency, legitimacy and efficiency. Nevertheless, little scholarly consideration is paid to the normative status of international biomedical law. This paper explores whether formalisation and "constitutionalisation" of biomedical law are indeed inevitable for its establishment as a separate regulatory regime. It does so by analysing the proliferation of biomedical law in light of two theories: the theory of fragmentation and the theory of global legal pluralism. Investigating the problem in this way helps determine the theoretical framework and methodology of future studies of biomedical law at the international level. This in turn should help its future development in a more consistent and harmonised manner.

  19. Dynamic tables: an architecture for managing evolving, heterogeneous biomedical data in relational database management systems.

    Science.gov (United States)

    Corwin, John; Silberschatz, Avi; Miller, Perry L; Marenco, Luis

    2007-01-01

    Data sparsity and schema evolution issues affecting clinical informatics and bioinformatics communities have led to the adoption of vertical or object-attribute-value-based database schemas to overcome limitations posed when using conventional relational database technology. This paper explores these issues and discusses why biomedical data are difficult to model using conventional relational techniques. The authors propose a solution to these obstacles based on a relational database engine using a sparse, column-store architecture. The authors provide benchmarks comparing the performance of queries and schema-modification operations using three different strategies: (1) the standard conventional relational design; (2) past approaches used by biomedical informatics researchers; and (3) their sparse, column-store architecture. The performance results show that their architecture is a promising technique for storing and processing many types of data that are not handled well by the other two semantic data models.
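
    For illustration only, the sketch below shows the kind of object-attribute-value (entity-attribute-value) layout referred to above, using SQLite; it is not the authors' sparse column-store engine, and the attributes are invented.

      import sqlite3

      # EAV layout: adding a new kind of measurement needs no schema change,
      # unlike a wide table with one column per attribute.
      con = sqlite3.connect(":memory:")
      con.execute("""CREATE TABLE eav (
                       entity_id INTEGER, attribute TEXT, value TEXT,
                       PRIMARY KEY (entity_id, attribute))""")
      con.executemany("INSERT INTO eav VALUES (?,?,?)", [
          (1, "diagnosis", "epilepsy"),
          (1, "eeg_channels", "64"),
          (2, "diagnosis", "migraine"),     # entity 2 has no EEG attributes at all
      ])

      # Pivot two attributes back into rows for analysis.
      query = """SELECT e1.entity_id, e1.value AS diagnosis, e2.value AS eeg_channels
                 FROM eav e1 LEFT JOIN eav e2
                   ON e1.entity_id = e2.entity_id AND e2.attribute = 'eeg_channels'
                 WHERE e1.attribute = 'diagnosis'"""
      for row in con.execute(query):
          print(row)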

  20. Data integration and knowledge discovery in biomedical databases. Reliable information from unreliable sources

    Directory of Open Access Journals (Sweden)

    A Mitnitski

    2003-01-01

    Full Text Available To better understand information about human health from databases we analyzed three datasets collected for different purposes in Canada: a biomedical database of older adults, a large population survey across all adult ages, and vital statistics. Redundancy in the variables was established, and this led us to derive a generalized (macroscopic) state variable, being a fitness/frailty index that reflects both individual and group health status. Evaluation of the relationship between fitness/frailty and the mortality rate revealed that the latter could be expressed in terms of variables generally available from any cross-sectional database. In practical terms, this means that the risk of mortality might readily be assessed from standard biomedical appraisals collected for other purposes.
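
    A fitness/frailty index of this kind is commonly computed as the proportion of recorded health deficits that are present; the sketch below shows that calculation with a purely illustrative deficit list, not the variables used in the study.

      def frailty_index(deficits):
          """Proportion of recorded deficits that are present (0 = fit, 1 = frail)."""
          return sum(deficits.values()) / len(deficits)

      # Illustrative deficits only; values may be binary or graded between 0 and 1.
      person = {"needs_help_dressing": 1, "impaired_vision": 0, "diabetes": 1,
                "low_mood": 0.5, "impaired_mobility": 1, "hypertension": 0}
      print(round(frailty_index(person), 3))   # 0.583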

  1. MICA: desktop software for comprehensive searching of DNA databases

    Directory of Open Access Journals (Sweden)

    Glick Benjamin S

    2006-10-01

    Full Text Available Abstract Background Molecular biologists work with DNA databases that often include entire genomes. A common requirement is to search a DNA database to find exact matches for a nondegenerate or partially degenerate query. The software programs available for such purposes are normally designed to run on remote servers, but an appealing alternative is to work with DNA databases stored on local computers. We describe a desktop software program termed MICA (K-Mer Indexing with Compact Arrays) that allows large DNA databases to be searched efficiently using very little memory. Results MICA rapidly indexes a DNA database. On a Macintosh G5 computer, the complete human genome could be indexed in about 5 minutes. The indexing algorithm recognizes all 15 characters of the DNA alphabet and fully captures the information in any DNA sequence, yet for a typical sequence of length L, the index occupies only about 2L bytes. The index can be searched to return a complete list of exact matches for a nondegenerate or partially degenerate query of any length. A typical search of a long DNA sequence involves reading only a small fraction of the index into memory. As a result, searches are fast even when the available RAM is limited. Conclusion MICA is suitable as a search engine for desktop DNA analysis software.
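
    As a toy illustration of k-mer indexing for exact-match searching (far less memory-efficient than MICA's compact arrays, and ignoring degenerate bases), consider the following sketch.

      from collections import defaultdict

      def build_kmer_index(sequence, k):
          """Map every k-mer to the positions where it starts (toy index)."""
          index = defaultdict(list)
          for i in range(len(sequence) - k + 1):
              index[sequence[i:i + k]].append(i)
          return index

      def find_exact(index, query, sequence):
          """Exact matches for a nondegenerate query, using the first k-mer as a seed."""
          k = len(next(iter(index)))
          seeds = index.get(query[:k], [])
          return [p for p in seeds if sequence[p:p + len(query)] == query]

      genome = "ACGTACGTTTACGAACGTAC"
      idx = build_kmer_index(genome, k=4)
      print(find_exact(idx, "ACGTAC", genome))   # [0, 14]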

  2. The LAILAPS Search Engine: Relevance Ranking in Life Science Databases

    Directory of Open Access Journals (Sweden)

    Lange Matthias

    2010-06-01

    Full Text Available Search engines and retrieval systems are popular tools on a life science desktop. The manual inspection of hundreds of database entries that reflect a life science concept or fact is time-intensive daily work. Here, it is not the number of query results that matters, but their relevance. In this paper, we present the LAILAPS search engine for life science databases. The concept is to combine a novel feature model for relevance ranking, a machine learning approach to model user relevance profiles, ranking improvement by user feedback tracking and an intuitive and slim web user interface that estimates relevance rank by tracking user interactions. Queries are formulated as simple keyword lists and are expanded with synonyms. Supporting a flexible text index and a simple data import format, LAILAPS can easily be used both as a search engine for comprehensive integrated life science databases and for small in-house project databases.

  3. Searching mixed DNA profiles directly against profile databases.

    Science.gov (United States)

    Bright, Jo-Anne; Taylor, Duncan; Curran, James; Buckleton, John

    2014-03-01

    DNA databases have revolutionised forensic science. They are a powerful investigative tool as they have the potential to identify persons of interest in criminal investigations. Routinely, a DNA profile generated from a crime sample could only be searched for in a database of individuals if the stain came from a single contributor (single source) or if a contributor could unambiguously be determined from a mixed DNA profile. This meant that a significant number of samples were unsuitable for database searching. The advent of continuous methods for the interpretation of DNA profiles offers an advanced way to draw inferential power from the considerable investment made in DNA databases. Using these methods, each profile on the database may be considered a possible contributor to a mixture and a likelihood ratio (LR) can be formed. Those profiles which produce a sufficiently large LR can serve as an investigative lead. In this paper empirical studies are described to determine what constitutes a large LR. We investigate the effect on a database search of complex mixed DNA profiles with contributors in equal proportions with dropout as a consideration, and also the effect of an incorrect assignment of the number of contributors to a profile. In addition, we give, as a demonstration of the method, the results using two crime samples that were previously unsuitable for database comparison. We show that effective management of the selection of samples for searching and the interpretation of the output can be highly informative. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  4. BioCarian: search engine for exploratory searches in heterogeneous biological databases.

    Science.gov (United States)

    Zaki, Nazar; Tennakoon, Chandana

    2017-10-02

    There are a large number of biological databases publicly available to scientists on the web. In addition, there are many private databases generated in the course of research projects. These databases come in a wide variety of formats. Web standards have evolved in recent times and semantic web technologies are now available to interconnect diverse and heterogeneous sources of data. Therefore, integration and querying of biological databases can be facilitated by techniques used in the semantic web. Heterogeneous databases can be converted into Resource Description Framework (RDF) and queried using the SPARQL language. Searching for exact queries in these databases is trivial. However, exploratory searches need customized solutions, especially when multiple databases are involved. This process is cumbersome and time consuming for those without a sufficient background in computer science. In this context, a search engine facilitating exploratory searches of databases would be of great help to the scientific community. We present BioCarian, an efficient and user-friendly search engine for performing exploratory searches on biological databases. The search engine is an interface for SPARQL queries over RDF databases. We note that many of the databases can be converted to tabular form. We first convert the tabular databases to RDF. The search engine provides a graphical interface based on facets to explore the converted databases. The facet interface is more advanced than conventional facets. It allows complex queries to be constructed, and has additional features like ranking of facet values based on several criteria, visually indicating the relevance of a facet value and presenting the most important facet values when a large number of choices are available. For advanced users, SPARQL queries can be run directly on the databases. Using this feature, users will be able to incorporate federated searches of SPARQL endpoints. We used the search engine to do an exploratory search
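
    For readers unfamiliar with SPARQL, the sketch below shows the kind of query a facet interface assembles, issued from Python with the SPARQLWrapper library; the endpoint URL, prefix and predicates are hypothetical and do not describe BioCarian's actual schema.

      from SPARQLWrapper import SPARQLWrapper, JSON

      # Hypothetical endpoint and predicates, used only to illustrate the query shape.
      endpoint = SPARQLWrapper("http://example.org/sparql")
      endpoint.setQuery("""
          PREFIX ex: <http://example.org/schema#>
          SELECT ?gene ?disease WHERE {
              ?gene    ex:associatedWith ?disease .
              ?disease ex:category       "neurodegenerative" .
          } LIMIT 10
      """)
      endpoint.setReturnFormat(JSON)
      results = endpoint.query().convert()
      for binding in results["results"]["bindings"]:
          print(binding["gene"]["value"], binding["disease"]["value"])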

  5. Searching Harvard Business Review Online. . . Lessons in Searching a Full Text Database.

    Science.gov (United States)

    Tenopir, Carol

    1985-01-01

    This article examines the Harvard Business Review Online (HBRO) database (bibliographic description fields, abstracts, extracted information, full text, subject descriptors) and reports on 31 sample HBRO searches conducted in Bibliographic Retrieval Services to test differences between searching full text and searching bibliographic record. Sample…

  6. Developing a search engine for pharmacotherapeutic information that is not published in biomedical journals.

    Science.gov (United States)

    Do Pazo-Oubiña, F; Calvo Pita, C; Puigventós Latorre, F; Periañez-Párraga, L; Ventayol Bosch, P

    2011-01-01

    To identify publishers of pharmacotherapeutic information not found in biomedical journals that focuses on evaluating and providing advice on medicines and to develop a search engine to access this information. Compiling web sites that publish information on the rational use of medicines and have no commercial interests. Free-access web sites in Spanish, Galician, Catalan or English. Designing a search engine using the Google "custom search" application. Overall 159 internet addresses were compiled and were classified into 9 labels. We were able to recover the information from the selected sources using a search engine, which is called "AlquimiA" and available from http://www.elcomprimido.com/FARHSD/AlquimiA.htm. The main sources of pharmacotherapeutic information not published in biomedical journals were identified. The search engine is a useful tool for searching and accessing "grey literature" on the internet. Copyright © 2010 SEFH. Published by Elsevier Espana. All rights reserved.

  7. BioN∅T: A searchable database of biomedical negated sentences

    Directory of Open Access Journals (Sweden)

    Agarwal Shashank

    2011-10-01

    Full Text Available Abstract Background Negated biomedical events are often ignored by text-mining applications; however, such events carry scientific significance. We report on the development of BioN∅T, a database of negated sentences that can be used to extract such negated events. Description Currently BioN∅T incorporates ≈32 million negated sentences, extracted from over 336 million biomedical sentences from three resources: ≈2 million full-text biomedical articles in Elsevier and the PubMed Central, as well as ≈20 million abstracts in PubMed. We evaluated BioN∅T on three important genetic disorders: autism, Alzheimer's disease and Parkinson's disease, and found that BioN∅T is able to capture negated events that may be ignored by experts. Conclusions The BioN∅T database can be a useful resource for biomedical researchers. BioN∅T is freely available at http://bionot.askhermes.org/. In future work, we will develop semantic web related technologies to enrich BioN∅T.

  8. Forensic utilization of familial searches in DNA databases.

    Science.gov (United States)

    Gershaw, Cassandra J; Schweighardt, Andrew J; Rourke, Linda C; Wallace, Margaret M

    2011-01-01

    DNA evidence is widely recognized as an invaluable tool in the process of investigation and identification, as well as one of the most sought after types of evidence for presentation to a jury. In the United States, the development of state and federal DNA databases has greatly impacted the forensic community by creating an efficient, searchable system that can be used to eliminate or include suspects in an investigation based on matching DNA profiles - the profile already in the database to the profile of the unknown sample in evidence. Recent changes in legislation have begun to allow for the possibility to expand the parameters of DNA database searches, taking into account the possibility of familial searches. This article discusses prospective positive outcomes of utilizing familial DNA searches and acknowledges potential negative outcomes, thereby presenting both sides of this very complicated, rapidly evolving situation. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  9. Molecule database framework: a framework for creating database applications with chemical structure search capability.

    Science.gov (United States)

    Kiener, Joos

    2013-12-11

    Research in organic chemistry generates samples of novel chemicals together with their properties and other related data. The involved scientists must be able to store this data and search it by chemical structure. There are commercial solutions for common needs like chemical registration systems or electronic lab notebooks. However, for specific requirements of in-house databases and processes no such solutions exist. Another issue is that commercial solutions carry the risk of vendor lock-in and may require an expensive license for a proprietary relational database management system. To speed up and simplify the development of applications that require chemical structure search capabilities, I have developed Molecule Database Framework. The framework abstracts the storing and searching of chemical structures into method calls. Therefore software developers do not require extensive knowledge about chemistry and the underlying database cartridge. This decreases application development time. Molecule Database Framework is written in Java and I created it by integrating existing free and open-source tools and frameworks. The core functionality includes:
    - Support for multi-component compounds (mixtures)
    - Import and export of SD-files
    - Optional security (authorization)
    For chemical structure searching Molecule Database Framework leverages the capabilities of the Bingo Cartridge for PostgreSQL and provides type-safe searching, caching, transactions and optional method-level security. Molecule Database Framework supports multi-component chemical compounds (mixtures). Furthermore, the design of the entity classes and the reasoning behind it are explained. By means of a simple web application I describe how the framework could be used. I then benchmarked this example application to create some basic performance expectations for chemical structure searches and import and export of SD-files. By using a simple web application it was shown that Molecule Database Framework
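
    As a loose illustration of chemical substructure searching (the capability the framework delegates to the Bingo cartridge), the sketch below uses the RDKit toolkit in Python over a three-compound in-memory collection; RDKit is not part of Molecule Database Framework and the compounds are arbitrary.

      from rdkit import Chem

      # Tiny in-memory stand-in for a compound table; the framework itself stores
      # structures in PostgreSQL and searches them with the Bingo cartridge.
      compounds = {
          "aspirin":   "CC(=O)Oc1ccccc1C(=O)O",
          "ibuprofen": "CC(C)Cc1ccc(cc1)C(C)C(=O)O",
          "caffeine":  "Cn1cnc2c1c(=O)n(C)c(=O)n2C",
      }
      query = Chem.MolFromSmarts("c1ccccc1C(=O)O")   # aryl carboxylic acid substructure

      for name, smiles in compounds.items():
          mol = Chem.MolFromSmiles(smiles)
          if mol is not None and mol.HasSubstructMatch(query):
              print(name, "matches the substructure")   # only aspirin is printed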

  10. PFTijah: text search in an XML database system

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Rode, H.; van Os, R.; Flokstra, Jan

    2006-01-01

    This paper introduces the PFTijah system, a text search system that is integrated with an XML/XQuery database management system. We present examples of its use, we explain some of the system internals, and discuss plans for future work. PFTijah is part of the open source release of MonetDB/XQuery.

  11. A practical approach for inexpensive searches of radiology report databases.

    Science.gov (United States)

    Desjardins, Benoit; Hamilton, R Curtis

    2007-06-01

    We present a method to perform full-text searches of radiology reports for the large number of departments that do not have this ability as part of their radiology or hospital information system. A tool written in Microsoft Access (front-end) has been designed to search a server (back-end) containing the indexed weekly backup copy of the full relational database extracted from a radiology information system (RIS). This front-end/back-end approach has been implemented in a large academic radiology department, and is used for teaching, research and administrative purposes. The weekly second backup of the 80 GB, 4 million record RIS database takes 2 hours. Further indexing of the exported radiology reports takes 6 hours. Individual searches typically take less than 1 minute on the indexed database and 30-60 minutes on the nonindexed database. Guidelines to properly address privacy and institutional review board issues are closely followed by all users. This method has the potential to improve teaching, research, and administrative programs within radiology departments that cannot afford more expensive technology.
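
    A comparable front-end/back-end effect can be sketched with SQLite's FTS5 full-text module; SQLite is an assumption of this example (the authors used a Microsoft Access front end over an indexed RIS copy), and the report texts are invented.

      import sqlite3

      # Requires an SQLite build with FTS5 enabled (the default in most Python builds).
      con = sqlite3.connect(":memory:")
      con.execute("CREATE VIRTUAL TABLE reports USING fts5(accession, body)")
      con.executemany("INSERT INTO reports VALUES (?, ?)", [
          ("RAD0001", "No evidence of pulmonary embolism. Small right pleural effusion."),
          ("RAD0002", "Segmental pulmonary embolism in the right lower lobe."),
          ("RAD0003", "Unremarkable chest radiograph."),
      ])

      # Full-text query with ranking; bm25() orders the best matches first.
      hits = con.execute("""SELECT accession FROM reports
                            WHERE reports MATCH 'pulmonary AND embolism NOT "no evidence"'
                            ORDER BY bm25(reports)""").fetchall()
      print(hits)   # [('RAD0002',)]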

  12. For 481 biomedical open access journals, articles are not searchable in the Directory of Open Access Journals nor in conventional biomedical databases

    DEFF Research Database (Denmark)

    Liljekvist, Mads Svane; Andresen, Kristoffer; Pommergaard, Hans-Christian

    2015-01-01

    ... biomedical databases (MEDLINE, PubMed Central, EMBASE and SCOPUS) and DOAJ were gathered. Journals were included if they were (1) actively publishing, (2) full OA, (3) prospectively indexed in one or more database, and (4) of biomedical subject. Impact factor and journal language were also collected. DOAJ ... journals, 86.7% were listed in DOAJ. Combined, the conventional biomedical databases listed 75.0% of the journals; 18.7% in MEDLINE; 36.5% in PubMed Central; 51.5% in SCOPUS and 50.6% in EMBASE. Of the journals in DOAJ, 88.7% published in English and 20.6% had received impact factor for 2012 compared...

  13. Semantic similarity measures in the biomedical domain by leveraging a web search engine.

    Science.gov (United States)

    Hsieh, Sheau-Ling; Chang, Wen-Yung; Chen, Chi-Huang; Weng, Yung-Ching

    2013-07-01

    Various studies of web-related semantic similarity measures have been carried out. However, measuring semantic similarity between two terms remains a challenging task. The traditional ontology-based methodologies have the limitation that both concepts must reside in the same ontology tree(s). Unfortunately, in practice, this assumption is not always applicable. On the other hand, if the corpus is sufficiently adequate, corpus-based methodologies can overcome the limitation. The web is now a continuously and enormously growing corpus. Therefore, a method of estimating semantic similarity is proposed that exploits the page counts of two biomedical concepts returned by the Google AJAX web search engine. The features are extracted as the co-occurrence patterns of two given terms P and Q, by querying P, Q, as well as P AND Q, and the web search hit counts of the defined lexico-syntactic patterns. These similarity scores of different patterns are evaluated, by adapting support vector machines for classification, to leverage the robustness of semantic similarity measures. Experimental results validated against two datasets: dataset 1 provided by A. Hliaoutakis; dataset 2 provided by T. Pedersen, are presented and discussed. In dataset 1, the proposed approach achieves the best correlation coefficient (0.802) under SNOMED-CT. In dataset 2, the proposed method obtains the best correlation coefficient (SNOMED-CT: 0.705; MeSH: 0.723) with physician scores compared with measures of other methods. However, the correlation coefficients (SNOMED-CT: 0.496; MeSH: 0.539) with coder scores showed the opposite outcome. In conclusion, the semantic similarity findings of the proposed method are close to those of the physicians' ratings. Furthermore, the study provides a cornerstone investigation for extracting fully relevant information from digitized, free-text medical records in the National Taiwan University Hospital database.

  14. Heterogeneous Biomedical Database Integration Using a Hybrid Strategy: A p53 Cancer Research Database

    Directory of Open Access Journals (Sweden)

    Vadim Y. Bichutskiy

    2006-01-01

    Full Text Available Complex problems in life science research give rise to multidisciplinary collaboration, and hence, to the need for heterogeneous database integration. The tumor suppressor p53 is mutated in close to 50% of human cancers, and a small drug-like molecule with the ability to restore native function to cancerous p53 mutants is a long-held medical goal of cancer treatment. The Cancer Research DataBase (CRDB) was designed in support of a project to find such small molecules. As a cancer informatics project, the CRDB involved small molecule data, computational docking results, functional assays, and protein structure data. As an example of the hybrid strategy for data integration, it combined the mediation and data warehousing approaches. This paper uses the CRDB to illustrate the hybrid strategy as a viable approach to heterogeneous data integration in biomedicine, and provides a design method for those considering similar systems. More efficient data sharing implies increased productivity, and, hopefully, improved chances of success in cancer research. (Code and database schemas are freely downloadable, http://www.igb.uci.edu/research/research.html.)

  15. Combined semantic and similarity search in medical image databases

    Science.gov (United States)

    Seifert, Sascha; Thoma, Marisa; Stegmaier, Florian; Hammon, Matthias; Kramer, Martin; Huber, Martin; Kriegel, Hans-Peter; Cavallaro, Alexander; Comaniciu, Dorin

    2011-03-01

    The current diagnostic process at hospitals is mainly based on reviewing and comparing images coming from multiple time points and modalities in order to monitor disease progression over a period of time. However, for ambiguous cases the radiologist relies heavily on reference literature or a second opinion. Although there is a vast amount of acquired images stored in PACS systems which could be reused for decision support, these data sets suffer from weak search capabilities. Thus, we present a search methodology which enables the physician to carry out intelligent search scenarios on medical image databases, combining ontology-based semantic search and appearance-based similarity search. It enabled the elimination of 12% of the top ten hits which would have arisen without taking the semantic context into account.

  16. The development of large-scale de-identified biomedical databases in the age of genomics-principles and challenges.

    Science.gov (United States)

    Dankar, Fida K; Ptitsyn, Andrey; Dankar, Samar K

    2018-04-10

    Contemporary biomedical databases include a wide range of information types from various observational and instrumental sources. Among the most important features that unite biomedical databases across the field are high volume of information and high potential to cause damage through data corruption, loss of performance, and loss of patient privacy. Thus, issues of data governance and privacy protection are essential for the construction of data depositories for biomedical research and healthcare. In this paper, we discuss various challenges of data governance in the context of population genome projects. The various challenges along with best practices and current research efforts are discussed through the steps of data collection, storage, sharing, analysis, and knowledge dissemination.

  17. A Taxonomic Search Engine: Federating taxonomic databases using web services

    Directory of Open Access Journals (Sweden)

    Page Roderic DM

    2005-03-01

    Full Text Available Abstract Background The taxonomic name of an organism is a key link between different databases that store information on that organism. However, in the absence of a single, comprehensive database of organism names, individual databases lack an easy means of checking the correctness of a name. Furthermore, the same organism may have more than one name, and the same name may apply to more than one organism. Results The Taxonomic Search Engine (TSE) is a web application written in PHP that queries multiple taxonomic databases (ITIS, Index Fungorum, IPNI, NCBI, and uBIO) and summarises the results in a consistent format. It supports "drill-down" queries to retrieve a specific record. The TSE can optionally suggest alternative spellings the user can try. It also acts as a Life Science Identifier (LSID) authority for the source taxonomic databases, providing globally unique identifiers (and associated metadata) for each name. Conclusion The Taxonomic Search Engine is available at http://darwin.zoology.gla.ac.uk/~rpage/portal/ and provides a simple demonstration of the potential of the federated approach to providing access to taxonomic names.

  18. A Taxonomic Search Engine: federating taxonomic databases using web services.

    Science.gov (United States)

    Page, Roderic D M

    2005-03-09

    The taxonomic name of an organism is a key link between different databases that store information on that organism. However, in the absence of a single, comprehensive database of organism names, individual databases lack an easy means of checking the correctness of a name. Furthermore, the same organism may have more than one name, and the same name may apply to more than one organism. The Taxonomic Search Engine (TSE) is a web application written in PHP that queries multiple taxonomic databases (ITIS, Index Fungorum, IPNI, NCBI, and uBIO) and summarises the results in a consistent format. It supports "drill-down" queries to retrieve a specific record. The TSE can optionally suggest alternative spellings the user can try. It also acts as a Life Science Identifier (LSID) authority for the source taxonomic databases, providing globally unique identifiers (and associated metadata) for each name. The Taxonomic Search Engine is available at http://darwin.zoology.gla.ac.uk/~rpage/portal/ and provides a simple demonstration of the potential of the federated approach to providing access to taxonomic names.

  19. The database search problem: a question of rational decision making.

    Science.gov (United States)

    Gittelson, S; Biedermann, A; Bozza, S; Taroni, F

    2012-10-10

    This paper applies probability and decision theory in the graphical interface of an influence diagram to study the formal requirements of rationality which justify the individualization of a person found through a database search. The decision-theoretic part of the analysis studies the parameters that a rational decision maker would use to individualize the selected person. The modeling part (in the form of an influence diagram) clarifies the relationships between this decision and the ingredients that make up the database search problem, i.e., the results of the database search and the different pairs of propositions describing whether an individual is at the source of the crime stain. These analyses evaluate the desirability associated with the decision of 'individualizing' (and 'not individualizing'). They point out that this decision is a function of (i) the probability that the individual in question is, in fact, at the source of the crime stain (i.e., the state of nature), and (ii) the decision maker's preferences among the possible consequences of the decision (i.e., the decision maker's loss function). We discuss the relevance and argumentative implications of these insights with respect to recent comments in specialized literature, which suggest points of view that are opposed to the results of our study. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
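
    The decision-theoretic core can be illustrated in a few lines: the decision maker individualizes only when that decision has the lower expected loss, given the probability that the person is the source and a loss function; the loss values below are invented for the example and are not ones proposed in the paper.

      def expected_losses(p_source, loss):
          """Expected loss of each decision given P(the selected person is the source)."""
          states = {"source": p_source, "not_source": 1 - p_source}
          return {d: sum(loss[(d, s)] * pr for s, pr in states.items())
                  for d in ("individualize", "do_not_individualize")}

      # Illustrative preferences: a false individualization is judged 100 times
      # worse than a missed one.
      loss = {("individualize", "source"): 0,        ("individualize", "not_source"): 100,
              ("do_not_individualize", "source"): 1, ("do_not_individualize", "not_source"): 0}

      for p in (0.9, 0.99, 0.999):
          el = expected_losses(p, loss)
          print(f"P(source)={p}: choose {min(el, key=el.get)}  {el}")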

  20. SciRide Finder: a citation-based paradigm in biomedical literature search.

    Science.gov (United States)

    Volanakis, Adam; Krawczyk, Konrad

    2018-04-18

    There are more than 26 million peer-reviewed biomedical research items according to Medline/PubMed. This breadth of information is indicative of the progress in biomedical sciences on one hand, but an overload for scientists performing literature searches on the other. A major portion of scientific literature search is to find statements, numbers and protocols that can be cited to build an evidence-based narrative for a new manuscript. Because science builds on prior knowledge, such information has likely been written out and cited in an older manuscript. Thus, Cited Statements, pieces of text from scientific literature supported by citing other peer-reviewed publications, carry a significant amount of condensed information on prior art. Based on this principle, we propose a literature search service, SciRide Finder (finder.sciride.org), which constrains the search corpus to such Cited Statements only. We demonstrate that Cited Statements can carry different information from that found in titles/abstracts and full text, giving access to alternative literature search results than those of traditional search engines. We further show how presenting search results as a list of Cited Statements allows researchers to easily find information to build an evidence-based narrative for their own manuscripts.

  1. A unified architecture for biomedical search engines based on semantic web technologies.

    Science.gov (United States)

    Jalali, Vahid; Matash Borujerdi, Mohammad Reza

    2011-04-01

    There has been huge growth in the volume of published biomedical research in recent years. Many medical search engines have been designed and developed to address the ever-growing information needs of biomedical experts and curators. Significant progress has been made in utilizing the knowledge embedded in medical ontologies and controlled vocabularies to assist these engines. However, the lack of a common architecture for the ontologies used and for the overall retrieval process hampers the evaluation of different search engines and interoperability between them under unified conditions. In this paper, a unified architecture for medical search engines is introduced. The proposed model contains standard schemas, declared in semantic web languages, for the ontologies and documents used by search engines. Unified models for the annotation and retrieval processes are other parts of the introduced architecture. A sample search engine is also designed and implemented based on the proposed architecture. The search engine is evaluated using two test collections and results are reported in terms of precision vs. recall and mean average precision for the different approaches used by this search engine.

  2. Optimal database combinations for literature searches in systematic reviews : a prospective exploratory study

    NARCIS (Netherlands)

    Bramer, W. M.; Rethlefsen, Melissa L.; Kleijnen, Jos; Franco, Oscar H.

    2017-01-01

    Background: Within systematic reviews, when searching for relevant references, it is advisable to use multiple databases. However, searching databases is laborious and time-consuming, as the syntax of search strategies is database specific. We aimed to determine the optimal combination of databases

  3. A perspective for biomedical data integration: Design of databases for flow cytometry

    Directory of Open Access Journals (Sweden)

    Lakoumentas John

    2008-02-01

    Full Text Available Abstract Background The integration of biomedical information is essential for tackling medical problems. We describe a data model in the domain of flow cytometry (FC) allowing for massive management, analysis and integration with other laboratory and clinical information. The paper is concerned with the proper translation of the Flow Cytometry Standard (FCS) into a relational database schema, in a way that assists end users either in doing research on FC or in studying specific cases of patients who have undergone FC analysis. Results The proposed database schema provides integration of data originating from diverse acquisition settings, organized in a way that allows syntactically simple queries that provide results significantly faster than the conventional implementations of the FCS standard. The proposed schema can potentially achieve up to 8 orders of magnitude reduction in query complexity and up to 2 orders of magnitude reduction in response time for data originating from flow cytometers that record 256 colours. This is mainly achieved by managing to maintain an almost constant number of data-mining procedures regardless of the size and complexity of the stored information. Conclusion It is evident that using single-file data storage standards for the design of databases without any structural transformations significantly limits the flexibility of databases. Analysis of the requirements of a specific domain for integration and massive data processing can provide the necessary schema modifications that will unlock the additional functionality of a relational database.

  4. search GenBank: interactive orchestration and ad-hoc choreography of Web services in the exploration of the biomedical resources of the National Center For Biotechnology Information.

    Science.gov (United States)

    Mrozek, Dariusz; Małysiak-Mrozek, Bożena; Siążnik, Artur

    2013-03-01

    Due to the growing number of biomedical entries in the data repositories of the National Center for Biotechnology Information (NCBI), it is difficult for third-party software developers to collect, manage and process all of these entries in one place without significant investment in hardware and software infrastructure and in its maintenance and administration. Web services allow the development of software applications that integrate in one place the functionality and processing logic of distributed software components, without integrating the components themselves and without integrating the resources to which they have access. This is achieved by appropriate orchestration or choreography of available Web services and their shared functions. After the successful application of Web services in the business sector, this technology can now be used to build composite software tools that are oriented towards biomedical data processing. We have developed a new tool for efficient and dynamic data exploration in GenBank and other NCBI databases. A dedicated search GenBank system makes use of NCBI Web services and a package of Entrez Programming Utilities (eUtils) in order to provide extended searching capabilities in NCBI data repositories. In search GenBank users can use one of three exploration paths: simple data searching based on the specified user's query, advanced data searching based on the specified user's query, and advanced data exploration with the use of macros. search GenBank orchestrates calls of particular tools available through the NCBI Web service to provide the requested functionality, while users interactively browse selected records in search GenBank and traverse between NCBI databases using available links. On the other hand, by building macros in the advanced data exploration mode, users create choreographies of eUtils calls, which can lead to the automatic discovery of related data in the specified databases. search GenBank extends standard capabilities of the
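
    The eUtils calls that such a system orchestrates can also be issued directly; the sketch below uses Biopython's Bio.Entrez module as a convenient eUtils client (an assumption of this example, not a component of search GenBank), with a placeholder contact address and query.

      from Bio import Entrez

      Entrez.email = "your.name@example.org"   # NCBI asks for a contact address

      # ESearch: find nucleotide records matching a query.
      handle = Entrez.esearch(db="nucleotide",
                              term="BRCA1[Gene] AND human[Organism]", retmax=5)
      record = Entrez.read(handle)
      handle.close()
      print(record["Count"], "records; first IDs:", record["IdList"])

      # ESummary: follow up on the first hit, as a browsing user might.
      if record["IdList"]:
          handle = Entrez.esummary(db="nucleotide", id=record["IdList"][0])
          summary = Entrez.read(handle)
          handle.close()
          print(summary[0]["Title"])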

  5. How to Search, Write, Prepare and Publish the Scientific Papers in the Biomedical Journals

    Science.gov (United States)

    Masic, Izet

    2011-01-01

    This article describes the methodology of preparing, writing and publishing scientific papers in biomedical journals. Given is a concise overview of the concept and structure of the system of biomedical scientific and technical information and of the way biomedical literature is retrieved from worldwide biomedical databases. Described are the scientific and professional medical journals that are currently published in Bosnia and Herzegovina. Also given is a comparative review of the number and structure of papers published in indexed journals in Bosnia and Herzegovina that are listed in the Medline database. Analyzed are three B&H journals indexed in the MEDLINE database: Medical Archives (Medicinski Arhiv), Bosnian Journal of Basic Medical Sciences and Medical Gazette (Medicinski Glasnik) in 2010. The largest number of original papers was published in the Medical Archives. There is a statistically significant difference in the number of papers published by local authors in relation to international journals, in favor of the Medical Archives. Admittedly, the Bosnian Journal of Basic Medical Sciences does not categorize its articles, so we could not make comparisons. The Medical Archives and the Bosnian Journal of Basic Medical Sciences published, by percentage, the largest number of articles by authors from Sarajevo and Tuzla, the two oldest and largest university medical centers in Bosnia and Herzegovina. The author believes that it is necessary to make qualitative changes in the reception and reviewing of papers for publication in biomedical journals published in Bosnia and Herzegovina, which should be the responsibility of a separate scientific authority/committee composed of experts in the field of medicine at the state level. PMID:23572850

  6. How to search, write, prepare and publish the scientific papers in the biomedical journals.

    Science.gov (United States)

    Masic, Izet

    2011-06-01

    This article describes the methodology of preparing, writing and publishing scientific papers in biomedical journals. Given is a concise overview of the concept and structure of the system of biomedical scientific and technical information and of the way biomedical literature is retrieved from worldwide biomedical databases. Described are the scientific and professional medical journals that are currently published in Bosnia and Herzegovina. Also given is a comparative review of the number and structure of papers published in indexed journals in Bosnia and Herzegovina that are listed in the Medline database. Analyzed are three B&H journals indexed in the MEDLINE database: Medical Archives (Medicinski Arhiv), Bosnian Journal of Basic Medical Sciences and Medical Gazette (Medicinski Glasnik) in 2010. The largest number of original papers was published in the Medical Archives. There is a statistically significant difference in the number of papers published by local authors in relation to international journals, in favor of the Medical Archives. Admittedly, the Bosnian Journal of Basic Medical Sciences does not categorize its articles, so we could not make comparisons. The Medical Archives and the Bosnian Journal of Basic Medical Sciences published, by percentage, the largest number of articles by authors from Sarajevo and Tuzla, the two oldest and largest university medical centers in Bosnia and Herzegovina. The author believes that it is necessary to make qualitative changes in the reception and reviewing of papers for publication in biomedical journals published in Bosnia and Herzegovina, which should be the responsibility of a separate scientific authority/committee composed of experts in the field of medicine at the state level.

  7. Text mining facilitates database curation - extraction of mutation-disease associations from Bio-medical literature.

    Science.gov (United States)

    Ravikumar, Komandur Elayavilli; Wagholikar, Kavishwar B; Li, Dingcheng; Kocher, Jean-Pierre; Liu, Hongfang

    2015-06-06

    Advances in next-generation sequencing technology have accelerated the pace of individualized medicine (IM), which aims to incorporate genetic/genomic information into medicine. One immediate need in interpreting sequencing data is the assembly of information about genetic variants and their corresponding associations with other entities (e.g., diseases or medications). Even with dedicated effort to capture such information in biological databases, much of it remains 'locked' in the unstructured text of biomedical publications. There is a substantial lag between publication and the subsequent abstraction of such information into databases. Multiple text mining systems have been developed, but most of them focus on sentence-level association extraction with performance evaluation based on gold standard text annotations specifically prepared for text mining systems. We developed and evaluated a text mining system, MutD, which extracts protein mutation-disease associations from MEDLINE abstracts by incorporating discourse-level analysis, using a benchmark data set extracted from curated database records. MutD achieves an F-measure of 64.3% for reconstructing protein mutation disease associations in curated database records. The discourse-level analysis component of MutD contributed a gain of more than 10% in F-measure when compared against sentence-level association extraction. Our error analysis indicates that 23 of the 64 precision errors are true associations that were not captured by database curators and 68 of the 113 recall errors are caused by the absence of associated disease entities in the abstract. After adjusting for the defects in the curated database, the revised F-measure of MutD in association detection reaches 81.5%. Our quantitative analysis reveals that MutD can effectively extract protein mutation disease associations when benchmarking based on curated database records. The analysis also demonstrates that incorporating

  8. Enriching Great Britain's National Landslide Database by searching newspaper archives

    Science.gov (United States)

    Taylor, Faith E.; Malamud, Bruce D.; Freeborough, Katy; Demeritt, David

    2015-11-01

    Our understanding of where landslide hazard and impact will be greatest is largely based on our knowledge of past events. Here, we present a method to supplement existing records of landslides in Great Britain by searching an electronic archive of regional newspapers. In Great Britain, the British Geological Survey (BGS) is responsible for updating and maintaining records of landslide events and their impacts in the National Landslide Database (NLD). The NLD contains records of more than 16,500 landslide events in Great Britain. Data sources for the NLD include field surveys, academic articles, grey literature, news, public reports and, since 2012, social media. We aim to supplement the richness of the NLD by (i) identifying additional landslide events, (ii) acting as an additional source of confirmation of events existing in the NLD and (iii) adding more detail to existing database entries. This is done by systematically searching the Nexis UK digital archive of 568 regional newspapers published in the UK. In this paper, we construct a robust Boolean search criterion by experimenting with landslide terminology for four training periods. We then apply this search to all articles published in 2006 and 2012. This resulted in the addition of 111 records of landslide events to the NLD over the 2 years investigated (2006 and 2012). We also find that we were able to obtain information about landslide impact for 60-90% of landslide events identified from newspaper articles. Spatial and temporal patterns of additional landslides identified from newspaper articles are broadly in line with those existing in the NLD, confirming that the NLD is a representative sample of landsliding in Great Britain. This method could now be applied to more time periods and/or other hazards to add richness to databases and thus improve our ability to forecast future events based on records of past events.

  9. Supporting ontology-based keyword search over medical databases.

    Science.gov (United States)

    Kementsietsidis, Anastasios; Lim, Lipyeow; Wang, Min

    2008-11-06

    The proliferation of medical terms poses a number of challenges in the sharing of medical information among different stakeholders. Ontologies are commonly used to establish relationships between different terms, yet their role in querying has not been investigated in detail. In this paper, we study the problem of supporting ontology-based keyword search queries on a database of electronic medical records. We present several approaches to support this type of query, study the advantages and limitations of each approach, and summarize the lessons learned as best practices.
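
    The record above describes ontology-driven expansion of keyword queries over clinical records. As a rough illustration of that general idea (not the authors' system), the sketch below expands a user keyword into related ontology terms and folds them into a SQL query; the toy ontology, table schema and rows are assumptions made for the example.

```python
# Minimal sketch of ontology-based keyword expansion over a relational table.
# The ontology, table schema, and sample data are hypothetical and only
# illustrate the general idea described in the abstract, not the authors' system.
import sqlite3

# Toy "ontology": each term maps to narrower/synonymous terms.
ONTOLOGY = {
    "diabetes": ["diabetes", "diabetes mellitus", "type 2 diabetes", "T2DM"],
    "hypertension": ["hypertension", "high blood pressure"],
}

def expand(keyword):
    """Return the keyword plus any ontology terms subsumed by it."""
    return ONTOLOGY.get(keyword.lower(), [keyword])

def ontology_search(conn, keyword):
    terms = expand(keyword)
    placeholders = " OR ".join(["diagnosis LIKE ?"] * len(terms))
    sql = f"SELECT patient_id, diagnosis FROM records WHERE {placeholders}"
    return conn.execute(sql, [f"%{t}%" for t in terms]).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE records (patient_id INTEGER, diagnosis TEXT)")
    conn.executemany(
        "INSERT INTO records VALUES (?, ?)",
        [(1, "Type 2 diabetes with neuropathy"),
         (2, "Essential hypertension"),
         (3, "Asthma"),
         (4, "T2DM with nephropathy")],
    )
    # A plain LIKE on "diabetes" would miss record 4; the ontology expansion
    # also retrieves it through the narrower term "T2DM".
    print(ontology_search(conn, "diabetes"))
```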

  10. Oracle Database 10g: a platform for BLAST search and Regular Expression pattern matching in life sciences.

    Science.gov (United States)

    Stephens, Susie M; Chen, Jake Y; Davidson, Marcel G; Thomas, Shiby; Trute, Barry M

    2005-01-01

    As database management systems expand their array of analytical functionality, they become powerful research engines for biomedical data analysis and drug discovery. Databases can hold most of the data types commonly required in life sciences and consequently can be used as flexible platforms for the implementation of knowledgebases. Performing data analysis in the database simplifies data management by minimizing the movement of data from disks to memory, allowing pre-filtering and post-processing of datasets, and enabling data to remain in a secure, highly available environment. This article describes the Oracle Database 10g implementation of BLAST and Regular Expression Searches and provides case studies of their usage in bioinformatics. http://www.oracle.com/technology/software/index.html.
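
    The Oracle-specific BLAST and Regular Expression features are not reproduced here, but the general pattern of keeping sequences in a relational table and filtering them with a regular expression can be sketched with SQLite, which allows a user-registered REGEXP function. The table, rows and motif below are illustrative assumptions.

```python
# Sketch of in-database regular-expression filtering of protein sequences,
# in the spirit of the pattern-matching use case described above. It uses
# SQLite with a user-registered REGEXP function rather than Oracle's
# implementation; the table and motif are illustrative assumptions.
import re
import sqlite3

def regexp(pattern, value):
    # SQLite evaluates "value REGEXP pattern" by calling regexp(pattern, value).
    return re.search(pattern, value) is not None

conn = sqlite3.connect(":memory:")
conn.create_function("regexp", 2, regexp)
conn.execute("CREATE TABLE proteins (acc TEXT, seq TEXT)")
conn.executemany("INSERT INTO proteins VALUES (?, ?)", [
    ("P1", "MKTAYIAKQRNGS"),          # contains an N-glycosylation-like motif (NGS)
    ("P2", "MGDVEKGKKIFIMK"),
])

# Select sequences matching a simple motif: N, any residue except P, then S or T.
rows = conn.execute(
    "SELECT acc FROM proteins WHERE seq REGEXP ?", ["N[^P][ST]"]
).fetchall()
print(rows)   # [('P1',)]
```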

  11. Discovering biomedical semantic relations in PubMed queries for information retrieval and database curation.

    Science.gov (United States)

    Huang, Chung-Chi; Lu, Zhiyong

    2016-01-01

    Identifying relevant papers from the literature is a common task in biocuration. Most current biomedical literature search systems primarily rely on matching user keywords. Semantic search, on the other hand, seeks to improve search accuracy by understanding the entities and contextual relations in user keywords. However, past research has mostly focused on semantically identifying biological entities (e.g. chemicals, diseases and genes) with little effort on discovering semantic relations. In this work, we aim to discover biomedical semantic relations in PubMed queries in an automated and unsupervised fashion. Specifically, we focus on extracting and understanding the contextual information (or context patterns) that is used by PubMed users to represent semantic relations between entities such as 'CHEMICAL-1 compared to CHEMICAL-2'. With the advances in automatic named entity recognition, we first tag entities in PubMed queries and then use tagged entities as knowledge to recognize pattern semantics. More specifically, we transform PubMed queries into context patterns involving participating entities, which are subsequently projected to latent topics via latent semantic analysis (LSA) to avoid the data sparseness and specificity issues. Finally, we mine semantically similar contextual patterns or semantic relations based on LSA topic distributions. Our two separate evaluation experiments of chemical-chemical (CC) and chemical-disease (CD) relations show that the proposed approach significantly outperforms a baseline method, which simply measures pattern semantics by similarity in participating entities. The highest performance achieved by our approach is nearly 0.9 and 0.85, respectively, for the CC and CD tasks when compared against the ground truth in terms of normalized discounted cumulative gain (nDCG), a standard measure of ranking quality. These results suggest that our approach can effectively identify and return related semantic patterns in a ranked order.

  12. Archiving, ordering and searching: search engines, algorithms, databases and deep mediatization

    DEFF Research Database (Denmark)

    Andersen, Jack

    2018-01-01

    This article argues that search engines, algorithms, and databases can be considered as a way of understanding deep mediatization (Couldry & Hepp, 2016). They are embedded in a variety of social and cultural practices and as such they change our communicative actions to be shaped by their logic ... reviewed recent trends in mediatization research, the argument is discussed and unfolded in-between the material and social constructivist-phenomenological interpretations of mediatization. In conclusion, it is discussed how deep this form of mediatization can be taken to be.

  13. Text Mining Genotype-Phenotype Relationships from Biomedical Literature for Database Curation and Precision Medicine.

    Science.gov (United States)

    Singhal, Ayush; Simmons, Michael; Lu, Zhiyong

    2016-11-01

    The practice of precision medicine will ultimately require databases of genes and mutations for healthcare providers to reference in order to understand the clinical implications of each patient's genetic makeup. Although the highest quality databases require manual curation, text mining tools can facilitate the curation process, increasing accuracy, coverage, and productivity. However, to date there are no available text mining tools that offer high-accuracy performance for extracting such triplets from biomedical literature. In this paper we propose a high-performance machine learning approach to automate the extraction of disease-gene-variant triplets from biomedical literature. Our approach is unique because we identify the genes and protein products associated with each mutation from not just the local text content, but from a global context as well (from the Internet and from all literature in PubMed). Our approach also incorporates protein sequence validation and disease association using a novel text-mining-based machine learning approach. We extract disease-gene-variant triplets from all abstracts in PubMed related to a set of ten important diseases (breast cancer, prostate cancer, pancreatic cancer, lung cancer, acute myeloid leukemia, Alzheimer's disease, hemochromatosis, age-related macular degeneration (AMD), diabetes mellitus, and cystic fibrosis). We then evaluate our approach in two ways: (1) a direct comparison with the state of the art using benchmark datasets; (2) a validation study comparing the results of our approach with entries in a popular human-curated database (UniProt) for each of the previously mentioned diseases. In the benchmark comparison, our full approach achieves a 28% improvement in F1-measure (from 0.62 to 0.79) over the state-of-the-art results. For the validation study with UniProt Knowledgebase (KB), we present a thorough analysis of the results and errors. Across all diseases, our approach returned 272 triplets (disease

  14. Text Mining Genotype-Phenotype Relationships from Biomedical Literature for Database Curation and Precision Medicine.

    Directory of Open Access Journals (Sweden)

    Ayush Singhal

    2016-11-01

    Full Text Available The practice of precision medicine will ultimately require databases of genes and mutations for healthcare providers to reference in order to understand the clinical implications of each patient's genetic makeup. Although the highest quality databases require manual curation, text mining tools can facilitate the curation process, increasing accuracy, coverage, and productivity. However, to date there are no available text mining tools that offer high-accuracy performance for extracting such triplets from biomedical literature. In this paper we propose a high-performance machine learning approach to automate the extraction of disease-gene-variant triplets from biomedical literature. Our approach is unique because we identify the genes and protein products associated with each mutation from not just the local text content, but from a global context as well (from the Internet and from all literature in PubMed). Our approach also incorporates protein sequence validation and disease association using a novel text-mining-based machine learning approach. We extract disease-gene-variant triplets from all abstracts in PubMed related to a set of ten important diseases (breast cancer, prostate cancer, pancreatic cancer, lung cancer, acute myeloid leukemia, Alzheimer's disease, hemochromatosis, age-related macular degeneration (AMD), diabetes mellitus, and cystic fibrosis). We then evaluate our approach in two ways: (1) a direct comparison with the state of the art using benchmark datasets; (2) a validation study comparing the results of our approach with entries in a popular human-curated database (UniProt) for each of the previously mentioned diseases. In the benchmark comparison, our full approach achieves a 28% improvement in F1-measure (from 0.62 to 0.79) over the state-of-the-art results. For the validation study with UniProt Knowledgebase (KB), we present a thorough analysis of the results and errors. Across all diseases, our approach returned 272 triplets

  15. The Development of a Combined Search for a Heterogeneous Chemistry Database

    Directory of Open Access Journals (Sweden)

    Lulu Jiang

    2015-05-01

    Full Text Available A combined search, which joins a slow molecule structure search with a fast compound property search, results in more accurate search results and has been applied in several chemistry databases. However, the problems of search speed differences and combining the two separate search results are two major challenges. In this paper, two kinds of search strategies, synchronous search and asynchronous search, are proposed to solve these problems in the heterogeneous structure database and the property database found in ChemDB, a chemistry database owned by the Institute of Process Engineering, CAS. Their advantages and disadvantages under different conditions are discussed in detail. Furthermore, we applied these two searches to ChemDB and used them to screen for potential molecules that can work as CO2 absorbents. The results reveal that this combined search discovers reasonable target molecules within an acceptable time frame.
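
    As a hedged illustration of the asynchronous strategy mentioned above (not the ChemDB implementation), the sketch below runs a fast property search and a slow structure search concurrently and intersects their results; both search functions are stand-ins for the real database back ends.

```python
# Illustrative sketch of an "asynchronous" combined search: a fast property
# search and a slow structure search run concurrently and their results are
# intersected when both finish. The two search functions are stand-ins for
# the database back ends described in the abstract.
import time
from concurrent.futures import ThreadPoolExecutor

def property_search(max_boiling_point):
    # Fast: pretend to filter a compound property table.
    time.sleep(0.1)
    return {"CO2", "NH3", "H2O"}

def structure_search(substructure_smiles):
    # Slow: pretend to run a substructure match over molecule structures.
    time.sleep(1.0)
    return {"NH3", "H2O", "CH4"}

def combined_search():
    with ThreadPoolExecutor(max_workers=2) as pool:
        fast = pool.submit(property_search, 150.0)
        slow = pool.submit(structure_search, "[OH]")
        fast_hits = fast.result()          # available almost immediately
        print("preliminary hits:", fast_hits)
        return fast_hits & slow.result()   # final, combined result

if __name__ == "__main__":
    print("combined hits:", combined_search())
```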

  16. An approach in building a chemical compound search engine in oracle database.

    Science.gov (United States)

    Wang, H; Volarath, P; Harrison, R

    2005-01-01

    Searching for or identifying chemical compounds is an important process in drug design and in chemistry research. An efficient search engine involves a close coupling of the search algorithm and the database implementation. The database must handle chemical structures, which demands approaches to represent, store, and retrieve structures in a database system. In this paper, a general database framework for working as a chemical compound search engine in an Oracle database is described. The framework is designed to eliminate data type constraints for potential search algorithms, which is a crucial step toward building a domain-specific query language on top of SQL. A search engine implementation based on the database framework is also demonstrated. The convenience of the implementation emphasizes the efficiency and simplicity of the framework.

  17. Objective and automated protocols for the evaluation of biomedical search engines using No Title Evaluation protocols.

    Science.gov (United States)

    Campagne, Fabien

    2008-02-29

    The evaluation of information retrieval techniques has traditionally relied on human judges to determine which documents are relevant to a query and which are not. This protocol is used in the Text Retrieval Evaluation Conference (TREC), organized annually for the past 15 years, to support the unbiased evaluation of novel information retrieval approaches. The TREC Genomics Track has recently been introduced to measure the performance of information retrieval for biomedical applications. We describe two protocols for evaluating biomedical information retrieval techniques without human relevance judgments. We call these protocols No Title Evaluation (NT Evaluation). The first protocol measures performance for focused searches, where only one relevant document exists for each query. The second protocol measures performance for queries expected to have potentially many relevant documents per query (high-recall searches). Both protocols take advantage of the clear separation of titles and abstracts found in Medline. We compare the performance obtained with these evaluation protocols to results obtained by reusing the relevance judgments produced in the 2004 and 2005 TREC Genomics Track and observe significant correlations between performance rankings generated by our approach and TREC. Spearman's correlation coefficients in the range of 0.79-0.92 are observed comparing bpref measured with NT Evaluation or with TREC evaluations. For comparison, coefficients in the range 0.86-0.94 can be observed when evaluating the same set of methods with data from two independent TREC Genomics Track evaluations. We discuss the advantages of NT Evaluation over the TRels and the data fusion evaluation protocols introduced recently. Our results suggest that the NT Evaluation protocols described here could be used to optimize some search engine parameters before human evaluation. Further research is needed to determine if NT Evaluation or variants of these protocols can fully substitute
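
    The rank correlations quoted above can be reproduced in principle with Spearman's rho over two performance rankings of the same retrieval methods; the sketch below uses invented bpref-like scores purely to show the calculation.

```python
# How a rank correlation like the ones quoted above (0.79-0.92) can be
# computed: Spearman's rho between two performance rankings of the same
# retrieval methods. The bpref-like scores below are invented for illustration.
from scipy.stats import spearmanr

# Performance of five hypothetical retrieval methods under two protocols.
nt_eval_scores = [0.41, 0.35, 0.52, 0.28, 0.47]
trec_scores    = [0.44, 0.31, 0.55, 0.30, 0.43]

rho, p_value = spearmanr(nt_eval_scores, trec_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```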

  18. PIR search result - KOME | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Data analysis method: blastx searches were performed against the PIR protein database. The results are filtered with Expect values lower than 1e-10. Number of data entries: 1,549,409.

  19. pSort search result - KOME | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available File name: kome_psort_search_result.zip File URL: ftp://ftp.biosciencedbc.jp/archive/kome/LATEST/kome_psort_searc... (pSort search result data for KOME, from the LSDB Archive).

  20. A Review of Published Articles in the Field of Biomedical Nanotechnology in Medline Database during 2000-2010

    OpenAIRE

    Peyman Sheikhzade

    2015-01-01

    Background and objectives: Nanotechnology is a new technology which has been increasingly used over the past decade. Due to its great significance, governments are tending to invest greatly in research and development on nanotechnology in various sectors and aspects. The purpose of this study was to determine the status of biomedical nanotechnology publications over the past ten years (2000-2010) in Medline/PubMed. Material and Methods: This was a descriptive study. The Medline database wa...

  1. An effective suggestion method for keyword search of databases

    KAUST Repository

    Huang, Hai; Chen, Zonghai; Liu, Chengfei; Huang, He; Zhang, Xiangliang

    2016-01-01

    This paper solves the problem of providing high-quality suggestions for user keyword queries over databases. With the assumption that the returned suggestions are independent, existing query suggestion methods over databases score candidate

  2. MetaboSearch: tool for mass-based metabolite identification using multiple databases.

    Directory of Open Access Journals (Sweden)

    Bin Zhou

    Full Text Available Searching metabolites against databases according to their masses is often the first step in metabolite identification for a mass spectrometry-based untargeted metabolomics study. Major metabolite databases include the Human Metabolome Database (HMDB), the Madison Metabolomics Consortium Database (MMCD), Metlin, and LIPID MAPS. Since each one of these databases covers only a fraction of the metabolome, integration of the search results from these databases is expected to yield a more comprehensive coverage. However, the manual combination of multiple search results is generally difficult when identification of hundreds of metabolites is desired. We have implemented a web-based software tool that enables simultaneous mass-based search against the four major databases, and the integration of the results. In addition, more complete chemical identifier information for the metabolites is retrieved by cross-referencing multiple databases. The search results are merged based on IUPAC International Chemical Identifier (InChI) keys. Besides a simple list of m/z values, the software can accept ion annotation information as input for enhanced metabolite identification. The performance of the software is demonstrated on mass spectrometry data acquired in both positive and negative ionization modes. Compared with search results from individual databases, MetaboSearch provides better coverage of the metabolome and more complete chemical identifier information. The software tool is available at http://omics.georgetown.edu/MetaboSearch.html.
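
    A minimal sketch of the core operation described above, assuming toy database records rather than real HMDB/MMCD entries: match a query mass against candidate monoisotopic masses within a ppm tolerance and merge hits from several sources on their InChIKey.

```python
# Minimal sketch of mass-based metabolite lookup with result merging on
# InChIKey, as performed by tools like the one described above. The two toy
# "databases" and their masses/InChIKeys are illustrative, not real records.
PPM_TOL = 10.0

DB_A = [  # e.g. an HMDB-like source
    {"name": "Glucose", "mass": 180.06339, "inchikey": "WQZGKKKJIJFFOK-GASJEMHNSA-N"},
]
DB_B = [  # e.g. an MMCD-like source
    {"name": "D-Glucose", "mass": 180.06339, "inchikey": "WQZGKKKJIJFFOK-GASJEMHNSA-N"},
    {"name": "Alanine", "mass": 89.04768, "inchikey": "QNAYBMKLOCPYGJ-REOHCLBHSA-N"},
]

def within_ppm(query_mass, db_mass, tol=PPM_TOL):
    return abs(query_mass - db_mass) / db_mass * 1e6 <= tol

def search(query_mass, databases):
    merged = {}  # InChIKey -> set of names found across all databases
    for db in databases:
        for entry in db:
            if within_ppm(query_mass, entry["mass"]):
                merged.setdefault(entry["inchikey"], set()).add(entry["name"])
    return merged

if __name__ == "__main__":
    # Neutral monoisotopic mass derived from an [M+H]+ ion at m/z 181.07067.
    print(search(180.06339, [DB_A, DB_B]))
```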

  3. Using relational databases for improved sequence similarity searching and large-scale genomic analyses.

    Science.gov (United States)

    Mackey, Aaron J; Pearson, William R

    2004-10-01

    Relational databases are designed to integrate diverse types of information and manage large sets of search results, greatly simplifying genome-scale analyses. Relational databases are essential for management and analysis of large-scale sequence analyses, and can also be used to improve the statistical significance of similarity searches by focusing on subsets of sequence libraries most likely to contain homologs. This unit describes using relational databases to improve the efficiency of sequence similarity searching and to demonstrate various large-scale genomic analyses of homology-related data. This unit describes the installation and use of a simple protein sequence database, seqdb_demo, which is used as a basis for the other protocols. These include basic use of the database to generate a novel sequence library subset, how to extend and use seqdb_demo for the storage of sequence similarity search results and making use of various kinds of stored search results to address aspects of comparative genomic analysis.
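
    The unit's seqdb_demo schema is not reproduced here, but the underlying idea can be sketched as follows: keep protein sequences in a relational table and export a taxon-restricted subset as a FASTA library for similarity searching. The table, accessions and sequence fragments are invented.

```python
# Sketch of the idea behind seqdb_demo-style workflows: store protein sequences
# in a relational table, then pull out a subset (here, one taxon) to use as a
# smaller search library. Schema and rows are hypothetical, not the unit's own.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE protein (acc TEXT PRIMARY KEY, taxon TEXT, seq TEXT)")
conn.executemany("INSERT INTO protein VALUES (?, ?, ?)", [
    ("HS0001", "Homo sapiens", "MVLSPADKTNVKAAW"),
    ("CV0002", "SARS-CoV-2", "MFVFLVLLPLVSSQC"),
    ("BT0003", "Bos taurus", "MLTAEEKAAVTAFWG"),
])

def write_subset_fasta(conn, taxon, path):
    """Write all sequences for one taxon as a FASTA library for similarity search."""
    with open(path, "w") as fasta:
        for acc, seq in conn.execute(
            "SELECT acc, seq FROM protein WHERE taxon = ?", (taxon,)
        ):
            fasta.write(f">{acc} {taxon}\n{seq}\n")

write_subset_fasta(conn, "Homo sapiens", "human_subset.fasta")
print(open("human_subset.fasta").read())
```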

  4. Development and evaluation of a biomedical search engine using a predicate-based vector space model.

    Science.gov (United States)

    Kwak, Myungjae; Leroy, Gondy; Martinez, Jesse D; Harwell, Jeffrey

    2013-10-01

    Although biomedical information available in articles and patents is increasing exponentially, we continue to rely on the same information retrieval methods and use very few keywords to search millions of documents. We are developing a fundamentally different approach for finding much more precise and complete information with a single query using predicates instead of keywords for both query and document representation. Predicates are triples that are more complex data structures than keywords and contain more structured information. To make optimal use of them, we developed a new predicate-based vector space model and query-document similarity function with adjusted tf-idf and boost function. Using a test bed of 107,367 PubMed abstracts, we evaluated the first essential function: retrieving information. Cancer researchers provided 20 realistic queries, for which the top 15 abstracts were retrieved using a predicate-based (new) and keyword-based (baseline) approach. Each abstract was evaluated, double-blind, by cancer researchers on a 0-5 point scale to calculate precision (0 versus higher) and relevance (0-5 score). Precision was significantly higher (p ...) for the predicate-based approach than for the keyword-based approach, laying the foundation for rich and sophisticated information search. Copyright © 2013 Elsevier Inc. All rights reserved.
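
    The paper's adjusted tf-idf weighting and boost function are not reproduced here; the sketch below only illustrates the underlying idea of a predicate-based vector space model, representing documents and the query as bags of (subject, relation, object) triples and ranking by plain tf-idf cosine similarity over invented triples.

```python
# Minimal sketch of a predicate-based vector space model: documents and the
# query are bags of (subject, relation, object) triples instead of keywords,
# and are ranked by plain tf-idf cosine similarity. The triples are invented
# and the weighting is the textbook formula, not the paper's adjusted one.
import math
from collections import Counter

docs = {
    "doc1": [("EGFR", "activates", "MAPK"), ("gefitinib", "inhibits", "EGFR")],
    "doc2": [("TP53", "regulates", "apoptosis"), ("EGFR", "activates", "MAPK")],
    "doc3": [("aspirin", "inhibits", "COX2")],
}
query = [("gefitinib", "inhibits", "EGFR")]

def tfidf(bag, idf):
    counts = Counter(bag)
    return {t: counts[t] * idf[t] for t in counts if t in idf}

def cosine(a, b):
    num = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return num / (na * nb) if na and nb else 0.0

n_docs = len(docs)
df = Counter(t for bag in docs.values() for t in set(bag))
idf = {t: math.log(n_docs / df[t]) for t in df}

doc_vecs = {name: tfidf(bag, idf) for name, bag in docs.items()}
query_vec = tfidf(query, idf)
ranking = sorted(docs, key=lambda d: cosine(query_vec, doc_vecs[d]), reverse=True)
print(ranking)   # doc1 ranks first: it contains the query predicate
```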

  5. Efficiency of Database Search for Identification of Mutated and Modified Proteins via Mass Spectrometry

    OpenAIRE

    Pevzner, Pavel A.; Mulyukov, Zufar; Dancik, Vlado; Tang, Chris L

    2001-01-01

    Although protein identification by matching tandem mass spectra (MS/MS) against protein databases is a widespread tool in mass spectrometry, the question about reliability of such searches remains open. Absence of rigorous significance scores in MS/MS database search makes it difficult to discard random database hits and may lead to erroneous protein identification, particularly in the case of mutated or post-translationally modified peptides. This problem is especially important for high-thr...

  6. Searching Databases without Query-Building Aids: Implications for Dyslexic Users

    Science.gov (United States)

    Berget, Gerd; Sandnes, Frode Eika

    2015-01-01

    Introduction: Few studies document the information searching behaviour of users with cognitive impairments. This paper therefore addresses the effect of dyslexia on information searching in a database with no tolerance for spelling errors and no query-building aids. The purpose was to identify effective search interface design guidelines that…

  7. Term Relevance Feedback and Mediated Database Searching: Implications for Information Retrieval Practice and Systems Design.

    Science.gov (United States)

    Spink, Amanda

    1995-01-01

    This study uses the human approach to examine the sources and effectiveness of search terms selected during 40 mediated interactive database searches and focuses on determining the retrieval effectiveness of search terms identified by users and intermediaries from retrieved items during term relevance feedback. (Author/JKP)

  8. A student's guide to searching the literature using online databases

    Science.gov (United States)

    Miller, Casey W.; Belyea, Dustin; Chabot, Michelle; Messina, Troy

    2012-02-01

    A method is described to empower students to efficiently perform general and specific literature searches using online resources [Miller et al., Am. J. Phys. 77, 1112 (2009)]. The method was tested on multiple groups, including undergraduate and graduate students with varying backgrounds in scientific literature searches. Students involved in this study showed marked improvement in their awareness of how and where to find scientific information. Repeated exposure to literature searching methods appears worthwhile, starting early in the undergraduate career, and even in graduate school orientation.

  9. Searching for religion and mental health studies required health, social science, and grey literature databases.

    Science.gov (United States)

    Wright, Judy M; Cottrell, David J; Mir, Ghazala

    2014-07-01

    To determine the optimal databases to search for studies of faith-sensitive interventions for treating depression. We examined 23 health, social science, religious, and grey literature databases searched for an evidence synthesis. Databases were prioritized by yield of (1) search results, (2) potentially relevant references identified during screening, (3) included references contained in the synthesis, and (4) included references that were available in the database. We assessed the impact of databases beyond MEDLINE, EMBASE, and PsycINFO by their ability to supply studies identifying new themes and issues. We identified pragmatic workload factors that influence database selection. PsycINFO was the best performing database within all priority lists. ArabPsyNet, CINAHL, Dissertations and Theses, EMBASE, Global Health, Health Management Information Consortium, MEDLINE, PsycINFO, and Sociological Abstracts were essential for our searches to retrieve the included references. Citation tracking activities and the personal library of one of the research teams made significant contributions of unique, relevant references. Religion studies databases (Am Theo Lib Assoc, FRANCIS) did not provide unique, relevant references. Literature searches for reviews and evidence syntheses of religion and health studies should include social science, grey literature, non-Western databases, personal libraries, and citation tracking activities. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. Chapter 51: How to Build a Simple Cone Search Service Using a Local Database

    Science.gov (United States)

    Kent, B. R.; Greene, G. R.

    The cone search service protocol will be examined from the server side in this chapter. A simple cone search service will be setup and configured locally using MySQL. Data will be read into a table, and the Java JDBC will be used to connect to the database. Readers will understand the VO cone search specification and how to use it to query a database on their local systems and return an XML/VOTable file based on an input of RA/DEC coordinates and a search radius. The cone search in this example will be deployed as a Java servlet. The resulting cone search can be tested with a verification service. This basic setup can be used with other languages and relational databases.
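
    The chapter's service is a Java servlet over MySQL; independent of that plumbing, the core cone-search predicate can be sketched as below: keep catalogue rows whose angular separation from the query position lies within the search radius. The tiny catalogue is made up.

```python
# Core of a cone search, independent of the servlet/MySQL plumbing described
# in the chapter: keep catalogue rows whose angular separation from the query
# position (RA, Dec, in degrees) is within the search radius. The tiny
# catalogue below is made up.
import math

def angular_separation_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees (Vincenty formula for robustness)."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    dra = ra2 - ra1
    num = math.hypot(
        math.cos(dec2) * math.sin(dra),
        math.cos(dec1) * math.sin(dec2) - math.sin(dec1) * math.cos(dec2) * math.cos(dra),
    )
    den = math.sin(dec1) * math.sin(dec2) + math.cos(dec1) * math.cos(dec2) * math.cos(dra)
    return math.degrees(math.atan2(num, den))

catalogue = [
    {"id": "src1", "ra": 180.000, "dec": 30.000},
    {"id": "src2", "ra": 180.020, "dec": 30.010},
    {"id": "src3", "ra": 182.000, "dec": 31.000},
]

def cone_search(ra, dec, radius_deg):
    return [row for row in catalogue
            if angular_separation_deg(ra, dec, row["ra"], row["dec"]) <= radius_deg]

print(cone_search(180.0, 30.0, 0.05))   # src1 and src2 fall inside the cone
```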

  11. Enabling Searches on Wavelengths in a Hyperspectral Indices Database

    Science.gov (United States)

    Piñuela, F.; Cerra, D.; Müller, R.

    2017-10-01

    Spectral indices derived from hyperspectral reflectance measurements are powerful tools to estimate physical parameters in a non-destructive and precise way for several fields of application, among others vegetation health analysis, coastal and deep water constituents, geology, and atmosphere composition. In recent years, several micro-hyperspectral sensors have appeared, with both full-frame and push-broom acquisition technologies, while in the near future several hyperspectral spaceborne missions are planned to be launched. This is fostering the use of hyperspectral data in basic and applied research, causing a large number of spectral indices to be defined and used in various applications. Ad hoc search engines are therefore needed to retrieve the most appropriate indices for a given application. In traditional systems, query input parameters are limited to alphanumeric strings, while characteristics such as spectral range/bandwidth are not used in any existing search engine. Such information would be relevant, as it enables an inverse type of search: given the spectral capabilities of a given sensor or a specific spectral band, find all indices which can be derived from it. This paper describes a tool which enables a search as described above, by using the central wavelength or spectral range used by a given index as a search parameter. This offers the ability to manage numeric wavelength ranges in order to select indices which work best in a given set of wavelengths or wavelength ranges.

  12. Social Work Literature Searching: Current Issues with Databases and Online Search Engines

    Science.gov (United States)

    McGinn, Tony; Taylor, Brian; McColgan, Mary; McQuilkan, Janice

    2016-01-01

    Objectives: To compare the performance of a range of search facilities; and to illustrate the execution of a comprehensive literature search for qualitative evidence in social work. Context: Developments in literature search methods and comparisons of search facilities help facilitate access to the best available evidence for social workers.…

  13. Usability Testing of a Large, Multidisciplinary Library Database: Basic Search and Visual Search

    Directory of Open Access Journals (Sweden)

    Jody Condit Fagan

    2006-09-01

    Full Text Available Visual search interfaces have been shown by researchers to assist users with information search and retrieval. Recently, several major library vendors have added visual search interfaces or functions to their products. For public service librarians, perhaps the most critical area of interest is the extent to which visual search interfaces and text-based search interfaces support research. This study presents the results of eight full-scale usability tests of both the EBSCOhost Basic Search and Visual Search in the context of a large liberal arts university.

  14. Modelling antibody side chain conformations using heuristic database search.

    Science.gov (United States)

    Ritchie, D W; Kemp, G J

    1997-01-01

    We have developed a knowledge-based system which models the side chain conformations of residues in the variable domains of antibody Fv fragments. The system is written in Prolog and uses an object-oriented database of aligned antibody structures in conjunction with a side chain rotamer library. The antibody database provides 3-dimensional clusters of side chain conformations which can be copied en masse into the model structure. The object-oriented database architecture facilitates a navigational style of database access, necessary to assemble side chains clusters. Around 60% of the model is built using side chain clusters and this eliminates much of the combinatorial complexity associated with many other side chain placement algorithms. Construction and placement of side chain clusters is guided by a heuristic cost function based on a simple model of side chain packing interactions. Even with a simple model, we find that a large proportion of side chain conformations are modelled accurately. We expect our approach could be used with other homologous protein families, in addition to antibodies, both to improve the quality of model structures and to give a "smart start" to the side chain placement problem.

  15. Searching for evidence or approval? A commentary on database search in systematic reviews and alternative information retrieval methodologies.

    Science.gov (United States)

    Delaney, Aogán; Tamás, Peter A

    2018-03-01

    Despite recognition that database search alone is inadequate even within the health sciences, it appears that reviewers in fields that have adopted systematic review are choosing to rely primarily, or only, on database search for information retrieval. This commentary reminds readers of factors that call into question the appropriateness of default reliance on database searches particularly as systematic review is adapted for use in new and lower consensus fields. It then discusses alternative methods for information retrieval that require development, formalisation, and evaluation. Our goals are to encourage reviewers to reflect critically and transparently on their choice of information retrieval methods and to encourage investment in research on alternatives. Copyright © 2017 John Wiley & Sons, Ltd.

  16. Federated or cached searches: providing expected performance from multiple invasive species databases

    Science.gov (United States)

    Graham, Jim; Jarnevich, Catherine S.; Simpson, Annie; Newman, Gregory J.; Stohlgren, Thomas J.

    2011-01-01

    Invasive species are a universal global problem, but the information to identify them, manage them, and prevent invasions is stored around the globe in a variety of formats. The Global Invasive Species Information Network is a consortium of organizations working toward providing seamless access to these disparate databases via the Internet. A distributed network of databases can be created using the Internet and a standard web service protocol. There are two options to provide this integration. First, federated searches are being proposed to allow users to search “deep” web documents such as databases for invasive species. A second method is to create a cache of data from the databases for searching. We compare these two methods and show that federated searches will not provide the performance and flexibility required by users, and that a central cache of the data is required to improve performance.
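
    A toy illustration of the trade-off compared in this record, with simulated sources and latencies standing in for real invasive-species databases: a federated search pays every remote source's latency per query, while a cached search hits a local copy harvested in advance.

```python
# Toy illustration of the trade-off discussed above: a federated search pays
# the latency of every remote source on each query, while a cached search
# queries a local copy harvested ahead of time. Sources and latencies are
# simulated, not real invasive-species databases.
import time

SOURCES = {
    "source_a": {"latency": 0.8, "records": ["kudzu", "zebra mussel"]},
    "source_b": {"latency": 1.5, "records": ["cane toad", "kudzu"]},
}

def remote_query(source, term):
    time.sleep(SOURCES[source]["latency"])          # simulate network + remote DB
    return [r for r in SOURCES[source]["records"] if term in r]

def federated_search(term):
    hits = []
    for source in SOURCES:                          # the slowest source dominates
        hits.extend(remote_query(source, term))
    return hits

CACHE = sorted({r for s in SOURCES.values() for r in s["records"]})  # harvested earlier

def cached_search(term):
    return [r for r in CACHE if term in r]

for search in (federated_search, cached_search):
    start = time.perf_counter()
    result = search("kudzu")
    print(f"{search.__name__}: {result} in {time.perf_counter() - start:.2f}s")
```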

  17. Content Based Retrieval Database Management System with Support for Similarity Searching and Query Refinement

    National Research Council Canada - National Science Library

    Ortega-Binderberger, Michael

    2002-01-01

    ... as a critical area of research. This thesis explores how to enhance database systems with content based search over arbitrary abstract data types in a similarity based framework with query refinement...

  18. STEPS: a grid search methodology for optimized peptide identification filtering of MS/MS database search results.

    Science.gov (United States)

    Piehowski, Paul D; Petyuk, Vladislav A; Sandoval, John D; Burnum, Kristin E; Kiebel, Gary R; Monroe, Matthew E; Anderson, Gordon A; Camp, David G; Smith, Richard D

    2013-03-01

    For bottom-up proteomics, there is a wide variety of database-searching algorithms in use for matching peptide sequences to tandem MS spectra. Likewise, there are numerous strategies being employed to produce a confident list of peptide identifications from the different search algorithm outputs. Here we introduce a grid-search approach for determining optimal database filtering criteria in shotgun proteomics data analyses that is easily adaptable to any search. Systematic Trial and Error Parameter Selection--referred to as STEPS--utilizes user-defined parameter ranges to test a wide array of parameter combinations to arrive at an optimal "parameter set" for data filtering, thus maximizing confident identifications. The benefits of this approach in terms of numbers of true-positive identifications are demonstrated using datasets derived from immunoaffinity-depleted blood serum and a bacterial cell lysate, two common proteomics sample types. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
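
    A scaled-down sketch of the grid-search idea, assuming toy peptide-spectrum matches and just two filter criteria: enumerate threshold combinations and keep the parameter set that maximizes identifications while the decoy-estimated FDR stays acceptable.

```python
# Scaled-down sketch of the STEPS idea: exhaustively try combinations of
# filtering thresholds and keep the "parameter set" that yields the most
# identifications passing all filters at an acceptable decoy-estimated FDR.
# The toy peptide-spectrum matches and the two criteria are assumptions.
from itertools import product

# Hypothetical peptide-spectrum matches: (score, mass_error_ppm, is_decoy).
psms = [
    (3.2, 0.8, False), (2.9, 1.5, False), (2.6, 4.8, True),  (2.4, 1.1, False),
    (2.1, 2.6, False), (1.9, 0.5, True),  (1.7, 1.9, False), (1.2, 3.4, True),
]

score_cutoffs = [1.0, 1.5, 2.0, 2.5, 3.0]
ppm_cutoffs = [1.0, 2.0, 3.0, 5.0]
MAX_FDR = 0.10   # keep only parameter sets whose decoy-estimated FDR is acceptable

def evaluate(score_min, ppm_max):
    passed = [(s, p, d) for s, p, d in psms if s >= score_min and p <= ppm_max]
    targets = sum(1 for *_, d in passed if not d)
    decoys = len(passed) - targets
    fdr = decoys / targets if targets else 1.0
    return targets, fdr

best = None
for params in product(score_cutoffs, ppm_cutoffs):
    targets, fdr = evaluate(*params)
    if fdr <= MAX_FDR and (best is None or targets > best[1]):
        best = (params, targets, fdr)

print("best parameter set (score >=, ppm <=), identifications, FDR:", best)
```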

  19. The new ENSDF search system NESSY: IBM/PC nuclear spectroscopy database

    International Nuclear Information System (INIS)

    Boboshin, I.N.; Varlamov, V.V.

    1996-01-01

    The universal relational nuclear structure and decay database NESSY (New ENSDF Search SYstem) developed for the IBM/PC and compatible PCs, and based on the international file ENSDF (Evaluated Nuclear Structure Data File), is described. The NESSY provides the possibility of high efficiency processing (the search and retrieval of any kind of physical data) of the information from ENSDF. The principles of the database development are described and examples of applications are presented. (orig.)

  20. When is a search not a search? A comparison of searching the AMED complementary health database via EBSCOhost, OVID and DIALOG.

    Science.gov (United States)

    Younger, Paula; Boddy, Kate

    2009-06-01

    The researchers involved in this study work at Exeter Health library and at the Complementary Medicine Unit, Peninsula School of Medicine and Dentistry (PCMD). Within this collaborative environment it is possible to access the electronic resources of three institutions. This includes access to AMED and other databases using different interfaces. The aim of this study was to investigate whether searching different interfaces to the AMED allied health and complementary medicine database produced the same results when using identical search terms. The following Internet-based AMED interfaces were searched: DIALOG DataStar; EBSCOhost and OVID SP_UI01.00.02. Search results from all three databases were saved in an endnote database to facilitate analysis. A checklist was also compiled comparing interface features. In our initial search, DIALOG returned 29 hits, OVID 14 and Ebsco 8. If we assume that DIALOG returned 100% of potential hits, OVID initially returned only 48% of hits and EBSCOhost only 28%. In our search, a researcher using the Ebsco interface to carry out a simple search on AMED would miss over 70% of possible search hits. Subsequent EBSCOhost searches on different subjects failed to find between 21 and 86% of the hits retrieved using the same keywords via DIALOG DataStar. In two cases, the simple EBSCOhost search failed to find any of the results found via DIALOG DataStar. Depending on the interface, the number of hits retrieved from the same database with the same simple search can vary dramatically. Some simple searches fail to retrieve a substantial percentage of citations. This may result in an uninformed literature review, research funding application or treatment intervention. In addition to ensuring that keywords, spelling and medical subject headings (MeSH) accurately reflect the nature of the search, database users should include wildcards and truncation and adapt their search strategy substantially to retrieve the maximum number of appropriate

  1. An effective suggestion method for keyword search of databases

    KAUST Repository

    Huang, Hai

    2016-09-09

    This paper solves the problem of providing high-quality suggestions for user keyword queries over databases. With the assumption that the returned suggestions are independent, existing query suggestion methods over databases score candidate suggestions individually and return the top-k best of them. However, the top-k suggestions have high redundancy with respect to the topics. To provide informative suggestions, the returned k suggestions are expected to be diverse, i.e., to simultaneously maximize the relevance to the user query and the diversity with respect to topics that the user might be interested in. In this paper, an objective function considering both factors is defined for evaluating a suggestion set. We show that maximizing the objective function is a submodular function maximization problem subject to n matroid constraints, which is an NP-hard problem. A greedy approximation algorithm with an approximation ratio O((Formula presented.)) is also proposed. Experimental results show that our method outperforms other methods in providing relevant and diverse suggestions. © 2016 Springer Science+Business Media New York
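
    The exact objective function and matroid constraints of the paper are not reproduced here; the sketch below only conveys the flavor of greedy diversified selection, trading invented relevance scores against word-overlap redundancy with suggestions already chosen.

```python
# Minimal greedy selection of k keyword suggestions that balances relevance to
# the user query against redundancy with suggestions already picked, in the
# spirit of the diversified objective described above. Scores are invented and
# the redundancy measure is a crude word-level Jaccard overlap.
candidates = {
    "heart attack treatment": 0.90,
    "heart attack therapy": 0.88,            # nearly the same topic as the first
    "heart attack risk factors": 0.75,
    "cardiac rehabilitation exercise": 0.60,
}

def topic_overlap(a, b):
    """Crude topic similarity: Jaccard overlap of the suggestion words."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb)

def greedy_diverse(candidates, k=2, trade_off=0.5):
    chosen = []
    pool = dict(candidates)
    while pool and len(chosen) < k:
        def marginal_gain(s):
            redundancy = max((topic_overlap(s, c) for c in chosen), default=0.0)
            return trade_off * pool[s] - (1 - trade_off) * redundancy
        best = max(pool, key=marginal_gain)
        chosen.append(best)
        del pool[best]
    return chosen

# With these toy scores the second pick is the non-redundant
# "cardiac rehabilitation exercise" rather than "heart attack therapy".
print(greedy_diverse(candidates))
```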

  2. Evaluation of journal use in the "Ebsco Biomedical Reference Collection" database at the Library and Medical Informatics Unit (UPIK), Faculty of Medicine, UGM, Yogyakarta

    Directory of Open Access Journals (Sweden)

    Eka Wardhani S.

    2015-12-01

    Full Text Available Evaluation of collection use is essential to determine how much a collection is accessed and used by its users. The Ebsco Biomedical Reference Collection (Ebsco BRC) is one of the journal databases based on the access paradigm. This evaluation of journal use in the Ebsco BRC database is a study of library collection use carried out at UPIK (the Library and Medical Informatics Unit) of the Faculty of Medicine, Universitas Gadjah Mada, Yogyakarta. The study aims to determine the level of journal usage and utilization by the academic community at FK UGM. The evaluation was conducted with a descriptive method using quantitative and qualitative data approaches. The instruments used in the evaluation were a questionnaire and usage statistics reports. The results show that the usage rate of journals by title is high (97.96%), but the level of access is not yet maximal; on average, 25% of the journals are accessed each day. From the usage statistics report, 12 journal titles were accessed more than 1,000 times and are considered the journals most frequently accessed by users. Based on the results, the researcher recommends that the subscription to the Ebsco database collection be continued, but UPIK should work to improve promotion of the collection, accessibility, facilities, and guidance for users in searching the database so that it can be used to its full potential. Keywords: collection evaluation, Ebsco

  3. PLAST: parallel local alignment search tool for database comparison

    Directory of Open Access Journals (Sweden)

    Lavenier Dominique

    2009-10-01

    Full Text Available Abstract Background Sequence similarity searching is an important and challenging task in molecular biology and next-generation sequencing should further strengthen the need for faster algorithms to process such vast amounts of data. At the same time, the internal architecture of current microprocessors is tending towards more parallelism, leading to the use of chips with two, four and more cores integrated on the same die. The main purpose of this work was to design an effective algorithm to fit with the parallel capabilities of modern microprocessors. Results A parallel algorithm for comparing large genomic banks and targeting middle-range computers has been developed and implemented in PLAST software. The algorithm exploits two key parallel features of existing and future microprocessors: the SIMD programming model (SSE instruction set and the multithreading concept (multicore. Compared to multithreaded BLAST software, tests performed on an 8-processor server have shown speedup ranging from 3 to 6 with a similar level of accuracy. Conclusion A parallel algorithmic approach driven by the knowledge of the internal microprocessor architecture allows significant speedup to be obtained while preserving standard sensitivity for similarity search problems.

  4. Database citation in supplementary data linked to Europe PubMed Central full text biomedical articles.

    Science.gov (United States)

    Kafkas, Şenay; Kim, Jee-Hyub; Pi, Xingjun; McEntyre, Johanna R

    2015-01-01

    In this study, we present an analysis of data citation practices in full text research articles and their corresponding supplementary data files, made available in the Open Access set of articles from Europe PubMed Central. Our aim is to investigate whether supplementary data files should be considered as a source of information for integrating the literature with biomolecular databases. Using text-mining methods to identify and extract a variety of core biological database accession numbers, we found that the supplemental data files contain many more database citations than the body of the article, and that those citations often take the form of a relatively small number of articles citing large collections of accession numbers in text-based files. Moreover, citation of value-added databases derived from submission databases (such as Pfam, UniProt or Ensembl) is common, demonstrating the reuse of these resources as datasets in themselves. All the database accession numbers extracted from the supplementary data are publicly accessible from http://dx.doi.org/10.5281/zenodo.11771. Our study suggests that supplementary data should be considered when linking articles with data, in curation pipelines, and in information retrieval tasks in order to make full use of the entire research article. These observations highlight the need to improve the management of supplemental data in general, in order to make this information more discoverable and useful.
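
    As a simplified illustration of the kind of text mining described above (not the Europe PMC pipeline itself), the sketch below applies loose regular expressions for a few common accession-number formats to supplementary-file text.

```python
# Simplified illustration of mining database accession numbers from
# supplementary-file text. The regular expressions cover only a few common
# formats, are deliberately loose, and are not the patterns used by the
# Europe PMC pipeline described above.
import re

PATTERNS = {
    "uniprot": re.compile(r"\b[OPQ][0-9][A-Z0-9]{3}[0-9]\b"),
    "genbank_nt": re.compile(r"\b[A-Z]{1,2}\d{5,6}(?:\.\d+)?\b"),
    "pdb": re.compile(r"\b[1-9][A-Za-z0-9]{3}\b"),
}

text = """Primer sequences were designed against GenBank entry AF231982.1.
The kinase domain (UniProt P00533) was modelled on PDB structure 2ITY."""

# Loose patterns can overlap (e.g. a UniProt accession also matches the
# GenBank-like pattern); a real pipeline disambiguates using context.
for source, pattern in PATTERNS.items():
    print(source, "->", pattern.findall(text))
```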

  5. muBLASTP: database-indexed protein sequence search on multicore CPUs.

    Science.gov (United States)

    Zhang, Jing; Misra, Sanchit; Wang, Hao; Feng, Wu-Chun

    2016-11-04

    The Basic Local Alignment Search Tool (BLAST) is a fundamental program in the life sciences that searches databases for sequences that are most similar to a query sequence. Currently, the BLAST algorithm utilizes a query-indexed approach. Although many approaches suggest that sequence search with a database index can achieve much higher throughput (e.g., BLAT, SSAHA, and CAFE), they cannot deliver the same level of sensitivity as the query-indexed BLAST, i.e., NCBI BLAST, or they can only support nucleotide sequence search, e.g., MegaBLAST. Due to the different challenges and characteristics of query indexing and database indexing, the existing techniques for query-indexed search cannot be carried over to database-indexed search. muBLASTP, a novel database-indexed BLAST for protein sequence search, delivers hits identical to those returned by NCBI BLAST. On Intel Haswell multicore CPUs, for a single query, the single-threaded muBLASTP achieves up to a 4.41-fold speedup for the alignment stages, and up to a 1.75-fold end-to-end speedup over single-threaded NCBI BLAST. For a batch of queries, the multithreaded muBLASTP achieves up to a 5.7-fold speedup for the alignment stages, and up to a 4.56-fold end-to-end speedup over multithreaded NCBI BLAST. With a newly designed index structure for the protein database and associated optimizations to the BLASTP algorithm, we re-factored the BLASTP algorithm for modern multicore processors, achieving much higher throughput with an acceptable memory footprint for the database index.
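
    A toy sketch of what database indexing means in this context: build a k-mer index over the database sequences once, then look up query k-mers to obtain candidate seed hits. Neighbourhood words, two-hit seeding and gapped extension used by real BLAST-style tools are deliberately omitted.

```python
# Toy illustration of database indexing (as opposed to query indexing): build
# a k-mer index over the database sequences once, then look up each query
# k-mer to get candidate seed hits. Real BLAST-style tools add neighbourhood
# words, two-hit seeding, and gapped extension, none of which is shown here.
from collections import defaultdict

K = 3
database = {
    "seqA": "MKVLITGAGSGLG",
    "seqB": "MSTLGAGSGAKKV",
}

# Build the database index once: k-mer -> list of (sequence id, offset).
index = defaultdict(list)
for seq_id, seq in database.items():
    for i in range(len(seq) - K + 1):
        index[seq[i:i + K]].append((seq_id, i))

def seed_hits(query):
    """Return exact k-mer seed matches of the query against the indexed database."""
    hits = []
    for i in range(len(query) - K + 1):
        for seq_id, j in index.get(query[i:i + K], []):
            hits.append((query[i:i + K], i, seq_id, j))
    return hits

for hit in seed_hits("TGAGSGL"):
    print(hit)
```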

  6. Database search for safety information on cosmetic ingredients.

    Science.gov (United States)

    Pauwels, Marleen; Rogiers, Vera

    2007-12-01

    Ethical considerations with respect to experimental animal use and regulatory testing are under heavy discussion worldwide and are, in certain cases, taken up in legislative measures. The most explicit example is the European cosmetic legislation, establishing a testing ban on finished cosmetic products since 11 September 2004 and enforcing that the safety of a cosmetic product is assessed by taking into consideration "the general toxicological profile of the ingredients, their chemical structure and their level of exposure" (OJ L151, 32-37, 23 June 1993; OJ L066, 26-35, 11 March 2003). Therefore, the availability of referenced and reliable information on cosmetic ingredients becomes a dire necessity. Given the high-speed progress of World Wide Web services and the concurrent drastic increase in free access to information, identification of relevant data sources and evaluation of the scientific value and quality of the retrieved data are crucial. Based upon our own practical experience, a survey is put together of freely and commercially available data sources, with their individual descriptions, fields of application, benefits and drawbacks. It should be mentioned that the search strategies described are equally useful as a starting point for any quest for safety data on chemicals or chemical-related substances in general.

  7. search.bioPreprint: a discovery tool for cutting edge, preprint biomedical research articles [version 2; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Carrie L. Iwema

    2016-07-01

    Full Text Available The time it takes for a completed manuscript to be published traditionally can be extremely lengthy. Article publication delay, which occurs in part due to constraints associated with peer review, can prevent the timely dissemination of critical and actionable data associated with new information on rare diseases or developing health concerns such as Zika virus. Preprint servers are open access online repositories housing preprint research articles that enable authors (1) to make their research immediately and freely available and (2) to receive commentary and peer review prior to journal submission. There is a growing movement of preprint advocates aiming to change the current journal publication and peer review system, proposing that preprints catalyze biomedical discovery, support career advancement, and improve scientific communication. While the number of articles submitted to and hosted by preprint servers is gradually increasing, there has been no simple way to identify biomedical research published in a preprint format, as preprints are not typically indexed and are only discoverable by directly searching the specific preprint server websites. To address this issue, we created a search engine that quickly compiles preprints from disparate host repositories and provides a one-stop search solution. Additionally, we developed a web application that bolsters the discovery of preprints by enabling each and every word or phrase appearing on any web site to be integrated with articles from preprint servers. This tool, search.bioPreprint, is publicly available at http://www.hsls.pitt.edu/resources/preprint.

  8. A searching and reporting system for relational databases using a graph-based metadata representation.

    Science.gov (United States)

    Hewitt, Robin; Gobbi, Alberto; Lee, Man-Ling

    2005-01-01

    Relational databases are the current standard for storing and retrieving data in the pharmaceutical and biotech industries. However, retrieving data from a relational database requires specialized knowledge of the database schema and of the SQL query language. At Anadys, we have developed an easy-to-use system for searching and reporting data in a relational database to support our drug discovery project teams. This system is fast and flexible and allows users to access all data without having to write SQL queries. This paper presents the hierarchical, graph-based metadata representation and SQL-construction methods that, together, are the basis of this system's capabilities.

  9. MIDAS: a database-searching algorithm for metabolite identification in metabolomics.

    Science.gov (United States)

    Wang, Yingfeng; Kora, Guruprasad; Bowen, Benjamin P; Pan, Chongle

    2014-10-07

    A database searching approach can be used for metabolite identification in metabolomics by matching measured tandem mass spectra (MS/MS) against the predicted fragments of metabolites in a database. Here, we present the open-source MIDAS algorithm (Metabolite Identification via Database Searching). To evaluate a metabolite-spectrum match (MSM), MIDAS first enumerates possible fragments from a metabolite by systematic bond dissociation, then calculates the plausibility of the fragments based on their fragmentation pathways, and finally scores the MSM to assess how well the experimental MS/MS spectrum from collision-induced dissociation (CID) is explained by the metabolite's predicted CID MS/MS spectrum. MIDAS was designed to search high-resolution tandem mass spectra acquired on time-of-flight or Orbitrap mass spectrometer against a metabolite database in an automated and high-throughput manner. The accuracy of metabolite identification by MIDAS was benchmarked using four sets of standard tandem mass spectra from MassBank. On average, for 77% of original spectra and 84% of composite spectra, MIDAS correctly ranked the true compounds as the first MSMs out of all MetaCyc metabolites as decoys. MIDAS correctly identified 46% more original spectra and 59% more composite spectra at the first MSMs than an existing database-searching algorithm, MetFrag. MIDAS was showcased by searching a published real-world measurement of a metabolome from Synechococcus sp. PCC 7002 against the MetaCyc metabolite database. MIDAS identified many metabolites missed in the previous study. MIDAS identifications should be considered only as candidate metabolites, which need to be confirmed using standard compounds. To facilitate manual validation, MIDAS provides annotated spectra for MSMs and labels observed mass spectral peaks with predicted fragments. The database searching and manual validation can be performed online at http://midas.omicsbio.org.
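
    MIDAS's bond-dissociation fragment enumeration and fragmentation-pathway weighting are not reproduced here; the sketch below shows only the scoring flavor of such a search, matching observed MS/MS peaks to a candidate's predicted fragment m/z values within a tolerance and scoring by the fraction of observed intensity explained, using toy numbers.

```python
# Stripped-down sketch of the scoring step in database-searching metabolite
# identification: match observed MS/MS peaks to a candidate's predicted
# fragment m/z values within a tolerance and score by the fraction of observed
# intensity explained. MIDAS's fragment enumeration by bond dissociation and
# its fragmentation-pathway weighting are not reproduced; the numbers are toy.
TOL = 0.01  # m/z tolerance in Da

observed_peaks = [(85.028, 120.0), (103.039, 300.0), (121.050, 80.0), (59.013, 40.0)]

candidates = {
    "metabolite_X": [103.039, 85.028, 59.013],   # predicted fragment m/z values
    "metabolite_Y": [121.049, 77.038],
}

def score(predicted, peaks, tol=TOL):
    matched = sum(intensity for mz, intensity in peaks
                  if any(abs(mz - p) <= tol for p in predicted))
    total = sum(intensity for _, intensity in peaks)
    return matched / total

ranked = sorted(candidates, key=lambda c: score(candidates[c], observed_peaks),
                reverse=True)
for name in ranked:
    print(name, round(score(candidates[name], observed_peaks), 3))
```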

  10. Global search tool for the Advanced Photon Source Integrated Relational Model of Installed Systems (IRMIS) database

    International Nuclear Information System (INIS)

    Quock, D.E.R.; Cianciarulo, M.B.

    2007-01-01

    The Integrated Relational Model of Installed Systems (IRMIS) is a relational database tool that has been implemented at the Advanced Photon Source to maintain an updated account of approximately 600 control system software applications, 400,000 process variables, and 30,000 control system hardware components. To effectively display this large amount of control system information to operators and engineers, IRMIS was initially built with nine Web-based viewers: Applications Organizing Index, IOC, PLC, Component Type, Installed Components, Network, Controls Spares, Process Variables, and Cables. However, since each viewer is designed to provide details from only one major category of the control system, the necessity for a one-stop global search tool for the entire database became apparent. The user requirements for extremely fast database search time and ease of navigation through search results led to the choice of Asynchronous JavaScript and XML (AJAX) technology in the implementation of the IRMIS global search tool. Unique features of the global search tool include a two-tier level of displayed search results, and a database data integrity validation and reporting mechanism.

  11. Finding and sharing : new approaches to registries of databases and services for the biomedical sciences

    NARCIS (Netherlands)

    Smedley, Damian; Schofield, Paul; Chen, Chao-Kung; Aidinis, Vassilis; Ainali, Chrysanthi; Bard, Jonathan; Balling, Rudi; Birney, Ewan; Blake, Andrew; Bongcam-Rudloff, Erik; Brookes, Anthony J.; Cesareni, Gianni; Chandras, Christina; Eppig, Janan; Flicek, Paul; Gkoutos, Georgios; Greenaway, Simon; Gruenberger, Michael; Heriche, Jean-Karim; Lyall, Andrew; Mallon, Ann-Marie; Muddyman, Dawn; Reisinger, Florian; Ringwald, Martin; Rosenthal, Nadia; Schughart, Klaus; Swertz, Morris; Thorisson, Gudmundur A.; Zouberakis, Michael; Hancock, John M.

    2010-01-01

    The recent explosion of biological data and the concomitant proliferation of distributed databases make it challenging for biologists and bioinformaticians to discover the best data resources for their needs, and the most efficient way to access and use them. Despite a rapid acceleration in uptake

  12. The LAILAPS search engine: a feature model for relevance ranking in life science databases.

    Science.gov (United States)

    Lange, Matthias; Spies, Karl; Colmsee, Christian; Flemming, Steffen; Klapperstück, Matthias; Scholz, Uwe

    2010-03-25

    Efficient and effective information retrieval in the life sciences is one of the most pressing challenges in bioinformatics. The incredible growth of life science databases into a vast network of interconnected information systems is both a major challenge and a great opportunity for life science research. The knowledge found on the Web, and in particular in life-science databases, is a valuable resource, and well-performing search engines are essential for bringing it to the scientist's desktop. Here, neither the response time nor the number of results is the decisive factor; for millions of query results, the most crucial factor is the relevance ranking. In this paper, we present a feature model for relevance ranking in life science databases and its implementation in the LAILAPS search engine. Motivated by observations of user behavior during the inspection of search engine results, we condensed a set of nine relevance-discriminating features. These features are intuitively used by scientists who briefly screen database entries for potential relevance. The features are both sufficient to estimate potential relevance and efficiently quantifiable. Deriving a relevance prediction function that computes the relevance from these features constitutes a regression problem. To solve this problem, we used artificial neural networks trained with a reference set of relevant database entries for 19 protein queries. Supporting a flexible text index and a simple data import format, these concepts are implemented in the LAILAPS search engine. It can easily be used both as a search engine for comprehensive integrated life science databases and for small in-house project databases. LAILAPS is publicly available for SWISSPROT data at http://lailaps.ipk-gatersleben.de.
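
    A minimal sketch of the general idea, feature-based relevance regression with a small neural network, is shown below; the four features and the training judgements are hypothetical placeholders rather than the actual LAILAPS feature set.

        # Learn a relevance score from per-entry features and rank candidates.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Rows: hypothetical features of database entries for one query,
        # e.g. [term frequency, match position, annotation quality, recency].
        X_train = np.array([[0.9, 0.1, 0.8, 0.7],
                            [0.2, 0.9, 0.3, 0.1],
                            [0.7, 0.3, 0.9, 0.5],
                            [0.1, 0.8, 0.2, 0.2]])
        y_train = np.array([1.0, 0.1, 0.8, 0.2])  # reference relevance judgements

        model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
        model.fit(X_train, y_train)

        candidates = np.array([[0.8, 0.2, 0.7, 0.6],
                               [0.3, 0.7, 0.4, 0.2]])
        ranking = np.argsort(-model.predict(candidates))  # best entry first
        print(ranking)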

  13. Metagenomic Taxonomy-Guided Database-Searching Strategy for Improving Metaproteomic Analysis.

    Science.gov (United States)

    Xiao, Jinqiu; Tanca, Alessandro; Jia, Ben; Yang, Runqing; Wang, Bo; Zhang, Yu; Li, Jing

    2018-04-06

    Metaproteomics provides a direct measure of the functional information by investigating all proteins expressed by a microbiota. However, due to the complexity and heterogeneity of microbial communities, it is very hard to construct a sequence database suitable for a metaproteomic study. Using a public database, researchers might not be able to identify proteins from poorly characterized microbial species, while a sequencing-based metagenomic database may not provide adequate coverage for all potentially expressed protein sequences. To address this challenge, we propose a metagenomic taxonomy-guided database-search strategy (MT), in which a merged database is employed, consisting of both taxonomy-guided reference protein sequences from public databases and proteins from metagenome assembly. By applying our MT strategy to a mock microbial mixture, about two times as many peptides were detected as with the metagenomic database only. According to the evaluation of the reliability of taxonomic attribution, the rate of misassignments was comparable to that obtained using an a priori matched database. We also evaluated the MT strategy with a human gut microbial sample, and we found 1.7 times as many peptides as using a standard metagenomic database. In conclusion, our MT strategy allows the construction of databases able to provide high sensitivity and precision in peptide identification in metaproteomic studies, enabling the detection of proteins from poorly characterized species within the microbiota.
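
    The central step of such a strategy, building one merged search database, can be pictured with the short sketch below, which concatenates a taxonomy-guided reference FASTA file with a metagenome-assembly FASTA file while skipping exact duplicate sequences; all file names are placeholders.

        # Sketch: merge taxonomy-guided reference proteins with metagenome-assembly
        # proteins into one search database, skipping exact duplicate sequences.
        def read_fasta(path):
            header, seq = None, []
            with open(path) as fh:
                for line in fh:
                    line = line.rstrip()
                    if line.startswith(">"):
                        if header is not None:
                            yield header, "".join(seq)
                        header, seq = line, []
                    elif line:
                        seq.append(line)
                if header is not None:
                    yield header, "".join(seq)

        seen = set()
        with open("merged_db.fasta", "w") as out:  # placeholder file names
            for path in ("reference_taxa.fasta", "metagenome_assembly.fasta"):
                for header, seq in read_fasta(path):
                    if seq not in seen:
                        seen.add(seq)
                        out.write(f"{header}\n{seq}\n")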

  14. A Bayesian network approach to the database search problem in criminal proceedings

    Science.gov (United States)

    2012-01-01

    Background The ‘database search problem’, that is, the strengthening of a case - in terms of probative value - against an individual who is found as a result of a database search, has been approached during the last two decades with substantial mathematical analyses, accompanied by lively debate and centrally opposing conclusions. This represents a challenging obstacle in teaching but also hinders a balanced and coherent discussion of the topic within the wider scientific and legal community. This paper revisits and tracks the associated mathematical analyses in terms of Bayesian networks. Their derivation and discussion for capturing probabilistic arguments that explain the database search problem are outlined in detail. The resulting Bayesian networks offer a distinct view on the main debated issues, along with further clarity. Methods As a general framework for representing and analyzing formal arguments in probabilistic reasoning about uncertain target propositions (that is, whether or not a given individual is the source of a crime stain), this paper relies on graphical probability models, in particular, Bayesian networks. This graphical probability modeling approach is used to capture, within a single model, a series of key variables, such as the number of individuals in a database, the size of the population of potential crime stain sources, and the rarity of the corresponding analytical characteristics in a relevant population. Results This paper demonstrates the feasibility of deriving Bayesian network structures for analyzing, representing, and tracking the database search problem. The output of the proposed models can be shown to agree with existing but exclusively formulaic approaches. Conclusions The proposed Bayesian networks allow one to capture and analyze the currently most well-supported but reputedly counter-intuitive and difficult solution to the database search problem in a way that goes beyond the traditional, purely formulaic expressions

  15. Search extension transforms Wiki into a relational system: a case for flavonoid metabolite database.

    Science.gov (United States)

    Arita, Masanori; Suwa, Kazuhiro

    2008-09-17

    In computer science, database systems are based on the relational model founded by Edgar Codd in 1970. In biology, on the other hand, the word 'database' often refers to loosely formatted, very large text files. Although such bio-databases may describe conflicts or ambiguities in a positive sense (e.g. a protein pair reported both to interact and not to interact, or unknown parameters), the flexibility of the data format sacrifices a systematic query mechanism equivalent to the widely used SQL. To overcome this disadvantage, we propose embeddable string-search commands on a Wiki-based system and design a half-formatted database. As proof of principle, a database of flavonoids with 6902 molecular structures from over 1687 plant species was implemented on MediaWiki, the system underlying Wikipedia. Registered users can describe any information in an arbitrary format; the structured part is subject to text-string searches to realize relational operations. The system was written in PHP as an extension of MediaWiki. All modifications are open source and publicly available. This scheme benefits from both the free-format Wiki style and the concise, structured relational-database style. MediaWiki supports multi-user environments for document management, and the cost of database maintenance is alleviated.

  16. Preliminary comparison of the Essie and PubMed search engines for answering clinical questions using MD on Tap, a PDA-based program for accessing biomedical literature.

    Science.gov (United States)

    Sutton, Victoria R; Hauser, Susan E

    2005-01-01

    MD on Tap, a PDA application that searches and retrieves biomedical literature, is specifically designed for use by mobile healthcare professionals. With the goal of improving the usability of the application, a preliminary comparison was made of two search engines (PubMed and Essie) to determine which provided the most efficient path to the desired clinically relevant information.

  17. Parallel database search and prime factorization with magnonic holographic memory devices

    Energy Technology Data Exchange (ETDEWEB)

    Khitun, Alexander [Electrical and Computer Engineering Department, University of California - Riverside, Riverside, California 92521 (United States)

    2015-12-28

    In this work, we describe the capabilities of Magnonic Holographic Memory (MHM) for parallel database search and prime factorization. MHM is a type of holographic device that utilizes spin waves for data transfer and processing. Its operation is based on the correlation between the phases and the amplitudes of the input spin waves and the output inductive voltage. The input of MHM is provided by a phased array of spin wave generating elements, allowing the production of phase patterns of arbitrary form. This makes it possible to code logic states into the phases of propagating waves and exploit wave superposition for parallel data processing. We present the results of numerical modeling illustrating parallel database search and prime factorization. The results of numerical simulations on the database search are in agreement with the available experimental data. The use of classical wave interference may result in a significant speedup over conventional digital logic circuits in special-task data processing (e.g., √n in database search). Potentially, magnonic holographic devices can be implemented as complementary logic units to digital processors. Physical limitations and technological constraints of the spin wave approach are also discussed.

  18. Parallel database search and prime factorization with magnonic holographic memory devices

    Science.gov (United States)

    Khitun, Alexander

    2015-12-01

    In this work, we describe the capabilities of Magnonic Holographic Memory (MHM) for parallel database search and prime factorization. MHM is a type of holographic device that utilizes spin waves for data transfer and processing. Its operation is based on the correlation between the phases and the amplitudes of the input spin waves and the output inductive voltage. The input of MHM is provided by a phased array of spin wave generating elements, allowing the production of phase patterns of arbitrary form. This makes it possible to code logic states into the phases of propagating waves and exploit wave superposition for parallel data processing. We present the results of numerical modeling illustrating parallel database search and prime factorization. The results of numerical simulations on the database search are in agreement with the available experimental data. The use of classical wave interference may result in a significant speedup over conventional digital logic circuits in special-task data processing (e.g., √n in database search). Potentially, magnonic holographic devices can be implemented as complementary logic units to digital processors. Physical limitations and technological constraints of the spin wave approach are also discussed.

  19. Parallel database search and prime factorization with magnonic holographic memory devices

    International Nuclear Information System (INIS)

    Khitun, Alexander

    2015-01-01

    In this work, we describe the capabilities of Magnonic Holographic Memory (MHM) for parallel database search and prime factorization. MHM is a type of holographic device that utilizes spin waves for data transfer and processing. Its operation is based on the correlation between the phases and the amplitudes of the input spin waves and the output inductive voltage. The input of MHM is provided by a phased array of spin wave generating elements, allowing the production of phase patterns of arbitrary form. This makes it possible to code logic states into the phases of propagating waves and exploit wave superposition for parallel data processing. We present the results of numerical modeling illustrating parallel database search and prime factorization. The results of numerical simulations on the database search are in agreement with the available experimental data. The use of classical wave interference may result in a significant speedup over conventional digital logic circuits in special-task data processing (e.g., √n in database search). Potentially, magnonic holographic devices can be implemented as complementary logic units to digital processors. Physical limitations and technological constraints of the spin wave approach are also discussed.

  20. Toward a public analysis database for LHC new physics searches using MadAnalysis 5

    Science.gov (United States)

    Dumont, B.; Fuks, B.; Kraml, S.; Bein, S.; Chalons, G.; Conte, E.; Kulkarni, S.; Sengupta, D.; Wymant, C.

    2015-02-01

    We present the implementation, in the MadAnalysis 5 framework, of several ATLAS and CMS searches for supersymmetry in data recorded during the first run of the LHC. We provide extensive details on the validation of our implementations and propose to create a public analysis database within this framework.

  1. A Web-based Tool for SDSS and 2MASS Database Searches

    Science.gov (United States)

    Hendrickson, M. A.; Uomoto, A.; Golimowski, D. A.

    We have developed a web site using HTML, PHP, Python, and MySQL that extracts, processes, and displays data from the Sloan Digital Sky Survey (SDSS) and the Two-Micron All-Sky Survey (2MASS). The goal is to locate brown dwarf candidates in the SDSS database by looking at color cuts; however, this site could also be useful for targeted searches of other databases. MySQL databases are created from broad searches of SDSS and 2MASS data. Broad queries on the SDSS and 2MASS database servers are run weekly so that observers have the most up-to-date information from which to select candidates for observation. Observers can look at detailed information about specific objects, including finding charts, images, and available spectra. In addition, updates from previous observations can be added by any collaborator; this format makes observational collaboration simple. Observers can also restrict the database search, just before or during an observing run, to select objects of special interest.
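
    A sketch of the kind of colour-cut selection such a site performs is given below, run against a hypothetical local SQLite mirror of cross-matched photometry; the table name, column names, and thresholds are illustrative only, not the survey's actual selection criteria.

        # Select candidate objects by colour cuts from a local photometry mirror.
        import sqlite3

        conn = sqlite3.connect("sdss_2mass_mirror.db")  # placeholder local mirror
        cur = conn.execute(
            "SELECT objid, ra, dec, i_mag - z_mag AS iz, j_mag - k_mag AS jk "
            "FROM photometry WHERE i_mag - z_mag > ? AND j_mag - k_mag > ? "
            "ORDER BY iz DESC LIMIT 50",
            (1.5, 1.0),  # illustrative colour thresholds
        )
        for objid, ra, dec, iz, jk in cur:
            print(objid, ra, dec, round(iz, 2), round(jk, 2))
        conn.close()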

  2. MSblender: A probabilistic approach for integrating peptide identifications from multiple database search engines.

    Science.gov (United States)

    Kwon, Taejoon; Choi, Hyungwon; Vogel, Christine; Nesvizhskii, Alexey I; Marcotte, Edward M

    2011-07-01

    Shotgun proteomics using mass spectrometry is a powerful method for protein identification but suffers limited sensitivity in complex samples. Integrating peptide identifications from multiple database search engines is a promising strategy to increase the number of peptide identifications and reduce the volume of unassigned tandem mass spectra. Existing methods pool statistical significance scores such as p-values or posterior probabilities of peptide-spectrum matches (PSMs) from multiple search engines after high scoring peptides have been assigned to spectra, but these methods lack reliable control of identification error rates as data are integrated from different search engines. We developed a statistically coherent method for integrative analysis, termed MSblender. MSblender converts raw search scores from search engines into a probability score for every possible PSM and properly accounts for the correlation between search scores. The method reliably estimates false discovery rates and identifies more PSMs than any single search engine at the same false discovery rate. Increased identifications increment spectral counts for most proteins and allow quantification of proteins that would not have been quantified by individual search engines. We also demonstrate that enhanced quantification contributes to improve sensitivity in differential expression analyses.
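
    For context, the sketch below shows the generic target-decoy procedure commonly used to estimate false discovery rates for a combined, score-sorted PSM list; it is a baseline illustration with made-up scores, not MSblender's probabilistic model.

        # Generic target-decoy FDR estimation for a combined PSM score list.
        def fdr_threshold(psms, max_fdr=0.01):
            """psms: list of (score, is_decoy). Return the lowest score kept at max_fdr."""
            psms = sorted(psms, key=lambda p: p[0], reverse=True)
            decoys = targets = 0
            best = None
            for score, is_decoy in psms:
                if is_decoy:
                    decoys += 1
                else:
                    targets += 1
                fdr = decoys / max(targets, 1)   # estimated FDR at this cutoff
                if fdr <= max_fdr:
                    best = score
            return best

        example = [(9.1, False), (8.7, False), (8.2, True), (7.9, False), (7.1, True)]
        print(fdr_threshold(example, max_fdr=0.5))   # -> 7.9 with these toy scores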

  3. IMPROVED SEARCH OF PRINCIPAL COMPONENT ANALYSIS DATABASES FOR SPECTRO-POLARIMETRIC INVERSION

    International Nuclear Information System (INIS)

    Casini, R.; Lites, B. W.; Ramos, A. Asensio; Ariste, A. López

    2013-01-01

    We describe a simple technique for the acceleration of spectro-polarimetric inversions based on principal component analysis (PCA) of Stokes profiles. This technique involves the indexing of the database models based on the sign of the projections (PCA coefficients) of the first few relevant orders of principal components of the four Stokes parameters. In this way, each model in the database can be attributed a distinctive binary number of 2^(4n) bits, where n is the number of PCA orders used for the indexing. Each of these binary numbers (indices) identifies a group of "compatible" models for the inversion of a given set of observed Stokes profiles sharing the same index. The complete set of the binary numbers so constructed evidently determines a partition of the database. The search of the database for the PCA inversion of spectro-polarimetric data can profit greatly from this indexing. In practical cases it becomes possible to approach the ideal acceleration factor of 2^(4n) as compared to the systematic search of a non-indexed database for a traditional PCA inversion. This indexing method relies on the existence of a physical meaning in the sign of the PCA coefficients of a model. For this reason, the presence of model ambiguities and of spectro-polarimetric noise in the observations limits in practice the number n of relevant PCA orders that can be used for the indexing.
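
    The sign-based indexing idea can be sketched as follows: pack the signs of the first n PCA coefficients of the four Stokes parameters into a 4n-bit integer and bucket the model database by that index, so a query profile is only compared against models sharing its index (random data stands in for a real model database).

        # Index database models by the signs of their first n PCA coefficients
        # for each of the four Stokes parameters (4n sign bits per model).
        import numpy as np

        def pca_sign_index(coeffs, n=2):
            """coeffs: array of shape (4, >=n) with PCA projections of I, Q, U, V.
            Returns an integer index built from the signs of the first n orders."""
            bits = (coeffs[:, :n] >= 0).astype(int).ravel()   # 4n sign bits
            return int("".join(map(str, bits)), 2)

        rng = np.random.default_rng(0)
        database = rng.standard_normal((1000, 4, 5))          # hypothetical models
        buckets = {}
        for m, c in enumerate(database):
            buckets.setdefault(pca_sign_index(c), []).append(m)

        query = rng.standard_normal((4, 5))
        candidates = buckets.get(pca_sign_index(query), [])
        print(len(candidates), "models share the query's index out of", len(database))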

  4. Tandem Mass Spectrum Sequencing: An Alternative to Database Search Engines in Shotgun Proteomics.

    Science.gov (United States)

    Muth, Thilo; Rapp, Erdmann; Berven, Frode S; Barsnes, Harald; Vaudel, Marc

    2016-01-01

    Protein identification via database searches has become the gold standard in mass spectrometry based shotgun proteomics. However, as the quality of tandem mass spectra improves, direct mass spectrum sequencing gains interest as a database-independent alternative. In this chapter, the general principle of this so-called de novo sequencing is introduced along with pitfalls and challenges of the technique. The main tools available are presented with a focus on user friendly open source software which can be directly applied in everyday proteomic workflows.

  5. Efficient Similarity Search Using the Earth Mover's Distance for Large Multimedia Databases

    DEFF Research Database (Denmark)

    Assent, Ira; Wichterich, Marc; Meisen, Tobias

    2008-01-01

    Multimedia similarity search in large databases requires efficient query processing. The Earth mover's distance, introduced in computer vision, is successfully used as a similarity model in a number of small-scale applications. Its computational complexity hindered its adoption in large multimedia databases. We enable directly indexing the Earth mover's distance in structures such as the R-tree and the VA-file by providing the accurate 'MinDist' function to any bounding rectangle in the index. We exploit the computational structure of the new MinDist to derive a new lower bound for the EMD Min...

  6. Quantum Partial Searching Algorithm of a Database with Several Target Items

    International Nuclear Information System (INIS)

    Pu-Cha, Zhong; Wan-Su, Bao; Yun, Wei

    2009-01-01

    Choi and Korepin [Quantum Information Processing 6 (2007) 243] presented a quantum partial search algorithm for a database with several target items, which can find a target block quickly when each target block contains the same number of target items. In general, however, the number of target items in each target block is arbitrary. For this case, we give a condition that guarantees the partial search algorithm can be performed and that the number of queries to the oracle is minimized. In addition, further numerical computation leads to the conclusion that the more uniform the distribution of target items, the smaller the number of queries.

  7. Indexing Bibliographic Database Content Using MariaDB and Sphinx Search Server

    Directory of Open Access Journals (Sweden)

    Arie Nugraha

    2014-07-01

    Full Text Available Fast retrieval of digital content has become mandatory for library and archive information systems. Many software applications have emerged to handle the indexing of digital content, from low-level ones such as Apache Lucene, to more RESTful and web-services-ready ones such as Apache Solr and ElasticSearch. Solr's popularity among library software developers makes it the "de facto" standard software for indexing digital content. For content (full-text content or bibliographic descriptions) already stored inside a relational DBMS such as MariaDB (a fork of MySQL) or PostgreSQL, Sphinx Search Server (Sphinx) is a suitable alternative. This article covers an introduction on how to use Sphinx with MariaDB databases to index database content as well as some examples of Sphinx API usage.
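
    As a usage sketch, once a Sphinx index over the MariaDB bibliographic table has been configured, it can be queried over Sphinx's MySQL-protocol (SphinxQL) listener, for example with pymysql; the index name 'biblio' and the default port 9306 are assumptions about the local setup.

        # Query a configured Sphinx index via SphinxQL (MySQL wire protocol).
        import pymysql

        conn = pymysql.connect(host="127.0.0.1", port=9306, user="", password="")
        try:
            with conn.cursor() as cur:
                cur.execute(
                    "SELECT id, WEIGHT() AS relevance FROM biblio "
                    "WHERE MATCH(%s) LIMIT 10",
                    ("open access repositories",),
                )
                for record_id, relevance in cur.fetchall():
                    print(record_id, relevance)
        finally:
            conn.close()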

  8. Comparing the Precision of Information Retrieval of MeSH-Controlled Vocabulary Search Method and a Visual Method in the Medline Medical Database.

    Science.gov (United States)

    Hariri, Nadjla; Ravandi, Somayyeh Nadi

    2014-01-01

    Medline is one of the most important databases in the biomedical field. One of the most important hosts for Medline is Elton B. Stephens Co. (EBSCO), which offers different search methods that can be used based on the needs of the users. The visual search and MeSH-controlled search methods are among the most common. The goal of this research was to compare the precision of the sources retrieved from the EBSCO Medline database using the MeSH-controlled and visual search methods. This research was a semi-empirical study. By holding training workshops, 70 higher-education students in different educational departments of Kashan University of Medical Sciences were taught the MeSH-controlled and visual search methods in 2012. Then, the precision of 300 searches made by these students was calculated based on the Best Precision, Useful Precision, and Objective Precision formulas and analyzed in SPSS software using the independent-sample t-test; the three precisions obtained with the three formulas were studied for the two search methods. The mean precision of the visual method was greater than that of the MeSH-controlled search for all three types of precision, i.e. Best Precision, Useful Precision, and Objective Precision, and their mean precisions were significantly different. Fifty-three percent of the participants in the research also mentioned that combining the two methods produced better results. For users, it is more appropriate to use a natural-language-based method, such as the visual method, in the EBSCO Medline host than to use the controlled method, which requires users to use special keywords. The potential reason for their preference was that the visual method allowed them more freedom of action.

  9. Identification of Alternative Splice Variants Using Unique Tryptic Peptide Sequences for Database Searches.

    Science.gov (United States)

    Tran, Trung T; Bollineni, Ravi C; Strozynski, Margarita; Koehler, Christian J; Thiede, Bernd

    2017-07-07

    Alternative splicing is a mechanism in eukaryotes by which different forms of mRNA are generated from the same gene. Identification of alternative splice variants requires the identification of peptides specific for alternative splice forms. For this purpose, we generated a human database that contains only unique tryptic peptides specific for alternative splice forms from Swiss-Prot entries. Using this database allows easy access to splice-variant-specific peptide sequences that match MS data. Furthermore, we combined this database, without alternative splice variant-1-specific peptides, with human Swiss-Prot. This combined database can be used as a general database for searching LC-MS data. LC-MS data derived from in-solution digests of two different cell lines (LNCaP, HeLa) and from phosphoproteomics studies were analyzed using these two databases. Several non-alternative-splice-variant-1-specific peptides were found in both cell lines, and some of them seemed to be cell-line-specific. Control and apoptotic phosphoproteomes from Jurkat T cells revealed several non-alternative-splice-variant-1-specific peptides, and some of them showed clear quantitative differences between the two states.

  10. Accelerating Smith-Waterman Algorithm for Biological Database Search on CUDA-Compatible GPUs

    Science.gov (United States)

    Munekawa, Yuma; Ino, Fumihiko; Hagihara, Kenichi

    This paper presents a fast method capable of accelerating the Smith-Waterman algorithm for biological database search on a cluster of graphics processing units (GPUs). Our method is implemented using compute unified device architecture (CUDA), which is available on the nVIDIA GPU. As compared with previous methods, our method has four major contributions. (1) The method efficiently uses on-chip shared memory to reduce the data amount being transferred between off-chip video memory and processing elements in the GPU. (2) It also reduces the number of data fetches by applying a data reuse technique to query and database sequences. (3) A pipelined method is also implemented to overlap GPU execution with database access. (4) Finally, a master/worker paradigm is employed to accelerate hundreds of database searches on a cluster system. In experiments, the peak performance on a GeForce GTX 280 card reaches 8.32 giga cell updates per second (GCUPS). We also find that our method reduces the amount of data fetches to 1/140, achieving approximately three times higher performance than a previous CUDA-based method. Our 32-node cluster version is approximately 28 times faster than a single GPU version. Furthermore, the effective performance reaches 75.6 giga instructions per second (GIPS) using 32 GeForce 8800 GTX cards.
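
    For reference, the cell update that such GPU implementations accelerate (and that GCUPS figures count) is the Smith-Waterman recurrence; the sketch below uses a simple match/mismatch score and a linear gap penalty instead of the substitution matrices and affine gaps used in practice.

        # Basic Smith-Waterman local alignment score (didactic sketch).
        def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
            rows, cols = len(a) + 1, len(b) + 1
            H = [[0] * cols for _ in range(rows)]
            best = 0
            for i in range(1, rows):
                for j in range(1, cols):
                    diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                    H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
                    best = max(best, H[i][j])
            return best  # optimal local alignment score

        print(smith_waterman("HEAGAWGHEE", "PAWHEAE"))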

  11. What is lost when searching only one literature database for articles relevant to injury prevention and safety promotion?

    Science.gov (United States)

    Lawrence, D W

    2008-12-01

    To assess what is lost if only one literature database is searched for articles relevant to injury prevention and safety promotion (IPSP) topics. Serial textword (keyword, free-text) searches using multiple synonym terms for five key IPSP topics (bicycle-related brain injuries, ethanol-impaired driving, house fires, road rage, and suicidal behaviors among adolescents) were conducted in four of the bibliographic databases that are most used by IPSP professionals: EMBASE, MEDLINE, PsycINFO, and Web of Science. Through a systematic procedure, an inventory of articles on each topic in each database was conducted to identify the total unduplicated count of all articles on each topic, the number of articles unique to each database, and the articles available if only one database is searched. No single database included all of the relevant articles on any topic, and the database with the broadest coverage differed by topic. A search of only one literature database will return 16.7-81.5% (median 43.4%) of the available articles on any of five key IPSP topics. Each database contributed unique articles to the total bibliography for each topic. A literature search performed in only one database will, on average, lead to a loss of more than half of the available literature on a topic.

  12. Colil: a database and search service for citation contexts in the life sciences domain.

    Science.gov (United States)

    Fujiwara, Toyofumi; Yamamoto, Yasunori

    2015-01-01

    To promote research activities in a particular research area, it is important to efficiently identify current research trends, advances, and issues in that area. Although review papers in the research area can generally suffice for this purpose, researchers are not always able to obtain review papers that cover the research aspects of interest to them at the time they are needed. Therefore, utilizing the citation contexts of papers in a research area has been considered as another approach. However, there are few search services for retrieving citation contexts in the life sciences domain; furthermore, efficiently obtaining citation contexts is becoming difficult due to the large volume and rapid growth of life sciences papers. Here, we introduce the Colil (Comments on Literature in Literature) database to store citation contexts in the life sciences domain. Using the Resource Description Framework (RDF) and a newly compiled vocabulary, we built the Colil database and made it available through a SPARQL endpoint. In addition, we developed a web-based search service called Colil that searches for a cited paper in the Colil database and then returns a list of citation contexts for it, along with papers relevant to it based on co-citations. The citation contexts in the Colil database were extracted from full-text papers of the PubMed Central Open Access Subset (PMC-OAS), which includes 545,147 papers indexed in PubMed. These papers are distributed across 3,171 journals and cite 5,136,741 unique papers that correspond to approximately 25% of all PubMed entries. By utilizing Colil, researchers can easily refer to a set of citation contexts and relevant papers based on co-citations for a target paper. Colil thus helps researchers comprehend life sciences papers in a research area, making their biological research more efficient.
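
    Since the data are exposed as RDF, citation contexts can in principle be retrieved with a SPARQL query along the lines of the sketch below; the endpoint URL and the property names are hypothetical placeholders, as the record does not specify the actual Colil vocabulary.

        # Fetch citation contexts for a cited paper from a SPARQL endpoint.
        from SPARQLWrapper import SPARQLWrapper, JSON

        endpoint = SPARQLWrapper("https://example.org/colil/sparql")  # placeholder URL
        endpoint.setQuery("""
            PREFIX ex: <http://example.org/colil-vocab/>   # hypothetical vocabulary
            SELECT ?context WHERE {
                ?citation ex:cites <http://example.org/paper/PMC123456> ;
                          ex:citationContext ?context .
            } LIMIT 20
        """)
        endpoint.setReturnFormat(JSON)
        for row in endpoint.query().convert()["results"]["bindings"]:
            print(row["context"]["value"])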

  13. CUDASW++: optimizing Smith-Waterman sequence database searches for CUDA-enabled graphics processing units

    Directory of Open Access Journals (Sweden)

    Maskell Douglas L

    2009-05-01

    Full Text Available Abstract Background The Smith-Waterman algorithm is one of the most widely used tools for searching biological sequence databases due to its high sensitivity. Unfortunately, the Smith-Waterman algorithm is computationally demanding, which is further compounded by the exponential growth of sequence databases. The recent emergence of many-core architectures, and their associated programming interfaces, provides an opportunity to accelerate sequence database searches using commonly available and inexpensive hardware. Findings Our CUDASW++ implementation (benchmarked on a single-GPU NVIDIA GeForce GTX 280 graphics card and a dual-GPU GeForce GTX 295 graphics card) provides a significant performance improvement compared to other publicly available implementations, such as SWPS3, CBESW, SW-CUDA, and NCBI-BLAST. CUDASW++ supports query sequences of length up to 59K and for query sequences ranging in length from 144 to 5,478 in Swiss-Prot release 56.6, the single-GPU version achieves an average performance of 9.509 GCUPS with a lowest performance of 9.039 GCUPS and a highest performance of 9.660 GCUPS, and the dual-GPU version achieves an average performance of 14.484 GCUPS with a lowest performance of 10.660 GCUPS and a highest performance of 16.087 GCUPS. Conclusion CUDASW++ is publicly available open-source software. It provides a significant performance improvement for Smith-Waterman-based protein sequence database searches by fully exploiting the compute capability of commonly used CUDA-enabled low-cost GPUs.

  14. Dialysis search filters for PubMed, Ovid MEDLINE, and Embase databases.

    Science.gov (United States)

    Iansavichus, Arthur V; Haynes, R Brian; Lee, Christopher W C; Wilczynski, Nancy L; McKibbon, Ann; Shariff, Salimah Z; Blake, Peter G; Lindsay, Robert M; Garg, Amit X

    2012-10-01

    Physicians frequently search bibliographic databases, such as MEDLINE via PubMed, for best evidence for patient care. The objective of this study was to develop and test search filters to help physicians efficiently retrieve literature related to dialysis (hemodialysis or peritoneal dialysis) from all other articles indexed in PubMed, Ovid MEDLINE, and Embase. A diagnostic test assessment framework was used to develop and test robust dialysis filters. The reference standard was a manual review of the full texts of 22,992 articles from 39 journals to determine whether each article contained dialysis information. Next, 1,623,728 unique search filters were developed, and their ability to retrieve relevant articles was evaluated. The high-performance dialysis filters consisted of up to 65 search terms in combination. These terms included the words "dialy" (truncated), "uremic," "catheters," and "renal transplant wait list." These filters reached peak sensitivities of 98.6% and specificities of 98.5%. The filters' performance remained robust in an independent validation subset of articles. These empirically derived and validated high-performance search filters should enable physicians to effectively retrieve dialysis information from PubMed, Ovid MEDLINE, and Embase.
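
    The diagnostic-test framing mentioned above amounts to computing sensitivity and specificity of a candidate filter against the manually reviewed reference set, as in the toy sketch below (the records and terms are illustrative, not the published 65-term filters).

        # Evaluate a term-based search filter against a hand-labelled reference set.
        def filter_performance(records, filter_terms):
            tp = fp = tn = fn = 0
            for text, is_relevant in records:        # is_relevant from manual review
                retrieved = any(term in text.lower() for term in filter_terms)
                if retrieved and is_relevant:
                    tp += 1
                elif retrieved and not is_relevant:
                    fp += 1
                elif not retrieved and is_relevant:
                    fn += 1
                else:
                    tn += 1
            sensitivity = tp / (tp + fn) if tp + fn else 0.0
            specificity = tn / (tn + fp) if tn + fp else 0.0
            return sensitivity, specificity

        reference = [("home hemodialysis adequacy", True),
                     ("peritoneal dialysis catheters", True),
                     ("renal transplant outcomes", False)]
        print(filter_performance(reference, ["dialy", "uremic", "catheters"]))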

  15. Protein backbone angle restraints from searching a database for chemical shift and sequence homology

    Energy Technology Data Exchange (ETDEWEB)

    Cornilescu, Gabriel; Delaglio, Frank; Bax, Ad [National Institutes of Health, Laboratory of Chemical Physics, National Institute of Diabetes and Digestive and Kidney Diseases (United States)

    1999-03-15

    Chemical shifts of backbone atoms in proteins are exquisitely sensitive to local conformation, and homologous proteins show quite similar patterns of secondary chemical shifts. The inverse of this relation is used to search a database for triplets of adjacent residues with secondary chemical shifts and sequence similarity which provide the best match to the query triplet of interest. The database contains 13Cα, 13Cβ, 13C', 1Hα and 15N chemical shifts for 20 proteins for which a high resolution X-ray structure is available. The computer program TALOS was developed to search this database for strings of residues with chemical shift and residue type homology. The relative importance of the weighting factors attached to the secondary chemical shifts of the five types of resonances relative to that of sequence similarity was optimized empirically. TALOS yields the 10 triplets which have the closest similarity in secondary chemical shift and amino acid sequence to those of the query sequence. If the central residues in these 10 triplets exhibit similar φ and ψ backbone angles, their averages can reliably be used as angular restraints for the protein whose structure is being studied. Tests carried out for proteins of known structure indicate that the root-mean-square difference (rmsd) between the output of TALOS and the X-ray derived backbone angles is about 15 deg. Approximately 3% of the predictions made by TALOS are found to be in error.

  16. Laser-assisted development of titanium alloys: the search for new biomedical materials

    Science.gov (United States)

    Almeida, Amelia; Gupta, Dheeraj; Vilar, Rui

    2011-02-01

    Ti alloys used in prosthetic applications are mostly alloys initially developed for aeronautical applications, so their behavior was not optimized for medical use. A need remains to design new alloys for biomedical applications, where requirements such as biocompatibility, in-body durability, specific manufacturing ability, and cost effectiveness are considered. Materials for this application must present excellent biocompatibility, ductility, toughness, and wear and corrosion resistance, a large laser processing window, and low sensitivity to changes in the processing parameters. Laser deposition has been investigated in order to assess its applicability to laser-based manufacturing of implants. In this study, variable-powder-feed-rate laser cladding has been used as a method for the combinatorial investigation of new alloy systems; it offers a unique possibility for the rapid and exhaustive preparation of a whole range of alloys with compositions that vary along a single clad track. This method was used to produce composition-gradient Ti-Mo alloys. Mo was chosen because it is among the few biocompatible, non-toxic β-Ti phase stabilizers. Alloy tracks with compositions in the range 0-19 wt.% Mo were produced and characterized in detail as a function of composition, using microscale testing procedures to screen for compositions with promising properties. Microstructural analysis showed that alloys with Mo content above 8% are fully formed of β-phase grains. However, these β grains present a cellular substructure that is associated with a Ti and Mo segregation pattern arising during solidification. Ultramicroindentation tests carried out to evaluate the alloys' hardness and Young's modulus showed that Ti-13%Mo alloys presented the lowest hardness and a Young's modulus (70 GPa) closer to that of bone than that of common Ti alloys, thus showing great potential for implant applications.

  17. mirPub: a database for searching microRNA publications.

    Science.gov (United States)

    Vergoulis, Thanasis; Kanellos, Ilias; Kostoulas, Nikos; Georgakilas, Georgios; Sellis, Timos; Hatzigeorgiou, Artemis; Dalamagas, Theodore

    2015-05-01

    Identifying, amongst millions of publications available in MEDLINE, those that are relevant to specific microRNAs (miRNAs) of interest based on keyword search faces major obstacles. References to miRNA names in the literature often deviate from standard nomenclature for various reasons, since even the official nomenclature evolves. For instance, a single miRNA name may identify two completely different molecules or two different names may refer to the same molecule. mirPub is a database with a powerful and intuitive interface, which facilitates searching for miRNA literature, addressing the aforementioned issues. To provide effective search services, mirPub applies text mining techniques on MEDLINE, integrates data from several curated databases and exploits data from its user community following a crowdsourcing approach. Other key features include an interactive visualization service that illustrates intuitively the evolution of miRNA data, tag clouds summarizing the relevance of publications to particular diseases, cell types or tissues and access to TarBase 6.0 data to oversee genes related to miRNA publications. mirPub is freely available at http://www.microrna.gr/mirpub/. vergoulis@imis.athena-innovation.gr or dalamag@imis.athena-innovation.gr Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.

  18. Faster Smith-Waterman database searches with inter-sequence SIMD parallelisation

    Directory of Open Access Journals (Sweden)

    Rognes Torbjørn

    2011-06-01

    Full Text Available Abstract Background The Smith-Waterman algorithm for local sequence alignment is more sensitive than heuristic methods for database searching, but also more time-consuming. The fastest approach to parallelisation with SIMD technology has previously been described by Farrar in 2007. The aim of this study was to explore whether further speed could be gained by other approaches to parallelisation. Results A faster approach and implementation is described and benchmarked. In the new tool SWIPE, residues from sixteen different database sequences are compared in parallel to one query residue. Using a 375 residue query sequence a speed of 106 billion cell updates per second (GCUPS) was achieved on a dual Intel Xeon X5650 six-core processor system, which is over six times more rapid than software based on Farrar's 'striped' approach. SWIPE was about 2.5 times faster when the programs used only a single thread. For shorter queries, the increase in speed was larger. SWIPE was about twice as fast as BLAST when using the BLOSUM50 score matrix, while BLAST was about twice as fast as SWIPE for the BLOSUM62 matrix. The software is designed for 64 bit Linux on processors with SSSE3. Source code is available from http://dna.uio.no/swipe/ under the GNU Affero General Public License. Conclusions Efficient parallelisation using SIMD on standard hardware makes it possible to run Smith-Waterman database searches more than six times faster than before. The approach described here could significantly widen the potential application of Smith-Waterman searches. Other applications that require optimal local alignment scores could also benefit from improved performance.

  19. Faster Smith-Waterman database searches with inter-sequence SIMD parallelisation.

    Science.gov (United States)

    Rognes, Torbjørn

    2011-06-01

    The Smith-Waterman algorithm for local sequence alignment is more sensitive than heuristic methods for database searching, but also more time-consuming. The fastest approach to parallelisation with SIMD technology has previously been described by Farrar in 2007. The aim of this study was to explore whether further speed could be gained by other approaches to parallelisation. A faster approach and implementation is described and benchmarked. In the new tool SWIPE, residues from sixteen different database sequences are compared in parallel to one query residue. Using a 375 residue query sequence a speed of 106 billion cell updates per second (GCUPS) was achieved on a dual Intel Xeon X5650 six-core processor system, which is over six times more rapid than software based on Farrar's 'striped' approach. SWIPE was about 2.5 times faster when the programs used only a single thread. For shorter queries, the increase in speed was larger. SWIPE was about twice as fast as BLAST when using the BLOSUM50 score matrix, while BLAST was about twice as fast as SWIPE for the BLOSUM62 matrix. The software is designed for 64 bit Linux on processors with SSSE3. Source code is available from http://dna.uio.no/swipe/ under the GNU Affero General Public License. Efficient parallelisation using SIMD on standard hardware makes it possible to run Smith-Waterman database searches more than six times faster than before. The approach described here could significantly widen the potential application of Smith-Waterman searches. Other applications that require optimal local alignment scores could also benefit from improved performance.

  20. Database with web interface and search engine as a diagnostics tool for electromagnetic calorimeter

    CERN Document Server

    Paluoja, Priit

    2017-01-01

    During the 2016 data collection, the Compact Muon Solenoid Data Acquisition (CMS DAQ) system showed very good reliability. Nevertheless, the high complexity of the hardware and software involved is, by its nature, prone to occasional problems. As a CMS subdetector, the electromagnetic calorimeter (ECAL) is affected in the same way. Some of the issues are not predictable and can appear more than once during the year, such as components getting noisy, power cuts, or failing communication between machines. The detection-diagnosis-intervention chain must be as fast as possible to minimise the downtime of the detector. The aim of this project was to create diagnostic software for the ECAL crew, consisting of a database and a web interface that allows users to search, add, and edit the contents of the database.

  1. Integration of first-principles methods and crystallographic database searches for new ferroelectrics: Strategies and explorations

    International Nuclear Information System (INIS)

    Bennett, Joseph W.; Rabe, Karin M.

    2012-01-01

    In this concept paper, the development of strategies for the integration of first-principles methods with crystallographic database mining for the discovery and design of novel ferroelectric materials is discussed, drawing on the results and experience derived from exploratory investigations of three different systems: (1) the double perovskite Sr(Sb1/2Mn1/2)O3 as a candidate semiconducting ferroelectric; (2) polar derivatives of schafarzikite MSb2O4; and (3) ferroelectric semiconductors with formula M2P2(S,Se)6. A variety of avenues for further research and investigation are suggested, including automated structure-type classification, low-symmetry improper ferroelectrics, and high-throughput first-principles searches for additional representatives of structural families with desirable functional properties. Graphical abstract: Integration of first-principles methods with crystallographic database mining, for the discovery and design of novel ferroelectric materials, could potentially lead to new classes of multifunctional materials. Highlights: integration of first-principles methods and database mining; minor structural families with desirable functional properties; survey of polar entries in the Inorganic Crystal Structure Database.

  2. The VirusBanker database uses a Java program to allow flexible searching through Bunyaviridae sequences

    Directory of Open Access Journals (Sweden)

    Gibbs Mark J

    2008-02-01

    Full Text Available Abstract Background Viruses of the Bunyaviridae have segmented negative-stranded RNA genomes and several of them cause significant disease. Many partial sequences have been obtained from the segments so that GenBank searches give complex results. Sequence databases usually use HTML pages to mediate remote sorting, but this approach can be limiting and may discourage a user from exploring a database. Results The VirusBanker database contains Bunyaviridae sequences and alignments and is presented as two spreadsheets generated by a Java program that interacts with a MySQL database on a server. Sequences are displayed in rows and may be sorted using information that is displayed in columns and includes data relating to the segment, gene, protein, species, strain, sequence length, terminal sequence and date and country of isolation. Bunyaviridae sequences and alignments may be downloaded from the second spreadsheet with titles defined by the user from the columns, or viewed when passed directly to the sequence editor, Jalview. Conclusion VirusBanker allows large datasets of aligned nucleotide and protein sequences from the Bunyaviridae to be compiled and winnowed rapidly using criteria that are formulated heuristically.

  3. The VirusBanker database uses a Java program to allow flexible searching through Bunyaviridae sequences.

    Science.gov (United States)

    Fourment, Mathieu; Gibbs, Mark J

    2008-02-05

    Viruses of the Bunyaviridae have segmented negative-stranded RNA genomes and several of them cause significant disease. Many partial sequences have been obtained from the segments so that GenBank searches give complex results. Sequence databases usually use HTML pages to mediate remote sorting, but this approach can be limiting and may discourage a user from exploring a database. The VirusBanker database contains Bunyaviridae sequences and alignments and is presented as two spreadsheets generated by a Java program that interacts with a MySQL database on a server. Sequences are displayed in rows and may be sorted using information that is displayed in columns and includes data relating to the segment, gene, protein, species, strain, sequence length, terminal sequence and date and country of isolation. Bunyaviridae sequences and alignments may be downloaded from the second spreadsheet with titles defined by the user from the columns, or viewed when passed directly to the sequence editor, Jalview. VirusBanker allows large datasets of aligned nucleotide and protein sequences from the Bunyaviridae to be compiled and winnowed rapidly using criteria that are formulated heuristically.

  4. Quantum Query Complexity for Searching Multiple Marked States from an Unsorted Database

    International Nuclear Information System (INIS)

    Shang Bin

    2007-01-01

    A common and important class of search problems is to find all marked states in an unsorted database with a large number of states. Grover's original quantum search algorithm finds a single marked state with some uncertainty; it has been generalized to the case of multiple marked states and modified to find a single marked state with certainty. However, the query complexity of finding all marked states has not been addressed. We use a generalized Long's algorithm with high precision to solve this problem. We calculate the approximate query complexity, which increases with the number of marked states and with the precision demanded. Finally, we introduce an algorithm for the problem on a 'duality computer' and show its advantage over other algorithms.

  5. Making a search engine for Indocean - A database of abstracts: An experience

    Digital Repository Service at National Institute of Oceanography (India)

    Tapaswi, M.P.; Haravu, L.J.

    Information Management: Trends and Issues (Festschrift in honour of Prof. S. Seetharama), 52. Making a Search Engine for Indocean - A Database of Abstracts: An Experience. Murari P. Tapaswi and L. J. Haravu.

  6. Allie: a database and a search service of abbreviations and long forms

    Science.gov (United States)

    Yamamoto, Yasunori; Yamaguchi, Atsuko; Bono, Hidemasa; Takagi, Toshihisa

    2011-01-01

    Many abbreviations are used in the literature especially in the life sciences, and polysemous abbreviations appear frequently, making it difficult to read and understand scientific papers that are outside of a reader’s expertise. Thus, we have developed Allie, a database and a search service of abbreviations and their long forms (a.k.a. full forms or definitions). Allie searches for abbreviations and their corresponding long forms in a database that we have generated based on all titles and abstracts in MEDLINE. When a user query matches an abbreviation, Allie returns all potential long forms of the query along with their bibliographic data (i.e. title and publication year). In addition, for each candidate, co-occurring abbreviations and a research field in which it frequently appears in the MEDLINE data are displayed. This function helps users learn about the context in which an abbreviation appears. To deal with synonymous long forms, we use a dictionary called GENA that contains domain-specific terms such as gene, protein or disease names along with their synonymic information. Conceptually identical domain-specific terms are regarded as one term, and then conceptually identical abbreviation-long form pairs are grouped taking into account their appearance in MEDLINE. To keep up with new abbreviations that are continuously introduced, Allie has an automatic update system. In addition, the database of abbreviations and their long forms with their corresponding PubMed IDs is constructed and updated weekly. Database URL: The Allie service is available at http://allie.dbcls.jp/. PMID:21498548

  7. Protein structure determination by exhaustive search of Protein Data Bank derived databases.

    Science.gov (United States)

    Stokes-Rees, Ian; Sliz, Piotr

    2010-12-14

    Parallel sequence and structure alignment tools have become ubiquitous and invaluable at all levels in the study of biological systems. We demonstrate the application and utility of this same parallel search paradigm to the process of protein structure determination, benefitting from the large and growing corpus of known structures. Such searches were previously computationally intractable. Through the method of Wide Search Molecular Replacement, developed here, they can be completed in a few hours with the aid of national-scale federated cyberinfrastructure. By dramatically expanding the range of models considered for structure determination, we show that small (less than 12% structural coverage) and low sequence identity (less than 20% identity) template structures can be identified through multidimensional template scoring metrics and used for structure determination. Many new macromolecular complexes can benefit significantly from such a technique due to the lack of known homologous protein folds or sequences. We demonstrate the effectiveness of the method by determining the structure of a full-length p97 homologue from Trichoplusia ni. Example cases with the MHC/T-cell receptor complex and the EmoB protein provide systematic estimates of minimum sequence identity, structure coverage, and structural similarity required for this method to succeed. We describe how this structure-search approach and other novel computationally intensive workflows are made tractable through integration with the US national computational cyberinfrastructure, allowing, for example, rapid processing of the entire Structural Classification of Proteins protein fragment database.

  8. PubMed Phrases, an open set of coherent phrases for searching biomedical literature

    Science.gov (United States)

    Kim, Sun; Yeganova, Lana; Comeau, Donald C.; Wilbur, W. John; Lu, Zhiyong

    2018-01-01

    In biomedicine, key concepts are often expressed by multiple words (e.g., ‘zinc finger protein’). Previous work has shown treating a sequence of words as a meaningful unit, where applicable, is not only important for human understanding but also beneficial for automatic information seeking. Here we present a collection of PubMed® Phrases that are beneficial for information retrieval and human comprehension. We define these phrases as coherent chunks that are logically connected. To collect the phrase set, we apply the hypergeometric test to detect segments of consecutive terms that are likely to appear together in PubMed. These text segments are then filtered using the BM25 ranking function to ensure that they are beneficial from an information retrieval perspective. Thus, we obtain a set of 705,915 PubMed Phrases. We evaluate the quality of the set by investigating PubMed user click data and manually annotating a sample of 500 randomly selected noun phrases. We also analyze and discuss the usage of these PubMed Phrases in literature search. PMID:29893755
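
    The hypergeometric test used to detect such segments can be sketched as follows with SciPy, using made-up counts: given the total number of bigrams in the corpus and the marginal counts of the two words, it returns the probability of seeing at least the observed number of joint occurrences by chance.

        # Hypergeometric test for whether two adjacent terms co-occur more often
        # than chance (illustrative counts, not the actual PubMed statistics).
        from scipy.stats import hypergeom

        def phrase_pvalue(k_joint, n_w1, n_w2, total_bigrams):
            """P(observing >= k_joint bigrams 'w1 w2' by chance) given the marginal
            counts of bigrams starting with w1 and ending with w2."""
            return hypergeom.sf(k_joint - 1, total_bigrams, n_w1, n_w2)

        # e.g. 'zinc finger' with hypothetical corpus counts
        print(phrase_pvalue(k_joint=900, n_w1=1500, n_w2=1200, total_bigrams=10_000_000))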

  9. Databases

    Digital Repository Service at National Institute of Oceanography (India)

    Kunte, P.D.

    Information on bibliographic as well as numeric/textual databases relevant to coastal geomorphology has been included in a tabular form. Databases cover a broad spectrum of related subjects like coastal environment and population aspects, coastline...

  10. The DNA database search controversy revisited: bridging the Bayesian-frequentist gap.

    Science.gov (United States)

    Storvik, Geir; Egeland, Thore

    2007-09-01

    Two different quantities have been suggested for the quantification of evidence in cases where a suspect is found by a search through a database of DNA profiles. The likelihood ratio, typically motivated from a Bayesian setting, is preferred by most experts in the field. The so-called np rule has been motivated through frequentist arguments and has been suggested by the American National Research Council and by Stockmarr (1999, Biometrics 55, 671-677). The two quantities differ substantially and have given rise to the DNA database search controversy. Although several authors have criticized the different approaches, a full explanation of why these differences appear is still lacking. In this article we show that a P-value in a frequentist hypothesis setting is approximately equal to the result of the np rule. We argue, however, that a more reasonable procedure in this case is to use conditional testing, in which case a P-value directly related to posterior probabilities and the likelihood ratio is obtained. This way of viewing the problem bridges the gap between the Bayesian and frequentist approaches. At the same time it indicates that the np rule should not be used to quantify evidence.
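
    The two competing quantities can be made concrete with a small numerical sketch (the match probability and database size below are illustrative only): the likelihood ratio scales as 1/p, whereas the np rule weakens the evidence by the factor n, and np itself approximates the frequentist P-value of at least one chance match in the search.

        # Compare the likelihood ratio with the np rule for illustrative numbers.
        p = 1e-6       # random-match probability of the DNA profile
        n = 100_000    # number of profiles in the searched database

        likelihood_ratio = 1 / p                           # Bayesian-style match weight
        np_value = n * p                                   # the "np" quantity
        p_at_least_one_chance_match = 1 - (1 - p) ** n     # ~ n * p when n * p is small

        print(f"likelihood ratio:            {likelihood_ratio:.3g}")
        print(f"np rule value:               {np_value:.3g}")
        print(f"P(>=1 chance match in scan): {p_at_least_one_chance_match:.3g}")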

  11. Current Comparative Table (CCT) automates customized searches of dynamic biological databases.

    Science.gov (United States)

    Landsteiner, Benjamin R; Olson, Michael R; Rutherford, Robert

    2005-07-01

    The Current Comparative Table (CCT) software program enables working biologists to automate customized bioinformatics searches, typically of remote sequence or HMM (hidden Markov model) databases. CCT currently supports BLAST, hmmpfam and other programs useful for gene and ortholog identification. The software is web based, has a BioPerl core and can be used remotely via a browser or locally on Mac OS X or Linux machines. CCT is particularly useful to scientists who study large sets of molecules in today's evolving information landscape because it color-codes all result files by age and highlights even tiny changes in sequence or annotation. By empowering non-bioinformaticians to automate custom searches and examine current results in context at a glance, CCT allows a remote database submission in the evening to influence the next morning's bench experiment. A demonstration of CCT is available at http://orb.public.stolaf.edu/CCTdemo and the open source software is freely available from http://sourceforge.net/projects/orb-cct.

  12. Analysis of Users' Searches of CD-ROM Databases in the National and University Library in Zagreb.

    Science.gov (United States)

    Jokic, Maja

    1997-01-01

    Investigates the search behavior of CD-ROM database users in Zagreb (Croatia) libraries: one group needed a minimum of technical assistance, and the other was completely independent. Highlights include the use of questionnaires and transaction log analysis and the need for end-user education. The questionnaire and definitions of search process…

  13. Fine-grained Database Field Search Using Attribute-Based Encryption for E-Healthcare Clouds.

    Science.gov (United States)

    Guo, Cheng; Zhuang, Ruhan; Jie, Yingmo; Ren, Yizhi; Wu, Ting; Choo, Kim-Kwang Raymond

    2016-11-01

    An effectively designed e-healthcare system can significantly enhance the quality of access and experience of healthcare users, including facilitating medical and healthcare providers in ensuring a smooth delivery of services. Ensuring the security of patients' electronic health records (EHRs) in the e-healthcare system is an active research area. EHRs may be outsourced to a third party, such as a community healthcare cloud service provider, for storage due to cost-saving measures. Generally, encrypting the EHRs when they are stored in the system (i.e. data-at-rest) or prior to outsourcing the data is used to ensure data confidentiality. Searchable encryption (SE) is a promising technique that can ensure the protection of private information without compromising on performance. In this paper, we propose a novel framework for controlling access to EHRs stored in semi-trusted cloud servers (e.g. a private cloud or a community cloud). To achieve fine-grained access control for EHRs, we leverage the ciphertext-policy attribute-based encryption (CP-ABE) technique to encrypt tables published by hospitals, including patients' EHRs, and the table is stored in the database with the primary key being the patient's unique identity. Our framework can enable different users with different privileges to search on different database fields. Differing from previous attempts to secure the outsourcing of data, we emphasize control over the searches of the fields within the database. We demonstrate the utility of the scheme by evaluating it using datasets from the University of California, Irvine.

  14. Real-Time Ligand Binding Pocket Database Search Using Local Surface Descriptors

    Science.gov (United States)

    Chikhi, Rayan; Sael, Lee; Kihara, Daisuke

    2010-01-01

    Due to the increasing number of structures of unknown function accumulated by ongoing structural genomics projects, there is an urgent need for computational methods for characterizing protein tertiary structures. As functions of many of these proteins are not easily predicted by conventional sequence database searches, a legitimate strategy is to utilize structure information in function characterization. Of particular interest is the prediction of ligand binding to a protein, as ligand molecule recognition is a major part of the molecular function of proteins. Predicting whether a ligand molecule binds a protein is a complex problem due to the physical nature of protein-ligand interactions and the flexibility of both binding sites and ligand molecules. However, geometric and physicochemical complementarity is observed between the ligand and its binding site in many cases. Therefore, ligand molecules which bind to a local surface site in a protein can be predicted by finding similar local pockets of known binding ligands in the structure database. Here, we present two representations of ligand binding pockets and utilize them for ligand binding prediction by pocket shape comparison. These representations are based on mapping of surface properties of binding pockets, which are compactly described either by the two-dimensional pseudo-Zernike moments or the 3D Zernike descriptors. These compact representations allow fast real-time pocket searching against a database. A thorough benchmark study employing two different datasets shows that our representations are competitive with other existing methods. Limitations and potentials of the shape-based methods as well as possible improvements are discussed. PMID:20455259
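
    The reason compact moment-based descriptors enable real-time searching is that pocket comparison reduces to distance computations between fixed-length vectors. The sketch below assumes the descriptors are plain numeric vectors of equal length; the descriptor dimension and the data are invented for illustration.

        # Minimal sketch (assumption: each pocket is a fixed-length descriptor vector):
        # find the pockets in a precomputed database closest to a query descriptor.
        import numpy as np

        def nearest_pockets(query, database, k=5):
            """database: dict mapping pocket id -> descriptor vector (same length as query)."""
            ids = list(database)
            mat = np.vstack([database[i] for i in ids])
            dists = np.linalg.norm(mat - np.asarray(query), axis=1)
            order = np.argsort(dists)[:k]
            return [(ids[i], float(dists[i])) for i in order]

        rng = np.random.default_rng(0)
        db = {f"pocket_{i}": rng.random(121) for i in range(1000)}  # 121-d descriptors, illustrative
        print(nearest_pockets(db["pocket_42"], db, k=3))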

  15. Seismic Search Engine: A distributed database for mining large scale seismic data

    Science.gov (United States)

    Liu, Y.; Vaidya, S.; Kuzma, H. A.

    2009-12-01

    The International Monitoring System (IMS) of the CTBTO collects terabytes worth of seismic measurements from many receiver stations situated around the earth with the goal of detecting underground nuclear testing events and distinguishing them from other benign, but more common events such as earthquakes and mine blasts. The International Data Center (IDC) processes and analyzes these measurements, as they are collected by the IMS, to summarize event detections in daily bulletins. Thereafter, the data measurements are archived into a large format database. Our proposed Seismic Search Engine (SSE) will facilitate a framework for data exploration of the seismic database as well as the development of seismic data mining algorithms. Analogous to GenBank, the annotated genetic sequence database maintained by NIH, through SSE, we intend to provide public access to seismic data and a set of processing and analysis tools, along with community-generated annotations and statistical models to help interpret the data. SSE will implement queries as user-defined functions composed from standard tools and models. Each query is compiled and executed over the database internally before reporting results back to the user. Since queries are expressed with standard tools and models, users can easily reproduce published results within this framework for peer-review and making metric comparisons. As an illustration, an example query is “what are the best receiver stations in East Asia for detecting events in the Middle East?” Evaluating this query involves listing all receiver stations in East Asia, characterizing known seismic events in that region, and constructing a profile for each receiver station to determine how effective its measurements are at predicting each event. The results of this query can be used to help prioritize how data is collected, identify defective instruments, and guide future sensor placements.

  16. Anatomy and evolution of database search engines-a central component of mass spectrometry based proteomic workflows.

    Science.gov (United States)

    Verheggen, Kenneth; Raeder, Helge; Berven, Frode S; Martens, Lennart; Barsnes, Harald; Vaudel, Marc

    2017-09-13

    Sequence database search engines are bioinformatics algorithms that identify peptides from tandem mass spectra using a reference protein sequence database. Two decades of development, notably driven by advances in mass spectrometry, have provided scientists with more than 30 published search engines, each with its own properties. In this review, we present the common paradigm behind the different implementations, and its limitations for modern mass spectrometry datasets. We also detail how the search engines attempt to alleviate these limitations, and provide an overview of the different software frameworks available to the researcher. Finally, we highlight alternative approaches for the identification of proteomic mass spectrometry datasets, either as a replacement for, or as a complement to, sequence database search engines. © 2017 Wiley Periodicals, Inc.

  17. The Relationship between Searches Performed in Online Databases and the Number of Full-Text Articles Accessed: Measuring the Interaction between Database and E-Journal Collections

    Science.gov (United States)

    Lamothe, Alain R.

    2011-01-01

    The purpose of this paper is to report the results of a quantitative analysis exploring the interaction and relationship between the online database and electronic journal collections at the J. N. Desmarais Library of Laurentian University. A very strong relationship exists between the number of searches and the size of the online database…

  18. Databases

    Directory of Open Access Journals (Sweden)

    Nick Ryan

    2004-01-01

    Full Text Available Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.

  19. Internet Databases of the Properties, Enzymatic Reactions, and Metabolism of Small Molecules—Search Options and Applications in Food Science

    Directory of Open Access Journals (Sweden)

    Piotr Minkiewicz

    2016-12-01

    Full Text Available Internet databases of small molecules, their enzymatic reactions, and metabolism have emerged as useful tools in food science. Database searching is also introduced as part of chemistry or enzymology courses for food technology students. Such resources support the search for information about single compounds and facilitate the introduction of secondary analyses of large datasets. Information can be retrieved from databases by searching for the compound name or for its structure, annotated with the help of chemical codes or drawn using molecule-editing software. Data mining options may be enhanced by navigating through a network of links and cross-links between databases. Exemplary databases reviewed in this article belong to two classes: tools concerning small molecules (including general and specialized databases annotating food components) and tools annotating enzymes and metabolism. Some problems associated with database application are also discussed. Data summarized in computer databases may be used for calculation of daily intake of bioactive compounds, prediction of metabolism of food components, and their biological activity as well as for prediction of interactions between food components and drugs.

  20. The Open Spectral Database: an open platform for sharing and searching spectral data.

    Science.gov (United States)

    Chalk, Stuart J

    2016-01-01

    A number of websites make spectral data available for download (typically as JCAMP-DX text files), and one (ChemSpider) also allows users to contribute spectral files. As a result, searching and retrieving such spectral data can be time consuming, and the data can be difficult to reuse if they are compressed in the JCAMP-DX file. What is needed is a single resource that allows submission of JCAMP-DX files, export of the raw data in multiple formats, searching based on multiple chemical identifiers, and is open in terms of license and access. To address these issues a new online resource called the Open Spectral Database (OSDB) http://osdb.info/ has been developed and is now available. Built using open source tools, using open code (hosted on GitHub), providing open data, and open to community input about design and functionality, the OSDB is available for anyone to submit spectral data, making it searchable and available to the scientific community. This paper details the concept and coding, internal architecture, export formats, Representational State Transfer (REST) Application Programming Interface and options for submission of data. The OSDB website went live in November 2015. Concurrently, the GitHub repository was made available at https://github.com/stuchalk/OSDB/, and is open for collaborators to join the project, submit issues, and contribute code. The combination of a scripting environment (PHPStorm), a PHP Framework (CakePHP), a relational database (MySQL) and a code repository (GitHub) provides all the capabilities to easily develop REST-based websites for ingestion, curation and exposure of open chemical data to the community at all levels. It is hoped this software stack (or equivalent ones in other scripting languages) will be leveraged to make more chemical data available for both humans and computers.
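
    Because the OSDB exposes a REST interface, a client can query it with a plain HTTP request. The sketch below is only a shape of such a call: the endpoint path and parameter name are assumptions, not the documented OSDB routes, so the actual API at http://osdb.info/ should be consulted before use.

        # Minimal sketch of querying a REST-style spectral database. The route
        # "/api/search" and the "inchikey" parameter are hypothetical placeholders.
        import requests

        def search_spectra(base_url, inchikey):
            """Ask the service for spectra matching a chemical identifier (assumed route)."""
            resp = requests.get(f"{base_url}/api/search",
                                params={"inchikey": inchikey}, timeout=30)
            resp.raise_for_status()
            return resp.json()

        # Example call, commented out because the route is an assumption:
        # results = search_spectra("http://osdb.info", "UHOVQNZJYSORNB-UHFFFAOYSA-N")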

  1. The pedagogical benefits of a lexical database (SciE-Lex) to assist the production of publishable biomedical texts by EAL writers

    Directory of Open Access Journals (Sweden)

    Natalia Judith Laso

    2017-04-01

    Full Text Available Research has demonstrated that it is challenging for English as an Additional Language (EAL) writers to acquire phraseological competence in academic English and develop a good working knowledge of discipline-specific formulaic language. This paper aims to explore whether SciE-Lex, a powerful lexical database of biomedical research articles, can be exploited by EAL writers to enhance their command of formulaic language in biomedical English published writing. Our paper reports on the challenges associated with formulaic language (namely collocations) for EAL writers; it reflects on the benefits of using a lexical database and it evaluates a pedagogical approach to helping EAL writers produce publishable texts. It specifically highlights results from two writing workshops conducted for EAL writers (medical researchers in the present study). The workshops involved medical researchers working on drafts of their writing using SciE-Lex. Our paper reports on the specific benefits of using SciE-Lex as demonstrated by revisions in the writing produced by the EAL medical researchers. This paper aims to contribute to the current discussion on English for Research Publication Purposes (ERPP) for the EAL community, who now form the main contributors to research knowledge dissemination.

  2. Searching the protein structure database for ligand-binding site similarities using CPASS v.2

    Directory of Open Access Journals (Sweden)

    Caprez Adam

    2011-01-01

    Full Text Available Abstract Background A recent analysis of protein sequences deposited in the NCBI RefSeq database indicates that ~8.5 million protein sequences are encoded in prokaryotic and eukaryotic genomes, where ~30% are explicitly annotated as "hypothetical" or "uncharacterized" proteins. Our Comparison of Protein Active-Site Structures (CPASS) v.2 database and software compare the sequence and structural characteristics of experimentally determined ligand binding sites to infer a functional relationship in the absence of global sequence or structure similarity. CPASS is an important component of our Functional Annotation Screening Technology by NMR (FAST-NMR) protocol and has been successfully applied to aid the annotation of a number of proteins of unknown function. Findings We report a major upgrade to our CPASS software and database that significantly improves its broad utility. CPASS v.2 is designed with a layered architecture to increase flexibility and portability that also enables job distribution over the Open Science Grid (OSG) to increase speed. Similarly, the CPASS interface was enhanced to provide more user flexibility in submitting a CPASS query. CPASS v.2 now allows for both automatic and manual definition of ligand-binding sites and permits pair-wise, one versus all, one versus list, or list versus list comparisons. Solvent accessible surface area, ligand root-mean square difference, and Cβ distances have been incorporated into the CPASS similarity function to improve the quality of the results. The CPASS database has also been updated. Conclusions CPASS v.2 is more than an order of magnitude faster than the original implementation, and allows for multiple simultaneous job submissions. Similarly, the CPASS database of ligand-defined binding sites has increased in size by ~38%, dramatically increasing the likelihood of a positive search result. The modification to the CPASS similarity function is effective in reducing CPASS similarity scores

  3. Protein backbone chemical shifts predicted from searching a database for torsion angle and sequence homology

    International Nuclear Information System (INIS)

    Shen Yang; Bax, Ad

    2007-01-01

    Chemical shifts of nuclei in or attached to a protein backbone are exquisitely sensitive to their local environment. A computer program, SPARTA, is described that uses this correlation with local structure to predict protein backbone chemical shifts, given an input three-dimensional structure, by searching a newly generated database for triplets of adjacent residues that provide the best match in φ/ψ/χ1 torsion angles and sequence similarity to the query triplet of interest. The database contains 15N, 1HN, 1Hα, 13Cα, 13Cβ and 13C' chemical shifts for 200 proteins for which a high resolution X-ray (≤2.4 Å) structure is available. The relative importance of the weighting factors for the φ/ψ/χ1 angles and sequence similarity was optimized empirically. The weighted, average secondary shifts of the central residues in the 20 best-matching triplets, after inclusion of nearest neighbor, ring current, and hydrogen bonding effects, are used to predict chemical shifts for the protein of known structure. Validation shows good agreement between the SPARTA-predicted and experimental shifts, with standard deviations of 2.52, 0.51, 0.27, 0.98, 1.07 and 1.08 ppm for 15N, 1HN, 1Hα, 13Cα, 13Cβ and 13C', respectively, including outliers.
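
    The core of the prediction is a similarity-weighted average over the best-matching database triplets. The sketch below shows only that averaging step with toy weights and shifts; it is not the SPARTA program, and the correction terms (nearest neighbor, ring current, hydrogen bonding) are omitted.

        # Minimal sketch (toy values, not SPARTA itself): predict a secondary shift
        # as the similarity-weighted average over the best-matching triplets.
        def weighted_secondary_shift(matches):
            """matches: list of (similarity_weight, secondary_shift) pairs."""
            total_w = sum(w for w, _ in matches)
            return sum(w * s for w, s in matches) / total_w

        best_matches = [(0.95, 2.1), (0.90, 1.8), (0.75, 2.4)]   # illustrative only
        print(round(weighted_secondary_shift(best_matches), 2))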

  4. Building and evaluating an informatics tool to facilitate analysis of a biomedical literature search service in an academic medical center library.

    Science.gov (United States)

    Hinton, Elizabeth G; Oelschlegel, Sandra; Vaughn, Cynthia J; Lindsay, J Michael; Hurst, Sachiko M; Earl, Martha

    2013-01-01

    This study utilizes an informatics tool to analyze a robust literature search service in an academic medical center library. Structured interviews with librarians were conducted focusing on the benefits of such a tool, expectations for performance, and visual layout preferences. The resulting application utilizes Microsoft SQL Server and .Net Framework 3.5 technologies, allowing for the use of a web interface. Customer tables and MeSH terms are included. The National Library of Medicine MeSH database and entry terms for each heading are incorporated, resulting in functionality similar to searching the MeSH database through PubMed. Data reports will facilitate analysis of the search service.

  5. Matrix-product-state simulation of an extended Brueschweiler bulk-ensemble database search

    International Nuclear Information System (INIS)

    SaiToh, Akira; Kitagawa, Masahiro

    2006-01-01

    Brueschweiler's database search in a spin Liouville space can be simulated efficiently and without error on a conventional computer as long as the simulation cost of the internal circuit of an oracle function is polynomial, whereas in true NMR experiments it suffers from an exponential decrease in the variation of the signal intensity. Using the matrix-product-state simulation method proposed by Vidal [G. Vidal, Phys. Rev. Lett. 91, 147902 (2003)], we perform such a simulation. We also present extensions of the algorithm that do not utilize the J-coupling or DD-coupling splitting of frequency peaks in observation: in one extension, searching can be completed with a single query at polynomial post-oracle circuit complexity; in another, multiple solutions of an oracle can be found with a query complexity that is linear in the key length and in the number of solutions (this extension finds all marked keys). These extended algorithms are also simulated with the same simulation method

  6. Decision making in family medicine: randomized trial of the effects of the InfoClinique and Trip database search engines.

    Science.gov (United States)

    Labrecque, Michel; Ratté, Stéphane; Frémont, Pierre; Cauchon, Michel; Ouellet, Jérôme; Hogg, William; McGowan, Jessie; Gagnon, Marie-Pierre; Njoya, Merlin; Légaré, France

    2013-10-01

    To compare the ability of users of 2 medical search engines, InfoClinique and the Trip database, to provide correct answers to clinical questions and to explore the perceived effects of the tools on the clinical decision-making process. Randomized trial. Three family medicine units of the family medicine program of the Faculty of Medicine at Laval University in Quebec city, Que. Fifteen second-year family medicine residents. Residents generated 30 structured questions about therapy or preventive treatment (2 questions per resident) based on clinical encounters. Using an Internet platform designed for the trial, each resident answered 20 of these questions (their own 2, plus 18 of the questions formulated by other residents, selected randomly) before and after searching for information with 1 of the 2 search engines. For each question, 5 residents were randomly assigned to begin their search with InfoClinique and 5 with the Trip database. The ability of residents to provide correct answers to clinical questions using the search engines, as determined by third-party evaluation. After answering each question, participants completed a questionnaire to assess their perception of the engine's effect on the decision-making process in clinical practice. Of 300 possible pairs of answers (1 answer before and 1 after the initial search), 254 (85%) were produced by 14 residents. Of these, 132 (52%) and 122 (48%) pairs of answers concerned questions that had been assigned an initial search with InfoClinique and the Trip database, respectively. Both engines produced an important and similar absolute increase in the proportion of correct answers after searching (26% to 62% for InfoClinique, for an increase of 36%; 24% to 63% for the Trip database, for an increase of 39%; P = .68). For all 30 clinical questions, at least 1 resident produced the correct answer after searching with either search engine. The mean (SD) time of the initial search for each question was 23.5 (7

  7. Information retrieval from the INIS database. Is the new online search system poorer than the old one?

    International Nuclear Information System (INIS)

    Adamek, Petr

    2011-01-01

    A brief overview of the search options for the INIS database is presented, categorized into offline and online systems, and their assets and drawbacks are described. In the Online section, the old system on the BASIS platform and the new system on the Google Search Appliance platform are compared. The capabilities of the new system seem to be more limited than those of the old system. (author)

  8. A framework for intelligent data acquisition and real-time database searching for shotgun proteomics.

    Science.gov (United States)

    Graumann, Johannes; Scheltema, Richard A; Zhang, Yong; Cox, Jürgen; Mann, Matthias

    2012-03-01

    In the analysis of complex peptide mixtures by MS-based proteomics, many more peptides elute at any given time than can be identified and quantified by the mass spectrometer. This makes it desirable to optimally allocate peptide sequencing and narrow mass range quantification events. In computer science, intelligent agents are frequently used to make autonomous decisions in complex environments. Here we develop and describe a framework for intelligent data acquisition and real-time database searching and showcase selected examples. The intelligent agent is implemented in the MaxQuant computational proteomics environment, termed MaxQuant Real-Time. It analyzes data as it is acquired on the mass spectrometer, constructs isotope patterns and SILAC pair information as well as controls MS and tandem MS events based on real-time and prior MS data or external knowledge. Re-implementing a top10 method in the intelligent agent yields similar performance to the data dependent methods running on the mass spectrometer itself. We demonstrate the capabilities of MaxQuant Real-Time by creating a real-time search engine capable of identifying peptides "on-the-fly" within 30 ms, well within the time constraints of a shotgun fragmentation "topN" method. The agent can focus sequencing events onto peptides of specific interest, such as those originating from a specific gene ontology (GO) term, or peptides that are likely modified versions of already identified peptides. Finally, we demonstrate enhanced quantification of SILAC pairs whose ratios were poorly defined in survey spectra. MaxQuant Real-Time is flexible and can be applied to a large number of scenarios that would benefit from intelligent, directed data acquisition. Our framework should be especially useful for new instrument types, such as the quadrupole-Orbitrap, that are currently becoming available.

  9. Complementary Value of Databases for Discovery of Scholarly Literature: A User Survey of Online Searching for Publications in Art History

    Science.gov (United States)

    Nemeth, Erik

    2010-01-01

    Discovery of academic literature through Web search engines challenges the traditional role of specialized research databases. Creation of literature outside academic presses and peer-reviewed publications expands the content for scholarly research within a particular field. The resulting body of literature raises the question of whether scholars…

  10. Testing search strategies for systematic reviews in the Medline literature database through PubMed.

    Science.gov (United States)

    Volpato, Enilze S N; Betini, Marluci; El Dib, Regina

    2014-04-01

    A high-quality electronic search is essential in ensuring accuracy and completeness in retrieved records for the conducting of a systematic review. We analysed the available sample of search strategies to identify the best method for searching in Medline through PubMed, considering the use or not of parentheses, double quotation marks, truncation, and the use of a simple search or search history. In our cross-sectional study of search strategies, we selected and analysed the available searches performed during evidence-based medicine classes and in systematic reviews conducted in the Botucatu Medical School, UNESP, Brazil. We analysed 120 search strategies. With regard to the use of phrase searches with parentheses, there was no difference between the results with and without parentheses, nor between simple searches and search history tools, in 100% of the sample analysed (P = 1.0). The number of results retrieved by the searches analysed was smaller using double quotation marks and using truncation compared with the standard strategy (P = 0.04 and P = 0.08, respectively). There is no need to use phrase-searching parentheses to retrieve studies; however, we recommend the use of double quotation marks when an investigator attempts to retrieve articles in which a term appears to be exactly the same as what was proposed in the search form. Furthermore, we do not recommend the use of truncation in search strategies in Medline via PubMed. Although the results of simple searches or search history tools were the same, we recommend using the latter.
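
    Differences between these strategies show up directly in the hit counts PubMed returns. The sketch below uses Biopython's Entrez utilities to compare counts for quoted, unquoted, and truncated versions of a query, in the spirit of the comparison above; the query terms and contact e-mail are placeholders.

        # Minimal sketch: compare PubMed hit counts for different phrasings of the
        # same query (plain, quoted phrase, truncated) via NCBI E-utilities.
        from Bio import Entrez

        Entrez.email = "you@example.org"   # placeholder; NCBI requires a contact address

        def pubmed_count(term):
            handle = Entrez.esearch(db="pubmed", term=term, retmax=0)
            record = Entrez.read(handle)
            handle.close()
            return int(record["Count"])

        for term in ['systematic review hypertension',
                     '"systematic review" hypertension',
                     'systematic review hypertens*']:
            print(term, "->", pubmed_count(term))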

  11. Preparing College Students To Search Full-Text Databases: Is Instruction Necessary?

    Science.gov (United States)

    Riley, Cheryl; Wales, Barbara

    Full-text databases allow Central Missouri State University's clients to access some of the serials that libraries have had to cancel due to escalating subscription costs; EbscoHost, the subject of this study, is one such database. The database is available free to all Missouri residents. A survey was designed consisting of 21 questions intended…

  12. The effect of wild card designations and rare alleles in forensic DNA database searches

    DEFF Research Database (Denmark)

    Tvedebrink, Torben; Bright, Jo-Anne; Buckleton, John S

    2015-01-01

    Forensic DNA databases are powerful tools used for the identification of persons of interest in criminal investigations. Typically, they consist of two parts: (1) a database containing DNA profiles of known individuals and (2) a database of DNA profiles associated with crime scenes. The risk...... of adventitious or chance matches between crimes and innocent people increases as the number of profiles within a database grows and more data is shared between various forensic DNA databases, e.g. from different jurisdictions. The DNA profiles obtained from crime scenes are often partial because crime samples...

  13. Application of an automated natural language processing (NLP) workflow to enable federated search of external biomedical content in drug discovery and development.

    Science.gov (United States)

    McEntire, Robin; Szalkowski, Debbie; Butler, James; Kuo, Michelle S; Chang, Meiping; Chang, Man; Freeman, Darren; McQuay, Sarah; Patel, Jagruti; McGlashen, Michael; Cornell, Wendy D; Xu, Jinghai James

    2016-05-01

    External content sources such as MEDLINE®, National Institutes of Health (NIH) grants and conference websites provide access to the latest breaking biomedical information, which can inform pharmaceutical and biotechnology company pipeline decisions. The value of the sites for industry, however, is limited by the use of the public internet, the limited synonyms, the rarity of batch searching capability and the disconnected nature of the sites. Fortunately, many sites now offer their content for download and we have developed an automated internal workflow that uses text mining and tailored ontologies for programmatic search and knowledge extraction. We believe such an efficient and secure approach provides a competitive advantage to companies needing access to the latest information for a range of use cases and complements manually curated commercial sources. Copyright © 2016. Published by Elsevier Ltd.

  14. NIMS structural materials databases and cross search engine - MatNavi

    Energy Technology Data Exchange (ETDEWEB)

    Yamazaki, M.; Xu, Y.; Murata, M.; Tanaka, H.; Kamihira, K.; Kimura, K. [National Institute for Materials Science, Tokyo (Japan)

    2007-06-15

    Materials Database Station (MDBS) of National Institute for Materials Science (NIMS) owns the world's largest Internet materials database for academic and industry purposes, which is composed of twelve databases: five concerning structural materials, five concerning basic physical properties, one for superconducting materials and one for polymers. All of these databases are open to Internet access at http://mits.nims.go.jp/en. Online tools for predicting properties of polymers and composite materials are also available. The NIMS structural materials databases are composed of the structural materials data sheet online version (creep, fatigue, corrosion and space-use materials strength), the microstructure for crept material database, the pressure vessel materials database and the CCT diagram for welding. (orig.)

  15. Validation of SmartRank: A likelihood ratio software for searching national DNA databases with complex DNA profiles.

    Science.gov (United States)

    Benschop, Corina C G; van de Merwe, Linda; de Jong, Jeroen; Vanvooren, Vanessa; Kempenaers, Morgane; Kees van der Beek, C P; Barni, Filippo; Reyes, Eusebio López; Moulin, Léa; Pene, Laurent; Haned, Hinda; Sijen, Titia

    2017-07-01

    Searching a national DNA database with complex and incomplete profiles usually yields very large numbers of possible matches that can present many candidate suspects to be further investigated by the forensic scientist and/or police. Current practice in most forensic laboratories consists of ordering these 'hits' based on the number of matching alleles with the searched profile. Thus, candidate profiles that share the same number of matching alleles are not differentiated, and owing to the lack of other ranking criteria for the candidate list it may be difficult to discern a true match from the false positives or to notice that all candidates are in fact false positives. SmartRank was developed to put forward only relevant candidates and rank them accordingly. The SmartRank software computes a likelihood ratio (LR) for the searched profile and each profile in the DNA database and ranks database entries above a defined LR threshold according to the calculated LR. In this study, we examined, for mixed DNA profiles of variable complexity, whether the true donors are retrieved, the number of false positives above an LR threshold, and the ranking position of the true donors. Using 343 mixed DNA profiles, over 750 SmartRank searches were performed. In addition, the performance of SmartRank and CODIS was compared regarding DNA database searches, and SmartRank was found to be complementary to CODIS. We also describe the applicable domain of SmartRank and provide guidelines. The SmartRank software is open-source and freely available. Using the best practice guidelines, SmartRank enables obtaining investigative leads in criminal cases lacking a suspect. Copyright © 2017 Elsevier B.V. All rights reserved.
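
    The ranking idea itself is simple to express. The sketch below is not the SmartRank implementation: the LR computation is delegated to a user-supplied function, and the threshold value is an arbitrary placeholder; only the threshold-and-rank step described above is shown.

        # Minimal sketch of LR-based candidate ranking (not SmartRank itself).
        def rank_candidates(crime_profile, database, lr_function, lr_threshold=1000.0):
            """database: dict person_id -> profile; lr_function(crime, candidate) -> LR.
            Return candidates with LR >= threshold, best first."""
            scored = ((pid, lr_function(crime_profile, prof)) for pid, prof in database.items())
            hits = [(pid, lr) for pid, lr in scored if lr >= lr_threshold]
            return sorted(hits, key=lambda x: x[1], reverse=True)

    Compared with ordering by allele count alone, every retained candidate carries a quantitative score, so two candidates with the same number of matching alleles can still be told apart.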

  16. Biomedical engineering principles

    CERN Document Server

    Ritter, Arthur B; Valdevit, Antonio; Ascione, Alfred N

    2011-01-01

    Introduction: Modeling of Physiological Processes; Cell Physiology and Transport; Principles and Biomedical Applications of Hemodynamics; A Systems Approach to Physiology; The Cardiovascular System; Biomedical Signal Processing; Signal Acquisition and Processing; Techniques for Physiological Signal Processing; Examples of Physiological Signal Processing; Principles of Biomechanics; Practical Applications of Biomechanics; Biomaterials; Principles of Biomedical Capstone Design; Unmet Clinical Needs; Entrepreneurship: Reasons why Most Good Designs Never Get to Market; An Engineering Solution in Search of a Biomedical Problem

  17. Methods and pitfalls in searching drug safety databases utilising the Medical Dictionary for Regulatory Activities (MedDRA).

    Science.gov (United States)

    Brown, Elliot G

    2003-01-01

    The Medical Dictionary for Regulatory Activities (MedDRA) is a unified standard terminology for recording and reporting adverse drug event data. Its introduction is widely seen as a significant improvement on the previous situation, where a multitude of terminologies of widely varying scope and quality were in use. However, there are some complexities that may cause difficulties, and these will form the focus for this paper. Two methods of searching MedDRA-coded databases are described: searching based on term selection from all of MedDRA and searching based on terms in the safety database. There are several potential traps for the unwary in safety searches. There may be multiple locations of relevant terms within a system organ class (SOC) and lack of recognition of appropriate group terms; the user may think that group terms are more inclusive than is the case. MedDRA may distribute terms relevant to one medical condition across several primary SOCs. If the database supports the MedDRA model, it is possible to perform multiaxial searching: while this may help find terms that might have been missed, it is still necessary to consider the entire contents of the SOCs to find all relevant terms and there are many instances of incomplete secondary linkages. It is important to adjust for multiaxiality if data are presented using primary and secondary locations. Other sources for errors in searching are non-intuitive placement and the selection of terms as preferred terms (PTs) that may not be widely recognised. Some MedDRA rules could also result in errors in data retrieval if the individual is unaware of these: in particular, the lack of multiaxial linkages for the Investigations SOC, Social circumstances SOC and Surgical and medical procedures SOC and the requirement that a PT may only be present under one High Level Term (HLT) and one High Level Group Term (HLGT) within any single SOC. Special Search Categories (collections of PTs assembled from various SOCs by
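
    The multiaxial pitfall described above can be made concrete with a toy term table. The sketch below uses invented data, not MedDRA content: each preferred term (PT) has one primary SOC and optional secondary SOC links, and a search of a SOC must include the secondary links and de-duplicate the hits to avoid double counting.

        # Minimal sketch (toy data only, not MedDRA): search a SOC across primary
        # and secondary PT links, de-duplicating so a PT is counted once.
        pt_table = {
            "Myocardial infarction": {"primary_soc": "Cardiac disorders",
                                      "secondary_socs": ["Vascular disorders"]},
            "Chest pain":            {"primary_soc": "General disorders",
                                      "secondary_socs": ["Cardiac disorders"]},
        }

        def pts_in_soc(soc, include_secondary=True):
            hits = set()
            for pt, links in pt_table.items():
                if links["primary_soc"] == soc:
                    hits.add(pt)
                elif include_secondary and soc in links["secondary_socs"]:
                    hits.add(pt)
            return sorted(hits)

        print(pts_in_soc("Cardiac disorders"))                        # both PTs
        print(pts_in_soc("Cardiac disorders", include_secondary=False))  # primary only

    A primary-only search silently misses the second term, which is exactly the kind of incomplete retrieval the abstract warns about.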

  18. Identifying complications of interventional procedures from UK routine healthcare databases: a systematic search for methods using clinical codes.

    Science.gov (United States)

    Keltie, Kim; Cole, Helen; Arber, Mick; Patrick, Hannah; Powell, John; Campbell, Bruce; Sims, Andrew

    2014-11-28

    Several authors have developed and applied methods to routine data sets to identify the nature and rate of complications following interventional procedures. But, to date, there has been no systematic search for such methods. The objective of this article was to find, classify and appraise published methods, based on analysis of clinical codes, which used routine healthcare databases in a United Kingdom setting to identify complications resulting from interventional procedures. A literature search strategy was developed to identify published studies that referred, in the title or abstract, to the name or acronym of a known routine healthcare database and to complications from procedures or devices. The following data sources were searched in February and March 2013: Cochrane Methods Register, Conference Proceedings Citation Index - Science, Econlit, EMBASE, Health Management Information Consortium, Health Technology Assessment database, MathSciNet, MEDLINE, MEDLINE in-process, OAIster, OpenGrey, Science Citation Index Expanded and ScienceDirect. Of the eligible papers, those which reported methods using clinical coding were classified and summarised in tabular form using the following headings: routine healthcare database; medical speciality; method for identifying complications; length of follow-up; method of recording comorbidity. The benefits and limitations of each approach were assessed. From 3688 papers identified from the literature search, 44 reported the use of clinical codes to identify complications, from which four distinct methods were identified: 1) searching the index admission for specified clinical codes, 2) searching a sequence of admissions for specified clinical codes, 3) searching for specified clinical codes for complications from procedures and devices within the International Classification of Diseases 10th revision (ICD-10) coding scheme which is the methodology recommended by NHS Classification Service, and 4) conducting manual clinical
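
    In the spirit of the second method listed above (searching a sequence of admissions for specified clinical codes), the sketch below shows how such a check might look in code. The ICD-10 codes, dates, and follow-up window are illustrative assumptions, not values taken from the review.

        # Minimal sketch (toy data and codes): scan a patient's admissions for any
        # of a set of complication codes recorded within a follow-up window after
        # the index procedure.
        from datetime import date, timedelta

        complication_codes = {"T81.4", "T84.5"}    # illustrative ICD-10 codes only

        def has_complication(admissions, index_date, follow_up_days=90):
            """admissions: list of (admission_date, [codes]) for one patient."""
            window_end = index_date + timedelta(days=follow_up_days)
            for adm_date, codes in admissions:
                if index_date <= adm_date <= window_end and complication_codes & set(codes):
                    return True
            return False

        patient = [(date(2014, 1, 10), ["Z96.6"]), (date(2014, 2, 2), ["T84.5"])]
        print(has_complication(patient, index_date=date(2014, 1, 10)))   # True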

  19. Review and Comparison of the Search Effectiveness and User Interface of Three Major Online Chemical Databases

    Science.gov (United States)

    Bharti, Neelam; Leonard, Michelle; Singh, Shailendra

    2016-01-01

    Online chemical databases are the largest source of chemical information and, therefore, the main resource for retrieving results from published journals, books, patents, conference abstracts, and other relevant sources. Various commercial, as well as free, chemical databases are available. SciFinder, Reaxys, and Web of Science are three major…

  20. Uso de bases de datos bibliográficas por investigadores biomédicos latinoamericanos hispanoparlantes: estudio transversal The use of bibliographic databases by Spanish-speaking Latin American biomedical researchers: a cross-sectional study

    Directory of Open Access Journals (Sweden)

    Edgar Guillermo Ospina

    2005-04-01

    ... databases was similar in all of the countries studied, with no significant differences in the type of access (formal, informal or free) or the degree of skill. Of the total, 87% acknowledged having left important references out of published articles because the full text was not available, and 56% stated that they had cited articles they had not read. In addition, 7.6% of respondents acknowledged having consulted restricted-access databases using borrowed passwords or copied discs. More than two thirds of the authors reported that they obtained the full text of articles by photocopy or directly from the authors. CONCLUSIONS: Training Latin American researchers in the use of the most frequently used databases, especially MEDLINE, and improving their access to biomedical bibliographic sources are essential measures for fostering the development of scientific output in the Region. OBJECTIVE: To describe how Spanish-speaking biomedical professionals in Latin America access and utilize bibliographic databases. METHODS: Based on a MEDLINE search, 2 515 articles published between August 2002 and August 2003 were identified that dealt with and/or had authors from 16 countries: Argentina, Bolivia, Chile, Colombia, Costa Rica, Cuba, Ecuador, Guatemala, Honduras, Mexico, Nicaragua, Panama, Paraguay, Peru, Uruguay, and Venezuela. The search was limited to references to basic science, clinical science, or social medicine. A survey was sent by e-mail to researchers who lived in 15 of the 16 countries (the exception being Nicaragua). The survey asked about the researcher's area of work (basic science, clinical science, or public health), the level of skill in using databases, the frequency and type of access to the databases most utilized, the impact from not having access to the full text of articles when preparing a manuscript, and how the respondent usually obtained the full-text version of

  1. Specialist Bibliographic Databases.

    Science.gov (United States)

    Gasparyan, Armen Yuri; Yessirkepov, Marlen; Voronov, Alexander A; Trukhachev, Vladimir I; Kostyukova, Elena I; Gerasimov, Alexey N; Kitas, George D

    2016-05-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarizing with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find particularly useful source selection criteria and apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls.

  2. Specialist Bibliographic Databases

    Science.gov (United States)

    2016-01-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarizing with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find particularly useful source selection criteria and apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls. PMID:27134485

  3. Fast quantum search algorithm for databases of arbitrary size and its implementation in a cavity QED system

    International Nuclear Information System (INIS)

    Li, H.Y.; Wu, C.W.; Liu, W.T.; Chen, P.X.; Li, C.Z.

    2011-01-01

    We propose a method for implementing the Grover search algorithm directly in a database containing any number of items based on multi-level systems. Compared with the searching procedure in the database with qubits encoding, our modified algorithm needs fewer iteration steps to find the marked item and uses the carriers of the information more economically. Furthermore, we illustrate how to realize our idea in cavity QED using Zeeman's level structure of atoms. And the numerical simulation under the influence of the cavity and atom decays shows that the scheme could be achieved efficiently within current state-of-the-art technology. -- Highlights: ► A modified Grover algorithm is proposed for searching in an arbitrary dimensional Hilbert space. ► Our modified algorithm requires fewer iteration steps to find the marked item. ► The proposed method uses the carriers of the information more economically. ► A scheme for a six-item Grover search in cavity QED is proposed. ► Numerical simulation under decays shows that the scheme can be achieved with enough fidelity.
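
    For context on why fewer iterations matter, the textbook scaling of Grover's algorithm is easy to compute: amplifying one of N items takes on the order of (pi/4)·sqrt(N) iterations. The sketch below shows only this standard estimate; it is not the multi-level scheme proposed in the paper.

        # Minimal sketch of the standard Grover iteration-count estimate,
        # roughly floor((pi/4) * sqrt(N / M)) for M marked items out of N.
        import math

        def grover_iterations(n_items, n_marked=1):
            return math.floor((math.pi / 4) * math.sqrt(n_items / n_marked))

        for n in (6, 64, 1024):
            print(n, "items ->", grover_iterations(n), "iterations")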

  4. Supervised learning of tools for content-based search of image databases

    Science.gov (United States)

    Delanoy, Richard L.

    1996-03-01

    A computer environment, called the Toolkit for Image Mining (TIM), is being developed with the goal of enabling users with diverse interests and varied computer skills to create search tools for content-based image retrieval and other pattern matching tasks. Search tools are generated using a simple paradigm of supervised learning that is based on the user pointing at mistakes of classification made by the current search tool. As mistakes are identified, a learning algorithm uses the identified mistakes to build up a model of the user's intentions, construct a new search tool, apply the search tool to a test image, display the match results as feedback to the user, and accept new inputs from the user. Search tools are constructed in the form of functional templates, which are generalized matched filters capable of knowledge-based image processing. The ability of this system to learn the user's intentions from experience contrasts with other existing approaches to content-based image retrieval that base searches on the characteristics of a single input example or on a predefined and semantically constrained textual query. Currently, TIM is capable of learning spectral and textural patterns, but should be adaptable to the learning of shapes, as well. Possible applications of TIM include not only content-based image retrieval, but also quantitative image analysis, the generation of metadata for annotating images, data prioritization or data reduction in bandwidth-limited situations, and the construction of components for larger, more complex computer vision algorithms.

  5. Mascot search results - CREATE portal | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)


  6. Google Scholar Out-Performs Many Subscription Databases when Keyword Searching. A Review of: Walters, W. H. (2009). Google Scholar search performance: Comparative recall and precision. portal: Libraries and the Academy, 9(1), 5-24.

    Directory of Open Access Journals (Sweden)

    Giovanna Badia

    2010-09-01

    Full Text Available Objective – To compare the search performance (i.e., recall and precision) of Google Scholar with that of 11 other bibliographic databases when using a keyword search to find references on later-life migration. Design – Comparative database evaluation. Setting – Not stated in the article. It appears from the author's affiliation that this research took place in an academic institution of higher learning. Subjects – Twelve databases were compared: Google Scholar, Academic Search Elite, AgeLine, ArticleFirst, EconLit, Geobase, Medline, PAIS International, Popline, Social Sciences Abstracts, Social Sciences Citation Index, and SocIndex. Methods – The relevant literature on later-life migration was pre-identified as a set of 155 journal articles published from 1990 to 2000. The author selected these articles from database searches, citation tracking, journal scans, and consultations with social sciences colleagues. Each database was evaluated with regard to its performance in finding references to these 155 papers. 'Elderly' and 'migration' were the keywords used to conduct the searches in each of the 12 databases, since these were the words that were the most frequently used in the titles of the 155 relevant articles. The search was performed in the most basic search interface of each database that allowed limiting results by the needed publication dates (1990-2000). Search results were sorted by relevance when possible (for 9 out of the 12 databases), and by date when the relevance sorting option was not available. Recall and precision statistics were then calculated from the search results. Recall is the number of relevant results obtained in the database for a search topic, divided by all the potential results which can be obtained on that topic (in this case, 155 references). Precision is the number of relevant results obtained in the database for a search topic, divided by the total number of results that were obtained in the database on
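
    The two metrics defined above reduce to simple ratios. The sketch below computes them exactly as stated, against the fixed set of 155 relevant papers; the example counts are invented for illustration and are not the study's results.

        # Minimal sketch of the recall and precision definitions given above.
        def recall_precision(retrieved_relevant, total_relevant, total_retrieved):
            recall = retrieved_relevant / total_relevant
            precision = retrieved_relevant / total_retrieved if total_retrieved else 0.0
            return recall, precision

        # Illustrative numbers only.
        r, p = recall_precision(retrieved_relevant=60, total_relevant=155, total_retrieved=400)
        print(f"recall={r:.2f}, precision={p:.2f}")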

  7. Biomedical waste management in Ayurveda hospitals - current practices & future prospectives.

    Science.gov (United States)

    Rajan, Renju; Robin, Delvin T; M, Vandanarani

    2018-03-16

    Biomedical waste management is an integral part of traditional and contemporary systems of health care. The paper focuses on the identification and classification of biomedical wastes in Ayurvedic hospitals, current practices of their management in Ayurveda hospitals and future prospects. Databases like PubMed (1975-2017 Feb), Scopus (1960-2017), AYUSH Portal, DOAJ, DHARA and Google Scholar were searched. We used the medical subject headings 'biomedical waste' and 'health care waste' for identification and classification. The terms 'biomedical waste management' and 'health care waste management', alone and combined with 'Ayurveda' or 'Ayurvedic', were used for current practices and recent advances in the treatment of these wastes. We made a humble attempt to categorize the biomedical wastes from Ayurvedic hospitals as the available data on their grouping are very scarce. Proper biomedical waste management is the mainstay of hospital cleanliness, hospital hygiene and maintenance activities. Current disposal techniques adopted for Ayurveda biomedical wastes are sewage/drains, incineration and landfill; these methods have both merits and demerits. Our review has identified a number of interesting areas for future research such as the logical application of bioremediation techniques in biomedical waste management and the usage of effective micro-organisms and solar energy in waste disposal. Copyright © 2017 Transdisciplinary University, Bangalore and World Ayurveda Foundation. Published by Elsevier B.V. All rights reserved.

  8. Verification of Single-Peptide Protein Identifications by the Application of Complementary Database Search Algorithms

    National Research Council Canada - National Science Library

    Rohrbough, James G; Breci, Linda; Merchant, Nirav; Miller, Susan; Haynes, Paul A

    2005-01-01

    .... One such technique, known as the Multi-Dimensional Protein Identification Technique, or MudPIT, involves the use of computer search algorithms that automate the process of identifying proteins...

  9. SimShiftDB; local conformational restraints derived from chemical shift similarity searches on a large synthetic database

    International Nuclear Information System (INIS)

    Ginzinger, Simon W.; Coles, Murray

    2009-01-01

    We present SimShiftDB, a new program to extract conformational data from protein chemical shifts using structural alignments. The alignments are obtained in searches of a large database containing 13,000 structures and corresponding back-calculated chemical shifts. SimShiftDB makes use of chemical shift data to provide accurate results even in the case of low sequence similarity, and with even coverage of the conformational search space. We compare SimShiftDB to HHSearch, a state-of-the-art sequence-based search tool, and to TALOS, the current standard tool for the task. We show that for a significant fraction of the predicted similarities, SimShiftDB outperforms the other two methods. Particularly, the high coverage afforded by the larger database often allows predictions to be made for residues not involved in canonical secondary structure, where TALOS predictions are both less frequent and more error prone. Thus SimShiftDB can be seen as a complement to currently available methods

  10. SimShiftDB; local conformational restraints derived from chemical shift similarity searches on a large synthetic database

    Energy Technology Data Exchange (ETDEWEB)

    Ginzinger, Simon W. [Center of Applied Molecular Engineering, University of Salzburg, Department of Molecular Biology, Division of Bioinformatics (Austria)], E-mail: simon@came.sbg.ac.at; Coles, Murray [Max-Planck-Institute for Developmental Biology, Department of Protein Evolution (Germany)], E-mail: Murray.Coles@tuebingen.mpg.de

    2009-03-15

    We present SimShiftDB, a new program to extract conformational data from protein chemical shifts using structural alignments. The alignments are obtained in searches of a large database containing 13,000 structures and corresponding back-calculated chemical shifts. SimShiftDB makes use of chemical shift data to provide accurate results even in the case of low sequence similarity, and with even coverage of the conformational search space. We compare SimShiftDB to HHSearch, a state-of-the-art sequence-based search tool, and to TALOS, the current standard tool for the task. We show that for a significant fraction of the predicted similarities, SimShiftDB outperforms the other two methods. Particularly, the high coverage afforded by the larger database often allows predictions to be made for residues not involved in canonical secondary structure, where TALOS predictions are both less frequent and more error prone. Thus SimShiftDB can be seen as a complement to currently available methods.

  11. Content Based Retrieval Database Management System with Support for Similarity Searching and Query Refinement

    Science.gov (United States)

    2002-01-01

    to the OODBMS approach. The ORDBMS approach produced such research prototypes as Postgres [155], and Starburst [67] and commercial products such as...Kemnitz. The POSTGRES Next-Generation Database Management System. Communications of the ACM, 34(10):78–92, 1991. [156] Michael Stonebreaker and Dorothy

  12. Ariadne: a database search engine for identification and chemical analysis of RNA using tandem mass spectrometry data.

    Science.gov (United States)

    Nakayama, Hiroshi; Akiyama, Misaki; Taoka, Masato; Yamauchi, Yoshio; Nobe, Yuko; Ishikawa, Hideaki; Takahashi, Nobuhiro; Isobe, Toshiaki

    2009-04-01

    We present here a method to correlate tandem mass spectra of sample RNA nucleolytic fragments with an RNA nucleotide sequence in a DNA/RNA sequence database, thereby allowing tandem mass spectrometry (MS/MS)-based identification of RNA in biological samples. Ariadne, a unique web-based database search engine, identifies RNA by two probability-based evaluation steps of MS/MS data. In the first step, the software evaluates the matches between the masses of product ions generated by MS/MS of an RNase digest of sample RNA and those calculated from a candidate nucleotide sequence in a DNA/RNA sequence database, which then predicts the nucleotide sequences of these RNase fragments. In the second step, the candidate sequences are mapped for all RNA entries in the database, and each entry is scored for a function of occurrences of the candidate sequences to identify a particular RNA. Ariadne can also predict post-transcriptional modifications of RNA, such as methylation of nucleotide bases and/or ribose, by estimating mass shifts from the theoretical mass values. The method was validated with MS/MS data of RNase T1 digests of in vitro transcripts. It was applied successfully to identify an unknown RNA component in a tRNA mixture and to analyze post-transcriptional modification in yeast tRNA(Phe-1).
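
    The two-step matching idea described above can be illustrated with a deliberately simplified Python sketch: observed fragment masses are matched against the theoretical masses of an in-silico RNase T1 digest of each database entry, and entries are then scored by the number of matched fragments. The residue masses are rounded illustrative values, not the monoisotopic masses a real engine such as Ariadne would use, and the sequences are invented.

    # Illustrative sketch of matching RNase T1 fragments (cleavage after G) to sequence
    # database entries by mass; residue masses below are rounded values for illustration
    # only, not the exact monoisotopic values a real search engine would use.
    RESIDUE_MASS = {"A": 329.2, "C": 305.2, "G": 345.2, "U": 306.2}
    H2O = 18.0

    def t1_fragments(sequence):
        """Split an RNA sequence after every G, mimicking an RNase T1 digest."""
        frags, current = [], ""
        for base in sequence:
            current += base
            if base == "G":
                frags.append(current)
                current = ""
        if current:
            frags.append(current)
        return frags

    def fragment_mass(frag):
        return sum(RESIDUE_MASS[b] for b in frag) + H2O

    def score_entries(observed_masses, database, tol=0.5):
        """Step 1: match observed masses to an in-silico digest of each entry.
        Step 2: score each entry by the number of matched fragments."""
        scores = {}
        for entry_id, seq in database.items():
            masses = [fragment_mass(f) for f in t1_fragments(seq)]
            matched = sum(any(abs(m - om) <= tol for m in masses) for om in observed_masses)
            scores[entry_id] = matched
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    db = {"tRNA_x": "AUCGGAUCG", "tRNA_y": "CCAUGCGAU"}
    print(score_entries([1303.8, 363.2], db))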

  13. Information Retrieval Strategies of Millennial Undergraduate Students in Web and Library Database Searches

    Science.gov (United States)

    Porter, Brandi

    2009-01-01

    Millennial students make up a large portion of undergraduate students attending colleges and universities, and they have a variety of online resources available to them to complete academically related information searches, primarily Web based and library-based online information retrieval systems. The content, ease of use, and required search…

  14. Federated Search Tools in Fusion Centers: Bridging Databases in the Information Sharing Environment

    Science.gov (United States)

    2012-09-01

    Suspicious Activity Reporting Initiative ODNI Office of the Director of National Intelligence OSINT Open Source Intelligence PERF Police Executive...Fusion centers are encouraged to explore all available information sources to enhance the intelligence analysis process. It follows then that fusion...WSIC also utilizes ACCURINT, a web-based, subscription service. ACCURINT searches open source information and is able to collect and collate

  15. Combining history of medicine and library instruction: an innovative approach to teaching database searching to medical students.

    Science.gov (United States)

    Timm, Donna F; Jones, Dee; Woodson, Deidra; Cyrus, John W

    2012-01-01

    Library faculty members at the Health Sciences Library at the LSU Health Shreveport campus offer a database searching class for third-year medical students during their surgery rotation. For a number of years, students completed "ten-minute clinical challenges," but the instructors decided to replace the clinical challenges with innovative exercises using The Edwin Smith Surgical Papyrus to emphasize concepts learned. The Surgical Papyrus is an online resource that is part of the National Library of Medicine's "Turning the Pages" digital initiative. In addition, vintage surgical instruments and historic books are displayed in the classroom to enhance the learning experience.

  16. From Shakespeare to Star Trek and beyond: a Medline search for literary and other allusions in biomedical titles.

    Science.gov (United States)

    Goodman, Neville W

    2005-12-24

    To document biomedical paper titles containing literary and other allusions. Retrospective survey. Medline (1951 to mid-2005) through Dialog Datastar. Allusions to Shakespeare, Hans Christian Andersen, proverbs, the Bible, Lewis Carroll, and movie titles, corrected and scaled for five year periods 1950-4 to 2000-4. More than 1400 Shakespearean allusions exist, a third of them to "What's in a name" and another third to Hamlet-mostly to "To be or not to be." The trend of increasing use of allusive titles, identified from Shakespeare and Andersen, is paralleled by allusions to Carroll and proverbs; the trend of biblical allusions is also upward but is more erratic. Trends for newer allusions are also upwards, including the previously surveyed "paradigm shift." Allusive titles are likely to be to editorial or comment rather than to original research. The similar trends are presumably a mark of a particular learnt author behaviour. Newer allusions may be becoming more popular than older ones. Allusive titles can be unhelpful to reviewers and researchers, and many are now clichés. Whether they attract readers or citations is unknown, but better ways of gaining attention exist.

  17. Literature searches on Ayurveda: An update.

    Science.gov (United States)

    Aggithaya, Madhur G; Narahari, Saravu R

    2015-01-01

    The journals that publish on Ayurveda are increasingly indexed by popular medical databases in recent years. However, many Eastern journals are not indexed in biomedical journal databases such as PubMed. Literature searches for Ayurveda continue to be challenging due to the nonavailability of active, unbiased, dedicated databases for Ayurvedic literature. In 2010, the authors identified 46 databases that can be used for systematic searches of Ayurvedic papers and theses. This update reviewed our previous recommendation and identified current and relevant databases. The aim was to update the Ayurveda literature search strategy so as to retrieve the maximum number of publications. The authors used psoriasis as an example to search the previously listed databases and to identify new ones. The population, intervention, control, and outcome table included keywords related to psoriasis and Ayurvedic terminologies for skin diseases. Current citation update status, search results, and search options of the previous databases were assessed. Eight search strategies were developed. One hundred and five journals, both biomedical and Ayurveda, which publish on Ayurveda, were identified. Variability in databases was explored to identify bias in journal citation. Five among the 46 databases are now relevant - AYUSH research portal, Annotated Bibliography of Indian Medicine, Digital Helpline for Ayurveda Research Articles (DHARA), PubMed, and Directory of Open Access Journals. Search options in these databases are not uniform, and only PubMed allows a complex search strategy. "The Researches in Ayurveda" and "Ayurvedic Research Database" (ARD) are important grey resources for hand searching. About 44/105 (41.5%) journals publishing Ayurvedic studies are not indexed in any database. Only 11/105 (10.4%) exclusive Ayurveda journals are indexed in PubMed. AYUSH research portal and DHARA are two major portals after 2010. It is mandatory to search PubMed and the four other databases because all five carry citations from different groups of journals. The hand

  18. DOT Online Database

    Science.gov (United States)

    Online document database providing full-text search of Advisory Circulars and related document records; website provided by MicroSearch.

  19. Crescendo: A Protein Sequence Database Search Engine for Tandem Mass Spectra.

    Science.gov (United States)

    Wang, Jianqi; Zhang, Yajie; Yu, Yonghao

    2015-07-01

    A search engine that discovers more peptides reliably is essential to the progress of computational proteomics. We propose two new scoring functions (L- and P-scores), which aim to capture characteristics of a peptide-spectrum match (PSM) similar to those captured by Sequest and Comet. Crescendo, introduced here, is a software program that implements these two scores for peptide identification. We applied Crescendo to test datasets and compared its performance with widely used search engines, including Mascot, Sequest, and Comet. The results indicate that Crescendo identifies a similar or larger number of peptides at various predefined false discovery rates (FDR). Importantly, it also provides a better separation between true and decoy PSMs, warranting the future development of a companion post-processing filtering algorithm.
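
    The decoy-based FDR control mentioned above is commonly estimated as the ratio of decoy to target matches above a score threshold. A minimal, generic Python sketch of that idea follows (this is not Crescendo's code, and the PSM scores are invented).

    def fdr_at_threshold(psms, threshold):
        """Estimate FDR as (# decoy PSMs) / (# target PSMs) scoring at or above the threshold.
        `psms` is a list of (score, is_decoy) tuples from a concatenated target-decoy search."""
        targets = sum(1 for score, is_decoy in psms if score >= threshold and not is_decoy)
        decoys = sum(1 for score, is_decoy in psms if score >= threshold and is_decoy)
        return decoys / targets if targets else 0.0

    def score_cutoff_for_fdr(psms, max_fdr=0.01):
        """Find the lowest score threshold whose estimated FDR stays within max_fdr."""
        for score, _ in sorted(psms):                 # ascending scores
            if fdr_at_threshold(psms, score) <= max_fdr:
                return score
        return None

    psms = [(82.1, False), (75.4, False), (60.2, True), (58.9, False), (44.0, True)]
    print(score_cutoff_for_fdr(psms, max_fdr=0.34))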

  20. The Magnetics Information Consortium (MagIC) Online Database: Uploading, Searching and Visualizing Paleomagnetic and Rock Magnetic Data

    Science.gov (United States)

    Minnett, R.; Koppers, A.; Tauxe, L.; Constable, C.; Pisarevsky, S. A.; Jackson, M.; Solheid, P.; Banerjee, S.; Johnson, C.

    2006-12-01

    The Magnetics Information Consortium (MagIC) is commissioned to implement and maintain an online portal to a relational database populated by both rock and paleomagnetic data. The goal of MagIC is to archive all measurements and the derived properties for studies of paleomagnetic directions (inclination, declination) and intensities, and for rock magnetic experiments (hysteresis, remanence, susceptibility, anisotropy). MagIC is hosted under EarthRef.org at http://earthref.org/MAGIC/ and has two search nodes, one for paleomagnetism and one for rock magnetism. Both nodes provide query building based on location, reference, methods applied, material type and geological age, as well as a visual map interface to browse and select locations. The query result set is displayed in a digestible tabular format allowing the user to descend through hierarchical levels such as from locations to sites, samples, specimens, and measurements. At each stage, the result set can be saved and, if supported by the data, can be visualized by plotting global location maps, equal area plots, or typical Zijderveld, hysteresis, and various magnetization and remanence diagrams. User contributions to the MagIC database are critical to achieving a useful research tool. We have developed a standard data and metadata template (Version 2.1) that can be used to format and upload all data at the time of publication in Earth Science journals. Software tools are provided to facilitate population of these templates within Microsoft Excel. These tools allow for the import/export of text files and provide advanced functionality to manage and edit the data, and to perform various internal checks to maintain data integrity and prepare for uploading. The MagIC Contribution Wizard at http://earthref.org/MAGIC/upload.htm executes the upload and takes only a few minutes to process several thousand data records. The standardized MagIC template files are stored in the digital archives of EarthRef.org where they

  1. Heat pumps: Industrial applications. (Latest citations from the NTIS bibliographic database). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-04-01

    The bibliography contains citations concerning design, development, and applications of heat pumps for industrial processes. Included are thermal energy exchanges based on air-to-air, ground-coupled, air-to-water, and water-to-water systems. Specific applications include industrial process heat, drying, district heating, and waste processing plants. Other Published Searches in this series cover heat pump technology and economics, and heat pumps for residential and commercial applications. (Contains 50-250 citations and includes a subject term index and title list.) (Copyright NERAC, Inc. 1995)

  2. Heat pumps: Industrial applications. (Latest citations from the NTIS bibliographic database). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-01-01

    The bibliography contains citations concerning design, development, and applications of heat pumps for industrial processes. Included are thermal energy exchanges based on air-to-air, ground-coupled, air-to-water, and water-to-water systems. Specific applications include industrial process heat, drying, district heating, and waste processing plants. Other Published Searches in this series cover heat pump technology and economics, and heat pumps for residential and commercial applications. (Contains 50-250 citations and includes a subject term index and title list.) (Copyright NERAC, Inc. 1995)

  3. Using a Native XML Database for Encoded Archival Description Search and Retrieval

    Directory of Open Access Journals (Sweden)

    Alan Cornish

    2017-09-01

    Full Text Available This article is an attempt to develop Geographic Information Systems (GIS) technology into an analytical tool for examining the relationships between the height of the bookshelves and the behavior of library readers in utilizing books within a library. The tool would contain a database to store book-use information and some GIS maps to represent bookshelves. Upon analyzing the data stored in the database, different frequencies of book use across bookshelf layers are displayed on the maps. The tool would provide a wonderful means of visualization through which analysts can quickly realize the spatial distribution of books used in a library. This article reveals that readers tend to pull books out of the bookshelf layers that are easily reachable by human eyes and hands, and thus opens some issues for librarians to reconsider the management of library collections.

  4. Heart research advances using database search engines, Human Protein Atlas and the Sydney Heart Bank.

    Science.gov (United States)

    Li, Amy; Estigoy, Colleen; Raftery, Mark; Cameron, Darryl; Odeberg, Jacob; Pontén, Fredrik; Lal, Sean; Dos Remedios, Cristobal G

    2013-10-01

    This Methodological Review is intended as a guide for research students who may have just discovered a human "novel" cardiac protein, but it may also help hard-pressed reviewers of journal submissions on a "novel" protein reported in an animal model of human heart failure. Whether you are an expert or not, you may know little or nothing about this particular protein of interest. In this review we provide a strategic guide on how to proceed. We ask: How do you discover what has been published (even in an abstract or research report) about this protein? Everyone knows how to undertake literature searches using PubMed and Medline but these are usually encyclopaedic, often producing long lists of papers, most of which are either irrelevant or only vaguely relevant to your query. Relatively few will be aware of more advanced search engines such as Google Scholar and even fewer will know about Quertle. Next, we provide a strategy for discovering if your "novel" protein is expressed in the normal, healthy human heart, and if it is, we show you how to investigate its subcellular location. This can usually be achieved by visiting the website "Human Protein Atlas" without doing a single experiment. Finally, we provide a pathway to discovering if your protein of interest changes its expression level with heart failure/disease or with ageing. Crown Copyright © 2013. Published by Elsevier B.V. All rights reserved.

  5. Searching the databases: a quick look at Amazon and two other online catalogues.

    Science.gov (United States)

    Potts, Hilary

    2003-01-01

    The Amazon Online Catalogue was compared with the Library of Congress Catalogue and the British Library Catalogue, both also available online, by searching on both neutral (Gay, Lesbian, Homosexual) and pejorative (Perversion, Sex Crime) subject terms, and also by searches using Boolean logic in an attempt to identify Lesbian Fiction items and religion-based anti-gay material. Amazon was much more likely to be the first port of call for non-academic enquiries. Although excluding much material necessary for academic research, it carried more information about the individual books and less historical homophobic baggage in its terminology than the great national catalogues. Its back catalogue of second-hand books outnumbered those in print. Current attitudes may partially be gauged by the relative numbers of titles published under each heading--e.g., there may be an inverse relationship between concern about child sex abuse and homophobia, more noticeable in U.S. because of the activities of the religious right.

  6. Zirconia in biomedical applications.

    Science.gov (United States)

    Chen, Yen-Wei; Moussi, Joelle; Drury, Jeanie L; Wataha, John C

    2016-10-01

    The use of zirconia in medicine and dentistry has rapidly expanded over the past decade, driven by its advantageous physical, biological, esthetic, and corrosion properties. Zirconia orthopedic hip replacements have shown superior wear-resistance over other systems; however, risk of catastrophic fracture remains a concern. In dentistry, zirconia has been widely adopted for endosseous implants, implant abutments, and all-ceramic crowns. Because of an increasing demand for esthetically pleasing dental restorations, zirconia-based ceramic restorations have become one of the dominant restorative choices. Areas covered: This review provides an updated overview of the applications of zirconia in medicine and dentistry with a focus on dental applications. The MEDLINE electronic database (via PubMed) was searched, and relevant original and review articles from 2010 to 2016 were included. Expert commentary: Recent data suggest that zirconia performs favorably in both orthopedic and dental applications, but quality long-term clinical data remain scarce. Concerns about the effects of wear, crystalline degradation, crack propagation, and catastrophic fracture are still debated. The future of zirconia in biomedical applications will depend on the generation of these data to resolve concerns.

  7. High serum folate is associated with reduced biochemical recurrence after radical prostatectomy: Results from the SEARCH Database

    Directory of Open Access Journals (Sweden)

    Daniel M. Moreira

    2013-06-01

    Full Text Available Introduction To analyze the association between serum levels of folate and risk of biochemical recurrence after radical prostatectomy among men from the Shared Equal Access Regional Cancer Hospital (SEARCH) database. Materials and Methods Retrospective analysis of 135 subjects from the SEARCH database treated between 1991-2009 with available preoperative serum folate levels. Patients' characteristics at the time of the surgery were analyzed with rank-sum and linear regression. Uni- and multivariable analyses of folate levels (log-transformed) and time to biochemical recurrence were performed with Cox proportional hazards. Results The median preoperative folate level was 11.6 ng/mL (reference = 1.5-20.0 ng/mL). Folate levels were significantly lower among African-American men than Caucasians (P = 0.003). In univariable analysis, higher folate levels were associated with more recent year of surgery (P < 0.001) and lower preoperative PSA (P = 0.003). In univariable analysis, there was a trend towards lower risk of biochemical recurrence among men with high folate levels (HR = 0.61, 95%CI = 0.37-1.03, P = 0.064). After adjustment for patients' characteristics and pre- and post-operative clinical and pathological findings, higher serum levels of folate were independently associated with lower risk for biochemical recurrence (HR = 0.42, 95%CI = 0.20-0.89, P = 0.023). Conclusion In a cohort of men undergoing radical prostatectomy at several VAs across the country, higher serum folate levels were associated with lower PSA and lower risk for biochemical failure. While the source of the folate in the serum in this study is unknown (i.e., diet vs. supplement), these findings, if confirmed, suggest a potential role of folic acid supplementation or increased consumption of folate-rich foods to reduce the risk of recurrence.
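
    A Cox proportional hazards analysis of the kind described above can be sketched with the lifelines Python package; the data frame below is entirely hypothetical and serves only to show log-transformed folate entering the model alongside another covariate.

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    # Hypothetical cohort: time to biochemical recurrence (months), event indicator,
    # and two covariates, with serum folate log-transformed as in the abstract.
    df = pd.DataFrame({
        "months_to_bcr": [12, 48, 60, 24, 36, 72, 18, 54],
        "bcr":           [1,  0,  0,  1,  1,  0,  1,  0],
        "log_folate":    np.log([6.2, 14.1, 12.0, 9.0, 8.3, 7.5, 4.8, 11.6]),
        "preop_psa":     [9.5, 5.1, 6.3, 7.2, 8.8, 10.9, 16.0, 6.7],
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="months_to_bcr", event_col="bcr")
    cph.print_summary()   # a hazard ratio below 1 for log_folate would suggest lower recurrence risk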

  8. Uploading, Searching and Visualizing of Paleomagnetic and Rock Magnetic Data in the Online MagIC Database

    Science.gov (United States)

    Minnett, R.; Koppers, A.; Tauxe, L.; Constable, C.; Donadini, F.

    2007-12-01

    The Magnetics Information Consortium (MagIC) is commissioned to implement and maintain an online portal to a relational database populated by both rock and paleomagnetic data. The goal of MagIC is to archive all available measurements and derived properties from paleomagnetic studies of directions and intensities, and for rock magnetic experiments (hysteresis, remanence, susceptibility, anisotropy). MagIC is hosted under EarthRef.org at http://earthref.org/MAGIC/ and will soon implement two search nodes, one for paleomagnetism and one for rock magnetism. Currently the PMAG node is operational. Both nodes provide query building based on location, reference, methods applied, material type and geological age, as well as a visual map interface to browse and select locations. Users can also browse the database by data type or by data compilation to view all contributions associated with well known earlier collections like PINT, GMPDB or PSVRL. The query result set is displayed in a digestible tabular format allowing the user to descend from locations to sites, samples, specimens and measurements. At each stage, the result set can be saved and, where appropriate, can be visualized by plotting global location maps, equal area, XY, age, and depth plots, or typical Zijderveld, hysteresis, magnetization and remanence diagrams. User contributions to the MagIC database are critical to achieving a useful research tool. We have developed a standard data and metadata template (version 2.3) that can be used to format and upload all data at the time of publication in Earth Science journals. Software tools are provided to facilitate population of these templates within Microsoft Excel. These tools allow for the import/export of text files and provide advanced functionality to manage and edit the data, and to perform various internal checks to maintain data integrity and prepare for uploading. The MagIC Contribution Wizard at http://earthref.org/MAGIC/upload.htm executes the upload

  9. Students are Confident Using Federated Search Tools as much as Single Databases. A Review of: Armstrong, A. (2009). Student perceptions of federated searching vs. single database searching. Reference Services Review, 37(3), 291-303. doi:10.1108/00907320910982785

    Directory of Open Access Journals (Sweden)

    Deena Yanofsky

    2011-09-01

    Full Text Available Objective – To measure students’ perceptions of the ease-of-use and efficacy of a federated search tool versus a single multidisciplinary database. Design – An evaluation worksheet, employing a combination of quantitative and qualitative questions. Setting – A required, first-year English composition course taught at the University of Illinois at Chicago (UIC). Subjects – Thirty-one undergraduate students completed and submitted the worksheet. Methods – Students attended two library instruction sessions. The first session introduced participants to basic Boolean searching (using AND only), selecting appropriate keywords, and searching for books in the library catalogue. In the second library session, students were handed an evaluation worksheet and, with no introduction to the process of searching article databases, were asked to find relevant articles on a research topic of their own choosing using both a federated search tool and a single multidisciplinary database. The evaluation worksheet was divided into four sections: step-by-step instructions for accessing the single multidisciplinary database and the federated search tool; space to record search strings in both resources; space to record the titles of up to five relevant articles; and a series of quantitative and qualitative questions regarding ease-of-use, relevancy of results, overall preference (if any) between the two resources, likeliness of future use and other preferred research tools. Half of the participants received a worksheet with instructions to search the federated search tool before the single database; the order was reversed for the other half of the students. The evaluation worksheet was designed to be completed in one hour. Participant responses to qualitative questions were analyzed, codified and grouped into thematic categories. If a student mentioned more than one factor in responding to a question, their response was recorded in multiple categories. Main Results

  10. Accelerating Smith-Waterman Alignment for Protein Database Search Using Frequency Distance Filtration Scheme Based on CPU-GPU Collaborative System.

    Science.gov (United States)

    Liu, Yu; Hong, Yang; Lin, Chun-Yuan; Hung, Che-Lun

    2015-01-01

    The Smith-Waterman (SW) algorithm has been widely utilized for searching biological sequence databases in bioinformatics. Recently, several works have adopted the graphic card with Graphic Processing Units (GPUs) and their associated CUDA model to enhance the performance of SW computations. However, these works mainly focused on the protein database search by using the intertask parallelization technique, and only using the GPU capability to do the SW computations one by one. Hence, in this paper, we will propose an efficient SW alignment method, called CUDA-SWfr, for the protein database search by using the intratask parallelization technique based on a CPU-GPU collaborative system. Before doing the SW computations on GPU, a procedure is applied on CPU by using the frequency distance filtration scheme (FDFS) to eliminate the unnecessary alignments. The experimental results indicate that CUDA-SWfr runs 9.6 times and 96 times faster than the CPU-based SW method without and with FDFS, respectively.
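
    The two ingredients named above, a cheap frequency-distance prefilter followed by Smith-Waterman scoring, can be illustrated with a CPU-only Python sketch; this is not the CUDA implementation, and the filter threshold and the simple match/mismatch scoring are arbitrary choices for the example.

    from collections import Counter

    def frequency_distance(a, b):
        """Sum of absolute differences between residue-frequency vectors; a cheap
        filter used here to skip obviously dissimilar sequences before alignment."""
        ca, cb = Counter(a), Counter(b)
        return sum(abs(ca[r] - cb[r]) for r in set(ca) | set(cb))

    def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
        """Plain O(len(a)*len(b)) Smith-Waterman local alignment score."""
        rows, cols = len(a) + 1, len(b) + 1
        H = [[0] * cols for _ in range(rows)]
        best = 0
        for i in range(1, rows):
            for j in range(1, cols):
                diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
                H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
                best = max(best, H[i][j])
        return best

    def search(query, database, max_freq_distance=10):
        results = []
        for name, seq in database.items():
            if frequency_distance(query, seq) > max_freq_distance:
                continue                      # filtered out before the expensive alignment
            results.append((name, smith_waterman(query, seq)))
        return sorted(results, key=lambda r: r[1], reverse=True)

    db = {"p1": "MKTAYIAKQR", "p2": "MKTAYLAKQR", "p3": "GGGGGGGGGG"}
    print(search("MKTAYIAKQ", db))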

  11. Native Health Research Database

    Science.gov (United States)

    Welcome page for the Native Health Database (NHD), a searchable database of Native health research offering basic and advanced search, with an accompanying tutorial video.

  12. [Biomedical informatics].

    Science.gov (United States)

    Capurro, Daniel; Soto, Mauricio; Vivent, Macarena; Lopetegui, Marcelo; Herskovic, Jorge R

    2011-12-01

    Biomedical Informatics is a new discipline that arose from the need to incorporate information technologies to the generation, storage, distribution and analysis of information in the domain of biomedical sciences. This discipline comprises basic biomedical informatics, and public health informatics. The development of the discipline in Chile has been modest and most projects have originated from the interest of individual people or institutions, without a systematic and coordinated national development. Considering the unique features of health care system of our country, research in the area of biomedical informatics is becoming an imperative.

  13. Automatic sorting of toxicological information into the IUCLID (International Uniform Chemical Information Database) endpoint-categories making use of the semantic search engine Go3R.

    Science.gov (United States)

    Sauer, Ursula G; Wächter, Thomas; Hareng, Lars; Wareing, Britta; Langsch, Angelika; Zschunke, Matthias; Alvers, Michael R; Landsiedel, Robert

    2014-06-01

    The knowledge-based search engine Go3R, www.Go3R.org, has been developed to assist scientists from industry and regulatory authorities in collecting comprehensive toxicological information with a special focus on identifying available alternatives to animal testing. The semantic search paradigm of Go3R makes use of expert knowledge on 3Rs methods and regulatory toxicology, laid down in the ontology, a network of concepts, terms, and synonyms, to recognize the contents of documents. Search results are automatically sorted into a dynamic table of contents presented alongside the list of documents retrieved. This table of contents allows the user to quickly filter the set of documents by topics of interest. Documents containing hazard information are automatically assigned to a user interface following the endpoint-specific IUCLID5 categorization scheme required, e.g. for REACH registration dossiers. For this purpose, complex endpoint-specific search queries were compiled and integrated into the search engine (based upon a gold standard of 310 references that had been assigned manually to the different endpoint categories). Go3R sorts 87% of the references concordantly into the respective IUCLID5 categories. Currently, Go3R searches in the 22 million documents available in the PubMed and TOXNET databases. However, it can be customized to search in other databases including in-house databanks. Copyright © 2013 Elsevier Ltd. All rights reserved.
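
    The automatic sorting of references into endpoint categories can be caricatured with a toy Python sketch in which each category is defined by a term list and a document is assigned to every category whose terms it mentions; the categories and terms below are invented and are not the IUCLID5 scheme or the Go3R ontology.

    # Toy version of sorting references into endpoint categories with per-category
    # term lists; the categories and terms here are illustrative only.
    ENDPOINT_TERMS = {
        "acute_toxicity": {"ld50", "acute oral", "acute dermal"},
        "skin_sensitisation": {"llna", "sensitisation", "sensitization"},
        "genotoxicity": {"ames", "micronucleus", "chromosome aberration"},
    }

    def sort_into_categories(documents):
        """Assign each document (id -> abstract text) to every category whose terms it mentions."""
        assignments = {}
        for doc_id, text in documents.items():
            lowered = text.lower()
            hits = [cat for cat, terms in ENDPOINT_TERMS.items() if any(t in lowered for t in terms)]
            assignments[doc_id] = hits or ["uncategorised"]
        return assignments

    docs = {"d1": "An LLNA study of skin sensitisation potential ...",
            "d2": "Ames test and in vitro micronucleus assay results ..."}
    print(sort_into_categories(docs))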

  14. Antibiotic distribution channels in Thailand: results of key-informant interviews, reviews of drug regulations and database searches.

    Science.gov (United States)

    Sommanustweechai, Angkana; Chanvatik, Sunicha; Sermsinsiri, Varavoot; Sivilaikul, Somsajee; Patcharanarumol, Walaiporn; Yeung, Shunmay; Tangcharoensathien, Viroj

    2018-02-01

    To analyse how antibiotics are imported, manufactured, distributed and regulated in Thailand. We gathered information, on antibiotic distribution in Thailand, in in-depth interviews - with 43 key informants from farms, health facilities, pharmaceutical and animal feed industries, private pharmacies and regulators- and in database and literature searches. In 2016-2017, licensed antibiotic distribution in Thailand involves over 700 importers and about 24 000 distributors - e.g. retail pharmacies and wholesalers. Thailand imports antibiotics and active pharmaceutical ingredients. There is no system for monitoring the distribution of active ingredients, some of which are used directly on farms, without being processed. Most antibiotics can be bought from pharmacies, for home or farm use, without a prescription. Although the 1987 Drug Act classified most antibiotics as "dangerous drugs", it only classified a few of them as prescription-only medicines and placed no restrictions on the quantities of antibiotics that could be sold to any individual. Pharmacists working in pharmacies are covered by some of the Act's regulations, but the quality of their dispensing and prescribing appears to be largely reliant on their competences. In Thailand, most antibiotics are easily and widely available from retail pharmacies, without a prescription. If the inappropriate use of active pharmaceutical ingredients and antibiotics is to be reduced, we need to reclassify and restrict access to certain antibiotics and to develop systems to audit the dispensing of antibiotics in the retail sector and track the movements of active ingredients.

  15. First postoperative PSA is associated with outcomes in patients with node positive prostate cancer: Results from the SEARCH database.

    Science.gov (United States)

    McDonald, Michelle L; Howard, Lauren E; Aronson, William J; Terris, Martha K; Cooperberg, Matthew R; Amling, Christopher L; Freedland, Stephen J; Kane, Christopher J

    2018-05-01

    To analyze factors associated with metastases, prostate cancer-specific mortality, and all-cause mortality in pN1 patients. We analyzed 3,642 radical prostatectomy patients within the Shared Equal Access Regional Cancer Hospital (SEARCH) database. Pathologic Gleason grade, number of lymph nodes (LN) removed, and first postoperative prostate-specific antigen (PSA; <0.2 vs. ≥0.2 ng/ml) were analyzed. Of 3,642 patients, 124 (3.4%) had pN1. There were 71 (60%) patients with 1 positive LN, 32 (27%) with 2 positive LNs, and 15 (13%) with ≥3. Among men with pN1, first postoperative PSA ≥0.2 ng/ml (P = 0.005) was associated with metastases. First postoperative PSA ≥0.2 ng/ml was associated with metastasis on multivariable analysis (P = 0.046). Log-rank analysis revealed more favorable metastases-free survival in patients with a first postoperative PSA <0.2 ng/ml; patients with a first postoperative PSA ≥0.2 ng/ml were more likely to develop metastases. First postoperative PSA may be useful in identifying pN1 patients who harbor distant disease and aid in secondary treatment decisions. Copyright © 2018 Elsevier Inc. All rights reserved.

  16. Preference vs. Authority: A Comparison of Student Searching in a Subject-Specific Indexing and Abstracting Database and a Customized Discovery Layer

    Science.gov (United States)

    Dahlen, Sarah P. C.; Hanson, Kathlene

    2017-01-01

    Discovery layers provide a simplified interface for searching library resources. Libraries with limited finances make decisions about retaining indexing and abstracting databases when similar information is available in discovery layers. These decisions should be informed by student success at finding quality information as well as satisfaction…

  17. Evidential significance of automotive paint trace evidence using a pattern recognition based infrared library search engine for the Paint Data Query Forensic Database.

    Science.gov (United States)

    Lavine, Barry K; White, Collin G; Allen, Matthew D; Fasasi, Ayuba; Weakley, Andrew

    2016-10-01

    A prototype library search engine has been further developed to search the infrared spectral libraries of the paint data query database to identify the line and model of a vehicle from the clear coat, surfacer-primer, and e-coat layers of an intact paint chip. For this study, search prefilters were developed from 1181 automotive paint systems spanning 3 manufacturers: General Motors, Chrysler, and Ford. The best match between each unknown and the spectra in the hit list generated by the search prefilters was identified using a cross-correlation library search algorithm that performed both a forward and backward search. In the forward search, spectra were divided into intervals and further subdivided into windows (which corresponds to the time lag for the comparison) within those intervals. The top five hits identified in each search window were compiled; a histogram was computed that summarized the frequency of occurrence for each library sample, with the IR spectra most similar to the unknown flagged. The backward search computed the frequency and occurrence of each line and model without regard to the identity of the individual spectra. Only those lines and models with a frequency of occurrence greater than or equal to 20% were included in the final hit list. If there was agreement between the forward and backward search results, the specific line and model common to both hit lists was always the correct assignment. Samples assigned to the same line and model by both searches are always well represented in the library and correlate well on an individual basis to specific library samples. For these samples, one can have confidence in the accuracy of the match. This was not the case for the results obtained using commercial library search algorithms, as the hit quality index scores for the top twenty hits were always greater than 99%. Copyright © 2016 Elsevier B.V. All rights reserved.
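
    A drastically simplified stand-in for the forward search described above is sketched below in Python: each window of the query spectrum votes for its best-correlated library entries, and the most frequently flagged entries are returned. The window size, top-k voting and synthetic spectra are illustrative only.

    import numpy as np
    from collections import Counter

    def windowed_vote_search(query, library, window=50, top_k=5):
        """Split the query spectrum into windows, correlate each window against every
        library spectrum, and tally which library entries appear among the top_k hits.
        `library` maps an entry id to a spectrum sampled on the same wavenumber grid."""
        votes = Counter()
        for start in range(0, len(query) - window + 1, window):
            segment = query[start:start + window]
            scores = {}
            for entry_id, spectrum in library.items():
                ref = spectrum[start:start + window]
                scores[entry_id] = np.corrcoef(segment, ref)[0, 1]
            for entry_id, _ in sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]:
                votes[entry_id] += 1
        return votes.most_common()

    rng = np.random.default_rng(0)
    base = rng.normal(size=300)
    library = {"model_A": base + rng.normal(scale=0.05, size=300),
               "model_B": rng.normal(size=300),
               "model_C": rng.normal(size=300)}
    print(windowed_vote_search(base, library, window=100, top_k=1))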

  18. Database searching and accounting of multiplexed precursor and product ion spectra from the data independent analysis of simple and complex peptide mixtures.

    Science.gov (United States)

    Li, Guo-Zhong; Vissers, Johannes P C; Silva, Jeffrey C; Golick, Dan; Gorenstein, Marc V; Geromanos, Scott J

    2009-03-01

    A novel database search algorithm is presented for the qualitative identification of proteins over a wide dynamic range, both in simple and complex biological samples. The algorithm has been designed for the analysis of data originating from data independent acquisitions, whereby multiple precursor ions are fragmented simultaneously. Measurements used by the algorithm include retention time, ion intensities, charge state, and accurate masses on both precursor and product ions from LC-MS data. The search algorithm uses an iterative process whereby each iteration incrementally increases the selectivity, specificity, and sensitivity of the overall strategy. Increased specificity is obtained by utilizing a subset database search approach, whereby for each subsequent stage of the search, only those peptides from securely identified proteins are queried. Tentative peptide and protein identifications are ranked and scored by their relative correlation to a number of models of known and empirically derived physicochemical attributes of proteins and peptides. In addition, the algorithm utilizes decoy database techniques for automatically determining the false positive identification rates. The search algorithm has been tested by comparing the search results from a four-protein mixture, the same four-protein mixture spiked into a complex biological background, and a variety of other "system" type protein digest mixtures. The method was validated independently by data dependent methods, while concurrently relying on replication and selectivity. Comparisons were also performed with other commercially and publicly available peptide fragmentation search algorithms. The presented results demonstrate the ability to correctly identify peptides and proteins from data independent acquisition strategies with high sensitivity and specificity. They also illustrate a more comprehensive analysis of the samples studied; providing approximately 20% more protein identifications, compared to

  19. Millennial Students’ Online Search Strategies are Associated With Their Mental Models of Search. A Review of: Holman, L. (2011). Millennial students’ mental models of search: Implications for academic librarians and database developers. Journal of Academic Librarianship, 37(1), 19-27. doi:10.1016/j.acalib.2010.10.003

    Directory of Open Access Journals (Sweden)

    Leslie Bussert

    2011-09-01

    Full Text Available Objective – To examine first-year college students’ information seeking behaviours and determine whether their mental models of the search process influence their ability to effectively search for and find scholarly materials. Design – Mixed methods including contextual inquiry, concept mapping, observation, and interviews. Setting – University of Baltimore, a public institution in Maryland, United States of America, offering undergraduate, graduate, and professional degrees. Subjects – A total of 21 first-year undergraduate students, ages 16 to 19 years, undertaking research assignments for which they chose to use online resources. Methods – First-year students were recruited in the fall of 2008 and met with the researcher in a university usability lab for about one hour over a three week period. The researcher observed and videotaped the students as they conducted research in their chosen search engines or article databases. The searches were captured using software, and students were encouraged to think aloud about their research process, search strategies, and anticipated search results. Observation sessions concluded with a 10-question interview incorporating a review of the keywords the student used, the student’s reflection on the success of his or her searches, and possible alternate keywords. The interview also offered prompts to help the researcher learn about students’ conceptualizations of search tools’ utilization of keywords to generate results. The researcher then asked the students to provide a visual diagram of the relationship between their search terms and the items retrieved in the search tool. Data were analyzed by identifying the 21 different search tools used by the students and categorizing all 210 searches and student diagrams for further analysis. A scheme similar to Guinee, Eagleton, and Hall’s (2003) characterized the student searches into four categories: simple single-term searches, topic plus focus

  20. Successful aging: considering non-biomedical constructs

    Directory of Open Access Journals (Sweden)

    Carver LF

    2016-11-01

    Full Text Available Lisa F Carver (Department of Sociology, Queen’s University, Kingston, ON, Canada); Diane Buchanan (School of Nursing, Queen’s University, Kingston, ON, Canada). Objectives: Successful aging continues to be applied in a variety of contexts and is defined using a number of different constructs. Although previous reviews highlight the multidimensionality of successful aging, few have focused exclusively on non-biomedical factors, as was done here. Methods: This scoping review searched the Ovid Medline database for peer-reviewed English-language articles published between 2006 and 2015, offering a model of successful aging and involving research with older adults. Results: Seventy-two articles were reviewed. Thirty-five articles met the inclusion criteria. Common non-biomedical constructs associated with successful aging included engagement, optimism and/or positive attitude, resilience, spirituality and/or religiosity, self-efficacy and/or self-esteem, and gerotranscendence. Discussion: Successful aging is a complex process best described using a multidimensional model. Given that the majority of elders will experience illness and/or disease during the life course, public health initiatives that promote successful aging need to employ non-biomedical constructs, facilitating the inclusion of elders living with disease and/or disability. Keywords: successful aging, resilience, gerotranscendence, engagement, optimism

  1. Biomedical photonics handbook biomedical diagnostics

    CERN Document Server

    Vo-Dinh, Tuan

    2014-01-01

    Shaped by Quantum Theory, Technology, and the Genomics RevolutionThe integration of photonics, electronics, biomaterials, and nanotechnology holds great promise for the future of medicine. This topic has recently experienced an explosive growth due to the noninvasive or minimally invasive nature and the cost-effectiveness of photonic modalities in medical diagnostics and therapy. The second edition of the Biomedical Photonics Handbook presents fundamental developments as well as important applications of biomedical photonics of interest to scientists, engineers, manufacturers, teachers, studen

  2. RNA FRABASE 2.0: an advanced web-accessible database with the capacity to search the three-dimensional fragments within RNA structures

    Directory of Open Access Journals (Sweden)

    Wasik Szymon

    2010-05-01

    Full Text Available Abstract Background Recent discoveries concerning novel functions of RNA, such as RNA interference, have contributed towards the growing importance of the field. In this respect, a deeper knowledge of complex three-dimensional RNA structures is essential to understand their new biological functions. A number of bioinformatic tools have been proposed to explore two major structural databases (PDB, NDB) in order to analyze various aspects of RNA tertiary structures. One of these tools is RNA FRABASE 1.0, the first web-accessible database with an engine for automatic search of 3D fragments within PDB-derived RNA structures. This search is based upon the user-defined RNA secondary structure pattern. In this paper, we present and discuss RNA FRABASE 2.0. This second version of the system represents a major extension of this tool in terms of providing new data and a wide spectrum of novel functionalities. An intuitively operated web server platform enables very fast user-tailored search of three-dimensional RNA fragments, their multi-parameter conformational analysis and visualization. Description RNA FRABASE 2.0 has stored information on 1565 PDB-deposited RNA structures, including all NMR models. The RNA FRABASE 2.0 search engine algorithms operate on the database of the RNA sequences and the new library of RNA secondary structures, coded in the dot-bracket format extended to hold multi-stranded structures and to cover residues whose coordinates are missing in the PDB files. The library of RNA secondary structures (and their graphics) is made available. A high level of efficiency of the 3D search has been achieved by introducing novel tools to formulate advanced searching patterns and to screen highly populated tertiary structure elements. RNA FRABASE 2.0 also stores data and conformational parameters in order to provide "on the spot" structural filters to explore the three-dimensional RNA structures. An instant visualization of the 3D RNA
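
    The kind of secondary-structure pattern search described above can be illustrated with a small Python sketch that looks for a user-defined dot-bracket pattern inside stored dot-bracket strings; the mini database and the single-character wildcard convention are invented for the example.

    import re

    def find_structure_fragments(pattern, database, wildcard="*"):
        """Find occurrences of a dot-bracket pattern in stored secondary structures.
        `database` maps an entry id to (sequence, dot_bracket); `wildcard` in the
        pattern matches any single structure symbol. Returns (id, start, sequence slice)."""
        regex = "".join("." if ch == wildcard else re.escape(ch) for ch in pattern)
        hits = []
        for entry_id, (sequence, dot_bracket) in database.items():
            for m in re.finditer(regex, dot_bracket):
                hits.append((entry_id, m.start(), sequence[m.start():m.end()]))
        return hits

    db = {"frag_1": ("GCGGAUUUAGCUC", "(((((..(((..."),
          "toy_hairpin": ("GGGAAAUCCC", "(((....)))")}
    print(find_structure_fragments("(((....)))", db))
    print(find_structure_fragments("((((*", db))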

  3. Biomedical nanotechnology.

    Science.gov (United States)

    Hurst, Sarah J

    2011-01-01

    This chapter summarizes the roles of nanomaterials in biomedical applications, focusing on those highlighted in this volume. A brief history of nanoscience and technology and a general introduction to the field are presented. Then, the chemical and physical properties of nanostructures that make them ideal for use in biomedical applications are highlighted. Examples of common applications, including sensing, imaging, and therapeutics, are given. Finally, the challenges associated with translating this field from the research laboratory to the clinic setting, in terms of the larger societal implications, are discussed.

  4. Accelerating Smith-Waterman Alignment for Protein Database Search Using Frequency Distance Filtration Scheme Based on CPU-GPU Collaborative System

    Directory of Open Access Journals (Sweden)

    Yu Liu

    2015-01-01

    Full Text Available The Smith-Waterman (SW) algorithm has been widely utilized for searching biological sequence databases in bioinformatics. Recently, several works have adopted the graphic card with Graphic Processing Units (GPUs) and their associated CUDA model to enhance the performance of SW computations. However, these works mainly focused on the protein database search by using the intertask parallelization technique, and only using the GPU capability to do the SW computations one by one. Hence, in this paper, we will propose an efficient SW alignment method, called CUDA-SWfr, for the protein database search by using the intratask parallelization technique based on a CPU-GPU collaborative system. Before doing the SW computations on GPU, a procedure is applied on CPU by using the frequency distance filtration scheme (FDFS) to eliminate the unnecessary alignments. The experimental results indicate that CUDA-SWfr runs 9.6 times and 96 times faster than the CPU-based SW method without and with FDFS, respectively.

  5. A comparison of three design tree based search algorithms for the detection of engineering parts constructed with CATIA V5 in large databases

    Directory of Open Access Journals (Sweden)

    Robin Roj

    2014-07-01

    Full Text Available This paper presents three different search engines for the detection of CAD-parts in large databases. The analysis of the contained information is performed by exporting the data that is stored in the structure trees of the CAD-models. A preparation program generates one XML-file for every model which, in addition to the data of the structure tree, also contains certain physical properties of each part. The first search engine specializes in the discovery of standard parts, like screws or washers. The second program uses certain user input as search parameters, and therefore has the ability to perform personalized queries. The third one compares one given reference part with all parts in the database, and locates files that are identical or similar to the reference part. All approaches run automatically, and have the analysis of the structure tree in common. Files constructed with CATIA V5, and search engines written with Python, have been used for the implementation. The paper also includes a short comparison of the advantages and disadvantages of each program, as well as a performance test.
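
    A minimal Python sketch of the structure-tree comparison idea is given below; the XML layout is invented (the abstract does not specify the exporter's schema), and parts are compared simply by the overlap of their (tag, name) node multisets.

    import xml.etree.ElementTree as ET
    from collections import Counter

    def tree_signature(xml_text):
        """Multiset of (tag, name) pairs over the whole structure tree."""
        root = ET.fromstring(xml_text)
        return Counter((el.tag, el.get("name", "")) for el in root.iter())

    def similarity(ref_xml, other_xml):
        """Jaccard-style overlap of the two tree signatures (1.0 = identical multisets)."""
        a, b = tree_signature(ref_xml), tree_signature(other_xml)
        overlap = sum((a & b).values())
        total = sum((a | b).values())
        return overlap / total if total else 0.0

    ref = "<part name='bracket'><pad name='base'/><hole name='m6'/><hole name='m6b'/></part>"
    candidates = {
        "bracket_v2": "<part name='bracket'><pad name='base'/><hole name='m6'/></part>",
        "washer":     "<part name='washer'><pad name='disc'/><hole name='m6'/></part>",
    }
    for name, xml_text in candidates.items():
        print(name, round(similarity(ref, xml_text), 2))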

  6. Biomedical Engineering

    CERN Document Server

    Suh, Sang C; Tanik, Murat M

    2011-01-01

    Biomedical Engineering: Health Care Systems, Technology and Techniques is an edited volume with contributions from world experts. It provides readers with unique contributions related to current research and future healthcare systems. Practitioners and researchers focused on computer science, bioinformatics, engineering and medicine will find this book a valuable reference.

  7. CUDASW++2.0: enhanced Smith-Waterman protein database search on CUDA-enabled GPUs based on SIMT and virtualized SIMD abstractions

    Directory of Open Access Journals (Sweden)

    Schmidt Bertil

    2010-04-01

    Full Text Available Abstract Background Due to its high sensitivity, the Smith-Waterman algorithm is widely used for biological database searches. Unfortunately, the quadratic time complexity of this algorithm makes it highly time-consuming. The exponential growth of biological databases further deteriorates the situation. To accelerate this algorithm, many efforts have been made to develop techniques in high performance architectures, especially the recently emerging many-core architectures and their associated programming models. Findings This paper describes the latest release of the CUDASW++ software, CUDASW++ 2.0, which makes new contributions to Smith-Waterman protein database searches using compute unified device architecture (CUDA). A parallel Smith-Waterman algorithm is proposed to further optimize the performance of CUDASW++ 1.0 based on the single instruction, multiple thread (SIMT) abstraction. For the first time, we have investigated a partitioned vectorized Smith-Waterman algorithm using CUDA based on the virtualized single instruction, multiple data (SIMD) abstraction. The optimized SIMT and the partitioned vectorized algorithms were benchmarked, and remarkably, have similar performance characteristics. CUDASW++ 2.0 achieves performance improvement over CUDASW++ 1.0 as much as 1.74 (1.72) times using the optimized SIMT algorithm and up to 1.77 (1.66) times using the partitioned vectorized algorithm, with a performance of up to 17 (30) billion cell updates per second (GCUPS) on a single-GPU GeForce GTX 280 (dual-GPU GeForce GTX 295) graphics card. Conclusions CUDASW++ 2.0 is publicly available open-source software, written in CUDA and C++ programming languages. It obtains significant performance improvement over CUDASW++ 1.0 using either the optimized SIMT algorithm or the partitioned vectorized algorithm for Smith-Waterman protein database searches by fully exploiting the compute capability of commonly used CUDA-enabled low-cost GPUs.

  8. CAZymes Analysis Toolkit (CAT): web service for searching and analyzing carbohydrate-active enzymes in a newly sequenced organism using CAZy database.

    Science.gov (United States)

    Park, Byung H; Karpinets, Tatiana V; Syed, Mustafa H; Leuze, Michael R; Uberbacher, Edward C

    2010-12-01

    The Carbohydrate-Active Enzyme (CAZy) database provides a rich set of manually annotated enzymes that degrade, modify, or create glycosidic bonds. Despite rich and invaluable information stored in the database, software tools utilizing this information for annotation of newly sequenced genomes by CAZy families are limited. We have employed two annotation approaches to fill the gap between manually curated high-quality protein sequences collected in the CAZy database and the growing number of other protein sequences produced by genome or metagenome sequencing projects. The first approach is based on a similarity search against the entire nonredundant sequences of the CAZy database. The second approach performs annotation using links or correspondences between the CAZy families and protein family domains. The links were discovered using the association rule learning algorithm applied to sequences from the CAZy database. The approaches complement each other and in combination achieved high specificity and sensitivity when cross-evaluated with the manually curated genomes of Clostridium thermocellum ATCC 27405 and Saccharophagus degradans 2-40. The capability of the proposed framework to predict the function of unknown protein domains and of hypothetical proteins in the genome of Neurospora crassa is demonstrated. The framework is implemented as a Web service, the CAZymes Analysis Toolkit, and is available at http://cricket.ornl.gov/cgi-bin/cat.cgi.
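
    The second annotation route described above, mapping protein family domains to CAZy families, can be illustrated with a toy Python sketch; the domain-to-family table and the protein domain assignments are invented, whereas the real toolkit derives such links with association rule learning over curated CAZy sequences.

    # Hypothetical Pfam-domain -> CAZy-family correspondences; in the real toolkit these
    # links are learned from curated CAZy data, not hard-coded.
    DOMAIN_TO_CAZY = {
        "PF00150": "GH5",     # illustrative pairing only
        "PF00722": "GH16",
        "PF01915": "GH3",
    }

    def annotate(proteins):
        """Assign CAZy families to proteins from their predicted domains.
        `proteins` maps a protein id to a list of Pfam domain accessions."""
        annotations = {}
        for protein_id, domains in proteins.items():
            families = sorted({DOMAIN_TO_CAZY[d] for d in domains if d in DOMAIN_TO_CAZY})
            annotations[protein_id] = families or ["unassigned"]
        return annotations

    proteins = {"prot_001": ["PF00150", "PF13472"], "prot_002": ["PF01915"], "prot_003": ["PF07690"]}
    print(annotate(proteins))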

  9. Searching the Literatura Latino Americana e do Caribe em Ciências da Saúde (LILACS) database improves systematic reviews.

    Science.gov (United States)

    Clark, Otavio Augusto Camara; Castro, Aldemar Araujo

    2002-02-01

    An unbiased systematic review (SR) should analyse as many articles as possible in order to provide the best evidence available. However, many SR use only databases with high English-language content as sources for articles. Literatura Latino Americana e do Caribe em Ciências da Saúde (LILACS) indexes 670 journals from the Latin American and Caribbean health literature but is seldom used in these SR. Our objective is to evaluate if LILACS should be used as a routine source of articles for SR. First we identified SR published in 1997 in five medical journals with a high impact factor. Then we searched LILACS for articles that could match the inclusion criteria of these SR. We also checked if the authors had already identified these articles located in LILACS. In all, 64 SR were identified. Two had already searched LILACS and were excluded. In 39 of 62 (63%) SR a LILACS search identified articles that matched the inclusion criteria. In 5 (8%) our search was inconclusive and in 18 (29%) no articles were found in LILACS. Therefore, in 71% (44/62) of cases, a LILACS search could have been useful to the authors. This proportion remains the same if we consider only the 37 SR that performed a meta-analysis. In only one case had the article identified in LILACS already been located elsewhere by the authors' strategy. LILACS is an under-explored and unique source of articles whose use can improve the quality of systematic reviews. This database should be used as a routine source to identify studies for systematic reviews.

  10. PubFocus: semantic MEDLINE/PubMed citations analytics through integration of controlled biomedical dictionaries and ranking algorithm

    Directory of Open Access Journals (Sweden)

    Chuong Cheng-Ming

    2006-10-01

    Full Text Available Abstract Background Understanding research activity within any given biomedical field is important. Search outputs generated by MEDLINE/PubMed are not well classified and require lengthy manual citation analysis. Automation of citation analytics can be very useful and timesaving for both novices and experts. Results PubFocus web server automates analysis of MEDLINE/PubMed search queries by enriching them with two widely used human factor-based bibliometric indicators of publication quality: journal impact factor and volume of forward references. In addition to providing basic volumetric statistics, PubFocus also prioritizes citations and evaluates authors' impact on the field of search. PubFocus also analyses the presence and occurrence of biomedical key terms within citations by utilizing controlled vocabularies. Conclusion We have developed a citation prioritisation algorithm based on journal impact factor, forward referencing volume, referencing dynamics, and author's contribution level. It can be applied either to the primary set of PubMed search results or to subsets of these results identified through key terms from controlled biomedical vocabularies and ontologies. The NCI (National Cancer Institute) thesaurus and MGD (Mouse Genome Database) mammalian gene orthology have been implemented for key terms analytics. PubFocus provides a scalable platform for the integration of multiple available ontology databases. PubFocus analytics can be adapted for input sources of biomedical citations other than PubMed.
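
    A minimal stand-in for the prioritisation described above is sketched below in Python, ranking citations by a weighted sum of journal impact factor and forward-citation volume; the weights and records are invented, and the real PubFocus algorithm additionally accounts for referencing dynamics and author contribution.

    def prioritize(citations, w_impact=0.5, w_forward=0.5):
        """Rank citations by a weighted sum of journal impact factor and the number of
        times the article has since been cited (both min-max normalised)."""
        def normalise(values):
            lo, hi = min(values), max(values)
            return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]
        impact = normalise([c["impact_factor"] for c in citations])
        forward = normalise([c["times_cited"] for c in citations])
        scored = [(w_impact * i + w_forward * f, c["pmid"])
                  for i, f, c in zip(impact, forward, citations)]
        return sorted(scored, reverse=True)

    citations = [
        {"pmid": "111", "impact_factor": 3.2, "times_cited": 120},
        {"pmid": "222", "impact_factor": 30.0, "times_cited": 15},
        {"pmid": "333", "impact_factor": 8.5, "times_cited": 300},
    ]
    print(prioritize(citations))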

  11. Figure mining for biomedical research.

    Science.gov (United States)

    Rodriguez-Esteban, Raul; Iossifov, Ivan

    2009-08-15

    Figures from biomedical articles contain valuable information difficult to reach without specialized tools. Currently, there is no search engine that can retrieve specific figure types. This study describes a retrieval method that takes advantage of principles in image understanding, text mining and optical character recognition (OCR) to retrieve figure types defined conceptually. A search engine was developed to retrieve tables and figure types to aid computational and experimental research. http://iossifovlab.cshl.edu/figurome/.

  12. [Advanced online search techniques and dedicated search engines for physicians].

    Science.gov (United States)

    Nahum, Yoav

    2008-02-01

    In recent years search engines have become an essential tool in the work of physicians. This article will review advanced search techniques from the world of information specialists, as well as some advanced search engine operators that may help physicians improve their online search capabilities, and maximize the yield of their searches. This article also reviews popular dedicated scientific and biomedical literature search engines.

  13. [Method of traditional Chinese medicine formula design based on 3D-database pharmacophore search and patent retrieval].

    Science.gov (United States)

    He, Yu-su; Sun, Zhi-yi; Zhang, Yan-ling

    2014-11-01

    By using the pharmacophore model of mineralocorticoid receptor antagonists as a starting point, the experiment studies a method of traditional Chinese medicine formula design for anti-hypertensive therapy. Pharmacophore models were generated by the 3D-QSAR pharmacophore (HypoGen) program of DS3.5, based on a training set composed of 33 mineralocorticoid receptor antagonists. The best pharmacophore model consisted of two hydrogen-bond acceptors, three hydrophobic features and four excluded volumes. Its correlation coefficients for the training set and the test set, N value, and CAI value were 0.9534, 0.6748, 2.878, and 1.119, respectively. According to the database screening, 1700 active compounds from 86 source plants were obtained. Because traditional theory lacks an established anti-hypertensive medication strategy, this article takes advantage of patent retrieval in the world traditional medicine patent database in order to design drug formulae. Finally, two formulae were obtained for anti-hypertension.

  14. International patent applications for non-injectable naloxone for opioid overdose reversal: Exploratory search and retrieve analysis of the PatentScope database.

    Science.gov (United States)

    McDonald, Rebecca; Danielsson Glende, Øyvind; Dale, Ola; Strang, John

    2018-02-01

    Non-injectable naloxone formulations are being developed for opioid overdose reversal, but only limited data have been published in the peer-reviewed domain. Through examination of a hitherto-unsearched database, we expand public knowledge of non-injectable formulations, tracing their development and novelty, with the aim to describe and compare their pharmacokinetic properties. (i) The PatentScope database of the World Intellectual Property Organization was searched for relevant English-language patent applications; (ii) Pharmacokinetic data were extracted, collated and analysed; (iii) PubMed was searched using Boolean search query '(nasal OR intranasal OR nose OR buccal OR sublingual) AND naloxone AND pharmacokinetics'. Five hundred and twenty-two PatentScope and 56 PubMed records were identified: three published international patent applications and five peer-reviewed papers were eligible. Pharmacokinetic data were available for intranasal, sublingual, and reference routes. Highly concentrated formulations (10-40 mg mL⁻¹) had been developed and tested. Sublingual bioavailability was very low (1%; relative to intravenous). Non-concentrated intranasal spray (1 mg mL⁻¹; 1 mL per nostril) had low bioavailability (11%). Concentrated intranasal formulations (≥10 mg mL⁻¹) had bioavailability of 21-42% (relative to intravenous) and 26-57% (relative to intramuscular), with peak concentrations (dose-adjusted Cmax = 0.8-1.7 ng mL⁻¹) reached in 19-30 min (tmax). Exploratory analysis identified intranasal bioavailability as associated positively with dose and negatively with volume. We find consistent direction of development of intranasal sprays to high-concentration, low-volume formulations with bioavailability in the 20-60% range. These have potential to deliver a therapeutic dose in 0.1 mL volume. [McDonald R, Danielsson Glende Ø, Dale O, Strang J. International patent applications for non-injectable naloxone for opioid overdose reversal
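
    The bioavailability figures quoted above are dose-adjusted ratios of exposure between a test route and an intravenous or intramuscular reference. A minimal sketch of that arithmetic follows; the AUC and dose values are invented for illustration and are not taken from the patent applications.

    ```python
    def relative_bioavailability(auc_test, dose_test, auc_ref, dose_ref):
        """Dose-adjusted exposure of a test route relative to a reference route (0-1)."""
        return (auc_test / dose_test) / (auc_ref / dose_ref)

    # Hypothetical values (AUC in ng*h/mL, dose in mg) chosen only to show the arithmetic.
    f_rel = relative_bioavailability(auc_test=12.0, dose_test=4.0,
                                     auc_ref=10.0, dose_ref=1.0)
    print(f"{f_rel:.0%}")  # -> 30%
    ```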

  15. A scoping review of competencies for scientific editors of biomedical journals.

    Science.gov (United States)

    Galipeau, James; Barbour, Virginia; Baskin, Patricia; Bell-Syer, Sally; Cobey, Kelly; Cumpston, Miranda; Deeks, Jon; Garner, Paul; MacLehose, Harriet; Shamseer, Larissa; Straus, Sharon; Tugwell, Peter; Wager, Elizabeth; Winker, Margaret; Moher, David

    2016-02-02

    Biomedical journals are the main route for disseminating the results of health-related research. Despite this, their editors operate largely without formal training or certification. To our knowledge, no body of literature systematically identifying core competencies for scientific editors of biomedical journals exists. Therefore, we aimed to conduct a scoping review to determine what is known on the competency requirements for scientific editors of biomedical journals. We searched the MEDLINE®, Cochrane Library, Embase®, CINAHL, PsycINFO, and ERIC databases (from inception to November 2014) and conducted a grey literature search for research and non-research articles with competency-related statements (i.e. competencies, knowledge, skills, behaviors, and tasks) pertaining to the role of scientific editors of peer-reviewed health-related journals. We also conducted an environmental scan, searched the results of a previous environmental scan, and searched the websites of existing networks, major biomedical journal publishers, and organizations that offer resources for editors. A total of 225 full-text publications were included, 25 of which were research articles. We extracted a total of 1,566 statements possibly related to core competencies for scientific editors of biomedical journals from these publications. We then collated overlapping or duplicate statements which produced a list of 203 unique statements. Finally, we grouped these statements into seven emergent themes: (1) dealing with authors, (2) dealing with peer reviewers, (3) journal publishing, (4) journal promotion, (5) editing, (6) ethics and integrity, and (7) qualities and characteristics of editors. To our knowledge, this scoping review is the first attempt to systematically identify possible competencies of editors. Limitations are that (1) we may not have captured all aspects of a biomedical editor's work in our searches, (2) removing redundant and overlapping items may have led to the

  16. Where the bugs are: analyzing distributions of bacterial phyla by descriptor keyword search in the nucleotide database.

    Science.gov (United States)

    Squartini, Andrea

    2011-07-26

    The associations between bacteria and environment underlie their preferential interactions with given physical or chemical conditions. Microbial ecology aims at extracting conserved patterns of occurrence of bacterial taxa in relation to defined habitats and contexts. In the present report the NCBI nucleotide sequence database is used as a dataset to extract information on the distribution of each of the 24 phyla of the bacterial superkingdom and of the Archaea. Over two and a half million records are filtered in their cross-association with each of 48 sets of keywords, defined to cover natural or artificial habitats, interactions with plant, animal or human hosts, and physical-chemical conditions. The results are processed showing: (a) how the different descriptors enrich or deplete the proportions at which the phyla occur in the total database; (b) in which order of abundance the different keywords score for each phylum (preferred habitats or conditions), and to what extent phyla are clustered on few descriptors (specific) or spread across many (cosmopolitan); (c) which keywords identify the communities ranking highest for diversity and evenness. A number of cues emerge from the results, contributing to sharpen the picture on the functional systematic diversity of prokaryotes. Suggestions are given for a future automated service dedicated to refining and updating such kind of analyses via public bioinformatic engines.
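
    The enrichment/depletion and diversity measures described above can be sketched in a few lines. The counts and background fractions below are invented for illustration, and Shannon diversity with Pielou evenness is assumed as the diversity measure (the abstract does not specify the index used).

    ```python
    import math

    # Hypothetical record counts: phylum -> number of database records matching one keyword,
    # and each phylum's share of the whole database. All values invented for illustration.
    keyword_counts = {"Proteobacteria": 800, "Firmicutes": 150, "Cyanobacteria": 50}
    background_fraction = {"Proteobacteria": 0.55, "Firmicutes": 0.25, "Cyanobacteria": 0.02}

    total = sum(keyword_counts.values())
    for phylum, n in keyword_counts.items():
        observed = n / total
        enrichment = observed / background_fraction[phylum]   # >1 enriched, <1 depleted
        print(f"{phylum}: enrichment {enrichment:.2f}")

    # Shannon diversity and Pielou evenness of the keyword-associated community.
    p = [n / total for n in keyword_counts.values()]
    H = -sum(x * math.log(x) for x in p)
    evenness = H / math.log(len(p))
    print(f"H'={H:.2f}, evenness={evenness:.2f}")
    ```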

  17. Search for 5'-leader regulatory RNA structures based on gene annotation aided by the RiboGap database.

    Science.gov (United States)

    Naghdi, Mohammad Reza; Smail, Katia; Wang, Joy X; Wade, Fallou; Breaker, Ronald R; Perreault, Jonathan

    2017-03-15

    The discovery of noncoding RNAs (ncRNAs) and their importance for gene regulation led us to develop bioinformatics tools to pursue the discovery of novel ncRNAs. Finding ncRNAs de novo is challenging, first due to the difficulty of retrieving large numbers of sequences for given gene activities, and second due to exponential demands on calculation needed for comparative genomics on a large scale. Recently, several tools for the prediction of conserved RNA secondary structure were developed, but many of them are not designed to uncover new ncRNAs, or are too slow for conducting analyses on a large scale. Here we present various approaches using the database RiboGap as a primary tool for finding known ncRNAs and for uncovering simple sequence motifs with regulatory roles. This database also can be used to easily extract intergenic sequences of eubacteria and archaea to find conserved RNA structures upstream of given genes. We also show how to extend analysis further to choose the best candidate ncRNAs for experimental validation. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. G-Bean: an ontology-graph based web tool for biomedical literature retrieval.

    Science.gov (United States)

    Wang, James Z; Zhang, Yuanyuan; Dong, Liang; Li, Lin; Srimani, Pradip K; Yu, Philip S

    2014-01-01

    Currently, most people use NCBI's PubMed to search the MEDLINE database, an important bibliographical information source for life science and biomedical information. However, PubMed has some drawbacks that make it difficult to find relevant publications pertaining to users' individual intentions, especially for non-expert users. To ameliorate the disadvantages of PubMed, we developed G-Bean, a graph based biomedical search engine, to search biomedical articles in MEDLINE database more efficiently. G-Bean addresses PubMed's limitations with three innovations: (1) Parallel document index creation: a multithreaded index creation strategy is employed to generate the document index for G-Bean in parallel; (2) Ontology-graph based query expansion: an ontology graph is constructed by merging four major UMLS (Version 2013AA) vocabularies, MeSH, SNOMEDCT, CSP and AOD, to cover all concepts in National Library of Medicine (NLM) database; a Personalized PageRank algorithm is used to compute concept relevance in this ontology graph and the Term Frequency - Inverse Document Frequency (TF-IDF) weighting scheme is used to re-rank the concepts. The top 500 ranked concepts are selected for expanding the initial query to retrieve more accurate and relevant information; (3) Retrieval and re-ranking of documents based on user's search intention: after the user selects any article from the existing search results, G-Bean analyzes user's selections to determine his/her true search intention and then uses more relevant and more specific terms to retrieve additional related articles. The new articles are presented to the user in the order of their relevance to the already selected articles. Performance evaluation with 106 OHSUMED benchmark queries shows that G-Bean returns more relevant results than PubMed does when using these queries to search the MEDLINE database. PubMed could not even return any search result for some OHSUMED queries because it failed to form the appropriate Boolean
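
    A minimal sketch of the query-expansion-plus-TF-IDF idea described above, using scikit-learn. The ontology graph, Personalized PageRank step and UMLS vocabularies used by G-Bean are not reproduced here; the documents and the expansion term are invented for illustration.

    ```python
    # Toy query expansion followed by TF-IDF cosine ranking of documents.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    docs = [
        "myocardial infarction treated with beta blockers",
        "heart attack outcomes after thrombolysis",
        "asthma management in children",
    ]
    query = "heart attack"
    # An expansion term an ontology (e.g. MeSH) might suggest; assumed for illustration.
    expanded_query = query + " myocardial infarction"

    vec = TfidfVectorizer()
    doc_matrix = vec.fit_transform(docs)
    scores = cosine_similarity(vec.transform([expanded_query]), doc_matrix).ravel()
    for rank, i in enumerate(scores.argsort()[::-1], 1):
        print(rank, round(float(scores[i]), 2), docs[i])
    ```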

  19. Statistical Measures Alone Cannot Determine Which Database (BNI, CINAHL, MEDLINE, or EMBASE Is the Most Useful for Searching Undergraduate Nursing Topics. A Review of: Stokes, P., Foster, A., & Urquhart, C. (2009. Beyond relevance and recall: Testing new user-centred measures of database performance. Health Information and Libraries Journal, 26(3, 220-231.

    Directory of Open Access Journals (Sweden)

    Giovanna Badia

    2011-03-01

    Full Text Available Objective – The research project sought to determine which of four databases was the most useful for searching undergraduate nursing topics. Design – Comparative database evaluation. Setting – Nursing and midwifery students at Homerton School of Health Studies (now part of Anglia Ruskin University), Cambridge, United Kingdom, in 2005-2006. Subjects – The subjects were four databases: British Nursing Index (BNI), CINAHL, MEDLINE, and EMBASE. Methods – This was a comparative study using title searches to compare BNI (British Nursing Index), CINAHL, MEDLINE and EMBASE. According to the authors, this is the first study to compare BNI with other databases. BNI is a database produced by British libraries that indexes the nursing and midwifery literature. It covers over 240 British journals, and includes references to articles from health sciences journals that are relevant to nurses and midwives (British Nursing Index, n.d.). The researchers performed keyword searches in the title field of the four databases for the dissertation topics of nine nursing and midwifery students enrolled in undergraduate dissertation modules. The lists of titles of journal articles on their topics were given to the students, who were asked to judge the relevance of the citations. The title searches were evaluated in each of the databases using the following criteria: • precision (the number of relevant results obtained in the database for a search topic, divided by the total number of results obtained in the database search); • recall (the number of relevant results obtained in the database for a search topic, divided by the total number of relevant results obtained on that topic from all four database searches); • novelty (the number of relevant results that were unique in the database search, calculated as a percentage of the total number of relevant results found in the database); • originality (the number of unique relevant results obtained in the
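
    The first three evaluation measures defined above translate directly into simple ratios; a sketch follows (originality is omitted because its definition is truncated in the record). The counts are invented for illustration.

    ```python
    # Database-performance measures as defined in the review above.
    def precision(relevant_in_db, total_in_db):
        return relevant_in_db / total_in_db

    def recall(relevant_in_db, relevant_all_dbs):
        return relevant_in_db / relevant_all_dbs

    def novelty(unique_relevant_in_db, relevant_in_db):
        return unique_relevant_in_db / relevant_in_db  # usually reported as a percentage

    # Hypothetical counts for one topic searched in one database.
    print(precision(12, 40), recall(12, 30), f"{novelty(4, 12):.0%}")
    ```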

  20. Robots for hazardous duties: Military, space, and nuclear facility applications. (Latest citations from the NTIS bibliographic database). Published Search

    International Nuclear Information System (INIS)

    1993-09-01

    The bibliography contains citations concerning the design and application of robots used in place of humans where the environment could be hazardous. Military applications include autonomous land vehicles, robotic howitzers, and battlefield support operations. Space operations include docking, maintenance, mission support, and intra-vehicular and extra-vehicular activities. Nuclear applications include operations within the containment vessel, radioactive waste operations, fueling operations, and plant security. Many of the articles reference control techniques and the use of expert systems in robotic operations. Applications involving industrial manufacturing, walking robots, and robot welding are cited in other published searches in this series. (Contains a minimum of 183 citations and includes a subject term index and title list.)

  1. Undergraduates Prefer Federated Searching to Searching Databases Individually. A Review of: Belliston, C. Jeffrey, Jared L. Howland, & Brian C. Roberts. “Undergraduate Use of Federated Searching: A Survey of Preferences and Perceptions of Value-Added Functionality.” College & Research Libraries 68.6 (Nov. 2007: 472-86.

    Directory of Open Access Journals (Sweden)

    Genevieve Gore

    2008-09-01

    Full Text Available Objective – To determine whether use of federated searching by undergraduates saves time, meets their information needs, is preferred over searching databases individually, and provides results of higher quality. Design – Crossover study. Setting – Three American universities, all members of the Consortium of Church Libraries & Archives (CCLA): BYU (Brigham Young University), a large research university; BYUH (Brigham Young University – Hawaii), a small baccalaureate college; and BYUI (Brigham Young University – Idaho), a large baccalaureate college. Subjects – Ninety-five participants recruited via e-mail invitations sent to a random sample of currently enrolled undergraduates at BYU, BYUH, and BYUI. Methods – Participants were given written directions to complete a literature search for journal articles on two biology-related topics using two search methods: 1. federated searching with WebFeat® (implemented in the same way for this study at the three universities) and 2. a hyperlinked list of databases to search individually. Both methods used the same set of seven databases. Each topic was assigned in random order to one of the two search methods, also assigned in random order, for a total of two searches per participant. The time to complete the searches was recorded. Students compiled their lists of citations, which were later normalized and graded. To analyze the quality of the citations, one quantitative rubric was created by librarians and one qualitative rubric was approved by a faculty member at BYU. The librarian-created rubric included the journal impact factor (from ISI's Journal Citation Reports®), the proportion of citations from peer-reviewed journals (determined from Ulrichsweb.com™) to total citations, and the timeliness of the articles. The faculty-approved rubric included three criteria: relevance to the topic, quality of the individual citations (good quality: primary research results, peer-reviewed sources, and

  2. A Review of Abstracting and Indexing Services for Biomedical Journals

    Directory of Open Access Journals (Sweden)

    Sarita Bhardwaj

    2017-10-01

    Full Text Available The days are gone when researchers had to go to the library to look for the articles of their choice. With the arrival of the electronic era, searching for an article online has become much easier. This has been made possible by the availability of various Abstracting and Indexing (A & I) services around the world. Of the more than 400 online A & I services available, only a few, such as Google and Thomson Reuters, cover all disciplines. Most A & I services cover just one discipline, allowing them to cover their area in more depth. There are many databases and indexing services for biomedical journals, the most important ones being PubMed/Medline, Scopus, and Web of Science (ISI). This article gives a review of the various databases and indexes available for dental journals in the world.

  3. Database Description - ASTRA | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name ASTRA Alternative n...tics Journal Search: Contact address Database classification Nucleotide Sequence Databases - Gene structure,...3702 Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database description The database represents classified p...(10):1211-6. External Links: Original website information Database maintenance site National Institute of Ad... for user registration Not available About This Database Database Description Dow

  4. Download - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Trypanosomes Database Download First of all, please read the license of this database. Data ...1.4 KB) Simple search and download Downlaod via FTP FTP server is sometimes jammed. If it is, access [here]. About This Database Data...base Description Download License Update History of This Database Site Policy | Contact Us Download - Trypanosomes Database | LSDB Archive ...

  5. Biomedical engineering and nanotechnology

    International Nuclear Information System (INIS)

    Pawar, S.H.; Khyalappa, R.J.; Yakhmi, J.V.

    2009-01-01

    This book is predominantly a compilation of papers presented at the conference, which focused on developments in biomedical materials, biomedical devices and instrumentation, biomedical effects of electromagnetic radiation, electrotherapy, radiotherapy, biosensors, biotechnology, bioengineering, tissue engineering, clinical engineering and surgical planning, medical imaging, hospital system management, biomedical education, biomedical industry and society, bioinformatics, structured nanomaterials for biomedical applications, nano-composites, nano-medicine, synthesis of nanomaterials, and nanoscience and technology development. The papers presented herein contain scientific substance of interest to researchers in the fields of biomedicine, biomedical engineering, materials science and nanotechnology. Papers relevant to INIS are indexed separately.

  6. NAMED ENTITY RECOGNITION FROM BIOMEDICAL TEXT -AN INFORMATION EXTRACTION TASK

    Directory of Open Access Journals (Sweden)

    N. Kanya

    2016-07-01

    Full Text Available Biomedical text mining (BioTM) targets the extraction of significant information from biomedical archives. BioTM encompasses Information Retrieval (IR) and Information Extraction (IE). The IR step retrieves the relevant biomedical literature documents from repositories such as PubMed and MEDLINE based on a search query, and ends with the generation of a corpus of the relevant documents retrieved from the publication databases. The IE task includes preprocessing of the documents, Named Entity Recognition (NER) and relationship extraction, and draws on natural language processing, data mining techniques and machine learning algorithms. The preprocessing task includes tokenization, stop-word removal, shallow parsing, and part-of-speech tagging. The NER phase involves recognition of well-defined objects such as genes, proteins or cell lines. This leads to the next phase, the extraction of relationships (IE). The work was based on the machine learning algorithm Conditional Random Fields (CRF).
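
    A toy sketch of the first preprocessing steps named above (tokenization and stop-word removal); a real pipeline would use NLTK or spaCy for tokenization and POS tagging, plus a trained CRF model for the NER step. The stop-word list and sentence are invented for illustration.

    ```python
    # Crude tokenisation and stop-word removal; not a full BioTM pipeline.
    import re

    STOPWORDS = {"the", "of", "in", "and", "a", "an", "is", "are", "to"}  # tiny example list

    def preprocess(sentence: str):
        tokens = re.findall(r"[A-Za-z0-9\-]+", sentence)          # tokenisation
        return [t for t in tokens if t.lower() not in STOPWORDS]  # stop-word removal

    sentence = "BRCA1 mutations increase the risk of breast cancer in carriers."
    print(preprocess(sentence))
    # A CRF-based NER step would then label tokens such as 'BRCA1' as gene entities.
    ```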

  7. Database Description - TMFunction | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available sidue (or mutant) in a protein. The experimental data are collected from the literature both by searching th...the sequence database, UniProt, structural database, PDB, and literature database

  8. License - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available switchLanguage; BLAST Search Image Search Home About Archive Update History Data List Contact us Trypanoso... Attribution-Share Alike 2.1 Japan . If you use data from this database, please be sure attribute this database as follows: Trypanoso...nse Update History of This Database Site Policy | Contact Us License - Trypanosomes Database | LSDB Archive ...

  9. Biomedical applications of nanotechnology.

    Science.gov (United States)

    Ramos, Ana P; Cruz, Marcos A E; Tovani, Camila B; Ciancaglini, Pietro

    2017-04-01

    The ability to investigate substances at the molecular level has boosted the search for materials with outstanding properties for use in medicine. The application of these novel materials has generated the new research field of nanobiotechnology, which plays a central role in disease diagnosis, drug design and delivery, and implants. In this review, we provide an overview of the use of metallic and metal oxide nanoparticles, carbon-nanotubes, liposomes, and nanopatterned flat surfaces for specific biomedical applications. The chemical and physical properties of the surface of these materials allow their use in diagnosis, biosensing and bioimaging devices, drug delivery systems, and bone substitute implants. The toxicology of these particles is also discussed in the light of a new field referred to as nanotoxicology that studies the surface effects emerging from nanostructured materials.

  10. International symposium on Biomedical Data Infrastructure (BDI 2013)

    CERN Document Server

    Dhillon, Sarinder; Advances in biomedical infrastructure 2013

    2013-01-01

    Current biomedical databases are independently administered in geographically distinct locations, making them almost ideal candidates for the adoption of intelligent data management approaches. This book focuses on research issues, problems and opportunities in Biomedical Data Infrastructure, identifying new issues and directions for future research in Biomedical Data and Information Retrieval, Semantics in Biomedicine, and Biomedical Data Modeling and Analysis. The book will be a useful guide for researchers, practitioners, and graduate-level students interested in learning about state-of-the-art developments in biomedical data management.

  11. [Design and establishment of modern literature database about acupuncture Deqi].

    Science.gov (United States)

    Guo, Zheng-rong; Qian, Gui-feng; Pan, Qiu-yin; Wang, Yang; Xin, Si-yuan; Li, Jing; Hao, Jie; Hu, Ni-juan; Zhu, Jiang; Ma, Liang-xiao

    2015-02-01

    A search on acupuncture Deqi was conducted in four Chinese-language biomedical databases (CNKI, Wan-Fang, VIP and CBM) and the PubMed database, using keywords such as "Deqi", "needle sensation", "needling feeling", "needle feel" and "obtaining qi". A "Modern Literature Database for Acupuncture Deqi" was then established using Microsoft SQL Server 2005 Express Edition, and the contents, data types, information structure and logical constraints of the system table fields are introduced. From this database, detailed inquiries about general information on clinical trials, acupuncturists' experience, ancient medical works, comprehensive literature, etc. can be obtained. The present databank lays a foundation for subsequent evaluation of the quality of the Deqi literature and for data mining of as yet undetected Deqi knowledge.
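
    As a rough sketch of the kind of table structure such a literature database might use, the snippet below builds a minimal records table. SQLite stands in for the Microsoft SQL Server edition used in the study, and all field names are assumptions rather than the authors' schema.

    ```python
    # Minimal literature-record table; illustrative only.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE deqi_literature (
            record_id   INTEGER PRIMARY KEY,
            title       TEXT NOT NULL,
            source_db   TEXT,      -- CNKI, Wan-Fang, VIP, CBM or PubMed
            pub_year    INTEGER,
            record_type TEXT       -- clinical trial, expert experience, ancient work ...
        )
    """)
    conn.execute("INSERT INTO deqi_literature (title, source_db, pub_year, record_type) "
                 "VALUES (?, ?, ?, ?)",
                 ("Needle sensation study", "CNKI", 2013, "clinical trial"))
    for row in conn.execute("SELECT title, source_db FROM deqi_literature"):
        print(row)
    ```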

  12. Disbiome database: linking the microbiome to disease.

    Science.gov (United States)

    Janssens, Yorick; Nielandt, Joachim; Bronselaer, Antoon; Debunne, Nathan; Verbeke, Frederick; Wynendaele, Evelien; Van Immerseel, Filip; Vandewynckel, Yves-Paul; De Tré, Guy; De Spiegeleer, Bart

    2018-06-04

    Recent research has provided fascinating indications and evidence that the host health is linked to its microbial inhabitants. Due to the development of high-throughput sequencing technologies, more and more data covering microbial composition changes in different disease types are emerging. However, this information is dispersed over a wide variety of medical and biomedical disciplines. Disbiome is a database which collects and presents published microbiota-disease information in a standardized way. The diseases are classified using the MedDRA classification system and the micro-organisms are linked to their NCBI and SILVA taxonomy. Finally, each study included in the Disbiome database is assessed for its reporting quality using a standardized questionnaire. Disbiome is the first database giving a clear, concise and up-to-date overview of microbial composition differences in diseases, together with the relevant information of the studies published. The strength of this database lies within the combination of the presence of references to other databases, which enables both specific and diverse search strategies within the Disbiome database, and the human annotation which ensures a simple and structured presentation of the available data.

  13. Pathological and Biochemical Outcomes among African-American and Caucasian Men with Low Risk Prostate Cancer in the SEARCH Database: Implications for Active Surveillance Candidacy.

    Science.gov (United States)

    Leapman, Michael S; Freedland, Stephen J; Aronson, William J; Kane, Christopher J; Terris, Martha K; Walker, Kelly; Amling, Christopher L; Carroll, Peter R; Cooperberg, Matthew R

    2016-11-01

    Racial disparities in the incidence and risk profile of prostate cancer at diagnosis among African-American men are well reported. However, it remains unclear whether African-American race is independently associated with adverse outcomes in men with clinical low risk disease. We retrospectively analyzed the records of 895 men in the SEARCH (Shared Equal Access Regional Cancer Hospital) database in whom clinical low risk prostate cancer was treated with radical prostatectomy. Associations of African-American and Caucasian race with pathological and biochemical recurrence outcomes were examined using chi-square, logistic regression, log rank and Cox proportional hazards analyses. We identified 355 African-American and 540 Caucasian men with low risk tumors in the SEARCH cohort who were followed a median of 6.3 years. Following adjustment for relevant covariates, African-American race was not significantly associated with pathological upgrading (OR 1.33, p = 0.12), major upgrading (OR 0.58, p = 0.10), up-staging (OR 1.09, p = 0.73) or positive surgical margins (OR 1.04, p = 0.81). Five-year recurrence-free survival rates were 73.4% in African-American men and 78.4% in Caucasian men (log rank p = 0.18). In a Cox proportional hazards analysis model African-American race was not significantly associated with biochemical recurrence (HR 1.11, p = 0.52). In a cohort of patients at clinical low risk who were treated with prostatectomy in an equal access health system with a high representation of African-American men we observed no significant differences in the rates of pathological upgrading, up-staging or biochemical recurrence. These data support continued use of active surveillance in African-American men. Upgrading and up-staging remain concerning possibilities for all men regardless of race. Copyright © 2016 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  14. Mortality of people with chronic fatigue syndrome: a retrospective cohort study in England and Wales from the South London and Maudsley NHS Foundation Trust Biomedical Research Centre (SLaM BRC) Clinical Record Interactive Search (CRIS) Register.

    Science.gov (United States)

    Roberts, Emmert; Wessely, Simon; Chalder, Trudie; Chang, Chin-Kuo; Hotopf, Matthew

    2016-04-16

    Mortality associated with chronic fatigue syndrome is uncertain. We investigated mortality in individuals diagnosed with chronic fatigue syndrome in secondary and tertiary care using data from the South London and Maudsley NHS Foundation Trust Biomedical Research Centre (SLaM BRC) Clinical Record Interactive Search (CRIS) register. We calculated standardised mortality ratios (SMRs) for all-cause, suicide-specific, and cancer-specific mortality for a 7-year observation period using the number of deaths observed in SLaM records compared with age-specific and sex-specific mortality statistics for England and Wales. Study participants were included if they had had contact with the chronic fatigue service (referral, discharge, or case note entry) and received a diagnosis of chronic fatigue syndrome. We identified 2147 cases of chronic fatigue syndrome from CRIS and 17 deaths from Jan 1, 2007, to Dec 31, 2013. 1533 patients were women of whom 11 died, and 614 were men of whom six died. There was no significant difference in age-standardised and sex-standardised mortality ratios (SMRs) for all-cause mortality (SMR 1·14, 95% CI 0·65-1·85; p=0·67) or cancer-specific mortality (1·39, 0·60-2·73; p=0·45) in patients with chronic fatigue syndrome when compared with the general population in England and Wales. This remained the case when deaths from suicide were removed from the analysis. There was a significant increase in suicide-specific mortality (SMR 6·85, 95% CI 2·22-15·98; p=0·002). We did not note increased all-cause mortality in people with chronic fatigue syndrome, but our findings show a substantial increase in mortality from suicide. This highlights the need for clinicians to be aware of the increased risk of completed suicide and to assess suicidality adequately in patients with chronic fatigue syndrome. National Institute for Health Research (NIHR) Biomedical Research Centre at South London and Maudsley NHS Foundation Trust and King's College London
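
    The standardised mortality ratio used above is the number of observed deaths divided by the number expected from reference age- and sex-specific rates applied to the cohort's person-years. A minimal sketch of that calculation follows; the strata and rates are invented for illustration and are not the study's data.

    ```python
    # Indirectly standardised mortality ratio (SMR) from hypothetical strata.
    observed_deaths = 17

    # (person-years at risk, reference annual mortality rate) per age/sex stratum
    strata = [
        (4000, 0.0008),
        (6000, 0.0012),
        (3000, 0.0020),
    ]
    expected = sum(person_years * rate for person_years, rate in strata)
    smr = observed_deaths / expected
    print(f"expected={expected:.1f}, SMR={smr:.2f}")
    ```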

  15. Personalized Search

    CERN Document Server

    AUTHOR|(SzGeCERN)749939

    2015-01-01

    As the volume of electronically available information grows, relevant items become harder to find. This work presents an approach to personalizing search results in scientific publication databases, focusing on re-ranking search results from existing search engines such as Solr or ElasticSearch. It also includes the development of Obelix, a new recommendation system used to re-rank search results. The project was proposed and performed at CERN, using the scientific publications available on the CERN Document Server (CDS). Re-ranking was evaluated through offline and online experiments with users and documents in CDS. The experiments conclude that the personalized search results outperform both latest-first and word-similarity orderings in terms of click position in the result list for global search in CDS.

  16. HMMerThread: detecting remote, functional conserved domains in entire genomes by combining relaxed sequence-database searches with fold recognition.

    Directory of Open Access Journals (Sweden)

    Charles Richard Bradshaw

    Full Text Available Conserved domains in proteins are one of the major sources of functional information for experimental design and genome-level annotation. Though search tools for conserved domain databases such as Hidden Markov Models (HMMs are sensitive in detecting conserved domains in proteins when they share sufficient sequence similarity, they tend to miss more divergent family members, as they lack a reliable statistical framework for the detection of low sequence similarity. We have developed a greatly improved HMMerThread algorithm that can detect remotely conserved domains in highly divergent sequences. HMMerThread combines relaxed conserved domain searches with fold recognition to eliminate false positive, sequence-based identifications. With an accuracy of 90%, our software is able to automatically predict highly divergent members of conserved domain families with an associated 3-dimensional structure. We give additional confidence to our predictions by validation across species. We have run HMMerThread searches on eight proteomes including human and present a rich resource of remotely conserved domains, which adds significantly to the functional annotation of entire proteomes. We find ∼4500 cross-species validated, remotely conserved domain predictions in the human proteome alone. As an example, we find a DNA-binding domain in the C-terminal part of the A-kinase anchor protein 10 (AKAP10, a PKA adaptor that has been implicated in cardiac arrhythmias and premature cardiac death, which upon stress likely translocates from mitochondria to the nucleus/nucleolus. Based on our prediction, we propose that with this HLH-domain, AKAP10 is involved in the transcriptional control of stress response. Further remotely conserved domains we discuss are examples from areas such as sporulation, chromosome segregation and signalling during immune response. The HMMerThread algorithm is able to automatically detect the presence of remotely conserved domains in

  17. A database of linear codes over F_13 with minimum distance bounds and new quasi-twisted codes from a heuristic search algorithm

    Directory of Open Access Journals (Sweden)

    Eric Z. Chen

    2015-01-01

    Full Text Available Error control codes have been widely used in data communications and storage systems. One central problem in coding theory is to optimize the parameters of a linear code and construct codes with best possible parameters. There are tables of best-known linear codes over finite fields of sizes up to 9. Recently, there has been a growing interest in codes over $\\mathbb{F}_{13}$ and other fields of size greater than 9. The main purpose of this work is to present a database of best-known linear codes over the field $\\mathbb{F}_{13}$ together with upper bounds on the minimum distances. To find good linear codes to establish lower bounds on minimum distances, an iterative heuristic computer search algorithm is employed to construct quasi-twisted (QT codes over the field $\\mathbb{F}_{13}$ with high minimum distances. A large number of new linear codes have been found, improving previously best-known results. Tables of $[pm, m]$ QT codes over $\\mathbb{F}_{13}$ with best-known minimum distances as well as a table of lower and upper bounds on the minimum distances for linear codes of length up to 150 and dimension up to 6 are presented.
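
    The quantity tabulated above, the minimum distance of a linear code, can be computed by brute force for very small codes by enumerating all non-zero messages. A sketch over GF(13) follows; the generator matrix is a toy example, not an entry from the database.

    ```python
    # Brute-force minimum distance of a small linear code over GF(q) from its generator matrix.
    import itertools
    import numpy as np

    q = 13
    G = np.array([[1, 0, 1, 2, 3],
                  [0, 1, 4, 5, 6]])        # a [5, 2] code over F_13 (illustrative)

    def min_distance(G, q):
        k, n = G.shape
        best = n
        for msg in itertools.product(range(q), repeat=k):
            if not any(msg):
                continue                   # skip the zero codeword
            codeword = np.mod(np.dot(msg, G), q)
            best = min(best, int(np.count_nonzero(codeword)))
        return best

    print(min_distance(G, q))
    ```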

  18. Biomedical engineering fundamentals

    CERN Document Server

    Bronzino, Joseph D

    2014-01-01

    Known as the bible of biomedical engineering, The Biomedical Engineering Handbook, Fourth Edition, sets the standard against which all other references of this nature are measured. As such, it has served as a major resource for both skilled professionals and novices to biomedical engineering.Biomedical Engineering Fundamentals, the first volume of the handbook, presents material from respected scientists with diverse backgrounds in physiological systems, biomechanics, biomaterials, bioelectric phenomena, and neuroengineering. More than three dozen specific topics are examined, including cardia

  19. [Systematic literature search in PubMed : A short introduction].

    Science.gov (United States)

    Blümle, A; Lagrèze, W A; Motschall, E

    2018-03-01

    In order to identify current (and relevant) evidence for a specific clinical question within the unmanageable amount of information available, solid skills in performing a systematic literature search are essential. An efficient approach is to search a biomedical database containing relevant literature citations of study reports. The best known database is MEDLINE, which is searchable for free via the PubMed interface. In this article, we explain step by step how to perform a systematic literature search via PubMed by means of an example research question in the field of ophthalmology. First, we demonstrate how to translate the clinical problem into a well-framed and searchable research question, how to identify relevant search terms, and how to conduct a text word search and a search with Medical Subject Headings (MeSH) terms. We then show how to limit the number of search results if the search yields too many irrelevant hits, and how to increase the number in the case of too few citations. Finally, we summarize the essential principles that guide a literature search via PubMed.
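
    The same kind of combined MeSH and text-word query can also be run programmatically, for example with Biopython's Entrez module. The query below is only an illustrative ophthalmology example (not the article's worked example), and the e-mail address is a placeholder.

    ```python
    # Run a PubMed search through NCBI E-utilities via Biopython.
    from Bio import Entrez

    Entrez.email = "your.name@example.org"   # NCBI asks for a contact address
    query = '("Glaucoma"[MeSH Terms] OR glaucoma[Title/Abstract]) AND laser[Title/Abstract]'

    handle = Entrez.esearch(db="pubmed", term=query, retmax=20)
    record = Entrez.read(handle)
    handle.close()

    print(record["Count"])        # total number of matching citations
    print(record["IdList"][:5])   # first few PubMed IDs
    ```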

  20. Database Description - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us SKIP Stemcell Database Database Description General information of database Database name SKIP Stemcell Database...rsity Journal Search: Contact address http://www.skip.med.keio.ac.jp/en/contact/ Database classification Human Genes and Diseases Dat...abase classification Stemcell Article Organism Taxonomy Name: Homo sapiens Taxonomy ID: 9606 Database...ks: Original website information Database maintenance site Center for Medical Genetics, School of medicine, ...lable Web services Not available URL of Web services - Need for user registration Not available About This Database Database

  1. PubMed searches: overview and strategies for clinicians.

    Science.gov (United States)

    Lindsey, Wesley T; Olin, Bernie R

    2013-04-01

    PubMed is a biomedical and life sciences database maintained by a division of the National Library of Medicine known as the National Center for Biotechnology Information (NCBI). It is a large resource with more than 5600 journals indexed and greater than 22 million total citations. Searches conducted in PubMed provide references that are more specific for the intended topic compared with other popular search engines. Effective PubMed searches allow the clinician to remain current on the latest clinical trials, systematic reviews, and practice guidelines. PubMed continues to evolve by allowing users to create a customized experience through the My NCBI portal, new arrangements and options in search filters, and supporting scholarly projects through exportation of citations to reference managing software. Prepackaged search options available in the Clinical Queries feature also allow users to efficiently search for clinical literature. PubMed also provides information regarding the source journals themselves through the Journals in NCBI Databases link. This article provides an overview of the PubMed database's structure and features as well as strategies for conducting an effective search.

  2. Introduction to biomedical engineering

    CERN Document Server

    Enderle, John D; Blanchard, Susan M

    2005-01-01

    Under the direction of John Enderle, Susan Blanchard and Joe Bronzino, leaders in the field have contributed chapters on the most relevant subjects for biomedical engineering students. These chapters coincide with courses offered in all biomedical engineering programs so that it can be used at different levels for a variety of courses of this evolving field. Introduction to Biomedical Engineering, Second Edition provides a historical perspective of the major developments in the biomedical field. Also contained within are the fundamental principles underlying biomedical engineering design, analysis, and modeling procedures. The numerous examples, drill problems and exercises are used to reinforce concepts and develop problem-solving skills making this book an invaluable tool for all biomedical students and engineers. New to this edition: Computational Biology, Medical Imaging, Genomics and Bioinformatics. * 60% update from first edition to reflect the developing field of biomedical engineering * New chapters o...

  3. Accessing and using chemical databases

    DEFF Research Database (Denmark)

    Nikolov, Nikolai Georgiev; Pavlov, Todor; Niemelä, Jay Russell

    2013-01-01

    Computer-based representation of chemicals makes it possible to organize data in chemical databases: collections of chemical structures and associated properties. Databases are widely used wherever efficient processing of chemical information is needed, including search, storage, retrieval, and dissemination. Structure and functionality of chemical databases are considered. The typical kinds of information found in a chemical database are considered: identification, structural, and associated data. Functionality of chemical databases is presented, with examples of search and access types. More details are included about the OASIS database and platform and the Danish (Q)SAR Database online. Various types of chemical database resources are discussed, together with a list of examples.
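
    A common access type in such databases is substructure search; a minimal sketch with RDKit over a tiny in-memory collection follows. The SMILES strings and the query pattern are arbitrary examples, not records from OASIS or the Danish (Q)SAR Database.

    ```python
    # Substructure search over a toy chemical "database" with RDKit.
    from rdkit import Chem

    database = {
        "aspirin": "CC(=O)Oc1ccccc1C(=O)O",
        "paracetamol": "CC(=O)Nc1ccc(O)cc1",
        "ibuprofen": "CC(C)Cc1ccc(cc1)C(C)C(=O)O",
    }
    query = Chem.MolFromSmarts("c1ccccc1C(=O)O")   # carboxylic acid attached to an aromatic ring

    for name, smiles in database.items():
        mol = Chem.MolFromSmiles(smiles)
        if mol is not None and mol.HasSubstructMatch(query):
            print(name)
    ```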

  4. Proteomic analysis of Pinus radiata needles: 2-DE map and protein identification by LC/MS/MS and substitution-tolerant database searching.

    Science.gov (United States)

    Valledor, Luis; Castillejo, Maria A; Lenz, Christof; Rodríguez, Roberto; Cañal, Maria J; Jorrín, Jesús

    2008-07-01

    Pinus radiata is one of the most economically important forest tree species, with a worldwide production of around 370 million m³ of wood per year. Current selection of elite trees to be used in conservation and breeding programmes requires the physiological and molecular characterization of available populations. To identify key proteins related to tree growth, productivity and responses to environmental factors, a proteomic approach is being utilized. In this paper, we present the first report of a 2-DE protein reference map of physiologically mature P. radiata needles, as a basis for subsequent differential expression proteomic studies related to growth, development, biomass production and responses to stresses. After TCA/acetone protein extraction of needle tissue, 549 +/- 21 well-resolved spots were detected in Coomassie-stained gels within the pH 5-8 and 10-100 kDa Mr ranges. The analytical and biological variances determined for 450 spots were 31% and 42%, respectively. After LC/MS/MS analysis of in-gel tryptic digested spots, proteins were identified using the novel Paragon algorithm, which tolerates amino acid substitutions in the first-pass search. It allowed the confident identification of 115 out of the 150 protein spots subjected to MS, an unusually high percentage for a poorly covered sequence database, as is the case for P. radiata. Proteins were classified into 12 or 18 groups based on their corresponding cell component or biological process/pathway categories, respectively. Carbohydrate metabolism and photosynthetic enzymes predominate in the 2-DE protein profile of P. radiata needles.

  5. Race and time from diagnosis to radical prostatectomy: does equal access mean equal timely access to the operating room?--Results from the SEARCH database.

    Science.gov (United States)

    Bañez, Lionel L; Terris, Martha K; Aronson, William J; Presti, Joseph C; Kane, Christopher J; Amling, Christopher L; Freedland, Stephen J

    2009-04-01

    African American men with prostate cancer are at higher risk for cancer-specific death than Caucasian men. We determine whether significant delays in management contribute to this disparity. We hypothesize that in an equal-access health care system, time interval from diagnosis to treatment would not differ by race. We identified 1,532 African American and Caucasian men who underwent radical prostatectomy (RP) from 1988 to 2007 at one of four Veterans Affairs Medical Centers that comprise the Shared Equal-Access Regional Cancer Hospital (SEARCH) database with known biopsy date. We compared time from biopsy to RP between racial groups using linear regression adjusting for demographic and clinical variables. We analyzed risk of potential clinically relevant delays by determining odds of delays >90 and >180 days. Median time interval from diagnosis to RP was 76 and 68 days for African Americans and Caucasian men, respectively (P = 0.004). After controlling for demographic and clinical variables, race was not associated with the time interval between diagnosis and RP (P = 0.09). Furthermore, race was not associated with increased risk of delays >90 (P = 0.45) or >180 days (P = 0.31). In a cohort of men undergoing RP in an equal-access setting, there was no significant difference between racial groups with regard to time interval from diagnosis to RP. Thus, equal-access includes equal timely access to the operating room. Given our previous finding of poorer outcomes among African Americans, treatment delays do not seem to explain these observations. Our findings need to be confirmed in patients electing other treatment modalities and in other practice settings.

  6. Pharmacovigilance database search discloses ClC-K channels as a novel target of the AT1 receptor blockers valsartan and olmesartan.

    Science.gov (United States)

    Imbrici, Paola; Tricarico, Domenico; Mangiatordi, Giuseppe Felice; Nicolotti, Orazio; Lograno, Marcello Diego; Conte, Diana; Liantonio, Antonella

    2017-07-01

    Human ClC-K chloride channels are highly attractive targets for drug discovery as they have a variety of important physiological functions and are associated with genetic disorders. These channels are crucial in the kidney as they control chloride reabsorption and water diuresis. In addition, loss-of-function mutations of CLCNKB and BSND genes cause Bartter's syndrome (BS), whereas CLCNKA and CLCNKB gain-of-function polymorphisms predispose to a rare form of salt sensitive hypertension. Both disorders lack a personalized therapy that is in most cases only symptomatic. The aim of this study was to identify novel ClC-K ligands from drugs already on the market, by exploiting the pharmacological side activity of drug molecules available from the FDA Adverse Effects Reporting System database. We searched for drugs having a Bartter-like syndrome as a reported side effect, with the assumption that BS could be causatively related to the block of ClC-K channels. The ability of the selected BS-causing drugs to bind and block ClC-K channels was then validated through an integrated experimental and computational approach based on patch clamp electrophysiology in HEK293 cells and molecular docking simulations. Valsartan and olmesartan were able to block ClC-Ka channels and the molecular requirements for effective inhibition of these channels have been identified. These results suggest additional mechanisms of action for these sartans further to their primary AT 1 receptor antagonism and propose these compounds as leads for designing new potent ClC-K ligands. © 2017 The British Pharmacological Society.
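
    An exploratory query of this kind can nowadays be attempted against the public openFDA adverse-event endpoint. The sketch below is an assumption about that API (endpoint and field names are not taken from the paper, and the reaction term is only illustrative); it does not reproduce the authors' actual FAERS search.

    ```python
    # Count which drugs co-occur with a given adverse-reaction term in openFDA reports.
    import json
    import urllib.parse
    import urllib.request

    params = {
        # Field names are assumptions about the openFDA drug/event endpoint.
        "search": 'patient.reaction.reactionmeddrapt:"bartter syndrome"',
        "count": "patient.drug.medicinalproduct.exact",
    }
    url = "https://api.fda.gov/drug/event.json?" + urllib.parse.urlencode(params)

    with urllib.request.urlopen(url) as resp:
        results = json.load(resp).get("results", [])

    for entry in results[:10]:
        print(entry["term"], entry["count"])
    ```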

  7. Delayed radical prostatectomy for intermediate-risk prostate cancer is associated with biochemical recurrence: possible implications for active surveillance from the SEARCH database.

    Science.gov (United States)

    Abern, Michael R; Aronson, William J; Terris, Martha K; Kane, Christopher J; Presti, Joseph C; Amling, Christopher L; Freedland, Stephen J

    2013-03-01

    Active surveillance (AS) is increasingly accepted as appropriate management for low-risk prostate cancer (PC) patients. It is unknown whether delaying radical prostatectomy (RP) is associated with increased risk of biochemical recurrence (BCR) for men with intermediate-risk PC. We performed a retrospective analysis of 1,561 low and intermediate-risk men from the Shared Equal Access Regional Cancer Hospital (SEARCH) database treated with RP between 1988 and 2011. Patients were stratified by interval between diagnosis and RP (≤ 3, 3-6, 6-9, or >9 months) and by risk using the D'Amico classification. Cox proportional hazard models were used to analyze BCR. Logistic regression was used to analyze positive surgical margins (PSM), extracapsular extension (ECE), and pathologic upgrading. Overall, 813 (52%) men were low-risk, and 748 (48%) intermediate-risk. Median follow-up among men without recurrence was 52.9 months, during which 437 men (38.9%) recurred. For low-risk men, RP delays were unrelated to BCR, ECE, PSM, or upgrading (all P > 0.05). For intermediate-risk men, however, delays >9 months were significantly related to BCR (HR: 2.10, P = 0.01) and PSM (OR: 4.08). Delays >9 months were also associated with BCR in subsets of intermediate-risk men with biopsy Gleason score ≤ 3 + 4 (HR: 2.51), and overall delays >9 months predicted greater BCR and PSM risk. If confirmed in future studies, this suggests delayed RP for intermediate-risk PC may compromise outcomes. Copyright © 2012 Wiley Periodicals, Inc.

  8. Scopus database: a review.

    Science.gov (United States)

    Burnham, Judy F

    2006-03-08

    The Scopus database provides access to STM journal articles and the references included in those articles, allowing the searcher to search both forward and backward in time. The database can be used for collection development as well as for research. This review provides information on the key points of the database and compares it to Web of Science. Neither database is all-inclusive; rather, they complement each other. If a library can only afford one, the choice must be based on institutional needs.

  9. The Establishment of the Chinese Full-text Electronic Periodical Database and Service System

    Directory of Open Access Journals (Sweden)

    Huei-Chu Chang

    2003-12-01

    Full Text Available A database that covers a critical mass of important journals, offers a powerful search interface, and allows easy remote access is the most useful electronic resource for users. Starting from the project of digitizing biomedical journals in the Taiwan area and continuing to CEPS, this article discusses related issues such as the selection of journals, the digitization of back issues, the transfer of copyright from authors to database producers, and the payment of royalties back to authors from revenue. It also describes the journal publishing workflow, marketing, functionality, and the projected cost-effectiveness of CEPS. [Article content in Chinese]

  10. Sierra Leone Journal of Biomedical Research: Submissions

    African Journals Online (AJOL)

    Sierra Leone Journal of Biomedical Research (SLJBR) publishes papers in all ... An original article should give sufficient detail of experimental procedures for ... For references cited in a paper which has been accepted for publication but not ...

  11. Effective use of latent semantic indexing and computational linguistics in biological and biomedical applications.

    Science.gov (United States)

    Chen, Hongyu; Martin, Bronwen; Daimon, Caitlin M; Maudsley, Stuart

    2013-01-01

    Text mining is rapidly becoming an essential technique for the annotation and analysis of large biological data sets. Biomedical literature currently increases at a rate of several thousand papers per week, making automated information retrieval methods the only feasible method of managing this expanding corpus. With the increasing prevalence of open-access journals and constant growth of publicly-available repositories of biomedical literature, literature mining has become much more effective with respect to the extraction of biomedically-relevant data. In recent years, text mining of popular databases such as MEDLINE has evolved from basic term-searches to more sophisticated natural language processing techniques, indexing and retrieval methods, structural analysis and integration of literature with associated metadata. In this review, we will focus on Latent Semantic Indexing (LSI), a computational linguistics technique increasingly used for a variety of biological purposes. It is noted for its ability to consistently outperform benchmark Boolean text searches and co-occurrence models at information retrieval and its power to extract indirect relationships within a data set. LSI has been used successfully to formulate new hypotheses, generate novel connections from existing data, and validate empirical data.
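
    A minimal sketch of LSI itself: a TF-IDF term-document matrix is factorised with truncated SVD, and queries are compared in the reduced concept space. The toy documents and query are invented for illustration.

    ```python
    # Latent semantic indexing with a TF-IDF matrix and truncated SVD.
    from sklearn.decomposition import TruncatedSVD
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    docs = [
        "insulin regulates glucose metabolism",
        "beta cells secrete insulin in the pancreas",
        "dopamine signalling in the striatum",
        "glucose uptake is impaired in diabetes",
    ]
    vec = TfidfVectorizer()
    X = vec.fit_transform(docs)

    lsi = TruncatedSVD(n_components=2, random_state=0)   # 2 latent "concepts"
    X_lsi = lsi.fit_transform(X)

    query_lsi = lsi.transform(vec.transform(["insulin and blood glucose"]))
    scores = cosine_similarity(query_lsi, X_lsi).ravel()
    print(scores.round(2))   # documents about insulin/glucose score highest
    ```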

  12. Nondestructive testing: Neutron radiography and neutron activation. (Latest citations from the INSPEC: Information services for the physics and engineering communities database). Published Search

    International Nuclear Information System (INIS)

    1993-08-01

    The bibliography contains citations concerning the technology of neutron radiography and neutron activation for nondestructive testing of materials. The development and evaluation of neutron activation analysis and neutron diffraction examination of liquids and solids are presented. Citations also discuss nondestructive assay, verification, evaluation, and multielement analysis of biomedical, environmental, industrial, and geological materials. Nondestructive identification of chemical agents, explosives, weapons, and drugs in sealed containers are explored. (Contains a minimum of 83 citations and includes a subject term index and title list.)

  13. PolySearch2: a significantly improved text-mining system for discovering associations between human diseases, genes, drugs, metabolites, toxins and more.

    Science.gov (United States)

    Liu, Yifeng; Liang, Yongjie; Wishart, David

    2015-07-01

    PolySearch2 (http://polysearch.ca) is an online text-mining system for identifying relationships between biomedical entities such as human diseases, genes, SNPs, proteins, drugs, metabolites, toxins, metabolic pathways, organs, tissues, subcellular organelles, positive health effects, negative health effects, drug actions, Gene Ontology terms, MeSH terms, ICD-10 medical codes, biological taxonomies and chemical taxonomies. PolySearch2 supports a generalized 'Given X, find all associated Ys' query, where X and Y can be selected from the aforementioned biomedical entities. An example query might be: 'Find all diseases associated with Bisphenol A'. To find its answers, PolySearch2 searches for associations against comprehensive collections of free-text collections, including local versions of MEDLINE abstracts, PubMed Central full-text articles, Wikipedia full-text articles and US Patent application abstracts. PolySearch2 also searches 14 widely used, text-rich biological databases such as UniProt, DrugBank and Human Metabolome Database to improve its accuracy and coverage. PolySearch2 maintains an extensive thesaurus of biological terms and exploits the latest search engine technology to rapidly retrieve relevant articles and databases records. PolySearch2 also generates, ranks and annotates associative candidates and present results with relevancy statistics and highlighted key sentences to facilitate user interpretation. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  14. Fundamental of biomedical engineering

    CERN Document Server

    Sawhney, GS

    2007-01-01

    About the Book: A well set out textbook explains the fundamentals of biomedical engineering in the areas of biomechanics, biofluid flow, biomaterials, bioinstrumentation and use of computing in biomedical engineering. All these subjects form a basic part of an engineer's education. The text is admirably suited to meet the needs of the students of mechanical engineering, opting for the elective of Biomedical Engineering. Coverage of bioinstrumentation, biomaterials and computing for biomedical engineers can meet the needs of the students of Electronic & Communication, Electronic & Instrumenta

  15. The Ontology for Biomedical Investigations.

    Science.gov (United States)

    Bandrowski, Anita; Brinkman, Ryan; Brochhausen, Mathias; Brush, Matthew H; Bug, Bill; Chibucos, Marcus C; Clancy, Kevin; Courtot, Mélanie; Derom, Dirk; Dumontier, Michel; Fan, Liju; Fostel, Jennifer; Fragoso, Gilberto; Gibson, Frank; Gonzalez-Beltran, Alejandra; Haendel, Melissa A; He, Yongqun; Heiskanen, Mervi; Hernandez-Boussard, Tina; Jensen, Mark; Lin, Yu; Lister, Allyson L; Lord, Phillip; Malone, James; Manduchi, Elisabetta; McGee, Monnie; Morrison, Norman; Overton, James A; Parkinson, Helen; Peters, Bjoern; Rocca-Serra, Philippe; Ruttenberg, Alan; Sansone, Susanna-Assunta; Scheuermann, Richard H; Schober, Daniel; Smith, Barry; Soldatova, Larisa N; Stoeckert, Christian J; Taylor, Chris F; Torniai, Carlo; Turner, Jessica A; Vita, Randi; Whetzel, Patricia L; Zheng, Jie

    2016-01-01

    The Ontology for Biomedical Investigations (OBI) is an ontology that provides terms with precisely defined meanings to describe all aspects of how investigations in the biological and medical domains are conducted. OBI re-uses ontologies that provide a representation of biomedical knowledge from the Open Biological and Biomedical Ontologies (OBO) project and adds the ability to describe how this knowledge was derived. We here describe the state of OBI and several applications that are using it, such as adding semantic expressivity to existing databases, building data entry forms, and enabling interoperability between knowledge resources. OBI covers all phases of the investigation process, such as planning, execution and reporting. It represents information and material entities that participate in these processes, as well as roles and functions. Prior to OBI, it was not possible to use a single internally consistent resource that could be applied to multiple types of experiments for these applications. OBI has made this possible by creating terms for entities involved in biological and medical investigations and by importing parts of other biomedical ontologies such as GO, Chemical Entities of Biological Interest (ChEBI) and Phenotype Attribute and Trait Ontology (PATO) without altering their meaning. OBI is being used in a wide range of projects covering genomics, multi-omics, immunology, and catalogs of services. OBI has also spawned other ontologies (Information Artifact Ontology) and methods for importing parts of ontologies (Minimum information to reference an external ontology term (MIREOT)). The OBI project is an open cross-disciplinary collaborative effort, encompassing multiple research communities from around the globe. To date, OBI has created 2366 classes and 40 relations along with textual and formal definitions. The OBI Consortium maintains a web resource (http://obi-ontology.org) providing details on the people, policies, and issues being addressed

  16. Blockchain distributed ledger technologies for biomedical and health care applications.

    Science.gov (United States)

    Kuo, Tsung-Ting; Kim, Hyeon-Eui; Ohno-Machado, Lucila

    2017-11-01

    To introduce blockchain technologies, including their benefits, pitfalls, and the latest applications, to the biomedical and health care domains. Biomedical and health care informatics researchers who would like to learn about blockchain technologies and their applications in the biomedical/health care domains. The covered topics include: (1) introduction to the famous Bitcoin crypto-currency and the underlying blockchain technology; (2) features of blockchain; (3) review of alternative blockchain technologies; (4) emerging nonfinancial distributed ledger technologies and applications; (5) benefits of blockchain for biomedical/health care applications when compared to traditional distributed databases; (6) overview of the latest biomedical/health care applications of blockchain technologies; and (7) discussion of the potential challenges and proposed solutions of adopting blockchain technologies in biomedical/health care domains. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association.

  17. Astronomical databases of Nikolaev Observatory

    Science.gov (United States)

    Protsyuk, Y.; Mazhaev, A.

    2008-07-01

    Several astronomical databases have been created at Nikolaev Observatory in recent years. The databases are built using the MySQL database engine and PHP scripts. They are available on the NAO website at http://www.mao.nikolaev.ua.

  18. GeneView: a comprehensive semantic search engine for PubMed.

    Science.gov (United States)

    Thomas, Philippe; Starlinger, Johannes; Vowinkel, Alexander; Arzt, Sebastian; Leser, Ulf

    2012-07-01

    Research results are primarily published in scientific literature and curation efforts cannot keep up with the rapid growth of published literature. The plethora of knowledge remains hidden in large text repositories like MEDLINE. Consequently, life scientists have to spend a great amount of time searching for specific information. The enormous ambiguity among most names of biomedical objects such as genes, chemicals and diseases often produces too large and unspecific search results. We present GeneView, a semantic search engine for biomedical knowledge. GeneView is built upon a comprehensively annotated version of PubMed abstracts and openly available PubMed Central full texts. This semi-structured representation of biomedical texts enables a number of features extending classical search engines. For instance, users may search for entities using unique database identifiers or they may rank documents by the number of specific mentions they contain. Annotation is performed by a multitude of state-of-the-art text-mining tools for recognizing mentions from 10 entity classes and for identifying protein-protein interactions. GeneView currently contains annotations for >194 million entities from 10 classes for ∼21 million citations with 271,000 full text bodies. GeneView can be searched at http://bc3.informatik.hu-berlin.de/.
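
    One of the features mentioned above, ranking documents by the number of mentions of a specific entity, reduces to counting synonyms per document once entities have been normalized. The sketch below is a toy illustration under an assumed identifier-to-synonym mapping; it is not GeneView's annotation layer, which relies on dedicated text-mining tools.

```python
# Toy ranking of documents by mention count for one entity identifier.
SYNONYMS = {"UniProt:P04637": ["tp53", "tumor protein p53"]}  # assumed mapping

def rank_by_mentions(entity_id, documents):
    """Return (mention count, document id) pairs, most mentions first."""
    names = SYNONYMS[entity_id]
    counted = []
    for doc_id, text in documents.items():
        text_lower = text.lower()
        counted.append((sum(text_lower.count(name) for name in names), doc_id))
    return sorted(counted, reverse=True)

documents = {
    "PMID:1": "TP53 (tumor protein p53) mutations are common in many cancers.",
    "PMID:2": "This study examines BRCA1 in hereditary breast cancer.",
}
print(rank_by_mentions("UniProt:P04637", documents))
# [(2, 'PMID:1'), (0, 'PMID:2')]
```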

  19. Exploration of Global Trend on Biomedical Application of Polyhydroxyalkanoate (PHA): A Patent Survey.

    Science.gov (United States)

    Ponnaiah, Paulraj; Vnoothenei, Nagiah; Chandramohan, Muruganandham; Thevarkattil, Mohamed Javad Pazhayakath

    2018-01-30

    Polyhydroxyalkanoates are bio-based, biodegradable, naturally occurring polymers produced by a wide range of organisms, from bacteria to higher mammals. The properties and biocompatibility of PHA make a wide spectrum of applications possible. In this context, we analyze the potential applications of PHA in biomedical science by exploring the global trend through a patent survey. The survey suggests that PHA is an attractive candidate whose applications are widely distributed across the medical industry, drug delivery systems, dental materials, tissue engineering, packaging materials and other useful products. In our present study, we explored patents associated with various biomedical applications of polyhydroxyalkanoates. Patent databases of the European Patent Office, the United States Patent and Trademark Office and the World Intellectual Property Organization were mined. We developed an intensive exploration approach to eliminate overlapping patents and sort out significant patents. We demarcated the keywords and search criteria and established search patterns for the database requests. We retrieved documents from the recent years 2010 to 2016 and sorted the collected data stepwise to gather the most appropriate documents in patent families for further scrutiny. By this approach, we retrieved 23,368 patent documents from all three databases, and the patent titles were further analyzed for the relevance of polyhydroxyalkanoates in biomedical applications. This resulted in the documentation of approximately 226 significant patents associated with biomedical applications of polyhydroxyalkanoates, and the information was classified into six major groups. Polyhydroxyalkanoates have been patented in such a way that their applications are widely distributed across the medical industry, drug delivery systems, dental materials, tissue engineering, packaging materials and other useful products. There are many avenues through which PHA & PHB could be

  20. Biomedical information retrieval across languages.

    Science.gov (United States)

    Daumke, Philipp; Markó, Kornél; Poprat, Michael; Schulz, Stefan; Klar, Rüdiger

    2007-06-01

    This work presents a new dictionary-based approach to biomedical cross-language information retrieval (CLIR) that addresses many of the general and domain-specific challenges in current CLIR research. Our method is based on a multilingual lexicon that was generated partly manually and partly automatically, and currently covers six European languages. It contains morphologically meaningful word fragments, termed subwords. Using subwords instead of entire words significantly reduces the number of lexical entries necessary to sufficiently cover a specific language and domain. Mediation between queries and documents is based on these subwords as well as on lists of word-n-grams that are generated from large monolingual corpora and constitute possible translation units. The translations are then sent to a standard Internet search engine. This process makes our approach an effective tool for searching the biomedical content of the World Wide Web in different languages. We evaluate this approach using the OHSUMED corpus, a large medical document collection, within a cross-language retrieval setting.
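
    The subword idea can be sketched as a greedy segmentation of a query term into lexicon fragments that are then mapped to their English counterparts. The four-entry lexicon and the longest-match strategy below are assumptions made purely for illustration; they are not the authors' multilingual lexicon or their actual matching procedure.

```python
# Toy subword segmentation and translation for cross-language query mediation.
SUBWORD_LEXICON = {
    # source-language subword -> English-oriented subword (illustrative entries only)
    "magen": "gastro",
    "darm": "entero",
    "entzuendung": "itis",
    "nieren": "nephro",
}

def segment(term, lexicon):
    """Greedy longest-match segmentation of a term into known subwords."""
    term, pieces, i = term.lower(), [], 0
    while i < len(term):
        for j in range(len(term), i, -1):   # try the longest substring first
            if term[i:j] in lexicon:
                pieces.append(term[i:j])
                i = j
                break
        else:
            i += 1                           # skip characters covered by no subword
    return pieces

query = "Magendarmentzuendung"               # German query term (gastroenteritis)
subwords = segment(query, SUBWORD_LEXICON)
print(subwords, "->", [SUBWORD_LEXICON[s] for s in subwords])
# ['magen', 'darm', 'entzuendung'] -> ['gastro', 'entero', 'itis']
```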

  1. The BioLexicon: a large-scale terminological resource for biomedical text mining

    Directory of Open Access Journals (Sweden)

    Thompson Paul

    2011-10-01

    Full Text Available Abstract Background Due to the rapidly expanding body of biomedical literature, biologists require increasingly sophisticated and efficient systems to help them to search for relevant information. Such systems should account for the multiple written variants used to represent biomedical concepts, and allow the user to search for specific pieces of knowledge (or events) involving these concepts, e.g., protein-protein interactions. Such functionality requires access to detailed information about words used in the biomedical literature. Existing databases and ontologies often have a specific focus and are oriented towards human use. Consequently, biological knowledge is dispersed amongst many resources, which often do not attempt to account for the large and frequently changing set of variants that appear in the literature. Additionally, such resources typically do not provide information about how terms relate to each other in texts to describe events. Results This article provides an overview of the design, construction and evaluation of a large-scale lexical and conceptual resource for the biomedical domain, the BioLexicon. The resource can be exploited by text mining tools at several levels, e.g., part-of-speech tagging, recognition of biomedical entities, and the extraction of events in which they are involved. As such, the BioLexicon must account for real usage of words in biomedical texts. In particular, the BioLexicon gathers together different types of terms from several existing data resources into a single, unified repository, and augments them with new term variants automatically extracted from biomedical literature. Extraction of events is facilitated through the inclusion of biologically pertinent verbs (around which events are typically organized) together with information about typical patterns of grammatical and semantic behaviour, which are acquired from domain-specific texts. In order to foster interoperability, the BioLexicon is

  2. The BioLexicon: a large-scale terminological resource for biomedical text mining

    Science.gov (United States)

    2011-01-01

    Background Due to the rapidly expanding body of biomedical literature, biologists require increasingly sophisticated and efficient systems to help them to search for relevant information. Such systems should account for the multiple written variants used to represent biomedical concepts, and allow the user to search for specific pieces of knowledge (or events) involving these concepts, e.g., protein-protein interactions. Such functionality requires access to detailed information about words used in the biomedical literature. Existing databases and ontologies often have a specific focus and are oriented towards human use. Consequently, biological knowledge is dispersed amongst many resources, which often do not attempt to account for the large and frequently changing set of variants that appear in the literature. Additionally, such resources typically do not provide information about how terms relate to each other in texts to describe events. Results This article provides an overview of the design, construction and evaluation of a large-scale lexical and conceptual resource for the biomedical domain, the BioLexicon. The resource can be exploited by text mining tools at several levels, e.g., part-of-speech tagging, recognition of biomedical entities, and the extraction of events in which they are involved. As such, the BioLexicon must account for real usage of words in biomedical texts. In particular, the BioLexicon gathers together different types of terms from several existing data resources into a single, unified repository, and augments them with new term variants automatically extracted from biomedical literature. Extraction of events is facilitated through the inclusion of biologically pertinent verbs (around which events are typically organized) together with information about typical patterns of grammatical and semantic behaviour, which are acquired from domain-specific texts. In order to foster interoperability, the BioLexicon is modelled using the Lexical

  3. Constructing Effective Search Strategies for Electronic Searching.

    Science.gov (United States)

    Flanagan, Lynn; Parente, Sharon Campbell

    Electronic databases have grown tremendously in both number and popularity since their development during the 1960s. Access to electronic databases in academic libraries was originally offered primarily through mediated search services by trained librarians; however, the advent of CD-ROM and end-user interfaces for online databases has shifted the…

  4. Biomedical applications engineering tasks

    Science.gov (United States)

    Laenger, C. J., Sr.

    1976-01-01

    The engineering tasks performed in response to needs articulated by clinicians are described. Initial contacts were made with these clinician-technology requestors by the Southwest Research Institute NASA Biomedical Applications Team. The basic purpose of the program was to effectively transfer aerospace technology into functional hardware to solve real biomedical problems.

  5. Database Description - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database name: Trypanosomes Database. Maintained by the National Institute of Genetics, Research Organization of Information and Systems, Yata 1111, Mishima, Shizuoka 411-8540, Japan. Taxonomy covered: Trypanosoma (Taxonomy ID: 5690) and Homo sapiens (Taxonomy ID: 9606). External links include PDB (Protein Data Bank), the KEGG PATHWAY Database and DrugPort. An entry list, query search and web services are available.

  6. Hmrbase: a database of hormones and their receptors

    Science.gov (United States)

    Rashid, Mamoon; Singla, Deepak; Sharma, Arun; Kumar, Manish; Raghava, Gajendra PS

    2009-01-01

    Background Hormones are signaling molecules that play vital roles in various life processes, like growth and differentiation, physiology, and reproduction. These molecules are mostly secreted by endocrine glands and transported to target organs through the bloodstream. Deficient or excessive levels of hormones are associated with several diseases such as cancer, osteoporosis, and diabetes. Thus, it is important to collect and compile information about hormones and their receptors. Description This manuscript describes a database called Hmrbase which has been developed for managing information about hormones and their receptors. It is a highly curated database for which information has been collected from the literature and the public databases. The current version of Hmrbase contains comprehensive information about ~2000 hormones, e.g., about their function, source organism, receptors, mature sequences, and structures. Hmrbase also contains information about ~3000 hormone receptors, in terms of amino acid sequences, subcellular localizations, ligands, and post-translational modifications. One of the major features of this database is that it provides data about ~4100 hormone-receptor pairs. A number of online tools have been integrated into the database to provide facilities such as keyword search, structure-based search, mapping of a given peptide(s) onto the hormone/receptor sequence, and sequence similarity search. This database also provides a number of external links to other resources/databases to help retrieve further related information. Conclusion Owing to the high impact of endocrine research in the biomedical sciences, Hmrbase could become a leading data portal for researchers. The salient features of Hmrbase are hormone-receptor pair-related information, mapping of peptide stretches onto the protein sequences of hormones and receptors, Pfam domain annotations, categorical browsing options, online data submission, Drug
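
    One of the utilities listed above, mapping a peptide onto a hormone or receptor sequence, is essentially an exact substring search. The sketch below uses an invented receptor sequence and is only an illustration of that operation, not Hmrbase's implementation.

```python
# Toy peptide-to-protein mapping by exact substring search.
def map_peptide(peptide, protein):
    """Return all 0-based start positions where the peptide occurs in the protein."""
    hits, start = [], protein.find(peptide)
    while start != -1:
        hits.append(start)
        start = protein.find(peptide, start + 1)
    return hits

receptor = "MKTLLLTLVVVTIVCLDLGYTRICFNQHSSQPQTTKTCSPGESSCYHKQWSD"  # made-up sequence
print(map_peptide("CLDLGYT", receptor))   # [14]
```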

  7. An Online Database Producer's Memoirs and Memories of an Online Pioneer and The Database Industry: Looking into the Future.

    Science.gov (United States)

    Kollegger, James G.; And Others

    1988-01-01

    In the first of three articles, the producer of Energyline, Energynet, and Tele/Scope recalls the development of the databases and database business strategies. The second describes the development of biomedical online databases, and the third discusses future developments, including full text databases, database producers as online host, and…

  8. Atomic Spectra Database (ASD)

    Science.gov (United States)

    SRD 78 NIST Atomic Spectra Database (ASD) (Web, free access)   This database provides access and search capability for NIST critically evaluated data on atomic energy levels, wavelengths, and transition probabilities that are reasonably up-to-date. The NIST Atomic Spectroscopy Data Center has carried out these critical compilations.

  9. Database in Artificial Intelligence.

    Science.gov (United States)

    Wilkinson, Julia

    1986-01-01

    Describes a specialist bibliographic database of literature in the field of artificial intelligence created by the Turing Institute (Glasgow, Scotland) using the BRS/Search information retrieval software. The subscription method for end-users--i.e., annual fee entitles user to unlimited access to database, document provision, and printed awareness…

  10. Design and implementation of Metta, a metasearch engine for biomedical literature retrieval intended for systematic reviewers.

    Science.gov (United States)

    Smalheiser, Neil R; Lin, Can; Jia, Lifeng; Jiang, Yu; Cohen, Aaron M; Yu, Clement; Davis, John M; Adams, Clive E; McDonagh, Marian S; Meng, Weiyi

    2014-01-01

    Individuals and groups who write systematic reviews and meta-analyses in evidence-based medicine regularly carry out literature searches across multiple search engines linked to different bibliographic databases, and thus have an urgent need for a suitable metasearch engine to save time spent on repeated searches and to remove duplicate publications from initial consideration. Unlike general users who generally carry out searches to find a few highly relevant (or highly recent) articles, systematic reviewers seek to obtain a comprehensive set of articles on a given topic, satisfying specific criteria. This creates special requirements and challenges for metasearch engine design and implementation. We created a federated search tool that is connected to five databases: PubMed, EMBASE, CINAHL, PsycINFO, and the Cochrane Central Register of Controlled Trials. Retrieved bibliographic records were shown online; optionally, results could be de-duplicated and exported in both BibTex and XML format. The query interface was extensively modified in response to feedback from users within our team. Besides a general search track and one focused on human-related articles, we also added search tracks optimized to identify case reports and systematic reviews. Although users could modify preset search options, they were rarely if ever altered in practice. Up to several thousand retrieved records could be exported within a few minutes. De-duplication of records returned from multiple databases was carried out in a prioritized fashion that favored retaining citations returned from PubMed. Systematic reviewers are used to formulating complex queries using strategies and search tags that are specific for individual databases. Metta offers a different approach that may save substantial time but which requires modification of current search strategies and better indexing of randomized controlled trial articles. We envision Metta as one piece of a multi-tool pipeline that will assist
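
    The prioritized de-duplication described above can be sketched as a keep-the-best-source rule applied to a normalized record key. In the illustration below both the matching key (normalized title plus year) and the source ranking are assumptions; the abstract only states that de-duplication favored retaining citations returned from PubMed.

```python
# Toy prioritized de-duplication across bibliographic sources.
import re

SOURCE_PRIORITY = {"PubMed": 0, "EMBASE": 1, "CINAHL": 2, "PsycINFO": 3, "Cochrane": 4}

def dedup_key(record):
    """Normalize the title and pair it with the year to form a matching key."""
    title = re.sub(r"[^a-z0-9]+", " ", record["title"].lower()).strip()
    return (title, record.get("year"))

def deduplicate(records):
    """Keep one record per key, preferring the highest-priority source (PubMed first)."""
    best = {}
    for record in records:
        key = dedup_key(record)
        if key not in best or SOURCE_PRIORITY[record["source"]] < SOURCE_PRIORITY[best[key]["source"]]:
            best[key] = record
    return list(best.values())

records = [
    {"title": "A Trial of Drug X", "year": 2013, "source": "EMBASE"},
    {"title": "A trial of drug X.", "year": 2013, "source": "PubMed"},
]
print(deduplicate(records))   # only the PubMed record is retained
```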

  11. Online Patent Searching: The Realities.

    Science.gov (United States)

    Kaback, Stuart M.

    1983-01-01

    Considers patent subject searching capabilities of major online databases, noting patent claims, "deep-indexed" files, test searches, retrieval of related references, multi-database searching, improvements needed in indexing of chemical structures, full text searching, improvements needed in handling numerical data, and augmenting a…

  12. Literature database aid

    International Nuclear Information System (INIS)

    Wanderer, J.A.

    1991-01-01

    The booklet is intended to help with the acquisition of original literature, either after a conventional literature search or, in particular, after a database search. It bridges the gap between abbreviated (short) and original (long) titles. This, together with information on the holdings of technical/scientific libraries, facilitates document delivery. 1500 short titles are listed alphabetically. (orig.) [de

  13. Translational Bioinformatics and Clinical Research (Biomedical) Informatics.

    Science.gov (United States)

    Sirintrapun, S Joseph; Zehir, Ahmet; Syed, Aijazuddin; Gao, JianJiong; Schultz, Nikolaus; Cheng, Donavan T

    2015-06-01

    Translational bioinformatics and clinical research (biomedical) informatics are the primary domains related to informatics activities that support translational research. Translational bioinformatics focuses on computational techniques in genetics, molecular biology, and systems biology. Clinical research (biomedical) informatics involves the use of informatics in discovery and management of new knowledge relating to health and disease. This article details 3 projects that are hybrid applications of translational bioinformatics and clinical research (biomedical) informatics: The Cancer Genome Atlas, the cBioPortal for Cancer Genomics, and the Memorial Sloan Kettering Cancer Center clinical variants and results database, all designed to facilitate insights into cancer biology and clinical/therapeutic correlations. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. Finding and accessing diagrams in biomedical publications.

    Science.gov (United States)

    Kuhn, Tobias; Luong, ThaiBinh; Krauthammer, Michael

    2012-01-01

    Complex relationships in biomedical publications are often communicated by diagrams such as bar and line charts, which are a very effective way of summarizing and communicating multi-faceted data sets. Given the ever-increasing amount of published data, we argue that the precise retrieval of such diagrams is of great value for answering specific and otherwise hard-to-meet information needs. To this end, we demonstrate the use of advanced image processing and classification for identifying bar and line charts by the shape and relative location of the different image elements that make up the charts. With recall and precision close to 90% for the detection of relevant figures, we discuss the use of this technology in an existing biomedical image search engine, and outline how it enables new forms of literature queries over biomedical relationships that are represented in these charts.

  15. Biomedical applications of polymers

    CERN Document Server

    Gebelein, C G

    1991-01-01

    The biomedical applications of polymers span an extremely wide spectrum of uses, including artificial organs, skin and soft tissue replacements, orthopaedic applications, dental applications, and controlled release of medications. No single, short review can possibly cover all these items in detail, and dozens of books and hundreds of reviews exist on biomedical polymers. Only a few relatively recent examples will be cited here; additional reviews are listed under most of the major topics in this book. We will consider each of the major classifications of biomedical polymers to some extent, inclu

  16. Handbook of biomedical optics

    CERN Document Server

    Boas, David A

    2011-01-01

    Biomedical optics holds tremendous promise to deliver effective, safe, non- or minimally invasive diagnostics and targeted, customizable therapeutics. Handbook of Biomedical Optics provides an in-depth treatment of the field, including coverage of applications for biomedical research, diagnosis, and therapy. It introduces the theory and fundamentals of each subject, ensuring accessibility to a wide multidisciplinary readership. It also offers a view of the state of the art and discusses advantages and disadvantages of various techniques.Organized into six sections, this handbook: Contains intr

  17. Biomedical Engineering Desk Reference

    CERN Document Server

    Ratner, Buddy D; Schoen, Frederick J; Lemons, Jack E; Dyro, Joseph; Martinsen, Orjan G; Kyle, Richard; Preim, Bernhard; Bartz, Dirk; Grimnes, Sverre; Vallero, Daniel; Semmlow, John; Murray, W Bosseau; Perez, Reinaldo; Bankman, Isaac; Dunn, Stanley; Ikada, Yoshito; Moghe, Prabhas V; Constantinides, Alkis

    2009-01-01

    A one-stop Desk Reference, for Biomedical Engineers involved in the ever expanding and very fast moving area; this is a book that will not gather dust on the shelf. It brings together the essential professional reference content from leading international contributors in the biomedical engineering field. Material covers a broad range of topics including: Biomechanics and Biomaterials; Tissue Engineering; and Biosignal Processing* A hard-working desk reference providing all the essential material needed by biomedical and clinical engineers on a day-to-day basis * Fundamentals, key techniques,

  18. Powering biomedical devices

    CERN Document Server

    Romero, Edwar

    2013-01-01

    From exoskeletons to neural implants, biomedical devices are no less than life-changing. Compact and constant power sources are necessary to keep these devices running efficiently. Edwar Romero's Powering Biomedical Devices reviews the background, current technologies, and possible future developments of these power sources, examining not only the types of biomedical power sources available (macro, mini, MEMS, and nano), but also what they power (such as prostheses, insulin pumps, and muscular and neural stimulators), and how they work (covering batteries, biofluids, kinetic and ther

  19. HIP2: An online database of human plasma proteins from healthy individuals

    Directory of Open Access Journals (Sweden)

    Shen Changyu

    2008-04-01

    Full Text Available Abstract Background With the introduction of increasingly powerful mass spectrometry (MS) techniques for clinical research, several recent large-scale MS proteomics studies have sought to characterize the entire human plasma proteome with a general objective for identifying thousands of proteins leaked from tissues in the circulating blood. Understanding the basic constituents, diversity, and variability of the human plasma proteome is essential to the development of sensitive molecular diagnosis and treatment monitoring solutions for future biomedical applications. Biomedical researchers today, however, do not have an integrated online resource in which they can search for plasma proteins collected from different mass spectrometry platforms, experimental protocols, and search software for healthy individuals. The lack of such a resource for comparisons has made it difficult to interpret proteomics profile changes in patients' plasma and to design protein biomarker discovery experiments. Description To aid future protein biomarker studies of disease and health from human plasma, we developed an online database, HIP2 (Healthy Human Individual's Integrated Plasma Proteome). The current version contains 12,787 protein entries linked to 86,831 peptide entries identified using different MS platforms. Conclusion This web-based database will be useful to biomedical researchers involved in biomarker discovery research. This database has been developed to be the comprehensive collection of healthy human plasma proteins, and has protein data captured in a relational database schema built to contain mappings of supporting peptide evidence from several high-quality and high-throughput mass-spectrometry (MS) experimental data sets. Users can search for plasma protein/peptide annotations, peptide/protein alignments, and experimental/sample conditions with options for filter-based retrieval to achieve greater analytical power for discovery and validation.

  20. Integrating systems biology models and biomedical ontologies.

    Science.gov (United States)

    Hoehndorf, Robert; Dumontier, Michel; Gennari, John H; Wimalaratne, Sarala; de Bono, Bernard; Cook, Daniel L; Gkoutos, Georgios V

    2011-08-11

    Systems biology is an approach to biology that emphasizes the structure and dynamic behavior of biological systems and the interactions that occur within them. To succeed, systems biology crucially depends on the accessibility and integration of data across domains and levels of granularity. Biomedical ontologies were developed to facilitate such an integration of data and are often used to annotate biosimulation models in systems biology. We provide a framework to integrate representations of in silico systems biology with those of in vivo biology as described by biomedical ontologies and demonstrate this framework using the Systems Biology Markup Language. We developed the SBML Harvester software that automatically converts annotated SBML models into OWL and we apply our software to those biosimulation models that are contained in the BioModels Database. We utilize the resulting knowledge base for complex biological queries that can bridge levels of granularity, verify models based on the biological phenomenon they represent and provide a means to establish a basic qualitative layer on which to express the semantics of biosimulation models. We establish an information flow between biomedical ontologies and biosimulation models and we demonstrate that the integration of annotated biosimulation models and biomedical ontologies enables the verification of models as well as expressive queries. Establishing a bi-directional information flow between systems biology and biomedical ontologies has the potential to enable large-scale analyses of biological systems that span levels of granularity from molecules to organisms.
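
    A conversion of this kind needs, as a first step, the ontology term URIs (GO, ChEBI, and so on) attached to model elements through their RDF annotations. The sketch below extracts those URIs from an SBML file using only the Python standard library; it is a simplified stand-in for that first step, not the SBML Harvester's OWL conversion, and the example file name is hypothetical.

```python
# Extract ontology term URIs from the RDF annotations of an SBML model.
import xml.etree.ElementTree as ET

RDF_NS = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"

def annotation_uris(sbml_path, kinds=("species", "reaction", "compartment")):
    """Map each annotated element id to the ontology URIs referenced in its annotation."""
    tree = ET.parse(sbml_path)
    uris = {}
    for element in tree.iter():
        tag = element.tag.rsplit("}", 1)[-1]          # strip the SBML namespace
        if tag not in kinds or element.get("id") is None:
            continue
        refs = [li.get(RDF_NS + "resource")
                for li in element.iter(RDF_NS + "li")
                if li.get(RDF_NS + "resource")]
        if refs:
            uris[element.get("id")] = refs
    return uris

# Hypothetical usage:
# print(annotation_uris("BIOMD0000000001.xml"))
# {'species_1': ['http://identifiers.org/chebi/CHEBI:29108', ...], ...}
```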

  1. License - SSBD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available License information for the SSBD database in the Life Science Database Archive; the terms may be changed without notice.

  2. Download - PSCDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Download page for the PSCDB dataset in the Life Science Database Archive.

  3. License - ASTRA | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available License information for the ASTRA database in the Life Science Database Archive; the terms may be changed without notice.

  4. License - JSNP | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available License information for the JSNP database in the Life Science Database Archive.

  5. License - KOME | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available License information for the KOME database in the Life Science Database Archive.

  6. Download - ASTRA | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Download page for the ASTRA dataset in the Life Science Database Archive.

  7. License - RGP gmap | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available License information for the RGP gmap database in the Life Science Database Archive; the terms may be changed without notice.

  8. License - SAHG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available License information for the SAHG database in the Life Science Database Archive; the terms may be changed without notice.

  9. Download - RED | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Download page for the RED dataset in the Life Science Database Archive.

  10. Download - GRIPDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Download page for the GRIPDB dataset in the Life Science Database Archive.

  11. License - RPSD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available License information for the RPSD database in the Life Science Database Archive; the terms may be changed without notice.

  12. License - RMOS | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available License information for the RMOS database in the Life Science Database Archive.

  13. NDE in biomedical engineering

    International Nuclear Information System (INIS)

    Bhagwat, Aditya; Kumar, Pradeep

    2015-01-01

    Biomedical Engineering (BME) is an interdisciplinary field, marking the conjunction of Medical and Engineering disciplines. It combines the design and problem solving skills of engineering with medical and biological sciences to advance health care treatment, including diagnosis, monitoring, and therapy

  14. Biomedical signal analysis

    CERN Document Server

    Rangayyan, Rangaraj M

    2015-01-01

    The book will assist the reader in developing techniques for the analysis of biomedical signals and computer-aided diagnosis, with a pedagogical examination of basic and advanced topics accompanied by over 350 figures and illustrations. A wide range of filtering techniques is presented to address various applications. 800 mathematical expressions and equations. Practical questions, problems and laboratory exercises. Includes fractals and chaos theory with biomedical applications.

  15. Probabilistic and machine learning-based retrieval approaches for biomedical dataset retrieval

    Science.gov (United States)

    Karisani, Payam; Qin, Zhaohui S; Agichtein, Eugene

    2018-01-01

    Abstract The bioCADDIE dataset retrieval challenge brought together different approaches to retrieval of biomedical datasets relevant to a user’s query, expressed as a text description of a needed dataset. We describe experiments in applying a data-driven, machine learning-based approach to biomedical dataset retrieval as part of this challenge. We report on a series of experiments carried out to evaluate the performance of both probabilistic and machine learning-driven techniques from information retrieval, as applied to this challenge. Our experiments with probabilistic information retrieval methods, such as query term weight optimization, automatic query expansion and simulated user relevance feedback, demonstrate that automatically boosting the weights of important keywords in a verbose query is more effective than other methods. We also show that although there is a rich space of potential representations and features available in this domain, machine learning-based re-ranking models are not able to improve on probabilistic information retrieval techniques with the currently available training data. The models and algorithms presented in this paper can serve as a viable implementation of a search engine to provide access to biomedical datasets. The retrieval performance is expected to be further improved by using additional training data that is created by expert annotation, or gathered through usage logs, clicks and other processes during natural operation of the system. Database URL: https://github.com/emory-irlab/biocaddie
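
    The reported finding that boosting the weights of important keywords in a verbose query is most effective can be illustrated with a plain BM25 scorer that accepts per-term boosts. Everything below, from the three toy dataset descriptions to the boost values, is an assumption made for illustration; it is not the challenge system or its tuned parameters.

```python
# BM25 scoring with per-term boost weights for a verbose query (toy corpus).
import math
from collections import Counter

docs = [
    "RNA-seq expression dataset from human liver tissue",
    "Proteomics dataset of mouse brain samples",
    "Clinical survey data on diet and exercise",
]
tokenized = [d.lower().split() for d in docs]
avgdl = sum(len(toks) for toks in tokenized) / len(tokenized)
df = Counter(term for toks in tokenized for term in set(toks))
N, k1, b = len(tokenized), 1.5, 0.75

def bm25(query_weights, doc_tokens):
    tf = Counter(doc_tokens)
    score = 0.0
    for term, boost in query_weights.items():
        if term not in tf:
            continue
        idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
        norm = tf[term] + k1 * (1 - b + b * len(doc_tokens) / avgdl)
        score += boost * idf * tf[term] * (k1 + 1) / norm
    return score

# Verbose query with boosted key terms and a low-weight filler term.
query_weights = {"rna-seq": 3.0, "liver": 2.0, "human": 1.0, "data": 0.5}
for doc, toks in sorted(zip(docs, tokenized), key=lambda pair: -bm25(query_weights, pair[1])):
    print(f"{bm25(query_weights, toks):.2f}  {doc}")
```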

  16. Alkemio: association of chemicals with biomedical topics by text and data mining.

    Science.gov (United States)

    Gijón-Correas, José A; Andrade-Navarro, Miguel A; Fontaine, Jean F

    2014-07-01

    The PubMed® database of biomedical citations allows the retrieval of scientific articles studying the function of chemicals in biology and medicine. Mining millions of available citations to search for reported associations between chemicals and topics of interest would require substantial human time. We have implemented the Alkemio text mining web tool and SOAP web service to help in this task. The tool uses biomedical articles discussing chemicals (including drugs), predicts their relatedness to the query topic with a naïve Bayesian classifier and ranks all chemicals by P-values computed from random simulations. Benchmarks on seven human pathways showed good retrieval performance (areas under the receiver operating characteristic curves ranged from 73.6 to 94.5%). Comparison with existing tools to retrieve chemicals associated with eight diseases showed the higher precision and recall of Alkemio when considering the top 10 candidate chemicals. Alkemio is a high-performing web tool that ranks chemicals for any biomedical topic, and it is free for non-commercial users. http://cbdm.mdc-berlin.de/∼medlineranker/cms/alkemio. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
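
    The ranking scheme described above, a per-chemical relatedness score plus a P-value computed from random simulations, can be sketched as follows. The topic_score function is a crude stand-in for the naïve Bayesian classifier and the corpus is invented; the sketch only shows how an empirical P-value can be obtained by comparing a chemical's abstracts against randomly drawn abstract sets of the same size.

```python
# Empirical P-value for a chemical's relatedness to a topic via random simulations.
import random

def topic_score(abstracts, topic_terms):
    """Stand-in relatedness score: fraction of abstracts mentioning any topic term."""
    hits = sum(any(term in abstract.lower() for term in topic_terms) for abstract in abstracts)
    return hits / len(abstracts)

def empirical_pvalue(chemical_abstracts, all_abstracts, topic_terms, n_sim=1000, seed=0):
    rng = random.Random(seed)
    observed = topic_score(chemical_abstracts, topic_terms)
    k = len(chemical_abstracts)
    at_least_as_high = sum(
        topic_score(rng.sample(all_abstracts, k), topic_terms) >= observed
        for _ in range(n_sim)
    )
    return (at_least_as_high + 1) / (n_sim + 1)   # add-one smoothing avoids P = 0

corpus = [
    "aspirin reduces inflammation", "caffeine affects sleep",
    "ibuprofen treats inflammation and pain", "sugar intake and obesity",
    "statins lower cholesterol", "paracetamol for fever",
]
aspirin_abstracts = ["aspirin reduces inflammation", "aspirin and inflammation markers"]
print(empirical_pvalue(aspirin_abstracts, corpus, ["inflammation"], n_sim=200))
```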

  17. PubMed-based quantitative analysis of biomedical publications in the SAARC countries: 1985-2009.

    Science.gov (United States)

    Azim Majumder, Md Anwarul; Shaban, Sami F; Rahman, Sayeeda; Rahman, Nuzhat; Ahmed, Moslehuddin; Bin Abdulrahman, Khalid A; Islam, Ziauddin

    2012-09-01

    To conduct a geographical analysis of biomedical publications from the South Asian Association for Regional Cooperation (SAARC) countries over the past 25 years (1985-2009) using the PubMed database. A qualitative study. Web-based search during September 2010. A data extraction program, developed by one of the authors (SFS), was used to extract the raw publication counts from the downloaded PubMed data. A search of PubMed was performed for all journals indexed by selecting the advanced search option and entering the country name in the 'affiliation' field. The publications were normalized by total population, adult illiteracy rate, gross domestic product (GDP), secondary school enrollment ratio and Internet usage rate. The number of PubMed-listed papers published by the SAARC countries over the last 25 years totalled 141,783, which is 1.1% of the total papers indexed by PubMed in the same period. India alone produced 90.5% of the total publications generated by SAARC countries. The average number of papers published per year from 1985 to 2009 was 5671, and the number of publications increased approximately 242-fold. Normalizing by population (per million) and GDP (per billion), India (133, 27.6%) and Nepal (323, 37.3%) had the highest number of publications, respectively. There was a marked imbalance among the SAARC countries in terms of biomedical research and publication. Because of the huge population and the high disease burden, biomedical research and publication output should receive special attention to formulate health policies, re-orient medical education curricula, and alleviate diseases and poverty.
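
    The normalization used in the study, publications per unit of population and per unit of GDP, is simple arithmetic; the sketch below uses placeholder figures rather than the study's data.

```python
# Normalize publication counts by population (per million) and GDP (per billion USD).
countries = {
    # name: (PubMed papers, population in millions, GDP in billions USD); placeholder values
    "CountryA": (128_000, 1_200.0, 1_700.0),
    "CountryB": (9_000, 28.0, 20.0),
}

for name, (papers, population_millions, gdp_billions) in countries.items():
    per_million_people = papers / population_millions
    per_billion_gdp = papers / gdp_billions
    print(f"{name}: {per_million_people:.1f} papers per million people, "
          f"{per_billion_gdp:.1f} papers per billion USD of GDP")
```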

  18. Custom Search Engines: Tools & Tips

    Science.gov (United States)

    Notess, Greg R.

    2008-01-01

    Few have the resources to build a Google or Yahoo! from scratch. Yet anyone can build a search engine based on a subset of the large search engines' databases. Use Google Custom Search Engine or Yahoo! Search Builder or any of the other similar programs to create a vertical search engine targeting sites of interest to users. The basic steps to…

  19. A search for pre-main sequence stars in the high-latitude molecular clouds. II - A survey of the Einstein database

    Science.gov (United States)

    Caillault, Jean-Pierre; Magnani, Loris

    1990-01-01

    The preliminary results are reported of a survey of every EINSTEIN image which overlaps any high-latitude molecular cloud in a search for X-ray emitting pre-main sequence stars. This survey, together with complementary KPNO and IRAS data, will allow the determination of how prevalent low mass star formation is in these clouds in general and, particularly, in the translucent molecular clouds.

  20. Description of color/race in Brazilian biomedical research.

    Science.gov (United States)

    Ribeiro, Teresa Veronica Catonho; Ferreira, Luzitano Brandão

    2012-01-01

    Over recent years, the terms race and ethnicity have been used to ascertain inequities in public health. However, this use depends on the quality of the data available. This study aimed to investigate the description of color/race in Brazilian scientific journals within the field of biomedicine. Descriptive study with systematic search for scientific articles in the SciELO Brazil database. A wide-ranging systematic search for original articles involving humans, published in 32 Brazilian biomedical scientific journals in the SciELO Brazil database between January and December 2008, was performed. Articles in which the race/ethnicity of the participants was identified were analyzed. In total, 1,180 articles were analyzed. The terms for describing race or ethnicity were often ambiguous and vague. Descriptions of race or ethnicity occurred in 159 articles (13.4%), but only in 42 (26.4%) was there a description of how individuals were identified. In these, race and ethnicity were used almost interchangeably and definition was according to skin color (71.4%), ancestry (19.0%) and self-definition (9.6%). Twenty-two races or ethnicities were cited, and the most common were white (37.3%), black (19.7%), mixed (12.9%), nonwhite (8.1%) and yellow (8.1%). The absence of descriptions of parameters for defining race, as well as the use of vague and ambiguous terms, may hamper and even prevent comparisons between human groups and the use of these data to ascertain inequities in healthcare.

  1. Advances in biomedical engineering

    CERN Document Server

    Brown, J H U

    1976-01-01

    Advances in Biomedical Engineering, Volume 6, is a collection of papers that discusses the role of integrated electronics in medical systems and the usage of biological mathematical models in biological systems. Other papers deal with the health care systems, the problems and methods of approach toward rehabilitation, as well as the future of biomedical engineering. One paper discusses the use of system identification as it applies to biological systems to estimate the values of a number of parameters (for example, resistance, diffusion coefficients) by indirect means. More particularly, the i

  2. Biomedical enhancements as justice.

    Science.gov (United States)

    Nam, Jeesoo

    2015-02-01

    Biomedical enhancements, the applications of medical technology to make better those who are neither ill nor deficient, have made great strides in the past few decades. Using Amartya Sen's capability approach as my framework, I argue in this article that far from being simply permissible, we have a prima facie moral obligation to use these new developments for the end goal of promoting social justice. In terms of both range and magnitude, the use of biomedical enhancements will mark a radical advance in how we compensate the most disadvantaged members of society. © 2013 John Wiley & Sons Ltd.

  3. Advances in biomedical engineering

    CERN Document Server

    Brown, J H U

    1976-01-01

    Advances in Biomedical Engineering, Volume 5, is a collection of papers that deals with application of the principles and practices of engineering to basic and applied biomedical research, development, and the delivery of health care. The papers also describe breakthroughs in health improvements, as well as basic research that have been accomplished through clinical applications. One paper examines engineering principles and practices that can be applied in developing therapeutic systems by a controlled delivery system in drug dosage. Another paper examines the physiological and materials vari

  4. [The system of biomedical scientific information of Serbia].

    Science.gov (United States)

    Dacić, M

    1995-09-01

    Building of the System of biomedical scientific information of Yugoslavia (SBMSI YU) began by the end of 1980, and the system officially became operative in 1986. After the political disintegration of the former Yugoslavia, the SBMSI of Serbia was formed. SBMSI S is developed according to the development policy of the System of scientific technologic information of Serbia (SSTI S), and with the technical support of SSTI S. The System was reconstructed using the former SBMSI YU as a model. Unlike the former SBMSI YU, SBMSI S contains, besides the database Biomedicina Serbica, three other important databases: a database of doctoral dissertations promoted at the University Medical School in Belgrade in the period 1955-1993; a database of Master's theses promoted at the University School of Medicine in Belgrade from 1965-1993; and a database of foreign biomedical periodicals in libraries of Serbia.

  5. PIE the search: searching PubMed literature for protein interaction information.

    Science.gov (United States)

    Kim, Sun; Kwon, Dongseop; Shin, Soo-Yong; Wilbur, W John

    2012-02-15

    Finding protein-protein interaction (PPI) information in the literature is a challenging but important issue. However, keyword search in PubMed® is often time consuming because it requires a series of actions to refine keywords and browse search results until the goal is reached. Due to the rapid growth of biomedical literature, it has become more difficult for biologists and curators to locate PPI information quickly. Therefore, a tool for prioritizing PPI-informative articles can be a useful assistant for finding this PPI-relevant information. PIE (Protein Interaction information Extraction) the search is a web service implementing a competition-winning approach utilizing word and syntactic analyses by machine learning techniques. For easy user access, PIE the search provides a PubMed-like search environment, but the output is the list of articles prioritized by PPI confidence scores. By obtaining PPI-related articles at high rank, researchers can more easily find the up-to-date PPI information, which cannot be found in manually curated PPI databases. http://www.ncbi.nlm.nih.gov/IRET/PIE/.

  6. JICST Factual Database: JICST Chemical Substance Safety Regulation Database

    Science.gov (United States)

    Abe, Atsushi; Sohma, Tohru

    JICST Chemical Substance Safety Regulation Database is based on the Database of Safety Laws for Chemical Compounds constructed by the Japan Chemical Industry Ecology-Toxicology & Information Center (JETOC), sponsored by the Science and Technology Agency, in 1987. JICST has modified the JETOC database system, added data, and started the online service through JOIS-F (JICST Online Information Service-Factual database) in January 1990. The JICST database comprises eighty-three laws and fourteen hundred compounds. The authors outline the database, data items, files and search commands. An example of an online session is presented.

  7. Self-correction in biomedical publications and the scientific impact

    Science.gov (United States)

    Gasparyan, Armen Yuri; Ayvazyan, Lilit; Akazhanov, Nurbek A.; Kitas, George D.

    2014-01-01

    Aim To analyze mistakes and misconduct in multidisciplinary and specialized biomedical journals. Methods We conducted searches through PubMed to retrieve errata, duplicate, and retracted publications (as of January 30, 2014). To analyze publication activity and citation profiles of countries, multidisciplinary, and specialized biomedical journals, we referred to the latest data from the SCImago Journal & Country Rank database. Total number of indexed articles and values of the h-index of the fifty most productive countries and multidisciplinary journals were recorded and linked to the number of duplicate and retracted publications in PubMed. Results Our analysis found 2597 correction items. A striking increase in the number of corrections appeared in 2013, which is mainly due to 871 (85.3%) corrections from PLOS One. The number of duplicate publications was 1086. Articles frequently published in duplicate were reviews (15.6%), original studies (12.6%), and case reports (7.6%), whereas top three retracted articles were original studies (10.1%), randomized trials (8.8%), and reviews (7%). A strong association existed between the total number of publications across countries and duplicate (rs = 0.86, P < 0.001) and retracted items (rs = 0.812, P < 0.001). A similar trend was found between country-based h-index values and duplicate and retracted publications. Conclusion The study suggests that the intensified self-correction in biomedicine is due to the attention of readers and authors, who spot errors in their hub of evidence-based information. Digitization and open access confound the staggering increase in correction notices and retractions. PMID:24577829

  8. Self-correction in biomedical publications and the scientific impact.

    Science.gov (United States)

    Gasparyan, Armen Yuri; Ayvazyan, Lilit; Akazhanov, Nurbek A; Kitas, George D

    2014-02-01

    To analyze mistakes and misconduct in multidisciplinary and specialized biomedical journals. We conducted searches through PubMed to retrieve errata, duplicate, and retracted publications (as of January 30, 2014). To analyze publication activity and citation profiles of countries, multidisciplinary, and specialized biomedical journals, we referred to the latest data from the SCImago Journal and Country Rank database. Total number of indexed articles and values of the h-index of the fifty most productive countries and multidisciplinary journals were recorded and linked to the number of duplicate and retracted publications in PubMed. Our analysis found 2597 correction items. A striking increase in the number of corrections appeared in 2013, which is mainly due to 871 (85.3%) corrections from PLOS One. The number of duplicate publications was 1086. Articles frequently published in duplicate were reviews (15.6%), original studies (12.6%), and case reports (7.6%), whereas top three retracted articles were original studies (10.1%), randomized trials (8.8%), and reviews (7%). A strong association existed between the total number of publications across countries and duplicate (rs=0.86, P<0.0001) and retracted items (rs=0.812, P<0.0001). A similar trend was found between country-based h-index values and duplicate and retracted publications. The study suggests that the intensified self-correction in biomedicine is due to the attention of readers and authors, who spot errors in their hub of evidence-based information. Digitization and open access confound the staggering increase in correction notices and retractions.

  9. A search for pre-main-sequence stars in high-latitude molecular clouds. 3: A survey of the Einstein database

    Science.gov (United States)

    Caillault, Jean-Pierre; Magnani, Loris; Fryer, Chris

    1995-01-01

    In order to discern whether the high-latitude molecular clouds are regions of ongoing star formation, we have used X-ray emission as a tracer of youthful stars. The entire Einstein database yields 18 images which overlap 10 of the clouds mapped partially or completely in the CO (1-0) transition, providing a total of approximately 6 square degrees of overlap. Five previously unidentified X-ray sources were detected: one has an optical counterpart which is a pre-main-sequence (PMS) star, and two have normal main-sequence stellar counterparts, while the other two are probably extragalactic sources. The PMS star is located in a high Galactic latitude Lynds dark cloud, so this result is not too surprising. The translucent clouds, though, have yet to reveal any evidence of star formation.

  10. Biomedical Engineering in Modern Society

    Science.gov (United States)

    Attinger, E. O.

    1971-01-01

    Considers definition of biomedical engineering (BME) and how biomedical engineers should be trained. State of the art descriptions of BME and BME education are followed by a brief look at the future of BME. (TS)

  11. Biomedical Image Registration

    DEFF Research Database (Denmark)

    This book constitutes the refereed proceedings of the 8th International Workshop on Biomedical Image Registration, WBIR 2018, held in Leiden, The Netherlands, in June 2018. The 11 full and poster papers included in this volume were carefully reviewed and selected from 17 submitted papers. The pap...

  12. Biomedical Data Mining

    NARCIS (Netherlands)

    Peek, N.; Combi, C.; Tucker, A.

    2009-01-01

    Objective: To introduce the special topic of Methods of Information in Medicine on data mining in biomedicine, with selected papers from two workshops on Intelligent Data Analysis in bioMedicine (IDAMAP) held in Verona (2006) and Amsterdam (2007). Methods: Defining the field of biomedical data

  13. Careers in biomedical engineering.

    Science.gov (United States)

    Madrid, R E; Rotger, V I; Herrera, M C

    2010-01-01

    Although biomedical engineering was started in Argentina about 35 years ago, it has had a sustained growth for the last 25 years in human resources, with the emergence of new undergraduate and postgraduate careers, as well as in research, knowledge, technological development, and health care.

  14. Anatomy for Biomedical Engineers

    Science.gov (United States)

    Carmichael, Stephen W.; Robb, Richard A.

    2008-01-01

    There is a perceived need for anatomy instruction for graduate students enrolled in a biomedical engineering program. This appeared especially important for students interested in and using medical images. These students typically did not have a strong background in biology. The authors arranged for students to dissect regions of the body that…

  15. Biomedical research applications

    International Nuclear Information System (INIS)

    Anon.

    1982-01-01

    The biomedical research Panel believes that the Calutron facility at Oak Ridge is a national and international resource of immense scientific value and of fundamental importance to continued biomedical research. This resource is essential to the development of new isotope uses in biology and medicine. It should therefore be nurtured by adequate support and operated in a way that optimizes its services to the scientific and technological community. The Panel sees a continuing need for a reliable supply of a wide variety of enriched stable isotopes. The past and present utilization of stable isotopes in biomedical research is documented in Appendix 7. Future requirements for stable isotopes are impossible to document, however, because of the unpredictability of research itself. Nonetheless we expect the demand for isotopes to increase in parallel with the continuing expansion of biomedical research as a whole. There are a number of promising research projects at the present time, and these are expected to lead to an increase in production requirements. The Panel also believes that a high degree of priority should be given to replacing the supplies of the 65 isotopes (out of the 224 previously available enriched isotopes) no longer available from ORNL

  16. Inaccurate Citations in Biomedical Journalism: Effect on the Impact Factor of the American Journal of Roentgenology.

    Science.gov (United States)

    Karabulut, Nevzat

    2017-03-01

    The aim of this study is to investigate the frequency of incorrect citations and its effects on the impact factor of a specific biomedical journal: the American Journal of Roentgenology. The Cited Reference Search function of Thomson Reuters' Web of Science database (formerly the Institute for Scientific Information's Web of Knowledge database) was used to identify erroneous citations. This was done by entering the journal name into the Cited Work field and entering "2011-2012" into the Cited Year(s) field. The errors in any part of the inaccurately cited references (e.g., author names, title, year, volume, issue, and page numbers) were recorded, and the types of errors (i.e., absent, deficient, or mistyped) were analyzed. Erroneous citations were corrected using the Suggest a Correction function of the Web of Science database. The effect of inaccurate citations on the impact factor of the AJR was calculated. Overall, 183 of 1055 citable articles published in 2011-2012 were inaccurately cited 423 times (mean [± SD], 2.31 ± 4.67 times; range, 1-44 times). Of these 183 articles, 110 (60.1%) were web-only articles and 44 (24.0%) were print articles. The most commonly identified errors were page number errors (44.8%) and misspelling of an author's name (20.2%). Incorrect citations adversely affected the impact factor of the AJR by 0.065 in 2012 and by 0.123 in 2013. Inaccurate citations are not infrequent in biomedical journals, yet they can be detected and corrected using the Web of Science database. Although the accuracy of references is primarily the responsibility of authors, the journal editorial office should also define a periodic inaccurate citation check task and correct erroneous citations to reclaim unnecessarily lost credit.
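
    The record above quantifies how citations lost to inaccurate referencing lower a journal impact factor. As a worked illustration of the underlying arithmetic, the two-year impact factor is the number of citations received in year Y to items published in years Y-1 and Y-2, divided by the number of citable items from those two years. The figures below are hypothetical, except the 1055 citable items taken from the record; they are not the AJR's actual citation counts:

        # Illustrative impact-factor arithmetic; the citation counts are hypothetical.
        def impact_factor(citations_to_prior_two_years: int, citable_items_prior_two_years: int) -> float:
            return citations_to_prior_two_years / citable_items_prior_two_years

        citable_items = 1055        # citable articles published in 2011-2012 (from the record)
        citations_received = 3000   # hypothetical citations counted in 2013
        citations_missed = 130      # hypothetical citations lost to erroneous references

        with_loss = impact_factor(citations_received, citable_items)
        without_loss = impact_factor(citations_received + citations_missed, citable_items)
        print(f"IF with missed citations:  {with_loss:.3f}")
        print(f"IF if citations recovered: {without_loss:.3f}")
        print(f"Difference:                {without_loss - with_loss:.3f}")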

  17. JICST Factual Database(2)

    Science.gov (United States)

    Araki, Keisuke

    A computer program that builds atom-bond connection tables from chemical nomenclature has been developed. Chemical substances are entered with their systematic nomenclature and a variety of trivial names or experimental code numbers. The chemical structures in the database are stored stereospecifically and can be searched and displayed according to stereochemistry. Source data come from the laws and regulations of Japan, the US RTECS, and other sources. The database plays a central role within the integrated fact database service of JICST and makes interrelational retrieval possible.

  18. A new visual navigation system for exploring biomedical Open Educational Resource (OER) videos.

    Science.gov (United States)

    Zhao, Baoquan; Xu, Songhua; Lin, Shujin; Luo, Xiaonan; Duan, Lian

    2016-04-01

    Biomedical videos as open educational resources (OERs) are increasingly proliferating on the Internet. Unfortunately, seeking personally valuable content from among the vast corpus of quality yet diverse OER videos is nontrivial due to limitations of today's keyword- and content-based video retrieval techniques. To address this need, this study introduces a novel visual navigation system that facilitates users' information seeking from biomedical OER videos in mass quantity by interactively offering visual and textual navigational clues that are both semantically revealing and user-friendly. The authors collected and processed around 25 000 YouTube videos, which collectively last for a total length of about 4000 h, in the broad field of biomedical sciences for our experiment. For each video, its semantic clues are first extracted automatically through computationally analyzing audio and visual signals, as well as text either accompanying or embedded in the video. These extracted clues are subsequently stored in a metadata database and indexed by a high-performance text search engine. During the online retrieval stage, the system renders video search results as dynamic web pages using a JavaScript library that allows users to interactively and intuitively explore video content both efficiently and effectively. Results: The authors produced a prototype implementation of the proposed system, which is publicly accessible at https://patentq.njit.edu/oer. To examine the overall advantage of the proposed system for exploring biomedical OER videos, the authors further conducted a user study of a modest scale. The study results encouragingly demonstrate the functional effectiveness and user-friendliness of the new system for facilitating information seeking from and content exploration among massive biomedical OER videos. Using the proposed tool, users can efficiently and effectively find videos of interest, precisely locate video segments delivering personally valuable
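
    The record above describes extracting textual clues from videos, storing them in a metadata database, and indexing them with a text search engine. A minimal sketch of the indexing idea using a plain in-memory inverted index; the real system's search engine and schema are not specified in the record, so the field names and sample metadata here are hypothetical:

        # Toy inverted index over video metadata records; field names are hypothetical.
        from collections import defaultdict

        videos = [
            {"id": "v1", "title": "Introduction to cardiac physiology", "transcript": "heart rate and stroke volume"},
            {"id": "v2", "title": "Basics of renal physiology", "transcript": "glomerular filtration rate"},
        ]

        index = defaultdict(set)  # token -> set of video ids
        for v in videos:
            for token in (v["title"] + " " + v["transcript"]).lower().split():
                index[token].add(v["id"])

        def search(query: str) -> set:
            """Return ids of videos containing every query token."""
            tokens = query.lower().split()
            hits = [index.get(t, set()) for t in tokens]
            return set.intersection(*hits) if hits else set()

        print(search("renal physiology"))   # {'v2'}
        print(search("physiology"))         # {'v1', 'v2'}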

  19. Optimization of partial search

    International Nuclear Information System (INIS)

    Korepin, Vladimir E

    2005-01-01

    A quantum Grover search algorithm can find a target item in a database faster than any classical algorithm. One can trade accuracy for speed and find a part of the database (a block) containing the target item even faster; this is partial search. A partial search algorithm was recently suggested by Grover and Radhakrishnan. Here we optimize it. Efficiency of the search algorithm is measured by the number of queries to the oracle. The author suggests a new version of the Grover-Radhakrishnan algorithm which uses a minimal number of such queries. The algorithm can run on the same hardware that is used for the usual Grover algorithm. (letter to the editor)
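
    The record above measures search efficiency by the number of oracle queries. For reference, full Grover search of an unstructured database of N items locates the target with roughly the query count below, and partial search (finding only the block of size N/K that contains the target) saves a further number of queries that scales as the square root of the block size; the optimal coefficient c is derived in the paper and is left symbolic here:

        % Query counts for Grover-type search (standard asymptotic results;
        % the optimal partial-search coefficient c is not reproduced here).
        \[
          Q_{\text{full}} \approx \frac{\pi}{4}\sqrt{N},
          \qquad
          Q_{\text{partial}} \approx \frac{\pi}{4}\sqrt{N} \;-\; c\,\sqrt{\frac{N}{K}},
          \quad c > 0 .
        \]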

  20. Detection and identification of drugs and toxicants in human body fluids by liquid chromatography-tandem mass spectrometry under data-dependent acquisition control and automated database search.

    Science.gov (United States)

    Oberacher, Herbert; Schubert, Birthe; Libiseller, Kathrin; Schweissgut, Anna

    2013-04-03

    Systematic toxicological analysis (STA) is aimed at detecting and identifying all substances of toxicological relevance (i.e. drugs, drugs of abuse, poisons and/or their metabolites) in biological material. Particularly, gas chromatography-mass spectrometry (GC/MS) represents a competent and commonly applied screening and confirmation tool. Herein, we present an untargeted liquid chromatography-tandem mass spectrometry (LC/MS/MS) assay aimed to complement existing GC/MS screening for the detection and identification of drugs in blood, plasma and urine samples. Solid-phase extraction was accomplished on mixed-mode cartridges. LC was based on gradient elution in a miniaturized C18 column. High resolution electrospray ionization-MS/MS in positive ion mode with data-dependent acquisition control was used to generate tandem mass spectral information that enabled compound identification via automated library search in the "Wiley Registry of Tandem Mass Spectral Data, MSforID". Fitness of the developed LC/MS/MS method for application in STA in terms of selectivity, detection capability and reliability of identification (sensitivity/specificity) was demonstrated with blank samples, certified reference materials, proficiency test samples, and authentic casework samples. Copyright © 2013 Elsevier B.V. All rights reserved.
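
    The record above identifies compounds by matching acquired tandem mass spectra against a spectral library. A minimal sketch of the underlying matching step, assuming spectra are reduced to (m/z, intensity) peak lists and compared with a simple binned cosine score; this is a generic illustration, not the MSforID matching algorithm, and the peak lists are made up:

        # Generic spectral matching by binned cosine similarity; not the MSforID algorithm.
        import numpy as np

        def bin_spectrum(peaks, max_mz=1000, bin_width=1.0):
            """Convert a list of (m/z, intensity) peaks into a fixed-length vector."""
            vec = np.zeros(int(max_mz / bin_width))
            for mz, intensity in peaks:
                idx = int(mz / bin_width)
                if idx < len(vec):
                    vec[idx] += intensity
            return vec

        def cosine_score(query_peaks, library_peaks) -> float:
            q, l = bin_spectrum(query_peaks), bin_spectrum(library_peaks)
            denom = np.linalg.norm(q) * np.linalg.norm(l)
            return float(np.dot(q, l) / denom) if denom else 0.0

        query   = [(91.05, 100.0), (119.08, 40.0), (164.10, 15.0)]   # hypothetical peaks
        library = [(91.05, 95.0),  (119.08, 50.0), (164.10, 10.0)]
        print(f"match score: {cosine_score(query, library):.3f}")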

  1. Relational databases

    CERN Document Server

    Bell, D A

    1986-01-01

    Relational Databases explores the major advances in relational databases and provides a balanced analysis of the state of the art in relational databases. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration. The

  2. Biomedical signals, imaging, and informatics

    CERN Document Server

    Bronzino, Joseph D

    2014-01-01

    Known as the bible of biomedical engineering, The Biomedical Engineering Handbook, Fourth Edition, sets the standard against which all other references of this nature are measured. As such, it has served as a major resource for both skilled professionals and novices to biomedical engineering.Biomedical Signals, Imaging, and Informatics, the third volume of the handbook, presents material from respected scientists with diverse backgrounds in biosignal processing, medical imaging, infrared imaging, and medical informatics.More than three dozen specific topics are examined, including biomedical s

  3. Biomedical data integration in computational drug design and bioinformatics.

    Science.gov (United States)

    Seoane, Jose A; Aguiar-Pulido, Vanessa; Munteanu, Cristian R; Rivero, Daniel; Rabunal, Juan R; Dorado, Julian; Pazos, Alejandro

    2013-03-01

    In recent years, in the post-genomic era, more and more data are being generated by biological high-throughput technologies, such as proteomics and transcriptomics. These omics data can be very useful, but the real challenge is to analyze all of the data, as a whole, after integrating it. Biomedical data integration enables making queries to different, heterogeneous and distributed biomedical data sources. Data integration solutions can be very useful not only in the context of drug design, but also in biomedical information retrieval, clinical diagnosis, systems biology, etc. In this review, we analyze the most common approaches to biomedical data integration, such as federated databases, data warehousing, multi-agent systems and semantic technology, as well as the solutions developed using these approaches in the past few years.

  4. Design of a Bioactive Small Molecule that Targets the Myotonic Dystrophy Type 1 RNA Via an RNA Motif-Ligand Database & Chemical Similarity Searching

    Science.gov (United States)

    Parkesh, Raman; Childs-Disney, Jessica L.; Nakamori, Masayuki; Kumar, Amit; Wang, Eric; Wang, Thomas; Hoskins, Jason; Tran, Tuan; Housman, David; Thornton, Charles A.; Disney, Matthew D.

    2012-01-01

    Myotonic dystrophy type 1 (DM1) is a triplet repeating disorder caused by expanded CTG repeats in the 3′ untranslated region of the dystrophia myotonica protein kinase (DMPK) gene. The transcribed repeats fold into an RNA hairpin with multiple copies of a 5′CUG/3′GUC motif that binds the RNA splicing regulator muscleblind-like 1 protein (MBNL1). Sequestration of MBNL1 by expanded r(CUG) repeats causes splicing defects in a subset of pre-mRNAs including the insulin receptor, the muscle-specific chloride ion channel, Sarco(endo)plasmic reticulum Ca2+ ATPase 1 (Serca1/Atp2a1), and cardiac troponin T (cTNT). Based on these observations, the development of small molecule ligands that target specifically expanded DM1 repeats could serve as therapeutics. In the present study, computational screening was employed to improve the efficacy of pentamidine and Hoechst 33258 ligands that have been shown previously to target the DM1 triplet repeat. A series of inhibitors of the RNA-protein complex with low micromolar IC50’s, which are >20-fold more potent than the query compounds, were identified. Importantly, a bis-benzimidazole identified from the Hoechst query improves DM1-associated pre-mRNA splicing defects in cell and mouse models of DM1 (when dosed with 1 mM and 100 mg/kg, respectively). Since Hoechst 33258 was identified as a DM1 binder through analysis of an RNA motif-ligand database, these studies suggest that lead ligands targeting RNA with improved biological activity can be identified by using a synergistic approach that combines analysis of known RNA-ligand interactions with virtual screening. PMID:22300544
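
    The record above improves lead compounds by chemical similarity searching against known query ligands. A minimal sketch of a fingerprint-based similarity screen, assuming the open-source RDKit toolkit and hypothetical SMILES strings; the descriptors, query structures, and compound libraries actually used in the study are not given in the record:

        # Fingerprint-based chemical similarity screen; RDKit and the SMILES below
        # are illustrative assumptions, not the study's actual pipeline.
        from rdkit import Chem
        from rdkit.Chem import AllChem, DataStructs

        query_smiles = "c1ccc2[nH]c(-c3ccc(N4CCNCC4)cc3)nc2c1"   # hypothetical benzimidazole-like query
        library_smiles = {
            "cand_1": "c1ccc2[nH]cnc2c1",
            "cand_2": "CCN(CC)CCCC(C)Nc1ccnc2cc(Cl)ccc12",
        }

        def morgan_fp(smiles):
            mol = Chem.MolFromSmiles(smiles)
            return AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=2048)

        query_fp = morgan_fp(query_smiles)
        for name, smi in library_smiles.items():
            sim = DataStructs.TanimotoSimilarity(query_fp, morgan_fp(smi))
            print(f"{name}: Tanimoto similarity to query = {sim:.2f}")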

  5. Identification of specific markers for amphetamine synthesised from the pre-precursor APAAN following the Leuckart route and retrospective search for APAAN markers in profiling databases from Germany and the Netherlands.

    Science.gov (United States)

    Hauser, Frank M; Rößler, Thorsten; Hulshof, Janneke W; Weigel, Diana; Zimmermann, Ralf; Pütz, Michael

    2018-04-01

    α-Phenylacetoacetonitrile (APAAN) is one of the most important pre-precursors for amphetamine production in recent years. This assumption is based on seizure data but there is little analytical data available showing how much amphetamine really originated from APAAN. In this study, several syntheses of amphetamine following the Leuckart route were performed starting from different organic compounds including APAAN. The organic phases were analysed using gas chromatography-mass spectrometry (GC-MS) to search for signals caused by possible APAAN markers. Three compounds were discovered, isolated, and based on the performed syntheses it was found that they are highly specific for the use of APAAN. Using mass spectra, high resolution MS and nuclear magnetic resonance (NMR) data the compounds were characterised and identified as 2-phenyl-2-butenenitrile, 3-amino-2-phenyl-2-butenenitrile, and 4-amino-6-methyl-5-phenylpyrimidine. To investigate their significance, they were searched in data from seized amphetamine samples to determine to what extent they were present in illicitly produced amphetamine. Data of more than 580 cases from amphetamine profiling databases in Germany and the Netherlands were used for this purpose. These databases allowed analysis of the yearly occurrence of the markers going back to 2009. The markers revealed a trend that was in agreement with seizure reports and reflected an increasing use of APAAN from 2010 on. This paper presents experimental proof that APAAN is indeed the most important pre-precursor of amphetamine in recent years. It also illustrates how important it is to look for new ways to identify current trends in drug production since such trends can change within a few years. Copyright © 2017 John Wiley & Sons, Ltd.

  6. Standardization of Keyword Search Mode

    Science.gov (United States)

    Su, Di

    2010-01-01

    In spite of its popularity, keyword search mode has not been standardized. Though information professionals are quick to adapt to various presentations of keyword search mode, novice end-users may find keyword search confusing. This article compares keyword search mode in some major reference databases and calls for standardization. (Contains 3…

  7. A Scalable Data Access Layer to Manage Structured Heterogeneous Biomedical Data.

    Directory of Open Access Journals (Sweden)

    Giovanni Delussu

    Full Text Available This work presents a scalable data access layer, called PyEHR, designed to support the implementation of data management systems for secondary use of structured heterogeneous biomedical and clinical data. PyEHR adopts the openEHR's formalisms to guarantee the decoupling of data descriptions from implementation details and exploits structure indexing to accelerate searches. Data persistence is guaranteed by a driver layer with a common driver interface. Interfaces for two NoSQL Database Management Systems are already implemented: MongoDB and Elasticsearch. We evaluated the scalability of PyEHR experimentally through two types of tests, called "Constant Load" and "Constant Number of Records", with queries of increasing complexity on synthetic datasets of ten million records each, containing very complex openEHR archetype structures, distributed on up to ten computing nodes.
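
    The record above describes a data access layer whose persistence is delegated to interchangeable drivers behind a common interface. A minimal sketch of that design pattern; the class and method names here are hypothetical and are not PyEHR's actual API:

        # Driver-interface pattern for pluggable persistence back ends; names are
        # hypothetical, not PyEHR's actual API.
        from abc import ABC, abstractmethod

        class RecordDriver(ABC):
            """Common interface every storage driver must implement."""

            @abstractmethod
            def save(self, record: dict) -> str: ...

            @abstractmethod
            def find(self, query: dict) -> list: ...

        class InMemoryDriver(RecordDriver):
            """Stand-in driver; a MongoDB or Elasticsearch driver would implement the same interface."""

            def __init__(self):
                self._store = []

            def save(self, record: dict) -> str:
                self._store.append(record)
                return str(len(self._store) - 1)

            def find(self, query: dict) -> list:
                return [r for r in self._store if all(r.get(k) == v for k, v in query.items())]

        class DataAccessLayer:
            def __init__(self, driver: RecordDriver):
                self.driver = driver

            def add_record(self, record: dict) -> str:
                return self.driver.save(record)

            def search(self, **criteria) -> list:
                return self.driver.find(criteria)

        dal = DataAccessLayer(InMemoryDriver())
        dal.add_record({"archetype": "blood_pressure", "systolic": 120})
        print(dal.search(archetype="blood_pressure"))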

  8. A Scalable Data Access Layer to Manage Structured Heterogeneous Biomedical Data

    Science.gov (United States)

    Lianas, Luca; Frexia, Francesca; Zanetti, Gianluigi

    2016-01-01

    This work presents a scalable data access layer, called PyEHR, designed to support the implementation of data management systems for secondary use of structured heterogeneous biomedical and clinical data. PyEHR adopts the openEHR’s formalisms to guarantee the decoupling of data descriptions from implementation details and exploits structure indexing to accelerate searches. Data persistence is guaranteed by a driver layer with a common driver interface. Interfaces for two NoSQL Database Management Systems are already implemented: MongoDB and Elasticsearch. We evaluated the scalability of PyEHR experimentally through two types of tests, called “Constant Load” and “Constant Number of Records”, with queries of increasing complexity on synthetic datasets of ten million records each, containing very complex openEHR archetype structures, distributed on up to ten computing nodes. PMID:27936191

  9. A Scalable Data Access Layer to Manage Structured Heterogeneous Biomedical Data.

    Science.gov (United States)

    Delussu, Giovanni; Lianas, Luca; Frexia, Francesca; Zanetti, Gianluigi

    2016-01-01

    This work presents a scalable data access layer, called PyEHR, designed to support the implementation of data management systems for secondary use of structured heterogeneous biomedical and clinical data. PyEHR adopts the openEHR's formalisms to guarantee the decoupling of data descriptions from implementation details and exploits structure indexing to accelerate searches. Data persistence is guaranteed by a driver layer with a common driver interface. Interfaces for two NoSQL Database Management Systems are already implemented: MongoDB and Elasticsearch. We evaluated the scalability of PyEHR experimentally through two types of tests, called "Constant Load" and "Constant Number of Records", with queries of increasing complexity on synthetic datasets of ten million records each, containing very complex openEHR archetype structures, distributed on up to ten computing nodes.

  10. Biofuel Database

    Science.gov (United States)

    Biofuel Database (Web, free access)   This database brings together structural, biological, and thermodynamic data for enzymes that are either in current use or are being considered for use in the production of biofuels.

  11. Community Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This Excel spreadsheet is the result of merging, at the port level, several in-house fisheries databases with other demographic databases such...

  12. Figure text extraction in biomedical literature.

    Directory of Open Access Journals (Sweden)

    Daehyun Kim

    2011-01-01

    Full Text Available Figures are ubiquitous in biomedical full-text articles, and they represent important biomedical knowledge. However, the sheer volume of biomedical publications has made it necessary to develop computational approaches for accessing figures. Therefore, we are developing the Biomedical Figure Search engine (http://figuresearch.askHERMES.org) to allow bioscientists to access figures efficiently. Since text frequently appears in figures, automatically extracting such text may assist the task of mining information from figures. Little research, however, has been conducted exploring text extraction from biomedical figures. We first evaluated an off-the-shelf Optical Character Recognition (OCR) tool on its ability to extract text from figures appearing in biomedical full-text articles. We then developed a Figure Text Extraction Tool (FigTExT) to improve the performance of the OCR tool for figure text extraction through the use of three innovative components: image preprocessing, character recognition, and text correction. We first developed image preprocessing to enhance image quality and to improve text localization. Then we adapted the off-the-shelf OCR tool on the improved text localization for character recognition. Finally, we developed and evaluated a novel text correction framework by taking advantage of figure-specific lexicons. The evaluation on 382 figures (9,643 figure texts in total) randomly selected from PubMed Central full-text articles shows that FigTExT performed with 84% precision, 98% recall, and 90% F1-score for text localization and with 62.5% precision, 51.0% recall and 56.2% F1-score for figure text extraction. When limiting figure texts to those judged by domain experts to be important content, FigTExT performed with 87.3% precision, 68.8% recall, and 77% F1-score. FigTExT significantly improved the performance of the off-the-shelf OCR tool we used, which on its own performed with 36.6% precision, 19.3% recall, and 25.3% F1-score for
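
    The record above reports precision, recall, and F1-score for text localization and figure text extraction. A minimal sketch of how these metrics are computed from counts of true positives, false positives, and false negatives; the counts shown are made up for illustration:

        # Precision / recall / F1 from raw counts; the example counts are hypothetical.
        def precision_recall_f1(tp: int, fp: int, fn: int):
            precision = tp / (tp + fp) if (tp + fp) else 0.0
            recall = tp / (tp + fn) if (tp + fn) else 0.0
            f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
            return precision, recall, f1

        p, r, f = precision_recall_f1(tp=840, fp=160, fn=17)
        print(f"precision={p:.1%} recall={r:.1%} F1={f:.1%}")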

  13. Optical Polarization in Biomedical Applications

    CERN Document Server

    Tuchin, Valery V; Zimnyakov, Dmitry A

    2006-01-01

    Optical Polarization in Biomedical Applications introduces key developments in optical polarization methods for quantitative studies of tissues, while presenting the theory of polarization transfer in a random medium as a basis for the quantitative description of polarized light interaction with tissues. This theory uses the modified transfer equation for Stokes parameters and predicts the polarization structure of multiple scattered optical fields. The backscattering polarization matrices (Jones matrix and Mueller matrix) important for noninvasive medical diagnostic are introduced. The text also describes a number of diagnostic techniques such as CW polarization imaging and spectroscopy, polarization microscopy and cytometry. As a new tool for medical diagnosis, optical coherent polarization tomography is analyzed. The monograph also covers a range of biomedical applications, among them cataract and glaucoma diagnostics, glucose sensing, and the detection of bacteria.

  14. Biomedical publications profile and trends in gulf cooperation council countries.

    Science.gov (United States)

    Al-Maawali, Almundher; Al Busadi, Ahmed; Al-Adawi, Samir

    2012-02-01

    There is a dearth of studies examining the relationship between research output and other socio-demographic indicators in the Gulf Cooperation Council (GCC) countries (Bahrain, Kuwait, Oman, Qatar, Saudi Arabia, and the United Arab Emirates). The three interrelated aims of this study were, first, to ascertain the number of biomedical publications in the GCC from 1970 to 2010; second, to establish the rate of publication according to population size during the same period and, third, to gauge the relationship between the number of publications and specific socio-economic parameters. The Medline database was searched in October 2010 by affiliation, year and publication type from 1970 to 2010. Data obtained were normalised to the number of publications per million of the population, gross domestic product, and the number of physicians in each country. The number of articles from the GCC region published over this 40-year period was 25,561. Saudi Arabia had the highest number followed by Kuwait, UAE, and then Oman. Kuwait had the highest profile of publication when normalised to population size, followed by Qatar. Oman was the lowest in this ranking. Overall, the six countries showed a rising trend in publication numbers with Oman having a significant increase from 1990 to 2005. There was a significant relationship between the number of physicians and the number of publications. The research productivity from the GCC has experienced complex and fluctuating growth in the past 40 years. Future prospects for increasing research productivity are discussed with particular reference to the situation in Oman.
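
    The record above normalizes publication counts to population size and relates output to the number of physicians. A minimal sketch of those two steps, assuming SciPy for the rank correlation and using fabricated country figures rather than the study's data:

        # Normalizing publication counts and computing a Spearman rank correlation;
        # the country figures are fabricated placeholders, not the study's data.
        from scipy.stats import spearmanr

        countries = {
            # name: (publications, population_millions, physicians)
            "Country A": (9000, 28.0, 70000),
            "Country B": (5000, 3.0, 8000),
            "Country C": (1200, 4.5, 9000),
            "Country D": (800, 2.0, 5500),
        }

        for name, (pubs, pop_m, docs) in countries.items():
            print(f"{name}: {pubs / pop_m:.1f} publications per million population")

        pubs = [v[0] for v in countries.values()]
        physicians = [v[2] for v in countries.values()]
        rho, p_value = spearmanr(pubs, physicians)
        print(f"Spearman rho={rho:.2f}, P={p_value:.3f}")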

  15. SCALEUS: Semantic Web Services Integration for Biomedical Applications.

    Science.gov (United States)

    Sernadela, Pedro; González-Castro, Lorena; Oliveira, José Luís

    2017-04-01

    In recent years, we have witnessed an explosion of biological data resulting largely from the demands of life science research. The vast majority of these data are freely available via diverse bioinformatics platforms, including relational databases and conventional keyword search applications. This type of approach has achieved great results in the last few years, but proved to be unfeasible when information needs to be combined or shared among different and scattered sources. During recent years, many of these data distribution challenges have been solved with the adoption of semantic web. Despite the evident benefits of this technology, its adoption introduced new challenges related with the migration process, from existent systems to the semantic level. To facilitate this transition, we have developed Scaleus, a semantic web migration tool that can be deployed on top of traditional systems in order to bring knowledge, inference rules, and query federation to the existent data. Targeted at the biomedical domain, this web-based platform offers, in a single package, straightforward data integration and semantic web services that help developers and researchers in the creation process of new semantically enhanced information systems. SCALEUS is available as open source at http://bioinformatics-ua.github.io/scaleus/ .
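
    The record above turns existing data into semantic web services that support knowledge queries and federation. A minimal sketch of what querying such a semantically enhanced endpoint can look like, assuming the SPARQLWrapper library and a hypothetical endpoint URL and vocabulary; this is not SCALEUS's actual service interface:

        # Querying a SPARQL endpoint; the endpoint URL and predicates are hypothetical.
        from SPARQLWrapper import SPARQLWrapper, JSON

        endpoint = SPARQLWrapper("http://example.org/scaleus/sparql")  # hypothetical endpoint
        endpoint.setQuery("""
            PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
            SELECT ?gene ?label
            WHERE {
                ?gene a <http://example.org/vocab/Gene> ;
                      rdfs:label ?label .
            }
            LIMIT 10
        """)
        endpoint.setReturnFormat(JSON)

        results = endpoint.query().convert()
        for row in results["results"]["bindings"]:
            print(row["gene"]["value"], row["label"]["value"])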

  16. Retracted Publications in the Biomedical Literature from Open Access Journals.

    Science.gov (United States)

    Wang, Tao; Xing, Qin-Rui; Wang, Hui; Chen, Wei

    2018-03-07

    The number of articles published in open access journals (OAJs) has increased dramatically in recent years. Simultaneously, the quality of publications in these journals has been called into question. Few studies have explored the retraction rate from OAJs. The purpose of the current study was to determine the reasons for retractions of articles from OAJs in biomedical research. The Medline database was searched through PubMed to identify retracted publications in OAJs. The journals were identified by the Directory of Open Access Journals. Data were extracted from each retracted article, including the time from publication to retraction, causes, journal impact factor, and country of origin. Trends in the characteristics related to retraction were determined. Data from 621 retracted studies were included in the analysis. The number and rate of retractions have increased since 2010. The most common reasons for retraction are errors (148), plagiarism (142), duplicate publication (101), fraud/suspected fraud (98) and invalid peer review (93). The number of retracted articles from OAJs has been steadily increasing. Misconduct was the primary reason for retraction. The majority of retracted articles were from journals with low impact factors and authored by researchers from China, India, Iran, and the USA.

  17. Database Administrator

    Science.gov (United States)

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  18. Update History of This Database - SSBD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update history: 2016/07/25 - SSBD English archive site is opened; 2013/09/03 - SSBD ( http://ssbd.qbic.riken.jp/ ) is opened.

  19. Update History of This Database - SAHG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update history: 2016/05/09 - SAHG English archive site is opened; 2009/10 - SAHG ( http://bird.cbrc.jp/sahg ) is opened.

  20. Update History of This Database - DMPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update history: 2010/03/29 - DMPD English archive site is opened; DMPD ( ...jp/macrophage/ ) is released.

  1. Update History of This Database - RMOS | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update history: 2015/10/27 - RMOS English archive site is opened; ...12 - RMOS ( http://cdna01.dna.affrc.go.jp/RMOS/ ) is opened.

  2. Ocean Drilling Program: Janus Web Database

    Science.gov (United States)

    ODP and IODP data are stored in the Janus database. The Janus web pages provide the Janus data model, a data migration overview, Janus data types and examples, search of the ODP/TAMU web site, and links to send questions or comments about the online database and to request data not available online.

  3. Online Petroleum Industry Bibliographic Databases: A Review.

    Science.gov (United States)

    Anderson, Margaret B.

    This paper discusses the present status of the bibliographic database industry, reviews the development of online databases of interest to the petroleum industry, and considers future developments in online searching and their effect on libraries and information centers. Three groups of databases are described: (1) databases developed by the…

  4. Database Description - tRNADB-CE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information: database name tRNADB-CE; license: CC BY-SA; background and funding: MEXT Integrated Database Project. Reference(s): article "tRNAD..." 2009 Jan;37(Database issue):D163-8; article "tRNADB-CE 2011: tRNA gene database curat..."

  5. Biomedical and Health Informatics Education – the IMIA Years

    Science.gov (United States)

    2016-01-01

    Summary. Objective: This paper presents the development of medical informatics education during the years from the establishment of the International Medical Informatics Association (IMIA) until today. Method: A search in the literature was performed using search engines and appropriate keywords as well as a manual selection of papers. The search covered English-language papers and was limited to searching paper titles and abstracts only. Results: The aggregated papers were analyzed on the basis of the subject area, origin, time span, and curriculum development, and conclusions were drawn. Conclusions: From the results, it is evident that IMIA has played a major role in comparing and integrating the Biomedical and Health Informatics educational efforts across the different levels of education and the regional distribution of educators and institutions. A large selection of references is presented facilitating future work on the field of education in biomedical and health informatics. PMID:27488405

  6. Online Databases for Health Professionals

    OpenAIRE

    Marshall, Joanne Gard

    1987-01-01

    Recent trends in the marketing of electronic information technology have increased interest among health professionals in obtaining direct access to online biomedical databases such as Medline. During 1985, the Canadian Medical Association (CMA) and Telecom Canada conducted an eight-month trial of the use made of online information retrieval systems by 23 practising physicians and one pharmacist. The results of this project demonstrated both the value and the limitations of these systems in p...

  7. Download - SAHG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Download page for the SAHG database (LSDB Archive).

  8. License - GRIPDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available License page for the GRIPDB database (LSDB Archive).

  9. License - GETDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available License page for the GETDB database (LSDB Archive).

  10. Download - Metabolonote | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Download page for the Metabolonote database (LSDB Archive).

  11. Image BOSS: a biomedical object storage system

    Science.gov (United States)

    Stacy, Mahlon C.; Augustine, Kurt E.; Robb, Richard A.

    1997-05-01

    Researchers using biomedical images have data management needs which are oriented perpendicular to clinical PACS. The image BOSS system is designed to permit researchers to organize and select images based on research topic, image metadata, and a thumbnail of the image. Image information is captured from existing images in a Unix based filesystem, stored in an object oriented database, and presented to the user in a familiar laboratory notebook metaphor. In addition, the ImageBOSS is designed to provide an extensible infrastructure for future content-based queries directly on the images.

  12. A scoping review protocol on the roles and tasks of peer reviewers in the manuscript review process in biomedical journals.

    Science.gov (United States)

    Glonti, Ketevan; Cauchi, Daniel; Cobo, Erik; Boutron, Isabelle; Moher, David; Hren, Darko

    2017-10-22

    The primary functions of peer reviewers are poorly defined. Thus far no body of literature has systematically identified the roles and tasks of peer reviewers of biomedical journals. A clear establishment of these can lead to improvements in the peer review process. The purpose of this scoping review is to determine what is known on the roles and tasks of peer reviewers. We will use the methodological framework first proposed by Arksey and O'Malley and subsequently adapted by Levac et al and the Joanna Briggs Institute. The scoping review will include all study designs, as well as editorials, commentaries and grey literature. The following eight electronic databases will be searched (from inception to May 2017): Cochrane Library, Cumulative Index to Nursing and Allied Health Literature, Educational Resources Information Center, EMBASE, MEDLINE, PsycINFO, Scopus and Web of Science. Two reviewers will use inclusion and exclusion criteria based on the 'Population-Concept-Context' framework to independently screen titles and abstracts of articles considered for inclusion. Full-text screening of relevant eligible articles will also be carried out by two reviewers. The search strategy for grey literature will include searching in websites of existing networks, biomedical journal publishers and organisations that offer resources for peer reviewers. In addition we will review journal guidelines to peer reviewers on how to perform the manuscript review. Journals will be selected using the 2016 journal impact factor. We will identify and assess the top five, middle five and lowest-ranking five journals across all medical specialties. This scoping review will undertake a secondary analysis of data already collected and does not require ethical approval. The results will be disseminated through journals and conferences targeting stakeholders involved in peer review in biomedical research. © Article author(s) (or their employer(s) unless otherwise stated in the text of the

  13. Evaluation of Federated Searching Options for the School Library

    Science.gov (United States)

    Abercrombie, Sarah E.

    2008-01-01

    Three hosted federated search tools, Follett One Search, Gale PowerSearch Plus, and WebFeat Express, were configured and implemented in a school library. Databases from five vendors and the OPAC were systematically searched. Federated search results were compared with each other and to the results of the same searches in the database's native…

  14. DDPC: Dragon database of genes associated with prostate cancer

    KAUST Repository

    Maqungo, Monique; Kaur, Mandeep; Kwofie, Samuel K.; Radovanovic, Aleksandar; Schaefer, Ulf; Schmeier, Sebastian; Oppon, Ekow; Christoffels, Alan; Bajic, Vladimir B.

    2010-01-01

    associated with Prostate Cancer (DDPC) as an integrated knowledgebase of genes experimentally verified as implicated in PC. DDPC is distinctive from other databases in that (i) it provides pre-compiled biomedical text-mining information on PC, which otherwise

  15. Enrolment and Retention of African Women in Biomedical Research ...

    African Journals Online (AJOL)

    Relevant biomedical research literature on human research participants, retrieved through computerized searches of Scirus, PubMed and Medline, was critically evaluated and highlighted. Information was also obtained from research ethics training as well as texts and journals in the medical libraries of the research ethics departments of ...

  16. The CAPEC Database

    DEFF Research Database (Denmark)

    Nielsen, Thomas Lund; Abildskov, Jens; Harper, Peter Mathias

    2001-01-01

    The Computer-Aided Process Engineering Center (CAPEC) database of measured data was established with the aim to promote greater data exchange in the chemical engineering community. The target properties are pure component properties, mixture properties, and special drug solubility data. The database divides pure component properties into primary, secondary, and functional properties. Mixture properties are categorized in terms of the number of components in the mixture and the number of phases present. The compounds in the database have been classified on the basis of the functional groups in the compound. This classification makes the CAPEC database a very useful tool, for example, in the development of new property models, since properties of chemically similar compounds are easily obtained. A program with efficient search and retrieval functions of properties has been developed.

  17. Compound image segmentation of published biomedical figures.

    Science.gov (United States)

    Li, Pengyuan; Jiang, Xiangying; Kambhamettu, Chandra; Shatkay, Hagit

    2018-04-01

    Images convey essential information in biomedical publications. As such, there is a growing interest within the bio-curation and the bio-databases communities, to store images within publications as evidence for biomedical processes and for experimental results. However, many of the images in biomedical publications are compound images consisting of multiple panels, where each individual panel potentially conveys a different type of information. Segmenting such images into constituent panels is an essential first step toward utilizing images. In this article, we develop a new compound image segmentation system, FigSplit, which is based on Connected Component Analysis. To overcome shortcomings typically manifested by existing methods, we develop a quality assessment step for evaluating and modifying segmentations. Two methods are proposed to re-segment the images if the initial segmentation is inaccurate. Experimental results show the effectiveness of our method compared with other methods. The system is publicly available for use at: https://www.eecis.udel.edu/~compbio/FigSplit. The code is available upon request. shatkay@udel.edu. Supplementary data are available online at Bioinformatics.
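
    The record above splits compound figures into panels using Connected Component Analysis. A minimal sketch of that core step, assuming NumPy and SciPy and a grayscale image in which panels are darker than the white gutters separating them; this is a generic illustration, not the FigSplit implementation with its quality-assessment and re-segmentation stages:

        # Connected-component panel detection on a grayscale figure; a generic sketch,
        # not the FigSplit pipeline.
        import numpy as np
        from scipy import ndimage

        def panel_bounding_boxes(gray: np.ndarray, white_threshold: int = 240, min_area: int = 2500):
            """Label non-white regions and return bounding boxes of sufficiently large components."""
            mask = gray < white_threshold                 # panel content vs. white gutters
            labels, n_components = ndimage.label(mask)    # connected components
            boxes = []
            for sl in ndimage.find_objects(labels):
                height = sl[0].stop - sl[0].start
                width = sl[1].stop - sl[1].start
                if height * width >= min_area:
                    boxes.append((sl[0].start, sl[1].start, height, width))
            return boxes

        # Synthetic 2x1 compound figure: two dark panels separated by a white gutter.
        figure = np.full((200, 420), 255, dtype=np.uint8)
        figure[20:180, 20:200] = 80
        figure[20:180, 220:400] = 80
        print(panel_bounding_boxes(figure))   # two boxes, one per panel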

  18. Three-dimensional biomedical imaging

    International Nuclear Information System (INIS)

    Robb, R.A.

    1985-01-01

    Scientists in biomedical imaging provide researchers, physicians, and academicians with an understanding of the fundamental theories and practical applications of three-dimensional biomedical imaging methodologies. Succinct descriptions of each imaging modality are supported by numerous diagrams and illustrations which clarify important concepts and demonstrate system performance in a variety of applications. Comparison of the different functional attributes, relative advantages and limitations, complementary capabilities, and future directions of three-dimensional biomedical imaging modalities are given. Volume I: Introductions to Three-Dimensional Biomedical Imaging. Photoelectronic-Digital Imaging for Diagnostic Radiology. X-Ray Computed Tomography - Basic Principles. X-Ray Computed Tomography - Implementation and Applications. X-Ray Computed Tomography: Advanced Systems and Applications in Biomedical Research and Diagnosis. Volume II: Single Photon Emission Computed Tomography. Positron Emission Tomography (PET). Computerized Ultrasound Tomography. Fundamentals of NMR Imaging. Display of Multi-Dimensional Biomedical Image Information. Summary and Prognostications.

  19. Beyond MEDLINE for literature searches.

    Science.gov (United States)

    Conn, Vicki S; Isaramalai, Sang-arun; Rath, Sabyasachi; Jantarakupt, Peeranuch; Wadhawan, Rohini; Dash, Yashodhara

    2003-01-01

    To describe strategies for a comprehensive literature search. MEDLINE searches result in limited numbers of studies that are often biased toward statistically significant findings. Diversified search strategies are needed. Empirical evidence about the recall and precision of diverse search strategies is presented. Challenges and strengths of each search strategy are identified. Search strategies vary in recall and precision. Often sensitivity and specificity are inversely related. Valuable search strategies include examination of multiple diverse computerized databases, ancestry searches, citation index searches, examination of research registries, journal hand searching, contact with the "invisible college," examination of abstracts, Internet searches, and contact with sources of synthesized information. Extending searches beyond MEDLINE enables researchers to conduct more systematic comprehensive searches.

  20. Biomedical applications of batteries

    Energy Technology Data Exchange (ETDEWEB)

    Latham, Roger [Faculty of Health and Life Sciences, De Montfort University, The Gateway, Leicester, LE1 9BH (United Kingdom); Linford, Roger [The Research Office, De Montfort University, The Gateway, Leicester, LE1 9BH (United Kingdom); Schlindwein, Walkiria [School of Pharmacy, De Montfort University, The Gateway, Leicester, LE1 9BH (United Kingdom)

    2004-08-31

    An overview is presented of the many ways in which batteries and battery materials are used in medicine and in biomedical studies. These include the use of batteries as power sources for motorised wheelchairs, surgical tools, cardiac pacemakers and defibrillators, dynamic prostheses, sensors and monitors for physiological parameters, neurostimulators, devices for pain relief, and iontophoretic, electroporative and related devices for drug administration. The various types of battery and fuel cell used for this wide range of applications will be considered, together with the potential harmful side effects, including accidental ingestion of batteries and the explosive nature of some of the early cardiac pacemaker battery systems.

  1. Advances in biomedical engineering

    CERN Document Server

    Brown, J H U

    1973-01-01

    Advances in Biomedical Engineering, Volume 2, is a collection of papers that discusses the basic sciences, the applied sciences of engineering, the medical sciences, and the delivery of health services. One paper discusses the models of adrenal cortical control, including the secretion and metabolism of cortisol (the controlled process), as well as the initiation and modulation of secretion of ACTH (the controller). Another paper discusses hospital computer systems-application problems, objective evaluation of technology, and multiple pathways for future hospital computer applications. The pos

  2. Statistics in biomedical research

    Directory of Open Access Journals (Sweden)

    González-Manteiga, Wenceslao

    2007-06-01

    Full Text Available The discipline of biostatistics is nowadays a fundamental scientific component of biomedical, public health and health services research. Traditional and emerging areas of application include clinical trials research, observational studies, physiology, imaging, and genomics. The present article reviews the current situation of biostatistics, considering the statistical methods traditionally used in biomedical research, as well as the ongoing development of new methods in response to the new problems arising in medicine. Clearly, the successful application of statistics in biomedical research requires appropriate training of biostatisticians. This training should aim to give due consideration to emerging new areas of statistics, while at the same time retaining full coverage of the fundamentals of statistical theory and methodology. In addition, it is important that students of biostatistics receive formal training in relevant biomedical disciplines, such as epidemiology, clinical trials, molecular biology, genetics, and neuroscience.

  3. Biomedical signals and systems

    CERN Document Server

    Tranquillo, Joseph V

    2013-01-01

    Biomedical Signals and Systems is meant to accompany a one-semester undergraduate signals and systems course. It may also serve as a quick-start for graduate students or faculty interested in how signals and systems techniques can be applied to living systems. The biological nature of the examples allows for systems thinking to be applied to electrical, mechanical, fluid, chemical, thermal and even optical systems. Each chapter focuses on a topic from classic signals and systems theory: System block diagrams, mathematical models, transforms, stability, feedback, system response, control, time

  4. Biomedical photonics handbook

    CERN Document Server

    Vo-Dinh, Tuan

    2003-01-01

    1. Biomedical Photonics: A Revolution at the Interface of Science and Technology, T. Vo-Dinh. PHOTONICS AND TISSUE OPTICS: 2. Optical Properties of Tissues, J. Mobley and T. Vo-Dinh; 3. Light-Tissue Interactions, V.V. Tuchin; 4. Theoretical Models and Algorithms in Optical Diffusion Tomography, S.J. Norton and T. Vo-Dinh. PHOTONIC DEVICES: 5. Laser Light in Biomedicine and the Life Sciences: From the Present to the Future, V.S. Letokhov; 6. Basic Instrumentation in Photonics, T. Vo-Dinh; 7. Optical Fibers and Waveguides for Medical Applications, I. Gannot and

  5. Radiochemicals in biomedical research

    International Nuclear Information System (INIS)

    Evans, E.A.; Oldham, K.G.

    1988-01-01

    This volume describes the role of radiochemicals in biomedical research, as tracers in the development of new drugs, their interaction and function with receptor proteins, with the kinetics of binding of hormone - receptor interactions, and their use in cancer research and clinical oncology. The book also aims to identify future trends in this research, the main objective of which is to provide information leading to improvements in the quality of life, and to give readers a basic understanding of the development of new drugs, how they function in relation to receptor proteins and lead to a better understanding of the diagnosis and treatment of cancers. (author)

  6. Beyond PubMed: Searching the "Grey Literature" for Clinical Trial Results.

    Science.gov (United States)

    Citrome, Leslie

    2014-07-01

    Clinical trial results have been traditionally communicated through the publication of scholarly reports and reviews in biomedical journals. However, this dissemination of information can be delayed or incomplete, making it difficult to appraise new treatments, or in the case of missing data, evaluate older interventions. Going beyond the routine search of PubMed, it is possible to discover additional information in the "grey literature." Examples of the grey literature include clinical trial registries, patent databases, company and industrywide repositories, regulatory agency digital archives, abstracts of paper and poster presentations on meeting/congress websites, industry investor reports and press releases, and institutional and personal websites.

  7. Main data - RMG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Data file: ftp://ftp.biosciencedbc.jp/archive/rmg/LATEST/rmg_main.zip (file size: 1 KB). Simple search URL: http://togodb.b...

  8. Alignment - SAHG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Data file: ftp://ftp.biosciencedbc.jp/archive/sahg/LATEST/sahg_alignment.zip (file size: 12.0 MB).

  9. Locus - ASTRA | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Data file: ftp://ftp.biosciencedbc.jp/archive/astra/LATEST/astra_locus.zip (file size: 887 KB). Records include the splicing type (ex. cassette).

  10. Optimal search filters for renal information in EMBASE.

    Science.gov (United States)

    Iansavichus, Arthur V; Haynes, R Brian; Shariff, Salimah Z; Weir, Matthew; Wilczynski, Nancy L; McKibbon, Ann; Rehman, Faisal; Garg, Amit X

    2010-07-01

    EMBASE is a popular database used to retrieve biomedical information. Our objective was to develop and test search filters to help clinicians and researchers efficiently retrieve articles with renal information in EMBASE. We used a diagnostic test assessment framework because filters operate similarly to screening tests. We divided a sample of 5,302 articles from 39 journals into development and validation sets of articles. Information retrieval properties were assessed by treating each search filter as a "diagnostic test" or screening procedure for the detection of relevant articles. We tested the performance of 1,936,799 search filters made of unique renal terms and their combinations. REFERENCE STANDARD & OUTCOME: The reference standard was manual review of each article. We calculated the sensitivity and specificity of each filter to identify articles with renal information. The best renal filters consisted of multiple search terms, such as "renal replacement therapy," "renal," "kidney disease," and "proteinuria," and the truncated terms "kidney," "dialy," "neph," "glomerul," and "hemodial." These filters achieved peak sensitivities of 98.7% (95% CI, 97.9-99.6) and specificities of 98.5% (95% CI, 98.0-99.0). The retrieval performance of these filters remained excellent in the validation set of independent articles. The retrieval performance of any search will vary depending on the quality of all search concepts used, not just renal terms. We empirically developed and validated high-performance renal search filters for EMBASE. These filters can be programmed into the search engine or used on their own to improve the efficiency of searching.
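
    The record above treats each search filter as a diagnostic test and reports its sensitivity and specificity for retrieving relevant articles. A minimal sketch of that evaluation, assuming each article carries a manual relevance label (the reference standard) and a flag for whether the filter retrieved it; the sample data are fabricated:

        # Sensitivity/specificity of a search filter against a manual reference standard;
        # the article sample below is fabricated for illustration.
        def filter_performance(articles):
            tp = sum(1 for a in articles if a["relevant"] and a["retrieved"])
            fn = sum(1 for a in articles if a["relevant"] and not a["retrieved"])
            tn = sum(1 for a in articles if not a["relevant"] and not a["retrieved"])
            fp = sum(1 for a in articles if not a["relevant"] and a["retrieved"])
            sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
            specificity = tn / (tn + fp) if (tn + fp) else 0.0
            return sensitivity, specificity

        sample = [
            {"relevant": True,  "retrieved": True},
            {"relevant": True,  "retrieved": True},
            {"relevant": True,  "retrieved": False},
            {"relevant": False, "retrieved": False},
            {"relevant": False, "retrieved": True},
            {"relevant": False, "retrieved": False},
        ]
        sens, spec = filter_performance(sample)
        print(f"sensitivity={sens:.1%} specificity={spec:.1%}")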

  11. Exploring and linking biomedical resources through multidimensional semantic spaces.

    Science.gov (United States)

    Berlanga, Rafael; Jiménez-Ruiz, Ernesto; Nebot, Victoria

    2012-01-25

    The semantic integration of biomedical resources is still a challenging issue which is required for effective information processing and data analysis. The availability of comprehensive knowledge resources such as biomedical ontologies and integrated thesauri greatly facilitates this integration effort by means of semantic annotation, which allows disparate data formats and contents to be expressed under a common semantic space. In this paper, we propose a multidimensional representation for such a semantic space, where dimensions regard the different perspectives in biomedical research (e.g., population, disease, anatomy and protein/genes). This paper presents a novel method for building multidimensional semantic spaces from semantically annotated biomedical data collections. This method consists of two main processes: knowledge and data normalization. The former arranges the concepts provided by a reference knowledge resource (e.g., biomedical ontologies and thesauri) into a set of hierarchical dimensions for analysis purposes. The latter reduces the annotation set associated with each collection item into a set of points in the multidimensional space. Additionally, we have developed a visual tool, called 3D-Browser, which implements OLAP-like operators over the generated multidimensional space. The method and the tool have been tested and evaluated in the context of the Health-e-Child (HeC) project. Automatic semantic annotation was applied to tag three collections of abstracts taken from PubMed, one for each target disease of the project, the Uniprot database, and the HeC patient record database. We adopted the UMLS Metathesaurus 2010AA as the reference knowledge resource. Current knowledge resources and semantic-aware technology make possible the integration of biomedical resources. Such an integration is performed through semantic annotation of the intended biomedical data resources. This paper shows how these annotations can be exploited for
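    The data-normalization step described here reduces each item's set of semantic annotations to a point in the dimensional space. The sketch below illustrates that idea with a toy concept-to-dimension table; the concept names and dimensions are invented for illustration and are not the UMLS or HeC mappings used by the authors.

```python
# Toy mapping from annotation concepts to (dimension, coordinate) pairs (illustrative only).
CONCEPT_TO_DIMENSION = {
    "juvenile idiopathic arthritis": ("disease", "rheumatic disease"),
    "knee joint": ("anatomy", "lower limb"),
    "child": ("population", "paediatric"),
    "TNF": ("protein/gene", "cytokine"),
}

def to_point(annotations):
    """Collapse an item's annotation set into one coordinate set per analysis dimension."""
    point = {}
    for concept in annotations:
        dim, coord = CONCEPT_TO_DIMENSION.get(concept, (None, None))
        if dim is not None:
            point.setdefault(dim, set()).add(coord)
    return point

abstract_annotations = {"juvenile idiopathic arthritis", "knee joint", "child"}
print(to_point(abstract_annotations))
# {'disease': {'rheumatic disease'}, 'anatomy': {'lower limb'}, 'population': {'paediatric'}}
```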

  12. License - FANTOM5 | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    License record for the FANTOM5 dataset in the Life Science Database Archive. Users of the data are asked to attribute the FANTOM5 database when reusing it.

  13. Update History of This Database - RED | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Update history record for the Rice Expression Database (RED) in the Life Science Database Archive. 2015/12/21: Rice Expression Database English archive site is opened. 2000/10/1: Rice Expression Database ( http://red.dna.affrc.go.jp/RED/ ) is opened.

  14. Update History of This Database - RPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Update history record for the Rice Proteome Database (RPD) in the Life Science Database Archive. 2016/02/02: Rice Proteome Database English archive site is opened. 2003/01/07: Rice Proteome Database ( http://gene64.dna.affrc.go.jp/RPD/ ) is opened.

  15. Federal databases

    International Nuclear Information System (INIS)

    Welch, M.J.; Welles, B.W.

    1988-01-01

    Accident statistics on all modes of transportation are available as risk assessment analytical tools through several federal agencies. This paper reports on the examination of the accident databases by personal contact with the federal staff responsible for administration of the database programs. This activity, sponsored by the Department of Energy through Sandia National Laboratories, is an overview of the national accident data on highway, rail, air, and marine shipping. For each mode, the definition or reporting requirements of an accident are determined and the method of entering the accident data into the database is established. Availability of the database to others, ease of access, costs, and who to contact were prime questions to each of the database program managers. Additionally, how the agency uses the accident data was of major interest

  16. Animal Research International: Advanced Search

    African Journals Online (AJOL)

    PROMOTING ACCESS TO AFRICAN RESEARCH ... Animal Research International: Advanced Search ... containing either term; e.g., education OR research; Use parentheses to create more complex queries; e.g., ... Journal of Biomedical Research, African Journal of Biotechnology, African Journal of Chemical Education ...

  17. An optimal big data workflow for biomedical image analysis

    Directory of Open Access Journals (Sweden)

    Aurelle Tchagna Kouanou

    Full Text Available Background and objective: In the medical field, data volume is increasingly growing, and traditional methods cannot manage it efficiently. In biomedical computation, the continuing challenges are the management, analysis, and storage of biomedical data. Nowadays, big data technology plays a significant role in the management, organization, and analysis of data, using machine learning and artificial intelligence techniques. It also allows quick access to data using NoSQL databases. Thus, big data technologies include new frameworks to process medical data such as biomedical images. It becomes very important to develop methods and/or architectures based on big data technologies for a complete processing of biomedical image data. Method: This paper describes big data analytics for biomedical images, shows examples reported in the literature, briefly discusses new methods used in processing, and offers conclusions. We argue for adapting and extending related work methods in the field of big data software, using the Hadoop and Spark frameworks. These provide an optimal and efficient architecture for biomedical image analysis. This paper thus gives a broad overview of big data analytics to automate biomedical image diagnosis. A workflow with optimal methods and algorithms for each step is proposed. Results: Two architectures for image classification are suggested. We use the Hadoop framework to design the first, and the Spark framework for the second. The proposed Spark architecture allows us to develop appropriate and efficient methods to leverage a large number of images for classification, which can be customized with respect to each other. Conclusions: The proposed architectures are more complete, easier to use, and adaptable at every step from conception onwards. The obtained Spark architecture is the most complete, because it facilitates the implementation of algorithms with its embedded libraries. Keywords: Biomedical images, Big
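    As a rough illustration of the Spark-based style of architecture argued for above, the sketch below distributes a placeholder image-classification function over a collection of image files with PySpark; the HDFS path and the classify_image function are hypothetical stand-ins for the preprocessing and learning steps of the actual workflow.

```python
from pyspark.sql import SparkSession

def classify_image(path_and_bytes):
    """Placeholder classifier: returns (file path, predicted label)."""
    path, raw_bytes = path_and_bytes
    label = "lesion" if len(raw_bytes) % 2 else "normal"  # stand-in for a real trained model
    return path, label

spark = SparkSession.builder.appName("biomedical-image-workflow").getOrCreate()

# binaryFiles yields (path, bytes) pairs, so image decoding and feature extraction
# can run in parallel across the cluster.
images = spark.sparkContext.binaryFiles("hdfs:///data/biomedical_images/*.png")  # hypothetical path
predictions = images.map(classify_image)

for path, label in predictions.take(5):
    print(path, label)

spark.stop()
```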

  18. License - Q-TARO | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    License record for the Q-TARO dataset in the Life Science Database Archive; the license contents may be changed without notice.

  19. Download - GenLibi | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Download record for the GenLibi dataset in the Life Science Database Archive.

  20. Biomedical ontologies: toward scientific debate.

    Science.gov (United States)

    Maojo, V; Crespo, J; García-Remesal, M; de la Iglesia, D; Perez-Rey, D; Kulikowski, C

    2011-01-01

    Biomedical ontologies have been very successful in structuring knowledge for many different applications, receiving widespread praise for their utility and potential. Yet, the role of computational ontologies in scientific research, as opposed to knowledge management applications, has not been extensively discussed. We aim to stimulate further discussion on the advantages and challenges presented by biomedical ontologies from a scientific perspective. We review various aspects of biomedical ontologies going beyond their practical successes, and focus on some key scientific questions in two ways. First, we analyze and discuss current approaches to improve biomedical ontologies that are based largely on classical, Aristotelian ontological models of reality. Second, we raise various open questions about biomedical ontologies that require further research, analyzing in more detail those related to visual reasoning and spatial ontologies. We outline significant scientific issues that biomedical ontologies should consider, beyond current efforts of building practical consensus between them. For spatial ontologies, we suggest an approach for building "morphospatial" taxonomies, as an example that could stimulate research on fundamental open issues for biomedical ontologies. Analysis of a large number of problems with biomedical ontologies suggests that the field is very much open to alternative interpretations of current work, and in need of scientific debate and discussion that can lead to new ideas and research directions.

  1. Professional Identification for Biomedical Engineers

    Science.gov (United States)

    Long, Francis M.

    1973-01-01

    Discusses four methods of professional identification in biomedical engineering including registration, certification, accreditation, and possible membership qualification of the societies. Indicates that the destiny of the biomedical engineer may be under the control of a new profession, neither the medical nor the engineering. (CC)

  2. Egyptian Journal of Biomedical Sciences

    African Journals Online (AJOL)

    The Egyptian Journal of Biomedical Sciences publishes in all aspects of biomedical research sciences. Both basic and clinical research papers are welcomed. Vol 23 (2007). DOWNLOAD FULL TEXT Open Access DOWNLOAD FULL TEXT Subscription or Fee Access. Table of Contents. Articles. Phytochemical And ...

  3. African Journal of Biomedical Research

    African Journals Online (AJOL)

    The African Journal of biomedical Research was founded in 1998 as a joint project ... of the journal led to the formation of a group (Biomedical Communications Group, ... analysis of multidrug resistant aerobic gram-negative clinical isolates from a ... Dental formula and dental abnormalities observed in the Eidolon helvum ...

  4. Biomedical Science Technologists in Lagos Universities: Meeting ...

    African Journals Online (AJOL)

    Biomedical Science Technologists in Lagos Universities: Meeting Modern Standards ... like to see in biomedical science in Nigeria; 5) their knowledge of ten state-of-the-arts ... KEY WORDS: biomedical science, state-of-the-arts, technical staff ...

  5. Journal of Biomedical Investigation: Editorial Policies

    African Journals Online (AJOL)

    Journal of Biomedical Investigation: Editorial Policies. Journal Home ... The focus of the Journal of Biomedical Research is to promote interdisciplinary research across all Biomedical Sciences. It publishes ... Business editor – Sam Meludu.

  6. Biomedical informatics and translational medicine

    Directory of Open Access Journals (Sweden)

    Sarkar Indra

    2010-02-01

    Full Text Available Abstract Biomedical informatics involves a core set of methodologies that can provide a foundation for crossing the "translational barriers" associated with translational medicine. To this end, the fundamental aspects of biomedical informatics (e.g., bioinformatics, imaging informatics, clinical informatics, and public health informatics) may be essential in helping improve the ability to bring basic research findings to the bedside, evaluate the efficacy of interventions across communities, and enable the assessment of the eventual impact of translational medicine innovations on health policies. Here, a brief description is provided for a selection of key biomedical informatics topics (Decision Support, Natural Language Processing, Standards, Information Retrieval, and Electronic Health Records) and their relevance to translational medicine. Based on contributions and advancements in each of these topic areas, the article proposes that biomedical informatics practitioners ("biomedical informaticians") can be essential members of translational medicine teams.

  7. Computational intelligence in biomedical imaging

    CERN Document Server

    2014-01-01

    This book provides a comprehensive overview of the state-of-the-art computational intelligence research and technologies in biomedical images with emphasis on biomedical decision making. Biomedical imaging offers useful information on patients’ medical conditions and clues to causes of their symptoms and diseases. Biomedical imaging, however, produces a large number of images that physicians must interpret. Therefore, computer aids are in demand and are becoming indispensable in physicians’ decision making. This book discusses major technical advancements and research findings in the field of computational intelligence in biomedical imaging, for example, computational intelligence in computer-aided diagnosis for breast cancer, prostate cancer, and brain disease, in lung function analysis, and in radiation therapy. The book examines technologies and studies that have reached the practical level, and those technologies that are rapidly becoming available in clinical practice in hospitals, such as computational inte...

  8. Customization of biomedical terminologies.

    Science.gov (United States)

    Homo, Julien; Dupuch, Laëtitia; Benbrahim, Allel; Grabar, Natalia; Dupuch, Marie

    2012-01-01

    Within the biomedical area, over one hundred terminologies exist and are merged in the Unified Medical Language System Metathesaurus, which gives over 1 million concepts. When such huge terminological resources are available, users must deal with them and, specifically, with irrelevant parts of these terminologies. We propose to exploit seed terms and semantic distance algorithms in order to customize the terminologies and to delimit within them a semantically homogeneous space. An evaluation performed by a medical expert indicates that the proposed approach is relevant for the customization of terminologies and that the extracted terms are mostly relevant to the seeds. It also indicates that different algorithms provide similar or identical results within a given terminology. The difference is due to the terminologies exploited. Special attention must be paid to defining the optimal association between the semantic similarity algorithms and the thresholds specific to a given terminology.
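    A minimal sketch of the seed-and-distance idea follows: terms within a path-length threshold of a seed concept in a small is-a graph are kept, and everything else is pruned. The toy hierarchy, distance measure, and threshold are illustrative only and are not the UMLS content or the specific algorithms evaluated in the paper.

```python
import networkx as nx

# Tiny illustrative is-a hierarchy (not UMLS).
G = nx.Graph()
G.add_edges_from([
    ("medicine", "nephrology"),
    ("medicine", "dermatology"),
    ("nephrology", "kidney disease"),
    ("kidney disease", "renal failure"),
    ("kidney disease", "glomerulonephritis"),
    ("renal failure", "dialysis"),
    ("dermatology", "psoriasis"),
])

def customize(graph, seeds, max_distance=2):
    """Keep only terms within max_distance edges of any seed term."""
    keep = set()
    for seed in seeds:
        lengths = nx.single_source_shortest_path_length(graph, seed, cutoff=max_distance)
        keep.update(lengths)
    return keep

print(customize(G, seeds={"kidney disease"}))
# keeps the renal neighbourhood; distant branches such as "psoriasis" are pruned
```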

  9. Database Replication

    CERN Document Server

    Kemme, Bettina

    2010-01-01

    Database replication is widely used for fault-tolerance, scalability and performance. The failure of one database replica does not stop the system from working as available replicas can take over the tasks of the failed replica. Scalability can be achieved by distributing the load across all replicas, and adding new replicas should the load increase. Finally, database replication can provide fast local access, even if clients are geographically distributed, provided data copies are located close to them. Despite its advantages, replication is not a straightforward technique to apply, and

  10. Working with Data: Discovering Knowledge through Mining and Analysis; Systematic Knowledge Management and Knowledge Discovery; Text Mining; Methodological Approach in Discovering User Search Patterns through Web Log Analysis; Knowledge Discovery in Databases Using Formal Concept Analysis; Knowledge Discovery with a Little Perspective.

    Science.gov (United States)

    Qin, Jian; Jurisica, Igor; Liddy, Elizabeth D.; Jansen, Bernard J; Spink, Amanda; Priss, Uta; Norton, Melanie J.

    2000-01-01

    These six articles discuss knowledge discovery in databases (KDD). Topics include data mining; knowledge management systems; applications of knowledge discovery; text and Web mining; text mining and information retrieval; user search patterns through Web log analysis; concept analysis; data collection; and data structure inconsistency. (LRW)

  11. Refactoring databases evolutionary database design

    CERN Document Server

    Ambler, Scott W

    2006-01-01

    Refactoring has proven its value in a wide range of development projects–helping software professionals improve system designs, maintainability, extensibility, and performance. Now, for the first time, leading agile methodologist Scott Ambler and renowned consultant Pramodkumar Sadalage introduce powerful refactoring techniques specifically designed for database systems. Ambler and Sadalage demonstrate how small changes to table structures, data, stored procedures, and triggers can significantly enhance virtually any database design–without changing semantics. You’ll learn how to evolve database schemas in step with source code–and become far more effective in projects relying on iterative, agile methodologies. This comprehensive guide and reference helps you overcome the practical obstacles to refactoring real-world databases by covering every fundamental concept underlying database refactoring. Using start-to-finish examples, the authors walk you through refactoring simple standalone databas...

  12. Search Engines for Tomorrow's Scholars

    Science.gov (United States)

    Fagan, Jody Condit

    2011-01-01

    Today's scholars face an outstanding array of choices when choosing search tools: Google Scholar, discipline-specific abstracts and index databases, library discovery tools, and more recently, Microsoft's re-launch of their academic search tool, now dubbed Microsoft Academic Search. What are these tools' strengths for the emerging needs of…

  13. Update History of This Database - KEGG MEDICUS | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Update history record for KEGG MEDICUS in the Life Science Database Archive. 2014/05/09: KEGG MEDICUS English archive site is opened. 2010/10/01: KEGG MEDICUS ( http://www.kegg.jp/kegg/medicus/ ) is opened.

  14. RDD Databases

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database was established to oversee documents issued in support of fishery research activities including experimental fishing permits (EFP), letters of...

  15. Snowstorm Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Snowstorm Database is a collection of over 500 snowstorms dating back to 1900 and updated operationally. Only storms having large areas of heavy snowfall (10-20...

  16. Dealer Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The dealer reporting databases contain the primary data reported by federally permitted seafood dealers in the northeast. Electronic reporting was implemented May 1,...

  17. Tibetan Magmatism Database

    Science.gov (United States)

    Chapman, James B.; Kapp, Paul

    2017-11-01

    A database containing previously published geochronologic, geochemical, and isotopic data on Mesozoic to Quaternary igneous rocks in the Himalayan-Tibetan orogenic system is presented. The database is intended to serve as a repository for new and existing igneous rock data and is publicly accessible through a web-based platform that includes an interactive map and data table interface with search, filtering, and download options. To illustrate the utility of the database, the age, location, and εHf(t) composition of magmatism from the central Gangdese batholith in the southern Lhasa terrane are compared. The data identify three high-flux events, which peak at 93, 50, and 15 Ma. They are characterized by inboard arc migration and a temporal and spatial shift to more evolved isotopic compositions.

  18. Citation Searching: Search Smarter & Find More

    Science.gov (United States)

    Hammond, Chelsea C.; Brown, Stephanie Willen

    2008-01-01

    The staff at University of Connecticut are participating in Elsevier's Student Ambassador Program (SAmP) in which graduate students train their peers on "citation searching" research using Scopus and Web of Science, two tremendous citation databases. They are in the fourth semester of these training programs, and they are wildly successful: They…

  19. National database

    DEFF Research Database (Denmark)

    Kristensen, Helen Grundtvig; Stjernø, Henrik

    1995-01-01

    Article about the national database for nursing research established at the Danish Institute for Health and Nursing Research. The aim of the database is to gather knowledge about research and development activities within nursing.

  20. [Evidence-based medicine. 2. Research of clinically relevant biomedical information. Gruppo Italiano per la Medicina Basata sulle Evidenze--GIMBE].

    Science.gov (United States)

    Cartabellotta, A

    1998-05-01

    Evidence-based Medicine is a product of the electronic information age, and there are several databases useful for practicing it--MEDLINE, EMBASE, specialized compendiums of evidence (Cochrane Library, Best Evidence), and practice guidelines--most of them freely available through the Internet, which offers a growing number of health resources. Because searching for the best evidence is a basic step in practicing Evidence-based Medicine, this second review (the first was published in the March 1998 issue) aims to provide physicians with tools and skills for retrieving relevant biomedical information. Therefore, we discuss strategies for managing information overload, analyze the characteristics, usefulness and limits of medical databases, and explain how to use MEDLINE in day-to-day clinical practice.
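    For the kind of MEDLINE searching discussed in this review, queries can also be run programmatically through NCBI's public E-utilities interface. The sketch below sends a simple esearch request and prints the matching PubMed IDs; the query string is only an example.

```python
import json
import urllib.parse
import urllib.request

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_search(query, retmax=20):
    """Return a list of PubMed IDs matching the query via NCBI E-utilities."""
    params = urllib.parse.urlencode({
        "db": "pubmed",
        "term": query,
        "retmode": "json",
        "retmax": retmax,
    })
    with urllib.request.urlopen(f"{BASE}?{params}") as response:
        payload = json.load(response)
    return payload["esearchresult"]["idlist"]

# Example query combining a clinical topic with a methodological filter.
print(pubmed_search("(heart failure) AND (randomized controlled trial[pt])"))
```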

  1. Biomedical informatics: we are what we publish.

    Science.gov (United States)

    Elkin, P L; Brown, S H; Wright, G

    2013-01-01

    This article is part of a For-Discussion-Section of Methods of Information in Medicine on "Biomedical Informatics: We are what we publish". It is introduced by an editorial and followed by a commentary paper with invited comments. In subsequent issues the discussion may continue through letters to the editor. Informatics experts have attempted to define the field via consensus projects, which have led to consensus statements by both AMIA and IMIA. We add to the output of this process the results of a study of the PubMed publications with abstracts from the field of Biomedical Informatics. We took the terms from the AMIA consensus document and the terms from the IMIA definitions of the field of Biomedical Informatics and combined them through human review to create the Health Informatics Ontology. We built a terminology server using the Intelligent Natural Language Processor (iNLP). Then we downloaded the entire set of articles in Medline identified by searching the literature for "Medical Informatics" OR "Bioinformatics". The articles were parsed with the joint AMIA/IMIA terminology and then again using SNOMED CT; for Bioinformatics, the articles were also parsed using the HGNC Ontology. We identified 153,580 articles using "Medical Informatics" and 20,573 articles using "Bioinformatics". This resulted in 168,298 unique articles and an overlap of 5,855 articles. Of these, 62,244 articles (37%) had titles and abstracts that contained at least one concept from the Health Informatics Ontology. SNOMED CT indexing showed that the field interacts with almost all clinical fields of medicine. Further defining the field by what we publish can add value to the consensus-driven processes that have been the mainstay of the efforts to date. Next steps should be to extract terms from the literature that are uncovered and create class hierarchies and relationships for this content. We should also examine frequently occurring MeSH terms as markers to define Biomedical Informatics

  2. Text Mining in Biomedical Domain with Emphasis on Document Clustering.

    Science.gov (United States)

    Renganathan, Vinaitheerthan

    2017-07-01

    With the exponential increase in the number of articles published every year in the biomedical domain, there is a need to build automated systems to extract unknown information from the articles published. Text mining techniques enable the extraction of unknown knowledge from unstructured documents. This paper reviews text mining processes in detail and the software tools available to carry out text mining. It also reviews the roles and applications of text mining in the biomedical domain. Text mining processes, such as search and retrieval of documents, pre-processing of documents, natural language processing, methods for text clustering, and methods for text classification are described in detail. Text mining techniques can facilitate the mining of vast amounts of knowledge on a given topic from published biomedical research articles and draw meaningful conclusions that are not possible otherwise.
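    As a minimal illustration of the document-clustering step reviewed here, the sketch below clusters a handful of toy abstracts with TF-IDF features and k-means; the texts and the number of clusters are invented for the example.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy "abstracts" standing in for retrieved biomedical documents.
documents = [
    "insulin signalling and glucose metabolism in type 2 diabetes",
    "beta cell function and insulin resistance in diabetic patients",
    "deep learning segmentation of brain MRI images",
    "convolutional networks for tumour detection in radiology images",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(documents)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for doc, label in zip(documents, kmeans.labels_):
    print(label, doc)
```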

  3. The Weaknesses of Full-Text Searching

    Science.gov (United States)

    Beall, Jeffrey

    2008-01-01

    This paper provides a theoretical critique of the deficiencies of full-text searching in academic library databases. Because full-text searching relies on matching words in a search query with words in online resources, it is an inefficient method of finding information in a database. This matching fails to retrieve synonyms, and it also retrieves…
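    The synonym problem described here is easy to see in a toy retrieval loop: an exact word-match query for one term misses records that use an equivalent term unless the query is expanded. The sketch below is illustrative only, with invented records.

```python
records = [
    "Myocardial infarction outcomes in elderly patients",
    "Long-term survival after heart attack in older adults",
    "Stroke rehabilitation and physical therapy",
]

def full_text_search(query_terms, docs):
    """Naive full-text matching: a record is returned only if it contains a query word."""
    hits = []
    for doc in docs:
        words = doc.lower().split()
        if any(term.lower() in words for term in query_terms):
            hits.append(doc)
    return hits

print(full_text_search(["infarction"], records))            # misses the "heart attack" record
print(full_text_search(["infarction", "attack"], records))  # simple synonym expansion recovers it
```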

  4. Possible use of fuzzy logic in database

    Directory of Open Access Journals (Sweden)

    Vaclav Bezdek

    2011-04-01

    Full Text Available The article deals with fuzzy logic and its possible use in database systems. First, the fuzzy style of thinking is illustrated with a simple example. Next, the advantages of the fuzzy approach to database searching are demonstrated on a database of used cars in the Czech Republic.
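    A small sketch in the spirit of the used-car example: instead of a crisp price cut-off, each car gets a degree of membership in the fuzzy concept "cheap", and results are ranked by that degree. The membership thresholds and data are invented for illustration.

```python
def cheap_membership(price, full_at=100_000, zero_at=250_000):
    """Fuzzy membership in 'cheap': 1 below full_at, 0 above zero_at, linear in between."""
    if price <= full_at:
        return 1.0
    if price >= zero_at:
        return 0.0
    return (zero_at - price) / (zero_at - full_at)

# Hypothetical used-car records (prices in CZK).
cars = [("Skoda Fabia", 95_000), ("Ford Focus", 160_000), ("BMW 530", 420_000)]

for name, price in sorted(cars, key=lambda c: cheap_membership(c[1]), reverse=True):
    print(f"{name}: cheap degree {cheap_membership(price):.2f}")
```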

  5. Search Help

    Science.gov (United States)

    Guidance and search help resource listing examples of common queries that can be used in the Google Search Appliance search request, including examples of special characters or query term separators that Google Search Appliance recognizes.

  6. Experiment Databases

    Science.gov (United States)

    Vanschoren, Joaquin; Blockeel, Hendrik

    Besides running machine learning algorithms based on inductive queries, much can be learned by directly querying the combined results of many prior studies. Indeed, all around the globe, thousands of machine learning experiments are being executed on a daily basis, generating a constant stream of empirical information on machine learning techniques. While the information contained in these experiments might have many uses beyond their original intent, results are typically described very concisely in papers and discarded afterwards. If we properly store and organize these results in central databases, they can be immediately reused for further analysis, thus boosting future research. In this chapter, we propose the use of experiment databases: databases designed to collect all the necessary details of these experiments, and to intelligently organize them in online repositories to enable fast and thorough analysis of a myriad of collected results. They constitute an additional, queryable source of empirical meta-data based on principled descriptions of algorithm executions, without reimplementing the algorithms in an inductive database. As such, they engender a very dynamic, collaborative approach to experimentation, in which experiments can be freely shared, linked together, and immediately reused by researchers all over the world. They can be set up for personal use, to share results within a lab or to create open, community-wide repositories. Here, we provide a high-level overview of their design, and use an existing experiment database to answer various interesting research questions about machine learning algorithms and to verify a number of recent studies.
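    A minimal sketch of the kind of experiment database described here, using SQLite: each run of an algorithm is stored with its dataset, parameters, and score, and can later be queried across studies instead of being re-run. The schema and rows are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE experiments (
        algorithm TEXT, dataset TEXT, parameters TEXT, accuracy REAL
    )
""")
conn.executemany(
    "INSERT INTO experiments VALUES (?, ?, ?, ?)",
    [
        ("random_forest", "leukemia", "trees=100", 0.91),
        ("random_forest", "leukemia", "trees=500", 0.93),
        ("svm", "leukemia", "C=1.0", 0.89),
    ],
)

# Query the combined results of prior runs instead of re-running them.
for row in conn.execute(
    "SELECT algorithm, MAX(accuracy) FROM experiments GROUP BY algorithm"
):
    print(row)
```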

  7. Update History of This Database - KAIKOcDNA | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Update history record for the KAIKOcDNA database in the Life Science Database Archive. 2014/10/20: The URL of the database maintenance site is changed. 2014/10/08: KAIKOcDNA English archive site is opened. 2004/04/12: KAIKOcDNA database ( http://sgp.dna.affrc.go.jp/EST/ ) is opened.

  8. Update History of This Database - PLACE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Update history record for the PLACE database in the Life Science Database Archive. 2016/08/22: The contact address is changed. 2014/10/20: The URLs of the database maintenance site and the portal site are changed. 2014/07/17: PLACE English archive site is opened.

  9. GRIP Database original data - GRIPDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Metadata record for the GRIP Database original data (GRIPDB) in the Life Science Database Archive. DOI: 10.18908/lsdba.nbdc01665-006. The dataset consists of data tables and sequences. Data file: gripdb_original_data.zip, available under ftp://ftp.biosciencedbc.jp/archive/gripdb/LATEST/.

  10. Database Dump - fRNAdb | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Metadata record for the Database Dump of fRNAdb in the Life Science Database Archive. DOI: 10.18908/lsdba.nbdc00452-002. The dump contains tab-separated text data. Data file: Database_Dump, ftp://ftp.biosciencedbc.jp/archive/frnadb/LATEST/Database_Dump (file size: 673 MB; 4 files).

  11. Smart nanomaterials for biomedics.

    Science.gov (United States)

    Choi, Soonmo; Tripathi, Anuj; Singh, Deepti

    2014-10-01

    Nanotechnology has become important in various disciplines of technology and science. It has proven to be a potential candidate for various applications ranging from biosensors to the delivery of genes and therapeutic agents to tissue engineering. Scaffolds for every application can be tailor made to have the appropriate physicochemical properties that will influence the in vivo system in the desired way. For highly sensitive and precise detection of specific signals or pathogenic markers, or for sensing the levels of particular analytes, fabricating target-specific nanomaterials can be very useful. Multi-functional nano-devices can be fabricated using different approaches to achieve multi-directional patterning in a scaffold with the ability to alter topographical cues at scale of less than or equal to 100 nm. Smart nanomaterials are made to understand the surrounding environment and act accordingly by either protecting the drug in hostile conditions or releasing the "payload" at the intended intracellular target site. All of this is achieved by exploiting polymers for their functional groups or incorporating conducting materials into a natural biopolymer to obtain a "smart material" that can be used for detection of circulating tumor cells, detection of differences in the body analytes, or repair of damaged tissue by acting as a cell culture scaffold. Nanotechnology has changed the nature of diagnosis and treatment in the biomedical field, and this review aims to bring together the most recent advances in smart nanomaterials.

  12. Bio-medical CMOS ICs

    CERN Document Server

    Yoo, Hoi-Jun

    2011-01-01

    This book is based on a graduate course entitled, Ubiquitous Healthcare Circuits and Systems, that was given by one of the editors. It includes an introduction and overview to biomedical ICs and provides information on the current trends in research.

  13. Functionalized carbon nanotubes: biomedical applications

    Science.gov (United States)

    Vardharajula, Sandhya; Ali, Sk Z; Tiwari, Pooja M; Eroğlu, Erdal; Vig, Komal; Dennis, Vida A; Singh, Shree R

    2012-01-01

    Carbon nanotubes (CNTs) are emerging as novel nanomaterials for various biomedical applications. CNTs can be used to deliver a variety of therapeutic agents, including biomolecules, to the target disease sites. In addition, their unparalleled optical and electrical properties make them excellent candidates for bioimaging and other biomedical applications. However, the high cytotoxicity of CNTs limits their use in humans and many biological systems. The biocompatibility and low cytotoxicity of CNTs are attributed to size, dose, duration, testing systems, and surface functionalization. The functionalization of CNTs improves their solubility and biocompatibility and alters their cellular interaction pathways, resulting in much-reduced cytotoxic effects. Functionalized CNTs are promising novel materials for a variety of biomedical applications. These potential applications are particularly enhanced by their ability to penetrate biological membranes with relatively low cytotoxicity. This review is directed towards the overview of CNTs and their functionalization for biomedical applications with minimal cytotoxicity. PMID:23091380

  14. Molecular Biomedical Imaging Laboratory (MBIL)

    Data.gov (United States)

    Federal Laboratory Consortium — The Molecular Biomedical Imaging Laboratory (MBIL) is adjacent-a nd has access-to the Department of Radiology and Imaging Sciences clinical imaging facilities. MBIL...

  15. New Directions for Biomedical Engineering

    Science.gov (United States)

    Plonsey, Robert

    1973-01-01

    Discusses the definition of "biomedical engineering" and the development of educational programs in the field. Includes detailed descriptions of the roles of bioengineers, medical engineers, and chemical engineers. (CC)

  16. Summer Biomedical Engineering Institute 1972

    Science.gov (United States)

    Deloatch, E. M.

    1973-01-01

    The five problems studied for biomedical applications of NASA technology are reported. The studies reported are: design modification of electrophoretic equipment, operating room environment control, hematological viscometry, handling system for iridium, and indirect blood pressure measuring device.

  17. Building a biomedical cyberinfrastructure for collaborative research.

    Science.gov (United States)

    Schad, Peter A; Mobley, Lee Rivers; Hamilton, Carol M

    2011-05-01

    For the potential power of genome-wide association studies (GWAS) and translational medicine to be realized, the biomedical research community must adopt standard measures, vocabularies, and systems to establish an extensible biomedical cyberinfrastructure. Incorporating standard measures will greatly facilitate combining and comparing studies via meta-analysis. Incorporating consensus-based and well-established measures into various studies should reduce the variability across studies due to attributes of measurement, making findings across studies more comparable. This article describes two well-established consensus-based approaches to identifying standard measures and systems: PhenX (consensus measures for phenotypes and eXposures), and the Open Geospatial Consortium (OGC). NIH support for these efforts has produced the PhenX Toolkit, an assembled catalog of standard measures for use in GWAS and other large-scale genomic research efforts, and the RTI Spatial Impact Factor Database (SIFD), a comprehensive repository of geo-referenced variables and extensive meta-data that conforms to OGC standards. The need for coordinated development of cyberinfrastructure to support measures and systems that enhance collaboration and data interoperability is clear; this paper includes a discussion of standard protocols for ensuring data compatibility and interoperability. Adopting a cyberinfrastructure that includes standard measures and vocabularies, and open-source systems architecture, such as the two well-established systems discussed here, will enhance the potential of future biomedical and translational research. Establishing and maintaining the cyberinfrastructure will require a fundamental change in the way researchers think about study design, collaboration, and data storage and analysis. Copyright © 2011 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.

  18. A novel biomedical image indexing and retrieval system via deep preference learning.

    Science.gov (United States)

    Pang, Shuchao; Orgun, Mehmet A; Yu, Zhezhou

    2018-05-01

    The traditional biomedical image retrieval methods as well as content-based image retrieval (CBIR) methods originally designed for non-biomedical images either only consider using pixel and low-level features to describe an image or use deep features to describe images but still leave a lot of room for improving both accuracy and efficiency. In this work, we propose a new approach, which exploits deep learning technology to extract the high-level and compact features from biomedical images. The deep feature extraction process leverages multiple hidden layers to capture substantial feature structures of high-resolution images and represent them at different levels of abstraction, leading to an improved performance for indexing and retrieval of biomedical images. We exploit the current popular and multi-layered deep neural networks, namely, stacked denoising autoencoders (SDAE) and convolutional neural networks (CNN) to represent the discriminative features of biomedical images by transferring the feature representations and parameters of pre-trained deep neural networks from another domain. Moreover, in order to index all the images for finding the similarly referenced images, we also introduce preference learning technology to train and learn a kind of a preference model for the query image, which can output the similarity ranking list of images from a biomedical image database. To the best of our knowledge, this paper introduces preference learning technology for the first time into biomedical image retrieval. We evaluate the performance of two powerful algorithms based on our proposed system and compare them with those of popular biomedical image indexing approaches and existing regular image retrieval methods with detailed experiments over several well-known public biomedical image databases. Based on different criteria for the evaluation of retrieval performance, experimental results demonstrate that our proposed algorithms outperform the state
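    The retrieval step described above ultimately ranks database images by how close their learned feature vectors are to the query's. The sketch below shows only that ranking step, with random vectors standing in for SDAE/CNN features; it is not the preference-learning model proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for deep feature vectors of a query image and a small image database.
query_features = rng.normal(size=128)
database_features = rng.normal(size=(1000, 128))

def cosine_rank(query, database, top_k=5):
    """Rank database images by cosine similarity to the query feature vector."""
    q = query / np.linalg.norm(query)
    d = database / np.linalg.norm(database, axis=1, keepdims=True)
    similarity = d @ q
    order = np.argsort(-similarity)[:top_k]
    return list(zip(order.tolist(), similarity[order].tolist()))

# Indices and similarity scores of the most similar database images.
print(cosine_rank(query_features, database_features))
```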

  19. Biomedical Publications Profile and Trends in Gulf Cooperation Council Countries

    Directory of Open Access Journals (Sweden)

    Almundher Al-Maawali

    2012-02-01

    Full Text Available Objectives: There is a dearth of studies examining the relationship between research output and other socio-demographic indicators in the Gulf Cooperation Council (GCC) countries (Bahrain, Kuwait, Oman, Qatar, Saudi Arabia, and the United Arab Emirates). The three interrelated aims of this study were, first, to ascertain the number of biomedical publications in the GCC from 1970 to 2010; second, to establish the rate of publication according to population size during the same period; and, third, to gauge the relationship between the number of publications and specific socio-economic parameters. Methods: The Medline database was searched in October 2010 by affiliation, year and publication type from 1970 to 2010. Data obtained were normalised to the number of publications per million of the population, gross domestic product, and the number of physicians in each country. Results: The number of articles from the GCC region published over this 40-year period was 25,561. Saudi Arabia had the highest number followed by Kuwait, UAE, and then Oman. Kuwait had the highest profile of publication when normalised to population size, followed by Qatar. Oman is the lowest in this ranking. Overall, the six countries showed a rising trend in publication numbers with Oman having a significant increase from 1990 to 2005. There was a significant relationship between the number of physicians and the number of publications. Conclusion: The research productivity from the GCC has experienced complex and fluctuating growth in the past 40 years. Future prospects for increasing research productivity are discussed with particular reference to the situation in Oman.

  20. Improve Biomedical Information Retrieval using Modified Learning to Rank Methods.

    Science.gov (United States)

    Xu, Bo; Lin, Hongfei; Lin, Yuan; Ma, Yunlong; Yang, Liang; Wang, Jian; Yang, Zhihao

    2016-06-14

    In recent years, the number of biomedical articles has increased exponentially, making it difficult for biologists to capture all the needed information manually. Information retrieval technologies, as the core of search engines, can deal with the problem automatically, providing users with the needed information. However, it is a great challenge to apply these technologies directly to biomedical retrieval, because of the abundance of domain-specific terminologies. To enhance biomedical retrieval, we propose a novel framework based on learning to rank. Learning to rank is a family of state-of-the-art information retrieval techniques, and has proved effective in many information retrieval tasks. In the proposed framework, we attempt to tackle the problem of the abundance of terminologies by constructing ranking models, which focus not only on retrieving the most relevant documents, but also on diversifying the search results to increase the completeness of the resulting list for a given query. In the model training, we propose two novel document labeling strategies, and combine several traditional retrieval models as learning features. We also investigate the usefulness of different learning to rank approaches in our framework. Experimental results on TREC Genomics datasets demonstrate the effectiveness of our framework for biomedical information retrieval.
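    A very small sketch of the pairwise flavour of learning to rank: each document is described by a few retrieval-model scores, and a linear model is trained on score differences between relevant and non-relevant documents for the same query. The feature values and labels are invented; this is not the authors' framework or their labeling strategies.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: features of one document for one query, e.g. [BM25 score, language-model score].
relevant = np.array([[2.1, 1.8], [1.9, 2.2], [2.4, 1.7]])
non_relevant = np.array([[0.8, 0.9], [1.1, 0.7], [0.6, 1.2]])

# Pairwise transform: classify the sign of (relevant - non_relevant) feature differences.
diffs, labels = [], []
for r in relevant:
    for n in non_relevant:
        diffs.append(r - n); labels.append(1)
        diffs.append(n - r); labels.append(0)

ranker = LogisticRegression().fit(np.array(diffs), np.array(labels))

# Score unseen documents: a higher decision value means a higher rank.
candidates = np.array([[2.0, 2.0], [0.9, 1.0]])
print(ranker.decision_function(candidates))
```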

  1. Hydroxyapatite coatings for biomedical applications

    CERN Document Server

    Zhang, Sam

    2013-01-01

    Hydroxyapatite coatings are of great importance in the biological and biomedical coatings fields, especially in the current era of nanotechnology and bioapplications. With a bonelike structure that promotes osseointegration, hydroxyapatite coating can be applied to otherwise bioinactive implants to make their surface bioactive, thus achieving faster healing and recovery. In addition to applications in orthopedic and dental implants, this coating can also be used in drug delivery. Hydroxyapatite Coatings for Biomedical Applications explores developments in the processing and property characteri

  2. John Glenn Biomedical Engineering Consortium

    Science.gov (United States)

    Nall, Marsha

    2004-01-01

    The John Glenn Biomedical Engineering Consortium is an inter-institutional research and technology development effort, beginning with ten projects in FY02 that are aimed at combining GRC expertise in fluid physics and sensor development with local biomedical expertise to mitigate the risks of space flight to the health, safety, and performance of astronauts. It is anticipated that several new technologies will be developed that are applicable to medical needs both in space and on Earth.

  3. Pathophysiologic mechanisms of biomedical nanomaterials

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Liming, E-mail: wangliming@ihep.ac.cn; Chen, Chunying, E-mail: chenchy@nanoctr.cn

    2016-05-15

    Nanomaterials (NMs) have been widely used in biomedical fields, daily consumer products, and even the food industry. It is crucial to understand the safety and biomedical efficacy of NMs. In this review, we summarized the recent progress on the physiological and pathological effects of NMs at several levels: protein-nano interface, NM-subcellular structures, and cell–cell interaction. We focused on detailed information about nano-bio interactions, especially protein adsorption, intracellular trafficking, biological barriers, and signaling pathways, as well as the associated mechanisms mediated by nanomaterials. We also introduced related analytical methods that are meaningful and helpful for biomedical effect studies in the future. We believe that knowledge about pathophysiologic effects of NMs is not only significant for rational design of medical NMs but also helps predict their safety and further improve their applications in the future. - Highlights: • Rapid protein adsorption onto nanomaterials that affects biomedical effects • Nanomaterials and their interaction with biological membrane, intracellular trafficking and specific cellular effects • Nanomaterials and their interaction with biological barriers • The signaling pathways mediated by nanomaterials and related biomedical effects • Novel techniques for studying translocation and biomedical effects of NMs.

  4. Pathophysiologic mechanisms of biomedical nanomaterials

    International Nuclear Information System (INIS)

    Wang, Liming; Chen, Chunying

    2016-01-01

    Nanomaterials (NMs) have been widely used in biomedical fields, daily consumer products, and even the food industry. It is crucial to understand the safety and biomedical efficacy of NMs. In this review, we summarized the recent progress on the physiological and pathological effects of NMs at several levels: protein-nano interface, NM-subcellular structures, and cell–cell interaction. We focused on detailed information about nano-bio interactions, especially protein adsorption, intracellular trafficking, biological barriers, and signaling pathways, as well as the associated mechanisms mediated by nanomaterials. We also introduced related analytical methods that are meaningful and helpful for biomedical effect studies in the future. We believe that knowledge about pathophysiologic effects of NMs is not only significant for rational design of medical NMs but also helps predict their safety and further improve their applications in the future. - Highlights: • Rapid protein adsorption onto nanomaterials that affects biomedical effects • Nanomaterials and their interaction with biological membrane, intracellular trafficking and specific cellular effects • Nanomaterials and their interaction with biological barriers • The signaling pathways mediated by nanomaterials and related biomedical effects • Novel techniques for studying translocation and biomedical effects of NMs

  5. Biomedical Risk Factors of Achilles Tendinopathy in Physically Active People: a Systematic Review.

    Science.gov (United States)

    Kozlovskaia, Maria; Vlahovich, Nicole; Ashton, Kevin J; Hughes, David C

    2017-12-01

    Achilles tendinopathy is the most prevalent tendon disorder in people engaged in running and jumping sports. Aetiology of Achilles tendinopathy is complex and requires comprehensive research of contributing risk factors. There is relatively little research focussing on potential biomedical risk factors for Achilles tendinopathy. The purpose of this systematic review is to identify studies and summarise current knowledge of biomedical risk factors of Achilles tendinopathy in physically active people. Research databases were searched for relevant articles followed by assessment in accordance with PRISMA statement and standards of Cochrane collaboration. Levels of evidence and quality assessment designation were implemented in accordance with OCEBM levels of evidence and Newcastle-Ottawa Quality Assessment Scale, respectively. A systematic review of the literature identified 22 suitable articles. All included studies had moderate level of evidence (2b) with the Newcastle-Ottawa score varying between 6 and 9. The majority (17) investigated genetic polymorphisms involved in tendon structure and homeostasis and apoptosis and inflammation pathways. Overweight as a risk factor of Achilles tendinopathy was described in five included studies that investigated non-genetic factors. COL5A1 genetic variants were the most extensively studied, particularly in association with genetic variants in the genes involved in regulation of cell-matrix interaction in tendon and matrix homeostasis. It is important to investigate connections and pathways whose interactions might be disrupted and therefore alter collagen structure and lead to the development of pathology. Polymorphisms in genes involved in apoptosis and inflammation, and Achilles tendinopathy did not show strong association and, however, should be considered for further investigation. This systematic review suggests that biomedical risk factors are an important consideration in the future study of propensity to the development

  6. RPCs in biomedical applications

    Science.gov (United States)

    Belli, G.; De Vecchi, C.; Giroletti, E.; Guida, R.; Musitelli, G.; Nardò, R.; Necchi, M. M.; Pagano, D.; Ratti, S. P.; Sani, G.; Vicini, A.; Vitulo, P.; Viviani, C.

    2006-08-01

    We are studying possible applications of Resistive Plate Chambers (RPCs) in the biomedical domain, such as Positron Emission Tomography (PET). The use of RPCs in PET can provide several improvements over the usual scintillation-based detectors. The most striking features are the extremely good spatial and time resolutions. They can be as low as 50 μm and 25 ps respectively, compared to the much higher intrinsic limits of bulk detectors. Much effort has been made to investigate suitable materials to make RPCs sensitive to 511 keV photons. For this reason, we are studying different types of coating employing high-Z materials with proper electrical resistivity. Later investigations explored the possibility of coating glass electrodes by means of serigraphy techniques, employing oxide-based mixtures with a high density of high-Z materials; the efficiency is strongly dependent on the coating thickness and reaches a maximum at a characteristic value that is a function of the compound (usually a few hundred microns). The most promising mixtures seem to be PbO, Bi2O3 and Tl2O. Preliminary gamma efficiency measurements for a Multigap RPC prototype (MRPC) are presented as well as simulations using a GEANT4-based framework. The MRPC has 5 gas gaps; their spacing is maintained by 0.3 mm diameter nylon fishing line, and the electrodes are made of thin glass (1 mm for the outer electrodes, 0.15-0.4 mm for the inner ones). The detector is enclosed in a metallic gas-tight box, filled with a mixture of 92.5% C2H2F4, 2.5% SF6, and 5% C4H10. Different gas mixtures with increased SF6 percentages are being studied, and efficiency results as a function of the new mixtures will be presented.

  7. INIS: Manual for online retrieval from the INIS Database on the Internet

    International Nuclear Information System (INIS)

    2000-01-01

    This manual demonstrates the different Search Forms available to retrieve relevant records using the INIS Database online retrieval system. Information on how to search, how to store, refine and retrieve searches, and how to update a literature search is given

  8. INIS: Manual for online retrieval from the INIS Database on the Internet

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-10-01

    This manual demonstrates the different Search Forms available to retrieve relevant records using the INIS Database online retrieval system. Information on how to search, how to store, refine and retrieve searches, and how to update a literature search is given.

  9. The Impact of Online Bibliographic Databases on Teaching and Research in Political Science.

    Science.gov (United States)

    Reichel, Mary

    The availability of online bibliographic databases greatly facilitates literature searching in political science. The advantages to searching databases online include combination of concepts, comprehensiveness, multiple database searching, free-text searching, currency, current awareness services, document delivery service, and convenience.…

  10. XCEDE: an extensible schema for biomedical data.

    Science.gov (United States)

    Gadde, Syam; Aucoin, Nicole; Grethe, Jeffrey S; Keator, David B; Marcus, Daniel S; Pieper, Steve

    2012-01-01

    The XCEDE (XML-based Clinical and Experimental Data Exchange) XML schema, developed by members of the BIRN (Biomedical Informatics Research Network), provides an extensive metadata hierarchy for storing, describing and documenting the data generated by scientific studies. Currently at version 2.0, the XCEDE schema serves as a specification for the exchange of scientific data between databases, analysis tools, and web services. It provides a structured metadata hierarchy, storing information relevant to various aspects of an experiment (project, subject, protocol, etc.). Each hierarchy level also provides for the storage of data provenance information allowing for a traceable record of processing and/or changes to the underlying data. The schema is extensible to support the needs of various data modalities and to express types of data not originally envisioned by the developers. The latest version of the XCEDE schema and manual are available from http://www.xcede.org/ .
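    To show the flavour of an XML metadata hierarchy of the kind XCEDE defines, the sketch below builds and reads back a tiny project/subject/acquisition tree with the Python standard library; the element and attribute names are illustrative only and are not the actual XCEDE 2.0 element set.

```python
import xml.etree.ElementTree as ET

# Build a small, illustrative metadata hierarchy (element names are not XCEDE's own).
project = ET.Element("project", {"id": "demo-study"})
subject = ET.SubElement(project, "subject", {"id": "sub-001"})
visit = ET.SubElement(subject, "visit", {"id": "baseline"})
ET.SubElement(visit, "acquisition", {"modality": "MRI", "series": "T1w"})
ET.SubElement(visit, "provenance", {"tool": "converter", "version": "1.0"})

xml_text = ET.tostring(project, encoding="unicode")
print(xml_text)

# Reading the hierarchy back and walking it.
root = ET.fromstring(xml_text)
for acq in root.iter("acquisition"):
    print(acq.get("modality"), acq.get("series"))
```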

  11. Subject search study. Final report

    International Nuclear Information System (INIS)

    Todeschini, C.

    1995-01-01

    The study gathered information on how users search the database of the International Nuclear Information System (INIS), using indicators such as Subject categories, Controlled terms, Subject headings, Free-text words, and combinations of the above. Users from the Australian, French, Russian and Spanish INIS Centres, which have different national languages, participated. Participants, both intermediaries and end users, replied to a questionnaire and executed search queries. The INIS Secretariat at the IAEA also participated. A protocol of all search strategies used in actual searches in the database was kept. The thought process of Russian and Spanish users is predominantly non-English, and their initial search formulations are also predominantly non-English, while French users tend to formulate searches more often in English. A total of 1002 searches were executed by the five INIS centres including the IAEA. The search protocols indicate the following search behaviour: 1) free text words represent about 40% of search points on an average query; 2) descriptors used as search keys have the widest range as percentage of search points, from a low of 25% to a high of 48%; 3) search keys consisting of free text that coincides with a descriptor account for about 15% of search points; 4) Subject Categories are not used in many searches; 5) free text words are present as search points in about 80% of all searches; 6) controlled terms (descriptors) are used very extensively and appear in about 90% of all searches; 7) Subject Headings were used in only a few percent of searches. From the results of the study one can conclude that there is a greater reluctance on the part of non-native English speakers to initiate their searches using free-text word searches. Also: Subject Categories are little used in searching the database; both free text terms and controlled terms are the predominant types of search keys used, whereby the controlled terms are used more

  12. Flat Files - JSNP | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available. Data file name: jsnp_flat_files; file URL: ftp://ftp.biosciencedbc.jp/archiv...

  13. Reference - PLACE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available. Data file name: place_reference.zip; file URL: ftp://ftp.biosciencedbc.jp/archive/place/LATEST/...

  14. License - AT Atlas | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available. License record for the AT Atlas database; the contents might be changed without notice.

  15. Protein - AT Atlas | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available. Data file name: ..._protein.zip; file URL: ftp://ftp.biosciencedbc.jp/archive/at_atlas/LATEST/at_atla...

  16. Mapping data - KOME | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available. International Rice Genome Sequencing Project (IRGSP). Data file name: kome_mapping_data.zip; file URL: ftp://ftp.biosciencedbc.jp/archiv...

  17. Download - RPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available. Download page record for the RPD database in the LSDB Archive.

  18. License - RED | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available. License record for the RED database; the contents might be changed without notice.

  19. License - TP Atlas | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available. License record for the TP Atlas database; the contents might be changed without notice.

  20. Spot table - RPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available. Data file name: rpd_spot.zip; file URL: ftp://ftp.biosciencedbc.jp/archive/rpd/LATEST/rpd_spot.zip (multiple entries; cDNA spot table).

  1. Exon - ASTRA | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available. Contents: exons in variants. Data file name: astra_exon.zip; file URL: ftp://ftp.biosciencedbc.jp/archive/a...

  2. Download - JSNP | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available. Download page record for the JSNP database in the LSDB Archive.

  3. ORF information - KOME | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available. Data file URL: ftp://ftp.biosciencedbc.jp/archive/kome/LATEST/kome_orf_infomation.zip; file size: 526 KB.

  4. Download - Plabrain DB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available. Download page record for Plabrain DB in the LSDB Archive.

  5. ElasticSearch server

    CERN Document Server

    Rogozinski, Marek

    2014-01-01

    This book is a detailed, practical, hands-on guide packed with real-life scenarios and examples which will show you how to implement an ElasticSearch search engine on your own websites. If you are a web developer or a user who wants to learn more about ElasticSearch, then this is the book for you. You do not need to know anything about ElasticSearch, Java, or Apache Lucene in order to use this book, though basic knowledge about databases and queries is required.
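
    For readers who want a feel for what such queries look like, the following is a minimal sketch of a full-text search issued against ElasticSearch's REST API from Python. The index name ("articles"), the field name ("abstract"), and the localhost address are assumptions made for the example; the _search endpoint and the match query are standard ElasticSearch features.

    # Hedged sketch: a basic full-text query against a local ElasticSearch node.
    import requests

    def search_articles(text: str, host: str = "http://localhost:9200") -> list:
        # The "articles" index and "abstract" field are hypothetical names.
        body = {"query": {"match": {"abstract": text}}, "size": 10}
        resp = requests.post(f"{host}/articles/_search", json=body, timeout=10)
        resp.raise_for_status()
        hits = resp.json()["hits"]["hits"]
        return [(hit["_score"], hit["_source"].get("title", "")) for hit in hits]

    if __name__ == "__main__":
        for score, title in search_articles("biomedical database"):
            print(f"{score:6.2f}  {title}")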

  6. EST data - RED | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available. Data file name: red_est.zip; file URL: ftp://ftp.biosciencedbc.jp/archive/red/LATEST/red_est.zip; file size: 629 KB.

  7. Geologic Field Database

    Directory of Open Access Journals (Sweden)

    Katarina Hribernik

    2002-12-01

    Full Text Available The purpose of the paper is to present the field data relational database, which was compiled from data gathered during thirty years of fieldwork on the Basic Geologic Map of Slovenia at a scale of 1:100,000. The database was created using MS Access software. The MS Access environment ensures its stability and effective operation while the data are changed, searched, and updated. It also enables faster, easier, and more user-friendly access to the field data. Last but not least, in the long term, once the data are transferred into the GIS environment, it will provide the basis for a sound geologic information system that will satisfy a broad spectrum of geologists’ needs.

  8. Database on wind characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, K.S. [The Technical Univ. of Denmark (Denmark); Courtney, M.S. [Risoe National Lab., (Denmark)

    1999-08-01

    The organisations that participated in the project comprise five research organisations - MIUU (Sweden), ECN (The Netherlands), CRES (Greece), DTU (Denmark) and Risoe (Denmark) - and one wind turbine manufacturer, Vestas Wind System A/S (Denmark). The overall goal was to build a database consisting of a large number of wind speed time series and to create tools for efficiently searching through the data to select cases of interest. The project resulted in a database located at DTU, Denmark, with online access through the Internet. The database contains more than 50,000 hours of wind speed measurements. A wide range of wind climates and terrain types is represented, with significant amounts of time series for each. Data have been chosen selectively, with a deliberate over-representation of high-wind and complex-terrain cases. This makes the database ideal for wind turbine design needs but completely unsuitable for resource studies. Diversity has also been an important aim, and this is realised with data from a large range of terrain types: everything from offshore to mountain, from Norway to Greece. (EHS)

  9. The biomedical discourse relation bank

    Directory of Open Access Journals (Sweden)

    Joshi Aravind

    2011-05-01

    Full Text Available Abstract Background Identification of discourse relations, such as causal and contrastive relations, between situations mentioned in text is an important task for biomedical text-mining. A biomedical text corpus annotated with discourse relations would be very useful for developing and evaluating methods for biomedical discourse processing. However, little effort has been made to develop such an annotated resource. Results We have developed the Biomedical Discourse Relation Bank (BioDRB), in which we have annotated explicit and implicit discourse relations in 24 open-access full-text biomedical articles from the GENIA corpus. Guidelines for the annotation were adapted from the Penn Discourse TreeBank (PDTB), which has discourse relations annotated over open-domain news articles. We introduced new conventions and modifications to the sense classification. We report reliable inter-annotator agreement of over 80% for all sub-tasks. Experiments for identifying the sense of explicit discourse connectives show the connective itself as a highly reliable indicator for coarse sense classification (accuracy 90.9% and F1 score 0.89). These results are comparable to results obtained with the same classifier on the PDTB data. With more refined sense classification, there is degradation in performance (accuracy 69.2% and F1 score 0.28), mainly due to sparsity in the data. The size of the corpus was found to be sufficient for identifying the sense of explicit connectives, with classifier performance stabilizing at about 1900 training instances. Finally, the classifier performs poorly when trained on PDTB and tested on BioDRB (accuracy 54.5% and F1 score 0.57). Conclusion Our work shows that discourse relations can be reliably annotated in biomedical text. Coarse sense disambiguation of explicit connectives can be done with high reliability by using just the connective as a feature, but more refined sense classification requires either richer features or more
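
    To illustrate the reported finding that the connective alone is a strong feature for coarse sense classification, here is a hedged Python sketch of a most-frequent-sense baseline keyed on the connective. The toy training pairs and sense labels are invented for illustration and are not drawn from BioDRB or the PDTB.

    # Hedged sketch: a most-frequent-sense baseline that uses only the explicit
    # connective as a feature for coarse discourse-sense classification.
    from collections import Counter, defaultdict

    def train(examples):
        """examples: iterable of (connective, coarse_sense) pairs."""
        by_connective = defaultdict(Counter)
        for connective, sense in examples:
            by_connective[connective.lower()][sense] += 1
        # For each connective, keep its most frequent sense.
        return {c: senses.most_common(1)[0][0] for c, senses in by_connective.items()}

    def predict(model, connective, default="Expansion"):
        return model.get(connective.lower(), default)

    if __name__ == "__main__":
        toy_data = [("because", "Contingency"), ("because", "Contingency"),
                    ("but", "Comparison"), ("however", "Comparison"),
                    ("then", "Temporal")]
        model = train(toy_data)
        print(predict(model, "because"))   # -> Contingency
        print(predict(model, "However"))   # -> Comparison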

  10. Nuclear database management systems

    International Nuclear Information System (INIS)

    Stone, C.; Sutton, R.

    1996-01-01

    The authors are developing software tools for accessing and visualizing nuclear data. MacNuclide was the first software application produced by their group. This application incorporates novel database management and visualization tools into an intuitive interface. The nuclide chart is used to access properties and to display results of searches. Selecting a nuclide in the chart displays a level scheme with tables of basic, radioactive decay, and other properties. All level schemes are interactive, allowing the user to modify the display, move between nuclides, and display entire daughter decay chains

  11. [Master course in biomedical engineering].

    Science.gov (United States)

    Jobbágy, Akos; Benyó, Zoltán; Monos, Emil

    2009-11-22

    The Bologna Declaration aims at harmonizing the European higher education structure. In accordance with the Declaration, biomedical engineering will be offered as a master (MSc) course in Hungary as well, from 2009. Since 1995 a biomedical engineering course has been held in cooperation between three universities: Semmelweis University, Budapest Veterinary University, and Budapest University of Technology and Economics. One of the latter's faculties, the Faculty of Electrical Engineering and Informatics, has been responsible for the course. Students could start their biomedical engineering studies - usually in parallel with their first degree course - after they had collected at least 180 ECTS credits. Consequently, the biomedical engineering course could have been considered a master course even before the Bologna Declaration. Students had to collect 130 ECTS credits during the six-semester course. This is equivalent to four semesters of full-time study, because during the first three semesters the curriculum required only one third of the usual ECTS credits. The paper gives a survey of the new biomedical engineering master course, briefly summing up the subjects in the curriculum as well.

  12. Update History of This Database - RMG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available. Update history of the RMG database: 2016/08/22 the contact address is changed; ... the URL of the portal site is changed; 2013/08/07 the RMG archive site is opened; 2002/09/25 RMG ( http://rmg.rice.dna.affrc.go.jp/ ) is opened.

  13. Update History of This Database - DGBY | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available. Update history of the DGBY database: 2014/10/20 the URL of the portal site is changed; ... the expression of attribution in the License is updated; 2012/03/08 the DGBY English archive site is opened; 2006/10/02 DGBY ( ...aro.affrc.go.jp/yakudachi/yeast/index.html ) is opened.

  14. Update History of This Database - Q-TARO | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available. Update history of the Q-TARO database: 2014/10/20 the URL of the portal site is changed; 2013/12/17 the URL of the portal site is changed; 2013/12/13 the Q-TARO English archive site is opened; 2009/11/15 Q-TARO ( http://qtaro.abr.affrc.go.jp/ ) is opened.

  15. Update History of This Database - TogoTV | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available. Update history of the TogoTV database: 2017/05/12 the TogoTV English archive site is opened; 2007/07/20 TogoTV ( http://togotv.dbcls.jp/ ) is opened.

  16. Update History of This Database - ConfC | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available. Update history of the ConfC database: 2016/09/20 the ConfC English archive site is opened; 2005/05/01 ConfC ( http://mbs.cbrc.jp/ConfC/ ) is opened.

  17. Update History of This Database - TP Atlas | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available. Update history of the TP Atlas database: 2013/12/16 the email address in the contact information is corrected; 2013/11/19 the TP Atlas English archive site is opened; 2008/4/1 TP Atlas ( http://www.tanpaku.org/tpatlas/ ) is opened.

  18. Update History of This Database - fRNAdb | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available. Update history of the fRNAdb database: 2016/03/29 the fRNAdb English archive site is opened; 2006/12 fRNAdb ( http://www.ncrna.org/ ) is opened.

  19. Update History of This Database - AcEST | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available. Update history of the AcEST database: 2013/01/10 errors found in the AcEST Contig data have been corrected (for details, see the Data correction page); 2010/03/29 the AcEST English archive site is opened.

  20. Innovations in Biomedical Engineering 2016

    CERN Document Server

    Tkacz, Ewaryst; Paszenda, Zbigniew; Piętka, Ewa

    2017-01-01

    This book presents the proceedings of the “Innovations in Biomedical Engineering IBE’2016” Conference, held on October 16–18, 2016 in Poland, discussing recent research on innovations in biomedical engineering. The past decade has seen the dynamic development of ever more sophisticated technologies, including biotechnologies, and more general technologies applied in the area of life sciences. As such, the book covers the broadest possible spectrum of subjects related to biomedical engineering innovations. Divided into four parts, it presents state-of-the-art achievements in: • engineering of biomaterials, • modelling and simulations in biomechanics, • informatics in medicine, • signal analysis. The book helps bridge the gap between technological and methodological engineering achievements on the one hand and clinical requirements in the three major areas of diagnosis, therapy and rehabilitation on the other.