Yin, Xu-Cheng; Yang, Chun; Pei, Wei-Yi; Man, Haixia; Zhang, Jun; Learned-Miller, Erik; Yu, Hong
Hundreds of millions of figures are available in biomedical literature, representing important biomedical experimental evidence. Since text is a rich source of information in figures, automatically extracting such text may assist in the task of mining figure information. A high-quality ground truth standard can greatly facilitate the development of an automated system. This article describes DeTEXT: A database for evaluating text extraction from biomedical literature figures. It is the first publicly available, human-annotated, high quality, and large-scale figure-text dataset with 288 full-text articles, 500 biomedical figures, and 9308 text regions. This article describes how figures were selected from open-access full-text biomedical articles and how annotation guidelines and annotation tools were developed. We also discuss the inter-annotator agreement and the reliability of the annotations. We summarize the statistics of the DeTEXT data and make available evaluation protocols for DeTEXT. Finally we lay out challenges we observed in the automated detection and recognition of figure text and discuss research directions in this area. DeTEXT is publicly available for downloading at http://prir.ustb.edu.cn/DeTEXT/.
Vishnyakova, Dina; Pasche, Emilie; Gobeill, Julien; Gaudinat, Arnaud; Lovis, Christian; Ruch, Patrick
We present a new approach to biomedical document classification and prioritization for the Comparative Toxicogenomics Database (CTD). This approach is motivated by needs such as literature curation, in particular in the human health and environment domain. The unique integration of chemical, gene/protein, and disease data in the biomedical literature may advance the identification of exposure and disease biomarkers, mechanisms of chemical action, and the complex aetiologies of chronic diseases. Our approach aims to assist biomedical researchers searching for articles relevant to CTD. The task is functionally defined as a binary classification task in which selected articles must also be ranked by order of relevance. We designed an SVM classifier that combines the following feature sets: an information retrieval system (EAGLi), a biomedical named-entity recognizer (MeSH term extraction), a gene normalization (GN) service (NormaGene), and an ad hoc keyword recognizer for diseases and chemicals. The gene identification module was evaluated on BioCreative III test data. Disease normalization is achieved with 95% precision and 93% recall. The classification was evaluated on the corpus provided by the BioCreative organizers in 2012. The approach showed promising performance on the test data.
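The classify-then-rank pattern described in this abstract can be sketched as a toy linear scorer over combined feature sets. This is a minimal illustration, not the authors' system: the feature names, weights, bias, and PMIDs below are all invented for the sketch.

```python
# Sketch of "binary classification, then rank selected articles by relevance":
# a linear decision score (as an SVM would produce) over combined features.
# All feature names, weights, and document IDs here are hypothetical.

def score(doc_features, weights, bias=0.0):
    """Linear decision score: weighted sum of feature values plus bias."""
    return sum(weights.get(f, 0.0) * v for f, v in doc_features.items()) + bias

# Hypothetical weights standing in for a trained model.
weights = {"eagli_ir_score": 1.2, "mesh_chemical": 0.8,
           "normagene_gene_hit": 0.9, "disease_keyword": 0.7}

docs = {
    "pmid:111": {"eagli_ir_score": 0.9, "mesh_chemical": 1, "disease_keyword": 1},
    "pmid:222": {"eagli_ir_score": 0.1},
    "pmid:333": {"eagli_ir_score": 0.5, "normagene_gene_hit": 1},
}

# Binary decision: positive score means "relevant to CTD curation";
# positives are then ranked by score for the prioritization step.
scores = {d: score(f, weights, bias=-0.5) for d, f in docs.items()}
relevant = {d: s for d, s in scores.items() if s > 0}
ranking = sorted(relevant, key=relevant.get, reverse=True)
print(ranking)
```

The same two-stage shape (threshold for inclusion, score for ordering) applies whatever classifier produces the decision values.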
Singhal, Ayush; Simmons, Michael; Lu, Zhiyong
The practice of precision medicine will ultimately require databases of genes and mutations for healthcare providers to reference in order to understand the clinical implications of each patient's genetic makeup. Although the highest quality databases require manual curation, text mining tools can facilitate the curation process, increasing accuracy, coverage, and productivity. However, to date no available text mining tools offer high-accuracy extraction of disease-gene-variant triplets from the biomedical literature. In this paper we propose a high-performance machine learning approach to automate the extraction of disease-gene-variant triplets from biomedical literature. Our approach is unique because we identify the genes and protein products associated with each mutation not just from the local text content, but from a global context as well (from the Internet and from all literature in PubMed). Our approach also incorporates protein sequence validation and disease association using a novel text-mining-based machine learning approach. We extract disease-gene-variant triplets from all abstracts in PubMed related to a set of ten important diseases (breast cancer, prostate cancer, pancreatic cancer, lung cancer, acute myeloid leukemia, Alzheimer's disease, hemochromatosis, age-related macular degeneration (AMD), diabetes mellitus, and cystic fibrosis). We then evaluate our approach in two ways: (1) a direct comparison with the state of the art using benchmark datasets; (2) a validation study comparing the results of our approach with entries in a popular human-curated database (UniProt) for each of the previously mentioned diseases. In the benchmark comparison, our full approach achieves a 28% improvement in F1-measure (from 0.62 to 0.79) over the state-of-the-art results. For the validation study with the UniProt Knowledgebase (KB), we present a thorough analysis of the results and errors. Across all diseases, our approach returned 272 triplets (disease
Ravikumar, Komandur Elayavilli; Wagholikar, Kavishwar B; Li, Dingcheng; Kocher, Jean-Pierre; Liu, Hongfang
Advances in next-generation sequencing technology have accelerated the pace of individualized medicine (IM), which aims to incorporate genetic/genomic information into medicine. One immediate need in interpreting sequencing data is the assembly of information about genetic variants and their corresponding associations with other entities (e.g., diseases or medications). Even with dedicated effort to capture such information in biological databases, much of it remains 'locked' in the unstructured text of biomedical publications. There is a substantial lag between publication and the subsequent abstraction of such information into databases. Multiple text mining systems have been developed, but most of them focus on sentence-level association extraction, with performance evaluation based on gold-standard text annotations specifically prepared for text mining systems. We developed and evaluated a text mining system, MutD, which extracts protein mutation-disease associations from MEDLINE abstracts by incorporating discourse-level analysis, using a benchmark data set extracted from curated database records. MutD achieves an F-measure of 64.3% for reconstructing protein mutation-disease associations in curated database records. The discourse-level analysis component of MutD contributed a gain of more than 10% in F-measure compared against sentence-level association extraction. Our error analysis indicates that 23 of the 64 precision errors are true associations that were not captured by database curators, and 68 of the 113 recall errors are caused by the absence of associated disease entities in the abstract. After adjusting for the defects in the curated database, the revised F-measure of MutD in association detection reaches 81.5%. Our quantitative analysis reveals that MutD can effectively extract protein mutation-disease associations when benchmarked against curated database records. The analysis also demonstrates that incorporating
de Silva, N H Nisansa D
In various biomedical applications that collect, handle, and manipulate data, the amounts of data tend to build up and venture into the range identified as big data. In such cases, a design decision has to be made as to what type of database will handle this data. More often than not, the default and classical solution in the biomedical domain, according to past research, is the relational database. While this was the norm for a long while, there is an evident trend away from relational databases in favor of other types and paradigms of databases. Nevertheless, it remains of paramount importance to understand the interrelation between biomedical big data and relational databases. This chapter reviews the pros and cons of using relational databases to store biomedical big data, as discussed and applied in previous research.
Khare, Ritu; Leaman, Robert; Lu, Zhiyong
Biomedical and life sciences literature is unique because of its exponentially increasing volume and interdisciplinary nature. Biomedical literature access is essential for several types of users, including biomedical researchers, clinicians, database curators, and bibliometricians. In the past few decades, several online search tools and literature archives, generic as well as biomedicine-specific, have been developed. We present this chapter in the light of three consecutive steps of literature access: searching for citations, retrieving full text, and viewing the article. The first section presents the current state of practice of biomedical literature access, including an analysis of the most frequently used search tools (PubMed, Google Scholar, Web of Science, Scopus, and Embase) and a study of biomedical literature archives such as PubMed Central. The next section describes current research and state-of-the-art systems motivated by the challenges a user faces during query formulation and interpretation of search results. The research solutions are classified into five key areas related to text and data mining: text similarity search, semantic search, query support, relevance ranking, and clustering of results. Finally, the last section describes some predicted future trends for improving biomedical literature access, such as searching and reading articles on portable devices and adoption of open access policies.
Background: Research into event-based text mining from the biomedical literature has been growing in popularity, to facilitate the development of advanced biomedical text mining systems. Such technology permits advanced search that goes beyond document- or sentence-based retrieval. However, existing event-based systems typically ignore additional information within the textual context of events that can determine, amongst other things, whether an event represents a fact, hypothesis, experimental result or analysis of results, whether it describes new or previously reported knowledge, and whether it is speculated or negated. We refer to such contextual information as meta-knowledge. The automatic recognition of such information can permit the training of systems allowing finer-grained searching of events according to the meta-knowledge associated with them. Results: Based on a corpus of 1,000 MEDLINE abstracts, fully manually annotated with both events and associated meta-knowledge, we have constructed a machine learning-based system that automatically assigns meta-knowledge information to events. This system has been integrated into EventMine, a state-of-the-art event extraction system, to create a more advanced system (EventMine-MK) that not only extracts events from text automatically but also assigns five different types of meta-knowledge to these events. The meta-knowledge assignment module of EventMine-MK performs with macro-averaged F-scores in the range of 57-87% on the BioNLP’09 Shared Task corpus. EventMine-MK has been evaluated on the BioNLP’09 Shared Task subtask of detecting negated and speculated events. Our results show that EventMine-MK can outperform other state-of-the-art systems that participated in this task. Conclusions: We have constructed the first practical system that extracts both events and associated, detailed meta-knowledge information from biomedical literature. The automatically assigned
Kafkas, Şenay; Kim, Jee-Hyub; McEntyre, Johanna R
Molecular biology and literature databases represent essential infrastructure for life science research. Effective integration of these data resources requires that there are structured cross-references at the level of individual articles and biological records. Here, we describe the current patterns of how database entries are cited in research articles, based on analysis of the full text Open Access articles available from Europe PMC. Focusing on citation of entries in the European Nucleotide Archive (ENA), UniProt and Protein Data Bank, Europe (PDBe), we demonstrate that text mining doubles the number of structured annotations of database record citations supplied in journal articles by publishers. Many thousands of new literature-database relationships are found by text mining, since these relationships are also not present in the set of articles cited by database records. We recommend that structured annotation of database records in articles is extended to other databases, such as ArrayExpress and Pfam, entries from which are also cited widely in the literature. The very high precision and high-throughput of this text-mining pipeline makes this activity possible both accurately and at low cost, which will allow the development of new integrated data services.
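The text-mining step this abstract describes rests on recognizing database accession numbers in article text. A minimal sketch follows, using deliberately simplified regular expressions; real pipelines (such as Europe PMC's) use curated, validated patterns and contextual disambiguation, and the patterns and example sentence here are illustrative only.

```python
import re

# Illustrative accession-number patterns (simplified for the sketch).
PATTERNS = {
    # UniProt accession format, per UniProt's published regular expression.
    "UniProt": r"\b[OPQ][0-9][A-Z0-9]{3}[0-9]\b|\b[A-NR-Z][0-9](?:[A-Z][A-Z0-9]{2}[0-9]){1,2}\b",
    # PDB entries: one digit followed by three alphanumerics.
    "PDBe": r"\b[0-9][A-Za-z][A-Za-z0-9]{2}\b",
    # ENA-style accession (two-letter prefix form only; real ENA formats vary
    # and can overlap with other databases' formats).
    "ENA": r"\b[A-Z]{2}[0-9]{6}\b",
}

def find_accessions(text):
    """Return {database: [matched accessions]} for each pattern that fires."""
    hits = {}
    for db, pat in PATTERNS.items():
        found = re.findall(pat, text)
        if found:
            hits[db] = found
    return hits

sentence = "The structure 1TUP and UniProt entry P04637 relate to ENA record AB123456."
print(find_accessions(sentence))
```

In practice such matches must still be linked to the correct database record, since accession formats overlap across archives; that is why contextual cues ("UniProt entry", "ENA record") matter.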
Ronald N. Kostoff
Purpose: To address the under-reporting of research results, with emphasis on the under-reporting and distorted reporting of adverse events in the biomedical research literature. Design/methodology/approach: A four-step approach is used: (1) identify the characteristics of literature that make it adequate to support policy; (2) show how each of these characteristics becomes degraded, producing inadequate literature; (3) identify incentives to prevent inadequate literature; and (4) show the policy implications of inadequate literature. Findings: This review provides reasons for, and examples of, adverse health effects of myriad substances (1) being under-reported in the premiere biomedical literature, or (2) entering this literature in distorted form. Since there is no way to gauge the extent of this under-reporting or distorted reporting, the quality and credibility of the ‘premiere’ biomedical literature is unknown. Therefore, any meta-analysis or scientometric analysis of this literature will have unknown quality and credibility. The most sophisticated scientometric analysis cannot compensate for a highly flawed database. Research limitations: The main limitation lies in identifying examples of under-reporting. There are many incentives for under-reporting and few disincentives. Practical implications: Almost all research publications addressing causes of disease, treatments for disease, diagnoses of disease, scientometrics of disease and health issues, and other aspects of healthcare build upon previously published healthcare-related research. Many researchers will not have laboratories or other capabilities to replicate or validate the published research, and depend almost completely on the integrity of this literature. If the literature is distorted, then future research can be misguided, and health policy recommendations can be ineffective or worse. Originality/value: This review has examined a much wider range of technical and nontechnical
Kraus, Milena; Niedermeier, Julian; Jankrift, Marcel; Tietböhl, Sören; Stachewicz, Toni; Folkerts, Hendrik; Uflacker, Matthias; Neves, Mariana
Researchers usually query the large biomedical literature in PubMed via keywords, logical operators, and filters, none of which is very intuitive. Question answering systems are an alternative to keyword searches: they accept questions in natural language as input, and results reflect the given type of question, such as short answers and summaries. Few such systems are available online, and they suffer from long response times and support a limited number of question and result types. Additionally, user interfaces are usually restricted to only displaying the retrieved information. For our Olelo web application, we combined biomedical literature and terminologies in a fast in-memory database to enable real-time responses to researchers' queries. Further, we extended the built-in natural language processing features of the database with question answering and summarization procedures. Combined with a new exploratory approach to document filtering and a clean user interface, Olelo enables a fast and intelligent search through the ever-growing biomedical literature. Olelo is available at http://www.hpi.de/plattner/olelo. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
The rapidly increasing number of articles published in the biomedical field is making it difficult for researchers to exploit this wealth of information efficiently. As a way of overcoming these limitations and enabling a more efficient use of the literature, we propose an approach for structuring the results of a literature search based on the latent semantic information extracted from a corpus. Moreover, we show how the results of the Latent Semantic Analysis method can be adapted so as to evidence differences between the results of different searches. We also propose different visualization techniques that can be applied to explore these results. Used in combination, these techniques could empower users with tools for literature-guided knowledge exploration and discovery.
Background: Cell lines and cell types are extensively studied in biomedical research, yielding a significant number of publications each year. Identifying cell lines and cell types precisely in publications is crucial for the reproducibility of science and for knowledge integration. There are efforts to standardise cell nomenclature through ontology development, supporting FAIR principles for cell knowledge. However, it is important to analyse the usage of cell nomenclature in publications at a large scale to understand the level of uptake of this nomenclature by scientists. In this study, we analyse the usage of cell nomenclature, both in vivo and in vitro, in the biomedical literature using text mining methods, and present our results. Results: We identified 59% of the cell type classes in the Cell Ontology and 13% of the cell line classes in the Cell Line Ontology in the literature. Our analysis showed that cell line nomenclature is much more ambiguous than cell type nomenclature. However, trends indicate that standardised nomenclature for cell lines and cell types is increasingly being used in publications by scientists. Conclusions: Our findings provide insight into how experimental cells are described in publications, may support improved standardisation of cell type and cell line nomenclature, and can be utilised to develop efficient text mining applications on cell types and cell lines. All data generated in this study is available at https://github.com/shenay/CellNomenclatureStudy.
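The nomenclature-ambiguity measurement this study describes can be illustrated with a toy dictionary lookup that counts how many ontology classes share a surface name. The names and identifiers below are invented for the sketch (real data would come from the Cell Ontology and Cell Line Ontology).

```python
from collections import defaultdict

# Hypothetical (class ID, label/synonym) pairs standing in for ontology data.
synonyms = [
    ("CL:0000236", "B cell"),    # a cell type class
    ("CLO:0001601", "HeLa"),     # a cell line class
    ("CLO:0009999", "B cell"),   # invented clash with the cell-type name
]

name_to_ids = defaultdict(set)
for onto_id, name in synonyms:
    name_to_ids[name.lower()].add(onto_id)

# A name is ambiguous when it maps to more than one ontology class;
# counting such names over a whole ontology gives an ambiguity measure.
ambiguous = {n for n, ids in name_to_ids.items() if len(ids) > 1}
print(ambiguous)
```

Running the same tally over real cell-line versus cell-type labels is one way such a study could quantify the claim that cell line nomenclature is the more ambiguous of the two.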
Yan, Erjia; Zhu, Yongjun
Up to this point, research on written scholarly communication has focused primarily on syntactic, rather than semantic, analyses. Consequently, we have yet to understand semantic change as it applies to disciplinary discourse. The objective of this study is to illustrate word semantic change in biomedical literature. To that end, we identify a set of representative words in biomedical literature based on word frequency and word-topic probability distributions. A word2vec language model is then applied to the identified words in order to measure word- and topic-level semantic changes. We find that for the selected words in PubMed, overall, meanings are becoming more stable in the 2000s than they were in the 1980s and 1990s. At the topic level, the global distance of most topics (19 out of 20 tested) is declining, suggesting that the words used to discuss these topics are stabilizing semantically. Similarly, the local distance of most topics (19 out of 20) is also declining, showing that the meanings of words from these topics are becoming more consistent with those of their semantic neighbors. At the word level, this paper identifies two different trends in word semantics, as measured by the aforementioned distance metrics: on the one hand, words can form clusters with their semantic neighbors, and these words, as a cluster, coevolve semantically; on the other hand, words can drift apart from their semantic neighbors while nonetheless stabilizing in the global context. In relating our work to language laws on semantic change, we find no overwhelming evidence to support either the law of parallel change or the law of conformity. Copyright © 2017 Elsevier B.V. All rights reserved.
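The distance-based notion of semantic change used above can be illustrated with a toy cosine-distance computation between a word's embedding vectors from two time slices. The vectors below are made up for illustration; the study trained word2vec models per period, and aligning such models into a common space is assumed here.

```python
from math import sqrt

def cosine_distance(u, v):
    """1 - cosine similarity; 0 means identical direction, i.e. no drift."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

# Hypothetical embeddings of the same word from models trained on
# 1980s and 2000s PubMed abstracts (assumed aligned to a common space).
vec_1980s = [0.9, 0.1, 0.3]
vec_2000s = [0.8, 0.2, 0.35]

drift = cosine_distance(vec_1980s, vec_2000s)
print(f"semantic drift: {drift:.3f}")  # small value suggests a stable meaning
```

Averaging such distances over the words of a topic gives a topic-level change score of the kind the study tracks across decades.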
Santana da Silva, Filipe; Jansen, Ludger; Freitas, Fred; Schulz, Stefan
Biological databases store data about laboratory experiments, together with semantic annotations, in order to support data aggregation and retrieval. The exact meaning of such annotations in the context of a database record is often ambiguous. We address this problem by grounding implicit and explicit database content in a formal-ontological framework. By using a typical extract from the databases UniProt and Ensembl, annotated with content from GO, PR, ChEBI and NCBI Taxonomy, we created four ontological models (in OWL), which generate explicit, distinct interpretations under the BioTopLite2 (BTL2) upper-level ontology. The first three models interpret database entries as individuals (IND), defined classes (SUBC), and classes with dispositions (DISP), respectively; the fourth model (HYBR) is a combination of SUBC and DISP. For the evaluation of these four models, we consider (i) database content retrieval, using ontologies as query vocabulary; (ii) information completeness; and, (iii) DL complexity and decidability. The models were tested under these criteria against four competency questions (CQs). IND does not raise any ontological claim, besides asserting the existence of sample individuals and relations among them. Modelling patterns have to be created for each type of annotation referent. SUBC is interpreted regarding maximally fine-grained defined subclasses under the classes referred to by the data. DISP attempts to extract truly ontological statements from the database records, claiming the existence of dispositions. HYBR is a hybrid of SUBC and DISP and is more parsimonious regarding expressiveness and query answering complexity. For each of the four models, the four CQs were submitted as DL queries. This shows the ability to retrieve individuals with IND, and classes in SUBC and HYBR. DISP does not retrieve anything because the axioms with disposition are embedded in General Class Inclusion (GCI) statements. Ambiguity of biological database content is
Duchrow, Timo; Shtatland, Timur; Guettler, Daniel; Pivovarov, Misha; Kramer, Stefan; Weissleder, Ralph
The breadth of biological databases and their information content continues to increase exponentially. Unfortunately, our ability to query such sources is still often suboptimal. Here, we introduce and apply community voting, database-driven text classification, and visual aids as a means to incorporate distributed expert knowledge, to automatically classify database entries and to efficiently retrieve them. Using a previously developed peptide database as an example, we compared several machine learning algorithms in their ability to classify abstracts of published literature results into categories relevant to peptide research, such as related or not related to cancer, angiogenesis, molecular imaging, etc. Ensembles of bagged decision trees met the requirements of our application best. No other algorithm consistently performed better in comparative testing. Moreover, we show that the algorithm produces meaningful class probability estimates, which can be used to visualize the confidence of automatic classification during the retrieval process. To allow viewing long lists of search results enriched by automatic classifications, we added a dynamic heat map to the web interface. We take advantage of community knowledge by enabling users to cast votes in Web 2.0 style in order to correct automated classification errors, which triggers reclassification of all entries. We used a novel framework in which the database "drives" the entire vote aggregation and reclassification process to increase speed while conserving computational resources and keeping the method scalable. In our experiments, we simulate community voting by adding various levels of noise to nearly perfectly labelled instances, and show that, under such conditions, classification can be improved significantly. Using PepBank as a model database, we show how to build a classification-aided retrieval system that gathers training data from the community, is completely controlled by the database, scales well
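The class-probability estimates and community-vote correction described above can be sketched as follows. This is a toy stand-in, not the PepBank implementation: the ensemble is reduced to a list of member votes, and the vote-aggregation rule is an assumption of the sketch.

```python
# Toy sketch: class probability from an ensemble as the fraction of members
# voting "positive", with community votes overriding the automatic label.
# The vote values below are invented for illustration.

def ensemble_probability(member_votes):
    """Fraction of ensemble members predicting the positive class."""
    return sum(member_votes) / len(member_votes)

def final_label(member_votes, community_votes_pos=0, community_votes_neg=0):
    """Community votes, when present, outweigh the automatic classifier."""
    if community_votes_pos + community_votes_neg > 0:
        return community_votes_pos >= community_votes_neg
    return ensemble_probability(member_votes) >= 0.5

votes = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # e.g. 10 bagged trees, 7 say "cancer-related"
print(ensemble_probability(votes))         # confidence, shown as heat-map intensity
print(final_label(votes))                  # automatic classification
print(final_label(votes, community_votes_neg=2))  # community correction flips it
```

The probability here is exactly the kind of confidence estimate that can drive a heat-map display, and the override path models a user vote triggering reclassification.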
Sarić, Jasmin; Engelken, Henriette; Reyle, Uwe
Biomedical knowledge is to a very large extent represented only in textual form. To make this knowledge accessible to humans and/or to further automatic processing, text mining applications have been developed. At the end of this chapter we present an overview of the most important open access applications and their functionality. The main part of the chapter is devoted to the major problems with which all such applications have to deal. The first problem is terminology processing, i.e., recognizing biomedical terms and identifying their meanings, at least to a certain degree. The second problem is bringing together information units that are distributed over more than one sentence; the task of coreference resolution consists of identifying the entities to which the text refers in different sentences and in different ways. The third problem we discuss is information extraction, in particular the extraction of relational information. The representation of domain knowledge is an indispensable component of any text mining application. We discuss different types and depths of ontological modeling and how this knowledge helps to accomplish the tasks described above. An overview of ontological resources is given at the end of the chapter.
Kumar, Prince; Goel, Roshni; Jain, Chandni; Kumar, Ashish; Parashar, Abhishek; Gond, Ajay Ratan
Complete access to the existing pool of biomedical literature and the ability to "hit" upon the exact information of the relevant specialty are becoming essential elements of academic and clinical expertise. With the rapid expansion of the literature database, it is almost impossible to keep up to date with every innovation. Using the Internet, however, most people can freely access this literature at any time, from almost anywhere. This paper highlights the use of the Internet in obtaining valuable biomedical research information, which is mostly available from journals, databases, textbooks and e-journals in the form of web pages, text materials, images, and so on. The authors present an overview of web-based resources for biomedical researchers, providing information about Internet search engines (e.g., Google), web-based bibliographic databases (e.g., PubMed, IndMed) and how to use them, and other online biomedical resources that can assist clinicians in reaching well-informed clinical decisions.
Cohen, Jérémie F; Korevaar, Daniël A; Wang, Junfeng; Spijker, René; Bossuyt, Patrick M
Chinese biomedical databases contain a large number of publications available to systematic reviewers, but it is unclear whether they are used for synthesizing the available evidence. We report a case study of two systematic reviews on the accuracy of anti-cyclic citrullinated peptide testing for diagnosing rheumatoid arthritis. In one of these, the authors did not search Chinese databases; in the other, they did. We additionally assessed the extent to which Cochrane reviewers have searched Chinese databases in a systematic overview of the Cochrane Library (inception to 2014). The two diagnostic reviews included a total of 269 unique studies, but only 4 studies were included in both reviews. The first review included five studies published in the Chinese language (out of 151), while the second included 114 (out of 118). The summary accuracy estimates from the two reviews were comparable. Only 243 of the published 8,680 Cochrane reviews (less than 3%) searched one or more of the five major Chinese databases. These Chinese databases index about 2,500 journals, of which less than 6% are also indexed in MEDLINE. All 243 Cochrane reviews evaluated an intervention; 179 (74%) had at least one author with a Chinese affiliation; 118 (49%) addressed a topic in complementary or alternative medicine. Although searching Chinese databases may lead to the identification of a large amount of additional clinical evidence, Cochrane reviewers have rarely included them in their search strategy. We encourage future initiatives to evaluate more systematically the relevance of searching Chinese databases, as well as collaborative efforts to allow better incorporation of Chinese resources in systematic reviews.
Kang, Hongyu; Hou, Zhen; Li, Jiao
Open access (OA) resources and local libraries often have their own literature databases, especially in the field of biomedicine. We have developed a method of linking a local library to a biomedical OA resource, facilitating researchers' access to full-text articles. The method uses a vector space model to measure the similarity between articles in the local library and in OA resources, and achieved an F-score of 99.61%. This method of article linkage and mapping between local libraries and OA resources is available for use. Through this work, we have improved full-text access to biomedical OA resources.
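The vector-space matching described in this abstract can be sketched as follows. This is a minimal illustration of the general idea, not the authors' implementation: the paper's exact features, weighting scheme, and threshold are not given, so the tokenizer, the raw term counts, and the `threshold` value here are all assumptions.

```python
# Sketch of vector-space article linkage: represent each record as a
# term-count vector and match by cosine similarity (illustrative only).
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase and split a record into alphanumeric tokens."""
    return re.findall(r"[a-z0-9]+", text.lower())

def cosine_similarity(a, b):
    """Cosine similarity between the term-count vectors of two strings."""
    va, vb = Counter(tokenize(a)), Counter(tokenize(b))
    dot = sum(va[t] * vb[t] for t in va.keys() & vb.keys())
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def link_article(local_record, oa_records, threshold=0.9):
    """Return the best-matching OA record for a local catalogue record,
    or None if no candidate clears the similarity threshold."""
    best = max(oa_records, key=lambda r: cosine_similarity(local_record, r))
    return best if cosine_similarity(local_record, best) >= threshold else None
```

In practice one would weight terms (e.g. TF-IDF) and match on structured fields such as title and author rather than raw text, but the linkage decision reduces to the same similarity comparison.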
Vasas, Lívia; Hercsel, Imréné
The presence of Hungarian-published biomedical periodicals in international databases. Most Hungarian scientific results in medicine and related sciences are published in foreign journals with high impact factor (IF) values and thus appear in the international literature in foreign languages. In this study the authors examined only those periodicals published in Hungary and/or in cooperation with foreign publishers, assessing their presence and registered citations in international databases. The examination went back to 1980 and covered a 25-year period; 110 periodicals were selected for more detailed examination. The authors analyzed the status of the current periodicals in the three most frequently used databases (MEDLINE, EMBASE, Web of Science) and found that biomedical scientific periodicals of Hungarian interest were not represented with appropriate emphasis in the relevant international bibliographic databases. Because of the large amount of data, the literature of medicine and related sciences could not be covered in its entirety; nevertheless, this publication may provide useful information for interested readers and draw the attention of those responsible.
Bui, Quoc-Chinh; Sloot, Peter M A
The abundance of biomedical literature has attracted significant interest in novel methods to automatically extract biomedical relations from the literature. Until recently, most research was focused on extracting binary relations such as protein-protein interactions and drug-disease relations. However, these binary relations cannot fully represent the original biomedical data. Therefore, there is a need for methods that can extract fine-grained and complex relations known as biomedical events. In this article we propose a novel method to extract biomedical events from text. Our method consists of two phases. In the first phase, training data are mapped into structured representations. Based on that, templates are used to extract rules automatically. In the second phase, extraction methods are developed to process the obtained rules. When evaluated against the Genia event extraction abstract and full-text test datasets (Task 1), we obtain results with F-scores of 52.34 and 53.34, respectively, which are comparable to the state-of-the-art systems. Furthermore, our system achieves superior performance in terms of computational efficiency. Our source code is available for academic use at http://dl.dropbox.com/u/10256952/BioEvent.zip.
Xu, Yun; Wang, ZhiHao; Lei, YiMing; Zhao, YuZhong; Xue, Yu
The exploding growth of the biomedical literature presents many challenges for biological researchers. One such challenge arises from the heavy use of abbreviations. Extracting abbreviations and their definitions accurately is very helpful to biologists and also facilitates biomedical text analysis. Existing approaches fall into four broad categories: rule based, machine learning based, text alignment based, and statistically based. State-of-the-art methods either focus exclusively on acronym-type abbreviations or cannot recognize rare abbreviations. We propose a systematic method to extract abbreviations effectively. First, a scoring method classifies abbreviations into acronym-type and non-acronym-type; their corresponding definitions are then identified by two different methods: a text alignment algorithm for the former, and a statistical method for the latter. A literature mining system, MBA, was constructed to extract both acronym-type and non-acronym-type abbreviations. An abbreviation-tagged literature corpus, the Medstract gold-standard corpus, was used to evaluate the system. MBA achieved a recall of 88% at a precision of 91% on the Medstract gold-standard evaluation corpus. We present a new literature mining system, MBA, for extracting biomedical abbreviations. Our evaluation demonstrates that the MBA system performs better than the others. It can identify the definitions not only of acronym-type abbreviations, including slightly irregular ones, but also of non-acronym-type abbreviations.
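The text-alignment step for acronym-type abbreviations can be sketched roughly as below. This follows the well-known Schwartz–Hearst style of right-to-left letter alignment and is not the authors' exact MBA implementation, whose scoring and statistical components are not detailed in the abstract.

```python
# Sketch of acronym-definition alignment: match the abbreviation's
# characters right-to-left against the text preceding it, requiring the
# first character to start a word in the long form.
def find_definition(abbrev, preceding_text):
    """Return the shortest span of preceding_text that aligns with
    abbrev's characters, or None if no alignment exists."""
    short, long_ = abbrev.lower(), preceding_text.lower()
    s_idx, l_idx = len(short) - 1, len(long_) - 1
    while s_idx >= 0:
        c = short[s_idx]
        if not c.isalnum():            # skip hyphens etc. in the short form
            s_idx -= 1
            continue
        # scan left for a matching character; the abbreviation's first
        # character must additionally be word-initial in the long form
        while l_idx >= 0 and (long_[l_idx] != c or
                              (s_idx == 0 and l_idx > 0
                               and long_[l_idx - 1].isalnum())):
            l_idx -= 1
        if l_idx < 0:
            return None
        s_idx -= 1
        l_idx -= 1
    return preceding_text[l_idx + 1:].strip()
```

Non-acronym-type abbreviations (where the short form shares few or no letters with the definition) defeat this alignment, which is why the system falls back to a statistical co-occurrence method for them.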
This commentary highlights popular research literature databases and the use of the internet to obtain valuable research information. These literature retrieval methods include the use of the popular PubMed as well as internet search engines. Specific websites catering to developing countries' information and journals' ...
Chen, Guocai; Zhao, Jieyi; Cohen, Trevor; Tao, Cui; Sun, Jingchun; Xu, Hua; Bernstam, Elmer V; Lawson, Andrew; Zeng, Jia; Johnson, Amber M; Holla, Vijaykumar; Bailey, Ann M; Lara-Guerra, Humberto; Litzenburger, Beate; Meric-Bernstam, Funda; Jim Zheng, W
Ambiguous gene names in the biomedical literature are a barrier to accurate information extraction. To overcome this hurdle, we generated Ontology Fingerprints for selected genes that are relevant for personalized cancer therapy. These Ontology Fingerprints were used to evaluate the association between genes and biomedical literature to disambiguate gene names. We obtained 93.6% precision for the test gene set and 80.4% for the area under a receiver-operating characteristics curve for gene and article association. The core algorithm was implemented using a graphics processing unit-based MapReduce framework to handle big data and to improve performance. We conclude that Ontology Fingerprints can help disambiguate gene names mentioned in text and analyse the association between genes and articles. Database URL: http://www.ontologyfingerprint.org
As the volume of publications rapidly increases, searching for relevant information from the literature becomes more challenging. To complement standard search engines such as PubMed, it is desirable to have an advanced search tool that directly returns relevant biomedical entities such as targets, drugs, and mutations rather than a long list of articles. Some existing tools submit a query to PubMed and process retrieved abstracts to extract information at query time, resulting in a slow response time and limited coverage of only a fraction of the PubMed corpus. Other tools preprocess the PubMed corpus to speed up the response time; however, they are not constantly updated, and thus produce outdated results. Further, most existing tools cannot process sophisticated queries such as searches for mutations that co-occur with query terms in the literature. To address these problems, we introduce BEST, a biomedical entity search tool. BEST returns, as a result, a list of 10 different types of biomedical entities including genes, diseases, drugs, targets, transcription factors, miRNAs, and mutations that are relevant to a user's query. To the best of our knowledge, BEST is the only system that processes free text queries and returns up-to-date results in real time including mutation information in the results. BEST is freely accessible at http://best.korea.ac.kr.
Kongsholm, Gertrud Gansmo; Nielsen, Anna Katrine Toft; Damkier, Per
PURPOSE: It is well documented that drug-drug interaction databases (DIDs) differ substantially with respect to classification of drug-drug interactions (DDIs). The aim of this study was to examine the online available transparency of ownership, funding, information, classifications, staff training … and the three most commonly used subscription DIDs in the medical literature. The following parameters were assessed for each of the databases: ownership, classification of interactions, primary information sources, and staff qualification. We compared the overall proportion of yes/no answers from open access … rose to 22/60 and 30/36, respectively (p …). For the "interaction" domain, proportions were 3/25 versus 11/15 available from the webpage (p = 0.0001) and 3/25 versus 15/15 (p …
Workman, T Elizabeth; Fiszman, Marcelo; Hurdle, John F; Rindflesch, Thomas C
This paper examines the development and evaluation of an automatic summarization system in the domain of molecular genetics. The system is a potential component of an advanced biomedical information management application called Semantic MEDLINE and could assist librarians in developing secondary databases of genetic information extracted from the primary literature. An existing summarization system was modified for identifying biomedical text relevant to the genetic etiology of disease. The summarization system was evaluated on the task of identifying data describing genes associated with bladder cancer in MEDLINE citations. A gold standard was produced using records from Genetics Home Reference and Online Mendelian Inheritance in Man. Genes in text found by the system were compared to the gold standard. Recall, precision, and F-measure were calculated. The system achieved recall of 46%, and precision of 88% (F-measure=0.61) by taking Gene References into Function (GeneRIFs) into account. The new summarization schema for genetic etiology has potential as a component in Semantic MEDLINE to support the work of data curators.
Segura Bedmar, Isabel; Martínez, Paloma; Carruana Martín, Adrián
Biomedical semantic indexing is a very useful support tool for human curators in their efforts for indexing and cataloging the biomedical literature. The aim of this study was to describe a system to automatically assign Medical Subject Headings (MeSH) to biomedical articles from MEDLINE. Our approach relies on the assumption that similar documents should be classified by similar MeSH terms. Although previous work has already exploited document similarity by using a k-nearest neighbors algorithm, we represent documents as document vectors by search engine indexing and then compute the similarity between documents using cosine similarity. Once the most similar documents for a given input document are retrieved, we rank their MeSH terms to choose the most suitable set for the input document. To do this, we define a scoring function that takes into account the frequency of the term in the set of retrieved documents and the similarity between the input document and each retrieved document. In addition, we implement guidelines proposed by human curators to annotate MEDLINE articles; in particular, the heuristic that if three MeSH terms proposed to classify an article share the same ancestor, they should be replaced by this ancestor. The representation of the MeSH thesaurus as a graph database allows us to employ graph search algorithms to quickly and easily capture hierarchical relationships such as the lowest common ancestor between terms. Our experiments show promising results, with an F1 of 69% on the test dataset. To the best of our knowledge, this is the first work that combines search and graph database technologies for the task of biomedical semantic indexing. Due to its horizontal scalability, ElasticSearch becomes a real solution to index large collections of documents (such as the bibliographic database MEDLINE). Moreover, the use of graph search algorithms for accessing MeSH information could provide a support tool for cataloging MEDLINE articles.
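The k-nearest-neighbour ranking step described in this abstract can be sketched as follows. The abstract says the score combines a term's frequency among the retrieved documents with each document's similarity to the input; summing the neighbours' similarities per term, as below, captures both at once, though the paper's exact scoring function may differ.

```python
# Hedged sketch of k-NN MeSH term ranking: each candidate term is scored
# by the summed cosine similarity of the neighbour documents carrying it,
# so both term frequency and neighbour similarity contribute.
def rank_mesh_terms(neighbours, top_k=10):
    """neighbours: list of (similarity, mesh_terms) pairs for the documents
    most similar to the input article. Returns MeSH terms, best first."""
    scores = {}
    for sim, terms in neighbours:
        for term in terms:
            scores[term] = scores.get(term, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```

The ancestor heuristic would then post-process this list: if three proposed terms share a common MeSH ancestor (found via a lowest-common-ancestor query on the thesaurus graph), they are collapsed into that ancestor.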
The past decade has witnessed the modern advances of high-throughput technology and rapid growth of research capacity in producing large-scale biological data, both of which were concomitant with an exponential growth of biomedical literature. This wealth of scholarly knowledge is of significant importance for researchers in making scientific discoveries and healthcare professionals in managing health-related matters. However, the acquisition of such information is becoming increasingly difficult due to its large volume and rapid growth. In response, the National Center for Biotechnology Information (NCBI) is continuously making changes to its PubMed Web service for improvement. Meanwhile, different entities have devoted themselves to developing Web tools for helping users quickly and efficiently search and retrieve relevant publications. These practices, together with maturity in the field of text mining, have led to an increase in the number and quality of various Web tools that provide comparable literature search service to PubMed. In this study, we review 28 such tools, highlight their respective innovations, compare them to the PubMed system and one another, and discuss directions for future development. Furthermore, we have built a website dedicated to tracking existing systems and future advances in the field of biomedical literature search. Taken together, our work serves information seekers in choosing tools for their needs and service providers and developers in keeping current in the field. Database URL: http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/search.
Wang, Tao; Xing, Qin-Rui; Wang, Hui; Chen, Wei
The number of articles published in open access journals (OAJs) has increased dramatically in recent years. Simultaneously, the quality of publications in these journals has been called into question. Few studies have explored the retraction rate from OAJs. The purpose of the current study was to determine the reasons for retractions of articles from OAJs in biomedical research. The Medline database was searched through PubMed to identify retracted publications in OAJs. The journals were identified by the Directory of Open Access Journals. Data were extracted from each retracted article, including the time from publication to retraction, causes, journal impact factor, and country of origin. Trends in the characteristics related to retraction were determined. Data from 621 retracted studies were included in the analysis. The number and rate of retractions have increased since 2010. The most common reasons for retraction are errors (148), plagiarism (142), duplicate publication (101), fraud/suspected fraud (98) and invalid peer review (93). The number of retracted articles from OAJs has been steadily increasing. Misconduct was the primary reason for retraction. The majority of retracted articles were from journals with low impact factors and authored by researchers from China, India, Iran, and the USA.
Müller, H-M; Van Auken, K M; Li, Y; Sternberg, P W
The biomedical literature continues to grow at a rapid pace, making the challenge of knowledge retrieval and extraction ever greater. Tools that provide a means to search and mine the full text of literature thus represent an important way by which the efficiency of these processes can be improved. We describe the next generation of the Textpresso information retrieval system, Textpresso Central (TPC). TPC builds on the strengths of the original system by expanding the full text corpus to include the PubMed Central Open Access Subset (PMC OA), as well as the WormBase C. elegans bibliography. In addition, TPC allows users to create a customized corpus by uploading and processing documents of their choosing. TPC is UIMA compliant, to facilitate compatibility with external processing modules, and takes advantage of Lucene indexing and search technology for efficient handling of millions of full text documents. Like Textpresso, TPC searches can be performed using keywords and/or categories (semantically related groups of terms), but to provide better context for interpreting and validating queries, search results may now be viewed as highlighted passages in the context of full text. To facilitate biocuration efforts, TPC also allows users to select text spans from the full text and annotate them, create customized curation forms for any data type, and send resulting annotations to external curation databases. As an example of such a curation form, we describe integration of TPC with the Noctua curation tool developed by the Gene Ontology (GO) Consortium. Textpresso Central is an online literature search and curation platform that enables biocurators and biomedical researchers to search and mine the full text of literature by integrating keyword and category searches with viewing search results in the context of the full text. It also allows users to create customized curation interfaces, and use those interfaces to make annotations linked to supporting evidence statements.
P Tafti, Ahmad; Badger, Jonathan; LaRose, Eric; Shirzadi, Ehsan; Mahnke, Andrea; Mayer, John; Ye, Zhan; Page, David; Peissig, Peggy
The study of adverse drug events (ADEs) is a tenured topic in medical literature. In recent years, increasing numbers of scientific articles and health-related social media posts have been generated and shared daily, albeit with very limited use for ADE study and with little known about the content with respect to ADEs. The aim of this study was to develop a big data analytics strategy that mines the content of scientific articles and health-related Web-based social media to detect and identify ADEs. We analyzed the following two data sources: (1) biomedical articles and (2) health-related social media blog posts. We developed an intelligent and scalable text mining solution on big data infrastructures composed of Apache Spark, natural language processing, and machine learning. This was combined with an Elasticsearch No-SQL distributed database to explore and visualize ADEs. The accuracy, precision, recall, and area under receiver operating characteristic of the system were 92.7%, 93.6%, 93.0%, and 0.905, respectively, and showed better results in comparison with traditional approaches in the literature. This work not only detected and classified ADE sentences from big data biomedical literature but also scientifically visualized ADE interactions. To the best of our knowledge, this work is the first to investigate a big data machine learning strategy for ADE discovery on massive datasets downloaded from PubMed Central and social media. This contribution illustrates possible capacities in big data biomedical text analysis using advanced computational methods with real-time update from new data published on a daily basis.
Shareen A Iqbal
There is a growing movement to encourage reproducibility and transparency practices in the scientific community, including public access to raw data and protocols, the conduct of replication studies, systematic integration of evidence in systematic reviews, and the documentation of funding and potential conflicts of interest. In this survey, we assessed the current status of reproducibility and transparency addressing these indicators in a random sample of 441 biomedical journal articles published in 2000-2014. Only one study provided a full protocol and none made all raw data directly available. Replication studies were rare (n = 4), and only 16 studies had their data included in a subsequent systematic review or meta-analysis. The majority of studies did not mention anything about funding or conflicts of interest. The percentage of articles with no statement of conflict decreased substantially between 2000 and 2014 (94.4% in 2000 to 34.6% in 2014); the percentage of articles reporting statements of conflicts (0% in 2000, 15.4% in 2014) or no conflicts (5.6% in 2000, 50.0% in 2014) increased. Articles published in journals in the clinical medicine category versus other fields were almost twice as likely to not include any information on funding and to have private funding. This study provides baseline data to compare future progress in improving these indicators in the scientific literature.
This paper presents the modelling approaches performed to automatically predict the modality of images found in biomedical literature. Various state-of-the-art visual features, such as Bag-of-Keypoints computed with dense SIFT descriptors, texture features and Joint Composite Descriptors, were used for visual image representation. Text representation was obtained by vector quantisation on a Bag-of-Words dictionary generated using attribute importance derived from a χ² test. By computing the principal components separately on each feature, dimension reduction as well as computational load reduction was achieved. Various multiple-feature fusions were adopted to supplement visual image information with corresponding text information. The improvement obtained when using multimodal features vs. visual or text features was detected, analysed and evaluated. Random Forest models with 100 to 500 deep trees grown by resampling, a multi-class linear kernel SVM with C=0.05, and a late fusion of the two classifiers were used for modality prediction. A Random Forest classifier achieved higher accuracy, and Bag-of-Keypoints computed with dense SIFT descriptors proved to be a better approach than with Lowe SIFT.
Verspoor, Karin; Jimeno Yepes, Antonio; Cavedon, Lawrence; McIntosh, Tara; Herten-Crabb, Asha; Thomas, Zoë; Plazzer, John-Paul
This article introduces the Variome Annotation Schema, a schema that aims to capture the core concepts and relations relevant to cataloguing and interpreting human genetic variation and its relationship to disease, as described in the published literature. The schema was inspired by the needs of the database curators of the International Society for Gastrointestinal Hereditary Tumours (InSiGHT) database, but is intended to have application to genetic variation information in a range of diseases. The schema has been applied to a small corpus of full text journal publications on the subject of inherited colorectal cancer. We show that the inter-annotator agreement on annotation of this corpus ranges from 0.78 to 0.95 F-score across different entity types when exact matching is measured, and improves to a minimum F-score of 0.87 when boundary matching is relaxed. Relations show more variability in agreement, but several are reliable, with the highest, cohort-has-size, reaching 0.90 F-score. We also explore the relevance of the schema to the InSiGHT database curation process. The schema and the corpus represent an important new resource for the development of text mining solutions that address relationships among patient cohorts, disease and genetic variation, and therefore, we also discuss the role text mining might play in the curation of information related to the human variome. The corpus is available at http://opennicta.com/home/health/variome.
Ferris, Todd A; Garrison, Gregory M; Lowe, Henry J
Access to clinical data is of increasing importance to biomedical research. The pending HIPAA privacy regulations provide specific requirements for the release of protected health information. Under the regulations, biomedical researchers may utilize anonymized data, or adhere to HIPAA requirements regarding protected health information. In order to provide researchers with anonymized data from a clinical research database, we reviewed several published strategies for de-identification of protected health information. Critical analysis with respect to this project suggests that de-identification alone is problematic when applied to clinical research databases. We propose a hybrid system; utilizing secure key escrow, de-identification, and role-based access for IRB approved researchers.
Applying machine learning techniques to online biomedical databases is a challenging task, as this data is collected from a large number of sources and is multi-dimensional. Retrieval of relevant documents from a large repository such as a gene-document collection also takes more processing time and yields an increased false-positive rate. Generally, the extraction of biomedical documents is based on the stream of prior observations of gene parameters taken at different time periods. Traditional web usage models such as Markov, Bayesian and clustering models are sensitive when analyzing user navigation patterns and session identification in online biomedical databases. Moreover, most document ranking models on biomedical databases are sensitive to sparsity and outliers. In this paper, a novel user recommendation system was implemented to predict the top-ranked biomedical documents using disease type, gene entities and user navigation patterns. In this recommendation system, dynamic session identification, dynamic user identification and document ranking techniques were used to extract the most relevant disease documents from the online PubMed repository. To verify the performance of the proposed model, the true-positive rate and runtime of the model were compared with those of traditional static models such as Bayesian and fuzzy rank. Experimental results show that the performance of the proposed ranking model is better than that of the traditional models.
Liljekvist, Mads Svane; Andresen, Kristoffer; Pommergaard, Hans-Christian; Rosenberg, Jacob
Background. Open access (OA) journals allows access to research papers free of charge to the reader. Traditionally, biomedical researchers use databases like MEDLINE and EMBASE to discover new advances. However, biomedical OA journals might not fulfill such databases' criteria, hindering dissemination. The Directory of Open Access Journals (DOAJ) is a database exclusively listing OA journals. The aim of this study was to investigate DOAJ's coverage of biomedical OA journals compared with the conventional biomedical databases. Methods. Information on all journals listed in four conventional biomedical databases (MEDLINE, PubMed Central, EMBASE and SCOPUS) and DOAJ were gathered. Journals were included if they were (1) actively publishing, (2) full OA, (3) prospectively indexed in one or more database, and (4) of biomedical subject. Impact factor and journal language were also collected. DOAJ was compared with conventional databases regarding the proportion of journals covered, along with their impact factor and publishing language. The proportion of journals with articles indexed by DOAJ was determined. Results. In total, 3,236 biomedical OA journals were included in the study. Of the included journals, 86.7% were listed in DOAJ. Combined, the conventional biomedical databases listed 75.0% of the journals; 18.7% in MEDLINE; 36.5% in PubMed Central; 51.5% in SCOPUS and 50.6% in EMBASE. Of the journals in DOAJ, 88.7% published in English and 20.6% had received impact factor for 2012 compared with 93.5% and 26.0%, respectively, for journals in the conventional biomedical databases. A subset of 51.1% and 48.5% of the journals in DOAJ had articles indexed from 2012 and 2013, respectively. Of journals exclusively listed in DOAJ, one journal had received an impact factor for 2012, and 59.6% of the journals had no content from 2013 indexed in DOAJ. Conclusions. DOAJ is the most complete registry of biomedical OA journals compared with five conventional biomedical databases
Naderi, Nona; Witte, René
…), the first comprehensive, fully open-source approach to automatically extract impacts and related relevant information from the biomedical literature. We assessed the performance of our work on manually annotated corpora, and the results show the reliability of our approach. Representing the extracted information in a structured format facilitates knowledge management and aids in database curation and correction. Furthermore, access to the analysis results is provided through multiple interfaces, including web services for automated data integration and desktop-based solutions for end-user interactions.
To better understand information about human health from databases, we analyzed three datasets collected for different purposes in Canada: a biomedical database of older adults, a large population survey across all adult ages, and vital statistics. Redundancy in the variables was established, and this led us to derive a generalized (macroscopic) state variable, a fitness/frailty index that reflects both individual and group health status. Evaluation of the relationship between fitness/frailty and the mortality rate revealed that the latter could be expressed in terms of variables generally available from any cross-sectional database. In practical terms, this means that the risk of mortality might readily be assessed from standard biomedical appraisals collected for other purposes.
Background. Negated biomedical events are often ignored by text-mining applications; however, such events carry scientific significance. We report on the development of BioN∅T, a database of negated sentences that can be used to extract such negated events. Description. Currently BioN∅T incorporates ≈32 million negated sentences, extracted from over 336 million biomedical sentences from three resources: ≈2 million full-text biomedical articles in Elsevier and PubMed Central, as well as ≈20 million abstracts in PubMed. We evaluated BioN∅T on three important genetic disorders: autism, Alzheimer's disease and Parkinson's disease, and found that BioN∅T is able to capture negated events that may be ignored by experts. Conclusions. The BioN∅T database can be a useful resource for biomedical researchers. BioN∅T is freely available at http://bionot.askhermes.org/. In future work, we will develop semantic web related technologies to enrich BioN∅T.
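BioN∅T's actual extraction pipeline is not detailed in the abstract; as a rough illustration of the underlying idea, a sentence can be flagged as negated when it contains a negation cue word. The cue list and sentence splitter below are simplified placeholders, not the system's real components:

```python
import re

# Hypothetical cue list; a real negation detector would be far more sophisticated.
NEGATION_CUES = {"no", "not", "never", "without", "absent", "lack", "lacks", "cannot"}

def split_sentences(text):
    """Naive splitter: break on ., ! or ? followed by whitespace."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def negated_sentences(text):
    """Return the sentences that contain at least one negation cue word."""
    out = []
    for sent in split_sentences(text):
        tokens = {t.lower() for t in re.findall(r"[A-Za-z']+", sent)}
        if tokens & NEGATION_CUES:
            out.append(sent)
    return out
```

Run over a corpus, such a filter would yield the kind of negated-sentence collection the database stores.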
Neves, Mariana; Leser, Ulf
New approaches to biomedical text mining crucially depend on the existence of comprehensive annotated corpora. Such corpora, commonly called gold standards, are important for learning patterns or models during the training phase, for evaluating and comparing the performance of algorithms, and for better understanding the information sought by means of examples. Gold standards depend on human understanding and manual annotation of natural language text. This process is very time-consuming and expensive because it requires high intellectual effort from domain experts. Accordingly, the lack of gold standards is considered one of the main bottlenecks for developing novel text mining methods. This situation has led to the development of tools that support humans in annotating texts. Such tools should be intuitive to use, support a range of different input formats, include visualization of annotated texts, and generate an easy-to-parse output format. Today, a range of tools implementing some of these functionalities are available. Here, we present a comprehensive survey of tools for supporting annotation of biomedical texts. Altogether, we considered almost 30 tools, 13 of which were selected for an in-depth comparison. The comparison was performed using predefined criteria and was accompanied by hands-on experience whenever possible. Our survey shows that current tools can support many of the tasks in biomedical text annotation in a satisfying manner, but also that no tool can be considered a true comprehensive solution.
Computer-based resources are central to much, if not most, biological and medical research. However, while there is an ever-expanding choice of bioinformatics resources described within the biomedical literature, little work to date has evaluated the full range of their availability or levels of usage. Here we use text mining to process the PubMed Central full-text corpus, identifying mentions of databases or software within the scientific literature. We provide an audit of the resources contained within the biomedical literature, and a comparison of their relative usage, both over time and between the sub-disciplines of bioinformatics, biology and medicine. We find that trends in resource usage differ between these domains. The bioinformatics literature emphasises novel resource development, while database and software usage within biology and medicine is more stable and conservative. Many resources are only mentioned in the bioinformatics literature, with a relatively small number making it out into general biology, and fewer still into the medical literature. In addition, many resources are seeing a steady decline in their usage (e.g., BLAST, SWISS-PROT), though some are instead seeing rapid growth (e.g., the GO, R). We find a striking imbalance in resource usage, with the top 5% of resource names (133 names) accounting for 47% of total usage, and over 70% of extracted resources being mentioned only once each. While these results highlight the dynamic and creative nature of bioinformatics research, they raise questions about software reuse, choice and the sharing of bioinformatics practice. Is it acceptable that so many resources are apparently never reused? Finally, our work is a step towards automated extraction of scientific method from text. We make the dataset generated by our study available under the CC0 license here: http://dx.doi.org/10.6084/m9.figshare.1281371.
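The headline concentration statistic (the top 5% of resource names accounting for 47% of usage) is straightforward to compute once mentions have been extracted; a minimal sketch, assuming mentions arrive as a flat list of resource-name strings:

```python
from collections import Counter

def top_share(mentions, fraction=0.05):
    """Fraction of all mentions accounted for by the top `fraction`
    of distinct resource names, ranked by mention count."""
    counts = Counter(mentions)
    k = max(1, int(len(counts) * fraction))   # at least one name in the "top" set
    top = sum(c for _, c in counts.most_common(k))
    return top / sum(counts.values())
```

With the paper's data, `top_share` over the extracted mention list would reproduce the reported 47% figure.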
Jin, Yang; McDonald, Ryan T; Lerman, Kevin; Mandel, Mark A; Carroll, Steven; Liberman, Mark Y; Pereira, Fernando C; Winters, Raymond S; White, Peter S
The rapid proliferation of biomedical text makes it increasingly difficult for researchers to identify, synthesize, and utilize developed knowledge in their fields of interest. Automated information extraction procedures can assist in the acquisition and management of this knowledge. Previous efforts in biomedical text mining have focused primarily upon named entity recognition of well-defined molecular objects such as genes, but less work has been performed to identify disease-related objects and concepts. Furthermore, promise has been tempered by an inability to efficiently scale approaches in ways that minimize manual efforts and still perform with high accuracy. Here, we have applied a machine-learning approach previously successful for identifying molecular entities to a disease concept, to determine whether the underlying probabilistic model effectively generalizes to unrelated concepts with minimal manual intervention for model retraining. We developed a named entity recognizer (MTag), an entity tagger for recognizing clinical descriptions of malignancy presented in text. The application uses the machine-learning technique Conditional Random Fields with additional domain-specific features. MTag was tested with 1,010 training and 432 evaluation documents pertaining to cancer genomics. Overall, our experiments resulted in 0.85 precision, 0.83 recall, and 0.84 F-measure on the evaluation set. Compared with a baseline system using string matching of text with a neoplasm term list, MTag performed with a much higher recall rate (92.1% vs. 42.1% recall) and demonstrated the ability to learn new patterns. Application of MTag to all MEDLINE abstracts yielded the identification of 580,002 unique and 9,153,340 overall mentions of malignancy. Significantly, addition of an extensive lexicon of malignancy mentions as a feature set for extraction had minimal impact on performance. Together, these results suggest that the identification of disparate biomedical entity classes in …
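The reported precision, recall and F-measure follow the standard definitions; the F-measure is the harmonic mean of the other two, so 0.85 precision and 0.83 recall yield roughly the reported 0.84:

```python
def precision_recall(tp, fp, fn):
    """Precision and recall from true positive, false positive
    and false negative counts."""
    return tp / (tp + fp), tp / (tp + fn)

def f_measure(precision, recall):
    """F-measure: the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)
```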
Ozyurt, Ibrahim Burak; Grethe, Jeffrey S; Martone, Maryann E; Bandrowski, Anita E
The NIF Registry, developed and maintained by the Neuroscience Information Framework, is a cooperative project aimed at cataloging research resources, e.g., software tools, databases and tissue banks, funded largely by governments and available as tools to research scientists. Although originally conceived for neuroscience, the NIF Registry has over the years broadened in scope to include research resources of general relevance to biomedical research. The number of research resources currently listed by the Registry exceeds 13K. The broadening in scope to biomedical science led us to re-christen the NIF Registry platform as SciCrunch. The NIF/SciCrunch Registry has been cataloging the resource landscape since 2006; as such, it serves as a valuable dataset for tracking the breadth, fate and utilization of these resources. Our experience shows research resources like databases are dynamic objects that can change location and scope over time. Although each record is entered manually and human-curated, the current size of the registry requires tools that can aid curation efforts to keep content up to date, including when and where such resources are used. To address this challenge, we have developed an open-source tool suite, collectively termed RDW: Resource Disambiguator for the Web. RDW is designed to help in the upkeep and curation of the registry as well as in enhancing its content by automated extraction of resource candidates from the literature. The RDW toolkit includes a URL extractor for papers, a resource candidate screener, a resource URL change tracker, and a resource content change tracker. Curators access these tools via a web-based user interface. Several strategies are used to optimize these tools, including supervised and unsupervised learning algorithms as well as statistical text analysis. The complete tool suite is used to enhance and maintain the resource registry as well as track the usage of individual resources through an …
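The abstract does not describe how the RDW URL extractor is implemented; a naive regex-based sketch of pulling candidate resource URLs out of paper text, for illustration only, might look like:

```python
import re

# Grab http(s) URLs, stopping at whitespace and common trailing punctuation.
URL_RE = re.compile(r"https?://[^\s)\]>,;\"']+")

def extract_urls(text):
    """Return candidate resource URLs found in free text,
    with sentence-final periods stripped."""
    return [u.rstrip(".") for u in URL_RE.findall(text)]
```

Candidates from such a pass would then feed a screening step (RDW's "resource candidate screener") before any registry update.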
Danishevskiy Kirill D
In the 20th century, Russian biomedical science experienced a decline from the blossom of its early years to a drastic state. Through the first decades of the USSR, it was transformed to suit the ideological requirements of a totalitarian state and the biased directives of communist leaders. Later, depressing economic conditions and isolation from the international research community further impeded its development. Contemporary Russia has inherited a system of medical education quite different from the West, as well as counterproductive regulations for the allocation of research funding. The methodology of medical and epidemiological research in Russia is largely outdated. Epidemiology continues to focus on infectious disease, and results of the best studies tend to be published in international periodicals. MEDLINE continues to be the best database to search for Russian biomedical publications, despite only a small proportion being indexed. The database of the Moscow Central Medical Library is the largest national database of medical periodicals, but it does not provide abstracts or full subject heading codes, and it does not cover even the entire collection of the Library. New databases and catalogs (e.g., Panteleimon) that have appeared recently are incomplete and do not enable effective searching.
Demner-Fushman, Dina; Mork, James G; Shooshan, Sonya E; Aronson, Alan R
Identification of medical terms in free text is a first step in such Natural Language Processing (NLP) tasks as automatic indexing of biomedical literature and extraction of patients' problem lists from the text of clinical notes. Many tools developed to perform these tasks use biomedical knowledge encoded in the Unified Medical Language System (UMLS) Metathesaurus. We continue our exploration of automatic approaches to creation of subsets (UMLS content views) which can support NLP processing of either the biomedical literature or clinical text. We found that suppression of highly ambiguous terms in the conservative AutoFilter content view can partially replace manual filtering for literature applications, and suppression of two character mappings in the same content view achieves 89.5% precision at 78.6% recall for clinical applications. Published by Elsevier Inc.
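The exact suppression rules behind the AutoFilter content view are not spelled out here; as a hypothetical illustration of the idea, a content view can be built from a term-to-concept mapping by dropping very short mappings (such as two-character strings) and highly ambiguous terms that map to many concepts. The thresholds below are invented for the example:

```python
def filter_content_view(term_to_concepts, max_ambiguity=5, min_length=3):
    """Keep only terms that are long enough and map to few enough
    UMLS-style concept IDs. Thresholds are illustrative, not the paper's."""
    return {
        term: cuis
        for term, cuis in term_to_concepts.items()
        if len(term) >= min_length and len(cuis) <= max_ambiguity
    }
```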
Smalheiser, Neil R; Lin, Can; Jia, Lifeng; Jiang, Yu; Cohen, Aaron M; Yu, Clement; Davis, John M; Adams, Clive E; McDonagh, Marian S; Meng, Weiyi
Individuals and groups who write systematic reviews and meta-analyses in evidence-based medicine regularly carry out literature searches across multiple search engines linked to different bibliographic databases, and thus have an urgent need for a suitable metasearch engine to save time spent on repeated searches and to remove duplicate publications from initial consideration. Unlike general users, who typically search for a few highly relevant (or highly recent) articles, systematic reviewers seek to obtain a comprehensive set of articles on a given topic satisfying specific criteria. This creates special requirements and challenges for metasearch engine design and implementation. We created a federated search tool, Metta, connected to five databases: PubMed, EMBASE, CINAHL, PsycINFO, and the Cochrane Central Register of Controlled Trials. Retrieved bibliographic records were shown online; optionally, results could be de-duplicated and exported in both BibTeX and XML format. The query interface was extensively modified in response to feedback from users within our team. Besides a general search track and one focused on human-related articles, we also added search tracks optimized to identify case reports and systematic reviews. Although users could modify preset search options, these were rarely if ever altered in practice. Up to several thousand retrieved records could be exported within a few minutes. De-duplication of records returned from multiple databases was carried out in a prioritized fashion that favored retaining citations returned from PubMed. Systematic reviewers are used to formulating complex queries using strategies and search tags that are specific to individual databases. Metta offers a different approach that may save substantial time but requires modification of current search strategies and better indexing of randomized controlled trial articles. We envision Metta as one piece of a multi-tool pipeline that will assist
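The prioritized de-duplication favoring PubMed can be sketched simply. The priority ordering and the normalised-title matching key below are assumptions for illustration; real de-duplication would match on more fields (DOI, authors, year):

```python
# Lower rank wins, so a PubMed record is retained when duplicates conflict.
PRIORITY = {"PubMed": 0, "EMBASE": 1, "CINAHL": 2, "PsycINFO": 3, "Cochrane": 4}

def deduplicate(records):
    """Keep one record per normalised title, preferring higher-priority sources.
    Each record is a dict with 'title' and 'source' keys."""
    best = {}
    for rec in records:
        key = " ".join(rec["title"].lower().split())   # case/whitespace-insensitive key
        if key not in best or PRIORITY[rec["source"]] < PRIORITY[best[key]["source"]]:
            best[key] = rec
    return list(best.values())
```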
Dankar, Fida K; Ptitsyn, Andrey; Dankar, Samar K
Contemporary biomedical databases include a wide range of information types from various observational and instrumental sources. Among the most important features that unite biomedical databases across the field are high volume of information and high potential to cause damage through data corruption, loss of performance, and loss of patient privacy. Thus, issues of data governance and privacy protection are essential for the construction of data depositories for biomedical research and healthcare. In this paper, we discuss various challenges of data governance in the context of population genome projects. The various challenges along with best practices and current research efforts are discussed through the steps of data collection, storage, sharing, analysis, and knowledge dissemination.
The Wind-Wildlife Impacts Literature Database (WILD), developed and maintained by the National Wind Technology Center (NWTC) at the National Renewable Energy Laboratory (NREL), comprises over 1,000 citations pertaining to the effects of land-based wind, offshore wind, marine and hydrokinetic power, power lines, and communication and television towers on wildlife.
Many biomedical relation extraction approaches are based on supervised machine learning, requiring an annotated corpus. Distant supervision aims at training a classifier by combining a knowledge base with a corpus, reducing the amount of manual effort necessary. This is particularly useful for biomedicine because databases and ontologies are available for many biological processes, while the availability of annotated corpora is still limited. We studied the extraction of microRNA-gene relations from text. MicroRNA regulation is an important biological process due to its close association with human diseases. The proposed method, IBRel, is based on distantly supervised multi-instance learning. We evaluated IBRel on three datasets, and the results were compared with a co-occurrence approach as well as a supervised machine learning algorithm. While supervised learning outperformed IBRel on two of those datasets, IBRel obtained an F-score 28.3 percentage points higher on the dataset for which no training set had been developed specifically. To demonstrate the applicability of IBRel, we used it to extract 27 miRNA-gene relations from recently published papers about cystic fibrosis. Our results demonstrate that our method can successfully extract relations from literature about a biological process without an annotated corpus. The source code and data used in this study are available at https://github.com/AndreLamurias/IBRel.
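IBRel's multi-instance learning is more involved than this, but the distant-supervision labelling step it builds on can be illustrated simply: sentences co-mentioning a known miRNA-gene pair from the knowledge base become positive training examples, and other sentences become negatives. Substring matching below stands in for real entity recognition:

```python
def distant_labels(sentences, kb_pairs):
    """Label each sentence 1 if it mentions both members of any known
    (miRNA, gene) pair from the knowledge base, else 0.
    Entity spotting is plain substring matching, purely illustrative."""
    labelled = []
    for sent in sentences:
        for mirna, gene in kb_pairs:
            if mirna in sent and gene in sent:
                labelled.append((sent, 1))
                break
        else:
            labelled.append((sent, 0))
    return labelled
```

A relation classifier is then trained on these noisy labels instead of a hand-annotated corpus.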
Disease-associated gene discovery is a critical step toward realizing the future of personalized medicine. However, empirical and clinical validation of disease-associated genes is time consuming and expensive. In silico discovery of disease-associated genes from literature is therefore becoming the first essential step for biomarker discovery to…
These literature retrieval methods include the use of the popular PubMed as well as internet search engines. Specific websites catering to developing countries' information, and journals' websites, are also highlighted. Key words: research information, retrieval methods, internet, search engines, PubMed.
Background. The integration of biomedical information is essential for tackling medical problems. We describe a data model in the domain of flow cytometry (FC) allowing for massive management, analysis and integration with other laboratory and clinical information. The paper is concerned with the proper translation of the Flow Cytometry Standard (FCS) into a relational database schema, in a way that facilitates end users either doing research on FC or studying specific cases of patients who have undergone FC analysis. Results. The proposed database schema provides integration of data originating from diverse acquisition settings, organized in a way that allows syntactically simple queries that provide results significantly faster than the conventional implementations of the FCS standard. The proposed schema can potentially achieve up to 8 orders of magnitude reduction in query complexity and up to 2 orders of magnitude reduction in response time for data originating from flow cytometers that record 256 colours. This is mainly achieved by maintaining an almost constant number of data-mining procedures regardless of the size and complexity of the stored information. Conclusion. It is evident that using single-file data storage standards for the design of databases without any structural transformations significantly limits the flexibility of databases. Analysis of the requirements of a specific domain for integration and massive data processing can provide the necessary schema modifications that will unlock the additional functionality of a relational database.
Boutron, Isabelle; Ravaud, Philippe
Publication in peer-reviewed journals is an essential step in the scientific process. However, publication is not simply the reporting of facts arising from a straightforward analysis thereof. Authors have broad latitude when writing their reports and may be tempted to consciously or unconsciously "spin" their study findings. Spin has been defined as a specific intentional or unintentional reporting that fails to faithfully reflect the nature and range of findings and that could affect the impression the results produce in readers. This article, based on a literature review, reports the various practices of spin from misreporting by "beautification" of methods to misreporting by misinterpreting the results. It provides data on the prevalence of some forms of spin in specific fields and the possible effects of some types of spin on readers' interpretation and research dissemination. We also discuss why researchers would spin their reports and possible ways to avoid it.
Diabetes mellitus is a condition that is extremely serious from both clinical and public health standpoints. The traditional healthcare system of India, Ayurveda, offers a balanced and holistic multi-modality approach to treating this disorder. Many Ayurvedic modalities have been subjected to empirical scientific evaluation, but most such research has been done in India, receiving little attention in North America. This paper offers a review of the English language literature related to Ayurveda and diabetes care, encompassing herbs, diet, yoga, and meditation as modalities that are accessible and acceptable to Western clinicians and patients. There is a considerable amount of data from both animal and human trials suggesting efficacy of Ayurvedic interventions in managing diabetes. However, the reported human trials generally fall short of contemporary methodological standards. More research is needed in the area of Ayurvedic treatment of diabetes, assessing both whole practice and individual modalities.
There is an increasing need to evaluate the production and impact of medical research produced by institutions. Many indicators exist, yet we do not have enough information about their relevance. The objective of this systematic review was (1) to identify all the indicators that could be used to measure the output and outcome of medical research carried out in institutions and (2) to list their methodology, use, and positive and negative points. We searched 3 databases (PubMed, Scopus, Web of Science) using the following keywords: [research outcome* OR research output* OR bibliometric* OR scientometric* OR scientific production] AND [indicator* OR index* OR evaluation OR metrics]. We included articles presenting, discussing or evaluating indicators measuring the scientific production of an institution. The search was conducted by two independent authors using a standardised data extraction form. For each indicator we extracted its definition, calculation, rationale, and positive and negative points. In order to reduce bias, data extraction and analysis were performed by two independent authors. We included 76 articles. A total of 57 indicators were identified. We have classified those indicators into 6 categories: 9 indicators of research activity, 24 indicators of scientific production and impact, 5 indicators of collaboration, 7 indicators of industrial production, 4 indicators of dissemination, and 8 indicators of health service impact. The most widely discussed and described is the h-index, with 31 articles discussing it. The majority of indicators found are bibliometric indicators of scientific production and impact. Several indicators have been developed to improve the h-index. This indicator has also inspired the creation of two indicators to measure industrial production and collaboration. Several articles propose indicators measuring research impact without detailing a methodology for calculating them. Many bibliometric indicators identified …
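Of the 57 indicators, the most-discussed h-index has a simple definition: the largest h such that at least h of an institution's (or author's) papers have at least h citations each. A direct implementation:

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):  # i-th most-cited paper
        if c >= i:
            h = i
        else:
            break
    return h
```

For example, citation counts [10, 8, 5, 4, 3] give an h-index of 4: four papers have at least 4 citations, but not five papers with at least 5.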
Most biomedical journals charge readers a hefty access toll to read the full text version of a published research article. These tolls bring enormous profits to the traditional corporate publishing industry, but they make it impossible for most people worldwide--particularly in low and middle income countries--to access the biomedical literature. Traditional publishers also insist on owning the copyright on these articles, making it illegal for readers to freely distribute and photocopy papers, translate them, or create derivative educational works. This article argues that excluding the poor from accessing and freely using the biomedical research literature is harming global public health. Health care workers, for example, are prevented from accessing the information they need to practice effective medicine, while policymakers are prevented from accessing the essential knowledge they require to build better health care systems. The author proposes that the biomedical literature should be considered a global public good, basing his arguments upon longstanding and recent international declarations that enshrine access to scientific and medical knowledge as a human right. He presents an emerging alternative publishing model, called open access, and argues that this model is a more socially responsive and equitable approach to knowledge dissemination.
Hernandez-Villafuerte, Karla; Sussex, Jon; Robin, Enora; Guthrie, Sue; Wooding, Steve
Publicly funded biomedical and health research is expected to achieve the best return possible for taxpayers and for society generally. It is therefore important to know whether such research is more productive if concentrated into a small number of 'research groups' or dispersed across many. We undertook a systematic rapid evidence assessment focused on the research question: do economies of scale and scope exist in biomedical and health research? In other words, is that research more productive per unit of cost if more of it, or a wider variety of it, is done in one location? We reviewed English language literature without date restriction to the end of 2014. To help us classify and understand that literature, we first reviewed econometric literature discussing models for analysing economies of scale and/or scope in research generally (not limited to biomedical and health research). We found a large and disparate literature. We reviewed 60 empirical studies of (dis-)economies of scale and/or scope in biomedical and health research, or in categories of research including or overlapping with biomedical and health research. This literature is varied in methods and findings. At the level of universities or research institutes, studies more often point to positive economies of scale than to diseconomies of scale or constant returns to scale in biomedical and health research. However, all three findings exist in the literature, along with inverse U-shaped relationships. At the level of individual research units, laboratories or projects, the number of studies is smaller and the evidence is mixed. Concerning economies of scope, the literature more often suggests positive economies of scope than diseconomies, but the picture is again mixed. The effect of varying the scope of activities by a research group was less often reported than the effect of scale, and the results were more mixed. The absence of predominant findings for or against the existence of …
Yang, Zhihao; Lin, Yuan; Wu, Jiajin; Tang, Nan; Lin, Hongfei; Li, Yanpeng
Knowledge about protein-protein interactions (PPIs) unveils the molecular mechanisms of biological processes. However, the volume and content of published biomedical literature on protein interactions are expanding rapidly, making it increasingly difficult for interaction database curators to detect and curate protein interaction information manually. We present a multiple kernel learning-based approach for automatic PPI extraction from biomedical literature. The approach combines feature-based, tree and graph kernels, and combines their output with a Ranking support vector machine (SVM). Experimental evaluations show that the features in the individual kernels are complementary, and that the kernel combined with Ranking SVM achieves better performance than the individual kernels, equal-weight combination, and optimal-weight combination. Our approach achieves state-of-the-art performance on comparable evaluations, with a 64.88% F-score and 88.02% AUC on the AImed corpus. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
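The simplest form of combining multiple kernels is a weighted sum of their precomputed Gram matrices; the paper learns the combination via Ranking SVM, whereas the fixed weights below are purely illustrative:

```python
def combine_kernels(kernels, weights):
    """Weighted sum of precomputed kernel (Gram) matrices, given as
    nested lists; the most basic form of multiple kernel learning."""
    n = len(kernels[0])
    combined = [[0.0] * n for _ in range(n)]
    for K, w in zip(kernels, weights):
        for i in range(n):
            for j in range(n):
                combined[i][j] += w * K[i][j]
    return combined
```

The combined matrix remains a valid kernel (non-negative weights preserve positive semi-definiteness), so it can be fed directly to any kernel-based classifier.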
Wu, Stephen; Liu, Hongfang
Natural language processing (NLP) has become crucial in unlocking information stored in free text, from both clinical notes and biomedical literature. Clinical notes convey clinical information related to individual patient health care, while biomedical literature communicates scientific findings. This work focuses on semantic characterization of texts at an enterprise scale, comparing and contrasting the two domains and their NLP approaches. We analyzed the empirical distributional characteristics of NLP-discovered named entities in Mayo Clinic clinical notes from 2001-2010, and in the 2011 MetaMapped Medline Baseline. We give qualitative and quantitative measures of domain similarity and point to the feasibility of transferring resources and techniques. An important by-product for this study is the development of a weighted ontology for each domain, which gives distributional semantic information that may be used to improve NLP applications.
Full Text Available Abstract Background The majority of experimentally verified molecular interaction and biological pathway data are present in the unstructured text of biomedical journal articles, where they are inaccessible to computational methods. The Biomolecular Interaction Network Database (BIND) seeks to capture these data in a machine-readable format. We hypothesized that the formidable size of the task of backfilling the database could be reduced by using support vector machine technology to first locate interaction information in the literature. We present an information extraction system that was designed to locate protein-protein interaction data in the literature and present these data to curators and the public for review and entry into BIND. Results Cross-validation estimated that the support vector machine's test-set precision, accuracy, and recall for classifying abstracts describing interaction information were 92%, 90%, and 92%, respectively. We estimated that the system would be able to recall up to 60% of all non-high-throughput interactions present in another yeast-protein interaction database. Finally, this system was applied to a real-world curation problem and its use was found to reduce the task duration by 70%, saving 176 days. Conclusions Machine learning methods are useful as tools to direct interaction and pathway database back-filling; however, this potential can only be realized if these techniques are coupled with human review and entry into a factual database such as BIND. The PreBIND system described here is available to the public at http://bind.ca. Current capabilities allow searching for human, mouse and yeast protein-interaction information.
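The reported precision, accuracy and recall follow from the standard confusion-matrix definitions. The counts below are invented toy numbers, chosen only so that the output reproduces the quoted 92% precision, 90% accuracy, and 92% recall.

```python
# Standard precision / recall / accuracy from confusion counts, as reported
# for the PreBIND abstract classifier. The counts are hypothetical.

def metrics(tp, fp, fn, tn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, accuracy

# Toy confusion counts (tp, fp, fn, tn) picked to match the quoted figures:
# precision 92/100, recall 92/100, accuracy 144/160.
print(metrics(92, 8, 8, 52))  # (0.92, 0.92, 0.9)
```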
Kafkas, Şenay; Kim, Jee-Hyub; Pi, Xingjun; McEntyre, Johanna R
In this study, we present an analysis of data citation practices in full text research articles and their corresponding supplementary data files, made available in the Open Access set of articles from Europe PubMed Central. Our aim is to investigate whether supplementary data files should be considered as a source of information for integrating the literature with biomolecular databases. Using text-mining methods to identify and extract a variety of core biological database accession numbers, we found that the supplemental data files contain many more database citations than the body of the article, and that those citations often take the form of a relatively small number of articles citing large collections of accession numbers in text-based files. Moreover, citation of value-added databases derived from submission databases (such as Pfam, UniProt or Ensembl) is common, demonstrating the reuse of these resources as datasets in themselves. All the database accession numbers extracted from the supplementary data are publicly accessible from http://dx.doi.org/10.5281/zenodo.11771. Our study suggests that supplementary data should be considered when linking articles with data, in curation pipelines, and in information retrieval tasks in order to make full use of the entire research article. These observations highlight the need to improve the management of supplemental data in general, in order to make this information more discoverable and useful.
Navarro-Pérez, Patricia; Ortiz-Gómez, Teresa; Gil-García, Eugenia
To explore the scientific output on transsexuality in the Spanish biomedical literature between 1973 and 2011, through bibliometric and content analyses. We carried out a descriptive, cross-sectional study of Spanish biomedical articles on transsexuality published between 1973 and 2011. The data sources consisted of Índice Médico Español and ISOC-Ciencias Sociales y Humanidades. Bibliometric and content analyses were performed. A total of 65 papers were analyzed. Knowledge on transsexuality in Spain began to appear in medical journals between 1973 and 1984. A decade of intense productivity began in 1996 and the number of journals publishing articles on transsexuality multiplied in the following years. Until 2006, the year with the most biomedical productivity, biomedical discourses reproduced representations of transsexuality anchored in biological determinism. From 2008-2011, professionals writing on the topic incorporated feminist theories and social perspectives in their discourses. In the last quarter of the twentieth century, the dominant medical discourse considered manifestations of transsexual people from a biologist perspective that conceives transsexuality as a configuration mismatch between sex and gender. The emergence of new identity categories and medical reflection from non-essentialist and non-normative gender perspectives has improved the clinical management of transsexuality. Copyright © 2014 SESPAS. Published by Elsevier Espana. All rights reserved.
Volanakis, Adam; Krawczyk, Konrad
There are more than 26 million peer-reviewed biomedical research items according to Medline/PubMed. This breadth of information is indicative of the progress in biomedical sciences on one hand, but an overload for scientists performing literature searches on the other. A major portion of scientific literature search is to find statements, numbers and protocols that can be cited to build an evidence-based narrative for a new manuscript. Because science builds on prior knowledge, such information has likely been written out and cited in an older manuscript. Thus, Cited Statements, pieces of text from scientific literature supported by citing other peer-reviewed publications, carry a significant amount of condensed information on prior art. Based on this principle, we propose a literature search service, SciRide Finder (finder.sciride.org), which constrains the search corpus to such Cited Statements only. We demonstrate that Cited Statements can carry different information from that found in titles/abstracts and full text, giving access to literature search results that differ from those of traditional search engines. We further show how presenting search results as a list of Cited Statements allows researchers to easily find information to build an evidence-based narrative for their own manuscripts.
Simmons, Michael; Singhal, Ayush; Lu, Zhiyong
The key question of precision medicine is whether it is possible to find clinically actionable granularity in diagnosing disease and classifying patient risk. The advent of next-generation sequencing and the widespread adoption of electronic health records (EHRs) have provided clinicians and researchers a wealth of data and made possible the precise characterization of individual patient genotypes and phenotypes. Unstructured text, found in biomedical publications and clinical notes, is an important component of genotype and phenotype knowledge. Publications in the biomedical literature provide essential information for interpreting genetic data. Likewise, clinical notes contain the richest source of phenotype information in EHRs. Text mining can render these texts computationally accessible and support information extraction and hypothesis generation. This chapter reviews the mechanics of text mining in precision medicine and discusses several specific use cases, including database curation for personalized cancer medicine, patient outcome prediction from EHR-derived cohorts, and pharmacogenomic research. Taken as a whole, these use cases demonstrate how text mining enables effective utilization of existing knowledge sources and thus promotes increased value for patients and healthcare systems. Text mining is an indispensable tool for translating genotype-phenotype data into effective clinical care that will undoubtedly play an important role in the eventual realization of precision medicine.
Marcos Antonio Mouriño García
Full Text Available Automatic classification of text documents into a set of categories has many applications. Among them, the automatic classification of biomedical literature stands out as an important application for automatic document classification strategies. Biomedical staff and researchers have to deal with a large volume of literature in their daily activities, so a system that allows them to access documents of interest in a simple and effective way would be useful; this requires that the documents be sorted by some criterion, that is, classified. Documents to classify are usually represented following the bag-of-words (BoW) paradigm: features are words in the text, which therefore suffer from synonymy and polysemy, and their weights are based solely on their frequency of occurrence. This paper presents an empirical study of the efficiency of a classifier that leverages encyclopedic background knowledge, concretely Wikipedia, to create bag-of-concepts (BoC) representations of documents, understanding a concept as a “unit of meaning” and thus tackling synonymy and polysemy. In addition, the weighting of concepts is based on their semantic relevance in the text. For the evaluation of the proposal, empirical experiments were conducted with one of the corpora commonly used for evaluating classification and retrieval of biomedical information, OHSUMED, and also with a purpose-built corpus of MEDLINE biomedical abstracts, UVigoMED. The results show that the Wikipedia-based bag-of-concepts representation outperforms the classical bag-of-words representation by up to 157% in the single-label classification problem and by up to 100% in the multi-label problem for the OHSUMED corpus, and by up to 122% in the single-label classification problem and by up to 155% in the multi-label problem for the UVigoMED corpus.
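The BoW-versus-BoC contrast can be sketched in a few lines: with a word-to-concept map, synonymous surface forms collapse into a single feature. The map below is a hand-made toy stand-in for the Wikipedia-derived knowledge the paper actually uses.

```python
from collections import Counter

# Toy word-to-concept map: synonyms point to one canonical concept.
# This stands in for the encyclopedic (Wikipedia) knowledge source.
CONCEPT_MAP = {
    "tumor": "neoplasm", "tumour": "neoplasm", "neoplasm": "neoplasm",
    "heart": "heart", "cardiac": "heart",
}

def bag_of_words(text):
    """Classical BoW: every surface form is its own feature."""
    return Counter(text.lower().split())

def bag_of_concepts(text):
    """BoC: map tokens to concepts so synonyms share one feature."""
    return Counter(CONCEPT_MAP.get(t, t) for t in text.lower().split())

doc = "cardiac tumour resembles heart tumor"
print(bag_of_words(doc))     # five distinct surface-form features
print(bag_of_concepts(doc))  # synonyms collapse: neoplasm=2, heart=2
```

Note that the paper additionally weights concepts by semantic relevance rather than raw frequency; this sketch only shows the synonymy-collapsing step.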
Quan, Changqin; Wang, Meng; Ren, Fuji
The wealth of interaction information provided in biomedical articles motivated the implementation of text mining approaches to automatically extract biomedical relations. This paper presents an unsupervised method based on pattern clustering and sentence parsing to deal with biomedical relation extraction. The pattern clustering algorithm is based on a polynomial kernel method, which identifies interaction words from unlabeled data; these interaction words are then used in relation extraction between entity pairs. Dependency parsing and phrase structure parsing are combined for relation extraction. Based on the semi-supervised KNN algorithm, we extend the proposed unsupervised approach to a semi-supervised approach by combining pattern clustering, dependency parsing and phrase structure parsing rules. We evaluated the approaches on two different tasks: (1) protein-protein interaction extraction, and (2) gene-suicide association extraction. The evaluation of task (1) on the benchmark dataset (AImed corpus) showed that our proposed unsupervised approach outperformed three supervised methods, which are rule-based, SVM-based, and kernel-based, respectively. The proposed semi-supervised approach is superior to existing semi-supervised methods. The evaluation of gene-suicide association extraction on a smaller dataset from the Genetic Association Database and a larger dataset from publicly available PubMed showed that the proposed unsupervised and semi-supervised methods achieved much higher F-scores than a co-occurrence-based method.
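The polynomial kernel underlying the pattern-clustering step is the standard one, K(x, y) = (x · y + c)^d. A minimal sketch follows; the parameter values c and d are illustrative choices, not the paper's.

```python
# Standard polynomial kernel, K(x, y) = (x . y + c)^d.
# Parameters c and d are illustrative, not taken from the paper.

def polynomial_kernel(x, y, c=1.0, d=2):
    dot = sum(xi * yi for xi, yi in zip(x, y))
    return (dot + c) ** d

# Two toy pattern vectors sharing one active feature:
print(polynomial_kernel([1.0, 0.0, 1.0], [1.0, 1.0, 0.0]))  # (1 + 1)^2 = 4.0
```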
In the present research, a corpus of 34,306 biofilm-related documents was collected from the PubMed database, along with non-indexed resources such as books, conference proceedings, and newspaper articles. These were divided into five categories: classification, growth and development, physiology, drug effects, and radiation effects. Each category was further divided into three parts, Journal Title, Abstract Title, and Abstract Text, to make indexing highly specific. Text processing was done using the software Rapid Miner_v5.3, which tokenizes the entire text into words and provides the frequency of each word within the document. The obtained words were normalized using Rapid Miner_v5.3's stop-word removal and stemming commands, which remove stop words and reduce words to their stems. The resulting words were stored in MS Excel 2007 and sorted in decreasing order of frequency using its Sort & Filter command. The words were visualized through networks generated with Cytoscape_v2.7.0. The words thus obtained are highly specific to biofilms, yielding a controlled biofilm vocabulary that could be used for indexing biofilm articles (similar to the MeSH database, which indexes articles for PubMed). The keyword information was stored in a relational database hosted locally on a WAMP_v2.4 (Windows, Apache, MySQL, PHP) server. This biofilm vocabulary will be valuable to researchers studying the biofilm literature, making their searches easy and efficient.
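The tokenize / stop-word / stem / frequency-sort pipeline described above can be approximated in a few lines of standard-library Python. The stop-word list and suffix-stripping rules below are toy examples, far cruder than Rapid Miner's actual operators.

```python
import re
from collections import Counter

# Rough stand-in for the described pipeline: tokenize, drop stop words,
# crudely stem, and sort terms by descending frequency.
# The stop-word set and suffix rules are illustrative only.

STOP_WORDS = {"the", "of", "and", "in", "a", "is", "on", "to", "for"}

def stem(word):
    """Naive suffix stripping; a real stemmer (e.g. Porter) does far more."""
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def term_frequencies(text):
    tokens = re.findall(r"[a-z]+", text.lower())
    stems = [stem(t) for t in tokens if t not in STOP_WORDS]
    return Counter(stems).most_common()

text = "Biofilms grow on surfaces; biofilm growth resists drugs and drug effects."
print(term_frequencies(text))  # "biofilm" and "drug" each appear twice
```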
Wang, James Z; Zhang, Yuanyuan; Dong, Liang; Li, Lin; Srimani, Pradip K; Yu, Philip S
Currently, most people use NCBI's PubMed to search the MEDLINE database, an important bibliographical information source for life science and biomedical information. However, PubMed has some drawbacks that make it difficult to find relevant publications pertaining to users' individual intentions, especially for non-expert users. To ameliorate the disadvantages of PubMed, we developed G-Bean, a graph-based biomedical search engine, to search biomedical articles in the MEDLINE database more efficiently. G-Bean addresses PubMed's limitations with three innovations: (1) Parallel document index creation: a multithreaded index creation strategy is employed to generate the document index for G-Bean in parallel; (2) Ontology-graph-based query expansion: an ontology graph is constructed by merging four major UMLS (Version 2013AA) vocabularies, MeSH, SNOMEDCT, CSP and AOD, to cover all concepts in the National Library of Medicine (NLM) database; a Personalized PageRank algorithm is used to compute concept relevance in this ontology graph and the Term Frequency-Inverse Document Frequency (TF-IDF) weighting scheme is used to re-rank the concepts. The top 500 ranked concepts are selected for expanding the initial query to retrieve more accurate and relevant information; (3) Retrieval and re-ranking of documents based on the user's search intention: after the user selects any article from the existing search results, G-Bean analyzes the user's selections to determine his/her true search intention and then uses more relevant and more specific terms to retrieve additional related articles. The new articles are presented to the user in the order of their relevance to the already selected articles. Performance evaluation with 106 OHSUMED benchmark queries shows that G-Bean returns more relevant results than PubMed does when using these queries to search the MEDLINE database. PubMed could not even return any search result for some OHSUMED queries because it failed to form the appropriate Boolean queries.
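The TF-IDF weighting named in innovation (2) is the standard scheme; a generic sketch over a toy corpus follows (this is not G-Bean's code, and the documents are invented).

```python
import math

# Plain TF-IDF weighting: term frequency within a document times the log
# inverse document frequency across the corpus. Toy documents only.

def tf_idf(term, doc, corpus):
    tf = doc.count(term) / len(doc)
    n_containing = sum(1 for d in corpus if term in d)
    idf = math.log(len(corpus) / n_containing)
    return tf * idf

corpus = [["gene", "mutation", "cancer"],
          ["gene", "expression"],
          ["protein", "mutation"]]

# "cancer" occurs once in a 3-token document and in 1 of 3 documents:
print(tf_idf("cancer", corpus[0], corpus))  # (1/3) * ln(3) ~ 0.366
```

In G-Bean this kind of weight is combined with Personalized PageRank scores over the ontology graph to pick the top 500 expansion concepts.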
Doughty, Emily; Kertesz-Farkas, Attila; Bodenreider, Olivier; Thompson, Gary; Adadey, Asa; Peterson, Thomas; Kann, Maricel G
A major goal of biomedical research in personalized medicine is to find relationships between mutations and their corresponding disease phenotypes. However, most of the disease-related mutational data are currently buried in the biomedical literature in textual form and lack the necessary structure to allow easy retrieval and visualization. We introduce a high-throughput computational method for the identification of relevant disease mutations in PubMed abstracts, applied to prostate cancer (PCa) and breast cancer (BCa) mutations. We developed the extractor of mutations (EMU) tool to identify mutations and their associated genes. We benchmarked EMU against MutationFinder, a tool to extract point mutations from text. Our results show that both methods achieve comparable performance on two manually curated datasets. We also benchmarked EMU's performance for extracting the complete mutational information and phenotype. Remarkably, we show that one of the steps in our approach, a filter based on sequence analysis, increases the precision for that task from 0.34 to 0.59 (PCa) and from 0.39 to 0.61 (BCa). We also show that this high-throughput approach can be extended to other diseases. Our method improves the current status of disease-mutation databases by significantly increasing the number of annotated mutations. We found 51 and 128 mutations manually verified to be related to PCa and BCa, respectively, that are not currently annotated for these cancer types in the OMIM or Swiss-Prot databases. EMU's retrieval performance represents a 2-fold improvement in the number of annotated mutations for PCa and BCa. We further show that our method can benefit from full-text analysis once there is an increase in Open Access availability of full-text articles. Freely available at: http://bioinf.umbc.edu/EMU/ftp.
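Point-mutation mentions of the kind EMU and MutationFinder extract are often written in wNm form (wild-type residue, position, mutant residue, e.g. "A123T"). A minimal single-pattern sketch follows; the real tools use many more patterns and a sequence-based filter on top.

```python
import re

# Minimal wNm point-mutation matcher: one amino-acid letter, a position,
# another amino-acid letter. Illustrative only; far simpler than EMU or
# MutationFinder.

AA = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino-acid one-letter codes
POINT_MUTATION = re.compile(rf"\b([{AA}])(\d+)([{AA}])\b")

text = "The A123T and R273H substitutions were recurrent in tumours."
print(POINT_MUTATION.findall(text))  # [('A', '123', 'T'), ('R', '273', 'H')]
```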
Information on Protein Interactions (PIs) is valuable for biomedical research, but often lies buried in the scientific literature and cannot be readily retrieved. While much progress has been made over the years in extracting PIs from the literature using computational methods, there is a lack of free, public, user-friendly tools for the discovery of PIs. We developed an online tool for the extraction of PI relationships from PubMed abstracts, which we name PIMiner. Protein pairs and the words that describe their interactions are reported by PIMiner so that new interactions can be easily detected within text. Interaction likelihood levels are also reported, and the option to extract only specific types of interactions is provided. The PIMiner server can be accessed through a web browser or remotely through a client's command line. PIMiner can process 50,000 PubMed abstracts in approximately 7 min and thus appears suitable for large-scale processing of biological/biomedical literature. Copyright © 2013 Inderscience Enterprises Ltd.
Perez-Rey, David; Jimenez-Castellanos, Ana; Garcia-Remesal, Miguel; Crespo, Jose; Maojo, Victor
Over the last few decades, the ever-increasing output of scientific publications has led to new challenges to keep up to date with the literature. In the biomedical area, this growth has introduced new requirements for professionals, e.g., physicians, who have to locate the exact papers that they need for their clinical and research work amongst a huge number of publications. Against this backdrop, novel information retrieval methods are even more necessary. While web search engines are widespread in many areas, facilitating access to all kinds of information, additional tools are required to automatically link information retrieved from these engines to specific biomedical applications. In the case of clinical environments, this also means considering aspects such as patient data security and confidentiality or structured contents, e.g., electronic health records (EHRs). In this scenario, we have developed a new tool to facilitate query building to retrieve scientific literature related to EHRs. We have developed CDAPubMed, an open-source web browser extension to integrate EHR features in biomedical literature retrieval approaches. Clinical users can use CDAPubMed to: (i) load patient clinical documents, i.e., EHRs based on the Health Level 7-Clinical Document Architecture Standard (HL7-CDA), (ii) identify relevant terms for scientific literature search in these documents, i.e., Medical Subject Headings (MeSH), automatically driven by the CDAPubMed configuration, which advanced users can optimize to adapt to each specific situation, and (iii) generate and launch literature search queries to a major search engine, i.e., PubMed, to retrieve citations related to the EHR under examination. CDAPubMed is a platform-independent tool designed to facilitate literature searching using keywords contained in specific EHRs. CDAPubMed is visually integrated, as an extension of a widespread web browser, within the standard PubMed interface. It has been tested on a public dataset
Elizabeth Margaret Stovold
Full Text Available A Review of: Peterson, G.M. (2013). Characteristics of retracted open access biomedical literature: a bibliographic analysis. Journal of the American Society for Information Science and Technology, 64(12), 2428-2436. doi: 10.1002/asi.22944 Objective – To investigate whether the rate of retracted articles and citation rates post-retraction in the biomedical literature are comparable across open access, free-to-access, or pay-to-access journals. Design – Citation analysis. Setting – Biomedical literature. Subjects – 160 retracted papers published between 1st January 2001 and 31st December 2010. Methods – For the retracted papers, 100 records were retrieved from the PubMed database and 100 records from the PubMed Central (PMC) open access subset. Records were selected at random, based on the PubMed identifier. Each article was assigned a number based on its accessibility using specific criteria. Articles published in the PMC open access subset were assigned a 2; articles retrieved from PubMed that were freely accessible but did not meet the criteria for open access were assigned a 1; and articles retrieved through PubMed which were pay-to-access were assigned a 0. This allowed articles to be grouped and compared by accessibility. Citation information was collected primarily from the Science Citation Index. Articles for which no citation information was available, and those with a lifetime citation count of 0 (or 1, where the citation came from the retraction statement), were excluded, leaving 160 articles for analysis. Information on the impact factor of the journals was retrieved and the analysis was performed twice; first with the entire set, and second after excluding articles published in journals with an impact factor of 10 or above (14% of the total). The average number of citations per month was used to compare citation rates, and the percentage change in citation rate pre- and post-retraction was calculated. Information was also collected
Background and objectives : Nanotechnology is a new technology which has been increasingly used over the past decade. Due to its great significance, governments tend to invest heavily in research and development on nanotechnology in various sectors and aspects. The purpose of this study was to determine the status of biomedical nanotechnology publications over the past ten years (2000-2010) in Medline/PubMed. Material and Methods : This was a descriptive study. The Medline database wa...
Yoo, Illhoi; Hu, Xiaohua; Song, Il-Yeol
A huge amount of biomedical textual information has been produced and collected in MEDLINE for decades. In order to easily utilize the biomedical information in this free text, document clustering and text summarization together are used as a solution to the text information overload problem. In this paper, we introduce a coherent graph-based semantic clustering and summarization approach for biomedical literature. Our extensive experimental results show that the approach achieves a 45% improvement in cluster quality and a 72% improvement in clustering reliability, in terms of misclassification index, over Bisecting K-means, a leading document clustering approach. In addition, our approach provides a concise but rich text summary in the form of key concepts and sentences. Our coherent biomedical literature clustering and summarization approach, which takes advantage of ontology-enriched graphical representations, significantly improves the quality of document clusters and the understandability of documents through summaries.
Porta, Miquel; Vandenbroucke, Jan P; Ioannidis, John P A; Sanz, Sergio; Fernandez, Esteve; Bhopal, Raj; Morabia, Alfredo; Victora, Cesar; Lopez, Tomàs
There are no analyses of citations to books on epidemiological and statistical methods in the biomedical literature. Such analyses may shed light on how concepts and methods changed while biomedical research evolved. Our aim was to analyze the number and time trends of citations received from biomedical articles by books on epidemiological and statistical methods, and related disciplines. The data source was the Web of Science. The study books were published between 1957 and 2010. The first year of publication of the citing articles was 1945. We identified 125 books that received at least 25 citations. Books first published in 1980-1989 had the highest total and median number of citations per year. Nine of the 10 most cited texts focused on statistical methods. Hosmer & Lemeshow's Applied logistic regression received the highest number of citations and highest average annual rate. It was followed by books by Fleiss, Armitage, et al., Rothman, et al., and Kalbfleisch and Prentice. Fifth in citations per year was Sackett, et al., Evidence-based medicine. The rise of multivariate methods, clinical epidemiology, or nutritional epidemiology was reflected in the citation trends. Educational textbooks, practice-oriented books, books on epidemiological substantive knowledge, and on theory and health policies were much less cited. None of the 25 top-cited books had the theoretical or sociopolitical scope of works by Cochrane, McKeown, Rose, or Morris. Books were mainly cited to reference methods. Books first published in the 1980s continue to be most influential. Older books on theory and policies were rooted in societal and general medical concerns, while the most modern books are almost purely on methods.
Automatic summarization of biomedical literature usually relies on domain knowledge from external sources to build rich semantic representations of the documents to be summarized. In this paper, we investigate the impact of the knowledge source used on the quality of the summaries that are generated. We present a method for representing a set of documents relevant to a given biological entity or topic as a semantic graph of domain concepts and relations. Different graphs are created by using different combinations of ontologies and vocabularies within the UMLS (including GO, SNOMED-CT, HUGO and all available vocabularies in the UMLS) to retrieve domain concepts, and different types of relationships (co-occurrence and semantic relations from the UMLS Metathesaurus and Semantic Network) are used to link the concepts in the graph. The different graphs are next used as input to a summarization system that produces summaries composed of the most relevant sentences from the original documents. Our experiments demonstrate that the choice of the knowledge source used to model the text has a significant impact on the quality of the automatic summaries. In particular, we find that, when summarizing gene-related literature, using GO, SNOMED-CT and HUGO to extract domain concepts results in significantly better summaries than using all available vocabularies in the UMLS. This finding suggests that successful biomedical summarization requires the selection of the appropriate knowledge source, whose coverage, specificity and relations must be in accordance with the type of documents to be summarized. Copyright © 2014 Elsevier Inc. All rights reserved.
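The co-occurrence layer of the semantic graph described above can be sketched in a few lines of Python. The concept labels below are invented placeholders; real concept retrieval from the UMLS is assumed to have happened upstream, and typed semantic relations would be added as additional labeled edges:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_graph(sentence_concepts):
    """Build weighted edges between domain concepts that occur in the
    same sentence. Edge weight is the co-occurrence count; semantic
    relations (e.g. from the UMLS Semantic Network) would be layered
    on top as typed edges."""
    edges = Counter()
    for concepts in sentence_concepts:
        for a, b in combinations(sorted(concepts), 2):
            edges[(a, b)] += 1
    return edges

# Toy input: each sentence already mapped to UMLS-style concept labels.
doc = [
    {"BRCA1", "breast neoplasm"},
    {"BRCA1", "breast neoplasm", "DNA repair"},
    {"DNA repair", "cell cycle"},
]
graph = cooccurrence_graph(doc)
```

Sorting the concepts before pairing makes each undirected edge a canonical tuple, so repeated co-occurrences accumulate on one key.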
Moradi, Milad; Ghadiri, Nasser
Automatic text summarization tools can help users in the biomedical domain to access information efficiently from a large volume of scientific literature and other sources of text documents. In this paper, we propose a summarization method that combines itemset mining and domain knowledge to construct a concept-based model and to extract the main subtopics from an input document. Our summarizer quantifies the informativeness of each sentence using the support values of itemsets appearing in the sentence. To address the concept-level analysis of text, our method initially maps the original document to biomedical concepts using the Unified Medical Language System (UMLS). Then, it discovers the essential subtopics of the text using a data mining technique, namely itemset mining, and constructs the summarization model. The employed itemset mining algorithm extracts a set of frequent itemsets containing correlated and recurrent concepts of the input document. The summarizer selects the most related and informative sentences and generates the final summary. We evaluate the performance of our itemset-based summarizer using the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) metrics, performing a set of experiments. We compare the proposed method with GraphSum, TexLexAn, SweSum, SUMMA, AutoSummarize, the term-based version of the itemset-based summarizer, and two baselines. The results show that the itemset-based summarizer performs better than the compared methods. The itemset-based summarizer achieves the best scores for all the assessed ROUGE metrics (R-1: 0.7583, R-2: 0.3381, R-W-1.2: 0.0934, and R-SU4: 0.3889). We also perform a set of preliminary experiments to specify the best value for the minimum support threshold used in the itemset mining algorithm. The results demonstrate that the value of this threshold directly affects the accuracy of the summarization model, such that a significant decrease can be observed in the performance of summarization due to
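A minimal sketch of the itemset-based scoring idea follows, assuming sentences have already been mapped to sets of UMLS-style concepts. The brute-force enumeration here stands in for a real itemset mining algorithm such as Apriori, and the concept names and threshold are illustrative only:

```python
from itertools import combinations

def frequent_itemsets(sentence_concepts, min_support=0.5, max_size=2):
    """Return itemsets (up to max_size concepts) whose support -- the
    fraction of sentences containing every concept in the set -- meets
    min_support. Brute force; a real miner would prune the search space."""
    n = len(sentence_concepts)
    all_concepts = sorted(set().union(*sentence_concepts))
    support = {}
    for size in range(1, max_size + 1):
        for itemset in combinations(all_concepts, size):
            count = sum(1 for s in sentence_concepts if set(itemset) <= s)
            if count / n >= min_support:
                support[itemset] = count / n
    return support

def score_sentence(concepts, support):
    """Informativeness = sum of supports of the frequent itemsets covered."""
    return sum(sup for itemset, sup in support.items() if set(itemset) <= concepts)

# Toy 'document': each sentence reduced to a set of concept identifiers.
sentences = [
    {"insulin", "glucose"},
    {"insulin", "glucose", "receptor"},
    {"receptor", "signaling"},
]
support = frequent_itemsets(sentences, min_support=0.5)
scores = [score_sentence(s, support) for s in sentences]
```

The summarizer would then select the top-scoring sentences until the target summary length is reached.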
Yates, Elliot J; Dixon, Louise C
Optimal ranking of literature importance is vital in overcoming article overload. Existing ranking methods are typically based on raw citation counts, giving a sum of 'inbound' links with no consideration of citation importance. PageRank, an algorithm originally developed by the search engine Google for ranking webpages, could potentially be adapted to bibliometrics to quantify the relative importance weightings of a citation network. This article seeks to validate such an approach on the freely available PubMed Central open access subset (PMC-OAS) of biomedical literature. On-demand cloud computing infrastructure was used to extract a citation network from over 600,000 full-text PMC-OAS articles. PageRanks and citation counts were calculated for each node in this network. PageRank is highly correlated with citation count (R = 0.905). PageRank can be trivially computed on commodity cluster hardware and is linearly correlated with citation count. Given its putative benefits in quantifying relative importance, we suggest it may enrich the citation network, thereby overcoming the existing inadequacy of citation counts alone. We thus suggest PageRank as a feasible supplement to, or replacement of, existing bibliometric ranking methods.
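The core computation is small enough to sketch in pure Python. The power-iteration version below is a standard formulation, not the study's implementation; the node names and damping factor (0.85) are illustrative assumptions:

```python
def pagerank(links, damping=0.85, iterations=100):
    """Power-iteration PageRank. links maps each node to the list of
    nodes it cites; rank mass flows along citations each round."""
    nodes = list(links)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        new_rank = {node: (1.0 - damping) / n for node in nodes}
        for node, cited in links.items():
            if cited:  # distribute this node's rank over its outbound citations
                share = damping * rank[node] / len(cited)
                for target in cited:
                    new_rank[target] += share
            else:      # dangling node: spread its rank uniformly
                for target in nodes:
                    new_rank[target] += damping * rank[node] / n
        rank = new_rank
    return rank

# Toy citation network: paper D is cited by nobody, C by both B and D.
citations = {"A": ["B"], "B": ["C"], "C": ["A"], "D": ["C"]}
ranks = pagerank(citations)
```

Unlike a raw citation count, the rank a paper passes on depends on its own rank, which is what lets the measure weight citation importance rather than merely sum inbound links.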
Miller, Casey W.; Belyea, Dustin; Chabot, Michelle; Messina, Troy
A method is described to empower students to efficiently perform general and specific literature searches using online resources [Miller et al., Am. J. Phys. 77, 1112 (2009)]. The method was tested on multiple groups, including undergraduate and graduate students with varying backgrounds in scientific literature searches. Students involved in this study showed marked improvement in their awareness of how and where to find scientific information. Repeated exposure to literature searching methods appears worthwhile, starting early in the undergraduate career, and even in graduate school orientation.
Lee, Dong-Gi; Shin, Hyunjung
Recently, research on human disease networks has advanced and become an aid in understanding the relationships between various diseases. In most disease networks, however, the relationship between diseases has been represented simply as an association. This representation makes it difficult to identify prior diseases and their influence on posterior diseases. In this paper, we propose a causal disease network that implements disease causality through text mining on biomedical literature. To identify the causality between diseases, the proposed method includes two schemes: the first is the lexicon-based causality term strength, which provides the causal strength of a variety of causality terms based on lexicon analysis. The second is the frequency-based causality strength, which determines the direction and strength of causality based on document and clause frequencies in the literature. We applied the proposed method to 6,617,833 PubMed articles and chose 195 diseases to construct a causal disease network. From all possible pairs of disease nodes in the network, 1011 causal pairs of 149 diseases were extracted. The resulting network was compared with that of a previous study. In terms of both coverage and quality, the proposed method showed outperforming results: it determined 2.7 times more causalities and showed higher correlation with associated diseases than the existing method. The novelty of this research lies in circumventing the limitations of time and cost that would be incurred by testing all possible causalities in biological experiments, and in advancing text mining techniques by defining the concept of causality term strength.
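The lexicon-based scheme can be illustrated with a toy clause matcher. The causality terms and their strengths below are invented placeholders (the paper derives such weights from lexicon analysis), and a real system would operate on parsed clauses rather than raw regex matches:

```python
import re

# Hypothetical lexicon: causality terms with hand-assigned strengths.
CAUSAL_TERMS = {"causes": 1.0, "induces": 0.9, "leads to": 0.8,
                "is associated with": 0.3}

def clause_causality(clause, disease_a, disease_b):
    """Return (direction, strength) if the clause asserts a causal link
    between the two diseases, else None. Direction follows the word
    order 'X <term> Y', i.e. X is the prior disease."""
    for term, strength in CAUSAL_TERMS.items():
        if re.search(rf"{disease_a}\s+{term}\s+{disease_b}", clause, re.IGNORECASE):
            return (f"{disease_a} -> {disease_b}", strength)
        if re.search(rf"{disease_b}\s+{term}\s+{disease_a}", clause, re.IGNORECASE):
            return (f"{disease_b} -> {disease_a}", strength)
    return None

result = clause_causality("Obesity causes diabetes in many patients.",
                          "obesity", "diabetes")
```

The frequency-based scheme would then aggregate such clause-level hits over the whole corpus to decide the dominant direction and overall strength for each disease pair.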
Automatic extraction of protein-protein interaction (PPI) pairs from biomedical literature is a widely examined task in biological information extraction. Currently, many kernel-based approaches, such as the linear kernel, tree kernel, graph kernel, and combinations of multiple kernels, have achieved promising results in the PPI task. However, most of these kernel methods fail to capture the semantic relation information between two entities. In this paper, we present a special type of tree kernel for PPI extraction which exploits both syntactic (structural) and semantic vector information, known as the Distributed Smoothed Tree kernel (DSTK). DSTK comprises distributed trees with syntactic information along with distributional semantic vectors representing the semantic information of the sentences or phrases. To generate a robust machine learning model, a composition of a feature-based kernel and DSTK was combined using an ensemble support vector machine (SVM). Five different corpora (AIMed, BioInfer, HPRD50, IEPA, and LLL) were used for evaluating the performance of our system. Experimental results show that our system achieves a better f-score on all five corpora compared to other state-of-the-art systems.
Matarese, Valerie; Shashok, Karen
A team of stakeholders in biomedical publishing recently proposed a set of core competencies for journal editors, as a resource that can inform training programs for editors and ultimately improve the quality of the biomedical research literature. This initiative, still in its early stages, would benefit from additional sources of expert information. Based on our experiences as authors' editors, we offer two suggestions on how to strengthen these competencies so that they better respond to the needs of readers and authors - the main users of and contributors to research journals. First, journal editors should be able to ensure that authors are given useful feedback on the language and writing in submitted manuscripts, beyond a (possibly incorrect) blanket judgement of whether the English is "acceptable" or not. Second, journal editors should be able to deal effectively with inappropriate text re-use and plagiarism. These additional competencies would, we believe, be valued by other stakeholders in biomedical research publication as markers of editorial quality.
Plaza, Laura; Carrillo-de-Albornoz, Jorge
The position of a sentence in a document has been traditionally considered an indicator of the relevance of the sentence, and therefore it is frequently used by automatic summarization systems as an attribute for sentence selection. Sentences close to the beginning of the document are supposed to deal with the main topic and thus are selected for the summary. This criterion has been shown to be very effective when summarizing some types of documents, such as news items. However, this property is not likely to be found in other types of documents, such as scientific articles, where other positional criteria may be preferred. The purpose of the present work is to study the utility of different positional strategies for biomedical literature summarization. We have evaluated three different positional strategies: (1) awarding the sentences at the beginning of the document, (2) preferring those at the beginning and end of the document, and (3) weighting the sentences according to the section in which they appear. To this end, we have implemented two summarizers, one based on semantic graphs and the other based on concept frequencies, and evaluated the summaries they produce when combined with each of the positional strategies above using ROUGE metrics. Our results indicate that it is possible to improve the quality of the summaries by weighting the sentences according to the section in which they appear (≈17% improvement in ROUGE-2 for the graph-based summarizer and ≈20% for the frequency-based summarizer), and that the sections containing the most salient information are the Methods and Materials and the Discussion and Results sections. It has been found that the use of traditional positional criteria that award sentences at the beginning and/or the end of the document is not helpful when summarizing scientific literature. In contrast, a more appropriate strategy is one that weights sentences according to the section in which they appear.
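The section-weighting strategy amounts to scaling each sentence's content-based score by a per-section factor. The weights below are invented for illustration; in practice they would be tuned on a development set, with Methods/Results and Discussion weighted highest:

```python
# Hypothetical section weights favouring Methods/Results and Discussion.
SECTION_WEIGHTS = {
    "introduction": 0.5,
    "methods": 1.0,
    "results": 1.0,
    "discussion": 0.9,
    "conclusion": 0.7,
}

def positional_score(base_score, section):
    """Scale a sentence's content-based relevance by its section weight;
    unknown sections fall back to a neutral-low default."""
    return base_score * SECTION_WEIGHTS.get(section.lower(), 0.5)

# Two sentences with equal content scores rank differently by section.
ranked = sorted(
    [(positional_score(0.8, "Methods"), "m1"),
     (positional_score(0.8, "Introduction"), "i1")],
    reverse=True,
)
```

The summarizer then picks sentences in descending order of the combined score, so a Methods sentence outranks an equally relevant Introduction sentence.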
Eka Wardhani S.
Evaluation of collection utilization is needed to determine how much a collection is accessed and used by its users. The Ebsco Biomedical Reference Collection (Ebsco BRC) is a journal database built on an access paradigm. This evaluation of journal use in the Ebsco BRC database is a study of library collection utilization conducted at UPIK (Unit Perpustakaan dan Informatika Kedokteran, the Library and Medical Informatics Unit) of the Faculty of Medicine, Universitas Gadjah Mada, Yogyakarta. The study aims to determine the level of journal usage and utilization by the academic community at FK UGM. The evaluation used a descriptive method with quantitative and qualitative data approaches. The instruments used were a questionnaire and usage statistics reports. The results show that the usage rate of the journals, by available title, is high (97.96%), but access has not been maximized: on average, 25% of the journals are accessed each day. From the usage statistics reports, 12 journal titles were accessed more than 1000 times, making them the journals most frequently accessed by users. Based on these results, the researcher recommends that the subscription to the Ebsco database collection be continued, but that UPIK work to improve collection promotion, accessibility, facilities, and user guidance for searching the database so that it can be used to its full potential. Keywords: Collection Evaluation, Ebsco
Joubert, M; Fieschi, M; Robert, J J; Volot, F; Fieschi, D
The aim of the project ARIANE is to model and implement seamless, natural, and easy-to-use interfaces with various kinds of heterogeneous biomedical information databases. A conceptual model of some of the Unified Medical Language System (UMLS) knowledge sources has been developed to help end users to query information databases. A query is represented by a conceptual graph that translates the deep structure of an end-user's interest in a topic. A computational model exploits this conceptual model to interactively build a query represented as a query graph. A query graph is then matched to the data graph built with data issued from each record of a database by means of a pattern-matching (projection) rule that applies to conceptual graphs. Prototypes have been implemented to test the feasibility of the model with different kinds of information databases. Three cases are studied: 1) information in records is structured according to the UMLS knowledge sources; 2) information is able to be structured without error in the frame of the UMLS knowledge; 3) information cannot be structured. In each case the pattern-matching is processed by the projection rule according to the structure of information that has been implemented in the databases. The theory of conceptual graphs provides a homogeneous and powerful formalism able to represent concepts, instances of concepts in medical contexts, and associations by means of relationships, and to represent data at different levels of detail. The conceptual-graphs formalism allows powerful capabilities to operate a semantic integration of information databases using the UMLS knowledge sources.
Kibbe, Warren A; Arze, Cesar; Felix, Victor; Mitraka, Elvira; Bolton, Evan; Fu, Gang; Mungall, Christopher J; Binder, Janos X; Malone, James; Vasant, Drashtti; Parkinson, Helen; Schriml, Lynn M
The current version of the Human Disease Ontology (DO) (http://www.disease-ontology.org) database expands the utility of the ontology for the examination and comparison of genetic variation, phenotype, protein, drug and epitope data through the lens of human disease. DO is a biomedical resource of standardized common and rare disease concepts with stable identifiers organized by disease etiology. The content of DO has had 192 revisions since 2012, including the addition of 760 terms. Thirty-two percent of all terms now include definitions. DO has expanded the number and diversity of research communities and community members by 50+ during the past two years. These community members actively submit term requests, coordinate biomedical resource disease representation and provide expert curation guidance. Since the DO 2012 NAR paper, there have been hundreds of term requests and a steady increase in the number of DO listserv members, twitter followers and DO website usage. DO is moving to a multi-editor model utilizing Protégé to curate DO in web ontology language. This will enable closer collaboration with the Human Phenotype Ontology, EBI's Ontology Working Group, Mouse Genome Informatics and the Monarch Initiative among others, and enhance DO's current asserted view and multiple inferred views through reasoning. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
K, Rakhi; Tuwani, Rudraksh; Mukherjee, Jagriti; Bagler, Ganesh
Spices and herbs are key dietary ingredients used across cultures worldwide. Beyond their use as flavoring and coloring agents, the popularity of these aromatic plant products in culinary preparations has been attributed to their antimicrobial properties. The last few decades have witnessed an exponential growth of biomedical literature investigating the impact of spices and herbs on health, presenting an opportunity to mine for patterns from empirical evidence. Systematic investigation of empiri...
Wright, Judy M; Cottrell, David J; Mir, Ghazala
To determine the optimal databases to search for studies of faith-sensitive interventions for treating depression. We examined 23 health, social science, religious, and grey literature databases searched for an evidence synthesis. Databases were prioritized by yield of (1) search results, (2) potentially relevant references identified during screening, (3) included references contained in the synthesis, and (4) included references that were available in the database. We assessed the impact of databases beyond MEDLINE, EMBASE, and PsycINFO by their ability to supply studies identifying new themes and issues. We identified pragmatic workload factors that influence database selection. PsycINFO was the best performing database within all priority lists. ArabPsyNet, CINAHL, Dissertations and Theses, EMBASE, Global Health, Health Management Information Consortium, MEDLINE, PsycINFO, and Sociological Abstracts were essential for our searches to retrieve the included references. Citation tracking activities and the personal library of one of the research teams made significant contributions of unique, relevant references. Religion studies databases (Am Theo Lib Assoc, FRANCIS) did not provide unique, relevant references. Literature searches for reviews and evidence syntheses of religion and health studies should include social science, grey literature, non-Western databases, personal libraries, and citation tracking activities. Copyright © 2014 Elsevier Inc. All rights reserved.
Smedley, Damian; Schofield, Paul; Chen, Chao-Kung; Aidinis, Vassilis; Ainali, Chrysanthi; Bard, Jonathan; Balling, Rudi; Birney, Ewan; Blake, Andrew; Bongcam-Rudloff, Erik; Brookes, Anthony J.; Cesareni, Gianni; Chandras, Christina; Eppig, Janan; Flicek, Paul; Gkoutos, Georgios; Greenaway, Simon; Gruenberger, Michael; Heriche, Jean-Karim; Lyall, Andrew; Mallon, Ann-Marie; Muddyman, Dawn; Reisinger, Florian; Ringwald, Martin; Rosenthal, Nadia; Schughart, Klaus; Swertz, Morris; Thorisson, Gudmundur A.; Zouberakis, Michael; Hancock, John M.
The recent explosion of biological data and the concomitant proliferation of distributed databases make it challenging for biologists and bioinformaticians to discover the best data resources for their needs, and the most efficient way to access and use them. Despite a rapid acceleration in uptake
Guo, Yufan; Silins, Ilona; Stenius, Ulla; Korhonen, Anna
Techniques that are capable of automatically analyzing the information structure of scientific articles could be highly useful for improving information access to biomedical literature. However, most existing approaches rely on supervised machine learning (ML) and substantial labeled data that are expensive to develop and apply to different sub-fields of biomedicine. Recent research shows that minimal supervision is sufficient for fairly accurate information structure analysis of biomedical abstracts. However, is it realistic for full articles given their high linguistic and informational complexity? We introduce and release a novel corpus of 50 biomedical articles annotated according to the Argumentative Zoning (AZ) scheme, and investigate active learning with one of the most widely used ML models, Support Vector Machines (SVM), on this corpus. Additionally, we introduce two novel applications that use AZ to support real-life literature review in biomedicine via question answering and summarization. We show that active learning with SVM trained on 500 labeled sentences (6% of the corpus) performs surprisingly well with an accuracy of 82%, just 2% lower than fully supervised learning. In our question answering task, biomedical researchers find relevant information significantly faster from AZ-annotated than unannotated articles. In the summarization task, sentences extracted from particular zones are significantly more similar to gold standard summaries than those extracted from particular sections of full articles. These results demonstrate that active learning of full articles' information structure is indeed realistic and the accuracy is high enough to support real-life literature review in biomedicine. The annotated corpus, our AZ classifier and the two novel applications are available at http://www.cl.cam.ac.uk/yg244/12bioinfo.html
Zhu, Changtai; Jiang, Ting; Cao, Hao; Sun, Wenguang; Chen, Zhong; Liu, Jinming
The meta-analysis is regarded as an important form of evidence for scientific decision-making. The ISI Web of Science database collects a large number of high-quality publications, including meta-analyses, so it is important to understand the general characteristics of the meta-analysis literature in order to outline its perspective. In this study, we summarized and clarified some features of these publications in the ISI Web of Science database. We retrieved the meta-analysis publications in the database of ISI Web of Science, including SCI-E, SSCI, A&HCI, CPCI-S, CPCI-SSH, CCR-E, and IC. The annual growth rate, literature category, language, funding, index citations, agencies, and countries/territories of the meta-analysis publications were analyzed, respectively. A total of 95,719 records, which account for 0.38% (99% CI: 0.38%-0.39%) of all publications, were found in the database. From 1997 to 2012, the annual growth rate of meta-analysis publications was 18.18%. The publications spanned many categories, languages, funding sources, citations, publishing agencies, and countries/territories. Interestingly, the citation frequencies of meta-analyses were significantly higher than those of other publication types such as multi-centre studies, randomized controlled trials, cohort studies, case-control studies, and case reports (P<0.0001). The increasing numbers, extensive global influence, and high citation rates reveal that the meta-analysis has become more and more prominent in recent years. In the future, to promote the validity of meta-analyses, the CONSORT and PRISMA standards should be continuously popularized in the field of evidence-based medicine.
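The headline proportion above can be reproduced with a normal-approximation confidence interval. The total record count below is only an illustrative back-calculation (95,719 at roughly 0.38% implies on the order of 25 million records), not a figure reported by the study:

```python
import math

def proportion_ci(successes, total, z=2.576):
    """Normal-approximation confidence interval for a proportion
    (z = 2.576 gives a 99% interval, as used in the study above)."""
    p = successes / total
    se = math.sqrt(p * (1 - p) / total)
    return p, p - z * se, p + z * se

# Illustrative denominator only: 95,719 meta-analysis records assumed
# against roughly 25 million records overall.
p, lo, hi = proportion_ci(95719, 25_000_000)
```

With a denominator this large the standard error is tiny, which is why the reported interval (0.38%-0.39%) is so narrow.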
Simon, Jonathan; Dos Santos, Mariana; Fielding, James; Smith, Barry
The central hypothesis underlying this communication is that the methodology and conceptual rigor of a philosophically inspired formal ontology can bring significant benefits in the development and maintenance of application ontologies [A. Flett, M. Dos Santos, W. Ceusters, Some Ontology Engineering Procedures and their Supporting Technologies, EKAW2002, 2003]. This hypothesis has been tested in the collaboration between Language and Computing (L&C), a company specializing in software for supporting natural language processing especially in the medical field, and the Institute for Formal Ontology and Medical Information Science (IFOMIS), an academic research institution concerned with the theoretical foundations of ontology. In the course of this collaboration L&C's ontology, LinKBase, which is designed to integrate and support reasoning across a plurality of external databases, has been subjected to a thorough auditing on the basis of the principles underlying IFOMIS's Basic Formal Ontology (BFO) [B. Smith, Basic Formal Ontology, 2002. http://ontology.buffalo.edu/bfo]. The goal is to transform a large terminology-based ontology into one with the ability to support reasoning applications. Our general procedure has been the implementation of a meta-ontological definition space in which the definitions of all the concepts and relations in LinKBase are standardized in the framework of first-order logic. In this paper we describe how this principles-based standardization has led to a greater degree of internal coherence of the LinKBase structure, and how it has facilitated the construction of mappings between external databases using LinKBase as translation hub. We argue that the collaboration here described represents a new phase in the quest to solve the so-called "Tower of Babel" problem of ontology integration [F. Montayne, J. Flanagan, Formal Ontology: The Foundation for Natural Language Processing, 2003. http://www.landcglobal.com/].
Zhang, Han; Fiszman, Marcelo; Shin, Dongwook
Background: Graph-based notions are increasingly used in biomedical data mining and knowledge discovery tasks. In this paper, we present a clique-clustering method to automatically summarize graphs of semantic predications produced from PubMed citations (titles and abstracts). Results: Sem...
Yamamoto, Yasunori; Yamaguchi, Atsuko; Yonezawa, Akinori
There is a growing need for efficient and integrated access to databases provided by diverse institutions. Using a linked data design pattern allows the diverse data on the Internet to be linked effectively and accessed efficiently by computers. Previously, we developed the Allie database, which stores pairs of abbreviations and long forms (LFs, or expanded forms) used in the life sciences. LFs define the semantics of abbreviations, and Allie provides a Web-based search service for researchers to look up the LF of an unfamiliar abbreviation. This service encounters two problems. First, it does not display each LF's definition, which could help the user to disambiguate and learn the abbreviations more easily. Furthermore, there are too many LFs for us to prepare a full dictionary from scratch. On the other hand, DBpedia has made the contents of Wikipedia available in the Resource Description Framework (RDF), which is expected to contain a significant number of entries corresponding to LFs. Therefore, linking the Allie LFs to DBpedia entries may present a solution to Allie's problems. This requires a method that is capable of matching large numbers of string pairs within a reasonable period of time because Allie and DBpedia are frequently updated. We built a Linked Open Data set that links LFs to DBpedia titles by applying key collision methods (i.e., fingerprint and n-gram fingerprint) to their literals, which are simple approximate string-matching methods. In addition, we used UMLS resources to normalise the life science terms. As a result, combining the key collision methods with the domain-specific resources performed best, and 44,027 LFs have links to DBpedia titles. We manually evaluated the accuracy of the string matching by randomly sampling 1200 LFs, and our approach achieved an F-measure of 0.98. In addition, our experiments revealed the following. (1) Performance was similar regardless of the frequency of the LFs in MEDLINE. (2) There is a
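The two key collision methods named above can be sketched as follows. This is a simplified rendition of the OpenRefine-style fingerprint and n-gram fingerprint keys, not the authors' exact implementation (here punctuation is mapped to spaces so hyphenated variants collide):

```python
import re
import unicodedata

def fingerprint(text):
    """Fingerprint key: strip accents, lowercase, map punctuation to
    spaces, then join the sorted unique tokens. Strings with the same
    key are treated as matches (a 'key collision')."""
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    text = re.sub(r"[^\w\s]", " ", text.lower())
    return " ".join(sorted(set(text.split())))

def ngram_fingerprint(text, n=2):
    """Character n-gram variant: sorted unique n-grams of the cleaned
    string, more tolerant of small spelling differences."""
    cleaned = re.sub(r"[^\w]", "", text.lower())
    grams = {cleaned[i:i + n] for i in range(len(cleaned) - n + 1)}
    return "".join(sorted(grams))

# Two long-form variants collide on the same key, so they can be linked.
a = fingerprint("Magnetic Resonance Imaging")
b = fingerprint("magnetic-resonance imaging")
```

Because key computation is linear in the input and matching reduces to a hash join on keys, this scales to the millions of LF/DBpedia-title pairs mentioned above.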
Sherwin, Trevor; Gilhotra, Amardeep K
Literature databases are an ever-expanding resource available to the field of medical sciences. Understanding how to use such databases efficiently is critical for those involved in research. However, for the uninitiated, getting started is a major hurdle to overcome and for the occasional user, the finer points of database searching remain an unacquired skill. In the fifth and final article in this series aimed at those embarking on ophthalmology and vision science research, we look at how the beginning researcher can start to use literature databases and, by using a stepwise approach, how they can optimize their use. This instructional paper gives a hypothetical example of a researcher writing a review article and how he or she acquires the necessary scientific literature for the article. A prototype search of the Medline database is used to illustrate how even a novice might swiftly acquire the skills required for a medium-level search. It provides examples and key tips that can increase the proficiency of the occasional user. Pitfalls of database searching are discussed, as are the limitations of which the user should be aware.
Wang, Liqin; Haug, Peter J; Del Fiol, Guilherme
Mining disease-specific associations from existing knowledge resources can be useful for building disease-specific ontologies and supporting knowledge-based applications. Many association mining techniques have been exploited. However, a challenge remains when the extracted associations contain much noise. It is unreliable to determine the relevance of an association by simply setting up arbitrary cut-off points on multiple scores of relevance, and it would be expensive to ask human experts to manually review a large number of associations. We propose that machine-learning-based classification can be used to separate the signal from the noise, and to provide a feasible approach to create and maintain disease-specific vocabularies. We initially focused on disease-medication associations for the purpose of simplicity. For a disease of interest, we extracted potentially treatment-related drug concepts from biomedical literature citations and from a local clinical data repository. Each concept was associated with multiple measures of relevance (i.e., features) such as frequency of occurrence. For the purpose of machine learning, we formed nine datasets for three diseases, with each disease having two single-source datasets and one combining the previous two. All the datasets were labeled using existing reference standards. Thereafter, we conducted two experiments: (1) to test if adding features from the clinical data repository would improve the performance of classification achieved using features from the biomedical literature only, and (2) to determine if classifier(s) trained with known medication-disease data sets would be generalizable to new disease(s). Simple logistic regression and LogitBoost were the two classifiers identified as the preferred models, respectively, for the biomedical-literature datasets and combined datasets. The performance of the classification using combined features provided significant improvement beyond that using
Nadkarni, Prakash M; Parikh, Chirag R
Numerous biomedical software applications access databases maintained by the US National Center for Biotechnology Information (NCBI). To ease software automation, NCBI provides a powerful but complex Web-service-based programming interface, eUtils. This paper describes a toolset that simplifies eUtils use through a graphical front-end that can be used by non-programmers to construct data-extraction pipelines. The front-end relies on a code library that provides high-level wrappers around eUtils functions, and which is distributed as open-source, allowing customization and enhancement by individuals with programming skills. We initially created an application that queried eUtils to retrieve nephrology-specific biomedical literature citations for a user-definable set of genes. We later augmented the application code to create a general-purpose library that accesses eUtils capability as individual functions that could be combined into user-defined pipelines. The toolset's use is illustrated with an application that serves as a front-end to the library and can be used by non-programmers to construct user-defined pipelines. The operation of the library is illustrated for the literature-surveillance application, which serves as a case-study. An overview of the library is also provided. The library simplifies use of the eUtils service by operating at a higher level, and also transparently addresses robustness issues that would need to be individually implemented otherwise, such as error recovery and prevention of overloading of the eUtils service.
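The first stage of such a pipeline is composing an eUtils request. The sketch below builds an ESearch URL with the standard stdlib tools; the endpoint and parameter names (db, term, retmax) are documented by NCBI, while the gene symbol and MeSH filter are illustrative pipeline inputs (no network call is made here):

```python
from urllib.parse import urlencode

# Base endpoint documented by NCBI E-utilities.
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def esearch_url(db, term, retmax=20):
    """Compose an ESearch request URL. The XML response lists matching
    UIDs, which a later EFetch step in the pipeline would retrieve."""
    params = urlencode({"db": db, "term": term, "retmax": retmax})
    return f"{EUTILS}/esearch.fcgi?{params}"

# Illustrative query: nephrology-related citations for one gene.
url = esearch_url("pubmed", "NPHS2[Gene] AND nephrology[MeSH Terms]")
```

A wrapper library such as the one described above would add the robustness layers on top of this call: retry on transient errors and rate limiting to avoid overloading the eUtils service.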
Groth, Paul; Cox, Jessica
Robotic labs, in which experiments are carried out entirely by robots, have the potential to provide a reproducible and transparent foundation for performing basic biomedical laboratory experiments. In this article, we investigate whether these labs could be applicable in current experimental practice. We do this by text mining 1,628 papers for occurrences of methods that are supported by commercial robotic labs. Using two different concept recognition tools, we find that 86%–89% of the papers have at least one of these methods. This and our other results provide indications that robotic labs can serve as the foundation for performing many lab-based experiments.
Risal, S; Prasad, H N
Vision Science is considered a well-developed discipline in Nepal, with much research currently in progress. Though the results of these endeavors are published in scientific journals, formal citation analyses have not been performed on works contributed by Nepalese vision scientists. To study Nepal's contribution to vision science literature in the database "Web of Science". The primary data source of this paper was Web of Science, a citation database of Thomson Reuters. All bibliometric analyses were performed with the help of the Web of Science analysis service. In the current database of vision science literature, Nepalese authors contributed 112 publications to Web of Science, 95 of which were original articles. Pokharel GP had the highest number of citations among contributing authors of Nepal. Hennig A contributed the highest number of articles as a first author. The Nepal Eye Hospital contributed the highest number of articles as an institution to the field of Vision Science. Currently, only two journals from Nepal, including the Journal of Nepal Medical Association (JNMA), are indexed in the Web of Science database (Sieving, 2012). To evaluate the total productivity of vision science literature from Nepal, total publication counts from national journals and articles indexed in other databases such as PubMed and Scopus must also be considered. © NEPjOPH.
Wenban Adrian B
Background The misuse of the title 'chiropractor' and term 'chiropractic manipulation', in relation to injury associated with cervical spine manipulation, have previously been reported in the peer-reviewed literature. The objectives of this study were to: (1) prospectively monitor the peer-reviewed literature for papers reporting an association between chiropractic, or chiropractic manipulation, and injury; (2) contact lead authors of papers that report such an association in order to determine the basis upon which the title 'chiropractor' and/or term 'chiropractic manipulation' was used; (3) document the outcome of submission of letters to the editors of journals wherein the title 'chiropractor', and/or term 'chiropractic manipulation', had been misused and resulted in the over-reporting of chiropractic-induced injury. Methods One electronic database (PubMed) was monitored prospectively, via monthly PubMed searches, during a 12-month period (June 2003 to May 2004). Once relevant papers were located, they were reviewed. If the qualifications and/or profession of the care provider/s were not apparent, an attempt was made to confirm them via direct e-mail communication with the principal researcher of each respective paper. A letter was then sent to the editor of each involved journal. Results A total of twenty-four different cases, spread across six separate publications, were located via the monthly PubMed searches. All twenty-four cases took place in one of two European countries. The six publications consisted of four case reports, each containing one patient, one case series, involving twenty relevant cases, and a secondary report that pertained to one of the four case reports. In each of the six publications the authors suggest the care provider was a chiropractor and that each patient received chiropractic manipulation of the cervical spine prior to developing symptoms suggestive of traumatic injury. In two of the four case reports
Deng, Hong-yong; Xu, Ji
To summarize and analyze the literature on acupuncture and moxibustion indexed in the Medline database from 2000 to 2007, articles in the Medline PubMed database were retrieved online with different retrieval strategies, combined with manual statistical analysis of the literature data. The results indicate that among a total of 4 041 articles about acupuncture and moxibustion retrieved from Medline over the 8 years, 628 were published in Chinese and 3 071 in English. These articles appeared in 836 journals, including 31 Chinese journals. Eight hundred and forty-one articles were from mainland China; Beijing (176), Shanghai (136) and Tianjin (43) were the top 3 provinces and cities by number of articles. This shows that most acupuncture and moxibustion articles indexed in Medline are published in foreign journals, mainly in English, but this situation has changed greatly since Zhongguo Zhenjiu and other journals were indexed by Medline in 2006.
Luo, Jake; Wu, Min; Gopukumar, Deepika; Zhao, Yiqing
Big data technologies are increasingly used for biomedical and health-care informatics research. Large amounts of biological and clinical data have been generated and collected at an unprecedented speed and scale. For example, the new generation of sequencing technologies enables the processing of billions of DNA sequence data per day, and the application of electronic health records (EHRs) is documenting large amounts of patient data. The cost of acquiring and analyzing biomedical data is expected to decrease dramatically with the help of technology upgrades, such as the emergence of new sequencing machines, the development of novel hardware and software for parallel computing, and the extensive expansion of EHRs. Big data applications present new opportunities to discover new knowledge and create novel methods to improve the quality of health care. The application of big data in health care is a fast-growing field, with many new discoveries and methodologies published in the last five years. In this paper, we review and discuss big data application in four major biomedical subdisciplines: (1) bioinformatics, (2) clinical informatics, (3) imaging informatics, and (4) public health informatics. Specifically, in bioinformatics, high-throughput experiments facilitate the research of new genome-wide association studies of diseases, and with clinical informatics, the clinical field benefits from the vast amount of collected patient data for making intelligent decisions. Imaging informatics is now more rapidly integrated with cloud platforms to share medical image data and workflows, and public health informatics leverages big data techniques for predicting and monitoring infectious disease outbreaks, such as Ebola. In this paper, we review the recent progress and breakthroughs of big data applications in these health-care domains and summarize the challenges, gaps, and opportunities to improve and advance big data applications in health care. PMID:26843812
Shraiberg, Yakov (Russian National Public Library for Science and Technology); GreyNet, Grey Literature Network Service
The paper presents new types of databases as part of services provided by the Library and the sources which may be regarded as "grey literature": patents, reports, unpublished translations, industrial catalogs. The paper describes services with these and other databases based on grey literature processing, local and remote access, interaction with Union Catalog and pilot CD-ROM projects. The paper provides sample records of the database on "grey" literature and explains the differences in dat...
Islamaj Doğan, Rezarta; Comeau, Donald C; Yeganova, Lana; Wilbur, W John
BioC is a recently created XML format to share text data and annotations, and an accompanying input/output library to promote interoperability of data and tools for natural language processing of biomedical text. This article reports the use of BioC to address a common challenge in processing biomedical text information-that of frequent entity name abbreviation. We selected three different abbreviation definition identification modules, and used the publicly available BioC code to convert these independent modules into BioC-compatible components that interact seamlessly with BioC-formatted data, and other BioC-compatible modules. In addition, we consider four manually annotated corpora of abbreviations in biomedical text: the Ab3P corpus of 1250 PubMed abstracts, the BIOADI corpus of 1201 PubMed abstracts, the old MEDSTRACT corpus of 199 PubMed(®) citations and the Schwartz and Hearst corpus of 1000 PubMed abstracts. Annotations in these corpora have been re-evaluated by four annotators and their consistency and quality levels have been improved. We converted them to BioC-format and described the representation of the annotations. These corpora are used to measure the three abbreviation-finding algorithms and the results are given. The BioC-compatible modules, when compared with their original form, have no difference in their efficiency, running time or any other comparable aspects. They can be conveniently used as a common pre-processing step for larger multi-layered text-mining endeavors. Database URL: Code and data are available for download at the BioC site: http://bioc.sourceforge.net. Published by Oxford University Press 2014. This work is written by US Government employees and is in the public domain in the US.
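The abbreviation-definition identification task handled by the modules above can be illustrated with a much-simplified character-matching heuristic in the spirit of the Schwartz and Hearst algorithm referenced in the record. This is a sketch under my own function names, not the Ab3P, BioC, or Schwartz–Hearst implementation itself:

```python
import re

def find_abbreviations(text):
    """Return (short_form, long_form) pairs: for each parenthesized
    candidate short form, scan the preceding words for a long form."""
    pairs = []
    for m in re.finditer(r"\(([^()]+)\)", text):
        sf = m.group(1).strip()
        # plausible short forms: 2-10 chars, at least one letter
        if not (2 <= len(sf) <= 10) or not any(c.isalpha() for c in sf):
            continue
        before = text[:m.start()].rstrip()
        words = before.split()
        # candidate long form: a window of words before the parenthesis
        candidate = " ".join(words[-(min(len(sf) + 5, len(words))):])
        lf = match_long_form(sf, candidate)
        if lf:
            pairs.append((sf, lf))
    return pairs

def match_long_form(sf, candidate):
    """Scan right-to-left, matching each short-form character in order;
    the first character must align with a word start."""
    i, j = len(sf) - 1, len(candidate) - 1
    while i >= 0:
        c = sf[i].lower()
        if not c.isalnum():
            i -= 1
            continue
        while j >= 0 and (candidate[j].lower() != c or
                          (i == 0 and j > 0 and candidate[j - 1].isalnum())):
            j -= 1
        if j < 0:
            return None
        i -= 1
        j -= 1
    return candidate[j + 1:]
```

For example, `find_abbreviations("We evaluate with a Support Vector Machine (SVM) model.")` recovers the pair ("SVM", "Support Vector Machine"). The full algorithm adds further constraints that this sketch omits.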
Saito, Yoshihiko; Suyama, Tadahiro; Kitamura, Akira; Shibata, Masahiro; Sasamoto, Hiroshi; Ochs, Michael
Japan Nuclear Cycle Development Institute (JNC) had developed the sorption database (JNC-SDB), which includes distribution coefficient (Kd) data of important radioactive elements for bentonite and rocks, in order to define a dataset for evaluating the safety function of retardation by the natural barrier and engineered barrier in the H12 report. JNC then added to the database the sorption data from 1998 to 2003 collected by literature survey. In this report, the Japan Atomic Energy Agency (JAEA) has updated the sorption database: (1) JAEA has widely collected sorption data in order to extend the database, adding published data not previously registered in the JNC-SDB. (2) For the convenience of users, the JNC-SDB was partially improved, for example with an automatic graph function. (3) Moreover, data-input errors in part of the JNC-SDB were corrected on the basis of reviewing the data according to the guideline 'evaluating and categorizing the reliability of distribution coefficient values in the sorption database'. The updated JNC-SDB includes 3,205 sorption data for 23 elements important for performance assessment. The frequency distribution of Kd for some elements is clearly shown by the added sorption data. (author)
Background This paper presents a novel approach to the problem of hedge detection, which involves identifying so-called hedge cues for labeling sentences as certain or uncertain. This is the classification problem for Task 1 of the CoNLL-2010 Shared Task, which focuses on hedging in the biomedical domain. We here propose to view hedge detection as a simple disambiguation problem, restricted to words that have previously been observed as hedge cues. As the feature space for the classifier is still very large, we also perform experiments with dimensionality reduction using the method of random indexing. Results The SVM-based classifiers developed in this paper achieve the best published results so far for sentence-level uncertainty prediction on the CoNLL-2010 Shared Task test data. We also show that the technique of random indexing can be successfully applied for reducing the dimensionality of the original feature space by several orders of magnitude, without sacrificing classifier performance. Conclusions This paper introduces a simplified approach to detecting speculation or uncertainty in text, focusing on the biomedical domain. Evaluated at the sentence level, our SVM-based classifiers achieve the best published results so far. We also show that the feature space can be aggressively compressed using random indexing while still maintaining comparable classifier performance.
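The random-indexing compression step described in this record can be sketched in a few lines: each feature gets a sparse random ternary "index vector", and an example is projected by summing the index vectors of its active features, which preserves distances with high probability. A minimal sketch under assumed parameter names (not the authors' code):

```python
import random

def random_index_vectors(vocab, dim=64, nonzero=4, seed=0):
    """Assign each feature a sparse random ternary index vector of
    length `dim` with `nonzero` entries drawn from {-1, +1}."""
    rng = random.Random(seed)
    vectors = {}
    for feat in vocab:
        v = [0.0] * dim
        for pos in rng.sample(range(dim), nonzero):
            v[pos] = rng.choice((-1.0, 1.0))
        vectors[feat] = v
    return vectors

def project(features, index_vectors):
    """Compress a bag-of-features example into `dim` dimensions by
    summing the index vectors of its active features."""
    dim = len(next(iter(index_vectors.values())))
    out = [0.0] * dim
    for f in features:
        if f in index_vectors:
            for i, x in enumerate(index_vectors[f]):
                out[i] += x
    return out
```

The compressed vectors would then be fed to an SVM in place of the original high-dimensional sparse features; the dimensionality reduction by "several orders of magnitude" in the record corresponds to choosing `dim` far smaller than the vocabulary size.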
Wettermark, Bjørn; Wallach-Kildemoes, Helle
PURPOSE: All five Nordic countries have nationwide prescription databases covering all dispensed drugs, with potential for linkage to outcomes. The aim of this review is to present an overview of therapeutic areas studied and methods applied in pharmacoepidemiologic studies using data from these databases. METHODS: The study consists of a Medline-based structured literature review of scientific papers published during 2005-2010 using data from the prescription databases in Denmark, Finland, Iceland, Norway, and Sweden, covering 25 million inhabitants. Relevant studies were analyzed in terms of pharmacological group, study population, outcomes examined, type of study (drug utilization vs. effect of drug therapy), country of origin, and extent of cross-national collaboration. RESULTS: A total of 515 studies were identified. Of these, 262 were conducted in Denmark, 97 in Finland, 4 in Iceland, 87
Ding Ding Wang
Background Evidence mapping is an emerging tool used to systematically identify, organize and summarize the quantity and focus of scientific evidence on a broad topic, but there are currently no methodological standards. Using the topic of low-calorie sweeteners (LCS) and selected health outcomes, we describe the process of creating an evidence-map database and demonstrate several example descriptive analyses using this database. Methods The process of creating an evidence-map database is described in detail. The steps include: developing a comprehensive literature search strategy, establishing study eligibility criteria and a systematic study selection process, extracting data, developing outcome groups with input from expert stakeholders and tabulating data using descriptive analyses. The database was uploaded onto SRDR™ (Systematic Review Data Repository), an open public data repository. Results Our final LCS evidence-map database included 225 studies, of which 208 were interventional studies and 17 were cohort studies. An example bubble plot was produced to display the evidence-map data and visualize research gaps according to four parameters: comparison types, population baseline health status, outcome groups, and study sample size. This plot indicated a lack of studies assessing appetite and dietary intake related outcomes using LCS with a sugar intake comparison in people with diabetes. Conclusion Evidence mapping is an important tool for the contextualization of in-depth systematic reviews within the broader literature and identifies gaps in the evidence base, which can be used to inform future research. An open evidence-map database has the potential to promote knowledge translation from nutrition science to policy.
Zhang, Han; Fiszman, Marcelo; Shin, Dongwook; Wilkowski, Bartlomiej; Rindflesch, Thomas C
Graph-based notions are increasingly used in biomedical data mining and knowledge discovery tasks. In this paper, we present a clique-clustering method to automatically summarize graphs of semantic predications produced from PubMed citations (titles and abstracts). SemRep is used to extract semantic predications from the citations returned by a PubMed search. Cliques were identified from frequently occurring predications with highly connected arguments filtered by degree centrality. Themes contained in the summary were identified with a hierarchical clustering algorithm based on common arguments shared among cliques. The validity of the clusters in the summaries produced was compared to the Silhouette-generated baseline for cohesion, separation and overall validity. The theme labels were also compared to a reference standard produced with major MeSH headings. For 11 topics in the testing data set, the overall validity of clusters from the system summary was 10% better than the baseline (43% versus 33%). While compared to the reference standard from MeSH headings, the results for recall, precision and F-score were 0.64, 0.65, and 0.65 respectively.
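The clique-identification step of the summarization approach described above rests on enumerating maximal cliques in the predication graph. A minimal, self-contained sketch of that subproblem (the classic Bron–Kerbosch algorithm without pivoting, on a `{node: set(neighbors)}` adjacency map) is shown below; the record's additional degree-centrality filtering and hierarchical clustering are not reproduced here:

```python
def bron_kerbosch(graph, r=None, p=None, x=None):
    """Yield every maximal clique of an undirected graph given as
    {node: set(neighbors)}. R grows the current clique, P holds
    candidate extensions, X holds already-processed vertices."""
    if r is None:
        r, p, x = set(), set(graph), set()
    if not p and not x:
        yield set(r)          # R is maximal: nothing can extend it
        return
    for v in list(p):
        yield from bron_kerbosch(graph,
                                 r | {v},
                                 p & graph[v],
                                 x & graph[v])
        p.remove(v)
        x.add(v)
```

On a small graph such as a triangle with one pendant vertex, this yields exactly the two maximal cliques, the triangle and the pendant edge.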
Protein-Protein Interaction (PPI) extraction is an important task in biomedical information extraction. Presently, many machine learning methods for PPI extraction have achieved promising results. However, the performance is still not satisfactory. One reason is that semantic resources have been largely ignored. In this paper, we propose a multiple-kernel learning-based approach to extract PPIs, combining a feature-based kernel, a tree kernel and a semantic kernel. In particular, we extend the shortest path-enclosed tree kernel (SPT) with a dynamic extension strategy to retrieve richer syntactic information. Our semantic kernel calculates the protein-protein pair similarity and the context similarity based on two semantic resources: WordNet and Medical Subject Headings (MeSH). We evaluate our method with a Support Vector Machine (SVM) and achieve an F-score of 69.40% and an AUC of 92.00%, which show that our method outperforms most of the state-of-the-art systems by integrating semantic information.
Jackson, Debra; Hickman, Louise D; Hutchinson, Marie; Andrew, Sharon; Smith, James; Potgieter, Ingrid; Cleary, Michelle; Peters, Kath
Abstract Aim: To summarise and critique the research literature about whistleblowing and nurses. Whistleblowing is identified as a crucial issue in the maintenance of healthcare standards, and nurses are frequently involved in whistleblowing events. Despite the importance of this issue, to our knowledge an evaluation of this body of data-based literature has not been undertaken. An integrative literature review approach was used to summarise and critique the research literature. A comprehensive search of five databases including Medline, CINAHL, PubMed and Health Science: Nursing/Academic Edition, and Google, was conducted using terms including 'Whistleblow*' and 'nurs*'. In addition, relevant journals were examined, as well as reference lists of retrieved papers. Papers published during the years 2007-2013 were selected for inclusion. Fifteen papers were identified, capturing data from nurses in seven countries. The findings in this review demonstrate a growing body of research urging the nursing profession at large to engage and respond appropriately to issues involving suboptimal patient care or organisational wrongdoing. Nursing plays a key role in maintaining practice standards and in reporting care that is unacceptable, although the repercussions for nurses who raise concerns are often insupportable. Overall, whistleblowing and how it influences the individual, their family, work colleagues, nursing practice and policy requires further national and international research attention.
In the biomedical domain, authors publish their experiments and findings using a quasi-standard coarse-grained discourse structure, which starts with an introduction that sets up the motivation, continues with a description of the materials and methods, and concludes with results and discussions. Over the course of the years, there has been a fair amount of research done in the area of scientific discourse analysis, with a focus on performing automatic recognition of scientific artefacts/conceptualisation zones from the raw content of scientific publications. Most of the existing approaches use Machine Learning techniques to perform classification based on features that rely on the shallow structure of the sentence tokens, or sentences as a whole, in addition to corpus-driven statistics. In this article, we investigate the role carried by the deep (dependency) structure of the sentences in describing their rhetorical nature. Using association rule mining techniques, we study the presence of dependency structure patterns in the context of a given rhetorical type, the use of these patterns in exploring differences in structure between the rhetorical types, and their ability to discriminate between the different rhetorical types. Our final goal is to provide a series of insights that can be used to complement existing classification approaches. Experimental results show that, in particular in a fine-grained multi-class classification setting, the association rules that emerge from the dependency structure are not able to produce uniform classification results. However, they can be used to derive discriminative pair-wise classification mechanisms, in particular for some of the most ambiguous types.
Background Although there are a large number of thesauri for the biomedical domain, many of them lack coverage of terms and their variant forms. Automatic thesaurus construction based on patterns was first suggested by Hearst [1], but it is still not clear how to automatically construct such patterns for different semantic relations and domains. In particular it is not certain which patterns are useful for capturing synonymy. The assumption of extant resources such as parsers is also a limiting factor for many languages, so it is desirable to find patterns that do not use syntactical analysis. Finally, to give a more consistent and applicable result, it is desirable to use these patterns to form synonym sets in a sound way. Results We present a method that automatically generates regular expression patterns by expanding seed patterns in a heuristic search and then develops a feature vector based on the occurrence of term pairs in each developed pattern. This allows for a binary classification of term pairs as synonymous or non-synonymous. We then model this result as a probability graph to find synonym sets, which is equivalent to the well-studied problem of finding an optimal set cover. We achieved 73.2% precision and 29.7% recall with our method, out-performing hand-made resources such as MeSH and Wikipedia. Conclusion We conclude that automatic methods can play a practical role in developing new thesauri or expanding on existing ones, and this can be done with only a small amount of training data and no need for resources such as parsers. We also conclude that the accuracy can be improved by grouping into synonym sets.
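The record above reduces synonym-set formation to the optimal set cover problem. Since exact set cover is NP-hard, the standard approach is the greedy approximation (within a ln n factor of optimal), sketched below; this illustrates the subproblem only, not the paper's probability-graph formulation:

```python
def greedy_set_cover(universe, subsets):
    """Greedy approximation to minimum set cover: repeatedly pick the
    subset covering the most still-uncovered elements."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & set(s)))
        gained = uncovered & set(best)
        if not gained:
            raise ValueError("universe not coverable by given subsets")
        cover.append(best)
        uncovered -= gained
    return cover
```

In the thesaurus setting, the universe would be the set of terms and each candidate subset a proposed synonym group; the greedy cover picks groups until every term is assigned.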
Information on bibliographic as well as numeric/textual databases relevant to coastal geomorphology has been included in a tabular form. Databases cover a broad spectrum of related subjects like coastal environment and population aspects, coastline...
Cardiovascular diseases (CVDs) account for high morbidity and mortality worldwide. Both genetic and epigenetic factors are involved in various cardiovascular diseases. In recent years, a vast amount of multi-omics data has accumulated in the field of cardiovascular research, yet key mechanistic aspects of CVDs remain uncovered. Hence, a comprehensive online resource tool is required to comprehend previous research findings and to draw novel methodology for understanding disease pathophysiology. Here, we have developed a literature-based database, CardioGenBase, collecting gene-disease associations from PubMed and MEDLINE. The database covers major cardiovascular diseases such as cerebrovascular disease, coronary artery disease (CAD), hypertensive heart disease, inflammatory heart disease, ischemic heart disease and rheumatic heart disease. It contains ~1,500 cardiovascular disease genes from ~24,000 research articles. For each gene, literature evidence, ontology, pathways, single nucleotide polymorphisms, protein-protein interaction networks, normal gene expression, and protein expression in various body fluids and tissues are provided. In addition, tools like a gene-disease association finder and a gene expression finder are made available, with figures, tables, maps and Venn diagrams to fit users' needs. To our knowledge, CardioGenBase is the only database to provide gene-disease associations for the above-mentioned major cardiovascular diseases in a single portal. CardioGenBase is a vital online resource to support genome-wide analyses and genetic, epigenetic and pharmacological studies.
Discovery of academic literature through Web search engines challenges the traditional role of specialized research databases. Creation of literature outside academic presses and peer-reviewed publications expands the content for scholarly research within a particular field. The resulting body of literature raises the question of whether scholars…
Masic, Izet; Milinovic, Katarina
Most medical journals now have an electronic version, available over public networks. Although printed and electronic versions often run in parallel, the two forms need not be published simultaneously. The electronic version of a journal can be published a few weeks before the printed form and need not have identical content. The electronic form of a journal may have extensions that the printed form does not contain, such as animations or 3D displays, and may offer full text, mostly in PDF or XML format, or just the table of contents or a summary. Access to the full text is usually not free and can be achieved only if the institution (library or host) enters into an access agreement. Many medical journals, however, provide free access to some articles, or to the complete content after a certain time (6 months or a year). Such journals can be found through network archives such as HighWire Press and FreeMedicalJournals.com. Particularly noteworthy are PubMed and PubMed Central, the first public digital archives offering unrestricted collection of the available medical literature, which operate within the National Library of Medicine in Bethesda (USA). There are also so-called online medical journals published only in electronic form, which can be searched through online databases. In this paper the authors briefly describe about 30 databases and give short instructions on how to access and search the published papers in indexed medical journals.
Chavalarias, David; Wallach, Joshua David; Li, Alvin Ho Ting; Ioannidis, John P A
The use and misuse of P values has generated extensive debates. To evaluate in large scale the P values reported in the abstracts and full text of biomedical research articles over the past 25 years and determine how frequently statistical information is presented in ways other than P values. Automated text-mining analysis was performed to extract data on P values reported in 12,821,790 MEDLINE abstracts and in 843,884 abstracts and full-text articles in PubMed Central (PMC) from 1990 to 2015. Reporting of P values in 151 English-language core clinical journals and specific article types as classified by PubMed also was evaluated. A random sample of 1000 MEDLINE abstracts was manually assessed for reporting of P values and other types of statistical information; of those abstracts reporting empirical data, 100 articles were also assessed in full text. P values reported. Text mining identified 4,572,043 P values in 1,608,736 MEDLINE abstracts and 3,438,299 P values in 385,393 PMC full-text articles. Reporting of P values in abstracts increased from 7.3% in 1990 to 15.6% in 2014. In 2014, P values were reported in 33.0% of abstracts from the 151 core clinical journals (n = 29,725 abstracts), 35.7% of meta-analyses (n = 5620), 38.9% of clinical trials (n = 4624), 54.8% of randomized controlled trials (n = 13,544), and 2.4% of reviews (n = 71,529). The distribution of reported P values in abstracts and in full text showed strong clustering at P values of .05 and of .001 or smaller. Over time, the "best" (most statistically significant) reported P values were modestly smaller and the "worst" (least statistically significant) reported P values became modestly less significant. Among the MEDLINE abstracts and PMC full-text articles with P values, 96% reported at least 1 P value of .05 or lower, with the proportion remaining steady over time in PMC full-text articles. In 1000 abstracts that were manually reviewed, 796 were from articles reporting
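The automated text-mining step described above is not detailed in the abstract; the snippet below is a hypothetical, much-simplified sketch of the kind of pattern matching such a pipeline might use to pull P values out of abstract text (the regular expression and example sentence are invented for illustration):

```python
import re

# Hedged sketch: a simplified pattern for P values as they commonly
# appear in abstracts, e.g. "P = .03" or "p < 0.001". The study's
# actual extraction rules are more elaborate than this.
P_VALUE_RE = re.compile(r"\b[Pp]\s*(?:=|<|>)\s*(0?\.\d+)")

def extract_p_values(text):
    """Return the numeric strings of P values found in `text`."""
    return [m.group(1) for m in P_VALUE_RE.finditer(text)]

sentence = ("Mortality was lower in the treatment arm (P = .03), "
            "and the effect persisted at follow-up (p < 0.001).")
print(extract_p_values(sentence))  # ['.03', '0.001']
```

Counting matches of a pattern like this across millions of abstracts is what makes the clustering at .05 and .001 directly observable.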
Full Text Available Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.
Özgür, Arzucan; Hur, Junguk; He, Yongqun
The Interaction Network Ontology (INO) logically represents biological interactions, pathways, and networks. INO has been demonstrated to be valuable in providing a set of structured ontological terms and associated keywords to support literature mining of gene-gene interactions from the biomedical literature. However, previous work using INO focused on single-keyword matching, while many interactions are represented with two or more interaction keywords used in combination. This paper reports our extension of INO to include combinatory patterns of two or more literature-mining keywords co-existing in one sentence to represent specific INO interaction classes. Such keyword combinations and related INO interaction-type information could be automatically obtained via SPARQL queries, formatted as Excel files, and used in an INO-supported SciMiner, an in-house literature mining program. We studied the gene interaction sentences from the commonly used benchmark Learning Logic in Language (LLL) dataset and one internally generated vaccine-related dataset to identify and analyze interaction types containing multiple keywords. Patterns obtained from the dependency parse trees of the sentences were used to identify the interaction keywords that are related to each other and collectively represent an interaction type. The INO ontology currently has 575 terms, including 202 terms under the interaction branch. The relations between the INO interaction types and associated keywords are represented using the INO annotation relations 'has literature mining keywords' and 'has keyword dependency pattern'. The keyword dependency patterns were generated by running the Stanford Parser to obtain dependency relation types. Out of the 107 interactions in the LLL dataset represented with two-keyword interaction types, 86 were identified by using the direct dependency relations. The LLL dataset contained 34 gene regulation interaction types, each of which is associated with multiple keywords. A
Xie, Boya; Ding, Qin; Han, Hongjin; Wu, Di
Research interests in microRNAs have increased rapidly in the past decade. Many studies have shown that microRNAs have close relationships with various human cancers, and they could potentially be used as cancer indicators in diagnosis or as suppressors for treatment purposes. There are several databases that contain microRNA-cancer associations predicted by computational methods, but few based on empirical results. Despite the fact that abundant experiments investigating microRNA expression in cancer cells have been carried out, the results have remained scattered in the literature. We propose to extract microRNA-cancer associations by text mining and store them in a database called miRCancer. The text mining is based on 75 rules we have constructed, which represent the common sentence structures typically used to state microRNA expressions in cancers. The microRNA-cancer association database, miRCancer, is updated regularly by running the text mining algorithm against PubMed. All miRNA-cancer associations are confirmed manually after automatic extraction. miRCancer currently documents 878 relationships between 236 microRNAs and 79 human cancers through the processing of >26,000 published articles. miRCancer is freely available on the web at http://mircancer.ecu.edu/
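The 75 extraction rules themselves are not listed in the abstract. The snippet below is a hypothetical illustration of what a single sentence-structure rule in this spirit might look like; the rule, the miRNA names and the cancer names are invented for illustration, not taken from miRCancer:

```python
import re

# One hypothetical rule: "<miRNA> was/is up-/down-regulated in <cancer>".
RULE = re.compile(
    r"(miR-\d+[a-z]?)\s+(?:was|is)\s+(up-?regulated|down-?regulated)"
    r"\s+in\s+([a-z ]+cancer)",
    re.IGNORECASE,
)

def extract_associations(sentence):
    """Return (miRNA, direction, cancer) triples matched by the rule."""
    return [(m.group(1), m.group(2).replace("-", "").lower(), m.group(3).strip())
            for m in RULE.finditer(sentence)]

print(extract_associations(
    "We found that miR-21 was upregulated in breast cancer tissues."))
# [('miR-21', 'upregulated', 'breast cancer')]
```

A real rule set must also handle negation, coordination ("miR-21 and miR-155"), and synonymous cancer names, which is why manual confirmation follows the automatic extraction.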
Natalia Judith Laso
Full Text Available Research has demonstrated that it is challenging for English as an Additional Language (EAL) writers to acquire phraseological competence in academic English and develop a good working knowledge of discipline-specific formulaic language. This paper aims to explore whether SciE-Lex, a powerful lexical database of biomedical research articles, can be exploited by EAL writers to enhance their command of formulaic language in biomedical English published writing. Our paper reports on the challenges associated with formulaic language (namely collocations) for EAL writers, reflects on the benefits of using a lexical database, and evaluates a pedagogical approach to helping EAL writers produce publishable texts. It specifically highlights results from two writing workshops conducted for EAL writers (medical researchers in the present study). The workshops involved medical researchers working on drafts of their writing using SciE-Lex. Our paper reports on the specific benefits of using SciE-Lex as demonstrated by revisions in the writing produced by the EAL medical researchers. This contribution aims to add to the current discussion on English for Research Publication Purposes (ERPP) for the EAL community, who now form the main contributors to research knowledge dissemination.
An assessment of the efficacy of searching in biomedical databases beyond MEDLINE in identifying studies for a systematic review on ward closures as an infection control intervention to control outbreaks.
Kwon, Yoojin; Powelson, Susan E; Wong, Holly; Ghali, William A; Conly, John M
The purpose of our study is to determine the value and efficacy of searching biomedical databases beyond MEDLINE for systematic reviews. We analyzed the results from a systematic review conducted by the authors and others on ward closure as an infection control practice. Ovid MEDLINE including In-Process & Other Non-Indexed Citations, Ovid Embase, CINAHL Plus, LILACS, and IndMED were systematically searched for articles of any study type discussing ward closure, as were bibliographies of selected articles and recent infection control conference abstracts. Search results were tracked, recorded, and analyzed using a relative recall method. The sensitivity of searching in each database was calculated. Two thousand ninety-five unique citations were identified and screened for inclusion in the systematic review: 2,060 from database searching and 35 from hand searching and other sources. Ninety-seven citations were included in the final review. MEDLINE and Embase searches each retrieved 80 of the 97 articles included, only 4 articles from each database were unique. The CINAHL search retrieved 35 included articles, and 4 were unique. The IndMED and LILACS searches did not retrieve any included articles, although 75 of the included articles were indexed in LILACS. The true value of using regional databases, particularly LILACS, may lie with the ability to search in the language spoken in the region. Eight articles were found only through hand searching. Identifying studies for a systematic review where the research is observational is complex. The value each individual study contributes to the review cannot be accurately measured. Consequently, we could not determine the value of results found from searching beyond MEDLINE, Embase, and CINAHL with accuracy. However, hand searching for serendipitous retrieval remains an important aspect due to indexing and keyword challenges inherent in this literature.
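The relative recall method mentioned above reduces to a simple ratio: the sensitivity of each database is the number of included articles it retrieved divided by the total number of articles included in the review. A minimal sketch using the figures reported in this abstract (97 included articles; 80 retrieved by MEDLINE and Embase, 35 by CINAHL):

```python
def relative_recall(retrieved_included, total_included):
    """Sensitivity of one source under the relative recall method."""
    return retrieved_included / total_included

total = 97  # articles included in the final review
for source, hits in [("MEDLINE", 80), ("Embase", 80), ("CINAHL", 35)]:
    print(f"{source}: {relative_recall(hits, total):.0%}")
# MEDLINE: 82%, Embase: 82%, CINAHL: 36%
```

The denominator is itself built from all sources combined, which is why the authors caution that the method cannot value sources (such as hand searching) that contribute articles no database indexed.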
Read-across predictions require high quality measured data for source analogues. These data are typically retrieved from structured databases, but biomedical literature data are often untapped because current literature mining approaches are resource intensive. Our high-throughpu...
Motschall, Edith; Falck-Ytter, Yngve
The Medline database from the National Library of Medicine (NLM) contains more than 12 million bibliographic citations from over 4,600 international biomedical journals. One of the interfaces for searching Medline is PubMed, provided by the NLM for free access via the Internet (www.pubmed.gov). Also searchable with the PubMed interface are non-Medline citations, i.e. articles supplied by publishers to the NLM. Direct access to an electronic full text version is also possible if the article is available from a publisher or institution participating in Linkout (www.ncbi.nlm.nih.gov/entrez/linkout/). Some publishers provide free access to their journals. Other journals require an online license and are fee based. The following example demonstrates some of the most important search functions in PubMed. We will start out with a fast and simple approach without the use of specific searching techniques and then continue with a more sophisticated search that requires the knowledge of Medline search functions. This example will show how the application of Medline search tools and how the use of the controlled vocabulary of 'Medical Subject Headings' (MeSH) will influence the results in comparison with the fast and simple approach. Let's try to find the best evidence to answer the following question: Is a 30-year-old man with typical acid reflux symptoms for many years (gastroesophageal reflux disease, GERD) more likely to develop esophageal cancer than people without reflux symptoms? This question can be split into several components: -a patient with reflux symptoms (GERD), -esophageal cancer: etiology, risk, -study design for etiology studies: cohort studies, case-control studies.
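As an illustration of the MeSH-based approach described above, the snippet below assembles a PubMed query string for the GERD/esophageal-cancer question and the corresponding URL for NCBI's public E-utilities `esearch` endpoint. The particular MeSH headings and study-design filter are chosen here for illustration; the article itself walks through the PubMed interface interactively rather than via the API:

```python
from urllib.parse import urlencode

# Combine MeSH headings with a study-design filter, mirroring the
# question components: patient condition, outcome, and study design.
terms = [
    '"Gastroesophageal Reflux"[MeSH]',
    '"Esophageal Neoplasms"[MeSH]',
    '(cohort studies[MeSH] OR case-control studies[MeSH])',
]
query = " AND ".join(terms)

base = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
url = base + "?" + urlencode({"db": "pubmed", "term": query})
print(query)
print(url)
```

Mapping free-text words like "heartburn" onto the controlled heading "Gastroesophageal Reflux" is precisely what distinguishes the MeSH search from the fast-and-simple approach.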
Verkroost, H. [Netherlands Energy Research Foundation ECN, Petten (Netherlands). Unit ECN Policy Studies
On the occasion of the 25th birthday of the database INIS a brief overview is given of the contents of INIS, its products and services, and the future of INIS. For the readers of this magazine an indication is given of the literature that is available in INIS in the subject categories for external radiation, internal radiation, radiation protection and dosimetry. 1 tab.
Zhang, Li; Wang, Wei
not in the ClinicalTrials.gov database. (b) We excluded clinical trials that dealt with stem cells other than MDSCs in the ClinicalTrials.gov database. (1) Type of literature; (2) annual publication output; (3) distribution according to journals; (4) distribution according to country; (5) distribution according to institution; (6) top-cited authors over the last 10 years; (7) projects financially supported by the NIH; and (8) clinical trials registered. (1) In all, 802 studies on MDSCs appeared in the Web of Science from 2002 to 2011, almost half of which derived from American authors and institutes. The number of studies on MDSCs has gradually increased over the past 10 years. Most papers on MDSCs appeared in journals with a particular focus on cell biology research, such as Experimental Cell Research, Journal of Cell Science, and PLoS One. (2) Eight MDSC research projects have received over US$6 billion in funding from the NIH. The current project led by Dr. Johnny Huard of the University of Pittsburgh, "Muscle-Based Tissue Engineering to Improve Bone Healing," is supported by the NIH. Dr. Huard has been the most productive and top-cited author in the field of gene therapy and adult stem cell research in the Web of Science over the last 10 years. (3) On ClinicalTrials.gov, "Muscle Derived Cell Therapy for Bladder Exstrophy Epispadias Induced Incontinence" Phase 1 is registered and sponsored by Johns Hopkins University and has been led by Dr. John P. Gearhart since November 2009. From our analysis of the literature and research trends, we found that MDSCs may offer further benefits in regenerative medicine.
Bonde, Lars Ole
Bibliography and database of literature on the receptive music therapy method Guided Imagery and Music.
To study databases that will contribute to future scientific and technical information distribution, a survey and analysis of the present status of the service supply side were conducted. The survey of database trends examined the relations between DB producers and distributors. It found an increase in DB producers, an expansion of internet distribution and services, etc., and no change in the U.S.-centered structure. Further, it was recognized that DB services in the internet age now face a time of change, as seen in existing producers' responses to the internet, online services for primary information sources, the creation of new online services, etc. Owing to the impact of the internet, the following are predicted for future DB services: a slump for producers without strong points and for gateway-type distributors, the appearance of new types of DB service, etc. (NEDO)
Full Text Available Introduction: To evaluate the scientific literature on dietary fiber collected in the PubMed database by bibliometric analysis. Material and methods: This is a descriptive study. The sample size was calculated by estimating population parameters in an infinite population (n=386). The sampling method was simple random sampling without replacement. Results: The most common type of document was the original article, with 177 documents (45.9%; 95% CI: 40.9 to 50.1), giving a productivity index of 2.25. The mean age of the documents analyzed was 17.7 years (95% CI: 16.4 to 18.9), with a median of 15.5 years. The reviewed documents were predominantly written in English, 352 cases (91.2%; 95% CI: 88.4 to 94.0), followed by German in 11 articles (2.9%; 95% CI: 1.2 to 4.5), Russian 7 times (1.8%; 95% CI: 0.5 to 3.1) and Spanish with 6 items (1.6%; 95% CI: 0.3 to 2.8). Four journals had 15 or more papers in the search results: American Journal of Clinical Nutrition with 31 references (8.0%; 95% CI: 5.3 to 10.7), Journal of Animal Science with 20 references (5.2%; 95% CI: 3.0 to 7.4), British Journal of Nutrition with 16 references (4.2%; 95% CI: 2.2 to 6.1) and European Journal of Clinical Nutrition with 15 references (3.9%; 95% CI: 2.0 to 5.8). Conclusions: This study indicates that dietary fiber is a highly researched topic and that English is still the majority language. The descriptors are in line with the subject studied.
Koh, Dong-Hee; Locke, Sarah J.; Chen, Yu-Cheng; Purdue, Mark P.; Friesen, Melissa C.
Background: Retrospective exposure assessment of occupational lead exposure in population-based studies requires historical exposure information from many occupations and industries. Methods: We reviewed published US exposure monitoring studies to identify lead exposure measurement data. We developed an occupational lead exposure database from the 175 identified papers containing 1,111 sets of lead concentration summary statistics (21% area air, 47% personal air, 32% blood). We also extracted ancillary exposure-related information, including job, industry, task/location, year collected, sampling strategy, control measures in place, and sampling and analytical methods. Results: Measurements were published between 1940 and 2010 and represented 27 2-digit standardized industry classification codes. The majority of the measurements were related to lead-based paint work, joining or cutting metal using heat, primary and secondary metal manufacturing, and lead acid battery manufacturing. Conclusions: This database can be used in future statistical analyses to characterize differences in lead exposure across time, jobs, and industries. PMID:25968240
Usié, Anabel; Cruz, Joaquim; Comas, Jorge; Solsona, Francesc; Alves, Rui
Small chemical molecules regulate biological processes at the molecular level. Those molecules are often involved in causing or treating pathological states. Automatically identifying such molecules in biomedical text is difficult due to both the diverse morphology of chemical names and the alternative types of nomenclature that are simultaneously used to describe them. To address these issues, the last BioCreAtIvE challenge proposed the CHEMDNER task, a Named Entity Recognition (NER) challenge that aims at labelling different types of chemical names in biomedical text. To address this challenge we tested various approaches to recognizing chemical entities in biomedical documents. These approaches range from linear Conditional Random Fields (CRFs) to a combination of CRFs with regular expression and dictionary matching, followed by a post-processing step to tag those chemical names in a corpus of Medline abstracts. We named our best-performing systems CheNER. We evaluate the performance of the various approaches using the F-score statistic. Higher F-scores indicate better performance. The highest F-score we obtain in identifying unique chemical entities is 72.88%. The highest F-score we obtain in identifying all chemical entities is 73.07%. We also evaluate the F-score of combining our system with ChemSpot, and find an increase from 72.88% to 73.83%. CheNER presents a valid alternative for automated annotation of chemical entities in biomedical documents. In addition, CheNER may be used to derive new features to train newer methods for tagging chemical entities. CheNER can be downloaded from http://metres.udl.cat and included in text annotation pipelines.
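The F-score reported above is the harmonic mean of precision and recall over predicted versus gold-standard entity mentions. A minimal sketch of that evaluation (the entity spans below are invented; real CHEMDNER scoring also distinguishes mention types):

```python
def f_score(predicted, gold):
    """Micro precision/recall/F1 over sets of (doc_id, start, end) mentions."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)  # exact-span matches
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0
    return precision, recall, 2 * precision * recall / (precision + recall)

gold = {("d1", 0, 7), ("d1", 20, 29), ("d2", 5, 12)}
pred = {("d1", 0, 7), ("d2", 5, 12), ("d2", 30, 38)}
p, r, f = f_score(pred, gold)
print(f"P={p:.2f} R={r:.2f} F1={f:.2f}")  # P=0.67 R=0.67 F1=0.67
```

Because F1 penalizes both spurious and missed mentions, it is a natural single number for comparing the CRF, regex/dictionary, and combined systems.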
David D. Shin
Full Text Available Arterial spin labeling (ASL) is an MRI technique that provides a noninvasive and quantitative measure of cerebral blood flow (CBF). After more than a decade of active research, ASL is now emerging as a robust and reliable CBF measurement technique with increased availability and ease of use. There is a growing number of research and clinical sites using ASL for neuroscience research and clinical care. In this paper, we present an online CBF Database and Analysis Pipeline, collectively called the CBFBIRN, that allows researchers to upload and share ASL and clinical data. In addition to serving the role of a central data repository, the CBFBIRN provides a streamlined data processing infrastructure for CBF quantification and group analysis, which has the potential to accelerate the discovery of new scientific and clinical knowledge. All capabilities and features built into the CBFBIRN are accessed online using a web browser through a secure login. In this work, we begin with a general description of the CBFBIRN system data model and its architecture, then devote the remainder of the paper to the CBFBIRN capabilities. The latter part of our work is divided into two processing modules: (1) a Data Upload and CBF Quantification Module; (2) a Group Analysis Module that supports three types of analysis commonly used in neuroscience research. To date, the CBFBIRN hosts CBF maps and associated clinical data from more than 1300 individual subjects. The data have been contributed by more than 20 different research studies investigating the effect of various conditions on CBF, including Alzheimer's disease, schizophrenia, bipolar disorder, depression, traumatic brain injury, HIV, caffeine usage and methamphetamine abuse. Several example results, generated by the CBFBIRN processing modules, are presented. We conclude with the lessons learned during implementation and deployment of the CBFBIRN and our experience in promoting data sharing.
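The abstract does not specify which quantification model the CBF Quantification Module implements. A common choice in the ASL literature is the single-compartment model from the ASL consensus recommendations, sketched here with typical (assumed) acquisition parameters rather than the CBFBIRN's actual settings:

```python
import math

def pcasl_cbf(delta_m, m0, pld=1.8, tau=1.8, t1_blood=1.65,
              labeling_eff=0.85, partition_coef=0.9):
    """CBF in mL/100g/min from the single-compartment pCASL model
    (general form per the ASL consensus literature). delta_m is the
    control-label difference signal, m0 the equilibrium magnetization;
    pld, tau in seconds. Parameter defaults are typical assumptions."""
    numerator = 6000 * partition_coef * delta_m * math.exp(pld / t1_blood)
    denominator = (2 * labeling_eff * t1_blood * m0
                   * (1 - math.exp(-tau / t1_blood)))
    return numerator / denominator

# A 1% perfusion-weighted signal change yields a plausible gray-matter CBF:
print(round(pcasl_cbf(delta_m=0.01, m0=1.0)))  # 86
```

The factor 6000 converts mL/g/s to the conventional mL/100g/min units, which is why the result lands in the familiar physiological range.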
Xu, Rong; Wang, QuanQiu
Independent data sources can be used to augment post-marketing drug safety signal detection. The vast amount of publicly available biomedical literature contains rich side effect information for drugs at all clinical stages. In this study, we present a large-scale signal boosting approach that combines over 4 million records in the US Food and Drug Administration (FDA) Adverse Event Reporting System (FAERS) and over 21 million biomedical articles. The datasets comprise 4,285,097 records from FAERS and 21,354,075 MEDLINE articles. We first extracted all drug-side effect (SE) pairs from FAERS. Our study implemented a total of seven signal ranking algorithms. We then compared these different ranking algorithms before and after they were boosted with signals from MEDLINE sentences or abstracts. Finally, we manually curated all drug-cardiovascular (CV) pairs that appeared in both data sources and investigated whether our approach can detect many true signals that have not been included in FDA drug labels. We extracted a total of 2,787,797 drug-SE pairs from FAERS with a low initial precision of 0.025. The ranking algorithm combining signals from both FAERS and MEDLINE significantly improved the precision from 0.025 to 0.371 for top-ranked pairs, representing a 13.8-fold elevation in precision. We showed by manual curation that drug-SE pairs that appeared in both data sources were highly enriched with true signals, many of which have not yet been included in FDA drug labels. We have developed an efficient and effective drug safety signal ranking and strengthening approach. We demonstrate that combining information from FAERS and the biomedical literature at large scale can significantly contribute to drug safety surveillance.
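The abstract does not name the seven ranking algorithms. One standard disproportionality statistic used to rank drug-side-effect signals in spontaneous-report data such as FAERS is the proportional reporting ratio (PRR), sketched here with invented counts (this is an illustration of the general technique, not necessarily one of the study's algorithms):

```python
def prr(a, b, c, d):
    """Proportional reporting ratio from a 2x2 contingency table:
    a = reports with the drug and the event,  b = drug, other events
    c = other drugs with the event,           d = other drugs, other events
    PRR > 1 means the event is reported disproportionately for the drug."""
    return (a / (a + b)) / (c / (c + d))

# Invented counts for a hypothetical drug-cardiovascular-event pair:
print(round(prr(a=20, b=980, c=200, d=99800), 1))  # 10.0
```

Boosting then amounts to re-weighting such per-pair scores with evidence mined from MEDLINE sentences or abstracts mentioning the same pair.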
Gleason, Robert A.; Tangen, Brian A.; Laubhan, Murray K.; Finocchiaro, Raymond G.; Stamm, John F.
Long-term accumulation of salts in wetlands at Bowdoin National Wildlife Refuge (NWR), Mont., has raised concern among wetland managers that increasing salinity may threaten plant and invertebrate communities that provide important habitat and food resources for migratory waterfowl. Currently, the U.S. Fish and Wildlife Service (USFWS) is evaluating various water management strategies to help maintain suitable ranges of salinity to sustain plant and invertebrate resources of importance to wildlife. To support this evaluation, the USFWS requested that the U.S. Geological Survey (USGS) provide information on salinity ranges of water and soil for common plants and invertebrates on Bowdoin NWR lands. To address this need, we conducted a search of the literature on occurrences of plants and invertebrates in relation to salinity and pH of the water and soil. The compiled literature was used to (1) provide a general overview of salinity concepts, (2) document published tolerances and adaptations of biota to salinity, (3) develop databases that the USFWS can use to summarize the range of reported salinity values associated with plant and invertebrate taxa, and (4) perform database summaries that describe reported salinity ranges associated with plants and invertebrates at Bowdoin NWR. The purpose of this report is to synthesize information to facilitate a better understanding of the ecological relations between salinity and flora and fauna when developing wetland management strategies. A primary focus of this report is to provide information to help evaluate and address salinity issues at Bowdoin NWR; however, the accompanying databases, as well as concepts and information discussed, are applicable to other areas or refuges. The accompanying databases include salinity values reported for 411 plant taxa and 330 invertebrate taxa. The databases are available in Microsoft Excel version 2007 (http://pubs.usgs.gov/sir/2009/5098/downloads/databases_21april2009.xls) and contain
Kovanis, Michail; Porcher, Raphaël; Ravaud, Philippe; Trinquart, Ludovic
The growth in scientific production may threaten the capacity for the scientific community to handle the ever-increasing demand for peer review of scientific publications. There is little evidence regarding the sustainability of the peer-review system and how the scientific community copes with the burden it poses. We used mathematical modeling to estimate the overall quantitative annual demand for peer review and the supply in biomedical research. The modeling was informed by empirical data from various sources in the biomedical domain, including all articles indexed at MEDLINE. We found that for 2015, across a range of scenarios, the supply exceeded by 15% to 249% the demand for reviewers and reviews. However, 20% of the researchers performed 69% to 94% of the reviews. Among researchers actually contributing to peer review, 70% dedicated 1% or less of their research work-time to peer review while 5% dedicated 13% or more of it. An estimated 63.4 million hours were devoted to peer review in 2015, among which 18.9 million hours were provided by the top 5% contributing reviewers. Our results support that the system is sustainable in terms of volume but emphasizes a considerable imbalance in the distribution of the peer-review effort across the scientific community. Finally, various individual interactions between authors, editors and reviewers may reduce to some extent the number of reviewers who are available to editors at any point.
McGinn, Tony; Taylor, Brian; McColgan, Mary; McQuilkan, Janice
Objectives: To compare the performance of a range of search facilities; and to illustrate the execution of a comprehensive literature search for qualitative evidence in social work. Context: Developments in literature search methods and comparisons of search facilities help facilitate access to the best available evidence for social workers.…
Hinton, Elizabeth G; Oelschlegel, Sandra; Vaughn, Cynthia J; Lindsay, J Michael; Hurst, Sachiko M; Earl, Martha
This study utilizes an informatics tool to analyze a robust literature search service in an academic medical center library. Structured interviews with librarians were conducted focusing on the benefits of such a tool, expectations for performance, and visual layout preferences. The resulting application utilizes Microsoft SQL Server and .Net Framework 3.5 technologies, allowing for the use of a web interface. Customer tables and MeSH terms are included. The National Library of Medicine MeSH database and entry terms for each heading are incorporated, resulting in functionality similar to searching the MeSH database through PubMed. Data reports will facilitate analysis of the search service.
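The abstract describes resolving user queries through MeSH entry terms to preferred headings, similar to searching the MeSH database through PubMed. A minimal relational sketch of that idea (using SQLite here for portability; the production system described uses SQL Server, and the table and column names below are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mesh_heading (id INTEGER PRIMARY KEY, heading TEXT UNIQUE);
CREATE TABLE entry_term (term TEXT, heading_id INTEGER REFERENCES mesh_heading(id));
INSERT INTO mesh_heading VALUES (1, 'Neoplasms');
INSERT INTO entry_term VALUES ('cancer', 1), ('tumor', 1), ('tumour', 1);
""")

def map_to_heading(term):
    """Resolve a free-text entry term to its preferred MeSH heading."""
    row = conn.execute(
        """SELECT h.heading FROM entry_term e
           JOIN mesh_heading h ON h.id = e.heading_id
           WHERE e.term = ?""", (term.lower(),)).fetchone()
    return row[0] if row else None

print(map_to_heading("tumour"))  # Neoplasms
```

Storing the entry-term-to-heading mapping in its own table is what lets the search service match whichever synonym a customer types, as PubMed's MeSH translation does.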
Brazier, H; Begley, C M
This study compares the usefulness of the MEDLINE and CINAHL databases for students on post-registration nursing courses. We searched for nine topics, using title words only. Identical searches of the two databases retrieved 1162 references, of which 88% were in MEDLINE, 33% in CINAHL and 20% in both sources. The relevance of the references was assessed by student reviewers. The positive predictive value of CINAHL (70%) was higher than that of MEDLINE (54%), but MEDLINE produced more than twice as many relevant references as CINAHL. The sensitivity of MEDLINE was 85% (95% CI 82-88%), and that of CINAHL was 41% (95% CI 37-45%). To assess the ease of obtaining the references, we developed an index of accessibility, based on the holdings of a number of Irish and British libraries. Overall, 47% of relevant references were available in the students' own library, and 64% could be obtained within 48 hours. There was no difference between the two databases overall, but when two topics relating specifically to the organization of nursing were excluded, references found in MEDLINE were significantly more accessible. We recommend that MEDLINE should be regarded as the first choice of bibliographic database for any subject other than one related strictly to the organization of nursing.
Deidda, Arianna; Pisanu, Claudia; Micheletto, Laura; Bocchetta, Alberto; Del Zompo, Maria; Stochino, Maria Erminia
We investigated a pulmonary adverse drug reaction possibly induced by fluoxetine, Interstitial Lung Disease, by performing a systematic review of published case reports on this subject, together with a review of the World Health Organization VigiAccess database, the European EudraVigilance database, and a national pharmacovigilance database (the Italian Pharmacovigilance Network). The search found a total of seven cases linking fluoxetine to Interstitial Lung Disease in the literature. Thirty-six cases of interstitial lung disease related to fluoxetine were retrieved from the VigiAccess database (updated to July 2016), and 36 reports were found in the EudraVigilance database (updated to June 2016). In the Italian Pharmacovigilance database (updated to August 2016), we found only one case of Interstitial Lung Disease, codified as "pulmonary disease". Our investigation shows that fluoxetine might be considered a possible cause of Interstitial Lung Disease. In particular, although here we do not discuss the assessment of the benefits and harms of fluoxetine, since this antidepressant is widely used, our review suggests that fluoxetine-induced Interstitial Lung Disease should be considered in patients with dyspnea, associated or not with dry cough, who are treated with this drug. An early withdrawal of fluoxetine could be useful to obtain complete remission of this adverse drug reaction, and special attention should be devoted particularly to long-term therapy and to female and elderly patients. Although the spontaneous reporting system is affected by important limitations, drug post-marketing surveillance represents an important tool to evaluate the real-world effectiveness and safety of drugs. Copyright © 2017 Elsevier Ltd. All rights reserved.
The use of bibliographic databases by Spanish-speaking Latin American biomedical researchers: a cross-sectional study
Edgar Guillermo Ospina
articles. RESULTS: A total of 586 e-mail messages with the survey were sent out, and 185 responses were received (32%). The databases most utilized to obtain biomedical information were MEDLINE (34.1%), general search engines such as Google, Yahoo!, and AltaVista (15.9%), on-line journals (9.8%), BIREME-LILACS (6.0%), BioMedNet (5.4%), the databases of the Centers for Disease Control and Prevention of the United States of America (5.2%), and the Cochrane Library (4.9%). Of the respondents, 64% said they had average or advanced abilities in using MEDLINE. However, 71% of the respondents did not use or were not aware of the MEDLINE Medical Subject Headings (MeSH), a controlled vocabulary established by the National Library of Medicine of the United States of America for indexing articles. The frequency of accessing the databases was similar in all the countries studied, without significant differences in terms of the type of access (authorized access to commercial databases, unauthorized access to those databases, or access to databases available for free) or the level of abilities. Of the respondents, 87% said they had not included important references in the articles that they had published because they had not had access to the full text of those items, and 56% said they had cited articles that they had not read in full. In addition, 7.6% of the respondents admitted to unauthorized use of limited-access databases, such as through borrowed passwords or copied disks. More than two-thirds of the respondents said they obtained the full text of articles through photocopies or directly from the authors. CONCLUSIONS: In order to encourage scientific output by Latin American researchers, more of them need to be trained in the use of the most frequently used databases, especially MEDLINE. Those researchers also need to have expanded access to the biomedical literature.
Haafkens, Joke; Moerman, Clara; Schuring, Merel; van Dijk, Frank
BACKGROUND: The work participation of people with chronic diseases is a growing concern within the field of occupational medicine. Information on this topic is dispersed across a variety of data sources, making it difficult for health professionals to find relevant studies for literature reviews and
Shile, Peter E.; Freiermuth, Jennifer
Publications of the International Society for Optical Engineering (SPIE) contain much of the relevant literature on Picture Archiving and Communications Systems (PACS) and related topics. In fact, many PACS-related articles indexed by the National Library of Medicine contain references to articles published by SPIE. Unfortunately, SPIE publications themselves are not indexed by the National Library of Medicine and thus cannot be identified through Medline. The lack of a convenient mechanism for searching the SPIE literature is problematic for researchers in medical imaging. With the recent introduction on SPIE's Internet server of their Abstracts Online service and their In-Cite™ title and author searching software, the SPIE literature has become more accessible. However, the searching process is still cumbersome and time-consuming, and it is not possible to perform keyword searches of manuscript abstracts. In this paper we present results of our work on developing a mechanism to more thoroughly search SPIE publications for PACS-related articles.
Lertxundi, U; Marquínez, A C; Domingo-Echaburu, S; Solinís, M Á; Calvo, B; Del Pozo-Rodríguez, A; García, M; Aguirre, C; Isla, A
Some reports have suggested an association between dopamine agonists and hiccups, involuntary contractions that merit full clinical attention because they can be very debilitating. Many drugs frequently used to treat hiccups are formally contraindicated in Parkinson's disease due to their liability to worsen motor symptoms, making the treatment of hiccups problematic in this disease. The objective of the present study was to analyze all spontaneous reports of hiccups from the European Pharmacovigilance Database in patients with Parkinson's disease and/or on dopaminergic drugs. Finally, we sought to identify evidence-based recommendations on the management of hiccups in Parkinson's disease. We searched for all reports of hiccups in the European Pharmacovigilance Database (EudraVigilance) and calculated proportional reporting ratios for dopamine agonists and hiccups. We reviewed the literature on Parkinson's disease, dopamine agonists, and hiccups, searching for specific treatment recommendations for hiccups in this disease. Both rotigotine and pramipexole fulfilled the criteria to generate a safety signal. We found 32 and 13 cases of hiccups associated with dopamine agonists in EudraVigilance and the literature, respectively. There were no specific recommendations for the management of hiccups in Parkinson's disease in the clinical guidelines consulted. We have found evidence that rotigotine and pramipexole are associated with the appearance of hiccups and that this adverse reaction occurs predominantly in males. Given the scarce information available, specific recommendations are needed in clinical guidelines for the adequate management of hiccups in Parkinson's disease.
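The proportional reporting ratio (PRR) mentioned above compares how often an event is reported for one drug against its reporting rate for all other drugs in the database. A minimal sketch with hypothetical counts (illustration only, not EudraVigilance figures):

```python
def prr(a, b, c, d):
    """Proportional reporting ratio.

    a: reports of the event for the drug of interest
    b: reports of other events for the drug of interest
    c: reports of the event for all other drugs
    d: reports of other events for all other drugs
    """
    return (a / (a + b)) / (c / (c + d))

# Hypothetical counts: 20 hiccup reports among 5000 reports for the
# drug, versus 400 hiccup reports among 500000 reports for all other
# drugs. A common screening convention (Evans criteria) flags
# PRR >= 2 with at least 3 cases; agencies vary, and the thresholds
# the authors actually applied are not stated here.
print(round(prr(20, 4980, 400, 499600), 1))  # 5.0
```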
Aldemar Araujo Castro
OBJECTIVE: To define and disseminate the optimal search strategy for clinical trials in the Latin American and Caribbean Health Science Literature (LILACS) database. This strategy was elaborated based on the optimal search strategy for MEDLINE recommended by the Cochrane Collaboration for the identification of clinical trials in electronic databases. DESIGN: Technical information. SETTING: Clinical Trials and Meta-Analysis Unit, Federal University of São Paulo, in conjunction with the Brazilian Cochrane Center, São Paulo, Brazil (http://www.epm.br/cochrane). DATA: LILACS/CD-ROM (Latin American and Caribbean Health Science Information Database), 27th edition, January 1997, edited by BIREME (Latin American and Caribbean Health Science Information Center). LILACS indexes 670 journals in the region, with abstracts in English, Portuguese, or Spanish; only 41 overlap with MEDLINE and EMBASE. Of the 168,902 citations since 1982, 104,016 concern human studies, and 38,261 citations are potentially clinical trials. The search strategy was elaborated by combining headings with text words in three languages, adapted to the interface of LILACS. We will be locating clinical trials in LILACS for the Cochrane Controlled Trials Database. This effort is being coordinated by the Brazilian Cochrane Center.
Singh, Harkanwal Preet; Mahendra, Ashish; Yadav, Bhupender; Singh, Harpreet; Arora, Nitin; Arora, Monika
Science is a dynamic subject and it was never free of misconduct or bad research. Indeed, the scientific method itself is intended to overcome mistakes and misdeeds. So, we aimed to assess various factors associated with retraction of scientific articles from 2004 to 2013. Data were retrieved from PubMed and Medline using the keywords retraction of articles, retraction notice, and withdrawal of article in April 2014 to detect articles retracted from 2004 to 2013. Statistical analysis was carried out using t-test and Karl Pearson's correlation coefficient. Results showed that a total of 2343 articles were retracted between 2004 and 2013, and original articles followed by case reports constituted major part of it. Time interval between submission and retraction of article has reduced in recent times. Impact factor and retraction do not have any significant correlation. We conclude that although retraction of articles is a rare event, its constant rise in scientific literature is quite worrisome. It is still unclear whether misconduct/mistakes in articles are increasing hastily or the articles are retracted at a rapid rate in recent times. So, it should be considered as an urgent issue and it is the responsibility of journal editors to track misconduct by following Committee on Publication Ethics (COPE) guidelines and making an effective strategy.
Baumbach, J. I.; Vonirmer, A.
To assist current discussion in the field of ion mobility spectrometry, an FTP server at the Institut für Spektrochemie und angewandte Spektroskopie, Dortmund, began operation on 4 December 1994, available to all research groups at universities and institutes and to research workers in industry. We support the exchange, interpretation, and database search of ion mobility spectra through the JCAMP-DS data format (Joint Committee on Atomic and Molecular Physical Data), as well as literature retrieval, pre-prints, notices, and a discussion board. We describe in general terms the access conditions, local addresses, and main code words. For further details, a monthly news report will be prepared for all users. The Internet e-mail address for subscribing is included in the document.
Tractenberg, Rochelle E; Gordon, Morris
Phenomenon: The purpose of "systematic" reviews/reviewers of medical and health professions educational research is to identify best practices. This qualitative article explores the question of whether systematic reviews can support "evidence informed" teaching and contrasts traditional systematic reviewing with a knowledge translation (KT) approach to this objective. Degrees of freedom analysis (DOFA) is used to examine the alignment of systematic review methods with educational research and the pedagogical strategies and approaches that might be considered with a decision-making framework developed to support valid assessment. This method is also used to explore how KT can be used to inform teaching and learning. The nature of educational research is not compatible with most (11/14) methods for systematic review. The inconsistency of systematic reviewing with the nature of educational research impedes both the identification and implementation of "best-evidence" pedagogy and teaching. This is primarily because research questions that do support the purposes of review do not support educational decision making. By contrast to systematic reviews of the literature, both a DOFA and KT are fully compatible with informing teaching using evidence. A DOFA supports the translation of theory to a specific teaching or learning case, so could be considered a type of KT. The DOFA results in a test of alignment of decision options with relevant educational theory, and KT leads to interventions in teaching or learning that can be evaluated. Examples of how to structure evaluable interventions are derived from a KT approach that are simply not available from a systematic review. Insights: Systematic reviewing of current empirical educational research is not suitable for deriving or supporting best practices in education. However, both "evidence-informed" and scholarly approaches to teaching can be supported as KT projects, which are inherently evaluable and can generate
Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarity with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is essential for researchers, authors, editors, and publishers. Database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find the source selection criteria particularly useful and may apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls. PMID:27134485
Capurro, Daniel; Soto, Mauricio; Vivent, Macarena; Lopetegui, Marcelo; Herskovic, Jorge R
Biomedical informatics is a new discipline that arose from the need to incorporate information technologies into the generation, storage, distribution, and analysis of information in the domain of biomedical sciences. This discipline comprises basic biomedical informatics and public health informatics. The development of the discipline in Chile has been modest, and most projects have originated from the interest of individual people or institutions, without systematic and coordinated national development. Considering the unique features of the health care system of our country, research in the area of biomedical informatics is becoming an imperative.
DMPD (LSDB Archive): literature list concerning the differentiation and activation of macrophages and the pathways found in the literature.
Johnson, Robin J.; Lay, Jean M.; Lennon-Hopkins, Kelley; Saraceni-Richards, Cynthia; Sciaky, Daniela; Murphy, Cynthia Grondin; Mattingly, Carolyn J.
The Comparative Toxicogenomics Database (CTD; http://ctdbase.org/) is a public resource that curates interactions between environmental chemicals and gene products, and their relationships to diseases, as a means of understanding the effects of environmental chemicals on human health. CTD provides a triad of core information in the form of chemical-gene, chemical-disease, and gene-disease interactions that are manually curated from scientific articles. To increase the efficiency, productivity, and data coverage of manual curation, we have leveraged text mining to help rank and prioritize the triaged literature. Here, we describe our text-mining process that computes and assigns each article a document relevancy score (DRS), wherein a high DRS suggests that an article is more likely to be relevant for curation at CTD. We evaluated our process by first text mining a corpus of 14,904 articles triaged for seven heavy metals (cadmium, cobalt, copper, lead, manganese, mercury, and nickel). Based upon initial analysis, a representative subset of 3,583 articles was then selected from the 14,904 articles and sent to five CTD biocurators for review. The resulting curation of these 3,583 articles was analyzed for a variety of parameters, including article relevancy, novel data content, interaction yield rate, mean average precision, and biological and toxicological interpretability. We show that for all measured parameters, the DRS is an effective indicator for scoring and improving the ranking of literature for the curation of chemical-gene-disease information at CTD. Here, we demonstrate how fully incorporating text mining-based DRS scoring into our curation pipeline enhances manual curation by prioritizing more relevant articles, thereby increasing data content, productivity, and efficiency. PMID:23613709
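Mean average precision, one of the parameters used to evaluate the DRS ranking, rewards rankings that place relevant articles near the top of the list. A minimal sketch (illustrative only; the exact evaluation setup at CTD is not described here):

```python
def average_precision(ranked_relevance):
    # ranked_relevance: booleans in rank order, True where the article
    # at that rank was judged relevant for curation.
    hits, total = 0, 0.0
    for rank, relevant in enumerate(ranked_relevance, start=1):
        if relevant:
            hits += 1
            total += hits / rank  # precision at each relevant rank
    return total / hits if hits else 0.0

def mean_average_precision(ranked_lists):
    # Mean of the average precision over several ranked lists.
    return sum(average_precision(r) for r in ranked_lists) / len(ranked_lists)

# Toy ranking with relevant articles at ranks 1, 3, and 4:
print(round(average_precision([True, False, True, True]), 3))  # 0.806
```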
Pamela Palmer,1 Xiang Ji,2 Jennifer Stephens2; 1AcelRx Pharmaceuticals, Inc., Redwood City, CA; 2Pharmerit International, Bethesda, MD, USA. Background: Intravenous patient-controlled analgesia (PCA) equipment and opioid cost analyses on specific procedures are lacking. This study estimates the intravenous PCA hospital cost for the first 48 postoperative hours for three inpatient surgeries. Methods: Descriptive analyses using the Premier database (2010–2012) of more than 500 US hospitals were conducted on the cost (direct acquisition and indirect cost for the hospital, such as overhead, labor, and pharmacy services) of intravenous PCA after total knee/hip arthroplasty (TKA/THA) or open abdominal surgery. The weighted average cost of equipment and opioid drug and the literature-based cost of adverse events and complications were aggregated for total costs. Results: Of 11,805,513 patients, 272,443 (2.3%), 139,275 (1.2%), and 195,062 (1.7%) had TKA, THA, and abdominal surgery, respectively, with approximately 20% of orthopedic and 29% of abdominal patients having specific intravenous PCA database cost entries. Morphine (57%) and hydromorphone (44%) were the most frequently used PCA drugs, with a mean cost per 30 cc syringe of $16 (30 mg) and $21 (6 mg), respectively. The mean numbers of syringes used for morphine and hydromorphone in the first 48 hours were 1.9 and 3.2 (TKA), 2.0 and 4.2 (THA), and 2.5 and 3.9 (abdominal surgery), respectively. Average costs of the PCA pump, intravenous tubing set, and drug ranged from $46 to $48, from $20 to $22, and from $33 to $46, respectively. Pump, tubing, and saline required to maintain patency of the intravenous PCA catheter over 48 hours ranged from $9 to $13, from $8 to $9, and from $20 to $22, respectively. Supplemental non-PCA opioid use ranged from $56 for THA to $87 for abdominal surgery. Aggregated mean intravenous PCA equipment and opioid costs per patient were $196 (THA), $204 (TKA), and $243 (abdominal surgery). Total costs, including
Shaped by quantum theory, technology, and the genomics revolution, the integration of photonics, electronics, biomaterials, and nanotechnology holds great promise for the future of medicine. This topic has recently experienced explosive growth due to the noninvasive or minimally invasive nature and the cost-effectiveness of photonic modalities in medical diagnostics and therapy. The second edition of the Biomedical Photonics Handbook presents fundamental developments as well as important applications of biomedical photonics of interest to scientists, engineers, manufacturers, teachers, and students.
Hurst, Sarah J
This chapter summarizes the roles of nanomaterials in biomedical applications, focusing on those highlighted in this volume. A brief history of nanoscience and technology and a general introduction to the field are presented. Then, the chemical and physical properties of nanostructures that make them ideal for use in biomedical applications are highlighted. Examples of common applications, including sensing, imaging, and therapeutics, are given. Finally, the challenges associated with translating this field from the research laboratory to the clinic setting, in terms of the larger societal implications, are discussed.
Suh, Sang C; Tanik, Murat M
Biomedical Engineering: Health Care Systems, Technology and Techniques is an edited volume with contributions from world experts. It provides readers with unique contributions related to current research and future healthcare systems. Practitioners and researchers focused on computer science, bioinformatics, engineering and medicine will find this book a valuable reference.
With the introduction of the Cumulative Index to Nursing and Allied Health Literature (CINAHL) on CD-ROM, research was initiated to compare coverage of nursing journals by CINAHL and MEDLINE in this format, expanding on a previous comparison of these databases in print and online. The study assessed search results for eight topics in 1989 and 1990 citations in both databases, each produced by SilverPlatter. Results were tallied and analyzed for number of records retrieved, unique and overlapping records, relevance, and appropriateness. An overall precision score was developed. The goal of the research was to develop quantifiable tools to help determine which database to purchase for an academic library serving an undergraduate nursing program.
Wright, Tarah; Pullen, Sarah
Using the tool of bibliometry, this study examines journal articles related to Education for Sustainable Development (ESD) in academic journals from 1990 to 2005. It offers a statistical description of the literature, and analyses the development of ESD publications within the journal literature to date. The results show that the number of ESD…
Geusebroek, J.M.; Hoang, M.A.; van Gemert, J.; Worring, M.
We explore the retrieval of visual information from biomedical scientific publication databases. To this end, we consider the use of domain-specific genres to automatically subdivide large image databases into smaller, consistent parts. Combination with Latent Semantic Indexing on the picture captions
Bazelier, Marloes T; Eriksson, Irene; de Vries, Frank; Schmidt, Marjanka K; Raitanen, Jani; Haukka, Jari; Starup-Linde, Jakob; De Bruin, Marie L; Andersen, Morten
To identify pharmacoepidemiological multi-database studies and to describe data management and data analysis techniques used for combining data. Systematic literature searches were conducted in PubMed and Embase complemented by a manual literature search. We included pharmacoepidemiological multi-database studies published from 2007 onwards that combined data for a pre-planned common analysis or quantitative synthesis. Information was retrieved about study characteristics, methods used for individual-level analyses and meta-analyses, data management and motivations for performing the study. We found 3083 articles by the systematic searches and an additional 176 by the manual search. After full-text screening of 75 articles, 22 were selected for final inclusion. The number of databases used per study ranged from 2 to 17 (median = 4.0). Most studies used a cohort design (82%) instead of a case-control design (18%). Logistic regression was most often used for individual-level analyses (41%), followed by Cox regression (23%) and Poisson regression (14%). As meta-analysis method, a majority of the studies combined individual patient data (73%). Six studies performed an aggregate meta-analysis (27%), while a semi-aggregate approach was applied in three studies (14%). Information on central programming or heterogeneity assessment was missing in approximately half of the publications. Most studies were motivated by improving power (86%). Pharmacoepidemiological multi-database studies are a well-powered strategy to address safety issues and have increased in popularity. To be able to correctly interpret the results of these studies, it is important to systematically report on database management and analysis techniques, including central programming and heterogeneity testing. © 2015 The Authors. Pharmacoepidemiology and Drug Safety published by John Wiley & Sons, Ltd.
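For the aggregate meta-analysis approach mentioned above, per-database effect estimates are typically combined by inverse-variance weighting. A minimal fixed-effect sketch with hypothetical log odds ratios (the reviewed studies' actual models, including random-effects variants, are not reproduced here):

```python
import math

def fixed_effect_pooled(log_estimates, std_errors):
    """Inverse-variance fixed-effect pooling of per-database log
    effect estimates (e.g. log odds ratios)."""
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * est for w, est in zip(weights, log_estimates)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Hypothetical log odds ratios and standard errors from three databases:
estimates = [math.log(1.8), math.log(2.2), math.log(1.5)]
ses = [0.20, 0.30, 0.25]
pooled, se = fixed_effect_pooled(estimates, ses)
print(round(math.exp(pooled), 2))  # 1.78 (pooled odds ratio)
```

More precise estimates (smaller standard errors) dominate the pooled value, which is why multi-database studies gain power from their largest databases.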
…residue (or mutant) in a protein. The experimental data are collected from the literature by searching the sequence database UniProt, the structural database PDB, and the literature database
Lee, Myunggyo; Lee, Kyubum; Yu, Namhee; Jang, Insu; Choi, Ikjung; Kim, Pora; Jang, Ye Eun; Kim, Byounggun; Kim, Sunkyu; Lee, Byungwook; Kang, Jaewoo; Lee, Sanghyuk
Fusion genes are an important class of therapeutic targets and prognostic markers in cancer. ChimerDB is a comprehensive database of fusion genes encompassing analysis of deep sequencing data and manual curation. In this update, the database coverage was enhanced considerably by adding two new modules of The Cancer Genome Atlas (TCGA) RNA-Seq analysis and PubMed abstract mining. ChimerDB 3.0 is composed of three modules: ChimerKB, ChimerPub, and ChimerSeq. ChimerKB represents a knowledgebase including 1066 fusion genes with manual curation that were compiled from public resources of fusion genes with experimental evidence. ChimerPub includes 2767 fusion genes obtained from text mining of PubMed abstracts. The ChimerSeq module is designed to archive the fusion candidates from deep sequencing data. Importantly, we have analyzed RNA-Seq data of the TCGA project covering 4569 patients in 23 cancer types using two reliable programs, FusionScan and TopHat-Fusion. The new user interface supports diverse search options and graphic representation of fusion gene structure. ChimerDB 3.0 is available at http://ercsb.ewha.ac.kr/fusiongene/. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
GhaedAmini, Hossein; Okhovati, Maryam; Zare, Morteza; Saghafi, Zahra; Bazrafshan, Azam; GhaedAmini, Alireza; GhaedAmini, Mohammadreza
The aim of this study was to provide a research and collaboration overview of Iranian research efforts in the field of traditional medicine during 2010-2014. This is a bibliometric study using the Scopus database as the data source, with a search on affiliation addresses relevant to traditional medicine and Iran as the search strategy. Subject and geographical overlay maps were also applied to visualize the network activities of the Iranian authors. Highly cited articles (citations >10) were further explored to highlight the impact of research domains more specifically. About 3,683 articles were published by Iranian authors in the Scopus database. The compound annual growth rate of Iranian publications was 0.14% during 2010-2014. Tehran University of Medical Sciences (932 articles), Shiraz University of Medical Sciences (404 articles), and Tabriz Islamic Medical University (391 articles) were the leading institutions in the field of traditional medicine. Medicinal plants (72%), digestive system diseases (21%), basics of traditional medicine (13%), and mental disorders (8%) were the major research topics. The United States (7%), the Netherlands (3%), and Canada (2.6%) were the most important collaborators of Iranian authors. Iranian research efforts in the field of traditional medicine have increased slightly over the last years. Yet, joint multi-disciplinary collaborations are needed to cover inadequately described areas of traditional medicine in the country.
Wang, Beichen; Chen, Xiaodong; Mamitsuka, Hiroshi; Zhu, Shanfeng
With the rapid development of biomedical sciences, a great number of documents have been published to report new scientific findings and advance the process of knowledge discovery. By the end of 2013, the largest biomedical literature database, MEDLINE, had indexed over 23 million abstracts. It is thus not easy for scientific professionals to find experts on a certain topic in the biomedical domain. In contrast to the existing services that use ad hoc approaches, we developed a novel solution to biomedical expert finding, BMExpert, based on the language model. For finding biomedical experts who are the most relevant to a specific topic query, BMExpert mines MEDLINE documents by considering three important factors: relevance of documents to the query topic, importance of documents, and associations between documents and experts. The performance of BMExpert was evaluated on a benchmark dataset, which was built by collecting the program committee members of ISMB in the past three years (2012-2014) on 14 different topics. Experimental results show that BMExpert outperformed three existing biomedical expert finding services: JANE, GoPubMed, and eTBLAST, with respect to both MAP (mean average precision) and P@50 (precision at 50). BMExpert is freely accessible at http://datamining-iip.fudan.edu.cn/service/BMExpert/.
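BMExpert's exact model is not spelled out here, but the query-likelihood language models it builds on can be sketched simply: each expert is scored by how likely their associated documents are to generate the query, smoothed against the whole collection. The expert-document association and scoring below are simplifying assumptions for illustration, not BMExpert's implementation:

```python
from collections import Counter

def query_likelihood(query_terms, doc_tokens, collection, lam=0.5):
    # Jelinek-Mercer smoothed query likelihood P(q | d): mixes the
    # document language model with the collection model so unseen
    # query terms do not zero out the score.
    doc = Counter(doc_tokens)
    coll = Counter(collection)
    score = 1.0
    for term in query_terms:
        p_doc = doc[term] / len(doc_tokens)
        p_coll = coll[term] / len(collection)
        score *= lam * p_doc + (1 - lam) * p_coll
    return score

def rank_experts(query, experts_docs, collection):
    # Score each expert by the sum of query likelihoods over their
    # documents; a real system would also weigh document importance
    # and the strength of the document-expert association.
    scores = {
        expert: sum(query_likelihood(query, d, collection) for d in docs)
        for expert, docs in experts_docs.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

# Toy corpus with two hypothetical experts:
docs = {
    "expert_a": [["text", "mining", "biomedical", "abstracts"]],
    "expert_b": [["protein", "structure", "prediction", "models"]],
}
collection = [t for ds in docs.values() for d in ds for t in d]
print(rank_experts(["text", "mining"], docs, collection))  # ['expert_a', 'expert_b']
```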
Aggithaya, Madhur G; Narahari, Saravu R
The journals that publish on Ayurveda have increasingly been indexed by popular medical databases in recent years. However, many Eastern journals are not indexed in biomedical journal databases such as PubMed. Literature searches for Ayurveda continue to be challenging due to the nonavailability of active, unbiased, dedicated databases for Ayurvedic literature. In 2010, the authors identified 46 databases that can be used for systematic searches of Ayurvedic papers and theses. This update reviewed our previous recommendation and identified current and relevant databases, aiming to update the Ayurveda literature search and the strategy to retrieve the maximum number of publications. The authors used psoriasis as an example to search the previously listed databases and identify new ones. The population, intervention, control, and outcome table included keywords related to psoriasis and Ayurvedic terminologies for skin diseases. Current citation update status, search results, and search options of the previous databases were assessed. Eight search strategies were developed. One hundred and five journals, both biomedical and Ayurvedic, which publish on Ayurveda, were identified. Variability in databases was explored to identify bias in journal citation. Five among the 46 databases are now relevant - AYUSH research portal, Annotated Bibliography of Indian Medicine, Digital Helpline for Ayurveda Research Articles (DHARA), PubMed, and the Directory of Open Access Journals. Search options in these databases are not uniform, and only PubMed allows a complex search strategy. "The Researches in Ayurveda" and the "Ayurvedic Research Database" (ARD) are important grey resources for hand searching. About 44/105 (41.5%) of journals publishing Ayurvedic studies are not indexed in any database. Only 11/105 (10.4%) exclusive Ayurveda journals are indexed in PubMed. The AYUSH research portal and DHARA are the two major portals established after 2010. It is mandatory to search PubMed and the four other databases because all five carry citations from different groups of journals. The hand
Since its beginnings in 1949, hydrogeologic investigations at the Idaho National Engineering Laboratory (INEL) have resulted in an extensive collection of technical publications providing information concerning ground-water hydraulics and contaminant transport within the unsaturated zone. Funding has been provided by the Department of Energy, through a grant from the Department of Energy Idaho Field Office, to compile an INEL-wide summary of unsaturated-zone studies based on a literature search. University of Idaho researchers are conducting a review of technical documents produced at or pertaining to the INEL that present or discuss processes in the unsaturated zone and surface water-ground water interactions. Results of this review are being compiled as an electronic database. Fields are available in this database for document title and associated identification number, author, source, abstract, and summary of information (including types of data and parameters). AskSam®, a text-based database system, was chosen, and WordPerfect 5.1© is being used as a text editor to input data records into askSam.
Bousquet, P-J; Caillet, P; Coeuret-Pellicer, M; Goulard, H; Kudjawu, Y C; Le Bihan, C; Lecuyer, A I; Séguret, F
The development and use of healthcare databases accentuates the need for dedicated tools, including validated selection algorithms of cancer diseased patients. As part of the development of the French National Health Insurance System data network REDSIAM, the tumor taskforce established an inventory of national and internal published algorithms in the field of cancer. This work aims to facilitate the choice of a best-suited algorithm. A non-systematic literature search was conducted for various cancers. Results are presented for lung, breast, colon, and rectum. Medline, Scopus, the French Database in Public Health, Google Scholar, and the summaries of the main French journals in oncology and public health were searched for publications until August 2016. An extraction grid adapted to oncology was constructed and used for the extraction process. A total of 18 publications were selected for lung cancer, 18 for breast cancer, and 12 for colorectal cancer. Validation studies of algorithms are scarce. When information is available, the performance and choice of an algorithm are dependent on the context, purpose, and location of the planned study. Accounting for cancer disease specificity, the proposed extraction chart is more detailed than the generic chart developed for other REDSIAM taskforces, but remains easily usable in practice. This study illustrates the complexity of cancer detection through sole reliance on healthcare databases and the lack of validated algorithms specifically designed for this purpose. Studies that standardize and facilitate validation of these algorithms should be developed and promoted. Copyright © 2017. Published by Elsevier Masson SAS.
Rodriguez-Esteban, Raul; Bundschus, Markus
Biomedical text mining of scientific knowledge bases, such as Medline, has received much attention in recent years. Given that text mining is able to automatically extract biomedical facts that revolve around entities such as genes, proteins, and drugs, from unstructured text sources, it is seen as a major enabler to foster biomedical research and drug discovery. In contrast to the biomedical literature, research into the mining of biomedical patents has not reached the same level of maturity. Here, we review existing work and highlight the associated technical challenges that emerge from automatically extracting facts from patents. We conclude by outlining potential future directions in this domain that could help drive biomedical research and drug discovery. Copyright © 2016 Elsevier Ltd. All rights reserved.
... feature database. The selection of feature descriptors affects image retrieval performance. In early years, Manjunath et al. used features based on the intensity histogram for biomedical image retrieval. However, their retrieval performance is usually limited, especially on large databases, due to a lack of ...
Sixty-five chemicals in the ToxCast high-throughput screening (HTS) dataset have been linked to cleft palate based on data from ToxRefDB (rat or rabbit prenatal developmental toxicity studies) or from literature reports. These compounds are structurally diverse and thus likely to...
Dornic, N; Ficheux, A S; Roudot, A C
The risks related to the use of essential oils are difficult to ascertain at present, due in part to the large number of different oils available on the market, which complicates the risk assessor's task. Essential oils may contain skin allergens in significant amounts and could thus pose a risk to the consumer. The aim of our study was to collect as much qualitative and quantitative data as possible on allergens present in essential oils. Eleven types of essential oils, with 25 respective subspecies, were taken into account based on a previous survey. Based on the literature, 517 dosages were recorded from 112 publications, providing valuable information for probabilistic exposure assessment purposes. Twenty-two substances recognized as established allergens were found in the essential oils we included. Of these, 11 are also found in cosmetics as fragrance components. These results are of major importance regarding co-exposure to fragrance allergens. Moreover, this could lead to regulatory measures for essential oils in the future, as is the case for cosmetic products, in order to better protect consumers against skin allergy. Copyright © 2016 Elsevier Inc. All rights reserved.
Pawar, S.H.; Khyalappa, R.J.; Yakhmi, J.V.
This book is predominantly a compilation of papers presented at the conference, which focused on developments in biomedical materials, biomedical devices and instrumentation, biomedical effects of electromagnetic radiation, electrotherapy, radiotherapy, biosensors, biotechnology, bioengineering, tissue engineering, clinical engineering and surgical planning, medical imaging, hospital system management, biomedical education, the biomedical industry and society, bioinformatics, structured nanomaterials for biomedical applications, nano-composites, nano-medicine, synthesis of nanomaterials, and nano science and technology development. The papers presented herein contain the scientific substance to serve the academic needs of researchers in the fields of biomedicine, biomedical engineering, materials science, and nanotechnology. Papers relevant to INIS are indexed separately
Dhillon, Sarinder; Advances in biomedical infrastructure 2013
Current biomedical databases are independently administered in geographically distinct locations, making them almost ideal candidates for the adoption of intelligent data management approaches. This book focuses on research issues, problems, and opportunities in Biomedical Data Infrastructure, identifying new issues and directions for future research in Biomedical Data and Information Retrieval, Semantics in Biomedicine, and Biomedical Data Modeling and Analysis. The book will be a useful guide for researchers, practitioners, and graduate-level students interested in learning about state-of-the-art developments in biomedical data management.
Biomedical journals must adhere to strict standards of editorial quality. In a globalized academic scenario, biomedical journals must compete firstly to publish the most relevant original research and secondly to obtain the broadest possible visibility and the widest dissemination of their scientific contents. The cornerstone of the scientific process is still the peer-review system but additional quality criteria should be met. Recently access to medical information has been revolutionized by electronic editions. Bibliometric databases such as MEDLINE, the ISI Web of Science and Scopus offer comprehensive online information on medical literature. Classically, the prestige of biomedical journals has been measured by their impact factor but, recently, other indicators such as SCImago SJR or the Eigenfactor are emerging as alternative indices of a journal's quality. Assessing the scholarly impact of research and the merits of individual scientists remains a major challenge. Allocation of authorship credit also remains controversial. Furthermore, in our Kafkaesque world, we prefer to count rather than read the articles we judge. Quantitative publication metrics (research output) and citations analyses (scientific influence) are key determinants of the scientific success of individual investigators. However, academia is embracing new objective indicators (such as the "h" index) to evaluate scholarly merit. The present review discusses some editorial issues affecting biomedical journals, currently available bibliometric databases, bibliometric indices of journal quality and, finally, indicators of research performance and scientific success. Copyright 2010 SEEN. Published by Elsevier Espana. All rights reserved.
Chen, Hongyu; Martin, Bronwen; Daimon, Caitlin M; Maudsley, Stuart
Text mining is rapidly becoming an essential technique for the annotation and analysis of large biological data sets. Biomedical literature currently increases at a rate of several thousand papers per week, making automated information retrieval methods the only feasible method of managing this expanding corpus. With the increasing prevalence of open-access journals and constant growth of publicly-available repositories of biomedical literature, literature mining has become much more effective with respect to the extraction of biomedically-relevant data. In recent years, text mining of popular databases such as MEDLINE has evolved from basic term-searches to more sophisticated natural language processing techniques, indexing and retrieval methods, structural analysis and integration of literature with associated metadata. In this review, we will focus on Latent Semantic Indexing (LSI), a computational linguistics technique increasingly used for a variety of biological purposes. It is noted for its ability to consistently outperform benchmark Boolean text searches and co-occurrence models at information retrieval and its power to extract indirect relationships within a data set. LSI has been used successfully to formulate new hypotheses, generate novel connections from existing data, and validate empirical data.
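The Latent Semantic Indexing technique reviewed above can be sketched as a truncated SVD over a toy term-document matrix. The terms, counts, and choice of k below are purely illustrative, not drawn from any corpus discussed in the review; the point is only that documents sharing related vocabulary end up close together in the latent space even without exact term overlap.

```python
import numpy as np

# Tiny term-document matrix: rows = terms, columns = documents.
# Counts are invented for illustration.
terms = ["gene", "protein", "expression", "patient", "clinical"]
X = np.array([
    [2, 1, 0, 0],   # gene
    [1, 2, 0, 0],   # protein
    [1, 1, 1, 0],   # expression
    [0, 0, 2, 1],   # patient
    [0, 0, 1, 2],   # clinical
], dtype=float)

# LSI: keep only the k strongest latent dimensions of the SVD.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T   # documents in latent space

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Documents 0 and 1 share molecular-biology terms, so their latent
# vectors are far more similar than those of documents 0 and 3.
print(cosine(doc_vecs[0], doc_vecs[1]) > cosine(doc_vecs[0], doc_vecs[3]))
```

In a real retrieval setting, a query would be folded into the same latent space and ranked by cosine similarity against the document vectors, which is what lets LSI surface the indirect relationships the abstract describes.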
This article describes the methodology of preparing, writing, and publishing scientific papers in biomedical journals. A concise overview is given of the concept and structure of the System of biomedical scientific and technical information and of the way biomedical literature is retrieved from worldwide biomedical databases. The scientific and professional medical journals currently published in Bosnia and Herzegovina are described. Also given is a comparative review of the number and structure of papers published in indexed journals in Bosnia and Herzegovina that are listed in the MEDLINE database. Three B&H journals indexed in the MEDLINE database were analyzed for 2010: Medical Archives (Medicinski Arhiv), Bosnian Journal of Basic Medical Sciences, and Medical Gazette (Medicinski Glasnik). The largest number of original papers was published in Medical Archives. There is a statistically significant difference in the number of papers published by local authors relative to international journals, in favor of Medical Archives. Admittedly, the Bosnian Journal of Basic Medical Sciences does not categorize its articles, so we could not make comparisons. Medical Archives and the Bosnian Journal of Basic Medical Sciences published, by percentage, the largest number of articles by authors from Sarajevo and Tuzla, the two oldest and largest university medical centers in Bosnia and Herzegovina. The author believes it is necessary to make qualitative changes in the reception and reviewing of papers for publication in biomedical journals published in Bosnia and Herzegovina, which should be the responsibility of a separate scientific authority/committee composed of experts in the field of medicine at the state level. PMID:23572850
Bronzino, Joseph D
Known as the bible of biomedical engineering, The Biomedical Engineering Handbook, Fourth Edition, sets the standard against which all other references of this nature are measured. As such, it has served as a major resource for both skilled professionals and novices to biomedical engineering. Biomedical Engineering Fundamentals, the first volume of the handbook, presents material from respected scientists with diverse backgrounds in physiological systems, biomechanics, biomaterials, bioelectric phenomena, and neuroengineering. More than three dozen specific topics are examined, including cardia
Pyysalo, Sampo; Ananiadou, Sophia
Anatomical entities ranging from subcellular structures to organ systems are central to biomedical science, and mentions of these entities are essential to understanding the scientific literature. Despite extensive efforts to automatically analyze various aspects of biomedical text, there have been only a few studies focusing on anatomical entities, and no dedicated methods for learning to automatically recognize anatomical entity mentions in free-form text have been introduced. We present AnatomyTagger, a machine learning-based system for anatomical entity mention recognition. The system incorporates a broad array of approaches proposed to benefit tagging, including the use of Unified Medical Language System (UMLS)- and Open Biomedical Ontologies (OBO)-based lexical resources, word representations induced from unlabeled text, statistical truecasing, and non-local features. We train and evaluate the system on a newly introduced corpus that substantially extends previously available resources, and apply the resulting tagger to automatically annotate the entire open-access scientific literature. The resulting analyses have been applied to extend services provided by the Europe PubMed Central literature database. All tools and resources introduced in this work are available from http://nactem.ac.uk/anatomytagger. Supplementary data are available at Bioinformatics online.
Background: Due to the rapidly expanding body of biomedical literature, biologists require increasingly sophisticated and efficient systems to help them to search for relevant information. Such systems should account for the multiple written variants used to represent biomedical concepts, and allow the user to search for specific pieces of knowledge (or events) involving these concepts, e.g., protein-protein interactions. Such functionality requires access to detailed information about words used in the biomedical literature. Existing databases and ontologies often have a specific focus and are oriented towards human use. Consequently, biological knowledge is dispersed amongst many resources, which often do not attempt to account for the large and frequently changing set of variants that appear in the literature. Additionally, such resources typically do not provide information about how terms relate to each other in texts to describe events. Results: This article provides an overview of the design, construction and evaluation of a large-scale lexical and conceptual resource for the biomedical domain, the BioLexicon. The resource can be exploited by text mining tools at several levels, e.g., part-of-speech tagging, recognition of biomedical entities, and the extraction of events in which they are involved. As such, the BioLexicon must account for real usage of words in biomedical texts. In particular, the BioLexicon gathers together different types of terms from several existing data resources into a single, unified repository, and augments them with new term variants automatically extracted from biomedical literature. Extraction of events is facilitated through the inclusion of biologically pertinent verbs (around which events are typically organized) together with information about typical patterns of grammatical and semantic behaviour, which are acquired from domain-specific texts. In order to foster interoperability, the BioLexicon is modelled using the Lexical
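The variant handling that a lexicon like the BioLexicon provides can be illustrated with a minimal dictionary lookup that folds surface variants onto one canonical identifier. The entries and the normalization rule below are invented for illustration; the real resource covers far more variant types (inflections, abbreviations, orthographic variants) than simple case and whitespace folding.

```python
# Minimal sketch of a variant-aware lexicon lookup; entries are
# invented examples, not taken from the BioLexicon itself.
LEXICON = {
    "tumour necrosis factor": "TNF",
    "tumor necrosis factor": "TNF",
    "tnf-alpha": "TNF",
    "p53": "TP53",
    "tp53": "TP53",
}

def normalize(mention):
    # Case-fold and collapse internal whitespace so that spelling and
    # casing variants map to a single canonical identifier.
    key = " ".join(mention.lower().split())
    return LEXICON.get(key)

print(normalize("Tumour  Necrosis Factor"))  # → TNF
```

A text mining pipeline would apply such a lookup after entity mention detection, so that downstream event extraction operates on canonical concepts rather than raw surface strings.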
Catia Pesquita; Daniel Faria; Tiago Grego; Francisco Couto; Mário J. Silva
Biomedical research generates a vast amount of information that is ultimately stored in scientific publications or in databases. The information in scientific texts is unstructured and thus hard to access, whereas the information in databases, although more accessible, often lacks in contextualization. The integration of information from these two kinds of sources is crucial for managing and extracting knowledge. By structuring and defining the concepts and relationships within a biomedical d...
Piper, Brian J; Lambert, Drew A; Keefe, Ryan C; Smukler, Phoebe U; Selemon, Nicolas A; Duperry, Zachary R
Textbooks are a formative resource for health care providers during their education and are also an enduring reference for pathophysiology and treatment. Unlike the primary literature and clinical guidelines, biomedical textbook authors do not typically disclose potential financial conflicts of interest (pCoIs). The objective of this study was to evaluate whether the authors of textbooks used in the training of physicians, pharmacists, and dentists had appreciable undisclosed pCoIs in the form of patents or compensation received from pharmaceutical or biotechnology companies. The most recent editions of six medical textbooks, Harrison's Principles of Internal Medicine (Har PIM), Katzung and Trevor's Basic and Clinical Pharmacology (Kat BCP), the American Osteopathic Association's Foundations of Osteopathic Medicine (AOA FOM), Remington: The Science and Practice of Pharmacy (Rem SPP), Koda-Kimble and Young's Applied Therapeutics (KKY AT), and Yagiela's Pharmacology and Therapeutics for Dentistry (Yag PTD), were selected after consulting biomedical educators for evaluation. Author names (N = 1,152, 29.2% female) were submitted to databases to examine patents (Google Scholar) and compensation (ProPublica's Dollars for Docs [PDD]). Authors were listed as inventors on 677 patents (maximum/author = 23), with three-quarters (74.9%) belonging to Har PIM authors. Females were significantly underrepresented among patent holders. The PDD 2009-2013 database revealed receipt of US$13.2 million, the majority (83.9%) of which went to Har PIM authors. The maximum compensation per author was $869,413. The PDD 2014 database identified receipt of $6.8 million, with 50.4% of eligible authors receiving compensation. The maximum compensation received by a single author was $560,021. Cardiovascular authors were most likely to have a PDD entry and neurologic disorders authors were least likely. An appreciable subset of biomedical authors have patents and have received remuneration from medical product
Chen, Yen-Wei; Moussi, Joelle; Drury, Jeanie L; Wataha, John C
The use of zirconia in medicine and dentistry has rapidly expanded over the past decade, driven by its advantageous physical, biological, esthetic, and corrosion properties. Zirconia orthopedic hip replacements have shown superior wear-resistance over other systems; however, risk of catastrophic fracture remains a concern. In dentistry, zirconia has been widely adopted for endosseous implants, implant abutments, and all-ceramic crowns. Because of an increasing demand for esthetically pleasing dental restorations, zirconia-based ceramic restorations have become one of the dominant restorative choices. Areas covered: This review provides an updated overview of the applications of zirconia in medicine and dentistry with a focus on dental applications. The MEDLINE electronic database (via PubMed) was searched, and relevant original and review articles from 2010 to 2016 were included. Expert commentary: Recent data suggest that zirconia performs favorably in both orthopedic and dental applications, but quality long-term clinical data remain scarce. Concerns about the effects of wear, crystalline degradation, crack propagation, and catastrophic fracture are still debated. The future of zirconia in biomedical applications will depend on the generation of these data to resolve concerns.
Galipeau, James; Barbour, Virginia; Baskin, Patricia; Bell-Syer, Sally; Cobey, Kelly; Cumpston, Miranda; Deeks, Jon; Garner, Paul; MacLehose, Harriet; Shamseer, Larissa; Straus, Sharon; Tugwell, Peter; Wager, Elizabeth; Winker, Margaret; Moher, David
Biomedical journals are the main route for disseminating the results of health-related research. Despite this, their editors operate largely without formal training or certification. To our knowledge, no body of literature systematically identifying core competencies for scientific editors of biomedical journals exists. Therefore, we aimed to conduct a scoping review to determine what is known on the competency requirements for scientific editors of biomedical journals. We searched the MEDLINE®, Cochrane Library, Embase®, CINAHL, PsycINFO, and ERIC databases (from inception to November 2014) and conducted a grey literature search for research and non-research articles with competency-related statements (i.e. competencies, knowledge, skills, behaviors, and tasks) pertaining to the role of scientific editors of peer-reviewed health-related journals. We also conducted an environmental scan, searched the results of a previous environmental scan, and searched the websites of existing networks, major biomedical journal publishers, and organizations that offer resources for editors. A total of 225 full-text publications were included, 25 of which were research articles. We extracted a total of 1,566 statements possibly related to core competencies for scientific editors of biomedical journals from these publications. We then collated overlapping or duplicate statements which produced a list of 203 unique statements. Finally, we grouped these statements into seven emergent themes: (1) dealing with authors, (2) dealing with peer reviewers, (3) journal publishing, (4) journal promotion, (5) editing, (6) ethics and integrity, and (7) qualities and characteristics of editors. To our knowledge, this scoping review is the first attempt to systematically identify possible competencies of editors. Limitations are that (1) we may not have captured all aspects of a biomedical editor's work in our searches, (2) removing redundant and overlapping items may have led to the
In order to retrieve relevant scholarly information from biomedical databases, researchers generally use technical terms as queries, such as proteins, genes, diseases, and other biomedical descriptors. However, technical terms have limits as query terms because there are many indirect and conceptual expressions denoting them in the scientific literature. Combinatorial weighting schemes are proposed as an initial approach to this problem; they utilize various indexing and weighting methods and their combinations. In experiments based on the proposed system and a previously constructed evaluation collection, this approach showed promising results, in that one could continually locate new relevant expressions by combining the proposed weighting schemes. Furthermore, it could be ascertained that the best-performing binary combinations of the weighting schemes, reflecting the inherent traits of each scheme, could be complementary to each other, and that it is possible to find hidden relevant documents based on the proposed methods.
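As a rough sketch of what a binary combination of weighting schemes might look like, the snippet below linearly combines two classic schemes, binary term presence and TF-IDF, to rank a hypothetical mini-corpus. The corpus, the alpha parameter, and this particular combination are illustrative assumptions, not the authors' actual system.

```python
import math
from collections import Counter

# Hypothetical mini-corpus; contents are invented for illustration.
docs = [
    "gene expression regulates protein binding",
    "protein binding assay in patient samples",
    "patient outcomes in clinical trial cohorts",
]
tokenized = [d.split() for d in docs]
N = len(tokenized)
# Document frequency: in how many documents each term occurs.
df = Counter(t for doc in tokenized for t in set(doc))

def tfidf(term, doc):
    tf = doc.count(term)
    return tf * math.log((N + 1) / (df[term] + 1)) if tf else 0.0

def binary(term, doc):
    return 1.0 if term in doc else 0.0

def combined_score(query, doc, alpha=0.5):
    # Linear combination of two weighting schemes; alpha balances them.
    return sum(alpha * tfidf(t, doc) + (1 - alpha) * binary(t, doc)
               for t in query.split())

query = "protein binding"
ranked = sorted(range(N), reverse=True,
                key=lambda i: combined_score(query, tokenized[i]))
print(ranked)  # → [0, 1, 2]
```

The idea mirrors the abstract's observation that different schemes are complementary: the binary component rewards any occurrence of a query term, while the TF-IDF component discounts terms that are common across the collection.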
Xuan, Weijian; Dai, Manhong; Mirel, Barbara; Song, Jean; Athey, Brian; Watson, Stanley J; Meng, Fan
Background: Effective Medline database exploration is critical for understanding high-throughput experimental results and for developing novel hypotheses about the mechanisms underlying targeted biological processes. While existing solutions enhance Medline exploration through different approaches, such as document clustering, network presentations of underlying conceptual relationships, and the mapping of search results to MeSH and Gene Ontology trees, we believe the use of multiple ontologies from the Open Biomedical Ontologies (OBO) collection can greatly help researchers to explore literature from different perspectives as well as to quickly locate the most relevant Medline records for further investigation. Results: We developed an ontology-based interactive Medline exploration solution called PubOnto to enable the interactive exploration and filtering of search results through the use of multiple ontologies from the OBO Foundry. The PubOnto program is a rich internet application based on the FLEX platform. It contains a number of interactive tools, visualization capabilities, an open service architecture, and a customizable user interface, and it is freely accessible. PMID:19426463
Abbe, Adeline; Grouin, Cyril; Zweigenbaum, Pierre; Falissard, Bruno
The expansion of biomedical literature is creating the need for efficient tools to keep pace with increasing volumes of information. Text mining (TM) approaches are becoming essential to facilitate the automated extraction of useful biomedical information from unstructured text. We reviewed the applications of TM in psychiatry, and explored its advantages and limitations. A systematic review of the literature was carried out using the CINAHL, Medline, EMBASE, PsycINFO and Cochrane databases. In this review, 1103 papers were screened, and 38 were included as applications of TM in psychiatric research. Using TM and content analysis, we identified four major areas of application: (1) psychopathology (i.e. observational studies focusing on mental illnesses), (2) the patient perspective (i.e. patients' thoughts and opinions), (3) medical records (i.e. safety issues, quality of care and description of treatments), and (4) medical literature (i.e. identification of new scientific information in the literature). The information sources were qualitative studies, Internet postings, medical records and biomedical literature. Our work demonstrates that TM can contribute to complex research tasks in psychiatry. We discuss the benefits, limits, and further applications of this tool in the future. Copyright © 2015 John Wiley & Sons, Ltd.
About the Book: A well set out textbook explains the fundamentals of biomedical engineering in the areas of biomechanics, biofluid flow, biomaterials, bioinstrumentation and use of computing in biomedical engineering. All these subjects form a basic part of an engineer's education. The text is admirably suited to meet the needs of the students of mechanical engineering, opting for the elective of Biomedical Engineering. Coverage of bioinstrumentation, biomaterials and computing for biomedical engineers can meet the needs of the students of Electronic & Communication, Electronic & Instrumenta
Ritter, Arthur B; Valdevit, Antonio; Ascione, Alfred N
Introduction: Modeling of Physiological Processes; Cell Physiology and Transport; Principles and Biomedical Applications of Hemodynamics; A Systems Approach to Physiology; The Cardiovascular System; Biomedical Signal Processing; Signal Acquisition and Processing; Techniques for Physiological Signal Processing; Examples of Physiological Signal Processing; Principles of Biomechanics; Practical Applications of Biomechanics; Biomaterials; Principles of Biomedical Capstone Design; Unmet Clinical Needs; Entrepreneurship: Reasons why Most Good Designs Never Get to Market; An Engineering Solution in Search of a Biomedical Problem
Bandrowski, Anita; Brinkman, Ryan; Brochhausen, Mathias; Brush, Matthew H; Bug, Bill; Chibucos, Marcus C; Clancy, Kevin; Courtot, Mélanie; Derom, Dirk; Dumontier, Michel; Fan, Liju; Fostel, Jennifer; Fragoso, Gilberto; Gibson, Frank; Gonzalez-Beltran, Alejandra; Haendel, Melissa A; He, Yongqun; Heiskanen, Mervi; Hernandez-Boussard, Tina; Jensen, Mark; Lin, Yu; Lister, Allyson L; Lord, Phillip; Malone, James; Manduchi, Elisabetta; McGee, Monnie; Morrison, Norman; Overton, James A; Parkinson, Helen; Peters, Bjoern; Rocca-Serra, Philippe; Ruttenberg, Alan; Sansone, Susanna-Assunta; Scheuermann, Richard H; Schober, Daniel; Smith, Barry; Soldatova, Larisa N; Stoeckert, Christian J; Taylor, Chris F; Torniai, Carlo; Turner, Jessica A; Vita, Randi; Whetzel, Patricia L; Zheng, Jie
The Ontology for Biomedical Investigations (OBI) is an ontology that provides terms with precisely defined meanings to describe all aspects of how investigations in the biological and medical domains are conducted. OBI re-uses ontologies that provide a representation of biomedical knowledge from the Open Biological and Biomedical Ontologies (OBO) project and adds the ability to describe how this knowledge was derived. We here describe the state of OBI and several applications that are using it, such as adding semantic expressivity to existing databases, building data entry forms, and enabling interoperability between knowledge resources. OBI covers all phases of the investigation process, such as planning, execution and reporting. It represents information and material entities that participate in these processes, as well as roles and functions. Prior to OBI, it was not possible to use a single internally consistent resource that could be applied to multiple types of experiments for these applications. OBI has made this possible by creating terms for entities involved in biological and medical investigations and by importing parts of other biomedical ontologies such as GO, Chemical Entities of Biological Interest (ChEBI) and Phenotype Attribute and Trait Ontology (PATO) without altering their meaning. OBI is being used in a wide range of projects covering genomics, multi-omics, immunology, and catalogs of services. OBI has also spawned other ontologies (Information Artifact Ontology) and methods for importing parts of ontologies (Minimum information to reference an external ontology term (MIREOT)). The OBI project is an open cross-disciplinary collaborative effort, encompassing multiple research communities from around the globe. To date, OBI has created 2366 classes and 40 relations along with textual and formal definitions. The OBI Consortium maintains a web resource (http://obi-ontology.org) providing details on the people, policies, and issues being addressed
Kuo, Tsung-Ting; Kim, Hyeon-Eui; Ohno-Machado, Lucila
To introduce blockchain technologies, including their benefits, pitfalls, and the latest applications, to the biomedical and health care domains. Biomedical and health care informatics researchers who would like to learn about blockchain technologies and their applications in the biomedical/health care domains. The covered topics include: (1) introduction to the famous Bitcoin crypto-currency and the underlying blockchain technology; (2) features of blockchain; (3) review of alternative blockchain technologies; (4) emerging nonfinancial distributed ledger technologies and applications; (5) benefits of blockchain for biomedical/health care applications when compared to traditional distributed databases; (6) overview of the latest biomedical/health care applications of blockchain technologies; and (7) discussion of the potential challenges and proposed solutions of adopting blockchain technologies in biomedical/health care domains. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association.
Colson, Yolonda L.; Grinstaff, Mark W.
Superhydrophobic surfaces are actively studied across a wide range of applications and industries, and are now finding increased use in the biomedical arena as substrates to control protein adsorption, cellular interaction, and bacterial growth, as well as platforms for drug delivery devices and for diagnostic tools. The commonality in the design of these materials is to create a stable or metastable air state at the material surface, which lends itself to a number of unique properties. These activities are catalyzing the development of new materials, applications, and fabrication techniques, as well as collaborations across material science, chemistry, engineering, and medicine given the interdisciplinary nature of this work. The review begins with a discussion of superhydrophobicity, and then explores biomedical applications that are utilizing superhydrophobicity in depth including material selection characteristics, in vitro performance, and in vivo performance. General trends are offered for each application in addition to discussion of conflicting data in the literature, and the review concludes with the authors’ future perspectives on the utility of superhydrophobic surfaces for biomedical applications. PMID:27449946
Laenger, C. J., Sr.
The engineering tasks performed in response to needs articulated by clinicians are described. Initial contacts were made with these clinician-technology requestors by the Southwest Research Institute NASA Biomedical Applications Team. The basic purpose of the program was to effectively transfer aerospace technology into functional hardware to solve real biomedical problems.
This thesis is about Text Mining: extracting important information from literature. In recent years, the number of biomedical articles and journals has been growing exponentially, and scientists may not find the information they want because of the large number of publications. Therefore a system was
Ye, Zhan; Tafti, Ahmad P; He, Karen Y; Wang, Kai; He, Max M
Many new biomedical research articles are published every day, accumulating rich information, such as genetic variants, genes, diseases, and treatments. Rapid yet accurate text mining on large-scale scientific literature can discover novel knowledge to better understand human diseases and to improve the quality of disease diagnosis, prevention, and treatment. In this study, we designed and developed an efficient text mining framework called SparkText on a Big Data infrastructure, which is composed of Apache Spark data streaming and machine learning methods, combined with a Cassandra NoSQL database. To demonstrate its performance for classifying cancer types, we extracted information (e.g., breast, prostate, and lung cancers) from tens of thousands of articles downloaded from PubMed, and then employed Naïve Bayes, Support Vector Machine (SVM), and Logistic Regression to build prediction models to mine the articles. The accuracy of predicting a cancer type by SVM using the 29,437 full-text articles was 93.81%. While competing text-mining tools took more than 11 hours, SparkText mined the dataset in approximately 6 minutes. This study demonstrates the potential for mining large-scale scientific articles on a Big Data infrastructure, with real-time update from new articles published daily. SparkText can be extended to other areas of biomedical research.
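The classifiers this abstract names (Naïve Bayes, SVM, Logistic Regression) are standard text-categorization tools. As an illustration only, and not SparkText's actual Spark/Cassandra implementation, here is a minimal pure-Python multinomial Naïve Bayes; the toy training snippets and labels are invented for the example:

```python
from collections import Counter
import math

def tokenize(text):
    """Lowercase, strip simple punctuation, split on whitespace."""
    return [w.strip(".,;:").lower() for w in text.split()]

class NaiveBayes:
    """Multinomial Naive Bayes with Laplace (add-one) smoothing."""

    def __init__(self, alpha=1.0):
        self.alpha = alpha

    def fit(self, docs, labels):
        self.classes = sorted(set(labels))
        self.word_counts = {c: Counter() for c in self.classes}
        self.class_counts = Counter(labels)
        self.vocab = set()
        for doc, lab in zip(docs, labels):
            toks = tokenize(doc)
            self.word_counts[lab].update(toks)
            self.vocab.update(toks)

    def predict(self, doc):
        n_docs = sum(self.class_counts.values())
        v = len(self.vocab)
        best, best_lp = None, float("-inf")
        for c in self.classes:
            lp = math.log(self.class_counts[c] / n_docs)   # log prior
            total = sum(self.word_counts[c].values())
            for t in tokenize(doc):                        # smoothed log likelihoods
                lp += math.log((self.word_counts[c][t] + self.alpha)
                               / (total + self.alpha * v))
            if lp > best_lp:
                best, best_lp = c, lp
        return best

# Toy training snippets (invented, not from the SparkText corpus):
docs = [
    "breast cancer tumor mammography screening",
    "prostate cancer psa screening men",
    "lung cancer smoking tumor chest",
]
labels = ["breast", "prostate", "lung"]
nb = NaiveBayes()
nb.fit(docs, labels)
prediction = nb.predict("smoking and lung tumor")  # -> "lung"
```

A production pipeline like the one described would replace the toy corpus with TF-IDF features over tens of thousands of PubMed articles and distribute training across Spark workers.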
Sonia Mansoldo Dainesi
INTRODUCTION: Clinical research is essential for the advancement of Medicine, especially regarding the development of new drugs. Understanding the reasons behind patients' decision to participate in these studies is critical for recruitment and retention in research. OBJECTIVES: To examine the decision-making of participants in biomedical research, taking into account the different settings and environments where clinical research is performed. METHODS: A critical review of the literature was performed through several databases using the keywords "motivation", "decision", "reason", "biomedical research", "clinical research", "recruitment", "enrollment", "participation", "benefits", "altruism", "decline", "vulnerability" and "ethics", between August and November 2013, in English and in Portuguese. RESULTS: The review pointed out that reasons can differ according to characteristics such as the disease being treated, study phase, prognosis, and socioeconomic and cultural environment. Access to better health care, personal benefits, financial rewards and altruism are mentioned depending on the circumstances. CONCLUSION: Finding out more about individuals' reasons for taking part in research will allow clinical investigators to design studies of greater benefit to the community and will probably help to remove undesirable barriers imposed on participation. Improving the information given to health care professionals and patients on the benefits and risks of clinical trials is certainly a good start.
Liu, Yongli; Wan, Xing
Incremental fuzzy clustering combines advantages of fuzzy clustering and incremental clustering, and therefore is important in classifying large biomedical literature. Conventional algorithms, suffering from data sparsity and high-dimensionality, often fail to produce reasonable results and may even assign all the objects to a single cluster. In this paper, we propose two incremental algorithms based on information bottleneck, Single-Pass fuzzy c-means (spFCM-IB) and Online fuzzy c-means (oFCM-IB). These two algorithms modify conventional algorithms by considering different weights for each centroid and object and scoring mutual information loss to measure the distance between centroids and objects. spFCM-IB and oFCM-IB are used to group a collection of biomedical text abstracts from Medline database. Experimental results show that clustering performances of our approaches are better than such prominent counterparts as spFCM, spHFCM, oFCM and oHFCM, in terms of accuracy. Copyright © 2016 Elsevier Inc. All rights reserved.
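spFCM-IB and oFCM-IB modify conventional fuzzy c-means by weighting centroids and objects and by scoring mutual-information loss as the distance; those specifics are not reproduced here. As a baseline sketch only, the conventional fuzzy c-means updates the paper builds on (Euclidean distance, 1-D toy data, fuzzifier m = 2) look like this:

```python
def fcm(points, centroids, m=2.0, iters=30):
    """Conventional fuzzy c-means on 1-D data (Euclidean distance).

    Returns final centroids and the membership matrix U
    (one row per point, one column per cluster; each row sums to 1).
    """
    U = []
    for _ in range(iters):
        # Membership update: u_ij = d_ij^(-2/(m-1)) / sum_k d_kj^(-2/(m-1))
        U = []
        for x in points:
            d = [max(abs(x - v), 1e-12) for v in centroids]  # guard against /0
            U.append([1.0 / sum((di / dk) ** (2.0 / (m - 1.0)) for dk in d)
                      for di in d])
        # Centroid update: weighted mean with weights u_ij^m
        centroids = [
            sum(U[j][i] ** m * points[j] for j in range(len(points)))
            / sum(U[j][i] ** m for j in range(len(points)))
            for i in range(len(centroids))
        ]
    return centroids, U

# Two obvious 1-D clusters around 1.0 and 5.0 (toy data):
centroids, U = fcm([1.0, 1.1, 0.9, 5.0, 5.1, 4.9], [0.0, 6.0])
```

The single-pass and online variants described in the abstract process the data in chunks, carrying weighted centroids forward instead of reclustering from scratch.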
Goutam, Manish; Giriyapura, Chandu; Mishra, Sunil Kumar; Gupta, Siddharth
Titanium has gained immense popularity and has successfully established itself as the material of choice for dental implants. In both the medical and dental fields, titanium and its alloys have demonstrated success as biomedical devices. Owing to its high resistance to corrosion in a physiological environment and the excellent biocompatibility conferred by its passive, stable oxide film, titanium is considered the material of choice for intraosseous use. Certain studies show titanium as an allergen, but the resources to diagnose titanium sensitivity are very limited. Attention is needed towards the development of new and precise methods for the early diagnosis of titanium allergy, and towards identifying alternative biomaterials that can be used in place of titanium. A review of available articles from the Medline and PubMed databases was done to find literature available regarding titanium allergy, its diagnosis and new alternative materials for titanium.
Sirintrapun, S Joseph; Zehir, Ahmet; Syed, Aijazuddin; Gao, JianJiong; Schultz, Nikolaus; Cheng, Donavan T
Translational bioinformatics and clinical research (biomedical) informatics are the primary domains related to informatics activities that support translational research. Translational bioinformatics focuses on computational techniques in genetics, molecular biology, and systems biology. Clinical research (biomedical) informatics involves the use of informatics in discovery and management of new knowledge relating to health and disease. This article details 3 projects that are hybrid applications of translational bioinformatics and clinical research (biomedical) informatics: The Cancer Genome Atlas, the cBioPortal for Cancer Genomics, and the Memorial Sloan Kettering Cancer Center clinical variants and results database, all designed to facilitate insights into cancer biology and clinical/therapeutic correlations. Copyright © 2015 Elsevier Inc. All rights reserved.
This review attempts an in-depth evaluation of progress and achievements made since the last 11th International Mass Spectrometry Conference in the application of mass spectrometric techniques to biochemistry and biomedicine. For this purpose, scientific contributions in this field at major international meetings have been monitored, together with an extensive appraisal of literature data covering the period from 1988 to 1991. A bibliometric evaluation of the MEDLINE database for this period provides a total of almost 4000 entries for mass spectrometry. This allows a detailed study of literature and geographical sources of the most frequent applications, of disciplines where mass spectrometry is most active and of types of sample and instrumentation most commonly used. In this regard major efforts according to number of publications (over 100 literature reports) are concentrated in countries like Canada, France, Germany, Italy, Japan, Sweden, the UK and the USA. Also, most of the work using mass spectrometry in biochemistry and biomedicine is centred on studies of biotransformation, metabolism, pharmacology, pharmacokinetics and toxicology, which have been carried out on samples of blood, urine, plasma and tissue, in order of frequency of use. Human and animal studies appear to be evenly distributed in terms of the number of reports published in the literature in which the authors make use of experimental animals or describe work on human samples. Along these lines, special attention is given to the real usefulness of mass spectrometry (MS) technology in routine medical practice. Thus the review concentrates on evaluating the progress made in disease diagnosis and overall patient care. As regards prevailing techniques, GC-MS continues to be the mainstay of state-of-the-art methods for multicomponent analysis, stable isotope tracer studies and metabolic profiling, while HPLC-MS and tandem MS are becoming increasingly important in biomedical research. However
Berhidi, Anna; Geges, József; Vasas, Lívia
The majority of Hungarian scientific results are published in international periodicals in foreign languages, yet publications in Hungarian scientific periodicals should not be ignored. This study analyses Hungarian-edition biomedical periodicals from several points of view. Based on different databases, a list of 119 titles was compiled, covering both the core and the peripheral journals of the biomedical field. These periodicals were analysed empirically, one by one. Thirteen of the titles have ceased; of the remaining 106 Hungarian scientific journals, 10 are published in English. Of the remaining majority, published in Hungarian, only a few appear in international databases. Although a quarter of the Hungarian biomedical journals meet the requirements for representation in international databases, these periodicals are not indexed. Forty-two biomedical periodicals are available online, although a quarter of these come with restricted access. Two-thirds of the Hungarian biomedical journals have detailed instructions to authors, which inform publishing doctors and researchers of the requirements of a biomedical periodical. The increasing number of Hungarian biomedical journals is welcome news, but it would be important for quality publications that are widely cited to appear in Hungarian journals: the more publications are cited, the more journals and authors gain in prestige at home and internationally.
Boas, David A
Biomedical optics holds tremendous promise to deliver effective, safe, non- or minimally invasive diagnostics and targeted, customizable therapeutics. Handbook of Biomedical Optics provides an in-depth treatment of the field, including coverage of applications for biomedical research, diagnosis, and therapy. It introduces the theory and fundamentals of each subject, ensuring accessibility to a wide multidisciplinary readership. It also offers a view of the state of the art and discusses advantages and disadvantages of various techniques.Organized into six sections, this handbook: Contains intr
Gebelein, C G
The biomedical applications of polymers span an extremely wide spectrum of uses, including artificial organs, skin and soft tissue replacements, orthopaedic applications, dental applications, and controlled release of medications. No single, short review can possibly cover all these items in detail, and dozens of books and hundreds of reviews exist on biomedical polymers. Only a few relatively recent examples will be cited here; additional reviews are listed under most of the major topics in this book. We will consider each of the major classifications of biomedical polymers to some extent, inclu
From exoskeletons to neural implants, biomedical devices are no less than life-changing. Compact and constant power sources are necessary to keep these devices running efficiently. Edwar Romero's Powering Biomedical Devices reviews the background, current technologies, and possible future developments of these power sources, examining not only the types of biomedical power sources available (macro, mini, MEMS, and nano), but also what they power (such as prostheses, insulin pumps, and muscular and neural stimulators), and how they work (covering batteries, biofluids, kinetic and ther
Chaotic modulation is a strong method of improving communication security. Both analog and discrete chaotic systems are presented in the current literature. With the expansion of digital communication, discrete-time systems have become more efficient and closer to actual technology. The present contribution offers an in-depth analysis of the effects that chaos encryption produces on 1D and 2D biomedical signals. The performed simulations show that modulating signals are precisely recovered by the synchronizing receiver if the discrete systems are digitally implemented and the coefficients correspond precisely. Channel noise is also applied and its effects on biomedical signal demodulation are highlighted.
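A minimal sketch of the kind of discrete-time chaotic masking the abstract describes, using a logistic map as the chaotic generator (the paper's actual systems and coefficients are not specified; the map, seed x0, and gain below are illustrative). The receiver recovers the signal only when its parameters exactly match the transmitter's:

```python
import math

def logistic_stream(x0, r, n):
    """Iterate the logistic map x <- r*x*(1-x) and return n samples."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

def mask(signal, x0=0.4, r=3.99, gain=0.1):
    """Transmitter: hide the signal by adding a scaled chaotic sequence."""
    chaos = logistic_stream(x0, r, len(signal))
    return [s + gain * c for s, c in zip(signal, chaos)]

def demask(masked, x0=0.4, r=3.99, gain=0.1):
    """Receiver: regenerate the same chaos (same x0, r) and subtract it."""
    chaos = logistic_stream(x0, r, len(masked))
    return [m - gain * c for m, c in zip(masked, chaos)]

signal = [math.sin(2 * math.pi * i / 32) for i in range(128)]  # toy 1-D waveform
masked = mask(signal)
recovered = demask(masked)               # matching parameters: exact recovery
mismatched = demask(masked, x0=0.40001)  # tiny key mismatch: recovery fails
```

The failure of `mismatched` illustrates the sensitive dependence on initial conditions that gives chaotic modulation its security value.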
Danishuddin, Mohd; Kaushal, Lalima; Hassan Baig, Mohd; Khan, Asad U.
Drug resistance is one of the major concerns for antimicrobial chemotherapy against any particular target. Knowledge of the primary structure of antimicrobial agents and their activities is essential for rational drug design. Thus, we developed a comprehensive database, anti microbial drug database (AMDD), of known synthetic antibacterial and antifungal compounds that were extracted from the available literature and other chemical databases, e.g., PubChem, PubChem BioAssay and ZINC, etc. The ...
... and on-line analysis of the biomedical signals. Each Biopac system-based laboratory station consists of real-time data acquisition system, amplifiers for EMG, EKG, EEG, and equipment for the study of Plethysmography, evoked response, cardio...
Rangayyan, Rangaraj M
The book assists the reader in developing techniques for the analysis of biomedical signals and computer-aided diagnosis, with a pedagogical examination of basic and advanced topics accompanied by over 350 figures and illustrations. A wide range of filtering techniques is presented to address various applications. 800 mathematical expressions and equations. Practical questions, problems and laboratory exercises. Includes fractals and chaos theory with biomedical applications.
Sophisticated techniques for signal processing are now available to the biomedical specialist! Written in an easy-to-read, straightforward style, Biomedical Signal Processing presents techniques to eliminate background noise, enhance signal detection, and analyze computer data, making results easy to comprehend and apply. In addition to examining techniques for electrical signal analysis, filtering, and transforms, the author supplies an extensive appendix with several computer programs that demonstrate techniques presented in the text.
[Algorithms for the identification of hospital stays due to osteoporotic femoral neck fractures in European medical administrative databases using ICD-10 codes: A non-systematic review of the literature].
Caillet, P; Oberlin, P; Monnet, E; Guillon-Grammatico, L; Métral, P; Belhassen, M; Denier, P; Banaei-Bouchareb, L; Viprey, M; Biau, D; Schott, A-M
Osteoporotic hip fractures (OHF) are associated with significant morbidity and mortality. The French medico-administrative database (SNIIRAM) offers an interesting opportunity to improve the management of OHF. However, the validity of studies conducted with this database relies heavily on the quality of the algorithm used to detect OHF. The aim of the REDSIAM network is to facilitate the use of the SNIIRAM database. The main objective of this study was to present and discuss several OHF-detection algorithms that could be used with this database. A non-systematic literature search was performed. The Medline database was explored for the period January 2005-August 2016. Furthermore, a snowball search was then carried out from the articles included, and field experts were contacted. The extraction was conducted using the chart developed by the REDSIAM network's "Methodology" task force. The ICD-10 codes used to detect OHF are mainly S72.0, S72.1, and S72.2. The performance of these algorithms is at best partially validated. Complementary use of medical and surgical procedure codes would affect their performance. Finally, few studies described how they dealt with fractures of non-osteoporotic origin, re-hospitalization, and potential contralateral fracture cases. Authors in the literature encourage the use of ICD-10 codes S72.0 to S72.2 to develop algorithms for OHF detection. These are the codes most frequently used for OHF in France. Depending on the study objectives, other ICD-10 codes and medical and surgical procedures could be usefully discussed for inclusion in the algorithm. Detection and management of duplicates and non-osteoporotic fractures should be considered in the process. Finally, when a study is based on such an algorithm, all these points should be precisely described in the publication. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
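The core detection rule described (flagging stays coded S72.0 to S72.2) amounts to a code-prefix filter. A minimal sketch follows; the field names and toy records are hypothetical, and real SNIIRAM algorithms additionally handle procedure codes, re-hospitalizations, duplicates, and non-osteoporotic fractures:

```python
# Hypothetical stay records; real claims data has many more fields, and
# ICD-10 codes are often stored without the dot (e.g. "S720").
OHF_CODES = ("S72.0", "S72.1", "S72.2")  # femoral neck / pertrochanteric / subtrochanteric

def is_ohf_stay(stay):
    """Flag a stay whose principal diagnosis starts with an OHF ICD-10 code."""
    return stay["principal_dx"].startswith(OHF_CODES)

stays = [
    {"id": 1, "principal_dx": "S72.00"},  # femoral neck fracture
    {"id": 2, "principal_dx": "S82.1"},   # tibia fracture: not an OHF
    {"id": 3, "principal_dx": "S72.2"},   # subtrochanteric fracture
]
detected = [s["id"] for s in stays if is_ohf_stay(s)]  # -> [1, 3]
```

`str.startswith` accepts a tuple of prefixes, which keeps the rule declarative: adding a code to `OHF_CODES` is the whole change.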
Dario, Paolo; Chiara Carrozza, Maria; Benvenuto, Antonella; Menciassi, Arianna
In this paper we analyse the main characteristics of some micro-devices which have been developed recently for biomedical applications. Among the many biomedical micro-systems proposed in the literature or already on the market, we have selected a few which, in our opinion, represent particularly well the technical problems to be solved, the research topics to be addressed and the opportunities offered by micro-system technology (MST) in the biomedical field. For this review we have identified four important areas of application of micro-systems in medicine and biology: (1) diagnostics; (2) drug delivery; (3) neural prosthetics and tissue engineering; and (4) minimally invasive surgery. We conclude that MST has the potential to play a major role in the development of new medical instrumentation and to have a considerable industrial impact in this field.
Background: Reactive oxygen species (ROS) are known mediators of cellular damage in multiple diseases, including diabetic complications. Despite its importance, no comprehensive database is currently available for the genes associated with ROS. Methods: We present ROS- and diabetes-related targets (genes/proteins) collected from the biomedical literature through a text mining technology. A web-based literature mining tool, SciMiner, was applied to 1,154 biomedical papers indexed with diabetes and ROS by PubMed to identify relevant targets. Over-represented targets in the ROS-diabetes literature were obtained through comparisons against randomly selected literature. The expression levels of nine genes, selected from the top-ranked ROS-diabetes set, were measured in the dorsal root ganglia (DRG) of diabetic and non-diabetic DBA/2J mice in order to evaluate the biological relevance of literature-derived targets in the pathogenesis of diabetic neuropathy. Results: SciMiner identified 1,026 ROS- and diabetes-related targets from the 1,154 biomedical papers (http://jdrf.neurology.med.umich.edu/ROSDiabetes/). Fifty-three targets were significantly over-represented in the ROS-diabetes literature compared to randomly selected literature. These over-represented targets included well-known members of the oxidative stress response including catalase, the NADPH oxidase family, and the superoxide dismutase family of proteins. Eight of the nine selected genes exhibited significant differential expression between diabetic and non-diabetic mice. For six genes, the direction of expression change in diabetes paralleled enhanced oxidative stress in the DRG. Conclusions: Literature mining compiled ROS-diabetes-related targets from the biomedical literature and led us to evaluate the biological relevance of selected targets in the pathogenesis of diabetic neuropathy.
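The abstract does not state which statistic SciMiner uses to call a target over-represented; a standard choice for comparing mention counts in a focused corpus against randomly selected literature is a one-sided Fisher's exact test. A stdlib-only sketch, with invented counts for illustration:

```python
from math import comb

def fisher_right_tail(a, b, c, d):
    """One-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]]:
    probability, with margins fixed, that the first cell is >= a
    (hypergeometric right tail).
    """
    n1, n2, m1 = a + b, c + d, a + c
    denom = comb(n1 + n2, m1)
    p = 0.0
    for i in range(a, min(n1, m1) + 1):
        j = m1 - i
        if 0 <= j <= n2:
            p += comb(n1, i) * comb(n2, j) / denom
    return p

# Invented counts: a target mentioned in 40/100 ROS-diabetes papers
# versus 10/100 randomly selected papers -> strongly over-represented.
p_enriched = fisher_right_tail(40, 60, 10, 90)
# Same mention rate in both corpora -> no enrichment signal.
p_null = fisher_right_tail(10, 90, 10, 90)
```

In practice such p-values would also be corrected for multiple testing, since every candidate target in the corpus is tested.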
Amith, Muhammad; He, Zhe; Bian, Jiang; Lossio-Ventura, Juan Antonio; Tao, Cui
With the proliferation of heterogeneous health care data in the last three decades, biomedical ontologies and controlled biomedical terminologies play a more and more important role in knowledge representation and management, data integration, natural language processing, as well as decision support for health information systems and biomedical research. Biomedical ontologies and controlled terminologies are intended to assure interoperability. Nevertheless, the quality of biomedical ontologies has hindered their applicability and subsequent adoption in real-world applications. Ontology evaluation is an integral part of ontology development and maintenance. In the biomedicine domain, ontology evaluation is often conducted by third parties as a quality assurance (or auditing) effort that focuses on identifying modeling errors and inconsistencies. In this work, we first organized four categorical schemes of ontology evaluation methods in the existing literature to create an integrated taxonomy. Further, to understand the ontology evaluation practice in the biomedicine domain, we reviewed a sample of 200 ontologies from the National Center for Biomedical Ontology (NCBO) BioPortal, the largest repository for biomedical ontologies, and observed that only 15 of these ontologies have documented evaluation in their corresponding inception papers. We then surveyed the recent quality assurance approaches for biomedical ontologies and their use. We also mapped these quality assurance approaches to the ontology evaluation criteria. It is our anticipation that ontology evaluation and quality assurance approaches will be more widely adopted in the development life cycle of biomedical ontologies. Copyright © 2018 Elsevier Inc. All rights reserved.
Rajan, Renju; Robin, Delvin T; M, Vandanarani
Biomedical waste management is an integral part of traditional and contemporary systems of health care. The paper focuses on the identification and classification of biomedical wastes in Ayurvedic hospitals, current practices of their management in Ayurveda hospitals, and future prospects. Databases like PubMed (1975-2017 Feb), Scopus (1960-2017), AYUSH Portal, DOAJ, DHARA and Google Scholar were searched. We used the medical subject headings 'biomedical waste' and 'health care waste' for identification and classification. The terms 'biomedical waste management' and 'health care waste management', alone and combined with 'Ayurveda' or 'Ayurvedic', were used for current practices and recent advances in the treatment of these wastes. We made a humble attempt to categorize the biomedical wastes from Ayurvedic hospitals, as the available data about their grouping are very scarce. Proper biomedical waste management is the mainstay of hospital cleanliness, hospital hygiene and maintenance activities. Current disposal techniques adopted for Ayurveda biomedical wastes are sewage/drains, incineration and landfill, but these methods have both merits and demerits. Our review has identified a number of interesting areas for future research, such as the logical application of bioremediation techniques in biomedical waste management and the usage of effective micro-organisms and solar energy in waste disposal. Copyright © 2017 Transdisciplinary University, Bangalore and World Ayurveda Foundation. Published by Elsevier B.V. All rights reserved.
Contents: (1) General Biomedical Optics Theory: Introduction to the Use of Light for Diagnostic and Therapeutic Modalities; What Is Biomedical Optics?; Biomedical Optics Timeline; Elementary Optical Discoveries; Historical Events in Therapeutic and Diagnostic Use of Light; Light Sources; Current State of the Art; Summary; Additional Reading; Problems. (2) Review of Optical Principles: Fundamental Electromagnetic Theory and Description of Light Sources: Definitions in Optics; Kirchhoff's Laws of Radiation; Electromagnetic Wave Theory; Light Sources; Applications of Various Lasers; Summary; Additional Reading; Problems. (3) Review of Optical Principles: Classical Optics: Geometrical Optics; Other Optical Principles; Quantum Physics; Gaussian Optics; Summary; Additional Reading; Problems. (4) Review of Optical Interaction Properties: Absorption and Scattering; Summary; Additional Reading; Problems. (5) Light-Tissue Interaction Variables: Laser Variables; Tissue Variables; Light Transportation Theory; Light Propagation under Dominant Absorption; Summary; Nomenclature; Additional Reading; Problems. (6) Light-Tissue Interaction Th...
Building of the System of Biomedical Scientific Information of Yugoslavia (SBMSI YU) began by the end of 1980, and the system officially became operative in 1986. After the political disintegration of the former Yugoslavia, the SBMSI of Serbia was formed. SBMSI S is being developed in line with the development policy of the System of Scientific and Technologic Information of Serbia (SSTI S), and with the technical support of SSTI S. Reconstruction of the system was done using the former SBMSI YU as a model. Unlike the former SBMSI YU, SBMSI S holds, besides the database Biomedicina Serbica, three important databases: a database of doctoral dissertations promoted at the University Medical School in Belgrade from 1955-1993; a database of Master's theses promoted at the University School of Medicine in Belgrade from 1965-1993; and a database of foreign biomedical periodicals in the libraries of Serbia.
Brown, J H U
Advances in Biomedical Engineering, Volume 6, is a collection of papers that discusses the role of integrated electronics in medical systems and the usage of biological mathematical models in biological systems. Other papers deal with the health care systems, the problems and methods of approach toward rehabilitation, as well as the future of biomedical engineering. One paper discusses the use of system identification as it applies to biological systems to estimate the values of a number of parameters (for example, resistance, diffusion coefficients) by indirect means. More particularly, the i
Brown, J H U
Advances in Biomedical Engineering, Volume 5, is a collection of papers that deals with the application of the principles and practices of engineering to basic and applied biomedical research, development, and the delivery of health care. The papers also describe breakthroughs in health improvements, as well as basic research that has been accomplished through clinical applications. One paper examines engineering principles and practices that can be applied in developing therapeutic systems by a controlled delivery system in drug dosage. Another paper examines the physiological and materials vari
Biomedical enhancements, the applications of medical technology to make better those who are neither ill nor deficient, have made great strides in the past few decades. Using Amartya Sen's capability approach as my framework, I argue in this article that far from being simply permissible, we have a prima facie moral obligation to use these new developments for the end goal of promoting social justice. In terms of both range and magnitude, the use of biomedical enhancements will mark a radical advance in how we compensate the most disadvantaged members of society. © 2013 John Wiley & Sons Ltd.
Bell, D A
Relational Databases explores the major advances in relational databases and provides a balanced analysis of the state of the art in relational databases. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration. The
Zhao, M; Chen, L; Qu, H
Cell senescence is a cellular process in which normal diploid cells cease to replicate, and is a major driving force for human cancers and aging-associated diseases. Recent studies on cell senescence have identified many new genetic components and pathways that control cell aging. However, there is no comprehensive resource for cell senescence that integrates various genetic studies and relationships with cell senescence, and the risk associated with complex diseases such as cancer is still unexplored. We have developed the first literature-based gene resource for exploring cell senescence genes, CSGene. We compiled 504 experimentally verified genes from public data resources and published literature. Pathway analyses highlighted the prominent roles of cell senescence genes in the control of rRNA gene transcription and of the unusual rDNA repeats that constitute a center for the stability of the whole genome. We also found a strong association of cell senescence with HIV-1 infection and viral carcinogenesis, mainly related to promoter/enhancer binding and chromatin modification processes. Moreover, pan-cancer mutation and network analysis also identified common cell aging mechanisms in cancers and uncovered a highly modular network structure. These results highlight the utility of CSGene for elucidating the complex cellular events of cell senescence.
Attinger, E. O.
Considers definition of biomedical engineering (BME) and how biomedical engineers should be trained. State of the art descriptions of BME and BME education are followed by a brief look at the future of BME. (TS)
Peek, N.; Combi, C.; Tucker, A.
Objective: To introduce the special topic of Methods of Information in Medicine on data mining in biomedicine, with selected papers from two workshops on Intelligent Data Analysis in bioMedicine (IDAMAP) held in Verona (2006) and Amsterdam (2007). Methods: Defining the field of biomedical data
Carmichael, Stephen W.; Robb, Richard A.
There is a perceived need for anatomy instruction for graduate students enrolled in a biomedical engineering program. This appeared especially important for students interested in and using medical images. These students typically did not have a strong background in biology. The authors arranged for students to dissect regions of the body that…
The biomedical research Panel believes that the Calutron facility at Oak Ridge is a national and international resource of immense scientific value and of fundamental importance to continued biomedical research. This resource is essential to the development of new isotope uses in biology and medicine. It should therefore be nurtured by adequate support and operated in a way that optimizes its services to the scientific and technological community. The Panel sees a continuing need for a reliable supply of a wide variety of enriched stable isotopes. The past and present utilization of stable isotopes in biomedical research is documented in Appendix 7. Future requirements for stable isotopes are impossible to document, however, because of the unpredictability of research itself. Nonetheless we expect the demand for isotopes to increase in parallel with the continuing expansion of biomedical research as a whole. There are a number of promising research projects at the present time, and these are expected to lead to an increase in production requirements. The Panel also believes that a high degree of priority should be given to replacing the supplies of the 65 isotopes (out of the 224 previously available enriched isotopes) no longer available from ORNL
Masuya, Hiroshi; Makita, Yuko; Kobayashi, Norio; Nishikata, Koro; Yoshida, Yuko; Mochizuki, Yoshiki; Doi, Koji; Takatsuki, Terue; Waki, Kazunori; Tanaka, Nobuhiko; Ishii, Manabu; Matsushima, Akihiro; Takahashi, Satoshi; Hijikata, Atsushi; Kozaki, Kouji; Furuichi, Teiichi; Kawaji, Hideya; Wakana, Shigeharu; Nakamura, Yukio; Yoshiki, Atsushi; Murata, Takehide; Fukami-Kobayashi, Kaoru; Mohan, Sujatha; Ohara, Osamu; Hayashizaki, Yoshihide; Mizoguchi, Riichiro; Obata, Yuichi; Toyoda, Tetsuro
The RIKEN integrated database of mammals (http://scinets.org/db/mammal) is the official undertaking to integrate its mammalian databases produced from multiple large-scale programs that have been promoted by the institute. The database integrates not only RIKEN’s original databases, such as FANTOM, the ENU mutagenesis program, the RIKEN Cerebellar Development Transcriptome Database and the Bioresource Database, but also imported data from public databases, such as Ensembl, MGI and biomedical ontologies. Our integrated database has been implemented on the infrastructure of publication medium for databases, termed SciNetS/SciNeS, or the Scientists’ Networking System, where the data and metadata are structured as a semantic web and are downloadable in various standardized formats. The top-level ontology-based implementation of mammal-related data directly integrates the representative knowledge and individual data records in existing databases to ensure advanced cross-database searches and reduced unevenness of the data management operations. Through the development of this database, we propose a novel methodology for the development of standardized comprehensive management of heterogeneous data sets in multiple databases to improve the sustainability, accessibility, utility and publicity of the data of biomedical information. PMID:21076152
Clinical trial results have been traditionally communicated through the publication of scholarly reports and reviews in biomedical journals. However, this dissemination of information can be delayed or incomplete, making it difficult to appraise new treatments, or in the case of missing data, evaluate older interventions. Going beyond the routine search of PubMed, it is possible to discover additional information in the "grey literature." Examples of the grey literature include clinical trial registries, patent databases, company and industrywide repositories, regulatory agency digital archives, abstracts of paper and poster presentations on meeting/congress websites, industry investor reports and press releases, and institutional and personal websites.
Zhao, Ping; Xu, Ping; Li, Bingyan; Wang, Zhengrong
This investigation was conducted to reveal the current status, research trends and research level of biomedical engineering in mainland China by means of scientometrics, and to assess the quality of four domestic publications by bibliometrics. We identified all articles of the four related publications by searching Chinese and foreign databases from 1997 to 2001. All articles collected or cited by these databases were searched and statistically analyzed to determine the relevant distributions, including databases, years, authors, institutions, subject headings and subheadings. The sources of supporting funds and the related articles were analyzed too. The results showed that two journals were cited by two foreign databases and five Chinese databases simultaneously. The output of the Journal of Biomedical Engineering was the highest. Its quantity of original papers cited by EI and CA and the total number of papers sponsored by funds were higher than those of the others, but the quantity and yearly percentage of biomedical articles cited by EI decreased overall. Core inland authors and institutions had come into being in the field of biomedical engineering. Their research topics were mainly concentrated on ten subject headings, which included biocompatible materials, computer-assisted signal processing, electrocardiography, computer-assisted image processing, biomechanics, algorithms, electroencephalography, automatic data processing, mechanical stress, hemodynamics, mathematical computing, microcomputers, theoretical models, etc. The main subheadings were concentrated on instrumentation, physiopathology, diagnosis, therapy, ultrasonography, physiology, analysis, surgery, pathology, method, etc.
National Oceanic and Atmospheric Administration, Department of Commerce — This excel spreadsheet is the result of merging at the port level of several of the in-house fisheries databases in combination with other demographic databases such...
Biofuel Database (Web, free access) This database brings together structural, biological, and thermodynamic data for enzymes that are either in current use or are being considered for use in the production of biofuels.
Biomedical Science Technologists in Lagos Universities: Meeting Modern Standards in Biomedical Research. ... biomedical techniques. State-of-the-art biomedical science needs adequate financial investment in scientific resources as well as stable civic infrastructure; thus these public institutions need more of such provisions.
The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…
Jácome, Alberto G; Fdez-Riverola, Florentino; Lourenço, Anália
Seoane, Jose A; Aguiar-Pulido, Vanessa; Munteanu, Cristian R; Rivero, Daniel; Rabunal, Juan R; Dorado, Julian; Pazos, Alejandro
In recent years, in the post-genomic era, more and more data are being generated by biological high-throughput technologies, such as proteomics and transcriptomics. This omics data can be very useful, but the real challenge is to analyze all of this data as a whole, after integrating it. Biomedical data integration enables making queries to different, heterogeneous and distributed biomedical data sources. Data integration solutions can be very useful not only in the context of drug design, but also in biomedical information retrieval, clinical diagnosis, systems biology, etc. In this review, we analyze the most common approaches to biomedical data integration, such as federated databases, data warehousing, multi-agent systems and semantic technology, as well as the solutions developed using these approaches in the past few years.
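The federated-database approach named in this abstract can be illustrated with a minimal sketch: a mediator dispatches one query to several heterogeneous sources and merges the answers. The source names, gene symbol and record shapes below are invented for illustration and are not taken from the review.

```python
# Hedged sketch of a federated query: each "source" answers independently,
# and the mediator merges results. Data and names are illustrative only.

def query_proteomics(gene):
    """Stand-in for a remote proteomics source."""
    data = {"TP53": ["protein P04637"]}
    return data.get(gene, [])

def query_transcriptomics(gene):
    """Stand-in for a remote transcriptomics source."""
    data = {"TP53": ["transcript ENST00000269305"]}
    return data.get(gene, [])

def federated_query(gene, sources):
    """Dispatch one query to every source and merge the answers."""
    merged = []
    for source in sources:
        merged.extend(source(gene))
    return merged

print(federated_query("TP53", [query_proteomics, query_transcriptomics]))
# -> ['protein P04637', 'transcript ENST00000269305']
```

A real federation layer would also translate between source schemas and handle sources being unavailable; this sketch shows only the dispatch-and-merge core.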
Bronzino, Joseph D
Known as the bible of biomedical engineering, The Biomedical Engineering Handbook, Fourth Edition, sets the standard against which all other references of this nature are measured. As such, it has served as a major resource for both skilled professionals and novices to biomedical engineering. Biomedical Signals, Imaging, and Informatics, the third volume of the handbook, presents material from respected scientists with diverse backgrounds in biosignal processing, medical imaging, infrared imaging, and medical informatics. More than three dozen specific topics are examined, including biomedical s
Friedman, C P; Wildemuth, B M; Muriuki, M; Gant, S P; Downs, S M; Twarog, R G; de Bliek, R
This study explored which of two modes of access to a biomedical database better supported problem solving in bacteriology. Boolean access, which allowed subjects to frame their queries as combinations of keywords, was compared to hypertext access, which allowed subjects to navigate from one database node to another. The accessible biomedical data were identical across systems. Data were collected from 42 first year medical students, each randomized to the Boolean or hypertext system, before and after their bacteriology course. Subjects worked eight clinical case problems, first using only their personal knowledge and, subsequently, with aid from the database. Database retrievals enabled students to answer questions they could not answer based on personal knowledge only. This effect was greater when personal knowledge of bacteriology was lower. The results also suggest that hypertext was superior to Boolean access in helping subjects identify possible infectious agents in these clinical case problems.
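The two access modes compared in this study can be sketched in a few lines: a Boolean query combines keywords with AND, while hypertext access follows links from one database node to another. The toy records, node names and keywords below are invented; the actual bacteriology database was far richer.

```python
# Illustrative contrast (not the study's system) between Boolean keyword
# access and hypertext (link-following) access to the same records.

records = {
    "node1": {"text": "gram-negative rod causing urinary tract infection",
              "links": ["node2"]},
    "node2": {"text": "treatment of urinary infection with antibiotics",
              "links": ["node1"]},
    "node3": {"text": "gram-positive cocci in respiratory samples",
              "links": []},
}

def boolean_and(*keywords):
    """Boolean access: ids of records whose text contains every keyword."""
    return sorted(node for node, rec in records.items()
                  if all(kw in rec["text"] for kw in keywords))

def hypertext_neighbors(node):
    """Hypertext access: records reachable by following links from a node."""
    return [records[n]["text"] for n in records[node]["links"]]

print(boolean_and("gram-negative", "urinary"))  # -> ['node1']
print(hypertext_neighbors("node1"))
```

The study's finding, roughly, is that navigating `hypertext_neighbors`-style links helped students surface candidate agents they would not have thought to type into a `boolean_and`-style query.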
Marshall, Joanne Gard
Recent trends in the marketing of electronic information technology have increased interest among health professionals in obtaining direct access to online biomedical databases such as Medline. During 1985, the Canadian Medical Association (CMA) and Telecom Canada conducted an eight-month trial of the use made of online information retrieval systems by 23 practising physicians and one pharmacist. The results of this project demonstrated both the value and the limitations of these systems in p...
Tuchin, Valery V; Zimnyakov, Dmitry A
Optical Polarization in Biomedical Applications introduces key developments in optical polarization methods for quantitative studies of tissues, while presenting the theory of polarization transfer in a random medium as a basis for the quantitative description of polarized light interaction with tissues. This theory uses the modified transfer equation for Stokes parameters and predicts the polarization structure of multiple scattered optical fields. The backscattering polarization matrices (Jones matrix and Mueller matrix) important for noninvasive medical diagnostics are introduced. The text also describes a number of diagnostic techniques such as CW polarization imaging and spectroscopy, polarization microscopy and cytometry. As a new tool for medical diagnosis, optical coherent polarization tomography is analyzed. The monograph also covers a range of biomedical applications, among them cataract and glaucoma diagnostics, glucose sensing, and the detection of bacteria.
Shape memory polymers (SMPs) are a class of functional "smart" materials that have shown bright prospects in the area of biomedical applications. Novel smart materials that are both biodegradable and biocompatible can be designed based on their general principles, composition and structure. In this review, the latest progress on three typical biodegradable SMPs (poly(lactic acid), poly(ε-caprolactone) and polyurethane) is summarized. These three SMPs are classified by structure and discussed; their shape-memory mechanism, recovery and fixity rates, and response speed are analysed in detail, and some biomedical applications are presented. Finally, future developments and applications of SMPs are discussed: two-way SMPs and body-temperature-induced SMPs will be the focus of attention of researchers.
Shen, He; Zhang, Liming; Liu, Min; Zhang, Zhijun
Graphene exhibits a unique 2-D structure and exceptional physical and chemical properties that lead to many potential applications. Among these, biomedical applications of graphene have attracted ever-increasing interest over the last three years. In this review, we present an overview of current advances in applications of graphene in biomedicine, focusing on drug delivery, cancer therapy and biological imaging, together with a brief discussion of the challenges and perspectives for future research in this field. PMID:22448195
Daumke, Philipp; Markó, Kornél; Poprat, Michael; Schulz, Stefan
We present a unique technique to create a multilingual biomedical dictionary, based on a methodology called morpho-semantic indexing. Our approach closes a gap caused by the absence of freely available multilingual medical dictionaries and the lack of accuracy of non-medical electronic translation tools. We first explain the underlying technology, followed by a description of the dictionary interface, which makes use of a multilingual subword thesaurus and of statistical information from a domain-specific, multilingual corpus.
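The morpho-semantic indexing idea can be sketched as greedy segmentation of a term into medically meaningful subwords that map to language-independent identifiers, so that equivalent terms in different languages index to the same concepts. The tiny English/German lexicon and identifier names below are invented for illustration, not the paper's actual thesaurus.

```python
# Hedged sketch of morpho-semantic (subword) indexing: segment a term into
# known subwords, emit their language-independent ids. Lexicon is invented.

subword_ids = {
    "gastr": "#stomach", "itis": "#inflammation",       # English subwords
    "magen": "#stomach", "entzündung": "#inflammation",  # German subwords
}

def index(term):
    """Greedily segment a term into known subwords and return their ids."""
    ids, i = [], 0
    while i < len(term):
        for sw in sorted(subword_ids, key=len, reverse=True):
            if term.startswith(sw, i):   # prefer the longest match at i
                ids.append(subword_ids[sw])
                i += len(sw)
                break
        else:
            i += 1  # skip characters not covered by the lexicon
    return ids

# English "gastritis" and German "magenentzündung" index to the same ids:
print(index("gastritis"))        # -> ['#stomach', '#inflammation']
print(index("magenentzündung"))  # -> ['#stomach', '#inflammation']
```

Matching on shared identifier sequences rather than surface strings is what lets such a dictionary translate between languages it was never explicitly aligned on.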
It has been well documented that reaching beyond MEDLINE into a diversity of databases enhances search results, but a chronic question in comprehensive and systematic searching is how far, and where, to search. When published in business or economics sources, articles focusing on cost outcomes of health and health policy interventions may not be indexed in the biomedical databases that are traditionally consulted for clinical systematic reviews. This dual case study explores and documents ...
Yoo, Illhoi; Alafaireet, Patricia; Marinov, Miroslav; Pena-Hernandez, Keila; Gopidi, Rajitha; Chang, Jia-Fu; Hua, Lei
As a new concept that emerged in the mid-1990s, data mining can help researchers gain both novel and deep insights and can facilitate unprecedented understanding of large biomedical datasets. Data mining can uncover new biomedical and healthcare knowledge for clinical and administrative decision making, as well as generate scientific hypotheses from large experimental data, clinical databases, and/or biomedical literature. This review first introduces data mining in general (e.g., the background, definition, and process of data mining), discusses the major differences between statistics and data mining, and then speaks to the uniqueness of data mining in the biomedical and healthcare fields. A brief summary of the various data mining algorithms used for classification, clustering, and association, as well as their respective advantages and drawbacks, is also presented. Suggested guidelines on how to use data mining algorithms in each area of classification, clustering, and association are offered, along with three examples of how data mining has been used in the healthcare industry. Given the successful application of data mining by health-related organizations, which has helped to predict health insurance fraud and under-diagnosed patients, and to identify and classify at-risk people in terms of health with the goal of reducing healthcare costs, we introduce how data mining technologies (in each area of classification, clustering, and association) have been used for a multitude of purposes, including research in the biomedical and healthcare fields. A discussion of the technologies available to enable the prediction of healthcare costs (including length of hospital stay), disease diagnosis and prognosis, and the discovery of hidden biomedical and healthcare patterns from related databases is offered, along with a discussion of the use of data mining to discover such relationships as those between health conditions and a disease, relationships among diseases, and
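As a minimal illustration of the "association" task this review names, the support and confidence of an association rule can be computed over a handful of transactions. The toy condition sets below are invented, not drawn from any real clinical database.

```python
# Illustrative support/confidence computation for an association rule
# (the "association" task in data mining). Transactions are invented.

transactions = [
    {"diabetes", "hypertension"},
    {"diabetes", "hypertension", "obesity"},
    {"hypertension"},
    {"diabetes", "obesity"},
]

def support(itemset):
    """Fraction of transactions containing every item in itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """Confidence of the rule antecedent -> consequent."""
    return support(antecedent | consequent) / support(antecedent)

print(support({"diabetes"}))                       # -> 0.75
print(confidence({"diabetes"}, {"hypertension"}))  # approximately 2/3
```

Real association-rule miners (e.g. Apriori-style algorithms) search the space of itemsets efficiently; the two measures above are the criteria they optimize.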
Marius Cristian MAZILU
For someone who has worked in an environment in which the same database is used for data entry and reporting, or who has managed a single database server that was utilized by too many users, the advantages brought by data replication are clear. The main purpose of this paper is to emphasize those advantages, as well as to present the different types of database replication and the cases in which their use is recommended.
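The data-entry/reporting split that motivates this paper can be sketched with a toy primary/replica pair: writes go to the primary only, reporting reads go to a replica, and a replication step propagates changes. The class names and rows are illustrative and do not model any specific replication product.

```python
# Minimal sketch of one-way (primary -> replica) database replication.
# Names and data are invented for illustration.

class Primary:
    def __init__(self):
        self.rows = []
        self.replicas = []

    def write(self, row):
        self.rows.append(row)          # data entry hits the primary only

    def replicate(self):
        for replica in self.replicas:  # push current state to each replica
            replica.rows = list(self.rows)

class Replica:
    def __init__(self, primary):
        self.rows = []
        primary.replicas.append(self)

    def read(self):
        return list(self.rows)         # reporting queries hit the replica

primary = Primary()
reporting = Replica(primary)
primary.write({"id": 1, "amount": 10})
print(reporting.read())  # -> [] (stale until replication runs)
primary.replicate()
print(reporting.read())  # -> [{'id': 1, 'amount': 10}]
```

The stale read before `replicate()` runs is exactly the asynchronous-replication trade-off the paper's taxonomy of replication types has to weigh against the offloaded reporting workload.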
Negi, Simarjeet; Pandey, Sanjit; Srinivasan, Satish M.; Mohammed, Akram; Guda, Chittibabu
LocSigDB (http://genome.unmc.edu/LocSigDB/) is a manually curated database of experimental protein localization signals for eight distinct subcellular locations; primarily in a eukaryotic cell with brief coverage of bacterial proteins. Proteins must be localized at their appropriate subcellular compartment to perform their desired function. Mislocalization of proteins to unintended locations is a causative factor for many human diseases; therefore, collection of known sorting signals will help support many important areas of biomedical research. By performing an extensive literature study, we compiled a collection of 533 experimentally determined localization signals, along with the proteins that harbor such signals. Each signal in the LocSigDB is annotated with its localization, source, PubMed references and is linked to the proteins in UniProt database along with the organism information that contain the same amino acid pattern as the given signal. From LocSigDB webserver, users can download the whole database or browse/search for data using an intuitive query interface. To date, LocSigDB is the most comprehensive compendium of protein localization signals for eight distinct subcellular locations. Database URL: http://genome.unmc.edu/LocSigDB/ PMID:25725059
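The signal-to-protein linking that LocSigDB performs can be sketched as pattern matching of a localization signal against protein sequences. The simplified PTS1-like pattern and the protein sequences below are invented examples for illustration, not content from the database.

```python
# Hedged sketch of matching a localization-signal pattern against protein
# sequences. The pattern is a simplified form of the C-terminal peroxisomal
# targeting signal (canonically ending in SKL); sequences are invented.
import re

pts1 = re.compile(r"[SA][KR][LM]$")  # simplified PTS1-like C-terminal motif

proteins = {
    "protA": "MDEQSTSKL",   # ends in SKL -> carries the signal
    "protB": "MKKLLAAGTR",  # no matching C-terminus
}

matches = [name for name, seq in proteins.items() if pts1.search(seq)]
print(matches)  # -> ['protA']
```

Scaled up over UniProt-sized sequence sets, this kind of anchored motif search is what links each curated signal to the proteins that harbor it.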
Huan, L N; Tejani, A M; Egan, G
An increasing amount of recently published literature has implicated outcome reporting bias (ORB) as a major contributor to skewed data in both randomized controlled trials and systematic reviews; however, little is known about the current methods in place to detect ORB. This study aims to gain insight into the detection and management of ORB by biomedical journals. This was a cross-sectional analysis involving standardized questions, via email or telephone, with the top 30 biomedical journals (2012) ranked by impact factor. The Cochrane Database of Systematic Reviews was excluded, leaving 29 journals in the sample. Of the 29 journals, 24 (83%) responded to our initial inquiry, of which 14 (58%) answered our questions and 10 (42%) declined participation. Five (36%) of the responding journals indicated they had a specific method to detect ORB, whereas 9 (64%) did not have a specific method in place. The prevalence of ORB in the review process seemed to differ: 4 (29%) journals indicated ORB was found commonly, whereas 7 (50%) indicated ORB was uncommon or had never been detected by their journal previously. The majority (n = 10/14, 72%) of journals were unwilling to report or make discrepancies found in manuscripts available to the public. Although in the minority, there were some journals (n = 4/14, 29%) that described thorough methods to detect ORB. Many journals seemed to lack a method with which to detect ORB, and its estimated prevalence was much lower than that reported in the literature, suggesting inadequate detection. There exists a potential for overestimation of treatment effects of interventions and unclear risks. Fortunately, there are journals within this sample that appear to utilize comprehensive methods for the detection of ORB, but overall, the data suggest improvements at the biomedical journal level for detecting and minimizing the effect of this bias are needed. © 2014 John Wiley & Sons Ltd.
Rashid, Mamoon; Singla, Deepak; Sharma, Arun; Kumar, Manish; Raghava, Gajendra PS
Background: Hormones are signaling molecules that play vital roles in various life processes, like growth and differentiation, physiology, and reproduction. These molecules are mostly secreted by endocrine glands, and transported to target organs through the bloodstream. Deficient, or excessive, levels of hormones are associated with several diseases such as cancer, osteoporosis, diabetes etc. Thus, it is important to collect and compile information about hormones and their receptors. Description: This manuscript describes a database called Hmrbase which has been developed for managing information about hormones and their receptors. It is a highly curated database for which information has been collected from the literature and the public databases. The current version of Hmrbase contains comprehensive information about ~2000 hormones, e.g., about their function, source organism, receptors, mature sequences, structures etc. Hmrbase also contains information about ~3000 hormone receptors, in terms of amino acid sequences, subcellular localizations, ligands, and post-translational modifications etc. One of the major features of this database is that it provides data about ~4100 hormone-receptor pairs. A number of online tools have been integrated into the database, to provide facilities like keyword search, structure-based search, mapping of a given peptide(s) on the hormone/receptor sequence, and sequence similarity search. This database also provides a number of external links to other resources/databases in order to help in the retrieving of further related information. Conclusion: Owing to the high impact of endocrine research in the biomedical sciences, the Hmrbase could become a leading data portal for researchers. The salient features of Hmrbase are hormone-receptor pair-related information, mapping of peptide stretches on the protein sequences of hormones and receptors, Pfam domain annotations, categorical browsing options, online data submission, Drug
Duchange, Nathalie; Autard, Delphine; Pinhas, Nicole
Open access within the scientific community depends on the scientific context and the practices of the field. In the biomedical domain, the communication of research results is characterised by the importance of the peer reviewing process, the existence of a hierarchy among journals and the transfer of copyright to the editor. Biomedical publishing has become a lucrative market and the growth of electronic journals has not helped lower the costs. Indeed, it is difficult for today's public institutions to gain access to all the scientific literature. Open access is thus imperative, as demonstrated through the positions taken by a growing number of research funding bodies, the development of open access journals and efforts made in promoting open archives. This article describes the setting up of an Inserm portal for publication in the context of the French national protocol for open-access self-archiving and in an international context.
Glonti, Ketevan; Cauchi, Daniel; Cobo, Erik; Boutron, Isabelle; Moher, David; Hren, Darko
The primary functions of peer reviewers are poorly defined. Thus far no body of literature has systematically identified the roles and tasks of peer reviewers of biomedical journals. A clear establishment of these can lead to improvements in the peer review process. The purpose of this scoping review is to determine what is known on the roles and tasks of peer reviewers. We will use the methodological framework first proposed by Arksey and O'Malley and subsequently adapted by Levac et al and the Joanna Briggs Institute. The scoping review will include all study designs, as well as editorials, commentaries and grey literature. The following eight electronic databases will be searched (from inception to May 2017): Cochrane Library, Cumulative Index to Nursing and Allied Health Literature, Educational Resources Information Center, EMBASE, MEDLINE, PsycINFO, Scopus and Web of Science. Two reviewers will use inclusion and exclusion criteria based on the 'Population-Concept-Context' framework to independently screen titles and abstracts of articles considered for inclusion. Full-text screening of relevant eligible articles will also be carried out by two reviewers. The search strategy for grey literature will include searching in websites of existing networks, biomedical journal publishers and organisations that offer resources for peer reviewers. In addition we will review journal guidelines to peer reviewers on how to perform the manuscript review. Journals will be selected using the 2016 journal impact factor. We will identify and assess the top five, middle five and lowest-ranking five journals across all medical specialties. This scoping review will undertake a secondary analysis of data already collected and does not require ethical approval. The results will be disseminated through journals and conferences targeting stakeholders involved in peer review in biomedical research.
Welch, M.J.; Welles, B.W.
Accident statistics on all modes of transportation are available as risk assessment analytical tools through several federal agencies. This paper reports on an examination of the accident databases through personal contact with the federal staff responsible for administration of the database programs. This activity, sponsored by the Department of Energy through Sandia National Laboratories, is an overview of the national accident data on highway, rail, air, and marine shipping. For each mode, the definition or reporting requirements of an accident are determined and the method of entering the accident data into the database is established. Availability of each database to others, ease of access, costs, and whom to contact were prime questions put to each of the database program managers. Additionally, how each agency uses the accident data was of major interest.
Elkin, P L; Brown, S H; Wright, G
This article is part of a For-Discussion-Section of Methods of Information in Medicine on "Biomedical Informatics: We are what we publish". It is introduced by an editorial and followed by a commentary paper with invited comments. In subsequent issues the discussion may continue through letters to the editor. Informatics experts have attempted to define the field via consensus projects, which have led to consensus statements by both AMIA and IMIA. We add to the output of this process the results of a study of the PubMed publications with abstracts from the field of biomedical informatics. We took the terms from the AMIA consensus document and the terms from the IMIA definitions of the field of biomedical informatics and combined them through human review to create the Health Informatics Ontology. We built a terminology server using the Intelligent Natural Language Processor (iNLP). We then downloaded the entire set of articles in MEDLINE identified by searching the literature for "Medical Informatics" OR "Bioinformatics". The articles were parsed with the joint AMIA/IMIA terminology and then again using SNOMED CT; for bioinformatics they were also parsed using the HGNC Ontology. We identified 153,580 articles using "Medical Informatics" and 20,573 articles using "Bioinformatics". This resulted in 168,298 unique articles, with an overlap of 5,855 articles. Of these, 62,244 articles (37%) had titles and abstracts that contained at least one concept from the Health Informatics Ontology. SNOMED CT indexing showed that the field interacts with nearly all clinical fields of medicine. Further defining the field by what we publish can add value to the consensus-driven processes that have been the mainstay of the efforts to date. Next steps should be to extract terms from the literature that are uncovered and to create class hierarchies and relationships for this content. We should also examine frequently occurring MeSH terms as markers to define biomedical informatics.
Mora, Oscar; Bisbal, Jesús
In this paper, we present BIMS (Biomedical Information Management System). BIMS is a software architecture designed to provide a flexible computational framework to manage the information needs of a wide range of biomedical research projects. The main goal is to facilitate the clinicians' job in data entry, and researcher's tasks in data management, in high data quality biomedical research projects. The BIMS architecture has been designed following the two-level modeling paradigm, a promising...
Brown, J H U
Advances in Biomedical Engineering, Volume 2, is a collection of papers that discusses the basic sciences, the applied sciences of engineering, the medical sciences, and the delivery of health services. One paper discusses the models of adrenal cortical control, including the secretion and metabolism of cortisol (the controlled process), as well as the initiation and modulation of secretion of ACTH (the controller). Another paper discusses hospital computer systems-application problems, objective evaluation of technology, and multiple pathways for future hospital computer applications. The pos
Tranquillo, Joseph V
Biomedical Signals and Systems is meant to accompany a one-semester undergraduate signals and systems course. It may also serve as a quick-start for graduate students or faculty interested in how signals and systems techniques can be applied to living systems. The biological nature of the examples allows for systems thinking to be applied to electrical, mechanical, fluid, chemical, thermal and even optical systems. Each chapter focuses on a topic from classic signals and systems theory: System block diagrams, mathematical models, transforms, stability, feedback, system response, control, time
The discipline of biostatistics is nowadays a fundamental scientific component of biomedical, public health and health services research. Traditional and emerging areas of application include clinical trials research, observational studies, physiology, imaging, and genomics. The present article reviews the current situation of biostatistics, considering the statistical methods traditionally used in biomedical research as well as the ongoing development of new methods in response to the new problems arising in medicine. Clearly, the successful application of statistics in biomedical research requires appropriate training of biostatisticians. This training should give due consideration to emerging new areas of statistics, while at the same time retaining full coverage of the fundamentals of statistical theory and methodology. In addition, it is important that students of biostatistics receive formal training in relevant biomedical disciplines, such as epidemiology, clinical trials, molecular biology, genetics, and neuroscience.
1. Biomedical Photonics: A Revolution at the Interface of Science and Technology, T. Vo-Dinh. PHOTONICS AND TISSUE OPTICS: 2. Optical Properties of Tissues, J. Mobley and T. Vo-Dinh. 3. Light-Tissue Interactions, V.V. Tuchin. 4. Theoretical Models and Algorithms in Optical Diffusion Tomography, S.J. Norton and T. Vo-Dinh. PHOTONIC DEVICES: 5. Laser Light in Biomedicine and the Life Sciences: From the Present to the Future, V.S. Letokhov. 6. Basic Instrumentation in Photonics, T. Vo-Dinh. 7. Optical Fibers and Waveguides for Medical Applications, I. Gannot and
Evans, E.A.; Oldham, K.G.
This volume describes the role of radiochemicals in biomedical research, as tracers in the development of new drugs, their interaction and function with receptor proteins, with the kinetics of binding of hormone - receptor interactions, and their use in cancer research and clinical oncology. The book also aims to identify future trends in this research, the main objective of which is to provide information leading to improvements in the quality of life, and to give readers a basic understanding of the development of new drugs, how they function in relation to receptor proteins and lead to a better understanding of the diagnosis and treatment of cancers. (author)
Hoffman, Sharona; Podgurski, Andy
The accelerating adoption of electronic health record (EHR) systems will have far-reaching implications for public health research and surveillance, which in turn could lead to changes in public policy, statutes, and regulations. The public health benefits of EHR use can be significant. However, researchers and analysts who rely on EHR data must proceed with caution and understand the potential limitations of EHRs. Because of clinicians' workloads, poor user-interface design, and other factors, EHR data can be erroneous, miscoded, fragmented, and incomplete. In addition, public health findings can be tainted by the problems of selection bias, confounding bias, and measurement bias. These flaws may become all the more troubling and important in an era of electronic "big data," in which a massive amount of information is processed automatically, without human checks. Thus, we conclude the paper by outlining several regulatory and other interventions to address data analysis difficulties that could result in invalid conclusions and unsound public health policies. © 2013 American Society of Law, Medicine & Ethics, Inc.
Kiela, Douwe; Guo, Yufan; Stenius, Ulla; Korhonen, Anna
Information structure (IS) analysis is a text mining technique, which classifies text in biomedical articles into categories that capture different types of information, such as objectives, methods, results and conclusions of research. It is a highly useful technique that can support a range of Biomedical Text Mining tasks and can help readers of biomedical literature find information of interest faster, accelerating the highly time-consuming process of literature review. Several approaches to IS analysis have been presented in the past, with promising results in real-world biomedical tasks. However, all existing approaches, even weakly supervised ones, require several hundreds of hand-annotated training sentences specific to the domain in question. Because biomedicine is subject to considerable domain variation, such annotations are expensive to obtain. This makes the application of IS analysis across biomedical domains difficult. In this article, we investigate an unsupervised approach to IS analysis and evaluate the performance of several unsupervised methods on a large corpus of biomedical abstracts collected from PubMed. Our best unsupervised algorithm (multilevel-weighted graph clustering algorithm) performs very well on the task, obtaining over 0.70 F scores for most IS categories when applied to well-known IS schemes. This level of performance is close to that of lightly supervised IS methods and has proven sufficient to aid a range of practical tasks. Thus, using an unsupervised approach, IS could be applied to support a wide range of tasks across sub-domains of biomedicine. We also demonstrate that unsupervised learning brings novel insights into IS of biomedical literature and discovers information categories that are not present in any of the existing IS schemes. The annotated corpus and software are available at http://www.cl.cam.ac.uk/~dk427/bio14info.html.
Berlanga, Rafael; Jiménez-Ruiz, Ernesto; Nebot, Victoria
The semantic integration of biomedical resources is still a challenging issue which is required for effective information processing and data analysis. The availability of comprehensive knowledge resources such as biomedical ontologies and integrated thesauri greatly facilitates this integration effort by means of semantic annotation, which allows disparate data formats and contents to be expressed under a common semantic space. In this paper, we propose a multidimensional representation for such a semantic space, where dimensions regard the different perspectives in biomedical research (e.g., population, disease, anatomy and protein/genes). This paper presents a novel method for building multidimensional semantic spaces from semantically annotated biomedical data collections. This method consists of two main processes: knowledge and data normalization. The former one arranges the concepts provided by a reference knowledge resource (e.g., biomedical ontologies and thesauri) into a set of hierarchical dimensions for analysis purposes. The latter one reduces the annotation set associated to each collection item into a set of points of the multidimensional space. Additionally, we have developed a visual tool, called 3D-Browser, which implements OLAP-like operators over the generated multidimensional space. The method and the tool have been tested and evaluated in the context of the Health-e-Child (HeC) project. Automatic semantic annotation was applied to tag three collections of abstracts taken from PubMed, one for each target disease of the project, the Uniprot database, and the HeC patient record database. We adopted the UMLS Meta-thesaurus 2010AA as the reference knowledge resource. Current knowledge resources and semantic-aware technology make possible the integration of biomedical resources. Such an integration is performed through semantic annotation of the intended biomedical data resources. This paper shows how these annotations can be exploited for
Database replication is widely used for fault tolerance, scalability and performance. The failure of one database replica does not stop the system from working, as the available replicas can take over the tasks of the failed replica. Scalability can be achieved by distributing the load across all replicas, and adding new replicas should the load increase. Finally, database replication can provide fast local access, even for geographically distributed clients, if data copies are located close to them. Despite its advantages, replication is not a straightforward technique to apply, and
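The failover behavior described above can be sketched in a few lines. The sketch below is illustrative only (the `Replica` and `ReplicatedClient` names are invented for the example, not from any particular replication system): reads rotate round-robin across replicas, and a failed replica's turn falls through to the next live copy.

```python
import itertools

class Replica:
    """A stand-in for one database copy; a real client would hold a connection."""
    def __init__(self, name):
        self.name = name
        self.alive = True
        self.reads_served = 0

    def query(self, sql):
        if not self.alive:
            raise ConnectionError(f"{self.name} is down")
        self.reads_served += 1
        return f"{self.name}: result of {sql!r}"

class ReplicatedClient:
    """Round-robin read distribution with failover to the next live replica."""
    def __init__(self, replicas):
        self.replicas = replicas
        self._cycle = itertools.cycle(replicas)

    def read(self, sql):
        # Try each replica at most once, starting from the next one in rotation.
        for _ in range(len(self.replicas)):
            replica = next(self._cycle)
            try:
                return replica.query(sql)
            except ConnectionError:
                continue  # failed replica: its work is taken over by the next one
        raise RuntimeError("all replicas are down")

replicas = [Replica("r1"), Replica("r2"), Replica("r3")]
client = ReplicatedClient(replicas)
client.read("SELECT 1")           # served by r1
replicas[1].alive = False         # r2 fails...
result = client.read("SELECT 1")  # ...so its turn falls through to r3
```

Real systems must additionally keep the replicas consistent under writes, which is exactly the non-trivial part the abstract alludes to.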
Ambler, Scott W
Refactoring has proven its value in a wide range of development projects–helping software professionals improve system designs, maintainability, extensibility, and performance. Now, for the first time, leading agile methodologist Scott Ambler and renowned consultant Pramodkumar Sadalage introduce powerful refactoring techniques specifically designed for database systems. Ambler and Sadalage demonstrate how small changes to table structures, data, stored procedures, and triggers can significantly enhance virtually any database design–without changing semantics. You’ll learn how to evolve database schemas in step with source code–and become far more effective in projects relying on iterative, agile methodologies. This comprehensive guide and reference helps you overcome the practical obstacles to refactoring real-world databases by covering every fundamental concept underlying database refactoring. Using start-to-finish examples, the authors walk you through refactoring simple standalone databas...
National Oceanic and Atmospheric Administration, Department of Commerce — The dealer reporting databases contain the primary data reported by federally permitted seafood dealers in the northeast. Electronic reporting was implemented May 1,...
National Oceanic and Atmospheric Administration, Department of Commerce — This database was established to oversee documents issued in support of fishery research activities including experimental fishing permits (EFP), letters of...
National Oceanic and Atmospheric Administration, Department of Commerce — The Snowstorm Database is a collection of over 500 snowstorms dating back to 1900 and updated operationally. Only storms having large areas of heavy snowfall (10-20...
SRD 84 FIZ/NIST Inorganic Crystal Structure Database (ICSD) (PC database for purchase) The Inorganic Crystal Structure Database (ICSD) is produced cooperatively by the Fachinformationszentrum Karlsruhe(FIZ) and the National Institute of Standards and Technology (NIST). The ICSD is a comprehensive collection of crystal structure data of inorganic compounds containing more than 140,000 entries and covering the literature from 1915 to the present.
Rios, Anthony; Kavuluru, Ramakanth
Building high accuracy text classifiers is an important task in biomedicine given the wealth of information hidden in unstructured narratives such as research articles and clinical documents. Due to large feature spaces, traditionally, discriminative approaches such as logistic regression and support vector machines with n-gram and semantic features (e.g., named entities) have been used for text classification, where additional performance gains are typically made through feature selection and ensemble approaches. In this paper, we demonstrate that a more direct approach using convolutional neural networks (CNNs) outperforms several traditional approaches in biomedical text classification, with the specific use-case of assigning medical subject headings (or MeSH terms) to biomedical articles. Trained annotators at the National Library of Medicine (NLM) assign an average of 13 codes to each biomedical article, thus semantically indexing scientific literature to support NLM's PubMed search system. Recent evidence suggests that effective automated efforts for MeSH term assignment start with binary classifiers for each term. In this paper, we use CNNs to build binary text classifiers and achieve an absolute improvement of over 3% in macro F-score over a set of selected hard-to-classify MeSH terms when compared with the best prior results on a public dataset. Additional experiments on 50 high frequency terms in the dataset also show improvements with CNNs. Our results indicate the strong potential of CNNs in biomedical text classification tasks.
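As a rough illustration of the kind of model described here, the sketch below runs the forward pass of a tiny one-dimensional CNN binary text classifier in pure Python: embed tokens, slide filters over adjacent token windows, max-pool over time, and squash the score through a sigmoid. All weights are random and untrained, and the vocabulary is invented, so this only shows the architecture, not the paper's method or results.

```python
import math
import random

random.seed(0)
vocab = {"gene": 0, "expression": 1, "cancer": 2, "cell": 3}
EMB, FILTERS, WIDTH = 6, 3, 2  # embedding size, filter count, window width

def rand_matrix(rows, cols):
    return [[random.gauss(0, 0.5) for _ in range(cols)] for _ in range(rows)]

E = rand_matrix(len(vocab), EMB)                      # word embeddings
F = [rand_matrix(WIDTH, EMB) for _ in range(FILTERS)]  # convolution filters
w = [random.gauss(0, 0.5) for _ in range(FILTERS)]    # logistic output weights

def predict(tokens):
    """P(MeSH term applies | tokens) for one binary classifier."""
    X = [E[vocab[t]] for t in tokens]                 # (seq_len, EMB)
    pooled = []
    for f in F:
        acts = []
        for i in range(len(X) - WIDTH + 1):           # slide filter over time
            s = sum(f[j][k] * X[i + j][k]
                    for j in range(WIDTH) for k in range(EMB))
            acts.append(max(s, 0.0))                  # ReLU
        pooled.append(max(acts))                      # max-over-time pooling
    z = sum(p * wi for p, wi in zip(pooled, w))
    return 1.0 / (1.0 + math.exp(-z))                 # sigmoid score

p = predict(["gene", "expression", "cancer", "cell"])
```

In the binary-classifier-per-term setup the abstract describes, one such model would be trained for each MeSH term.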
Kristensen, Helen Grundtvig; Stjernø, Henrik
Article about the national database for nursing research established at the Danish Institute for Health and Nursing Research (Dansk Institut for Sundheds- og Sygeplejeforskning). The aim of the database is to gather knowledge about research and development activities within nursing.
Barnard, K. D.; Lloyd, C. E.; Skinner, T. C.
Aim: To review systematically the published literature addressing whether continuous subcutaneous insulin infusion (CSII) provides any quality of life benefits to people with Type 1 diabetes. Methods: Electronic databases and published references were searched and a consultation with two professi...
Long, Francis M.
Discusses four methods of professional identification in biomedical engineering, including registration, certification, accreditation, and possible membership qualification by the societies. Indicates that the destiny of the biomedical engineer may come under the control of a new profession, neither medicine nor engineering. (CC)
The Egyptian Journal of Biomedical Sciences publishes in all aspects of biomedical research sciences. Both basic and clinical research papers are welcomed. Vol 23 (2007).
Introduction: The selection of a journal for publication is one of the concerns of biomedical researchers, who apply various criteria to choose an appropriate journal. Here, we have tried to collect the main criteria biomedical researchers use to select a journal to submit their work to. Methods: We collected these criteria through focus-group conversations with researchers during our careers, feedback from participants of our scientific writing workshops, and non-systematic reviewing of related literature. Results: We present summative and informative guidance on the selection of biomedical journals for paper submission and publication. Conclusion: The categorized criteria, arranged as a mnemonic tool, may help authors in the journal selection process.
Relevant biomedical research literature on human research participants, retrieved through computerized searches of Scirus, PubMed and MEDLINE, was critically evaluated and highlighted. Information was also obtained from research ethics training as well as texts and journals in the medical libraries of the research ethics departments of ...
Regrettably, the discussion is usually poorly written in most biomedical journals. The author's contribution is often widely scattered throughout the discussion. Sometimes it is overwhelmed by the cited literature. This weakens the message and the impact of the contribution. The approach suggested by this author is aimed at ...
Sarkar, Indra Neil
Biomedical informatics involves a core set of methodologies that can provide a foundation for crossing the "translational barriers" associated with translational medicine. To this end, the fundamental aspects of biomedical informatics (e.g., bioinformatics, imaging informatics, clinical informatics, and public health informatics) may be essential in helping improve the ability to bring basic research findings to the bedside, evaluate the efficacy of interventions across communities, and enable the assessment of the eventual impact of translational medicine innovations on health policies. Here, a brief description is provided for a selection of key biomedical informatics topics (Decision Support, Natural Language Processing, Standards, Information Retrieval, and Electronic Health Records) and their relevance to translational medicine. Based on contributions and advancements in each of these topic areas, the article proposes that biomedical informatics practitioners ("biomedical informaticians") can be essential members of translational medicine teams.
This book provides a comprehensive overview of state-of-the-art computational intelligence research and technologies in biomedical images, with emphasis on biomedical decision making. Biomedical imaging offers useful information on patients' medical conditions and clues to the causes of their symptoms and diseases. It also produces large numbers of images that physicians must interpret; computer aids are therefore in demand and are becoming indispensable in physicians' decision making. This book discusses major technical advancements and research findings in the field of computational intelligence in biomedical imaging, for example, computational intelligence in computer-aided diagnosis for breast cancer, prostate cancer, and brain disease, in lung function analysis, and in radiation therapy. The book examines technologies and studies that have reached the practical level, and those that are rapidly becoming available in clinical practice in hospitals, such as computational inte...
The prominence of biomedical criteria relying on brain death reduces the impact of the metaphysical, anthropological, psychosocial, cultural, religious, and legal aspects that disclose the real value and essence of human life. The aim of this literature review is to discuss metaphysical and biomedical approaches toward death and their complementary relationship in the determination of death. A critical appraisal of theoretical and scientific evidence and legal documents supported the analytical discourse. In the metaphysical discourse of death, two main questions, what human death is and how to determine the fact of death, clearly separate the ontological and epistemological aspects of death. During the 20th century, various understandings of human death distinguished two different approaches toward the human: the human as a subject of activities, or as a subject of the human being. Extinction of the difference between entities and being, emphasized as rational-logical instrumentation, is not sufficient to understand death thoroughly. Biological criteria of death are associated with biological features and the irreversible loss of certain cognitive capabilities. In debating the question "Does brain death mean the death of a human being?", two approaches are considered: the body-centrist and the mind-centrist. By bridging these two alternatives, human death appears not only as a biomedical but also as a metaphysical phenomenon. In summary, a predominance of clinical criteria in the determination of death in practice leads to the medicalization of death and limits the holistic perspective toward an individual's death. A balance of metaphysical and biomedical approaches toward death and its determination would therefore decrease the medicalization of the concept of death.
Ramos, Ana P; Cruz, Marcos A E; Tovani, Camila B; Ciancaglini, Pietro
The ability to investigate substances at the molecular level has boosted the search for materials with outstanding properties for use in medicine. The application of these novel materials has generated the new research field of nanobiotechnology, which plays a central role in disease diagnosis, drug design and delivery, and implants. In this review, we provide an overview of the use of metallic and metal oxide nanoparticles, carbon-nanotubes, liposomes, and nanopatterned flat surfaces for specific biomedical applications. The chemical and physical properties of the surface of these materials allow their use in diagnosis, biosensing and bioimaging devices, drug delivery systems, and bone substitute implants. The toxicology of these particles is also discussed in the light of a new field referred to as nanotoxicology that studies the surface effects emerging from nanostructured materials.
Kanter, S L; Miller, R A; Tan, M; Schwartz, J
Recognition of the biomedical concepts in a document is prerequisite to further processing of the document: medical educators examine curricular documents to discover the coverage of certain topics, detect unwanted redundancies, integrate new content, and delete old content; and clinicians are concerned with terms in patient medical records for purposes ranging from creation of an electronic medical record to identification of medical literature relevant to a particular case. POSTDOC (POSTprocessor of DOCuments) is a computer application that (1) accepts as input a free-text, ASCII-formatted document and uses the Unified Medical Language System (UMLS) Metathesaurus to recognize relevant main concept terms; (2) provides term co-occurrence data and thus is able to identify potentially increasing correlations among concepts within the document; and (3) retrieves references from MEDLINE files based on user identification of relevant subjects. This paper describes a formative evaluation of POSTDOC's ability to recognize UMLS Metathesaurus biomedical concepts in medical school lecture outlines. The "precision" and "recall" varied over a wide range and were deemed not yet acceptable for automated creation of a database of concepts from curricular documents. However, results were good enough to warrant further study and continued system development.
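Dictionary-based concept recognition of the sort POSTDOC performs can be sketched as a greedy longest-match scan over tokens. The term dictionary and concept identifiers below are illustrative stand-ins, not actual Metathesaurus content.

```python
# Toy term dictionary mapping token phrases to concept identifiers
# (the phrases and IDs are invented for this example).
dictionary = {
    ("myocardial", "infarction"): "C-0001",
    ("infarction",): "C-0002",
    ("aspirin",): "C-0003",
}
max_len = max(len(phrase) for phrase in dictionary)

def recognize(tokens):
    """Scan left to right, always preferring the longest dictionary phrase."""
    found, i = [], 0
    while i < len(tokens):
        # Try the longest window first so "myocardial infarction" beats "infarction".
        for n in range(min(max_len, len(tokens) - i), 0, -1):
            phrase = tuple(tokens[i:i + n])
            if phrase in dictionary:
                found.append((" ".join(phrase), dictionary[phrase]))
                i += n
                break
        else:
            i += 1  # no phrase starts here; advance one token
    return found

hits = recognize("patient given aspirin after myocardial infarction".split())
```

A production matcher would also need normalization (case, inflection, word order), which is a large part of why precision and recall vary so widely in practice.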
The main national and international databases devoted wholly or partly to medicine are reviewed, in order to ascertain the criteria they use to select their material, especially journals, and the share of Spanish and Latin American journals in them. The criteria used by researchers to select information are also analyzed, as well as the reasons that explain the importance that quantitative indicators such as the so-called "impact factor" have acquired.
Vanschoren, Joaquin; Blockeel, Hendrik
Next to running machine learning algorithms based on inductive queries, much can be learned by immediately querying the combined results of many prior studies. Indeed, all around the globe, thousands of machine learning experiments are being executed on a daily basis, generating a constant stream of empirical information on machine learning techniques. While the information contained in these experiments might have many uses beyond their original intent, results are typically described very concisely in papers and discarded afterwards. If we properly store and organize these results in central databases, they can be immediately reused for further analysis, thus boosting future research. In this chapter, we propose the use of experiment databases: databases designed to collect all the necessary details of these experiments, and to intelligently organize them in online repositories to enable fast and thorough analysis of a myriad of collected results. They constitute an additional, queriable source of empirical meta-data based on principled descriptions of algorithm executions, without reimplementing the algorithms in an inductive database. As such, they engender a very dynamic, collaborative approach to experimentation, in which experiments can be freely shared, linked together, and immediately reused by researchers all over the world. They can be set up for personal use, to share results within a lab or to create open, community-wide repositories. Here, we provide a high-level overview of their design, and use an existing experiment database to answer various interesting research questions about machine learning algorithms and to verify a number of recent studies.
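An experiment database of the kind proposed can be prototyped with nothing more than an embedded relational store. The sketch below (the table layout, algorithm names, and accuracy values are invented for illustration) records a few algorithm runs in SQLite and then answers a cross-study question with a single query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE experiment (
    algorithm TEXT, dataset TEXT, params TEXT, accuracy REAL)""")

# A handful of hypothetical prior runs, as a lab might share them.
runs = [
    ("svm",  "iris", "C=1.0",   0.95),
    ("svm",  "wine", "C=1.0",   0.88),
    ("tree", "iris", "depth=3", 0.91),
    ("tree", "wine", "depth=3", 0.79),
]
conn.executemany("INSERT INTO experiment VALUES (?, ?, ?, ?)", runs)

# Querying the combined results of many studies: mean accuracy per
# algorithm across all datasets, best-performing algorithm first.
rows = conn.execute("""SELECT algorithm, AVG(accuracy)
                       FROM experiment
                       GROUP BY algorithm
                       ORDER BY 2 DESC""").fetchall()
```

The chapter's point is that once such runs are stored with principled descriptions, questions like this become one query instead of one re-run per study.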
Blümle, A; Lagrèze, W A; Motschall, E
In order to identify current (and relevant) evidence for a specific clinical question within the unmanageable amount of information available, solid skills in performing a systematic literature search are essential. An efficient approach is to search a biomedical database containing relevant literature citations of study reports. The best known database is MEDLINE, which is searchable for free via the PubMed interface. In this article, we explain step by step how to perform a systematic literature search via PubMed by means of an example research question in the field of ophthalmology. First, we demonstrate how to translate the clinical problem into a well-framed and searchable research question, how to identify relevant search terms and how to conduct a text word search and a search with keywords in medical subject headings (MeSH) terms. We then show how to limit the number of search results if the search yields too many irrelevant hits and how to increase the number in the case of too few citations. Finally, we summarize all essential principles that guide a literature search via PubMed.
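The kind of combined MeSH-term and text-word query described above can also be issued programmatically through NCBI's E-utilities `esearch` endpoint. The sketch below only constructs the request URL; the search terms are an illustrative example, and a real client would fetch the URL and respect NCBI's usage guidelines.

```python
from urllib.parse import urlencode

# An example query mixing a MeSH heading with a title/abstract text word,
# mirroring the stepwise strategy described in the article.
term = '"macular degeneration"[MeSH Terms] AND ranibizumab[Title/Abstract]'
params = {"db": "pubmed", "term": term, "retmax": 20}
url = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
       + urlencode(params))
```

The same query string pasted into the PubMed web interface yields the same hit set, so the interactive and programmatic routes can be mixed freely.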
SRD 106 IUPAC-NIST Solubility Database (Web, free access) These solubilities are compiled from 18 volumes (Click here for List) of the International Union for Pure and Applied Chemistry(IUPAC)-NIST Solubility Data Series. The database includes liquid-liquid, solid-liquid, and gas-liquid systems. Typical solvents and solutes include water, seawater, heavy water, inorganic compounds, and a variety of organic compounds such as hydrocarbons, halogenated hydrocarbons, alcohols, acids, esters and nitrogen compounds. There are over 67,500 solubility measurements and over 1800 references.
Jimeno Yepes, Antonio; Berlanga, Rafael
Text mining of scientific literature has been essential for setting up large public biomedical databases, which are being widely used by the research community. In the biomedical domain, the existence of a large number of terminological resources and knowledge bases (KB) has enabled a myriad of machine learning methods for different text mining related tasks. Unfortunately, KBs have not been devised for text mining tasks but for human interpretation, so the performance of KB-based methods is usually lower when compared to supervised machine learning methods. The disadvantage of supervised methods, though, is that they require labeled training data and are therefore not useful for large-scale biomedical text mining systems. KB-based methods do not have this limitation. In this paper, we describe a novel method to generate word-concept probabilities from a KB, which can serve as a basis for several text mining tasks. This method not only takes into account the underlying patterns within the descriptions contained in the KB but also those in texts available from large unlabeled corpora such as MEDLINE. The parameters of the model have been estimated without training data. Patterns from MEDLINE have been built using MetaMap for entity recognition and related using co-occurrences. The word-concept probabilities were evaluated on the task of word sense disambiguation (WSD). The results showed that our method obtained a higher degree of accuracy than other state-of-the-art approaches when evaluated on the MSH WSD data set. We also evaluated our method on the task of document ranking using MEDLINE citations. These results also showed an increase in performance over existing baseline retrieval approaches. Copyright © 2014 Elsevier Inc. All rights reserved.
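The core idea, word-concept probabilities estimated from co-occurrence counts and applied to word sense disambiguation, can be illustrated with a toy example. Everything below (the two candidate concepts, their co-occurrence counts, and the add-one smoothing) is an invented miniature, not the paper's actual estimation procedure.

```python
from collections import Counter

# Toy co-occurrence counts: words observed near each candidate concept.
# (In the paper these would come from KB descriptions plus MEDLINE contexts.)
cooc = {
    "cold (temperature)": Counter({"weather": 8, "ice": 5, "winter": 6}),
    "common cold":        Counter({"virus": 9, "fever": 7, "cough": 6}),
}
vocab = set().union(*cooc.values())

def word_concept_prob(word, concept):
    """P(word | concept) with add-one smoothing over the shared vocabulary."""
    c = cooc[concept]
    return (c[word] + 1) / (sum(c.values()) + len(vocab))

def disambiguate(context):
    """Pick the concept whose word profile best explains the context words."""
    def score(concept):
        s = 1.0
        for w in context:
            s *= word_concept_prob(w, concept)
        return s
    return max(cooc, key=score)

sense = disambiguate(["fever", "cough"])
```

Because "fever" and "cough" co-occur with the illness sense, the viral concept wins; the same probabilities can be reused for ranking documents against concepts.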
Pesquita, Catia; Faria, Daniel; Falcão, André O; Lord, Phillip; Couto, Francisco M
In recent years, ontologies have become a mainstream topic in biomedical research. When biological entities are described using a common schema, such as an ontology, they can be compared by means of their annotations. This type of comparison is called semantic similarity, since it assesses the degree of relatedness between two entities by the similarity in meaning of their annotations. The application of semantic similarity to biomedical ontologies is recent; nevertheless, several studies have been published in the last few years describing and evaluating diverse approaches. Semantic similarity has become a valuable tool for validating the results drawn from biomedical studies such as gene clustering, gene expression data analysis, prediction and validation of molecular interactions, and disease gene prioritization. We review semantic similarity measures applied to biomedical ontologies and propose their classification according to the strategies they employ: node-based versus edge-based and pairwise versus groupwise. We also present comparative assessment studies and discuss the implications of their results. We survey the existing implementations of semantic similarity measures, and we describe examples of applications to biomedical research. This will clarify how biomedical researchers can benefit from semantic similarity measures and help them choose the approach most suitable for their studies.Biomedical ontologies are evolving toward increased coverage, formality, and integration, and their use for annotation is increasingly becoming a focus of both effort by biomedical experts and application of automated annotation procedures to create corpora of higher quality and completeness than are currently available. Given that semantic similarity measures are directly dependent on these evolutions, we can expect to see them gaining more relevance and even becoming as essential as sequence similarity is today in biomedical research.
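The edge-based versus node-based distinction drawn in this review can be illustrated with a toy is-a hierarchy. The sketch below implements one simple edge-based measure (inverse shortest-path length through a common ancestor); the term names are hypothetical and the measure is illustrative, not one of the specific measures surveyed:

```python
from collections import deque

# Toy is-a hierarchy (child -> parents); term names are hypothetical.
parents = {
    "molecular_function": [],
    "binding": ["molecular_function"],
    "protein_binding": ["binding"],
    "dna_binding": ["binding"],
}

def ancestors(term):
    """Map each ancestor of a term to its shortest edge distance (BFS)."""
    dist, queue = {term: 0}, deque([term])
    while queue:
        t = queue.popleft()
        for p in parents[t]:
            if p not in dist:
                dist[p] = dist[t] + 1
                queue.append(p)
    return dist

def edge_similarity(a, b):
    """Edge-based similarity: 1 / (1 + shortest path via a common ancestor)."""
    da, db = ancestors(a), ancestors(b)
    path = min(da[t] + db[t] for t in da if t in db)
    return 1 / (1 + path)

print(edge_similarity("protein_binding", "dna_binding"))  # 1/(1+2)
```

Node-based measures would instead compare the information content of the shared ancestors, and groupwise measures aggregate over whole annotation sets rather than term pairs.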
In June 1996, NASA released a Cooperative Agreement Notice (CAN) inviting proposals to establish a National Space Biomedical Research Institute (9-CAN-96-01). This CAN stated that: The Mission of the Institute will be to lead a National effort for accomplishing the integrated, critical path, biomedical research necessary to support the long term human presence, development, and exploration of space and to enhance life on Earth by applying the resultant advances in human knowledge and technology acquired through living and working in space. The Institute will be the focal point of NASA sponsored space biomedical research. This statement has not been amended by NASA and remains the mission of the NSBRI.
Pang, Shuchao; Orgun, Mehmet A; Yu, Zhezhou
The traditional biomedical image retrieval methods as well as content-based image retrieval (CBIR) methods originally designed for non-biomedical images either only consider using pixel and low-level features to describe an image or use deep features to describe images but still leave a lot of room for improving both accuracy and efficiency. In this work, we propose a new approach, which exploits deep learning technology to extract the high-level and compact features from biomedical images. The deep feature extraction process leverages multiple hidden layers to capture substantial feature structures of high-resolution images and represent them at different levels of abstraction, leading to an improved performance for indexing and retrieval of biomedical images. We exploit the current popular and multi-layered deep neural networks, namely, stacked denoising autoencoders (SDAE) and convolutional neural networks (CNN) to represent the discriminative features of biomedical images by transferring the feature representations and parameters of pre-trained deep neural networks from another domain. Moreover, in order to index all the images for finding the similarly referenced images, we also introduce preference learning technology to train and learn a kind of a preference model for the query image, which can output the similarity ranking list of images from a biomedical image database. To the best of our knowledge, this paper introduces preference learning technology for the first time into biomedical image retrieval. We evaluate the performance of two powerful algorithms based on our proposed system and compare them with those of popular biomedical image indexing approaches and existing regular image retrieval methods with detailed experiments over several well-known public biomedical image databases. Based on different criteria for the evaluation of retrieval performance, experimental results demonstrate that our proposed algorithms outperform the state
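The final retrieval step described above, producing a similarity ranking list of database images for a query image from extracted feature vectors, can be sketched as follows. The 4-D vectors and image names are hypothetical stand-ins for SDAE/CNN features; the paper's preference-learning model is not reproduced here:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rank_database(query, database):
    """Return image keys sorted by descending similarity to the query."""
    return sorted(database, key=lambda k: cosine(query, database[k]),
                  reverse=True)

# Hypothetical compact feature vectors from a pretrained deep network.
db = {
    "xray_01": [0.9, 0.1, 0.0, 0.2],
    "mri_07":  [0.1, 0.8, 0.3, 0.0],
    "xray_02": [0.8, 0.2, 0.1, 0.1],
}
query = [1.0, 0.1, 0.0, 0.1]
print(rank_database(query, db))  # most similar images first
```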
Federal Laboratory Consortium — The Molecular Biomedical Imaging Laboratory (MBIL) is adjacent to, and has access to, the Department of Radiology and Imaging Sciences clinical imaging facilities. MBIL...
Vardharajula, Sandhya; Ali, Sk Z; Tiwari, Pooja M; Eroğlu, Erdal; Vig, Komal; Dennis, Vida A; Singh, Shree R
Carbon nanotubes (CNTs) are emerging as novel nanomaterials for various biomedical applications. CNTs can be used to deliver a variety of therapeutic agents, including biomolecules, to the target disease sites. In addition, their unparalleled optical and electrical properties make them excellent candidates for bioimaging and other biomedical applications. However, the high cytotoxicity of CNTs limits their use in humans and many biological systems. The biocompatibility and low cytotoxicity of CNTs are attributed to size, dose, duration, testing systems, and surface functionalization. The functionalization of CNTs improves their solubility and biocompatibility and alters their cellular interaction pathways, resulting in much-reduced cytotoxic effects. Functionalized CNTs are promising novel materials for a variety of biomedical applications. These potential applications are particularly enhanced by their ability to penetrate biological membranes with relatively low cytotoxicity. This review is directed towards the overview of CNTs and their functionalization for biomedical applications with minimal cytotoxicity. PMID:23091380
This book is based on a graduate course entitled, Ubiquitous Healthcare Circuits and Systems, that was given by one of the editors. It includes an introduction and overview to biomedical ICs and provides information on the current trends in research.
Deloatch, E. M.
The five problems studied for biomedical applications of NASA technology are reported. The studies reported are: design modification of electrophoretic equipment, operating room environment control, hematological viscometry, handling system for iridium, and indirect blood pressure measuring device.
Discusses the definition of "biomedical engineering" and the development of educational programs in the field. Includes detailed descriptions of the roles of bioengineers, medical engineers, and chemical engineers. (CC)
Hydroxyapatite coatings are of great importance in the biological and biomedical coatings fields, especially in the current era of nanotechnology and bioapplications. With a bonelike structure that promotes osseointegration, hydroxyapatite coating can be applied to otherwise bioinactive implants to make their surface bioactive, thus achieving faster healing and recovery. In addition to applications in orthopedic and dental implants, this coating can also be used in drug delivery. Hydroxyapatite Coatings for Biomedical Applications explores developments in the processing and property characteri
Cheng, JX; Widjaja, F; Choi, JE; Hendren, RL
Complementary and alternative medicine (CAM) is widely used to treat children with psychiatric disorders. In this review, MedLine was searched for various biomedical/CAM treatments in combination with the key words "children," "adolescents," "psychiatric disorders," and "complementary alternative medicine." The biomedical/CAM treatments most thoroughly researched were omega-3 fatty acids, melatonin, and memantine. Those with the fewest published studies were N-acetylcysteine, vitamin B12, a...
Mora Pérez, Oscar
This final year project presents the design principles and prototype implementation of BIMS (Biomedical Information Management System), a flexible software system which provides an infrastructure to manage all information required by biomedical research projects. The BIMS project was initiated with the motivation to solve several limitations in medical data acquisition of some research projects, in which Universitat Pompeu Fabra takes part. These limitations, based on the lack of control mechan...
The John Glenn Biomedical Engineering Consortium is an inter-institutional research and technology development effort, beginning with ten projects in FY02 that are aimed at applying GRC expertise in fluid physics and sensor development, together with local biomedical expertise, to mitigate the risks of space flight on the health, safety, and performance of astronauts. It is anticipated that several new technologies will be developed that are applicable to both medical needs in space and on earth.
ABSTRACT: The subject of this thesis is the exploration of the suitability of chitosan and some of its derivatives for some chosen biomedical applications. Chitosan-graft-poly (N-vinyl imidazole), Chitosan-tripolyphosphate and ascorbyl chitosan were synthesized and characterized for specific biomedical applications in line with their chemical functionalities. Chitosan-graft-poly (N-vinyl imidazole), Chi-graft-PNVI, was synthesized by two methods: via an N-protection route and without N-pr...
Westra, Bonnie L; Sylvia, Martha; Weinfurter, Elizabeth F; Pruinelli, Lisiane; Park, Jung In; Dodd, Dianna; Keenan, Gail M; Senk, Patricia; Richesson, Rachel L; Baukner, Vicki; Cruz, Christopher; Gao, Grace; Whittenburg, Luann; Delaney, Connie W
Big data and cutting-edge analytic methods in nursing research challenge nurse scientists to extend the data sources and analytic methods used for discovering and translating knowledge. The purpose of this study was to identify, analyze, and synthesize exemplars of big data nursing research applied to practice and disseminated in key nursing informatics, general biomedical informatics, and nursing research journals. A literature review of studies published between 2009 and 2015. There were 650 journal articles identified in 17 key nursing informatics, general biomedical informatics, and nursing research journals in the Web of Science database. After screening for inclusion and exclusion criteria, 17 studies published in 18 articles were identified as big data nursing research applied to practice. Nurses clearly are beginning to conduct big data research applied to practice. These studies represent multiple data sources and settings. Although numerous analytic methods were used, the fundamental issue remains to define the types of analyses consistent with big data analytic methods. There are needs to increase the visibility of big data and data science research conducted by nurse scientists, further examine the use of state of the science in data analytics, and continue to expand the availability and use of a variety of scientific, governmental, and industry data resources. A major implication of this literature review is whether nursing faculty and preparation of future scientists (PhD programs) are prepared for big data and data science. Copyright © 2016 Elsevier Inc. All rights reserved.
Wang, Liming; Chen, Chunying
Nanomaterials (NMs) have been widely used in biomedical fields, daily consumer products, and even the food industry. It is crucial to understand the safety and biomedical efficacy of NMs. In this review, we summarized the recent progress on the physiological and pathological effects of NMs at several levels: the protein-nano interface, NM-subcellular structures, and cell–cell interaction. We focused on the detailed information of nano-bio interaction, especially protein adsorption, intracellular trafficking, biological barriers, and signaling pathways, as well as the associated mechanisms mediated by nanomaterials. We also introduced related analytical methods that are meaningful and helpful for biomedical effect studies in the future. We believe that knowledge about the pathophysiologic effects of NMs is not only significant for the rational design of medical NMs but also helps predict their safety and further improve their applications in the future. - Highlights: • Rapid protein adsorption onto nanomaterials that affects biomedical effects • Nanomaterials and their interaction with biological membranes, intracellular trafficking and specific cellular effects • Nanomaterials and their interaction with biological barriers • The signaling pathways mediated by nanomaterials and related biomedical effects • Novel techniques for studying translocation and biomedical effects of NMs
Chan, Wallace K B; Zhang, Hongjiu; Yang, Jianyi; Brender, Jeffrey R; Hur, Junguk; Özgür, Arzucan; Zhang, Yang
G protein-coupled receptors (GPCRs) are probably the most attractive drug target membrane proteins, which constitute nearly half of drug targets in the contemporary drug discovery industry. While the majority of drug discovery studies employ existing GPCR and ligand interactions to identify new compounds, there remains a shortage of specific databases with precisely annotated GPCR-ligand associations. We have developed a new database, GLASS, which aims to provide a comprehensive, manually curated resource for experimentally validated GPCR-ligand associations. A new text-mining algorithm was proposed to collect GPCR-ligand interactions from the biomedical literature, which is then crosschecked with five primary pharmacological datasets, to enhance the coverage and accuracy of GPCR-ligand association data identifications. A special architecture has been designed to allow users to make homologous ligand searches with flexible bioactivity parameters. The current database contains ∼500 000 unique entries, of which the vast majority stems from ligand associations with rhodopsin- and secretin-like receptors. The GLASS database should find its most useful application in various in silico GPCR screening and functional annotation studies. The website of the GLASS database is freely available at http://zhanglab.ccmb.med.umich.edu/GLASS/. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved.
Bennett, Bradley C; Balick, Michael J
Medical research on plant-derived compounds requires a breadth of expertise from field to laboratory and clinical skills. Too often basic botanical skills are evidently lacking, especially with respect to plant taxonomy and botanical nomenclature. Binomial and familial names, synonyms and author citations are often misconstrued. The correct botanical name, linked to a vouchered specimen, is the sine qua non of phytomedical research. Without the unique identifier of a proper binomial, research cannot accurately be linked to the existing literature. Perhaps more significant is the ambiguity of species determinations that ensues from poor taxonomic practices. This uncertainty, not surprisingly, obstructs reproducibility of results, the cornerstone of science. Based on our combined six decades of experience with medicinal plants, we discuss the problems of inaccurate taxonomy and botanical nomenclature in biomedical research. These problems appear all too frequently in manuscripts and grant applications that we review, and they extend to the published literature. We also review the literature on the importance of taxonomy in other disciplines that relate to medicinal plant research. In most cases, questions regarding orthography, synonymy, author citations, and current family designations of most plant binomials can be resolved using widely available online databases and other electronic resources. Some complex problems require consultation with a professional plant taxonomist, which also is important for accurate identification of voucher specimens. Researchers should provide the currently accepted binomial and complete author citation, provide relevant synonyms, and employ the Angiosperm Phylogeny Group III family name. Taxonomy is a vital adjunct not only to plant-medicine research but to virtually every field of science. Medicinal plant researchers can increase the precision and utility of their investigations by following sound practices with respect to botanical
Kozlovskaia, Maria; Vlahovich, Nicole; Ashton, Kevin J; Hughes, David C
Achilles tendinopathy is the most prevalent tendon disorder in people engaged in running and jumping sports. The aetiology of Achilles tendinopathy is complex and requires comprehensive research of contributing risk factors. There is relatively little research focussing on potential biomedical risk factors for Achilles tendinopathy. The purpose of this systematic review is to identify studies and summarise current knowledge of biomedical risk factors of Achilles tendinopathy in physically active people. Research databases were searched for relevant articles, followed by assessment in accordance with the PRISMA statement and the standards of the Cochrane collaboration. Levels of evidence and quality assessment designation were implemented in accordance with the OCEBM levels of evidence and the Newcastle-Ottawa Quality Assessment Scale, respectively. A systematic review of the literature identified 22 suitable articles. All included studies had a moderate level of evidence (2b), with the Newcastle-Ottawa score varying between 6 and 9. The majority (17) investigated genetic polymorphisms involved in tendon structure and homeostasis and in apoptosis and inflammation pathways. Overweight as a risk factor of Achilles tendinopathy was described in five included studies that investigated non-genetic factors. COL5A1 genetic variants were the most extensively studied, particularly in association with genetic variants in the genes involved in regulation of cell-matrix interaction in tendon and matrix homeostasis. It is important to investigate connections and pathways whose interactions might be disrupted and therefore alter collagen structure and lead to the development of pathology. Associations between Achilles tendinopathy and polymorphisms in genes involved in apoptosis and inflammation were not strong, but these genes should be considered for further investigation. This systematic review suggests that biomedical risk factors are an important consideration in the future study of propensity to the development
Jobbágy, Akos; Benyó, Zoltán; Monos, Emil
The Bologna Declaration aims at harmonizing the European higher education structure. In accordance with the Declaration, biomedical engineering will also be offered as a master (MSc) course in Hungary from 2009. Since 1995 the biomedical engineering course has been held in cooperation among three universities: Semmelweis University, Budapest Veterinary University, and Budapest University of Technology and Economics. One of the latter's faculties, the Faculty of Electrical Engineering and Informatics, has been responsible for the course. Students could start their biomedical engineering studies - usually in parallel with their first degree course - after they had collected at least 180 ECTS credits. Consequently, the biomedical engineering course could have been considered a master course even before the Bologna Declaration. Students had to collect 130 ECTS credits during the six-semester course. This is equivalent to four-semester full-time studies, because during the first three semesters the curriculum required gaining only one third of the usual ECTS credits. The paper gives a survey of the new biomedical engineering master course, also briefly summarizing the subjects in the curriculum.
Implantable sensor systems are effective tools for biomedical diagnosis, visualization and treatment of various health conditions, attracting the interest of researchers as well as healthcare practitioners. These systems efficiently and conveniently provide essential data on the body part being diagnosed, such as gastrointestinal parameter values (temperature, pH, pressure), blood glucose and pressure levels, and electrocardiogram data. Such data are first transmitted from the implantable sensor units to an external receiver node or network and then to a central monitoring and control (computer) unit for analysis, diagnosis and/or treatment. Implantable sensor units are typically in the form of mobile microrobotic capsules or implanted stationary (body-fixed) units. In particular, capsule-based systems have attracted significant research interest recently, with a variety of applications including endoscopy, microsurgery, drug delivery and biopsy. In such implantable sensor systems, one of the most challenging problems is the accurate localization and tracking of the microrobotic sensor unit (e.g., robotic capsule) inside the human body. This article presents a literature review of the existing localization and tracking techniques for robotic implantable sensor systems, with their merits and limitations and possible solutions for the proposed localization methods. The article also provides a brief discussion of the connection and cooperation of such techniques with wearable biomedical sensor systems.
Beninger, Paul; Ibara, Michael A
The discipline of pharmacovigilance is rooted in the aftermath of the thalidomide tragedy of 1961. It has evolved as a result of collaborative efforts by many individuals and organizations, including physicians, patients, Health Authorities, universities, industry, the World Health Organization, the Council for International Organizations of Medical Sciences, and the International Conference on Harmonisation. Biomedical informatics is rooted in technologically based methodologies and has evolved at the speed of computer technology. The purpose of this review is to bring a novel lens to pharmacovigilance, looking at the evolution and development of the field of pharmacovigilance from the perspective of biomedical informatics, with the explicit goal of providing a foundation for discussion of the future direction of pharmacovigilance as a discipline. For this review, we searched [publication trend for the log10 value of the numbers of publications identified in PubMed] using the key words [informatics (INF), pharmacovigilance (PV), pharmacovigilance + informatics (PV + INF)], for [study types] articles published between [1994-2015]. We manually searched the reference lists of identified articles for additional information. Biomedical informatics has made significant contributions to the infrastructural development of pharmacovigilance. However, there has not otherwise been a systematic assessment of the role of biomedical informatics in enhancing the field of pharmacovigilance, and there has been little cross-discipline scholarship. Rapidly developing innovations in biomedical informatics pose a challenge to pharmacovigilance in finding ways to include new sources of safety information, including social media, massively linked databases, and mobile and wearable wellness applications and sensors. With biomedical informatics as a lens, it is evident that certain aspects of pharmacovigilance are evolving more slowly. However, the high levels of mutual interest in
Taylor, Donald P
High content screening (HCS) requires time-consuming and often complex iterative information retrieval and assessment approaches to optimally conduct drug discovery programs and biomedical research. Pre- and post-HCS experimentation both require the retrieval of information from public as well as proprietary literature in addition to structured information assets such as compound libraries and projects databases. Unfortunately, this information is typically scattered across a plethora of proprietary bioinformatics tools and databases and public domain sources. Consequently, single search requests must be presented to each information repository, forcing the results to be manually integrated for a meaningful result set. Furthermore, these bioinformatics tools and data repositories are becoming increasingly complex to use; typically they fail to allow for more natural query interfaces. Vivisimo has developed an enterprise software platform to bridge disparate silos of information. The platform automatically categorizes search results into descriptive folders without the use of taxonomies to drive the categorization. A new approach to information retrieval for HCS experimentation is proposed.
Tully, R. Brent; Courtois, Helene M.; Jacobs, Bradley A.; Rizzi, Luca; Shaya, Edward J.; Makarov, Dmitry I.
A database can be accessed on the Web at http://edd.ifa.hawaii.edu that was developed to promote access to information related to galaxy distances. The database has three functional components. First, tables from many literature sources have been gathered and enhanced with links through a distinct galaxy naming convention. Second, comparisons of results both at the levels of parameters and of techniques have begun and are continuing, leading to increasing homogeneity and consistency of distance measurements. Third, new material is presented arising from ongoing observational programs at the University of Hawaii 2.2 m telescope, radio telescopes at Green Bank, Arecibo, and Parkes and with the Hubble Space Telescope. This new observational material is made available in tandem with related material drawn from archives and passed through common analysis pipelines.
Afshinnekoo, Ebrahim; Ahsanuddin, Sofia; Mason, Christopher E
Crowdfunding and crowdsourcing of medical research has emerged as a novel paradigm for many biomedical disciplines to rapidly collect, process and interpret data from high-throughput and high-dimensional experiments. The novelty and promise of these approaches have led to fundamental discoveries about RNA mechanisms, microbiome dynamics and even patient interpretation of test results. However, these methods require robust training protocols, uniform sampling methods and experimental rigor in order to be useful for subsequent research efforts. Executed correctly, crowdfunding and crowdsourcing can leverage public resources and engagement to generate support for scientific endeavors that would otherwise be impossible due to funding constraints and or the large number of participants needed for data collection. We conducted a comprehensive literature review of scientific studies that utilized crowdsourcing and crowdfunding to generate data. We also discuss our own experiences conducting citizen-science research initiatives (MetaSUB and PathoMap) in ensuring data robustness, educational outreach and public engagement. We demonstrate the efficacy of crowdsourcing mechanisms for revolutionizing microbiome and metagenomic research to better elucidate the microbial and genetic dynamics of cities around the world (as well as non-urban areas). Crowdsourced studies have been able to create an improved and unprecedented ability to monitor, design and measure changes at the microbial and macroscopic scale. Thus, the use of crowdsourcing strategies has dramatically altered certain genomics research to create global citizen-science initiatives that reveal new discoveries about the world's genetic dynamics. The effectiveness of crowdfunding and crowdsourcing is largely dependent on the study design and methodology. One point of contention for the present discussion is the validity and scientific rigor of data that are generated by non-scientists. Selection bias, limited sample
Abstract Background The vast amount of data published in the primary biomedical literature represents a challenge for the automated extraction and codification of individual data elements. Biological databases that rely solely on manual extraction by expert curators are unable to comprehensively annotate the information dispersed across the entire biomedical literature. The development of efficient tools based on natural language processing (NLP) systems is essential for the selection of relevant publications, identification of data attributes and partially automated annotation. One of the tasks of the BioCreative 2010 Challenge III was devoted to the evaluation of NLP systems developed to identify articles for curation and extraction of protein-protein interaction (PPI) data. Results The BioCreative 2010 competition addressed three tasks: gene normalization, article classification and interaction method identification. The BioGRID and MINT protein interaction databases both participated in the generation of the test publication set for gene normalization, annotated the development and test sets for article classification, and curated the test set for interaction method classification. These test datasets served as a gold standard for the evaluation of data extraction algorithms. Conclusion The development of efficient tools for extraction of PPI data is a necessary step to achieve full curation of the biomedical literature. NLP systems can in the first instance facilitate expert curation by refining the list of candidate publications that contain PPI data; more ambitiously, NLP approaches may be able to directly extract relevant information from full-text articles for rapid inspection by expert curators. Close collaboration between biological databases and NLP systems developers will continue to facilitate the long-term objectives of both disciplines.
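The triage step this abstract describes, refining a list of candidate publications likely to contain PPI data, can be sketched with a deliberately crude keyword scorer. The cue terms and mini-abstracts below are hypothetical; real BioCreative systems use trained classifiers rather than fixed word lists:

```python
# Hypothetical cue terms suggestive of protein-protein interaction data.
PPI_CUES = {"interacts", "binds", "complex", "coimmunoprecipitation", "two-hybrid"}

def triage_score(abstract):
    """Fraction of cue terms present: a crude relevance score for curation."""
    tokens = set(abstract.lower().split())
    return len(PPI_CUES & tokens) / len(PPI_CUES)

candidates = {
    "pmid_1": "Protein A binds protein B and forms a stable complex in vivo.",
    "pmid_2": "We measured soil moisture across three seasons.",
}

# Curators would inspect the highest-scoring publications first.
ranked = sorted(candidates, key=lambda p: triage_score(candidates[p]),
                reverse=True)
print(ranked)
```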
deCarvalho, Robert; Desai, Shailen D.; Haines, Bruce J.; Kruizinga, Gerhard L.; Gilmer, Christopher
This software provides storage, retrieval, and analysis functionality for managing satellite altimetry data. It improves the efficiency and analysis capabilities of existing database software with improved flexibility and documentation. It offers flexibility in the type of data that can be stored. There is efficient retrieval either across the spatial domain or the time domain. Built-in analysis tools are provided for frequently performed altimetry tasks. This software package is used for storing and manipulating satellite measurement data. It was developed with a focus on handling the requirements of repeat-track altimetry missions such as Topex and Jason. It was, however, designed to work with a wide variety of satellite measurement data (e.g., the Gravity Recovery And Climate Experiment -- GRACE). The software consists of several command-line tools for importing, retrieving, and analyzing satellite measurement data.
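The record describes efficient retrieval either across the time domain or the spatial domain. A minimal in-memory sketch of that query pattern is given below; the class and field names are invented for illustration, and the actual software is a set of command-line tools rather than a Python library:

```python
from bisect import bisect_left, bisect_right
from dataclasses import dataclass

@dataclass
class Measurement:
    time: float   # seconds since mission epoch
    lat: float    # degrees north
    lon: float    # degrees east
    value: float  # e.g., sea-surface height anomaly in metres

class AltimetryStore:
    """Toy store: fast time-domain retrieval via a sorted index and
    binary search; spatial-domain retrieval via a bounding-box scan."""

    def __init__(self):
        self._data = []   # measurements kept sorted by time
        self._times = []  # parallel sorted key list for bisect

    def insert(self, m):
        i = bisect_left(self._times, m.time)
        self._times.insert(i, m.time)
        self._data.insert(i, m)

    def by_time(self, t0, t1):
        # All measurements with t0 <= time <= t1, in time order.
        return self._data[bisect_left(self._times, t0):bisect_right(self._times, t1)]

    def by_box(self, lat0, lat1, lon0, lon1):
        # All measurements inside the latitude/longitude box.
        return [m for m in self._data
                if lat0 <= m.lat <= lat1 and lon0 <= m.lon <= lon1]
```

A real repeat-track altimetry database would add on-disk indexing and pass-by-pass organization; the sketch only shows the two retrieval axes the record names.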
Tkacz, Ewaryst; Paszenda, Zbigniew; Piętka, Ewa
This book presents the proceedings of the “Innovations in Biomedical Engineering IBE’2016” Conference held on October 16–18, 2016 in Poland, discussing recent research on innovations in biomedical engineering. The past decade has seen the dynamic development of more and more sophisticated technologies, including biotechnologies, and more general technologies applied in the area of life sciences. As such the book covers the broadest possible spectrum of subjects related to biomedical engineering innovations. Divided into four parts, it presents state-of-the-art achievements in: • engineering of biomaterials, • modelling and simulations in biomechanics, • informatics in medicine, and • signal analysis. The book helps bridge the gap between technological and methodological engineering achievements on the one hand and clinical requirements in the three major areas of diagnosis, therapy and rehabilitation on the other.
Phillips, M; Kalet, I; McNutt, T; Smith, W
Biomedical informatics encompasses a very large domain of knowledge and applications. This broad and loosely defined field can make it difficult to navigate. Physicists often are called upon to provide informatics services and/or to take part in projects involving principles of the field. The purpose of the presentations in this symposium is to help medical physicists gain some knowledge about the breadth of the field and how, in the current clinical and research environment, they can participate and contribute. Three talks have been designed to give an overview from the perspective of physicists and to provide a more in-depth discussion in two areas. One of the primary purposes, and the main subject of the first talk, is to help physicists achieve a perspective about the range of the topics and concepts that fall under the heading of 'informatics'. The approach is to de-mystify topics and jargon and to help physicists find resources in the field should they need them. The other talks explore two areas of biomedical informatics in more depth. The goal is to highlight two domains of intense current interest--databases and models--in enough depth into current approaches so that an adequate background for independent inquiry is achieved. These two areas will serve as good examples of how physicists, using informatics principles, can contribute to oncology practice and research. Learning Objectives: To understand how the principles of biomedical informatics are used by medical physicists. To put the relevant informatics concepts in perspective with regard to biomedicine in general. To use clinical database design as an example of biomedical informatics. To provide a solid background into the problems and issues of the design and use of data and databases in radiation oncology. To use modeling in the service of decision support systems as an example of modeling methods and data use. To provide a background into how uncertainty in our data and knowledge can be
Sánchez-Martín, Francisco M; Millán Rodríguez, Félix; Villavicencio Mavrich, Humberto
According to the Budapest declaration, the Open Access Initiative (OAI) is defined as an editorial model in which access to scientific journal literature and its use are free. The free flow of information enabled by the Internet has been the basis of this initiative. The Bethesda and Berlin declarations, supported by several international agencies, propose to require researchers to deposit copies of all published articles in a self-archive or an Open Access repository, and encourage researchers to publish their research papers in Open Access journals. This paper reviews the keys of the OAI, with its strengths and controversial aspects, and discusses the position of databases, search engines and repositories of biomedical information, as well as the attitudes of scientists, publishers and journals. To date, the journal Actas Urológicas Españolas (Act Urol Esp) offers its contents in Open Access, online, in Spanish and English.
Paldam, Martin; Svendsen, Gert Tinggaard
This report has two purposes. The first purpose is to present our 4-page questionnaire, which measures social capital. It is close to the main definitions of social capital and contains the most successful measures from the literature. It is also easy to apply, as discussed. The second purpose is to present the social capital database we have collected for 21 countries using the questionnaire. We do this by comparing the level of social capital in the countries covered. That is, the report compares the marginals from the 21 surveys.
Magnetic particles are increasingly being used in a wide variety of biomedical applications. Written by a team of internationally respected experts, this book provides an up-to-date authoritative reference for scientists and engineers. The first section presents the fundamentals of the field by explaining the theory of magnetism, describing techniques to synthesize magnetic particles, and detailing methods to characterize magnetic particles. The second section describes biomedical applications, including chemical sensors and cellular actuators, and diagnostic applications such as drug delivery, hyperthermia cancer treatment, and magnetic resonance imaging contrast.
This book presents and describes imaging technologies that can be used to study chemical processes and structural interactions in dynamic systems, principally in biomedical systems. The imaging technologies, largely biomedical imaging technologies such as MRT, fluorescence mapping, Raman mapping, nanoESCA, and CARS microscopy, have been selected according to their application range and to the chemical information content of their data. These technologies allow for the analysis and evaluation of delicate biological samples, which must not be disturbed during the process. Ultimately, this may me
van Mulligen, Erik M; Cases, Montserrat; Hettne, Kristina; Molero, Eva; Weeber, Marc; Robertson, Kevin A; Oliva, Baldomero; de la Calle, Guillermo; Maojo, Victor
The European INFOBIOMED Network of Excellence recognized that a successful education program in biomedical informatics should include not only traditional teaching activities in the basic sciences but also the development of skills for working in multidisciplinary teams. A carefully developed 3-year training program for biomedical informatics students addressed these educational aspects through the following four activities: (1) an internet course database containing an overview of all Medical Informatics and BioInformatics courses, (2) a BioMedical Informatics Summer School, (3) a mobility program based on a 'brokerage service' which published demands and offers, including funding for research exchange projects, and (4) training challenges aimed at the development of multi-disciplinary skills. This paper focuses on experiences gained in the development of novel educational activities addressing work in multidisciplinary teams. The training challenges described here were evaluated by asking participants to fill out forms with Likert scale based questions. For the mobility program a needs assessment was carried out. The mobility program supported 20 exchanges which fostered new BMI research, resulted in a number of peer-reviewed publications and demonstrated the feasibility of this multidisciplinary BMI approach within the European Union. Students unanimously indicated that the training challenge experience had contributed to their understanding and appreciation of multidisciplinary teamwork. The training activities undertaken in INFOBIOMED have contributed to a multi-disciplinary BMI approach. It is our hope that this work might provide an impetus for training efforts in Europe, and yield a new generation of biomedical informaticians.
Douglas, Tania S
Biomedical engineering (BME) contributes to development through improving human health. This paper examines BME education to address the needs of developing countries. Components of different BME programs described in the literature are synthesized to represent what has been proposed or implemented for the production of graduates able to address health problems in a manner suited to the local environment in which they occur. Published research on BME education is reviewed with reference to problem context, interventions and their mechanisms, and intended outcomes.
Full Text Available Abstract Background Prospective study protocols and registrations can play a significant role in reducing incomplete or selective reporting of primary biomedical research, because they are pre-specified blueprints which are available for the evaluation of, and comparison with, full reports. However, inconsistencies between protocols or registrations and full reports have been frequently documented. In this systematic review, which forms part of our series on the state of reporting of primary biomedical research, we aimed to survey the existing evidence of inconsistencies between protocols or registrations (i.e., what was planned to be done and/or what was actually done) and full reports (i.e., what was reported in the literature); this was based on findings from systematic reviews and surveys in the literature. Methods Electronic databases, including CINAHL, MEDLINE, Web of Science, and EMBASE, were searched to identify eligible surveys and systematic reviews. Our primary outcome was the level of inconsistency (expressed as a percentage, with higher percentages indicating greater inconsistency) between protocols or registrations and full reports. We summarized the findings from the included systematic reviews and surveys qualitatively. Results There were 37 studies (33 surveys and 4 systematic reviews) included in our analyses. Most studies (n = 36) compared protocols or registrations with full reports in clinical trials, while a single survey focused on primary studies of clinical trials and observational research. High inconsistency levels were found in outcome reporting (ranging from 14% to 100%), subgroup reporting (from 12% to 100%), statistical analyses (from 9% to 47%), and other measure comparisons. Some factors, such as outcomes with significant results, sponsorship, type of outcome and disease speciality, were reported to be significantly related to inconsistent reporting. Conclusions We found that inconsistent reporting between protocols or
Zhou, Xuezhong; Liu, Baoyan; Wu, Zhaohui; Feng, Yi
The amount of biomedical data in different disciplines is growing at an exponential rate. Integrating these significant knowledge sources to generate novel hypotheses for systems biology research is difficult. Traditional Chinese medicine (TCM) is a completely different discipline, and is a complementary knowledge system to modern biomedical science. This paper uses a significant TCM bibliographic literature database in China, together with MEDLINE, to help discover novel gene functional knowledge. We present an integrative mining approach to uncover the functional gene relationships from MEDLINE and TCM bibliographic literature. This paper introduces TCM literature (about 50,000 records) as one knowledge source for constructing literature-based gene networks. We use the TCM diagnosis, TCM syndrome, to automatically congregate the related genes. The syndrome-gene relationships are discovered based on the syndrome-disease relationships extracted from TCM literature and the disease-gene relationships in MEDLINE. Based on the bubble-bootstrapping and relation weight computing methods, we have developed a prototype system called MeDisco/3S, which has name entity and relation extraction, and online analytical processing (OLAP) capabilities, to perform the integrative mining process. We have got about 200,000 syndrome-gene relations, which could help generate syndrome-based gene networks, and help analyze the functional knowledge of genes from syndrome perspective. We take the gene network of Kidney-Yang Deficiency syndrome (KYD syndrome) and the functional analysis of some genes, such as CRH (corticotropin releasing hormone), PTH (parathyroid hormone), PRL (prolactin), BRCA1 (breast cancer 1, early onset) and BRCA2 (breast cancer 2, early onset), to demonstrate the preliminary results. The underlying hypothesis is that the related genes of the same syndrome will have some biological functional relationships, and will constitute a functional network. This paper presents
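The core discovery step described above is a join: syndrome-disease relations extracted from TCM literature are composed with disease-gene relations from MEDLINE to congregate genes per syndrome. A minimal sketch of that composition follows; the relation data, weighting scheme, and function name are invented for illustration and are far simpler than the bubble-bootstrapping and relation-weight methods used in MeDisco/3S:

```python
from collections import defaultdict

# Hypothetical extracted relations (illustrative only).
syndrome_disease = {
    "Kidney-Yang Deficiency": {"hypothyroidism", "osteoporosis"},
}
disease_gene = {
    "hypothyroidism": {"PTH", "CRH"},
    "osteoporosis": {"PTH", "BRCA1"},
}

def congregate_genes(syndrome_disease, disease_gene):
    """Compose syndrome->disease (TCM literature) with disease->gene
    (MEDLINE), counting how many bridging diseases support each gene."""
    syndrome_gene = defaultdict(lambda: defaultdict(int))
    for syndrome, diseases in syndrome_disease.items():
        for disease in diseases:
            for gene in disease_gene.get(disease, ()):
                syndrome_gene[syndrome][gene] += 1
    return syndrome_gene

weights = congregate_genes(syndrome_disease, disease_gene)
# PTH is reached through two bridging diseases, so it carries more support.
```

Genes that share high support under the same syndrome would then be candidate nodes of a syndrome-based functional gene network.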
Archives: Journal of Medical and Biomedical Sciences.
Biomedical Nanomaterials brings together the engineering applications and challenges of using nanostructured surfaces and nanomaterials in healthcare in a single source. Each chapter covers important and new information in the biomedical applications of nanomaterials.
Biomedical researchers are facing data deluge challenges such as dealing with large volume of complex heterogeneous data and complex and computationally demanding data processing methods. Such scale and complexity of biomedical research requires multi-disciplinary collaboration between scientists
African Journal of Biomedical Research: Journal Sponsorship.
Archives: Journal of Medicine and Biomedical Research.
Huffstetler, J.K.; Dailey, N.S.; Rickert, L.W.; Chilton, B.D.
The Information Center Complex (ICC), a centrally administered group of information centers, provides information support to environmental and biomedical research groups and others within and outside Oak Ridge National Laboratory. In-house data base building and development of specialized document collections are important elements of the ongoing activities of these centers. ICC groups must be concerned with language which will adequately classify and insure retrievability of document records. Language control problems are compounded when the complexity of modern scientific problem solving demands an interdisciplinary approach. Although there are several word lists, indexes, and thesauri specific to various scientific disciplines usually grouped as Environmental Sciences, no single generally recognized authority can be used as a guide to the terminology of all environmental science. If biomedical terminology for the description of research on environmental effects is also needed, the problem becomes even more complex. The building of a word list which can be used as a general guide to the environmental/biomedical sciences has been a continuing activity of the Information Center Complex. This activity resulted in the publication of the Environmental Biomedical Terminology Index (EBTI).
Lihong V. Wang summarizes his tenure as Editor-in-Chief of the Journal of Biomedical Optics and introduces his successor, Brian Pogue, who will assume the role in January 2018. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
This volume gives an introduction to a fascinating research area to applied mathematicians. It is devoted to providing the exposition of promising analytical and numerical techniques for solving challenging biomedical imaging problems, which trigger the investigation of interesting issues in various branches of mathematics.
The following instructions relating to submissions must be adhered to. Failure to conform can lead to delay in publication. Preferred method of submission. Manuscripts may be submitted by post (Editor-in-chief Journal of Biomedical Investigation, Department of Pharmacology and Therapeutics, Faculty of Medicine College ...
Gowen, Richard J.
Discusses recent developments in the health care industry and their impact on the future of biomedical engineering education. Indicates that a more thorough understanding of the complex functions of the living organism can be acquired through the application of engineering techniques to problems of life sciences. (CC)
Vol. 52, No. 1 (2003), pp. 39-43. ISSN 0862-8408. R&D Projects: GA ČR GA310/03/1381. Grant - others: Howard Hughes Medical Institute (US) HHMI55000323. Institutional research plan: CEZ:AV0Z5052915. Keywords: statistics * usage * biomedical journals. Subject RIV: EC - Immunology. Impact factor: 0.939, year: 2003
Ramalingam, Murugan; Ramakrishna, Seeram; Kobayashi, Hisatoshi
This cutting edge book provides all the important aspects dealing with the basic science involved in materials in biomedical technology, especially structure and properties, techniques and technological innovations in material processing and characterizations, as well as the applications. The volume consists of 12 chapters written by acknowledged experts of the biomaterials field and covers a wide range of topics and applications.
The African Journal of Biomedical Research was founded in 1998 as a joint project between a private communications outfit (Laytal Communications) and ... is aimed at being registered in the future as a non-governmental organization involved in the promotion of scientific proceedings and publications in developing countries.
Formal approaches to the semantics of databases and database languages can have immediate and practical consequences in extending database integration technologies to include a vastly greater range...
Kulp, Carol S.
Presents survey of two groups of databases covering patent literature: patent literature only and general literature that includes patents relevant to subject area of database. Description of databases and comparison tables for patent and general databases (cost, country coverage, years covered, update frequency, file size, and searchable data…
The topic of this diploma is the formation and shaping of African literature. The first chapter is about the beginning of African literature. It describes oral literature and its transmission into written literature. Written African literature had great problems in becoming a part of world literature because of its diversity of languages and dialects. Christianity and Islam are mentioned as two religions which had a great impact on African literature. Colonialism is broadly described as an es...
Shang, Yue; Li, Yanpeng; Lin, Hongfei; Yang, Zhihao
Automatic text summarization for a biomedical concept can help researchers to get the key points of a certain topic from large amount of biomedical literature efficiently. In this paper, we present a method for generating text summary for a given biomedical concept, e.g., H1N1 disease, from multiple documents based on semantic relation extraction. Our approach includes three stages: 1) We extract semantic relations in each sentence using the semantic knowledge representation tool SemRep. 2) We develop a relation-level retrieval method to select the relations most relevant to each query concept and visualize them in a graphic representation. 3) For relations in the relevant set, we extract informative sentences that can interpret them from the document collection to generate text summary using an information retrieval based method. Our major focus in this work is to investigate the contribution of semantic relation extraction to the task of biomedical text summarization. The experimental results on summarization for a set of diseases show that the introduction of semantic knowledge improves the performance and our results are better than the MEAD system, a well-known tool for text summarization.
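The relation-level retrieval in stage 2 can be sketched as filtering the extracted triples for those mentioning the query concept and ranking them by how often they recur across the document collection. The triples and the frequency-based scoring below are invented for illustration; the paper's actual relevance ranking and SemRep output format are richer than this:

```python
from collections import Counter

# Hypothetical SemRep-style triples: (subject, predicate, object, sentence_id).
relations = [
    ("H1N1", "CAUSES", "fever", 1),
    ("H1N1", "TREATED_BY", "oseltamivir", 2),
    ("aspirin", "TREATS", "headache", 3),
    ("H1N1", "CAUSES", "fever", 4),
]

def retrieve_relations(relations, concept, top_k=2):
    """Stage 2 sketch: keep relations whose subject or object matches the
    query concept, then rank by frequency across the collection."""
    hits = [(s, p, o) for s, p, o, _ in relations if concept in (s, o)]
    return [rel for rel, _ in Counter(hits).most_common(top_k)]

top = retrieve_relations(relations, "H1N1")
```

Stage 3 would then, for each retained relation, pull back the sentences it was extracted from to assemble the summary.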
Leslie D. McIntosh
Full Text Available Abstract Background The reproducibility of research is essential to rigorous science, yet significant concerns of the reliability and verifiability of biomedical research have been recently highlighted. Ongoing efforts across several domains of science and policy are working to clarify the fundamental characteristics of reproducibility and to enhance the transparency and accessibility of research. Methods The aim of the present work is to develop an assessment tool operationalizing key concepts of research transparency in the biomedical domain, specifically for secondary biomedical data research using electronic health record data. The tool (RepeAT) was developed through a multi-phase process that involved coding and extracting recommendations and practices for improving reproducibility from publications and reports across the biomedical and statistical sciences, field testing the instrument, and refining variables. Results RepeAT includes 119 unique variables grouped into five categories (research design and aim, database and data collection methods, data mining and data cleaning, data analysis, data sharing and documentation). Preliminary results in manually processing 40 scientific manuscripts indicate components of the proposed framework with strong inter-rater reliability, as well as directions for further research and refinement of RepeAT. Conclusions The use of RepeAT may allow the biomedical community to have a better understanding of the current practices of research transparency and accessibility among principal investigators. Common adoption of RepeAT may improve reporting of research practices and the availability of research outputs. Additionally, use of RepeAT will facilitate comparisons of research transparency and accessibility across domains and institutions.
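Inter-rater reliability of the kind reported for RepeAT is commonly quantified with Cohen's kappa, which corrects observed agreement for the agreement expected by chance. The abstract does not say which statistic was used, so the implementation below is illustrative:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labeling the same items.

    kappa = (p_obs - p_exp) / (1 - p_exp), where p_obs is the observed
    proportion of agreement and p_exp the chance agreement implied by
    each rater's label frequencies. Undefined when p_exp == 1.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_exp = sum((rater_a.count(l) / n) * (rater_b.count(l) / n)
                for l in labels)
    return (p_obs - p_exp) / (1 - p_exp)
```

Values near 1 indicate strong agreement beyond chance; values near 0 indicate agreement no better than chance.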
Polenakovic, Momir; Danevska, Lenche
Several biomedical journals in the Republic of Macedonia have succeeded in maintaining regular publication over the years, but only a few have a long-standing tradition. In this paper we present the basic characteristics of 18 biomedical journals that have been published without a break in the Republic of Macedonia. Of these, more details are given for 14 journals, a particular emphasis being on the journal Prilozi/Contributions of the Macedonian Academy of Sciences and Arts, Section of Medical Sciences as one of the journals with a long-term publishing tradition and one of the journals included in the Medline/PubMed database. A brief or broad description is given for the following journals: Macedonian Medical Review, Acta Morphologica, Physioacta, MJMS-Macedonian Journal of Medical Sciences, International Medical Journal Medicus, Archives of Public Health, Epilepsy, Macedonian Orthopaedics and Traumatology Journal, BANTAO Journal, Macedonian Dental Review, Macedonian Pharmaceutical Bulletin, Macedonian Veterinary Review, Journal of Special Education and Rehabilitation, Balkan Journal of Medical Genetics, Contributions of the Macedonian Scientific Society of Bitola, Vox Medici, Social Medicine: Professional Journal for Public Health, and Prilozi/Contributions of the Macedonian Academy of Sciences and Arts. Journals from Macedonia should aim to be published regularly, should comply with the Uniform requirements for manuscripts submitted to biomedical journals, and with the recommendations of reliable organizations working in the field of publishing and research. These are the key prerequisites which Macedonian journals have to accomplish in order to be included in renowned international bibliographic databases. Thus the results of biomedical science from the Republic of Macedonia will be presented to the international scientific arena.
Full Text Available Research in biomedical text mining is starting to produce technology which can make information in biomedical literature more accessible for bio-scientists. One of the current challenges is to integrate and refine this technology to support real-life scientific tasks in biomedicine, and to evaluate its usefulness in the context of such tasks. We describe CRAB - a fully integrated text mining tool designed to support chemical health risk assessment. This task is complex and time-consuming, requiring a thorough review of existing scientific data on a particular chemical. Covering human, animal, cellular and other mechanistic data from various fields of biomedicine, this is highly varied and therefore difficult to harvest from literature databases via manual means. Our tool automates the process by extracting relevant scientific data in published literature and classifying it according to multiple qualitative dimensions. Developed in close collaboration with risk assessors, the tool allows navigating the classified dataset in various ways and sharing the data with other users. We present a direct and user-based evaluation which shows that the technology integrated in the tool is highly accurate, and report a number of case studies which demonstrate how the tool can be used to support scientific discovery in cancer risk assessment and research. Our work demonstrates the usefulness of a text mining pipeline in facilitating complex research tasks in biomedicine. We discuss further development and application of our technology to other types of chemical risk assessment in the future.
Yepes, Antonio Jimeno; Prieur-Gaston, Elise; Névéol, Aurélie
Most of the institutional and research information in the biomedical domain is available in the form of English text. Even in countries where English is an official language, such as the United States, language can be a barrier for accessing biomedical information for non-native speakers. Recent progress in machine translation suggests that this technique could help make English texts accessible to speakers of other languages. However, the lack of adequate specialized corpora needed to train statistical models currently limits the quality of automatic translations in the biomedical domain. We show how a large-sized parallel corpus can automatically be obtained for the biomedical domain, using the MEDLINE database. The corpus generated in this work comprises article titles obtained from MEDLINE and abstract text automatically retrieved from journal websites, which substantially extends the corpora used in previous work. After assessing the quality of the corpus for two language pairs (English/French and English/Spanish) we use the Moses package to train a statistical machine translation model that outperforms previous models for automatic translation of biomedical text. We have built translation data sets in the biomedical domain that can easily be extended to other languages available in MEDLINE. These sets can successfully be applied to train statistical machine translation models. While further progress should be made by incorporating out-of-domain corpora and domain-specific lexicons, we believe that this work improves the automatic translation of biomedical texts.
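The corpus-construction step pairs each English MEDLINE title with its counterpart in the target language and keeps only records where both sides exist, since Moses trains on line-aligned parallel files. The record layout and function below are invented for illustration; a real pipeline would parse MEDLINE XML and scrape journal websites for the abstract text:

```python
# Hypothetical MEDLINE-style records (field names invented for illustration).
records = [
    {"pmid": "1", "title_en": "Vaccine safety", "title_fr": "Sécurité des vaccins"},
    {"pmid": "2", "title_en": "Gene therapy", "title_fr": None},  # no translation
]

def build_parallel_corpus(records, lang="fr"):
    """Collect (English, target-language) title pairs, dropping records
    that lack a translation; each pair becomes one aligned line in the
    two parallel training files an SMT toolkit such as Moses expects."""
    return [(r["title_en"], r[f"title_{lang}"])
            for r in records if r.get(f"title_{lang}")]
```

The same pairing logic extends to any language with enough translated titles in MEDLINE, which is what makes the approach easy to port beyond French and Spanish.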
Full Text Available The database “Historical Artificial Radionuclides in the Pacific Ocean and its Marginal Seas”, or HAM database, has been created. The database includes 90Sr, 137Cs, and 239,240Pu concentration data from the seawater of the Pacific Ocean and its marginal seas, with some measurements from the sea surface to the bottom. The data in the HAM database were collected from about 90 literature citations, which include published papers; annual reports by the Hydrographic Department, Maritime Safety Agency, Japan; and unpublished data provided by individuals. Data on concentrations of 90Sr, 137Cs, and 239,240Pu were accumulated from 1957 to 1998. The present HAM database includes 7737 records for 137Cs concentration data, 3972 records for 90Sr concentration data, and 2666 records for 239,240Pu concentration data. The spatial distribution of sampling stations in the HAM database is heterogeneous; namely, more than 80% of the data for each radionuclide is from the Pacific Ocean and the Sea of Japan, while a relatively small portion of data is from the South Pacific. This HAM database will allow us to use these radionuclides as significant chemical tracers for oceanographic study as well as for the assessment of the environmental effects of anthropogenic radionuclides over these 5 decades. Furthermore, these radionuclides can be used to verify oceanic general circulation models on the time scale of several decades.
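Using the database as a tracer archive amounts to querying a time-ordered concentration series for one radionuclide in one region. A minimal sketch is shown below; the row layout, field names, and values are invented for illustration and do not reflect the HAM database's actual schema or measurements:

```python
# Hypothetical rows mimicking a radionuclide-concentration table.
rows = [
    {"nuclide": "137Cs", "year": 1965, "region": "Sea of Japan", "bq_per_m3": 18.0},
    {"nuclide": "137Cs", "year": 1990, "region": "Sea of Japan", "bq_per_m3": 3.2},
    {"nuclide": "90Sr",  "year": 1965, "region": "South Pacific", "bq_per_m3": 5.0},
]

def tracer_series(rows, nuclide, region):
    """Time-ordered concentrations for one radionuclide in one region --
    the basic query behind using such data as decadal circulation tracers."""
    sel = [(r["year"], r["bq_per_m3"]) for r in rows
           if r["nuclide"] == nuclide and r["region"] == region]
    return sorted(sel)
```

Comparing such observed series against the concentrations a circulation model predicts at the same stations is the model-verification use the record describes.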
Mak Tippi K
Full Text Available Abstract The economy of China continues to boom and so have its biomedical research and related publishing activities. Several so-called neglected tropical diseases that are most common in the developing world are still rampant or even emerging in some parts of China. The purpose of this article is to document the significant research potential of the Chinese biomedical bibliographic databases. The research contributions from China in the epidemiology and control of schistosomiasis provide an excellent illustration. We searched two widely used databases, namely China National Knowledge Infrastructure (CNKI) and VIP Information (VIP). Employing the keyword "Schistosoma" and covering the period 1990–2006, we obtained 10,244 hits in the CNKI database and 5,975 in VIP. We examined the 10 Chinese biomedical journals that published the highest number of original research articles on schistosomiasis for issues including language and open access. Although most of the journals are published in Chinese, English abstracts are usually available. Open access to full articles was available in China Tropical Medicine in 2005/2006 and has been granted by the Chinese Journal of Parasitology and Parasitic Diseases since 2003; none of the other journals examined offered open access. We reviewed (i) the discovery and development of antischistosomal drugs, (ii) the progress made with molluscicides and (iii) environmental management for schistosomiasis control in China over the past 20 years. In conclusion, significant research is published in the Chinese literature, which is relevant for local control measures and global scientific knowledge. Open access should be encouraged and language barriers removed so the wealth of Chinese research can be more fully appreciated by the scientific community.
This book grew out of the IEEE-EMBS Summer Schools on Biomedical Signal Processing, which have been held annually since 2002 to provide the participants state-of-the-art knowledge on emerging areas in biomedical engineering. Prominent experts in the areas of biomedical signal processing, biomedical data treatment, medicine, signal processing, system biology, and applied physiology introduce novel techniques and algorithms as well as their clinical or physiological applications. The book provides an overview of a compelling group of advanced biomedical signal processing techniques, such as mult
Voigt, Herbert F
The future challenges to medical and biological engineering, sometimes referred to as biomedical engineering or simply bioengineering, are many. Some of these are identifiable now and others will emerge from time to time as new technologies are introduced and harnessed. There is a fundamental issue regarding "Branding the bio/biomedical engineering degree" that requires a common understanding of what is meant by a B.S. degree in Biomedical Engineering, Bioengineering, or Biological Engineering. In this paper we address some of the issues involved in branding the Bio/Biomedical Engineering degree, with the aim of clarifying the Bio/Biomedical Engineering brand.
Hornberger, H; Virtanen, S; Boccaccini, A R
This review comprehensively covers research carried out in the field of degradable coatings on Mg and Mg alloys for biomedical applications. Several coating methods are discussed, which can be divided, based on the specific processing techniques used, into conversion and deposition coatings. The literature review revealed that in most cases coatings increase the corrosion resistance of Mg and Mg alloys. The critical factors determining coating performance, such as corrosion rate, surface chemistry, adhesion and coating morphology, are identified and discussed. The analysis of the literature showed that many studies have focused on calcium phosphate coatings produced either using conversion or deposition methods which were developed for orthopaedic applications. However, the control of phases and the formation of cracks still appear unsatisfactory. More research and development is needed in the case of biodegradable organic based coatings to generate reproducible and relevant data. In addition to biocompatibility, the mechanical properties of the coatings are also relevant, and the development of appropriate methods to study the corrosion process in detail and in the long term remains an important area of research. Copyright © 2012 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
Ma, Ning; Hou, Ya-zhu; Wang, Xian-liang; Mao, Jing-yuan
To analyze medication laws of Chinese medicine (CM) treatment in hypertension patients with yin deficiency yang hyperactivity syndrome. China National Knowledge Infrastructure (CNKI, Jan. 1979-Dec. 2014), Chinese Scientific Journals Database (VIP, Jan. 1989-Dec. 2014), Chinese Biomedical Literature Database (CBM, Jan. 1978-Dec. 2014), and Wanfang Database (Jan. 1990-Dec. 2014) were searched using "hypertension", "CM", "Chinese herbs", and "syndrome" as keywords. In total, 149 articles concerning CM treatment for hypertension patients with yin deficiency yang hyperactivity syndrome were included in this study. The herb database was established with SPSS 20.0, correlation laws were analyzed with SAS 9.3, and, with Pajek 3.1, results were presented visually as complex networks. The 149 articles included 131 kinds of herbs with 1,598 occurrences. The conventional compatibility program of herbs for asthenic yin and predominant yang syndrome of hypertension comprised about 29 kinds, including two-toothed achyranthes root, tall gastrodia rhizome, Cassia obtusifolia L., eucommia bark, baikal skullcap root, and so on. Of these, the core herbs were two-toothed achyranthes root, tall gastrodia rhizome, Cassia obtusifolia L., poria, prepared rhizome of rehmannia, oriental water-plantain tuber, asiatic cornelian cherry fruit, Uncariae Rhynchophylla, common yam rhizome, the root bark of the peony tree, and so on. The medication laws of CM treatment in hypertension patients with yin deficiency yang hyperactivity syndrome obtained by complex network analysis reflected the therapeutic principle of nourishing yin to suppress yang, which could provide a further reference for clinical studies.
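The frequency and co-occurrence counting behind such herb-network analyses can be sketched in a few lines. The prescriptions below are purely illustrative stand-ins for the herb lists extracted from the 149 articles, not data from the study:

```python
from collections import Counter
from itertools import combinations

# Hypothetical toy data: each entry is the list of herbs in one prescription.
prescriptions = [
    ["achyranthes root", "gastrodia rhizome", "cassia seed"],
    ["achyranthes root", "gastrodia rhizome", "eucommia bark"],
    ["gastrodia rhizome", "cassia seed", "baikal skullcap root"],
]

# Single-herb frequencies across all prescriptions.
herb_freq = Counter(herb for p in prescriptions for herb in p)

# Pairwise co-occurrence counts: the weighted edges of a herb network
# that a tool such as Pajek could then lay out visually.
edge_freq = Counter(
    pair for p in prescriptions for pair in combinations(sorted(p), 2)
)
```

High-frequency herbs correspond to the "core herbs" of the abstract, and high-weight edges to the conventional compatibility programs.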
Berhidi, Anna; Horváth, Katalin; Horváth, Gabriella; Vasas, Lívia
This publication, based on an article published in 2006, emphasises the qualities of current Hungarian biomedical periodicals. The aim of this study was to analyse how Hungarian journals meet the requirements of scientific quality and international visibility. The authors evaluated 93 Hungarian biomedical periodicals by 4 viewpoints for each of the two criteria mentioned above. Of the analysed journals, 35% meet the attributes of scientific quality, 5% those of international visibility, 6% fulfil all examined criteria, and 25% are indexed in international databases. The 6 Hungarian biomedical periodicals covered by all three main bibliographic databases (Medline, Scopus, Web of Science) have the best qualities. The authors recommend improving both scientific quality and international visibility. The basis of qualitative adequacy is accurate author guidelines; article titles, abstracts and keywords in English; and the ability to publish on time.
Skrabalak, Sara E.; Chen, Jingyi; Au, Leslie; Lu, Xianmao; Li, Xingde; Xia, Younan
Nanostructured materials provide a promising platform for early cancer detection and treatment. Here we highlight recent advances in the synthesis and use of Au nanocages for such biomedical applications. Gold nanocages represent a novel class of nanostructures, which can be prepared via a remarkably simple route based on the galvanic replacement reaction between Ag nanocubes and HAuCl4. The Au nanocages have a tunable surface plasmon resonance peak that extends into the near-infrared, where ...
This volume introduces readers to the basic concepts and recent advances in the field of biomedical devices. The text gives a detailed account of novel developments in drug delivery, protein electrophoresis, estrogen mimicking methods and medical devices. It also provides the necessary theoretical background as well as describing a wide range of practical applications. The level and style make this book accessible not only to scientific and medical researchers but also to graduate students.
Mahendra R.R Raj
The importance of waste disposal management is an essential and integral part of any health care system. Health care providers have been ignorant of, or did not essentially know, the basic aspects of the importance and effective management of hospital waste. This overview of biomedical waste disposal/management gives a thorough insight into the guidelines to be followed and adopted according to the international WHO-approved methodology for cleaner, disease-free, and healthier medical services to the populace, i.e., to hospital employees, patients, and society.
Krustev, P.; Ruskov, T.
In this paper we describe different biomedical applications using magnetic nanoparticles. Over the past decade, a number of biomedical applications have begun to emerge for magnetic nanoparticles of differing sizes, shapes, and compositions. Areas under investigation include targeted drug delivery, ultra-sensitive disease detection, gene therapy, high throughput genetic screening, biochemical sensing, and rapid toxicity cleansing. Magnetic nanoparticles exhibit ferromagnetic or superparamagnetic behavior, magnetizing strongly under an applied field; in the superparamagnetic case there is no permanent magnetization once the field is removed. Superparamagnetic nanoparticles are highly attractive as in vivo probes or in vitro tools to extract information on biochemical systems. The optical properties of magnetic metal nanoparticles are spectacular and have therefore generated a great deal of excitement over the last few decades. Many applications, such as MRI imaging and hyperthermia, rely on the use of iron oxide particles. Moreover, magnetic nanoparticles conjugated with antibodies are also applied to hyperthermia and have enabled tumor-specific contrast enhancement in MRI. Another promising biomedical application treats tumor cells with magnetic nanoparticles under X-ray ionizing radiation, employing the nanoparticles as a complementary radiation source inside the tumor. (authors)
Cerutti, Sergio; Baselli, Giuseppe; Bianchi, Anna; Caiani, Enrico; Contini, Davide; Cubeddu, Rinaldo; Dercole, Fabio; Rienzo, Luca; Liberati, Diego; Mainardi, Luca; Ravazzani, Paolo; Rinaldi, Sergio; Signorini, Maria; Torricelli, Alessandro
Generally, physiological modeling and biomedical signal processing constitute two important paradigms of biomedical engineering (BME): their fundamental concepts are taught starting from undergraduate studies and are more completely dealt with in the last years of graduate curricula, as well as in Ph.D. courses. Traditionally, these two cultural aspects were separated, with the first one more oriented to physiological issues and how to model them, and the second one more dedicated to the development of processing tools or algorithms to enhance useful information from clinical data. A practical consequence was that those who did models did not do signal processing and vice versa. However, in recent years, the need for closer integration between signal processing and modeling of the relevant biological systems has emerged very clearly. This is true not only for training purposes (i.e., to properly prepare the new professional members of BME) but also for the development of newly conceived research projects in which the integration between biomedical signal and image processing (BSIP) and modeling plays a crucial role. To give simple examples, topics such as brain-machine and brain-computer interfaces, neuroengineering, nonlinear dynamical analysis of the cardiovascular (CV) system, integration of sensory-motor characteristics aimed at building advanced prostheses and rehabilitation tools, and wearable devices for vital sign monitoring do require an intelligent fusion of modeling and signal processing competences that are certainly peculiar to our discipline of BME.
Turcheniuk, K.; Mochalin, Vadym N.
The interest in nanodiamond applications in biology and medicine is on the rise over recent years. This is due to the unique combination of properties that nanodiamond provides. Small size (∼5 nm), low cost, scalable production, negligible toxicity, chemical inertness of diamond core and rich chemistry of nanodiamond surface, as well as bright and robust fluorescence resistant to photobleaching are the distinct parameters that render nanodiamond superior to any other nanomaterial when it comes to biomedical applications. The most exciting recent results have been related to the use of nanodiamonds for drug delivery and diagnostics—two components of a quickly growing area of biomedical research dubbed theranostics. However, nanodiamond offers much more in addition: it can be used to produce biodegradable bone surgery devices, tissue engineering scaffolds, kill drug resistant microbes, help us to fight viruses, and deliver genetic material into cell nucleus. All these exciting opportunities require an in-depth understanding of nanodiamond. This review covers the recent progress as well as general trends in biomedical applications of nanodiamond, and underlines the importance of purification, characterization, and rational modification of this nanomaterial when designing nanodiamond based theranostic platforms.
Zhao, Zhi-Yun; Li, Tai-Huan; Yang, Hong-Qiao
The life sciences platform based on Oracle database technology is introduced in this paper. By providing a powerful data access, integrating a variety of data types, and managing vast quantities of data, the software presents a flexible, safe and scalable management platform for biomedical data processing.
Gultz, Michael Jarett
Based on a review of the existing literature on database design, this study proposed a unified database model to support corporate technology innovation. This study assessed potential support for the model based on the opinions of 200 technology industry executives, including Chief Information Officers, Chief Knowledge Officers and Chief Learning…
Bakal, Gokhan; Kavuluru, Ramakanth
Identifying new potential treatment options (say, medications and procedures) for known medical conditions that cause human disease burden is a central task of biomedical research. Since all candidate drugs cannot be tested with animal and clinical trials, in vitro approaches are first attempted to identify promising candidates. Even before this step, due to recent advances, in silico or computational approaches are also being employed to identify viable treatment options. Generally, natural language processing (NLP) and machine learning are used to predict specific relations between any given pair of entities using the distant supervision approach. In this paper, we report preliminary results on predicting treatment relations between biomedical entities purely based on semantic patterns over biomedical knowledge graphs. As such, we refrain from explicitly using NLP, although the knowledge graphs themselves may be built from NLP extractions. Our intuition is fairly straightforward: entities that participate in a treatment relation may be connected via similar path patterns in biomedical knowledge graphs extracted from scientific literature. Using a dataset of treatment relation instances derived from the well-known Unified Medical Language System (UMLS), we verify our intuition by employing graph path patterns from a well-known knowledge graph as features in machine-learned models. We achieve a high recall (92%), but precision decreases from 95% to an acceptable 71% as we go from a uniform class distribution to a tenfold increase in negative instances. We also demonstrate that models trained with patterns of length ≤ 3 yield statistically significant gains in F-score over those trained with patterns of length ≤ 2. Our results show the potential of exploiting knowledge graphs for relation extraction, and we believe this is the first effort to employ graph patterns as features for identifying biomedical relations.
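A minimal sketch of the path-pattern idea, using a toy knowledge graph with made-up entities and relations (not actual UMLS predicates): the relation-label sequences on bounded-length paths between an entity pair become countable features for a downstream classifier.

```python
from collections import Counter

# Hypothetical (subject, relation, object) triples.
triples = [
    ("drugX", "inhibits", "proteinA"),
    ("proteinA", "associated_with", "diseaseY"),
    ("drugX", "interacts_with", "drugZ"),
    ("drugZ", "treats", "diseaseY"),
]

graph = {}
for s, r, o in triples:
    graph.setdefault(s, []).append((r, o))

def path_patterns(graph, source, target, max_len=3):
    """Count the relation-label sequences of all paths (up to max_len
    edges) from source to target; the Counter is the feature vector."""
    patterns = Counter()
    stack = [(source, ())]  # (current node, relation labels so far)
    while stack:
        node, labels = stack.pop()
        if node == target and labels:
            patterns[labels] += 1
        if len(labels) < max_len:  # bounded depth also guards against cycles
            for rel, nxt in graph.get(node, []):
                stack.append((nxt, labels + (rel,)))
    return patterns

features = path_patterns(graph, "drugX", "diseaseY")
```

Feeding such pattern counts for known treatment and non-treatment pairs into any standard classifier reproduces the experimental setup at toy scale.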
Jin, Lin-Peng; Dong, Jun
Ensemble learning has been shown, in both theory and practice, to improve generalization ability effectively. In this paper, we first briefly outline the current status of research on ensemble learning. Then, a new deep neural network-based ensemble method that integrates filtering views, local views, distorted views, explicit training, implicit training, subview prediction, and Simple Average is proposed for biomedical time series classification. Finally, we validate its effectiveness on the Chinese Cardiovascular Disease Database, which contains a large number of electrocardiogram recordings. The experimental results show that the proposed method has certain advantages over some well-known ensemble methods, such as Bagging and AdaBoost.
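As an illustration of the Simple Average combiner named above, the sketch below averages the class-probability outputs of several base models. The three "models" are hard-coded stand-ins for trained networks on different views, and the probability values are invented:

```python
# Each base model maps a signal to class probabilities; here the outputs
# are fixed, hypothetical values rather than real network predictions.
def model_filtered_view(signal):
    return [0.9, 0.1]

def model_local_view(signal):
    return [0.6, 0.4]

def model_distorted_view(signal):
    return [0.3, 0.7]

def simple_average(models, signal):
    """Average the probability vectors produced by all base models."""
    outputs = [m(signal) for m in models]
    n_classes = len(outputs[0])
    return [sum(o[k] for o in outputs) / len(models) for k in range(n_classes)]

models = [model_filtered_view, model_local_view, model_distorted_view]
avg = simple_average(models, signal=None)
predicted_class = max(range(len(avg)), key=avg.__getitem__)  # class 0 here
```

In the paper's setting each base learner is a deep network trained on a different view of the ECG recording; only the final averaging step is shown here.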
Bosch, Gundula; Casadevall, Arturo
There is a growing realization that graduate education in the biomedical sciences is successful at teaching students how to conduct research but falls short in preparing them for a diverse job market, communicating with the public, and remaining versatile scientists throughout their careers. Major problems with graduate level education today include overspecialization in a narrow area of science without a proper grounding in essential critical thinking skills. Shortcomings in education may also contribute to some of the problems of the biomedical sciences, such as poor reproducibility, shoddy literature, and the rise in retracted publications. The challenge is to modify graduate programs such that they continue to generate individuals capable of conducting deep research while at the same time producing more broadly trained scientists without lengthening the time to a degree. Here we describe our first experiences at Johns Hopkins and propose a manifesto for reforming graduate science education. Copyright © 2017 Bosch and Casadevall.
This paper presents the development of medical informatics education during the years from the establishment of the International Medical Informatics Association (IMIA) until today. A search in the literature was performed using search engines and appropriate keywords as well as a manual selection of papers. The search covered English language papers and was limited to search on papers title and abstract only. The aggregated papers were analyzed on the basis of the subject area, origin, time span, and curriculum development, and conclusions were drawn. From the results, it is evident that IMIA has played a major role in comparing and integrating the Biomedical and Health Informatics educational efforts across the different levels of education and the regional distribution of educators and institutions. A large selection of references is presented facilitating future work on the field of education in biomedical and health informatics.
Rigden, Daniel J; Fernández, Xosé M
Abstract The 2018 Nucleic Acids Research Database Issue contains 181 papers spanning molecular biology. Among them, 82 are new and 84 are updates describing resources that appeared in the Issue previously. The remaining 15 cover databases most recently published elsewhere. Databases in the area of nucleic acids include 3DIV for visualisation of data on genome 3D structure and RNArchitecture, a hierarchical classification of RNA families. Protein databases include the established SMART, ELM and MEROPS while GPCRdb and the newcomer STCRDab cover families of biomedical interest. In the area of metabolism, HMDB and Reactome both report new features while PULDB appears in NAR for the first time. This issue also contains reports on genomics resources including Ensembl, the UCSC Genome Browser and ENCODE. Update papers from the IUPHAR/BPS Guide to Pharmacology and DrugBank are highlights of the drug and drug target section while a number of proteomics databases including proteomicsDB are also covered. The entire Database Issue is freely available online on the Nucleic Acids Research website (https://academic.oup.com/nar). The NAR online Molecular Biology Database Collection has been updated, reviewing 138 entries, adding 88 new resources and eliminating 47 discontinued URLs, bringing the current total to 1737 databases. It is available at http://www.oxfordjournals.org/nar/database/c/. PMID:29316735
U.S. Environmental Protection Agency — The Database of Physiological Parameters for Early Life Rats and Mice provides information based on scientific literature about physiological parameters. Modelers...
Grimm, E.C.; Bradshaw, R.H.W; Brewer, S.; Flantua, S.; Giesecke, T.; Lézine, A.M.; Takahara, H.; Williams, J.W.,Jr; Elias, S.A.; Mock, C.J.
During the past 20 years, several pollen database cooperatives have been established. These databases are now constituent databases of the Neotoma Paleoecology Database, a public domain, multiproxy, relational database designed for Quaternary-Pliocene fossil data and modern surface samples. The
Mittelstadt, Brent Daniel
This book presents cutting edge research on the new ethical challenges posed by biomedical Big Data technologies and practices. ‘Biomedical Big Data’ refers to the analysis of aggregated, very large datasets to improve medical knowledge and clinical care. The book describes the ethical problems posed by aggregation of biomedical datasets and re-use/re-purposing of data, in areas such as privacy, consent, professionalism, power relationships, and ethical governance of Big Data platforms. Approaches and methods are discussed that can be used to address these problems to achieve the appropriate balance between the social goods of biomedical Big Data research and the safety and privacy of individuals. Seventeen original contributions analyse the ethical, social and related policy implications of the analysis and curation of biomedical Big Data, written by leading experts in the areas of biomedical research, medical and technology ethics, privacy, governance and data protection. The book advances our understan...
Digitization and valorization through the INIS database of an exceptional stock of grey literature documents: the 'historical' CEA reports (1948-1969); Numerisation et valorisation au travers de la base INIS d'un fonds documentaires exceptionnel de litterature grise: les rapports 'historiques' du CEA (1948-1969)
Surmont, J.; Brulet, C.; Brulet, M.; Pourny, M.; Constant, A.; Guille, N.; Le Blanc, A.; Mouffron, O.; Anguise, P.; Jouve, J.J
This poster, prepared for the sixth edition of the meetings of scientific and technical information professionals (RPIST, Nancy, France), presents the joint CEA/IAEA project to digitize the collection of reports published by the French atomic energy commission (CEA) between 1948 and 1969. This exceptional stock of about 2760 grey literature monographs covers the first 20 years of research carried out at the CEA and has been entered in the INIS database with a link to the full text. The poster describes the different steps of the project, from the selection of the documents, through their digitization, to the preparation of the corresponding inputs. The INIS database and its general content are briefly presented. (J.S.)
Philip R O Payne
Full Text Available The modern biomedical research and healthcare delivery domains have seen an unparalleled increase in the rate of innovation and novel technologies over the past several decades. Catalyzed by paradigm-shifting public and private programs focusing upon the formation and delivery of genomic and personalized medicine, the need for high-throughput and integrative approaches to the collection, management, and analysis of heterogeneous data sets has become imperative. This need is particularly pressing in the translational bioinformatics domain, where many fundamental research questions require the integration of large scale, multi-dimensional clinical phenotype and bio-molecular data sets. Modern biomedical informatics theory and practice has demonstrated the distinct benefits associated with the use of knowledge-based systems in such contexts. A knowledge-based system can be defined as an intelligent agent that employs a computationally tractable knowledge base or repository in order to reason upon data in a targeted domain and reproduce expert performance relative to such reasoning operations. The ultimate goal of the design and use of such agents is to increase the reproducibility, scalability, and accessibility of complex reasoning tasks. Examples of the application of knowledge-based systems in biomedicine span a broad spectrum, from the execution of clinical decision support, to epidemiologic surveillance of public data sets for the purposes of detecting emerging infectious diseases, to the discovery of novel hypotheses in large-scale research data sets. In this chapter, we will review the basic theoretical frameworks that define core knowledge types and reasoning operations with particular emphasis on the applicability of such conceptual models within the biomedical domain, and then go on to introduce a number of prototypical data integration requirements and patterns relevant to the conduct of translational bioinformatics that can be addressed
Anguita, Alberto; García-Remesal, Miguel; Graf, Norbert; Maojo, Victor
Modern biomedical research relies on the semantic integration of heterogeneous data sources to find data correlations. Researchers access multiple datasets of disparate origin and identify elements (e.g. genes, compounds, pathways) that lead to interesting correlations. Normally, they must refer to additional public databases to enrich the information about the identified entities (e.g. scientific literature, published clinical trial results, etc.). While semantic integration techniques have traditionally focused on providing homogeneous access to private datasets (thus helping automate the first part of the research), and different solutions exist for browsing public data, there is still a need for tools that facilitate merging public repositories with private datasets. This paper presents a framework that automatically locates public data of interest to the researcher and semantically integrates it with existing private datasets. The framework has been designed as an extension of traditional data integration systems and has been validated with an existing data integration platform from a European research project by integrating a private biological dataset with data from the National Center for Biotechnology Information (NCBI). Copyright © 2016 Elsevier Inc. All rights reserved.
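The enrichment step such a framework automates can be pictured as a join on a shared identifier. The dataset, annotation records, and field names below are invented for illustration and are not the NCBI schema:

```python
# Private measurements keyed by gene symbol (hypothetical values).
private_dataset = [
    {"gene": "TP53", "expression": 2.4},
    {"gene": "BRCA1", "expression": 0.7},
]

# Stand-in for records retrieved from a public repository.
public_annotations = {
    "TP53": {"pathway": "apoptosis"},
    "BRCA1": {"pathway": "DNA repair"},
}

def enrich(rows, annotations):
    """Merge public annotation fields into each private row on the key."""
    merged = []
    for row in rows:
        extra = annotations.get(row["gene"], {})
        merged.append({**row, **extra})
    return merged

enriched = enrich(private_dataset, public_annotations)
```

The real framework performs this merge at the semantic level (ontology-mediated rather than on raw column names), but the data flow is the same.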
Dewhurst, D J
An Introduction to Biomedical Instrumentation presents a course of study and applications covering the basic principles of medical and biological instrumentation, as well as the typical features of its design and construction. The book aims to aid not only the cognitive domain of the readers, but also their psychomotor domain as well. Aside from the seminar topics provided, which are divided into 27 chapters, the book complements these topics with practical applications of the discussions. Figures and mathematical formulas are also given. Major topics discussed include the construction, handli
Roberts, M.L.; Velsko, C.; Turteltaub, K.W.
We are developing ³H AMS to measure ³H activity in mg-sized biological samples. LLNL has already successfully applied ¹⁴C AMS to a variety of problems in the area of biomedical research, and the development of ³H AMS would greatly complement these studies. The ability to perform ³H AMS measurements at sensitivities equivalent to those obtained for ¹⁴C will allow us to perform experiments using compounds that are not readily available in ¹⁴C-tagged form. A ³H capability would also allow us to perform unique double-labeling experiments in which we learn the fate, distribution, and metabolism of separate fractions of biological compounds
Theoni K. Georgiou
Full Text Available Thermoresponsive polymers are a class of “smart” materials that have the ability to respond to a change in temperature, a property that makes them useful in a wide range of applications and consequently attracts much scientific interest. This review focuses mainly on studies published over the last 10 years on the synthesis and use of thermoresponsive polymers for biomedical applications, including drug delivery, tissue engineering and gene delivery. A summary of the main applications is given, following the different studies on thermoresponsive polymers, which are categorized based on their 3-dimensional structure: hydrogels, interpenetrating networks, micelles, crosslinked micelles, polymersomes, films and particles.
Street, Laurence J
Introduction: History of Medical Devices; The Role of Biomedical Engineering Technologists in Health Care; Characteristics of Human Anatomy and Physiology That Relate to Medical Devices; Summary; Questions. Diagnostic Devices, Part One: Physiological Monitoring Systems; The Heart; Summary; Questions. Diagnostic Devices, Part Two: Circulatory System and Blood; Respiratory System; Nervous System; Summary; Questions. Diagnostic Devices, Part Three: Digestive System; Sensory Organs; Reproduction; Skin, Bone, Muscle, Miscellaneous; Chapter Summary; Questions. Diagnostic Imaging: Introduction; X-Rays; Magnetic Resonance Imaging Scanners; Positron Emissio
INTRODUCTION TO DIGITAL SIGNAL AND IMAGE PROCESSING. Signals and Biomedical Signal Processing: Introduction and Overview; What is a "Signal"?; Analog, Discrete, and Digital Signals; Processing and Transformation of Signals; Signal Processing for Feature Extraction; Some Characteristics of Digital Images; Summary; Problems. Fourier Transform: Introduction and Overview; One-Dimensional Continuous Fourier Transform; Sampling and Nyquist Rate; One-Dimensional Discrete Fourier Transform; Two-Dimensional Discrete Fourier Transform; Filter Design; Summary; Problems. Image Filtering, Enhancement, and Restoration: Introduction and Overview
Mahendra R.R Raj
Waste disposal management is an essential and integral part of any health care system, yet health care providers are often unaware of the basic aspects of its importance and effective practice. This overview of biomedical waste disposal and management gives a thorough insight into the guidelines to be followed and adopted, according to the WHO-approved international methodology, for cleaner, disease-free, and healthier medical services to the populace, i.e., hospital employees, patients, and society.
Ciaccio Edward J
This article is a review of the book 'Biomedical Image Processing' by Thomas M. Deserno, published by Springer-Verlag. Salient information that will be useful in deciding whether the book is relevant to topics of interest to the reader, and whether it might be suitable as a course textbook, is presented in the review. This includes information about the book details, a summary, the suitability of the text in course and research work, the framework of the book, its specific content, and conclusions.
Say, Jana M; van Vreden, Caryn; Reilly, David J; Brown, Louise J; Rabeau, James R; King, Nicholas J C
In recent years, nanodiamonds have emerged from a primarily industrial and mechanical applications base to potentially underpinning sophisticated new technologies in biomedical and quantum science. Nanodiamonds are relatively inexpensive, biocompatible, easy to surface functionalise and optically stable. This combination of physical properties is ideally suited to biological applications, including intracellular labelling and tracking, extracellular drug delivery and adsorptive detection of bioactive molecules. Here we describe some of the methods and challenges for processing nanodiamond materials, detection schemes and some of the leading applications currently under investigation.
Revised by Sloan, Jan; Henry, Christopher D.; Hopkins, Melanie; Ludington, Steve; Original database by Zartman, Robert E.; Bush, Charles A.; Abston, Carl
The National Geochronological Data Base (NGDB) was established by the United States Geological Survey (USGS) to collect and organize published isotopic (also known as radiometric) ages of rocks in the United States. The NGDB (originally known as the Radioactive Age Data Base, RADB) was started in 1974. A committee appointed by the Director of the USGS was given the mission to investigate the feasibility of compiling the published radiometric ages for the United States into a computerized data bank for ready access by the user community. A successful pilot program, which was conducted in 1975 and 1976 for the State of Wyoming, led to a decision to proceed with the compilation of the entire United States. For each dated rock sample reported in published literature, a record containing information on sample location, rock description, analytical data, age, interpretation, and literature citation was constructed and included in the NGDB. The NGDB was originally constructed and maintained on a mainframe computer, and later converted to a Helix Express relational database maintained on an Apple Macintosh desktop computer. The NGDB and a program to search the data files were published and distributed on Compact Disc-Read Only Memory (CD-ROM) in standard ISO 9660 format as USGS Digital Data Series DDS-14 (Zartman and others, 1995). As of May 1994, the NGDB consisted of more than 18,000 records containing over 30,000 individual ages, which is believed to represent approximately one-half the number of ages published for the United States through 1991. Because the organizational unit responsible for maintaining the database was abolished in 1996, and because we wanted to provide the data in more usable formats, we have reformatted the data, checked and edited the information in some records, and provided this online version of the NGDB. This report describes the changes made to the data and formats, and provides instructions for the use of the database in geographic
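The per-sample record layout described above can be sketched as a simple data structure. This is a hypothetical illustration only: the field names are assumptions for clarity, not the actual NGDB/Helix Express schema.

```python
from dataclasses import dataclass

# Hypothetical record mirroring the fields the NGDB report lists for each
# dated rock sample (illustrative names, not the real USGS schema).
@dataclass
class NGDBRecord:
    sample_location: str   # e.g. state and locality
    rock_description: str
    analytical_data: str   # dating method and analyzed material
    age_ma: float          # reported age in millions of years
    interpretation: str
    citation: str          # literature citation for the published age

rec = NGDBRecord(
    sample_location="Wyoming, Granite Mountains",
    rock_description="granodiorite",
    analytical_data="K-Ar, biotite",
    age_ma=2600.0,
    interpretation="crystallization age",
    citation="Zartman and others, 1995",
)
```

A flat record like this maps naturally onto both the original mainframe files and the later relational database the report describes.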
... and US Department of Agriculture Dietary Supplement Ingredient Database ... values can be saved to build a small database or add to an existing database for national, ...
National Research Council Staff; Commission on Physical Sciences, Mathematics, and Applications; Division on Engineering and Physical Sciences; National Research Council; National Academy of Sciences
.... Incorporating input from dozens of biomedical researchers who described what they perceived as key open problems of imaging that are amenable to attack by mathematical scientists and physicists...
Liu, Feng; Goodarzi, Ali; Wang, Haifeng; Stasiak, Joanna; Sun, Jianbo; Zhou, Yu
The 2nd International Conference on Biomedical Engineering and Biotechnology (iCBEB 2013), held in Wuhan on 11–13 October 2013, is an annual conference that aims to provide an opportunity for international and national researchers and practitioners to present the most recent advances and future challenges in the fields of Biomedical Information, Biomedical Engineering and Biotechnology. The papers published in this issue were selected from this conference, which showcases the frontier of the field of Biomedical Engineering and Biotechnology and, in particular, has helped to improve the level of clinical diagnosis in medical work.
Rubin, Jessica B; Paltiel, A David; Saltzman, W Mark
With the founding of the National Institute of Biomedical Imaging and Bioengineering (NIBIB) in 1999, the National Institutes of Health (NIH) made explicit its dedication to expanding research in biomedical engineering. Ten years later, we sought to examine how closely federal funding for biomedical engineering aligns with U.S. health priorities. Using a publicly accessible database of research projects funded by the NIH in 2008, we identified 641 grants focused on biomedical engineering, 48% of which targeted specific diseases. Overall, we found that these disease-specific NIH-funded biomedical engineering research projects align with national health priorities, as quantified by three commonly utilized measures of disease burden: cause of death, disability-adjusted survival losses, and expenditures. However, we also found some illnesses (e.g., cancer and heart disease) for which the number of research projects funded deviated from our expectations, given their disease burden. Our findings suggest several possibilities for future studies that would serve to further inform the allocation of limited research dollars within the field of biomedical engineering.
Tindana, Paulina; de Vries, Jantina; Campbell, Megan; Littler, Katherine; Seeley, Janet; Marshall, Patricia; Troyer, Jennifer; Ogundipe, Morisola; Alibu, Vincent Pius; Yakubu, Aminu; Parker, Michael
Community engagement has been recognised as an important aspect of the ethical conduct of biomedical research, especially when research is focused on ethnically or culturally distinct populations. While this is a generally accepted tenet of biomedical research, it is unclear what components are necessary for effective community engagement, particularly in the context of genomic research in Africa. We conducted a review of the published literature to identify the community engagement strategies that can support the successful implementation of genomic studies in Africa. Our search strategy involved using online databases, Pubmed (National Library of Medicine), Medline and Google scholar. Search terms included a combination of the following: community engagement, community advisory boards, community consultation, community participation, effectiveness, genetic and genomic research, Africa, developing countries. A total of 44 articles and 1 thesis were retrieved of which 38 met the selection criteria. Of these, 21 were primary studies on community engagement, while the rest were secondary reports on community engagement efforts in biomedical research studies. 34 related to biomedical research generally, while 4 were specific to genetic and genomic research in Africa. We concluded that there were several community engagement strategies that could support genomic studies in Africa. While many of the strategies could support the early stages of a research project such as the recruitment of research participants, further research is needed to identify effective strategies to engage research participants and their communities beyond the participant recruitment stage. Research is also needed to address how the views of local communities should be incorporated into future uses of human biological samples. Finally, studies evaluating the impact of CE on genetic research are lacking. Systematic evaluation of CE strategies is essential to determine the most effective models of
This thesis deals with database systems referred to as NoSQL databases. In the second chapter, I explain basic terms and the theory of database systems. A short explanation is dedicated to database systems based on the relational data model and the SQL standardized query language. Chapter Three explains the concept and history of the NoSQL databases, and also presents database models, major features and the use of NoSQL databases in comparison with traditional database systems. In the fourth ...
US Agency for International Development — The Collecting Taxes Database contains performance and structural indicators about national tax systems. The database contains quantitative revenue performance...
US Agency for International Development — The Anticorruption Projects Database (Database) includes information about USAID projects with anticorruption interventions implemented worldwide between 2007 and...
Hacısalihzade, Selim S
Biomedical Applications of Control Engineering is a lucidly written textbook for graduate control engineering and biomedical engineering students, as well as for medical practitioners who want to get acquainted with quantitative methods. It is based on decades of experience in both control engineering and clinical practice. The book begins by reviewing basic concepts of system theory and the modeling process. It then goes on to discuss control engineering application areas such as: different models for the human operator; dosage and timing optimization in oral drug administration; measuring symptoms of and optimal dopaminergic therapy in Parkinson's disease; measurement and control of blood glucose levels, both naturally and by means of external controllers, in diabetes; and control of depth of anaesthesia using inhalational anaesthetic agents like sevoflurane with both fuzzy and state feedback controllers.
Mohammad, A.; Moheman, A.
The main objective of this chapter is to encapsulate the applications of thin-layer chromatography (TLC) and high-performance thin-layer chromatography (HPTLC) as used in the analysis of compounds of pharmaceutical importance. The chapter discusses the advantages of using TLC or HPTLC for biomedical applications and summarizes important information on stationary and mobile phases, adopted methodology, sample application, zone detection, and identification and quantification of amino acids and proteins, carbohydrates, lipids, bile acids, drugs, vitamins, and porphyrins in biological matrices such as blood, urine, feces, saliva, cerebrospinal fluid, body tissues, etc. Among the stationary phases, silica gel has been the most preferred layer material, in combination with mixed aqueous-organic or multicomponent organic solvent systems as the mobile phase. For quantitative determination of analytes in various matrices, densitometry has been most commonly used. According to the literature survey, the interest of chromatographers in using TLC/HPTLC has been in the following order: drugs > amino acids and proteins > lipids > bile acids > carbohydrates/vitamins > porphyrins.
Plaza, Laura; Díaz, Alberto; Gervás, Pablo
Access to the vast body of research literature that is available in biomedicine and related fields may be improved by automatic summarisation. This paper presents a method for summarising biomedical scientific literature that takes into consideration the characteristics of the domain and the type of documents. To address the problem of identifying salient sentences in biomedical texts, concepts and relations derived from the Unified Medical Language System (UMLS) are arranged to construct a semantic graph that represents the document. A degree-based clustering algorithm is then used to identify different themes or topics within the text. Different heuristics for sentence selection, intended to generate different types of summaries, are tested. A real document case is drawn up to illustrate how the method works. A large-scale evaluation is performed using the recall-oriented understudy for gisting-evaluation (ROUGE) metrics. The results are compared with those achieved by three well-known summarisers (two research prototypes and a commercial application) and two baselines. Our method significantly outperforms all summarisers and baselines. The best of our heuristics achieves an improvement in performance of almost 7.7 percentage units in the ROUGE-1 score over the LexRank summariser (0.7862 versus 0.7302). A qualitative analysis of the summaries also shows that our method succeeds in identifying sentences that cover the main topic of the document and also considers other secondary or "satellite" information that might be relevant to the user. The method proposed is proved to be an efficient approach to biomedical literature summarisation, which confirms that the use of concepts rather than terms can be very useful in automatic summarisation, especially when dealing with highly specialised domains. Copyright © 2011 Elsevier B.V. All rights reserved.
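As a rough illustration of the degree-based theme identification described above, the following toy sketch seeds clusters with the highest-degree concepts and ranks sentences by overlap with the dominant theme. The data and the clustering rule are hypothetical simplifications, not the authors' actual UMLS-based system.

```python
from collections import defaultdict

# Toy concept graph: nodes stand in for UMLS concepts, edges for semantic
# relations. (Hypothetical data; the real method derives both from UMLS.)
edges = [
    ("diabetes", "insulin"), ("diabetes", "glucose"),
    ("insulin", "glucose"), ("glucose", "metabolism"),
    ("gene", "protein"), ("protein", "expression"),
]

adj = defaultdict(set)
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

# Degree-based clustering (simplified): the highest-degree nodes seed the
# themes; each remaining node joins the first seed cluster it connects to.
degree = {n: len(nbrs) for n, nbrs in adj.items()}
seeds = sorted(degree, key=degree.get, reverse=True)[:2]
clusters = {s: {s} for s in seeds}
for n in adj:
    if n in seeds:
        continue
    for s in seeds:
        if adj[n] & clusters[s]:
            clusters[s].add(n)
            break

# Rank sentences by concept overlap with the largest theme cluster.
sentences = {
    "Insulin regulates blood glucose levels.": {"insulin", "glucose"},
    "Gene expression was measured.": {"gene", "expression"},
}
main = max(clusters.values(), key=len)
ranked = sorted(sentences, key=lambda s: len(sentences[s] & main), reverse=True)
print(ranked[0])  # the sentence covering the dominant theme
```

Different sentence-selection heuristics (e.g. also sampling from secondary clusters) then yield the different summary types the paper evaluates.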
Kılıç, Sefa; Sagitova, Dinara M; Wolfish, Shoshannah; Bely, Benoit; Courtot, Mélanie; Ciufo, Stacy; Tatusova, Tatiana; O'Donovan, Claire; Chibucos, Marcus C; Martin, Maria J; Erill, Ivan
Domain-specific databases are essential resources for the biomedical community, leveraging expert knowledge to curate published literature and provide access to referenced data and knowledge. The limited scope of these databases, however, poses important challenges on their infrastructure, visibility, funding and usefulness to the broader scientific community. CollecTF is a community-oriented database documenting experimentally validated transcription factor (TF)-binding sites in the Bacteria domain. In its quest to become a community resource for the annotation of transcriptional regulatory elements in bacterial genomes, CollecTF aims to move away from the conventional data-repository paradigm of domain-specific databases. Through the adoption of well-established ontologies, identifiers and collaborations, CollecTF has progressively become also a portal for the annotation and submission of information on transcriptional regulatory elements to major biological sequence resources (RefSeq, UniProtKB and the Gene Ontology Consortium). This fundamental change in database conception capitalizes on the domain-specific knowledge of contributing communities to provide high-quality annotations, while leveraging the availability of stable information hubs to promote long-term access and provide high-visibility to the data. As a submission portal, CollecTF generates TF-binding site information through direct annotation of RefSeq genome records, definition of TF-based regulatory networks in UniProtKB entries and submission of functional annotations to the Gene Ontology. As a database, CollecTF provides enhanced search and browsing, targeted data exports, binding motif analysis tools and integration with motif discovery and search platforms. This innovative approach will allow CollecTF to focus its limited resources on the generation of high-quality information and the provision of specialized access to the data.Database URL: http://www.collectf.org/. © The Author(s) 2016
Sweileh Waleed M
Background: Medical research productivity reflects the level of medical education and practice in a particular country. The objective of this study was to examine the quantity and quality of medical and biomedical research published from Palestine. Findings: A comprehensive review of the literature indexed by Scopus was conducted. Data from Jan 01, 2002 till December 31, 2011 were searched for authors affiliated with Palestine or the Palestinian Authority. Results were refined to limit the search to medical and biomedical subjects. The quality of publication was assessed using the Journal Citation Report. The total number of publications was 2207. A total of 770 publications were in the medical and biomedical subject areas. The annual rate of publication was 0.077 articles per gross domestic product/capita. The 770 publications have an h-index of 32. One hundred and thirty-eight (18%) articles were published in 46 journals that were not indexed in the Web of Knowledge. Twenty-two (22/770; 2.9%) articles were published in journals with an IF > 10. Conclusions: The quantity and quality of research originating from Palestinian institutions are promising given the scarce resources of Palestine. However, more effort is needed to bridge the gap in medical research productivity and to promote better health in Palestine.
Archives of Medical and Biomedical Research is the official journal of the International Association of Medical and Biomedical Researchers (IAMBR) and the Society for Free Radical Research Africa (SFRR-Africa). It is an internationally peer-reviewed, open-access and multidisciplinary journal aimed at publishing original ...
Discusses the publication of biomedical journals on the Internet. Highlights include pros and cons of electronic publishing; the Global Health Network at the University of Pittsburgh; the availability of biomedical journals on the World Wide Web; current applications, including access to journal contents tables and electronic delivery of full-text…
van Alste, Jan A.
At the University of Twente, together with the Free University of Amsterdam, a new educational program in Biomedical Engineering will be developed. The academic program, with a five-year duration, will start in September 2001. After a general, broad education in Biomedical Engineering in the first three
Biomedical Engineering is the application of principles of physics, chemistry, and engineering to problems of human health. The National Laboratories of the U.S. Department of Energy have been leaders in this scientific field since 1947. This inventory of their biomedical engineering projects was compiled in January 1999
AIMS AND SCOPE: The journal is conceived as an academic and professional journal covering all fields within the Biomedical Sciences including the allied health fields. Articles from the Physical Sciences and humanities related to the Medical Sciences will also be considered. The African Journal of Biomedical Research ...
Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok
KALIMER database is an advanced database to support integrated management of liquid metal reactor design technology development using Web applications. The KALIMER design database is composed of a results database, Inter-Office Communication (IOC), a 3D CAD database, and a reserved documents database. The results database holds research results from all phases of liquid metal reactor design technology development under the mid-term and long-term nuclear R&D programs. IOC is a linkage control system between sub-projects to share and integrate the research results for KALIMER. The 3D CAD database gives a schematic overview of the KALIMER design structure. And the reserved documents database was developed to manage documents and reports after project accomplishment.
Majernik, Jaroslav; Pancerz, Krzysztof; Zaitseva, Elena
This book presents the latest results and selected applications of Computational Intelligence in Biomedical Technologies. Most of the contributions deal with problems of Biomedical and Medical Informatics, ranging from theoretical considerations to practical applications. Various aspects of development methods and algorithms in Biomedical and Medical Informatics, as well as algorithms for medical image processing and modeling methods, are discussed. Individual contributions also cover medical decision-making support, estimation of treatment risks, reliability of medical systems, problems of practical clinical applications and many other topics. This book is intended for scientists interested in problems of Biomedical Technologies, for researchers and academic staff, for all dealing with Biomedical and Medical Informatics, as well as for PhD students. Useful information is also offered to IT companies, developers of equipment and/or software for medicine, and medical professionals.
Garmany, John; Clark, Terry
INTRODUCTION TO LOGICAL DATABASE DESIGN: Understanding a Database; Database Architectures; Relational Databases; Creating the Database; System Development Life Cycle (SDLC); Systems Planning: Assessment and Feasibility; System Analysis: Requirements; System Analysis: Requirements Checklist; Models; Tracking and Schedules; Design Modeling; Functional Decomposition Diagram; Data Flow Diagrams; Data Dictionary; Logical Structures and Decision Trees; System Design: Logical. SYSTEM DESIGN AND IMPLEMENTATION: The ER Approach; Entities and Entity Types; Attribute Domains; Attributes; Set-Valued Attributes; Weak Entities; Constraint
This master's thesis, Oracle Database Systems Administration, describes problems that arise in databases and how to solve them, which is important for database administrators. It helps them deliver faster solutions without the need to look for or figure out solutions on their own. The thesis describes database backup and recovery methods that are closely related to these problem solutions. The main goal is to provide guidance and recommendations regarding database troubles and how to solve them. It ...
Nanoscale structures and materials have been explored in many biological applications because of their novel and impressive physical and chemical properties. Such properties allow remarkable opportunities to study and interact with complex biological processes. This book analyses the state of the art of piezoelectric nanomaterials and introduces their applications in the biomedical field. Despite their impressive potentials, piezoelectric materials have not yet received significant attention for bio-applications. This book shows that the exploitation of piezoelectric nanoparticles in nanomedicine is possible and realistic, and their impressive physical properties can be useful for several applications, ranging from sensors and transducers for the detection of biomolecules to “sensible” substrates for tissue engineering or cell stimulation.
Tangney, John F.
The mission of ONR's Human and Bioengineered Systems Division is to direct, plan, foster, and encourage Science and Technology in cognitive science, computational neuroscience, bioscience and bio-mimetic technology, social/organizational science, training, human factors, and decision making as related to future Naval needs. This paper highlights current programs that contribute to future biomedical wellness needs in context of humanitarian assistance and disaster relief. ONR supports fundamental research and related technology demonstrations in several related areas, including biometrics and human activity recognition; cognitive sciences; computational neurosciences and bio-robotics; human factors, organizational design and decision research; social, cultural and behavioral modeling; and training, education and human performance. In context of a possible future with automated casualty evacuation, elements of current science and technology programs are illustrated.
Chmiel, Alan; Humphreys, Brad
A compact, ambulatory biometric data acquisition system has been developed for space and commercial terrestrial use. BioWATCH (Biomedical Wireless and Ambulatory Telemetry for Crew Health) acquires signals from biomedical sensors using acquisition modules attached to a common data and power bus. Several slots allow the user to configure the unit by inserting sensor-specific modules. The data are then sent in real time from the unit over any commercially implemented wireless network, including 802.11b/g, WCDMA and 3G. The system has a distributed computing hierarchy with a common data controller on each sensor module, which provides modularity along with the tailored ability to control the cards using a relatively small master processor. The distributed nature of this system affords modularity, size, and power consumption that better the current state of the art in medical ambulatory data acquisition. A new company was created to market this technology.
Martin-Sanchez, F; Maojo, V
To analyze the role that biomedical informatics could play in the application of the NBIC Converging Technologies in the medical field and to raise awareness of these new areas throughout the Biomedical Informatics community. Review of the literature and analysis of the reference documents in this domain from the biomedical informatics perspective, detailing existing developments which show that partial convergence of technologies has already yielded relevant results in biomedicine (such as bioinformatics or biochips). Input from current projects in which the authors are involved is also used. Information processing is a key issue in enabling the convergence of NBIC technologies. Researchers in biomedical informatics are in a privileged position to participate in and actively develop this new scientific direction. The experience of biomedical informaticians in five decades of research in the medical area, and their involvement in the completion of the Human and other genome projects, will help them play a similar role in the development of applications of converging technologies, particularly in nanomedicine. The proposed convergence will build bridges between traditional disciplines. Particular attention should be paid to the ethical, legal, and social issues raised by the NBIC convergence. These technologies provide new directions for research and education in Biomedical Informatics, placing a greater emphasis on multidisciplinary approaches.
Lee, E. Sally; McDonald, David W.; Anderson, Nicholas; Tarczy-Hornoch, Peter
Because of its complexity, modern biomedical research has become increasingly interdisciplinary and collaborative. Although a necessity, interdisciplinary biomedical collaboration is difficult. There is, however, a growing body of literature on the study and fostering of collaboration in fields such as computer supported cooperative work (CSCW) and information science (IS). These studies of collaboration provide insight into how to potentially alleviate the difficulties of interdisciplinary collaborative research. We therefore undertook a cross-cutting study of science and engineering collaboratories to identify emergent themes. We review many relevant collaboratory concepts: (a) general collaboratory concepts across many domains: communication, common workspace and coordination, and data sharing and management; (b) specific collaboratory concepts of particular biomedical relevance: data integration and analysis, security structure, metadata and data provenance, and interoperability and data standards; (c) critical environmental factors that support collaboratories: administrative and management structure, technical support, and available funding; and (d) future considerations for biomedical collaboration: appropriate training and long-term planning. In our opinion, the collaboratory concepts we discuss can guide biomedical informatics researchers in planning and designing future collaborative infrastructure to alleviate some of the difficulties of interdisciplinary biomedical collaboration. PMID:18706852
Kulkarni, M; Gongadze, E; Perutkova, Š; A Iglič; Mazare, A; Schmuki, P; Kralj-Iglič, V; Milošev, I; Mozetič, M
Titanium and titanium alloys exhibit a unique combination of strength and biocompatibility, which enables their use in medical applications and accounts for their extensive use as implant materials in the last 50 years. Currently, a large amount of research is being carried out in order to determine the optimal surface topography for use in bioapplications, and thus the emphasis is on nanotechnology for biomedical applications. It was recently shown that titanium implants with rough surface topography and free energy increase osteoblast adhesion, maturation and subsequent bone formation. Furthermore, the adhesion of different cell lines to the surface of titanium implants is influenced by the surface characteristics of titanium; namely topography, charge distribution and chemistry. The present review article focuses on the specific nanotopography of titanium, i.e. titanium dioxide (TiO2) nanotubes, using a simple electrochemical anodisation method of the metallic substrate and other processes such as the hydrothermal or sol-gel template. One key advantage of using TiO2 nanotubes in cell interactions is based on the fact that TiO2 nanotube morphology is correlated with cell adhesion, spreading, growth and differentiation of mesenchymal stem cells, which were shown to be maximally induced on smaller diameter nanotubes (15 nm), but hindered on larger diameter (100 nm) tubes, leading to cell death and apoptosis. Research has supported the significance of nanotopography (TiO2 nanotube diameter) in cell adhesion and cell growth, and suggests that the mechanics of focal adhesion formation are similar among different cell types. As such, the present review will focus on perhaps the most spectacular and surprising one-dimensional structures and their unique biomedical applications for increased osseointegration, protein interaction and antibacterial properties. (topical review)
Kulkarni, M.; Mazare, A.; Gongadze, E.; Perutkova, Š.; Kralj-Iglič, V.; Milošev, I.; Schmuki, P.; Iglič, A.; Mozetič, M.
Titanium and titanium alloys exhibit a unique combination of strength and biocompatibility, which enables their use in medical applications and accounts for their extensive use as implant materials in the last 50 years. Currently, a large amount of research is being carried out in order to determine the optimal surface topography for use in bioapplications, and thus the emphasis is on nanotechnology for biomedical applications. It was recently shown that titanium implants with rough surface topography and free energy increase osteoblast adhesion, maturation and subsequent bone formation. Furthermore, the adhesion of different cell lines to the surface of titanium implants is influenced by the surface characteristics of titanium; namely topography, charge distribution and chemistry. The present review article focuses on the specific nanotopography of titanium, i.e. titanium dioxide (TiO2) nanotubes, using a simple electrochemical anodisation method of the metallic substrate and other processes such as the hydrothermal or sol-gel template. One key advantage of using TiO2 nanotubes in cell interactions is based on the fact that TiO2 nanotube morphology is correlated with cell adhesion, spreading, growth and differentiation of mesenchymal stem cells, which were shown to be maximally induced on smaller diameter nanotubes (15 nm), but hindered on larger diameter (100 nm) tubes, leading to cell death and apoptosis. Research has supported the significance of nanotopography (TiO2 nanotube diameter) in cell adhesion and cell growth, and suggests that the mechanics of focal adhesion formation are similar among different cell types. As such, the present review will focus on perhaps the most spectacular and surprising one-dimensional structures and their unique biomedical applications for increased osseointegration, protein interaction and antibacterial properties.
Sun, Dongdong; Wang, Minghui; Li, Ao
Due to the importance of post-translational modifications (PTMs) in human health and disease, PTMs are regularly reported in the biomedical literature. However, the continuing and rapid expansion of this literature poses a huge challenge for researchers and database curators. There is therefore a pressing need to help them identify relevant PTM information more efficiently by using a text mining system. So far, only a few web servers are available for mining information on a very limited number of PTMs, and these are based on simple pattern matching or pre-defined rules. To help researchers and database curators easily find and retrieve PTM information from available text, we have developed a text mining tool called MPTM, which extracts and organizes valuable knowledge about 11 common PTMs from abstracts in PubMed by using relations extracted from dependency parse trees and a heuristic algorithm. It is the first web server that provides a literature mining service for hydroxylation, myristoylation and GPI-anchor. The tool is also used to find new publications on PTMs from PubMed and uncovers potential PTM information by large-scale text analysis. MPTM analyzes text sentences to identify protein names, including substrates and protein-interacting enzymes, and automatically associates them with the corresponding UniProtKB protein entry. To facilitate further investigation, it also retrieves PTM-related information, such as human diseases, Gene Ontology terms and organisms, from the input text and related databases. In addition, an online database (MPTMDB) with extracted PTM information and a local MPTM Lite package are provided on the MPTM website. MPTM is freely available online at http://bioinformatics.ustc.edu.cn/mptm/ and the source code is hosted on GitHub: https://github.com/USTC-HILAB/MPTM.
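Rule-based PTM extraction of this kind can be illustrated with a toy surface pattern. This is not MPTM's algorithm (which works on dependency parse trees and heuristics); the PTM list, the pattern, and the example sentence are all invented for illustration:

```python
import re

# A toy surface pattern in the spirit of rule-based PTM extraction. This is
# NOT MPTM's method; the PTM names, pattern, and sentence are illustrative only.
PTM_PATTERN = re.compile(
    r"(?P<ptm>phosphorylation|methylation|acetylation|hydroxylation)"
    r"\s+of\s+(?P<substrate>\w+)"
    r"(?:\s+by\s+(?P<enzyme>\w+))?",
    re.IGNORECASE,
)

def extract_ptms(sentence):
    """Return (modification, substrate, enzyme-or-None) triples found in a sentence."""
    return [
        (m.group("ptm").lower(), m.group("substrate"), m.group("enzyme"))
        for m in PTM_PATTERN.finditer(sentence)
    ]

print(extract_ptms("Phosphorylation of p53 by ATM stabilizes the protein."))
# [('phosphorylation', 'p53', 'ATM')]
```

A real system would additionally link the extracted substrate and enzyme names to UniProtKB entries, which is where most of the engineering effort lies.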
Full Text Available RED Database Description. Database name: RED (Rice Expression Database). Contact: Shoshi Kikuchi (Genome Research Unit). Classification: plant databases; rice microarray, gene expression. Organism: Oryza sativa (Taxonomy ID: 4530). Database description: the Rice Expression Database (RED) aggregates gene expression data from the Microarray Project and other research groups.
Cruser, des Anges; Dubin, Bruce; Brown, Sarah K; Bakken, Lori L; Licciardone, John C; Podawiltz, Alan L; Bulik, Robert J
Without systematic exposure to biomedical research concepts or applications, osteopathic medical students may be generally under-prepared to efficiently consume and effectively apply research and evidence-based medicine information in patient care. The academic literature suggests that although medical residents are increasingly expected to conduct research in their postgraduate training specialties, they generally have a limited understanding of research concepts. With grant support from the National Center for Complementary and Alternative Medicine, and a grant from the Osteopathic Heritage Foundation, the University of North Texas Health Science Center (UNTHSC) is incorporating research education in the osteopathic medical school curriculum. The first phase of this research education project involved a baseline assessment of students' understanding of targeted research concepts. This paper reports the results of that assessment and discusses implications for research education during medical school. Using a novel set of research competencies supported by the literature as needed for understanding research information, we created a questionnaire to measure students' confidence in and understanding of selected research concepts. Three matriculating medical school classes completed the online questionnaire. Data were analyzed for differences between groups using analysis of variance and t-tests. Correlation coefficients were computed for the confidence and applied understanding measures. We performed a principal component factor analysis of the confidence items, and used multiple regression analyses to explore how confidence might be related to applied understanding. Of 496 total incoming, first, and second year medical students, 354 (71.4%) completed the questionnaire. Incoming students expressed significantly more confidence than first or second year students (F = 7.198, df = 2, 351, P = 0.001) in their ability to understand the research concepts. Factor analyses
Perdue, Bob; Piotrowski, Chris
Designed to evaluate the usefulness of the BIOSIS PREVIEWS database when searching the psychiatric literature, this study compared the effectiveness of this online database with the effectiveness of two other computerized databases, MEDLINE and PsycINFO (Psychological Abstracts), which psychiatric researchers and clinicians usually rely on when…
Lin, Kang-Ping; Kao, Tsair; Wang, Jia-Jung; Chen, Mei-Jung; Su, Fong-Chin
Biomedical Engineers (BME) play an important role in the medical and healthcare community. Sound educational programs are needed to support healthcare systems, including hospitals, long-term care organizations, manufacturers of medical devices/instrumentation/systems, and companies that sell and service such devices. Over the past 30-plus years, the biomedical engineering community in Taiwan has accumulated thousands of people who hold a biomedical engineering degree and work as biomedical engineers. Most BME students are trained in biomedical engineering departments in at least one of the specialties of bioelectronics, bioinformatics, biomaterials or biomechanics. Students are required to complete 320 hours of internship training in related institutions off campus before graduating. Almost all biomedical engineering departments are certified by IEET (Institute of Engineering Education Taiwan) and meet the IEET requirements for mathematics and fundamental engineering courses. For BMEs after graduation, the Taiwanese Society of Biomedical Engineering (TSBME) provides many continuing-education programs and certificates for members who wish to hold certification as a professional credential in the workplace. Currently, many university engineering departments are being asked to provide joint programs with BME departments to train higher-quality students. BME is one of the growing fields in Taiwan.
Piano, Leonardo; Maselli, Filippo; Viceconti, Antonello; Gianola, Silvia; Ciuro, Aldo
[Purpose] To review legislation comparing direct and referred access (and other measures) to physical therapy, focusing on the management of the most burdensome musculoskeletal disorders in terms of regulations, costs, effectiveness, safety and cost-effectiveness. [Methods] The main biomedical databases and gray literature were searched, ranging from the global scenario to the analysis of targeted geographical areas, specifically Italy and the Piedmont Region. [Results] Legislation on direct access reveals inconsistencies among the countries belonging to the World Confederation for Physical Therapy. Direct access appears to be an effective, safe and cost-efficient organizational model for the management of patients with musculoskeletal diseases. [Conclusion] Direct access is a virtuous model that can help improve the global quality of physical therapy services. Further studies are required to confirm this approach and determine whether the findings of the present overview can be replicated in different countries and healthcare systems.
Faria, Daniel; Pesquita, Catia; Mott, Isabela; Martins, Catarina; Couto, Francisco M; Cruz, Isabel F
Biomedical ontologies pose several challenges to ontology matching due both to the complexity of the biomedical domain and to the characteristics of the ontologies themselves. The biomedical tracks in the Ontology Matching Evaluation Initiative (OAEI) have spurred the development of matching systems able to tackle these challenges, and benchmarked their general performance. In this study, we dissect the strategies employed by matching systems to tackle the challenges of matching biomedical ontologies and gauge the impact of the challenges themselves on matching performance, using the AgreementMakerLight (AML) system as the platform for this study. We demonstrate that the linear complexity of the hash-based searching strategy implemented by most state-of-the-art ontology matching systems is essential for matching large biomedical ontologies efficiently. We show that accounting for all lexical annotations (e.g., labels and synonyms) in biomedical ontologies leads to a substantial improvement in F-measure over using only the primary name, and that accounting for the reliability of different types of annotations generally also leads to a marked improvement. Finally, we show that cross-references are a reliable source of information and that, when using biomedical ontologies as background knowledge, it is generally more reliable to use them as mediators than to perform lexical expansion. We anticipate that translating traditional matching algorithms to the hash-based searching paradigm will be a critical direction for the future development of the field. Improving the evaluation carried out in the biomedical tracks of the OAEI will also be important, as without proper reference alignments there is only so much that can be ascertained about matching systems or strategies. Nevertheless, it is clear that, to tackle the various challenges posed by biomedical ontologies, ontology matching systems must be able to efficiently combine multiple strategies into a mature matching
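The hash-based searching strategy described above can be sketched in a few lines: index one ontology's labels and synonyms in a hash map, then probe it with the other ontology's annotations, giving time linear in the number of annotations rather than quadratic pairwise comparison. The ontologies, identifiers, and labels below are invented for illustration; real systems such as AML add normalization, weighting, and per-annotation-type reliability on top of this:

```python
from collections import defaultdict

def normalize(label):
    """Case-fold and collapse whitespace so trivially different labels match."""
    return " ".join(label.lower().split())

def hash_match(source, target):
    """Match two ontologies by exact hash lookup of normalized labels/synonyms.

    `source` and `target` map concept IDs to lists of lexical annotations
    (primary name plus synonyms). Building the index and probing it are each
    linear in the total number of annotations.
    """
    index = defaultdict(set)
    for concept, labels in source.items():
        for label in labels:
            index[normalize(label)].add(concept)
    mappings = set()
    for concept, labels in target.items():
        for label in labels:
            for match in index.get(normalize(label), ()):
                mappings.add((match, concept))
    return mappings

# Toy ontologies with invented IDs; "cor" is a synonym of "heart".
onto_a = {"A:01": ["heart", "cor"], "A:02": ["lung"]}
onto_b = {"B:10": ["Cor"], "B:11": ["kidney"]}
print(hash_match(onto_a, onto_b))  # {('A:01', 'B:10')}
```

Note that the synonym "cor" produces the mapping even though the primary names differ, which is the F-measure gain from indexing all lexical annotations rather than only primary names.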
Fleuren, Wilco W M; Alkema, Wynand
In recent years the amount of experimental data produced in biomedical research and the number of papers published in this field have grown rapidly. To keep up to date with developments in their field of interest and to interpret the outcome of experiments in light of all available literature, researchers turn more and more to automated literature mining. As a consequence, text mining tools have evolved considerably in number and quality, and nowadays can be used to address a variety of research questions, ranging from de novo drug target discovery to enhanced biological interpretation of the results of high-throughput experiments. In this paper we introduce the most important techniques used for text mining and give an overview of the text mining tools currently in use and the types of problems to which they are typically applied. Copyright © 2015 Elsevier Inc. All rights reserved.
Abad-García, M Francisca; Melero, Remedios; Abadal, Ernest; González-Teruel, Aurora
Open-access literature is digital, online, free of charge, and free of most copyright and licensing restrictions. Self-archiving, or the deposit of scholarly outputs in institutional repositories (the open-access green route), is increasingly present in the activities of the scientific community. Besides the benefits of open access for the visibility and dissemination of science, funding agencies increasingly require the deposit of papers and other documents in repositories. In the biomedical environment this is even more relevant given the impact scientific literature can have on public health. However, to make self-archiving feasible, authors should be aware of its meaning and of the terms under which they are allowed to archive their works. In that sense, there are tools such as Sherpa/RoMEO and DULCINEA (both directories of the copyright licences of scientific journals at different levels) to find out which rights authors retain when they publish a paper and whether self-archiving is permitted. PubMed Central and its British and Canadian counterparts are the main thematic repositories for the biomedical fields. In our country there is no repository of a similar nature, but most universities and the CSIC have already created their own institutional repositories. The increased visibility of research results and their greater and earlier citation are among the most frequently cited advantages of open access, but the removal of economic barriers to information access is also a benefit that breaks down borders between groups.
This book provides an introduction to the design of biomedical optical imaging technologies and their applications. The main topics include: fluorescence imaging, confocal imaging, micro-endoscopy, polarization imaging, hyperspectral imaging, OCT imaging, multimodal imaging and spectroscopic systems. Each chapter is written by world leaders in the respective field and covers: principles and limitations of the optical imaging technology, system design and practical implementation for one or two specific applications, including design guidelines, system configuration, optical design, component requirements and selection, system optimization and design examples, and recent advances and applications in biomedical research and clinical imaging. This book serves as a reference for students and researchers in optics and biomedical engineering.
Kemnitzer, Ronald; Dorsa, Ed
The development of biomedical equipment is justifiably focused on making products that "work." However, this approach leaves many of the people affected by these designs (operators, patients, etc.) with little or no representation when it comes to the design of these products. Industrial design is a "user focused" profession which takes into account the needs of diverse groups when making design decisions. The authors propose that biomedical equipment design can be enhanced, made more user and patient "friendly" by adopting the industrial design approach to researching, analyzing, and ultimately designing biomedical products.
THE GENOME PROJECT IN BIOMEDICAL LITERATURE WITHIN FOUR COUNTRIES IN LATIN AMERICA
Fernando Lolas Stepke
The present reflection refers to data obtained about the social representations of genomic research and its applications through a review of local literature written by biomedical researchers in four Latin American countries: Argentina, Chile, Mexico and Peru. Several issues are addressed, such as: limited access to prevention and therapeutic methods related to genomic medicine in Latin America; the risks associated with genetic modifications in human beings; lack of equity in access to health benefits; control by biotechnological companies; the commercialization of gene sequences through patents, which leads to commercial exploitation of developing countries; the possibility of physical or psychological damage in the form of stigmatization or genetic discrimination; the possibility of genetic modifications or abortion for eugenic reasons; the necessity of safeguarding confidentiality; the low participation of indigenous communities in the studies done on their DNA, sometimes without proper informed consent; and the necessity of legal regulation to prevent enhancement-oriented genetic modifications or reproductive human cloning, and to regulate access to genetic information.
Jewison, Timothy; Knox, Craig; Neveu, Vanessa; Djoumbou, Yannick; Guo, An Chi; Lee, Jacqueline; Liu, Philip; Mandal, Rupasri; Krishnamurthy, Ram; Sinelnikov, Igor; Wilson, Michael; Wishart, David S.
The Yeast Metabolome Database (YMDB, http://www.ymdb.ca) is a richly annotated ‘metabolomic’ database containing detailed information about the metabolome of Saccharomyces cerevisiae. Modeled closely after the Human Metabolome Database, the YMDB contains >2000 metabolites with links to 995 different genes/proteins, including enzymes and transporters. The information in YMDB has been gathered from hundreds of books, journal articles and electronic databases. In addition to its comprehensive literature-derived data, the YMDB also contains an extensive collection of experimental intracellular and extracellular metabolite concentration data compiled from detailed Mass Spectrometry (MS) and Nuclear Magnetic Resonance (NMR) metabolomic analyses performed in our lab. This is further supplemented with thousands of NMR and MS spectra collected on pure, reference yeast metabolites. Each metabolite entry in the YMDB contains an average of 80 separate data fields including comprehensive compound description, names and synonyms, structural information, physico-chemical data, reference NMR and MS spectra, intracellular/extracellular concentrations, growth conditions and substrates, pathway information, enzyme data, gene/protein sequence data, as well as numerous hyperlinks to images, references and other public databases. Extensive searching, relational querying and data browsing tools are also provided that support text, chemical structure, spectral, molecular weight and gene/protein sequence queries. Because of S. cerevisiae's importance as a model organism for biologists and as a biofactory for industry, we believe this kind of database could have considerable appeal not only to metabolomics researchers, but also to yeast biologists, systems biologists, the industrial fermentation industry, as well as the beer, wine and spirit industry. PMID:22064855
Iwasaki, Wataru; Yamamoto, Yasunori; Takagi, Toshihisa
In this paper, we describe a server/client literature management system specialized for the life science domain, the TogoDoc system (Togo, pronounced Toe-Go, is a romanization of a Japanese word for integration). The server and the client program cooperate closely over the Internet to provide life scientists with an effective literature recommendation service and efficient literature management. The content-based and personalized literature recommendation helps researchers to isolate interesting papers from the "tsunami" of literature, in which, on average, more than one biomedical paper is added to MEDLINE every minute. Because researchers these days need to cover updates of much wider topics to generate hypotheses using massive datasets obtained from public databases or omics experiments, the importance of having an effective literature recommendation service is rising. The automatic recommendation is based on the content of personal literature libraries of electronic PDF papers. The client program automatically analyzes these files, which are sometimes deeply buried in storage disks of researchers' personal computers. Just saving PDF papers to the designated folders makes the client program automatically analyze and retrieve metadata, rename file names, synchronize the data to the server, and receive the recommendation lists of newly published papers, thus accomplishing effortless literature management. In addition, the tag suggestion and associative search functions are provided for easy classification of and access to past papers (researchers who read many papers sometimes only vaguely remember or completely forget what they read in the past). The TogoDoc system is available for both Windows and Mac OS X and is free. The TogoDoc Client software is available at http://tdc.cb.k.u-tokyo.ac.jp/, and the TogoDoc server is available at https://docman.dbcls.jp/pubmed_recom.
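Content-based recommendation of the kind described can be sketched with plain TF-IDF vectors and cosine similarity: build a term-weight profile from the papers already in a user's library, then score each newly published paper against that profile. This is a generic illustration with invented toy data, not TogoDoc's actual model:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Turn tokenized documents into sparse TF-IDF term-weight dicts."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))  # document frequency
    return [
        {t: c * math.log(n / df[t]) for t, c in Counter(doc).items()}
        for doc in docs
    ]

def cosine(u, v):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy data: two papers already in the researcher's library, two new candidates.
library = [["figure", "text", "mining"], ["protein", "text", "mining"]]
candidates = [["figure", "mining", "evaluation"], ["yeast", "genome"]]
vecs = tfidf_vectors(library + candidates)

# The user profile is the sum of the library papers' vectors.
profile = {}
for v in vecs[: len(library)]:
    for t, w in v.items():
        profile[t] = profile.get(t, 0.0) + w

scores = [cosine(profile, v) for v in vecs[len(library):]]  # rank new papers
```

Here the first candidate, which overlaps the library vocabulary, scores higher than the unrelated second one; a production system would apply the same idea to terms extracted from the user's PDF collection against the daily MEDLINE stream.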
Full Text Available RMOS Database Description. Database name: RMOS (Rice Microarray Opening Site). Contact: Shoshi Kikuchi (Research Unit). Classification: plant databases; rice microarray data and other gene expression databases. Organism: Oryza sativa (Taxonomy ID: 4530). Database description: the Rice Microarray Opening Site is a database of comprehensive information for rice microarrays.
Development of a comprehensive dynamic geotechnical database is described: the computer software selected to program the client/server application in a Windows environment, the components and structure of the geotechnical database, and the primary factors cons...
The first edition of the Directory of IAEA Databases is intended to describe the computerized information sources available to IAEA staff members. It contains a listing of all databases produced at the IAEA, together with information on their availability
U.S. Department of Health & Human Services — The Cell Centered Database (CCDB) is a web accessible database for high resolution 2D, 3D and 4D data from light and electron microscopy, including correlated imaging.
EPA has developed a physiological information database (created using Microsoft ACCESS) intended to be used in PBPK modeling. The database contains physiological parameter values for humans from early childhood through senescence as well as similar data for laboratory animal spec...
US Agency for International Development — E3 Staff database is maintained by E3 PDMS (Professional Development & Management Services) office. The database is Mysql. It is manually updated by E3 staff as...
Sleutjes, B.; de Valk, H.A.G.
Database Urban Europe: ResSegr database on segregation in The Netherlands. Collaborative research on residential segregation in Europe 2014–2016 funded by JPI Urban Europe (Joint Programming Initiative Urban Europe).
Skrzypek, Marek S; Nash, Robert S; Wong, Edith D; MacPherson, Kevin A; Hellerstedt, Sage T; Engel, Stacia R; Karra, Kalpana; Weng, Shuai; Sheppard, Travis K; Binkley, Gail; Simison, Matt; Miyasato, Stuart R; Cherry, J Michael
The Saccharomyces Genome Database (SGD; http://www.yeastgenome.org) is an expertly curated database of literature-derived functional information for the model organism budding yeast, Saccharomyces cerevisiae. SGD constantly strives to synergize new types of experimental data and bioinformatics predictions with existing data, and to organize them into a comprehensive and up-to-date information resource. The primary mission of SGD is to facilitate research into the biology of yeast and...
Burnham, Judy F
The Scopus database provides access to STM journal articles and the references included in those articles, allowing the searcher to search both forward and backward in time. The database can be used for collection development as well as for research. This review provides information on the key points of the database and compares it to Web of Science. Neither database is all-inclusive; rather, the two complement each other. If a library can afford only one, the choice must be based on institutional needs.
Skrabalak, Sara E.; Chen, Jingyi; Au, Leslie; Lu, Xianmao; Li, Xingde; Xia, Younan
Nanostructured materials provide a promising platform for early cancer detection and treatment. Here we highlight recent advances in the synthesis and use of Au nanocages for such biomedical applications. Gold nanocages represent a novel class of nanostructures, which can be prepared via a remarkably simple route based on the galvanic replacement reaction between Ag nanocubes and HAuCl4. The Au nanocages have a tunable surface plasmon resonance peak that extends into the near-infrared, where the optical attenuation caused by blood and soft tissue is essentially negligible. They are also biocompatible and present a well-established surface for easy functionalization. We have tailored the scattering and absorption cross-sections of Au nanocages for use in optical coherence tomography and photothermal treatment, respectively. Our preliminary studies show greatly improved spectroscopic image contrast for tissue phantoms containing Au nanocages. Our most recent results also demonstrate the photothermal destruction of breast cancer cells in vitro by using immuno-targeted Au nanocages as an effective photo-thermal transducer. These experiments suggest that Au nanocages may be a new class of nanometer-sized agents for cancer diagnosis and therapy. PMID:18648528
Ensuring database stability and steady performance in the modern world of agile computing is a major challenge. Changes at any level of the computing infrastructure (OS parameters and packages, kernel versions, database parameters and patches, or even schema changes) can potentially harm production services. This presentation shows how automatic, regular testing of Oracle databases can be achieved in such an agile environment.
Tiago Faustino Andrade
Full Text Available Nowadays, the assessment of body composition by estimating the percentage of body fat has a great impact in many fields, such as nutrition, health, sports and chronic disease. The main purpose of this work is the development of a virtual instrument that permits more effective assessment of body fat, automatic data processing, and the recording and storage of results in a database, with high potential for conducting new studies (http://lipotool.com).
Liu, Feifan; Yu, Hong
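The record above does not give BioFigRank's actual formulation; as a generic illustration of the listwise learning-to-rank family it builds on, here is a minimal ListNet-style top-one loss sketch. All function names and values are hypothetical and chosen only for this example:

```python
import math

def softmax(xs):
    # Convert raw scores into a top-one probability distribution.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def listnet_top_one_loss(scores, relevance):
    # Cross-entropy between the distributions induced by the
    # ground-truth relevance labels and the predicted scores:
    # a listwise loss, computed over the whole ranked list at once.
    p_true = softmax(relevance)
    p_pred = softmax(scores)
    return -sum(t * math.log(p) for t, p in zip(p_true, p_pred))

# Toy example: four figures in one article, higher relevance = more important.
relevance = [3.0, 1.0, 0.0, 2.0]
good = listnet_top_one_loss([2.9, 0.8, 0.1, 2.1], relevance)  # scores agree with labels
bad = listnet_top_one_loss([0.0, 2.0, 3.0, 1.0], relevance)   # scores disagree
assert good < bad
```

A model trained by minimizing such a loss over many annotated articles would learn to score an article's figures so that the induced ranking matches the authors' importance judgments.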
Jensen, Christian S.; Pedersen, Torben Bach; Thomsen, Christian
The present book's subject is multidimensional data models and data modeling concepts as they are applied in real data warehouses. The book aims to present the most important concepts within this subject in a precise and understandable manner. The book's coverage of fundamental concepts includes … data cubes and their elements, such as dimensions, facts, and measures and their representation in a relational setting; it includes architecture-related concepts; and it includes the querying of multidimensional databases. The book also covers advanced multidimensional concepts that are considered … techniques that are particularly important to multidimensional databases, including materialized views, bitmap indices, join indices, and star join processing. The book ends with a chapter that presents the literature on which the book is based and offers further readings for those readers who wish to engage …
Deligkaris, Kosmas; Tadele, T.S.; Olthuis, Wouter; van den Berg, Albert
This review paper presents hydrogel-based devices for biomedical applications. The first part of the paper gives a comprehensive, qualitative, theoretical overview of hydrogels' synthesis and operation. Crosslinking methods, operation principles and transduction mechanisms are discussed in this
Federal Laboratory Consortium — The NICHD Biomedical Mass Spectrometry Core Facility was created under the auspices of the Office of the Scientific Director to provide high-end mass-spectrometric...
Parracino, A.; Gajula, G.P.; di Gennaro, A.K.
Medical interest in nanotechnology originates from a belief that nanoscale therapeutic devices can be constructed and directed towards its target inside the human body. Such nanodevices can be engineered by coupling superparamagnetic nanoparticle to biomedically active proteins. We hereby report ...
Kim, Donghyun; Somekh, Michael
Nanophotonics has emerged rapidly into the technological mainstream with the advent and maturity of nanotechnology in photonics, enabling many exciting new applications in biomedical science and engineering that were unimagined even a few years ago with conventional photonic engineering techniques. Handbook of Nanophotonics in Biomedical Engineering is intended to be a reliable resource offering a wealth of information on nanophotonics that can inspire readers by detailing emerging and established possibilities of nanophotonics in biomedical science and engineering applications. This comprehensive reference presents not only the basics of nanophotonics but also explores recent experimental and clinical methods used in biomedical and bioengineering research. Each peer-reviewed chapter of this book discusses fundamental aspects and materials/fabrication issues of nanophotonics, as well as applications in interfaces, cell, tissue, animal studies, and clinical engineering. The organization provides ...
A collaboration between the National Science Foundation and the National Institutes of Health will give NIH-funded researchers training to help them evaluate their scientific discoveries for commercial potential, with the aim of accelerating biomedical in
Journal of Medical and Biomedical Sciences, Vol 3, No 3 (2014).
International Journal of Medicine and Biomedical Research, Vol 5, No 3 (2016).
Journal of Medical and Biomedical Sciences, Vol 1, No 3 (2012).
National Aeronautics and Space Administration — Our project investigated whether a software platform could integrate as wide a variety of devices and data types as needed for spaceflight biomedical support. The...
San, Ka-Yiu; McIntire, Larry V.
Presents an introduction to the Biochemical and Biomedical Engineering program at Rice University. Describes the development of the academic and enhancement programs, including organizational structure and research project titles. (YP)
Bustamante, John; Sierra, Daniel
This volume presents the proceedings of the CLAIB 2016, held in Bucaramanga, Santander, Colombia, 26, 27 & 28 October 2016. The proceedings, presented by the Regional Council of Biomedical Engineering for Latin America (CORAL), offer research findings, experiences and activities between institutions and universities to develop Bioengineering, Biomedical Engineering and related sciences. The conferences of the American Congress of Biomedical Engineering are sponsored by the International Federation for Medical and Biological Engineering (IFMBE), Society for Engineering in Biology and Medicine (EMBS) and the Pan American Health Organization (PAHO), among other organizations and international agencies to bring together scientists, academics and biomedical engineers in Latin America and other continents in an environment conducive to exchange and professional growth.
This volume presents the proceedings of the CLAIB 2014, held in Paraná, Entre Ríos, Argentina, 29, 30 & 31 October 2014. The proceedings, presented by the Regional Council of Biomedical Engineering for Latin America (CORAL), offer research findings, experiences and activities between institutions and universities to develop Bioengineering, Biomedical Engineering and related sciences. The conferences of the American Congress of Biomedical Engineering are sponsored by the International Federation for Medical and Biological Engineering (IFMBE), Society for Engineering in Biology and Medicine (EMBS) and the Pan American Health Organization (PAHO), among other organizations and international agencies, bringing together scientists, academics and biomedical engineers in Latin America and other continents in an environment conducive to exchange and professional growth. The topics include: - Bioinformatics and Computational Biology - Bioinstrumentation; Sensors, Micro and Nano Technologies - Biomaterials, Tissu...
Xia, Junfeng; Wang, Qingguo; Jia, Peilin; Wang, Bing; Pao, William; Zhao, Zhongming
Next generation sequencing (NGS) technologies have been rapidly applied in biomedical and biological research since their advent only a few years ago, and they are expected to advance at an unprecedented pace in the coming years. To provide the research community with a comprehensive NGS resource, we have developed the database Next Generation Sequencing Catalog (NGS Catalog, http://bioinfo.mc.vanderbilt.edu/NGS/index.html), a continually updated database that collects, curates and manages available human NGS data obtained from published literature. NGS Catalog deposits publication information of NGS studies and their mutation characteristics (SNVs, small insertions/deletions, copy number variations, and structural variants), as well as mutated genes and gene fusions detected by NGS. Other functions include user data upload, NGS general analysis pipelines, and NGS software. NGS Catalog is particularly useful for investigators who are new to NGS but would like to take advantage of these powerful technologies for their own research. Finally, based on the data deposited in NGS Catalog, we summarized features and findings from whole exome sequencing, whole genome sequencing, and transcriptome sequencing studies for human diseases or traits. © 2012 Wiley Periodicals, Inc.
Shaped by Quantum Theory, Technology, and the Genomics RevolutionThe integration of photonics, electronics, biomaterials, and nanotechnology holds great promise for the future of medicine. This topic has recently experienced an explosive growth due to the noninvasive or minimally invasive nature and the cost-effectiveness of photonic modalities in medical diagnostics and therapy. The second edition of the Biomedical Photonics Handbook presents recent fundamental developments as well as important applications of biomedical photonics of interest to scientists, engineers, manufacturers, teachers,
Biomedical applications have benefited greatly from the increasing interest and research into semiconducting silicon nanowires. Semiconducting Silicon Nanowires for Biomedical Applications reviews the fabrication, properties, and applications of this emerging material. The book begins by reviewing the basics, as well as the growth, characterization, biocompatibility, and surface modification, of semiconducting silicon nanowires. It goes on to focus on silicon nanowires for tissue engineering and delivery applications, including cellular binding and internalization, orthopedic tissue scaffol
Saha, Punam K; Basu, Subhadip
There has been rapid growth in biomedical engineering in recent decades, given advancements in medical imaging and physiological modelling and sensing systems, coupled with immense growth in computational and network technology, analytic approaches, visualization and virtual-reality, man-machine interaction and automation. Biomedical engineering involves applying engineering principles to the medical and biological sciences and it comprises several topics including biomedicine, medical imaging, physiological modelling and sensing, instrumentation, real-time systems, automation and control, sig
Kamala, K; Sivaperumal, P
Marine microbial enzyme technologies have progressed significantly in the last few decades for different applications. Among the various microorganisms, marine actinobacterial enzymes have significant active properties, which could allow them to be biocatalysts with tremendous bioactive metabolites. Moreover, marine actinobacteria have been considered as biofactories, since their enzymes fulfill biomedical and industrial needs. In this chapter, the marine actinobacteria and their enzymes' uses in biological activities and biomedical applications are described. © 2017 Elsevier Inc. All rights reserved.
Lim, Joo-Hwee; Xiong, Wei
A comprehensive guide to understanding and interpreting digital images in medical and functional applications Biomedical Image Understanding focuses on image understanding and semantic interpretation, with clear introductions to related concepts, in-depth theoretical analysis, and detailed descriptions of important biomedical applications. It covers image processing, image filtering, enhancement, de-noising, restoration, and reconstruction; image segmentation and feature extraction; registration; clustering, pattern classification, and data fusion. With contributions from ex
Vol. 20, No. 6 (2009), pp. 743-750, ISSN 1180-4009. [TIES 2007, 18th Annual Meeting of the International Environmental Society, Mikulov, 16.08.2007-20.08.2007] Institutional research plan: CEZ:AV0Z10300504. Keywords: biomedical informatics; biomedical statistics; genetic information; forensic dentistry. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 1.000, year: 2009
Duchange, Nathalie; Autard, Delphine; Pinhas, Nicole
International audience; Open access within the scientific community depends on the scientific context and the practices of the field. In the biomedical domain, the communication of research results is characterised by the importance of the peer reviewing process, the existence of a hierarchy among journals and the transfer of copyright to the editor. Biomedical publishing has become a lucrative market and the growth of electronic journals has not helped lower the costs. Indeed, it is difficul...
Chee Kai Chua; Wai Yee Yeong; Jia An
Three-dimensional (3D) printing has a long history of applications in biomedical engineering. The development and expansion of traditional biomedical applications are being advanced and enriched by new printing technologies. New biomedical applications such as bioprinting are highly attractive and trendy. This Special Issue aims to provide readers with a glimpse of the recent profile of 3D printing in biomedical research.
Full Text Available Under the support of the National Digital Archive Program (NDAP), basic species information about most Taiwanese fishes, including their morphology, ecology, distribution, specimens with photos, and literature, has been compiled into the "Fish Database of Taiwan" (http://fishdb.sinica.edu.tw). We expect that the all-Taiwanese-fish-species databank (RSD), with 2,800+ species, and the digital "Fish Fauna of Taiwan" will be completed in 2007. Underwater ecological photos and video images for all 2,800+ fishes are quite difficult to achieve but will be collected continuously in the future. In the last year of NDAP, we have successfully integrated all fish specimen data deposited at 7 different institutes in Taiwan, as well as their collection maps, on Google Map and Google Earth. Further, the database also provides the pronunciation of Latin scientific names and transliteration of Chinese common names by referring to the Romanization system for all Taiwanese fishes (2,902 species in 292 families so far). The Taiwanese fish species checklist with Chinese common/vernacular names and specimen data has been updated periodically and provided to the global FishBase as well as the Global Biodiversity Information Facility (GBIF) through the national portal of the Taiwan Biodiversity Information Facility (TaiBIF). Thus, Taiwanese fish data can be queried and browsed on the WWW. For contributing to the "Barcode of Life" and "All Fishes" international projects, alcohol-preserved specimens of more than 1,800 species and cryobanking tissues of 800 species have been accumulated at RCBAS in the past two years. Through this close collaboration between local and global databases, "The Fish Database of Taiwan" now attracts more than 250,000 visitors and achieves 5 million hits per month. We believe that this local database is becoming an important resource for the education, research, conservation, and sustainable use of fish in Taiwan.