Jahiruddin; Abulaish, Muhammad; Dey, Lipika
A number of techniques such as information extraction, document classification, document clustering and information visualization have been developed to ease extraction and understanding of information embedded within text documents. However, knowledge embedded in natural language texts is difficult to extract using simple pattern matching techniques, and most of these methods do not help users directly understand key concepts and their semantic relationships in document corpora, which are critical for capturing their conceptual structures. The problem arises because most of the information is embedded within unstructured or semi-structured texts that computers cannot interpret easily. In this paper, we present a novel Biomedical Knowledge Extraction and Visualization framework, BioKEVis, to identify key information components from biomedical text documents. The information components are centered on key concepts. BioKEVis applies linguistic analysis and Latent Semantic Analysis (LSA) to identify key concepts. The information component extraction principle is based on natural language processing techniques and semantic-based analysis. The system is also integrated with a biomedical named entity recognizer, ABNER, to tag genes, proteins and other entity names in the text. We also present a method for collating information extracted from multiple sources to generate a semantic network. The network provides distinct user perspectives, allows navigation over documents with similar information components, and offers a comprehensive view of the collection. The system stores the extracted information components in a structured repository which is integrated with a query-processing module to handle biomedical queries over text documents. We also propose a document ranking mechanism to present retrieved documents in order of their relevance to the user query. Copyright © 2010 Elsevier Inc. All rights reserved.
Rodriguez-Esteban, Raul; Bundschus, Markus
Biomedical text mining of scientific knowledge bases, such as Medline, has received much attention in recent years. Given that text mining is able to automatically extract biomedical facts that revolve around entities such as genes, proteins, and drugs, from unstructured text sources, it is seen as a major enabler to foster biomedical research and drug discovery. In contrast to the biomedical literature, research into the mining of biomedical patents has not reached the same level of maturity. Here, we review existing work and highlight the associated technical challenges that emerge from automatically extracting facts from patents. We conclude by outlining potential future directions in this domain that could help drive biomedical research and drug discovery. Copyright © 2016 Elsevier Ltd. All rights reserved.
Taylor, Donald P
High content screening (HCS) requires time-consuming and often complex iterative information retrieval and assessment approaches to optimally conduct drug discovery programs and biomedical research. Pre- and post-HCS experimentation both require the retrieval of information from public as well as proprietary literature in addition to structured information assets such as compound libraries and projects databases. Unfortunately, this information is typically scattered across a plethora of proprietary bioinformatics tools and databases and public domain sources. Consequently, single search requests must be presented to each information repository, forcing the results to be manually integrated for a meaningful result set. Furthermore, these bioinformatics tools and data repositories are becoming increasingly complex to use; typically they fail to allow for more natural query interfaces. Vivisimo has developed an enterprise software platform to bridge disparate silos of information. The platform automatically categorizes search results into descriptive folders without the use of taxonomies to drive the categorization. A new approach to information retrieval for HCS experimentation is proposed.
Philip R O Payne
The modern biomedical research and healthcare delivery domains have seen an unparalleled increase in the rate of innovation and novel technologies over the past several decades. Catalyzed by paradigm-shifting public and private programs focusing upon the formation and delivery of genomic and personalized medicine, the need for high-throughput and integrative approaches to the collection, management, and analysis of heterogeneous data sets has become imperative. This need is particularly pressing in the translational bioinformatics domain, where many fundamental research questions require the integration of large scale, multi-dimensional clinical phenotype and bio-molecular data sets. Modern biomedical informatics theory and practice have demonstrated the distinct benefits associated with the use of knowledge-based systems in such contexts. A knowledge-based system can be defined as an intelligent agent that employs a computationally tractable knowledge base or repository in order to reason upon data in a targeted domain and reproduce expert performance relative to such reasoning operations. The ultimate goal of the design and use of such agents is to increase the reproducibility, scalability, and accessibility of complex reasoning tasks. Examples of the application of knowledge-based systems in biomedicine span a broad spectrum, from the execution of clinical decision support, to epidemiologic surveillance of public data sets for the purposes of detecting emerging infectious diseases, to the discovery of novel hypotheses in large-scale research data sets. In this chapter, we will review the basic theoretical frameworks that define core knowledge types and reasoning operations with particular emphasis on the applicability of such conceptual models within the biomedical domain, and then go on to introduce a number of prototypical data integration requirements and patterns relevant to the conduct of translational bioinformatics that can be addressed
Sarić, Jasmin; Engelken, Henriette; Reyle, Uwe
Biomedical knowledge is to a very large extent represented only in textual form. To make this knowledge accessible to humans and/or further automatic processing, text mining applications have been developed. At the end of this chapter we present an overview of the most important open access applications and their functionality. The main part of the paper is devoted to the major problems with which all such applications have to deal. The first problem is terminology processing, i.e., recognizing biomedical terms and identifying their meanings, at least to a certain degree. The second problem is to bring together information units that are distributed over more than one sentence. The task of coreference resolution consists of identifying the entities to which the text refers in different sentences and in different ways. The third problem we discuss is that of information extraction, in particular, extraction of relational information. The representation of the domain knowledge is an indispensable component of any text mining application. We discuss different types and depths of ontological modeling and how this knowledge helps to accomplish the tasks described above. An overview of ontological resources is given at the end of the chapter.
Background: Research into event-based text mining from the biomedical literature has been growing in popularity to facilitate the development of advanced biomedical text mining systems. Such technology permits advanced search, which goes beyond document- or sentence-based retrieval. However, existing event-based systems typically ignore additional information within the textual context of events that can determine, amongst other things, whether an event represents a fact, hypothesis, experimental result or analysis of results, whether it describes new or previously reported knowledge, and whether it is speculated or negated. We refer to such contextual information as meta-knowledge. The automatic recognition of such information can permit the training of systems allowing finer-grained searching of events according to the meta-knowledge that is associated with them. Results: Based on a corpus of 1,000 MEDLINE abstracts, fully manually annotated with both events and associated meta-knowledge, we have constructed a machine learning-based system that automatically assigns meta-knowledge information to events. This system has been integrated into EventMine, a state-of-the-art event extraction system, to create a more advanced system (EventMine-MK) that not only extracts events from text automatically, but also assigns five different types of meta-knowledge to these events. The meta-knowledge assignment module of EventMine-MK performs with macro-averaged F-scores in the range of 57-87% on the BioNLP’09 Shared Task corpus. EventMine-MK has been evaluated on the BioNLP’09 Shared Task subtask of detecting negated and speculated events. Our results show that EventMine-MK can outperform other state-of-the-art systems that participated in this task. Conclusions: We have constructed the first practical system that extracts both events and associated, detailed meta-knowledge information from biomedical literature. The automatically assigned
Zheng, Hai-Tao; Borchert, Charles; Jiang, Yong
Biomedical document conceptualization is the process of clustering biomedical documents based on ontology-represented domain knowledge. The result of this process is the representation of the biomedical documents by a set of key concepts and their relationships. Most clustering methods cluster documents based on invariant domain knowledge. The objective of this work is to develop an effective method to cluster biomedical documents based on various user-specified ontologies, so that users can exploit the concept structures of documents more effectively. We develop a flexible framework that allows users to specify the knowledge bases, in the form of ontologies. Based on the user-specified ontologies, we develop a key concept induction algorithm, which uses latent semantic analysis to identify key concepts and cluster documents. A corpus-related ontology generation algorithm is developed to generate the concept structures of documents. Based on two biomedical datasets, we evaluate the proposed method and five other clustering algorithms. The clustering results of the proposed method outperform the five other algorithms in terms of key concept identification. On the first biomedical dataset, our method achieves F-measure values of 0.7294 and 0.5294 based on the MeSH ontology and Gene Ontology (GO), respectively. On the second biomedical dataset, our method achieves F-measure values of 0.6751 and 0.6746 based on the MeSH ontology and GO, respectively. Both results outperform the five other algorithms in terms of F-measure. Based on the MeSH ontology and GO, the generated corpus-related ontologies show informative conceptual structures. The proposed method enables users to specify the domain knowledge to exploit the conceptual structures of biomedical document collections. In addition, the proposed method is able to extract the key concepts and cluster the documents with relatively high precision. Copyright 2010 Elsevier B.V. All rights reserved.
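The LSA step used for key concept induction can be illustrated with a small sketch. This is a toy under stated assumptions, not the paper's implementation: the terms, the counts, the number of latent dimensions k, and the loading-magnitude scoring are all invented for illustration; a real system would build the term-document matrix from ontology concepts matched in the corpus.

```python
import numpy as np

# Toy term-document matrix: rows = terms/concepts, columns = documents.
terms = ["gene", "protein", "disease", "drug"]
X = np.array([
    [3, 2, 0, 0],   # "gene" counts per document
    [2, 3, 1, 0],   # "protein"
    [0, 1, 3, 2],   # "disease"
    [0, 0, 2, 3],   # "drug"
], dtype=float)

# Latent semantic analysis: truncated SVD keeps the k strongest latent topics.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
U_k, s_k, Vt_k = U[:, :k], s[:k], Vt[:k, :]

# Score each term by the magnitude of its loadings in the latent space;
# the top-scoring terms act as "key concepts" for the collection.
term_scores = np.abs(U_k * s_k).sum(axis=1)
key_concepts = [terms[i] for i in np.argsort(term_scores)[::-1][:2]]

# Documents can then be clustered in the k-dimensional document space.
doc_vectors = (s_k[:, None] * Vt_k).T   # one k-dimensional vector per document
print(key_concepts)
```

Any standard clustering algorithm (e.g. k-means) could then be run on `doc_vectors` to group documents by latent topic.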
Bakal, Gokhan; Kavuluru, Ramakanth
Identifying new potential treatment options (say, medications and procedures) for known medical conditions that cause human disease burden is a central task of biomedical research. Since all candidate drugs cannot be tested with animal and clinical trials, in vitro approaches are first attempted to identify promising candidates. Even before this step, owing to recent advances, in silico or computational approaches are also being employed to identify viable treatment options. Generally, natural language processing (NLP) and machine learning are used to predict specific relations between any given pair of entities using the distant supervision approach. In this paper, we report preliminary results on predicting treatment relations between biomedical entities purely based on semantic patterns over biomedical knowledge graphs. As such, we refrain from explicitly using NLP, although the knowledge graphs themselves may be built from NLP extractions. Our intuition is fairly straightforward: entities that participate in a treatment relation may be connected by similar path patterns in biomedical knowledge graphs extracted from scientific literature. Using a dataset of treatment relation instances derived from the well-known Unified Medical Language System (UMLS), we verify our intuition by employing graph path patterns from a well-known knowledge graph as features in machine learned models. We achieve a high recall (92%), but precision decreases from 95% to an acceptable 71% as we go from a uniform class distribution to a ten-fold increase in negative instances. We also demonstrate that models trained with patterns of length ≤ 3 result in statistically significant gains in F-score over those trained with patterns of length ≤ 2. Our results show the potential of exploiting knowledge graphs for relation extraction, and we believe this is the first effort to employ graph patterns as features for identifying biomedical relations.
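The path-pattern intuition above can be sketched as follows. The tiny graph, entity names, and predicate labels are hypothetical stand-ins (loosely in the style of SemMedDB relations); the paper's actual features come from a large knowledge graph extracted from the literature.

```python
from collections import deque

# Tiny illustrative knowledge graph as (subject, predicate, object) triples.
triples = [
    ("aspirin", "INHIBITS", "cox2"),
    ("cox2", "ASSOCIATED_WITH", "inflammation"),
    ("inflammation", "MANIFESTATION_OF", "arthritis"),
    ("aspirin", "INTERACTS_WITH", "ibuprofen"),
    ("ibuprofen", "TREATS", "arthritis"),
]

graph = {}
for s, p, o in triples:
    graph.setdefault(s, []).append((p, o))

def path_patterns(graph, source, target, max_len=3):
    """Collect predicate sequences of all paths source -> target up to max_len."""
    patterns = set()
    queue = deque([(source, ())])
    while queue:
        node, preds = queue.popleft()
        if node == target and preds:
            patterns.add("-".join(preds))
        if len(preds) < max_len:
            for p, nxt in graph.get(node, []):
                queue.append((nxt, preds + (p,)))
    return patterns

# Each distinct pattern becomes one binary feature for the candidate pair;
# a classifier then learns which patterns indicate a treatment relation.
features = path_patterns(graph, "aspirin", "arthritis")
print(sorted(features))
```

Restricting `max_len` to 2 versus 3 corresponds directly to the pattern-length comparison reported in the abstract.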
Shang, Yue; Li, Yanpeng; Lin, Hongfei; Yang, Zhihao
Automatic text summarization for a biomedical concept can help researchers to get the key points of a certain topic from a large amount of biomedical literature efficiently. In this paper, we present a method for generating a text summary for a given biomedical concept, e.g., H1N1 disease, from multiple documents based on semantic relation extraction. Our approach includes three stages: 1) We extract semantic relations in each sentence using the semantic knowledge representation tool SemRep. 2) We develop a relation-level retrieval method to select the relations most relevant to each query concept and visualize them in a graphic representation. 3) For relations in the relevant set, we extract informative sentences that can interpret them from the document collection to generate a text summary using an information retrieval based method. Our major focus in this work is to investigate the contribution of semantic relation extraction to the task of biomedical text summarization. The experimental results on summarization for a set of diseases show that the introduction of semantic knowledge improves the performance, and our results are better than those of the MEAD system, a well-known tool for text summarization.
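Stages 2 and 3 of the pipeline can be caricatured in a few lines. The predications and sentences below are invented for illustration; SemRep performs far deeper linguistic analysis to produce such (subject, predicate, object) triples from real text.

```python
# Hypothetical SemRep-style predications and the sentences they came from.
predications = [
    ("H1N1", "CAUSES", "influenza"),
    ("oseltamivir", "TREATS", "H1N1"),
    ("H1N1", "LOCATION_OF", "hemagglutinin"),
    ("statins", "TREATS", "hypercholesterolemia"),
]

sentences = {
    ("H1N1", "CAUSES", "influenza"):
        "H1N1 is a subtype that causes influenza in humans.",
    ("oseltamivir", "TREATS", "H1N1"):
        "Oseltamivir remains a first-line treatment for H1N1.",
    ("H1N1", "LOCATION_OF", "hemagglutinin"):
        "The H1N1 virus carries the hemagglutinin surface protein.",
    ("statins", "TREATS", "hypercholesterolemia"):
        "Statins lower cholesterol levels.",
}

def summarize(concept, predications, sentences):
    """Relation-level retrieval: keep relations mentioning the query concept,
    then emit one interpreting sentence per relation as the extractive summary."""
    relevant = [t for t in predications if concept in (t[0], t[2])]
    return [sentences[t] for t in relevant]

summary = summarize("H1N1", predications, sentences)
print(len(summary))
```

In the paper's setting, the relevant relation set is also rendered as a graph, and sentence selection uses an information-retrieval scoring rather than a direct lookup.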
Schulz, S; Jansen, L
Medical decision support and other intelligent applications in the life sciences depend on increasing amounts of digital information. Knowledge bases as well as formal ontologies are being used to organize biomedical knowledge and data. However, these two kinds of artefacts are not always clearly distinguished. Whereas the popular RDF(S) standard provides an intuitive triple-based representation, it is semantically weak. Description logics based ontology languages like OWL-DL carry a clear-cut semantics, but they are computationally expensive, and they are often misinterpreted to encode all kinds of statements, including those which are not ontological. We distinguish four kinds of statements needed to comprehensively represent domain knowledge: universal statements, terminological statements, statements about particulars and contingent statements. We argue that the task of formal ontologies is solely to represent universal statements, while the non-ontological kinds of statements can nevertheless be connected with ontological representations. To illustrate these four types of representations, we use a running example from parasitology. We finally formulate recommendations for semantically adequate ontologies that can efficiently be used as a stable framework for more context-dependent biomedical knowledge representation and reasoning applications like clinical decision support systems.
Huang, Chuixiu; Seip, Knut Fredrik; Gjelstad, Astrid
Electromembrane extraction (EME) was presented as a new microextraction concept in 2006, and since its introduction, substantial research has been conducted to develop the concept in different areas of analytical chemistry. To date, more than 100 research papers have been published on EME. The present paper discusses recent developments in EME. The paper focuses on the principles of EME and discusses how to optimize operational parameters. In addition, pharmaceutical and biomedical applications of EME are reviewed, with emphasis on basic drugs, acidic drugs, amino acids, and peptides. Finally...
This book provides a broad overview of the topic Bioinformatics (medical informatics + biological information) with a focus on data, information and knowledge. From data acquisition and storage to visualization, privacy, regulatory, and other practical and theoretical topics, the author touches on several fundamental aspects of the innovative interface between the medical and computational domains that forms biomedical informatics. Each chapter starts by providing a useful inventory of definitions and commonly used acronyms for each topic, and throughout the text, the reader finds several real-world examples, methodologies, and ideas that complement the technical and theoretical background. Also, at the beginning of each chapter, a new section called "key problems" has been added, where the author discusses possible traps and unsolvable or major problems. This new edition includes new sections at the end of each chapter, called "future outlook and research avenues," providing pointers to future challenges.
As the volume of publications rapidly increases, searching for relevant information from the literature becomes more challenging. To complement standard search engines such as PubMed, it is desirable to have an advanced search tool that directly returns relevant biomedical entities such as targets, drugs, and mutations rather than a long list of articles. Some existing tools submit a query to PubMed and process retrieved abstracts to extract information at query time, resulting in a slow response time and limited coverage of only a fraction of the PubMed corpus. Other tools preprocess the PubMed corpus to speed up the response time; however, they are not constantly updated, and thus produce outdated results. Further, most existing tools cannot process sophisticated queries such as searches for mutations that co-occur with query terms in the literature. To address these problems, we introduce BEST, a biomedical entity search tool. BEST returns a list of 10 different types of biomedical entities including genes, diseases, drugs, targets, transcription factors, miRNAs, and mutations that are relevant to a user's query. To the best of our knowledge, BEST is the only system that processes free-text queries and returns up-to-date results in real time, including mutation information in the results. BEST is freely accessible at http://best.korea.ac.kr.
The relation between biomedical knowledge and clinical knowledge is discussed by comparing their respective structures. The knowledge of a disease as a biological phenomenon is constructed by the interaction of facts and theories from the main biomedical disciplines: epidemiology, diagnostics, clinical trials, therapy development and pathogenesis. Although these facts and theories are based on probabilities and extrapolations, the interaction provides a reliable and coherent structure, comparable to a Kuhnian paradigm. In the structure of clinical knowledge, i.e. knowledge of the patient with the disease, not only biomedical knowledge contributes to the structure but also economic and social relations, ethics and personal experience. However, the interaction between each of the participating "knowledges" in clinical knowledge is not based on mutual dependency and accumulation of different arguments from each, as in biomedical knowledge, but on competition and partial exclusion. Therefore, the structure of biomedical knowledge is different from that of clinical knowledge. This difference is used as the basis for a discussion in which the place of technology, evidence-based medicine and the gap between scientific and clinical knowledge are evaluated.
Panyam, Nagesh C; Verspoor, Karin; Cohn, Trevor; Ramamohanarao, Kotagiri
Relation extraction from biomedical publications is an important task in the area of semantic mining of text. Kernel methods for supervised relation extraction are often preferred over manual feature engineering methods when classifying highly ordered structures such as trees and graphs obtained from syntactic parsing of a sentence. Tree kernels such as the Subset Tree Kernel and Partial Tree Kernel have been shown to be effective for classifying constituency parse trees and basic dependency parse graphs of a sentence. Graph kernels such as the All Path Graph kernel (APG) and Approximate Subgraph Matching (ASM) kernel have been shown to be suitable for classifying general graphs with cycles, such as the enhanced dependency parse graph of a sentence. In this work, we present a high-performance Chemical-Induced Disease (CID) relation extraction system. We present a comparative study of kernel methods for the CID task and also extend our study to the Protein-Protein Interaction (PPI) extraction task, an important biomedical relation extraction task. We discuss novel modifications to the ASM kernel to boost its performance and a method to apply graph kernels for extracting relations expressed in multiple sentences. Our system for CID relation extraction attains an F-score of 60%, without using external knowledge sources or task-specific heuristics or rules. In comparison, the state-of-the-art Chemical-Disease Relation Extraction system achieves an F-score of 56% using an ensemble of multiple machine learning methods, which is then boosted to 61% with a rule-based system employing task-specific post-processing rules. For the CID task, graph kernels outperform tree kernels substantially, and the best performance is obtained with the APG kernel, which attains an F-score of 60%, followed by the ASM kernel at 57%. The performance difference between the ASM and APG kernels for CID sentence-level relation extraction is not significant. In our evaluation of ASM for the PPI task, ASM
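To give a flavor of what a graph kernel computes, here is a deliberately simplified path-based kernel over labeled dependency-like graphs. This is neither the APG nor the ASM kernel (APG, for instance, sums weighted contributions over all node pairs and paths); the graphs and edge labels are invented, and the sketch only illustrates the shared-substructure-counting idea common to such kernels.

```python
# Two tiny labeled graphs, as {node: [(edge_label, child)]}.
g1 = {"induces": [("nsubj", "chemical"), ("dobj", "disease")]}
g2 = {"causes":  [("nsubj", "chemical"), ("dobj", "disease")],
      "chemical": [("amod", "novel")]}

def labeled_paths(g, max_len=2):
    """All edge-label sequences of walks of length 1..max_len."""
    paths = set()
    frontier = [(n, ()) for n in g]
    while frontier:
        node, labels = frontier.pop()
        if labels:
            paths.add(labels)
        if len(labels) < max_len:
            for lab, child in g.get(node, []):
                frontier.append((child, labels + (lab,)))
    return paths

def path_kernel(ga, gb):
    """Count label sequences shared by the two graphs: a crude measure of
    structural similarity usable inside an SVM, like richer graph kernels."""
    return len(labeled_paths(ga) & labeled_paths(gb))

print(path_kernel(g1, g2))
```

A kernelized classifier (e.g. an SVM) never needs explicit feature vectors: it only needs such pairwise similarity scores between parse graphs.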
Bui, Quoc-Chinh; Sloot, Peter M A
The abundance of biomedical literature has attracted significant interest in novel methods to automatically extract biomedical relations from the literature. Until recently, most research was focused on extracting binary relations such as protein-protein interactions and drug-disease relations. However, these binary relations cannot fully represent the original biomedical data. Therefore, there is a need for methods that can extract fine-grained and complex relations known as biomedical events. In this article we propose a novel method to extract biomedical events from text. Our method consists of two phases. In the first phase, training data are mapped into structured representations. Based on that, templates are used to extract rules automatically. In the second phase, extraction methods are developed to process the obtained rules. When evaluated against the Genia event extraction abstract and full-text test datasets (Task 1), we obtain results with F-scores of 52.34 and 53.34, respectively, which are comparable to the state-of-the-art systems. Furthermore, our system achieves superior performance in terms of computational efficiency. Our source code is available for academic use at http://dl.dropbox.com/u/10256952/BioEvent.zip.
Zambach, Sine; Hansen, Jens Ulrik
Knowledge on regulatory relations, for example in regulatory pathways in biology, is used widely in experiment design by biomedical researchers and in systems biology. The knowledge has typically been represented either through simple graphs or through very expressive differential equation simulations...
Motrenko, Anastasia; Strijov, Vadim
We address the problem of segmenting nearly periodic time series into period-like segments. We introduce a definition of nearly periodic time series via triplets 〈basic shape, shape transformation, time scaling〉 that covers a wide range of time series. To split the time series into periods, we select a pair of principal components of the Hankel matrix. We then cut the trajectory of the selected principal components by its symmetry axis, thus obtaining half-periods that are merged into segments. We describe a method for automatic selection of the periodic pairs of principal components corresponding to the fundamental periodicity. We demonstrate the application of the proposed method to the problem of period extraction for accelerometric time series of human gait. We see automatic segmentation into periods as a problem of major importance for human activity recognition, since it yields interpretable segments: each extracted period can be seen as an ultimate entity of gait. The method we propose is more general than application-specific methods and can be used for any nearly periodic time series. We compare its performance to classical mathematical methods of period extraction and find that it is not only comparable to the alternatives, but in some cases performs better.
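The core of the method can be sketched numerically. This simplified version uses a single leading principal component of the Hankel (trajectory) matrix and zero as the symmetry axis, whereas the paper selects a pair of components and merges half-periods into segments; the signal, window length, and noise level are invented for illustration.

```python
import numpy as np

# Nearly periodic toy signal: a noisy sine with period T = 50 samples.
rng = np.random.default_rng(0)
T, n = 50, 500
x = np.sin(2 * np.pi * np.arange(n) / T) + 0.05 * rng.standard_normal(n)

# Hankel (trajectory) matrix with window length L: column i is x[i:i+L].
L = 100
H = np.column_stack([x[i:i + L] for i in range(n - L + 1)])

# A periodic signal concentrates its energy in a sine/cosine-like pair of
# principal components at the fundamental frequency; take the leading one.
U, s, Vt = np.linalg.svd(H, full_matrices=False)
pc = Vt[0]  # trajectory of the leading principal component

# Cutting where the component crosses its symmetry axis (zero, for a
# centered sinusoid) yields half-period boundaries; consecutive pairs
# of half-periods would then be merged into full periods.
crossings = np.where(np.diff(np.sign(pc)) != 0)[0]
half_periods = np.diff(crossings)
print(float(np.median(half_periods)))
```

For the toy signal, the median spacing between axis crossings recovers the half-period (about T/2 = 25 samples) despite the added noise.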
Zhu, Yongjun; Elemento, Olivier; Pathak, Jyotishman; Wang, Fei
Recent advances in biomedical research have generated a large volume of drug-related data. To effectively handle this flood of data, many initiatives have been taken to help researchers make good use of it. As a result of these initiatives, many drug knowledge bases have been constructed. They range from simple ones with specific focuses to comprehensive ones that contain information on almost every aspect of a drug. These curated drug knowledge bases have made significant contributions to the development of efficient and effective health information technologies for better health-care service delivery. Understanding and comparing existing drug knowledge bases and how they are applied in various biomedical studies will help us recognize the state of the art and design better knowledge bases in the future. In addition, researchers can gain insights into novel applications of the drug knowledge bases through a review of successful use cases. In this study, we provide a review of existing popular drug knowledge bases and their applications in drug-related studies. We discuss challenges in constructing and using drug knowledge bases as well as future research directions toward a better ecosystem of drug knowledge bases. © The Author(s) 2018. Published by Oxford University Press. All rights reserved.
Xu, Yun; Wang, ZhiHao; Lei, YiMing; Zhao, YuZhong; Xue, Yu
The exploding growth of the biomedical literature presents many challenges for biological researchers. One such challenge is the use of a great number of abbreviations. Extracting abbreviations and their definitions accurately is very helpful to biologists and also facilitates biomedical text analysis. Existing approaches fall into four broad categories: rule based, machine learning based, text alignment based and statistically based. State-of-the-art methods either focus exclusively on acronym-type abbreviations, or cannot recognize rare abbreviations. We propose a systematic method to extract abbreviations effectively. First, a scoring method is used to classify the abbreviations into acronym-type and non-acronym-type abbreviations; their corresponding definitions are then identified by two different methods: a text alignment algorithm for the former and a statistical method for the latter. A literature mining system, MBA, was constructed to extract both acronym-type and non-acronym-type abbreviations. An abbreviation-tagged literature corpus, the Medstract gold-standard corpus, was used to evaluate the system. MBA achieved a recall of 88% at a precision of 91% on the Medstract gold-standard corpus. We present a new literature mining system, MBA, for extracting biomedical abbreviations. Our evaluation demonstrates that the MBA system performs better than the others. It can identify the definitions not only of acronym-type abbreviations, including slightly irregular acronym-type abbreviations, but also of non-acronym-type abbreviations.
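For the acronym-type branch, a text alignment step in the spirit of the classical Schwartz-Hearst algorithm can look like the sketch below. It is an illustrative simplification, not MBA's actual algorithm: each short-form character is matched right-to-left against the candidate long form, with the first character required to begin a word.

```python
def find_long_form(short, candidate):
    """Right-to-left character alignment of a short form (e.g. 'HMM')
    against a candidate long form; returns the matched long form or None."""
    s = len(short) - 1
    c = len(candidate) - 1
    while s >= 0:
        ch = short[s].lower()
        # Skip candidate characters until one matches the short-form char;
        # the first short-form char must additionally begin a word.
        while c >= 0 and (candidate[c].lower() != ch or
                          (s == 0 and c > 0 and candidate[c - 1].isalnum())):
            c -= 1
        if c < 0:
            return None   # alignment failed: not an acronym of this phrase
        s -= 1
        c -= 1
    # Trim the candidate to whole words from the matched start position.
    start = candidate.rfind(" ", 0, c + 2) + 1
    return candidate[start:]

# Typical usage on a "long form (SF)" pattern taken from a sentence.
print(find_long_form("HMM", "hidden Markov model"))
print(find_long_form("WSD", "word sense disambiguation"))
```

Non-acronym-type abbreviations (e.g. nicknames with no letter overlap) defeat this alignment, which is why MBA handles them with a separate statistical method.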
Antunes, Rui; Matos, Sérgio
Word sense disambiguation (WSD) is an important step in biomedical text mining, which is responsible for assigning an unequivocal concept to an ambiguous term, improving the accuracy of biomedical information extraction systems. In this work we followed supervised and knowledge-based disambiguation approaches, with the best results obtained by supervised means. In the supervised method we used bag-of-words as local features and word embeddings as global features. In the knowledge-based method we combined word embeddings, concept textual definitions extracted from the UMLS database, and concept association values calculated from the MeSH co-occurrence counts from MEDLINE articles. Also, in the knowledge-based method, we tested different word embedding averaging functions to calculate the surrounding context vectors, with the goal of giving more importance to the words closest to the ambiguous term. The MSH WSD dataset, the most common dataset used for evaluating biomedical concept disambiguation, was used to evaluate our methods. We obtained a top accuracy of 95.6% by supervised means, while the best knowledge-based accuracy was 87.4%. Our results show that word embedding models improved the disambiguation accuracy, proving to be a powerful resource in the WSD task.
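The distance-weighted embedding averaging idea can be sketched as follows. The embeddings here are random stand-ins for real pre-trained vectors, and the exponential decay weighting is just one plausible choice among the averaging functions the authors tested.

```python
import numpy as np

# Toy word embeddings (random stand-ins for real pre-trained vectors).
rng = np.random.default_rng(42)
vocab = ["the", "patient", "cold", "virus", "symptoms", "showed"]
emb = {w: rng.standard_normal(8) for w in vocab}

def context_vector(tokens, target_idx, decay=0.5):
    """Weighted average of surrounding word embeddings, where weights
    decay exponentially with distance from the ambiguous target term."""
    vecs, weights = [], []
    for i, tok in enumerate(tokens):
        if i == target_idx or tok not in emb:
            continue
        w = decay ** abs(i - target_idx)   # closer words weigh more
        vecs.append(emb[tok])
        weights.append(w)
    return np.average(vecs, axis=0, weights=weights)

def cosine(a, b):
    """Cosine similarity between the context vector and a sense vector."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

tokens = ["the", "patient", "showed", "cold", "symptoms"]
ctx = context_vector(tokens, target_idx=3)  # disambiguate "cold"
print(ctx.shape)
```

A sense is then chosen by maximizing cosine similarity between `ctx` and each candidate sense vector, e.g. vectors built from UMLS concept definitions.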
Zhou, Deyu; Zhong, Dayou
Scientists have devoted decades of effort to understanding the interactions between proteins or RNA production. This information might enrich current knowledge on drug reactions or the development of certain diseases. Nevertheless, due to its lack of explicit structure, literature in the life sciences, one of the most important sources of this information, prevents computer-based systems from accessing it. Therefore, biomedical event extraction, which automatically acquires knowledge of molecular events from research articles, has recently attracted community-wide efforts. Most approaches are based on statistical models, requiring large-scale annotated corpora to precisely estimate the models' parameters. However, such corpora are usually difficult to obtain in practice. Therefore, employing un-annotated data through semi-supervised learning for biomedical event extraction is a feasible solution that has attracted growing interest. In this paper, a semi-supervised learning framework based on hidden topics for biomedical event extraction is presented. In this framework, sentences in the un-annotated corpus are elaborately and automatically assigned event annotations based on their distances to sentences in the annotated corpus. More specifically, not only the structures of the sentences but also the hidden topics embedded in the sentences are used for describing the distance. The sentences and newly assigned event annotations, together with the annotated corpus, are employed for training. Experiments were conducted on the multi-level event extraction corpus, a gold standard corpus. Experimental results show that an improvement of more than 2.2% in F-score on biomedical event extraction is achieved by the proposed framework when compared to the state-of-the-art approach. The results suggest that by incorporating un-annotated data, the proposed framework indeed improves the performance of the state-of-the-art event extraction system, and the similarity between sentences might be precisely
Background: The proper handling and disposal of biomedical waste is imperative. Unfortunately, laxity and a lack of adequate knowledge and practice in biomedical waste disposal lead to serious health and environmental concerns. Aim: To assess the knowledge and practice of biomedical waste management ...
Kapoor, Daljit; Nirola, Ashutosh; Kapoor, Vinod; Gambhir, Ramandeep-Singh
Proper handling, treatment, and disposal of biomedical waste are important elements in any health-care setting. In recent years, not much attention has been paid to the management of Biomedical Waste (BMW) in dental colleges and hospitals in India. The present systematic review was conducted to assess knowledge and awareness regarding BMW management among staff and students of dental teaching institutions in India. Relevant cross-sectional studies on BMW management in Indian dental teaching institutions were identified through both electronic (PubMed, EMBASE, etc.) and manual searches; after the necessary exclusions, six studies were included in the review. Potential biases were addressed and relevant data were extracted by the concerned investigators. In one of the studies, conducted in Haryana, colour coding of waste was not practised by 67% of the subjects. In another study, almost all subjects agreed that exposure to hazardous health-care waste can result in disease or infection. According to the reports of yet another study, none of the respondents was able to name the legislative act governing BMW when asked. The results of the present review show that the knowledge and awareness of subjects were inadequate and that there is considerable variation in BMW practice and management. There is a great need for continuing education and training programmes in dental teaching institutions in India. Key words: Biomedical waste, knowledge, awareness, dentists, institution.
Grant, J P
The quality and quantity of biomedical research on diseases of the poor, such as diarrhea and schistosomiasis, are inferior to those on diseases of the rich, such as cancer. For example, few new and better vaccines reach the market, even though the scientific community is close to an effective rotavirus vaccine. Researchers have made biomedical breakthroughs on substantial health problems, yet few health workers and caretakers apply them. Bridging the gap between knowledge and technologies and their application by those who would most benefit remains a great public health challenge. The Child Survival and Development Revolution (CSDR) is an attempt to meet this challenge. It uses low-cost, high-impact medical knowledge and technologies of mass applicability to improve children's health in developing countries. These technologies include growth monitoring, oral rehydration therapy, immunization, female education, family planning, and food supplementation. Colombia leads the CSDR: its top political leaders encourage everyone to work toward child survival. For example, they mobilized the media, the Catholic Church, school teachers, government ministries, business leaders, and nongovernmental organizations to immunize children during a national campaign, succeeding in immunizing over 75%. Campaign leaders designed the campaign so that immunization activities would continue after it ended and would serve to expand other primary health care programs. The CSDR demonstrates that, with even modest political will, health workers and caretakers can achieve a 50% reduction of the 1980 child death rate by 2000. It has also contributed to increased support for the Convention on the Rights of the Child. Nevertheless, much remains to be done, so health scientists need to encourage a force just under the surface to emerge: social mobilization and empowerment to improve child health.
Himmelstein, Daniel Scott; Lizee, Antoine; Hessler, Christine; Brueggeman, Leo; Chen, Sabrina L; Hadley, Dexter; Green, Ari; Khankhanian, Pouya; Baranzini, Sergio E
The ability to computationally predict whether a compound treats a disease would improve the economy and success rate of drug approval. This study describes Project Rephetio to systematically model drug efficacy based on 755 existing treatments. First, we constructed Hetionet (neo4j.het.io), an integrative network encoding knowledge from millions of biomedical studies. Hetionet v1.0 consists of 47,031 nodes of 11 types and 2,250,197 relationships of 24 types. Data were integrated from 29 public resources to connect compounds, diseases, genes, anatomies, pathways, biological processes, molecular functions, cellular components, pharmacologic classes, side effects, and symptoms. Next, we identified network patterns that distinguish treatments from non-treatments. Then, we predicted the probability of treatment for 209,168 compound-disease pairs (het.io/repurpose). Our predictions were validated on two external sets of treatments and provided pharmacological insights on epilepsy, suggesting they will help prioritize drug repurposing candidates. This study was entirely open and received real-time feedback from 40 community members.
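The network patterns Rephetio scores are typed paths through the heterogeneous graph, e.g. Compound-binds-Gene-associates-Disease. A minimal sketch of counting one such path type in a toy graph follows; the node names and edge types here are illustrative stand-ins, not Hetionet's actual schema or scoring method:

```python
# Toy heterogeneous graph, one adjacency map per edge type.
binds = {"drugA": {"GENE1", "GENE2"}}  # Compound-binds-Gene edges
associates = {"GENE1": {"epilepsy"},   # Gene-associates-Disease edges
              "GENE2": {"epilepsy", "asthma"}}

def metapath_count(compound, disease):
    """Count Compound-binds-Gene-associates-Disease paths between a pair."""
    return sum(1
               for gene in binds.get(compound, ())
               for d in associates.get(gene, ())
               if d == disease)

# Two paths connect drugA to epilepsy: one via GENE1, one via GENE2.
n = metapath_count("drugA", "epilepsy")
```

Counts like these, computed per path type, can then serve as features for a classifier that separates known treatments from non-treatments.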
Caron-Flinterman, J.F.; Broerse, J.E.W.; Bunders-Aelen, J.G.F.
Both governments and patients' movements are increasingly making a plea for the active participation of patients in biomedical research processes. One of the arguments concerns the contribution that patients could make to the relevance and quality of biomedical research based on their experiential knowledge.
Naderi, Nona; Witte, René
We present the first comprehensive, fully open-source approach to automatically extracting impacts and related relevant information from the biomedical literature. We assessed the performance of our work on manually annotated corpora, and the results show the reliability of our approach. Representing the extracted information in a structured format facilitates knowledge management and aids database curation and correction. Furthermore, access to the analysis results is provided through multiple interfaces, including web services for automated data integration and desktop-based solutions for end-user interaction.
Aronson Alan R
Background: Word sense disambiguation (WSD) algorithms attempt to select the proper sense of ambiguous terms in text. Resources like the UMLS provide a reference thesaurus for annotating the biomedical literature. Statistical learning approaches have produced good results, but the size of the UMLS makes producing training data that covers the whole domain infeasible. Methods: We present research on existing WSD approaches based on knowledge bases, which complement the studies performed on statistical learning. We compare four approaches that rely on the UMLS Metathesaurus as the source of knowledge. The first approach compares the overlap between the context of the ambiguous word and each candidate sense, based on a representation built out of the sense's definitions, synonyms, and related terms. The second approach collects training data for each candidate sense by building queries from monosemous synonyms and related terms; these queries are used to retrieve MEDLINE citations, and a machine learning approach is then trained on this corpus. The third approach is a graph-based method that exploits the structure of the Metathesaurus network of relations to perform unsupervised WSD, ranking nodes in the graph according to their relative structural importance. The last approach uses the semantic types assigned to the concepts in the Metathesaurus: the context of the ambiguous word and the semantic types of the candidate concepts are mapped to Journal Descriptors, and these mappings are compared to decide among the candidate concepts. Results: Accuracy estimates for the different methods are provided on the WSD test collection available from the NLM. Conclusions: We found that the last approach achieves better results than the other methods. The graph-based approach, which uses the structure of the Metathesaurus network to estimate the relevance of Metathesaurus concepts, does not perform well.
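The first of the four approaches, scoring each candidate sense by the word overlap between the ambiguous term's context and a bag of words built from the sense's definitions, synonyms, and related terms, can be sketched as follows. The sense profiles below are invented stand-ins for real UMLS Metathesaurus entries:

```python
def disambiguate(context_words, sense_profiles):
    """Pick the sense whose word profile overlaps most with the context.

    context_words: tokens surrounding the ambiguous term.
    sense_profiles: dict mapping sense id -> set of words drawn from that
    sense's definition, synonyms, and related terms.
    """
    context = set(w.lower() for w in context_words)
    # Score each candidate sense by bag-of-words overlap with the context.
    scores = {sense: len(context & profile)
              for sense, profile in sense_profiles.items()}
    return max(scores, key=scores.get)

# Toy profiles for the ambiguous term "cold" (hypothetical, not real UMLS data).
profiles = {
    "C0009443": {"common", "virus", "infection", "nasal", "symptom"},
    "C0009264": {"temperature", "low", "freezing", "weather"},
}
sense = disambiguate(["patient", "reported", "nasal", "symptom", "virus"],
                     profiles)
```

This is the classic Lesk-style overlap idea; the UMLS-based systems described above differ mainly in how the profiles are assembled from Metathesaurus content.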
Muzaffar, Abdul Wahab; Azam, Farooque; Qamar, Usman
Information extraction from unstructured text is a complex task. Although manual information extraction often produces the best results, the exponential increase in data size makes manual extraction of biomedical data hard to manage. Thus, there is a need for automatic tools and techniques for information extraction in biomedical text mining. Relation extraction, a significant area within biomedical information extraction, has gained much importance in the last two decades. A lot of work on biomedical relation extraction has focused on rule-based and machine learning techniques; in the last decade, the focus has shifted to hybrid approaches, which show better results. This research presents a hybrid feature set for classifying relations between biomedical entities. The main contribution lies in the semantic feature set, where verb phrases are ranked using the Unified Medical Language System (UMLS) and a ranking algorithm. Support Vector Machine and Naïve Bayes, two effective machine learning techniques, are used to classify these relations. Our approach has been validated on the standard biomedical text corpus obtained from MEDLINE 2001. In conclusion, our framework outperforms all state-of-the-art approaches for relation extraction on the same corpus.
Automatic summarization of biomedical literature usually relies on domain knowledge from external sources to build rich semantic representations of the documents to be summarized. In this paper, we investigate the impact of the knowledge source used on the quality of the summaries that are generated. We present a method for representing a set of documents relevant to a given biological entity or topic as a semantic graph of domain concepts and relations. Different graphs are created by using different combinations of ontologies and vocabularies within the UMLS (including GO, SNOMED-CT, HUGO, and all available vocabularies in the UMLS) to retrieve domain concepts, and different types of relationships (co-occurrence and semantic relations from the UMLS Metathesaurus and Semantic Network) are used to link the concepts in the graph. The different graphs are then used as input to a summarization system that produces summaries composed of the most relevant sentences from the original documents. Our experiments demonstrate that the choice of the knowledge source used to model the text has a significant impact on the quality of the automatic summaries. In particular, we find that, when summarizing gene-related literature, using GO, SNOMED-CT, and HUGO to extract domain concepts results in significantly better summaries than using all available vocabularies in the UMLS. This finding suggests that successful biomedical summarization requires selection of the appropriate knowledge source, whose coverage, specificity, and relations must be in accordance with the type of documents to be summarized. Copyright © 2014 Elsevier Inc. All rights reserved.
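A summarizer over such a semantic graph typically scores each sentence by the weight of the domain concepts it covers and keeps the top-scoring sentences. A minimal sketch of that selection step, with made-up concepts and weights standing in for UMLS annotations and graph-derived concept relevance:

```python
def summarize(sentences, sentence_concepts, concept_weight, k=1):
    """Return the k sentences whose annotated concepts carry the most weight.

    sentence_concepts: list of concept sets, aligned with sentences.
    concept_weight: dict mapping concept -> relevance weight, e.g. derived
    from the concept's centrality in the semantic graph.
    """
    ranked = sorted(
        range(len(sentences)),
        key=lambda i: sum(concept_weight.get(c, 0.0)
                          for c in sentence_concepts[i]),
        reverse=True,
    )
    return [sentences[i] for i in ranked[:k]]

sents = ["BRCA1 regulates DNA repair.", "The weather was mild."]
concepts = [{"BRCA1", "DNA repair"}, set()]
weights = {"BRCA1": 2.0, "DNA repair": 1.5}
top = summarize(sents, concepts, weights, k=1)
```

The paper's finding then amounts to this: the choice of vocabularies determines which concepts get annotated and weighted, and therefore which sentences win.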
Wang, Anran; Wang, Jian; Lin, Hongfei; Zhang, Jianhai; Yang, Zhihao; Xu, Kan
Biomedical event extraction is one of the frontier domains in biomedical research. Its two main subtasks, trigger identification and argument detection, can both be treated as classification problems. However, traditional state-of-the-art methods are based on support vector machines (SVMs) with massive manually designed one-hot features, which require enormous work yet fail to capture semantic relations among words. In this paper, we propose a multiple distributed representation method for biomedical event extraction. The method combines context features built from dependency-based word embeddings with task-based features represented in a distributed way, and feeds them as input to deep learning models for training. Finally, a softmax classifier labels the candidate examples. Experimental results on the Multi-Level Event Extraction (MLEE) corpus show higher F-scores of 77.97% in trigger identification and 58.31% overall, compared to the state-of-the-art SVM method. Our distributed representation method avoids the semantic-gap and dimension-disaster problems of traditional one-hot representations. The promising results demonstrate that the proposed method is effective for biomedical event extraction.
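The classification step described above, concatenating dependency-based word embeddings with distributed task features and applying a softmax over candidate labels, reduces to the following pure-Python sketch. The vectors, weights, and label names are toy values; a real system learns the weights with a deep model:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # shift for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

def classify(word_emb, task_feats, weight_rows, labels):
    """Score each label with a linear layer over the concatenated features."""
    x = word_emb + task_feats  # feature concatenation
    logits = [sum(w * xi for w, xi in zip(row, x)) for row in weight_rows]
    probs = softmax(logits)
    return labels[probs.index(max(probs))]

# Toy weights for two trigger labels (illustrative only).
W = [[1.0, 0.0, 0.5, 0.0],   # "Regulation"
     [0.0, 1.0, 0.0, 0.5]]   # "None"
label = classify([0.9, 0.1], [0.8, 0.2], W, ["Regulation", "None"])
```

The point of the distributed representation is that the input vectors are dense and carry semantic similarity, unlike the sparse one-hot features the SVM baseline relies on.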
Margolis, Ronald; Derr, Leslie; Dunn, Michelle; Huerta, Michael; Larkin, Jennie; Sheehan, Jerry; Guyer, Mark; Green, Eric D
Biomedical research has generated and will continue to generate large amounts of data (termed 'big data') in many formats and at all levels. Consequently, there is an increasing need to better understand and mine the data to further knowledge and foster new discovery. The National Institutes of Health (NIH) has initiated a Big Data to Knowledge (BD2K) initiative to maximize the use of biomedical big data. BD2K seeks to better define how to extract value from the data, both for the individual investigator and the overall research community; create the analytic tools needed to enhance the utility of the data; provide the next generation of trained personnel; and develop data science concepts and tools that can be made available to all stakeholders. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Many biomedical relation extraction approaches are based on supervised machine learning, requiring an annotated corpus. Distant supervision aims at training a classifier by combining a knowledge base with a corpus, reducing the amount of manual effort necessary. This is particularly useful for biomedicine because databases and ontologies are available for many biological processes, while the availability of annotated corpora is still limited. We studied the extraction of microRNA-gene relations from text. MicroRNA regulation is an important biological process due to its close association with human diseases. The proposed method, IBRel, is based on distantly supervised multi-instance learning. We evaluated IBRel on three datasets, and the results were compared with a co-occurrence approach as well as a supervised machine learning algorithm. While supervised learning outperformed it on two of those datasets, IBRel obtained an F-score 28.3 percentage points higher on the dataset for which no training set had been developed specifically. To demonstrate the applicability of IBRel, we used it to extract 27 miRNA-gene relations from recently published papers about cystic fibrosis. Our results demonstrate that our method can be successfully used to extract relations from the literature about a biological process without an annotated corpus. The source code and data used in this study are available at https://github.com/AndreLamurias/IBRel.
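The distant-supervision step can be sketched as follows: any sentence mentioning a miRNA-gene pair that appears in the knowledge base is (noisily) labeled positive, and those noisy labels replace a manually annotated corpus. The pairs and sentences below are invented for illustration and are not drawn from IBRel's data:

```python
KNOWN_PAIRS = {("miR-21", "PTEN")}  # stand-in for a curated relation database

def distant_label(sentence, mirnas, genes):
    """Label a sentence positive if it mentions any KB-confirmed pair."""
    mentioned_m = [m for m in mirnas if m in sentence]
    mentioned_g = [g for g in genes if g in sentence]
    return any((m, g) in KNOWN_PAIRS
               for m in mentioned_m for g in mentioned_g)

pos = distant_label("miR-21 directly represses PTEN expression.",
                    ["miR-21", "miR-155"], ["PTEN", "TP53"])
neg = distant_label("miR-155 levels were measured alongside TP53.",
                    ["miR-21", "miR-155"], ["PTEN", "TP53"])
```

Multi-instance learning, as used by IBRel, then mitigates the noise by requiring only that at least one sentence per entity pair actually expresses the relation.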
Nuzzo, Angelo; Mulas, Francesca; Gabetta, Matteo; Arbustini, Eloisa; Zupan, Blaz; Larizza, Cristiana; Bellazzi, Riccardo
Due to the overwhelming volume of published scientific papers, tools for automated literature analysis are essential to support current biomedical research. We have developed a knowledge extraction tool to help researchers discover useful information that can support their reasoning process. The tool is composed of a search engine based on text mining and natural language processing techniques, and an analysis module that processes the search results to build annotation similarity networks. We tested our approach on the available knowledge about the genetic mechanisms of cardiac diseases, where the target is to find both known and hypothetical relations between specific candidate genes and the trait of interest. We show that the system (i) effectively retrieves medical concepts and genes and (ii) plays a relevant role in assisting researchers in the formulation and evaluation of novel literature-based hypotheses.
Background: The waste produced in the course of health-care activities carries a higher potential for infection and injury than any other type of waste. Inadequate and inappropriate knowledge of health-care waste handling may have serious health consequences and a significant impact on the environment as well. Objective: To assess the knowledge, attitude, and practices of doctors, nurses, laboratory technicians, and sanitary staff regarding biomedical waste management. Materials and Methods: This was a cross-sectional study. Setting: The study was conducted among hospitals (bed capacity >100) of Allahabad city. Participants: Medical personnel included doctors (75), nurses (60), laboratory technicians (78), and sanitary staff (70). Results: Doctors, nurses, and laboratory technicians have better knowledge than sanitary staff regarding biomedical waste management. Knowledge regarding colour coding and waste segregation at source was found to be better among nurses and laboratory staff than among doctors. Regarding practices related to biomedical waste management, sanitary staff were ignorant on all counts. Injury reporting was low across all groups of health professionals. Conclusion: The importance of training in biomedical waste management needs emphasis; lack of proper and complete knowledge about biomedical waste management affects practices of appropriate waste disposal.
Ranjan, Rajeev; Pathak, Ruchi; Singh, Dhirendra K.; Jalaluddin, Md.; Kore, Shobha A.; Kore, Abhijeet R.
Aims and Objectives: Biomedical waste management has become a concern with the increasing number of dental practitioners in India. As health-care professionals, dentists should be aware of the safe disposal of biomedical waste and the recycling of dental materials to minimize biohazards to the environment. The aim of the present study was to assess awareness of biomedical waste management as well as knowledge of effective recycling and reuse of dental materials among dental students. Materials and Methods: This cross-sectional study was conducted among dental students from all dental colleges of Bhubaneswar, Odisha (India) from February 2016 to April 2016. A total of 500 students (208 males and 292 females) participated in the study, which was conducted in two phases. A questionnaire was distributed to assess awareness of biomedical waste management and knowledge of effective recycling of dental materials, and the collected data were examined on a 5-point unipolar scale, in percentages, to assess relative awareness in these two categories. The Statistical Package for the Social Sciences was used to analyze the collected data. Results: Forty-four percent of the dental students were not at all aware of biomedical waste management, 22% were moderately aware, 21% slightly aware, 7% very aware, and 5% fell into the extremely aware category. Similarly, a higher percentage of participants (61%) were completely unaware of the recycling and reuse of biomedical waste. Conclusion: Dental students lack sufficient knowledge of biomedical waste management and of the recycling or reuse of dental materials. Considering its impact on the environment, biomedical waste management requires immediate academic attention to increase awareness during training courses. PMID:27891315
Røgen, Peter; Koehl, Patrice
A potential built from geometric knowledge extracted from native and misfolded conformers of protein structures. This new potential, the Metric Protein Potential (MPP), has two main features that are key to its success. First, it is composite, in that it includes local and nonlocal geometric information on proteins...
Cooper, Gregory F; Bahar, Ivet; Becich, Michael J; Benos, Panayiotis V; Berg, Jeremy; Espino, Jeremy U; Glymour, Clark; Jacobson, Rebecca Crowley; Kienholz, Michelle; Lee, Adrian V; Lu, Xinghua; Scheines, Richard
The Big Data to Knowledge (BD2K) Center for Causal Discovery is developing and disseminating an integrated set of open source tools that support causal modeling and discovery of biomedical knowledge from large and complex biomedical datasets. The Center integrates teams of biomedical and data scientists focused on the refinement of existing and the development of new constraint-based and Bayesian algorithms based on causal Bayesian networks, the optimization of software for efficient operation in a supercomputing environment, and the testing of algorithms and software developed using real data from 3 representative driving biomedical projects: cancer driver mutations, lung disease, and the functional connectome of the human brain. Associated training activities provide both biomedical and data scientists with the knowledge and skills needed to apply and extend these tools. Collaborative activities with the BD2K Consortium further advance causal discovery tools and integrate tools and resources developed by other centers. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: email@example.com.
Rubin, Daniel L.; Lewis, Suzanna E.; Mungall, Chris J.; Misra,Sima; Westerfield, Monte; Ashburner, Michael; Sim, Ida; Chute,Christopher G.; Solbrig, Harold; Storey, Margaret-Anne; Smith, Barry; Day-Richter, John; Noy, Natalya F.; Musen, Mark A.
The National Center for Biomedical Ontology (http://bioontology.org) is a consortium that comprises leading informaticians, biologists, clinicians, and ontologists funded by the NIH Roadmap to develop innovative technology and methods that allow scientists to record, manage, and disseminate biomedical information and knowledge in machine-processable form. The goals of the Center are: (1) to help unify the divergent and isolated efforts in ontology development by promoting high-quality, open-source, standards-based tools to create, manage, and use ontologies; (2) to create new software tools so that scientists can use ontologies to annotate and analyze biomedical data; (3) to provide a national resource for the ongoing evaluation, integration, and evolution of biomedical ontologies and associated tools and theories in the context of driving biomedical projects (DBPs); and (4) to disseminate the tools and resources of the Center and to identify, evaluate, and communicate best practices of ontology development to the biomedical community. The Center is working toward these objectives by providing tools to develop ontologies and to annotate experimental data, and by developing resources to integrate and relate existing ontologies as well as by creating repositories of biomedical data that are annotated using those ontologies. The Center is providing training workshops in ontology design, development, and usage, and is also pursuing research in ontology evaluation, quality, and the use of ontologies to promote scientific discovery. Through the research activities within the Center, collaborations with the DBPs, and interactions with the biomedical community, our goal is to help scientists work more effectively in the e-science paradigm, enhancing experiment design, experiment execution, data analysis, information synthesis, hypothesis generation and testing, and understanding of human disease.
Huang, Jingshan; Dou, Dejing; Dang, Jiangbo; Pardue, J Harold; Qin, Xiao; Huan, Jun; Gerthoffer, William T; Tan, Ming
Computational techniques have long been adopted in medical and biological systems. There is no doubt that the development and application of computational methods will greatly help in better understanding biomedical and biological functions. Large amounts of datasets have been produced by biomedical and biological experiments and simulations. For researchers to gain knowledge from original data, nontrivial transformation is necessary, which is regarded as a critical link in the chain of knowledge acquisition, sharing, and reuse. Challenges that have been encountered include: how to efficiently and effectively represent human knowledge in formal computing models, how to take advantage of semantic text mining techniques rather than traditional syntactic text mining, and how to handle security issues during knowledge sharing and reuse. This paper summarizes the state of the art in these research directions. We aim to provide readers with an introduction to the major computing themes to be applied to medical and biological research.
Li, Chunhua; Zhao, Pengpeng; Sheng, Victor S; Xian, Xuefeng; Wu, Jian; Cui, Zhiming
Machine-constructed knowledge bases often contain noisy and inaccurate facts. There is significant work on developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how to use limited human resources to maximize the quality improvement of a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and perform inference over candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts for crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods at a reasonable crowdsourcing cost.
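A semantic constraint of the kind described, for example mutual exclusion between relation types, can be used to flag candidate facts worth sending to the crowd. A minimal sketch with hypothetical facts and a single made-up constraint (not the paper's actual rank-based or graph-based selection algorithms):

```python
# Candidate facts as (subject, relation, object, confidence) tuples.
candidates = [
    ("aspirin", "treats", "headache", 0.9),
    ("aspirin", "causes", "headache", 0.6),
    ("aspirin", "treats", "fever", 0.8),
]

# Mutual-exclusion constraint: the same (subject, object) pair cannot be
# linked by both of these relations at once.
EXCLUSIVE = {frozenset({"treats", "causes"})}

def conflicting_pairs(facts):
    """Return (subject, object) pairs whose relations violate a constraint."""
    by_pair = {}
    for s, r, o, _conf in facts:
        by_pair.setdefault((s, o), set()).add(r)
    return [pair for pair, rels in by_pair.items()
            if any(ex <= rels for ex in EXCLUSIVE)]

# Pairs flagged here are the ones worth spending crowdsourcing budget on.
to_crowdsource = conflicting_pairs(candidates)
```

Detecting conflicts this way focuses limited human labelling on the facts where a crowd answer resolves the most uncertainty, which is the core idea behind the selection algorithms.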
Quan, Changqin; Wang, Meng; Ren, Fuji
The wealth of interaction information provided in biomedical articles has motivated the implementation of text mining approaches to automatically extract biomedical relations. This paper presents an unsupervised method based on pattern clustering and sentence parsing for biomedical relation extraction. The pattern clustering algorithm is based on the polynomial kernel method and identifies interaction words from unlabeled data; these interaction words are then used in relation extraction between entity pairs. Dependency parsing and phrase structure parsing are combined for relation extraction. Based on the semi-supervised KNN algorithm, we extend the proposed unsupervised approach to a semi-supervised approach by combining pattern clustering, dependency parsing, and phrase structure parsing rules. We evaluated the approaches on two different tasks: (1) protein-protein interaction extraction, and (2) gene-suicide association extraction. The evaluation of task (1) on the benchmark dataset (the AImed corpus) showed that our proposed unsupervised approach outperformed three supervised methods, which were rule-based, SVM-based, and kernel-based respectively. The proposed semi-supervised approach is superior to existing semi-supervised methods. The evaluation of gene-suicide association extraction on a smaller dataset from the Genetic Association Database and a larger dataset from publicly available PubMed showed that the proposed unsupervised and semi-supervised methods achieved much higher F-scores than the co-occurrence-based method.
Ogunrin, Olubunmi A; Daniel, Folasade; Ansa, Victor
Responsibility for the protection of research participants from harm and exploitation rests on Research Ethics Committees and principal investigators. The Nigerian National Code of Health Research Ethics defines the responsibilities of stakeholders in research, so knowledge of it among researchers will likely aid the ethical conduct of research. The levels of awareness and knowledge of the Code among biomedical researchers in southern Nigerian research institutions were assessed. Four institutions were selected using a stratified random sampling technique. Research participants were selected by purposive sampling and completed a pre-tested structured questionnaire. A total of 102 biomedical researchers completed the questionnaires. Thirty percent of the participants were aware of the National Code, though 64% had attended at least one training seminar in research ethics. Twenty-five percent had fairly acceptable knowledge (scores 50%-74%) and 10% had excellent knowledge of the Code (score ≥75%). Ninety-five percent expressed intentions to learn more about the National Code and agreed that it is highly relevant to the ethical conduct of research. Awareness and knowledge of the Code were found to be very limited among biomedical researchers in southern Nigeria. There is a need to improve awareness and knowledge through ethics seminars and training. Use of existing Nigeria-specific online training resources is also encouraged.
Guo, Jian; Qian, Kun; Zhang, Gongxuan; Xu, Huijie; Schuller, Björn
The advent of 'Big Data' and 'Deep Learning' offers both a great challenge and a huge opportunity for personalised health care. In machine learning-based biomedical data analysis, feature extraction is a key step for 'feeding' the subsequent classifiers. With increasing amounts of biomedical data, extracting features from these 'big' data is an intensive and time-consuming task. In this case study, we employ a Graphics Processing Unit (GPU) via Python to extract features from a large corpus of snore sound data. These features can subsequently be imported into many well-known deep learning training frameworks without any format processing. The snore sound data were collected from several hospitals (20 subjects, with 770-990 MB per subject - in total 17.20 GB). Experimental results show that our GPU-based processing significantly speeds up the feature extraction phase, by up to seven times compared to the previous CPU system.
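GPU specifics aside, this kind of acoustic feature extraction is framewise and embarrassingly parallel, which is why it maps well onto a GPU. A pure-Python sketch of one typical per-frame feature, short-time log energy; the frame sizes and the feature itself are illustrative assumptions, not the study's exact configuration:

```python
import math

def frame_log_energy(samples, frame_len=256, hop=128):
    """Compute log energy per overlapping frame of an audio signal.

    Each frame is independent of the others, so on a GPU all frames can
    be processed in parallel; here we simply loop sequentially.
    """
    feats = []
    for start in range(0, len(samples) - frame_len + 1, hop):
        frame = samples[start:start + frame_len]
        energy = sum(s * s for s in frame)
        feats.append(math.log(energy + 1e-10))  # small floor avoids log(0)
    return feats

# A toy signal: 512 samples of a sine wave, yielding 3 overlapping frames.
signal = [math.sin(2 * math.pi * 5 * t / 512) for t in range(512)]
feats = frame_log_energy(signal)
```

On a GPU (e.g. via an array library), the per-frame loop becomes a single batched operation over all frames, which is the source of the reported speedup.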
Yin, Xu-Cheng; Yang, Chun; Pei, Wei-Yi; Man, Haixia; Zhang, Jun; Learned-Miller, Erik; Yu, Hong
Hundreds of millions of figures are available in biomedical literature, representing important biomedical experimental evidence. Since text is a rich source of information in figures, automatically extracting such text may assist in the task of mining figure information. A high-quality ground truth standard can greatly facilitate the development of an automated system. This article describes DeTEXT: a database for evaluating text extraction from biomedical literature figures. It is the first publicly available, human-annotated, high-quality, and large-scale figure-text dataset with 288 full-text articles, 500 biomedical figures, and 9308 text regions. This article describes how figures were selected from open-access full-text biomedical articles and how annotation guidelines and annotation tools were developed. We also discuss the inter-annotator agreement and the reliability of the annotations. We summarize the statistics of the DeTEXT data and make available evaluation protocols for DeTEXT. Finally, we lay out challenges we observed in the automated detection and recognition of figure text and discuss research directions in this area. DeTEXT is publicly available for downloading at http://prir.ustb.edu.cn/DeTEXT/.
DIANA ELENA CODREANU
Managers of economic organizations have a large volume of information at their disposal and practically face an avalanche of information, yet they cannot operate by studying reports containing volumes of detailed, uncorrelated data, because the fate of an organization may be decided in fractions of time. Thus, to take the best and most effective decisions in real time, managers need the correct information presented quickly and synthetically, yet with enough relevance to allow predictions and analysis. This paper highlights solutions for extracting knowledge from data, namely data mining. This technology does not merely verify hypotheses; it aims at discovering new knowledge, so that an economic organization can cope with fierce competition in the market.
Peng, Yifan; Torii, Manabu; Wu, Cathy H; Vijay-Shanker, K
Text mining is increasingly used in the biomedical domain because of its ability to automatically gather information from large amounts of scientific articles. One important task in biomedical text mining is relation extraction, which aims to identify designated relations among biological entities reported in literature. A relation extraction system achieving high performance is expensive to develop because of the substantial time and effort required for its design and implementation. Here, we report a novel framework to facilitate the development of a pattern-based biomedical relation extraction system. It has several unique design features: (1) leveraging syntactic variations possible in a language and automatically generating extraction patterns in a systematic manner, (2) applying sentence simplification to improve the coverage of extraction patterns, and (3) identifying referential relations between a syntactic argument of a predicate and the actual target expected in the relation extraction task. A relation extraction system derived using the proposed framework achieved overall F-scores of 72.66% for the Simple events and 55.57% for the Binding events on the BioNLP-ST 2011 GE test set, comparing favorably with the top-performing systems that participated in the BioNLP-ST 2011 GE task. We obtained similar results on the BioNLP-ST 2013 GE test set (80.07% and 60.58%, respectively). We conducted additional experiments on the training and development sets to provide a more detailed analysis of the system and its individual modules. This analysis indicates that without increasing the number of patterns, simplification and referential relation linking play a key role in the effective extraction of biomedical relations. In this paper, we present a novel framework for fast development of relation extraction systems. The framework requires only a list of triggers as input, and does not need information from an annotated corpus. Thus, we reduce the involvement of domain
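The two ideas of sentence simplification and trigger-anchored patterns can be illustrated with a toy surface-string sketch; the trigger list, the regular expressions, and the example sentence are all illustrative assumptions (the actual framework operates over syntactic parses, not raw strings):

```python
import re

TRIGGERS = {"phosphorylates", "binds", "activates"}  # the only required input

def simplify(sentence):
    """Strip parentheticals and one level of appositive commas, so that plain
    'ARG1 trigger ARG2' patterns cover more syntactic variations."""
    sentence = re.sub(r"\([^)]*\)", "", sentence)     # drop parentheticals
    sentence = re.sub(r",[^,]*,", " ", sentence)      # drop a comma-delimited appositive
    return re.sub(r"\s+", " ", sentence).strip()

def extract(sentence):
    """Apply the trigger patterns to the simplified sentence."""
    s = simplify(sentence)
    pairs = []
    for trig in sorted(TRIGGERS):
        for m in re.finditer(r"(\w+) %s (\w+)" % trig, s):
            pairs.append((m.group(1), trig, m.group(2)))
    return pairs

hits = extract("MEK1, a dual-specificity kinase, phosphorylates ERK2 (MAPK1) in vitro.")
print(hits)
```

Without the simplification step, the appositive and the parenthetical would break the adjacency that the simple pattern relies on, which is exactly the coverage problem simplification addresses.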
Reddy, L Kalyan V; Al Shammari, Fares
Awareness and knowledge of biomedical waste practices is very important for any health care setting. This study aimed to determine the knowledge, attitudes and practices (KAP) about biomedical waste among health professionals in primary health care centres in Hail City, Saudi Arabia. The study included 135 of 155 professionals who dealt with biomedical waste from 16 out of 26 primary health care centres. Data were collected using a structured questionnaire. Overall, 54.8%, 48.9% and 49.6% of the participants had good knowledge, attitudes and practices scores respectively. Profession, education and age were significantly associated with KAP level, as were knowledge and practices, and attitudes and practices (P < 0.05). Training is recommended to enhance the knowledge of the professionals dealing with biomedical waste in the primary health care centres.
Henriksen, Ann-Helen; Ringsted, Charlotte
The aim of this study was to explore how medical students perceive the experience of learning from patient instructors (PIs; patients with rheumatism who teach health professionals and students) in the context of coupled faculty-led and patient-led teaching sessions. This was an explorative study with a qualitative approach based on focus group interviews. Analysis was based on a previously developed model of the characteristics of learning from patient instructors. The authors used this model as sensitizing concepts for the analysis of data, while at the same time remaining open to new insights through constant comparison of old and new findings. Results showed a negotiation, both between and within the students, of the importance of patients' experiential knowledge versus scientific biomedical knowledge. On the one hand, students appreciated the experiential learning environment offered in the PI-led sessions, representing a patient-centred approach, and acknowledged the importance of the PIs' individual perspectives and experiential knowledge. On the other hand, representing the scientific biomedical perspective and traditional step-by-step teaching, students expressed unfamiliarity with the unstructured experiential learning and scepticism regarding the credibility of the patients' knowledge. This study contributes to the understanding of the complexity of involving patients as teachers in healthcare education and initiates a discussion on how to complement faculty-led teaching with patient-led teaching involving varying degrees of patient autonomy in the planning and delivery of the teaching.
Xu, Rong; Li, Li; Wang, Quanqiu
Systems approaches to studying phenotypic relationships among diseases are emerging as an active area of research for both novel disease gene discovery and drug repurposing. Currently, systematic study of disease phenotypic relationships on a phenome-wide scale is limited because large-scale machine-understandable disease-phenotype relationship knowledge bases are often unavailable. Here, we present an automatic approach to extract disease-manifestation (D-M) pairs (one specific type of disease-phenotype relationship) from the wide body of published biomedical literature. Our method leverages external knowledge and limits the amount of human effort required. For the text corpus, we used 119 085 682 MEDLINE sentences (21 354 075 citations). First, we used D-M pairs from existing biomedical ontologies as prior knowledge to automatically discover D-M-specific syntactic patterns. We then extracted additional pairs from MEDLINE using the learned patterns. Finally, we analysed correlations between disease manifestations and disease-associated genes and drugs to demonstrate the potential of this newly created knowledge base in disease gene discovery and drug repurposing. In total, we extracted 121 359 unique D-M pairs with a high precision of 0.924. Among the extracted pairs, 120 419 (99.2%) have not been captured in existing structured knowledge sources. We have shown that disease manifestations correlate positively with both disease-associated genes and drug treatments. The main contribution of our study is the creation of a large-scale and accurate D-M phenotype relationship knowledge base. This unique knowledge base, when combined with existing phenotypic, genetic and proteomic datasets, can have profound implications in our deeper understanding of disease etiology and in rapid drug repurposing. http://nlp.case.edu/public/data/DMPatternUMLS/
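The pattern-learning loop outlined above (seed pairs from ontologies → lexical patterns → newly extracted pairs) can be sketched at a surface-string level; the seed pair, the sentences, and the single "is characterized by" pattern are toy assumptions, and the published method learns syntactic rather than raw-string patterns:

```python
import re

SEEDS = {("marfan syndrome", "aortic dilatation")}  # prior-knowledge D-M pairs

SENTENCES = [
    "Marfan syndrome is characterized by aortic dilatation in most patients.",
    "Cushing disease is characterized by central obesity in most patients.",
]

def learn_patterns(sentences, seeds):
    """Replace a seed disease/manifestation with slots to obtain lexical patterns."""
    patterns = set()
    for s in sentences:
        low = s.lower()
        for d, m in seeds:
            if d in low and m in low:
                pat = low.replace(d, "{D}").replace(m, "{M}")
                # keep only the text between (and including) the two slots
                patterns.add(re.sub(r"\{D\}(.*)\{M\}.*", r"{D}\1{M}", pat))
    return patterns

def apply_patterns(sentences, patterns):
    """Match learned patterns against new sentences to harvest new D-M pairs."""
    pairs = set()
    for pat in patterns:
        rx = re.escape(pat).replace(r"\{D\}", "(.+?)").replace(r"\{M\}", "(.+?)$")
        for s in sentences:
            m = re.match(rx, s.lower())
            if m:
                pairs.add((m.group(1).strip(), m.group(2).strip(" .")))
    return pairs

new_pairs = apply_patterns(SENTENCES, learn_patterns(SENTENCES, SEEDS))
print(new_pairs)
```

The second sentence yields a pair that was never in the seed set, which is how the approach scales seed knowledge to the full corpus with little human effort.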
Yang, Zhihao; Lin, Yuan; Wu, Jiajin; Tang, Nan; Lin, Hongfei; Li, Yanpeng
Knowledge about protein-protein interactions (PPIs) unveils the molecular mechanisms of biological processes. However, the volume and content of published biomedical literature on protein interactions is expanding rapidly, making it increasingly difficult for interaction database curators to detect and curate protein interaction information manually. We present a multiple kernel learning-based approach for automatic PPI extraction from biomedical literature. The approach combines feature-based, tree and graph kernels, and integrates their output with a Ranking support vector machine (SVM). Experimental evaluations show that the features in the individual kernels are complementary, and that the kernel combined with the Ranking SVM achieves better performance than the individual kernels, an equal-weight combination and an optimal-weight combination. Our approach can achieve state-of-the-art performance with respect to comparable evaluations, with a 64.88% F-score and 88.02% AUC on the AImed corpus. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
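The weighted combination of heterogeneous kernels can be illustrated with a small NumPy sketch; the toy data, the two kernels, the equal weights, and the kernel nearest-neighbour read-out (standing in for the Ranking SVM, which would need a learning library) are all assumptions for illustration:

```python
import numpy as np

def linear_kernel(X):
    return X @ X.T

def rbf_kernel(X, gamma=1.0):
    sq = np.sum(X ** 2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

def combine(kernels, weights):
    """Weighted sum of cosine-normalised Gram matrices: the simplest MKL combination."""
    out = np.zeros_like(kernels[0])
    for K, w in zip(kernels, weights):
        d = np.sqrt(np.diag(K))
        out += w * (K / np.outer(d, d))
    return out

X = np.array([[0.0, 1.0], [0.1, 0.9], [1.0, 0.0], [0.9, 0.1]])
y = np.array([1, 1, -1, -1])        # toy 'interaction' / 'no interaction' labels
K = combine([linear_kernel(X), rbf_kernel(X)], [0.5, 0.5])

# Read-out: predict each point's label from its most similar *other* point.
Ks = K.copy()
np.fill_diagonal(Ks, -np.inf)
pred = y[np.argmax(Ks, axis=1)]
print(pred)
```

Normalising each Gram matrix before summing keeps one kernel's scale from dominating the combination, which is the usual prerequisite for learning (or hand-setting) the kernel weights.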
Information on Protein Interactions (PIs) is valuable for biomedical research, but often lies buried in the scientific literature and cannot be readily retrieved. While much progress has been made over the years in extracting PIs from the literature using computational methods, there is a lack of free, public, user-friendly tools for the discovery of PIs. We developed an online tool for the extraction of PI relationships from PubMed abstracts, which we name PIMiner. Protein pairs and the words that describe their interactions are reported by PIMiner so that new interactions can be easily detected within text. The interaction likelihood levels are reported too. The option to extract only specific types of interactions is also provided. The PIMiner server can be accessed through a web browser or remotely through a client's command line. PIMiner can process 50,000 PubMed abstracts in approximately 7 min and thus appears suitable for large-scale processing of biological/biomedical literature. Copyright © 2013 Inderscience Enterprises Ltd.
Garonna, A; Carli, C
This report presents a proposal for a new slow extraction scheme for the Low Energy Ion Ring (LEIR) in the context of the feasibility study for a biomedical research facility at CERN. LEIR has to be maintained as a heavy ion accumulator ring for the LHC and for fixed-target experiments with the SPS. In parallel to this ongoing operation for physics experiments, an additional secondary use of LEIR for a biomedical research facility was proposed [Dosanjh2013, Holzscheiter2012, PHE2010]. This facility would complement the existing research beam-time available at other laboratories for studies related to ion beam therapy. The new slow extraction [Abler2013] is based on the third-integer resonance. The reference beam is composed of fully stripped carbon ions with extraction energies of 20-440 MeV/u, transverse physical emittances of 5-25 µm and momentum spreads of ±(2-9)×10⁻⁴. Two resonance driving mechanisms have been studied: the quadrupole-driven method and the RF-knockout technique. Both were made compatible...
Miñarro-Giménez, Jose A; Kreuzthaler, Markus; Schulz, Stefan
The identification of relevant predicates between co-occurring concepts in scientific literature databases like MEDLINE is crucial for using these sources for knowledge extraction, in order to obtain meaningful biomedical predications as subject-predicate-object triples. We consider the manually assigned MeSH indexing terms (main headings and subheadings) in MEDLINE records as a rich resource for extracting a broad range of domain knowledge. In this paper, we explore the combination of a clustering method for co-occurring concepts based on their related MeSH subheadings in MEDLINE with the use of SemRep, a natural language processing engine, which extracts predications from free text documents. As a result, we generated sets of clusters of co-occurring concepts and identified the most significant predicates for each cluster. The association of such predicates with the co-occurrences of the resulting clusters produces the list of predications, which were checked for relevance.
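The clustering step described above, grouping co-occurring concept pairs by the similarity of their MeSH-subheading profiles, can be sketched as follows; the subheading vocabulary, the profiles, and the greedy threshold clustering are toy assumptions, and associating SemRep predicates with each resulting cluster would follow as a separate step:

```python
import numpy as np

# Each co-occurring concept pair is profiled by its MeSH subheadings (toy vocabulary).
SUBHEADINGS = ["drug therapy", "etiology", "genetics", "metabolism"]
pairs = {
    ("aspirin", "pain"):        [1, 0, 0, 1],
    ("ibuprofen", "fever"):     [1, 0, 0, 1],
    ("BRCA1", "breast cancer"): [0, 1, 1, 0],
}

def cosine(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def greedy_cluster(profiles, threshold=0.8):
    """Assign each pair to the first cluster whose seed profile is similar enough."""
    clusters = []  # list of (seed profile, [member pairs])
    for pair, vec in profiles.items():
        for c in clusters:
            if cosine(c[0], vec) >= threshold:
                c[1].append(pair)
                break
        else:
            clusters.append((vec, [pair]))
    return [members for _, members in clusters]

clusters = greedy_cluster(pairs)
print(clusters)
```

The two treatment-flavoured pairs land in one cluster and the genetics-flavoured pair in another, so a predicate such as TREATS would plausibly dominate the first cluster and ASSOCIATED_WITH the second.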
Liu, Jun; Willför, Stefan; Mihranyan, Albert
Nanocellulose-based biomaterials for biomedical and pharmaceutical applications have been extensively explored. However, studies on the different levels of impurities in nanocellulose and their potential risks are lacking. This article is the most comprehensive survey to date of the importance and characterization of possible leachables and extractables in nanocellulose for biomedical use. In particular, the (1,3)-β-d-glucan interference in endotoxin detection in algal nanocellulose was addressed. Potential lipophilic and hydrophilic leachables, toxic heavy metals, and microbial contaminants are also monitored. As a model system, nanocellulose from Cladophora sp. algae is investigated. The leachable (1,3)-β-d-glucan and endotoxin, which possess strong immunogenic potential, were minimized from the cellulose to clinically insignificant levels of 4.7 μg/g and 2.5 EU/g, respectively. The levels of various impurities in the Cladophora cellulose are acceptable for future biomedical applications. The presented approach could be considered as a guideline for other types of nanocellulose. Copyright © 2017 Elsevier Ltd. All rights reserved.
Jimeno Yepes, Antonio; Berlanga, Rafael
Text mining of scientific literature has been essential for setting up large public biomedical databases, which are being widely used by the research community. In the biomedical domain, the existence of a large number of terminological resources and knowledge bases (KB) has enabled a myriad of machine learning methods for different text mining related tasks. Unfortunately, KBs have not been devised for text mining tasks but for human interpretation, thus the performance of KB-based methods is usually lower when compared to supervised machine learning methods. The disadvantage of supervised methods, though, is that they require labeled training data and are therefore not useful for large-scale biomedical text mining systems. KB-based methods do not have this limitation. In this paper, we describe a novel method to generate word-concept probabilities from a KB, which can serve as a basis for several text mining tasks. This method not only takes into account the underlying patterns within the descriptions contained in the KB but also those in texts available from large unlabeled corpora such as MEDLINE. The parameters of the model have been estimated without training data. Patterns from MEDLINE have been built using MetaMap for entity recognition and related using co-occurrences. The word-concept probabilities were evaluated on the task of word sense disambiguation (WSD). The results showed that our method obtained a higher degree of accuracy than other state-of-the-art approaches when evaluated on the MSH WSD data set. We also evaluated our method on the task of document ranking using MEDLINE citations. These results also showed an increase in performance over existing baseline retrieval approaches. Copyright © 2014 Elsevier Inc. All rights reserved.
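The core idea of deriving word-concept probabilities from KB descriptions and applying them to WSD can be sketched as a smoothed unigram model; the two-concept KB, the descriptions, and the context are toy assumptions (the published model additionally folds in patterns mined from MEDLINE):

```python
import math
from collections import Counter

# Toy KB: each candidate concept for the ambiguous token "cold" has a short description.
KB = {
    "Common Cold":      "viral infection nose throat sneezing rhinovirus symptom",
    "Cold Temperature": "low temperature physics thermal freezing degree",
}

def word_concept_probs(kb, alpha=1.0):
    """P(word | concept) with add-alpha smoothing over the union vocabulary."""
    vocab = set(w for text in kb.values() for w in text.split())
    probs = {}
    for concept, text in kb.items():
        counts = Counter(text.split())
        total = sum(counts.values()) + alpha * len(vocab)
        probs[concept] = {w: (counts[w] + alpha) / total for w in vocab}
    return probs

def disambiguate(context_words, probs):
    """Pick the concept maximising the log-product of context-word probabilities."""
    def score(concept):
        return sum(math.log(probs[concept].get(w, 1e-9)) for w in context_words)
    return max(probs, key=score)

P = word_concept_probs(KB)
sense = disambiguate(["patient", "sneezing", "infection"], P)
print(sense)
```

Out-of-vocabulary context words penalise every concept equally, so the decision is driven only by words the KB actually knows, which is why enriching the profiles with corpus-derived patterns helps.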
In order to deal with the complexity of biological systems and to generate applicable results, current biomedical sciences are adopting concepts and methods from the engineering sciences. Philosophers of science have interpreted this as the emergence of an engineering paradigm, in particular in systems biology and synthetic biology. This article aims at the articulation of the supposed engineering paradigm by contrast with the physics paradigm that supported the rise of biochemistry and molecular biology. This articulation starts from Kuhn's notion of a disciplinary matrix, which indicates what constitutes a paradigm. It is argued that the core of the physics paradigm is its metaphysical and ontological presuppositions, whereas the core of the engineering paradigm is the epistemic aim of producing useful knowledge for solving problems external to the scientific practice. Therefore, the two paradigms involve distinct notions of knowledge. Whereas the physics paradigm entails a representational notion of knowledge, the engineering paradigm involves the notion of 'knowledge as epistemic tool'. Copyright © 2017 Elsevier Ltd. All rights reserved.
Protein-Protein Interaction (PPI) extraction is an important task in biomedical information extraction. Presently, many machine learning methods for PPI extraction have achieved promising results. However, the performance is still not satisfactory. One reason is that semantic resources have been basically ignored. In this paper, we propose a multiple-kernel learning-based approach to extract PPIs, combining a feature-based kernel, a tree kernel and a semantic kernel. In particular, we extend the shortest path-enclosed tree kernel (SPT) with a dynamic extension strategy to retrieve richer syntactic information. Our semantic kernel calculates the protein-protein pair similarity and the context similarity based on two semantic resources: WordNet and Medical Subject Headings (MeSH). We evaluate our method with a Support Vector Machine (SVM) and achieve an F-score of 69.40% and an AUC of 92.00%, which shows that our method outperforms most of the state-of-the-art systems by integrating semantic information.
Ambi, Ashwin; Bryan, Julia; Borbon, Katherine; Centeno, Daniel; Liu, Tianchi; Chen, Tung Po; Cattabiani, Thomas; Traba, Christian
Most studies reveal that the mechanism of action of propolis against bacteria is functional rather than structural and is attributed to a synergism between the compounds in the extracts. Propolis is said to inhibit bacterial adherence, division, water-insoluble glucan formation, and protein synthesis. However, it has been shown that the mechanism of action of Russian propolis ethanol extracts is structural rather than functional and may be attributed to the metals found in propolis. If the metals found in propolis are removed, cell lysis still occurs, and these modified extracts may be used in the prevention of medical and biomedical implant contamination. The antibacterial activity of metal-free Russian propolis ethanol extracts (MFRPEE) on two biofilm-forming bacteria, penicillin-resistant Staphylococcus aureus and Escherichia coli, was evaluated using MTT and a Live/Dead staining technique. Toxicity studies were conducted on mouse osteoblast (MC-3T3) cells using the same viability assays. In the MTT assay, biofilms were incubated with MTT at 37°C for 30 min. After washing, the purple formazan formed inside the bacterial cells was dissolved by SDS and then measured using a microplate reader, with the detecting and reference wavelengths set at 570 nm and 630 nm, respectively. Live and dead distributions of cells were studied by confocal laser scanning microscopy. Complete biofilm inactivation was observed when biofilms were treated for 40 h with 2 µg/ml of MFRPEE. Results indicate that the metals present in propolis possess antibacterial activity, but do not have an essential role in the antibacterial mechanism of action. Additionally, the same concentration of metals found in propolis samples was toxic to tissue cells. Comparable to samples with metals, metal-free samples caused damage to the cell membrane structures of both bacterial species, resulting in cell lysis. Results suggest that the structural mechanism of action of Russian propolis ethanol
Ravikumar, Komandur Elayavilli; Wagholikar, Kavishwar B; Li, Dingcheng; Kocher, Jean-Pierre; Liu, Hongfang
Advances in next-generation sequencing technology have accelerated the pace of individualized medicine (IM), which aims to incorporate genetic/genomic information into medicine. One immediate need in interpreting sequencing data is the assembly of information about genetic variants and their corresponding associations with other entities (e.g., diseases or medications). Even with dedicated effort to capture such information in biological databases, much of this information remains 'locked' in the unstructured text of biomedical publications. There is a substantial lag between the publication and the subsequent abstraction of such information into databases. Multiple text mining systems have been developed, but most of them focus on sentence-level association extraction with performance evaluation based on gold standard text annotations specifically prepared for text mining systems. We developed and evaluated a text mining system, MutD, which extracts protein mutation-disease associations from MEDLINE abstracts by incorporating discourse-level analysis, using a benchmark data set extracted from curated database records. MutD achieves an F-measure of 64.3% for reconstructing protein mutation disease associations in curated database records. The discourse-level analysis component of MutD contributed to a gain of more than 10% in F-measure when compared against sentence-level association extraction. Our error analysis indicates that 23 of the 64 precision errors are true associations that were not captured by database curators and 68 of the 113 recall errors are caused by the absence of associated disease entities in the abstract. After adjusting for the defects in the curated database, the revised F-measure of MutD in association detection reaches 81.5%. Our quantitative analysis reveals that MutD can effectively extract protein mutation disease associations when benchmarking based on curated database records. The analysis also demonstrates that incorporating
Feng, Jiao; Zhang, Xiao-Lu; Li, Ying-Ya; Cui, Ying-Yu; Chen, Yi-Han
Proanthocyanidins (PAs) belong to the condensed tannin subfamily of natural flavonoids. Recent studies have shown that the main bioactive compounds of Pinus massoniana bark extract (PMBE) are PAs, especially the proanthocyanidins B series, which play important roles in cell cycle arrest, apoptosis induction and migration inhibition of cancer cells in vivo and in vitro. PA-Bs are mixtures of oligomers and polymers composed of flavan-3-ol, and the relationship between their structure and corresponding biomedical potentials is summarized in this paper. The hydroxyl at certain positions or the linkage between different carbon atoms of different rings determines or affects their anti-oxidant and free radical scavenging bioactivities. The degree of polymerization and the water solubility of the reaction system also influence their biomedical potential. Taken together, PMBE has a promising future in clinical drug development as a candidate anticancer drug and as a food additive to prevent tumorigenesis. We hope this review will encourage interested researchers to conduct further preclinical and clinical studies to evaluate the anticancer activities of PMBE, its active constituents and their derivatives.
Automatic extraction of protein-protein interaction (PPI) pairs from biomedical literature is a widely examined task in biological information extraction. Currently, many kernel-based approaches, such as the linear kernel, tree kernel, graph kernel and combinations of multiple kernels, have achieved promising results on the PPI task. However, most of these kernel methods fail to capture the semantic relation information between two entities. In this paper, we present a special type of tree kernel for PPI extraction which exploits both syntactic (structural) and semantic-vector information, known as the Distributed Smoothed Tree kernel (DSTK). DSTK comprises distributed trees carrying syntactic information along with distributional semantic vectors representing the semantic information of the sentences or phrases. To generate a robust machine learning model, a composition of a feature-based kernel and DSTK were combined using an ensemble support vector machine (SVM). Five different corpora (AIMed, BioInfer, HPRD50, IEPA, and LLL) were used for evaluating the performance of our system. Experimental results show that our system achieves a better F-score on the five different corpora compared to other state-of-the-art systems.
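A crude sketch of the idea behind combining structural and distributional evidence in one kernel is given below: a structural term (shared parse-tree productions) plus a semantic term (word-vector similarity). The two-dimensional 'embeddings', the toy trees, and the additive combination with weight `lam` are all illustrative assumptions, not the published DSTK formulation:

```python
import numpy as np

# Tiny embedding table standing in for distributional semantic vectors (toy values).
EMB = {
    "binds":     np.array([0.9, 0.1]),
    "interacts": np.array([0.8, 0.3]),
    "inhibits":  np.array([0.1, 0.9]),
}

def productions(tree):
    """Collect parent->children production rules of a nested-tuple parse tree."""
    if isinstance(tree, str):
        return []
    label, *kids = tree
    rule = (label, tuple(k if isinstance(k, str) else k[0] for k in kids))
    return [rule] + [p for k in kids for p in productions(k)]

def smoothed_tree_kernel(t1, t2, words1, words2, lam=0.5):
    """Structural overlap (shared productions) smoothed by word-vector similarity."""
    p1, p2 = productions(t1), productions(t2)
    structural = sum(1 for r in p1 if r in p2)
    v1 = np.mean([EMB[w] for w in words1], axis=0)
    v2 = np.mean([EMB[w] for w in words2], axis=0)
    semantic = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return structural + lam * semantic

tA = ("S", ("NP", "ProtA"), ("VP", ("V", "binds"), ("NP", "ProtB")))
tB = ("S", ("NP", "ProtC"), ("VP", ("V", "interacts"), ("NP", "ProtD")))
k_sim = smoothed_tree_kernel(tA, tB, ["binds"], ["interacts"])
print(k_sim)
```

The semantic term is what lets "binds" and "interacts" sentences score higher than "binds" and "inhibits" sentences even when their parse trees are structurally identical, which a purely syntactic tree kernel cannot distinguish.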
Finn, Symma; Herne, Mose; Castille, Dorothy
Traditional Ecological Knowledge (TEK) is a term, relatively new to Western science, that encompasses a subset of traditional knowledge maintained by Indigenous nations about the relationships between people and the natural environment. The term was first shared by tribal elders in the 1980s to help raise awareness of the importance of TEK. TEK has become a construct that Western scientists have increasingly considered for conducting culturally relevant research with Tribal nations. The authors aim to position TEK in relation to other emerging schools of thought, that is, concepts such as the exposome, social determinants of health (SDoH), and citizen science, and to explore TEK's relevance to environmental health research. This article provides examples of successful application of TEK principles in federally funded research when implemented with respect for the underlying cultural context and in partnership with Indigenous communities. Rather than treating TEK as an adjunct or element to be quantified or incorporated into Western scientific studies, TEK can instead ground our understanding of the environmental, social, and biomedical determinants of health and improve our understanding of health and disease. This article provides historical and recent examples of how TEK has informed Western scientific research. This article provides recommendations for researchers and federal funders to ensure respect for the contributions of TEK to research and to ensure equity and self-determination for Tribal nations who participate in research. https://doi.org/10.1289/EHP858.
Nadarajah, Sri Kumaran; Vijayaraj, Radha; Mani, Jayaprakashvel
The squid ink extract is well known for its biomedical properties. In this study, the squid Loligo vulgaris was collected from Tuticorin coastal waters, Bay of Bengal, India. The proximate composition of the crude squid ink was studied, and protein was found to be the major component over lipids and carbohydrates. Further, bioactive fractions of squid ink were extracted with ethanol, and therapeutic applications such as hemolytic, antioxidant, antimicrobial, and in vitro anti-inflammatory properties were analyzed using standard methods. In the hemolytic assay, the squid ink extract exhibited a maximum hemolytic activity of 128 hemolytic units against the tested erythrocytes. In the DPPH assay, the ethanolic extract of squid ink exhibited an antioxidant activity of 83.5%. The squid ink was found to be a potent antibacterial agent against the pathogens tested. 200 μL of L. vulgaris ink extract showed remarkable antibacterial activity as zones of inhibition against Escherichia coli (28 mm), Klebsiella pneumoniae (22 mm), Pseudomonas aeruginosa (21 mm), and Staphylococcus aureus (24 mm). The 68.9% inhibition of protein denaturation by the squid ink extract indicated that it has very good in vitro anti-inflammatory properties. Fourier transform infrared spectroscopy analysis of the ethanolic extracts of the squid ink indicated the presence of functional groups such as 1° and 2° amines, amides, alkynes (terminal), alkenes, aldehydes, nitriles, alkanes, aliphatic amines, carboxylic acids, and alkyl halides, which complements the biochemical background of the therapeutic applications. Hence, the results of this study concluded that the ethanolic extract of L. vulgaris has many therapeutic applications, such as antimicrobial, antioxidant, and anti-inflammatory activities. Squid ink is very high in a number of important nutrients. It is particularly high in antioxidants, for instance, which help to protect the cells and the heart against damage from free radicals. In the present study
Thirunavoukkarasu, M.; Balaji, U.; Behera, S.; Panda, P. K.; Mishra, B. K.
An aqueous leaf extract of Desmodium gangeticum was employed to synthesize silver nanoparticles. Rapid formation of stable silver nanoparticles was observed on exposure of the aqueous leaf extract to a solution of silver nitrate. The silver nanoparticles were characterized by UV-visible spectroscopy, scanning electron microscopy (SEM), energy dispersive X-ray analysis (EDAX), transmission electron microscopy (TEM), and Fourier Transform Infra-Red spectroscopy (FTIR). The UV-visible spectrum of the aqueous medium peaked at 450 nm, corresponding to the plasmon absorbance of silver nanoparticles. SEM analysis revealed the spherical shape of the particles, with sizes ranging from 18 to 39 nm, and the EDAX spectrum confirmed the presence of silver along with other elements in the plant metabolite. Further, these biologically synthesized nanoparticles were found to be highly toxic against the pathogenic bacterium Escherichia coli, implying the significance of the present study for the production of biomedical products.
Kantha D Arunachalam, Sathesh Kumar Annamalai, Center for Environmental Nuclear Research, Directorate of Research, SRM University, Chennai, Tamil Nadu, India. Abstract: The exploitation of various plant materials for the biosynthesis of nanoparticles is considered a green technology as it does not involve any harmful chemicals. The aim of this study was to develop a simple biological method for the synthesis of silver and gold nanoparticles using Chrysopogon zizanioides. An aqueous leaf extract of C. zizanioides was used to synthesize silver and gold nanoparticles by the bioreduction of silver nitrate (AgNO3) and chloroauric acid (HAuCl4), respectively. Water-soluble organics present in the plant materials were mainly responsible for reducing silver or gold ions to nanosized Ag or Au particles. The synthesized silver and gold nanoparticles were characterized by ultraviolet (UV)-visible spectroscopy, scanning electron microscopy (SEM), energy dispersive X-ray analysis (EDAX), Fourier transform infrared spectroscopy (FTIR), and X-ray diffraction (XRD) analysis. The kinetics of the reduction of aqueous silver/gold ions by the C. zizanioides crude extract were determined by UV-visible spectroscopy. SEM analysis showed that aqueous gold ions, when exposed to the extract, were reduced, resulting in the biosynthesis of gold nanoparticles in the size range 20-50 nm. This eco-friendly approach for the synthesis of nanoparticles is simple, can be scaled up for large-scale production, and shows powerful bioactivity as demonstrated by the synthesized silver nanoparticles. The synthesized nanoparticles can have clinical use as antibacterial, antioxidant, and cytotoxic agents and can be used for biomedical applications. Keywords: nanoparticles, bioreduction, SEM, silver, gold
Song, Min; Kim, Won Chul; Lee, Dahee; Heo, Go Eun; Kang, Keun Young
Due to an enormous number of scientific publications that cannot be handled manually, there is a rising interest in text-mining techniques for automated information extraction, especially in the biomedical field. Such techniques provide effective means of information search, knowledge discovery, and hypothesis generation. Most previous studies have primarily focused on the design and performance improvement of either named entity recognition or relation extraction. In this paper, we present PKDE4J, a comprehensive text-mining system that integrates dictionary-based entity extraction and rule-based relation extraction in a highly flexible and extensible framework. Starting with the Stanford CoreNLP, we developed the system to cope with multiple types of entities and relations. The system also has fairly good performance in terms of accuracy as well as the ability to configure text-processing components. We demonstrate its competitive performance by evaluating it on many corpora and found that it surpasses existing systems with average F-measures of 85% for entity extraction and 81% for relation extraction. Copyright © 2015 Elsevier Inc. All rights reserved.
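The coupling of dictionary-based entity extraction with rule-based relation extraction that the abstract describes can be sketched as follows; the gene dictionary, the two rules, and the `{A}`/`{B}` slot convention are invented for illustration and are not PKDE4J's actual rule syntax:

```python
import re

GENE_DICT = {"TP53", "MDM2", "BRCA1"}            # toy entity dictionary
RULES = [
    (r"{A}\s+(\w+ly)?\s*regulates\s+{B}", "regulates"),
    (r"{A}\s+binds(?:\s+to)?\s+{B}", "binds"),
]

def find_entities(sentence):
    """Dictionary lookup over whitespace/punctuation tokens."""
    return [tok for tok in re.findall(r"\w+", sentence) if tok in GENE_DICT]

def find_relations(sentence):
    """Instantiate each rule with every ordered entity pair and test it."""
    ents = find_entities(sentence)
    found = []
    for a in ents:
        for b in ents:
            if a == b:
                continue
            for pat, label in RULES:
                if re.search(pat.format(A=a, B=b), sentence):
                    found.append((a, label, b))
    return found

hits = find_relations("MDM2 negatively regulates TP53 in the nucleus.")
print(hits)
```

Keeping entities in a dictionary and relations in declarative rules is what makes such a pipeline extensible: new entity types or relation patterns are configuration changes, not code changes.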
Lassen, Tine; Madsen, Bodil Nistrup; Erdman Thomsen, Hanne
This paper gives an introduction to the plans and ongoing work in a project whose aim is to develop methods for automatic knowledge extraction and for automatic construction and updating of ontologies. The project also aims at developing methods for automatic merging of terminological data from various existing sources, as well as methods for target-group-oriented knowledge dissemination. In this paper, we mainly focus on the plans for automatic knowledge extraction and knowledge structuring that will result in ontologies for a national term bank.
Herman H H B M van Haagen
Full Text Available MOTIVATION: Weighted semantic networks built from text-mined literature can be used to retrieve known protein-protein or gene-disease associations, and have been shown to anticipate associations years before they are explicitly stated in the literature. Our text-mining system recognizes over 640,000 biomedical concepts: some are specific (i.e., names of genes or proteins), others generic (e.g., 'Homo sapiens'). Generic concepts may play important roles in automated information retrieval, extraction, and inference, but may also result in concept overload and confound retrieval and reasoning with low-relevance or even spurious links. Here, we attempted to optimize the retrieval performance for protein-protein interactions (PPI) by filtering generic concepts (node filtering) or links to generic concepts (edge filtering) from a weighted semantic network. First, we defined metrics based on network properties that quantify the specificity of concepts. Then, using these metrics, we systematically filtered generic information from the network while monitoring retrieval performance of known protein-protein interactions. We also systematically filtered specific information from the network (inverse filtering) and assessed the retrieval performance of networks composed of generic information alone. RESULTS: Filtering generic or specific information induced a two-phase response in retrieval performance: initially the effects of filtering were minimal, but beyond a critical threshold network performance suddenly dropped. Contrary to expectations, networks composed exclusively of generic information demonstrated retrieval performance comparable to unfiltered networks that also contain specific concepts. Furthermore, an analysis using individual generic concepts demonstrated that they can effectively support the retrieval of known protein-protein interactions. For instance, the concept "binding" is indicative for PPI retrieval and the concept "mutation abnormality" is
Wahyu Jauharis Saputra
Full Text Available Information extraction is an early stage of textual data analysis. Information extraction is required to obtain information from textual data that can be used in analysis processes such as classification and categorization. Textual data are strongly influenced by their language. Arabic is gaining significant attention in many studies because the Arabic language is very different from others and, in contrast to other languages, tools and research for Arabic are still lacking. The information extracted using the knowledge dictionary is a concept of expression. A knowledge dictionary is usually constructed manually by an expert; this takes a long time and is specific to a single problem. This paper proposes a method for automatically building a knowledge dictionary. The knowledge dictionary is formed by classifying sentences having the same concept, assuming that they will have a high similarity value. The extracted concepts can be used as features for subsequent computational processes such as classification or categorization. The dataset used in this paper was an Arabic text dataset. Extraction results were tested using a decision tree classifier; the highest precision value obtained was 71.0% and the highest recall value was 75.0%.
Full Text Available In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have had a powerful impact on academic institutions. In the MOOC environment, knowledge discovery and knowledge sharing are very important, and are currently often achieved with ontology techniques. In building ontologies, automatic extraction technology is crucial. Because general text mining algorithms have no obvious effect on online courses, we designed an automatic extraction of course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. The Vector Space Model (VSM) is used to calculate similarity, and weights are designed to optimize the TF-IDF output values; the terms with the highest scores are selected as knowledge points. Course documents for "C programming language" were selected for the experiment in this study. The results show that the proposed approach achieves satisfactory accuracy and recall rates.
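The abstract above combines TF-IDF term weighting with Vector Space Model cosine similarity. A minimal sketch of that combination, assuming whitespace tokenization and the plain tf·log(N/df) weight (the toy documents and exact weighting are illustrative, not the AECKP algorithm's formulation):

```python
import math
from collections import Counter

docs = [
    "pointer and array in c programming",
    "array index and loop in c",
    "operating system process scheduling",
]

def tfidf(corpus):
    """Plain TF-IDF vectors: term frequency times log(N / document frequency)."""
    tokenized = [doc.split() for doc in corpus]
    n = len(tokenized)
    df = Counter(term for doc in tokenized for term in set(doc))
    return [
        {t: c * math.log(n / df[t]) for t, c in Counter(doc).items()}
        for doc in tokenized
    ]

def cosine(a, b):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

vecs = tfidf(docs)
# The two C-related snippets should score higher than the unrelated one.
print(cosine(vecs[0], vecs[1]) > cosine(vecs[0], vecs[2]))  # True
```

High-scoring terms under such a weighting are the candidates a system like AECKP would promote to knowledge points.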
Ozyurt, Ibrahim Burak; Grethe, Jeffrey S; Martone, Maryann E; Bandrowski, Anita E
The NIF Registry developed and maintained by the Neuroscience Information Framework is a cooperative project aimed at cataloging research resources, e.g., software tools, databases and tissue banks, funded largely by governments and available as tools to research scientists. Although originally conceived for neuroscience, the NIF Registry has over the years broadened in scope to include research resources of general relevance to biomedical research. The number of research resources currently listed by the Registry numbers over 13K. The broadening in scope to biomedical science led us to re-christen the NIF Registry platform as SciCrunch. The NIF/SciCrunch Registry has been cataloging the resource landscape since 2006; as such, it serves as a valuable dataset for tracking the breadth, fate and utilization of these resources. Our experience shows that research resources like databases are dynamic objects that can change location and scope over time. Although each record is entered manually and human-curated, the current size of the registry requires tools that can aid curation efforts to keep content up to date, including when and where such resources are used. To address this challenge, we have developed an open source tool suite, collectively termed RDW: Resource Disambiguator for the (Web). RDW is designed to help in the upkeep and curation of the registry as well as in enhancing its content by automated extraction of resource candidates from the literature. The RDW toolkit includes a URL extractor for papers, a resource candidate screener, a resource URL change tracker, and a resource content change tracker. Curators access these tools via a web-based user interface. Several strategies are used to optimize these tools, including supervised and unsupervised learning algorithms as well as statistical text analysis. The complete tool suite is used to enhance and maintain the resource registry as well as track the usage of individual resources through an
Zhang, Yaoyun; Soysal, Ergin; Moon, Sungrim; Wang, Jingqi; Tao, Cui; Xu, Hua
A computable knowledge base containing relations between diseases and lab tests would be a great resource for many biomedical informatics applications. This paper describes our initial step towards establishing a comprehensive knowledge base of disease and lab tests relations utilizing three public on-line resources. LabTestsOnline, MedlinePlus and Wikipedia are integrated to create a freely available, computable disease-lab test knowledgebase. Disease and lab test concepts are identified using MetaMap and relations between diseases and lab tests are determined based on source-specific rules. Experimental results demonstrate a high precision for relation extraction, with Wikipedia achieving the highest precision of 87%. Combining the three sources reached a recall of 51.40%, when compared with a subset of disease-lab test relations extracted from a reference book. Moreover, we found additional disease-lab test relations from on-line resources, indicating they are complementary to existing reference books for building a comprehensive disease and lab test relation knowledge base.
Lee, Dong-Gi; Shin, Hyunjung
Recently, research on human disease networks has succeeded and has become an aid in figuring out the relationships between various diseases. In most disease networks, however, the relationship between diseases has been represented simply as an association. This representation makes it difficult to identify prior diseases and their influence on posterior diseases. In this paper, we propose a causal disease network that implements disease causality through text mining on biomedical literature. To identify the causality between diseases, the proposed method includes two schemes: the first is the lexicon-based causality term strength, which provides the causal strength of a variety of causality terms based on lexicon analysis. The second is the frequency-based causality strength, which determines the direction and strength of causality based on document and clause frequencies in the literature. We applied the proposed method to 6,617,833 PubMed articles and chose 195 diseases to construct a causal disease network. From all possible pairs of disease nodes in the network, 1,011 causal pairs covering 149 diseases were extracted. The resulting network was compared with that of a previous study. In terms of both coverage and quality, the proposed method showed outperforming results; it determined 2.7 times more causalities and showed higher correlation with associated diseases than the existing method. The novelty of this research is that the proposed method circumvents the limitations of time and cost in testing all possible causalities in biological experiments, and it is a more advanced text mining technique by defining the concept of causality term strength.
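The frequency-based causality strength described above can be illustrated with a toy scoring function: the direction is taken from whichever clause frequency dominates, and the strength is its share of the total. This is an illustrative simplification, not the paper's exact formula, and the counts are hypothetical:

```python
def causality_direction(n_ab, n_ba):
    """
    Toy frequency-based causality scoring: n_ab counts clauses asserting
    'A causes B', n_ba the reverse. Returns the inferred direction and a
    [0, 1] strength score (the majority share of the clause counts).
    """
    total = n_ab + n_ba
    if total == 0:
        return None, 0.0
    if n_ab >= n_ba:
        return "A->B", n_ab / total
    return "B->A", n_ba / total

# Hypothetical literature counts: 40 clauses assert disease A causes
# disease B, 5 assert the reverse.
direction, strength = causality_direction(40, 5)
print(direction, round(strength, 2))  # A->B 0.89
```

A real system would additionally weight each clause by its causality term strength before aggregating.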
In order to deal with the complexity of biological systems and attempts to generate applicable results, current biomedical sciences are adopting concepts and methods from the engineering sciences. Philosophers of science have interpreted this as the emergence of an engineering paradigm, in
Full Text Available Putting human commonsense knowledge into computers has always been a long-standing dream of artificial intelligence (AI). Tens of millions of dollars and years of effort have been spent so that computers could know that "objects fall, not rise" and that "running is faster than walking." Large databases were built, automated and semi-automated methods were introduced, and volunteers' efforts were utilized to achieve this, but an automated, high-throughput and low-noise method for commonsense collection still remains the holy grail of AI. The aim of this study was to build a commonsense knowledge ontology using three approaches, namely the Hearst method, machine translation, and structured resources. Using three Persian corpora and applying the aforementioned methods, we could extract 7 different relations; 70,000 assertions were extracted. Finally, the average accuracies of the Hearst, MT and structured-resource approaches were 75%, 75% and 100%, respectively.
Full Text Available To better understand information about human health from databases, we analyzed three datasets collected for different purposes in Canada: a biomedical database of older adults, a large population survey across all adult ages, and vital statistics. Redundancy in the variables was established, and this led us to derive a generalized (macroscopic) state variable, a fitness/frailty index that reflects both individual and group health status. Evaluation of the relationship between fitness/frailty and the mortality rate revealed that the latter could be expressed in terms of variables generally available from any cross-sectional database. In practical terms, this means that the risk of mortality might readily be assessed from standard biomedical appraisals collected for other purposes.
Marcos Antonio Mouriño García
Full Text Available Automatic classification of text documents into a set of categories has many applications. Among those applications, the automatic classification of biomedical literature stands out as an important application for automatic document classification strategies. Biomedical staff and researchers have to deal with a great deal of literature in their daily activities, so a system that allows access to documents of interest in a simple and effective way would be useful; thus, these documents must be sorted based on some criteria, that is to say, they have to be classified. Documents to classify are usually represented following the bag-of-words (BoW) paradigm. Features are words in the text, thus suffering from synonymy and polysemy, and their weights are based solely on frequency of occurrence. This paper presents an empirical study of the efficiency of a classifier that leverages encyclopedic background knowledge, concretely Wikipedia, to create bag-of-concepts (BoC) representations of documents, understanding a concept as a "unit of meaning" and thus tackling synonymy and polysemy. Additionally, the weighting of concepts is based on their semantic relevance in the text. For the evaluation of the proposal, empirical experiments were conducted with one of the corpora commonly used for evaluating classification and retrieval of biomedical information, OHSUMED, and also with a purpose-built corpus of MEDLINE biomedical abstracts, UVigoMED. The results show that the Wikipedia-based bag-of-concepts representation outperforms the classical bag-of-words representation by up to 157% in the single-label classification problem and up to 100% in the multi-label problem for the OHSUMED corpus, and by up to 122% in the single-label problem and up to 155% in the multi-label problem for the UVigoMED corpus.
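The bag-of-words versus bag-of-concepts distinction above can be made concrete with a toy synonym-to-concept mapping; in the paper the concepts come from Wikipedia, whereas the dictionary, words, and sentences below are invented for illustration:

```python
from collections import Counter

# Hypothetical surface-word-to-concept mapping (in the paper this background
# knowledge is derived from Wikipedia).
CONCEPTS = {
    "myocardial": "heart_attack", "infarction": "heart_attack",
    "heart": "heart_attack", "attack": "heart_attack",
    "aspirin": "aspirin", "acetylsalicylic": "aspirin", "acid": "aspirin",
}

def bag_of_words(text):
    """Classical BoW: features are the surface words themselves."""
    return Counter(text.lower().split())

def bag_of_concepts(text):
    """BoC: map surface words to concept ids so synonyms share one feature."""
    return Counter(CONCEPTS[w] for w in text.lower().split() if w in CONCEPTS)

a = "Myocardial infarction treated with aspirin"
b = "Heart attack treated with acetylsalicylic acid"
# BoW sees no overlap in the disease/drug terms; BoC sees full overlap.
print(sorted(set(bag_of_words(a)) & set(bag_of_words(b))))      # ['treated', 'with']
print(sorted(set(bag_of_concepts(a)) & set(bag_of_concepts(b))))  # ['aspirin', 'heart_attack']
```

This is exactly the synonymy problem the abstract describes: the two sentences mean the same thing, but only the concept representation exposes that.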
REZENDE, S. O.
Full Text Available The progress in digitally generated data acquisition and storage has allowed a huge growth in the information generated in organizations. Around 80% of those data are created in unstructured format, and a significant part of those are texts. Intelligent organization of those textual collections is a matter of interest for most organizations, for it speeds up information search and retrieval. In this context, Text Mining can transform this great amount of unstructured text data into useful knowledge that can even be innovative for those organizations. Using unsupervised methods for knowledge extraction and organization has received great attention in the literature, because it does not require previous knowledge of the textual collections to be explored. In this article we describe the main techniques and algorithms used for unsupervised knowledge extraction and organization from textual data. The most relevant works in the literature are presented and discussed for each phase of the Text Mining process, and some existing computational tools are suggested for each task at hand. Finally, some examples and applications are presented to show the use of Text Mining on real problems.
Wu, Stephen; Liu, Hongfang
Natural language processing (NLP) has become crucial in unlocking information stored in free text, from both clinical notes and biomedical literature. Clinical notes convey clinical information related to individual patient health care, while biomedical literature communicates scientific findings. This work focuses on semantic characterization of texts at an enterprise scale, comparing and contrasting the two domains and their NLP approaches. We analyzed the empirical distributional characteristics of NLP-discovered named entities in Mayo Clinic clinical notes from 2001-2010, and in the 2011 MetaMapped Medline Baseline. We give qualitative and quantitative measures of domain similarity and point to the feasibility of transferring resources and techniques. An important by-product for this study is the development of a weighted ontology for each domain, which gives distributional semantic information that may be used to improve NLP applications.
Dhiraj Kumar Srivastava
Full Text Available Introduction: Biomedical waste by definition means "any waste which is generated during the diagnosis, treatment or immunization of humans or animals, or in research activities pertaining thereto, or in the production or testing of biologicals". Objectives: • To assess the level of awareness about various aspects of biomedical waste management among paramedical staff. • To study the impact of a three-day training programme on knowledge of biomedical waste management. Material & Methods: The present study is a cross-sectional study carried out to assess the impact of a three-day training programme on the knowledge of paramedical staff posted at the District Hospital, Etawah. The change in knowledge was assessed using pre-test and post-test questionnaires. Results: A total of 72 paramedical staff participated in the study. The majority of the participants were unaware of the hazards associated with improper handling of biomedical wastes. Knowledge of the different colour codes used for the segregation of biomedical waste was also very low. Similarly, awareness of the vehicle used for the transportation of biomedical waste was poor. Conclusion: The present study concludes that there is an urgent need for regular training of paramedical staff posted at the District Hospital and other government hospitals located in small districts and towns, as awareness of biomedical waste among them is very low.
Ensing, M.; Paton, R.; Speel, P.H.W.M.; Speel, P.H.W.M.; Rada, R.
An object-oriented approach has been applied to the different stages involved in developing a knowledge base about insulin metabolism. At an early stage the separation of terminological and assertional knowledge was made. The terminological component was developed by medical experts and represented
Wang, Chengbin; Ma, Xiaogang; Chen, Jianguo; Chen, Jingwen
Geoscience literature published online is an important part of open data, and brings both challenges and opportunities for data analysis. Compared with studies of numerical geoscience data, there are limited works on information extraction and knowledge discovery from textual geoscience data. This paper presents a workflow and a few empirical case studies for that topic, with a focus on documents written in Chinese. First, we set up a hybrid corpus combining the generic and geology terms from geology dictionaries to train Chinese word segmentation rules of the Conditional Random Fields model. Second, we used the word segmentation rules to parse documents into individual words, and removed the stop-words from the segmentation results to get a corpus constituted of content-words. Third, we used a statistical method to analyze the semantic links between content-words, and we selected the chord and bigram graphs to visualize the content-words and their links as nodes and edges in a knowledge graph, respectively. The resulting graph presents a clear overview of key information in an unstructured document. This study proves the usefulness of the designed workflow, and shows the potential of leveraging natural language processing and knowledge graph technologies for geoscience.
… with many interrelated parts – recommendation engines, content metadata, contextual information and user profiles. In the center of any type of recommendation lies the notion of similarity. The most popular way to approach similarity is to look for feature overlaps. This often results in recommending only "more of the same" type of content, which does not necessarily lead to meaningful personalization. Another way to approach similarity is to find a similar underlying meaning in the content. Aspects of meaning in media can be represented using Gardenfors' Conceptual Spaces theory, which can … using Latent Semantic Analysis (one of the unsupervised machine learning techniques). It presents three separate cases to illustrate similarity knowledge extraction from the metadata, where the emotional components in each case represent different abstraction levels – genres, synopsis and lyrics …
The threat of nuclear weapons proliferation is a problem of world wide concern. Safeguards are the key to nuclear nonproliferation and data is the key to safeguards. The safeguards community has access to a huge and steadily growing volume of data. The advantages of this data rich environment are obvious, there is a great deal of information which can be utilized. The challenge is to effectively apply proven and developing technologies to find and extract usable information from that data. That information must then be assessed and evaluated to produce the knowledge needed for crucial decision making. Efficient and effective analysis of safeguards data will depend on utilizing technologies to interpret the large, heterogeneous data sets that are available from diverse sources. With an order-of-magnitude increase in the amount of data from a wide variety of technical, textual, and historical sources there is a vital need to apply advanced computer technologies to support all-source analysis. There are techniques of data warehousing, data mining, and data analysis that can provide analysts with tools that will expedite their extracting useable information from the huge amounts of data to which they have access. Computerized tools can aid analysts by integrating heterogeneous data, evaluating diverse data streams, automating retrieval of database information, prioritizing inputs, reconciling conflicting data, doing preliminary interpretations, discovering patterns or trends in data, and automating some of the simpler prescreening tasks that are time consuming and tedious. Thus knowledge discovery technologies can provide a foundation of support for the analyst. Rather than spending time sifting through often irrelevant information, analysts could use their specialized skills in a focused, productive fashion. This would allow them to make their analytical judgments with more confidence and spend more of their time doing what they do best.
Catalina, Mercedes; Cot, Jaume; Borras, Miquel; Lapuente, Joaquín de; González, Javier; Balu, Alina M; Luque, Rafael
The biomedical properties of a porous bio-collagenic polymer extracted from leather-industry waste residues have been investigated in wound healing and tissue regeneration in induced wounds in rats. Application of the pure undiluted bio-collagen to induced wounds in rats dramatically improved healing after 7 days in terms of collagen production and wound filling as well as in the migration and differentiation of keratinocytes. The formulation tested was found to be three times more effective than the commercial reference product Catrix® (Heal Progress (HP): 8 ± 1.55 vs. 2.33 ± 0.52, p < 0.001; Formation of Collagen (FC): 7.5 ± 1.05 vs. 2.17 ± 0.75, p < 0.001; Regeneration of Epidermis (RE): 13.33 ± 5.11 vs. 5 ± 5.48, p < 0.05).
Doughty, Emily; Kertesz-Farkas, Attila; Bodenreider, Olivier; Thompson, Gary; Adadey, Asa; Peterson, Thomas; Kann, Maricel G
A major goal of biomedical research in personalized medicine is to find relationships between mutations and their corresponding disease phenotypes. However, most of the disease-related mutational data are currently buried in the biomedical literature in textual form and lack the structure necessary for easy retrieval and visualization. We introduce a high-throughput computational method for the identification of relevant disease mutations in PubMed abstracts, applied to prostate (PCa) and breast cancer (BCa) mutations. We developed the extractor of mutations (EMU) tool to identify mutations and their associated genes. We benchmarked EMU against MutationFinder, a tool to extract point mutations from text. Our results show that both methods achieve comparable performance on two manually curated datasets. We also benchmarked EMU's performance for extracting the complete mutational information and phenotype. Remarkably, we show that one of the steps in our approach, a filter based on sequence analysis, increases the precision for that task from 0.34 to 0.59 (PCa) and from 0.39 to 0.61 (BCa). We also show that this high-throughput approach can be extended to other diseases. Our method improves the current status of disease-mutation databases by significantly increasing the number of annotated mutations. We found 51 and 128 mutations manually verified to be related to PCa and BCa, respectively, that are not currently annotated for these cancer types in the OMIM or Swiss-Prot databases. EMU's retrieval performance represents a 2-fold improvement in the number of annotated mutations for PCa and BCa. We further show that our method can benefit from full-text analysis once there is an increase in Open Access availability of full-text articles. Freely available at: http://bioinf.umbc.edu/EMU/ftp.
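Tools such as MutationFinder and EMU rely, in part, on patterns over standard point-mutation nomenclature. A minimal regex sketch in that spirit (this is not either tool's actual pattern set, which covers many more notations such as Arg273His or c.817C>T):

```python
import re

# Wild-type residue, position, mutant residue (e.g. R273H), using the
# twenty one-letter amino acid codes.
POINT_MUT = re.compile(r"\b([ACDEFGHIKLMNPQRSTVWY])(\d+)([ACDEFGHIKLMNPQRSTVWY])\b")

text = ("The TP53 R273H mutation and the BRCA1 C61G substitution "
        "were reported in several tumours.")
print(POINT_MUT.findall(text))  # [('R', '273', 'H'), ('C', '61', 'G')]
```

Note that gene symbols like TP53 do not match, because the word-boundary anchors require the letter-digits-letter pattern to be a standalone token; a production system would still need disambiguation filters, such as the sequence-based filter the abstract describes.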
Caudell, Mark A; Quinlan, Marsha B; Quinlan, Robert J; Call, Douglas R
Human and animal health are deeply intertwined in livestock-dependent areas. Livestock health contributes to food security and can influence human health through the transmission of zoonotic diseases. In low-income countries, diagnosis and treatment of livestock diseases is often carried out by household members who draw upon both ethnoveterinary medicine (EVM) and contemporary veterinary biomedicine (VB). Expertise in these knowledge bases, along with their coexistence, informs treatment and thus ultimately impacts animal and human health. The aim of the current study was to determine how socio-cultural and ecological differences within and between two livestock-keeping populations, the Maasai of northern Tanzania and the Koore of southwest Ethiopia, impact expertise in EVM and VB and the coexistence of the two knowledge bases. An ethnoveterinary research project was conducted to examine dimensions of EVM and VB knowledge among the Maasai (N = 142 households) and the Koore (N = 100). Cultural consensus methods were used to quantify expertise and the level of agreement on EVM and VB knowledge. Ordinary least squares regression was used to model patterns of expertise and consensus across groups and to examine associations between knowledge and demographic/sociocultural attributes. Maasai and Koore informants displayed high consensus on EVM, but only the Koore displayed consensus on VB knowledge. EVM expertise in the Koore varied across gender, herd size, and level of VB expertise. EVM expertise was highest in the Maasai but was only associated with age. The only factor associated with VB expertise was EVM expertise in the Koore. Variation in consensus and the correlates of expertise across the Maasai and the Koore are likely related to differences in the cultural transmission of EVM and VB knowledge. Transmission dynamics are established by the integration of livestock within the socioecological systems of the Maasai and Koore and culture-historical experiences with
The Role Biomedical Science Laboratories Can Play in Improving Science Knowledge and Promoting First-Year Nursing Academic Success. The need for additional nursing and health care professionals is expected to increase dramatically over the next 20 years. With this in mind, students must have strong biomedical science knowledge to be competent in their field. Some studies have shown that participation in bioscience laboratories can enhance science knowledge. If this is true, an analysis of the role bioscience labs play in first-year nursing academic success is apposite. In response, this study sought to determine whether concurrent enrollment in anatomy and microbiology lecture and lab courses improved final lecture course grades. The investigation was expanded to include a comparison of first-year nursing GPA and concurrent prerequisite bioscience lecture/lab enrollment. Additionally, research has indicated that learning is affected by student perception of the course, instructor, content, and environment. To gain insight into students' perspectives on laboratory courses, almost 100 students completed a 20-statement perception survey on how lab participation affects learning. Data analyses involved comparing final anatomy and microbiology lecture course grades between students who concurrently enrolled in the lecture and lab courses and students who completed the lecture course alone. Independent t-test analyses revealed that there was no significant difference between the groups for anatomy, t(285) = .11, p = .912, but for microbiology, the lab course provided a significant educational benefit, t(256) = 4.47, p = .000. However, when concurrent prerequisite bioscience lecture/lab enrollment was compared to non-concurrent enrollment for first-year nursing GPA using independent t-test analyses, no significant difference was found for South Dakota State University, t(37) = -1.57, p = .125, or for the University of South Dakota, t(38) = -0.46, p
Minimally-invasive, microneedle-array extraction of interstitial fluid for comprehensive biomedical applications: transcriptomics, proteomics, metabolomics, exosome research, and biomarker identification.
Taylor, Robert M; Miller, Philip R; Ebrahimi, Parwana; Polsky, Ronen; Baca, Justin T
Interstitial fluid (ISF) has recently garnered interest as a biological fluid that could be used as an alternative to blood for biomedical applications, diagnosis, and therapy. ISF extraction techniques are promising because they are less invasive and less painful than venipuncture. ISF is an alternative, incompletely characterized source of physiological data. Here, we describe a novel method of ISF extraction in rats, using microneedle arrays, which provides volumes of ISF that are sufficient for downstream analysis techniques such as proteomics, genomics, and extracellular vesicle purification and analysis. This method is potentially less invasive than previously reported techniques. The limited invasiveness and larger volumes of extracted ISF afforded by this microneedle-assisted ISF extraction method provide a technique that is less stressful and more humane to laboratory animals, while also allowing for a reduction in the numbers of animals needed to acquire sufficient volumes of ISF for biomedical analysis and application.
Bin Res, Arwa A.
Associations between methylated genes and diseases have been investigated in several studies, and it is critical to have such information available for a better understanding of diseases and for clinical decisions. However, such information is scattered across a large number of electronic publications, and it is difficult to search for it manually. Therefore, the goal of this project is to develop a machine learning model that can efficiently extract such information. Twelve machine learning algorithms were applied and compared in application to this problem, based on three approaches that involve: document-term frequency matrices, position weight matrices, and a hybrid approach that uses a combination of the previous two. The best results were obtained by the hybrid approach with a random forest model that, in a 10-fold cross-validation, achieved F-score and accuracy of nearly 85% and 84%, respectively. On a completely separate testing set, F-score and accuracy of 89% and 88%, respectively, were obtained. Based on this model, we developed a tool that automates the extraction of associations between methylated genes and diseases from electronic text. Our study contributes an efficient method for extracting specific types of associations from free text, and the methodology developed here can be extended to other similar association extraction problems.
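The document-term frequency representation described above can be sketched with the standard library alone; this is a minimal illustration, not the authors' implementation, and the example sentences and resulting vocabulary are hypothetical:

```python
from collections import Counter

def doc_term_matrix(docs, vocab=None):
    """Build a document-term frequency matrix: one row per document,
    one column per vocabulary term, cells holding raw term counts."""
    tokenized = [d.lower().split() for d in docs]
    if vocab is None:
        vocab = sorted({tok for toks in tokenized for tok in toks})
    matrix = []
    for toks in tokenized:
        counts = Counter(toks)
        matrix.append([counts.get(term, 0) for term in vocab])
    return vocab, matrix

# Hypothetical sentences of the kind the classifier would see:
docs = [
    "promoter hypermethylation of BRCA1 in breast cancer",
    "BRCA1 expression in normal tissue",
]
vocab, m = doc_term_matrix(docs)
# m[0] counts terms of the first sentence; a classifier such as a
# random forest would then be trained on these rows.
```

In the hybrid approach described, rows like these would be concatenated with position-weight-matrix features before training.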
Nielsen, Dorte Guldbrand; Gøtzsche, Ole; Eika, Berit
interpretation skills. An anatomy-focused checklist was developed based on Danish Cardiology Society guidelines for a standard echocardiography of adults. A TTE case of a common and complex clinical presentation was recorded and presented to participants on a portable computer using EchoPac software. ... Participants made an immediate diagnosis and then filled out the checklist of possible pathologies. Each correct answer was awarded one point, with a possible maximum score of 111. The echocardiography interpretation scores of the 45 physicians were then correlated with their scores on a MCQ test ... of echocardiography-relevant physiology knowledge. Results: A strong and significant correlation between expertise level and scores on the TTE interpretation checklist was found (r = 0.70, p ... test. A weak, but significant correlation was found between expertise level and physiology ...
Poulymenopoulou, Michaela; Malamateniou, Flora; Vassilacopoulos, George
Cloud computing, Internet of Things (IoT) and NoSQL database technologies can support a new generation of cloud-based PHR services that contain heterogeneous (unstructured, semi-structured and structured) patient data (health, social and lifestyle) from various sources, including automatically transmitted data from Internet-connected devices in the patient's living space (e.g. medical devices connected to patients at home care). The patient data stored in such PHR systems constitute big data whose analysis, with the use of appropriate machine learning algorithms, is expected to improve diagnosis and treatment accuracy, to cut healthcare costs and, hence, to improve the overall quality and efficiency of healthcare provided. This paper describes a health data analytics engine which uses machine learning algorithms for analyzing cloud-based PHR big health data towards knowledge extraction, to support better healthcare delivery as regards disease diagnosis and prognosis. This engine comprises the data preparation, model generation and data analysis modules and runs on the cloud, taking advantage of the map/reduce paradigm provided by Apache Hadoop.
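The map/reduce paradigm the engine relies on can be illustrated in-process; this sketch is hypothetical (the record schema and the diagnosis-counting mapper are invented for illustration) and stands in for what Hadoop distributes across a cluster:

```python
from collections import defaultdict

# Minimal in-process map/reduce: map each record to (key, value)
# pairs, shuffle by key, then reduce each key's value list.

def map_phase(records, mapper):
    pairs = []
    for record in records:
        pairs.extend(mapper(record))
    return pairs

def shuffle(pairs):
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped, reducer):
    return {key: reducer(values) for key, values in grouped.items()}

# Hypothetical PHR records; the mapper emits one count per diagnosis.
records = [
    {"patient": 1, "diagnoses": ["E11", "I10"]},
    {"patient": 2, "diagnoses": ["I10"]},
]
mapper = lambda r: [(code, 1) for code in r["diagnoses"]]
counts = reduce_phase(shuffle(map_phase(records, mapper)), sum)
# counts == {"E11": 1, "I10": 2}
```

On Hadoop, the map and reduce functions are the same shape, but the shuffle is performed by the framework across machines.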
Background: Although there are a large number of thesauri for the biomedical domain, many of them lack coverage of terms and their variant forms. Automatic thesaurus construction based on patterns was first suggested by Hearst [1], but it is still not clear how to automatically construct such patterns for different semantic relations and domains. In particular, it is not certain which patterns are useful for capturing synonymy. The assumption of extant resources such as parsers is also a limiting factor for many languages, so it is desirable to find patterns that do not use syntactic analysis. Finally, to give a more consistent and applicable result, it is desirable to use these patterns to form synonym sets in a sound way. Results: We present a method that automatically generates regular expression patterns by expanding seed patterns in a heuristic search and then develops a feature vector based on the occurrence of term pairs in each developed pattern. This allows for a binary classification of term pairs as synonymous or non-synonymous. We then model this result as a probability graph to find synonym sets, which is equivalent to the well-studied problem of finding an optimal set cover. We achieved 73.2% precision and 29.7% recall with our method, outperforming hand-made resources such as MeSH and Wikipedia. Conclusion: We conclude that automatic methods can play a practical role in developing new thesauri or expanding existing ones, and that this can be done with only a small amount of training data and no need for resources such as parsers. We also conclude that accuracy can be improved by grouping terms into synonym sets.
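A minimal sketch of the kind of pattern-based candidate extraction described, assuming a single hand-written seed pattern; the real system generates and expands many such regular expressions automatically and uses a term pair's occurrences across patterns as its feature vector:

```python
import re

# One hypothetical seed pattern of the kind the heuristic search expands:
# "X, also known as Y" suggests X and Y may be synonymous.
SEED = re.compile(r"(\w[\w-]*), also known as (\w[\w-]*)")

def candidate_pairs(text):
    """Return lowercased term pairs matched by the seed pattern."""
    return [(a.lower(), b.lower()) for a, b in SEED.findall(text)]

text = "Adrenaline, also known as epinephrine, is a hormone."
pairs = candidate_pairs(text)
# pairs == [("adrenaline", "epinephrine")]
```

Each pattern contributes one binary feature per term pair (did the pair co-occur in this pattern?), and a classifier decides synonymy from the full feature vector.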
Vijayan Sri Ramkumar
Green synthesis of nanoparticles using seaweeds is attracting considerable research attention nowadays and is also gaining prominence in biomedical applications. In this work, we have synthesized biocompatible and functionalized silver nanoparticles using an aqueous extract of the seaweed Enteromorpha compressa as both a reducing and a stabilizing agent, and their efficient antimicrobial and anticancer activities are reported here. The UV–vis spectra of the AgNPs showed the characteristic SPR absorption band at 421 nm. The chemical interaction and crystalline nature of the AgNPs were evaluated by FT-IR and XRD studies. The XRD result of the AgNPs shows typical Ag reflection peaks at 38.1°, 44.2°, 64.4° and 77.1°, corresponding to the (111), (200), (220) and (311) Bragg planes. The surface morphology and composition of the samples were observed by HRTEM, EDS and SAED pattern analyses. Spherical Ag nanostructures were observed with sizes ranging between 4 and 24 nm, with clear lattice fringes in the HRTEM image. This report reveals that the seaweed-mediated synthesis of AgNPs and sustained delivery of Ag ions to the bacterial and fungal surface reduced their growth rate, which was evaluated by a well diffusion assay. The synthesized AgNPs showed favorable cytotoxicity against Ehrlich Ascites Carcinoma (EAC) cells, with an IC50 value of 95.35 μg mL−1. This study showed cost-effective silver nanoparticle synthesis with excellent biocompatibility, and the particles could thus potentially be utilized in biomedical and pharmaceutical applications.
Torralba-Rodríguez, Francisco Jesús; Bixquert-Montagud, Vicente; Fernández-Breis, Jesualdo Tomás; Martínez-Béjar, Rodrigo
In Intensive Care Units, doctors have to manage several alarm situations in patients. When a doctor analyzes the state of the patient, (s)he has to decide if there is an alarm situation and make decisions about what actions to perform. It is desirable to detect these situations before they occur, because the solution could be easier and the doctor has more time to react. An intelligent system could analyze the information, extract conclusions, and format and order the causes leading to the severe condition. This would be helpful for a doctor and would make the decision-making process easier. A system capable of performing such operations is presented here. This is not a diagnosis application but a tool to detect alarm situations for patient safety. A prototype capable of making retrospective evaluation of the condition of the patients has been developed. This system is based on the MCRDR technology, which has been extended to deal with the requirements of this domain. The evaluation of the system is also reported in this paper. Copyright (c) 2009 Elsevier Ireland Ltd. All rights reserved.
Chima, S C; Nkwanyana, N M; Esterhuizen, T M
This study was designed to evaluate the impact of a short biostatistics course on knowledge and performance of statistical analysis by biomedical researchers in Africa. It is recognized that knowledge of biostatistics is essential for understanding and interpretation of modern scientific literature and active participation in the global research enterprise. Unfortunately, it has been observed that basic education of African scholars may be deficient in applied mathematics including biostatistics. Forty university affiliated biomedical researchers from South Africa volunteered for a 4-day short-course where participants were exposed to lectures on descriptive and inferential biostatistics and practical training on using a statistical software package for data analysis. A quantitative questionnaire was used to evaluate participants' statistical knowledge and performance pre- and post-course. Changes in knowledge and performance were measured using objective and subjective criteria. Data from completed questionnaires were captured and analyzed using Statistical Package for Social Sciences. Participants' pre- and post-course data were compared using nonparametric Wilcoxon signed ranks tests for nonnormally distributed variables. A P researchers in this cohort and highlights the potential benefits of short-courses in biostatistics to improve the knowledge and skills of biomedical researchers and scholars in Africa.
Mouriño-García, Marcos A; Pérez-Rodríguez, Roberto; Anido-Rifón, Luis E
The ability to efficiently review the existing literature is essential for the rapid progress of research. This paper describes a classifier of text documents, represented as vectors in spaces of Wikipedia concepts, and analyses its suitability for the classification of Spanish biomedical documents when only English documents are available for training. We propose the cross-language concept matching (CLCM) technique, which relies on Wikipedia interlanguage links to convert concept vectors from the Spanish to the English space. The performance of the classifier is compared to several baselines: a classifier based on machine translation, a classifier that represents documents after performing Explicit Semantic Analysis (ESA), and a classifier that uses a domain-specific semantic annotator (MetaMap). The corpus used for the experiments (Cross-Language UVigoMED) was purpose-built for this study, and it is composed of 12,832 English and 2,184 Spanish MEDLINE abstracts. The performance of our approach is superior to any other state-of-the-art classifier in the benchmark, with performance increases up to: 124% over classical machine translation, 332% over MetaMap, and 60 times over the classifier based on ESA. The results have statistical significance, showing p-values knowledge mined from Wikipedia to represent documents as vectors in a space of Wikipedia concepts and translating vectors between language-specific concept spaces, a cross-language classifier can be built, and it performs better than several state-of-the-art classifiers.
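The concept-vector translation step of CLCM can be sketched as a dictionary lookup over interlanguage links; the link table below is hypothetical and tiny, whereas the real technique mines it from Wikipedia:

```python
# Hypothetical Spanish-to-English interlanguage-link map; a real
# system would extract these pairs from Wikipedia dumps.
ES_TO_EN = {
    "Diabetes mellitus": "Diabetes",
    "Insulina": "Insulin",
}

def translate_vector(es_vector):
    """Move a concept-weight vector from the Spanish concept space to
    the English one, dropping concepts with no interlanguage link."""
    en_vector = {}
    for concept, weight in es_vector.items():
        if concept in ES_TO_EN:
            en_key = ES_TO_EN[concept]
            en_vector[en_key] = en_vector.get(en_key, 0.0) + weight
    return en_vector

# A Spanish document as concept weights; "SinEnlace" has no link
# and is discarded, mirroring the coverage loss the paper measures.
doc = {"Diabetes mellitus": 0.8, "Insulina": 0.5, "SinEnlace": 0.1}
translated = translate_vector(doc)
# translated == {"Diabetes": 0.8, "Insulin": 0.5}
```

The translated vector can then be fed directly to a classifier trained on English concept vectors.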
Chew, Peter A.
One of the challenges increasingly facing intelligence analysts, along with professionals in many other fields, is the vast amount of data which needs to be reviewed and converted into meaningful information, and ultimately into rational, wise decisions by policy makers. The advent of the world wide web (WWW) has magnified this challenge. A key hypothesis which has guided us is that threats come from ideas (or ideology), and ideas are almost always put into writing before the threats materialize. While in the past the 'writing' might have taken the form of pamphlets or books, today's medium of choice is the WWW, precisely because it is a decentralized, flexible, and low-cost method of reaching a wide audience. However, a factor which complicates matters for the analyst is that material published on the WWW may be in any of a large number of languages. In 'Identification of Threats Using Linguistics-Based Knowledge Extraction', we have sought to use Latent Semantic Analysis (LSA) and other similar text analysis techniques to map documents from the WWW, in whatever language they were originally written, to a common language-independent vector-based representation. This then opens up a number of possibilities. First, similar documents can be found across language boundaries. Secondly, a set of documents in multiple languages can be visualized in a graphical representation. These alone offer potentially useful tools and capabilities to the intelligence analyst whose knowledge of foreign languages may be limited. Finally, we can test the over-arching hypothesis--that ideology, and more specifically ideology which represents a threat, can be detected solely from the words which express the ideology--by using the vector-based representation of documents to predict additional features (such as the ideology) within a framework based on supervised learning. In this report, we present the results of a three-year project of the same name. We believe
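A minimal monolingual LSA sketch using a truncated SVD; the term-document counts are toy values, and a cross-language deployment like the one described would instead factor a matrix built over comparable multilingual text so that all documents land in one shared space:

```python
import numpy as np

# Term-document matrix: rows = terms, columns = documents (counts).
# LSA factors it with a truncated SVD; each document becomes a
# low-dimensional vector in a latent "semantic" space.
X = np.array([
    [2.0, 0.0, 1.0],
    [0.0, 3.0, 1.0],
    [1.0, 1.0, 0.0],
])

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                        # latent dimensions kept
doc_vectors = (np.diag(s[:k]) @ Vt[:k]).T    # one row per document

def cosine(a, b):
    """Similarity between two document vectors in the latent space."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sim = cosine(doc_vectors[0], doc_vectors[2])
```

Similar documents score high under `cosine` even when they share few surface terms, which is what makes the representation useful for cross-language retrieval and visualization.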
The work deals with an environmentally benign process for the synthesis of silver nanoparticles using Butea monosperma bark extract, which is used both as a reducing and a capping agent at room temperature. The reaction mixture turned brownish yellow after about 24 h, and an intense surface plasmon resonance (SPR) band at around 424 nm clearly indicates the formation of silver nanoparticles. Fourier-transform infrared (FT-IR) spectroscopy showed that the nanoparticles were capped with compounds present in the plant extract. The formation of crystalline fcc silver nanoparticles is confirmed by XRD data, and the SAED pattern obtained also confirms the crystalline behaviour of the Ag nanoparticles. The size and morphology of these nanoparticles were studied using High Resolution Transmission Electron Microscopy (HRTEM), which showed that the nanoparticles had an average dimension of ∼35 nm. A larger DLS value of ∼98 nm indicates the presence of the stabilizer on the nanoparticle surface. The bio-synthesized silver nanoparticles revealed potent antibacterial activity against human bacteria of both Gram types. In addition, these biologically synthesized nanoparticles also proved to exhibit an excellent cytotoxic effect on the human myeloid leukemia cell line KG-1A, with an IC50 value of 11.47 μg/mL.
Kempler, Steven; Barbieri, Lindsay
Data analytics is the process of examining large amounts of data of a variety of types to uncover hidden patterns, unknown correlations and other useful information. Data analytics is a broad term that includes data analysis, as well as an understanding of the cognitive processes an analyst uses to understand problems and explore data in meaningful ways. Analytics also includes data extraction, transformation, and reduction, utilizing specific tools, techniques, and methods. Turning to data science, definitions of data science sound very similar to those of data analytics (which leads to much of the confusion between the two). But the skills needed for both (co-analyzing large amounts of heterogeneous data, understanding and utilizing relevant tools and techniques, and subject matter expertise), although similar, serve different purposes. Data analytics takes a practitioner's approach, applying expertise and skills to solve issues and gain subject knowledge. Data science is more theoretical (research in itself) in nature, providing strategic actionable insights and new innovative methodologies. Earth Science Data Analytics (ESDA) is the process of examining, preparing, reducing, and analyzing large amounts of spatial (multi-dimensional), temporal, or spectral data using a variety of data types to uncover patterns, correlations and other information, to better understand our Earth. The large variety of datasets (temporal and spatial differences, data types, formats, etc.) invites the need for data analytics skills that combine an understanding of the science domain with data preparation, reduction, and analysis techniques, from a practitioner's point of view. The application of these skills to ESDA is the focus of this presentation. The Earth Science Information Partners (ESIP) Federation Earth Science Data Analytics (ESDA) Cluster was created in recognition of the practical need to facilitate the co-analysis of large amounts of data and information for Earth science. Thus, from a to
Rasheed, Tahir; Bilal, Muhammad; Iqbal, Hafiz M N; Li, Chuanlong
Biosynthesis of nanoparticles from plant extracts is receiving enormous interest due to their abundant availability and a broad spectrum of bioactive reducing metabolites. In this study, the reducing potential of Artemisia vulgaris leaves extract (AVLE) was investigated for synthesizing silver nanoparticles without the addition of any external reducing or capping agent. The appearance of blackish brown color evidenced the complete synthesis of nanoparticles. The synthesized silver nanoparticles were characterized by UV-vis spectroscopy, scanning electron microscope (SEM), energy dispersive X-ray spectroscopy (EDX), transmission electron microscope (TEM), atomic force microscopy (AFM) and Fourier transforms infrared spectroscopy (FT-IR) analysis. UV-vis absorption profile of the bio-reduced sample elucidated the main peak around 420nm, which correspond to the surface plasmon resonance of silver nanoparticles. SEM and AFM analyses confirmed the morphology of the synthesized nanoparticles. Similarly, particles with a distinctive peak of silver were examined with EDX. The average diameter of silver nanoparticles was about 25nm from Transmission Electron Microscopy (TEM). FTIR spectroscopy scrutinized the involvement of various functional groups during nanoparticle synthesis. The green synthesized nanoparticles presented effective antibacterial activity against pathogenic bacteria than AVLE alone. In-vitro antioxidant assays revealed that silver nanoparticles (AV-AgNPs) exhibited promising antioxidant properties. The nanoparticles also displayed a potent cytotoxic effect against HeLa and MCF-7 cell lines. In conclusion, the results supported the advantages of employing a bio-green approach for developing silver nanoparticles with antimicrobial, antioxidant, and antiproliferative activities in a simple and cost- competitive manner. Copyright © 2017 Elsevier B.V. All rights reserved.
language documents. Thus, IE systems can extract structured information from unstructured text. One type of IE is named entity extraction followed by the creation of filled templates (Konchady, 2009). The named entity extractor identifies references to particular kinds of objects such as names of people, companies, and locations.
Thies, Christian; Schmidt Borreda, Marcel; Seidl, Thomas; Lehmann, Thomas M.
Multiscale analysis provides a complete hierarchical partitioning of images into visually plausible regions. Each of them is formally characterized by a feature vector describing shape, texture and scale properties. Consequently, object extraction becomes a classification of the feature vectors. Classifiers are trained by relevant and irrelevant regions labeled as object and remaining partitions, respectively. A trained classifier is applicable to yet uncategorized partitionings to identify the corresponding regions' classes. Such an approach enables retrieval of a-priori unknown objects within a point-and-click interface. In this work, the classification pipeline consists of a framework for data selection, feature selection, classifier training, classification of testing data, and evaluation. According to the no-free-lunch theorem of supervised learning, the appropriate classification pipeline is determined experimentally. Therefore, each of the steps is varied using state-of-the-art methods and the respective classification quality is measured. Selection of training data from the ground truth is supported by bootstrapping, variance pooling, virtual training data, and cross validation. Feature selection for dimension reduction is performed by linear discriminant analysis, principal component analysis, and greedy selection. Competing classifiers are k-nearest-neighbor, the Bayesian classifier, and the support vector machine. Quality is measured by precision and recall to reflect the retrieval task. A set of 105 hand radiographs from clinical routine serves as ground truth, where the metacarpal bones have been labeled manually. In total, 368 out of 39,017 regions are identified as relevant. In initial experiments for feature selection with the support vector machine, recall, precision and F-measure of 0.58, 0.67, and 0.62, respectively, were obtained.
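The precision, recall, and F-measure used as quality criteria relate as follows; the raw retrieval counts below are hypothetical but chosen to be consistent with the reported 0.67/0.58/0.62 figures and the 368 relevant regions:

```python
def prf(tp, fp, fn):
    """Precision, recall, and F-measure from raw retrieval counts:
    tp = relevant retrieved, fp = irrelevant retrieved,
    fn = relevant missed."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure

# Hypothetical counts: of 368 relevant regions, 213 are retrieved
# among 318 regions returned by the classifier.
p, r, f = prf(tp=213, fp=105, fn=155)
# round(p, 2) == 0.67, round(r, 2) == 0.58, round(f, 2) == 0.62
```

The F-measure is the harmonic mean of precision and recall, so it rewards a balance between finding most relevant regions and returning few false positives.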
Rasheed, Tahir; Bilal, Muhammad; Li, Chuanlong; Iqbal, Hafiz M N
In the present study, the potential of methanolic leaf extract of Taraxacum officinale plant as a function of bio-inspired green synthesis for the fabrication of silver nanoparticles (AgNPs) has been explored. The bio-reduction of aqueous silver nitrate (AgNO3) solution was confirmed by visually detecting the color change from pale yellow to blackish-brown. Maximum absorbance was observed at 420 nm due to the presence of characteristic surface Plasmon resonance of nano silver by UV-visible spectroscopy. The role of various functional groups in the bio-reduction of silver and chemical transformation was verified by Fourier transform infrared spectroscopy (FTIR). Scanning electron microscopy (SEM) and energy dispersive X-ray spectroscopy (EDX) predicts the shape (rocky, flack type, ellipsoidal, etc.), size (68 nm) and elemental composition (Ag as a major constituent) of the biosynthesized AgNPs, respectively. Transmission electron microscopy (TEM) analysis further corroborated the morphology of the AgNPs. Color mapping and atomic force microscopy (AFM) confirmed the nano-sized topography. The dynamic light scattering (DLS) showed the charge, stability, and size of the AgNPs. The generated AgNPs presented potential antibacterial activities against Gram-positive and Gram-negative bacterial strains including Staphylococcus aureus, Escherichia coli, and Haemophilus influenzae. The biosynthesized AgNPs also showed antiproliferative activity against MCF-7 breast cancer cell line in a dose-dependent manner. In conclusion, results clearly indicate that biosynthesized AgNPs could be used as effective nano drug for treating infectious diseases caused by multidrug resistant bacterial strains in the near future. Copyright© Bentham Science Publishers; For any queries, please email at firstname.lastname@example.org.
Solovyev, Valery; Ivanov, Vladimir
Automatic event extraction from text is an important step in knowledge acquisition and knowledge base population. Manual work in the development of an extraction system is indispensable, either in corpus annotation or in vocabulary and pattern creation for a knowledge-based system. Recent works have focused on adapting existing systems (for extraction from English texts) to new domains. Event extraction in other languages has not been studied due to the lack of resources and algorithms necessary for natural language processing. In this paper we define a set of linguistic resources that are necessary for the development of a knowledge-based event extraction system in Russian: a vocabulary of subordination models, a vocabulary of event triggers, and a vocabulary of Frame Elements that are basic building blocks for semantic patterns. We propose a set of methods for the creation of such vocabularies in Russian and other languages using the Google Books NGram Corpus. The methods are evaluated in the development of an event extraction system for Russian.
Catia Pesquita; Daniel Faria; Tiago Grego; Francisco Couto; Mário J. Silva
Biomedical research generates a vast amount of information that is ultimately stored in scientific publications or in databases. The information in scientific texts is unstructured and thus hard to access, whereas the information in databases, although more accessible, often lacks in contextualization. The integration of information from these two kinds of sources is crucial for managing and extracting knowledge. By structuring and defining the concepts and relationships within a biomedical d...
Conclusion: Guidelines confirm the results achieved with data mining (DM) techniques and help to rank important risk factors based on national and local information. Evaluation of the extracted rules revealed new patterns for CAD patients.
Khan, Muhammad Taimoor; Durrani, Mehr; Khalid, Shehzad; Aziz, Furqan
Lifelong machine learning (LML) models learn with experience, maintaining a knowledge-base without user intervention. Unlike traditional single-domain models, they can easily scale up to explore big data. The existing LML models have high data dependency, consume more resources, and do not support streaming data. This paper proposes an online LML model (OAMC) to support streaming data with reduced data dependency. By engineering the knowledge-base and introducing new knowledge features, the learning pattern of the model is improved for data arriving in pieces. OAMC improves accuracy, measured as topic coherence, by 7% for streaming data while reducing the processing cost by half. PMID:27195004
This study reveals the rapid biosynthesis of silver nanoparticles (EAAgNPs) using aqueous latex extract of Euphorbia antiquorum L. as a potential bioreductant. Synthesized EAAgNPs generate a surface plasmon resonance peak at 438 nm in a UV–Vis spectrophotometer. The size and shape of the EAAgNPs were further characterized through transmission electron microscopy (TEM), which shows well-dispersed spherical nanoparticles with sizes ranging from 10 to 50 nm. Energy dispersive X-ray spectroscopic analysis (EDAX) confirms the presence of silver (Ag) as the major constituent element. The X-ray diffraction (XRD) pattern of the EAAgNPs, corresponding to the (111), (200), (220) and (311) planes, reveals that the generated nanoparticles were face-centered cubic crystalline in nature. Interestingly, Fourier-transform infrared spectroscopy (FTIR) analysis shows the major role of active phenolic constituents in the reduction and stabilization of the EAAgNPs. Phyto-fabricated EAAgNPs exhibit significant antimicrobial and larvicidal activity against bacterial human pathogens as well as disease-transmitting blood-sucking parasites such as Culex quinquefasciatus and Aedes aegypti (IIIrd instar larvae). On the other hand, in vitro cytotoxicity assessment of the bioformulated EAAgNPs has shown potential anticancer activity against human cervical carcinoma cells (HeLa). The preliminary biochemical (MTT) assay and microscopic studies depict that the synthesized EAAgNPs at a minimal dosage (IC50 = 28 μg) trigger a cellular toxicity response. Hence, the EAAgNPs can be considered an environmentally benign and non-toxic nanobiomaterial for biomedical applications. Keywords: Crystal structure, Euphorbia antiquorum L., Silver nanoparticles, Anticancer, Human pathogens
Oramas Martín, Sergio
In this thesis, we address the problems of classifying and recommending music present in large collections. We focus on the semantic enrichment of descriptions associated to musical items (e.g., artists biographies, album reviews, metadata), and the exploitation of multimodal data (e.g., text, audio, images). To this end, we first focus on the problem of linking music-related texts with online knowledge repositories and on the automated construction of music knowledge bases. Then, we show how...
Dent, Rosanna; Santos, Ricardo Ventura
In the twentieth century, biomedical researchers believed the study of Indigenous Amazonians could inform global histories of human biological diversity. This paper examines the similarities and differences of two approaches to this mid-century biomedical research, comparing the work of virologist and epidemiologist Francis Black with human geneticists James V. Neel and Francisco Salzano. While both groups were interested in Indigenous populations as representatives of the past, their perspectives on epidemics diverged. For Black, outbreaks of infectious diseases were central to his methodological and theoretical interests; for Neel and Salzano, epidemics could potentially compromise the epistemological value of their data.
...systems built upon well-known large-scale hierarchical structures. Strube07 is built on the latest version of a taxonomy, TStrube, which was derived ... events in the documents. However, for scaling purposes, NEXUS only analyzes and extracts events from the first sentence and the title of each document
relevant texts" (mIE) information from simple documents written in different languages can be combined. A combined deep and shallow (syntax and semantic ... e.g., when some parts of the text cannot be resolved by a given NLP component. Furthermore, in RMRS, due to a possible lack of morphological analysis ... syntax and content of Cyc. In Proceedings of the 2006 AAAI Spring Symposium on Formalizing and Compiling Background Knowledge and Its Applications to
Kavuluru, Ramakanth; Han, Sifei; Harris, Daniel
Diagnosis codes are extracted from medical records for billing and reimbursement and for secondary uses such as quality control and cohort identification. In the US, these codes come from the standard terminology ICD-9-CM derived from the international classification of diseases (ICD). ICD-9 codes are generally extracted by trained human coders by reading all artifacts available in a patient’s medical record following specific coding guidelines. To assist coders in this manual process, this paper proposes an unsupervised ensemble approach to automatically extract ICD-9 diagnosis codes from textual narratives included in electronic medical records (EMRs). Earlier attempts on automatic extraction focused on individual documents such as radiology reports and discharge summaries. Here we use a more realistic dataset and extract ICD-9 codes from EMRs of 1000 inpatient visits at the University of Kentucky Medical Center. Using named entity recognition (NER), graph-based concept-mapping of medical concepts, and extractive text summarization techniques, we achieve an example based average recall of 0.42 with average precision 0.47; compared with a baseline of using only NER, we notice a 12% improvement in recall with the graph-based approach and a 7% improvement in precision using the extractive text summarization approach. Although diagnosis codes are complex concepts often expressed in text with significant long range non-local dependencies, our present work shows the potential of unsupervised methods in extracting a portion of codes. As such, our findings are especially relevant for code extraction tasks where obtaining large amounts of training data is difficult. PMID:28748227
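The example-based averaging of recall and precision used in the evaluation can be sketched as follows; the ICD-9 codes shown are illustrative, not taken from the study's data:

```python
def example_based_scores(gold, predicted):
    """Average per-visit recall and precision for multi-label code
    extraction: each visit contributes one recall and one precision
    value computed over its own code sets."""
    recalls, precisions = [], []
    for g, p in zip(gold, predicted):
        overlap = len(set(g) & set(p))
        recalls.append(overlap / len(g) if g else 1.0)
        precisions.append(overlap / len(p) if p else 1.0)
    n = len(gold)
    return sum(recalls) / n, sum(precisions) / n

# Two hypothetical inpatient visits with gold and predicted codes.
gold = [["250.00", "401.9"], ["486"]]
predicted = [["250.00"], ["486", "428.0"]]
recall, precision = example_based_scores(gold, predicted)
# recall == (0.5 + 1.0) / 2 == 0.75
# precision == (1.0 + 0.5) / 2 == 0.75
```

Averaging per example, rather than pooling all codes, prevents visits with many codes from dominating the score.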
Information quantity is approached in this paper, considering the specific domain of nonconforming product management as the information source. The work is a case study: raw data were gathered from a heavy industrial works company, and information extraction and knowledge formation are considered herein. The method used for estimating information quantity is based on the Shannon entropy formula. The information and entropy spectra are decomposed and analysed to extract specific information and form knowledge from it. The results of the entropy analysis point out the information that the organisation needs to acquire, presented as a specific knowledge type.
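The Shannon entropy formula mentioned above can be sketched in a few lines. The category counts are hypothetical, not the company's data:

```python
import math

# Shannon entropy H = -sum(p_i * log2(p_i)) over the observed frequency
# distribution of nonconformity categories.
def shannon_entropy(counts):
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

# Four equally frequent nonconforming-product categories carry 2 bits
# of information per observation; a single category carries none.
h_uniform = shannon_entropy([25, 25, 25, 25])  # → 2.0
h_single = shannon_entropy([100])              # → 0.0
```

A skewed distribution falls between these extremes, which is what makes the entropy spectrum useful for spotting where information is concentrated.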
Background: Hospitals are centres of cure but also important centres of infectious waste generation. Effective management of Biomedical Waste (BMW) is not only a legal necessity but also a social responsibility. Aims and Objectives: To assess knowledge and practice in managing biomedical waste among nursing staff and student nurses in RIMS, Ranchi. Materials and methods: The study was conducted at RIMS, Ranchi from October 2013 to March 2014 (6 months). It was a descriptive, hospital-based, cross-sectional study. A total of 240 nurses, randomly chosen from various departments, participated in the study. A pre-designed, pre-tested, structured proforma was used for data collection after obtaining informed consent. A self-made scoring system was used to categorize the participants as having good, average or poor scores. Data were tabulated and analyzed using percentages and the chi-square test. Results: When knowledge of general information about BMW management was assessed (scores 0-8), the level of knowledge was better in student nurses than in staff nurses: student nurses scored good (6-8 correct answers) in more than half of the questions (65%), whereas staff nurses scored good in only 33.33% of the questions. When practical information regarding BMW management was assessed (scores 0-8), staff nurses had relatively better practice than students, scoring good (6-8 correct answers) in 40% and 30% of the questions, respectively. Conclusion: Although the overall knowledge of the study participants was good, they still need good quality training to improve their current knowledge of BMW management.
With the exponential increase in the number of articles published every year in the biomedical domain, there is a need for automated systems that extract unknown information from published articles. Text mining techniques enable the extraction of such knowledge from unstructured documents. This paper reviews text mining processes in detail and the software tools available to carry out text mining. It also reviews the roles and applications of text mining in the biomedical domain. Text mining processes such as search and retrieval of documents, pre-processing of documents, natural language processing, methods for text clustering, and methods for text classification are described in detail. Text mining techniques can facilitate the mining of vast amounts of knowledge on a given topic from published biomedical research articles and the drawing of meaningful conclusions that are not possible otherwise.
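As a minimal illustration of the search-and-retrieval step listed above, the following sketch builds TF-IDF vectors and compares documents by cosine similarity. The corpus is a toy example; the reviewed tools implement far richer pipelines:

```python
import math
from collections import Counter

# Build a TF-IDF weight vector (as a dict) for each document.
def tfidf_vectors(docs):
    tokenized = [doc.lower().split() for doc in docs]
    n = len(docs)
    df = Counter(t for toks in tokenized for t in set(toks))  # document frequency
    vecs = []
    for toks in tokenized:
        tf = Counter(toks)
        vecs.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vecs

# Cosine similarity between two sparse vectors.
def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = ["gene expression in cancer cells",
        "protein interaction networks",
        "cancer gene mutation analysis"]
vecs = tfidf_vectors(docs)
# doc 0 is closer to doc 2 (shared "gene", "cancer") than to doc 1
```

Ranking a corpus against a query vector by this similarity is the core of the retrieval step; clustering and classification then operate on the same vectors.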
Kibbe, Warren A; Arze, Cesar; Felix, Victor; Mitraka, Elvira; Bolton, Evan; Fu, Gang; Mungall, Christopher J; Binder, Janos X; Malone, James; Vasant, Drashtti; Parkinson, Helen; Schriml, Lynn M
The current version of the Human Disease Ontology (DO) (http://www.disease-ontology.org) database expands the utility of the ontology for the examination and comparison of genetic variation, phenotype, protein, drug and epitope data through the lens of human disease. DO is a biomedical resource of standardized common and rare disease concepts with stable identifiers organized by disease etiology. The content of DO has had 192 revisions since 2012, including the addition of 760 terms. Thirty-two percent of all terms now include definitions. DO has expanded the number and diversity of research communities and community members by 50+ during the past two years. These community members actively submit term requests, coordinate biomedical resource disease representation and provide expert curation guidance. Since the DO 2012 NAR paper, there have been hundreds of term requests and a steady increase in the number of DO listserv members, twitter followers and DO website usage. DO is moving to a multi-editor model utilizing Protégé to curate DO in web ontology language. This will enable closer collaboration with the Human Phenotype Ontology, EBI's Ontology Working Group, Mouse Genome Informatics and the Monarch Initiative among others, and enhance DO's current asserted view and multiple inferred views through reasoning. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
The tropical freshwater zebrafish has recently emerged as a valuable model organism for the study of adipose tissue biology and obesity-related disease. The strengths of the zebrafish model system are its wealth of genetic mutants, transgenic tools, and amenability to high-resolution imaging of cell dynamics within live animals. However, zebrafish adipose research is at a nascent stage and many gaps exist in our understanding of zebrafish adipose physiology and metabolism. By contrast, adipose research within other, closely related, teleost species has a rich and extensive history, owing to the economic importance of these fish as a food source. Here, we compare and contrast knowledge on peroxisome proliferator-activated receptor gamma (PPARG)-mediated adipogenesis derived from both the biomedical and aquaculture literatures. We first concentrate on the biomedical literature to (i) briefly review PPARG-mediated adipogenesis in mammals, before (ii) reviewing Pparg-mediated adipogenesis in zebrafish. Finally, we (iii) mine the aquaculture literature to compare and contrast Pparg-mediated adipogenesis in aquaculturally relevant teleosts. Our goal is to highlight evolutionary similarities and differences in adipose biology that will inform our understanding of the role of adipose tissue in obesity and related disease.
Acharya, Anita Shankar; Priyanka; Khandekar, Jyoti; Bachani, Damodar
Injuries caused by needle sticks and sharps due to unsafe injection practices are the most common occupational hazard among health care personnel. The objectives of our study were to determine the existing knowledge and practices of interns, the change in their level following an information, education and communication (IEC) package regarding safe injection practices and related biomedical waste management, and the status of hepatitis B vaccination. We conducted a follow-up study among all (106) interns in a tertiary care teaching hospital, Delhi. A predesigned, semistructured questionnaire was used. The IEC package took the form of a hands-on workshop and a PowerPoint presentation. A highly significant improvement was observed after the intervention in knowledge and practices regarding safe injection and related biomedical waste management. Almost two-thirds of interns were immunised against hepatitis B before the intervention, and this proportion rose significantly after the intervention.
St.clair, D. C.; Sabharwal, C. L.; Hacke, Keith; Bond, W. E.
One difficulty in applying artificial intelligence techniques to the solution of real-world problems is that the development and maintenance of many AI systems, such as those used in diagnostics, require large amounts of human resources. At the same time, databases frequently exist which contain information about the process(es) of interest. Recently, efforts to reduce the development and maintenance costs of AI systems have focused on using machine learning techniques to extract knowledge from existing databases. Research is described in the area of knowledge extraction using a class of machine learning techniques called decision-tree classifier systems. Results of this research suggest ways of performing knowledge extraction which may be applied in numerous situations. In addition, a measurement called the concept strength metric (CSM) is described which can be used to determine how well the resulting decision tree can differentiate between the concepts it has learned. The CSM can be used to determine whether or not additional knowledge needs to be extracted from the database. An experiment involving real-world data is presented to illustrate the concepts described.
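The concept strength metric itself is defined in the cited work; as a rough stand-in, the sketch below measures how well a tree's leaves separate learned concepts using weighted leaf purity. The leaf labels are hypothetical:

```python
# Weighted leaf purity: for each decision-tree leaf, the fraction of its
# examples belonging to the majority concept, weighted by leaf size.
# This is an illustrative surrogate, NOT the paper's CSM definition.
def average_leaf_purity(leaves):
    # leaves: list of class-label lists, one list per leaf of the tree
    total = sum(len(leaf) for leaf in leaves)
    purity = 0.0
    for leaf in leaves:
        majority = max(leaf.count(c) for c in set(leaf))
        purity += (len(leaf) / total) * (majority / len(leaf))
    return purity

# Two leaves from a hypothetical diagnostic tree: mostly "fault" vs mostly "ok"
leaves = [["fault"] * 9 + ["ok"], ["ok"] * 8 + ["fault"] * 2]
score = average_leaf_purity(leaves)  # 0.5*0.9 + 0.5*0.8 = 0.85
```

A low score on such a measure signals that the tree confuses concepts, which is the same kind of evidence the CSM uses to decide whether more knowledge should be extracted from the database.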
Scuba, William; Tharp, Melissa; Mowery, Danielle; Tseytlin, Eugene; Liu, Yang; Drews, Frank A; Chapman, Wendy W
Clinical Natural Language Processing (NLP) systems require a semantic schema comprised of domain-specific concepts, their lexical variants, and associated modifiers to accurately extract information from clinical texts. An NLP system leverages this schema to structure concepts and extract meaning from the free texts. In the clinical domain, creating a semantic schema typically requires input from both a domain expert, such as a clinician, and an NLP expert who will represent clinical concepts created from the clinician's domain expertise in a computable format usable by an NLP system. The goal of this work is to develop a web-based tool, Knowledge Author, that bridges the gap between the clinical domain expert and NLP system development by facilitating the development of domain content represented in a semantic schema for extracting information from clinical free text. Knowledge Author is a web-based recommendation system that supports users in developing the domain content necessary for clinical NLP applications. Knowledge Author's schematic model leverages a set of semantic types derived from the Secondary Use Clinical Element Models and the Common Type System to allow the user to quickly create and modify domain-related concepts. Features such as collaborative development and domain content suggestions, provided through the mapping of concepts to the Unified Medical Language System Metathesaurus database, further support the domain content creation process. Two proof-of-concept studies were performed to evaluate the system's performance. The first study evaluated Knowledge Author's flexibility to create a broad range of concepts. A dataset of 115 concepts was created, of which 87 (76%) could be created using Knowledge Author. The second study evaluated the effectiveness of Knowledge Author's output in an NLP system by extracting concepts and associated modifiers representing a clinical element, carotid stenosis, from 34 clinical free-text radiology reports.
Capurro, Daniel; Soto, Mauricio; Vivent, Macarena; Lopetegui, Marcelo; Herskovic, Jorge R
Biomedical Informatics is a new discipline that arose from the need to incorporate information technologies into the generation, storage, distribution and analysis of information in the domain of biomedical sciences. This discipline comprises basic biomedical informatics and public health informatics. The development of the discipline in Chile has been modest, and most projects have originated from the interest of individual people or institutions, without a systematic and coordinated national development. Considering the unique features of the health care system of our country, research in the area of biomedical informatics is becoming an imperative.
de Chiusole, Debora; Stefanutti, Luca; Spoto, Andrea
One of the most crucial issues in knowledge space theory is the construction of the so-called knowledge structures. In the present paper, a new data-driven procedure for large data sets is described, which overcomes some of the drawbacks of the already existing methods. The procedure, called k-states, is an incremental extension of the k-modes algorithm, which generates a sequence of locally optimal knowledge structures of increasing size, among which a "best" model is selected. The performance of k-states is compared to other two procedures in both a simulation study and an empirical application. In the former, k-states displays a better accuracy in reconstructing knowledge structures; in the latter, the structure extracted by k-states obtained a better fit.
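A single assignment step of the underlying k-modes-style clustering might look like the sketch below. The binary response patterns and modes are invented, and the actual k-states procedure adds the incremental generation and selection of locally optimal structures described above:

```python
# Hamming distance between two binary response patterns.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Assign each response pattern to its nearest mode (cluster centroid for
# categorical data), as in one iteration of k-modes.
def assign_to_modes(patterns, modes):
    return [min(range(len(modes)), key=lambda k: hamming(p, modes[k]))
            for p in patterns]

# Four examinees' answer patterns over 3 items, and two candidate modes
patterns = [(1, 1, 0), (1, 1, 1), (0, 0, 0), (0, 1, 0)]
modes = [(1, 1, 1), (0, 0, 0)]
labels = assign_to_modes(patterns, modes)  # → [0, 0, 1, 1]
```

In k-modes the modes are then recomputed as the item-wise majority of each cluster and the two steps alternate until the assignment stabilizes.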
Wei, Chih-Hsuan; Peng, Yifan; Leaman, Robert; Davis, Allan Peter; Mattingly, Carolyn J; Li, Jiao; Wiegers, Thomas C; Lu, Zhiyong
Manually curating chemicals, diseases and their relationships is significantly important to biomedical research, but it is plagued by its high cost and the rapid growth of the biomedical literature. In recent years, there has been growing interest in developing computational approaches for automatic chemical-disease relation (CDR) extraction. Despite these attempts, the lack of a comprehensive benchmarking dataset has limited the comparison of different techniques in order to assess and advance the current state of the art. To this end, we organized a challenge task through BioCreative V to automatically extract CDRs from the literature. We designed two challenge tasks: disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. To assist system development and assessment, we created a large annotated text corpus that consisted of human annotations of chemicals, diseases and their interactions from 1500 PubMed articles. 34 teams worldwide participated in the CDR task: 16 (DNER) and 18 (CID). The best systems achieved an F-score of 86.46% for the DNER task, a result that approaches the human inter-annotator agreement (0.8875), and an F-score of 57.03% for the CID task, the highest results ever reported for such tasks. When combining team results via machine learning, the ensemble system was able to further improve over the best team results, achieving F-scores of 88.89% and 62.80% for the DNER and CID tasks, respectively. Additionally, another novel aspect of our evaluation was to test each participating system's ability to return real-time results: the average response times for the teams' DNER and CID web service systems were 5.6 and 9.3 s, respectively. Most teams used hybrid systems for their submissions based on machine learning. Given the level of participation and results, we found our task to be successful in engaging the text-mining research community, producing a large annotated corpus and improving the results of the state of the art.
Background: Dental waste is a subset of hazardous biomedical (BM) waste. It has been observed that in most dental health facilities, the guidelines for proper management of dental waste are not adopted and are not up to the prescribed standard. Aim: The aim of this study is to assess the knowledge, awareness, and attitude/behavior regarding BM waste generation, hazards, and legislation among the study subjects using a self-structured questionnaire. Methodology: A cross-sectional study was conducted among 337 practicing dentists in Bengaluru city over the past 2 months. A self-structured questionnaire was used to obtain the required data. The questionnaire was divided into three sections. The first section contained questions regarding knowledge of BM waste generation, hazards, and legislation; the second section contained questions regarding the level of awareness of BM waste management practice; and the third section contained questions regarding attitude/behavior toward BM waste. Results: Of the 337 (100%) study participants, 176 (52.2%) were males and 161 (47.8%) were females. Among the 337 (100%) study participants, more than three-fourths, i.e., 291 (88.4%), knew about BM waste generation and legislation, whereas 23 (6.8%) each did not know and were not aware of it. Conclusion: There is a good level of knowledge and awareness about BM waste generation, hazards, legislation, and management among health care personnel in Bengaluru city. Regular monitoring and training are still required at all levels, and there is a need for continuing dental education on dental waste management practices for these dental practitioners.
Genomic data is estimated to be doubling every seven months, with over 2 trillion bases from whole genome sequence studies deposited in GenBank in just the last 15 years alone. Recent advances in compute and storage have enabled the use of artificial intelligence techniques in areas such as feature recognition in digital pathology and chemical synthesis for drug development. To apply A.I. productively to multidimensional data such as cellular processes and their dysregulation, the data must be transformed into a structured format, using prior knowledge to create contextual relationships and hierarchies upon which computational analysis can be performed. Here we present the organization of complex data into hypergraphs that facilitate the application of A.I. We provide an example use case of a hypergraph containing hundreds of biological data values and the results of several classes of A.I. algorithms applied in a popular compute cloud. While multiple, biologically insightful correlations between disease states, behavior, and molecular features were identified, the insights of scientific import were revealed only when exploration of the data included visualization of subgraphs of represented knowledge. The results suggest that while machine learning can identify known correlations and suggest testable ones, the greater probability of discovering unexpected relationships between seemingly independent variables (unknown-unknowns) requires a context-aware system: hypergraphs that impart biological meaning in nodes and edges. We discuss the implications of a combined hypergraph-A.I. analysis approach to multidimensional data and the pre-processing requirements for such a system.
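A hypergraph of the kind described can be sketched as a mapping from hyperedge labels to node sets, where an edge may connect any number of nodes. The gene and pathway names below are illustrative, not the paper's dataset:

```python
# A hypergraph as {hyperedge label: set of member nodes}. Unlike an
# ordinary graph edge, a hyperedge can join many nodes at once, which is
# how shared pathway membership imparts biological meaning.
hypergraph = {
    "insulin_signaling": {"INS", "INSR", "IRS1", "AKT1"},
    "adipogenesis": {"PPARG", "CEBPA", "INSR"},
}

# All hyperedges (contexts) a given node participates in.
def incident_edges(hg, node):
    return {edge for edge, nodes in hg.items() if node in nodes}

# INSR sits in both pathways, so it links the two subgraphs
shared_contexts = incident_edges(hypergraph, "INSR")
```

Visualizing the subgraph induced by such shared nodes is the kind of "subgraphs of represented knowledge" exploration the abstract credits with surfacing the scientifically important relationships.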
Capturing design rationale (DR) knowledge and presenting it to designers in a good form have great significance for design reuse and design innovation. Design rationale research began to develop in the 1970s, and many teams have since developed their own design rationale systems. However, existing DR acquisition systems are not intelligent enough and still require designers to perform many operations. In addition, existing design documents contain a large amount of DR knowledge, but it has not been well mined. Therefore, a method and system are needed to better extract DR knowledge from design documents. We have proposed a DRKH (design rationale knowledge hierarchy) model for DR representation. The DRKH model has three layers: a design intent layer, a design decision layer, and a design basis layer. In this paper, we use text mining to extract DR from design documents and construct the DR model. Finally, a welding robot design specification is taken as an example to demonstrate the system interface.
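The three DRKH layers can be sketched as a nested record, with each decision linked to the basis that supports it. The content is hypothetical, loosely inspired by the welding-robot example:

```python
# Intent layer at the top, decision layer beneath it, and a basis layer
# (the supporting rationale) attached to each decision.
drkh = {
    "intent": "extend welding arm reach without losing precision",
    "decisions": [
        {"decision": "adopt a six-axis articulated arm",
         "basis": ["workspace coverage analysis", "payload requirement"]},
    ],
}

# Walk from a decision down to its supporting basis layer.
def bases_for(model, decision_text):
    for d in model["decisions"]:
        if d["decision"] == decision_text:
            return d["basis"]
    return []
```

Text mining, in this framing, fills the three layers by classifying extracted sentences as intents, decisions, or bases and linking them.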
Tractenberg, Rochelle E; Gordon, Morris
Phenomenon: The purpose of "systematic" reviews/reviewers of medical and health professions educational research is to identify best practices. This qualitative article explores the question of whether systematic reviews can support "evidence informed" teaching and contrasts traditional systematic reviewing with a knowledge translation (KT) approach to this objective. Degrees of freedom analysis (DOFA) is used to examine the alignment of systematic review methods with educational research and the pedagogical strategies and approaches that might be considered with a decision-making framework developed to support valid assessment. This method is also used to explore how KT can be used to inform teaching and learning. The nature of educational research is not compatible with most (11/14) methods for systematic review. The inconsistency of systematic reviewing with the nature of educational research impedes both the identification and implementation of "best-evidence" pedagogy and teaching. This is primarily because research questions that do support the purposes of review do not support educational decision making. By contrast to systematic reviews of the literature, both a DOFA and KT are fully compatible with informing teaching using evidence. A DOFA supports the translation of theory to a specific teaching or learning case, so could be considered a type of KT. The DOFA results in a test of alignment of decision options with relevant educational theory, and KT leads to interventions in teaching or learning that can be evaluated. Examples of how to structure evaluable interventions are derived from a KT approach that are simply not available from a systematic review. Insights: Systematic reviewing of current empirical educational research is not suitable for deriving or supporting best practices in education. However, both "evidence-informed" and scholarly approaches to teaching can be supported as KT projects, which are inherently evaluable and can generate
Shaped by quantum theory, technology, and the genomics revolution, the integration of photonics, electronics, biomaterials, and nanotechnology holds great promise for the future of medicine. This topic has recently experienced explosive growth due to the noninvasive or minimally invasive nature and the cost-effectiveness of photonic modalities in medical diagnostics and therapy. The second edition of the Biomedical Photonics Handbook presents fundamental developments as well as important applications of biomedical photonics of interest to scientists, engineers, manufacturers, teachers, and students.
Biomedical annotation is a common and effective artifact for researchers to discuss, express opinions, and share discoveries. It is becoming increasingly popular in many online research communities and carries much useful information. Ranking biomedical annotations is a critical problem for data users seeking information efficiently. As the annotator's knowledge about the annotated entity normally determines the quality of the annotations, we evaluate that knowledge, i.e., the semantic relationship between annotator and entity, in two ways. The first is extracting relational information from credible websites by mining association rules between an annotator and a biomedical entity. The second is frequent pattern mining from historical annotations, which reveals common features of biomedical entities that an annotator can annotate with high quality. We propose a weighted and concept-extended RDF model to represent an annotator, a biomedical entity, and their background attributes, and we merge information from the two ways as the context of an annotator. Based on that, we present a method to rank the annotations by evaluating their correctness according to users' votes and the semantic relevancy between the annotator and the annotated entity. The experimental results show that the approach is applicable and efficient even when the data set is large.
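The association-rule step described above reduces to computing support and confidence over a history of (annotator, entity) co-occurrences. A minimal sketch with invented transactions:

```python
# Confidence of the rule lhs -> rhs over a list of transactions (sets):
# the fraction of transactions containing lhs that also contain rhs.
def rule_confidence(transactions, lhs, rhs):
    lhs_count = sum(1 for t in transactions if lhs <= t)
    both = sum(1 for t in transactions if lhs <= t and rhs <= t)
    return both / lhs_count if lhs_count else 0.0

# Hypothetical annotation history: each transaction pairs an annotator
# with the entities touched in one annotation session.
transactions = [
    {"annotatorA", "BRCA1"},
    {"annotatorA", "BRCA1", "TP53"},
    {"annotatorA", "TP53"},
    {"annotatorB", "BRCA1"},
]
conf = rule_confidence(transactions, {"annotatorA"}, {"BRCA1"})  # 2/3
```

High confidence for an annotator-entity rule is then one signal, alongside user votes, for ranking that annotator's annotations of the entity.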
Thessen, Anne E; Parr, Cynthia Sims
Numerous digitization and ontological initiatives have focused on translating biological knowledge from narrative text to machine-readable formats. In this paper, we describe two workflows for knowledge extraction and semantic annotation of text data objects featured in an online biodiversity aggregator, the Encyclopedia of Life. One workflow tags text with DBpedia URIs based on keywords. Another workflow finds taxon names in text using GNRD for the purpose of building a species association network. Both workflows work well: the annotation workflow has an F1 Score of 0.941 and the association algorithm has an F1 Score of 0.885. Existing text annotators such as Terminizer and DBpedia Spotlight performed well, but require some optimization to be useful in the ecology and evolution domain. Important future work includes scaling up and improving accuracy through the use of distributional semantics.
Background: We introduce a Knowledge-based Decision Support System (KDSS) to address the problem of protein complex extraction. Using a Knowledge Base (KB) coding the expertise about the proposed scenario, our KDSS is able to suggest both strategies and tools according to the features of the input dataset. Our system provides a navigable workflow for the current experiment and furthermore offers support in the configuration and running of every processing component of that workflow. This last feature makes our system a crossover between classical DSSs and workflow management systems. Results: We briefly present the KDSS's architecture and the basic concepts used in the design of the knowledge base and the reasoning component. The system is then tested using a subset of a Saccharomyces cerevisiae protein-protein interaction dataset. We used this subset because it has been well studied in the literature by several research groups in the field of complex extraction; in this way we could easily compare the results obtained through our KDSS with theirs. Our system suggests both a preprocessing and a clustering strategy, and for each of them it proposes and eventually runs suitable algorithms. The system's final results are thus composed of a workflow of tasks, which can be reused for other experiments, and the specific numerical results for that particular trial. Conclusions: Using the KDSS's knowledge base, the proposed approach provides a novel workflow that gives the best results with regard to the other workflows produced by the system. This workflow and its numeric results have been compared with other approaches to PPI network analysis found in the literature, offering similar results.
The Concealed Information Test (CIT) is a psychophysiological method designed to detect information that an individual cannot or does not wish to reveal. The present study used a version of the CIT, the Searching Concealed Information Test (SCIT), to extract information from partial information that participants possessed on a planned jailbreak. In the first experiment, 52 undergraduate students were randomly, but not equally, allocated into 15 different clusters of partial knowledge. In each, participants possessed knowledge about 2 of 6 critical items. Using a lenient decision rule, and a combined measure defined as the mean of 3 individual measures (skin conductance response amplitude, finger pulse, and respiration line length) 5 of the 6 critical items were identified. Experiment 2 extended the first experiment to unequal proportions of critical knowledge. Forty-six undergraduate students were randomly allocated into 25 clusters of partial knowledge in which 0, 1, 2, 3, or 6 pieces of information were known. Using the same lenient decision rule and the combined measure, all 6 items were identified. It was suggested that the Group SCIT is capable of assembling a comprehensive picture out of partial information possessed by informed innocent participants. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
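The combined measure, defined above as the mean of three standardized physiological channels, can be sketched as follows. The per-item scores are hypothetical, and the study's exact standardization may differ:

```python
import statistics

# z-score a list of per-item responses for one physiological channel.
def zscores(xs):
    mu, sd = statistics.mean(xs), statistics.pstdev(xs)
    return [(x - mu) / sd for x in xs]

# Combined measure: per item, the mean of the three standardized
# channels (skin conductance, finger pulse, respiration line length).
def combined_measure(scr, pulse, resp):
    return [statistics.mean(vals)
            for vals in zip(zscores(scr), zscores(pulse), zscores(resp))]

# Responses to 3 items; item index 2 stands out on every channel,
# so it would be flagged as the critical item.
scores = combined_measure([1.0, 1.2, 3.0], [60, 62, 80], [10.0, 10.5, 14.0])
```

Standardizing each channel before averaging puts measures with very different units on a common scale, so no single channel dominates the combined score.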
Hurst, Sarah J
This chapter summarizes the roles of nanomaterials in biomedical applications, focusing on those highlighted in this volume. A brief history of nanoscience and technology and a general introduction to the field are presented. Then, the chemical and physical properties of nanostructures that make them ideal for use in biomedical applications are highlighted. Examples of common applications, including sensing, imaging, and therapeutics, are given. Finally, the challenges associated with translating this field from the research laboratory to the clinic setting, in terms of the larger societal implications, are discussed.
Rare disease patients too often face common problems, including lack of access to correct diagnosis, lack of quality information on the disease, lack of scientific knowledge of the disease, and inequities and difficulties in access to treatment and care. This could be changed by implementing a comprehensive approach to rare diseases, increasing international cooperation in scientific research, gaining and sharing scientific knowledge about rare diseases, and developing tools for extracting and sharing that knowledge. A significant aspect to analyze is the organization of knowledge in the biomedical field for the proper management and retrieval of health information. For these purposes, the sources needed were acquired from the Office of Rare Diseases Research, the National Organization for Rare Disorders and Orphanet, organizations that provide information to patients and physicians and facilitate the exchange of information among the different actors involved in this field. The present paper shows the representation of rare disease terms in biomedical terminologies such as MeSH, ICD-10, SNOMED CT and OMIM, leveraging the fact that these terminologies are integrated in the UMLS. At the first level, we analyzed the overlap among sources, and at the second level, the presence of rare disease terms in target sources included in the UMLS, working at the term and concept levels. We found that MeSH has the best representation of rare disease terms.
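The term-level overlap analysis described above boils down to set intersections between terminologies. A toy sketch with illustrative disease names, not actual MeSH or ICD-10 content:

```python
# Toy term sets standing in for two source vocabularies.
mesh = {"marfan syndrome", "cystic fibrosis", "gaucher disease"}
icd10 = {"cystic fibrosis", "gaucher disease"}

# Term-level overlap and the fraction of one source covered by another.
overlap = mesh & icd10
coverage = len(overlap) / len(mesh)  # fraction of the MeSH set also in ICD-10
```

Concept-level analysis works the same way after mapping each term to its UMLS concept identifier, so that lexical variants of one disease count as a single shared concept.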
Suh, Sang C; Tanik, Murat M
Biomedical Engineering: Health Care Systems, Technology and Techniques is an edited volume with contributions from world experts. It provides readers with unique contributions related to current research and future healthcare systems. Practitioners and researchers focused on computer science, bioinformatics, engineering and medicine will find this book a valuable reference.
Coden, Anni; Savova, Guergana; Sominsky, Igor; Tanenblatt, Michael; Masanz, James; Schuler, Karin; Cooper, James; Guan, Wei; de Groen, Piet C
We introduce an extensible and modifiable knowledge representation model to represent cancer disease characteristics in a comparable and consistent fashion. We describe a system, MedTAS/P which automatically instantiates the knowledge representation model from free-text pathology reports. MedTAS/P is based on an open-source framework and its components use natural language processing principles, machine learning and rules to discover and populate elements of the model. To validate the model and measure the accuracy of MedTAS/P, we developed a gold-standard corpus of manually annotated colon cancer pathology reports. MedTAS/P achieves F1-scores of 0.97-1.0 for instantiating classes in the knowledge representation model such as histologies or anatomical sites, and F1-scores of 0.82-0.93 for primary tumors or lymph nodes, which require the extractions of relations. An F1-score of 0.65 is reported for metastatic tumors, a lower score predominantly due to a very small number of instances in the training and test sets.
Ratkovic, Zorana; Golik, Wiktoria; Warnier, Pierre
Bacteria biotopes cover a wide range of diverse habitats including animal and plant hosts, natural, medical and industrial environments. The high volume of publications in the microbiology domain provides a rich source of up-to-date information on bacteria biotopes. This information, as found in scientific articles, is expressed in natural language and is rarely available in a structured format, such as a database. This information is of great importance for fundamental research and microbiology applications (e.g., medicine, agronomy, food, bioenergy). The automatic extraction of this information from texts will provide a great benefit to the field. We present a new method for extracting relationships between bacteria and their locations using the Alvis framework. Recognition of bacteria and their locations was achieved using a pattern-based approach and domain lexical resources. For the detection of environment locations, we propose a new approach that combines lexical information and the syntactic-semantic analysis of corpus terms to overcome the incompleteness of lexical resources. Bacteria location relations extend over sentence borders, and we developed domain-specific rules for dealing with bacteria anaphors. We participated in the BioNLP 2011 Bacteria Biotope (BB) task with the Alvis system. Official evaluation results show that it achieves the best performance of participating systems. New developments since then have increased the F-score by 4.1 points. We have shown that the combination of semantic analysis and domain-adapted resources is both effective and efficient for event information extraction in the bacteria biotope domain. We plan to adapt the method to deal with a larger set of location types and a large-scale scientific article corpus to enable microbiologists to integrate and use the extracted knowledge in combination with experimental data.
In June 1996, NASA released a Cooperative Agreement Notice (CAN) inviting proposals to establish a National Space Biomedical Research Institute (9-CAN-96-01). This CAN stated that: The Mission of the Institute will be to lead a National effort for accomplishing the integrated, critical path, biomedical research necessary to support the long term human presence, development, and exploration of space and to enhance life on Earth by applying the resultant advances in human knowledge and technology acquired through living and working in space. The Institute will be the focal point of NASA sponsored space biomedical research. This statement has not been amended by NASA and remains the mission of the NSBRI.
This book grew out of the IEEE-EMBS Summer Schools on Biomedical Signal Processing, which have been held annually since 2002 to provide the participants state-of-the-art knowledge on emerging areas in biomedical engineering. Prominent experts in the areas of biomedical signal processing, biomedical data treatment, medicine, signal processing, system biology, and applied physiology introduce novel techniques and algorithms as well as their clinical or physiological applications. The book provides an overview of a compelling group of advanced biomedical signal processing techniques, such as mult
Machón-González, Iván; Rodríguez-Iglesias, Jesús; López-García, Hilario; Castrillón-Peláez, Leonor; Marañón-Maison, Elena
SOM-NG is a hybrid algorithm that is able to carry out visualization of process data, nonlinear function approximation, classification and clustering. The supervised version of SOM-NG produces a new type of 2D lattice called gradient planes, which are useful for determining the dynamics of a target variable according to the remaining training variables. In this way, it is an interesting tool for data mining, used to extract knowledge from databases for nonlinear systems. The main objective of this work is to analyze data from an industrial wastewater treatment plant using the SOM-NG algorithm in order to investigate relationships between the process variables. The data come from a biological wastewater treatment plant based on an activated sludge treatment including nitrification and denitrification processes. A direct relation was found between the nitrification efficiency and the operating temperature, and also between the ammonia loading rate and the nitrification-denitrification efficiency.
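SOM-NG itself combines a self-organizing map with neural gas and supervised extensions; as a hedged illustration of only the underlying SOM idea, here is a minimal plain SOM update on a toy 2D lattice. All sizes and learning parameters are arbitrary choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 5, 5, 3          # 5x5 lattice of units, 3-D inputs
weights = rng.random((grid_h, grid_w, dim))
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing="ij"), axis=-1)

def som_step(x, weights, lr=0.5, sigma=1.0):
    """One SOM update: find the best-matching unit (BMU), then pull the
    BMU and its lattice neighbours toward the input x."""
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    # Gaussian neighbourhood function on the lattice around the BMU
    lattice_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
    h = np.exp(-lattice_d2 / (2 * sigma ** 2))[..., None]
    return weights + lr * h * (x - weights)

for x in rng.random((200, dim)):       # toy training loop
    weights = som_step(x, weights)
```

After training, each unit's weight vector summarizes a region of the input space, which is what makes component planes (and, in SOM-NG, gradient planes) readable maps of variable relationships.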
Dent, Rosanna; Santos, Ricardo Ventura
In the twentieth century, biomedical researchers believed the study of Indigenous Amazonians could inform global histories of human biological diversity. This paper examines the similarities and differences of two approaches to this mid-century biomedical research, comparing the work of virologist and epidemiologist Francis Black with human geneticists James V. Neel and Francisco Salzano. While both groups were interested in Indigenous populations as representatives of the past, their perspectives on epidemics diverged. For Black, outbreaks of infectious diseases were central to his methodological and theoretical interests; for Neel and Salzano, epidemics could potentially compromise the epistemological value of their data. PMID:29622948
Pang, Shuchao; Orgun, Mehmet A; Yu, Zhezhou
The traditional biomedical image retrieval methods as well as content-based image retrieval (CBIR) methods originally designed for non-biomedical images either only consider using pixel and low-level features to describe an image or use deep features to describe images but still leave a lot of room for improving both accuracy and efficiency. In this work, we propose a new approach, which exploits deep learning technology to extract the high-level and compact features from biomedical images. The deep feature extraction process leverages multiple hidden layers to capture substantial feature structures of high-resolution images and represent them at different levels of abstraction, leading to an improved performance for indexing and retrieval of biomedical images. We exploit the current popular and multi-layered deep neural networks, namely, stacked denoising autoencoders (SDAE) and convolutional neural networks (CNN) to represent the discriminative features of biomedical images by transferring the feature representations and parameters of pre-trained deep neural networks from another domain. Moreover, in order to index all the images for finding the similarly referenced images, we also introduce preference learning technology to train and learn a kind of a preference model for the query image, which can output the similarity ranking list of images from a biomedical image database. To the best of our knowledge, this paper introduces preference learning technology for the first time into biomedical image retrieval. We evaluate the performance of two powerful algorithms based on our proposed system and compare them with those of popular biomedical image indexing approaches and existing regular image retrieval methods with detailed experiments over several well-known public biomedical image databases. Based on different criteria for the evaluation of retrieval performance, experimental results demonstrate that our proposed algorithms outperform the state
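The retrieval stage described above ultimately ranks database images by the similarity of their deep feature vectors to the query's. As a minimal sketch of only that ranking step, with random vectors standing in for the SDAE/CNN features and without reproducing the learned preference model:

```python
import numpy as np

rng = np.random.default_rng(1)
db_features = rng.random((100, 128))    # stand-in for per-image deep features
query = rng.random(128)                 # stand-in for the query image's features

def rank_by_cosine(query, features):
    """Return database indices ordered by cosine similarity to the query,
    together with the similarity scores."""
    q = query / np.linalg.norm(query)
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = f @ q
    return np.argsort(-sims), sims

order, sims = rank_by_cosine(query, db_features)  # order[0] = best match
```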
Dumidu Wijayasekara; Milos Manic
A significant portion of world energy production is consumed by building Heating, Ventilation and Air Conditioning (HVAC) units. Thus, along with occupant comfort, energy efficiency is also an important factor in HVAC control. Modern buildings use advanced Multiple Input Multiple Output (MIMO) control schemes to realize these goals. However, since the performance of HVAC units depends on many criteria, including uncertainties in weather, number of occupants, and thermal state, the performance of current state-of-the-art systems is sub-optimal. Furthermore, because of the large number of sensors in buildings and the high frequency of data collection, a large amount of information is available, and important building behavior that compromises energy efficiency or occupant comfort is difficult to identify. This paper presents an easy-to-use and understandable framework for identifying such behavior. The presented framework uses a human-understandable knowledge base to extract important building behavior and present it to users via a graphical user interface. The framework was tested on a building in the Pacific Northwest and was shown to be able to identify important behavior related to energy efficiency and occupant comfort.
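The knowledge-base idea can be sketched as a list of human-readable rules evaluated over averaged sensor readings; the rule labels, thresholds, and field names below are entirely hypothetical, not the framework's actual knowledge base:

```python
# Hypothetical rule knowledge base: each entry pairs a human-readable
# behaviour label with a condition over one averaged sensor reading.
RULES = [
    ("heating and cooling simultaneously",
     lambda r: r["heating_kw"] > 1.0 and r["cooling_kw"] > 1.0),
    ("overcooling while unoccupied",
     lambda r: r["occupancy"] == 0 and r["zone_temp_c"] < 18.0),
]

def flag_behaviours(reading):
    """Return the labels of every rule the reading triggers."""
    return [label for label, cond in RULES if cond(reading)]

reading = {"heating_kw": 2.5, "cooling_kw": 1.8,
           "occupancy": 0, "zone_temp_c": 17.0}
flags = flag_behaviours(reading)   # both rules fire for this reading
```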
Qin, Cheng-Zhi; Wu, Xue-Wei; Jiang, Jing-Chao; Zhu, A.-Xing
Application of digital terrain analysis (DTA), which is typically a modeling process involving workflow building, relies heavily on DTA domain knowledge of the match between the algorithm (and its parameter settings) and the application context (including the target task, the terrain in the study area, the DEM resolution, etc.), which is referred to as application-context knowledge. However, existing DTA-assisted tools often cannot use application-context knowledge because this type of DTA knowledge has not been formalized to be available for inference in these tools. This situation makes the DTA workflow-building process difficult for users, especially non-expert users. This paper proposes a case-based formalization for DTA application-context knowledge and a corresponding case-based reasoning method. A case in this context consists of a series of indices that formalize the DTA application-context knowledge and the corresponding similarity calculation methods for case-based reasoning. A preliminary experiment to determine the catchment area threshold for extracting drainage networks has been conducted to evaluate the performance of the proposed method. In the experiment, 124 cases of drainage network extraction (50 for evaluation and 74 for reasoning) were prepared from peer-reviewed journal articles. Preliminary evaluation shows that the proposed case-based method is a suitable way to use DTA application-context knowledge to achieve a marked reduction in the modeling burden for users.
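The similarity calculation for case-based reasoning can be sketched as a weighted combination of per-index similarities; the index names, weights, and similarity functions below are hypothetical stand-ins for the paper's formalized indices:

```python
def case_similarity(new_case, stored_case, weights):
    """Weighted average of per-index similarities between two cases.
    Numeric indices use a normalised distance; categorical indices use
    exact match. Both choices are illustrative assumptions."""
    total, wsum = 0.0, 0.0
    for key, w in weights.items():
        a, b = new_case[key], stored_case[key]
        if isinstance(a, (int, float)):
            sim = 1.0 - abs(a - b) / max(abs(a), abs(b), 1e-9)
        else:
            sim = 1.0 if a == b else 0.0
        total += w * sim
        wsum += w
    return total / wsum

case_a = {"dem_resolution": 30, "terrain": "hilly"}   # hypothetical indices
case_b = {"dem_resolution": 25, "terrain": "hilly"}
sim = case_similarity(case_a, case_b, {"dem_resolution": 0.5, "terrain": 0.5})
```

Reasoning then amounts to retrieving the stored case with the highest similarity to the new application context and reusing its parameter setting.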
Scientific publications written in natural language still play a central role as our knowledge source. However, due to the flood of publications, the literature survey process has become a highly time-consuming and tangled process, especially for novices of the discipline. Therefore, tools supporting the literature-survey process may help the individual scientist to explore new useful domains. Natural language processing (NLP) is expected to be one of the most promising techniques to retrieve, abstract, and extract knowledge. In this contribution, NLP is first applied to the literature of chemical vapor deposition (CVD), which is a sub-discipline of materials science and a complex and interdisciplinary field of research involving chemists, physicists, engineers, and materials scientists. Causal knowledge extraction from the literature is demonstrated using NLP.
Wang, Liming; Chen, Chunying
Nanomaterials (NMs) have been widely used in biomedical fields, daily consumer products, and even the food industry. It is crucial to understand the safety and biomedical efficacy of NMs. In this review, we summarize the recent progress on the physiological and pathological effects of NMs at several levels: the protein-nano interface, NM-subcellular structures, and cell–cell interaction. We focus on the detailed information of nano-bio interactions, especially protein adsorption, intracellular trafficking, biological barriers, and signaling pathways, as well as the associated mechanisms mediated by nanomaterials. We also introduce related analytical methods that are meaningful and helpful for future biomedical effect studies. We believe that knowledge about the pathophysiologic effects of NMs is not only significant for the rational design of medical NMs but also helps predict their safety and further improve their applications in the future. - Highlights: • Rapid protein adsorption onto nanomaterials that affects biomedical effects • Nanomaterials and their interaction with biological membranes, intracellular trafficking and specific cellular effects • Nanomaterials and their interaction with biological barriers • The signaling pathways mediated by nanomaterials and related biomedical effects • Novel techniques for studying translocation and biomedical effects of NMs
This theme issue on knowledge includes annotated listings of Web sites, CD-ROMs and computer software, videos, books, and additional resources that deal with knowledge and differences between how animals and humans learn. Sidebars discuss animal intelligence, learning proper behavior, and getting news from the Internet. (LRW)
Mittelstadt, Brent Daniel
This book presents cutting edge research on the new ethical challenges posed by biomedical Big Data technologies and practices. ‘Biomedical Big Data’ refers to the analysis of aggregated, very large datasets to improve medical knowledge and clinical care. The book describes the ethical problems posed by aggregation of biomedical datasets and re-use/re-purposing of data, in areas such as privacy, consent, professionalism, power relationships, and ethical governance of Big Data platforms. Approaches and methods are discussed that can be used to address these problems to achieve the appropriate balance between the social goods of biomedical Big Data research and the safety and privacy of individuals. Seventeen original contributions analyse the ethical, social and related policy implications of the analysis and curation of biomedical Big Data, written by leading experts in the areas of biomedical research, medical and technology ethics, privacy, governance and data protection. The book advances our understan...
Liu, Bo; Wu, Huayi; Wang, Yandong; Liu, Wenming
Main road features extracted from remotely sensed imagery play an important role in many civilian and military applications, such as updating Geographic Information System (GIS) databases, urban structure analysis, spatial data matching and road navigation. Current methods for road feature extraction from high-resolution imagery are typically based on threshold value segmentation. It is difficult however, to completely separate road features from the background. We present a new method for extracting main roads from high-resolution grayscale imagery based on directional mathematical morphology and prior knowledge obtained from the Volunteered Geographic Information found in the OpenStreetMap. The two salient steps in this strategy are: (1) using directional mathematical morphology to enhance the contrast between roads and non-roads; (2) using OpenStreetMap roads as prior knowledge to segment the remotely sensed imagery. Experiments were conducted on two ZiYuan-3 images and one QuickBird high-resolution grayscale image to compare our proposed method to other commonly used techniques for road feature extraction. The results demonstrated the validity and better performance of the proposed method for urban main road feature extraction.
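As a hedged illustration of step (1), the sketch below implements a grey-level opening with a horizontal line structuring element in plain NumPy; the paper applies such openings in multiple directions and combines them, and the structuring-element length here is an arbitrary choice:

```python
import numpy as np

def line_erode(img, length=7):
    """Grey erosion with a horizontal line structuring element."""
    pad = length // 2
    padded = np.pad(img, ((0, 0), (pad, pad)), mode="edge")
    windows = np.stack([padded[:, i:i + img.shape[1]] for i in range(length)])
    return windows.min(axis=0)

def line_dilate(img, length=7):
    """Grey dilation with the same horizontal line element."""
    pad = length // 2
    padded = np.pad(img, ((0, 0), (pad, pad)), mode="edge")
    windows = np.stack([padded[:, i:i + img.shape[1]] for i in range(length)])
    return windows.max(axis=0)

def directional_opening(img, length=7):
    """Opening = erosion then dilation; keeps bright structures at least
    `length` pixels long in the chosen direction, removing smaller ones."""
    return line_dilate(line_erode(img, length), length)

toy = np.zeros((5, 20))
toy[2, 4:16] = 1.0        # a bright 12-pixel "road" segment survives
toy[0, 8] = 1.0           # an isolated bright pixel (noise) is removed
opened = directional_opening(toy, length=7)
```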
The fast increasing amount of articles published in the biomedical field is creating difficulties in the way this wealth of information can be efficiently exploited by researchers. As a way of overcoming these limitations and enabling a more efficient use of the literature, we propose an approach for structuring the results of a literature search based on the latent semantic information extracted from a corpus. Moreover, we show how the results of the Latent Semantic Analysis method can be adapted so as to highlight differences between the results of different searches. We also propose different visualization techniques that can be applied to explore these results. Used in combination, these techniques could empower users with tools for literature-guided knowledge exploration and discovery.
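The latent semantic step can be sketched as a truncated SVD of a term-document matrix, which is the core of Latent Semantic Analysis; the toy matrix and the choice k = 2 below are illustrative only:

```python
import numpy as np

# Tiny term-document matrix (rows = terms, columns = documents); in
# practice this would hold tf-idf weighted counts from the search results.
A = np.array([
    [2, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 2, 0, 1],
    [0, 0, 1, 2],
], dtype=float)

# LSA: keep only the k strongest latent dimensions of the SVD
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vectors = (np.diag(s[:k]) @ Vt[:k]).T   # one k-D vector per document

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

sim_01 = cos(doc_vectors[0], doc_vectors[1])  # latent similarity, docs 0 and 1
```

Documents close in this latent space share concepts even when they share few literal terms, which is what makes the structuring and visualization described above possible.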
The biomedical literature captures the most current biomedical knowledge and is a tremendously rich resource for research. With over 24 million publications currently indexed in the US National Library of Medicine’s PubMed index, however, it is becoming increasingly challenging for biomedical researchers to keep up with this literature. Automated strategies for extracting information from it are required. Large-scale processing of the literature enables direct biomedical knowledge discovery. In this presentation, I will introduce the use of text mining techniques to support analysis of biological data sets, and will specifically discuss applications in protein function and phenotype prediction, as well as analysis of genetic variants that are supported by analysis of the literature and integration with complementary structured resources.
To summarize current outstanding research in the field of knowledge representation and management. Synopsis of the articles selected for the IMIA Yearbook 2010. Four interesting papers dealing with structured knowledge have been selected for the knowledge representation and management section. Combining the newest techniques in computational linguistics and natural language processing with the latest methods in statistical data analysis, machine learning and text mining has proved to be efficient for turning unstructured textual information into meaningful knowledge. Three of the four papers selected for the section corroborate this approach and depict various experiments conducted to extract meaningful knowledge from unstructured free texts, such as extracting cancer disease characteristics from pathology reports, extracting protein-protein interactions from biomedical papers, and extracting knowledge from the Medline literature to support hypothesis generation in molecular biology. Finally, the last paper addresses the level of formally representing and structuring information within clinical terminologies in order to render such information easily available and shareable among the health informatics community. Delivering common powerful tools able to automatically extract meaningful information from the huge amount of electronically available unstructured free text is an essential step towards promoting sharing and reusability across applications, domains, and institutions, thus contributing to building capacities worldwide.
Nguyen, Nhung T H; Miwa, Makoto; Tsuruoka, Yoshimasa; Chikayama, Takashi; Tojo, Satoshi
Relation extraction is a fundamental technology in biomedical text mining. Most of the previous studies on relation extraction from biomedical literature have focused on specific or predefined types of relations, which inherently limits the types of the extracted relations. With the aim of fully leveraging the knowledge described in the literature, we address much broader types of semantic relations using a single extraction framework. Our system, which we name PASMED, extracts diverse types of binary relations from biomedical literature using deep syntactic patterns. Our experimental results demonstrate that it achieves a level of recall considerably higher than the state of the art, while maintaining reasonable precision. We have then applied PASMED to the whole MEDLINE corpus and extracted more than 137 million semantic relations. The extracted relations provide a quantitative understanding of what kinds of semantic relations are actually described in MEDLINE and can be ultimately extracted by (possibly type-specific) relation extraction systems. PASMED extracts a large number of relations that have previously been missed by existing text mining systems. The entire collection of the relations extracted from MEDLINE is publicly available in machine-readable form, so that it can serve as a potential knowledge base for high-level text-mining applications.
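As a drastically simplified stand-in for the extraction step, the sketch below matches surface patterns of the form "<Entity> predicate <Entity>"; PASMED itself works from deep syntactic (predicate-argument) structures rather than regular expressions, and the predicate list here is hypothetical:

```python
import re

# Surface pattern: a capitalised token, one of a few predicates, another
# capitalised token. Purely illustrative; not PASMED's actual patterns.
PATTERN = re.compile(r"\b([A-Z][\w-]+)\s+(inhibits|activates|binds)\s+([A-Z][\w-]+)")

def extract_relations(sentence):
    """Return (subject, predicate, object) triples matched in one sentence."""
    return [(m.group(1), m.group(2), m.group(3)) for m in PATTERN.finditer(sentence)]

triples = extract_relations("TP53 inhibits MDM2 while BRCA1 binds RAD51.")
# triples -> [('TP53', 'inhibits', 'MDM2'), ('BRCA1', 'binds', 'RAD51')]
```

Applied corpus-wide, triples like these are what accumulate into the kind of machine-readable relation collection the paper releases.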
Lee, Keonsoo; Rho, Seungmin; Lee, Seok-Won
In a mobile cloud computing environment, the cooperation of distributed computing objects is one of the most important requirements for providing successful cloud services. To satisfy this requirement, all the members employed in the cooperation group need to share knowledge for mutual understanding. Even if ontology can be the right tool for this goal, there are several issues in building the right ontology. As the cost and complexity of managing knowledge increase with the scale of the knowledge, reducing the size of the ontology is one of the critical issues. In this paper, we propose a method of extracting an ontology module to increase the utility of knowledge. For a given signature, this method extracts the ontology module, which is semantically self-contained to fulfill the needs of the service, by considering the syntactic structure and semantic relations of concepts. By employing this module, instead of the original ontology, the cooperation of computing objects can be performed with less computing load and complexity. In particular, when multiple external ontologies need to be combined for more complex services, this method can be used to optimize the size of the shared knowledge.
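A minimal sketch of signature-based module extraction, assuming the ontology has been flattened to a plain concept graph: collect the closure of concepts reachable from the signature. Real module extraction over OWL semantics (e.g., locality-based modules) is considerably more involved than this graph traversal:

```python
from collections import deque

# Toy ontology as a directed graph: concept -> related concepts
# (subclass and property edges collapsed into plain adjacency).
ontology = {
    "Sensor": ["Device", "Reading"],
    "Device": ["Resource"],
    "Reading": ["Value"],
    "Resource": [],
    "Value": [],
    "Billing": ["Account"],   # unrelated branch; should be excluded
    "Account": [],
}

def extract_module(ontology, signature):
    """BFS closure: every concept reachable from the signature belongs
    to the module; everything else is pruned away."""
    module, queue = set(signature), deque(signature)
    while queue:
        for nxt in ontology.get(queue.popleft(), []):
            if nxt not in module:
                module.add(nxt)
                queue.append(nxt)
    return module

module = extract_module(ontology, {"Sensor"})
# module -> {'Sensor', 'Device', 'Reading', 'Resource', 'Value'}
```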
Pawar, S.H.; Khyalappa, R.J.; Yakhmi, J.V.
This book is predominantly a compilation of papers presented at the conference, which focused on developments in biomedical materials, biomedical devices and instrumentation, biomedical effects of electromagnetic radiation, electrotherapy, radiotherapy, biosensors, biotechnology, bioengineering, tissue engineering, clinical engineering and surgical planning, medical imaging, hospital system management, biomedical education, biomedical industry and society, bioinformatics, structured nanomaterials for biomedical applications, nano-composites, nano-medicine, synthesis of nanomaterials, and nanoscience and technology development. The papers presented herein contain the scientific substance to serve the academic interests of researchers from the fields of biomedicine, biomedical engineering, materials science and nanotechnology. Papers relevant to INIS are indexed separately
Warren, Amy L; Donnon, Tyrone
As veterinary medical curricula evolve, the time dedicated to biomedical science teaching, as well as the role of biomedical science knowledge in veterinary education, has been scrutinized. Aside from being mandated by accrediting bodies, biomedical science knowledge plays an important role in developing clinical, diagnostic, and therapeutic reasoning skills in the application of clinical skills, in supporting evidence-based veterinary practice and life-long learning, and in advancing biomedical knowledge and comparative medicine. With an increasing volume and fast pace of change in biomedical knowledge, as well as increased demands on curricular time, there has been pressure to make biomedical science education efficient and relevant for veterinary medicine. This has led to a shift in biomedical education from fact-based, teacher-centered and discipline-based teaching to applicable, student-centered, integrated teaching. This movement is supported by adult learning theories and is thought to enhance students' transference of biomedical science into their clinical practice. The importance of biomedical science in veterinary education and the theories of biomedical science learning will be discussed in this article. In addition, we will explore current advances in biomedical teaching methodologies that are aimed to maximize knowledge retention and application for clinical veterinary training and practice.
Institutional origins of health care-associated infection knowledge: lessons from an analysis of articles about methicillin-resistant Staphylococcus aureus published in leading biomedical journals from 1960-2009.
Rojas, Fabio; Byrd, W Carson; Saint, Sanjay
Biomedical research journals are important because peer reviewed research is viewed as more legitimate and trustworthy than non-peer reviewed work. Therefore, it is important to know how knowledge transmitted through academic biomedical journals is produced. This article asks if some organizations are more likely to produce research than others and if organizational setting is linked with an article's impact, as measured by citation counts. Using research on methicillin-resistant Staphylococcus aureus (MRSA) as a case study, we examined the role that hospitals, universities, public health agencies, and other organizations have in shaping an emerging research area. We collected public data on the organizational affiliations of researchers who authored 1,721 articles in general interest and selected specialty journals. MRSA research appears to have evolved in stages that require the participation of different types of organizations. Additionally, our analyses indicate that an author's organizational affiliation predicts citation counts, even when controlling for other factors. Organizations vary greatly in their ability to produce research, and this should be taken into account by those who manage or award funds to research organizations. Copyright © 2015 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
Wu, Xiaofang; Yang, Zhihao; Li, ZhiHeng; Lin, Hongfei; Wang, Jian
The volume of published biomedical literature on disease-related knowledge is expanding rapidly. Traditional information retrieval (IR) techniques, when applied to large databases such as PubMed, often return large, unmanageable lists of citations that do not fulfill the searcher's information needs. In this paper, we present an approach to automatically construct disease-related knowledge summarization from biomedical literature. In this approach, Kullback-Leibler divergence combined with a mutual information metric is first used to extract disease-salient information. Then, a deep search based on depth-first search (DFS) is applied to find hidden (indirect) relations between biomedical entities. Finally, a random walk algorithm is exploited to filter out the weak relations. The experimental results show that our approach achieves a precision of 60% and a recall of 61% on salient information extraction for carcinoma of the bladder and outperforms the Combo method.
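The first stage, scoring disease-salient terms, can be sketched as each term's contribution to the KL divergence between a disease-specific corpus and a background corpus; the frequencies and smoothing constant below are made up for illustration, and the paper additionally combines this with a mutual information metric:

```python
import math

def kl_salience(term_freq_topic, term_freq_background):
    """Per-term contribution to KL(topic || background): terms that are
    much more frequent in the disease corpus than overall score highest."""
    scores = {}
    for term, p in term_freq_topic.items():
        q = term_freq_background.get(term, 1e-6)   # smoothing for unseen terms
        scores[term] = p * math.log(p / q)
    return scores

# Hypothetical relative frequencies
topic = {"bladder": 0.05, "carcinoma": 0.04, "the": 0.06}
background = {"bladder": 0.001, "carcinoma": 0.002, "the": 0.06}
scores = kl_salience(topic, background)
best = max(scores, key=scores.get)   # "bladder": frequent in topic, rare overall
```

Common function words like "the" score near zero because their topic and background frequencies match, which is exactly the filtering effect wanted here.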
Demner-Fushman, Dina; Mork, James G; Shooshan, Sonya E; Aronson, Alan R
Identification of medical terms in free text is a first step in such Natural Language Processing (NLP) tasks as automatic indexing of biomedical literature and extraction of patients' problem lists from the text of clinical notes. Many tools developed to perform these tasks use biomedical knowledge encoded in the Unified Medical Language System (UMLS) Metathesaurus. We continue our exploration of automatic approaches to creation of subsets (UMLS content views) which can support NLP processing of either the biomedical literature or clinical text. We found that suppression of highly ambiguous terms in the conservative AutoFilter content view can partially replace manual filtering for literature applications, and suppression of two character mappings in the same content view achieves 89.5% precision at 78.6% recall for clinical applications. Published by Elsevier Inc.
Drumond, Maria Auxiliadora; Guimarães, Artur Queiroz; da Silva, Raquel Hosken Pereira
The giant earthworm, Rhinodrilus alatus (Righi 1971), has been captured in the southeastern Brazilian Cerrado biome for approximately 80 years and used as bait for amateur fishing throughout Brazil. Local knowledge and traditional extraction practices are crucial for the establishment of management strategies for the species because, although its extraction involves conflicts and social and environmental impacts, the species is one of the major sources of income for approximately 3,000 people, especially for members of an Afro-descendant community that has approximately 2,000 inhabitants. Participatory tools, such as seasonal calendar, transect walks and participatory maps, were individually or collectively used with extractors and traders (former extractors), and 129 semi-structured and unstructured interviews were conducted with the same individuals between 2005 and 2012. The capture of Rhinodrilus alatus was observed in different seasons and areas of occurrence of the species in 17 municipalities, where this giant earthworm is the only species extracted for trade. All information obtained was verified by community members in 17 meetings. The extractors have an extensive knowledge of the life history, behavior, distribution, and possible impacts of climate change on the species. Different capture techniques, which have different impacts, are used during the dry and rainy seasons and are passed by the extractors through the generations. Local knowledge contributed to the establishment of agreements for the use of capture techniques that have less impact, to the expansion of scientific knowledge and the reassessment of the conservation status of Rhinodrilus alatus. The present study may serve as an example for management projects for other giant earthworm species in other regions of Brazil and in other countries.
Lim, Joo-Hwee; Xiong, Wei
A comprehensive guide to understanding and interpreting digital images in medical and functional applications Biomedical Image Understanding focuses on image understanding and semantic interpretation, with clear introductions to related concepts, in-depth theoretical analysis, and detailed descriptions of important biomedical applications. It covers image processing, image filtering, enhancement, de-noising, restoration, and reconstruction; image segmentation and feature extraction; registration; clustering, pattern classification, and data fusion. With contributions from ex
Bronzino, Joseph D
Known as the bible of biomedical engineering, The Biomedical Engineering Handbook, Fourth Edition, sets the standard against which all other references of this nature are measured. As such, it has served as a major resource for both skilled professionals and novices to biomedical engineering. Biomedical Engineering Fundamentals, the first volume of the handbook, presents material from respected scientists with diverse backgrounds in physiological systems, biomechanics, biomaterials, bioelectric phenomena, and neuroengineering. More than three dozen specific topics are examined, including cardia
Du, Gang; Jiang, Zhibin; Diao, Xiaodi; Yao, Yang
Although a clinical pathway (CP) predefines a standardized care process for a particular diagnosis or procedure, many variances may still unavoidably occur. Some key index parameters have a strong relationship with the variance-handling measures of a CP. In the real world, these problems are highly nonlinear, so it is hard to develop a comprehensive mathematical model. In this paper, a rule extraction approach based on combining a hybrid genetic double multi-group cooperative particle swarm optimization (PSO) algorithm and a discrete PSO algorithm (named HGDMCPSO/DPSO) is developed to discover the previously unknown and potentially complicated nonlinear relationships between key parameters and the variance-handling measures of a CP. The extracted rules can then warn medical professionals about abnormal variances. Three numerical experiments, on the UCI Iris and Wisconsin breast cancer data sets and on CP variance data sets from osteosarcoma preoperative chemotherapy, are used to validate the proposed method. Compared with previous research, the proposed rule extraction algorithm obtains higher prediction accuracy, less computing time, and more stability, and its rules are easily comprehended by users; it is thus an effective knowledge extraction tool for CP variance handling.
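The abstract above builds on particle swarm optimization. As a point of reference only, the following is a minimal generic PSO sketch in Python, not the authors' HGDMCPSO/DPSO algorithm; all function and parameter names are illustrative.

```python
import random

def pso_minimize(f, dim, bounds, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization of f over a box (illustrative only)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull (pbest) + social pull (gbest)
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective standing in for a rule-error function: minimum at (3, 3).
best, err = pso_minimize(lambda x: sum((xi - 3.0) ** 2 for xi in x),
                         dim=2, bounds=(-10, 10))
```

In the paper's setting, the objective would score candidate rules against the CP variance data rather than a toy quadratic.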
Roos, M.; Marshall, M.S.; Gibson, A.P.; Schuemie, M.; Meij, E.; Katrenko, S.; van Hage, W.R.; Krommydas, K.; Adriaans, P.W.
Background: Hypothesis generation in molecular and cellular biology is an empirical process in which knowledge derived from prior experiments is distilled into a comprehensible model. The requirement of automated support is exemplified by the difficulty of considering all relevant facts that are
González, Roberto; Zato, Carolina; Benito, Rocío; Bajo, Javier; Hernández, Jesús M; De Paz, Juan F; Vera, Vicente; Corchado, Juan M
Advances in bioinformatics have contributed towards a significant increase in available information. Information analysis requires the use of distributed computing systems to best engage the process of data analysis. This study proposes a multiagent system that incorporates grid technology to facilitate distributed data analysis by dynamically incorporating the roles associated to each specific case study. The system was applied to genetic sequencing data to extract relevant information about insertions, deletions or polymorphisms.
Htay, Su Su; Lynn, Khin Thidar
With the development of e-commerce and web technology, most online merchant sites allow customers to write comments about the products they purchase. Customer reviews express opinions about products or services and are collectively referred to as customer feedback data. Opinion extraction from customer reviews is becoming an interesting area of research, motivating the development of automatic opinion mining applications; efficient methods and techniques are therefore needed to extract opinions from reviews. In this paper, we propose a novel idea for finding the opinion words or phrases for each feature in customer reviews in an efficient way. Our focus is on obtaining patterns of opinion words/phrases about product features from review text through adjectives, adverbs, verbs, and nouns. The extracted features and opinions are useful for generating a meaningful summary that provides a significant informative resource to help users as well as merchants track the most suitable choice of product.
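The feature-opinion pairing idea in the abstract above can be sketched very roughly as follows. This is not the authors' method: it uses tiny hand-made word lists in place of a real POS tagger and product-feature extractor, and the naive window-based pairing can misattribute an adjective to a nearby feature.

```python
import re

# Toy lexicons (illustrative; a real system would use a POS tagger and
# a mined feature list rather than hard-coded sets).
ADJECTIVES = {"great", "poor", "sharp", "blurry", "excellent", "short"}
FEATURES = {"battery", "screen", "camera", "price"}

def extract_feature_opinions(review, window=3):
    """Pair each product feature with opinion adjectives within a token window."""
    tokens = re.findall(r"[a-z]+", review.lower())
    found = []
    for i, tok in enumerate(tokens):
        if tok in FEATURES:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if tokens[j] in ADJECTIVES:
                    found.append((tok, tokens[j]))
    return found

pairs = extract_feature_opinions("The screen is sharp but the battery life is poor.")
```

A production system would also handle negation ("not sharp") and use dependency structure instead of a flat window.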
Miguel Ángel Hernández
Our research group has developed a group of hybrid biomedical materials potentially useful in the healing of diabetic foot ulcerations. The organic part of this type of hybrid material consists of nanometric deposits proceeding from the Mexican medicinal plant Tournefortia hirsutissima L., while the inorganic part is composed of a zeolite mixture that includes LTA, ZSM-5, clinoptilolite, and montmorillonite (PZX), as well as a composite material made of CaCO3 and montmorillonite (NABE). The organic part has been analyzed by GC-MS to detect the most abundant components present therein. In turn, the inorganic supports were characterized by XRD, SEM, and High Resolution Adsorption (HRADS) of N2 at 76 K. Through this latter methodology, the external surface area of the hybrid materials was evaluated; besides, the most representative textural properties of each substrate, such as total pore volume, pore size distribution, and, in some cases, the volume of micropores, were calculated. The formation and stabilization of nanodeposits on the inorganic segments of the hybrid supports led to a partial blockage of the microporosity of the LTA and ZSM-5 zeolites; this same effect occurred with the NABE and PZX substrates.
Anawar, Hossain Md
The oxidative dissolution of sulfidic minerals releases extremely acidic leachate, sulfate, and potentially toxic elements (e.g., As, Ag, Cd, Cr, Cu, Hg, Ni, Pb, Sb, Th, U, Zn) from different mine tailings and waste dumps. For the sustainable rehabilitation and disposal of mining waste, the sources and mechanisms of contaminant generation, and the fate and transport of contaminants, should be clearly understood. Therefore, this study has provided a critical review of (1) recent insights into the mechanisms of oxidation of sulfidic minerals, (2) environmental contamination by mining waste, and (3) remediation and rehabilitation techniques, and (4) has then developed the GEMTEC conceptual model/guide [(bio)geochemistry - mine type - mineralogy - geological texture - ore extraction process - climatic knowledge] to provide a new scientific approach and knowledge for the remediation of mining wastes and acid mine drainage. This study has suggested pre-mining geological, geochemical, mineralogical and microtextural characterization of different mineral deposits, and post-mining studies of ore extraction processes, physical, geochemical, mineralogical and microbial reactions, natural attenuation and the effect of climate change for sustainable rehabilitation of mining waste. All components of this model should be considered for effective and integrated management of mining waste and acid mine drainage. Copyright © 2015 Elsevier Ltd. All rights reserved.
Jin, Yang; McDonald, Ryan T; Lerman, Kevin; Mandel, Mark A; Carroll, Steven; Liberman, Mark Y; Pereira, Fernando C; Winters, Raymond S; White, Peter S
The rapid proliferation of biomedical text makes it increasingly difficult for researchers to identify, synthesize, and utilize developed knowledge in their fields of interest. Automated information extraction procedures can assist in the acquisition and management of this knowledge. Previous efforts in biomedical text mining have focused primarily upon named entity recognition of well-defined molecular objects such as genes, but less work has been performed to identify disease-related objects and concepts. Furthermore, promise has been tempered by an inability to efficiently scale approaches in ways that minimize manual efforts and still perform with high accuracy. Here, we have applied a machine-learning approach previously successful for identifying molecular entities to a disease concept to determine if the underlying probabilistic model effectively generalizes to unrelated concepts with minimal manual intervention for model retraining. We developed a named entity recognizer (MTag), an entity tagger for recognizing clinical descriptions of malignancy presented in text. The application uses the machine-learning technique Conditional Random Fields with additional domain-specific features. MTag was tested with 1,010 training and 432 evaluation documents pertaining to cancer genomics. Overall, our experiments resulted in 0.85 precision, 0.83 recall, and 0.84 F-measure on the evaluation set. Compared with a baseline system using string matching of text with a neoplasm term list, MTag performed with a much higher recall rate (92.1% vs. 42.1% recall) and demonstrated the ability to learn new patterns. Application of MTag to all MEDLINE abstracts yielded the identification of 580,002 unique and 9,153,340 overall mentions of malignancy. Significantly, addition of an extensive lexicon of malignancy mentions as a feature set for extraction had minimal impact in performance. Together, these results suggest that the identification of disparate biomedical entity classes in
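The MTag paper above compares its CRF tagger against a string-matching baseline over a neoplasm term list. A minimal sketch of such a dictionary baseline is shown below; the term list here is a tiny illustrative sample, not the lexicon used in the paper, and a real CRF model is far more capable.

```python
import re

# Tiny illustrative term list; the paper's baseline used a full neoplasm lexicon.
NEOPLASM_TERMS = ["neuroblastoma", "small cell lung cancer", "lymphoma", "glioma"]

def tag_malignancies(text):
    """Baseline dictionary tagger: return (start, end, term) spans.

    Longer terms are matched first so that, e.g., 'small cell lung cancer'
    is not shadowed by a shorter overlapping term.
    """
    spans = []
    for term in sorted(NEOPLASM_TERMS, key=len, reverse=True):
        for m in re.finditer(re.escape(term), text, flags=re.IGNORECASE):
            # skip spans overlapping an already-matched longer term
            if not any(m.start() < e and s < m.end() for s, e, _ in spans):
                spans.append((m.start(), m.end(), term))
    return sorted(spans)

spans = tag_malignancies(
    "MYCN amplification in neuroblastoma and diffuse large B-cell lymphoma.")
```

As the paper notes, such exact matching has poor recall on unseen surface variants, which is the gap a learned tagger like MTag closes.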
Bandrowski, Anita; Brinkman, Ryan; Brochhausen, Mathias; Brush, Matthew H; Bug, Bill; Chibucos, Marcus C; Clancy, Kevin; Courtot, Mélanie; Derom, Dirk; Dumontier, Michel; Fan, Liju; Fostel, Jennifer; Fragoso, Gilberto; Gibson, Frank; Gonzalez-Beltran, Alejandra; Haendel, Melissa A; He, Yongqun; Heiskanen, Mervi; Hernandez-Boussard, Tina; Jensen, Mark; Lin, Yu; Lister, Allyson L; Lord, Phillip; Malone, James; Manduchi, Elisabetta; McGee, Monnie; Morrison, Norman; Overton, James A; Parkinson, Helen; Peters, Bjoern; Rocca-Serra, Philippe; Ruttenberg, Alan; Sansone, Susanna-Assunta; Scheuermann, Richard H; Schober, Daniel; Smith, Barry; Soldatova, Larisa N; Stoeckert, Christian J; Taylor, Chris F; Torniai, Carlo; Turner, Jessica A; Vita, Randi; Whetzel, Patricia L; Zheng, Jie
The Ontology for Biomedical Investigations (OBI) is an ontology that provides terms with precisely defined meanings to describe all aspects of how investigations in the biological and medical domains are conducted. OBI re-uses ontologies that provide a representation of biomedical knowledge from the Open Biological and Biomedical Ontologies (OBO) project and adds the ability to describe how this knowledge was derived. We here describe the state of OBI and several applications that are using it, such as adding semantic expressivity to existing databases, building data entry forms, and enabling interoperability between knowledge resources. OBI covers all phases of the investigation process, such as planning, execution and reporting. It represents information and material entities that participate in these processes, as well as roles and functions. Prior to OBI, it was not possible to use a single internally consistent resource that could be applied to multiple types of experiments for these applications. OBI has made this possible by creating terms for entities involved in biological and medical investigations and by importing parts of other biomedical ontologies such as GO, Chemical Entities of Biological Interest (ChEBI) and Phenotype Attribute and Trait Ontology (PATO) without altering their meaning. OBI is being used in a wide range of projects covering genomics, multi-omics, immunology, and catalogs of services. OBI has also spawned other ontologies (Information Artifact Ontology) and methods for importing parts of ontologies (Minimum information to reference an external ontology term (MIREOT)). The OBI project is an open cross-disciplinary collaborative effort, encompassing multiple research communities from around the globe. To date, OBI has created 2366 classes and 40 relations along with textual and formal definitions. The OBI Consortium maintains a web resource (http://obi-ontology.org) providing details on the people, policies, and issues being addressed
Mining social web data is a challenging task, and finding user interests for personalized and non-personalized recommendation systems is another important task. Knowledge sharing among web users has become crucial in determining the usage of web data and personalizing content in various social websites as per the user's wishes. This paper aims to design a framework for extracting knowledge from web sources so that end users can make the right decision at a crucial juncture. The web data are collected from various web sources, structured appropriately, and stored as an ontology-based data repository. The proposed framework implements an online recommender application for learners who pursue their graduation in an open and distance learning environment. This framework possesses three phases: a data repository, a knowledge engine, and an online recommendation system. The data repository holds common data attained by acquiring data from various web sources. The knowledge engine collects semantic data from the ontology-based data repository and maps it to the user through the query processor component. The online recommendation system makes recommendations to the user for a decision-making process. This research work is implemented with the help of an experimental case study dealing with an online recommendation system for the career guidance of a learner. The online recommendation application is implemented with the help of the R tool, an NLP parser and a clustering algorithm. This research study will help users to attain semantic knowledge from heterogeneous web sources and to make decisions.
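The career-guidance recommendation step in the abstract above can be illustrated with a similarity-based ranker. This is only a sketch under assumed data: the profile and career vectors below are invented, and the actual system works over an ontology repository with R and a clustering algorithm rather than plain cosine scoring.

```python
import math

# Hypothetical learner-interest profiles over topic dimensions (illustrative only).
PROFILES = {
    "learner_a": {"math": 0.9, "programming": 0.8, "biology": 0.1},
    "learner_b": {"math": 0.2, "art": 0.9},
}
CAREERS = {
    "data_science": {"math": 0.8, "programming": 0.9},
    "graphic_design": {"art": 0.95, "programming": 0.2},
}

def cosine(u, v):
    """Cosine similarity of two sparse vectors stored as dicts."""
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0.0) * v.get(k, 0.0) for k in keys)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(learner):
    """Rank careers by similarity to the learner's interest profile."""
    profile = PROFILES[learner]
    return sorted(CAREERS, key=lambda c: cosine(profile, CAREERS[c]), reverse=True)
```

Usage: `recommend("learner_a")` ranks careers for that learner, most similar first.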
Mihăilă, Claudiu; Ohta, Tomoko; Pyysalo, Sampo; Ananiadou, Sophia
Biomedical corpora annotated with event-level information represent an important resource for domain-specific information extraction (IE) systems. However, bio-event annotation alone cannot cater for all the needs of biologists. Unlike work on relation and event extraction, most of which focusses on specific events and named entities, we aim to build a comprehensive resource, covering all statements of causal association present in discourse. Causality lies at the heart of biomedical knowledge, such as diagnosis, pathology or systems biology, and, thus, automatic causality recognition can greatly reduce the human workload by suggesting possible causal connections and aiding in the curation of pathway models. A biomedical text corpus annotated with such relations is, hence, crucial for developing and evaluating biomedical text mining. We have defined an annotation scheme for enriching biomedical domain corpora with causality relations. This schema has subsequently been used to annotate 851 causal relations to form BioCause, a collection of 19 open-access full-text biomedical journal articles belonging to the subdomain of infectious diseases. These documents have been pre-annotated with named entity and event information in the context of previous shared tasks. We report an inter-annotator agreement rate of over 60% for triggers and of over 80% for arguments using an exact match constraint. These increase significantly using a relaxed match setting. Moreover, we analyse and describe the causality relations in BioCause from various points of view. This information can then be leveraged for the training of automatic causality detection systems. Augmenting named entity and event annotations with information about causal discourse relations could benefit the development of more sophisticated IE systems. These will further influence the development of multiple tasks, such as enabling textual inference to detect entailments, discovering new facts and providing new
Bikku, Thulasi; Sambasiva Rao, N., Dr; Rao, Akepogu Ananda, Dr
This paper mainly focuses on developing a Hadoop-based framework for feature selection and classification models to classify high-dimensional data in heterogeneous biomedical databases. Extensive research has been performed in the fields of machine learning, big data, and data mining for identifying patterns. The main challenge is extracting useful features generated from diverse biological systems. The proposed model can be used for predicting diseases in various applications and identifying the features relevant to particular diseases. Given the exponential growth of biomedical repositories such as PubMed and Medline, an accurate predictive model is essential for knowledge discovery in a Hadoop environment. Extracting key features from unstructured documents often leads to uncertain results due to outliers and missing values. In this paper, we propose a two-phase map-reduce framework with a text preprocessor and a classification model. In the first phase, a mapper-based preprocessing method was designed to eliminate irrelevant features, missing values and outliers from the biomedical data. In the second phase, a Map-Reduce-based multi-class ensemble decision tree model was designed and applied to the preprocessed mapper data to improve the true positive rate and computational time. The experimental results on complex biomedical datasets show that the performance of our proposed Hadoop-based multi-class ensemble model significantly outperforms state-of-the-art baselines.
Despite a general acceptance of the biopsychosocial model, medical education and patient care are still largely biomedical in focus, and physicians have many deficiencies in biopsychosocial formulations and care. Education in medical schools puts more emphasis on providing biomedical education (BM) than biopsychosocial education (BPS); the initial knowledge formed in medical students is mainly based on a biomedical approach. Therefore, it seems that psychosocial aspects play a minor role at this level and BPS knowledge will lag behind BM knowledge. However, the integration of biomedical and psychosocial knowledge appears crucial for a successful and efficient patient encounter. In this paper, based on the theory of medical expertise development, the steps through which biomedical reasoning transforms into psychosomatic reasoning will be discussed.
Ezhilarasi, A Angel; Vijaya, J Judith; Kaviyarasu, K; Maaza, M; Ayeshamariam, A; Kennedy, L John
Green protocols for the synthesis of nickel oxide (NiO) nanoparticles using Moringa oleifera plant extract are reported in the present study, as they are cost-effective and eco-friendly; moreover, this paper records that NiO nanoparticles prepared by the green method show better cytotoxicity and antibacterial activity. The NiO nanoparticles were characterized by X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), high-resolution transmission electron microscopy (HRTEM), energy-dispersive X-ray analysis (EDX), and photoluminescence (PL) spectroscopy. The formation of a pure nickel oxide phase was confirmed by XRD and FTIR. The synthesized NiO nanoparticles were single-crystalline with a face-centered cubic phase and showed two intense photoluminescence emissions at 305.46 nm and 410 nm. The formation of nano- and micro-structures was confirmed by HRTEM. The in-vitro cytotoxicity and cell viability of the human cancer cell line HT-29 (colon carcinoma) and antibacterial activity against various bacterial strains were studied with various concentrations of NiO nanoparticles prepared from Moringa oleifera plant extract. MTT assay measurements of cell viability and morphological studies proved that the synthesized NiO nanoparticles possess cytotoxic activity against human cancer cells, and the various zones of inhibition (mm) obtained revealed the effective antibacterial activity of NiO nanoparticles against various Gram-positive and Gram-negative bacterial pathogens. Copyright © 2016 Elsevier B.V. All rights reserved.
Keefer, Christopher E; Chang, George; Kauffman, Gregory W
Pharmaceutical companies routinely collect data across multiple projects for common ADME endpoints. Although at the time of collection the data is intended for use in decision making within a specific project, knowledge can be gained by data mining the entire cross-project data set for patterns of structure-activity relationships (SAR) that may be applied to any project. One such data mining method is pairwise analysis. This method has the advantage of being able to identify small structural changes that lead to significant changes in activity. In this paper, we describe the process for full pairwise analysis of our high-throughput ADME assays routinely used for compound discovery efforts at Pfizer (microsomal clearance, passive membrane permeability, P-gp efflux, and lipophilicity). We also describe multiple strategies for the application of these transforms in a prospective manner during compound design. Finally, a detailed analysis of the activity patterns in pairs of compounds that share the same molecular transformation reveals multiple types of transforms from an SAR perspective. These include bioisosteres, additives, multiplicatives, and a type we call switches as they act to either turn on or turn off an activity. Copyright © 2011 Elsevier Ltd. All rights reserved.
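The pairwise (matched-pair) analysis described in the abstract above aggregates the property change observed each time the same structural transformation is applied. A minimal sketch of that aggregation step follows; the records and the transform notation are hypothetical, and a real pipeline would first derive the pairs from structures with cheminformatics tooling.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical matched pairs: (compound_before, compound_after, transform, delta)
# where delta is the change in some ADME property, e.g. lipophilicity.
PAIRS = [
    ("cpd1", "cpd2", "H>>F", -0.4),
    ("cpd3", "cpd4", "H>>F", -0.3),
    ("cpd5", "cpd6", "H>>OMe", 0.2),
]

def summarize_transforms(pairs):
    """For each transformation, report (count, mean property change)."""
    by_transform = defaultdict(list)
    for _, _, transform, delta in pairs:
        by_transform[transform].append(delta)
    return {t: (len(ds), mean(ds)) for t, ds in by_transform.items()}

summary = summarize_transforms(PAIRS)
```

Transforms with a consistent sign and magnitude of change across many pairs are the ones worth applying prospectively during compound design, which is the use the paper describes.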
Thiele, Herbert; Glandorf, Jörg; Hufnagel, Peter
With the large variety of proteomics workflows, as well as the large variety of instruments and data-analysis software available, researchers today face major challenges in validating and comparing their proteomics data. Here we present a new generation of the ProteinScape bioinformatics platform, now enabling researchers to manage proteomics data from generation and data warehousing to a central data repository, with a strong focus on the improved accuracy, reproducibility and comparability demanded by many researchers in the field. It addresses scientists' current needs in proteomics identification, quantification and validation. But producing large protein lists is not the end point in proteomics, where one ultimately aims to answer specific questions about the biological condition or disease model of the analyzed sample. In this context, a new tool has been developed at the Spanish Centro Nacional de Biotecnologia Proteomics Facility, termed PIKE (Protein Information and Knowledge Extractor), that allows researchers to control, filter and access specific information from genomics and proteomics databases, to understand the role and relationships of the proteins identified in the experiments. Additionally, an EU-funded project, ProDac, has coordinated systematic data collection in public standards-compliant repositories such as PRIDE. This will cover all aspects from generating MS data in the laboratory to assembling the whole annotation information and storing it together with identifications in a standardised format.
Background Due to the rapidly expanding body of biomedical literature, biologists require increasingly sophisticated and efficient systems to help them to search for relevant information. Such systems should account for the multiple written variants used to represent biomedical concepts, and allow the user to search for specific pieces of knowledge (or events) involving these concepts, e.g., protein-protein interactions. Such functionality requires access to detailed information about words used in the biomedical literature. Existing databases and ontologies often have a specific focus and are oriented towards human use. Consequently, biological knowledge is dispersed amongst many resources, which often do not attempt to account for the large and frequently changing set of variants that appear in the literature. Additionally, such resources typically do not provide information about how terms relate to each other in texts to describe events. Results This article provides an overview of the design, construction and evaluation of a large-scale lexical and conceptual resource for the biomedical domain, the BioLexicon. The resource can be exploited by text mining tools at several levels, e.g., part-of-speech tagging, recognition of biomedical entities, and the extraction of events in which they are involved. As such, the BioLexicon must account for real usage of words in biomedical texts. In particular, the BioLexicon gathers together different types of terms from several existing data resources into a single, unified repository, and augments them with new term variants automatically extracted from biomedical literature. Extraction of events is facilitated through the inclusion of biologically pertinent verbs (around which events are typically organized) together with information about typical patterns of grammatical and semantic behaviour, which are acquired from domain-specific texts. In order to foster interoperability, the BioLexicon is modelled using the Lexical
Zhu, Fei; Patumcharoenpol, Preecha; Zhang, Cheng; Yang, Yang; Chan, Jonathan; Meechai, Asawin; Vongsangnak, Wanwipa; Shen, Bairong
Cancer is a malignant disease that has caused millions of human deaths. Its study has a long history of well over 100 years. There have been an enormous number of publications on cancer research. This integrated but unstructured biomedical text is of great value for cancer diagnostics, treatment, and prevention. The immense body and rapid growth of biomedical text on cancer has led to the appearance of a large number of text mining techniques aimed at extracting novel knowledge from scientific text. Biomedical text mining on cancer research is computationally automatic and high-throughput in nature. However, it is error-prone due to the complexity of natural language processing. In this review, we introduce the basic concepts underlying text mining and examine some frequently used algorithms, tools, and data sets, as well as assessing how much these algorithms have been utilized. We then discuss the current state-of-the-art text mining applications in cancer research and we also provide some resources for cancer text mining. With the development of systems biology, researchers tend to understand complex biomedical systems from a systems biology viewpoint. Thus, the full utilization of text mining to facilitate cancer systems biology research is fast becoming a major concern. To address this issue, we describe the general workflow of text mining in cancer systems biology and each phase of the workflow. We hope that this review can (i) provide a useful overview of the current work of this field; (ii) help researchers to choose text mining tools and datasets; and (iii) highlight how to apply text mining to assist cancer systems biology research. Copyright © 2012 Elsevier Inc. All rights reserved.
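A common first phase in the text mining workflows surveyed above is scoring terms by TF-IDF to surface document-specific vocabulary before any deeper relation extraction. A self-contained sketch is shown below; the three example documents are invented, and production systems would add stop-word removal, stemming, and domain lexicons.

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def top_tfidf_terms(docs, k=2):
    """Rank each document's terms by TF-IDF and keep the top k."""
    tokenized = [tokenize(d) for d in docs]
    df = Counter()                      # document frequency of each term
    for toks in tokenized:
        df.update(set(toks))
    n = len(docs)
    ranked = []
    for toks in tokenized:
        tf = Counter(toks)
        # term frequency weighted by inverse document frequency
        scores = {t: (c / len(toks)) * math.log(n / df[t]) for t, c in tf.items()}
        ranked.append([t for t, _ in sorted(scores.items(), key=lambda x: -x[1])[:k]])
    return ranked

docs = [
    "p53 mutation drives tumor growth in lung cancer",
    "text mining extracts relations from cancer literature",
    "signaling pathway models describe tumor cell growth",
]
top = top_tfidf_terms(docs, k=2)
```

Note how terms shared across documents ("cancer", "tumor", "growth") are down-weighted, so each document's top terms are the ones distinctive to it.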
Agaga, Luther Agbonyegbeni; John, Theresa Adebola
In Nigeria, medical students are trained in more didactic environments than their counterparts in research-intensive academic medical centers. Their conception of pharmacology was thus sought. Students who are taking/have taken the medical pharmacology course completed an 18-question survey within 10 min by marking one/more choices from alternatives. Instructions were: "Dear Participant, Please treat as confidential, give your true view, avoid influences, avoid crosstalk, return survey promptly." Out of 301 students, 188 (62.46%) participated. Simple statistics showed: 61.3% of respondents associated pharmacology with medicine, 24.9% with science, 16.8% with industry, and 11.1% with government; 32.8% want to know clinical pharmacology, 7.1% basic pharmacology, 6.7% pharmacotherapy, and 34.2% want a blend of all three; 57.8% want to know clinical uses of drugs, 44.8% mechanisms of action, 44.4% side effects, and 31.1% different drugs in a group; 45.8% prefer to study lecturers' notes, 26.7% textbooks, 9.8% the Internet, and 2.7% journals; 46.7% use standard textbooks, 11.5% revision texts, 2.66% advanced texts, and 8.4% no textbook; 40.4% study pharmacology to be able to treat patients, 39.1% to complete the requirements for the MBBS degree, 8.9% to know this interesting subject, and 3.1% to make money. Respondents preferring aspects of pharmacology were: 42.7, 16, 16, and 10 (%) respectively for mechanisms of action, pharmacokinetics, side effects, and drug lists. Medical students' conception of and need for pharmacology were based on MBBS degree requirements; they lacked knowledge of/interest in pharmacology as a science and may not be the potential trusts for Africa's future pharmacology.
About the Book: A well set out textbook explains the fundamentals of biomedical engineering in the areas of biomechanics, biofluid flow, biomaterials, bioinstrumentation and use of computing in biomedical engineering. All these subjects form a basic part of an engineer's education. The text is admirably suited to meet the needs of the students of mechanical engineering, opting for the elective of Biomedical Engineering. Coverage of bioinstrumentation, biomaterials and computing for biomedical engineers can meet the needs of the students of Electronic & Communication, Electronic & Instrumenta
Ritter, Arthur B; Valdevit, Antonio; Ascione, Alfred N
Contents: Introduction: Modeling of Physiological Processes; Cell Physiology and Transport; Principles and Biomedical Applications of Hemodynamics; A Systems Approach to Physiology; The Cardiovascular System; Biomedical Signal Processing; Signal Acquisition and Processing; Techniques for Physiological Signal Processing; Examples of Physiological Signal Processing; Principles of Biomechanics; Practical Applications of Biomechanics; Biomaterials; Principles of Biomedical Capstone Design; Unmet Clinical Needs; Entrepreneurship: Reasons why Most Good Designs Never Get to Market; An Engineering Solution in Search of a Biomedical Problem
Ruiz-Ruiz, Antonio; Blunck, Henrik; Prentow, Thor Siiger
The optimization of logistics in large building complexes with many resources, such as hospitals, requires realistic facility management and planning. Current planning practices rely foremost on manual observations or coarse unverified assumptions and therefore do not properly scale or provide realistic data to inform facility planning. In this paper, we propose analysis methods to extract knowledge from large sets of network-collected WiFi traces to better inform facility management and planning in large building complexes. The analysis methods, which build on a rich set of temporal and spatial features, include methods for noise removal, e.g., labeling of beyond-building-perimeter devices, and methods for quantification of area densities and flows, e.g., building enter and exit events, and for classifying the behavior of people, e.g., into user roles such as visitor, hospitalized or employee…
Bouadi, Tassadit; Gascuel-Odoux, Chantal; Cordier, Marie-Odile; Quiniou, René; Moreau, Pierre
In recent years, simulation models have been used more and more in hydrology to test the effect of scenarios and help stakeholders in decision making. Agro-hydrological models have guided agricultural water management by testing the effect of landscape structure and farming system changes on water and chemical emissions in rivers. Such models generate a large amount of data, while only a few outputs, such as daily concentrations at the outlet of the catchment or annual budgets regarding soil, water and atmosphere emissions, are stored and analyzed. Thus, a great amount of information is lost from the simulation process. This is due to the large volumes of simulated data, but also to the difficulty of analyzing and transforming the data into usable information. In this talk we illustrate a data warehouse which has been built to store and manage simulation data coming from the agro-hydrological model TNT (Topography-based nitrogen transfer and transformations, Beaujouan et al., 2002). This model simulates the transfer and transformation of nitrogen in agricultural catchments. TNT was used over 10 years on the Yar catchment (western France), a 50 km2 area which has a detailed data set and faces an environmental issue (coastal eutrophication). 44 key simulated output variables are stored at a daily time step, i.e., 8 GB of storage, which allows users to explore N emissions in space and time, to quantify all the processes of transfer and transformation with respect to the cropping systems, their location within the catchment, and the emissions to water and atmosphere, and finally to gain new knowledge and help in making specific and detailed decisions in space and time. We present the dimensional modeling process of the nitrogen-in-catchment data warehouse (i.e. the snowflake model). After identifying the set of multileveled dimensions with complex hierarchical structures and relationships among related dimension levels, we chose the snowflake model to
Hayashi, Isao; Fujii, Masanori; Maeda, Toshiyuki; Leveille, Jasmin; Tasaka, Tokio
The Topographic Attentive Mapping (TAM) network is a biologically-inspired classifier that bears similarities to the human visual system. In case of wrong classification during training, an attentional top-down signal modulates synaptic weights in intermediate layers to reduce the difference between the desired output and the classifier's output. When used in a TAM network, the proposed pruning algorithm improves classification accuracy and allows extracting knowledge as represented by the network structure. In this paper, sport technique evaluation of motion analysis modelled by the TAM network was discussed. The trajectory pattern of forehand strokes of table tennis players was analyzed with nine sensor markers attached to the right upper arm of players. With the TAM network, input attributes and technique rules were extracted in order to classify the skill level of players of table tennis from the sensor data. In addition, differences between the elite player, middle level player and beginner were clarified; furthermore, we discussed how to improve skills specific to table tennis from the view of data analysis.
Faria, Daniel; Pesquita, Catia; Mott, Isabela; Martins, Catarina; Couto, Francisco M; Cruz, Isabel F
Biomedical ontologies pose several challenges to ontology matching due both to the complexity of the biomedical domain and to the characteristics of the ontologies themselves. The biomedical tracks in the Ontology Matching Evaluation Initiative (OAEI) have spurred the development of matching systems able to tackle these challenges, and benchmarked their general performance. In this study, we dissect the strategies employed by matching systems to tackle the challenges of matching biomedical ontologies and gauge the impact of the challenges themselves on matching performance, using the AgreementMakerLight (AML) system as the platform for this study. We demonstrate that the linear complexity of the hash-based searching strategy implemented by most state-of-the-art ontology matching systems is essential for matching large biomedical ontologies efficiently. We show that accounting for all lexical annotations (e.g., labels and synonyms) in biomedical ontologies leads to a substantial improvement in F-measure over using only the primary name, and that accounting for the reliability of different types of annotations generally also leads to a marked improvement. Finally, we show that cross-references are a reliable source of information and that, when using biomedical ontologies as background knowledge, it is generally more reliable to use them as mediators than to perform lexical expansion. We anticipate that translating traditional matching algorithms to the hash-based searching paradigm will be a critical direction for the future development of the field. Improving the evaluation carried out in the biomedical tracks of the OAEI will also be important, as without proper reference alignments there is only so much that can be ascertained about matching systems or strategies. Nevertheless, it is clear that, to tackle the various challenges posed by biomedical ontologies, ontology matching systems must be able to efficiently combine multiple strategies into a mature matching
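The hash-based lexical searching that the abstract credits for efficient matching of large biomedical ontologies can be illustrated with a short sketch. This is not AML's actual code; the ontologies, concept identifiers and labels below are made-up toy data, and a real matcher would add richer normalization, synonym weighting and similarity scoring.

```python
def build_lexical_index(ontology):
    """Map every normalized label/synonym to the class ids that carry it."""
    index = {}
    for class_id, names in ontology.items():
        for name in names:
            index.setdefault(name.strip().lower(), set()).add(class_id)
    return index

def hash_match(source, target):
    """Match classes whose labels or synonyms collide in the hash index.

    One pass over the source names with O(1) lookups in the target index,
    so the whole match is linear in the total number of names.
    """
    index = build_lexical_index(target)
    mappings = set()
    for class_id, names in source.items():
        for name in names:
            for hit in index.get(name.strip().lower(), ()):
                mappings.add((class_id, hit))
    return mappings

# Toy ontologies: primary name first, then synonyms (illustrative ids only).
hp = {"HP:0001945": ["Fever", "Pyrexia"], "HP:0002315": ["Headache"]}
mp = {"MP:0001985": ["hyperthermia", "fever"], "MP:0000001": ["tail kink"]}

print(hash_match(hp, mp))  # {('HP:0001945', 'MP:0001985')}
```

Note how the match above is found only because the synonym "fever" is indexed alongside the primary names, mirroring the abstract's finding that accounting for all lexical annotations improves F-measure.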
EGb 761 is a standardized extract of dried leaves of Ginkgo biloba containing 24% ginkgo-flavonol glycosides and 6% terpene lactones such as ginkgolides A, B, C, J and bilobalide. Its broad spectrum of pharmacological activities allows it to meet the numerous pathological requirements (hemodynamic, hemorheological, metabolic) which occur in cerebral, retinal, cochleovestibular, cardiac or peripheral ischemia. Moreover, EGb 761 has direct effects against necrosis and apoptosis of neurons and improves neural plasticity, as evidenced in vestibular compensation. At the molecular and cellular levels, some evidence obtained with animal models indicates that EGb 761 can act as a free radical scavenger and an inhibitor of lipid peroxidation with all, or nearly all, reactive oxygen species; maintains ATP content by protecting mitochondrial respiration and preserving oxidative phosphorylation; and exerts arterial and venous vasoregulator effects involving the release of endothelial factors and the catecholaminergic system. Moreover, EGb 761 regulates ionic balance in damaged cells and exerts a specific and potent platelet-activating factor antagonist activity. Numerous well-controlled clinical studies, conducted in Europe and in the USA, have revealed that EGb 761 is an effective therapy for a wide variety of disturbances of cerebral function, ranging from cerebral impairment of ischemic vascular origin (i.e. multi-infarct dementia) and early cognitive decline to mild-to-moderate cases of the more severe types of senile dementia (including Alzheimer's disease) or of mixed origin (i.e. psycho-organic origin). Improvement of signs and symptoms has been demonstrated for cognitive functions, particularly for memory loss, attention, alertness, vigilance, arousal and mental fluidity. Some clinical studies have shown that EGb 761 treatment may improve the capacity of geriatric patients to cope with the stressful demands of daily life. The explanation is a dual
Introduction to Digital Signal and Image Processing: Signals and Biomedical Signal Processing; Introduction and Overview; What is a "Signal"?; Analog, Discrete, and Digital Signals; Processing and Transformation of Signals; Signal Processing for Feature Extraction; Some Characteristics of Digital Images; Summary; Problems. Fourier Transform: Introduction and Overview; One-Dimensional Continuous Fourier Transform; Sampling and Nyquist Rate; One-Dimensional Discrete Fourier Transform; Two-Dimensional Discrete Fourier Transform; Filter Design; Summary; Problems. Image Filtering, Enhancement, and Restoration: Introduction and Overview
Krustev, P.; Ruskov, T.
In this paper we describe different biomedical applications using magnetic nanoparticles. Over the past decade, a number of biomedical applications have begun to emerge for magnetic nanoparticles of differing sizes, shapes, and compositions. Areas under investigation include targeted drug delivery, ultra-sensitive disease detection, gene therapy, high-throughput genetic screening, biochemical sensing, and rapid toxicity cleansing. Magnetic nanoparticles exhibit ferromagnetic or superparamagnetic behavior, magnetizing strongly under an applied field; in the superparamagnetic case there is no permanent magnetism once the field is removed. Superparamagnetic nanoparticles are highly attractive as in vivo probes or in vitro tools to extract information on biochemical systems. The optical properties of magnetic metal nanoparticles are spectacular and have therefore generated a great deal of excitement during the last few decades. Many applications, such as MRI imaging and hyperthermia, rely on the use of iron oxide particles. Moreover, magnetic nanoparticles conjugated with antibodies are also applied to hyperthermia and have enabled tumor-specific contrast enhancement in MRI. Other promising biomedical applications involve treating tumor cells containing magnetic nanoparticles with X-ray ionizing radiation, which employs the nanoparticles as a complementary radiation source inside the tumor. (authors)
Gascuel-odoux, C.; Bouadi, T.; Cordier, M.; Quiniou, R.
N-Catch has been designed using the open source Business Intelligence platform Pentaho. We show how to use online analytical processing (OLAP) to access and exploit, intuitively and quickly, the multidimensional and aggregated data from the N-Catch data warehouse. We illustrate how the data warehouse can be used to explore spatio-temporal dimensions efficiently and to discover new knowledge at multiple levels of simulation. The OLAP tool can be used to synthesize environmental information and understand nitrogen emissions in water bodies by generating comparative and personalized views of historical data. This DWH is currently being extended with data mining and information retrieval methods, such as Skyline queries, to perform advanced analyses (Bouadi et al., 2012). References: Bouadi et al. N-Catch: A Data Warehouse for Multilevel Analysis of Simulated Nitrogen Data from an Agro-hydrological Model. Submitted. Bouadi, T., Cordier, M., and Quiniou, R. (2012). Incremental computation of skyline queries with dynamic preferences. In DEXA (1), pages 219-233. Trepos et al. (2012). Mining simulation data by rule induction to determine critical source areas of stream water pollution by herbicides. Computers and Electronics in Agriculture 86, 75-88.
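The OLAP exploration described in the two N-Catch abstracts rests on rolling daily simulated facts up to coarser grains along the time dimension. A minimal sketch in Python, assuming hypothetical fact rows of the shape (date, location, variable, value) rather than N-Catch's actual schema:

```python
from collections import defaultdict

# Hypothetical daily fact rows: (date, location, variable, value).
facts = [
    ("1996-01-01", "upstream", "N_leached_kg", 1.5),
    ("1996-01-02", "upstream", "N_leached_kg", 0.5),
    ("1996-01-01", "outlet",   "N_leached_kg", 2.0),
]

def roll_up(rows, time_level="year"):
    """OLAP-style roll-up: sum daily facts to a coarser time grain."""
    cube = defaultdict(float)
    for date, location, variable, value in rows:
        period = date[:4] if time_level == "year" else date[:7]  # year vs. month
        cube[(period, location, variable)] += value
    return dict(cube)

yearly = roll_up(facts)
print(yearly[("1996", "upstream", "N_leached_kg")])  # 2.0
```

A real OLAP engine such as the one in Pentaho performs the same aggregation declaratively over the snowflake schema's dimension hierarchies, for space (sub-catchment, catchment) as well as time.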
This thesis is about text mining: extracting important information from literature. In recent years, the number of biomedical articles and journals has been growing exponentially. Scientists might not find the information they want because of the large number of publications. Therefore a system was
Alves, Tiago; Rodrigues, Rúben; Costa, Hugo; Rocha, Miguel
The volume of biomedical literature has been increasing in recent years. Patent documents have also followed this trend, being important sources of biomedical knowledge, technical details and curated data, which are put together along the granting process. The field of biomedical text mining (BioTM) has been creating solutions for the problems posed by the unstructured nature of natural language, which makes the search for information a challenging task. Several BioTM techniques can be applied to patents. Among those, Information Retrieval (IR) includes processes where relevant data are obtained from collections of documents. In this work, the main goal was to build a patent pipeline addressing IR tasks over patent repositories to make these documents amenable to BioTM tasks. The pipeline was developed within @Note2, an open-source computational framework for BioTM, adding a number of modules to the core libraries, including patent metadata and full-text retrieval, PDF-to-text conversion and optical character recognition. Also, user interfaces were developed for the main operations, materialized in a new @Note2 plug-in. The integration of these tools in @Note2 opens opportunities to run BioTM tools over patent texts, including tasks from Information Extraction, such as Named Entity Recognition or Relation Extraction. We demonstrated the pipeline's main functions with a case study, using an available benchmark dataset from BioCreative challenges. Also, we show the use of the plug-in with a user query related to the production of vanillin. This work makes all the relevant content from patents available to the scientific community, drastically decreasing the time required for this task, and provides graphical interfaces to ease the use of these tools. Copyright © 2018 Elsevier B.V. All rights reserved.
Laenger, C. J., Sr.
The engineering tasks performed in response to needs articulated by clinicians are described. Initial contacts were made with these clinician-technology requestors by the Southwest Research Institute NASA Biomedical Applications Team. The basic purpose of the program was to effectively transfer aerospace technology into functional hardware to solve real biomedical problems.
Nardi, Mariane; Lira-Guedes, Ana Cláudia; Albuquerque Cunha, Helenilza Ferreira; Guedes, Marcelino Carneiro; Mustin, Karen; Gomes, Suellen Cristina Pantoja
Várzea forests of the Amazon estuary contain species of importance to riverine communities. For example, the oil extracted from the seeds of crabwood trees is traditionally used to combat various illnesses and as such artisanal extraction processes have been maintained. The objectives of this study were to (1) describe the process involved in artisanal extraction of crabwood oil in the Fazendinha Protected Area, in the state of Amapá; (2) characterise the processes of knowledge transfer associated with the extraction and use of crabwood oil within a peri-urban riverine community; and (3) discern medicinal uses of the oil. The data were obtained using semistructured interviews with 13 community members involved in crabwood oil extraction and via direct observation. The process of oil extraction is divided into four stages: seed collection; cooking and resting of the seeds; shelling of the seeds and dough preparation; and oil collection. Oil extraction is carried out within the home for personal use, with surplus marketed within the community. More than 90% of the members of the community involved in extraction of crabwood oil highlighted the use of the oil to combat inflammation of the throat. Knowledge transfer occurs via oral transmission and through direct observation.
Johnson-Throop, Kathy A.
To improve on-orbit clinical capabilities by developing and providing operational support for intelligent, robust, reliable, and secure, enterprise-wide and comprehensive health care and biomedical informatics systems with increasing levels of autonomy, for use on Earth, low Earth orbit & exploration class missions. Biomedical Informatics is an emerging discipline that has been defined as the study, invention, and implementation of structures and algorithms to improve communication, understanding and management of medical information. The end objective of biomedical informatics is the coalescing of data, knowledge, and the tools necessary to apply that data and knowledge in the decision-making process, at the time and place that a decision needs to be made.
Boas, David A
Biomedical optics holds tremendous promise to deliver effective, safe, non- or minimally invasive diagnostics and targeted, customizable therapeutics. Handbook of Biomedical Optics provides an in-depth treatment of the field, including coverage of applications for biomedical research, diagnosis, and therapy. It introduces the theory and fundamentals of each subject, ensuring accessibility to a wide multidisciplinary readership. It also offers a view of the state of the art and discusses advantages and disadvantages of various techniques.Organized into six sections, this handbook: Contains intr
Gebelein, C G
The biomedical applications of polymers span an extremely wide spectrum of uses, including artificial organs, skin and soft tissue replacements, orthopaedic applications, dental applications, and controlled release of medications. No single, short review can possibly cover all these items in detail, and dozens of books and hundreds of reviews exist on biomedical polymers. Only a few relatively recent examples will be cited here; additional reviews are listed under most of the major topics in this book. We will consider each of the major classifications of biomedical polymers to some extent, inclu
From exoskeletons to neural implants, biomedical devices are no less than life-changing. Compact and constant power sources are necessary to keep these devices running efficiently. Edwar Romero's Powering Biomedical Devices reviews the background, current technologies, and possible future developments of these power sources, examining not only the types of biomedical power sources available (macro, mini, MEMS, and nano), but also what they power (such as prostheses, insulin pumps, and muscular and neural stimulators), and how they work (covering batteries, biofluids, kinetic and ther
Galipeau, James; Barbour, Virginia; Baskin, Patricia; Bell-Syer, Sally; Cobey, Kelly; Cumpston, Miranda; Deeks, Jon; Garner, Paul; MacLehose, Harriet; Shamseer, Larissa; Straus, Sharon; Tugwell, Peter; Wager, Elizabeth; Winker, Margaret; Moher, David
Biomedical journals are the main route for disseminating the results of health-related research. Despite this, their editors operate largely without formal training or certification. To our knowledge, no body of literature systematically identifying core competencies for scientific editors of biomedical journals exists. Therefore, we aimed to conduct a scoping review to determine what is known on the competency requirements for scientific editors of biomedical journals. We searched the MEDLINE®, Cochrane Library, Embase®, CINAHL, PsycINFO, and ERIC databases (from inception to November 2014) and conducted a grey literature search for research and non-research articles with competency-related statements (i.e. competencies, knowledge, skills, behaviors, and tasks) pertaining to the role of scientific editors of peer-reviewed health-related journals. We also conducted an environmental scan, searched the results of a previous environmental scan, and searched the websites of existing networks, major biomedical journal publishers, and organizations that offer resources for editors. A total of 225 full-text publications were included, 25 of which were research articles. We extracted a total of 1,566 statements possibly related to core competencies for scientific editors of biomedical journals from these publications. We then collated overlapping or duplicate statements which produced a list of 203 unique statements. Finally, we grouped these statements into seven emergent themes: (1) dealing with authors, (2) dealing with peer reviewers, (3) journal publishing, (4) journal promotion, (5) editing, (6) ethics and integrity, and (7) qualities and characteristics of editors. To our knowledge, this scoping review is the first attempt to systematically identify possible competencies of editors. Limitations are that (1) we may not have captured all aspects of a biomedical editor's work in our searches, (2) removing redundant and overlapping items may have led to the
... and on-line analysis of the biomedical signals. Each Biopac system-based laboratory station consists of real-time data acquisition system, amplifiers for EMG, EKG, EEG, and equipment for the study of Plethysmography, evoked response, cardio...
Holeňa, Martin; Baerns, M.
Roč. 81, - (2003), s. 485-494 ISSN 0920-5861 Grant - others:BMBF(DE) FKZ 03C3013 Institutional research plan: CEZ:AV0Z1030915 Keywords : artificial neural networks * multilayer perceptron * dependency * approximation * network training * overtraining * knowledge extraction * logical rules * oxidative dehydrogenation of propane Subject RIV: BA - General Mathematics Impact factor: 2.627, year: 2003
Rangayyan, Rangaraj M
The book will assist the reader in developing techniques for the analysis of biomedical signals and computer-aided diagnosis, with a pedagogical examination of basic and advanced topics accompanied by over 350 figures and illustrations. A wide range of filtering techniques is presented to address various applications. 800 mathematical expressions and equations. Practical questions, problems and laboratory exercises. Includes fractals and chaos theory with biomedical applications.
Sophisticated techniques for signal processing are now available to the biomedical specialist! Written in an easy-to-read, straightforward style, Biomedical Signal Processing presents techniques to eliminate background noise, enhance signal detection, and analyze computer data, making results easy to comprehend and apply. In addition to examining techniques for electrical signal analysis, filtering, and transforms, the author supplies an extensive appendix with several computer programs that demonstrate techniques presented in the text.
Pandiyan, Nithya; Murugesan, Balaji; Sonamuthu, Jegatheeswaran; Samayanan, Selvam; Mahalingam, Sundrarajan
In this study, a green synthesis route was applied to obtain CeO2/ZrO2 core metal oxide nanoparticles using an ionic liquid-mediated Justicia adhatoda extract. The synthesis is carried out under simple room-temperature conditions. XRD, SEM and TEM studies were employed to examine the crystalline and surface morphological properties during the nucleation, growth, and aggregation processes. The CeO2/ZrO2 core metal oxides display an agglomerated nano stick-like structure 20-45 nm in size. GC-MS spectroscopy confirms the presence of vasicinone and N,N-dimethylglycine in the plant extract, which are capable of converting the corresponding metal ion precursors to CeO2/ZrO2 core metal oxide nanoparticles. In FTIR, the Ce-O and Zr-O stretching bands appear at 498 and 416 cm-1, and Raman spectroscopy also shows the typical stretching frequencies at 463 and 160 cm-1. The band gap energy of the CeO2/ZrO2 core metal oxide, calculated from UV-DRS spectroscopy, is 3.37 eV. Anti-bacterial studies performed against a set of bacterial strains showed that the core metal oxide nanoparticles are more effective against gram-positive (G+) bacteria than gram-negative (G-) bacteria. A unique feature of the antioxidant behavior is that the core metal oxides reduce the concentration of the DPPH radical by up to 89%. The CeO2/ZrO2 core metal oxide nanoparticles control S. marcescens biofilm formation and restrict quorum sensing. The toxicological behavior of the CeO2/ZrO2 core metal oxide NPs is attributed to the high density of oxygen vacancies, ROS formation, small particle size and high surface area. This type of green synthesis route may be efficient, and the core metal oxide nanoparticles may serve as good biomedical agents in the future. Copyright © 2017 Elsevier B.V. All rights reserved.
Vegter, M W
Precision Medicine has become a common label for data-intensive and patient-driven biomedical research. Its intended future is reflected in endeavours such as the Precision Medicine Initiative in the USA. This article addresses the question of whether it is possible to discern a new 'medical cosmology' in Precision Medicine, a concept developed by Nicholas Jewson to describe comprehensive transformations involving various dimensions of biomedical knowledge and practice, such as vocabularies, the roles of patients and physicians and the conceptualisation of disease. Subsequently, I elaborate my assessment of the features of Precision Medicine with the help of Michel Foucault, by exploring how Precision Medicine involves a transformation along three axes: the axis of biomedical knowledge, of biomedical power and of the patient as a self. Patients are encouraged to become the managers of their own health status, while the medical domain is reframed as a data-sharing community, characterised by changing power relationships between providers and patients, producers and consumers. While the emerging Precision Medicine cosmology may surpass existing knowledge frameworks, it obscures previous traditions and reduces research subjects to mere data. This in turn means that the individual is both subjected to the neoliberal demand to share personal information, and at the same time has acquired the positive 'right' to become a member of the data-sharing community. The subject has to constantly negotiate the meaning of his or her data, which can either enable self-expression or function as a commanding Superego.
Ye, Zhan; Tafti, Ahmad P; He, Karen Y; Wang, Kai; He, Max M
Many new biomedical research articles are published every day, accumulating rich information, such as genetic variants, genes, diseases, and treatments. Rapid yet accurate text mining on large-scale scientific literature can discover novel knowledge to better understand human diseases and to improve the quality of disease diagnosis, prevention, and treatment. In this study, we designed and developed an efficient text mining framework called SparkText on a Big Data infrastructure, which is composed of Apache Spark data streaming and machine learning methods, combined with a Cassandra NoSQL database. To demonstrate its performance for classifying cancer types, we extracted information (e.g., breast, prostate, and lung cancers) from tens of thousands of articles downloaded from PubMed, and then employed Naïve Bayes, Support Vector Machine (SVM), and Logistic Regression to build prediction models to mine the articles. The accuracy of predicting a cancer type by SVM using the 29,437 full-text articles was 93.81%. While competing text-mining tools took more than 11 hours, SparkText mined the dataset in approximately 6 minutes. This study demonstrates the potential for mining large-scale scientific articles on a Big Data infrastructure, with real-time update from new articles published daily. SparkText can be extended to other areas of biomedical research.
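The classification step this abstract describes (term-weighted text features fed to Naïve Bayes, SVM, or Logistic Regression) can be pictured in miniature. The sketch below uses scikit-learn rather than Spark MLlib, and the toy corpus and cancer-type labels are invented for illustration, not taken from the SparkText dataset.

```python
# Minimal sketch of TF-IDF + linear SVM text classification, in the spirit of
# the SparkText pipeline described above. scikit-learn stands in for Spark
# MLlib; the tiny corpus and labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_texts = [
    "BRCA1 mutation carriers show elevated breast cancer risk",
    "mammography screening reduced breast tumor mortality",
    "PSA levels guide prostate cancer treatment decisions",
    "androgen deprivation therapy in advanced prostate carcinoma",
    "EGFR mutations predict response in lung adenocarcinoma",
    "smoking cessation lowers lung cancer incidence",
]
train_labels = ["breast", "breast", "prostate", "prostate", "lung", "lung"]

# Vectorize with unigrams and bigrams, then fit a linear SVM.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(train_texts, train_labels)

print(model.predict(["mammography detected a breast tumor"])[0])
```

At production scale the same two stages (feature extraction, linear classifier) would run distributed over Spark data structures rather than in-memory lists.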
Text mining of biomedical literature and clinical notes is a very active field of research in biomedical science. Semantic analysis is one of the core modules for different Natural Language Processing (NLP) solutions. Methods for calculating the semantic relatedness of two concepts can be very useful in solutions to different problems such as relationship extraction, ontology creation and question answering [1-6]. Several techniques exist for calculating the semantic relatedness of two concepts, utilizing different knowledge sources and corpora. So far, researchers have attempted to find the best hybrid method for each domain by combining semantic relatedness techniques and data sources manually. In this work, we attempt to eliminate the need for manually combining semantic relatedness methods for each new context or resource by proposing an automated method that finds the combination of semantic relatedness techniques and resources achieving the best semantic relatedness score in every context. This may help the research community find the best hybrid method for each context given the available algorithms and resources.
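The automated-combination idea above can be illustrated with a toy sketch: given candidate relatedness measures and a small gold standard of human judgments, search for the weighted blend that correlates best with the gold scores. The two measures and the gold standard below are invented stand-ins, not outputs of real path-based or distributional measures.

```python
# Toy sketch of automatically combining semantic relatedness measures:
# grid-search a weighted blend of two measures against a gold standard
# and keep the weight that maximizes Pearson correlation.
# All scores below are invented for illustration.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Hypothetical relatedness scores for five concept pairs.
measure_a = [0.9, 0.7, 0.4, 0.2, 0.1]      # e.g. path-based over an ontology
measure_b = [0.6, 0.8, 0.5, 0.3, 0.0]      # e.g. distributional, from a corpus
gold      = [0.95, 0.80, 0.40, 0.20, 0.05]  # human judgments (invented)

# Try blends w*a + (1-w)*b for w in {0.0, 0.1, ..., 1.0}.
best = max(
    ((w, pearson([w * a + (1 - w) * b for a, b in zip(measure_a, measure_b)],
                 gold))
     for w in (i / 10 for i in range(11))),
    key=lambda t: t[1],
)
print(f"best weight for measure_a: {best[0]:.1f}, correlation: {best[1]:.3f}")
```

A real system would search over measure subsets and knowledge sources as well, and use rank correlation on a standard benchmark rather than Pearson on five pairs.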
Tang, Buzhou; Cao, Hongxin; Wang, Xiaolong; Chen, Qingcai; Xu, Hua
Biomedical Named Entity Recognition (BNER), which extracts important entities such as genes and proteins, is a crucial step of natural language processing in the biomedical domain. Various machine learning-based approaches have been applied to BNER tasks and have shown good performance. In this paper, we systematically investigated three different types of word representation (WR) features for BNER: clustering-based representation, distributional representation, and word embeddings. We selected one algorithm from each of the three types of WR features and applied them to the JNLPBA and BioCreAtIvE II BNER tasks. Our results showed that all three WR algorithms were beneficial to machine learning-based BNER systems. Moreover, combining these different types of WR features further improved BNER performance, indicating that they are complementary to each other. By combining all three types of WR features, the improvements in F-measure on the BioCreAtIvE II GM and JNLPBA corpora were 3.75% and 1.39%, respectively, compared with systems using baseline features. To the best of our knowledge, this is the first study to systematically evaluate the effect of three different types of WR features for BNER tasks.
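The kinds of features compared in this study can be sketched concretely: for each token, build baseline word-shape features, clustering-based features (Brown-style bit-path prefixes at several depths), and one real-valued feature per embedding dimension, as they might be fed to a CRF-based tagger. The cluster map and embedding table below are tiny invented stand-ins for real trained resources.

```python
# Sketch of per-token feature extraction combining the three WR feature
# types discussed above. The cluster bit paths and embedding vectors are
# invented stand-ins for real Brown clusters and trained word vectors.
import re

clusters = {"p53": "0110", "kinase": "0111", "binds": "1010"}  # Brown-style bit paths
embeddings = {"p53": [0.2, -0.1], "kinase": [0.4, 0.3]}        # dense vectors

def word_shape(token):
    # Collapse character classes: uppercase -> X, lowercase -> x, digit -> d.
    shape = re.sub(r"[A-Z]", "X", token)
    shape = re.sub(r"[a-z]", "x", shape)
    return re.sub(r"[0-9]", "d", shape)

def token_features(token):
    feats = {"word": token.lower(), "shape": word_shape(token)}
    # Clustering-based representation: bit-path prefixes at several depths,
    # so the tagger can generalize at coarse and fine cluster granularity.
    path = clusters.get(token.lower())
    if path:
        for depth in (2, 4):
            feats[f"cluster:{depth}"] = path[:depth]
    # Word embeddings: one real-valued feature per dimension.
    for i, v in enumerate(embeddings.get(token.lower(), [])):
        feats[f"emb:{i}"] = v
    return feats

print(token_features("p53"))
```

In a full BNER system these dictionaries would be fed into a sequence labeler such as a CRF, with the same features computed over a context window around each token.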
Sirintrapun, S Joseph; Zehir, Ahmet; Syed, Aijazuddin; Gao, JianJiong; Schultz, Nikolaus; Cheng, Donavan T
Translational bioinformatics and clinical research (biomedical) informatics are the primary domains related to informatics activities that support translational research. Translational bioinformatics focuses on computational techniques in genetics, molecular biology, and systems biology. Clinical research (biomedical) informatics involves the use of informatics in discovery and management of new knowledge relating to health and disease. This article details 3 projects that are hybrid applications of translational bioinformatics and clinical research (biomedical) informatics: The Cancer Genome Atlas, the cBioPortal for Cancer Genomics, and the Memorial Sloan Kettering Cancer Center clinical variants and results database, all designed to facilitate insights into cancer biology and clinical/therapeutic correlations. Copyright © 2015 Elsevier Inc. All rights reserved.
Dunn, Michelle C; Bourne, Philip E
This article describes efforts at the National Institutes of Health (NIH) from 2013 to 2016 to train a national workforce in biomedical data science. We provide an analysis of the Big Data to Knowledge (BD2K) training program strengths and weaknesses with an eye toward future directions aimed at any funder and potential funding recipient worldwide. The focus is on extramurally funded programs that have a national or international impact rather than the training of NIH staff, which was addressed by the NIH's internal Data Science Workforce Development Center. From its inception, the major goal of BD2K was to narrow the gap between needed and existing biomedical data science skills. As biomedical research increasingly relies on computational, mathematical, and statistical thinking, supporting the training and education of the workforce of tomorrow requires new emphases on analytical skills. From 2013 to 2016, BD2K jump-started training in this area for all levels, from graduate students to senior researchers.
Flexman, J A; Lazareck, L
Biomedical engineering impacts health care and contributes to fundamental knowledge in medicine and biology. Policy, such as through regulation and research funding, has the potential to dramatically affect biomedical engineering research and commercialization. New developments, in turn, may affect society in new ways. The intersection of biomedical engineering and society and related policy issues must be discussed between scientists and engineers, policy-makers and the public. As a student, there are many ways to become engaged in the issues surrounding science and technology policy. At the University of Washington in Seattle, the Forum on Science Ethics and Policy (FOSEP, www.fosep.org) was started by graduate students and post-doctoral fellows interested in improving the dialogue between scientists, policymakers and the public and has received support from upper-level administration. This is just one example of how students can start thinking about science policy and ethics early in their careers.
Amith, Muhammad; He, Zhe; Bian, Jiang; Lossio-Ventura, Juan Antonio; Tao, Cui
With the proliferation of heterogeneous health care data in the last three decades, biomedical ontologies and controlled biomedical terminologies play an increasingly important role in knowledge representation and management, data integration, natural language processing, as well as decision support for health information systems and biomedical research. Biomedical ontologies and controlled terminologies are intended to assure interoperability. Nevertheless, the quality of biomedical ontologies has hindered their applicability and subsequent adoption in real-world applications. Ontology evaluation is an integral part of ontology development and maintenance. In the biomedicine domain, ontology evaluation is often conducted by third parties as a quality assurance (or auditing) effort that focuses on identifying modeling errors and inconsistencies. In this work, we first organized four categorical schemes of ontology evaluation methods in the existing literature to create an integrated taxonomy. Further, to understand ontology evaluation practice in the biomedicine domain, we reviewed a sample of 200 ontologies from the National Center for Biomedical Ontology (NCBO) BioPortal, the largest repository for biomedical ontologies, and observed that only 15 of these ontologies have documented evaluation in their corresponding inception papers. We then surveyed the recent quality assurance approaches for biomedical ontologies and their use. We also mapped these quality assurance approaches to the ontology evaluation criteria. It is our anticipation that ontology evaluation and quality assurance approaches will be more widely adopted in the development life cycle of biomedical ontologies. Copyright © 2018 Elsevier Inc. All rights reserved.
GENERAL BIOMEDICAL OPTICS THEORY. Introduction to the Use of Light for Diagnostic and Therapeutic Modalities: What Is Biomedical Optics?; Biomedical Optics Timeline; Elementary Optical Discoveries; Historical Events in Therapeutic and Diagnostic Use of Light; Light Sources; Current State of the Art; Summary; Additional Reading; Problems. Review of Optical Principles: Fundamental Electromagnetic Theory and Description of Light Sources: Definitions in Optics; Kirchhoff's Laws of Radiation; Electromagnetic Wave Theory; Light Sources; Applications of Various Lasers; Summary; Additional Reading; Problems. Review of Optical Principles: Classical Optics: Geometrical Optics; Other Optical Principles; Quantum Physics; Gaussian Optics; Summary; Additional Reading; Problems. Review of Optical Interaction Properties: Absorption and Scattering; Summary; Additional Reading; Problems. Light-Tissue Interaction Variables: Laser Variables; Tissue Variables; Light Transportation Theory; Light Propagation under Dominant Absorption; Summary; Nomenclature; Additional Reading; Problems. Light-Tissue Interaction Th...
Kundu, Banani; Kurland, Nicholas E; Yadavalli, Vamsi K; Kundu, Subhas C
Silk proteins of silkworms are chiefly composed of the core fibroin protein and the glycoprotein sericin that glues fibroin. Unique mechanical properties, cytocompatibility and controllable biodegradability facilitate the use of fibroin in biomedical applications. Sericin serves as an additive in the cosmetic and food industries, as a mitotic factor in cell culture media, as an anti-cancer drug, as an anticoagulant and as a biocompatible coating. For all these uses, aqueous solutions of silk proteins are preferred. Therefore, an accurate understanding of the extraction procedures for silk proteins from their sources is critical. A number of protocols exist, among which a precise and easy one with the desired yield and the least downstream processing needs to be settled on. Here, we report the extraction of proteins employing methods from the literature, using cocoons of mulberry and nonmulberry silks. This study reveals that the sodium carbonate salt-boiling system is the most efficient sericin extraction procedure for all silk variants. Lithium bromide is observed to be the effective fibroin dissolution system for mulberry silk cocoons, whereas a heterogeneous, species-dependent result is obtained in the case of nonmulberry species. We further show the effect of common post-processing on the nanoscale morphology of mulberry silk fibroin films. This knowledge eases the adoption and fabrication of silk biomaterials in devices and therapeutic delivery systems. Copyright © 2014 Elsevier B.V. All rights reserved.
Brown, J H U
Advances in Biomedical Engineering, Volume 6, is a collection of papers that discusses the role of integrated electronics in medical systems and the usage of biological mathematical models in biological systems. Other papers deal with the health care systems, the problems and methods of approach toward rehabilitation, as well as the future of biomedical engineering. One paper discusses the use of system identification as it applies to biological systems to estimate the values of a number of parameters (for example, resistance, diffusion coefficients) by indirect means. More particularly, the i
Brown, J H U
Advances in Biomedical Engineering, Volume 5, is a collection of papers that deals with application of the principles and practices of engineering to basic and applied biomedical research, development, and the delivery of health care. The papers also describe breakthroughs in health improvements, as well as basic research that have been accomplished through clinical applications. One paper examines engineering principles and practices that can be applied in developing therapeutic systems by a controlled delivery system in drug dosage. Another paper examines the physiological and materials vari
Biomedical enhancements, the application of medical technology to improve those who are neither ill nor deficient, have made great strides in the past few decades. Using Amartya Sen's capability approach as my framework, I argue in this article that, far from being simply permissible, we have a prima facie moral obligation to use these new developments for the end goal of promoting social justice. In terms of both range and magnitude, the use of biomedical enhancements will mark a radical advance in how we compensate the most disadvantaged members of society. © 2013 John Wiley & Sons Ltd.
Background: There are several humanly defined ontologies relevant to Medline. However, Medline is a fast-growing collection of biomedical documents, which creates difficulties in updating and expanding these humanly defined ontologies. Automatically identifying meaningful categories of entities in a large text corpus is useful for information extraction, construction of machine learning features, and development of semantic representations. In this paper we describe and compare two methods for automatically learning meaningful biomedical categories in Medline. The first approach is a simple statistical method that uses part-of-speech and frequency information to extract a list of frequent nouns from Medline. The second method implements an alignment-based technique to learn frequent generic patterns that indicate a hyponymy/hypernymy relationship between a pair of noun phrases. We then apply these patterns to Medline to collect frequent hypernyms as potential biomedical categories. Results: We study and compare these two alternative sets of terms to identify semantic categories in Medline. We find that both approaches produce reasonable terms as potential categories. We also find that there is significant agreement between the two sets of terms. The overlap between the two methods improves our confidence regarding categories predicted by these independent methods. Conclusions: This study is an initial attempt to extract categories that are discussed in Medline. Rather than imposing external ontologies on Medline, our methods allow categories to emerge from the text.
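The pattern-based method described above (learning generic patterns that signal a hyponymy/hypernymy relationship) is in the spirit of Hearst patterns. A minimal, simplified sketch of applying one such pattern is shown below; the single hard-coded "X such as Y" pattern and the sample sentence are illustrative only, not the actual patterns learned by alignment.

```python
# Minimal Hearst-style pattern sketch: extract (hypernym, hyponym) pairs
# from "X such as Y( and Z)" constructions. The pattern is simplified for
# illustration; a real system would learn many such patterns from corpora.
import re

def hearst_pairs(sentence):
    """Return (hypernym, hyponym) pairs matched by the 'such as' pattern."""
    pairs = []
    for m in re.finditer(r"(\w+) such as ([\w-]+(?:(?:, | and )[\w-]+)*)",
                         sentence):
        hypernym = m.group(1).lower()
        # Split the coordinated hyponym list on ", " and " and ".
        for hyponym in re.split(r", | and ", m.group(2)):
            pairs.append((hypernym, hyponym.lower()))
    return pairs

print(hearst_pairs("We studied diseases such as diabetes and asthma."))
```

Counting the hypernym slot over a large corpus like Medline yields frequent hypernyms, which are exactly the candidate biomedical categories the abstract describes.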
Débora ZUCCO; Renata KOBE; Caroline FABRE; Luciano MADEIRA; Flares BARATTO FILHO
The aim of this study was to evaluate, through a questionnaire, the level of knowledge of undergraduate Dentistry students at UNIVILLE about the teeth bank: its activities and functioning, biosecurity in teeth manipulation and, especially, the reason why students do not spontaneously donate extracted teeth to the bank. A questionnaire was elaborated and applied to students from the first to the fifth year of the course, with pertinent questions about the teeth ba...
Attinger, E. O.
Considers definition of biomedical engineering (BME) and how biomedical engineers should be trained. State of the art descriptions of BME and BME education are followed by a brief look at the future of BME. (TS)
Peek, N.; Combi, C.; Tucker, A.
Objective: To introduce the special topic of Methods of Information in Medicine on data mining in biomedicine, with selected papers from two workshops on Intelligent Data Analysis in bioMedicine (IDAMAP) held in Verona (2006) and Amsterdam (2007). Methods: Defining the field of biomedical data
Carmichael, Stephen W.; Robb, Richard A.
There is a perceived need for anatomy instruction for graduate students enrolled in a biomedical engineering program. This appeared especially important for students interested in and using medical images. These students typically did not have a strong background in biology. The authors arranged for students to dissect regions of the body that…
The biomedical research Panel believes that the Calutron facility at Oak Ridge is a national and international resource of immense scientific value and of fundamental importance to continued biomedical research. This resource is essential to the development of new isotope uses in biology and medicine. It should therefore be nurtured by adequate support and operated in a way that optimizes its services to the scientific and technological community. The Panel sees a continuing need for a reliable supply of a wide variety of enriched stable isotopes. The past and present utilization of stable isotopes in biomedical research is documented in Appendix 7. Future requirements for stable isotopes are impossible to document, however, because of the unpredictability of research itself. Nonetheless we expect the demand for isotopes to increase in parallel with the continuing expansion of biomedical research as a whole. There are a number of promising research projects at the present time, and these are expected to lead to an increase in production requirements. The Panel also believes that a high degree of priority should be given to replacing the supplies of the 65 isotopes (out of the 224 previously available enriched isotopes) no longer available from ORNL
This report summarizes the activities of the National Space Biomedical Research Institute (NSBRI) during FY 2000. The NSBRI is responsible for the development of countermeasures against the deleterious effects of long-duration space flight and performs fundamental and applied space biomedical research directed towards this specific goal. Its mission is to lead a world-class, national effort in integrated, critical path space biomedical research that supports NASA's Human Exploration and Development of Space (HEDS) Strategic Plan by focusing on the enabling of long-term human presence in, development of, and exploration of space. This is accomplished by: designing, testing and validating effective countermeasures to address the biological and environmental impediments to long-term human space flight; defining the molecular, cellular, organ-level, integrated responses and mechanistic relationships that ultimately determine these impediments, where such activity fosters the development of novel countermeasures; establishing biomedical support technologies to maximize human performance in space, reduce biomedical hazards to an acceptable level, and deliver quality medical care; transferring and disseminating the biomedical advances in knowledge and technology acquired through living and working in space to the general benefit of mankind, including the treatment of patients suffering from gravity- and radiation-related conditions on Earth; and ensuring open involvement of the scientific community, industry and the public at large in the Institute's activities and fostering a robust collaboration with NASA, particularly through NASA's Lyndon B. Johnson Space Center. Attachment: Appendices A through P.
Torii, Manabu; Liu, Hongfang
In the biomedical domain, a terminology knowledge base that associates acronyms/abbreviations (denoted SFs) with their definitions (denoted LFs) is highly needed. Toward the construction of such a terminology knowledge base, we investigate the feasibility of building a system that automatically assigns semantic categories to LFs extracted from text. Given a collection of (SF, LF) pairs derived from text, we i) assess the coverage of LFs and (SF, LF) pairs in the UMLS and justify the need for a semantic category assignment system; and ii) automatically derive name phrases annotated with semantic categories and construct a system using machine learning. Utilizing ADAM, an existing collection of (SF, LF) pairs extracted from MEDLINE, our system achieved an f-measure of 87% when assigning eight UMLS-based semantic groups to LFs. The system has been incorporated into a web interface which integrates SF knowledge from multiple SF knowledge bases. Web site: http://gauss.dbb.georgetown.edu/liblab/SFThesurus.
Biomedical Science Technologists in Lagos Universities: Meeting Modern Standards in Biomedical Research. ... biomedical techniques. State-of-the-art biomedical science needs adequate financial investment in scientific resources as well as stable civic infrastructure; thus these public institutions need more of such provisions.
Bronzino, Joseph D
Known as the bible of biomedical engineering, The Biomedical Engineering Handbook, Fourth Edition, sets the standard against which all other references of this nature are measured. As such, it has served as a major resource for both skilled professionals and novices to biomedical engineering. Biomedical Signals, Imaging, and Informatics, the third volume of the handbook, presents material from respected scientists with diverse backgrounds in biosignal processing, medical imaging, infrared imaging, and medical informatics. More than three dozen specific topics are examined, including biomedical s
Knowledge and skills in the biomedical sciences are still expanding and have reached a level that is difficult to pass on to students in the traditional one to two years by lecture methods alone. Recently, innovative methods of enabling students to acquire this knowledge and these skills have evolved, and include ...
Tuchin, Valery V; Zimnyakov, Dmitry A
Optical Polarization in Biomedical Applications introduces key developments in optical polarization methods for quantitative studies of tissues, while presenting the theory of polarization transfer in a random medium as a basis for the quantitative description of polarized light interaction with tissues. This theory uses the modified transfer equation for Stokes parameters and predicts the polarization structure of multiple scattered optical fields. The backscattering polarization matrices (Jones matrix and Mueller matrix) important for noninvasive medical diagnostics are introduced. The text also describes a number of diagnostic techniques such as CW polarization imaging and spectroscopy, polarization microscopy and cytometry. As a new tool for medical diagnosis, optical coherent polarization tomography is analyzed. The monograph also covers a range of biomedical applications, among them cataract and glaucoma diagnostics, glucose sensing, and the detection of bacteria.
Shape memory polymers (SMPs) are a class of functional "smart" materials that have shown bright prospects in the area of biomedical applications. Novel smart materials with the combined functions of biodegradability and biocompatibility can be designed based on their general principles, composition and structure. In this review, the latest progress on three typical biodegradable SMPs (poly(lactic acid), poly(ε-caprolactone), and polyurethane) is summarized. These three SMPs are classified by structure and discussed, and their shape-memory mechanism, recovery and fixity rates, and response speed are analysed in detail; some biomedical applications are also presented. Finally, the future development and applications of SMPs are considered: two-way SMPs and body-temperature-induced SMPs will be the focus of attention of researchers.
Shen, He; Zhang, Liming; Liu, Min; Zhang, Zhijun
Graphene exhibits a unique 2-D structure and exceptional physical and chemical properties that lead to many potential applications. Among these, biomedical applications of graphene have attracted ever-increasing interest over the last three years. In this review, we present an overview of current advances in applications of graphene in biomedicine, with a focus on drug delivery, cancer therapy and biological imaging, together with a brief discussion of the challenges and perspectives for future research in this field. PMID:22448195
Daumke, Philipp; Markó, Kornél; Poprat, Michael; Schulz, Stefan
We present a unique technique to create a multilingual biomedical dictionary, based on a methodology called Morpho-Semantic indexing. Our approach closes a gap caused by the absence of freely available multilingual medical dictionaries and the lack of accuracy of non-medical electronic translation tools. We first explain the underlying technology, followed by a description of the dictionary interface, which makes use of a multilingual subword thesaurus and of statistical information from a domain-specific, multilingual corpus.
Qiao, Xue; Wang, Qi; Wang, Shuang; Miao, Wen-juan; Li, Yan-jiao; Xiang, Cheng; Guo, De-an; Ye, Min
Herbal medicines usually contain a large group of chemical components, which may be transformed into more complex metabolites in vivo. In this study, we proposed a knowledge-transmitting strategy for metabolite identification of compound formulas. Gegen-Qinlian Decoction (GQD) is a classical formula in traditional Chinese medicine (TCM). It is widely used to treat diarrhea and diabetes in clinical practice. However, only tens of metabolites could be detected using conventional approaches. To comprehensively identify the metabolites of GQD, a “compound to extract to formulation” strategy was established in this study. The metabolic pathways of single representative constituents in GQD were studied, and the metabolic rules were transmitted to chemically similar compounds in herbal extracts. After screening diversified metabolites from herb extracts, the knowledge was summarized to identify the metabolites of GQD. Tandem mass spectrometry (MSn), fragment-based scan (NL, PRE), and selected reaction monitoring (SRM) were employed to identify, screen, and monitor the metabolites, respectively. Using this strategy, we detected 131 GQD metabolites (85 newly generated) in rat biofluids. Among them, 112 metabolites could be detected when GQD was orally administered at a clinical dosage (12.5 g/kg). This strategy could be used for systematic metabolite identification of complex Chinese medicine formulas. PMID:27996040
Berlanga, Rafael; Jiménez-Ruiz, Ernesto; Nebot, Victoria
The semantic integration of biomedical resources is still a challenging issue, yet it is required for effective information processing and data analysis. The availability of comprehensive knowledge resources such as biomedical ontologies and integrated thesauri greatly facilitates this integration effort by means of semantic annotation, which allows disparate data formats and contents to be expressed under a common semantic space. In this paper, we propose a multidimensional representation for such a semantic space, where dimensions regard the different perspectives in biomedical research (e.g., population, disease, anatomy and proteins/genes). This paper presents a novel method for building multidimensional semantic spaces from semantically annotated biomedical data collections. This method consists of two main processes: knowledge and data normalization. The former arranges the concepts provided by a reference knowledge resource (e.g., biomedical ontologies and thesauri) into a set of hierarchical dimensions for analysis purposes. The latter reduces the annotation set associated with each collection item to a set of points in the multidimensional space. Additionally, we have developed a visual tool, called 3D-Browser, which implements OLAP-like operators over the generated multidimensional space. The method and the tool have been tested and evaluated in the context of the Health-e-Child (HeC) project. Automatic semantic annotation was applied to tag three collections of abstracts taken from PubMed, one for each target disease of the project, the Uniprot database, and the HeC patient record database. We adopted the UMLS Metathesaurus 2010AA as the reference knowledge resource. Current knowledge resources and semantic-aware technology make possible the integration of biomedical resources. Such an integration is performed through semantic annotation of the intended biomedical data resources. This paper shows how these annotations can be exploited for
Lee, Young-Eun; Kim, Hyeongmin; Seo, Changwon; Park, Taejun; Lee, Kyung Bin; Yoo, Seung-Yup; Hong, Seong-Chul; Kim, Jeong Tae; Lee, Jaehwi
The ocean contains numerous marine organisms, including algae, animals, and plants, from which diverse marine polysaccharides with useful physicochemical and biological properties can be extracted. In particular, fucoidan, carrageenan, alginate, and chitosan have been extensively investigated in pharmaceutical and biomedical fields owing to their desirable characteristics, such as biocompatibility, biodegradability, and bioactivity. Various therapeutic efficacies of marine polysaccharides have been elucidated, including the inhibition of cancer, inflammation, and viral infection. The therapeutic activities of these polysaccharides have been demonstrated in various settings, from in vitro laboratory-scale experiments to clinical trials. In addition, marine polysaccharides have been exploited for tissue engineering, the immobilization of biomolecules, and stent coating. Their ability to detect and respond to external stimuli, such as pH, temperature, and electric fields, has enabled their use in the design of novel drug delivery systems. Thus, along with the promising characteristics of marine polysaccharides, this review will comprehensively detail their various therapeutic, biomedical, and miscellaneous applications.
Sathick, Javubar; Venkat, Jaya
Mining social web data is a challenging task, and finding user interest for personalized and non-personalized recommendation systems is another important one. Knowledge sharing among web users has become crucial in determining the usage of web data and in personalizing content on various social websites according to users' preferences. This paper aims to design a…
Beißwanger, Anna Elena
Biomedicine is an impressively fast developing, interdisciplinary field of research. To control the growing volumes of biomedical data, ontologies are increasingly used as common organization structures. Biomedical ontologies describe domain knowledge in a formal, computationally accessible way. They serve as controlled vocabularies and background knowledge in applications dealing with the integration, analysis and retrieval of heterogeneous types of data. The development of...
Mora, Oscar; Bisbal, Jesús
In this paper, we present BIMS (Biomedical Information Management System), a software architecture designed to provide a flexible computational framework to manage the information needs of a wide range of biomedical research projects. The main goal is to facilitate clinicians' work in data entry and researchers' tasks in data management in biomedical research projects requiring high data quality. The BIMS architecture has been designed following the two-level modeling paradigm, a promising...
Brown, J H U
Advances in Biomedical Engineering, Volume 2, is a collection of papers that discusses the basic sciences, the applied sciences of engineering, the medical sciences, and the delivery of health services. One paper discusses the models of adrenal cortical control, including the secretion and metabolism of cortisol (the controlled process), as well as the initiation and modulation of secretion of ACTH (the controller). Another paper discusses hospital computer systems-application problems, objective evaluation of technology, and multiple pathways for future hospital computer applications. The pos
Tranquillo, Joseph V
Biomedical Signals and Systems is meant to accompany a one-semester undergraduate signals and systems course. It may also serve as a quick-start for graduate students or faculty interested in how signals and systems techniques can be applied to living systems. The biological nature of the examples allows for systems thinking to be applied to electrical, mechanical, fluid, chemical, thermal and even optical systems. Each chapter focuses on a topic from classic signals and systems theory: System block diagrams, mathematical models, transforms, stability, feedback, system response, control, time
The discipline of biostatistics is nowadays a fundamental scientific component of biomedical, public health and health services research. Traditional and emerging areas of application include clinical trials research, observational studies, physiology, imaging, and genomics. The present article reviews the current situation of biostatistics, considering the statistical methods traditionally used in biomedical research, as well as the ongoing development of new methods in response to the new problems arising in medicine. Clearly, the successful application of statistics in biomedical research requires appropriate training of biostatisticians. This training should aim to give due consideration to emerging new areas of statistics, while at the same time retaining full coverage of the fundamentals of statistical theory and methodology. In addition, it is important that students of biostatistics receive formal training in relevant biomedical disciplines, such as epidemiology, clinical trials, molecular biology, genetics, and neuroscience.
1. Biomedical Photonics: A Revolution at the Interface of Science and Technology, T. Vo-Dinh
PHOTONICS AND TISSUE OPTICS
2. Optical Properties of Tissues, J. Mobley and T. Vo-Dinh
3. Light-Tissue Interactions, V.V. Tuchin
4. Theoretical Models and Algorithms in Optical Diffusion Tomography, S.J. Norton and T. Vo-Dinh
PHOTONIC DEVICES
5. Laser Light in Biomedicine and the Life Sciences: From the Present to the Future, V.S. Letokhov
6. Basic Instrumentation in Photonics, T. Vo-Dinh
7. Optical Fibers and Waveguides for Medical Applications, I. Gannot and
Evans, E.A.; Oldham, K.G.
This volume describes the role of radiochemicals in biomedical research, as tracers in the development of new drugs, their interaction and function with receptor proteins, with the kinetics of binding of hormone - receptor interactions, and their use in cancer research and clinical oncology. The book also aims to identify future trends in this research, the main objective of which is to provide information leading to improvements in the quality of life, and to give readers a basic understanding of the development of new drugs, how they function in relation to receptor proteins and lead to a better understanding of the diagnosis and treatment of cancers. (author)
The main role of a peer reviewer is to make judgments on research articles by asking a number of questions to evaluate the quality of the article. Statistics is a major part of any biomedical research article, and most reviewers gain their manuscript-reviewing experience by doing it rather than through an educational process. Therefore, reviewers of biomedical journals often lack the knowledge and skills to evaluate the validity of the statistical methods used in articles submitted for consideration. Hence, inappropriate statistical analysis in medical journals can lead to misleading conclusions and incorrect results. In this paper, the most common basic statistical guidelines are described, which may serve as a road map for biomedical reviewers. It is not meant for statisticians or medical editors who have special interest and expertise in statistical analysis.
Long, Francis M.
Discusses four methods of professional identification in biomedical engineering including registration, certification, accreditation, and possible membership qualification of the societies. Indicates that the destiny of the biomedical engineer may be under the control of a new profession, neither the medical nor the engineering. (CC)
The Egyptian Journal of Biomedical Sciences publishes in all aspects of biomedical research sciences. Both basic and clinical research papers are welcomed. Vol 23 (2007).
Sarkar, Indra Neil
Biomedical informatics involves a core set of methodologies that can provide a foundation for crossing the "translational barriers" associated with translational medicine. To this end, the fundamental aspects of biomedical informatics (e.g., bioinformatics, imaging informatics, clinical informatics, and public health informatics) may be essential in helping improve the ability to bring basic research findings to the bedside, evaluate the efficacy of interventions across communities, and enable the assessment of the eventual impact of translational medicine innovations on health policies. Here, a brief description is provided for a selection of key biomedical informatics topics (Decision Support, Natural Language Processing, Standards, Information Retrieval, and Electronic Health Records) and their relevance to translational medicine. Based on contributions and advancements in each of these topic areas, the article proposes that biomedical informatics practitioners ("biomedical informaticians") can be essential members of translational medicine teams.
This book provides a comprehensive overview of state-of-the-art computational intelligence research and technologies in biomedical images, with emphasis on biomedical decision making. Biomedical imaging offers useful information on patients' medical conditions and clues to the causes of their symptoms and diseases. Biomedical imaging, however, produces large numbers of images that physicians must interpret. Therefore, computer aids are in demand and are becoming indispensable to physicians' decision making. This book discusses major technical advancements and research findings in the field of computational intelligence in biomedical imaging, for example, computational intelligence in computer-aided diagnosis for breast cancer, prostate cancer, and brain disease, in lung function analysis, and in radiation therapy. The book examines technologies and studies that have reached the practical level, and those that are rapidly becoming available in clinical practice in hospitals, such as computational inte...
The IEEE International Symposium on Biomedical Imaging (ISBI) is a scientific conference dedicated to mathematical, algorithmic, and computational aspects of biological and biomedical imaging, across all scales of observation. It fosters knowledge transfer among different imaging communities and contributes to an integrative approach to biomedical imaging. ISBI is a joint initiative from the IEEE Signal Processing Society (SPS) and the IEEE Engineering in Medicine and Biology Society (EMBS). The 2018 meeting will include tutorials, and a scientific program composed of plenary talks, invited special sessions, challenges, as well as oral and poster presentations of peer-reviewed papers. High-quality papers are requested containing original contributions to the topics of interest including image formation and reconstruction, computational and statistical image processing and analysis, dynamic imaging, visualization, image quality assessment, and physical, biological, and statistical modeling. Accepted 4-page regular papers will be published in the symposium proceedings published by IEEE and included in IEEE Xplore. To encourage attendance by a broader audience of imaging scientists and offer additional presentation opportunities, ISBI 2018 will continue to have a second track featuring posters selected from 1-page abstract submissions without subsequent archival publication.
Splendiani, Andrea; Burger, Albert; Paschke, Adrian; Romano, Paolo; Marshall, M Scott
The Semantic Web offers an ideal platform for representing and linking biomedical information, which is a prerequisite for the development and application of analytical tools to address problems in data-intensive areas such as systems biology and translational medicine. As for any new paradigm, the adoption of the Semantic Web offers opportunities and poses questions and challenges to the life sciences scientific community: which technologies in the Semantic Web stack will be more beneficial for the life sciences? Is biomedical information too complex to benefit from simple interlinked representations? What are the implications of adopting a new paradigm for knowledge representation? What are the incentives for the adoption of the Semantic Web, and who are the facilitators? Is there going to be a Semantic Web revolution in the life sciences? We report here a few reflections on these questions, following discussions at the SWAT4LS (Semantic Web Applications and Tools for Life Sciences) workshop series, of which this Journal of Biomedical Semantics special issue presents selected papers from the 2009 edition, held in Amsterdam on November 20th.
Michelle C Dunn
This article describes efforts at the National Institutes of Health (NIH) from 2013 to 2016 to train a national workforce in biomedical data science. We provide an analysis of the Big Data to Knowledge (BD2K) training program's strengths and weaknesses with an eye toward future directions aimed at any funder and potential funding recipient worldwide. The focus is on extramurally funded programs that have a national or international impact rather than the training of NIH staff, which was addressed by the NIH's internal Data Science Workforce Development Center. From its inception, the major goal of BD2K was to narrow the gap between needed and existing biomedical data science skills. As biomedical research increasingly relies on computational, mathematical, and statistical thinking, supporting the training and education of the workforce of tomorrow requires new emphases on analytical skills. From 2013 to 2016, BD2K jump-started training in this area for all levels, from graduate students to senior researchers.
Ramos, Ana P; Cruz, Marcos A E; Tovani, Camila B; Ciancaglini, Pietro
The ability to investigate substances at the molecular level has boosted the search for materials with outstanding properties for use in medicine. The application of these novel materials has generated the new research field of nanobiotechnology, which plays a central role in disease diagnosis, drug design and delivery, and implants. In this review, we provide an overview of the use of metallic and metal oxide nanoparticles, carbon-nanotubes, liposomes, and nanopatterned flat surfaces for specific biomedical applications. The chemical and physical properties of the surface of these materials allow their use in diagnosis, biosensing and bioimaging devices, drug delivery systems, and bone substitute implants. The toxicology of these particles is also discussed in the light of a new field referred to as nanotoxicology that studies the surface effects emerging from nanostructured materials.
There are many books written about statistics, some brief, some detailed, some humorous, some colorful, and some quite dry. Each of these texts is designed for a specific audience. Too often, texts about statistics have been rather theoretical and intimidating for those not practicing statistical analysis on a routine basis. Thus, many engineers and scientists, who need to use statistics much more frequently than calculus or differential equations, lack sufficient knowledge of the use of statistics. The audience that is addressed in this text is the university-level biomedical engineering stud
Ivinson, A J
The biomedical research community of the new millennium has at its disposal the resources and knowledge to bring about major changes in human health. Technological advances on a scale never before seen mean that we can consider a level of medical investigation and intervention unimaginable only 20 years ago. But with this power comes a tremendous responsibility to think carefully about how those resources should best be used. For reasons of economy, biomedical research is likely to remain focussed on the needs of rich countries. This need not, however, mean that poorer countries cannot in the future receive a greater benefit from the current community of biomedical researchers. And given the nature of disease and its disrespect for national boundaries, a more global approach to biomedical research should be attractive to rich and poor countries alike. Achieving this change, no matter how modest in scale, will require a concerted effort at all levels within the biomedical research community. Indeed, the community is at a stage when it must pay closer attention to the sensitivities and concerns of its patient population. Only then will the tremendous potential of biomedical research be embraced and supported by our societies.
You, Daekeun; Simpson, Matthew; Antani, Sameer; Demner-Fushman, Dina; Thoma, George R.
Pointers (arrows and symbols) are frequently used in biomedical images to highlight specific image regions of interest (ROIs) that are mentioned in figure captions and/or text discussion. Detection of pointers is the first step toward extracting relevant visual features from ROIs and combining them with textual descriptions for a multimodal (text and image) biomedical article retrieval system. Recently we developed a pointer recognition algorithm based on an edge-based pointer segmentation method, and subsequently reported improvements to our initial approach involving the use of Active Shape Models (ASM) for pointer recognition and a region-growing method for pointer segmentation. These methods improved the recall of pointer recognition but did little for precision. The method discussed in this article is our recent effort to improve the precision rate. Evaluation performed on two datasets, compared with other pointer segmentation methods, shows significantly improved precision and the highest F1 score.
Basaldella, Marco; Furrer, Lenz; Tasso, Carlo; Rinaldi, Fabio
This article describes a high-recall, high-precision approach for the extraction of biomedical entities from scientific articles. The approach uses a two-stage pipeline, combining a dictionary-based entity recognizer with a machine-learning classifier. First, the OGER entity recognizer, which has a bias towards high recall, annotates the terms that appear in selected domain ontologies. Subsequently, the Distiller framework uses this information as a feature for a machine learning algorithm to select the relevant entities only. For this step, we compare two different supervised machine-learning algorithms: Conditional Random Fields and Neural Networks. In an in-domain evaluation using the CRAFT corpus, we test the performance of the combined systems when recognizing chemicals, cell types, cellular components, biological processes, molecular functions, organisms, proteins, and biological sequences. Our best system combines dictionary-based candidate generation with Neural-Network-based filtering. It achieves an overall precision of 86% at a recall of 60% on the named entity recognition task, and a precision of 51% at a recall of 49% on the concept recognition task. These results are to our knowledge the best reported so far in this particular task.
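The two-stage pipeline described above can be sketched as follows. This is an illustrative stand-in only: the toy term dictionary, cue words, and function names are hypothetical, whereas the actual system uses the OGER recognizer for high-recall candidate generation and a trained Conditional Random Field or Neural Network for the filtering step.

```python
# Illustrative two-stage entity extraction: a high-recall dictionary-based
# candidate generator followed by a precision-oriented filtering step.
# The dictionary and cue-word rule below are hypothetical stand-ins for
# the ontology lookup and machine-learning classifier of the real system.

ONTOLOGY_TERMS = {
    "p53": "protein",
    "apoptosis": "biological process",
    "mitochondrion": "cellular component",
}

def generate_candidates(text):
    """Stage 1: dictionary lookup over the input text (biased to recall)."""
    tokens = text.lower().replace(",", " ").split()
    return [(tok, ONTOLOGY_TERMS[tok]) for tok in tokens if tok in ONTOLOGY_TERMS]

def filter_candidates(candidates, context):
    """Stage 2: stand-in for the CRF/NN filter; here a trivial rule that
    keeps a candidate only if a class-specific cue word appears nearby."""
    cues = {"protein": "expression",
            "biological process": "pathway",
            "cellular component": "membrane"}
    return [(term, cls) for term, cls in candidates
            if cues.get(cls, "") in context.lower()]

text = "p53 expression triggers apoptosis"
cands = generate_candidates(text)       # both terms found (high recall)
final = filter_candidates(cands, text)  # only p53 survives the filter
```

The design point is the division of labor: the first stage must not miss entities, so it over-generates; the second stage trades some recall for precision by rejecting spurious matches.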
Pesquita, Catia; Faria, Daniel; Falcão, André O; Lord, Phillip; Couto, Francisco M
In recent years, ontologies have become a mainstream topic in biomedical research. When biological entities are described using a common schema, such as an ontology, they can be compared by means of their annotations. This type of comparison is called semantic similarity, since it assesses the degree of relatedness between two entities by the similarity in meaning of their annotations. The application of semantic similarity to biomedical ontologies is recent; nevertheless, several studies have been published in the last few years describing and evaluating diverse approaches. Semantic similarity has become a valuable tool for validating the results drawn from biomedical studies such as gene clustering, gene expression data analysis, prediction and validation of molecular interactions, and disease gene prioritization. We review semantic similarity measures applied to biomedical ontologies and propose their classification according to the strategies they employ: node-based versus edge-based and pairwise versus groupwise. We also present comparative assessment studies and discuss the implications of their results. We survey the existing implementations of semantic similarity measures, and we describe examples of applications to biomedical research. This will clarify how biomedical researchers can benefit from semantic similarity measures and help them choose the approach most suitable for their studies.Biomedical ontologies are evolving toward increased coverage, formality, and integration, and their use for annotation is increasingly becoming a focus of both effort by biomedical experts and application of automated annotation procedures to create corpora of higher quality and completeness than are currently available. Given that semantic similarity measures are directly dependent on these evolutions, we can expect to see them gaining more relevance and even becoming as essential as sequence similarity is today in biomedical research.
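A minimal illustration of a node-based, groupwise measure of the kind surveyed above: two entities are compared by the Jaccard overlap of the ancestor closures of their ontology annotations. The toy ontology and the specific Jaccard formulation are illustrative assumptions, not a particular measure from the review.

```python
# Node-based, groupwise semantic similarity sketch on a tiny hypothetical
# ontology DAG (child -> list of parents). Entities annotated with ontology
# terms are compared via the Jaccard index of their ancestor closures.

ONTOLOGY = {
    "membrane": ["cellular component"],
    "plasma membrane": ["membrane"],
    "organelle": ["cellular component"],
    "mitochondrion": ["organelle"],
    "cellular component": [],
}

def ancestors(term):
    """All ancestors of a term in the DAG, including the term itself."""
    seen = {term}
    stack = [term]
    while stack:
        for parent in ONTOLOGY[stack.pop()]:
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

def groupwise_similarity(annots_a, annots_b):
    """Jaccard overlap of the combined ancestor sets of two annotation sets."""
    closure_a = set().union(*(ancestors(t) for t in annots_a))
    closure_b = set().union(*(ancestors(t) for t in annots_b))
    return len(closure_a & closure_b) / len(closure_a | closure_b)

gene_a = ["plasma membrane"]
gene_b = ["mitochondrion"]
sim = groupwise_similarity(gene_a, gene_b)  # shares only "cellular component"
```

This is the "node-based, groupwise" corner of the classification proposed in the review; edge-based measures would instead use path lengths between terms, and pairwise measures would average term-to-term scores rather than pooling ancestor sets.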
Elkin, P L; Brown, S H; Wright, G
This article is part of a For-Discussion-Section of Methods of Information in Medicine on "Biomedical Informatics: We are what we publish". It is introduced by an editorial and followed by a commentary paper with invited comments. In subsequent issues the discussion may continue through letters to the editor. Informatics experts have attempted to define the field via consensus projects, which have led to consensus statements by both AMIA and IMIA. We add to the output of this process the results of a study of the PubMed publications with abstracts from the field of Biomedical Informatics. We took the terms from the AMIA consensus document and the terms from the IMIA definitions of the field of Biomedical Informatics and combined them through human review to create the Health Informatics Ontology. We built a terminology server using the Intelligent Natural Language Processor (iNLP). We then downloaded the entire set of articles in MEDLINE identified by searching the literature for "Medical Informatics" OR "Bioinformatics". The articles were parsed using the joint AMIA/IMIA terminology and then again using SNOMED CT; for Bioinformatics they were also parsed using the HGNC Ontology. We identified 153,580 articles using "Medical Informatics" and 20,573 articles using "Bioinformatics". This resulted in 168,298 unique articles, with an overlap of 5,855 articles. Of these, 62,244 articles (37%) had titles and abstracts that contained at least one concept from the Health Informatics Ontology. SNOMED CT indexing showed that the field interacts with nearly all clinical fields of medicine. Further defining the field by what we publish can add value to the consensus-driven processes that have been the mainstay of the efforts to date. Next steps should be to extract terms from the literature that are uncovered and create class hierarchies and relationships for this content. We should also examine frequently occurring MeSH terms as markers to define Biomedical Informatics.
Academic medical and biomedical curricula are designed to educate future academics contributing to new developments in science, clinical practice and society. During undergraduate programs student training is typically focused on acquisition of knowledge and understanding of these interdisciplinary
Chen, Yen-Wei; Moussi, Joelle; Drury, Jeanie L; Wataha, John C
The use of zirconia in medicine and dentistry has rapidly expanded over the past decade, driven by its advantageous physical, biological, esthetic, and corrosion properties. Zirconia orthopedic hip replacements have shown superior wear-resistance over other systems; however, risk of catastrophic fracture remains a concern. In dentistry, zirconia has been widely adopted for endosseous implants, implant abutments, and all-ceramic crowns. Because of an increasing demand for esthetically pleasing dental restorations, zirconia-based ceramic restorations have become one of the dominant restorative choices. Areas covered: This review provides an updated overview of the applications of zirconia in medicine and dentistry with a focus on dental applications. The MEDLINE electronic database (via PubMed) was searched, and relevant original and review articles from 2010 to 2016 were included. Expert commentary: Recent data suggest that zirconia performs favorably in both orthopedic and dental applications, but quality long-term clinical data remain scarce. Concerns about the effects of wear, crystalline degradation, crack propagation, and catastrophic fracture are still debated. The future of zirconia in biomedical applications will depend on the generation of these data to resolve concerns.
Federal Laboratory Consortium — The Molecular Biomedical Imaging Laboratory (MBIL) is adjacent to, and has access to, the Department of Radiology and Imaging Sciences clinical imaging facilities. MBIL...
Vardharajula, Sandhya; Ali, Sk Z; Tiwari, Pooja M; Eroğlu, Erdal; Vig, Komal; Dennis, Vida A; Singh, Shree R
Carbon nanotubes (CNTs) are emerging as novel nanomaterials for various biomedical applications. CNTs can be used to deliver a variety of therapeutic agents, including biomolecules, to the target disease sites. In addition, their unparalleled optical and electrical properties make them excellent candidates for bioimaging and other biomedical applications. However, the high cytotoxicity of CNTs limits their use in humans and many biological systems. The biocompatibility and low cytotoxicity of CNTs are attributed to size, dose, duration, testing systems, and surface functionalization. The functionalization of CNTs improves their solubility and biocompatibility and alters their cellular interaction pathways, resulting in much-reduced cytotoxic effects. Functionalized CNTs are promising novel materials for a variety of biomedical applications. These potential applications are particularly enhanced by their ability to penetrate biological membranes with relatively low cytotoxicity. This review is directed towards the overview of CNTs and their functionalization for biomedical applications with minimal cytotoxicity.
This book is based on a graduate course entitled "Ubiquitous Healthcare Circuits and Systems" that was given by one of the editors. It includes an introduction and overview of biomedical ICs and provides information on current trends in research.
Deloatch, E. M.
The five problems studied for biomedical applications of NASA technology are reported. The studies reported are: design modification of electrophoretic equipment, operating room environment control, hematological viscometry, handling system for iridium, and indirect blood pressure measuring device.
Discusses the definition of "biomedical engineering" and the development of educational programs in the field. Includes detailed descriptions of the roles of bioengineers, medical engineers, and chemical engineers. (CC)
Background Cell lines and cell types are extensively studied in biomedical research, yielding a significant number of publications each year. Identifying cell lines and cell types precisely in publications is crucial for reproducibility and knowledge integration. There are efforts to standardise cell nomenclature through ontology development in support of the FAIR principles for cell knowledge. However, it is important to analyse the usage of cell nomenclature in publications at a large scale to understand the level of uptake of this nomenclature by scientists. In this study, we analyse the usage of cell nomenclature, both in vivo and in vitro, in the biomedical literature using text-mining methods and present our results. Results We identified 59% of the cell type classes in the Cell Ontology and 13% of the cell line classes in the Cell Line Ontology in the literature. Our analysis showed that cell line nomenclature is much more ambiguous than cell type nomenclature. However, trends indicate that standardised nomenclature for cell lines and cell types is increasingly used in publications. Conclusions Our findings provide insight into how experimental cells are described in publications, may allow for improved standardisation of cell type and cell line nomenclature, and can be used to develop efficient text-mining applications for cell types and cell lines. All data generated in this study is available at https://github.com/shenay/CellNomenclatureStudy.
Background Text-mining can assist biomedical researchers in reducing information overload by extracting useful knowledge from large collections of text. We developed a novel text-mining method based on analyzing the network structure created by symbol co-occurrences as a way to extend the capabilities of knowledge extraction. The method was applied to the task of automatic gene and protein name synonym extraction. Results Performance was measured on a test set consisting of about 50,000 abstracts from one year of MEDLINE. Synonyms retrieved from curated genomics databases were used as a gold standard. The system obtained a maximum F-score of 22.21% (23.18% precision and 21.36% recall), with high efficiency in the use of seed pairs. Conclusion The method performs comparably with other studied methods, does not rely on sophisticated named-entity recognition, and requires little initial seed knowledge.
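As a quick sanity check, the maximum F-score reported above follows from the stated precision and recall via their harmonic mean. The sketch below recomputes it; small rounding differences are expected because the published percentages are themselves rounded.

```python
# F-beta score from precision and recall; beta=1 (the F1 score) weights
# precision and recall equally via the harmonic mean.

def f_score(precision, recall, beta=1.0):
    """F-beta = (1 + b^2) * P * R / (b^2 * P + R)."""
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Using the rounded figures reported in the abstract:
f1 = f_score(0.2318, 0.2136)  # approximately 0.222, i.e. ~22.2%
```

With the rounded inputs this evaluates to about 22.23%, consistent with the reported 22.21% maximum F-score computed from unrounded values.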
Hydroxyapatite coatings are of great importance in the biological and biomedical coatings fields, especially in the current era of nanotechnology and bioapplications. With a bonelike structure that promotes osseointegration, hydroxyapatite coating can be applied to otherwise bioinactive implants to make their surface bioactive, thus achieving faster healing and recovery. In addition to applications in orthopedic and dental implants, this coating can also be used in drug delivery. Hydroxyapatite Coatings for Biomedical Applications explores developments in the processing and property characteri
Cheng, JX; Widjaja, F; Choi, JE; Hendren, RL
Complementary and alternative medicine (CAM) is widely used to treat children with psychiatric disorders. In this review, MedLine was searched for various biomedical/CAM treatments in combination with the key words "children," "adolescents," "psychiatric disorders," and "complementary alternative medicine." The biomedical/CAM treatments most thoroughly researched were omega-3 fatty acids, melatonin, and memantine. Those with the fewest published studies were N-acetylcysteine, vitamin B12, a...
Mora Pérez, Oscar
This final year project presents the design principles and prototype implementation of BIMS (Biomedical Information Management System), a flexible software system which provides an infrastructure to manage all information required by biomedical research projects. The BIMS project was initiated with the motivation to solve several limitations in medical data acquisition of some research projects, in which Universitat Pompeu Fabra takes part. These limitations, based on the lack of control mechan...
The John Glenn Biomedical Engineering Consortium is an inter-institutional research and technology development effort, beginning with ten projects in FY02, that is aimed at applying GRC expertise in fluid physics and sensor development with local biomedical expertise to mitigate the risks of space flight on the health, safety, and performance of astronauts. It is anticipated that several new technologies will be developed that are applicable to both medical needs in space and on Earth.
ABSTRACT: The subject of this thesis is the exploration of the suitability of chitosan and some of its derivatives for some chosen biomedical applications. Chitosan-graft-poly (N-vinyl imidazole), Chitosan-tripolyphosphate and ascorbyl chitosan were synthesized and characterized for specific biomedical applications in line with their chemical functionalities. Chitosan-graft-poly (N-vinyl imidazole), Chi-graft-PNVI, was synthesized by two methods: via an N-protection route and without N-pr...
De Jager-Loftus, Danielle P; Midyette, J David; Harvey, Barbara
Providing library and reference services within a biomedical research community presents special challenges for librarians, especially those in historically lower-funded states. These challenges can include understanding needs, defining and communicating the library's role, building relationships, and developing and maintaining general and subject specific knowledge. This article describes a biomedical research network and the work of health sciences librarians at the lead intensive research institution with librarians from primarily undergraduate institutions and tribal colleges. Applying the concept of a community of practice to a collaborative effort suggests how librarians can work together to provide effective reference services to researchers in biomedicine.
Friedman, C P; Wildemuth, B M; Muriuki, M; Gant, S P; Downs, S M; Twarog, R G; de Bliek, R
This study explored which of two modes of access to a biomedical database better supported problem solving in bacteriology. Boolean access, which allowed subjects to frame their queries as combinations of keywords, was compared to hypertext access, which allowed subjects to navigate from one database node to another. The accessible biomedical data were identical across systems. Data were collected from 42 first year medical students, each randomized to the Boolean or hypertext system, before and after their bacteriology course. Subjects worked eight clinical case problems, first using only their personal knowledge and, subsequently, with aid from the database. Database retrievals enabled students to answer questions they could not answer based on personal knowledge only. This effect was greater when personal knowledge of bacteriology was lower. The results also suggest that hypertext was superior to Boolean access in helping subjects identify possible infectious agents in these clinical case problems.
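The Boolean access mode described above frames a query as a combination of keywords evaluated against an inverted index. A minimal sketch of that retrieval style over a toy index (the index contents and term names are hypothetical, not from the study's database):

```python
def boolean_search(index, must=(), must_not=()):
    """Keyword Boolean retrieval: return ids of documents that contain all
    `must` terms and none of the `must_not` terms. `index` maps each term
    to the set of document ids containing it (an inverted index)."""
    if not must:
        return set()
    docs = set.intersection(*(index.get(t, set()) for t in must))
    for t in must_not:
        docs -= index.get(t, set())
    return docs

# Toy inverted index over hypothetical clinical case documents.
idx = {"fever": {1, 2, 3}, "rash": {2, 3}, "cough": {1}}
hits = boolean_search(idx, must=("fever", "rash"), must_not=("cough",))  # {2, 3}
```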
Moher, David; Galipeau, James; Alam, Sabina; Barbour, Virginia; Bartolomeos, Kidist; Baskin, Patricia; Bell-Syer, Sally; Cobey, Kelly D; Chan, Leighton; Clark, Jocalyn; Deeks, Jonathan; Flanagin, Annette; Garner, Paul; Glenny, Anne-Marie; Groves, Trish; Gurusamy, Kurinchi; Habibzadeh, Farrokh; Jewell-Thomas, Stefanie; Kelsall, Diane; Lapeña, José Florencio; MacLehose, Harriet; Marusic, Ana; McKenzie, Joanne E; Shah, Jay; Shamseer, Larissa; Straus, Sharon; Tugwell, Peter; Wager, Elizabeth; Winker, Margaret; Zhaori, Getu
Scientific editors are responsible for deciding which articles to publish in their journals. However, we have not found documentation of their required knowledge, skills, and characteristics, or the existence of any formal core competencies for this role. We describe the development of a minimum set of core competencies for scientific editors of biomedical journals. The 14 key core competencies are divided into three major areas, and each competency has a list of associated elements or descriptions of more specific knowledge, skills, and characteristics that contribute to its fulfillment. We believe that these core competencies are a baseline of the knowledge, skills, and characteristics needed to perform competently the duties of a scientific editor at a biomedical journal.
Stary, J.; Kyrs, M.; Navratil, J.; Havelka, S.; Hala, J.
Definitions of the basic terms and relations are given, and current knowledge of the possibilities for extracting elements, oxides, covalently bound halogenides and heteropolyacids is described. Greatest attention is devoted to the detailed analysis of the extraction of chelates and ion associates using diverse agents. For both types of compounds, detailed conditions for separation are given and the effects of the individual factors are listed. Attention is also devoted to extractions using mixtures of organic agents, their synergic effects, and to extractions in non-aqueous solvents. The effects of radiation on extraction and the main types of apparatus used for extractions carried out in the laboratory are described. (L.K.)
Jobbágy, Akos; Benyó, Zoltán; Monos, Emil
The Bologna Declaration aims at harmonizing the European higher education structure. In accordance with the Declaration, biomedical engineering will be offered as a master (MSc) course also in Hungary, from year 2009. Since 1995 a biomedical engineering course has been held in cooperation of three universities: Semmelweis University, Budapest Veterinary University, and Budapest University of Technology and Economics. One of the latter's faculties, the Faculty of Electrical Engineering and Informatics, has been responsible for the course. Students could start their biomedical engineering studies - usually in parallel with their first degree course - after they had collected at least 180 ECTS credits. Consequently, the biomedical engineering course could have been considered a master course even before the Bologna Declaration. Students had to collect 130 ECTS credits during the six-semester course. This is equivalent to four-semester full-time studies, because during the first three semesters the curriculum required students to gain only one third of the usual ECTS credits. The paper gives a survey of the new biomedical engineering master course, briefly summing up also the subjects in the curriculum.
Díaz Lantada, Andrés; Serrano Olmedo, José Javier; Ros Felip, Antonio; Jiménez Fernández, Javier; Muñoz García, Julio; Claramunt Alonso, Rafael; Carpio Huertas, Jaime
Biomedical engineering is one of the more recent fields of engineering, aimed at the application of engineering principles, methods and design concepts to medicine and biology for healthcare purposes, mainly as a support for preventive, diagnostic or therapeutic tasks. Biomedical engineering professionals are expected to achieve, during their studies and professional practice, considerable knowledge of both health sciences and engineering. Studying biomedical engineering programmes, or combin...
Tkacz, Ewaryst; Paszenda, Zbigniew; Piętka, Ewa
This book presents the proceedings of the “Innovations in Biomedical Engineering IBE’2016” Conference held on October 16–18, 2016 in Poland, discussing recent research on innovations in biomedical engineering. The past decade has seen the dynamic development of more and more sophisticated technologies, including biotechnologies, and more general technologies applied in the area of life sciences. As such the book covers the broadest possible spectrum of subjects related to biomedical engineering innovations. Divided into four parts, it presents state-of-the-art achievements in: • engineering of biomaterials, • modelling and simulations in biomechanics, • informatics in medicine, • signal analysis. The book helps bridge the gap between technological and methodological engineering achievements on the one hand and clinical requirements in the three major areas diagnosis, therapy and rehabilitation on the other.
Gibbs, Kenneth D.; McGready, John; Griffin, Kimberly
Recent biomedical workforce policy efforts have centered on enhancing career preparation for trainees, and increasing diversity in the research workforce. Postdoctoral scientists, or postdocs, are among those most directly impacted by such initiatives, yet their career development remains understudied. This study reports results from a 2012 national survey of 1002 American biomedical postdocs. On average, postdocs reported increased knowledge about career options but lower clarity about their career goals relative to PhD entry. The majority of postdocs were offered structured career development at their postdoctoral institutions, but less than one-third received this from their graduate departments. Postdocs from all social backgrounds reported significant declines in interest in faculty careers at research-intensive universities and increased interest in nonresearch careers; however, there were differences in the magnitude and period of training during which these changes occurred across gender and race/ethnicity. Group differences in interest in faculty careers were explained by career interest differences formed during graduate school but not by differences in research productivity, research self-efficacy, or advisor relationships. These findings point to the need for enhanced career development earlier in the training process, and interventions sensitive to distinctive patterns of interest development across social identity groups. PMID:26582238
Schulz, Stefan; López-García, Pablo
A variety of rich terminology systems, such as thesauri, classifications, nomenclatures and ontologies support information and knowledge processing in health care and biomedical research. Nevertheless, human language, manifested as individually written texts, persists as the primary carrier of information, in the description of disease courses or treatment episodes in electronic medical records, and in the description of biomedical research in scientific publications. In the context of the discussion about big data in biomedicine, we hypothesize that the abstraction of the individuality of natural language utterances into structured and semantically normalized information facilitates the use of statistical data analytics to distil new knowledge out of textual data from biomedical research and clinical routine. Computerized human language technologies are constantly evolving and are increasingly ready to annotate narratives with codes from biomedical terminology. However, this depends heavily on linguistic and terminological resources. The creation and maintenance of such resources is labor-intensive. Nevertheless, it is sensible to assume that big data methods can be used to support this process. Examples include the learning of hierarchical relationships, the grouping of synonymous terms into concepts and the disambiguation of homonyms. Although clear evidence is still lacking, the combination of natural language technologies, semantic resources, and big data analytics is promising.
Hasman, A; Haux, R
Modeling is a significant part of research, education and practice in biomedical and health informatics. Our objective was to explore which types of models of processes are used in current biomedical/health informatics research, as reflected in publications of scientific journals in this field. Also, the implications for medical informatics curricula were investigated. Retrospective, prolective observational study on recent publications of the two official journals of the International Medical Informatics Association (IMIA), the International Journal of Medical Informatics (IJMI) and Methods of Information in Medicine (MIM). All publications of the years 2004 and 2005 from these journals were indexed according to a given list of model types. Random samples out of these publications were analysed in more depth. Three hundred and eighty-four publications have been analysed, 190 of IJMI and 194 of MIM. For publications in special issues (121 in IJMI) and special topics (132 in MIM) we found differences between theme-centered and conference-centered special issues/special topics (SIT) publications. In particular, we could observe a high variation between modeling in publications of theme-centered SITs. It became obvious that often sound formal knowledge as well as a strong engineering background is needed for carrying out this type of research. Usually, this knowledge and the related skills can be best provided in consecutive B.Sc. and M.Sc. programs in medical informatics (respectively, health informatics, biomedical informatics). If the focus should be primarily on health information systems and evaluation this can be offered in a M.Sc. program in medical informatics. In analysing the 384 publications it became obvious that modeling continues to be a major task in research, education and practice in biomedical and health informatics. Knowledge and skills on a broad range of model types are needed in biomedical/health informatics.
Magnetic particles are increasingly being used in a wide variety of biomedical applications. Written by a team of internationally respected experts, this book provides an up-to-date authoritative reference for scientists and engineers. The first section presents the fundamentals of the field by explaining the theory of magnetism, describing techniques to synthesize magnetic particles, and detailing methods to characterize magnetic particles. The second section describes biomedical applications, including chemical sensors and cellular actuators, and diagnostic applications such as drug delivery, hyperthermia cancer treatment, and magnetic resonance imaging contrast.
This book presents and describes imaging technologies that can be used to study chemical processes and structural interactions in dynamic systems, principally in biomedical systems. The imaging technologies, largely biomedical imaging technologies such as MRT, fluorescence mapping, Raman mapping, nanoESCA, and CARS microscopy, have been selected according to their application range and to the chemical information content of their data. These technologies allow for the analysis and evaluation of delicate biological samples, which must not be disturbed during the process. Ultimately, this may me
Bajic, Vladimir B.
The last few decades have witnessed an enormous accumulation of data and information in various forms in the domain of Biomedicine. To search for accurate and rich information on any particular topic in this domain appears challenging. The main reasons are that a) useful pieces of information are scattered across numerous sources, b) data is contained in a variety of formats, c) data/information are not indexed with standard identifiers, d) a lot of information is in a free text format, and e) frequently the information needed is not explicitly presented in any single data/information source. This situation requires new approaches to search for, extract and explore the desired information. We will present a system developed at KAUST that addresses some of these challenges. This system is a representative of a technological solution to what can be named Next Generation Knowledge Mining Systems for the biomedical domain.
Moradi, Milad; Ghadiri, Nasser
Automatic text summarization tools can help users in the biomedical domain to access information efficiently from a large volume of scientific literature and other sources of text documents. In this paper, we propose a summarization method that combines itemset mining and domain knowledge to construct a concept-based model and to extract the main subtopics from an input document. Our summarizer quantifies the informativeness of each sentence using the support values of itemsets appearing in the sentence. To address the concept-level analysis of text, our method initially maps the original document to biomedical concepts using the Unified Medical Language System (UMLS). Then, it discovers the essential subtopics of the text using a data mining technique, namely itemset mining, and constructs the summarization model. The employed itemset mining algorithm extracts a set of frequent itemsets containing correlated and recurrent concepts of the input document. The summarizer selects the most related and informative sentences and generates the final summary. We evaluate the performance of our itemset-based summarizer using the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) metrics, performing a set of experiments. We compare the proposed method with GraphSum, TexLexAn, SweSum, SUMMA, AutoSummarize, the term-based version of the itemset-based summarizer, and two baselines. The results show that the itemset-based summarizer performs better than the compared methods. The itemset-based summarizer achieves the best scores for all the assessed ROUGE metrics (R-1: 0.7583, R-2: 0.3381, R-W-1.2: 0.0934, and R-SU4: 0.3889). We also perform a set of preliminary experiments to specify the best value for the minimum support threshold used in the itemset mining algorithm. The results demonstrate that the value of this threshold directly affects the accuracy of the summarization model, such that a significant decrease can be observed in the performance of summarization due to
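The scoring idea described above, ranking sentences by the summed support of the frequent concept itemsets they contain, can be sketched in a few lines. This is an illustrative simplification (itemsets limited to pairs, toy concept IDs standing in for UMLS concepts), not the paper's actual algorithm:

```python
from itertools import combinations

def frequent_itemsets(sentences, min_support=0.5, max_size=2):
    """Concept itemsets (here up to pairs) whose support, i.e. the fraction
    of sentences containing every concept of the itemset, meets min_support."""
    n = len(sentences)
    counts = {}
    for sent in sentences:
        concepts = sorted(set(sent))
        for size in range(1, max_size + 1):
            for itemset in combinations(concepts, size):
                counts[itemset] = counts.get(itemset, 0) + 1
    return {s: c / n for s, c in counts.items() if c / n >= min_support}

def rank_sentences(sentences, itemsets):
    """Order sentence indices by the summed support of the frequent itemsets they cover."""
    def score(sent):
        s = set(sent)
        return sum(sup for items, sup in itemsets.items() if s.issuperset(items))
    return sorted(range(len(sentences)), key=lambda i: score(sentences[i]), reverse=True)

# Toy input: sentences as bags of hypothetical concept identifiers.
sents = [["C1", "C2"], ["C1", "C2", "C3"], ["C3"], ["C1"]]
fi = frequent_itemsets(sents, min_support=0.5)
order = rank_sentences(sents, fi)  # sentence 1 covers the most frequent itemsets
```

A summary would then be assembled from the top-ranked sentences until a length budget is reached.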
Simmons, Michael; Singhal, Ayush; Lu, Zhiyong
The key question of precision medicine is whether it is possible to find clinically actionable granularity in diagnosing disease and classifying patient risk. The advent of next-generation sequencing and the widespread adoption of electronic health records (EHRs) have provided clinicians and researchers a wealth of data and made possible the precise characterization of individual patient genotypes and phenotypes. Unstructured text, found in biomedical publications and clinical notes, is an important component of genotype and phenotype knowledge. Publications in the biomedical literature provide essential information for interpreting genetic data. Likewise, clinical notes contain the richest source of phenotype information in EHRs. Text mining can render these texts computationally accessible and support information extraction and hypothesis generation. This chapter reviews the mechanics of text mining in precision medicine and discusses several specific use cases, including database curation for personalized cancer medicine, patient outcome prediction from EHR-derived cohorts, and pharmacogenomic research. Taken as a whole, these use cases demonstrate how text mining enables effective utilization of existing knowledge sources and thus promotes increased value for patients and healthcare systems. Text mining is an indispensable tool for translating genotype-phenotype data into effective clinical care that will undoubtedly play an important role in the eventual realization of precision medicine.
Zhang, Qing; Cao, Yong-Gang; Yu, Hong
Citations are used ubiquitously in biomedical full-text articles and play an important role for representing both the rhetorical structure and the semantic content of the articles. As a result, text mining systems will significantly benefit from a tool that automatically extracts the content of a citation. In this study, we applied the supervised machine-learning algorithms Conditional Random Fields (CRFs) to automatically parse a citation into its fields (e.g., Author, Title, Journal, and Ye...
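A linear-chain CRF of the kind described labels each citation token with a field (Author, Title, Journal, Year, ...) from hand-crafted per-token features. A sketch of such a feature function is below; the feature names are illustrative assumptions, not the paper's actual feature set, and the CRF training itself (e.g. via a library such as sklearn-crfsuite) is omitted:

```python
import re

def token_features(tokens, i):
    """Illustrative per-token features of the kind fed to a linear-chain CRF
    when labelling citation fields. Feature names here are hypothetical."""
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "is_capitalized": tok[:1].isupper(),
        "is_year": bool(re.fullmatch(r"(19|20)\d{2}", tok)),      # e.g. "2004"
        "is_initial": bool(re.fullmatch(r"[A-Z]\.", tok)),        # e.g. "J."
        "has_digit": any(c.isdigit() for c in tok),
        "prev": tokens[i - 1].lower() if i > 0 else "<BOS>",      # left context
        "next": tokens[i + 1].lower() if i + 1 < len(tokens) else "<EOS>",
    }

feats = token_features(["Smith", "J.", "Nature", "2004"], 3)
```

The CRF then learns field transitions (Author tokens tend to precede Title tokens, Year tends to follow Journal) jointly with these emissions.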
Simangwa, L.; Gazhva, S.; Gurenkova, N.
A total of 900 mothers were interviewed with regard to the practice of tooth-bud extraction in children, and 900 children aged 0-5 years old were examined for missing primary teeth and for scars or wounds on the gums due to tooth-bud extraction. The prevalence of tooth-bud extraction at Ilembula Lutheran Hospital was 0.7%, and in all cases the extracted tooth-buds were lower jaw canines.
Zhou, Xuezhong; Peng, Yonghong; Liu, Baoyan
Extracting meaningful information and knowledge from free text is the subject of considerable research interest in the machine learning and data mining fields. Text data mining (or text mining) has become one of the most active research sub-fields in data mining. Significant developments in the area of biomedical text mining during the past years have demonstrated its great promise for supporting scientists in developing novel hypotheses and new knowledge from the biomedical literature. Traditional Chinese medicine (TCM) provides a distinct methodology with which to view human life. It is one of the most complete and distinguished traditional medicines with a history of several thousand years of studying and practicing the diagnosis and treatment of human disease. It has been shown that the TCM knowledge obtained from clinical practice has become a significant complementary source of information for modern biomedical sciences. TCM literature obtained from the historical period and from modern clinical studies has recently been transformed into digital data in the form of relational databases or text documents, which provide an effective platform for information sharing and retrieval. This motivates and facilitates research and development into knowledge discovery approaches and to modernize TCM. In order to contribute to this still growing field, this paper presents (1) a comparative introduction to TCM and modern biomedicine, (2) a survey of the related information sources of TCM, (3) a review and discussion of the state of the art and the development of text mining techniques with applications to TCM, (4) a discussion of the research issues around TCM text mining and its future directions. Copyright 2010 Elsevier Inc. All rights reserved.
Duque, Andres; Stevenson, Mark; Martinez-Romo, Juan; Araujo, Lourdes
Word sense disambiguation is a key step for many natural language processing tasks (e.g. summarization, text classification, relation extraction) and presents a challenge to any system that aims to process documents from the biomedical domain. In this paper, we present a new graph-based unsupervised technique to address this problem. The knowledge base used in this work is a graph built with co-occurrence information from medical concepts found in scientific abstracts, and hence adapted to the specific domain. Unlike other unsupervised approaches based on static graphs such as UMLS, in this work the knowledge base takes the context of the ambiguous terms into account. Abstracts downloaded from PubMed are used for building the graph and disambiguation is performed using the personalized PageRank algorithm. Evaluation is carried out over two test datasets widely explored in the literature. Different parameters of the system are also evaluated to test robustness and scalability. Results show that the system is able to outperform state-of-the-art knowledge-based systems, obtaining more than 10% of accuracy improvement in some cases, while only requiring minimal external resources. Copyright © 2018 Elsevier B.V. All rights reserved.
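Personalized PageRank, as used above, biases the random walk's restart distribution toward the ambiguous term's context words so that senses connected to the context accumulate more rank. A minimal pure-Python power-iteration sketch on a toy co-occurrence graph (the graph and words are invented for illustration, not the paper's PubMed-derived graph):

```python
def personalized_pagerank(graph, personalization, alpha=0.85, iters=50):
    """Power iteration for personalized PageRank on an undirected graph
    given as {node: [neighbours]}. The restart vector concentrates
    probability mass on the context words of the ambiguous term."""
    nodes = list(graph)
    p = {n: personalization.get(n, 0.0) for n in nodes}
    total = sum(p.values()) or 1.0
    p = {n: v / total for n, v in p.items()}        # normalized restart vector
    rank = dict(p)
    for _ in range(iters):
        new = {n: (1 - alpha) * p[n] for n in nodes}  # teleport to context
        for n in nodes:
            deg = len(graph[n])
            if deg:
                share = alpha * rank[n] / deg         # spread rank to neighbours
                for m in graph[n]:
                    new[m] += share
        rank = new
    return rank

# Toy co-occurrence graph; the context word "fever" pulls rank toward
# the infection-related neighbourhood of the ambiguous term "cold".
g = {"cold": ["virus", "temperature"], "virus": ["cold", "fever"],
     "fever": ["virus"], "temperature": ["cold"]}
pr = personalized_pagerank(g, personalization={"fever": 1.0})
```

The sense whose associated nodes receive the most rank under the context-biased walk is then selected.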
Canner, Judith E; McEligot, Archana J; Pérez, María-Eglée; Qian, Lei; Zhang, Xinzhi
The gap in educational attainment separating underrepresented minorities from Whites and Asians remains wide. Such a gap has significant impact on workforce diversity and inclusion among cross-cutting Biomedical Data Science (BDS) research, which presents great opportunities as well as major challenges for addressing health disparities. This article provides a brief description of the newly established National Institutes of Health Big Data to Knowledge (BD2K) diversity initiatives at four universities: California State University, Monterey Bay; Fisk University; University of Puerto Rico, Río Piedras Campus; and California State University, Fullerton. We emphasize three main barriers to BDS careers (i.e., preparation, exposure, and access to resources) experienced among those pioneer programs and recommendations for possible solutions (i.e., early and proactive mentoring, enriched research experience, and data science curriculum development). The diversity disparities in BDS demonstrate the need for educators, researchers, and funding agencies to support evidence-based practices that will lead to the diversification of the BDS workforce.
Ndiaye, M; El Metghari, L; Soumah, M M; Sow, M L
Biomedical waste is currently a real health and environmental concern. In this regard, a study was conducted in 5 hospitals in Dakar to review their management of biomedical waste and to formulate recommendations. This is a descriptive cross-sectional study conducted from 1 April to 31 July 2010 in five major hospitals of Dakar. A questionnaire administered to hospital managers, heads of departments, residents and heads of hospital hygiene departments, as well as interviews conducted with healthcare personnel and operators of waste incinerators, made it possible to assess mechanisms and knowledge on biomedical waste management. Content analysis of interviews, observations and a data sheet allowed processing the data thus gathered. Of the 150 questionnaires distributed, 98 responses were obtained, representing a response rate of 65.3%. An interview was conducted with 75 employees directly involved in the management of biomedical waste and observations were made on biomedical waste management in 86 hospital services. Sharps as well as blood and liquid waste were found in all services except in pharmacies, pharmaceutical waste in 66 services, infectious waste in 49 services and anatomical waste in 11 services. Sorting of biomedical waste was ill-adapted in 53.5% (N = 46) of services and the use of the colour-coding system effective in 31.4% (N = 27) of services. Containers for the safe disposal of sharps were available in 82.5% (N = 71) of services and were effectively utilized in 51.1% (N = 44) of these services. In most services, ill-adapted packaging was observed, with the use of plastic bottles and bins for waste collection and overfilled containers. With the exception of Hôpital Principal, the main storage area was in open air, unsecured, with biomedical waste littered on the floor and often mixed with waste similar to household refuse. The transfer of biomedical waste to the main storage area was done using trolleys or carts in 67.4% (N = 58) of services and
Biomedical Nanomaterials brings together the engineering applications and challenges of using nanostructured surfaces and nanomaterials in healthcare in a single source. Each chapter covers important and new information in the biomedical applications of nanomaterials.
Biomedical researchers are facing data deluge challenges such as dealing with large volume of complex heterogeneous data and complex and computationally demanding data processing methods. Such scale and complexity of biomedical research requires multi-disciplinary collaboration between scientists
Wang, Liqin; Haug, Peter J; Del Fiol, Guilherme
Mining disease-specific associations from existing knowledge resources can be useful for building disease-specific ontologies and supporting knowledge-based applications. Many association mining techniques have been exploited. However, the challenge remains that the extracted associations contain much noise. It is unreliable to determine the relevance of an association by simply setting arbitrary cut-off points on multiple scores of relevance, and it would be expensive to ask human experts to manually review a large number of associations. We propose that machine-learning-based classification can be used to separate the signal from the noise, and to provide a feasible approach to create and maintain disease-specific vocabularies. We initially focused on disease-medication associations for the purpose of simplicity. For a disease of interest, we extracted potentially treatment-related drug concepts from biomedical literature citations and from a local clinical data repository. Each concept was associated with multiple measures of relevance (i.e., features) such as frequency of occurrence. For the purpose of machine learning, we formed nine datasets for three diseases, each disease having two single-source datasets and one formed from the combination of the previous two. All the datasets were labeled using existing reference standards. Thereafter, we conducted two experiments: (1) to test if adding features from the clinical data repository would improve the performance of classification achieved using features from the biomedical literature only, and (2) to determine if classifier(s) trained with known medication-disease data sets would be generalizable to new disease(s). Simple logistic regression and LogitBoost were two classifiers identified as the preferred models separately for the biomedical-literature datasets and combined datasets. The performance of the classification using combined features provided significant improvement beyond that using
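The classification step described above, separating true disease-medication associations from noise using relevance features, can be sketched with a plain gradient-descent logistic regression. The feature layout (literature frequency, clinical frequency) and the toy data are illustrative assumptions, a minimal stand-in for the "simple logistic regression" model the study used:

```python
import math

def train_logistic(X, y, lr=0.5, epochs=200):
    """Stochastic gradient descent for logistic regression.
    X: feature rows (e.g. relevance scores), y: labels (1 = true association)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))       # predicted probability
            err = p - yi                          # gradient of log-loss w.r.t. z
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    z = b + sum(wj * xj for wj, xj in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z)) >= 0.5

# Toy data: [literature frequency, clinical frequency], label 1 = true association.
X = [[0.9, 0.8], [0.7, 0.9], [0.1, 0.2], [0.2, 0.05]]
y = [1, 1, 0, 0]
w, b = train_logistic(X, y)
```

A candidate association scoring above the learned threshold would be kept in the vocabulary; the rest discarded as noise.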
Huffstetler, J.K.; Dailey, N.S.; Rickert, L.W.; Chilton, B.D.
The Information Center Complex (ICC), a centrally administered group of information centers, provides information support to environmental and biomedical research groups and others within and outside Oak Ridge National Laboratory. In-house data base building and development of specialized document collections are important elements of the ongoing activities of these centers. ICC groups must be concerned with language which will adequately classify and insure retrievability of document records. Language control problems are compounded when the complexity of modern scientific problem solving demands an interdisciplinary approach. Although there are several word lists, indexes, and thesauri specific to various scientific disciplines usually grouped as Environmental Sciences, no single generally recognized authority can be used as a guide to the terminology of all environmental science. If biomedical terminology for the description of research on environmental effects is also needed, the problem becomes even more complex. The building of a word list which can be used as a general guide to the environmental/biomedical sciences has been a continuing activity of the Information Center Complex. This activity resulted in the publication of the Environmental Biomedical Terminology Index (EBTI).
Lihong V. Wang summarizes his tenure as Editor-in-Chief of the Journal of Biomedical Optics and introduces his successor, Brian Pogue, who will assume the role in January 2018. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
This volume gives an introduction to a fascinating research area to applied mathematicians. It is devoted to providing the exposition of promising analytical and numerical techniques for solving challenging biomedical imaging problems, which trigger the investigation of interesting issues in various branches of mathematics.
The following instructions relating to submissions must be adhered to. Failure to conform can lead to delay in publication. Preferred method of submission. Manuscripts may be submitted by post (Editor-in-chief Journal of Biomedical Investigation, Department of Pharmacology and Therapeutics, Faculty of Medicine College ...
Gowen, Richard J.
Discusses recent developments in the health care industry and their impact on the future of biomedical engineering education. Indicates that a more thorough understanding of the complex functions of the living organism can be acquired through the application of engineering techniques to problems of life sciences. (CC)
Vol. 52, No. 1 (2003), pp. 39-43. ISSN 0862-8408. R&D Projects: GA ČR GA310/03/1381. Grant - others: Howard Hughes Medical Institute (US) HHMI55000323. Institutional research plan: CEZ:AV0Z5052915. Keywords: statistics * usage * biomedical journals. Subject RIV: EC - Immunology. Impact factor: 0.939, year: 2003
Ramalingam, Murugan; Ramakrishna, Seeram; Kobayashi, Hisatoshi
This cutting-edge book covers all the important aspects of the basic science involved in materials in biomedical technology, especially structure and properties, techniques and technological innovations in material processing and characterization, as well as the applications. The volume consists of 12 chapters written by acknowledged experts in the biomaterials field and covers a wide range of topics and applications.
The African Journal of biomedical Research was founded in 1998 as a joint project between a private communications outfit (Laytal Communications) and ... is aimed at being registered in future as a non-governmental organization involved in the promotion of scientific proceedings and publications in developing countries.
Tulkens, Stéphan; Šuster, Simon; Daelemans, Walter
In this paper, we report a knowledge-based method for Word Sense Disambiguation in the domains of biomedical and clinical text. We combine word representations created on large corpora with a small number of definitions from the UMLS to create concept representations, which we then compare to representations of the context of ambiguous terms. Using no relational information, we obtain comparable performance to previous approaches on the MSH-WSD dataset, which is a well-known dataset in the bi...
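The core comparison described here, a concept representation built from definition words matched against the context of the ambiguous term by cosine similarity, can be sketched roughly as below. The word vectors, sense names, and definitions are all made up for illustration; a real system would use embeddings trained on large biomedical corpora together with UMLS definitions:

```python
import math

DIM = 3
# Tiny hand-made "word vectors" for illustration only.
vecs = {"blood": [1.0, 0.0, 0.0], "vessel": [0.8, 0.2, 0.0],
        "bile":  [0.1, 0.9, 0.0], "duct":   [0.0, 1.0, 0.0]}

def embed(text):
    """Represent a text (sense definition or context window) as the mean of its word vectors."""
    vs = [vecs[w] for w in text.lower().split() if w in vecs]
    return [sum(v[i] for v in vs) / len(vs) for i in range(DIM)] if vs else [0.0] * DIM

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def disambiguate(context, sense_definitions):
    """Pick the sense whose definition-based concept vector is closest to the context."""
    ctx = embed(context)
    return max(sense_definitions, key=lambda s: cosine(ctx, embed(sense_definitions[s])))

# Hypothetical senses of an ambiguous anatomical term:
senses = {"vascular": "blood vessel", "biliary": "bile duct"}
print(disambiguate("the bile duct was obstructed", senses))  # -> biliary
```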
Ammenwerth, E; Hackl, W O
Biomedical informatics programs exist in many countries. Some analyses of the skills needed and of recommendations for curricular content for such programs have been published. However, not much is known of the job profiles and job careers of their graduates. To analyse the job profiles and job careers of 175 graduates of the biomedical informatics bachelor and master program of the Tyrolean university UMIT. Survey of all biomedical informatics students who graduated from UMIT between 2001 and 2013. Information is available for 170 graduates. Eight percent of graduates are male. Of all bachelor graduates, 86% started a master program. Of all master graduates, 36% started a PhD. The job profiles are quite diverse: at the time of the survey, 35% of all master graduates worked in the health IT industry, 24% at research institutions, 9% in hospitals, 9% as medical doctors, 17% as informaticians outside the health care sector, and 6% in other areas. Overall, 68% of the graduates are working as biomedical informaticians. The results of the survey indicate a good job situation for the graduates. The job opportunities for biomedical informaticians who graduated with a bachelor or master degree from UMIT seem to be quite good. The majority of graduates are working as biomedical informaticians. A larger number of comparable surveys of graduates from other biomedical informatics programs would help to enhance our knowledge about careers in biomedical informatics.
Gordon, Claire L; Weng, Chunhua
A common bottleneck during ontology evaluation is knowledge acquisition from domain experts for gold standard creation. This paper contributes a novel semi-automated method for evaluating the concept coverage and accuracy of biomedical ontologies by complementing expert knowledge with knowledge automatically extracted from clinical practice guidelines and electronic health records, which minimizes reliance on expensive domain expertise for gold standards generation. We developed a bacterial clinical infectious diseases ontology (BCIDO) to assist clinical infectious disease treatment decision support. Using a semi-automated method we integrated diverse knowledge sources, including publicly available infectious disease guidelines from international repositories, electronic health records, and expert-generated infectious disease case scenarios, to generate a compendium of infectious disease knowledge and use it to evaluate the accuracy and coverage of BCIDO. BCIDO has three classes (i.e., infectious disease, antibiotic, bacteria) containing 593 distinct concepts and 2345 distinct concept relationships. Our semi-automated method generated an ID knowledge compendium consisting of 637 concepts and 1554 concept relationships. Overall, BCIDO covered 79% (504/637) of the concepts and 89% (1378/1554) of the concept relationships in the ID compendium. BCIDO coverage of ID compendium concepts was 92% (121/131) for antibiotic, 80% (205/257) for infectious disease, and 72% (178/249) for bacteria. The low coverage of bacterial concepts in BCIDO was due to a difference in concept granularity between BCIDO and infectious disease guidelines. Guidelines and expert-generated scenarios were the richest source of ID concepts and relationships, while patient records provided relatively fewer concepts and relationships. Our semi-automated method was cost-effective for generating a useful knowledge compendium with minimal reliance on domain experts. This method can be useful for continued
Purawat, Shweta; Cowart, Charles; Amaro, Rommie E; Altintas, Ilkay
The BBDTC (https://biobigdata.ucsd.edu) is a community-oriented platform to encourage high-quality knowledge dissemination with the aim of growing a well-informed biomedical big data community through collaborative efforts on training and education. The BBDTC is an e-learning platform that empowers the biomedical community to develop, launch and share open training materials. It deploys hands-on software training toolboxes through virtualization technologies such as Amazon EC2 and Virtualbox. The BBDTC facilitates migration of courses across other course management platforms. The framework encourages knowledge sharing and content personalization through the playlist functionality that enables unique learning experiences and accelerates information dissemination to a wider community.
Shin, Soo-Yong; Kim, Woo Sung; Lee, Jae-Ho
Due to the unique characteristics of clinical data, clinical data warehouses (CDWs) have not been very successful so far; in particular, the use of CDWs for biomedical research has been relatively unsuccessful. The characteristics necessary for the successful implementation and operation of a CDW for biomedical research have not yet been clearly defined. Three examples of CDWs were reviewed: a multipurpose CDW in a hospital, a CDW for independent multi-institutional research, and a CDW for research use in an institution. After reviewing the three CDW examples, we propose some key characteristics needed in a CDW for biomedical research. A CDW for research should include an honest broker system and an Institutional Review Board approval interface to comply with governmental regulations. It should also include a simple query interface, an anonymized data review tool, and a data extraction tool. It should also serve as a biomedical research platform for data repository use as well as data analysis. The proposed characteristics may have limited transfer value to organizations in other countries. However, these analysis results are still valid in Korea, and we have developed a clinical research data warehouse based on these desiderata.
Voigt, Herbert F
The future challenges to medical and biological engineering, sometimes referred to as biomedical engineering or simply bioengineering, are many. Some of these are identifiable now and others will emerge from time to time as new technologies are introduced and harnessed. There is a fundamental issue regarding "Branding the bio/biomedical engineering degree" that requires a common understanding of what is meant by a B.S. degree in Biomedical Engineering, Bioengineering, or Biological Engineering. In this paper we address some of the issues involved in branding the Bio/Biomedical Engineering degree, with the aim of clarifying the Bio/Biomedical Engineering brand.
Walker, Alexander Muir
Information that is not made explicit is nonetheless embedded in most of our standard procedures. In its simplest form, embedded information may take the form of prior knowledge held by the researcher and presumed to be agreed to by consumers of the research product. More interesting are the settings in which the prior information is held unconsciously by both researcher and reader, or when the very form of an "effective procedure" incorporates its creator's (unspoken) understanding of a problem. While it may not be productive to exhaustively detail the embedded or tacit knowledge that manifests itself in creative scientific work, at least at the beginning we may want to routinize methods for extracting and documenting the ways of thinking that make "experts" expert. We should not back away from both expecting and respecting the tacit knowledge that pervades our work and the work of others.
Kim, Jung-Jae; Rebholz-Schuhmann, Dietrich
The extraction of complex events from biomedical text is a challenging task and requires in-depth semantic analysis. Previous approaches associate lexical and syntactic resources with ontologies for the semantic analysis, but fall short in testing the benefits from the use of domain knowledge. We developed a system that deduces implicit events from explicitly expressed events by using inference rules that encode domain knowledge. We evaluated the system with the inference module on three tasks: First, when tested against a corpus with manually annotated events, the inference module of our system contributes 53.2% of correct extractions, but does not cause any incorrect results. Second, the system overall reproduces 33.1% of the transcription regulatory events contained in RegulonDB (up to 85.0% precision) and the inference module is required for 93.8% of the reproduced events. Third, we applied the system with minimum adaptations to the identification of cell activity regulation events, confirming that the inference improves the performance of the system also on this task. Our research shows that the inference based on domain knowledge plays a significant role in extracting complex events from text. This approach has great potential in recognizing the complex concepts of such biomedical ontologies as Gene Ontology in the literature.
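The inference step described in this abstract can be illustrated with a minimal sketch: explicit (subject, predicate, object) events plus rules that map an explicit predicate to an implied one. The event triples and rules below are invented for illustration and do not reproduce the authors' actual rule set:

```python
# Each event is a (subject, predicate, object) triple extracted from text.
explicit = [
    ("GerE", "activates_transcription_of", "sigK"),
    ("SigK", "binds_promoter_of", "cotD"),
]

# Inference rules encoding domain knowledge (illustrative only):
# each maps an explicitly expressed predicate to an implied one.
RULES = {
    "activates_transcription_of": "regulates_expression_of",
    "binds_promoter_of": "regulates_transcription_of",
}

def infer(events, rules):
    """Deduce implicit events from explicit ones by applying the rules,
    returning only the events that were not already stated explicitly."""
    implied = {(s, rules[p], o) for (s, p, o) in events if p in rules}
    return implied - set(events)

for ev in sorted(infer(explicit, RULES)):
    print(ev)
```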
Sun, H H
A description is given of the Drexel-Presbyterian Hospital Program, established in 1959 to provide the first MS program in biomedical engineering. The goal was to provide a program where life scientists could obtain a rigorous knowledge of physical sciences and engineers could similarly obtain a rigorous knowledge of medical science. Significant milestones in the history of the program are discussed.
Taguchi, Tetsushi; Aoyagi, Takao
The rising costs and aging of the population due to a low birth rate negatively affect the healthcare system in Japan. In 2011, the Council for Science and Technology Policy released Japan's 4th Science and Technology Basic Policy Report, covering 2011 to 2015. This report includes two major innovations, 'Life Innovation' and 'Green Innovation', to promote economic growth. Biomedical engineering research is part of 'Life Innovation' and its outcomes are required to maintain people's mental and physical health. It has already resulted in numerous biomedical products, and new ones should be developed using nanotechnology-based concepts. The combination of accumulated knowledge and experience with 'nanoarchitectonics' will result in novel, well-designed functional biomaterials. This focus issue contains three reviews and 19 original papers on various biomedical topics, including biomaterials, drug-delivery systems, tissue engineering and diagnostics. We hope that it demonstrates the importance of collaboration among scientists, engineers and clinicians, and will contribute to the further development of biomedical engineering.
Skrabalak, Sara E.; Chen, Jingyi; Au, Leslie; Lu, Xianmao; Li, Xingde; Xia, Younan
Nanostructured materials provide a promising platform for early cancer detection and treatment. Here we highlight recent advances in the synthesis and use of Au nanocages for such biomedical applications. Gold nanocages represent a novel class of nanostructures, which can be prepared via a remarkably simple route based on the galvanic replacement reaction between Ag nanocubes and HAuCl4. The Au nanocages have a tunable surface plasmon resonance peak that extends into the near-infrared, where ...
This volume introduces readers to the basic concepts and recent advances in the field of biomedical devices. The text gives a detailed account of novel developments in drug delivery, protein electrophoresis, estrogen mimicking methods and medical devices. It also provides the necessary theoretical background as well as describing a wide range of practical applications. The level and style make this book accessible not only to scientific and medical researchers but also to graduate students.
Mahendra R.R Raj
Waste disposal management is an essential and integral part of any health care system. Health care providers have often been unaware of the basic aspects of the importance and effective management of hospital waste. This overview of biomedical waste disposal/management gives a thorough insight into the guidelines to be followed and adopted according to the international WHO-approved methodology for a cleaner, disease-free, and health...
Cerutti, Sergio; Baselli, Giuseppe; Bianchi, Anna; Caiani, Enrico; Contini, Davide; Cubeddu, Rinaldo; Dercole, Fabio; Rienzo, Luca; Liberati, Diego; Mainardi, Luca; Ravazzani, Paolo; Rinaldi, Sergio; Signorini, Maria; Torricelli, Alessandro
Generally, physiological modeling and biomedical signal processing constitute two important paradigms of biomedical engineering (BME): their fundamental concepts are taught starting from undergraduate studies and are dealt with more completely in the last years of graduate curricula, as well as in Ph.D. courses. Traditionally, these two cultural aspects were separated, with the first more oriented to physiological issues and how to model them, and the second more dedicated to the development of processing tools or algorithms to enhance useful information from clinical data. A practical consequence was that those who did models did not do signal processing, and vice versa. However, in recent years, the need for closer integration between signal processing and modeling of the relevant biological systems has emerged very clearly. This is true not only for training purposes (i.e., to properly prepare the new professional members of BME) but also for the development of newly conceived research projects in which the integration between biomedical signal and image processing (BSIP) and modeling plays a crucial role. To give simple examples, topics such as brain–computer interfaces, neuroengineering, nonlinear dynamical analysis of the cardiovascular (CV) system, integration of sensory-motor characteristics aimed at the building of advanced prostheses and rehabilitation tools, and wearable devices for vital sign monitoring do require an intelligent fusion of modeling and signal processing competences that are certainly peculiar to our discipline of BME.
Colson, Yolonda L.; Grinstaff, Mark W.
Superhydrophobic surfaces are actively studied across a wide range of applications and industries, and are now finding increased use in the biomedical arena as substrates to control protein adsorption, cellular interaction, and bacterial growth, as well as platforms for drug delivery devices and for diagnostic tools. The commonality in the design of these materials is to create a stable or metastable air state at the material surface, which lends itself to a number of unique properties. These activities are catalyzing the development of new materials, applications, and fabrication techniques, as well as collaborations across material science, chemistry, engineering, and medicine given the interdisciplinary nature of this work. The review begins with a discussion of superhydrophobicity, and then explores biomedical applications that are utilizing superhydrophobicity in depth including material selection characteristics, in vitro performance, and in vivo performance. General trends are offered for each application in addition to discussion of conflicting data in the literature, and the review concludes with the authors’ future perspectives on the utility of superhydrophobic surfaces for biomedical applications. PMID:27449946
Turcheniuk, K.; Mochalin, Vadym N.
The interest in nanodiamond applications in biology and medicine is on the rise over recent years. This is due to the unique combination of properties that nanodiamond provides. Small size (∼5 nm), low cost, scalable production, negligible toxicity, chemical inertness of diamond core and rich chemistry of nanodiamond surface, as well as bright and robust fluorescence resistant to photobleaching are the distinct parameters that render nanodiamond superior to any other nanomaterial when it comes to biomedical applications. The most exciting recent results have been related to the use of nanodiamonds for drug delivery and diagnostics—two components of a quickly growing area of biomedical research dubbed theranostics. However, nanodiamond offers much more in addition: it can be used to produce biodegradable bone surgery devices, tissue engineering scaffolds, kill drug resistant microbes, help us to fight viruses, and deliver genetic material into cell nucleus. All these exciting opportunities require an in-depth understanding of nanodiamond. This review covers the recent progress as well as general trends in biomedical applications of nanodiamond, and underlines the importance of purification, characterization, and rational modification of this nanomaterial when designing nanodiamond based theranostic platforms.
Phillips, M; Kalet, I; McNutt, T; Smith, W
Biomedical informatics encompasses a very large domain of knowledge and applications. This broad and loosely defined field can make it difficult to navigate. Physicists often are called upon to provide informatics services and/or to take part in projects involving principles of the field. The purpose of the presentations in this symposium is to help medical physicists gain some knowledge about the breadth of the field and how, in the current clinical and research environment, they can participate and contribute. Three talks have been designed to give an overview from the perspective of physicists and to provide a more in-depth discussion in two areas. One of the primary purposes, and the main subject of the first talk, is to help physicists achieve a perspective about the range of the topics and concepts that fall under the heading of 'informatics'. The approach is to de-mystify topics and jargon and to help physicists find resources in the field should they need them. The other talks explore two areas of biomedical informatics in more depth. The goal is to cover two domains of intense current interest, databases and models, in enough depth that an adequate background for independent inquiry is achieved. These two areas will serve as good examples of how physicists, using informatics principles, can contribute to oncology practice and research. Learning Objectives: To understand how the principles of biomedical informatics are used by medical physicists. To put the relevant informatics concepts in perspective with regard to biomedicine in general. To use clinical database design as an example of biomedical informatics. To provide a solid background into the problems and issues of the design and use of data and databases in radiation oncology. To use modeling in the service of decision support systems as an example of modeling methods and data use. To provide a background into how uncertainty in our data and knowledge can be
Gibbs, Kenneth D; McGready, John; Griffin, Kimberly
Recent biomedical workforce policy efforts have centered on enhancing career preparation for trainees, and increasing diversity in the research workforce. Postdoctoral scientists, or postdocs, are among those most directly impacted by such initiatives, yet their career development remains understudied. This study reports results from a 2012 national survey of 1002 American biomedical postdocs. On average, postdocs reported increased knowledge about career options but lower clarity about their career goals relative to PhD entry. The majority of postdocs were offered structured career development at their postdoctoral institutions, but less than one-third received this from their graduate departments. Postdocs from all social backgrounds reported significant declines in interest in faculty careers at research-intensive universities and increased interest in nonresearch careers; however, there were differences in the magnitude and period of training during which these changes occurred across gender and race/ethnicity. Group differences in interest in faculty careers were explained by career interest differences formed during graduate school but not by differences in research productivity, research self-efficacy, or advisor relationships. These findings point to the need for enhanced career development earlier in the training process, and interventions sensitive to distinctive patterns of interest development across social identity groups. © 2015 K. D. Gibbs et al. CBE—Life Sciences Education © 2015 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
Leslie D. McIntosh
Background: The reproducibility of research is essential to rigorous science, yet significant concerns about the reliability and verifiability of biomedical research have recently been highlighted. Ongoing efforts across several domains of science and policy are working to clarify the fundamental characteristics of reproducibility and to enhance the transparency and accessibility of research. Methods: The aim of the present work is to develop an assessment tool operationalizing key concepts of research transparency in the biomedical domain, specifically for secondary biomedical data research using electronic health record data. The tool (RepeAT) was developed through a multi-phase process that involved coding and extracting recommendations and practices for improving reproducibility from publications and reports across the biomedical and statistical sciences, field testing the instrument, and refining variables. Results: RepeAT includes 119 unique variables grouped into five categories (research design and aim, database and data collection methods, data mining and data cleaning, data analysis, data sharing and documentation). Preliminary results in manually processing 40 scientific manuscripts indicate components of the proposed framework with strong inter-rater reliability, as well as directions for further research and refinement of RepeAT. Conclusions: The use of RepeAT may allow the biomedical community to better understand the current practices of research transparency and accessibility among principal investigators. Common adoption of RepeAT may improve reporting of research practices and the availability of research outputs. Additionally, use of RepeAT will facilitate comparisons of research transparency and accessibility across domains and institutions.
Sanfilippo, Antonio P.; Posse, Christian; Gopalan, Banu; Tratz, Stephen C.; Gregory, Michelle L.
With the rising influence of the Gene Ontology, new approaches have emerged where the similarity between genes or gene products is obtained by comparing Gene Ontology code annotations associated with them. So far, these approaches have solely relied on the knowledge encoded in the Gene Ontology and the gene annotations associated with the Gene Ontology database. The goal of this paper is to demonstrate that improvements to these approaches can be obtained by integrating textual evidence extracted from relevant biomedical literature.
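One simple way to realize the proposal, blending ontology-based similarity over GO code annotations with a text-derived score, is sketched below. The GO codes, the text-similarity score, and the weighting parameter alpha are all assumptions for illustration, not values from the paper:

```python
def jaccard(a, b):
    """Set-overlap similarity between two annotation lists."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# GO code annotations for two gene products (toy codes, not real annotations).
go = {"geneA": ["GO:0006915", "GO:0008219"],
      "geneB": ["GO:0006915", "GO:0016049"]}

# Similarity from literature co-mention: a hypothetical score in [0, 1]
# standing in for evidence mined from relevant biomedical text.
text_sim = {("geneA", "geneB"): 0.8}

def combined_similarity(g1, g2, alpha=0.5):
    """Blend ontology-based and text-based similarity; alpha is an assumed weight."""
    onto = jaccard(go[g1], go[g2])
    text = text_sim.get((g1, g2), text_sim.get((g2, g1), 0.0))
    return alpha * onto + (1 - alpha) * text

print(round(combined_similarity("geneA", "geneB"), 3))  # 0.5 * 1/3 + 0.5 * 0.8
```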
Ramm, Hans Henrik
Technology and knowledge make up the knowledge capital that has been so essential to the oil and gas industry's value creation, competitiveness and internationalization. Report prepared for the Norwegian Oil Industry Association (OLF) and The Norwegian Society of Chartered Technical and Scientific Professionals (Tekna), on the Norwegian petroleum cluster as an environment for creating knowledge capital from human capital, how fiscal and other framework conditions may influence the building of knowledge capital, the long-term perspectives for the petroleum cluster, what Norwegian society can learn from the experiences in the petroleum cluster, and the importance of gaining more knowledge about the functionality of knowledge for increased value creation.
Yu, Xiang-Tian; Wang, Lu; Zeng, Tao
Generally, machine learning comprises many in silico methods that transform the principles underlying natural phenomena into human-understandable information, with the aims of saving human labor, assisting human judgement, and creating human knowledge. It should have wide application potential in biological and biomedical studies, especially in the era of big biological data. To survey the application of machine learning alongside biological development, this review provides a wide range of cases illustrating the selection of machine learning methods in the different practical scenarios involved in the whole biological and biomedical study cycle, and further discusses machine learning strategies for analyzing omics data in some cutting-edge biological studies. Finally, notes on new challenges for machine learning with small-sample, high-dimensional data are summarized from the key points of sample imbalance, white-box modelling, and causality.
Peng, Yifan; Wei, Chih-Hsuan; Lu, Zhiyong
Due to the importance of identifying relations between chemicals and diseases for new drug discovery and improving chemical safety, there has been a growing interest in developing automatic relation extraction systems for capturing these relations from the rich and rapid-growing biomedical literature. In this work we aim to build on current advances in named entity recognition and a recent BioCreative effort to further improve the state of the art in biomedical relation extraction, in particular for the chemical-induced disease (CID) relations. We propose a rich-feature approach with Support Vector Machine to aid in the extraction of CIDs from PubMed articles. Our feature vector includes novel statistical features, linguistic knowledge, and domain resources. We also incorporate the output of a rule-based system as features, thus combining the advantages of rule- and machine learning-based systems. Furthermore, we augment our approach with automatically generated labeled text from an existing knowledge base to improve performance without additional cost for corpus construction. To evaluate our system, we perform experiments on the human-annotated BioCreative V benchmarking dataset and compare with previous results. When trained using only BioCreative V training and development sets, our system achieves an F-score of 57.51 %, which already compares favorably to previous methods. Our system performance was further improved to 61.01 % in F-score when augmented with additional automatically generated weakly labeled data. Our text-mining approach demonstrates state-of-the-art performance in disease-chemical relation extraction. More importantly, this work exemplifies the use of (freely available) curated document-level annotations in existing biomedical databases, which are largely overlooked in text-mining system development.
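A toy version of the rich-feature approach, with the output of a simple rule-based pattern included as one feature alongside co-occurrence and trigger-word features, might look like the following. A perceptron stands in for the paper's Support Vector Machine, and all sentences and feature definitions are invented for illustration:

```python
def features(sentence, chem, disease):
    """Toy feature vector for a chemical-disease candidate pair:
    [0] both entities co-occur in the sentence,
    [1] an induction trigger word appears,
    [2] a rule-based pattern fires ("<chem>-induced <disease>")."""
    s = sentence.lower()
    co = 1.0 if chem.lower() in s and disease.lower() in s else 0.0
    trig = 1.0 if any(t in s for t in ("induced", "caused")) else 0.0
    rule = 1.0 if co and f"{chem.lower()}-induced {disease.lower()}" in s else 0.0
    return [co, trig, rule]

def train_perceptron(X, y, epochs=20):
    """Simple linear classifier over the features (stand-in for an SVM)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(X, y):
            pred = 1 if b + sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            if pred != t:
                w = [wi + (t - pred) * xi for wi, xi in zip(w, x)]
                b += (t - pred)
    return w, b

train = [
    ("Cisplatin-induced nephrotoxicity was observed.", "cisplatin", "nephrotoxicity", 1),
    ("Aspirin was given; no rash developed.", "aspirin", "rash", 0),
]
X = [features(s, c, d) for s, c, d, _ in train]
y = [t for *_, t in train]
w, b = train_perceptron(X, y)
score = b + sum(wi * xi for wi, xi in zip(w, features(
    "Bleomycin-induced pneumonitis is well documented.", "bleomycin", "pneumonitis")))
print(score > 0)  # the CID pattern fires, so the pair is classified positive
```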
Dewhurst, D J
An Introduction to Biomedical Instrumentation presents a course of study and applications covering the basic principles of medical and biological instrumentation, as well as the typical features of its design and construction. The book aims to aid not only the cognitive domain of the readers but also their psychomotor domain. Aside from the seminar topics provided, which are divided into 27 chapters, the book complements these topics with practical applications of the discussions. Figures and mathematical formulas are also given. Major topics discussed include the construction, handli
Roberts, M.L.; Velsko, C.; Turteltaub, K.W.
We are developing ³H-AMS to measure ³H activity of mg-sized biological samples. LLNL has already successfully applied ¹⁴C AMS to a variety of problems in the area of biomedical research. Development of ³H AMS would greatly complement these studies. The ability to perform ³H AMS measurements at sensitivities equivalent to those obtained for ¹⁴C will allow us to perform experiments using compounds that are not readily available in ¹⁴C-tagged form. A ³H capability would also allow us to perform unique double-labeling experiments in which we learn the fate, distribution, and metabolism of separate fractions of biological compounds.
Theoni K. Georgiou
Thermoresponsive polymers are a class of “smart” materials that have the ability to respond to a change in temperature, a property that makes them useful in a wide range of applications and consequently attracts much scientific interest. This review focuses mainly on studies published over the last 10 years on the synthesis and use of thermoresponsive polymers for biomedical applications, including drug delivery, tissue engineering and gene delivery. A summary of the main applications is given following the different studies on thermoresponsive polymers, which are categorized based on their 3-dimensional structure: hydrogels, interpenetrating networks, micelles, crosslinked micelles, polymersomes, films and particles.
Street, Laurence J
IntroductionHistory of Medical DevicesThe Role of Biomedical Engineering Technologists in Health CareCharacteristics of Human Anatomy and Physiology That Relate to Medical DevicesSummaryQuestionsDiagnostic Devices: Part OnePhysiological Monitoring SystemsThe HeartSummaryQuestionsDiagnostic Devices: Part TwoCirculatory System and BloodRespiratory SystemNervous SystemSummaryQuestionsDiagnostic Devices: Part ThreeDigestive SystemSensory OrgansReproductionSkin, Bone, Muscle, MiscellaneousChapter SummaryQuestionsDiagnostic ImagingIntroductionX-RaysMagnetic Resonance Imaging ScannersPositron Emissio
Mahendra R.R Raj
Waste disposal management is an essential and integral part of any health care system. Health care providers have often been unaware of the basic aspects of the importance and effective management of hospital waste. This overview of biomedical waste disposal and management gives a thorough insight into the guidelines to be followed and adopted according to the internationally WHO-approved methodology for cleaner, disease-free, and healthier medical services to the populace, i.e., to hospital employees, patients, and society.
Ciaccio Edward J
This article is a review of the book 'Biomedical Image Processing' by Thomas M. Deserno, which is published by Springer-Verlag. Salient information that will be useful in deciding whether the book is relevant to topics of interest to the reader, and whether it might be suitable as a course textbook, is presented in the review. This includes information about the book's details, a summary, the suitability of the text in course and research work, the framework of the book, its specific content, and conclusions.
Say, Jana M; van Vreden, Caryn; Reilly, David J; Brown, Louise J; Rabeau, James R; King, Nicholas J C
In recent years, nanodiamonds have emerged from primarily an industrial and mechanical applications base to potentially underpinning sophisticated new technologies in biomedical and quantum science. Nanodiamonds are relatively inexpensive, biocompatible, easy to surface functionalise and optically stable. This combination of physical properties is ideally suited to biological applications, including intracellular labelling and tracking, extracellular drug delivery and adsorptive detection of bioactive molecules. Here we describe some of the methods and challenges for processing nanodiamond materials, detection schemes and some of the leading applications currently under investigation.
Müller, H-M; Van Auken, K M; Li, Y; Sternberg, P W
The biomedical literature continues to grow at a rapid pace, making the challenge of knowledge retrieval and extraction ever greater. Tools that provide a means to search and mine the full text of literature thus represent an important way by which the efficiency of these processes can be improved. We describe the next generation of the Textpresso information retrieval system, Textpresso Central (TPC). TPC builds on the strengths of the original system by expanding the full text corpus to include the PubMed Central Open Access Subset (PMC OA), as well as the WormBase C. elegans bibliography. In addition, TPC allows users to create a customized corpus by uploading and processing documents of their choosing. TPC is UIMA compliant, to facilitate compatibility with external processing modules, and takes advantage of Lucene indexing and search technology for efficient handling of millions of full text documents. Like Textpresso, TPC searches can be performed using keywords and/or categories (semantically related groups of terms), but to provide better context for interpreting and validating queries, search results may now be viewed as highlighted passages in the context of full text. To facilitate biocuration efforts, TPC also allows users to select text spans from the full text and annotate them, create customized curation forms for any data type, and send resulting annotations to external curation databases. As an example of such a curation form, we describe integration of TPC with the Noctua curation tool developed by the Gene Ontology (GO) Consortium. Textpresso Central is an online literature search and curation platform that enables biocurators and biomedical researchers to search and mine the full text of literature by integrating keyword and category searches with viewing search results in the context of the full text. It also allows users to create customized curation interfaces, use those interfaces to make annotations linked to supporting evidence statements
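As a rough illustration of the combined keyword-and-category search that TPC supports, the sketch below builds a toy inverted index over a tiny invented corpus; this is a generic sketch of the query semantics, not Textpresso's Lucene-backed implementation, and all document ids and terms are invented.

```python
def build_index(docs):
    """Map each lowercased token to the set of document ids containing it."""
    index = {}
    for doc_id, text in docs.items():
        for token in text.lower().split():
            index.setdefault(token, set()).add(doc_id)
    return index

def search(index, keywords=(), category_terms=()):
    """Return ids of docs matching every keyword AND, if a category is given,
    at least one of the category's semantically related terms."""
    hits = None
    for kw in keywords:
        hits = index.get(kw, set()) if hits is None else hits & index.get(kw, set())
    if category_terms:
        cat_hits = set().union(*(index.get(t, set()) for t in category_terms))
        hits = cat_hits if hits is None else hits & cat_hits
    return sorted(hits) if hits else []

# Hypothetical mini-corpus; a category groups semantically related terms.
docs = {1: "gene expression in elegans", 2: "protein binding assay", 3: "gene knockout in mouse"}
idx = build_index(docs)
print(search(idx, keywords=["gene"], category_terms=["expression", "knockout"]))
```

Keywords are ANDed while a category acts as an OR over its member terms, mirroring the keyword/category combination described above.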
stop at the surface extraction of events, as is the case for many existing bio-event extraction tasks. With a general deep language...We describe below several extensions to the general TRIPS system to better handle the text characteristics of the biomedical literature. 3.1. Genre...lexical entries with appropriate syntactic templates and semantic restrictions to distinguish the everyday and biological senses of the words. 3.4
Abdul Khader Jilani Saudagar
The overall goal of the research is to improve the quality of biomedical images for telemedicine, with a minimum percentage of noise in the retrieved image, and to take less computation time. The novelty of this technique lies in the implementation of spectral coding for biomedical images using neural networks in order to accomplish the above objectives. This work is in continuity with an ongoing research project aimed at developing a system for an efficient image compression approach for telemedicine in Saudi Arabia. We compare the efficiency of this technique against existing image compression techniques, namely JPEG2000, in terms of compression ratio, peak signal to noise ratio (PSNR), and computation time. To our knowledge, this research is the first to provide a comparative study with other techniques used in the compression of biomedical images. This work explores and tests biomedical images such as X-rays, computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET).
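The PSNR metric used in the comparison above is a standard distortion measure; a minimal generic sketch (not the paper's implementation), treating 8-bit images as flat pixel lists:

```python
import math

def psnr(original, reconstructed, max_value=255.0):
    """Peak signal-to-noise ratio in dB between an image and its lossy
    reconstruction; higher values mean less distortion."""
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_value ** 2 / mse)

# Toy 8-bit "scan" and a reconstruction that is off by 5 grey levels everywhere.
img = [128] * 4096
rec = [133] * 4096
print(round(psnr(img, rec), 2))  # 34.15 dB
```

A higher compression ratio usually lowers PSNR, which is why the abstract reports both together with computation time.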
National Research Council Staff; Commission on Physical Sciences, Mathematics, and Applications; Division on Engineering and Physical Sciences; National Research Council; National Academy of Sciences
.... Incorporating input from dozens of biomedical researchers who described what they perceived as key open problems of imaging that are amenable to attack by mathematical scientists and physicists...
Liu, Feng; Goodarzi, Ali; Wang, Haifeng; Stasiak, Joanna; Sun, Jianbo; Zhou, Yu
The 2nd International Conference on Biomedical Engineering and Biotechnology (iCBEB 2013), held in Wuhan on 11–13 October 2013, is an annual conference that aims at providing an opportunity for international and national researchers and practitioners to present the most recent advances and future challenges in the fields of Biomedical Information, Biomedical Engineering and Biotechnology. The papers published in this issue are selected from this conference; they reflect the frontier of the field of Biomedical Engineering and Biotechnology and have particularly helped to improve the level of clinical diagnosis in medical work.
Disease-associated gene discovery is a critical step toward realizing the future of personalized medicine. However, empirical and clinical validation of disease-associated genes is time consuming and expensive. In silico discovery of disease-associated genes from literature is therefore becoming the first essential step for biomarker discovery to…
Taha, Kamal; Yoo, Paul D
We propose a classifier system called PFPBT that predicts the functions of un-annotated proteins. PFPBT assigns an un-annotated protein p the functional category of annotated proteins that are semantically similar to p. Each protein p is represented by a vector of weights. Each weight reflects the significance of a molecule m in the biomedical abstracts associated with p; that is, each weight quantifies the likelihood of the association between m and p. This is because all proteins bind to other molecules, which are highly predictive of the functions of the proteins. Let S be the set of proteins that are semantically similar to an un-annotated protein p. p is annotated with the functional category f if its occurrence probability in abstracts associated with S whose functional category is f is statistically significantly different from its occurrence probability in abstracts associated with S that belong to all other functional categories. PFPBT automatically extracts each co-occurrence of a protein-molecule pair that represents a semantic relationship between the pair. We present novel semantic rules based on the syntactic structures of sentences for identifying the semantic relationships between each co-occurrence of a protein-molecule pair in a sentence. We evaluated PFPBT by comparing it experimentally with two systems; results showed that it improved on both.
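The abstract does not name the significance test behind its annotation rule; as one plausible stand-in, a two-proportion z-test comparing a molecule's occurrence rate in category-f abstracts against all other categories might look like the sketch below (all counts invented for illustration):

```python
import math

def two_proportion_z(k1, n1, k2, n2):
    """z statistic for the difference between occurrence rates k1/n1 and k2/n2,
    using the pooled rate for the standard error."""
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)                         # pooled occurrence rate
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))   # pooled standard error
    return (p1 - p2) / se

# Molecule m appears in 40 of 100 abstracts of category f,
# but in only 10 of 200 abstracts of all other categories.
z = two_proportion_z(40, 100, 10, 200)
print(z > 1.96)  # True -> difference is significant at the 5% level, annotate with f
```

Any comparable test of proportions (e.g. a chi-square test on the 2x2 counts) would serve the same role in the decision rule.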
Hacısalihzade, Selim S
Biomedical Applications of Control Engineering is a lucidly written textbook for graduate control engineering and biomedical engineering students, as well as for medical practitioners who want to get acquainted with quantitative methods. It is based on decades of experience both in control engineering and in clinical practice. The book begins by reviewing basic concepts of system theory and the modeling process. It then goes on to discuss control engineering application areas: different models for the human operator; dosage and timing optimization in oral drug administration; measuring symptoms of, and optimal dopaminergic therapy in, Parkinson's disease; measurement and control of blood glucose levels, both naturally and by means of external controllers, in diabetes; and control of depth of anaesthesia using inhalational anaesthetic agents like sevoflurane with both fuzzy and state feedback controllers....
Loeffler, Torsten; Siebert, Karsten; Czasch, Stephanie; Bauer, Tobias; Roskos, Hartmut G
'Visualization' in imaging is the process of extracting useful information from raw data in such a way that meaningful physical contrasts are developed. 'Classification' is the subsequent process of defining parameter ranges which allow us to identify elements of images such as different tissues or different objects. In this paper, we explore techniques for visualization and classification in terahertz pulsed imaging (TPI) for biomedical applications. For archived (formalin-fixed, alcohol-dehydrated and paraffin-mounted) test samples, we investigate both time- and frequency-domain methods based on bright- and dark-field TPI. Successful tissue classification is demonstrated
Shakya, Kabita; O'Connell, Mary J.; Ruskin, Heather J.
Recent advances in molecular biology and computational power have seen the biomedical sector enter a new era, with corresponding development of Bioinformatics as a major discipline. Generation of enormous amounts of data has driven the need for more advanced storage solutions and shared access through a range of public repositories. The number of such biomedical resources is increasing constantly and mining these large and diverse data sets continues to present real challenges. This paper attempts a general overview of currently available resources, together with remarks on their data mining and analysis capabilities. Of interest here is the recent shift in focus from genetic to epigenetic/epigenomic research and the emergence and extension of resource provision to support this both at local and global scale. Biomedical text and numerical data mining are both considered, the first dealing with automated methods for analyzing research content and information extraction, and the second (broadly) with pattern recognition and prediction. Any summary and selection of resources is inherently limited, given the spectrum available, but the aim is to provide a guideline for the assessment and comparison of currently available provision, particularly as this relates to epigenetics/epigenomics. PMID:22874136
Cho, Hyejin; Choi, Wonjun; Lee, Hyunju
In biomedical articles, a named entity recognition (NER) technique that identifies entity names from texts is an important element for extracting biological knowledge from articles. After NER is applied to articles, the next step is to normalize the identified names into standard concepts (i.e., disease names are mapped to the National Library of Medicine's Medical Subject Headings disease terms). In biomedical articles, many entity normalization methods rely on domain-specific dictionaries for resolving synonyms and abbreviations. However, the dictionaries are not comprehensive except for some entities such as genes. In recent years, biomedical articles have accumulated rapidly, and neural network-based algorithms that incorporate a large amount of unlabeled data have shown considerable success in several natural language processing problems. In this study, we propose an approach for normalizing biological entities, such as disease names and plant names, by using word embeddings to represent semantic spaces. For diseases, training data from the National Center for Biotechnology Information (NCBI) disease corpus and unlabeled data from PubMed abstracts were used to construct word representations. For plants, a training corpus that we manually constructed and unlabeled PubMed abstracts were used to represent word vectors. We showed that the proposed approach performed better than the use of only the training corpus or only the unlabeled data and showed that the normalization accuracy was improved by using our model even when the dictionaries were not comprehensive. We obtained F-scores of 0.808 and 0.690 for normalizing the NCBI disease corpus and manually constructed plant corpus, respectively. We further evaluated our approach using a data set in the disease normalization task of the BioCreative V challenge. When only the disease corpus was used as a dictionary, our approach significantly outperformed the best system of the task. The proposed approach shows robust
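A minimal sketch of the embedding-based normalization idea described above: map a mention's vector to the standard concept whose vector is closest by cosine similarity. The 3-dimensional vectors and MeSH identifiers below are invented for illustration; the actual system uses word representations trained on PubMed text and curated corpora.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def normalize(mention_vec, concept_vectors):
    """Map a mention embedding to the standard concept with the closest vector."""
    return max(concept_vectors, key=lambda cid: cosine(mention_vec, concept_vectors[cid]))

# Hypothetical concept embeddings keyed by (invented) MeSH-style ids.
concepts = {
    "MeSH:D003924": [0.9, 0.1, 0.0],
    "MeSH:D006973": [0.0, 0.8, 0.6],
}
print(normalize([0.85, 0.2, 0.05], concepts))  # closest concept: MeSH:D003924
```

In this framing, synonyms and abbreviations missing from a dictionary can still normalize correctly as long as their embeddings land near the right concept vector.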
In institutions where biological research is conducted and some technologies make use of radioisotopes, radiation protection is a biosecurity issue for conceptual reasons. In the architectural design of biomedical laboratories, engineering and architecture share interfaces with other areas of knowledge and their specific concepts. Exploring the role of architectural design in personal and environmental protection in biological containment laboratories that handle unsealed sources in research, this work discusses the triad that composes the principle of containment in health environments: best practices, protective equipment, and physical facilities, with greater emphasis on the last component. Shortcomings of the design process are reflected in the construction, use, operation, and maintenance of these buildings, with direct consequences for occupational health and safety, the environment, and the credibility of work processes. In this context, the importance of adopting alternatives to improve the design process is confirmed, taking into account the early consideration of the several variables involved and providing subsidies to the related laboratories. The research, conducted at FIOCRUZ, a Brazilian health institution, developed from an analysis of the participants in the architectural project, aiming at the formulation of design guidelines that could contribute to the rationalisation of this kind of building construction.
Fouda, H G
The ideal laboratory robot can be viewed as "an indefatigable assistant capable of working continuously for 24 h a day with constant efficiency". The development of a system approaching that promise requires considerable skill and time commitment, a thorough understanding of the capabilities and limitations of the robot and its specialized modules and an intimate knowledge of the functions to be automated. The robot need not emulate every manual step. Effective substitutes for difficult steps must be devised. The future of laboratory robots depends not only on technological advances in other fields, but also on the skill and creativity of chromatographers and other scientists. The robot has been applied to automate numerous biomedical chromatography and electrophoresis methods. The quality of its data can approach, and in some cases exceed, that of manual methods. Maintaining high data quality during continuous operation requires frequent maintenance and validation. Well designed robotic systems can yield substantial increase in the laboratory productivity without a corresponding increase in manpower. They can free skilled personnel from mundane tasks and can enhance the safety of the laboratory environment. The integration of robotics, chromatography systems and laboratory information management systems permits full automation and affords opportunities for unattended method development and for future incorporation of artificial intelligence techniques and the evolution of expert systems. Finally, humanoid attributes aside, robotic utilization in the laboratory should not be an end in itself. The robot is a useful tool that should be utilized only when it is prudent and cost-effective to do so.
García-Vigil, José Luis
In human communication and personal relations, there is always the possibility of dissent and of conflict related to the perception or acceptance of the content of a message. To reach an agreement, it is important that communication between people be horizontal and bidirectional while the issue is being discussed, in order to bring together the interlocutors' expectations and interests. In the administration of services and goods, friendship and nepotism have been the most frequent sources of potential conflicts of interest. These behaviors arise when a person, such as a civil servant or employee, is influenced by personal considerations when doing his work and making decisions. A perceived conflict of interest can be as harmful to the reputation of, and confidence in, an organization as the real existence of a conflict of interest. In some countries, the law obliges organizations to have codes of ethics that cover these aspects. Thus, it is desirable to incorporate ethical principles and "moral competences" into the curricula of health professionals. In medicine and biomedical research, conflicts of interest arise when clinicians and researchers distort their results and work to obtain personal or financial benefits. In the generation and transmission of knowledge, the circumstances determine whether a conflict of interest exists, not the methodology, the results of the investigation, or the technology used in their diffusion.
Ahluwalia, Arti; Atwine, Daniel; De Maria, Carmelo; Ibingira, Charles; Kipkorir, Emmauel; Kiros, Fasil; Madete, June; Mazzei, Daniele; Molyneux, Elisabeth; Moonga, Kando; Moshi, Mainen; Nzomo, Martin; Oduol, Vitalice; Okuonzi, John
Despite the virtual revolution, the mainstream academic community in most countries remains largely ignorant of the potential of web-based teaching resources and of the expansion of open source software, hardware and rapid prototyping. In the context of Biomedical Engineering (BME), where human safety and wellbeing is paramount, a high level of supervision and quality control is required before open source concepts can be embraced by universities and integrated into the curriculum. In the meantime, students, more than their teachers, have become attuned to continuous streams of digital information, and teaching methods need to adapt rapidly by giving them the skills to filter meaningful information and by supporting collaboration and co-construction of knowledge using open, cloud and crowd based technology. In this paper we present our experience in bringing these concepts to university education in Africa, as a way of enabling rapid development and self-sufficiency in health care. We describe the three summer schools held in sub-Saharan Africa where both students and teachers embraced the philosophy of open BME education with enthusiasm, and discuss the advantages and disadvantages of opening education in this way in the developing and developed world.
animals for research, testing, or training in different countries. In the few that have done so, the measures adopted vary widely: on the one hand, legally enforceable detailed regulations with licensing of experimenters and their premises together with an official inspectorate; on the other, entirely voluntary self-regulation by the biomedical community, with lay participation. Many variations are possible between these extremes, one intermediate situation being a legal requirement that experiments or other procedures involving the use of animals should be subject to the approval of ethical committees of specified composition. The International Guiding Principles are the product of the collaboration of a representative sample of the international biomedical community, including experts of the World Health Organization, and of consultations with responsible animal welfare groups. The International Guiding Principles have already gained a considerable measure of acceptance internationally. The European Medical Research Councils (EMRC), an international association that includes all the West European medical research councils, fully endorsed the Guiding Principles in 1984. Here we present the basic bioethical principles for using animals in biomedical research: methods such as mathematical models, computer simulation and in vitro biological systems should be used wherever appropriate; animal experiments should be undertaken only after due consideration of their relevance for human or animal health and the advancement of biological knowledge; the animals selected for an experiment should be of an appropriate species and quality, and the minimum number required to obtain scientifically valid results; investigators and other personnel should never fail to treat animals as sentient, and should regard their proper care and use and the avoidance or minimization of discomfort, distress, or pain as ethical imperatives; procedures with animals that may cause more than momentary or minimal
Sinha, Saurabh; Song, Jun; Weinshilboum, Richard; Jongeneel, Victor; Han, Jiawei
We describe here the vision, motivations, and research plans of the National Institutes of Health Center for Excellence in Big Data Computing at the University of Illinois, Urbana-Champaign. The Center is organized around the construction of "Knowledge Engine for Genomics" (KnowEnG), an E-science framework for genomics where biomedical scientists will have access to powerful methods of data mining, network mining, and machine learning to extract knowledge out of genomics data. The scientist will come to KnowEnG with their own data sets in the form of spreadsheets and ask KnowEnG to analyze those data sets in the light of a massive knowledge base of community data sets called the "Knowledge Network" that will be at the heart of the system. The Center is undertaking discovery projects aimed at testing the utility of KnowEnG for transforming big data to knowledge. These projects span a broad range of biological enquiry, from pharmacogenomics (in collaboration with Mayo Clinic) to transcriptomics of human behavior. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: email@example.com.
Yang, Cheng; Liu, Zheng; Wang, Haobai; Shen, Jiaoqi
Design knowledge was reused in innovative design work to support designers with product design knowledge and to help designers who lack rich experience improve their design capacity and efficiency. First, based on an ontological model of product design knowledge constructed by taxonomy, implicit and explicit knowledge was extracted from some…
Archives of Medical and Biomedical Research is the official journal of the International Association of Medical and Biomedical Researchers (IAMBR) and the Society for Free Radical Research Africa (SFRR-Africa). It is an internationally peer reviewed, open access and multidisciplinary journal aimed at publishing original ...
Discusses the publication of biomedical journals on the Internet. Highlights include pros and cons of electronic publishing; the Global Health Network at the University of Pittsburgh; the availability of biomedical journals on the World Wide Web; current applications, including access to journal contents tables and electronic delivery of full-text…
van Alste, Jan A.
At the University of Twente, together with the Free University of Amsterdam, a new educational program on Biomedical Engineering will be developed. The five-year academic program will start in September 2001. After a general, broad education in Biomedical Engineering in the first three
Biomedical Engineering is the application of principles of physics, chemistry, and engineering to problems of human health. The National Laboratories of the U.S. Department of Energy have been leaders in this scientific field since 1947. This inventory of their biomedical engineering projects was compiled in January 1999.
AIMS AND SCOPE: The journal is conceived as an academic and professional journal covering all fields within the Biomedical Sciences including the allied health fields. Articles from the Physical Sciences and humanities related to the Medical Sciences will also be considered. The African Journal of Biomedical Research ...
Vanegas, Jorge A; Matos, Sérgio; González, Fabio; Oliveira, José L
This paper presents a review of state-of-the-art approaches to automatic extraction of biomolecular events from scientific texts. Events involving biomolecules such as genes, transcription factors, or enzymes, for example, have a central role in biological processes and functions and provide valuable information for describing physiological and pathogenesis mechanisms. Event extraction from biomedical literature has a broad range of applications, including support for information retrieval, knowledge summarization, and information extraction and discovery. However, automatic event extraction is a challenging task due to the ambiguity and diversity of natural language and higher-level linguistic phenomena, such as speculations and negations, which occur in biological texts and can lead to misunderstanding or incorrect interpretation. Many strategies have been proposed in the last decade, originating from different research areas such as natural language processing, machine learning, and statistics. This review summarizes the most representative approaches in biomolecular event extraction and presents an analysis of the current state of the art and of commonly used methods, features, and tools. Finally, current research trends and future perspectives are also discussed.
Majernik, Jaroslav; Pancerz, Krzysztof; Zaitseva, Elena
This book presents the latest results and selected applications of Computational Intelligence in Biomedical Technologies. Most of the contributions deal with problems of Biomedical and Medical Informatics, ranging from theoretical considerations to practical applications. Various aspects of development methods and algorithms in Biomedical and Medical Informatics, as well as algorithms for medical image processing and modeling methods, are discussed. Individual contributions also cover medical decision making support, estimation of the risks of treatments, reliability of medical systems, problems of practical clinical applications and many other topics. This book is intended for scientists interested in problems of Biomedical Technologies, for researchers and academic staff, for all dealing with Biomedical and Medical Informatics, as well as for PhD students. Useful information is also offered to IT companies, developers of equipment and/or software for medicine, and medical professionals.
Moradi, Milad; Ghadiri, Nasser
Automatic text summarization tools help users in the biomedical domain to acquire their intended information from various textual resources more efficiently. Some biomedical text summarization systems base their sentence selection approach on the frequency of concepts extracted from the input text. However, exploring measures other than the raw frequency for identifying valuable contents within an input document, or considering correlations existing between concepts, may be more useful for this type of summarization. In this paper, we describe a Bayesian summarization method for biomedical text documents. The Bayesian summarizer initially maps the input text to the Unified Medical Language System (UMLS) concepts; then it selects the important ones to be used as classification features. We introduce six different feature selection approaches to identify the most important concepts of the text and select the most informative contents according to the distribution of these concepts. We show that with the use of an appropriate feature selection approach, the Bayesian summarizer can improve the performance of biomedical summarization. Using the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) toolkit, we perform extensive evaluations on a corpus of scientific papers in the biomedical domain. The results show that when the Bayesian summarizer utilizes the feature selection methods that do not use the raw frequency, it can outperform the biomedical summarizers that rely on the frequency of concepts, domain-independent and baseline methods. Copyright © 2017 Elsevier B.V. All rights reserved.
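To illustrate the contrast drawn above between raw concept frequency and other measures, here is a generic sketch that scores sentences by an inverse-sentence-frequency weight over their concepts; this is not the paper's Bayesian model, and the concept names are invented for illustration.

```python
from collections import Counter
import math

def summarize(sent_concepts, k=2):
    """Pick the k sentences whose concept sets are most informative, weighting
    each concept by log(N / sentence frequency) instead of its raw frequency."""
    n = len(sent_concepts)
    df = Counter(c for s in sent_concepts for c in set(s))
    weight = {c: math.log(n / df[c]) for c in df}   # rarer concepts score higher
    ranked = sorted(range(n),
                    key=lambda i: -sum(weight[c] for c in set(sent_concepts[i])))
    return sorted(ranked[:k])   # selected sentence indices, in document order

# Each sentence is represented by its (invented) UMLS-style concepts.
sent_concepts = [
    ["Neoplasms", "Therapeutics"],
    ["Neoplasms"],
    ["Gene Expression", "Apoptosis", "Neoplasms"],
    ["Therapeutics"],
]
print(summarize(sent_concepts))  # [0, 2]
```

Under this weighting, a sentence full of ubiquitous concepts can rank below a sentence with a few rare ones, which is exactly the behavior a raw-frequency selector cannot produce.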
Sweileh Waleed M
Full Text Available Abstract Background Medical research productivity reflects the level of medical education and practice in a particular country. The objective of this study was to examine the quantity and quality of medical and biomedical research published from Palestine. Findings A comprehensive review of the literature indexed by Scopus was conducted. Data from January 1, 2002 to December 31, 2011 were searched for authors affiliated with Palestine or the Palestinian Authority. Results were refined to limit the search to medical and biomedical subjects. The quality of publication was assessed using Journal Citation Reports. The total number of publications was 2207. A total of 770 publications were in the medical and biomedical subject areas. The annual rate of publication was 0.077 articles per gross domestic product/capita. The 770 publications have an h-index of 32. One hundred and thirty-eight (18%) articles were published in 46 journals that were not indexed in the Web of Knowledge. Twenty-two (22/770; 2.9%) articles were published in journals with an IF > 10. Conclusions The quantity and quality of research originating from Palestinian institutions is promising given the scarce resources of Palestine. However, more effort is needed to bridge the gap in medical research productivity and to promote better health in Palestine.
Full Text Available Abstract Background Several data mining methods require data that are discrete, and other methods often perform better with discrete data. We introduce an efficient Bayesian discretization (EBD) method for optimal discretization of variables that runs efficiently on high-dimensional biomedical datasets. The EBD method consists of two components, namely, a Bayesian score to evaluate discretizations and a dynamic programming search procedure to efficiently search the space of possible discretizations. We compared the performance of EBD to Fayyad and Irani's (FI) discretization method, which is commonly used for discretization. Results On 24 biomedical datasets obtained from high-throughput transcriptomic and proteomic studies, the classification performances of the C4.5 classifier and the naïve Bayes classifier were statistically significantly better when the predictor variables were discretized using EBD over FI. EBD was statistically significantly more stable to the variability of the datasets than FI. However, EBD was less robust, though not statistically significantly so, than FI and produced slightly more complex discretizations than FI. Conclusions On a range of biomedical datasets, a Bayesian discretization method (EBD) yielded better classification performance and stability but was less robust than the widely used FI discretization method. The EBD discretization method is easy to implement, permits the incorporation of prior knowledge and belief, and is sufficiently fast for application to high-dimensional data.
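The two-component design described here, a score for candidate discretizations plus a dynamic-programming search, can be illustrated with a generic one-dimensional segmentation DP. The per-interval `purity` score below is a deliberately simple stand-in for EBD's Bayesian score; only the O(n²) search structure is the point of this sketch.

```python
from collections import Counter

def optimal_bins(values, labels, score):
    """Find the interval partition of the (sorted) values that
    maximizes the sum of per-interval scores, via dynamic programming.

    best[i] = best total score achievable for values[:i].
    """
    n = len(values)
    best = [0.0] + [float("-inf")] * n
    back = [0] * (n + 1)
    for i in range(1, n + 1):
        for j in range(i):                      # last interval is [j, i)
            s = best[j] + score(labels[j:i])
            if s > best[i]:
                best[i], back[i] = s, j
    # recover the interval boundaries by walking the backpointers
    cuts, i = [], n
    while i > 0:
        cuts.append((back[i], i))
        i = back[i]
    return best[n], cuts[::-1]

def purity(seg):
    """Stand-in interval score: majority-class count minus a fixed
    penalty of 1 per interval (NOT EBD's actual Bayesian score)."""
    return Counter(seg).most_common(1)[0][1] - 1
```

With class labels [0, 0, 1, 1], the DP places the single cut between the two classes, since splitting further only incurs penalty.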
Nanoscale structures and materials have been explored in many biological applications because of their novel and impressive physical and chemical properties. Such properties allow remarkable opportunities to study and interact with complex biological processes. This book analyses the state of the art of piezoelectric nanomaterials and introduces their applications in the biomedical field. Despite their impressive potentials, piezoelectric materials have not yet received significant attention for bio-applications. This book shows that the exploitation of piezoelectric nanoparticles in nanomedicine is possible and realistic, and their impressive physical properties can be useful for several applications, ranging from sensors and transducers for the detection of biomolecules to “sensible” substrates for tissue engineering or cell stimulation.
Tangney, John F.
The mission of ONR's Human and Bioengineered Systems Division is to direct, plan, foster, and encourage Science and Technology in cognitive science, computational neuroscience, bioscience and bio-mimetic technology, social/organizational science, training, human factors, and decision making as related to future Naval needs. This paper highlights current programs that contribute to future biomedical wellness needs in context of humanitarian assistance and disaster relief. ONR supports fundamental research and related technology demonstrations in several related areas, including biometrics and human activity recognition; cognitive sciences; computational neurosciences and bio-robotics; human factors, organizational design and decision research; social, cultural and behavioral modeling; and training, education and human performance. In context of a possible future with automated casualty evacuation, elements of current science and technology programs are illustrated.
Chmiel, Alan; Humphreys, Brad
A compact, ambulatory biometric data acquisition system has been developed for space and commercial terrestrial use. BioWATCH (Bio medical Wireless and Ambulatory Telemetry for Crew Health) acquires signals from biomedical sensors using acquisition modules attached to a common data and power bus. Several slots allow the user to configure the unit by inserting sensor-specific modules. The data are then sent real-time from the unit over any commercially implemented wireless network including 802.11b/g, WCDMA, 3G. This system has a distributed computing hierarchy and has a common data controller on each sensor module. This allows for the modularity of the device along with the tailored ability to control the cards using a relatively small master processor. The distributed nature of this system affords the modularity, size, and power consumption that betters the current state of the art in medical ambulatory data acquisition. A new company was created to market this technology.
Kulkarni, M; Gongadze, E; Perutkova, Š; A Iglič; Mazare, A; Schmuki, P; Kralj-Iglič, V; Milošev, I; Mozetič, M
Titanium and titanium alloys exhibit a unique combination of strength and biocompatibility, which enables their use in medical applications and accounts for their extensive use as implant materials in the last 50 years. Currently, a large amount of research is being carried out in order to determine the optimal surface topography for use in bioapplications, and thus the emphasis is on nanotechnology for biomedical applications. It was recently shown that titanium implants with rough surface topography and free energy increase osteoblast adhesion, maturation and subsequent bone formation. Furthermore, the adhesion of different cell lines to the surface of titanium implants is influenced by the surface characteristics of titanium; namely topography, charge distribution and chemistry. The present review article focuses on the specific nanotopography of titanium, i.e. titanium dioxide (TiO2) nanotubes, using a simple electrochemical anodisation method of the metallic substrate and other processes such as the hydrothermal or sol-gel template. One key advantage of using TiO2 nanotubes in cell interactions is based on the fact that TiO2 nanotube morphology is correlated with cell adhesion, spreading, growth and differentiation of mesenchymal stem cells, which were shown to be maximally induced on smaller diameter nanotubes (15 nm), but hindered on larger diameter (100 nm) tubes, leading to cell death and apoptosis. Research has supported the significance of nanotopography (TiO2 nanotube diameter) in cell adhesion and cell growth, and suggests that the mechanics of focal adhesion formation are similar among different cell types. As such, the present review will focus on perhaps the most spectacular and surprising one-dimensional structures and their unique biomedical applications for increased osseointegration, protein interaction and antibacterial properties. (topical review)
Full Text Available Applying machine learning techniques to on-line biomedical databases is a challenging task, as this data is collected from a large number of sources and is multi-dimensional. Retrieval of relevant documents from a large repository such as a gene-document collection also takes more processing time and yields an increased false-positive rate. Generally, the extraction of biomedical documents is based on the stream of prior observations of gene parameters taken at different time periods. Traditional web usage models such as Markov, Bayesian and clustering models are limited in their ability to analyze user navigation patterns and identify sessions in online biomedical databases. Moreover, most document ranking models for biomedical databases are sensitive to sparsity and outliers. In this paper, a novel user recommendation system was implemented to predict the top-ranked biomedical documents using the disease type, gene entities and user navigation patterns. In this recommendation system, dynamic session identification, dynamic user identification and document ranking techniques were used to extract the most relevant disease documents from the online PubMed repository. To verify the performance of the proposed model, the true positive rate and runtime of the model were compared with those of traditional static models such as Bayesian and fuzzy rank. Experimental results show that the performance of the proposed ranking model is better than that of the traditional models.
Wang, Beichen; Chen, Xiaodong; Mamitsuka, Hiroshi; Zhu, Shanfeng
With the rapid development of biomedical sciences, a great number of documents have been published to report new scientific findings and advance the process of knowledge discovery. By the end of 2013, the largest biomedical literature database, MEDLINE, had indexed over 23 million abstracts. It is thus not easy for scientific professionals to find experts on a certain topic in the biomedical domain. In contrast to the existing services that use some ad hoc approaches, we developed a novel solution to biomedical expert finding, BMExpert, based on the language model. For finding biomedical experts who are the most relevant to a specific topic query, BMExpert mines MEDLINE documents by considering three important factors: relevance of documents to the query topic, importance of documents, and associations between documents and experts. The performance of BMExpert was evaluated on a benchmark dataset, which was built by collecting the program committee members of ISMB in the past three years (2012-2014) on 14 different topics. Experimental results show that BMExpert outperformed three existing biomedical expert finding services: JANE, GoPubMed, and eTBLAST, with respect to both MAP (mean average precision) and P@50 (Precision). BMExpert is freely accessed at http://datamining-iip.fudan.edu.cn/service/BMExpert/.
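The language-model retrieval underlying this kind of expert finding can be sketched with a standard query-likelihood score using Dirichlet smoothing. BMExpert's actual model also weighs document importance and document-expert associations, which are omitted here; the tokens and the smoothing parameter value are illustrative.

```python
from collections import Counter
from math import log

def query_likelihood(query, doc, collection, mu=2000.0):
    """log P(query | doc) under a Dirichlet-smoothed unigram
    language model: p(t|d) = (tf(t,d) + mu * p(t|C)) / (|d| + mu)."""
    tf, cf = Counter(doc), Counter(collection)
    dl, cl = len(doc), len(collection)
    score = 0.0
    for t in query:
        p = (tf[t] + mu * cf[t] / cl) / (dl + mu)
        if p == 0.0:                 # term unseen anywhere in the corpus
            return float("-inf")
        score += log(p)
    return score
```

Documents containing the query terms score higher; an expert-finding layer would then aggregate these document scores over each candidate's publications.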
Lin, Kang-Ping; Kao, Tsair; Wang, Jia-Jung; Chen, Mei-Jung; Su, Fong-Chin
Biomedical engineers (BMEs) play an important role in the medical and healthcare community. Good educational programs are important to support healthcare systems, including hospitals, long-term care organizations, manufacturers of medical devices/instrumentation/systems, and sales/service companies for medical devices/instrumentation/systems. Over the past 30-plus years, the biomedical engineering community in Taiwan has accumulated thousands of people who hold a biomedical engineering degree and work as biomedical engineers. Most BME students are trained in biomedical engineering departments with at least one specialty in bioelectronics, bio-information, biomaterials or biomechanics. Students are required to complete internship training in related institutions off campus for 320 hours before graduating. Almost all biomedical engineering departments are certified by IEET (Institute of Engineering Education Taiwan) and meet the IEET requirements for mathematics and fundamental engineering courses. For BMEs after graduation, the Taiwanese Society of Biomedical Engineering (TSBME) provides many continuing-education programs and certificates for all members who wish to hold the certification as a professional credential in the workplace. Currently, many engineering departments in universities are being asked to provide joint programs with BME departments to train better-quality students. BME is one of the growing fields in Taiwan.
Dankar, Fida K; Ptitsyn, Andrey; Dankar, Samar K
Contemporary biomedical databases include a wide range of information types from various observational and instrumental sources. Among the most important features that unite biomedical databases across the field are high volume of information and high potential to cause damage through data corruption, loss of performance, and loss of patient privacy. Thus, issues of data governance and privacy protection are essential for the construction of data depositories for biomedical research and healthcare. In this paper, we discuss various challenges of data governance in the context of population genome projects. The various challenges along with best practices and current research efforts are discussed through the steps of data collection, storage, sharing, analysis, and knowledge dissemination.
Zhang, Han; Fiszman, Marcelo; Shin, Dongwook; Wilkowski, Bartlomiej; Rindflesch, Thomas C
Graph-based notions are increasingly used in biomedical data mining and knowledge discovery tasks. In this paper, we present a clique-clustering method to automatically summarize graphs of semantic predications produced from PubMed citations (titles and abstracts). SemRep is used to extract semantic predications from the citations returned by a PubMed search. Cliques were identified from frequently occurring predications with highly connected arguments filtered by degree centrality. Themes contained in the summary were identified with a hierarchical clustering algorithm based on common arguments shared among cliques. The validity of the clusters in the summaries produced was compared to the Silhouette-generated baseline for cohesion, separation and overall validity. The theme labels were also compared to a reference standard produced with major MeSH headings. For 11 topics in the testing data set, the overall validity of clusters from the system summary was 10% better than the baseline (43% versus 33%). While compared to the reference standard from MeSH headings, the results for recall, precision and F-score were 0.64, 0.65, and 0.65 respectively.
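The clique-identification step of this method can be illustrated with the classic Bron-Kerbosch enumeration of maximal cliques over an adjacency-set graph. The degree-centrality filtering and hierarchical clustering of the full pipeline are not reproduced here, and the toy graph in the usage example stands in for a semantic-predication graph.

```python
def maximal_cliques(adj):
    """Bron-Kerbosch enumeration (without pivoting) of all maximal
    cliques in an undirected graph given as {node: set(neighbors)}."""
    out = []
    def bk(r, p, x):
        if not p and not x:          # r cannot be extended: it is maximal
            out.append(frozenset(r))
            return
        for v in list(p):
            # grow the clique with v, restricting candidates to v's neighbors
            bk(r | {v}, p & adj[v], x & adj[v])
            p = p - {v}
            x = x | {v}
    bk(set(), set(adj), set())
    return out
```

On a triangle with a pendant node, e.g. `{1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}`, the enumeration yields the two maximal cliques {1, 2, 3} and {3, 4}, which a clustering step could then group by shared arguments.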
Njiru, M W; Mutai, C; Gikunju, J
The proper handling and disposal of biomedical waste (BMW) is imperative. There are well-defined rules for handling BMW worldwide. Unfortunately, laxity and a lack of adequate training and awareness in executing these rules lead to serious health and environmental concerns. Objective: to assess awareness and practice regarding biomedical waste management among health care personnel in Kenyatta National Hospital (KNH). Design: a cross-sectional study. Kenyatta National Hospital doctors, nurses and support staff who had worked in the institution for more than six months and consented were evaluated. The overall level of awareness of biomedical waste management among health care personnel was found to be 60%. The doctors scored 51%, the lowest score; the nurses scored 65%, the highest; and the support staff scored 55%. As for practices, the results showed that most of the healthcare personnel were aware of the biomedical waste management practices in the hospital, with the lowest scores coming from doctors, which suggests no association between knowledge of biomedical waste management and education. When asked how they would describe the control of waste management in the institution, 59% said good, 40% said fair and 1% said poor. The present study therefore outlines the gap between biomedical waste management rules and the inadequate state of their execution and awareness in practice. It is recommended that the existing Hospital Infection Control Committee be strengthened to supervise all aspects of biomedical waste management. Periodic training programmes in biomedical waste handling and disposal for staff, with a focus on doctors, are highlighted.
This book provides an introduction to design of biomedical optical imaging technologies and their applications. The main topics include: fluorescence imaging, confocal imaging, micro-endoscope, polarization imaging, hyperspectral imaging, OCT imaging, multimodal imaging and spectroscopic systems. Each chapter is written by the world leaders of the respective fields, and will cover: principles and limitations of optical imaging technology, system design and practical implementation for one or two specific applications, including design guidelines, system configuration, optical design, component requirements and selection, system optimization and design examples, recent advances and applications in biomedical researches and clinical imaging. This book serves as a reference for students and researchers in optics and biomedical engineering.
Kemnitzer, Ronald; Dorsa, Ed
The development of biomedical equipment is justifiably focused on making products that "work." However, this approach leaves many of the people affected by these designs (operators, patients, etc.) with little or no representation when it comes to the design of these products. Industrial design is a "user focused" profession which takes into account the needs of diverse groups when making design decisions. The authors propose that biomedical equipment design can be enhanced, made more user and patient "friendly" by adopting the industrial design approach to researching, analyzing, and ultimately designing biomedical products.
Full Text Available Biomedical signal and image processing constitute a dynamic area of specialization in both academic and research aspects of biomedical engineering. The concepts of signal and image processing have been widely used for extracting physiological information in implementing many clinical procedures for sophisticated medical practices and applications. In this paper, the relationship between electrophysiological signals, i.e., the electrocardiogram (ECG), electromyogram (EMG) and electroencephalogram (EEG), and functional image processing, along with their derived interactions, is discussed. Examples are investigated in various case studies such as neurosciences, functional imaging, and the cardiovascular system, using different algorithms and methods. The interaction between the extracted information obtained from multiple signals and modalities seems to be very promising. Advanced algorithms and methods in the area of information retrieval based on time-frequency representation are investigated. Finally, some examples of algorithms are discussed in which the electrophysiological signals and functional images are properly extracted and have a significant impact on various biomedical applications. Keywords: Biomedical signals and images, Processing, Analysis
Kim, Seongsoon; Park, Donghyeon; Choi, Yonghwa; Lee, Kyubum; Kim, Byounggun; Jeon, Minji; Kim, Jihye; Tan, Aik Choon; Kang, Jaewoo
With the development of artificial intelligence (AI) technology centered on deep-learning, the computer has evolved to a point where it can read a given text and answer a question based on the context of the text. Such a specific task is known as the task of machine comprehension. Existing machine comprehension tasks mostly use datasets of general texts, such as news articles or elementary school-level storybooks. However, no attempt has been made to determine whether an up-to-date deep learning-based machine comprehension model can also process scientific literature containing expert-level knowledge, especially in the biomedical domain. This study aims to investigate whether a machine comprehension model can process biomedical articles as well as general texts. Since there is no dataset for the biomedical literature comprehension task, our work includes generating a large-scale question answering dataset using PubMed and manually evaluating the generated dataset. We present an attention-based deep neural model tailored to the biomedical domain. To further enhance the performance of our model, we used a pretrained word vector and biomedical entity type embedding. We also developed an ensemble method of combining the results of several independent models to reduce the variance of the answers from the models. The experimental results showed that our proposed deep neural network model outperformed the baseline model by more than 7% on the new dataset. We also evaluated human performance on the new dataset. The human evaluation result showed that our deep neural model outperformed humans in comprehension by 22% on average. In this work, we introduced a new task of machine comprehension in the biomedical domain using a deep neural model. Since there was no large-scale dataset for training deep neural models in the biomedical domain, we created the new cloze-style datasets Biomedical Knowledge Comprehension Title (BMKC_T) and Biomedical Knowledge Comprehension Last
From Extraction to Knowledge Modeling in a Knowledge-Based System: A Logical Combinatorial Conceptual Clustering Approach (original title: De la extracción al modelado del conocimiento en un Sistema Basado en el Conocimiento. Un enfoque desde el agrupamiento conceptual lógico combinatorio)
Yunia Reyes González
The knowledge acquisition process required in a knowledge-based system can be automated or partially automated. The idea is to reduce the working time between the knowledge engineer and the domain expert when building an intelligent computer system. This paper presents the potential of logical combinatorial clustering for both extraction and knowledge modeling in the construction of this type of computer system. Three specific cases of Knowledge-Based Systems are presented in which concepts are used in their essential processes: how to represent the knowledge and the method of solving the problem. This approach allows, among other advantages, the automation of the knowledge extraction process, which makes it possible to decouple it from human experts and bring Knowledge-Based Systems theory closer to more current paradigms where techniques like Big Data are applied.
Wager, Elizabeth; Middleton, Philippa
Most journals try to improve their articles by technical editing processes such as proof-reading, editing to conform to 'house styles', grammatical conventions and checking accuracy of cited references. Despite the considerable resources devoted to technical editing, we do not know whether it improves the accessibility of biomedical research findings or the utility of articles. This is an update of a Cochrane methodology review first published in 2003. To assess the effects of technical editing on research reports in peer-reviewed biomedical journals, and to assess the level of accuracy of references to these reports. We searched The Cochrane Library Issue 2, 2007; MEDLINE (last searched July 2006); EMBASE (last searched June 2007) and checked relevant articles for further references. We also searched the Internet and contacted researchers and experts in the field. Prospective or retrospective comparative studies of technical editing processes applied to original research articles in biomedical journals, as well as studies of reference accuracy. Two review authors independently assessed each study against the selection criteria and assessed the methodological quality of each study. One review author extracted the data, and the second review author repeated this. We located 32 studies addressing technical editing and 66 surveys of reference accuracy. Only three of the studies were randomised controlled trials. A 'package' of largely unspecified editorial processes applied between acceptance and publication was associated with improved readability in two studies and improved reporting quality in another two studies, while another study showed mixed results after stricter editorial policies were introduced. More intensive editorial processes were associated with fewer errors in abstracts and references. Providing instructions to authors was associated with improved reporting of ethics requirements in one study and fewer errors in references in two studies, but no
McEntire, Robin; Szalkowski, Debbie; Butler, James; Kuo, Michelle S; Chang, Meiping; Chang, Man; Freeman, Darren; McQuay, Sarah; Patel, Jagruti; McGlashen, Michael; Cornell, Wendy D; Xu, Jinghai James
External content sources such as MEDLINE(®), National Institutes of Health (NIH) grants and conference websites provide access to the latest breaking biomedical information, which can inform pharmaceutical and biotechnology company pipeline decisions. The value of the sites for industry, however, is limited by the use of the public internet, the limited synonyms, the rarity of batch searching capability and the disconnected nature of the sites. Fortunately, many sites now offer their content for download and we have developed an automated internal workflow that uses text mining and tailored ontologies for programmatic search and knowledge extraction. We believe such an efficient and secure approach provides a competitive advantage to companies needing access to the latest information for a range of use cases and complements manually curated commercial sources. Copyright © 2016. Published by Elsevier Ltd.
Skrabalak, Sara E.; Chen, Jingyi; Au, Leslie; Lu, Xianmao; Li, Xingde; Xia, Younan
Nanostructured materials provide a promising platform for early cancer detection and treatment. Here we highlight recent advances in the synthesis and use of Au nanocages for such biomedical applications. Gold nanocages represent a novel class of nanostructures, which can be prepared via a remarkably simple route based on the galvanic replacement reaction between Ag nanocubes and HAuCl4. The Au nanocages have a tunable surface plasmon resonance peak that extends into the near-infrared, where the optical attenuation caused by blood and soft tissue is essentially negligible. They are also biocompatible and present a well-established surface for easy functionalization. We have tailored the scattering and absorption cross-sections of Au nanocages for use in optical coherence tomography and photothermal treatment, respectively. Our preliminary studies show greatly improved spectroscopic image contrast for tissue phantoms containing Au nanocages. Our most recent results also demonstrate the photothermal destruction of breast cancer cells in vitro by using immuno-targeted Au nanocages as an effective photo-thermal transducer. These experiments suggest that Au nanocages may be a new class of nanometer-sized agents for cancer diagnosis and therapy. PMID:18648528
Deligkaris, Kosmas; Tadele, T.S.; Olthuis, Wouter; van den Berg, Albert
This review paper presents hydrogel-based devices for biomedical applications. The first part of the paper gives a comprehensive, qualitative, theoretical overview of hydrogels' synthesis and operation. Crosslinking methods, operation principles and transduction mechanisms are discussed in this
Federal Laboratory Consortium — The NICHD Biomedical Mass Spectrometry Core Facility was created under the auspices of the Office of the Scientific Director to provide high-end mass-spectrometric...
Parracino, A.; Gajula, G.P.; di Gennaro, A.K.
Medical interest in nanotechnology originates from the belief that nanoscale therapeutic devices can be constructed and directed towards their targets inside the human body. Such nanodevices can be engineered by coupling superparamagnetic nanoparticles to biomedically active proteins. We hereby report ...
Kim, Donghyun; Somekh, Michael
Nanophotonics has emerged rapidly into the technological mainstream with the advent and maturity of nanotechnology in photonics, enabling many exciting new applications in biomedical science and engineering that were unimaginable even a few years ago with conventional photonic engineering techniques. Handbook of Nanophotonics in Biomedical Engineering is intended to be a reliable resource and a wealth of information on nanophotonics that can inspire readers by detailing emerging and established possibilities of nanophotonics in biomedical science and engineering applications. This comprehensive reference presents not only the basics of nanophotonics but also explores recent experimental and clinical methods used in biomedical and bioengineering research. Each peer-reviewed chapter discusses fundamental aspects and materials/fabrication issues of nanophotonics, as well as applications in interfaces, cell, tissue, animal studies, and clinical engineering. The organization provides ...
Geusebroek, J.M.; Hoang, M.A.; van Gemert, J.; Worring, M.
We explore the retrieval of visual information from biomedical scientific publication databases. To this end, we consider the use of domain-specific genres to automatically subdivide large image databases into smaller, consistent parts. Combination with Latent Semantic Indexing on the picture captions
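The Latent Semantic Indexing step mentioned above can be sketched in miniature: a term-by-caption count matrix is factorized with a truncated SVD, and captions are then compared in the resulting latent space. The captions, vocabulary handling, and rank below are toy choices for illustration, not the authors' pipeline.

```python
# Minimal sketch of Latent Semantic Indexing (LSI) over short caption texts.
# Illustrative only: toy captions, naive tokenization, and an arbitrary rank.
import numpy as np

captions = [
    "gel electrophoresis of protein samples",
    "protein gel lanes with bands",
    "fluorescence microscopy of stained cells",
    "confocal microscopy image of cells",
]

# Term-by-document count matrix.
vocab = sorted({w for c in captions for w in c.split()})
A = np.array([[c.split().count(w) for c in captions] for w in vocab], dtype=float)

# Truncated SVD: keep k latent dimensions.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T   # one k-dimensional vector per caption

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The two gel/protein captions should sit closer together in latent space
# than a gel caption and a microscopy caption.
print(cosine(doc_vecs[0], doc_vecs[1]) > cosine(doc_vecs[0], doc_vecs[2]))
```

In a real pipeline the counts would typically be tf-idf weighted and the rank chosen empirically; the principle of comparing documents in the reduced space is the same.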
A collaboration between the National Science Foundation and the National Institutes of Health will give NIH-funded researchers training to help them evaluate their scientific discoveries for commercial potential, with the aim of accelerating biomedical in
de Silva, N H Nisansa D
In various biomedical applications that collect, handle, and manipulate data, the amounts of data tend to build up and venture into the range identified as big data. In such cases, a design decision has to be made as to what type of database should handle this data. More often than not, the default and classical solution in the biomedical domain, according to past research, is the relational database. While this was the norm for a long while, there is an evident trend away from relational databases in favor of other database types and paradigms. Nevertheless, it remains of paramount importance to understand the interrelation between biomedical big data and relational databases. This chapter reviews the pros and cons of using relational databases to store biomedical big data, as discussed and applied in previous research.
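As a minimal illustration of the classical relational approach the chapter weighs, the sketch below stores a few records with Python's built-in sqlite3 and runs a declarative aggregation. The table name, fields, and values are invented for the example.

```python
# Minimal sketch of the classical relational approach to biomedical records,
# using Python's built-in sqlite3. Schema and data are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("""
    CREATE TABLE measurement (
        patient_id INTEGER,
        marker     TEXT,
        value      REAL
    )
""")
rows = [(1, "glucose", 5.4), (1, "crp", 0.8), (2, "glucose", 6.1)]
conn.executemany("INSERT INTO measurement VALUES (?, ?, ?)", rows)

# A relational strength: declarative aggregation across all records.
avg = conn.execute(
    "SELECT AVG(value) FROM measurement WHERE marker = 'glucose'"
).fetchone()[0]
print(round(avg, 2))  # mean of 5.4 and 6.1 -> 5.75
```

At big-data scale the trade-offs the chapter discusses (schema rigidity, horizontal scaling) are exactly what pushes some projects toward non-relational stores.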
Journal of Medical and Biomedical Sciences, Vol 3, No 3 (2014).
International Journal of Medicine and Biomedical Research, Vol 5, No 3 (2016).
Journal of Medical and Biomedical Sciences, Vol 1, No 3 (2012).
National Aeronautics and Space Administration — Our project investigated whether a software platform could integrate as wide a variety of devices and data types as needed for spaceflight biomedical support. The...
San, Ka-Yiu; McIntire, Larry V.
Presents an introduction to the Biochemical and Biomedical Engineering program at Rice University. Describes the development of the academic and enhancement programs, including organizational structure and research project titles. (YP)
Bustamante, John; Sierra, Daniel
This volume presents the proceedings of the CLAIB 2016, held in Bucaramanga, Santander, Colombia, 26, 27 & 28 October 2016. The proceedings, presented by the Regional Council of Biomedical Engineering for Latin America (CORAL), offer research findings, experiences and activities between institutions and universities to develop Bioengineering, Biomedical Engineering and related sciences. The conferences of the American Congress of Biomedical Engineering are sponsored by the International Federation for Medical and Biological Engineering (IFMBE), Society for Engineering in Biology and Medicine (EMBS) and the Pan American Health Organization (PAHO), among other organizations and international agencies to bring together scientists, academics and biomedical engineers in Latin America and other continents in an environment conducive to exchange and professional growth.
This volume presents the proceedings of the CLAIB 2014, held in Paraná, Entre Ríos, Argentina, 29, 30 & 31 October 2014. The proceedings, presented by the Regional Council of Biomedical Engineering for Latin America (CORAL), offer research findings, experiences and activities between institutions and universities to develop Bioengineering, Biomedical Engineering and related sciences. The conferences of the American Congress of Biomedical Engineering are sponsored by the International Federation for Medical and Biological Engineering (IFMBE), the Society for Engineering in Biology and Medicine (EMBS) and the Pan American Health Organization (PAHO), among other organizations and international agencies, bringing together scientists, academics and biomedical engineers in Latin America and other continents in an environment conducive to exchange and professional growth. The topics include: - Bioinformatics and Computational Biology - Bioinstrumentation; Sensors, Micro and Nano Technologies - Biomaterials, Tissu...
This paper explains two fundamental approaches to knowledge management. The tacit knowledge approach emphasizes understanding the kinds of knowledge that individuals in an organization have, moving people to transfer knowledge within an organization, and managing key individuals as knowledge creators and carriers. By contrast, the explicit knowledge approach emphasizes processes for articulating knowledge held by individuals, the design of organizational approaches for creating...
Shaped by quantum theory, technology, and the genomics revolution, the integration of photonics, electronics, biomaterials, and nanotechnology holds great promise for the future of medicine. This topic has recently experienced explosive growth due to the noninvasive or minimally invasive nature and the cost-effectiveness of photonic modalities in medical diagnostics and therapy. The second edition of the Biomedical Photonics Handbook presents recent fundamental developments as well as important applications of biomedical photonics of interest to scientists, engineers, manufacturers, teachers,
Biomedical applications have benefited greatly from the increasing interest and research into semiconducting silicon nanowires. Semiconducting Silicon Nanowires for Biomedical Applications reviews the fabrication, properties, and applications of this emerging material. The book begins by reviewing the basics, as well as the growth, characterization, biocompatibility, and surface modification, of semiconducting silicon nanowires. It goes on to focus on silicon nanowires for tissue engineering and delivery applications, including cellular binding and internalization, orthopedic tissue scaffol
Saha, Punam K; Basu, Subhadip
There has been rapid growth in biomedical engineering in recent decades, given advancements in medical imaging and physiological modelling and sensing systems, coupled with immense growth in computational and network technology, analytic approaches, visualization and virtual-reality, man-machine interaction and automation. Biomedical engineering involves applying engineering principles to the medical and biological sciences and it comprises several topics including biomedicine, medical imaging, physiological modelling and sensing, instrumentation, real-time systems, automation and control, sig
Kamala, K; Sivaperumal, P
Marine microbial enzyme technologies have progressed significantly in the last few decades for different applications. Among the various microorganisms, marine actinobacterial enzymes have significant active properties, which could allow them to be biocatalysts with tremendous bioactive metabolites. Moreover, marine actinobacteria have been considered as biofactories, since their enzymes fulfill biomedical and industrial needs. In this chapter, the marine actinobacteria and their enzymes' uses in biological activities and biomedical applications are described. © 2017 Elsevier Inc. All rights reserved.
Vol. 20, No. 6 (2009), pp. 743-750. ISSN 1180-4009. [TIES 2007: 18th Annual Meeting of the International Environmental Society, Mikulov, 16.08.2007-20.08.2007] Institutional research plan: CEZ:AV0Z10300504. Keywords: biomedical informatics; biomedical statistics; genetic information; forensic dentistry. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 1.000, year: 2009
Duchange, Nathalie; Autard, Delphine; Pinhas, Nicole
International audience; Open access within the scientific community depends on the scientific context and the practices of the field. In the biomedical domain, the communication of research results is characterised by the importance of the peer reviewing process, the existence of a hierarchy among journals and the transfer of copyright to the editor. Biomedical publishing has become a lucrative market and the growth of electronic journals has not helped lower the costs. Indeed, it is difficul...
Chee Kai Chua; Wai Yee Yeong; Jia An
Three-dimensional (3D) printing has a long history of applications in biomedical engineering. The development and expansion of traditional biomedical applications are being advanced and enriched by new printing technologies. New biomedical applications such as bioprinting are highly attractive and trendy. This Special Issue aims to provide readers with a glimpse of the recent profile of 3D printing in biomedical research.
The exponential growth in the volume of publications in the biomedical domain has made it impossible for an individual to keep pace with the advances. Even though evidence-based medicine has gained wide acceptance, physicians are unable to access the relevant information in the required time, leaving most of their questions unanswered. This accentuates the need for fast and accurate biomedical question answering systems. In this paper we introduce INDOC, a biomedical question answering system based on novel ideas of indexing and extracting the answer to the questions posed. INDOC displays the results in clusters to help the user arrive at the most relevant set of documents quickly. Evaluation was done against the standard OHSUMED test collection. Our system achieves high accuracy and minimizes user effort.
Vidaurre, Carmen; Sander, Tilmann H; Schlögl, Alois
BioSig is an open source software library for biomedical signal processing. The aim of the BioSig project is to foster research in biomedical signal processing by providing free and open source software tools for many different application areas. Some of the areas where BioSig can be employed are neuroinformatics, brain-computer interfaces, neurophysiology, psychology, cardiovascular systems, and sleep research. Moreover, the analysis of biosignals such as the electroencephalogram (EEG), electrocorticogram (ECoG), electrocardiogram (ECG), electrooculogram (EOG), electromyogram (EMG), or respiration signals is a very relevant element of the BioSig project. Specifically, BioSig provides solutions for data acquisition, artifact processing, quality control, feature extraction, classification, modeling, and data visualization, to name a few. In this paper, we highlight several methods to help students and researchers to work more efficiently with biomedical signals.
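As a toy illustration of the feature-extraction step mentioned above, the sketch below computes two common time-domain features (RMS amplitude and zero-crossing rate) of a synthetic signal. This is not the BioSig API, just a minimal stand-in for the kind of feature a biosignal toolbox produces.

```python
# Toy sketch of time-domain feature extraction from a biosignal.
# Not the BioSig API; the signal here is synthetic.
import math

def rms(signal):
    """Root-mean-square amplitude."""
    return math.sqrt(sum(x * x for x in signal) / len(signal))

def zero_crossing_rate(signal):
    """Fraction of consecutive sample pairs that change sign."""
    crossings = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0)
    return crossings / (len(signal) - 1)

# One second of a 10 Hz sine sampled at 250 Hz (roughly EEG alpha band).
fs, f = 250, 10
sig = [math.sin(2 * math.pi * f * n / fs) for n in range(fs)]

# RMS of a unit sine over whole periods is 1/sqrt(2), about 0.707.
print(round(rms(sig), 3))
print(zero_crossing_rate(sig))
```

Real pipelines (including BioSig's) add artifact rejection, filtering, and frequency-domain features on top of simple statistics like these.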
1. I present a combination of semi-objective and subjective evidence that the quality of English prose in biomedical scientific writing is deteriorating. 2. I consider seven possible strategies for reversing this apparent trend. These refer to a greater emphasis on good writing by students in schools and by university students, consulting books on science writing, one-on-one mentoring, using 'scientific' measures to reveal lexical poverty, making use of freelance science editors and encouraging the editors of biomedical journals to pay more attention to the problem. 3. I conclude that a fruitful, long-term, strategy would be to encourage more biomedical scientists to embark on a career in science editing. This strategy requires a complementary initiative on the part of biomedical research institutions and universities to employ qualified science editors. 4. An immediately realisable strategy is to encourage postgraduate students in the biomedical sciences to undertake the service courses provided by many universities on writing English prose in general and scientific prose in particular. This strategy would require that heads of departments and supervisors urge their postgraduate students to attend such courses. 5. Two major publishers of biomedical journals, Blackwell Publications and Elsevier Science, now provide lists of commercial editing services on their web sites. I strongly recommend that authors intending to submit manuscripts to their journals (including Blackwell's Clinical and Experimental Pharmacology and Physiology) make use of these services. This recommendation applies especially to those for whom English is a second language.
Ó Séaghdha, Diarmuid
Background: Applications of Natural Language Processing (NLP) technology to biomedical texts have generated significant interest in recent years. In this paper we identify and investigate the phenomenon of linguistic subdomain variation within the biomedical domain, i.e., the extent to which different subject areas of biomedicine are characterised by different linguistic behaviour. While variation at a coarser domain level, such as between newswire and biomedical text, is well studied and known to affect the portability of NLP systems, we are the first to conduct an extensive investigation into more fine-grained levels of variation. Results: Using the large OpenPMC text corpus, which spans the many subdomains of biomedicine, we investigate variation across a number of lexical, syntactic, semantic and discourse-related dimensions. These dimensions are chosen for their relevance to the performance of NLP systems. We use clustering techniques to analyse commonalities and distinctions among the subdomains. Conclusions: We find that while patterns of inter-subdomain variation differ somewhat from one feature set to another, robust clusters can be identified that correspond to intuitive distinctions such as that between clinical and laboratory subjects. In particular, subdomains relating to genetics and molecular biology, which are the most common sources of material for training and evaluating biomedical NLP tools, are not representative of all biomedical subdomains. We conclude that an awareness of subdomain variation is important when considering the practical use of language processing applications by biomedical researchers.
Nanotechnology is revolutionizing human life. The synthesis and application of magnetic nanoparticles is a fast-burgeoning field with the potential to bring significant advances to many areas, for example diagnosis and treatment in biomedicine. Novel nanoparticles that function efficiently and intelligently are needed to improve the current technology. We used a magnetron-sputtering-based nanocluster deposition technique to synthesize magnetic nanoparticles in the gas phase, and specifically engineered nanoparticles for different applications. Alternating-magnetic-field heating is emerging as a technique to assist cancer treatment or drug delivery. We proposed that high-magnetic-moment Fe3Si particles with relatively large magnetic anisotropy energy should in principle provide superior performance. Such nanoparticles were experimentally synthesized and characterized. Their promising magnetic properties can contribute to heating performance under suitable alternating-magnetic-field conditions. When thermal energy is used for medical treatment, it is ideal to work in a designed temperature range. Biocompatible and "smart" magnetic nanoparticles with temperature self-regulation were designed from both materials science and biomedicine perspectives. We chose the Fe-Si material system to demonstrate the concept. The temperature-dependent physical property was adjusted by tuning the exchange coupling between Fe atoms through incorporation of various amounts of Si. The magnetic moment can still be kept in a promising range. The two elements are both biocompatible, which is favored by in-vivo medical applications. A combination of "smart" magnetic particles and a thermo-sensitive polymer was demonstrated to potentially function as a platform for drug delivery. Highly sensitive diagnosis for point-of-care applications is in demand nowadays. We developed composition- and phase-controlled Fe-Co nanoparticles for biomolecule detection. It has been demonstrated that Fe70Co30 nanoparticles and giant
Most biomedical journals charge readers a hefty access toll to read the full text version of a published research article. These tolls bring enormous profits to the traditional corporate publishing industry, but they make it impossible for most people worldwide--particularly in low and middle income countries--to access the biomedical literature. Traditional publishers also insist on owning the copyright on these articles, making it illegal for readers to freely distribute and photocopy papers, translate them, or create derivative educational works. This article argues that excluding the poor from accessing and freely using the biomedical research literature is harming global public health. Health care workers, for example, are prevented from accessing the information they need to practice effective medicine, while policymakers are prevented from accessing the essential knowledge they require to build better health care systems. The author proposes that the biomedical literature should be considered a global public good, basing his arguments upon longstanding and recent international declarations that enshrine access to scientific and medical knowledge as a human right. He presents an emerging alternative publishing model, called open access, and argues that this model is a more socially responsive and equitable approach to knowledge dissemination.
Cruser, des Anges; Dubin, Bruce; Brown, Sarah K; Bakken, Lori L; Licciardone, John C; Podawiltz, Alan L; Bulik, Robert J
Without systematic exposure to biomedical research concepts or applications, osteopathic medical students may be generally under-prepared to efficiently consume and effectively apply research and evidence-based medicine information in patient care. The academic literature suggests that although medical residents are increasingly expected to conduct research in their postgraduate training specialties, they generally have limited understanding of research concepts. With grant support from the National Center for Complementary and Alternative Medicine, and a grant from the Osteopathic Heritage Foundation, the University of North Texas Health Science Center (UNTHSC) is incorporating research education in the osteopathic medical school curriculum. The first phase of this research education project involved a baseline assessment of students' understanding of targeted research concepts. This paper reports the results of that assessment and discusses implications for research education during medical school. Using a novel set of research competencies, supported by the literature as needed for understanding research information, we created a questionnaire to measure students' confidence and understanding of selected research concepts. Three matriculating medical school classes completed the on-line questionnaire. Data were analyzed for differences between groups using analysis of variance and t-tests. Correlation coefficients were computed for the confidence and applied understanding measures. We performed a principal component factor analysis of the confidence items, and used multiple regression analyses to explore how confidence might be related to applied understanding. Of 496 total incoming, first, and second year medical students, 354 (71.4%) completed the questionnaire. Incoming students expressed significantly more confidence than first or second year students (F = 7.198, df = 2, 351, P = 0.001) in their ability to understand the research concepts. Factor analyses
H. P. S. Abdul Khalil
The term hydrocolloid generally refers to substances that form gels or provide viscous dispersions in the presence of water. Alginate, agar, and carrageenan are three commercially valuable hydrocolloids derived from certain brown and red seaweeds, and each has distinct physicochemical (i.e. functional and bioactive) properties. Various applications of these seaweed hydrocolloids as thickeners, stabilizers, coagulants and salves (in wound and burn dressings) and as materials to produce biomedical impressions in the food, pharmaceutical, and biotechnology industries are highlighted in this review. Although the existing industrial methods of extraction for these seaweed hydrocolloids are well established, still-growing demand has exposed certain limitations of those methods, notably efficiency and product consistency. In order to achieve targeted hydrocolloids for specific purposes and functionalities, some novel and green extraction methods have also been proposed and discussed. Microwave-assisted extraction (MAE), ultrasound-assisted extraction (UAE), enzyme-assisted extraction (EAE), supercritical fluid extraction (SFE), pressurized solvent extraction (PSE), reactive extrusion and photobleaching processes are selectively presented as highly promising candidates that can avoid the use of chemicals and provide novel means of access to seaweed hydrocolloids with both economic and environmental benefits. However, this review does not provide the ‘best’ method or procedure, as many are still under development. Hence, the review gives ‘food for thought’ as to new processes which might be adopted industrially, and concludes that further research is required in order to contribute additional new knowledge and refinement to this field of study.
A methodology for extracting knowledge rules from artificial neural networks applied to forecast demand for electric power; Uma metodologia para extracao de regras de conhecimento a partir de redes neurais artificiais aplicadas para previsao de demanda por energia eletrica
Steinmetz, Tarcisio; Souza, Glauber; Ferreira, Sandro; Santos, Jose V. Canto dos; Valiati, Joao [Universidade do Vale do Rio dos Sinos (PIPCA/UNISINOS), Sao Leopoldo, RS (Brazil). Programa de Pos-Graduacao em Computacao Aplicada]
We present a methodology for the extraction of rules from Artificial Neural Networks (ANN) trained to forecast electric load demand. The rules have the ability to express the knowledge regarding the behavior of load demand acquired by the ANN during the training process. The rules are presented to the user in an easy-to-read format, such as IF premise THEN consequence, where the premise relates to the input data submitted to the ANN (mapped as fuzzy sets) and the consequence appears as a linear equation describing the output to be presented by the ANN, should the premise hold true. Experimentation demonstrates the method's capacity for acquiring and presenting high-quality rules from neural networks trained to forecast electric load demand for several amounts of time into the future. (author)
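The rule format described above (a fuzzy-set premise over the inputs, with a linear equation as the consequence) can be sketched as follows. The membership functions, thresholds, and coefficients are invented for illustration; they are not the authors' extracted rules.

```python
# Sketch of the IF-premise-THEN-consequence rule format described above:
# the premise checks fuzzy-set membership of the inputs, and the consequence
# is a linear equation over them. All numbers are invented for illustration.

def triangular(x, a, b, c):
    """Triangular fuzzy membership function with support [a, c] and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def rule_high_temp_evening(temp_c, hour):
    """IF temp is 'high' AND hour is 'evening' THEN load = 80 + 2.5*temp_c."""
    degree = min(
        triangular(temp_c, 25, 32, 40),   # fuzzy set 'high temperature'
        triangular(hour, 17, 19, 22),     # fuzzy set 'evening'
    )
    if degree > 0:                        # premise holds, to some degree
        forecast = 80.0 + 2.5 * temp_c    # linear consequence
        return degree, forecast
    return 0.0, None

print(rule_high_temp_evening(32, 19))   # premise fully satisfied -> (1.0, 160.0)
print(rule_high_temp_evening(10, 19))   # 'high temp' fails -> (0.0, None)
```

A full rule base would combine the weighted outputs of many such rules, in the spirit of Takagi-Sugeno fuzzy systems; the sketch shows only the shape of a single extracted rule.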
This paper explains two fundamental approaches to knowledge management. The tacit knowledge approach emphasizes understanding the kinds of knowledge that individuals in an organization have, moving people to transfer knowledge within an organization, and managing key individuals as knowledge ... within an organization. The relative advantages and disadvantages of both approaches to knowledge management are summarized. A synthesis of the tacit and explicit knowledge management approaches is recommended to create a hybrid design for the knowledge management practices in a given organization.
Cole, Brian S; Moore, Jason H
Cloud computing has revolutionized the development and operations of hardware and software across diverse technological arenas, yet academic biomedical research has lagged behind despite the numerous and weighty advantages that cloud computing offers. Biomedical researchers who embrace cloud computing can reap rewards in cost reduction, decreased development and maintenance workload, increased reproducibility, ease of sharing data and software, enhanced security, horizontal and vertical scalability, high availability, a thriving technology partner ecosystem, and much more. Despite these advantages that cloud-based workflows offer, the majority of scientific software developed in academia does not utilize cloud computing and must be migrated to the cloud by the user. In this article, we present 11 quick tips for architecting biomedical informatics workflows on compute clouds, distilling knowledge gained from experience developing, operating, maintaining, and distributing software and virtualized appliances on the world's largest cloud. Researchers who follow these tips stand to benefit immediately by migrating their workflows to cloud computing and embracing the paradigm of abstraction.
Jiehui Jiang; Yuting Zhang; Mi Zhou; Xiaosong Zheng; Zhuangzhi Yan
Biomedical Engineering (BME) bachelor education aims to train qualified engineers who devote themselves to addressing biological and medical problems by integrating technological, medical and biological knowledge. Design thinking and teamwork with other disciplines are necessary for biomedical engineers. In the current biomedical engineering education system of Shanghai University (SHU), however, such design thinking and teamwork through a practical project are lacking. This paper describes a creative "joint assignment" project at Shanghai University, China, which has provided BME bachelor students a two-year practical experience working with students from multidisciplinary departments including sociology, mechanics, computer science, business and art. To test the feasibility of this project, a twenty-month pilot was carried out from May 2015 to December 2016. The results showed that the pilot clearly enhanced the competitiveness of BME students at Shanghai University, in both design thinking and teamwork capabilities.
Wang Xiupeng; Ito, Atsuo; Li Xia; Sogo, Yu; Oyane, Ayako
In this review, the current knowledge of signal molecules-calcium phosphate coprecipitation and its biomedical application as a functional coating are described. Although signal molecules regulate a variety of cellular processes, it is difficult to sustain the regulation activity for a long term when the signal molecules are only injected in a free form. The signal molecules-calcium phosphate coprecipitation on a substrate surface is a very promising process to achieve sustained regulation activity of the signal molecules by controlled and localized delivery of the signal molecules to specific body sites (implantation sites). However, the significance of immobilizing signal molecules with calcium phosphate coatings and their biomedical application are not systematically illustrated. For this purpose, the presently existing coprecipitation methods and strategies on biomedical application are summarized and discussed. (topical review)
Barroga, Edward; Vardaman, Maya
The primary objective of educational programs on biomedical writing, editing, and publishing is to nurture ethical skills among local and international researchers and editors from diverse professional backgrounds. The mechanics, essential components, and target outcomes of these programs are described in this article. The mechanics covers the objectives, design, benefits, duration, participants and qualifications, program formats, administrative issues, and mentorship. The essential components consist of three core schedules: Schedule I Basic aspects of biomedical writing, editing, and communications; Schedule II Essential skills in biomedical writing, editing, and publishing; and Schedule III Interactive lectures on relevant topics. The target outcomes of the programs comprise knowledge acquisition, skills development, paper write-up, and journal publication. These programs add to the prestige and academic standing of the host institutions.
... expressed interest in engaging with prayer camps to expand access to clinical care for patients residing in the camps. The findings demonstrate that biomedical care providers are interested in engaging with prayer camps. Key areas where partnerships may best improve conditions for patients at prayer camps include collaborating on creating safe and secure physical spaces and delivering medication for mental illness to patients living in prayer camps. However, while prayer camp staff are willing to engage biomedical knowledge, the deeply held beliefs and routine practices of faith and biomedical healers are difficult to reconcile. Additional discussion is needed to find the common ground on which the scarce resources for mental health care in Ghana can collaborate most effectively.
Researchers design ontologies as a means to accurately annotate and integrate experimental data across heterogeneous and disparate databases and knowledge bases. Formal ontologies make the semantics of terms and relations explicit such that automated reasoning can be used to verify the consistency of knowledge. However, many biomedical ontologies do not sufficiently formalize the semantics of their relations and are therefore limited with respect to automated reasoning for large-scale data integration and knowledge discovery. We describe a method to improve automated reasoning over biomedical ontologies and identify several thousand contradictory class definitions. Our approach aligns terms in biomedical ontologies with foundational classes in a top-level ontology and formalizes composite relations as class expressions. We describe the semi-automated repair of contradictions and demonstrate expressive queries over interoperable ontologies. Our work forms an important cornerstone for data integration, automatic inference and knowledge discovery based on formal representations of knowledge. Our results and analysis software are available at http://bioonto.de/pmwiki.php/Main/ReasonableOntologies.
Background: Negated biomedical events are often ignored by text-mining applications; however, such events carry scientific significance. We report on the development of BioN∅T, a database of negated sentences that can be used to extract such negated events. Description: Currently BioN∅T incorporates ≈32 million negated sentences, extracted from over 336 million biomedical sentences from three resources: ≈2 million full-text biomedical articles in Elsevier and PubMed Central, as well as ≈20 million abstracts in PubMed. We evaluated BioN∅T on three important genetic disorders: autism, Alzheimer's disease and Parkinson's disease, and found that BioN∅T is able to capture negated events that may be ignored by experts. Conclusions: The BioN∅T database can be a useful resource for biomedical researchers. BioN∅T is freely available at http://bionot.askhermes.org/. In future work, we will develop semantic web technologies to enrich BioN∅T.
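A minimal sketch of how negated sentences might be flagged with surface cues is shown below. BioN∅T's actual extraction is far more sophisticated; the cue list and example sentences here are illustrative only.

```python
# Minimal cue-based sketch of flagging possibly-negated biomedical sentences.
# Illustrative only: a tiny cue list, no scope detection, invented sentences.
import re

NEGATION_CUES = re.compile(
    r"\b(no|not|never|without|absence of|lack of|failed to)\b", re.IGNORECASE
)

def is_negated(sentence):
    """True if the sentence contains a surface negation cue."""
    return NEGATION_CUES.search(sentence) is not None

sentences = [
    "TP53 mutation was not detected in these samples.",
    "The protein binds DNA in vitro.",
    "There was no association with Parkinson's disease.",
]
negated = [s for s in sentences if is_negated(s)]
print(len(negated))  # the first and third sentences carry cues -> 2
```

Cue matching like this over-triggers (e.g. on double negations or cues outside the event's scope), which is why dedicated systems add scope and event analysis on top.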
Afshinnekoo, Ebrahim; Ahsanuddin, Sofia; Mason, Christopher E
Crowdfunding and crowdsourcing of medical research have emerged as a novel paradigm for many biomedical disciplines to rapidly collect, process and interpret data from high-throughput and high-dimensional experiments. The novelty and promise of these approaches have led to fundamental discoveries about RNA mechanisms, microbiome dynamics and even patient interpretation of test results. However, these methods require robust training protocols, uniform sampling methods and experimental rigor in order to be useful for subsequent research efforts. Executed correctly, crowdfunding and crowdsourcing can leverage public resources and engagement to generate support for scientific endeavors that would otherwise be impossible due to funding constraints and/or the large number of participants needed for data collection. We conducted a comprehensive literature review of scientific studies that utilized crowdsourcing and crowdfunding to generate data. We also discuss our own experiences conducting citizen-science research initiatives (MetaSUB and PathoMap) and ensuring data robustness, educational outreach and public engagement. We demonstrate the efficacy of crowdsourcing mechanisms for revolutionizing microbiome and metagenomic research to better elucidate the microbial and genetic dynamics of cities around the world (as well as non-urban areas). Crowdsourced studies have created an improved and unprecedented ability to monitor, design and measure changes at the microbial and macroscopic scales. Thus, the use of crowdsourcing strategies has dramatically altered certain genomics research to create global citizen-science initiatives that reveal new discoveries about the world's genetic dynamics. The effectiveness of crowdfunding and crowdsourcing is largely dependent on the study design and methodology. One point of contention for the present discussion is the validity and scientific rigor of data that are generated by non-scientists. Selection bias, limited sample
Zhu, Yongjun; Yan, Erjia; Wang, Fei
Understanding semantic relatedness and similarity between biomedical terms has a great impact on a variety of applications such as biomedical information retrieval, information extraction, and recommender systems. The objective of this study is to examine word2vec's ability to derive semantic relatedness and similarity between biomedical terms from large publication data. Specifically, we focus on the effects of recency, size, and section of biomedical publication data on the performance of word2vec. We download abstracts of 18,777,129 articles from PubMed and 766,326 full-text articles from PubMed Central (PMC). The datasets are preprocessed and grouped into subsets by recency, size, and section. Word2vec models are trained on these subsets. Cosine similarities between biomedical terms obtained from the word2vec models are compared against reference standards. The performance of models trained on different subsets is compared to examine recency, size, and section effects. Models trained on recent datasets did not boost performance. Models trained on larger datasets identified more pairs of biomedical terms than models trained on smaller datasets in the relatedness task (from 368 at the 10% level to 494 at the 100% level) and the similarity task (from 374 at the 10% level to 491 at the 100% level). The model trained on abstracts produced results that have higher correlations with the reference standards than the one trained on article bodies (i.e., 0.65 vs. 0.62 in the similarity task and 0.66 vs. 0.59 in the relatedness task). However, the latter identified more pairs of biomedical terms than the former (i.e., 344 vs. 498 in the similarity task and 339 vs. 503 in the relatedness task). Increasing the size of a dataset does not always enhance performance: larger datasets can identify more relations between biomedical terms even though they do not guarantee better precision. As summaries of research articles, compared with article
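The relatedness scores compared against the reference standards are plain cosine similarities between term vectors. A minimal sketch, with made-up 3-dimensional embeddings standing in for trained word2vec vectors:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical low-dimensional embeddings for illustration only;
# real word2vec vectors typically have hundreds of dimensions.
vectors = {
    "heart":   [0.9, 0.1, 0.30],
    "cardiac": [0.8, 0.2, 0.35],
    "kidney":  [0.1, 0.9, 0.20],
}
sim = cosine(vectors["heart"], vectors["cardiac"])
```

With trained models, such pairwise scores are then correlated against human-judged reference standards to evaluate each model.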
Fang, Shancheng; Xie, Hongtao; Chen, Zhineng; Liu, Yizhi; Li, Yan
Reading Uyghur text from biomedical graphic images is a challenging problem due to the complex layout and cursive writing of Uyghur. In this paper, we propose a system that extracts text from Uyghur biomedical images and matches the text against a specific lexicon for semantic analysis. The proposed system possesses the following distinctive properties: first, it is an integrated system that detects and crops the Uyghur text lines using a single fully convolutional neural network, after which keywords in the lexicon are matched by a well-designed matching network. Second, to train the matching network effectively, an online sampling method is applied, which continually generates synthetic data. Finally, we propose a GPU acceleration scheme that lets the matching network match a complete Uyghur text line directly rather than a single window. Experimental results on a benchmark dataset show our method achieves a good performance of 74.5% F-measure. Moreover, our system remains efficient, with a running time of 0.5 s per image thanks to the GPU acceleration scheme.
Shi, Tujin; Song, Ehwang; Nie, Song; Rodland, Karin D.; Liu, Tao; Qian, Wei-Jun; Smith, Richard D. [all: Biological Sciences Division and Environmental Molecular Sciences Laboratory, Pacific Northwest National Laboratory, Richland, WA, USA]
Targeted proteomics has emerged as a powerful protein quantification tool in systems biology and biomedical research, and increasingly for clinical applications. The most widely used targeted proteomics approach, selected reaction monitoring (SRM), also known as multiple reaction monitoring (MRM), can be used for quantification of cellular signaling networks and preclinical verification of candidate protein biomarkers. As an extension to our previous review on advances in SRM sensitivity (Shi et al., Proteomics, 12, 1074–1092, 2012), herein we review recent advances in methods and technology for further enhancing SRM sensitivity (from 2012 to the present) and highlight its broad biomedical applications in human bodily fluids, tissues and cell lines. Furthermore, we review two recently introduced targeted proteomics approaches, parallel reaction monitoring (PRM) and data-independent acquisition (DIA) with targeted data extraction on fast-scanning high-resolution accurate-mass (HR/AM) instruments. Such HR/AM targeted quantification, which monitors all target product ions, effectively addresses SRM's limitations in specificity and multiplexing; however, compared to SRM, PRM and DIA are still in their infancy with a limited number of applications. Thus, for HR/AM targeted quantification we focus our discussion on method development, data processing and analysis, and its advantages and limitations in targeted proteomics. Finally, general perspectives on the potential of achieving both high sensitivity and high sample throughput for large-scale quantification of hundreds of target proteins are discussed.
Rosol, Thomas J.; Moore, Rustin M.; Saville, William J.A.; Oglesbee, Michael J.; Rush, Laura J.; Mathes, Lawrence E.; Lairmore, Michael D.
The number of veterinarians in the United States is inadequate to meet societal needs in biomedical research and public health. Areas of greatest need include translational medical research, veterinary pathology, laboratory-animal medicine, emerging infectious diseases, public health, academic medicine, and production-animal medicine. Veterinarians have unique skill sets that enable them to serve as leaders or members of interdisciplinary research teams involved in basic science and biomedical research with applications to animal or human health. There are too few graduate veterinarians to serve broad national needs in private practice; academia; local, state, and federal government agencies; and private industry. There are no easy solutions to the problem of increasing the number of veterinarians in biomedical research. Progress will require creativity, modification of priorities, broad-based communication, support from faculty and professional organizations, effective mentoring, education in research and alternative careers as part of the veterinary professional curriculum, and recognition of the value of research experience among professional schools’ admissions committees. New resources should be identified to improve communication and education, professional and graduate student programs in biomedical research, and support to junior faculty. These actions are necessary for the profession to sustain its viability as an integral part of biomedical research. PMID:19435992
Gutiérrez, José María
The contributions published in Revista de Biología Tropical in the area of Biomedical Sciences are reviewed in terms of number of contributions and scope of research subjects. Biomedical Sciences, particularly Parasitology and Microbiology, constituted the predominant subject in the Revista during the first decade, reflecting the intense research environment at the School of Microbiology of the University of Costa Rica and at Hospital San Juan de Dios. The relative weight of Biomedicine in the following decades diminished, due to the outstanding increment in publications in Biological Sciences; however, the absolute number of contributions in Biomedical Sciences remained constant throughout the last decades, with around 80 contributions per decade. In spite of the predominance of Parasitology as the main biomedical subject, the last decades have witnessed the emergence of new areas of interest in the Revista, such as Pharmacology of natural products, Toxinology, especially related to snake venoms, and Human Genetics. This retrospective analysis evidences that Biomedical Sciences, particularly those related to Tropical Medicine, were a fundamental component during the first years of Revista de Biología Tropical, and have maintained a significant presence in the scientific output of this journal, the most relevant scientific publication in biological sciences in Central America.
Sherman, Michael A; Seth, Ajay; Delp, Scott L
Multibody software designed for mechanical engineering has been successfully employed in biomedical research for many years. For real time operation some biomedical researchers have also adapted game physics engines. However, these tools were built for other purposes and do not fully address the needs of biomedical researchers using them to analyze the dynamics of biological structures and make clinically meaningful recommendations. We are addressing this problem through the development of an open source, extensible, high performance toolkit including a multibody mechanics library aimed at the needs of biomedical researchers. The resulting code, Simbody, supports research in a variety of fields including neuromuscular, prosthetic, and biomolecular simulation, and related research such as biologically-inspired design and control of humanoid robots and avatars. Simbody is the dynamics engine behind OpenSim, a widely used biomechanics simulation application. This article reviews issues that arise uniquely in biomedical research, and reports on the architecture, theory, and computational methods Simbody uses to address them. By addressing these needs explicitly Simbody provides a better match to the needs of researchers than can be obtained by adaptation of mechanical engineering or gaming codes. Simbody is a community resource, free for any purpose. We encourage wide adoption and invite contributions to the code base at https://simtk.org/home/simbody.
Biomedical engineering is a new area of research in medicine and biology, providing new concepts and designs for the diagnosis, treatment and prevention of various diseases. There are several types of biomedical engineering, such as tissue, genetic, neural and stem-cell engineering, as well as chemical and clinical engineering for health care. Many electronic and magnetic methods and pieces of equipment are used in biomedical engineering, such as Computed Tomography (CT) scans, Magnetic Resonance Imaging (MRI) scans, Electroencephalography (EEG), ultrasound, regenerative medicine and stem cell cultures, and preparations of artificial cells and organs, such as pancreas, urinary bladders, liver cells, and fibroblast cells of foreskin, among others. The principle of tissue engineering is described, along with the various types of cells used for tissue engineering purposes. The use of several medical devices and bionics is covered, together with scaffolds, cell and tissue cultures, and the various materials used for biomedical engineering. Biomedical engineering methods are very important for human health and for disease research and development. Bioreactors and preparations of artificial cells, tissues and organs are described here.
Magjarevic, Ratko; Zequera Diaz, Martha L
Biomedical Engineering programs are present at a large and increasing number of universities all over the world. New generations of biomedical engineers have to face the challenges of health care systems around the world, which need a large number of professionals not only to support the present technology in the health care system but to develop new devices and services. Health care stakeholders would like innovative solutions directed towards solving the problems posed by the world's growing incidence of chronic disease and its ageing population. These new solutions have to meet the requirements for continuous monitoring, support or care outside clinical settings. The presence of these needs can be tracked through U.S. labor data showing that biomedical engineering jobs have the largest growth in the engineering labor market, with an expected 72% growth rate over the period from 2008 to 2018. In the European Union the number of patents (i.e. innovation) is highest in the category of biomedical technology. Biomedical engineering curricula have to adapt to the new needs and to the expectations of the future. In this paper we give an overview of engineering professions related to engineering in medicine and biology and the current status of BME education in some regions, as a basis for further discussion.
Cheng, Huaitzung Andrew
Polymeric nanoparticles have a wide range of applications, particularly as drug delivery and diagnostic agents, and tannins have been regarded as a promising building block for redox- and pH-responsive systems. Tannins are a class of naturally occurring polyphenols commonly produced by plants and are found in many of our consumables like teas, spices, fresh fruits, and vegetables. Many of the health benefits associated with these foods are a result of their high tannin contents, and the many different types of tannins found in various plants have demonstrated therapeutic potential for conditions ranging from cardiovascular disease and diabetes to ulcers and cancer. Diets rich in tannins have been associated with lower blood pressure in patients with hypertension. The plurality of phenols in tannins also makes them powerful antioxidants, and as a result there is considerable interest in taking advantage of their self-assembling abilities to make redox- and pH-responsive drug delivery systems. However, the benefit of natural tannins is limited by their instability in physiological conditions. Furthermore, there is limited control over molecular weight and reactivity of the phenolic content of plant extracts. Herein we report the novel synthesis of pseudotannins with control over molecular weight and reactivity of phenolic moieties. These pseudotannins can form nanoscale interpolymer complexes under physiological conditions and have demonstrated antioxidative potential. Furthermore, pseudotannin IPCs have been shown to respond to physiologically relevant oxidation and to easily incorporate cell-targeting peptides, fluorescent tags, and MRI contrast agents. The work presented here describes how pseudotannins would be ideally suited to minimally invasive techniques for diagnosing atherosclerotic plaques and targeting triple negative breast cancer. We demonstrate that pseudotannins can very easily and quickly form nanoscale particles that are small
To summarize excellent current research in the field of knowledge representation and management (KRM). A synopsis of the articles selected for the IMIA Yearbook 2011 is provided and the current trends in the field are sketched. Over the last decade, with the extension of the text-based web towards a semantically structured web, NLP techniques have experienced renewed interest for knowledge extraction. This trend is corroborated by the five papers selected for the KRM section of the Yearbook 2011. They all depict outstanding studies that exploit NLP technologies wherever possible in order to accurately extract meaningful information from various biomedical textual sources. Bringing semantic structure to the meaningful content of textual web pages affords the user cooperative sharing and intelligent retrieval of electronic data. As exemplified by the best-paper selection, more and more advanced biomedical applications aim at exploiting the meaningful richness of free-text documents in order to generate semantic metadata and, more recently, to learn and populate domain ontologies. The latter are becoming a key component, as they portray the semantics of Semantic Web content. Maintaining their consistency with the documents and semantic annotations that refer to them is a crucial challenge for the Semantic Web in the coming years.
This volume presents the proceedings of the 15th ICMBE, held from 4th to 7th December 2013 in Singapore. Biomedical engineering is applied in most aspects of our healthcare ecosystem. From electronic health records to diagnostic tools to therapeutic, rehabilitative and regenerative treatments, the work of biomedical engineers is evident. Biomedical engineers work at the intersection of engineering, the life sciences and healthcare. They use principles from applied science, including mechanical, electrical, chemical and computer engineering, together with physical sciences including physics, chemistry and mathematics, and apply them to biology and medicine. Applying such concepts to the human body involves much the same concepts that go into building and programming a machine. The goal is to better understand, replace or fix a target system to ultimately improve the quality of healthcare. With this understanding, the conference proceedings offer a single platform for individuals and organisations working i...
Dario, Paolo; Chiara Carrozza, Maria; Benvenuto, Antonella; Menciassi, Arianna
In this paper we analyse the main characteristics of some micro-devices which have been developed recently for biomedical applications. Among the many biomedical micro-systems proposed in the literature or already on the market, we have selected a few which, in our opinion, represent particularly well the technical problems to be solved, the research topics to be addressed and the opportunities offered by micro-system technology (MST) in the biomedical field. For this review we have identified four important areas of application of micro-systems in medicine and biology: (1) diagnostics; (2) drug delivery; (3) neural prosthetics and tissue engineering; and (4) minimally invasive surgery. We conclude that MST has the potential to play a major role in the development of new medical instrumentation and to have a considerable industrial impact in this field.
Fuente Puch, A.E. de la
Human exposure to ionizing radiation in the context of medical and biomedical research raises specific ethical challenges whose resolution should be based on scientific, legal and procedural considerations. The Joint Resolution MINSAP-CITMA 'Basic Standards of Radiation Safety' of 30 November 2001 (hereafter NBS) provides, for the first time in Cuba, legislation specifically designed to protect patients and healthy people who participate in medical and biomedical research programs and are exposed to radiation. The objective of this paper is to demonstrate the need to develop specific requirements for radiation protection in medical and biomedical research, as well as to identify all the institutions involved, in order to establish the cooperation necessary to ensure the protection of persons participating in the research.
The use of optomechatronic technology, particularly in biomedical optical imaging, is becoming increasingly pronounced due to the synergistic effect of integrating optics and mechatronics. The background to this trend is that biomedical optical imaging, for example in-vivo imaging related to retraction of tissues, diagnosis, and surgical operations, faces a variety of challenges due to the complexity of the internal structure and properties of the biological body and the resulting optical phenomena. This paper addresses the technical issues related to tissue imaging, visualization of the interior surfaces of organs, laparoscopic and endoscopic imaging, and imaging of neuronal activities and structures. Within these problem domains, the paper overviews state-of-the-art technology, focusing on how optical components are fused with those of mechatronics to create the functionalities required for imaging systems. A brief future perspective on optical imaging in the biomedical field is presented.
Hansen, Merete Kjær
… been allocated to this field. It is critically important to utilize these resources responsibly and efficiently by constantly striving to ensure high-quality biomedical studies. This involves the use of a sound statistical methodology regarding both the design and analysis of biomedical studies. The focus … for the statistical power of studies with a hierarchical structure, to guide biomedical researchers designing future studies of this type. Upon model fitting it is important to examine whether the model assumptions are met, to avoid drawing spurious conclusions. While the range of diagnostic methods is extensive … for models assuming a normal response, it is generally more limited for non-normal models. An R package providing diagnostic tools suitable for examining the validity of binomial regression models has been developed. The binomTools package is publicly available at the CRAN repository.
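Statistical power of the kind discussed above can be estimated by simulation. A much-simplified Monte Carlo sketch for a two-group binomial comparison follows (not the hierarchical models treated in the thesis; all numbers are illustrative):

```python
import random

# Monte Carlo power estimate for comparing two proportions with a
# two-sample z-test. Power = fraction of simulated trials in which the
# null hypothesis of equal proportions is rejected.
def simulated_power(p_control, p_treatment, n_per_group,
                    n_sims=2000, z_crit=1.96, seed=7):
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_sims):
        a = sum(rng.random() < p_control for _ in range(n_per_group))
        b = sum(rng.random() < p_treatment for _ in range(n_per_group))
        p1, p2 = a / n_per_group, b / n_per_group
        pooled = (a + b) / (2 * n_per_group)
        se = (2 * pooled * (1 - pooled) / n_per_group) ** 0.5
        if se > 0 and abs(p1 - p2) / se > z_crit:
            rejections += 1
    return rejections / n_sims

power = simulated_power(0.3, 0.5, n_per_group=100)
```

The same simulation idea extends to hierarchical designs by adding cluster-level random effects to the data-generating step.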
Klein, R.C.; Reginatto, M.; Party, E.; Gershey, E.L.
This paper reports that calculations exist for estimating the shielding required for radioactivity; however, they are often not applicable to the radionuclides and activities common in biomedical research. A variety of commercially available Lucite shields are being marketed to the biomedical community. Their advertisements may lead laboratory workers to expect better radiation protection than these shields can provide, or to assume erroneously that very weak beta emitters require extensive shielding. The authors have conducted a series of shielding experiments designed to simulate exposures from the amounts of 32P, 51Cr and 125I typically used in biomedical laboratories. For most routine work, ≥0.64 cm of Lucite covered with various thicknesses of lead will reduce whole-body occupational exposure rates to <1 mR/hr at the point of contact.
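The underlying shielding arithmetic for photon emitters is simple exponential attenuation, conveniently expressed via a material's half-value layer (HVL). A minimal sketch; the numbers below are illustrative, not measured values from the paper:

```python
import math

def attenuated_rate(rate0, hvl_cm, thickness_cm):
    """Dose rate after a shield, from exponential attenuation:
    rate = rate0 * exp(-mu * x), with mu = ln(2) / HVL."""
    mu = math.log(2) / hvl_cm
    return rate0 * math.exp(-mu * thickness_cm)

# Illustrative: a 10 mR/hr contact rate behind three half-value layers
# of shielding drops by a factor of 2**3 = 8.
rate = attenuated_rate(rate0=10.0, hvl_cm=0.3, thickness_cm=0.9)
```

Note that for pure beta emitters the relevant quantity is instead the beta range in the shield material (plus bremsstrahlung considerations), which is why generic attenuation tables are often inapplicable to biomedical radionuclides.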
Zhang, Qing; Cao, Yong-Gang; Yu, Hong
Citations are used ubiquitously in biomedical full-text articles and play an important role in representing both the rhetorical structure and the semantic content of the articles. As a result, text mining systems would benefit significantly from a tool that automatically extracts the content of a citation. In this study, we applied the supervised machine-learning algorithm Conditional Random Fields (CRFs) to automatically parse a citation into its fields (e.g., Author, Title, Journal, and Year). With a subset of HTML-format open-access PubMed Central articles, we report an overall F1-score of 97.95%. The citation parser can be accessed at: http://www.cs.uwm.edu/∼qing/projects/cithit/index.html. Copyright © 2011 Elsevier Ltd. All rights reserved.
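The field structure targeted by the parser can be illustrated with a toy regex-based splitter; the authors use trained CRFs, which handle the many irregular citation formats this heuristic would miss:

```python
import re

# Toy citation splitter: segments a well-formed citation string into
# Author / Title / Journal / Year fields. Purely illustrative of the
# target output structure; a CRF labels tokens instead of matching a
# single rigid pattern.
def parse_citation(citation: str) -> dict:
    pattern = (r"(?P<authors>.+?)\.\s+"
               r"(?P<title>.+?)\.\s+"
               r"(?P<journal>.+?)\.\s+"
               r"(?P<year>\d{4})")
    m = re.match(pattern, citation)
    return m.groupdict() if m else {}

fields = parse_citation("Smith J, Doe A. A study of genes. J Biomed Inform. 2011;44(1):1-9.")
```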
Cristóbal-Luna, José Melesio; Álvarez-González, Isela; Madrigal-Bujaidar, Eduardo; Chamorro-Cevallos, Germán
Grapefruit (Citrus paradisi Macfad.) is an evergreen tree 5-6 m high bearing a fruit of about 15 cm in diameter; inside the protective peel are about 11-14 segments (carpels), each of which is surrounded by a membrane and contains the juice sacs as well as the seeds. The fruit is made up of numerous compounds and is known to have nutritive value because of the presence of various vitamins and minerals, among other chemicals. The fruit is also used in gastronomy. Information has accumulated regarding the participation of the fruit's structures in a variety of biomedical, antigenotoxic and chemopreventive effects, surely related to the presence of the numerous chemicals that constitute the fruit. Such studies have been carried out in different in vitro and in vivo experimental models, and in a few human assays. The information published so far shows interesting results; therefore, the aims of the present review are to first examine the main characteristics of the fruit, and then to systematize the acquired knowledge concerning the biomedical, antigenotoxic and chemopreventive effects produced by the three main structures of the fruit: peel, seed, and pulp. Copyright © 2018 Elsevier Ltd. All rights reserved.
Modern management of biomedical systems involves the use of many distributed resources, such as high-performance computational resources to analyze biomedical data, mass storage systems to store them, medical instruments (microscopes, tomographs, etc.), and advanced visualization and rendering tools. Grids offer the computational power, security and availability needed by such novel applications. This paper presents BIG (Biomedical Imaging Grid), a Web-based Grid portal for management of biomedical information (data and images) in a distributed environment. BIG is an interactive environment that deals with complex user requests regarding the acquisition of biomedical data and the "processing" and "delivering" of biomedical images, using the power and security of Computational Grids.
Electronic health records and scientific articles possess differing linguistic characteristics that may impact the performance of natural language processing tools developed for one or the other. In this paper, we investigate the performance of four extant concept recognition tools: the clinical Text Analysis and Knowledge Extraction System (cTAKES), the National Center for Biomedical Ontology (NCBO) Annotator, the Biomedical Concept Annotation System (BeCAS) and MetaMap. Each of the four concept recognition systems is applied to four different corpora: the i2b2 corpus of clinical documents, a PubMed corpus of Medline abstracts, a clinical trials corpus and the ShARe/CLEF corpus. In addition, we assess the individual system performances with respect to a gold standard annotation set, available for the ShARe/CLEF corpus. Furthermore, we build a silver standard annotation set from the individual systems' output and assess its quality as well as the contribution of individual systems to that quality. Our results demonstrate that mainly the NCBO Annotator and cTAKES contribute to the silver standard corpora (F1-measures in the range of 21% to 74%) and to their quality (best F1-measure of 33%), independent of the type of text investigated. While BeCAS and MetaMap can contribute to the precision of silver standard annotations (precision of up to 42%), the F1-measure drops when they are combined with the NCBO Annotator and cTAKES due to low recall. In conclusion, the performances of the individual systems need to be improved independently of the text types, and the leveraging strategies used to best take advantage of the individual systems' annotations need to be revised. The textual content of the PubMed corpus, accession numbers for the clinical trials corpus, and the assigned annotations of the four concept recognition systems, as well as the generated silver standard annotation sets, are available from http://purl.org/phenotype/resources.
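The F1-measures reported above combine precision and recall over annotation sets. A minimal sketch of the computation, with invented (concept, start, end) annotations:

```python
# Precision, recall and F1 over sets of annotations, where each
# annotation is a (label, start_offset, end_offset) tuple and a
# prediction counts as correct only on exact match.
def prf1(gold: set, predicted: set):
    tp = len(gold & predicted)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = {("disease", 10, 18), ("gene", 25, 29)}
pred = {("disease", 10, 18), ("gene", 40, 44)}
p, r, f = prf1(gold, pred)
```

Real evaluations often also score partial span overlaps, which changes the matching step but not the metric definitions.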
M. Tayfun Gülle
The book includes detailed information concerning knowledge and knowledge management, with current resources, in seven chapters under the titles of “organizational effects of knowledge management, knowledge management systems, new knowledge discovery: data mining, the computer as an information sharing platform, technologies for knowledge management: artificial intelligence and knowledge-based systems, the future of knowledge management”. The concepts of knowledge and knowledge management have become a phenomenon for all disciplines, so global companies, other companies, the state sector, epistemologists, experts in innovation and governance, information professionals and others may find it informative. The book also includes three well-informed prefaces, all of which are summarized in the text.
Bonazzi, Vivien R; Bourne, Philip E
The thesis presented here is that biomedical research is based on the trusted exchange of services. That exchange would be conducted more efficiently if the trusted software platforms used to exchange those services, where they exist, were more integrated. While simpler and narrower in scope than the services governing biomedical research, comparison to existing internet-based platforms, like Airbnb, can be informative. We illustrate how the analogy to internet-based platforms works and does not work, and introduce The Commons, under active development at the National Institutes of Health (NIH) and elsewhere, as an example of the move towards platforms for research.
Zare, Yasser; Shabani, Iman
Polymer/metal nanocomposites consisting of polymer as matrix and metal nanoparticles as nanofiller commonly show several attractive advantages such as electrical, mechanical and optical characteristics. Accordingly, many scientific and industrial communities have focused on polymer/metal nanocomposites in order to develop some new products or substitute the available materials. In the current paper, characteristics and applications of polymer/metal nanocomposites for biomedical applications are extensively explained in several categories including strong and stable materials, conductive devices, sensors and biomedical products. Moreover, some perspective utilizations are suggested for future studies. Copyright © 2015 Elsevier B.V. All rights reserved.
Chaotic modulation is a strong method of improving communication security. Both analog and discrete chaotic systems are presented in the current literature. With the expansion of digital communication, discrete-time systems have become more efficient and closer to current technology. The present contribution offers an in-depth analysis of the effects that chaos encryption produces on 1D and 2D biomedical signals. The performed simulations show that modulating signals are precisely recovered by the synchronizing receiver if the discrete systems are digitally implemented and the coefficients match precisely. Channel noise is also applied, and its effects on biomedical signal demodulation are highlighted.
Andreoni, Giuseppe; Colombo, Barbara
During the past two decades incredible progress has been achieved in the instruments and devices used in the biomedical field. This progress stems from continuous scientific research that has taken advantage of many findings and advances in technology made available by universities and industry. Innovation is the key word, and in this context legal protection and intellectual property rights (IPR) are of crucial importance. This book provides students and practitioners with the fundamentals for designing biomedical devices and explains basic design principles. Furthermore, as an aid to the dev
Due to the rapid technological development in the world today, the role of physics in modern medicine is of great importance. The frequent use of equipment that produces ionizing radiation further increases the need for radiation protection, complicated equipment requires technical support, and modern diagnostic and therapeutic methods demand highly qualified professionals in the field of medical physics. Thus, medical physics and biomedical engineering have become an inseparable part of everyday medical practice. A number of highly qualified and dedicated medical physics professionals in Macedonia have committed themselves to resolving medical physics issues. In 2000 they established the first, and still only, professional Association for Medical Physics and Biomedical Engineering (AMPBE) in Macedonia: one competent to address problems in the fields of medicine that apply the methods of physics and biomedical engineering to medical procedures, developing tools essential to physicians and ultimately improving the quality of medical practice in general. The First National Conference on Medical Physics and Biomedical Engineering was organized by the AMPBE in 2007. The idea was to gather all the professionals working in medical physics and biomedical engineering in one place to present their work and increase collaboration among them. Other involved professions, such as medical doctors, radiation technologists, engineers, and professors of physics at the University, also took part and contributed to the success of the conference. As a result, the Proceedings were published in Macedonian, with summaries in English. To further promote medical physics among the scientific community in Macedonia, our society decided to organize the Second Conference on Medical Physics and Biomedical Engineering in November 2010. Unlike the first, this one was held with international participation. This was very suitable
Scheffer, C; Blanckenberg, M; Garth-Davis, B; Eisenberg, M
Most industrial projects require a team of engineers from a variety of disciplines. The team members are often culturally diverse and geographically dispersed. Many students do not acquire sufficient skills from typical university courses to function efficiently in such an environment. The Global Engineering Teams (GET) programme was designed to prepare students for such a scenario in industry. This paper discusses five biomedical-engineering-themed projects completed by GET students. The benefits and success of the programme in educating students in the field of biomedical engineering are discussed.
Pardalos, Panos M; Xanthopoulos, Petros
This volume covers some of the topics related to the rapidly growing field of biomedical informatics. On June 11-12, 2010, a workshop entitled 'Optimization and Data Analysis in Biomedical Informatics' was organized at The Fields Institute. Following this event, invited contributions were gathered based on the talks presented at the workshop, and additional invited chapters were chosen from the world's leading experts. In this publication, the authors share their expertise in the form of state-of-the-art research and review chapters, bringing together researchers from different disciplines
Kim, Won; Yeganova, Lana; Comeau, Donald C; Wilbur, W John
In the modern world, people frequently interact with retrieval systems to satisfy their information needs. Humanly understandable, well-formed phrases represent a crucial interface between humans and the web, and the ability to index and search with such phrases is beneficial for human-web interactions. In this paper we consider the problem of identifying humanly understandable, well-formed, high-quality biomedical phrases in MEDLINE documents. The main approaches used previously for detecting such phrases are syntactic, statistical, and a hybrid approach combining the two. In this paper we propose a supervised learning approach to identifying high-quality phrases. First, we obtain a set of known well-formed, useful phrases from an existing source and label these phrases as positive. We then extract from MEDLINE a large set of multiword strings that do not contain stop words or punctuation. We believe this unlabeled set contains many well-formed phrases, and our goal is to identify these additional high-quality phrases. We examine various feature combinations and several machine learning strategies designed to solve this problem. A proper choice of machine learning methods and features identifies strings in the large collection that are likely to be high-quality phrases. We evaluate our approach by making human judgments on multiword strings extracted from MEDLINE using our methods. We find that over 85% of the extracted phrase candidates are humanly judged to be of high quality. Published by Elsevier Inc.
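The positive-versus-contrast training setup this abstract describes can be sketched in a few lines. The surface features (token count, mean token length, digit presence) and the perceptron learner below are illustrative assumptions for the sketch, not the authors' actual features or classifier.

```python
# Sketch of the phrase-quality idea: known well-formed phrases are labeled
# positive (+1), and other multiword strings serve as the contrast set (-1).
# The feature set here is a hypothetical, minimal stand-in.

def features(phrase):
    toks = phrase.split()
    return [
        1.0,                                    # bias term
        float(len(toks)),                       # number of tokens
        sum(len(t) for t in toks) / len(toks),  # mean token length
        1.0 if any(c.isdigit() for c in phrase) else 0.0,
    ]

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of (phrase, label) with label +1 or -1."""
    w = [0.0] * 4
    for _ in range(epochs):
        for phrase, y in examples:
            x = features(phrase)
            score = sum(wi * xi for wi, xi in zip(w, x))
            if y * score <= 0:  # misclassified -> perceptron update
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
    return w

def predict(w, phrase):
    """True if the learned weights score the phrase as high quality."""
    return sum(wi * xi for wi, xi in zip(w, features(phrase))) > 0
```

On linearly separable toy data the perceptron converges, so a handful of labeled phrases is enough to see the mechanism; a realistic system would use richer features and far more training data.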
Yan, Su; Spangler, W Scott; Chen, Ying
The automation of extracting chemical names from text has significant value to biomedical and life science research. A major barrier in this task is the difficulty of obtaining a sizable, good-quality dataset to train a reliable entity extraction model. Another difficulty is the selection of informative features of chemical names, since comprehensive domain knowledge of chemistry nomenclature is required. Leveraging random text generation techniques, we explore the idea of automatically creating training sets for the task of chemical name extraction. Assuming the availability of an incomplete list of chemical names, called a dictionary, we are able to generate well-controlled, random, yet realistic chemical-like training documents. We statistically analyze the construction of chemical names based on the incomplete dictionary, and propose a series of new features, without relying on any domain knowledge. Compared to state-of-the-art models learned from manually labeled data and domain knowledge, our solution shows better or comparable results in annotating real-world data with less human effort. Moreover, we report an interesting observation about the language of chemical names: both the structural and semantic components of chemical names follow a Zipfian distribution, which resembles many natural languages.
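The reported Zipfian rank-frequency pattern is straightforward to check numerically: count component frequencies, sort by rank, and estimate the slope of log(frequency) versus log(rank), which is near -1 for a Zipf-like distribution. The `zipf_slope` helper and the synthetic corpus below are illustrative assumptions, not the paper's data or method.

```python
# Numerical check for a Zipfian rank-frequency pattern, as reported for
# the structural and semantic components of chemical names.
from collections import Counter
import math

def zipf_slope(tokens):
    """Least-squares slope of log(freq) vs log(rank); ~ -1 for Zipf."""
    freqs = sorted(Counter(tokens).values(), reverse=True)
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Synthetic corpus: hypothetical component i occurs ~1000/i times (ideal Zipf).
corpus = [f"comp{i}" for i in range(1, 51) for _ in range(1000 // i)]
```

Applied to real name components, a slope far from -1 would argue against the Zipfian claim; on the ideal synthetic corpus above the estimate lands close to -1.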
Rotomskis, Ricardas; Karenauskaite, Violeta; Balzekiene, Aiste
To examine the learning and practice needs of medical professionals in the field of continuing education of biomedical physics in Lithuania. The study was based on a questionnaire survey of 309 medical professionals throughout Lithuania, 3 focus group discussions, and 18 interviews with medical and physics experts. The study showed that medical professionals lack knowledge of physics: only 15.1% of the respondents admitted that they had enough knowledge in biomedical physics to understand the functioning of the medical devices that they used, and 7.5% of respondents indicated that they had enough knowledge to understand and adopt medical devices of the new generation. Physics knowledge was valued more highly by medical professionals with scientific degrees. As regards continuing medical education, it was revealed that personal motivation (88.7%) and responsibility for patients (44.3%) were the most important motives for upgrading competencies, whereas workload (65.4%) and financial limits (45.3%) were the main obstacles. The most popular teaching methods were those based on practical work (78.9%), and the least popular was project work (27.8%). The study revealed that biomedical physics knowledge was needed in both specializations and practical work, and the most important factor for determining its need was professional aspirations. Medical professionals' understanding of medical devices, especially those of the new generation, is essentially functional in nature. Professional upgrading courses contain only fragmented biomedical physics content, and new courses should be developed jointly by experts in physics and medicine to meet the specialized needs of medical professionals.
Kauffmann, Lene Teglhus
The aim of the research is to investigate what is considered to 'work as evidence' in health promotion and how the 'evidence discourse' influences social practices in policymaking and in research. From investigating knowledge practices in the field of health promotion, I develop the concept of sound knowledge … making, which I call 'sound knowledge'. Sound knowledge is an approach to knowledge that takes into account the reflexive considerations of actors, in policymaking processes as well as in research, about what knowledge is. Seeing knowledge as sound makes connections between different ideas, concepts … and ideologies explicit. Furthermore, in relation to an anthropology of knowledge, sound knowledge also offers a reconsideration of the way anthropologists study knowledge, as it specifies that studying knowledge for anthropologists means studying what people consider as knowledge, in what circumstances …
Rinaldi, Fabio; Clematide, Simon; Marques, Hernani; Ellendorff, Tilia; Romacker, Martin; Rodriguez-Esteban, Raul
Text mining services are rapidly becoming a crucial component of various knowledge management pipelines, for example in the process of database curation, or for exploration and enrichment of biomedical data within the pharmaceutical industry. Traditional architectures, based on monolithic applications, do not offer sufficient flexibility for a wide range of use case scenarios, and therefore open architectures, as provided by web services, are attracting increased interest. We present an approach towards providing advanced text mining capabilities through web services, using a recently proposed standard for textual data interchange (BioC). The web services leverage a state-of-the-art platform for text mining (OntoGene) which has been tested in several community-organized evaluation challenges, with top-ranked results in several of them.
Wang, Jin; Sun, Xiangping; Nahavandi, Saeid; Kouzani, Abbas; Wu, Yuchuan; She, Mary
Biomedical time series clustering that automatically groups a collection of time series according to their internal similarity is of importance for medical record management and inspection such as bio-signals archiving and retrieval. In this paper, a novel framework that automatically groups a set of unlabelled multichannel biomedical time series according to their internal structural similarity is proposed. Specifically, we treat a multichannel biomedical time series as a document and extract local segments from the time series as words. We extend a topic model, i.e., the Hierarchical probabilistic Latent Semantic Analysis (H-pLSA), which was originally developed for visual motion analysis to cluster a set of unlabelled multichannel time series. The H-pLSA models each channel of the multichannel time series using a local pLSA in the first layer. The topics learned in the local pLSA are then fed to a global pLSA in the second layer to discover the categories of multichannel time series. Experiments on a dataset extracted from multichannel Electrocardiography (ECG) signals demonstrate that the proposed method performs better than previous state-of-the-art approaches and is relatively robust to the variations of parameters including length of local segments and dictionary size. Although the experimental evaluation used the multichannel ECG signals in a biometric scenario, the proposed algorithm is a universal framework for multichannel biomedical time series clustering according to their structural similarity, which has many applications in biomedical time series management. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
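The "time series as document" preprocessing that feeds the topic model can be sketched as follows. The 3-level amplitude discretization, window size, and stride below are simplified stand-ins assumed for illustration, not the paper's actual segment extraction.

```python
# Sketch of treating a biomedical time series as a "document": local
# segments are extracted with a sliding window and discretized into
# symbolic "words"; the per-channel word histogram is what a topic model
# such as pLSA would consume.
from collections import Counter

def segment_to_word(segment, lo, hi):
    """Map each sample to one of 3 symbols ('a'/'b'/'c') by amplitude band."""
    step = (hi - lo) / 3 or 1.0  # guard against a flat (constant) series
    return "".join("abc"[min(int((v - lo) / step), 2)] for v in segment)

def series_to_bow(series, window=4, stride=2):
    """Bag-of-words histogram over sliding-window segments of one channel."""
    lo, hi = min(series), max(series)
    words = [
        segment_to_word(series[i:i + window], lo, hi)
        for i in range(0, len(series) - window + 1, stride)
    ]
    return Counter(words)
```

In the two-layer H-pLSA setting, one such histogram per channel would feed the local pLSA, whose topics are then passed to the global pLSA to cluster the multichannel recordings.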
The first of the four papers in this symposium, "Knowledge Management and Knowledge Dissemination" (Wim J. Nijhof), presents two case studies exploring the strategies companies use in sharing and disseminating knowledge and expertise among employees. "A Theory of Knowledge Management" (Richard J. Torraco), develops a conceptual…
Peng, Wanxi; Lin, Zhi; Wang, Lansheng; Chang, Junbo; Gu, Fangliang; Zhu, Xiangwei
Illicium verum, whose extractives can activate the acquired immune response, is an expensive medicinal plant. However, the rich extractives in I. verum biomass have been seriously wasted owing to inefficient extraction and separation processes. In order to further utilize these biomedical resources for the acquired immune response, four extractives were obtained by SJYB extraction, and the immunologically active molecules of the SJYB extractives were then identified and analyzed by GC–MS. The result ...
Katashev, Alexei; Lancere, Linda
This volume presents the proceedings of the International Symposium on Biomedical Engineering and Medical Physics and is dedicated to the 150 anniversary of the Riga Technical University, Latvia. The content includes various hot topics in biomedical engineering and medical physics.
Bertuzzi, Stefano; Jamaleddine, Zeina
Assessing the real-world impact of biomedical research is notoriously difficult. Here, we present the framework for building a prospective science-centered information system from scratch that has been afforded by the Sidra Medical and Research Center in Qatar. This experiment is part of the global conversation on maximizing returns on research investment. Copyright © 2016 Elsevier Inc. All rights reserved.
Antonenko, Yevhenii A.; Mustetsov, Timofey N.; Hamdi, Rami R.; Małecka-Massalska, Teresa; Orshubekov, Nurbek; DzierŻak, RóŻa; Uvaysova, Svetlana
This paper describes a double compression method (DCM) for biomedical images. A comparison of the compression factors achieved by JPEG, PNG, and the developed DCM was carried out. The main purpose of the DCM is to compress medical images while preserving the key points that carry diagnostic information. To estimate the minimum compression factor, an analysis of the coding of a random-noise image is presented.
Gutzman, Karen Elizabeth; Bales, Michael E; Belter, Christopher W; Chambers, Thane; Chan, Liza; Holmes, Kristi L; Lu, Ya-Ling; Palmer, Lisa A; Reznik-Zellen, Rebecca C; Sarli, Cathy C; Suiter, Amy M; Wheeler, Terrie R
The paper provides a review of current practices related to evaluation support services reported by seven biomedical and research libraries. A group of seven libraries from the United States and Canada described their experiences with establishing evaluation support services at their libraries. A questionnaire was distributed among the libraries to elicit information as to program development, service and staffing models, campus partnerships, training, products such as tools and reports, and resources used for evaluation support services. The libraries also reported interesting projects, lessons learned, and future plans. The seven libraries profiled in this paper report a variety of service models in providing evaluation support services to meet the needs of campus stakeholders. The service models range from research center cores, partnerships with research groups, and library programs with staff dedicated to evaluation support services. A variety of products and services were described such as an automated tool to develop rank-based metrics, consultation on appropriate metrics to use for evaluation, customized publication and citation reports, resource guides, classes and training, and others. Implementing these services has allowed the libraries to expand their roles on campus and to contribute more directly to the research missions of their institutions. Libraries can leverage a variety of evaluation support services as an opportunity to successfully meet an array of challenges confronting the biomedical research community, including robust efforts to report and demonstrate tangible and meaningful outcomes of biomedical research and clinical care. These services represent a transformative direction that can be emulated by other biomedical and research libraries.
One of the dual roles of the African Journal of Biomedical Research is to serve as a conduit for academic and professional media, covering all research findings within ... Editorial Team: Founding Editor, Raphael A. Elegbe, M.D.; Managing Editor, Samuel B. Olaleye, Department of Physiology, University of Ibadan, Nigeria.