Sirintrapun, S Joseph; Zehir, Ahmet; Syed, Aijazuddin; Gao, JianJiong; Schultz, Nikolaus; Cheng, Donavan T
Translational bioinformatics and clinical research (biomedical) informatics are the primary domains related to informatics activities that support translational research. Translational bioinformatics focuses on computational techniques in genetics, molecular biology, and systems biology. Clinical research (biomedical) informatics involves the use of informatics in discovery and management of new knowledge relating to health and disease. This article details 3 projects that are hybrid applications of translational bioinformatics and clinical research (biomedical) informatics: The Cancer Genome Atlas, the cBioPortal for Cancer Genomics, and the Memorial Sloan Kettering Cancer Center clinical variants and results database, all designed to facilitate insights into cancer biology and clinical/therapeutic correlations.
Wooller, Sarah K; Benstead-Hume, Graeme; Chen, Xiangrong; Ali, Yusuf; Pearl, Frances M G
Bioinformatics approaches are becoming ever more essential in translational drug discovery both in academia and within the pharmaceutical industry. Computational exploitation of the increasing volumes of data generated during all phases of drug discovery is enabling key challenges of the process to be addressed. Here, we highlight some of the areas in which bioinformatics resources and methods are being developed to support the drug discovery pipeline. These include the creation of large data warehouses, bioinformatics algorithms to analyse 'big data' that identify novel drug targets and/or biomarkers, programs to assess the tractability of targets, and prediction of repositioning opportunities that use licensed drugs to treat additional indications.
Cohen, K Bretonnel; Hunter, Lawrence E
Text mining for translational bioinformatics is a new field with tremendous research potential. It is a subfield of biomedical natural language processing that concerns itself directly with the problem of relating basic biomedical research to clinical practice, and vice versa. Applications of text mining fall both into the category of T1 translational research-translating basic science results into new interventions-and T2 translational research, or translational research for public health. Potential use cases include better phenotyping of research subjects, and pharmacogenomic research. A variety of methods for evaluating text mining applications exist, including corpora, structured test suites, and post hoc judging. Two basic principles of linguistic structure are relevant for building text mining applications. One is that linguistic structure consists of multiple levels. The other is that every level of linguistic structure is characterized by ambiguity. There are two basic approaches to text mining: rule-based, also known as knowledge-based; and machine-learning-based, also known as statistical. Many systems are hybrids of the two approaches. Shared tasks have had a strong effect on the direction of the field. Like all translational bioinformatics software, text mining software for translational bioinformatics can be considered health-critical and should be subject to the strictest standards of quality assurance and software testing.
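The rule-based (knowledge-based) approach described above can be sketched with a couple of regular-expression rules; the patterns and the sample sentence below are illustrative inventions, not from any deployed text mining system:

```python
import re

# Hypothetical rule-based extractor: one pattern for gene-symbol-like tokens
# and one for dosage phrases, illustrating the knowledge-based approach.
GENE_PATTERN = re.compile(r"\b[A-Z][A-Z0-9]{1,5}\b")            # e.g. BRCA1, TP53
DOSE_PATTERN = re.compile(r"\b\d+(?:\.\d+)?\s?(?:mg|mcg|g)\b")  # e.g. 50 mg

def extract_mentions(text):
    """Return candidate gene symbols and dose mentions found in text."""
    genes = GENE_PATTERN.findall(text)
    doses = DOSE_PATTERN.findall(text)
    return genes, doses

genes, doses = extract_mentions("Patients with TP53 mutations received 50 mg of drug X.")
```

Note how ambiguity enters immediately: the gene pattern also fires on ordinary all-caps abbreviations, which is one reason hybrid rule/statistical systems are common.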
The field of bioinformatics has allowed the interpretation of massive amounts of biological data, ushering in the era of 'omics' to biomedical research. Its potential impact on pharmacology research is enormous and it has shown some emerging successes. A full realization of this potential, however, requires standardized data annotation for large health record databases and molecular data resources. Improved standardization will further stimulate the development of system pharmacology models, using translational bioinformatics methods. This new translational bioinformatics paradigm is highly complementary to current pharmacological research fields, such as personalized medicine, pharmacoepidemiology and drug discovery. In this review, I illustrate the application of translational bioinformatics to research in numerous pharmacology subdisciplines.
Overby, Casey Lynnette; Tarczy-Hornoch, Peter
Personalized medicine can be defined broadly as a model of healthcare that is predictive, personalized, preventive and participatory. Two US President's Council of Advisors on Science and Technology reports illustrate challenges in personalized medicine (in a 2008 report) and in use of health information technology (in a 2010 report). Translational bioinformatics is a field that can help address these challenges and is defined by the American Medical Informatics Association as "the development of storage, analytic and interpretive methods to optimize the transformation of increasingly voluminous biomedical data into proactive, predictive, preventative and participatory health." This article discusses barriers to implementing genomics applications and current progress toward overcoming barriers, describes lessons learned from early experiences of institutions engaged in personalized medicine and provides example areas for translational bioinformatics research inquiry.
In this thesis, I detail my 4-year efforts in developing bioinformatics tools and algorithms to address the growing demands of current proteomics endeavors, covering a range of facets such as large-scale protein expression profiling, charting post-translational modifications as well as
Baldi, Pierre; Brunak, Søren
Medicine will be particularly affected by the new results and the increased understanding of life at the molecular level. Bioinformatics is the development and application of computer methods for analysis, interpretation, and prediction, as well as for the design of experiments. It has emerged...
Soualmia, L F; Lecroq, T
To summarize excellent current research in the field of Bioinformatics and Translational Informatics with application in the health domain and clinical care. We provide a synopsis of the articles selected for the IMIA Yearbook 2015, from which we attempt to derive a synthetic overview of current and future activities in the field. As in the previous year, a first selection step was performed by querying MEDLINE with a list of MeSH descriptors completed by a list of terms adapted to the section. Each section editor separately evaluated the set of 1,594 articles, and the evaluation results were merged to retain 15 articles for peer review. The selection and evaluation process of this Yearbook's section on Bioinformatics and Translational Informatics yielded four excellent articles regarding data management and genome medicine that are mainly tool-based papers. In the first article, the authors present PPISURV, a tool for uncovering the role of specific genes in cancer survival outcome. The second article describes the classifier PredictSNP, which combines six well-performing tools for predicting disease-related mutations. In the third article, by presenting a high-coverage map of the human proteome using high-resolution mass spectrometry, the authors highlight the need for using mass spectrometry to complement genome annotation. The fourth article is also related to patient survival and decision support: the authors present data mining methods for large-scale datasets of past transplants, with the objective of identifying chances of survival. The current research activities still attest to the continuous convergence of Bioinformatics and Medical Informatics, with a focus this year on dedicated tools and methods to advance clinical care. Indeed, there is a need for powerful tools for managing and interpreting complex, large-scale genomic and biological datasets, but also a need for user-friendly tools developed for clinicians in their daily practice. All the recent research and
Vamathevan, J; Birney, E
Objectives: To highlight and provide insights into key developments in translational bioinformatics between 2014 and 2016. Methods: This review describes some of the most influential bioinformatics papers and resources that have been published between 2014 and 2016 as well as the national genome sequencing initiatives that utilize these resources to routinely embed genomic medicine into healthcare. Also discussed are some applications of the secondary use of patient data followed by a comprehensive view of the open challenges and emergent technologies. Results: Although data generation can be performed routinely, analyses and data integration methods still require active research and standardization to improve streamlining of clinical interpretation. The secondary use of patient data has resulted in the development of novel algorithms and has enabled a refined understanding of cellular and phenotypic mechanisms. New data storage and data sharing approaches are required to enable diverse biomedical communities to contribute to genomic discovery. Conclusion: The translation of genomics data into actionable knowledge for use in healthcare is transforming the clinical landscape in an unprecedented way. Exciting and innovative models that bridge the gap between clinical and academic research are set to open up the field of translational bioinformatics for rapid growth in a digital era.
... on molecular biology, especially DNA sequence analysis and protein structure prediction. These two issues are also central to this book. Other application areas covered here are: interpretation of spectroscopic data and discovery of structure-function relationships in DNA and proteins. Figure 1 depicts the interdependence of computer science,...
Kouskoumvekaki, Irene; Shublaq, Nour; Brunak, Søren
As both the amount of generated biological data and the processing compute power increase, computational experimentation is no longer the exclusivity of bioinformaticians, but it is moving across all biomedical domains. For bioinformatics to realize its translational potential, domain experts need access to user-friendly solutions to navigate, integrate and extract information out of biological databases, as well as to combine tools and data resources in bioinformatics workflows. In this review, we present services that assist biomedical scientists in incorporating bioinformatics tools into their research. We review recent applications of Cytoscape, BioGPS and DAVID for data visualization, integration and functional enrichment. Moreover, we illustrate the use of Taverna, Kepler, GenePattern, and Galaxy as open-access workbenches for bioinformatics workflows. Finally, we mention services...
Yukinawa, N; Ishii, S; Takenouchi, T; Oba, S
Multi-class classification is one of the fundamental tasks in bioinformatics and typically arises in cancer diagnosis studies by gene expression profiling. This article reviews two recent approaches to multi-class classification by combining multiple binary classifiers, which are formulated based on a unified framework of error-correcting output coding (ECOC). The first approach is to construct a multi-class classifier in which each binary classifier to be aggregated has a weight value to be optimally tuned based on the observed data. In the second approach, misclassification of each binary classifier is formulated as a bit inversion error with a probabilistic model by making an analogy to the context of information transmission theory. Experimental studies using various real-world datasets including cancer classification problems reveal that both of the new methods are superior or comparable to other multi-class classification methods.
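The ECOC framework described above can be illustrated with a minimal sketch: each class gets a binary code word, one binary classifier is trained per bit, and prediction decodes the predicted bit string by Hamming distance. The 2-D data, code matrix, and nearest-mean binary learners below are toy stand-ins, not the classifiers used in the reviewed papers:

```python
import math

# Class -> 3-bit code word (illustrative ECOC matrix for three classes).
CODE = {0: (1, 1, 0), 1: (1, 0, 1), 2: (0, 1, 1)}

def mean(points):
    """Componentwise mean of a list of 2-D points."""
    return tuple(sum(p[i] for p in points) / len(points) for i in range(2))

def train_bit(X, y, bit):
    """Nearest-mean binary classifier for one code bit: store the mean of
    the samples whose class code has a 1 in this bit, and of the rest."""
    pos = [x for x, c in zip(X, y) if CODE[c][bit] == 1]
    neg = [x for x, c in zip(X, y) if CODE[c][bit] == 0]
    return mean(pos), mean(neg)

def predict(models, x):
    """Predict each bit, then decode by minimum Hamming distance."""
    bits = tuple(1 if math.dist(x, p) < math.dist(x, n) else 0
                 for p, n in models)
    return min(CODE, key=lambda c: sum(b != e for b, e in zip(bits, CODE[c])))

X = [(0, 0), (1, 0), (10, 0), (10, 1), (0, 10), (1, 10)]  # expression-like toy features
y = [0, 0, 1, 1, 2, 2]
models = [train_bit(X, y, b) for b in range(3)]
```

The papers reviewed above go further, weighting the binary classifiers or modeling bit errors probabilistically rather than using plain Hamming decoding.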
Brooksbank, Cath; Morgan, Sarah L.; Rosenwald, Anne; Warnow, Tandy; Welch, Lonnie
Bioinformatics is recognized as part of the essential knowledge base of numerous career paths in biomedical research and healthcare. However, there is little agreement in the field over what that knowledge entails or how best to provide it. These disagreements are compounded by the wide range of populations in need of bioinformatics training, with divergent prior backgrounds and intended application areas. The Curriculum Task Force of the International Society of Computational Biology (ISCB) Education Committee has sought to provide a framework for training needs and curricula in terms of a set of bioinformatics core competencies that cut across many user personas and training programs. The initial competencies developed based on surveys of employers and training programs have since been refined through a multiyear process of community engagement. This report describes the current status of the competencies and presents a series of use cases illustrating how they are being applied in diverse training contexts. These use cases are intended to demonstrate how others can make use of the competencies and engage in the process of their continuing refinement and application. The report concludes with a consideration of remaining challenges and future plans. PMID:29390004
Background With over 20 million formalin-fixed, paraffin-embedded (FFPE) tissue samples archived each year in the United States alone, archival tissues remain a vast and under-utilized resource in the genomic study of cancer. Technologies have recently been introduced for whole-transcriptome amplification and microarray analysis of degraded mRNA fragments from FFPE samples, and studies of these platforms have only recently begun to enter the published literature. Results The Emerging Technologies for Translational Bioinformatics symposium on gene expression profiling for archival tissues featured presentations of two large-scale FFPE expression profiling studies (each involving over 1,000 samples), overviews of several smaller studies, and representatives from three leading companies in the field (Illumina, Affymetrix, and NuGEN). The meeting highlighted challenges in the analysis of expression data from archival tissues and strategies being developed to overcome them. In particular, speakers reported higher rates of clinical sample failure (from 10% to 70%) than are typical for fresh-frozen tissues, as well as more frequent probe failure for individual samples. The symposium program is available at http://www.hsph.harvard.edu/ffpe. Conclusions Multiple solutions now exist for whole-genome expression profiling of FFPE tissues, including both microarray- and sequencing-based platforms. Several studies have reported their successful application, but substantial challenges and risks still exist. Symposium speakers presented novel methodology for analysis of FFPE expression data and suggestions for improving data recovery and quality assessment in pre-analytical stages. Research presentations emphasized the need for careful study design, including the use of pilot studies, replication, and randomization of samples among batches, as well as careful attention to data quality control. Regardless of any limitations in quantitative transcriptomics for
Varma, B Sharat Chandra; Balakrishnan, M
This book presents an evaluation methodology to design future FPGA fabrics incorporating hard embedded blocks (HEBs) to accelerate applications. This methodology will be useful for selection of blocks to be embedded into the fabric and for evaluating the performance gain that can be achieved by such an embedding. The authors illustrate the use of their methodology by studying the impact of HEBs on two important bioinformatics applications: protein docking and genome assembly. The book also explains how the respective HEBs are designed and how hardware implementation of the application is done using these HEBs. It shows that significant speedups can be achieved over pure software implementations by using such FPGA-based accelerators. The methodology presented in this book may also be used for designing HEBs for accelerating software implementations in other domains besides bioinformatics. This book will prove useful to students, researchers, and practicing engineers alike.
The aim of this review is to discuss the importance of bioinformatics and emphasize the need to acquire bioinformatics training and skills so as to maximize its potential for improved delivery of animal health. In this review, bioinformatics is introduced, challenges to effective animal disease diagnosis, prevention and control, ...
de Knikker, Remko; Guo, Youjun; Li, Jin-Long; Kwan, Albert K H; Yip, Kevin Y; Cheung, David W; Cheung, Kei-Hoi
Very often genome-wide data analysis requires the interoperation of multiple databases and analytic tools. A large number of genome databases and bioinformatics applications are available through the web, but it is difficult to automate interoperation because: 1) the platforms on which the applications run are heterogeneous, 2) their web interface is not machine-friendly, 3) they use a non-standard format for data input and output, 4) they do not exploit standards to define application interface and message exchange, and 5) existing protocols for remote messaging are often not firewall-friendly. To overcome these issues, web services have emerged as a standard XML-based model for message exchange between heterogeneous applications. Web services engines have been developed to manage the configuration and execution of a web services workflow. To demonstrate the benefit of using web services over traditional web interfaces, we compare the two implementations of HAPI, a gene expression analysis utility developed by the University of California San Diego (UCSD) that allows visual characterization of groups or clusters of genes based on the biomedical literature. This utility takes a set of microarray spot IDs as input and outputs a hierarchy of MeSH Keywords that correlates to the input and is grouped by Medical Subject Heading (MeSH) category. While the HTML output is easy for humans to visualize, it is difficult for computer applications to interpret semantically. To facilitate the capability of machine processing, we have created a workflow of three web services that replicates the HAPI functionality. These web services use document-style messages, which means that messages are encoded in an XML-based format. We compared three approaches to the implementation of an XML-based workflow: a hard coded Java application, Collaxa BPEL Server and Taverna Workbench. The Java program functions as a web services engine and interoperates with these web services using a web
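The document-style messaging described above can be sketched with Python's standard XML library: the request is an XML document rather than a set of RPC parameters, so any web services engine can route and transform it. The element names below are hypothetical illustrations, not the actual HAPI message schema:

```python
import xml.etree.ElementTree as ET

def build_request(spot_ids):
    """Serialize a list of microarray spot IDs into a document-style
    XML message (hypothetical element names)."""
    root = ET.Element("hapiRequest")
    for sid in spot_ids:
        ET.SubElement(root, "spotId").text = sid
    return ET.tostring(root, encoding="unicode")

def parse_request(xml_text):
    """Recover the spot IDs from the XML message on the service side."""
    return [e.text for e in ET.fromstring(xml_text).findall("spotId")]

msg = build_request(["SPOT_001", "SPOT_002"])
```

Unlike the HTML output discussed above, a message like this can be consumed semantically by the next service in the workflow without screen scraping.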
Tucker, Allan; Duplisea, Daniel
There has been a huge effort in the advancement of analytical techniques for molecular biological data over the past decade. This has led to many novel algorithms that are specialized to deal with data associated with biological phenomena, such as gene expression and protein interactions. In contrast, ecological data analysis has remained focused to some degree on off-the-shelf statistical techniques though this is starting to change with the adoption of state-of-the-art methods, where few assumptions can be made about the data and a more explorative approach is required, for example, through the use of Bayesian networks. In this paper, some novel bioinformatics tools for microarray data are discussed along with their ‘crossover potential’ with an application to fisheries data. In particular, a focus is made on the development of models that identify functionally equivalent species in different fish communities with the aim of predicting functional collapse. PMID:22144390
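The explorative Bayesian-network modeling mentioned above can be illustrated by inference-by-enumeration on a tiny invented network: predator abundance (A) influences prey abundance (B), and both influence the risk of functional collapse (C). The structure and probabilities are made up for illustration and are not from the paper:

```python
from itertools import product

# Conditional probability tables for the invented network.
P_A = {True: 0.3, False: 0.7}                                   # P(A)
P_B = {True: {True: 0.2, False: 0.8},
       False: {True: 0.7, False: 0.3}}                          # P(B | A)
P_C = {(True, True): 0.05, (True, False): 0.4,
       (False, True): 0.3, (False, False): 0.9}                 # P(C=True | A, B)

def p_collapse():
    """Marginal P(C=True), summing over all states of the parents."""
    return sum(P_A[a] * P_B[a][b] * P_C[(a, b)]
               for a, b in product([True, False], repeat=2))
```

Real applications would learn both structure and probabilities from community data; enumeration simply makes the semantics of the joint distribution explicit.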
Bioinformatics has advanced the course of research and future veterinary vaccine development because it has provided new tools for identification of vaccine targets from sequenced biological data of organisms. In Nigeria, there is a lack of bioinformatics training in the universities, except for short training courses in which ...
Daniluk, Paweł; Wilczyński, Bartek; Lesyng, Bogdan
One of the requirements for a successful scientific tool is its availability. Developing a functional web service, however, is usually considered a mundane and ungratifying task, and is quite often neglected. When publishing bioinformatic applications, such an attitude puts an additional burden on reviewers, who have to cope with poorly designed interfaces in order to assess the quality of the presented methods, and it impairs the actual usefulness to the scientific community at large. In this note we present WeBIAS, a simple, self-contained solution for making command-line programs accessible through web forms. It comprises a web portal capable of serving several applications and backend schedulers which carry out computations. The server handles user registration and authentication, stores queries and results, and provides a convenient administrator interface. WeBIAS is implemented in Python and available under the GNU Affero General Public License. It has been developed and tested on GNU/Linux-compatible platforms, covering a vast majority of operational WWW servers. Since it is written in pure Python, it should be easy to deploy on all other platforms supporting Python (e.g. Windows, Mac OS X). Documentation and source code, as well as a demonstration site, are available at http://bioinfo.imdik.pan.pl/webias . WeBIAS has been designed specifically with ease of installation and deployment of services in mind. Setting up a simple application requires minimal effort, yet it is possible to create visually appealing, feature-rich interfaces for query submission and presentation of results.
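The core of any such gateway, taking validated form input, running the wrapped command-line program, and capturing its output for the results page, can be sketched in a few lines. The job-runner below is a generic illustration, not WeBIAS's actual scheduler, and the wrapped command (Python printing a result) is a stand-in for a real bioinformatics tool:

```python
import subprocess
import sys

def run_job(args, timeout=60):
    """Run the wrapped CLI tool and return (exit_code, stdout, stderr),
    the pieces a results page would need to display."""
    proc = subprocess.run(args, capture_output=True, text=True, timeout=timeout)
    return proc.returncode, proc.stdout, proc.stderr

# Illustrative job: a command that prints one line, standing in for a tool.
code, out, err = run_job([sys.executable, "-c", "print('result: 42')"])
```

A production service adds what the abstract lists on top of this kernel: queueing, authentication, stored results, and an administrator interface.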
Precision medicine (PM) requires the delivery of individually adapted medical care based on the genetic characteristics of each patient and his/her tumor. The last decade witnessed the development of high-throughput technologies such as microarrays and next-generation sequencing which paved the way to PM in the field of oncology. While the cost of these technologies decreases, we are facing an exponential increase in the amount of data produced. Our ability to use this information in daily practice relies strongly on the availability of an efficient bioinformatics system that assists in the translation of knowledge from the bench towards molecular targeting and diagnosis. Clinical trials and routine diagnoses constitute different approaches, both requiring a strong bioinformatics environment capable of (i) warranting the integration and the traceability of data, (ii) ensuring the correct processing and analyses of genomic data and (iii) applying well-defined and reproducible procedures for workflow management and decision-making. To address these issues, a seamless information system was developed at Institut Curie which facilitates data integration and tracks the processing of individual samples in real time. Moreover, computational pipelines were developed to reliably identify genomic alterations and mutations from the molecular profiles of each patient. After rigorous quality control, a meaningful report is delivered to the clinicians and biologists for the therapeutic decision. The complete bioinformatics environment and the key points of its implementation are presented in the context of the SHIVA clinical trial, a multicentric randomized phase II trial comparing targeted therapy based on tumor molecular profiling versus conventional therapy in patients with refractory cancer. The numerous challenges faced in practice during the setting up and conduct of this trial are discussed as an illustration of PM application.
Rocha, Miguel; Fdez-Riverola, Florentino; Paz, Juan
This volume presents recent practical applications of Computational Biology and Bioinformatics. It contains the proceedings of the 9th International Conference on Practical Applications of Computational Biology & Bioinformatics, held at the University of Salamanca, Spain, on June 3rd-5th, 2015. The International Conference on Practical Applications of Computational Biology & Bioinformatics (PACBB) is an annual international meeting dedicated to emerging and challenging applied research in Bioinformatics and Computational Biology. Biological and biomedical research are increasingly driven by experimental techniques that challenge our ability to analyse, process and extract meaningful knowledge from the underlying data. The impressive capabilities of next generation sequencing technologies, together with novel and ever evolving distinct types of omics data technologies, have put an increasingly complex set of challenges for the growing fields of Bioinformatics and Computational Biology. The analysis o...
Due to their sessile nature, plants require a tight regulation of energy homeostasis in order to survive and reproduce in changing environmental conditions. Regulation of gene expression is controlled at several levels, from transcription to translation and beyond. Sugars themselves can act directly
Rezig, Slim; Sakhri, Saber
Salmonellae are the agents mainly responsible for frequent food-borne gastrointestinal diseases. Their detection using classical methods is laborious, and results take a long time to obtain. In this context, we set out to develop a detection technique for the invA virulence gene, found in the majority of Salmonella species. Two primer pairs were designed and verified with bioinformatics programs; after PCR amplification with these specific primers, they proved to be highly specific and sensitive for the detection of the invA gene.
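At its simplest, an in-silico primer check of the kind mentioned above amounts to searching a template sequence for the forward primer and for the reverse complement of the reverse primer. The sequences below are invented for illustration; they are not the actual invA primers:

```python
# Translation table for complementing DNA bases.
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    return seq.translate(COMP)[::-1]

def primer_sites(template, fwd, rev):
    """Count binding sites on the + strand: the forward primer as given,
    the reverse primer via its reverse complement."""
    return template.count(fwd), template.count(revcomp(rev))

# Invented template containing exactly one site for each primer.
template = "ATGCGTACGTTTGATATCCGGCAAA"
fwd, rev = "ATGCGTACG", "GCCGGATATC"  # hypothetical primer pair
```

Real specificity screening (e.g. against whole genomes) also allows mismatches and checks melting temperatures, but the exact-match count is the core idea.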
Liu, Lin; Tang, Lin; Dong, Wen; Yao, Shaowen; Zhou, Wei
With the rapid accumulation of biological datasets, machine learning methods designed to automate data analysis are urgently needed. In recent years, so-called topic models that originated from the field of natural language processing have been receiving much attention in bioinformatics because of their interpretability. Our aim was to review the application and development of topic models for bioinformatics. This paper starts with the description of a topic model, with a focus on the understanding of topic modeling. A general outline is provided on how to build an application in a topic model and how to develop a topic model. Meanwhile, the literature on application of topic models to biological data was searched and analyzed in depth. According to the types of models and the analogy between the concept of document-topic-word and a biological object (as well as the tasks of a topic model), we categorized the related studies and provided an outlook on the use of topic models for the development of bioinformatics applications. Topic modeling is a useful method (in contrast to the traditional means of data reduction in bioinformatics) and enhances researchers' ability to interpret biological information. Nevertheless, due to the lack of topic models optimized for specific biological data, the studies on topic modeling in biological data still have a long and challenging road ahead. We believe that topic models are a promising method for various applications in bioinformatics research.
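The document-topic-word machinery can be sketched with a minimal collapsed Gibbs sampler for a two-topic model, where the 'documents' are invented gene lists standing in for biological objects (e.g. samples as documents, genes as words). The corpus, hyperparameters, and iteration count are arbitrary toy values:

```python
import random
from collections import defaultdict

def lda_gibbs(docs, n_topics=2, alpha=0.1, beta=0.1, iters=200, seed=0):
    """Collapsed Gibbs sampling for LDA on a toy corpus.
    Returns document-topic counts and topic-word counts."""
    rng = random.Random(seed)
    vocab_size = len({w for d in docs for w in d})
    z = [[rng.randrange(n_topics) for _ in d] for d in docs]   # topic assignments
    ndk = [[0] * n_topics for _ in docs]                       # doc-topic counts
    nkw = [defaultdict(int) for _ in range(n_topics)]          # topic-word counts
    nk = [0] * n_topics                                        # topic totals
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]                                    # remove current assignment
                ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                weights = [(ndk[d][t] + alpha) * (nkw[t][w] + beta) /
                           (nk[t] + beta * vocab_size) for t in range(n_topics)]
                k = rng.choices(range(n_topics), weights)[0]   # resample topic
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    return ndk, nkw

docs = [["geneA", "geneA", "geneB"], ["geneC", "geneC", "geneD"],
        ["geneA", "geneB", "geneB"], ["geneD", "geneC", "geneD"]]
ndk, nkw = lda_gibbs(docs)
```

The interpretability the review emphasizes comes from inspecting `nkw`: each topic is a weighted list of genes, directly readable by a domain expert.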
Hu, Rongdong; Liu, Guangming; Jiang, Jingfei; Wang, Lixin
Cloud computing has started to change the way bioinformatics research is carried out. Researchers who have taken advantage of this technology can process larger amounts of data and speed up scientific discovery. The variability in data volume results in variable computing requirements. Therefore, bioinformatics researchers are pursuing more reliable and efficient methods for conducting sequencing analyses. This paper proposes an automated resource provisioning method, G2LC, for bioinformatics applications in IaaS. It enables applications to output results in real time. Its main purpose is to guarantee application performance while improving resource utilization. Real sequence searching data from BLAST is used to evaluate the effectiveness of G2LC. Experimental results show that G2LC guarantees application performance while saving up to 20.14% of resources.
Yang, Jack Y; Yang, Mary Qu; Arabnia, Hamid R; Deng, Youping
learning to recognize function in 4D), Dr. Mary Qu Yang (IEEE BIBM workshop keynote lecturer on new initiatives of detecting microscopic disease using machine learning and molecular biology, http://ieeexplore.ieee.org/servlet/opac?punumber=4425386) and Dr. Jack Y. Yang (IEEE BIBM workshop keynote lecturer on data mining and knowledge discovery in translational medicine) from the first IEEE Computer Society BioInformatics and BioMedicine (IEEE BIBM) international conference and workshops, November 2-4, 2007, Silicon Valley, California, USA.
Learn how to apply rough-fuzzy computing techniques to solve problems in bioinformatics and medical image processing Emphasizing applications in bioinformatics and medical image processing, this text offers a clear framework that enables readers to take advantage of the latest rough-fuzzy computing techniques to build working pattern recognition models. The authors explain step by step how to integrate rough sets with fuzzy sets in order to best manage the uncertainties in mining large data sets. Chapters are logically organized according to the major phases of pattern recognition systems dev
Seibel, Philipp N; Krüger, Jan; Hartmeier, Sven; Schwarzer, Knut; Löwenthal, Kai; Mersch, Henning; Dandekar, Thomas; Giegerich, Robert
Today, there is a growing need in bioinformatics to combine available software tools into chains, thus building complex applications from existing single-task tools. To create such workflows, the tools involved have to be able to work with each other's data; therefore, a common set of well-defined data formats is needed. Unfortunately, current bioinformatic tools use a great variety of heterogeneous formats. Acknowledging the need for common formats, the Helmholtz Open BioInformatics Technology network (HOBIT) identified several basic data types used in bioinformatics and developed appropriate format descriptions, formally defined by XML schemas, and incorporated them in a Java library (BioDOM). These schemas currently cover sequence, sequence alignment, RNA secondary structure and RNA secondary structure alignment formats in a form that is independent of any specific program, thus enabling seamless interoperation of different tools. All XML formats are available at http://bioschemas.sourceforge.net, the BioDOM library can be obtained at http://biodom.sourceforge.net. The HOBIT XML schemas and the BioDOM library simplify adding XML support to newly created and existing bioinformatic tools, enabling these tools to interoperate seamlessly in workflow scenarios.
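The interoperability idea of exchanging sequence data through a well-defined XML format can be sketched with Python's standard library. The element names below are invented for illustration and do not reproduce the actual HOBIT schemas or the BioDOM API.

```python
import xml.etree.ElementTree as ET

def sequence_to_xml(seq_id, description, residues):
    """Serialize a sequence record into a minimal, schema-like XML document."""
    root = ET.Element("sequenceRecord")
    ET.SubElement(root, "id").text = seq_id
    ET.SubElement(root, "description").text = description
    ET.SubElement(root, "residues", {"length": str(len(residues))}).text = residues
    return ET.tostring(root, encoding="unicode")

def xml_to_sequence(xml_text):
    """Parse the record back out; any tool agreeing on the format can do this."""
    root = ET.fromstring(xml_text)
    return (root.findtext("id"), root.findtext("description"),
            root.findtext("residues"))

doc = sequence_to_xml("P0A7G6", "RecA protein fragment", "MAIDENKQ")
record = xml_to_sequence(doc)
```

A real HOBIT schema would additionally be validated against its XSD, so that every tool in a workflow can reject malformed input early.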
The paper presents a discussion of the results of extensive empirical research into efficient methods of educating and training translators of LSP (language for special purposes) texts. The methodology is based on using popular LSP texts in the respective fields as one of the main media for translator training. The aim of the paper is to investigate the efficiency of this methodology in developing the thematic, linguistic and cultural competences of the students, following Bloom's revised taxonomy and the European Master in Translation Network (EMT) translator training competences. The methodology has been tested on the students of a professional Master study programme called Technical Translation implemented by the Institute of Applied Linguistics, Riga Technical University, Latvia. The group of students included representatives of different nationalities, translating from English into Latvian, Russian and French. Analysis of popular LSP texts provides an opportunity to structure student background knowledge and expand it to account for linguistic innovation. Application of popular LSP texts instead of purely technical or scientific texts characterised by neutral style and rigid genre conventions provides an opportunity for student translators to develop advanced text processing and decoding skills, to develop awareness of the expressive resources of the source and target languages, and to develop an understanding of socio-pragmatic language use.
Background: Discriminative models are designed to naturally address classification tasks. However, some applications require the inclusion of grammar rules, and in these cases generative models, such as Hidden Markov Models (HMMs) and Stochastic Grammars, are routinely applied. Results: We introduce Grammatical-Restrained Hidden Conditional Random Fields (GRHCRFs) as an extension of Hidden Conditional Random Fields (HCRFs). GRHCRFs, while preserving the discriminative character of HCRFs, can assign labels in agreement with the production rules of a defined grammar. The main GRHCRF novelty is the possibility of including prior knowledge of the problem in HCRFs by means of a defined grammar. Our current implementation allows regular grammar rules. We test our GRHCRF on a typical biosequence labeling problem: the prediction of the topology of prokaryotic outer-membrane proteins. Conclusion: We show that in a typical biosequence labeling problem the GRHCRF performs better than CRF models of the same complexity, indicating that GRHCRFs can be useful tools for biosequence analysis applications. Availability: GRHCRF software is available under the GPLv3 licence at http://www.biocomp.unibo.it/~savojard/biocrf-0.9.tar.gz.
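The core idea of restricting predictions to a grammar can be sketched with a grammar-constrained Viterbi decoder. This is a simplified HMM-style analogue of the approach, not the GRHCRF model itself; the labels, emission scores, and allowed transitions below are invented to mimic a membrane-topology labeling task.

```python
import math

def constrained_viterbi(emissions, labels, allowed):
    """Viterbi decoding in which only transitions permitted by a regular
    grammar (the `allowed` pairs) may appear in the predicted label path."""
    score = {lab: math.log(emissions[0].get(lab, 1e-12)) for lab in labels}
    back = []
    for frame in emissions[1:]:
        new_score, ptr = {}, {}
        for lab in labels:
            cands = [(score[prev], prev) for prev in labels if (prev, lab) in allowed]
            if cands:
                best, prev = max(cands)
                new_score[lab] = best + math.log(frame.get(lab, 1e-12))
                ptr[lab] = prev
            else:
                new_score[lab], ptr[lab] = float("-inf"), None
        back.append(ptr)
        score = new_score
    lab = max(score, key=score.get)
    path = [lab]
    for ptr in reversed(back):   # backtrace
        lab = ptr[lab]
        path.append(lab)
    return path[::-1]

labels = ["i", "M", "o"]  # inside loop / membrane segment / outside loop
# Grammar: no direct inside<->outside jump without crossing the membrane.
allowed = {("i", "i"), ("i", "M"), ("M", "M"), ("M", "i"), ("M", "o"),
           ("o", "o"), ("o", "M")}
emissions = [{"i": 0.9, "M": 0.05, "o": 0.05},
             {"i": 0.8, "M": 0.1, "o": 0.1},
             {"i": 0.1, "M": 0.2, "o": 0.7},   # noisy frame favouring "o"
             {"i": 0.1, "M": 0.8, "o": 0.1},
             {"i": 0.05, "M": 0.15, "o": 0.8}]
path = constrained_viterbi(emissions, labels, allowed)
```

Note the third position: taken alone it favours "o", but the grammar forbids an i-to-o jump, so the decoder routes the path through "M", exactly the kind of production-rule agreement the GRHCRF enforces.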
Taylor, Ronald C
Bioinformatics researchers are now confronted with analysis of ultra large-scale data sets, a problem that will only increase at an alarming rate in coming years. Recent developments in open source software, that is, the Hadoop project and associated software, provide a foundation for scaling to petabyte scale data warehouses on Linux clusters, providing fault-tolerant parallelized analysis on such data using a programming style named MapReduce. An overview is given of the current usage within the bioinformatics community of Hadoop, a top-level Apache Software Foundation project, and of associated open source software projects. The concepts behind Hadoop and the associated HBase project are defined, and current bioinformatics software that employ Hadoop is described. The focus is on next-generation sequencing, as the leading application area to date. Hadoop and the MapReduce programming paradigm already have a substantial base in the bioinformatics community, especially in the field of next-generation sequencing analysis, and such use is increasing. This is due to the cost-effectiveness of Hadoop-based analysis on commodity Linux clusters, and in the cloud via data upload to cloud vendors who have implemented Hadoop/HBase; and due to the effectiveness and ease-of-use of the MapReduce method in parallelization of many data analysis algorithms.
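The MapReduce style named above can be illustrated without a cluster: a mapper emits key-value pairs, a shuffle groups them by key, and a reducer aggregates each group. The single-process k-mer counting sketch below is illustrative only; Hadoop would run the same three phases fault-tolerantly across many machines.

```python
from collections import defaultdict

def map_phase(read, k=3):
    """Mapper: emit (k-mer, 1) pairs from one sequencing read."""
    return [(read[i:i + k], 1) for i in range(len(read) - k + 1)]

def shuffle(pairs):
    """Group intermediate pairs by key, as Hadoop's shuffle/sort phase does."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reducer: sum the counts for each k-mer."""
    return {key: sum(values) for key, values in groups.items()}

reads = ["GATTACA", "TTACAGA"]
pairs = [p for read in reads for p in map_phase(read)]
counts = reduce_phase(shuffle(pairs))
```

Because the mapper touches each read independently and the reducer touches each key independently, both phases parallelize trivially, which is why so many sequencing pipelines fit this paradigm.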
Suciu, Radu M; Aydin, Emir; Chen, Brian E
With the exponential increase and widespread availability of genomic, transcriptomic, and proteomic data, accessing these '-omics' data is becoming increasingly difficult. The current resources for accessing and analyzing these data have been created to perform highly specific functions intended for specialists, and thus typically emphasize functionality over user experience. We have developed a web-based application, GeneDig.org, that gives any general user access to genomic information with ease and efficiency. GeneDig allows for searching and browsing genes and genomes, while a dynamic navigator displays genomic, RNA, and protein information simultaneously for co-navigation. We demonstrate that our application allows access to genomic information more than five times faster than any currently available method. We have developed GeneDig as a platform for bioinformatics integration with usability as its central design principle. This platform will introduce genomic navigation to broader audiences while aiding the bioinformatics analyses performed in everyday biology research.
Stocker, Gernot; Rieder, Dietmar; Trajanoski, Zlatko
ClusterControl is a web interface that simplifies distributing and monitoring bioinformatics applications on Linux cluster systems. We have developed a modular concept that enables integration of command-line-oriented programs into the application framework of ClusterControl. The system facilitates integration of different applications accessed through one interface and executed on a distributed cluster system. The package is based on freely available technologies such as Apache as the web server, PHP as the server-side scripting language and OpenPBS as the queuing system, and is available free of charge for academic and non-profit institutions. http://genome.tugraz.at/Software/ClusterControl
Luscombe, Nicholas; Fdez-Riverola, Florentino; Rodríguez, Juan; Practical Applications of Computational Biology & Bioinformatics
The growth in the Bioinformatics and Computational Biology fields over the last few years has been remarkable. The analysis of Next Generation Sequencing datasets needs new algorithms and approaches from fields such as Databases, Statistics, Data Mining, Machine Learning, Optimization, Computer Science and Artificial Intelligence. Systems Biology has also been emerging as an alternative to the reductionist view that dominated biological research in recent decades. This book presents the results of the 6th International Conference on Practical Applications of Computational Biology & Bioinformatics, held at the University of Salamanca, Spain, 28-30th March 2012, which brought together interdisciplinary scientists with a strong background in the biological and computational sciences.
Wu, Cen; Ma, Shuangge
Vast amounts of data have been and are being generated in bioinformatics studies. In the analysis of such data, standard modeling approaches can be challenged by heavy-tailed errors and outliers in response variables, contamination in predictors (which may be caused by, for instance, technical problems in microarray gene expression studies), model mis-specification and other issues. Robust methods are needed to tackle these challenges. When there are a large number of predictors, variable selection can be as important as estimation. As a generic variable selection and regularization tool, penalization has been extensively adopted. In this article, we provide a selective review of robust penalized variable selection approaches especially designed for high-dimensional data from bioinformatics and biomedical studies. We discuss the robust loss functions, penalty functions and computational algorithms. The theoretical properties and implementation are also briefly examined. Application examples of the robust penalization approaches in representative bioinformatics and biomedical studies are also illustrated. © The Author 2014. Published by Oxford University Press.
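One member of the family reviewed here, a Huber loss combined with an L1 (lasso) penalty fitted by proximal gradient descent, can be sketched in a few lines. This is a minimal illustrative estimator with invented data, not any specific method from the review: the Huber tails bound an outlier's gradient, and the soft-thresholding step performs the variable selection.

```python
def huber_grad(r, delta=1.0):
    """Derivative of the Huber loss: quadratic near zero, linear in the
    tails, so a single gross outlier contributes only a bounded gradient."""
    return r if abs(r) <= delta else (delta if r > 0 else -delta)

def soft_threshold(w, t):
    """Proximal operator of the L1 penalty (lasso shrinkage)."""
    if w > t:
        return w - t
    if w < -t:
        return w + t
    return 0.0

def robust_lasso(X, y, lam=0.1, lr=0.01, iters=2000):
    """Huber loss + L1 penalty, fitted by proximal gradient descent."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        grad = [0.0] * p
        for xi, yi in zip(X, y):
            g = huber_grad(sum(wj * xj for wj, xj in zip(w, xi)) - yi)
            for j in range(p):
                grad[j] += g * xi[j] / n
        w = [soft_threshold(wj - lr * gj, lr * lam) for wj, gj in zip(w, grad)]
    return w

# True signal uses only the first feature (slope 2); the last response is a
# gross outlier that would badly bias an ordinary least-squares lasso.
X = [[1, 0.3, 0.1], [2, -0.2, 0.2], [3, 0.1, -0.1], [4, -0.1, 0.0], [5, 0.0, 0.1]]
y = [2, 4, 6, 8, 50]
w = robust_lasso(X, y)
```

The fitted slope stays near the true value of 2 despite the outlier, and the two noise features are shrunk to zero, illustrating robustness and selection acting together.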
Oluwagbemi, Olugbenga O; Adewumi, Adewole; Esuruoso, Abimbola
Computational biology and bioinformatics are gradually gaining ground in Africa and other developing nations of the world. However, in these countries, challenges to computational biology and bioinformatics education include inadequate infrastructure and a lack of readily available complementary and motivational tools to support learning as well as research. This has lowered the morale of many promising undergraduates, postgraduates and researchers who might otherwise aspire to undertake future study in these fields. In this paper, we developed and described MACBenAbim (Multi-platform Mobile Application for Computational Biology and Bioinformatics), a flexible, user-friendly tool to search for, define and describe the meanings of key terms in computational biology and bioinformatics, thus expanding the frontiers of knowledge of its users. The tool is also capable of visualizing results in a mobile multi-platform context. MACBenAbim is available from the authors for non-commercial purposes.
Lin, Chun-Hung Richard; Wen, Chun-Hao; Lin, Ying-Chih; Tung, Kuang-Yuan; Lin, Rung-Wei; Lin, Chun-Yuan
Bioinformatics has advanced from in-house computing infrastructure to cloud computing to tackle the vast quantity of biological data. This advance enables a large number of collaborative researchers to share their work around the world. In view of that, retrieving biological data over the internet becomes more and more difficult because of its explosive growth and frequent changes. Various efforts have been made to address the problems of data discovery and delivery in the cloud framework, but most of them are hindered by the reliance on a MapReduce master server to track all available data. In this paper, we propose an alternative approach, called PRKad, which exploits a Peer-to-Peer (P2P) model to achieve efficient data discovery and delivery. PRKad is a Kademlia-based implementation with Round-Trip Time (RTT) as the associated key, and it locates data using a Distributed Hash Table (DHT) and the XOR metric. The simulation results show that PRKad retrieves data with low link latency. As an interdisciplinary application of P2P computing for bioinformatics, PRKad also provides good scalability for serving a greater number of users in dynamic cloud environments.
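Kademlia's XOR metric, which PRKad builds on, is simple to state: the distance between two IDs is their bitwise XOR interpreted as an integer, and a lookup walks toward the nodes XOR-closest to a key. The sketch below shows only the metric and nearest-node selection with toy 4-bit IDs; it omits routing tables, RTT keying, and everything else a real DHT needs.

```python
def xor_distance(a: int, b: int) -> int:
    """Kademlia's distance metric: bitwise XOR of two node/key IDs.
    It satisfies d(a, a) = 0 and d(a, b) = d(b, a), and is unidirectional:
    for any ID and distance there is exactly one ID at that distance."""
    return a ^ b

def closest_nodes(key: int, nodes, k=2):
    """Return the k node IDs XOR-closest to the key (a lookup's target set)."""
    return sorted(nodes, key=lambda n: xor_distance(n, key))[:k]

nodes = [0b0001, 0b0100, 0b0101, 0b1100]
near = closest_nodes(0b0111, nodes)   # who should store/serve key 0b0111?
```

Because closeness is determined by shared high-order bits, each lookup step can halve the remaining ID space, giving the logarithmic hop counts that make DHT-based discovery scale.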
Liotta, L.A.; Petricoin, E.; Garaci, E.; De Maria, R.; Belluco, C.
Deriving public benefit from basic biomedical research requires a dedicated and highly coordinated effort between basic scientists, physicians, bioinformaticians, clinical trial coordinators, MD and PhD trainees and fellows, and a host of other skilled participants. The Istituto Superiore di Sanita/George Mason University US-Italy Oncoproteomics program, established in 2005, is a successful example of a synergistic creative collaboration between basic scientists and clinical investigators conducting translational research. This program focuses on the application of the new field of proteomics to three urgent and fundamental clinical needs in cancer medicine: (1) biomarkers for early diagnosis of cancer, when it is still treatable; (2) individualizing patient therapy with molecular targeted inhibitors that block signal pathways driving cancer pathogenesis; and (3) cancer progenitor cells (CSCs): when do the lethal progenitors of cancer first emerge, and how can we treat these CSCs with molecular targeted inhibitors?
Mayer, Gerhard; Quast, Christian; Felden, Janine; Lange, Matthias; Prinz, Manuel; Pühler, Alfred; Lawerenz, Chris; Scholz, Uwe; Glöckner, Frank Oliver; Müller, Wolfgang; Marcus, Katrin; Eisenacher, Martin
Sustainable noncommercial bioinformatics infrastructures are a prerequisite to use and take advantage of the potential of big data analysis for research and economy. Consequently, funders, universities and institutes as well as users ask for a transparent value model for the tools and services offered. In this article, a generally applicable lightweight method is described by which bioinformatics infrastructure projects can estimate the value of tools and services offered without determining exactly the total costs of ownership. Five representative scenarios for value estimation, from a rough estimation to a detailed breakdown of costs, are presented. To account for the diversity in bioinformatics applications and services, the notion of service-specific 'service provision units' is introduced together with the factors influencing them and the main underlying assumptions for these 'value influencing factors'. Special attention is given to how to handle personnel costs and indirect costs such as electricity. Four examples are presented for the calculation of the value of tools and services provided by the German Network for Bioinformatics Infrastructure (de.NBI): one for tool usage, one for (Web-based) database analyses, one for consulting services and one for bioinformatics training events. Finally, from the discussed values, the costs of direct funding and the costs of payment of services by funded projects are calculated and compared. © The Author 2017. Published by Oxford University Press.
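The arithmetic behind such a lightweight value estimate can be shown in a few lines. All figures below are hypothetical placeholders, not de.NBI numbers, and the overhead-fraction simplification is exactly the kind of assumption the method allows instead of a full total-cost-of-ownership audit.

```python
def service_unit_value(personnel_cost, overhead_rate, units_per_year):
    """Rough value per 'service provision unit': loaded annual cost divided
    by the number of units (analyses, trainings, ...) delivered per year.

    overhead_rate folds indirect costs (electricity, administration) into
    personnel cost as a simple fraction -- a deliberate simplification."""
    loaded_cost = personnel_cost * (1 + overhead_rate)
    return loaded_cost / units_per_year

# Hypothetical example: 0.5 FTE at 60,000 per year, 25% overhead,
# 10,000 web-based database analyses served per year.
value_per_analysis = service_unit_value(0.5 * 60000, 0.25, 10000)
```

Comparing such per-unit values across the direct-funding and pay-per-service scenarios is then a matter of plugging in the respective cost assumptions.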
Ashokkumar A. Patel
Background: The Pennsylvania Cancer Alliance Bioinformatics Consortium (PCABC, http://www.pcabc.upmc.edu) is one of the first major project-based initiatives stemming from the Pennsylvania Cancer Alliance and was funded for four years by the Department of Health of the Commonwealth of Pennsylvania. The objective was to initiate a prototype biorepository and bioinformatics infrastructure with a robust data warehouse by developing (1) a statewide data model for bioinformatics and a repository of serum and tissue samples; (2) a data model for biomarker data storage; and (3) a public access website for disseminating research results and bioinformatics tools. The members of the Consortium cooperate closely, exploring the opportunity for sharing clinical, genomic and other bioinformatics data on patient samples in oncology, for the purpose of developing collaborative research programs across cancer research institutions in Pennsylvania. The Consortium's intention was to establish a virtual repository of many clinical specimens residing in various centers across the state, in order to make them available for research. One of our primary goals was to facilitate the identification of cancer-specific biomarkers and encourage collaborative research efforts among the participating centers. Methods: The PCABC has developed unique partnerships so that every region of the state can effectively contribute and participate. It includes over 80 individuals from 14 organizations, and plans to expand to partners outside the state. This has created a network of researchers, clinicians, bioinformaticians, cancer registrars, program directors, and executives from academic and community health systems, as well as external corporate partners, all working together to accomplish a common mission. The various sub-committees have developed a common IRB protocol template, common data elements for standardizing data collections for three organ sites, intellectual
Yang, Pengyi; Yoo, Paul D; Fernando, Juanita; Zhou, Bing B; Zhang, Zili; Zomaya, Albert Y
Data sampling is a widely used technique in a broad range of machine learning problems. Traditional sampling approaches generally rely on random resampling from a given dataset. However, these approaches do not take into consideration additional information, such as sample quality and usefulness. We recently proposed a data sampling technique, called sample subset optimization (SSO). The SSO technique relies on a cross-validation procedure for identifying and selecting the most useful samples as subsets. In this paper, we describe the application of SSO techniques to imbalanced and ensemble learning problems, respectively. For imbalanced learning, the SSO technique is employed as an under-sampling technique for identifying a subset of highly discriminative samples in the majority class. In ensemble learning, the SSO technique is utilized as a generic ensemble technique where multiple optimized subsets of samples from each class are selected for building an ensemble classifier. We demonstrate the utilities and advantages of the proposed techniques on a variety of bioinformatics applications where class imbalance, small sample size, and noisy data are prevalent.
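The under-sampling use of subset optimization can be sketched with a toy search: draw candidate majority-class subsets, score the classifier each one induces, and keep the best. This is an illustrative stand-in only; the data and the nearest-centroid classifier are invented, and the real SSO technique scores subsets with a cross-validation procedure rather than the simplified whole-dataset score used here.

```python
import random
from statistics import mean

def centroid(points):
    return [mean(axis) for axis in zip(*points)]

def nc_accuracy(train_maj, train_min, test):
    """Accuracy of a nearest-centroid classifier (class 0 = majority)."""
    c0, c1 = centroid(train_maj), centroid(train_min)
    dist = lambda p, c: sum((a - b) ** 2 for a, b in zip(p, c))
    hits = sum(1 for p, label in test
               if (dist(p, c0) < dist(p, c1)) == (label == 0))
    return hits / len(test)

def sso_undersample(majority, minority, subset_size, trials=200, seed=1):
    """Search majority-class subsets, keeping the most useful one: the
    subset whose induced classifier scores best on the full dataset."""
    rng = random.Random(seed)
    test = [(p, 0) for p in majority] + [(p, 1) for p in minority]
    best, best_acc = None, -1.0
    for _ in range(trials):
        subset = rng.sample(majority, subset_size)
        acc = nc_accuracy(subset, minority, test)
        if acc > best_acc:
            best, best_acc = subset, acc
    return best, best_acc

# Majority cluster near the origin, with two noisy points sitting inside
# the minority cluster; a good subset should not need those two.
majority = [(0, 0), (0, 1), (1, 0), (1, 1), (0.5, 0.5), (5.0, 5.2), (4.8, 5.0)]
minority = [(5.0, 5.0), (5.5, 5.5), (4.5, 5.2)]
subset, acc = sso_undersample(majority, minority, subset_size=5)
```

The same outer loop, run once per class and repeated, yields the multiple optimized subsets used in the ensemble variant.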
Full Text Available Abstract Background The interaction between biological researchers and the bioinformatics tools they use is still hampered by incomplete interoperability between such tools. To ensure interoperability initiatives are effectively deployed, end-user applications need to be aware of, and support, best practices and standards. Here, we report on an initiative in which software developers and genome biologists came together to explore and raise awareness of these issues: BioHackathon 2009. Results Developers in attendance came from diverse backgrounds, with experts in Web services, workflow tools, text mining and visualization. Genome biologists provided expertise and exemplar data from the domains of sequence and pathway analysis and glyco-informatics. One goal of the meeting was to evaluate the ability to address real world use cases in these domains using the tools that the developers represented. This resulted in i a workflow to annotate 100,000 sequences from an invertebrate species; ii an integrated system for analysis of the transcription factor binding sites (TFBSs enriched based on differential gene expression data obtained from a microarray experiment; iii a workflow to enumerate putative physical protein interactions among enzymes in a metabolic pathway using protein structure data; iv a workflow to analyze glyco-gene-related diseases by searching for human homologs of glyco-genes in other species, such as fruit flies, and retrieving their phenotype-annotated SNPs. Conclusions Beyond deriving prototype solutions for each use-case, a second major purpose of the BioHackathon was to highlight areas of insufficiency. We discuss the issues raised by our exploration of the problem/solution space, concluding that there are still problems with the way Web services are modeled and annotated, including: i the absence of several useful data or analysis functions in the Web service "space"; ii the lack of documentation of methods; iii lack of
Pastur-Romay, Lucas Antón; Cedrón, Francisco; Pazos, Alejandro; Porto-Pazos, Ana Belén
Over the past decade, Deep Artificial Neural Networks (DNNs) have become the state-of-the-art algorithms in Machine Learning (ML), speech recognition, computer vision, natural language processing and many other tasks. This was made possible by the advancement in Big Data, Deep Learning (DL) and drastically increased chip processing abilities, especially general-purpose graphical processing units (GPGPUs). All this has created a growing interest in making the most of the potential offered by DNNs in almost every field. An overview of the main architectures of DNNs, and their usefulness in Pharmacology and Bioinformatics are presented in this work. The featured applications are: drug design, virtual screening (VS), Quantitative Structure-Activity Relationship (QSAR) research, protein structure prediction and genomics (and other omics) data mining. The future need of neuromorphic hardware for DNNs is also discussed, and the two most advanced chips are reviewed: IBM TrueNorth and SpiNNaker. In addition, this review points out the importance of considering not only neurons, as DNNs and neuromorphic chips should also include glial cells, given the proven importance of astrocytes, a type of glial cell which contributes to information processing in the brain. The Deep Artificial Neuron-Astrocyte Networks (DANAN) could overcome the difficulties in architecture design, learning process and scalability of the current ML methods.
Feature selection is an important topic in bioinformatics. Defining informative features from complex high-dimensional biological data is critical in disease study, drug development, and related areas. Support vector machine-recursive feature elimination (SVM-RFE) is an efficient feature selection technique that has shown its power in many applications. It ranks the features according to the recursive feature deletion sequence based on SVM. In this study, we propose a method, SVM-RFE-OA, which combines the classification accuracy rate and the average overlapping ratio of the samples to determine the number of features to be selected from the feature rank of SVM-RFE. Meanwhile, to measure the feature weights more accurately, we propose a modified SVM-RFE-OA (M-SVM-RFE-OA) algorithm that temporarily screens out the samples lying in a heavy overlapping area in each iteration. The experiments on eight public biological datasets show that the discriminative ability of the feature subset could be measured more accurately by combining the classification accuracy rate with the average overlapping degree of the samples compared with using the classification accuracy rate alone, and that shielding the samples in the overlapping area made the calculation of the feature weights more stable and accurate. The methods proposed in this study can also be used with other RFE techniques to define potential biomarkers from big biological data.
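The RFE loop itself is straightforward: train a linear model, drop the feature with the smallest weight magnitude, and repeat. The sketch below illustrates that loop with a small logistic-regression model standing in for the SVM (and with invented data); it shows the ranking mechanism only, not the OA stopping criterion proposed in the study.

```python
import math

def train_linear(X, y, lr=0.1, iters=500):
    """Logistic-regression weights by gradient descent; a simple stand-in
    for the linear SVM whose weights SVM-RFE ranks."""
    p = len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        grad = [0.0] * p
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi))
            err = 1.0 / (1.0 + math.exp(-z)) - yi
            for j in range(p):
                grad[j] += err * xi[j] / len(X)
        w = [wj - lr * gj for wj, gj in zip(w, grad)]
    return w

def rfe(X, y, n_keep):
    """Recursively eliminate the feature with the smallest |weight|."""
    active = list(range(len(X[0])))
    while len(active) > n_keep:
        Xa = [[xi[j] for j in active] for xi in X]
        w = train_linear(Xa, y)
        worst = min(range(len(active)), key=lambda j: abs(w[j]))
        del active[worst]          # deletion order defines the feature rank
    return active

# Labels depend only on the sign of feature 0; features 1 and 2 are noise.
X = [[1, 0.3, 0.1], [2, -0.2, 0.2], [-1, 0.5, 0.1],
     [-2, -0.4, 0.2], [1.5, 0.1, 0.15], [-1.5, 0.2, 0.12]]
y = [1, 1, 0, 0, 1, 0]
kept = rfe(X, y, n_keep=1)
```

Recording the deletion order yields the full feature rank from which SVM-RFE-OA would then choose how many features to keep.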
Translation, viewed as a multi-faceted task, can give rise to different types of difficulties. Proverbs have been considered special patterns, sometimes displaying hidden meanings or suggesting morals issuing from a particular example. These paremic units, the proverbs, convey feelings, states of mind, behaviours or 'metaphorical descriptions of certain situations' (Krikmann). Starting from Savory's list of pair-wise contradictory translation principles, I intend to prove that the link between different 'forms' and their 'contents' lies in the principle of relevance when referring to proverbs. Even if relevance theory is not a theory of linguistic structure, and many translation problems imply structural mismatches, relevance theory offers insights about contextual information. Proverbs are seen as texts in themselves. My analysis will target the ethnofields of 'to buy' and 'to sell' in English proverbs and their corresponding Romanian versions.
Andrew B. Kinghorn
Aptamers are short nucleic acid sequences capable of specific, high-affinity molecular binding. They are isolated via SELEX (Systematic Evolution of Ligands by Exponential Enrichment), an evolutionary process that involves iterative rounds of selection and amplification before sequencing and aptamer characterization. As aptamers are genetic in nature, bioinformatic approaches have been used to improve both aptamers and their selection. This review will discuss the advancements made in several enclaves of aptamer bioinformatics, including simulation of aptamer selection, fragment-based aptamer design, patterning of libraries, identification of lead aptamers from high-throughput sequencing (HTS) data, and in silico aptamer optimization.
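A common first step in identifying lead aptamers from HTS data is ranking sequences by how strongly their frequency rises between SELEX rounds. The sketch below shows that fold-change ranking on invented toy reads; real pipelines add clustering of similar sequences, quality filtering, and more careful normalization.

```python
from collections import Counter

def enrichment(early_round, late_round, min_count=2):
    """Rank candidate aptamers by frequency fold-change across SELEX rounds."""
    early, late = Counter(early_round), Counter(late_round)
    scores = {}
    for seq, n_late in late.items():
        if n_late < min_count:
            continue  # ignore singletons, which are often sequencing noise
        n_early = early.get(seq, 0)
        # Pseudocount keeps sequences unseen in the early round finite.
        scores[seq] = (n_late / len(late_round)) / ((n_early + 1) / len(early_round))
    return sorted(scores, key=scores.get, reverse=True)

# Toy reads (real aptamer libraries are 20-80 nt with far deeper sampling).
early = ["ACGT", "GGGA", "TTCA", "ACGT", "CCAT", "GGGA"]
late = ["GGGA", "GGGA", "GGGA", "ACGT", "ACGT", "TTCA"]
leads = enrichment(early, late)
```

Sequences at the top of this ranking become the leads carried into characterization and in silico optimization.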
Min, Seonwoo; Lee, Byunghan; Yoon, Sungroh
In the era of big data, transformation of biomedical big data into valuable knowledge has been one of the most important challenges in bioinformatics. Deep learning has advanced rapidly since the early 2000s and now demonstrates state-of-the-art performance in various fields. Accordingly, application of deep learning in bioinformatics to gain insight from data has been emphasized in both academia and industry. Here, we review deep learning in bioinformatics, presenting examples of current research. To provide a useful and comprehensive perspective, we categorize research both by the bioinformatics domain (i.e. omics, biomedical imaging, biomedical signal processing) and deep learning architecture (i.e. deep neural networks, convolutional neural networks, recurrent neural networks, emergent architectures) and present brief descriptions of each study. Additionally, we discuss theoretical and practical issues of deep learning in bioinformatics and suggest future research directions. We believe that this review will provide valuable insights and serve as a starting point for researchers to apply deep learning approaches in their bioinformatics studies. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: email@example.com.
Sroka, Ronald; Stepp, Herbert; Hennig, Georg; Brittenham, Gary M.; Rühm, Adrian; Lilge, Lothar
Medical laser applications, based on widespread research and development, form a very dynamic and increasingly popular field from an ecological as well as an economic point of view. Conferences and personal communication are necessary to identify specific requests and potential unmet needs in this multi- and interdisciplinary discipline. Precise gathering of all information on innovative, new, or renewed techniques is necessary to design medical devices for introduction into clinical applications and finally to establish them for routine treatment or diagnosis. Five examples of successfully addressed clinical requests are described to show the long-term endurance required to develop light-based innovative clinical concepts and devices. Starting from laboratory medicine, a noninvasive approach to detect signals related to iron deficiency is shown. Based upon photosensitization, fluorescence-guided resection was discovered, opening the door for photodynamic approaches to the treatment of brain cancer. Thermal laser application in the nasal cavity gained clinical acceptance through the introduction of new laser wavelengths into clinical consciousness. Varicose veins can be treated by innovative endoluminal treatment methods, thus reducing side effects and saving time. Techniques and developments are presented with potential for diagnosis and treatment to improve the clinical situation for the benefit of the patient.
Kurikka, Joona; Utriainen, Tuuli; Repokari, Lauri
This paper is based on work done at IdeaSquare, a new innovation experiment at CERN, the European Organization for Nuclear Research. The paper explores the translation of fundamental research into societal applications with the help of multidisciplinary student teams, project- and problem-based learning and design thinking methods. The theme is…
Belda-Medina, Jose Ramon
English has played a dominant role in the terminology of computers and the New Technologies in the last decades. The growing expansion worldwide of different electronic devices and multitasking smart phones has brought about an increasing number of software applications or apps in the market. Creating multilingual applications is a major challenge for developers and companies as sale revenues are on the rise in this sector. The translation and localisation into Spanish and other languages ent...
Machine-learning (ML) techniques have been widely applied to solve different problems in biology. However, biological data are large and complex, which often results in extremely intricate ML models. Frequently, these models may perform poorly or be computationally infeasible. This study presents a set of novel computational methods and focuses on the application of genetic algorithms (GAs) for the simplification and optimization of ML models and their applications to biological problems. The dissertation addresses the following three challenges. The first is to develop a generalizable classification methodology able to systematically derive competitive models despite the complexity and nature of the data. Although several algorithms for the induction of classification models have been proposed, these algorithms are data dependent. Consequently, we developed OmniGA, a novel and generalizable framework that uses different classification models in a tree-like decision structure, along with a parallel GA for the optimization of the OmniGA structure. Results show that OmniGA consistently outperformed existing commonly used classification models. The second challenge is the prediction of translation initiation sites (TIS) in plant genomic DNA. We performed a statistical analysis of the genomic DNA and proposed a new set of discriminant features for this problem. We developed a wrapper method based on GAs for selecting an optimal feature subset, which, in conjunction with a classification model, produced the most accurate framework for the recognition of TIS in plants. Results demonstrate that despite the evolutionary distance between different plants, our approach successfully identified conserved genomic elements that may serve as the starting point for the development of a generic model for prediction of TIS in eukaryotic organisms. Finally, the third challenge is the accurate prediction of polyadenylation signals in human genomic DNA. To achieve
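A GA-based wrapper for feature selection of the kind described above can be sketched as follows. The bitstring encoding, operators, and toy fitness function are illustrative assumptions, not the dissertation's actual implementation (which scores subsets with a trained classifier):

```python
import numpy as np

def ga_select(n_features, fitness, pop_size=30, gens=60, p_mut=0.1, seed=1):
    # Bitstring chromosomes encode feature subsets; tournament selection,
    # uniform crossover and bit-flip mutation evolve the population.
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, size=(pop_size, n_features))

    def tournament(scores):
        cand = rng.choice(pop_size, size=3, replace=False)
        return pop[cand[np.argmax(scores[cand])]]

    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in pop])
        offspring = [pop[scores.argmax()].copy()]        # elitism
        while len(offspring) < pop_size:
            a, b = tournament(scores), tournament(scores)
            mask = rng.integers(0, 2, n_features).astype(bool)
            child = np.where(mask, a, b)                 # uniform crossover
            flip = rng.random(n_features) < p_mut        # bit-flip mutation
            offspring.append(np.where(flip, 1 - child, child))
        pop = np.array(offspring)
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmax()]

# Toy wrapper fitness: reward two "informative" features, charge a small
# cost per selected feature (a stand-in for cross-validated accuracy).
def subset_fitness(bits):
    return 2.0 * bits[0] + 2.0 * bits[3] - 0.5 * bits.sum()

best = ga_select(8, subset_fitness)   # should keep features 0 and 3
```

In the wrapper setting described by the abstract, `subset_fitness` would instead train and evaluate a classifier on the selected columns.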
Li, Yong-an; Li, Jing-yun
The famous American translation theorist Eugene A. Nida put forward many viewpoints and theories on translation while translating the Bible, and these have important practical guidance for the translation of traditional Chinese medicine (TCM) today. The application of Nida's theories to the translation practice of TCM is illustrated with specific examples in this paper.
Rocha, Miguel; Fdez-Riverola, Florentino; Santana, Juan
Biological and biomedical research are increasingly driven by experimental techniques that challenge our ability to analyse, process and extract meaningful knowledge from the underlying data. The impressive capabilities of next generation sequencing technologies, together with novel and ever evolving distinct types of omics data technologies, have posed an increasingly complex set of challenges to the growing fields of Bioinformatics and Computational Biology. The analysis of the datasets produced and their integration call for new algorithms and approaches from fields such as Databases, Statistics, Data Mining, Machine Learning, Optimization, Computer Science and Artificial Intelligence. Clearly, Biology is more and more a science of information requiring tools from the computational sciences. In the last few years, we have seen the surge of a new generation of interdisciplinary scientists that have a strong background in the biological and computational sciences. In this context, the interaction of researche...
Rocha, Miguel; Fdez-Riverola, Florentino; Mayo, Francisco; Paz, Juan
Biological and biomedical research are increasingly driven by experimental techniques that challenge our ability to analyse, process and extract meaningful knowledge from the underlying data. The impressive capabilities of next generation sequencing technologies, together with novel and ever evolving distinct types of omics data technologies, have posed an increasingly complex set of challenges to the growing fields of Bioinformatics and Computational Biology. The analysis of the datasets produced and their integration call for new algorithms and approaches from fields such as Databases, Statistics, Data Mining, Machine Learning, Optimization, Computer Science and Artificial Intelligence. Clearly, Biology is more and more a science of information requiring tools from the computational sciences. In the last few years, we have seen the surge of a new generation of interdisciplinary scientists that have a strong background in the biological and computational sciences. In this context, the interaction of researche...
Mohamad, Mohd; Rocha, Miguel; Paz, Juan; Pinto, Tiago
Biological and biomedical research are increasingly driven by experimental techniques that challenge our ability to analyse, process and extract meaningful knowledge from the underlying data. The impressive capabilities of next-generation sequencing technologies, together with novel and constantly evolving, distinct types of omics data technologies, have created an increasingly complex set of challenges for the growing fields of Bioinformatics and Computational Biology. The analysis of the datasets produced and their integration call for new algorithms and approaches from fields such as Databases, Statistics, Data Mining, Machine Learning, Optimization, Computer Science and Artificial Intelligence. Clearly, Biology is more and more a science of information and requires tools from the computational sciences. In the last few years, we have seen the rise of a new generation of interdisciplinary scientists with a strong background in the biological and computational sciences. In this context, the interaction of r...
Nanni, Loris; Rocha, Miguel; Fdez-Riverola, Florentino
The growth in the Bioinformatics and Computational Biology fields over the last few years has been remarkable and the trend is accelerating. In fact, the need for computational techniques that can efficiently handle the huge amounts of data produced by the new experimental techniques in Biology is still increasing, driven by new advances in Next Generation Sequencing, several types of the so-called omics data and image acquisition, just to name a few. The analysis of the datasets produced and their integration call for new algorithms and approaches from fields such as Databases, Statistics, Data Mining, Machine Learning, Optimization, Computer Science and Artificial Intelligence. Within this scenario of increasing data availability, Systems Biology has also been emerging as an alternative to the reductionist view that dominated biological research in the last decades. Indeed, Biology is more and more a science of information requiring tools from the computational sciences. In the last few years, we ...
Marta Vílchez Monge
In the design of miniaturized robots, different types of microelectromechanical actuators must be evaluated to determine the most appropriate one for each specific function, as well as the best actuation effect for the size, force, supply voltage, energy, precision and degrees of freedom required. The piezoelectric effect is one of the most promising effects for incorporating microactuators into miniaturized robots. This paper presents a brief comparison of actuation methods for translational displacement and discusses their feasibility for application in miniaturized robots. The specific case of evaluation of a translational piezoelectric actuator using COMSOL Multiphysics is described, as well as an application example of a microgripper to be incorporated in a miniaturized robot for grasping submicrometric objects.
Joseph L. Johnson
BACE1, a membrane-bound aspartyl protease implicated in Alzheimer's disease, is the first protease to cut the amyloid precursor protein, resulting in the generation of amyloid-β and its aggregation to form senile plaques, a hallmark feature of the disease. Few other native BACE1 substrates have been identified despite its relatively loose substrate specificity. We report a bioinformatics approach identifying several putative BACE1 substrates. Using our algorithm, we successfully predicted the cleavage sites for 70% of known BACE1 substrates and further validated our algorithm output against substrates identified in a recent BACE1 proteomics study, which also showed a 70% success rate. Having validated our approach with known substrates, we report putative cleavage recognition sequences within 962 proteins, which can be explored using in vivo methods. Approximately 900 of these proteins have not been identified or implicated as BACE1 substrates. Gene ontology cluster analysis of the putative substrates identified enrichment in proteins involved in immune system processes and in cell surface protein-protein interactions.
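A minimal sketch of the kind of sequence scanning such an approach involves is shown below. The preference scores are hypothetical placeholders, not the study's trained values; the only grounded fact used is that wild-type APP carries Met at the P1 position of the β-secretase site:

```python
def scan_cleavage_sites(seq, prefs, threshold=1.5):
    # Score each candidate scissile bond from per-position residue
    # preferences spanning P2 P1 | P1' P2'; report positions above threshold.
    hits = []
    for i in range(2, len(seq) - 2):
        window = seq[i - 2:i + 2]
        score = sum(prefs[pos].get(res, 0.0)
                    for pos, res in zip(("P2", "P1", "P1p", "P2p"), window))
        if score >= threshold:
            hits.append((i, window, score))
    return hits

# Hypothetical preference scores, for illustration only (not trained values).
prefs = {
    "P2":  {"V": 0.5, "I": 0.4},
    "P1":  {"M": 0.8, "L": 1.0},   # wild-type APP has Met at P1
    "P1p": {"D": 0.8, "A": 0.4},
    "P2p": {"A": 0.3, "E": 0.3},
}
# Fragment around the APP beta-secretase site: ...SEVKM | DAEF...
hits = scan_cleavage_sites("ISEVKMDAEFRH", prefs)
```

A real predictor would derive the scoring matrix from known substrates rather than hand-pick it, but the sliding-window scan is the same.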
Fox, Caroline S.; Hall, Jennifer L.; Arnett, Donna K.; Ashley, Euan A.; Delles, Christian; Engler, Mary B.; Freeman, Mason W.; Johnson, Julie A.; Lanfear, David E.; Liggett, Stephen B.; Lusis, Aldons J.; Loscalzo, Joseph; MacRae, Calum A.; Musunuru, Kiran; Newby, L. Kristin; O’Donnell, Christopher J.; Rich, Stephen S.; Terzic, Andre
The field of genetics and genomics has advanced considerably with the achievement of recent milestones encompassing the identification of many loci for cardiovascular disease and variable drug responses. Despite this achievement, a gap exists in the understanding and advancement to meaningful translation that directly affects disease prevention and clinical care. The purpose of this scientific statement is to address the gap between genetic discoveries and their practical application to cardiovascular clinical care. In brief, this scientific statement assesses the current timeline for effective translation of basic discoveries to clinical advances, highlighting past successes. Current discoveries in the area of genetics and genomics are covered next, followed by future expectations, tools, and competencies for achieving the goal of improving clinical care. PMID:25882488
Johnson, Kathy A.
For the purpose of this paper, bioinformatics is defined as the application of computer technology to the management of biological information. It can be thought of as the science of developing computer databases and algorithms to facilitate and expedite biological research. This is a crosscutting capability that supports nearly all human health areas ranging from computational modeling, to pharmacodynamics research projects, to decision support systems within autonomous medical care. Bioinformatics serves to increase the efficiency and effectiveness of the life sciences research program. It provides data, information, and knowledge capture which further supports management of the bioastronautics research roadmap - identifying gaps that still remain and enabling the determination of which risks have been addressed.
Bayesian Theory originated from a 1763 essay by the British mathematician Thomas Bayes, and after its development in the 20th century, Bayesian Statistics has taken a significant part in statistical study across all fields. Owing to recent breakthroughs in high-dimensional integration, Bayesian Statistics has been improved and perfected, and it can now be used to solve problems that Classical Statistics failed to solve. This paper summarizes the history, concepts and applications of Bayesian Statistics, illustrated in five parts: the history of Bayesian Statistics, the weaknesses of Classical Statistics, Bayesian Theory, its development, and its applications. The first two parts make a comparison between Bayesian Statistics and Classical Statistics in a macroscopic aspect. The last three parts focus on Bayesian Theory specifically - from introducing particular Bayesian concepts to listing their development and finally their applications.
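Bayes' theorem, the core of the theory surveyed here, can be illustrated with a standard diagnostic-test calculation (the prevalence, sensitivity, and specificity values below are illustrative):

```python
def posterior(prior, sensitivity, specificity):
    # P(disease | positive test) via Bayes' theorem:
    # P(D|+) = P(+|D) P(D) / [P(+|D) P(D) + P(+|not D) P(not D)]
    p_pos = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_pos

# Illustrative numbers: 1% prevalence, 95% sensitivity, 90% specificity.
p = posterior(prior=0.01, sensitivity=0.95, specificity=0.90)
# With a rare condition, most positives are false positives, so the
# posterior stays well below 10% despite the accurate-looking test.
```

This base-rate effect is exactly the kind of reasoning that Classical point estimates do not express directly.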
Biancani, M; Blanchet, C; Bedri, M; Gibrat, Jean-Francois; Baranda, J I A; Hacker, D; Kourkouli, D
Multi-cloud applications delivered across heterogeneous public cloud service providers pose different technical and operational challenges in terms of resource brokering, integration, implementation of security trusts, etc. In this paper we describe the federation approach taken in the H2020 CYCLONE project, which designed a multi-cloud platform integrating and orchestrating different open-source cloud and network management tools (OpenStack, OpenNaaS, SlipStream and TCTP). Some real-life us...
Kamali, Amir Hossein; Giannoulatou, Eleni; Chen, Tsong Yueh; Charleston, Michael A; McEwan, Alistair L; Ho, Joshua W K
Bioinformatics is the application of computational, mathematical and statistical techniques to solve problems in biology and medicine. Bioinformatics programs developed for computational simulation and large-scale data analysis are widely used in almost all areas of biophysics. The appropriate choice of algorithms and correct implementation of these algorithms are critical for obtaining reliable computational results. Nonetheless, it is often very difficult to systematically test these programs as it is often hard to verify the correctness of the output, and to effectively generate failure-revealing test cases. Software testing is an important process of verification and validation of scientific software, but very few studies have directly dealt with the issues of bioinformatics software testing. In this work, we review important concepts and state-of-the-art methods in the field of software testing. We also discuss recent reports on adapting and implementing software testing methodologies in the bioinformatics field, with specific examples drawn from systems biology and genomic medicine.
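One well-known workaround for the difficulty of verifying output correctness mentioned above is checking metamorphic relations: properties that must hold between outputs even when the correct output for an arbitrary input is unknown. A toy sketch for a DNA reverse-complement routine (the routine and relations are illustrative, not drawn from the review):

```python
def reverse_complement(seq):
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def check_metamorphic_relations(seq):
    # These relations must hold for ANY input, so no oracle is needed.
    rc = reverse_complement(seq)
    assert reverse_complement(rc) == seq       # involution
    assert len(rc) == len(seq)                 # length is preserved
    assert rc.count("G") == seq.count("C")     # base-pairing symmetry

for s in ["A", "ATGC", "GGGCCCATA"]:
    check_metamorphic_relations(s)
```

Failure of any relation reveals a bug without ever computing an expected output by hand.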
Welch, Michael J.; Eckelman, William C.; Vera, David
Molecular imaging is becoming a larger part of imaging research and practice. The Office of Biological and Environmental Research of the Department of Energy funds a significant number of researchers in this area. The proposal is to partially fund a workshop to inform scientists working in nuclear medicine and nuclear medicine practitioners of the recent advances of molecular imaging in nuclear medicine as well as other imaging modalities. A limited number of topics related to radionuclide therapy will also be discussed. The proposal is to request partial funds for the workshop entitled "Translational Applications of Molecular Imaging and Radionuclide Therapy" to be held prior to the Society of Nuclear Medicine Annual Meeting in Toronto, Canada in June 2005. The meeting will be held on June 17-18. This will allow scientists interested in all aspects of nuclear medicine imaging to attend. The chair of the organizing group is Dr. Michael J. Welch. The organizing committee consists of Dr. Welch, Dr. William C. Eckelman and Dr. David Vera. The goal is to invite speakers to discuss the most recent advances of modern molecular imaging and therapy. Speakers will present advances made in in vivo tagging imaging assays, technical aspects of small animal imaging, in vivo imaging and bench-to-bedside translational study; and the role of a diagnostic scan on therapy selection. This latter topic will include discussions on therapy and new approaches to dosimetry. Several of these topics are those funded by the Department of Energy Office of Biological and Environmental Research
Lawlor, Brendan; Walsh, Paul
There is a lack of software engineering skills in bioinformatic contexts. We discuss the consequences of this lack, examine existing explanations and remedies to the problem, point out their shortcomings, and propose alternatives. Previous analyses of the problem have tended to treat the use of software in scientific contexts as categorically different from the general application of software engineering in commercial settings. In contrast, we describe bioinformatic software engineering as a specialization of general software engineering, and examine how it should be practiced. Specifically, we highlight the difference between programming and software engineering, list elements of the latter and present the results of a survey of bioinformatic practitioners which quantifies the extent to which those elements are employed in bioinformatics. We propose that the ideal way to bring engineering values into research projects is to bring engineers themselves. We identify the role of Bioinformatic Engineer and describe how such a role would work within bioinformatic research teams. We conclude by recommending an educational emphasis on cross-training software engineers into life sciences, and propose research on Domain Specific Languages to facilitate collaboration between engineers and bioinformaticians.
Chen, Liang; Tokuda, Naoyuki
Discusses a new template-automaton-based knowledge database system for an interactive intelligent language tutoring system (ILTS) for Japanese-English translation, whereby model translations as well as a taxonomy of bugs extracted from ill-formed translations typical of nonnative learners are collected. (Author/VWL)
D'Angelo, Gianni; Rampone, Salvatore
The huge quantity of data produced in biomedical research needs sophisticated algorithmic methodologies for its storage, analysis, and processing. High Performance Computing (HPC) appears as a magic bullet in this challenge. However, several hard-to-solve parallelization and load balancing problems arise in this context. Here we discuss the HPC-oriented implementation of a general purpose learning algorithm, originally conceived for DNA analysis and recently extended to treat uncertainty on data (U-BRAIN). The U-BRAIN algorithm is a learning algorithm that finds a Boolean formula in disjunctive normal form (DNF), of approximately minimum complexity, that is consistent with a set of data (instances) which may have missing bits. The conjunctive terms of the formula are computed in an iterative way by identifying, from the given data, a family of sets of conditions that must be satisfied by all the positive instances and violated by all the negative ones; such conditions allow the computation of a set of coefficients (relevances) for each attribute (literal), which form a probability distribution, allowing the selection of the term literals. Its great versatility makes U-BRAIN applicable in many of the fields in which there are data to be analyzed. However, the memory and execution time required are of order O(n^3) and O(n^5), respectively, so the algorithm is unaffordable for huge data sets. We find mathematical and programming solutions that lead us towards the implementation of the U-BRAIN algorithm on parallel computers. First we give a Dynamic Programming model of the U-BRAIN algorithm, then we minimize the representation of the relevances. When the data are of great size we are forced to use mass storage, and depending on where the data are actually stored, the access times can be quite different. According to the evaluation of algorithmic efficiency based on the Disk Model, in order to reduce the costs of
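The consistency condition that U-BRAIN's DNF output must satisfy can be sketched as follows. This toy check treats a missing bit conservatively (an absent attribute confirms no literal), which is a simplification of U-BRAIN's probabilistic handling of uncertain data:

```python
def term_satisfied(term, instance):
    # term: {var: bool} conjunction of literals; instance: {var: bool},
    # possibly with keys absent (missing bits). A term is satisfied only
    # when every literal is explicitly confirmed by the instance.
    return all(instance.get(var) == val for var, val in term.items())

def dnf_consistent(terms, positives, negatives):
    # Consistent: every positive instance satisfies at least one term,
    # and no negative instance satisfies any term.
    return (all(any(term_satisfied(t, x) for t in terms) for x in positives)
            and not any(term_satisfied(t, x) for t in terms for x in negatives))

formula = [{"x1": True, "x3": False}, {"x2": True}]
positives = [{"x1": True, "x2": False, "x3": False},
             {"x1": False, "x2": True, "x3": True}]
negatives = [{"x1": True, "x2": False, "x3": True},
             {"x1": False, "x2": False}]          # x3 is a missing bit
ok = dnf_consistent(formula, positives, negatives)
```

U-BRAIN searches for a small `formula` satisfying this predicate; the sketch only verifies a candidate, which is the cheap part of the problem.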
Title 47 (Telecommunication), Rules Applicable to All Broadcast Stations, § 73.3521 - Mutually exclusive applications for low power television, television translators and television booster stations. When there is a pending application for a new low...
Suplatov, Dmitry; Popova, Nina; Zhumatiy, Sergey; Voevodin, Vladimir; Švedas, Vytas
Rapid expansion of online resources providing access to genomic, structural, and functional information associated with biological macromolecules opens an opportunity to gain a deeper understanding of the mechanisms of biological processes through systematic analysis of large datasets. This, however, requires novel strategies to optimally utilize computer processing power. Some methods in bioinformatics and molecular modeling require extensive computational resources. Other algorithms have fast implementations that take at most several hours to analyze a common input on a modern desktop station; however, due to multiple invocations for a large number of subtasks, the full task requires significant computing power. Therefore, an efficient computational solution to large-scale biological problems requires both a wise parallel implementation of resource-hungry methods and a smart workflow to manage multiple invocations of relatively fast algorithms. In this work, new computer software, mpiWrapper, has been developed to accommodate non-parallel implementations of scientific algorithms within the parallel supercomputing environment. The Message Passing Interface has been implemented to exchange information between nodes. Two specialized threads - one for task management and communication, and another for subtask execution - are invoked on each processing unit to avoid deadlock while using blocking calls to MPI. The mpiWrapper can be used to launch all conventional Linux applications without the need to modify their original source code and supports resubmission of subtasks on node failure. We show that this approach can be used to process huge amounts of biological data efficiently by running non-parallel programs in parallel mode on a supercomputer. The C++ source code and documentation are available from http://biokinet.belozersky.msu.ru/mpiWrapper.
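The manager/executor pattern described above can be sketched in plain Python, using a thread pool in place of MPI. This illustrates only the task-farm idea of launching unmodified command-line tools over many subtasks; it is not mpiWrapper's actual C++/MPI implementation:

```python
from concurrent.futures import ThreadPoolExecutor
import subprocess

def run_task(cmd):
    # Execute one unmodified command-line tool; capture status and output.
    r = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return cmd, r.returncode, r.stdout.strip()

def task_farm(commands, workers=4):
    # Task farm: a pool of workers pulls independent subtasks, echoing
    # the manager/executor split that mpiWrapper runs on each node.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_task, commands))

# Toy subtasks; in practice each command would be a real analysis run.
results = task_farm([f"echo sample_{i}" for i in range(4)])
```

On a cluster, the dispatch step would go over MPI to remote nodes, and failed subtasks would be resubmitted rather than just reported.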
Moore, Laura Kent
Nanotechnology marks the next phase of development for drug delivery, contrast agents and gene therapy. For these novel systems to achieve success in clinical translation, we must see that they are both effective and safe. Diamond nanoparticles, also known as nanodiamonds (NDs), have been gaining popularity as molecular delivery vehicles over the last decade. The uniquely faceted carbon nanoparticles possess a number of beneficial properties that are being harnessed for applications ranging from small-molecule drug delivery to biomedical imaging and gene therapy. In addition to improving the effectiveness of a variety of therapeutics and contrast agents, initial studies indicate that NDs are biocompatible. In this work we evaluate the translational potential of NDs by demonstrating efficacy in molecular delivery and scrutinizing particle tolerance. Previous work has demonstrated that NDs are effective vehicles for the delivery of anthracycline chemotherapeutics and gadolinium(III)-based contrast agents. We have sought to enhance the gains made in both areas through the addition of active targeting. We find that ND-mediated targeted delivery of epirubicin to triple-negative breast cancers induces tumor regression and virtually eliminates drug toxicities. Additionally, ND-mediated delivery of the MRI contrast agent ProGlo boosts the per-gadolinium relaxivity fourfold, eliminates water-solubility issues and effectively labels progesterone-receptor-expressing breast cancer cells. Both strategies open the door to the development of targeted, theranostic constructs based on NDs, capable of treating and labeling breast cancers at the same time. Although we have seen that NDs are effective vehicles for molecular delivery, for any nanoparticle to achieve clinical utility it must be biocompatible. Preliminary research has shown that NDs are non-toxic; however, only a fraction of the ND subtypes have been evaluated. Here we present an in-depth analysis of the cellular
Bostad, Dale A.
Describes some of the processes involved in the data structure manipulation and machine translation of a specific text form, namely, Soviet patent bulletins. The effort to modify this system in order to do specialized processing and translation is detailed. (Author/SED)
Thomas K Karikari
Until recently, bioinformatics, an important discipline in the biological sciences, was largely limited to countries with advanced scientific resources. Nonetheless, several developing countries have lately been making progress in bioinformatics training and applications. In Africa, leading countries in the discipline include South Africa, Nigeria, and Kenya. However, one country that is less known when it comes to bioinformatics is Ghana. Here, I provide a first description of the development of bioinformatics activities in Ghana and how these activities contribute to the overall development of the discipline in Africa. Over the past decade, scientists in Ghana have been involved in publications incorporating bioinformatics analyses, aimed at addressing research questions in biomedical science and agriculture. Scarce research funding and inadequate training opportunities are some of the challenges that need to be addressed for Ghanaian scientists to continue developing their expertise in bioinformatics.
Livne, Oren E; Schultz, N Dustin; Narus, Scott P
We present a software architecture that federates data from multiple heterogeneous health informatics data sources owned by multiple organizations. The architecture builds upon state-of-the-art open-source Java and XML frameworks in innovative ways. It consists of (a) a federated query engine, which manages federated queries and result-set aggregation via a patient identification service; and (b) data source facades, which translate the physical data models into a common model on the fly and handle large result-set streaming. System modules are connected via reusable Apache Camel integration routes and deployed to an OSGi enterprise service bus. We present an application of our architecture that allows users to construct queries via the i2b2 web front-end and federates patient data from the University of Utah Enterprise Data Warehouse and the Utah Population Database. Our system can be easily adopted, extended and integrated with existing SOA healthcare and HL7 frameworks such as i2b2 and caGrid.
Flow cytometry bioinformatics is the application of bioinformatics to flow cytometry data, which involves storing, retrieving, organizing, and analyzing flow cytometry data using extensive computational resources and tools. Flow cytometry bioinformatics requires extensive use of, and contributes to the development of, techniques from computational statistics and machine learning. Flow cytometry and related methods allow the quantification of multiple independent biomarkers on large numbers of single cells. The rapid growth in the multidimensionality and throughput of flow cytometry data, particularly in the 2000s, has led to the creation of a variety of computational analysis methods, data standards, and public databases for the sharing of results. Computational methods exist to assist in the preprocessing of flow cytometry data, identifying cell populations within it, matching those cell populations across samples, and performing diagnosis and discovery using the results of previous steps. For preprocessing, this includes compensating for spectral overlap, transforming data onto scales conducive to visualization and analysis, assessing data for quality, and normalizing data across samples and experiments. For population identification, tools are available to aid traditional manual identification of populations in two-dimensional scatter plots (gating), to use dimensionality reduction to aid gating, and to find populations automatically in higher-dimensional space in a variety of ways. It is also possible to characterize data in more comprehensive ways, such as the density-guided binary space partitioning technique known as probability binning, or by combinatorial gating. Finally, diagnosis using flow cytometry data can be aided by supervised learning techniques, and discovery of new cell types of biological importance by high-throughput statistical methods, as part of pipelines incorporating all of the aforementioned methods. Open standards, data
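Two of the preprocessing steps mentioned above, compensating for spectral overlap and transforming onto a visualization-friendly scale, can be sketched as follows. The spillover values and the asinh cofactor of 150 are illustrative choices, not fixed standards:

```python
import numpy as np

# Spillover matrix S[i, j]: fraction of dye i's signal read by detector j
# (values illustrative).
S = np.array([[1.00, 0.15],
              [0.08, 1.00]])
true_signal = np.array([[500.0, 50.0],
                        [20.0, 300.0]])   # two cells x two dyes

measured = true_signal @ S                   # spectral overlap mixes channels
compensated = measured @ np.linalg.inv(S)    # compensation unmixes them
scaled = np.arcsinh(compensated / 150.0)     # asinh transform for display
```

In practice the spillover matrix is estimated from single-stained controls, and the transformed values feed into the gating and clustering steps described above.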
The purpose of this paper is to present a general view of the current applications of fuzzy logic in medicine and bioinformatics. We particularly review the medical literature using fuzzy logic. We then recall the geometrical interpretation of fuzzy sets as points in a fuzzy hypercube and present two concrete illustrations in medicine (drug addictions) and in bioinformatics (comparison of genomes).
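The hypercube picture mentioned above treats a fuzzy set over n elements as a point in [0, 1]^n, with set operations acting coordinate-wise. A minimal sketch (Kosko-style subsethood is included as one common derived measure; the paper's specific medical and genomic illustrations are not reproduced):

```python
def f_and(a, b):
    """Fuzzy intersection: elementwise min of two membership vectors."""
    return [min(x, y) for x, y in zip(a, b)]

def f_or(a, b):
    """Fuzzy union: elementwise max."""
    return [max(x, y) for x, y in zip(a, b)]

def f_not(a):
    """Fuzzy complement: 1 - membership."""
    return [1 - x for x in a]

def subsethood(a, b):
    """Degree to which a is contained in b: |a AND b| / |a|,
    using sigma-counts (sums of memberships) for cardinality."""
    card_a = sum(a)
    if card_a == 0:
        return 1.0
    return sum(f_and(a, b)) / card_a

# A point strictly inside the hypercube: note that a AND (NOT a) is
# non-empty, unlike in classical (vertex-only) set theory.
a = [0.2, 0.7]
overlap = f_and(a, f_not(a))
```

The vertices of the hypercube are the classical crisp sets; interior points are genuinely fuzzy, which is exactly what the non-empty `overlap` above demonstrates.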
Emergent Computation is concerned with recent applications of Mathematical Linguistics or Automata Theory. This subject has a primary focus upon "Bioinformatics" (the Genome and arising interest in the Proteome), but the closing chapter also examines applications in Biology, Medicine, Anthropology, etc. The book is composed of an organized examination of DNA, RNA, and the assembly of amino acids into proteins. Rather than examine these areas from a purely mathematical viewpoint (that excludes much of the biochemical reality), the author uses scientific papers written mostly by biochemists based upon their laboratory observations. Thus while DNA may exist in its double stranded form, triple stranded forms are not excluded. Similarly, while bases exist in Watson-Crick complements, mismatched bases and abasic pairs are not excluded, nor are Hoogsteen bonds. Just as there are four bases naturally found in DNA, the existence of additional bases is not ignored, nor amino acids in addition to the usual complement of...
Hussain, Hanaa M; Benkrid, Khaled; Seker, Huseyin
Bioinformatics data tend to be highly dimensional in nature and thus impose significant computational demands. To overcome the limitations of conventional computing methods, scientists have proposed several alternative high-performance computing solutions, such as graphics processing units (GPUs) and field-programmable gate arrays (FPGAs). The latter have been shown to be efficient and high-performing. In recent years, FPGAs have benefited from the dynamic partial reconfiguration (DPR) feature, which adds the flexibility to alter specific regions within the chip. This work proposes combining FPGAs and DPR to build a dynamic multi-classifier architecture for processing bioinformatics data. In bioinformatics, applying different classification algorithms to the same dataset is desirable in order to obtain comparable, more reliable, consensus decisions, but doing so on a conventional PC can be time-consuming. The DPR implementations of two common classifiers, namely support vector machines (SVMs) and K-nearest neighbours (KNN), are combined to form a multi-classifier FPGA architecture in which a specific region of the FPGA can work as either an SVM or a KNN classifier. This multi-classifier DPR implementation achieved at least an ~8x reduction in reconfiguration time over the single non-DPR classifier implementation, and occupied less space and fewer hardware resources than hosting both classifiers at once. The proposed architecture can be extended to work as an ensemble classifier.
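The consensus idea behind the multi-classifier architecture can be illustrated in software, independently of the FPGA/DPR implementation. The sketch below pairs a KNN classifier with a nearest-centroid classifier (standing in for the SVM, to keep the example dependency-free) and reports a label only when the two agree:

```python
from collections import Counter
import math

def knn_predict(train, labels, x, k=3):
    """Classify x by majority vote among its k nearest neighbours."""
    order = sorted(range(len(train)), key=lambda i: math.dist(train[i], x))
    votes = Counter(labels[i] for i in order[:k])
    return votes.most_common(1)[0][0]

def centroid_predict(train, labels, x):
    """Nearest-centroid classifier; a dependency-free stand-in for
    the SVM role in the multi-classifier ensemble."""
    centroids = {}
    for cls in set(labels):
        pts = [p for p, l in zip(train, labels) if l == cls]
        centroids[cls] = [sum(c) / len(pts) for c in zip(*pts)]
    return min(centroids, key=lambda cls: math.dist(centroids[cls], x))

def consensus(train, labels, x):
    """Return a label only when both classifiers agree, else None."""
    a = knn_predict(train, labels, x)
    b = centroid_predict(train, labels, x)
    return a if a == b else None
```

On an FPGA with DPR, the two predictors would be hardware configurations time-multiplexed onto the same region; here they are simply two functions called in turn.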
Zoellner, Jamie; Van Horn, Linda; Gleason, Philip M; Boushey, Carol J
This monograph is tenth in a series of articles focused on research design and analysis, and provides an overview of translational research concepts. Specifically, this article presents models and processes describing translational research, defines key terms, discusses methodological considerations for speeding the translation of nutrition research into practice, illustrates application of translational research concepts for nutrition practitioners and researchers, and provides examples of translational research resources and training opportunities. To promote the efficiency and translation of evidence-based nutrition guidelines into routine clinical-, community-, and policy-based practice, the dissemination and implementation phases of translational research are highlighted and illustrated in this monograph. Copyright © 2015 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.
Jiang, Yue-Rong; Chen, Ke-Ji
The background, concept, and current status of translational medicine in China and abroad are systematically introduced in this review, and the application mode of translational medicine in the research and development of Chinese medicine (CM) is analyzed. Targeting the characteristics of CM and the changes in the spectrum of diseases in China, suggestions are made to strengthen translational research in CM and integrative medicine.
Cantacessi, C; Campbell, B E; Jex, A R; Young, N D; Hall, R S; Ranganathan, S; Gasser, R B
The advent and integration of high-throughput '-omics' technologies (e.g. genomics, transcriptomics, proteomics, metabolomics, glycomics and lipidomics) are revolutionizing the way biology is done, allowing the systems biology of organisms to be explored. These technologies are now providing unique opportunities for global, molecular investigations of parasites. For example, studies of a transcriptome (all transcripts in an organism, tissue or cell) have become instrumental in providing insights into aspects of gene expression, regulation and function in a parasite, which is a major step to understanding its biology. The purpose of this article was to review recent applications of next-generation sequencing technologies and bioinformatic tools to large-scale investigations of the transcriptomes of parasitic nematodes of socio-economic significance (particularly key species of the order Strongylida) and to indicate the prospects and implications of these explorations for developing novel methods of parasite intervention. © 2011 Blackwell Publishing Ltd.
Bahri, Hossein; Mahadi, Tengku Sepora Tengku
While the presence of mobile electronic devices in the classroom has posed real challenges to instructors, a growing number of teachers believe they should seize the chance to improve the quality of instruction. The advent of new mobile technologies (laptops, smartphones, tablets, etc.) in the translation classroom has opened up new opportunities…
... translingual and intralingual paradigms, promotional to civilization, growth and enrichment of multiculturalism. Translations have afforded generation after generation opportunities to share in the rich ideas, ideals, discoveries, inventions and theories of iconic figures like Christ, Aristotle, Plato, Galileo and Kopernicus etc; ...
Zhou, Shuigeng; Liao, Ruiqi; Guan, Jihong
In the past decades, with the rapid development of high-throughput technologies, biology research has generated an unprecedented amount of data. In order to store and process such a great amount of data, cloud computing and MapReduce were applied to many fields of bioinformatics. In this paper, we first introduce the basic concepts of cloud computing and MapReduce, and their applications in bioinformatics. We then highlight some problems challenging the applications of cloud computing and MapReduce to bioinformatics. Finally, we give a brief guideline for using cloud computing in biology research.
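The MapReduce model referred to above can be sketched in-memory, without a cluster: a mapper emits key-value pairs, a shuffle phase groups values by key, and a reducer folds each group. A toy Python version applied to k-mer counting (the bioinformatics task is illustrative, not taken from the paper):

```python
from collections import defaultdict
from itertools import chain

def map_reduce(inputs, mapper, reducer):
    """Minimal in-memory sketch of the MapReduce model:
    map -> shuffle (group by key) -> reduce."""
    groups = defaultdict(list)
    for key, value in chain.from_iterable(mapper(i) for i in inputs):
        groups[key].append(value)          # the shuffle/sort phase
    return {k: reducer(k, vs) for k, vs in groups.items()}

# Toy bioinformatics example: counting 2-mers across sequence reads.
def kmer_mapper(read, k=2):
    return [(read[i:i + k], 1) for i in range(len(read) - k + 1)]

def count_reducer(kmer, counts):
    return sum(counts)

reads = ["ACGT", "CGTA"]
counts = map_reduce(reads, kmer_mapper, count_reducer)
```

In a real Hadoop or cloud deployment the map and reduce calls run on different machines and the shuffle moves data over the network, but the programming contract is exactly this pair of functions.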
Hassanien, Aboul Ella; Al-Shammari, Eiman Tamah; Ghali, Neveen I
Computational intelligence (CI) is a well-established paradigm, with current systems having many of the characteristics of biological computers and capable of performing a variety of tasks that are difficult to do using conventional techniques. It is a methodology involving adaptive mechanisms and/or an ability to learn that facilitates intelligent behavior in complex and changing environments, such that the system is perceived to possess one or more attributes of reason, such as generalization, discovery, association and abstraction. The objective of this article is to present to the CI and bioinformatics research communities some of the state-of-the-art in CI applications to bioinformatics and motivate research in new trend-setting directions. In this article, we present an overview of CI techniques in bioinformatics. We show how CI techniques, including neural networks, restricted Boltzmann machines, deep belief networks, fuzzy logic, rough sets, evolutionary algorithms (EA), genetic algorithms (GA), swarm intelligence, artificial immune systems and support vector machines, can be successfully employed to tackle various problems such as gene expression clustering and classification, protein sequence classification, gene selection, DNA fragment assembly, multiple sequence alignment, and protein structure and function prediction. We discuss some representative methods to provide inspiring examples that illustrate how CI can be utilized to address these problems and how bioinformatics data can be characterized by CI. Challenges to be addressed and future directions of research are also presented, and an extensive bibliography is included. Copyright © 2013 Elsevier Ltd. All rights reserved.
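As a concrete taste of one CI technique from the list above, the sketch below is a toy genetic algorithm that evolves random DNA strings toward a target motif. Population initialization, fitness-based selection, one-point crossover, and point mutation are all present, but the problem itself is purely illustrative, not one of the paper's applications:

```python
import random

def evolve(target, pop_size=30, generations=300, seed=1):
    """Toy genetic algorithm: evolve random DNA strings toward a
    target motif. Fitness = number of matching positions."""
    rng = random.Random(seed)
    alphabet = "ACGT"

    def fitness(s):
        return sum(a == b for a, b in zip(s, target))

    pop = ["".join(rng.choice(alphabet) for _ in target)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(target):
            break
        parents = pop[:pop_size // 2]         # truncation selection
        children = parents[:2]                # elitism: keep the best two
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(target))   # one-point crossover
            child = list(a[:cut] + b[cut:])
            pos = rng.randrange(len(target))      # point mutation
            child[pos] = rng.choice(alphabet)
            children.append("".join(child))
        pop = children
    return max(pop, key=fitness)

best = evolve("ACGTACGT")
```

Swapping the fitness function is all it takes to retarget the same loop at, say, a fragment-assembly or alignment-scoring objective, which is why EA/GA methods recur across the problems listed in the abstract.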
Fuller, Jonathan C; Khoueiry, Pierre; Dinkel, Holger; Forslund, Kristoffer; Stamatakis, Alexandros; Barry, Joseph; Budd, Aidan; Soldatos, Theodoros G; Linssen, Katja; Rajput, Abdul Mateen
The third Heidelberg Unseminars in Bioinformatics (HUB) was held on 18th October 2012, at Heidelberg University, Germany. HUB brought together around 40 bioinformaticians from academia and industry to discuss the 'Biggest Challenges in Bioinformatics' in a 'World Café' style event.
Ferraro Petrillo, Umberto; Roscigno, Gianluca; Cattaneo, Giuseppe; Giancarlo, Raffaele
MapReduce Hadoop bioinformatics applications require the availability of special-purpose routines to manage the input of sequence files. Unfortunately, the Hadoop framework does not provide any built-in support for the most popular sequence file formats like FASTA or BAM. Moreover, the development of these routines is not easy, both because of the diversity of these formats and the need to efficiently manage sequence datasets that may comprise billions of characters. We present FASTdoop, a generic Hadoop library for the management of FASTA and FASTQ files. We show that, with respect to analogous input management routines that have appeared in the literature, it offers versatility and efficiency. That is, it can handle collections of reads, with or without quality scores, as well as long genomic sequences, while the existing routines concentrate mainly on NGS sequence data. Moreover, in the domain where a comparison is possible, the routines proposed here are faster than the available ones. In conclusion, FASTdoop is a much needed addition to Hadoop-BAM. The software and the datasets are available at http://www.di.unisa.it/FASTdoop/ . Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved.
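The core difficulty FASTdoop addresses is splitting sequence files into records. Outside Hadoop, the single-machine version of that parsing problem looks like this (a simplified FASTA reader; FASTdoop's actual logic for handling records that straddle Hadoop input-split boundaries is considerably more involved):

```python
def parse_fasta(lines):
    """Yield (header, sequence) pairs from FASTA-formatted lines.
    Headers start with '>'; sequences may span multiple lines."""
    header, seq = None, []
    for line in lines:
        line = line.rstrip()
        if line.startswith(">"):
            if header is not None:
                yield header, "".join(seq)
            header, seq = line[1:], []
        elif line:
            seq.append(line)
    if header is not None:
        yield header, "".join(seq)

records = list(parse_fasta([">seq1", "ACGT", "ACGT", ">seq2", "TTAA"]))
```

In Hadoop, a naive byte-range split can land in the middle of a multi-line record like seq1 above, which is precisely why a format-aware input routine such as FASTdoop is needed.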
Branco,Sinara de Oliveira
This paper presents an analysis of the application of activities using films and the intersemiotic category of translation as a tool for practicing the abilities of listening, speaking, reading, and writing. The theoretical framework is based on the Functionalist Approach to Translation, Translation Categories, the Theory of Translation and Culture, and the Theory of Translation and Cinema. Four activities were created to use the English language with beginner students of the Mode...
47 CFR § 74.789 — Broadcast regulations applicable to digital low power television and television translator stations. The following sections are applicable to digital low power television and television translator...
Ferruzzi, Mario G; Peterson, Devin G; Singh, R Paul; Schwartz, Steven J; Freedman, Marjorie R
This paper, based on the symposium "Real-World Nutritional Translation Blended With Food Science," describes how an integrated "farm-to-cell" approach would create the framework necessary to address pressing public health issues. The paper describes current research that examines chemical reactions that may influence food flavor (and ultimately food consumption) and posits how these reactions can be used in health promotion; it explains how mechanical engineering and computer modeling can study digestive processes and provide better understanding of how physical properties of food influence nutrient bioavailability and posits how this research can also be used in the fight against obesity and diabetes; and it illustrates how an interdisciplinary scientific collaboration led to the development of a novel functional food that may be used clinically in the prevention and treatment of prostate cancer.
Schweikhart, Sharon A; Dembe, Allard E
Lean and Six Sigma are business management strategies commonly used in production industries to improve process efficiency and quality. During the past decade, these process improvement techniques increasingly have been applied outside the manufacturing sector, for example, in health care and in software development. This article concerns the potential use of Lean and Six Sigma in improving the processes involved in clinical and translational research. Improving quality, avoiding delays and errors, and speeding up the time to implementation of biomedical discoveries are prime objectives of the National Institutes of Health (NIH) Roadmap for Medical Research and the NIH's Clinical and Translational Science Award program. This article presents a description of the main principles, practices, and methods used in Lean and Six Sigma. Available literature involving applications of Lean and Six Sigma to health care, laboratory science, and clinical and translational research is reviewed. Specific issues concerning the use of these techniques in different phases of translational research are identified. Examples of Lean and Six Sigma applications that are being planned at a current Clinical and Translational Science Award site are provided, which could potentially be replicated elsewhere. We describe how different process improvement approaches are best adapted for particular translational research phases. Lean and Six Sigma process improvement methods are well suited to help achieve NIH's goal of making clinical and translational research more efficient and cost-effective, enhancing the quality of the research, and facilitating the successful adoption of biomedical research findings into practice.
Researchers take on challenges and opportunities to mine "Big Data" for answers to complex biological questions. Learn how bioinformatics uses advanced computing, mathematics, and technological platforms to store, manage, analyze, and understand data.
47 CFR § 1.572 (2010) — Processing TV broadcast and translator station applications. Federal Communications Commission, General Practice and Procedure, Broadcast Applications and Proceedings, General Filing Requirements.
Fabric Images Inc., specializing in the printing and manufacturing of fabric tension architecture for the retail, museum, and exhibit/tradeshow communities, designed software to translate 2-D graphics for 3-D surfaces prior to print production. Fabric Images' fabric-flattening design process models a 3-D surface based on computer-aided design (CAD) specifications. The surface geometry of the model is used to form a 2-D template, similar to a flattening process developed by NASA's Glenn Research Center. This template or pattern is then applied in the development of a 2-D graphic layout. Benefits of this process include 11.5 percent time savings per project, less material wasted, and the ability to improve upon graphic techniques and offer new design services. Partners include Exhibitgroup/Giltspur (end-user client: TAC Air, a division of Truman Arnold Companies Inc.), Jack Morton Worldwide (end-user client: Nickelodeon), as well as 3D Exhibits Inc., and MG Design Associates Corp.
Examines the application of linguistic theory to machine translation and translator tools, discusses the use of machine translation and translator tools in the real world of translation, and addresses the impact of translation technology on conceptions of language and other issues. Findings indicate that the human mind is flexible and linguistic…
Yang, Po; Dong, Feng; Codreanu, Valeriu
Due to the lack of specialist GPU (graphics processing unit) programming skills, the explosion of GPU power has not been fully utilized in general SME applications by inexperienced users. Moreover, existing automatic CPU-to-GPU code translators are mainly designed for research purposes, with poorly designed, hard-to-use interfaces. Little attention has been paid to the applicability, usability, and learnability of these tools for ordinary users. In this paper, we present an online automated CPU-to-GPU source translation system (GPSME) for inexperienced users to utilize GPU capability in accelerating general...
Dian Puspita Tedjosurya
Along with the development of information technology in recent years, a number of new applications have emerged, especially on mobile phones. Mobile phones serve not only as communication media but also as learning media, for example through translator applications. A translator application can be a tool for learning a language, such as an English-to-Bahasa Indonesia translator. The purpose of this research is to allow users to translate English into Bahasa Indonesia easily on a mobile phone. The translator application in this research was developed in the Java programming language (specifically J2ME) because of its advantage of running on various operating systems and because, as open source, it can be easily developed and distributed. Data collection was done through literature study, observation, and review of similar applications. Development of the system used object-oriented analysis and design, described using use case diagrams, class diagrams, sequence diagrams, and activity diagrams. The translation process used a rule-based method. The result of this research is a Java-based translator application that can translate English sentences into Indonesian sentences. The application can be accessed using a mobile phone with an Internet connection, and it has a spelling-check feature that detects misspelled words and suggests alternative words close to the input. The conclusion of this research is that the application translates everyday conversational sentences quite well, with sentence structure that corresponds to and stays close to the original meaning.
Matsushita, Kenichi; Dzau, Victor J
Obesity is now a major public health problem worldwide. Lifestyle modification to reduce the characteristic excess body adiposity is important in the treatment of obesity, but effective therapeutic intervention is still needed to control what has become an obesity epidemic. Unfortunately, many anti-obesity drugs have been withdrawn from market due to adverse side effects. Bariatric surgery therefore remains the most effective therapy for severe cases, although such surgery is invasive and researchers continue to seek new control strategies for obesity. Mesenchymal stem cells (MSCs) are a major source of adipocyte generation, and studies have been conducted into the potential roles of MSCs in treating obesity. However, despite significant progress in stem cell research and its potential applications for obesity, adipogenesis is a highly complex process and the molecular mechanisms governing MSC adipogenesis remain ill defined. In particular, successful clinical application of MSCs will require extensive identification and characterization of the transcriptional regulators controlling MSC adipogenesis. Since obesity is associated with the incidence of multiple important comorbidities, an in-depth understanding of the relationship between MSC adipogenesis and the comorbidities of obesity is also necessary to evaluate the potential of effective and safe MSC-based therapies for obesity. In addition, brown adipogenesis is an attractive topic from the viewpoint of therapeutic innovation and future research into MSC-based brown adipogenesis could lead to a novel breakthrough. Ongoing stem cell studies and emerging research fields such as epigenetics are expected to elucidate the complicated mechanisms at play in MSC adipogenesis and develop novel MSC-based therapeutic options for obesity. This review discusses the current understanding of MSCs in adipogenesis and their potential clinical applications for obesity.
Developing reproductive organs within a flower are sensitive to environmental stress. A higher incidence of environmental stress during this stage of a crop plant's developmental cycle will lead to major breaches in food security. Clearly, we need to understand this sensitivity and try to overcome it, through agricultural practices and/or the breeding of more tolerant cultivars. Although passion fruit vines initiate flowers all year round, flower primordia abort during warm summers, which restricts the season of fruit production in regions with warm summers. Previously, using controlled chambers, stages in flower development that are sensitive to heat were identified. Based on genetic analysis and physiological experiments in controlled environments, gibberellin activity appeared to be a possible point of horticultural intervention. Here, we aimed to shield flowers of a commercial cultivar from end-of-summer conditions, thus allowing fruit production in new seasons. We conducted experiments over three years in different settings, and our findings consistently show that a single application of an inhibitor of gibberellin biosynthesis to vines in mid-August can advance flowering by ~2–4 weeks, leading to fruit production ~1 month earlier. In this case, knowledge of phenology, environmental constraints and genetic variation allowed us to reach a practical solution.
Wang, Yijun; Lu, Wenjie; Deng, Dexiang
Diverse bioinformatic resources have been developed for plant transcription factor (TF) research. This review presents the bioinformatic resources and methodologies for the elucidation of plant TF-mediated biological events. Such information is helpful to dissect the transcriptional regulatory systems in the three reference plants Arabidopsis, rice, and maize and translation to other plants. Transcription factors (TFs) orchestrate diverse biological programs by the modulation of spatiotemporal patterns of gene expression via binding cis-regulatory elements. Advanced sequencing platforms accompanied by emerging bioinformatic tools revolutionize the scope and extent of TF research. The system-level integration of bioinformatic resources is beneficial to the decoding of TF-involved networks. Herein, we first briefly introduce general and specialized databases for TF research in three reference plants Arabidopsis, rice, and maize. Then, as proof of concept, we identified and characterized heat shock transcription factor (HSF) members through the TF databases. Finally, we present how the integration of bioinformatic resources at -omics layers can aid the dissection of TF-mediated pathways. We also suggest ways forward to improve the bioinformatic resources of plant TFs. Leveraging these bioinformatic resources and methodologies opens new avenues for the elucidation of transcriptional regulatory systems in the three model systems and translation to other plants.
Mihalas, George I; Tudor, Anca; Paralescu, Sorin; Andor, Minodora; Stoicu-Tivadar, Lacramioara
The paper describes our methodology and experience in establishing the content of the bioinformatics course introduced into the master's-level school of "Information Systems in Healthcare" (SIIS). The syllabi of both the lectures and the laboratory work are presented and discussed.
Dam-Jensen, Helle; Heine, Carmen
Teaching of translation and writing in the university classroom tends to focus on task knowledge by practicing text production and analyzing and discussing the quality of products. In this article, we argue that the outcome of teaching may be improved if students are taught to see themselves not only as learners, but also as thinkers and problem solvers. This can be achieved by systematically applying knowledge from process research, which can give insight into the mental and physical processes of text production. The article provides an overview of methods commonly used in process research and discusses the pros and cons of their application in the teaching of translation and writing at university level.
Harris, Nomi L; Cock, Peter J A; Lapp, Hilmar; Chapman, Brad; Davey, Rob; Fields, Christopher; Hokamp, Karsten; Munoz-Torres, Monica
The Bioinformatics Open Source Conference (BOSC) is organized by the Open Bioinformatics Foundation (OBF), a nonprofit group dedicated to promoting the practice and philosophy of open source software development and open science within the biological research community. Since its inception in 2000, BOSC has provided bioinformatics developers with a forum for communicating the results of their latest efforts to the wider research community. BOSC offers a focused environment for developers and users to interact and share ideas about standards; software development practices; practical techniques for solving bioinformatics problems; and approaches that promote open science and sharing of data, results, and software. BOSC is run as a two-day special interest group (SIG) before the annual Intelligent Systems in Molecular Biology (ISMB) conference. BOSC 2015 took place in Dublin, Ireland, and was attended by over 125 people, about half of whom were first-time attendees. Session topics included "Data Science;" "Standards and Interoperability;" "Open Science and Reproducibility;" "Translational Bioinformatics;" "Visualization;" and "Bioinformatics Open Source Project Updates". In addition to two keynote talks and dozens of shorter talks chosen from submitted abstracts, BOSC 2015 included a panel, titled "Open Source, Open Door: Increasing Diversity in the Bioinformatics Open Source Community," that provided an opportunity for open discussion about ways to increase the diversity of participants in BOSC in particular, and in open source bioinformatics in general. The complete program of BOSC 2015 is available online at http://www.open-bio.org/wiki/BOSC_2015_Schedule.
Wei, Dongqing; Zhao, Tangzhen; Dai, Hao
This text examines in detail mathematical and physical modeling, computational methods and systems for obtaining and analyzing biological structures, using pioneering research cases as examples. As such, it emphasizes programming and problem-solving skills. It provides information on structure bioinformatics at various levels, with individual chapters covering introductory to advanced aspects, from fundamental methods and guidelines on acquiring and analyzing genomics and proteomics sequences, the structures of protein, DNA and RNA, to the basics of physical simulations and methods for conform
Burr, Tom L [Los Alamos National Laboratory]
Genetic data are often used to infer evolutionary relationships among a collection of viruses, bacteria, animal or plant species, or other operational taxonomic units (OTUs). A phylogenetic tree depicts such relationships and provides a visual representation of the estimated branching order of the OTUs. Tree estimation is unique for several reasons, including: the types of data used to represent each OTU; the use of probabilistic nucleotide substitution models; the inference goals involving both tree topology and branch length; and the huge number of possible trees for a given sample of even a very modest number of OTUs, which implies that finding the best tree(s) to describe the genetic data for each OTU is computationally demanding. Bioinformatics is too large a field to review here. We focus on that aspect of bioinformatics that includes the study of similarities in genetic data from multiple OTUs. Although research questions are diverse, a common underlying challenge is to estimate the evolutionary history of the OTUs. Therefore, this paper reviews the role of phylogenetic tree estimation in bioinformatics, surveys available methods and software, and identifies areas for additional research and development.
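One classical answer to the tree-estimation problem sketched above is agglomerative clustering of a pairwise distance matrix. The toy UPGMA sketch below repeatedly merges the closest pair of clusters, averaging distances weighted by cluster size; it recovers topology only, with no branch lengths or substitution model, so it is an illustration of the search problem rather than of modern likelihood-based methods:

```python
def upgma(names, dist):
    """Tiny UPGMA sketch. names: list of OTU labels; dist[(a, b)]:
    distance for each unordered pair. Returns a nested-tuple tree."""
    d = {frozenset(k): v for k, v in dist.items()}
    size = {n: 1 for n in names}
    clusters = list(names)
    while len(clusters) > 1:
        # Find the closest pair of current clusters.
        a, b = min(((x, y) for i, x in enumerate(clusters)
                    for y in clusters[i + 1:]),
                   key=lambda p: d[frozenset(p)])
        merged = (a, b)
        size[merged] = size[a] + size[b]
        # Average distances to the new cluster, weighted by size.
        for c in clusters:
            if c not in (a, b):
                d[frozenset((merged, c))] = (
                    size[a] * d[frozenset((a, c))]
                    + size[b] * d[frozenset((b, c))]) / size[merged]
        clusters = [c for c in clusters if c not in (a, b)] + [merged]
    return clusters[0]

tree = upgma(["A", "B", "C"], {("A", "B"): 2, ("A", "C"): 8, ("B", "C"): 8})
```

Even this greedy heuristic hints at why the general problem is hard: it commits to one merge order, whereas the space of possible topologies grows super-exponentially in the number of OTUs.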
Good, Benjamin M; Su, Andrew I
Bioinformatics is faced with a variety of problems that require human involvement. Tasks like genome annotation, image analysis, knowledge-base population and protein structure determination all benefit from human input. In some cases, people are needed in vast quantities, whereas in others, we need just a few with rare abilities. Crowdsourcing encompasses an emerging collection of approaches for harnessing such distributed human intelligence. Recently, the bioinformatics community has begun to apply crowdsourcing in a variety of contexts, yet few resources are available that describe how these human-powered systems work and how to use them effectively in scientific domains. Here, we provide a framework for understanding and applying several different types of crowdsourcing. The framework considers two broad classes: systems for solving large-volume 'microtasks' and systems for solving high-difficulty 'megatasks'. Within these classes, we discuss system types, including volunteer labor, games with a purpose, microtask markets and open innovation contests. We illustrate each system type with successful examples in bioinformatics and conclude with a guide for matching problems to crowdsourcing solutions that highlights the positives and negatives of different approaches.
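For the microtask class described above, the standard aggregation step combines redundant answers per task; a simple majority vote with an agreement threshold is sketched below (the gene-annotation labels are invented for illustration, and real systems often weight votes by estimated worker reliability):

```python
from collections import Counter

def aggregate(annotations, min_agreement=0.5):
    """Combine redundant microtask answers by majority vote; return
    None for tasks where no label clears the agreement threshold."""
    results = {}
    for task, labels in annotations.items():
        label, count = Counter(labels).most_common(1)[0]
        results[task] = label if count / len(labels) > min_agreement else None
    return results

# Three workers annotated gene1; only two annotated gene2 and disagreed.
votes = {"gene1": ["enzyme", "enzyme", "receptor"],
         "gene2": ["kinase", "receptor"]}
calls = aggregate(votes)
```

Tasks that come back `None` are exactly the ones a crowdsourcing pipeline would route to additional workers or to an expert, which is how volume and quality are traded off in practice.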
Probabilistic topic models have been developed for applications in domains such as text mining, information retrieval, computer vision, and bioinformatics. In this thesis, we focus on developing novel probabilistic topic models for image mining and bioinformatics studies. Specifically, a probabilistic topic-connection (PTC) model…
Ramlo, Susan E.; McConnell, David; Duan, Zhong-Hui; Moore, Francisco B.
Faculty at a Midwestern metropolitan public university recently developed a course on bioinformatics that emphasized collaboration and inquiry. Bioinformatics, essentially the application of computational tools to biological data, is inherently interdisciplinary. Thus part of the challenge of creating this course was serving the needs and…
Scarrow, Gayle; Angus, Donna; Holmes, Bev J
Health research funding agencies are placing a growing focus on knowledge translation (KT) plans, also known as dissemination and implementation (D&I) plans, in grant applications to decrease the gap between what we know from research and what we do in practice, policy, and further research. Historically, review panels have focused on the scientific excellence of applications to determine which should be funded; however, relevance to societal health priorities, the facilitation of evidence-informed practice and policy, and the realization of commercialization opportunities all require a different lens. While experts in their respective fields, grant reviewers may lack the competencies to rigorously assess the KT components of applications. Funders of health research, including health charities, non-profit agencies, governments, and foundations, have an obligation to ensure that these components of funding applications are as rigorously evaluated as the scientific components. In this paper, we discuss the need for a more rigorous evaluation of knowledge translation potential by review panels and propose how this may be addressed. We propose that reviewer training, supported in various ways (including guidelines, KT expertise on review panels, and modalities such as online and face-to-face training), will result in the rigorous assessment of all components of funding applications, thus increasing the relevance and use of funded research evidence. An unintended but highly welcome consequence of such training could be higher quality D&I or KT plans in subsequent funding applications from trained reviewers.
The traditional methods for mining foods for bioactive peptides are tedious and long. As in the drug industry, the length of time to identify and deliver a commercial health ingredient that reduces disease symptoms can be anywhere between 5 and 10 years. Reducing this time and effort is crucial in order to create new commercially viable products with clear and important health benefits. In the past few years, bioinformatics, the science that brings together fast computational biology and efficient genome mining, has appeared as the long-awaited solution to this problem. By quickly mining food genomes for characteristics of certain food therapeutic ingredients, researchers can potentially find new ones in a matter of a few weeks. Yet, surprisingly, very little success has been achieved so far using bioinformatics in mining for food bioactives. The absence of food-specific bioinformatic mining tools, the slow integration of experimental mining and bioinformatics, and the important differences between experimental platforms are some of the reasons for the slow progress of bioinformatics in the field of functional food and, more specifically, in bioactive peptide discovery. In this paper I discuss some methods that could be easily translated, using rational peptide bioinformatics design, to food bioactive peptide mining. I highlight the need for an integrated food peptide database. I also discuss how to better integrate experimental work with bioinformatics in order to improve the mining of food for bioactive peptides, thereby achieving higher success rates.
Lu, Hong; Xie, Cheng; Zhao, Yi-Min; Chen, Fa-Ming
Stem cells have received a great deal of interest from the research community as potential therapeutic "tools" for a variety of chronic debilitating diseases that lack clinically effective therapies. Stem cells are also of interest for the regeneration of tooth-supporting tissues that have been lost to periodontal disease. Indeed, substantial data have demonstrated that the exogenous administration of stem cells or their derivatives in preclinical animal models of periodontal defects can restore damaged tissues to their original form and function. As we discuss here, however, considerable hurdles must be overcome before these findings can be responsibly translated to novel clinical therapies. Generally, the application of stem cells for periodontal therapy in clinics will not be realized until the best cell(s) to use, the optimal dose, and an effective mode of administration are identified. In particular, we need to better understand the mechanisms of action of stem cells after transplantation in the periodontium and to learn how to precisely control stem cell fates in the pathological environment around a tooth. From a translational perspective, we outline the challenges that may vary across preclinical models for the evaluation of stem cell therapy in situations that require periodontal reconstruction and the safety issues that are related to clinical applications of human stem cells. Although clinical trials that use autologous periodontal ligament stem cells have been approved and have already been initiated, proper consideration of the technical, safety, and regulatory concerns may facilitate, rather than inhibit, the clinical translation of new therapies.
Fox, Caroline S; Hall, Jennifer L; Arnett, Donna K; Ashley, Euan A; Delles, Christian; Engler, Mary B; Freeman, Mason W; Johnson, Julie A; Lanfear, David E; Liggett, Stephen B; Lusis, Aldons J; Loscalzo, Joseph; MacRae, Calum A; Musunuru, Kiran; Newby, L Kristin; O'Donnell, Christopher J; Rich, Stephen S; Terzic, Andre
The field of genetics and genomics has advanced considerably with the achievement of recent milestones encompassing the identification of many loci for cardiovascular disease and variable drug responses. Despite this achievement, a gap remains between these genetic discoveries and their meaningful translation into disease prevention and clinical care. The purpose of this scientific statement is to address the gap between genetic discoveries and their practical application to cardiovascular clinical care. In brief, this scientific statement assesses the current timeline for effective translation of basic discoveries to clinical advances, highlighting past successes. Current discoveries in the area of genetics and genomics are covered next, followed by future expectations, tools, and competencies for achieving the goal of improving clinical care. © 2015 American Heart Association, Inc.
Czyzewicz, Nathan; Stes, Elisabeth; De Smet, Ive
The first signaling peptide discovered and purified was insulin in 1921. However, it was not until 1991 that the first peptide signal, systemin, was discovered in plants. Since the discovery of systemin, peptides have emerged as a potent and diverse class of signaling molecules in plant systems. Peptides consist of small amino acid sequences, which often act as ligands of receptor kinases. However, not all peptides are created equal, and signaling peptides are grouped into several subgroups dependent on the type of post-translational processing they undergo. Here, we focus on the application of synthetic, post-translationally modified peptides (PTMPs) to plant systems, describing several methods appropriate for the use of peptides in Arabidopsis thaliana and crop models.
Bujarski, Spencer; Ray, Lara A
In spite of high prevalence and disease burden, scientific consensus on the etiology and treatment of Alcohol Use Disorder (AUD) has yet to be reached. The development and utilization of experimental psychopathology paradigms in the human laboratory represents a cornerstone of AUD research. In this review, we describe and critically evaluate the major experimental psychopathology paradigms developed for AUD, with an emphasis on their implications, strengths, weaknesses, and methodological considerations. Specifically we review alcohol administration, self-administration, cue-reactivity, and stress-reactivity paradigms. We also provide an introduction to the application of experimental psychopathology methods to translational research including genetics, neuroimaging, pharmacological and behavioral treatment development, and translational science. Through refining and manipulating key phenotypes of interest, these experimental paradigms have the potential to elucidate AUD etiological factors, improve the efficiency of treatment developments, and refine treatment targets thus advancing precision medicine. Copyright © 2016 Elsevier Ltd. All rights reserved.
Wilcox, H.; Schaefer, K. M.; Jafarov, E. E.; Strawhacker, C.; Pulsifer, P. L.; Thurmes, N.
William Frith Harris
An appendix to Le Grand’s 1945 book, Optique Physiologique: Tome Premier: La Dioptrique de l’Œil et Sa Correction, briefly dealt with the application of matrices in optics. However, the appendix was omitted from the well-known English translation, Physiological Optics, which appeared in 1980. Consequently the material is all but forgotten. This is unfortunate in view of the importance of the dioptric power matrix and the ray transference, which entered the optometric literature many years later. Motivated by the perception that there has not been enough care in optometry to attribute concepts appropriately, this paper attempts a careful analysis of Le Grand’s thinking as reflected in his appendix. A translation into English is provided in the appendix to this paper. The paper opens with a summary of the basics of Gaussian and linear optics sufficient for the interpretation of Le Grand’s appendix, which follows. The paper looks more particularly at what Le Grand says in relation to the transference and the dioptric power matrix, though many other issues are also touched on, including the conditions under which distant objects will map to clear images on the retina and, more particularly, to clear images that are undistorted. Detailed annotations of Le Grand’s translated appendix are provided. (S Afr Optom 2013 72(4): 145-166)
Carr, Eloise Cj; Babione, Julie N; Marshall, Deborah
Our objective was to identify the needs and requirements of end users in order to inform the development of a user interface that translates an existing evidence-based decision support tool into a practical and usable interface for health service planning for osteoarthritis (OA) care. We used a user-centered design (UCD) approach, which emphasizes the role of end users and is well suited to knowledge translation (KT). The first phase used a needs assessment focus group (n=8) and interviews (n=5) with target users (health care planners) within a provincial health care organization. The second phase used a participatory design approach, with two small group sessions (n=6) to explore the workflow, thought processes, and needs of intended users. The needs assessment identified five design recommendations: ensuring the user interface supports the target user group, allowing for user-directed data exploration, input parameter flexibility, clear presentation, and provision of relevant definitions. The second phase identified workflow insights from a proposed scenario. Graphs, the need for a visual overview of the data, and interactivity were key considerations to aid in meaningful use of the model and knowledge translation. A UCD approach is well suited to identifying health care planners' requirements when using a decision support tool to improve health service planning and management of OA. We believe this is one of the first applications to be used in planning for health service delivery. We identified specific design recommendations that will increase user acceptability and uptake of the user interface and underlying decision support tool in practice. Our approach demonstrated how UCD can be used to enable knowledge translation. Copyright © 2017 Elsevier B.V. All rights reserved.
Tolvanen, Martti; Vihinen, Mauno
Distance learning as a computer-aided concept allows students to take courses from anywhere at any time. In bioinformatics, computers are needed to collect, store, process, and analyze massive amounts of biological and biomedical data. We have applied the concept of distance learning in virtual bioinformatics to provide university course material…
Kortsarts, Yana; Morris, Robert W.; Utell, Janine M.
Bioinformatics is a relatively new interdisciplinary field that integrates computer science, mathematics, biology, and information technology to manage, analyze, and understand biological, biochemical and biophysical information. We present our experience in teaching an interdisciplinary course, Introduction to Bioinformatics, which was developed…
If we want to encompass adequately the wide-ranging field of human translation, it is necessary to include in translation studies (TS) the concept of translator awareness (or translator consciousness, for that matter). However, this is more easily said than done, because this concept does not easily lend itself to definition, let alone to measurement, e.g., by investigating translator behaviour. To put it bluntly: translator awareness is a fuzzy concept. Like many obviously difficult-to-define concepts with which dialogue in TS is burdened, translator awareness lacks an articulated theory within which different forms of translator behaviour can be convincingly related to, or distinguished from, one another. Hence, TS has so far not tackled, at least not systematically, the issue of translator awareness.
Kirchhoff, Katrin; Turner, Anne M; Axelrod, Amittai; Saavedra, Francisco
Accurate, understandable public health information is important for ensuring the health of the nation. The large portion of the US population with Limited English Proficiency is best served by translations of public-health information into other languages. However, a large number of health departments and primary care clinics face significant barriers to fulfilling federal mandates to provide multilingual materials to Limited English Proficiency individuals. This article presents a pilot study on the feasibility of using freely available statistical machine translation technology to translate health promotion materials. The authors gathered health-promotion materials in English from local and national public-health websites. Spanish versions were created by translating the documents using a freely available machine-translation website. Translations were rated for adequacy and fluency, analyzed for errors, manually corrected by a human posteditor, and compared with exclusively manual translations. Machine translation plus postediting took 15-53 min per document, compared to the reported days or even weeks for the standard translation process. A blind comparison of machine-assisted and human translations of six documents revealed overall equivalency between machine-translated and manually translated materials. The analysis of translation errors indicated that the most important errors were word-sense errors. The results indicate that machine translation plus postediting may be an effective method of producing multilingual health materials with equivalent quality but lower cost compared to manual translations.
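The postediting step described above is often quantified with edit-distance-style measures. As an illustrative sketch (an assumption of this edit, not the authors' actual rating method), a word-level Levenshtein distance between raw machine output and its postedited version approximates the effort the posteditor expended:

```python
# Word-level Levenshtein distance: the number of word insertions,
# deletions, and substitutions needed to turn the raw machine
# translation into the postedited text (example sentences are made up).
def edit_distance(mt, postedited):
    a, b = mt.split(), postedited.split()
    # classic dynamic-programming table over word positions
    d = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)]
         for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = min(d[i - 1][j] + 1,          # delete a word
                          d[i][j - 1] + 1,          # insert a word
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # substitute
    return d[len(a)][len(b)]

raw = "the vaccine protect against the influenza"
fixed = "the vaccine protects against influenza"
print(edit_distance(raw, fixed))  # 2 edits: one substitution, one deletion
```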
Mishima, Hiroyuki; Sasaki, Kensaku; Tanaka, Masahiro; Tatebe, Osamu; Yoshiura, Koh-Ichiro
In bioinformatics projects, scientific workflow systems are widely used to manage computational procedures. Full-featured workflow systems have been proposed to fulfil the demand for workflow management. However, such systems tend to be over-weighted for actual bioinformatics practices. We realize that quick deployment of cutting-edge software implementing advanced algorithms and data formats, and continuous adaptation to changes in computational resources and the environment are often prioritized in scientific workflow management. These features have a greater affinity with the agile software development method through iterative development phases after trial and error. Here, we show the application of the scientific workflow system Pwrake to bioinformatics workflows. Pwrake is a parallel workflow extension of Ruby's standard build tool Rake, the flexibility of which has been demonstrated in the astronomy domain. Therefore, we hypothesize that Pwrake also has advantages in actual bioinformatics workflows. We implemented the Pwrake workflows to process next generation sequencing data using the Genomic Analysis Toolkit (GATK) and Dindel. GATK and Dindel workflows are typical examples of sequential and parallel workflows, respectively. We found that in practice, actual scientific workflow development iterates over two phases, the workflow definition phase and the parameter adjustment phase. We introduced separate workflow definitions to help focus on each of the two developmental phases, as well as helper methods to simplify the descriptions. This approach increased iterative development efficiency. Moreover, we implemented combined workflows to demonstrate modularity of the GATK and Dindel workflows. Pwrake enables agile management of scientific workflows in the bioinformatics domain. The internal domain specific language design built on Ruby gives the flexibility of rakefiles for writing scientific workflows. Furthermore, readability and maintainability of rakefiles
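The Rake-style, dependency-driven workflow idea described above can be sketched in miniature (Pwrake itself is Ruby; this toy task runner, its task names, and its parameters are purely illustrative). Keeping parameters separate from task definitions mirrors the split between the workflow definition phase and the parameter adjustment phase:

```python
# Toy dependency-driven task runner in the spirit of Rake/Pwrake.
PARAMS = {"threads": 4, "min_quality": 20}  # adjusted between runs

TASKS = {}

def task(name, deps=()):
    """Register a function as a named task with prerequisite tasks."""
    def register(fn):
        TASKS[name] = (deps, fn)
        return fn
    return register

def run(name, done=None):
    """Execute a task after recursively executing its prerequisites."""
    done = set() if done is None else done
    if name in done:
        return
    deps, fn = TASKS[name]
    for dep in deps:
        run(dep, done)
    fn()
    done.add(name)

@task("align")
def align():
    print(f"aligning reads with {PARAMS['threads']} threads")

@task("call_variants", deps=("align",))
def call_variants():
    print(f"calling variants at min quality {PARAMS['min_quality']}")

run("call_variants")  # runs align first, then call_variants
```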
Drinkenburg, Wilhelmus H I M; Ahnaou, Abdallah; Ruigt, Gé S F
Current research on the effects of pharmacological agents on human neurophysiology finds its roots in animal research, which is also reflected in contemporary animal pharmaco-electroencephalography (p-EEG) applications. The contributions, present value and translational appreciation of animal p-EEG-based applications are strongly interlinked with progress in recording and neuroscience analysis methodology. After the pioneering years in the late 19th and early 20th century, animal p-EEG research flourished in the pharmaceutical industry in the early 1980s. However, around the turn of the millennium the emergence of structurally and functionally revealing imaging techniques and the increasing application of molecular biology caused a temporary reduction in the use of EEG as a window into the brain for the prediction of drug efficacy. Today, animal p-EEG is applied again for its biomarker potential - extensive databases of p-EEG and polysomnography studies in rats and mice hold EEG signatures of a broad collection of psychoactive reference and test compounds. A multitude of functional EEG measures has been investigated, ranging from simple spectral power and sleep-wake parameters to advanced neuronal connectivity and plasticity parameters. Compared to clinical p-EEG studies, where the level of vigilance can be well controlled, changes in sleep-waking behaviour are generally a prominent confounding variable in animal p-EEG studies and need to be dealt with. Contributions of rodent pharmaco-sleep EEG research are outlined to illustrate the value and limitations of such preclinical p-EEG data for pharmacodynamic and chronopharmacological drug profiling. Contemporary applications of p-EEG and pharmaco-sleep EEG recordings in animals provide a common and relatively inexpensive window into the functional brain early in the preclinical and clinical development of psychoactive drugs in comparison to other brain imaging techniques. They provide information on the impact of
Fu, Zhiyan; Lin, Jing
The rapidly increasing number of characterized allergens has created huge demands for advanced information storage, retrieval, and analysis. Bioinformatics and machine learning approaches provide useful tools for the study of allergens and epitopes prediction, which greatly complement traditional laboratory techniques. The specific applications mainly include identification of B- and T-cell epitopes, and assessment of allergenicity and cross-reactivity. In order to facilitate the work of clinical and basic researchers who are not familiar with bioinformatics, we review in this chapter the most important databases, bioinformatic tools, and methods with relevance to the study of allergens.
Pallen, Mark J
Microbial bioinformatics in 2020 will remain a vibrant, creative discipline, adding value to the ever-growing flood of new sequence data, while embracing novel technologies and fresh approaches. Databases and search strategies will struggle to cope and manual curation will not be sustainable during the scale-up to the million-microbial-genome era. Microbial taxonomy will have to adapt to a situation in which most microorganisms are discovered and characterised through the analysis of sequences. Genome sequencing will become a routine approach in clinical and research laboratories, with fresh demands for interpretable user-friendly outputs. The "internet of things" will penetrate healthcare systems, so that even a piece of hospital plumbing might have its own IP address that can be integrated with pathogen genome sequences. Microbiome mania will continue, but the tide will turn from molecular barcoding towards metagenomics. Crowd-sourced analyses will collide with cloud computing, but eternal vigilance will be the price of preventing the misinterpretation and overselling of microbial sequence data. Output from hand-held sequencers will be analysed on mobile devices. Open-source training materials will address the need for the development of a skilled labour force. As we boldly go into the third decade of the twenty-first century, microbial sequence space will remain the final frontier! © 2016 The Author. Microbial Biotechnology published by John Wiley & Sons Ltd and Society for Applied Microbiology.
Ayoub, Catherine C; O'Connor, Erin; Rappolt-Schlichtmann, Gabrielle; Fischer, Kurt W; Rogosch, Fred A; Toth, Sheree L; Cicchetti, Dante
Through a translational approach, dynamic skill theory enhances the understanding of the variation in the behavioral and cognitive presentations of a high-risk population: maltreated children. Two studies illustrate the application of normative developmental constructs from a dynamic skills perspective to samples of young maltreated and nonmaltreated children. Each study examines the emotional and cognitive development of maltreated children with attention to their developing world view or negativity bias and cognitive skills. Across both studies, maltreated children demonstrate negativity bias when compared to their nonmaltreated counterparts. Cognitive complexity demonstrated by the maltreated children is dependent upon a positive or negative context. Positive problem solving is more difficult for maltreated children when compared to their nonmaltreated counterparts. Differences by maltreatment type, severity, timing of the abuse, and identity of the perpetrator are also delineated, and variation in the resulting developmental trajectories in each case is explored. This translation of dynamic skill theory, as applied to maltreated children, enhances our basic understanding of their functioning, clarifies the nature of their developmental differences, and underscores the need for early intervention.
Hildebrandt, Anna Katharina; Stöckel, Daniel; Fischer, Nina M; de la Garza, Luis; Krüger, Jens; Nickels, Stefan; Röttig, Marc; Schärfe, Charlotta; Schumann, Marcel; Thiel, Philipp; Lenhof, Hans-Peter; Kohlbacher, Oliver; Hildebrandt, Andreas
Web-based workflow systems have gained considerable momentum in sequence-oriented bioinformatics. In structural bioinformatics, however, such systems are still relatively rare; while commercial stand-alone workflow applications are common in the pharmaceutical industry, academic researchers often still rely on command-line scripting to glue individual tools together. In this work, we address the problem of building a web-based system for workflows in structural bioinformatics. For the underlying molecular modelling engine, we opted for the BALL framework because of its extensive and well-tested functionality in the field of structural bioinformatics. The large number of molecular data structures and algorithms implemented in BALL allows for elegant and sophisticated development of new approaches in the field. We hence connected the versatile BALL library and its visualization and editing front end BALLView with the Galaxy workflow framework. The result, which we call ballaxy, enables the user to simply and intuitively create sophisticated pipelines for applications in structure-based computational biology, integrated into a standard tool for molecular modelling. ballaxy consists of three parts: some minor modifications to the Galaxy system, a collection of tools and an integration into the BALL framework and the BALLView application for molecular modelling. Modifications to Galaxy will be submitted to the Galaxy project, and the BALL and BALLView integrations will be integrated in the next major BALL release. After acceptance of the modifications into the Galaxy project, we will publish all ballaxy tools via the Galaxy toolshed. In the meantime, all three components are available from http://www.ball-project.org/ballaxy. Also, docker images for ballaxy are available at https://registry.hub.docker.com/u/anhi/ballaxy/dockerfile/. ballaxy is licensed under the terms of the GPL. © The Author 2014. Published by Oxford University Press. All rights reserved. For
Handl, Julia; Kell, Douglas B; Knowles, Joshua
This paper reviews the application of multiobjective optimization in the fields of bioinformatics and computational biology. A survey of existing work, organized by application area, forms the main body of the review, following an introduction to the key concepts in multiobjective optimization. An original contribution of the review is the identification of five distinct "contexts," giving rise to multiple objectives: These are used to explain the reasons behind the use of multiobjective optimization in each application area and also to point the way to potential future uses of the technique.
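The key concept underlying all of these applications is Pareto dominance: with multiple objectives there is generally no single best solution, only a front of trade-offs. A minimal sketch (illustrative, not from the review), with both objectives minimized:

```python
# Pareto dominance and the Pareto front, the core of multiobjective
# optimization (both objectives are minimized here).
def dominates(a, b):
    """True if a is at least as good as b everywhere and strictly better somewhere."""
    return (all(x <= y for x, y in zip(a, b)) and
            any(x < y for x, y in zip(a, b)))

def pareto_front(solutions):
    """Return the non-dominated subset of a list of objective vectors."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t != s)]

# e.g. a trade-off between model error and model complexity
points = [(0.1, 9), (0.2, 4), (0.3, 4), (0.4, 2), (0.5, 3)]
print(pareto_front(points))  # -> [(0.1, 9), (0.2, 4), (0.4, 2)]
```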
Shi, Lin; López Villar, Elena; Chen, Chengshui
Because of economic growth and changes in lifestyle, metabolic diseases have become a major public health problem, imposing heavy economic burdens on individuals, families, and health systems. However, their precise mediators and mechanisms remain to be fully understood. Clinical translational medicine (CTM) is an emerging area comprising multidisciplinary research from basic science to medical applications, and a new tool to improve human health by reducing disease incidence, morbidity, and mortality. It can bridge knowledge of metabolic disease processes, gained from in vitro and experimental animal models, with the disease pathways found in humans, further identify their susceptibility genes, and enable patients to receive personalized treatment. Thus, we have reason to believe that CTM will play an even greater role in the development of new diagnostics, therapies, healthcare, and policies, and that the Sino-American Symposium on Clinical and Translational Medicine (SAS-CTM) will become an increasingly important platform for exchanging ideas on clinical and translational research, entailing close collaboration among hospitals, academia, and industry.
Respiratory diseases are projected to become one of the top 3 leading causes of mortality by 2020, accounting for about one third of total estimated deaths. The journal Translational Respiratory Medicine is a truly international, peer-reviewed journal devoted to the publication of articles on outstanding work with translational potential between basic research and clinical application to understanding respiratory disease. Translational respiratory medicine will focus more on biomarker identification and validation in pulmonary diseases in combination with clinical informatics, targeted proteomics, bioinformatics, systems medicine, or mathematical science; on translational strategies for bringing cell-based therapy to clinical application to treat lung diseases; on targeted therapies in combination with personalized medicine; and on remote electronic medicine to monitor the health of large populations. Translational Respiratory Medicine is an additional but unique opportunity for scientists and clinicians who work on pulmonary diseases to publish their outstanding findings, innovative results, and critical and perceptive opinions in the journal.
van Gelder, Celia W G; Hooft, Rob W W; van Rijswijk, Merlijn N; van den Berg, Linda; Kok, Ruben G; Reinders, Marcel; Mons, Barend; Heringa, Jaap
This review provides a historical overview of the inception and development of bioinformatics research in the Netherlands. Rooted in theoretical biology by foundational figures such as Paulien Hogeweg (at Utrecht University since the 1970s), the developments leading to organizational structures supporting a relatively large Dutch bioinformatics community will be reviewed. We will show that the most valuable resource that we have built over these years is the close-knit national expert community that is well engaged in basic and translational life science research programmes. The Dutch bioinformatics community is accustomed to facing the ever-changing landscape of data challenges and working towards solutions together. In addition, this community is the stable factor on the road towards sustainability, especially in times where existing funding models are challenged and change rapidly. © The Author 2017. Published by Oxford University Press.
Bruhn, Russel Elton; Burton, Philip John
Data interchange between bioinformatics databases will, in the future, most likely take place using Extensible Markup Language (XML). The document structure will be described by an XML Schema rather than a document type definition (DTD). To ensure flexibility, the XML Schema must incorporate aspects of object-oriented modeling. This impinges on the choice of the data model, which, in turn, is based on how biologists organize bioinformatics data. Thus, there is a need for the general bioinformatics community to be aware of the design issues relating to XML Schema. This paper, which is aimed at a general bioinformatics audience, uses examples to describe the differences between a DTD and an XML Schema and indicates how Unified Modeling Language diagrams may be used to incorporate object-oriented modeling in the design of a schema.
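A minimal, hypothetical record illustrating the contrast the paper draws: a DTD constrains only element structure, while an XML Schema can additionally type and reuse content. The element and type names here are invented for the example, not taken from the paper:

```xml
<!-- DTD: structure only, every leaf is untyped character data -->
<!ELEMENT sequence (accession, length, residues)>
<!ELEMENT accession (#PCDATA)>
<!ELEMENT length (#PCDATA)>
<!ELEMENT residues (#PCDATA)>
```

```xml
<!-- XML Schema: the same record, with data types on the content -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="sequence">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="accession" type="xs:string"/>
        <xs:element name="length" type="xs:positiveInteger"/>
        <xs:element name="residues" type="xs:string"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```

A validator can reject `<length>ten</length>` against the schema but not against the DTD, which is one concrete face of the flexibility argument made above.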
de Jong, Anne; van Heel, Auke J.; Kuipers, Oscar P.
Bioinformatic tools can greatly improve the efficiency of bacteriocin screening efforts by limiting the number of strains that need to be screened. Different classes of bacteriocins can be detected in genomes by looking at different features. Finding small bacteriocins can be especially challenging due to low homology and because small open reading frames (ORFs) are often omitted from annotations. In this chapter, several bioinformatic tools and strategies to identify bacteriocins in genomes are discussed.
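The abstract above notes that small ORFs are often missing from annotations. A naive six-frame scan that recovers short, bacteriocin-sized ORFs can be sketched as follows; the length window and coordinate convention are illustrative assumptions, not the chapter's actual tools:

```python
START, STOPS = "ATG", {"TAA", "TAG", "TGA"}

def revcomp(seq):
    """Reverse complement of a DNA string (ACGT alphabet assumed)."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def small_orfs(genome, min_nt=60, max_nt=300):
    """Return (strand, start, end) for short ATG..stop ORFs in all six frames.

    Coordinates are 0-based on the scanned strand ('-' hits are positions in
    the reverse-complemented sequence). The 60-300 nt window is a hypothetical
    bacteriocin-sized range that a standard annotation pipeline might skip.
    """
    hits = []
    for strand, seq in (("+", genome), ("-", revcomp(genome))):
        for frame in range(3):
            codons = [seq[i:i + 3] for i in range(frame, len(seq) - 2, 3)]
            start = None
            for idx, codon in enumerate(codons):
                if codon == START and start is None:
                    start = idx
                elif codon in STOPS and start is not None:
                    length = (idx - start + 1) * 3  # includes the stop codon
                    if min_nt <= length <= max_nt:
                        hits.append((strand, frame + start * 3,
                                     frame + (idx + 1) * 3))
                    start = None
    return hits
```

Real screens would add homology and motif evidence on top of such a scan; this only shows why frame-aware enumeration matters for ORFs below the usual annotation length cutoff.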
Huttin, Christine; Liebman, Michael
This paper discusses the application of an adaptive technology platform to evaluate clinical pathways and clinical strategies, and its application to early genetic testing in Europe. It results from a collaboration between Professor Christine Huttin, who created a technology startup called endepusresearchinc (www.endepusresearchinc-com) in Cambridge, USA, and Dr. Michael Liebman, Managing Director of Strategic Medicine, Inc. (www.strategicmedicine.com), positioned in the translational medicine space with technologies to model disease processes and experience in early genetic testing for breast cancer. Diagnostics was identified as a potential area for applying economic and business models using ENDEPUSresearchinc statistical methods, which combine implicit and explicit financial information for prospective payment systems in various types of health care financing systems. The new economics resulting from such platforms challenges the constrained environment for public and private laboratories and partnerships, the restrictions imposed by governmental agencies, and alliances between stakeholders such as the medical profession and laboratories: such early technologies are clearly identified as a key benefit to the system over time but a major challenge for short-term returns.
This article presents a Think Aloud Protocol study and conversation analysis of strategies employed by inexperienced translators who translated an extract of an audiovisual message from French into German. The students were recorded in order to externalize the translation process, and we observed and analysed these recordings. The author shows how the students used different comprehension and search strategies during the act of translation. The results can serve as hypotheses for the teaching of translation.
Huang, Zou-Qin; Pei, Jian
In recent years, translational medicine, which is characterized by advanced concepts and methods, has developed rapidly and plays a strategic role in the development of TCM acupuncture and moxibustion. Therefore, it is worth study by acupuncturists. Through the background, development, features and research model of translational medicine, the present situation and problems of TCM acupuncture research are analyzed. Several cases of translational Chinese medicine and acupuncture are listed in light of the concept of translational medicine. Studies and thoughts on translational acupuncture are expounded as well. It is thus suggested that, combined with the characteristics of acupuncture, the concept of translational medicine should be used to guide the clinical treatment and research of acupuncture, to foster researchers in translational medicine, and to establish related research teams.
Turner, Anne M; Brownstein, Megumu K; Cole, Kate; Karasz, Hilary; Kirchhoff, Katrin
Provide a detailed understanding of the information workflow processes related to translating health promotion materials for limited English proficiency individuals in order to inform the design of context-driven machine translation (MT) tools for public health (PH). We applied a cognitive work analysis framework to investigate the translation information workflow processes of two large health departments in Washington State. Researchers conducted interviews, performed a task analysis, and validated results with PH professionals to model translation workflow and identify functional requirements for a translation system for PH. The study resulted in a detailed description of work related to translation of PH materials, an information workflow diagram, and a description of attitudes towards MT technology. We identified a number of themes that hold design implications for incorporating MT in PH translation practice. A PH translation tool prototype was designed based on these findings. This study underscores the importance of understanding the work context and information workflow for which systems will be designed. Based on themes and translation information workflow processes, we identified key design guidelines for incorporating MT into PH translation work. Primary amongst these is that MT should be followed by human review for translations to be of high quality and for the technology to be adopted into practice. The time and costs of creating multilingual health promotion materials are barriers to translation. PH personnel were interested in MT's potential to improve access to low-cost translated PH materials, but expressed concerns about ensuring quality. We outline design considerations and a potential machine translation tool to best fit MT systems into PH practice. Copyright © 2014 Elsevier Inc. All rights reserved.
Diaz Acosta, B.
The Microsoft Biology Initiative (MBI) is an effort in Microsoft Research to bring new technology and tools to the area of bioinformatics and biology. This initiative comprises two primary components, the Microsoft Biology Foundation (MBF) and the Microsoft Biology Tools (MBT). MBF is a language-neutral bioinformatics toolkit built as an extension to the Microsoft .NET Framework, initially aimed at the area of genomics research. Currently, it implements a range of parsers for common bioinformatics file formats; a range of algorithms for manipulating DNA, RNA, and protein sequences; and a set of connectors to biological web services such as NCBI BLAST. MBF is available under an open source license, and executables, source code, demo applications, documentation and training materials are freely downloadable from http://research.microsoft.com/bio. MBT is a collection of tools that enable biology and bioinformatics researchers to be more productive in making scientific discoveries.
Shanahan, Hugh P; Owen, Anne M; Harrison, Andrew P
We discuss the applicability of the Microsoft cloud computing platform, Azure, for bioinformatics. We focus on the usability of the resource rather than its performance. We provide an example of how R can be used on Azure to analyse a large amount of microarray expression data deposited at the public database ArrayExpress. We provide a walk through to demonstrate explicitly how Azure can be used to perform these analyses in Appendix S1 and we offer a comparison with a local computation. We note that the use of the Platform as a Service (PaaS) offering of Azure can represent a steep learning curve for bioinformatics developers who will usually have a Linux and scripting language background. On the other hand, the presence of an additional set of libraries makes it easier to deploy software in a parallel (scalable) fashion and explicitly manage such a production run with only a few hundred lines of code, most of which can be incorporated from a template. We propose that this environment is best suited for running stable bioinformatics software by users not involved with its development.
Cartilage defects can impair the most elementary daily activities and, if not properly treated, can lead to the complete loss of articular function. The limitations of standard treatments for cartilage repair have triggered the development of stem cell-based therapies. In this scenario, the development of efficient cell differentiation protocols and the design of proper biomaterial-based supports to deliver cells to the injury site need to be addressed through basic and applied research to fully exploit the potential of stem cells. Here, we discuss the use of microfluidics and bioprinting approaches for the translation of stem cell-based therapy for cartilage repair in clinics. In particular, we will focus on the optimization of hydrogel-based materials to mimic the articular cartilage triggered by their use as bioinks in 3D bioprinting applications, on the screening of biochemical and biophysical factors through microfluidic devices to enhance stem cell chondrogenesis, and on the use of microfluidic technology to generate implantable constructs with a complex geometry. Finally, we will describe some new bioprinting applications that pave the way to the clinical use of stem cell-based therapies, such as scaffold-free bioprinting and the development of a 3D handheld device for the in situ repair of cartilage defects.
Ranganathan, Shoba; Hsu, Wen-Lian; Yang, Ueng-Cheng; Tan, Tin Wee
The 2008 annual conference of the Asia Pacific Bioinformatics Network (APBioNet), Asia's oldest bioinformatics organisation set up in 1998, was organized as the 7th International Conference on Bioinformatics (InCoB), jointly with the Bioinformatics and Systems Biology in Taiwan (BIT 2008) Conference, Oct. 20-23, 2008 at Taipei, Taiwan. Besides bringing together scientists from the field of bioinformatics in this region, InCoB is actively involving researchers from the area of systems biology,...
Sinara de Oliveira Branco
This paper presents an analysis of the application of activities using films and the intersemiotic category of translation as a tool for practising the abilities of listening, speaking, reading and writing. The theoretical framework is based on the functionalist approach to translation, translation categories, the theory of translation and culture, and the theory of translation and cinema. Four activities were created for using the English language with beginner students of the Modern Languages Course (Letras-Inglês) of the Federal University of Campina Grande (UFCG). The activities were based on the films Finding Neverland and Tim Burton's Alice in Wonderland. A questionnaire was also created to assess the students' opinions concerning the validity of such activities for the study of the English language. Findings have shown that the use of activities involving the intersemiotic category of translation and multimodality, when applied together with specific theories, helps the study of the English language, promoting more participation and interaction between teacher and students.
Ideology plays an important role in our lives. Translation and language are always sites of ideological encounters, and translation is shaped by the dominant ideology of a society. If translation theories and ideology are put under scrutiny, evidence of the influence of cultural conflicts can be found in them. This paper first investigates the analytical framework proposed by Hatim & Mason (1990, 1991, and 1997) for studying and analyzing issues of genre, discourse and text, and then uses it to study the issue of ideology and its facets in translations. The focus of this study is the application of Hatim & Mason's analytical framework to J. D. Salinger's "The Catcher in the Rye" and its two Persian translations by Ahmad Karimi and Mohammad Najafi. From the ideological standpoint that affects the process of translation, the same work of literature is probed in depth. It is worth mentioning that in the present study, the differences between the source text and the target texts have been studied separately in terms of lexical choices, nominalizations, and discoursal constraints.
van Kampen, Antoine H C; Moerland, Perry D
Systems medicine promotes a range of approaches and strategies to study human health and disease at a systems level with the aim of improving the overall well-being of (healthy) individuals, and preventing, diagnosing, or curing disease. In this chapter we discuss how bioinformatics critically contributes to systems medicine. First, we explain the role of bioinformatics in the management and analysis of data. In particular we show the importance of publicly available biological and clinical repositories to support systems medicine studies. Second, we discuss how the integration and analysis of multiple types of omics data through integrative bioinformatics may facilitate the determination of more predictive and robust disease signatures, lead to a better understanding of (patho)physiological molecular mechanisms, and facilitate personalized medicine. Third, we focus on network analysis and discuss how gene networks can be constructed from omics data and how these networks can be decomposed into smaller modules. We discuss how the resulting modules can be used to generate experimentally testable hypotheses, provide insight into disease mechanisms, and lead to predictive models. Throughout, we provide several examples demonstrating how bioinformatics contributes to systems medicine and discuss future challenges in bioinformatics that need to be addressed to enable the advancement of systems medicine.
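As a toy illustration of the network-and-module idea described above (not any specific tool from the chapter), genes can be linked whenever their expression profiles are strongly correlated, and modules read off as connected components of the resulting network. The correlation threshold and the union-find decomposition are illustrative choices, and profiles are assumed non-constant:

```python
from itertools import combinations
from math import sqrt

def pearson(x, y):
    """Pearson correlation of two equal-length, non-constant profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def coexpression_modules(expr, threshold=0.9):
    """expr maps gene name -> expression profile (list of floats).

    Genes whose |correlation| exceeds `threshold` are linked; connected
    components of the network (found via union-find) are returned as modules.
    """
    parent = {g: g for g in expr}

    def find(g):  # path-halving union-find
        while parent[g] != g:
            parent[g] = parent[parent[g]]
            g = parent[g]
        return g

    for a, b in combinations(expr, 2):
        if abs(pearson(expr[a], expr[b])) > threshold:
            parent[find(a)] = find(b)

    groups = {}
    for g in expr:
        groups.setdefault(find(g), set()).add(g)
    return list(groups.values())
```

Each returned module is the kind of unit the chapter describes: a candidate co-regulated gene set that can be tested experimentally or matched against known pathways.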
Hamada, Michiaki; Kiryu, Hisanori; Iwasaki, Wataru; Asai, Kiyoshi
In a number of estimation problems in bioinformatics, accuracy measures for the target problem are usually given, and it is important to design estimators that are suited to those accuracy measures. However, there is often a discrepancy between the estimator employed and the accuracy measure given for the problem. In this study, we introduce a general class of efficient estimators for estimation problems on high-dimensional binary spaces, which represent many fundamental problems in bioinformatics. Theoretical analysis reveals that the proposed estimators generally fit commonly used accuracy measures (e.g. sensitivity, PPV, MCC and F-score), can be computed efficiently in many cases, and cover a wide range of problems in bioinformatics from the viewpoint of the principle of maximum expected accuracy (MEA). It is also shown that some important algorithms in bioinformatics can be interpreted in a unified manner. The concept presented in this paper not only gives a useful framework for designing MEA-based estimators but is also highly extendable and sheds new light on many problems in bioinformatics. PMID:21365017
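The simplest pointwise case of such an MEA-style estimator can be sketched as follows: if each position i of a binary prediction has an independent marginal probability p_i of being 1, and the gain rewards a true positive gamma times as much as a true negative, expected gain is maximised by predicting 1 exactly when p_i > 1/(gamma + 1). This is a simplified sketch of the idea, not the paper's full dynamic-programming estimators:

```python
def gamma_centroid(probs, gamma=2.0):
    """Pointwise maximum-expected-gain estimator on a binary space.

    Choosing y_i = 1 contributes gamma * p_i to the expected gain (weighted
    true positives); choosing y_i = 0 contributes 1 - p_i (true negatives).
    Gain is therefore maximised position-wise by thresholding the marginal
    probability at 1 / (gamma + 1): larger gamma trades precision for
    sensitivity, which is how the estimator is tuned to an accuracy measure.
    """
    threshold = 1.0 / (gamma + 1.0)
    return [1 if p > threshold else 0 for p in probs]
```

In real problems (e.g. RNA secondary structure or alignment) the marginals come from partition-function computations and the maximisation carries structural constraints, but the threshold logic above is the core of the fit between estimator and accuracy measure.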
Weber, Tilmann; Kim, Hyun Uk
In this context, this review gives a summary of tools and databases that are currently available to mine, identify and characterize natural product biosynthesis pathways and their producers based on 'omics data. A web portal called Secondary Metabolite Bioinformatics Portal (SMBP, http://www.secondarymetabolites.org) is introduced to provide a one-stop catalog of and links to these bioinformatics resources. In addition, an outlook is presented on how the existing tools, and those to be developed, will influence synthetic biology approaches in the natural products field.
Background: Recent advances in experimental and computational technologies have fueled the development of many sophisticated bioinformatics programs. The correctness of such programs is crucial, as incorrectly computed results may lead to wrong biological conclusions or misguide downstream experimentation. Common software testing procedures involve executing the target program with a set of test inputs and then verifying the correctness of the test outputs. However, due to the complexity of many bioinformatics programs, it is often difficult to verify the correctness of the test outputs. Therefore our ability to perform systematic software testing is greatly hindered. Results: We propose to use a novel software testing technique, metamorphic testing (MT), to test a range of bioinformatics programs. Instead of requiring a mechanism to verify whether an individual test output is correct, the MT technique verifies whether a pair of test outputs conform to a set of domain-specific properties, called metamorphic relations (MRs), thus greatly increasing the number and variety of test cases that can be applied. To demonstrate how MT is used in practice, we applied MT to test two open-source bioinformatics programs, namely GNLab and SeqMap. In particular we show that MT is simple to implement, and is effective in detecting faults in a real-life program and some artificially fault-seeded programs. Further, we discuss how MT can be applied to test programs from various domains of bioinformatics. Conclusion: This paper describes the application of a simple, effective and automated technique to systematically test a range of bioinformatics programs. We show how MT can be implemented in practice through two real-life case studies. Since many bioinformatics programs, particularly those for large-scale simulation and data analysis, are hard to test systematically, their developers may benefit from using MT as part of the testing strategy. Therefore our work
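A minimal illustration of the metamorphic testing idea described above, using a toy GC-content function as a stand-in for the program under test (the function and both relations are invented for this example; the paper's actual case studies are GNLab and SeqMap):

```python
import random

def gc_content(seq):
    """Program under test: fraction of G/C bases in a DNA string."""
    return (seq.count("G") + seq.count("C")) / len(seq)

# We cannot easily state the "correct" output for an arbitrary sequence
# (the oracle problem), but we know how the output must behave under
# controlled transformations of the input: these are metamorphic relations.

def mr_reverse(seq):
    """MR1: reversing the sequence must not change GC content."""
    return gc_content(seq) == gc_content(seq[::-1])

def mr_duplicate(seq):
    """MR2: concatenating a sequence with itself must not change GC content."""
    return abs(gc_content(seq) - gc_content(seq + seq)) < 1e-12

# Any randomly generated input yields a valid test case, with no oracle needed.
random.seed(0)
for _ in range(100):
    seq = "".join(random.choice("ACGT") for _ in range(50))
    assert mr_reverse(seq) and mr_duplicate(seq)
```

A seeded fault (say, dividing by `len(seq) + 1`) would violate MR2 on most inputs, which is exactly how MT detects defects without knowing any individual correct output.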
Matthias S. Klein
Type 2 diabetes (T2D) and its comorbidities have reached epidemic proportions, with more than half a billion cases expected by 2030. Metabolomics is a fairly new approach for studying metabolic changes connected to disease development and progression and for finding predictive biomarkers to enable early interventions, which are most effective against T2D and its comorbidities. In metabolomics, the abundance of a comprehensive set of small biomolecules (metabolites) is measured, thus giving insight into disease-related metabolic alterations. This review gives an overview of basic metabolomics methods and highlights current metabolomics research successes in the prediction and diagnosis of T2D. We summarize key metabolites that change in response to T2D. Despite large variations in predictive biomarkers, many studies have replicated elevated plasma levels of branched-chain amino acids and their derivatives, aromatic amino acids and α-hydroxybutyrate ahead of T2D manifestation. In contrast, glycine levels and lysophosphatidylcholine C18:2 are depressed both in predictive studies and with overt disease. The use of metabolomics for predicting T2D comorbidities is gaining momentum, as are approaches for translating basic metabolomics research into clinical applications. As a result, metabolomics has the potential to enable informed decision-making in the realm of personalized medicine.
Sleeboom-Faulkner, Margaret; Chekar, Choon Key; Faulkner, Alex; Heitmeyer, Carolyn; Marouda, Marina; Rosemann, Achim; Chaisinthop, Nattaka; Chang, Hung-Chieh Jessica; Ely, Adrian; Kato, Masae; Patra, Prasanna K; Su, Yeyang; Sui, Suli; Suzuki, Wakana; Zhang, Xinqing
A very large grey area exists between translational stem cell research and applications that comply with the ideals of randomised control trials and good laboratory and clinical practice and what is often referred to as snake-oil trade. We identify a discrepancy between international research and ethics regulation and the ways in which regulatory instruments in the stem cell field are developed in practice. We examine this discrepancy using the notion of 'national home-keeping', referring to the way governments articulate international standards and regulation with conflicting demands on local players at home. Identifying particular dimensions of regulatory tools - authority, permissions, space and acceleration - as crucial to national home-keeping in Asia, Europe and the USA, we show how local regulation works to enable development of the field, notwithstanding international (i.e. principally 'western') regulation. Triangulating regulation with empirical data and archival research between 2012 and 2015 has helped us to shed light on how countries and organisations adapt and resist internationally dominant regulation through the manipulation of regulatory tools (contingent upon country size, the state's ability to accumulate resources, healthcare demands, established traditions of scientific governance, and economic and scientific ambitions). Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.
Yoon, Jung Won; Hwang, Yoon Kwon [Gyeongsang National University, Jinju (Korea, Republic of); Ryu, Je Ha [Gwangju Institute of Science and Technology, Gwangju (Korea, Republic of)
This paper proposes an optimum design method that satisfies the desired orientation workspace at the boundary of the translation workspace while maximizing the mechanism isotropy for parallel manipulators. A simple genetic algorithm is used to obtain the optimal linkage parameters of a six-degree-of-freedom parallel manipulator that can be used as a haptic device. The objective function is composed of a desired spherical-shape translation workspace and a desired orientation workspace located on the boundaries of the desired translation workspace, along with a global conditioning index based on a homogeneous Jacobian matrix. The objective function was optimized to satisfy the desired orientation workspace at the boundary positions as translated from a neutral position of the mechanism. An optimization result with the desired translation and orientation workspaces for a haptic device was obtained to show the effectiveness of the suggested scheme, and the kinematic performance of the proposed model was compared with that of a preexisting base model.
Over the past years, deep sequencing experiments have opened novel doors to reconstructing viral populations in a high-throughput and cost-effective manner. A substantial number of studies have now been performed which employ Next Generation Sequencing (NGS) techniques either to analyze known viruses by means of a reference-guided approach or to discover novel viruses using a de novo-based strategy. Taking advantage of the well-known Cymbidium ringspot virus, we carried out a comparison of different bioinformatics tools for reconstructing the viral genome based on 21-27 nt short RNA (sRNA) sequencing, with the aim of identifying the most efficient pipeline. The same approach was applied to a population of plants constituting an ancient variety of Cicer arietinum with red seeds. Among the discovered viruses, we describe the presence of a Tobamovirus related to Tomato mottle mosaic virus (NC_022230), which had not previously been observed on C. arietinum nor reported in Europe, and a viroid related to Hop stunt viroid (NC_001351.1), never reported in chickpea. Notably, a reference-guided approach appeared the most efficient in this kind of investigation, whereas de novo assembly did not reach an appreciable coverage, although the most prominent viral species could still be identified. Advantages and limitations of viral metagenomics analysis using sRNAs are discussed.
El-Assaad, Atlal; Dawy, Zaher; Nemer, Georges; Kobeissy, Firas
The crucial biological role of proteases has become apparent with the development of the degradomics discipline, which is concerned with determining the proteases and substrates whose interaction results in breakdown products (BDPs) that can be utilized as putative biomarkers of different biological and clinical significance. In the field of cancer biology, matrix metalloproteinases (MMPs) have been shown to generate protein BDPs that are indicative of malignant growth, while in the field of neural injury, the proteases calpain-2 and caspase-3 generate BDP fragments that are indicative of different neural cell death mechanisms in different injury scenarios. Advanced proteomic techniques have shown remarkable progress in identifying these BDPs experimentally. In this work, we present a bioinformatics-based prediction method that identifies protease-associated BDPs with high precision and efficiency. The method utilizes state-of-the-art sequence matching and alignment algorithms. It starts by locating occurrences of a consensus sequence and its variants in any set of protein substrates, generating all fragments resulting from cleavage. The algorithm runs in O(mn) space and O(Nmn) time, where N, m, and n are the number of protein sequences, the length of the consensus sequence, and the length per protein sequence, respectively. Finally, the proposed methodology is validated against the βII-spectrin protein, a validated brain injury biomarker.
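The fragment-generation step described above can be sketched as follows: locate occurrences of a cleavage consensus (expressed as a regular expression, so positional variants can be stated directly) and emit the resulting fragments. The motif syntax and the cut-after-motif convention are assumptions for illustration, not the authors' exact algorithm:

```python
import re

def cleavage_fragments(protein, consensus):
    """Cut a protein after every non-overlapping occurrence of a consensus
    motif and return the resulting breakdown products (BDPs).

    `consensus` is a regular expression, so IUPAC-style variants such as
    "K[RK]" (Lys followed by Arg or Lys) can be expressed directly. A single
    left-to-right scan per sequence matches the O(mn)-per-protein cost noted
    in the abstract.
    """
    cut_sites = [m.end() for m in re.finditer(consensus, protein)]
    bounds = [0] + cut_sites + [len(protein)]
    return [protein[a:b] for a, b in zip(bounds, bounds[1:]) if a < b]
```

Each fragment produced this way is a candidate BDP whose mass or sequence could then be matched against proteomic observations; a substrate without the motif is returned intact.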
Maloney, Mark; Parker, Jeffrey; LeBlanc, Mark; Woodard, Craig T.; Glackin, Mary; Hanrahan, Michael
Recent advances involving high-throughput techniques for data generation and analysis have made familiarity with basic bioinformatics concepts and programs a necessity in the biological sciences. Undergraduate students increasingly need training in methods related to finding and retrieving information stored in vast databases. The rapid rise of…
This book chapter describes the current Big Data problem in Bioinformatics and the resulting issues with performing reproducible computational research. The core of the chapter provides guidelines and summaries of current tools/techniques that a noncomputational researcher would need to learn to pe...
Vesth, Tammi Camilla; Rasmussen, Jane Lind Nybo; Theobald, Sebastian
with the Joint Genome Institute. The Aspergillus Mine is not intended as a genomic data-sharing service but instead focuses on creating an environment where the results of bioinformatic analyses are made available for inspection. The data and code are public upon request and figures can be obtained directly from...
Vaez Barzani, Ahmad
In this thesis we present an overview of bioinformatics-based approaches for genomic association mapping, with emphasis on human quantitative traits and their contribution to complex diseases. We aim to provide a comprehensive walk-through of the classic steps of genomic association mapping
Cazals, Frédéric; Dreyfus, Tom
Software in structural bioinformatics has mainly been application driven. To favor practitioners seeking off-the-shelf applications, but also developers seeking advanced building blocks to develop novel applications, we undertook the design of the Structural Bioinformatics Library (SBL, http://sbl.inria.fr), a generic C++/Python cross-platform software library targeting complex problems in structural bioinformatics. Its tenet is a modular design offering a rich and versatile framework allowing the development of novel applications requiring well-specified complex operations, without compromising robustness and performance. The SBL involves four software components (1-4 below). For end-users, the SBL provides ready-to-use, state-of-the-art (1) applications to handle molecular models defined by unions of balls, to deal with molecular flexibility, and to model macro-molecular assemblies. These applications can also be combined to tackle integrated analysis problems. For developers, the SBL provides a broad C++ toolbox with modular design, involving core (2) algorithms, (3) biophysical models and (4) modules, the latter being especially suited to develop novel applications. The SBL comes with thorough documentation consisting of user and reference manuals, and a bugzilla platform to handle community feedback. The SBL is available from http://sbl.inria.fr. Frederic.Cazals@inria.fr. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: firstname.lastname@example.org
Chan, Landon L; Jiang, Peiyong
The discovery of cell-free DNA molecules in plasma has opened up numerous opportunities in noninvasive diagnosis. Cell-free DNA molecules have become increasingly recognized as promising biomarkers for detection and management of many diseases. The advent of next generation sequencing has provided unprecedented opportunities to scrutinize the characteristics of cell-free DNA molecules in plasma in a genome-wide fashion and at single-base resolution. Consequently, clinical applications of circulating cell-free DNA analysis have not only revolutionized noninvasive prenatal diagnosis but also facilitated cancer detection and monitoring toward an era of blood-based personalized medicine. With the remarkably increasing throughput and lowering cost of next generation sequencing, bioinformatics analysis becomes increasingly demanding to understand the large amount of data generated by these sequencing platforms. In this Review, we highlight the major bioinformatics algorithms involved in the analysis of cell-free DNA sequencing data. Firstly, we briefly describe the biological properties of these molecules and provide an overview of the general bioinformatics approach for the analysis of cell-free DNA. Then, we discuss the specific upstream bioinformatics considerations concerning the analysis of sequencing data of circulating cell-free DNA, followed by further detailed elaboration on each key clinical situation in noninvasive prenatal diagnosis and cancer management where downstream bioinformatics analysis is heavily involved. We also discuss bioinformatics analysis as well as clinical applications of the newly developed massively parallel bisulfite sequencing of cell-free DNA. Finally, we offer our perspectives on the future development of bioinformatics in noninvasive diagnosis. Copyright © 2015 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Adilia Maria Pires Sciarra
ABSTRACT INTRODUCTION: In a world in which global communication is becoming ever more important, English is increasingly positioned as the pre-eminent international language; English as a Lingua Franca refers to the use of English as a medium of communication between peoples of different languages. It is also important to highlight the positive advances in health communication provided by technology. OBJECTIVE: To present an overview of some web-based language-translation tools and to point out advantages and disadvantages, especially of using Google Translate in Medicine and the Health Sciences. METHODS: A bibliographical survey was performed to provide an overview of the usefulness of online translators for written and spoken language. RESULTS: Although the question requires further study to be answered with certainty, this survey identified several advantages and disadvantages of online translation tools. CONCLUSION: Since Medicine and the Health Sciences represent a substantial part of human scientific knowledge to be spread worldwide, the technological tools available for communication should be used to overcome written and spoken language barriers, but with some caution depending on the context of their application.
Wæraas, Arild; Nielsen, Jeppe
Translation theory has proved to be a versatile analytical lens used by scholars working from different traditions. On the basis of a systematic literature review, this study adds to our understanding of the 'translations' of translation theory by identifying the distinguishing features of the most common theoretical approaches to translation within the organization and management discipline: actor-network theory, knowledge-based theory, and Scandinavian institutionalism. Although each of these approaches has already borne much fruit in research, the literature is diverse and somewhat fragmented, but also overlapping. We discuss the ways in which the three versions of translation theory may be combined and enrich each other so as to inform future research, thereby offering a more complete understanding of translation in and across organizational settings.
…subsampling method proposed by Kishore Papineni (personal communication), which reduced the amount of training data that the system must process without… [Figure 4.7: a full segmentation lattice (WFST); figure content not recoverable from the extraction.] …will also benefit from the propagation of ambiguity. That is, the translation interface should communicate ambiguity about the translations to the…
The overall aim of "EURASIP Journal on Bioinformatics and Systems Biology" is to publish research results related to signal processing and bioinformatics theories and techniques relevant to a wide...
Maass, Christian; Stokes, Cynthia L; Griffith, Linda G; Cirit, Murat
Microphysiological systems (MPS) provide relevant physiological environments in vitro for studies of pharmacokinetics, pharmacodynamics and biological mechanisms for translational research. Designing multi-MPS platforms is essential to study multi-organ systems. Typical design approaches, including direct and allometric scaling, scale each MPS individually and are based on relative sizes not function. This study's aim was to develop a new multi-functional scaling approach for integrated multi-MPS platform design for specific applications. We developed an optimization approach using mechanistic modeling and specification of an objective that considered multiple MPS functions, e.g., drug absorption and metabolism, simultaneously to identify system design parameters. This approach informed the design of two hypothetical multi-MPS platforms consisting of gut and liver (multi-MPS platform I) and gut, liver and kidney (multi-MPS platform II) to recapitulate in vivo drug exposures in vitro. This allows establishment of clinically relevant drug exposure-response relationships, a prerequisite for efficacy and toxicology assessment. Design parameters resulting from multi-functional scaling were compared to designs based on direct and allometric scaling. Human plasma time-concentration profiles of eight drugs were used to inform the designs, and profiles of an additional five drugs were calculated to test the designed platforms on an independent set. Multi-functional scaling yielded exposure times in good agreement with in vivo data, while direct and allometric scaling approaches resulted in short exposure durations. Multi-functional scaling allows appropriate scaling from in vivo to in vitro of multi-MPS platforms, and in the cases studied provides designs that better mimic in vivo exposures than standard MPS scaling methods.
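The multi-functional scaling idea can be caricatured with a toy example: choose an MPS design parameter (here a hypothetical liver-compartment volume) that minimizes the mismatch between a simulated exposure metric and an in vivo target. The one-compartment model and all numbers below are illustrative assumptions, not the authors' actual platform model:

```python
# Minimal sketch: pick a liver MPS volume so a linear one-compartment model
# reproduces a target in vivo AUC (drug exposure).
def simulate_auc(dose, clearance):
    """AUC = dose / CL for linear, one-compartment kinetics."""
    return dose / clearance

def objective(liver_volume, dose, target_auc, intrinsic_clearance_per_ml):
    cl = liver_volume * intrinsic_clearance_per_ml  # CL scales with tissue volume
    return (simulate_auc(dose, cl) - target_auc) ** 2

# Hypothetical numbers purely for illustration
dose, target_auc, cl_per_ml = 100.0, 50.0, 0.4  # mg, mg·h/L, (L/h)/mL tissue
# Grid search over candidate liver volumes from 0.1 to 10.0 mL
best = min((objective(v, dose, target_auc, cl_per_ml), v)
           for v in [x / 10 for x in range(1, 101)])
print(best[1])  # volume minimizing the exposure mismatch
```

A real multi-functional scaling problem would optimize several compartments and several functional objectives (absorption, metabolism, excretion) simultaneously, but the structure, a mechanistic model inside an objective function, is the same.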
Ouyang, Xiang-ying; Yang, Wen
Translational medicine is an evolving concept that encompasses the rapid translation of basic research for use in clinical disease diagnosis, prevention, and treatment, and finally in public health promotion. It follows the idea "from bench to bedside and back" and hence relies on cooperation between laboratory research and clinical care. Translation is a complex process that requires both research and non-research activities. During the past ten years, there has been intense interest in the development of new clinical procedures, therapeutic molecules, and prototypes based on the translational medicine concept, including in dentistry. Periodontitis is a globally prevalent inflammatory disease that causes the destruction of the tooth-supporting apparatus. Current methods to reconstitute lost periodontal structures have been shown to have limited and variable outcomes. Stem cell therapy can be used for periodontal regeneration, and it is one of the hot topics in translational regenerative medicine. In this article, recent advances and the current status of translational medicine in the field of stem cell therapy for periodontal regeneration are reviewed. However, a number of biological, technical and clinical hurdles must be overcome before stem cell therapy can be used in the clinic.
Cook, L. M.; Samaras, C.; Anderson, C.
Engineers generally use historical precipitation trends to inform assumptions and parameters for long-lived infrastructure designs. However, resilient design calls for the adjustment of current engineering practice to incorporate a range of future climate conditions that are likely to be different from the past. Despite the availability of future projections from downscaled climate models, there remains a considerable mismatch between climate model outputs and the inputs needed in the engineering community to incorporate climate resiliency. These factors include differences in temporal and spatial scales, model uncertainties, and a lack of criteria for selection of an ensemble of models. This research addresses the limitations of working with climate data by providing a framework for the use of publicly available downscaled climate projections to inform engineering resiliency. The framework consists of five steps: 1) selecting the data source based on the engineering application, 2) extracting the data at a specific location, 3) validating for performance against observed data, 4) post-processing for bias or scale, and 5) selecting the ensemble and calculating statistics. The framework is illustrated with an example application to extreme precipitation-frequency statistics, the 25-year daily precipitation depth, using four publicly available climate data sources: NARCCAP, USGS, Reclamation, and MACA. The attached figure presents the results for step 5 of the framework, analyzing how the 24H25Y depth changes when the model ensemble is culled based on model performance against observed data, for both post-processing techniques: bias correction and change factor. Culling the model ensemble increases both the mean and median values for all data sources, and reduces the range for the NARCCAP and MACA ensembles due to the elimination of poorer-performing models and, in some cases, those that predict a decrease in future 24H25Y precipitation volumes. This result is especially
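Step 5 of the framework, calculating return-period statistics from the selected ensemble, is often done with an extreme-value fit to annual maxima. The sketch below uses a method-of-moments Gumbel fit; the precipitation values are invented for illustration:

```python
import math
import statistics

def gumbel_return_level(annual_maxima, T):
    """Depth with return period T (years) via a method-of-moments Gumbel fit."""
    m = statistics.mean(annual_maxima)
    s = statistics.stdev(annual_maxima)
    beta = s * math.sqrt(6) / math.pi   # Gumbel scale parameter
    mu = m - 0.5772 * beta              # location; 0.5772 is Euler's constant
    # Return level: exceedance probability 1/T per year
    return mu - beta * math.log(-math.log(1 - 1 / T))

# Hypothetical annual maximum daily precipitation (mm) at one grid cell
maxima = [42, 55, 38, 61, 49, 71, 44, 58, 52, 66, 47, 39, 63, 50, 57]
depth_25y = gumbel_return_level(maxima, 25)  # the "24H25Y" depth
```

In practice this calculation would be repeated per climate model, before and after culling the ensemble, to see how the design depth shifts.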
Thomas, Aliki; Menon, Anita; Boruff, Jill; Rodriguez, Ana Maria; Ahmed, Sara
Use of theory is essential for advancing the science of knowledge translation (KT) and for increasing the likelihood that KT interventions will be successful in reducing existing research-practice gaps in health care. As a sociological theory of knowledge, social constructivist theory may be useful for informing the design and evaluation of KT interventions. As such, this scoping review explored the extent to which social constructivist theory has been applied in the KT literature for healthcare professionals. Searches were conducted in six databases: Ovid MEDLINE (1948 - May 16, 2011), Ovid EMBASE, CINAHL, ERIC, PsycInfo, and AMED. Inclusion criteria were: publications from all health professions, research methodologies, as well as conceptual and theoretical papers related to KT. To be included in the review, key words such as constructivism, social constructivism, or social constructivist theories had to be included within the title or abstract. Papers that discussed the use of social constructivist theories in the context of undergraduate learning in academic settings were excluded from the review. An analytical framework of quantitative (numerical) and thematic analysis was used to examine and combine study findings. Of the 514 articles screened, 35 papers published between 1992 and 2011 were deemed eligible and included in the review. This review indicated that use of social constructivist theory in the KT literature was limited and haphazard. The lack of justification for the use of theory continues to represent a shortcoming of the papers reviewed. Potential applications and relevance of social constructivist theory in KT in general and in the specific studies were not made explicit in most papers. For the acquisition, expression and application of knowledge in practice, there was emphasis on how the social constructivist theory supports clinicians in expressing this knowledge in their professional interactions. This scoping review was the first to examine
Feenstra, K. Anton; Abeln, Sanne
While many good textbooks are available on Protein Structure, Molecular Simulations, Thermodynamics and Bioinformatics methods in general, there is no good introductory level book for the field of Structural Bioinformatics. This book aims to give an introduction into Structural Bioinformatics, which
Spengler, Sylvia J.
There is a well-known story about the blind man examining the elephant: the part of the elephant examined determines his perception of the whole beast. Perhaps bioinformatics--the shotgun marriage between biology and mathematics, computer science, and engineering--is like an elephant that occupies a large chair in the scientific living room. Given the demand for and shortage of researchers with the computer skills to handle large volumes of biological data, where exactly does the bioinformatics elephant sit? There are probably many biologists who feel that a major product of this bioinformatics elephant is large piles of waste material. If you have tried to plow through Web sites and software packages in search of a specific tool for analyzing and collating large amounts of research data, you may well feel the same way. But there has been progress with major initiatives to develop more computing power, educate biologists about computers, increase funding, and set standards. For our purposes, bioinformatics is not simply a biologically inclined rehash of information theory (1) nor is it a hodgepodge of computer science techniques for building, updating, and accessing biological data. Rather bioinformatics incorporates both of these capabilities into a broad interdisciplinary science that involves both conceptual and practical tools for the understanding, generation, processing, and propagation of biological information. As such, bioinformatics is the sine qua non of 21st-century biology. Analyzing gene expression using cDNA microarrays immobilized on slides or other solid supports (gene chips) is set to revolutionize biology and medicine and, in so doing, generate vast quantities of data that have to be accurately interpreted (Fig. 1). As discussed at a meeting a few months ago (Microarray Algorithms and Statistical Analysis: Methods and Standards; Tahoe City, California; 9-12 November 1999), experiments with cDNA arrays must be subjected to quality control
Nelson Rex T
Background: Scientific data integration and computational service discovery are challenges for the bioinformatic community. This process is made more difficult by the separate and independent construction of biological databases, which makes the exchange of data between information resources difficult and labor intensive. A recently described semantic web protocol, the Simple Semantic Web Architecture and Protocol (SSWAP; pronounced "swap"), offers the ability to describe data and services in a semantically meaningful way. We report how three major information resources (Gramene, SoyBase and the Legume Information System [LIS]) used SSWAP to semantically describe selected data and web services. Methods: We selected high-priority Quantitative Trait Locus (QTL), genomic mapping, trait, phenotypic, and sequence data and associated services such as BLAST for publication, data retrieval, and service invocation via semantic web services. Data and services were mapped to concepts and categories as implemented in legacy and de novo community ontologies. We used SSWAP to express these offerings in OWL Web Ontology Language (OWL), Resource Description Framework (RDF) and eXtensible Markup Language (XML) documents, which are appropriate for their semantic discovery and retrieval. We implemented SSWAP services to respond to web queries and return data. These services are registered with the SSWAP Discovery Server and are available for semantic discovery at http://sswap.info. Results: A total of ten services delivering QTL information from Gramene were created. From SoyBase, we created six services delivering information about soybean QTLs, and seven services delivering genetic locus information. For LIS we constructed three services, two of which allow the retrieval of DNA and RNA FASTA sequences, with the third service providing nucleic acid sequence comparison capability (BLAST). Conclusions: The need for semantic integration technologies has preceded
Nelson, Rex T; Avraham, Shulamit; Shoemaker, Randy C; May, Gregory D; Ware, Doreen; Gessler, Damian Dg
Scientific data integration and computational service discovery are challenges for the bioinformatic community. This process is made more difficult by the separate and independent construction of biological databases, which makes the exchange of data between information resources difficult and labor intensive. A recently described semantic web protocol, the Simple Semantic Web Architecture and Protocol (SSWAP; pronounced "swap") offers the ability to describe data and services in a semantically meaningful way. We report how three major information resources (Gramene, SoyBase and the Legume Information System [LIS]) used SSWAP to semantically describe selected data and web services. We selected high-priority Quantitative Trait Locus (QTL), genomic mapping, trait, phenotypic, and sequence data and associated services such as BLAST for publication, data retrieval, and service invocation via semantic web services. Data and services were mapped to concepts and categories as implemented in legacy and de novo community ontologies. We used SSWAP to express these offerings in OWL Web Ontology Language (OWL), Resource Description Framework (RDF) and eXtensible Markup Language (XML) documents, which are appropriate for their semantic discovery and retrieval. We implemented SSWAP services to respond to web queries and return data. These services are registered with the SSWAP Discovery Server and are available for semantic discovery at http://sswap.info. A total of ten services delivering QTL information from Gramene were created. From SoyBase, we created six services delivering information about soybean QTLs, and seven services delivering genetic locus information. For LIS we constructed three services, two of which allow the retrieval of DNA and RNA FASTA sequences with the third service providing nucleic acid sequence comparison capability (BLAST). The need for semantic integration technologies has preceded available solutions. We report the feasibility of mapping high
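To give a flavor of what a semantic service description looks like, the sketch below hand-assembles a Turtle (RDF) snippet for a hypothetical QTL-lookup service. The `ex:` vocabulary and all term names are invented for illustration and are not SSWAP's actual ontology:

```python
# Hedged sketch: composing a Turtle (RDF) description of a hypothetical
# QTL-lookup service, in the spirit of SSWAP's OWL/RDF service descriptions.
def turtle_service(name, inputs, outputs):
    """Build a minimal Turtle document describing a service's I/O terms."""
    lines = [
        "@prefix ex: <http://example.org/sswap-sketch#> .",
        f"ex:{name} a ex:WebService ;",
    ]
    lines += [f"    ex:expectsInput ex:{i} ;" for i in inputs]
    lines += [f"    ex:producesOutput ex:{o} ;" for o in outputs]
    # Turtle statements end with '.', intermediate properties with ';'
    lines[-1] = lines[-1].rstrip(" ;") + " ."
    return "\n".join(lines)

doc = turtle_service("qtlLookup", ["TraitName"], ["QTLRecord", "MapPosition"])
print(doc)
```

A discovery server can then match such descriptions against a query ontology, which is what makes the services findable "semantically" rather than by name alone.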
Gront, Dominik; Kolinski, Andrzej
In this Note we present a new software library for structural bioinformatics. The library contains programs computing sequence- and profile-based alignments and a variety of structural calculations, with user-friendly handling of various data formats. The software organization is very flexible. Algorithms are written in the Java language and may be used by Java programs. Moreover, the modules can be accessed from Jython (a Python scripting language implemented in Java) scripts. Finally, the new version of BioShell delivers several utility programs that can do typical bioinformatics tasks from the command line. Availability: The software is available for download free of charge from its website: http://bioshell.chem.uw.edu.pl. The website also provides numerous examples, code snippets and API documentation.
Schweighofer, Karl; Pohorille, Andrew
Building on an existing prototype, we have fielded a facility with bioinformatics technologies that will help NASA meet its unique requirements for biological research. This facility consists of a cluster of computers capable of performing computationally intensive tasks, software tools, databases and knowledge management systems. Novel computational technologies for analyzing and integrating new biological data and already existing knowledge have been developed. With continued development and support, the facility will fulfill NASA's strategic bioinformatics needs in astrobiology and space exploration. As a demonstration of these capabilities, we present a detailed analysis of how spaceflight factors impact gene expression in the liver and kidney of mice flown aboard shuttle flight STS-108. We have found that many genes involved in signal transduction, cell cycle, and development respond to changes in microgravity, but that most metabolic pathways appear unchanged.
Kumar, Anand; Rodrigues, Jean M
International cross-border private hospital chains need to apply the standards for foreign currency translation in order to consolidate the balance sheet and income statements. This not only exposes such chains to exchange rate fluctuations in different ways, but also creates added requirements for enterprise-level IT systems especially when they produce parameters which are used to measure the financial and operational performance of the foreign subsidiary or the parent hospital. Such systems would need to come to terms with the complexities involved in such currency-related translations in order to provide the correct data for performance benchmarking.
Hettne, K.M.; Kleinjans, J.; Stierum, R.H.; Boorsma, A.; Kors, J.A.
This chapter concerns the application of bioinformatics methods to the analysis of toxicogenomics data. The chapter starts with an introduction covering how bioinformatics has been applied in toxicogenomics data analysis, and continues with a description of the foundations of a specific
Likic, Vladimir A.
This article describes the experience of teaching structural bioinformatics to third year undergraduate students in a subject titled "Biomolecular Structure and Bioinformatics." Students were introduced to computer programming and used this knowledge in a practical application as an alternative to the well established Internet bioinformatics…
Surangi W. Punyasena
Recent advances in microscopy, imaging, and data analyses have permitted both the greater application of quantitative methods and the collection of large data sets that can be used to investigate plant morphology. This special issue, the first for Applications in Plant Sciences, presents a collection of papers highlighting recent methods in the quantitative study of plant form. These emerging biometric and bioinformatic approaches to plant sciences are critical for better understanding how morphology relates to ecology, physiology, genotype, and evolutionary and phylogenetic history. From microscopic pollen grains and charcoal particles, to macroscopic leaves and whole root systems, the methods presented include automated classification and identification, geometric morphometrics, and skeleton networks, as well as tests of the limits of human assessment. All demonstrate a clear need for these computational and morphometric approaches in order to increase the consistency, objectivity, and throughput of plant morphological studies.
Tao, Ying; Liu, Yang; Friedman, Carol
Information visualization techniques, which take advantage of the bandwidth of human vision, are powerful tools for organizing and analyzing a large amount of data. In the postgenomic era, information visualization tools are indispensable for biomedical research. This paper aims to present an overview of current applications of information visualization techniques in bioinformatics for visualizing different types of biological data, such as from genomics, proteomics, expression profiling and structural studies. Finally, we discuss the challenges of information visualization in bioinformatics related to dealing with more complex biological information in the emerging fields of systems biology and systems medicine. PMID:20976032
Velloso, Henrique; Vialle, Ricardo A; Ortega, J Miguel
Bioinformaticians face a range of difficulties getting locally installed tools running and producing results; they would greatly benefit from a system that could centralize most of the tools behind an easy interface for input and output. Web services, due to their universal nature and widely known interface, are a very good option to achieve this goal. Bioinformatics Open Web Services (BOWS) is a system based on generic web services produced to allow programmatic access to applications running on high-performance computing (HPC) clusters. BOWS mediates access to registered tools by providing front-end and back-end web services. Programmers can install applications on HPC clusters in any programming language and use the back-end service to check for new jobs and their parameters, and then to send the results to BOWS. Programs running on simple computers consume the BOWS front-end service to submit new processes and read results. BOWS compiles Java clients, which encapsulate the front-end web service requests, and automatically creates a web page that lists the registered applications and clients. BOWS-registered applications can be accessed from virtually any programming language through web services, or using the standard Java clients. The back-end can run on HPC clusters, allowing bioinformaticians to remotely run high-processing-demand applications directly from their machines.
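The front-end/back-end division of labor described above can be sketched with the web-service transport replaced by an in-memory broker; the class and method names below are illustrative, not the BOWS API:

```python
# Sketch of the BOWS interaction pattern: a front-end where clients submit
# jobs and read results, and a back-end the HPC worker polls for work.
# The real system exchanges these messages over web services.
import queue

class JobBroker:
    def __init__(self):
        self.pending = queue.Queue()
        self.results = {}

    # Front-end service: clients submit jobs and later read results
    def submit(self, job_id, tool, params):
        self.pending.put((job_id, tool, params))

    def get_result(self, job_id):
        return self.results.get(job_id)

    # Back-end service: the cluster-side worker polls for work, posts output
    def next_job(self):
        return None if self.pending.empty() else self.pending.get()

    def post_result(self, job_id, output):
        self.results[job_id] = output

broker = JobBroker()
broker.submit("job1", "blast", {"query": "ATGC"})
job = broker.next_job()                # worker picks up the job...
broker.post_result(job[0], "hits: 3")  # ...and posts its output
print(broker.get_result("job1"))       # front-end reads the result
```

The decoupling is the point: the worker only ever polls outward, so it can live behind a cluster firewall while clients interact solely with the broker.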
Rasmussen, Morten; Thaysen-Andersen, Morten; Højrup, Peter
We have developed "GLYCANthrope" (CROSSWORKS for glycans): a bioinformatics tool that assists in identifying N-linked glycosylated peptides as well as their glycan moieties from MS2 data of enzymatically digested glycoproteins. The program runs either as a stand-alone application or as a plug...
Zou, Quan; Li, Xu-Bin; Jiang, Wen-Rui; Lin, Zi-Yu; Li, Gui-Lin; Chen, Ke
Bioinformatics is challenged by the fact that traditional analysis tools have difficulty processing large-scale data from high-throughput sequencing. The open source Apache Hadoop project, which adopts the MapReduce framework and a distributed file system, has recently given bioinformatics researchers an opportunity to achieve scalable, efficient and reliable computing performance on Linux clusters and on cloud computing services. In this article, we present MapReduce framework-based applications that can be employed in next-generation sequencing and other biological domains. In addition, we discuss the challenges faced by this field as well as future work on parallel computing in bioinformatics. © The Author 2013. Published by Oxford University Press.
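The MapReduce pattern the article refers to can be illustrated in-process with a small k-mer-counting example; Hadoop would run the same map, shuffle, and reduce stages distributed over a cluster:

```python
# Minimal in-process illustration of the MapReduce pattern applied to
# sequencing data: counting k-mers across a set of reads.
from collections import defaultdict

def map_phase(read, k=3):
    """Map: emit (k-mer, 1) pairs for every k-length substring of a read."""
    return [(read[i:i + k], 1) for i in range(len(read) - k + 1)]

def shuffle(pairs):
    """Shuffle: group all values emitted under the same key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: combine each key's values into a final count."""
    return {key: sum(values) for key, values in groups.items()}

reads = ["ATGCAT", "GCATGC"]
pairs = [p for read in reads for p in map_phase(read)]
counts = reduce_phase(shuffle(pairs))
print(counts["GCA"])
```

On Hadoop, the map and reduce functions keep this shape while the framework handles partitioning reads across nodes and moving intermediate pairs during the shuffle.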
Santos, Nathália Porfírio Dos; Couto, Maria Inês Vieira; Martinho-Carvalho, Ana Claudia
Cross-cultural adaptation and translation of the Nijmegen Cochlear Implant Questionnaire (NCIQ) into Brazilian Portuguese and analysis of quality of life (QoL) results in adults with cochlear implant (CI). The NCIQ instrument was translated into Brazilian Portuguese and culturally adapted. After that, a cross-sectional and clinical QoL evaluation was conducted with a group of 24 adults with CI. The questionnaire title in Brazilian Portuguese is 'Questionário Nijmegen de Implantes Cocleares' (NCIQ-P). The version of the NCIQ questionnaire translated into Brazilian Portuguese presented good internal consistency (0.78). The social and physical domains presented the highest scores, with the basic and advanced sound perception subdomains achieving the highest scores. No correlation between gender and time of device use was found for the questionnaire domains and subdomains. The cross-cultural adaptation and translation of the NCIQ into Brazilian Portuguese suggests that this instrument is reliable and useful for clinical and research purposes in Brazilian adults with CI.
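The internal-consistency value quoted above (0.78) is of the kind typically computed as Cronbach's alpha. A minimal computation over hypothetical item scores (not the study's data) looks like:

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha; items is a list of per-item score lists,
    each the same length (one score per respondent)."""
    k = len(items)
    item_vars = sum(statistics.variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    total_var = statistics.variance(totals)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 3-item questionnaire answered by 5 respondents
items = [[4, 3, 5, 2, 4], [5, 3, 4, 2, 5], [4, 2, 5, 3, 4]]
alpha = cronbach_alpha(items)
```

Values around 0.7 or higher are conventionally read as acceptable internal consistency, which is how a figure like the NCIQ-P's 0.78 is interpreted.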
The term "Translational Genomics" reflects both the title and mission of this new journal. "Translational" has traditionally been understood as "applied research" or "development", different from or even opposed to "basic research". Recent scientific and societal developments have triggered a re-assessment of the connotation that "translational" and "basic" are either/or activities: translational research nowadays aims at feeding the best science into applications and solutions for human society. We therefore argue that basic science should be challenged and leveraged for its relevance to human health and societal benefits. This more recent approach and attitude are catalyzed by four trends or developments: evidence-based solutions; large-scale, high-dimensional data; consumer/patient empowerment; and systems-level understanding.
Biomedical informatics involves a core set of methodologies that can provide a foundation for crossing the "translational barriers" associated with translational medicine. To this end, the fundamental aspects of biomedical informatics (e.g., bioinformatics, imaging informatics, clinical informatics, and public health informatics) may be essential in helping improve the ability to bring basic research findings to the bedside, evaluate the efficacy of interventions across communities, and enable the assessment of the eventual impact of translational medicine innovations on health policies. Here, a brief description is provided for a selection of key biomedical informatics topics (Decision Support, Natural Language Processing, Standards, Information Retrieval, and Electronic Health Records) and their relevance to translational medicine. Based on contributions and advancements in each of these topic areas, the article proposes that biomedical informatics practitioners ("biomedical informaticians") can be essential members of translational medicine teams.
Sarkar, Indra Neil
Biomedical informatics involves a core set of methodologies that can provide a foundation for crossing the "translational barriers" associated with translational medicine. To this end, the fundamental aspects of biomedical informatics (e.g., bioinformatics, imaging informatics, clinical informatics, and public health informatics) may be essential in helping improve the ability to bring basic research findings to the bedside, evaluate the efficacy of interventions across communities, and enable the assessment of the eventual impact of translational medicine innovations on health policies. Here, a brief description is provided for a selection of key biomedical informatics topics (Decision Support, Natural Language Processing, Standards, Information Retrieval, and Electronic Health Records) and their relevance to translational medicine. Based on contributions and advancements in each of these topic areas, the article proposes that biomedical informatics practitioners ("biomedical informaticians") can be essential members of translational medicine teams.
Schneider, Maria V; Walter, Peter; Blatter, Marie-Claude; Watson, James; Brazas, Michelle D; Rother, Kristian; Budd, Aidan; Via, Allegra; van Gelder, Celia W G; Jacob, Joachim; Fernandes, Pedro; Nyrönen, Tommi H; De Las Rivas, Javier; Blicher, Thomas; Jimenez, Rafael C; Loveland, Jane; McDowall, Jennifer; Jones, Phil; Vaughan, Brendan W; Lopez, Rodrigo; Attwood, Teresa K; Brooksbank, Catherine
Funding bodies are increasingly recognizing the need to provide graduates and researchers with access to short intensive courses in a variety of disciplines, in order both to improve the general skills base and to provide solid foundations on which researchers may build their careers. In response to the development of 'high-throughput biology', the need for training in the field of bioinformatics, in particular, is seeing a resurgence: it has been defined as a key priority by many institutions and research programmes and is now an important component of many grant proposals. Nevertheless, when it comes to planning and preparing to meet such training needs, tension arises between the reward structures that predominate in the scientific community, which compel individuals to publish or perish, and the time that must be devoted to the design, delivery and maintenance of high-quality training materials. Conversely, there is much relevant teaching material and training expertise available worldwide that, were it properly organized, could be exploited by anyone who needs to provide training or needs to set up a new course. To do this, however, the materials would have to be centralized in a database and clearly tagged in relation to target audiences, learning objectives, etc. Ideally, they would also be peer reviewed, and easily and efficiently accessible for downloading. Here, we present the Bioinformatics Training Network (BTN), a new enterprise that has been initiated to address these needs, and compare it with similar initiatives and collections.
Wightman, Bruce; Hark, Amy T
The development of fields such as bioinformatics and genomics has created new challenges and opportunities for undergraduate biology curricula. Students preparing for careers in science, technology, and medicine need more intensive study of bioinformatics and more sophisticated training in the mathematics on which this field is based. In this study, we deliberately integrated bioinformatics instruction at multiple course levels into an existing biology curriculum. Students in an introductory biology course, intermediate lab courses, and advanced project-oriented courses all participated in new course components designed to sequentially introduce bioinformatics skills and knowledge, as well as computational approaches that are common to many bioinformatics applications. In each course, bioinformatics learning was embedded in an existing disciplinary instructional sequence, as opposed to having a single course where all bioinformatics learning occurs. We designed direct and indirect assessment tools to follow student progress through the course sequence. Our data show significant gains in both student confidence and ability in bioinformatics during individual courses and as course level increases. Despite evidence of substantial student learning in both bioinformatics and mathematics, students were skeptical about the link between learning bioinformatics and learning mathematics. While our approach resulted in substantial learning gains, student "buy-in" and engagement might be better in longer project-based activities that demand application of skills to research problems. Nevertheless, in situations where a concentrated focus on project-oriented bioinformatics is not possible or desirable, our approach of integrating multiple smaller components into an existing curriculum provides an alternative. Copyright © 2012 Wiley Periodicals, Inc.
Machado, Catia M; Rebholz-Schuhmann, Dietrich; Freitas, Ana T; Couto, Francisco M
Semantic web technologies offer an approach to data integration and sharing, even for resources developed independently or broadly distributed across the web. This approach is particularly suitable for scientific domains that profit from large amounts of data that reside in the public domain and that have to be exploited in combination. Translational medicine is such a domain, which in addition has to integrate private data from the clinical domain with proprietary data from the pharmaceutical domain. In this survey, we present the results of our analysis of translational medicine solutions that follow a semantic web approach. We assessed these solutions in terms of their target medical use case; the resources covered to achieve their objectives; and their use of existing semantic web resources for the purposes of data sharing, data interoperability and knowledge discovery. The semantic web technologies seem to fulfill their role in facilitating the integration and exploration of data from disparate sources, but it is also clear that simply using them is not enough. It is fundamental to reuse resources, to define mappings between resources, to share data and knowledge. All these aspects allow the instantiation of translational medicine at the semantic web-scale, thus resulting in a network of solutions that can share resources for a faster transfer of new scientific results into the clinical practice. The envisioned network of translational medicine solutions is on its way, but it still requires resolving the challenges of sharing protected data and of integrating semantic-driven technologies into the clinical practice. © The Author 2013. Published by Oxford University Press.
Rodriguez-Gil, Luis; Orduna, Pablo; Bollen, Lars; Govaerts, Sten; Holzer, Adrian; Gillet, Dennis; Lopez-de-Ipina, Diego; Garcia-Zubia, Javier
Developing educational apps that cover a wide range of learning contexts and languages is a challenging task. In this paper, we introduce the AppComposer Web app to address this issue. The AppComposer aims at empowering teachers to easily translate and adapt existing apps that fit their educational
…japonicus (Lotus), Vaccinium corymbosum (blueberry), Stegodyphus mimosarum (spider) and Trifolium occidentale (clover). From a bioinformatics data analysis perspective, my work can be divided into three parts: genome annotation, small RNA, and gene expression analysis. Lotus is a legume of significant… …and dhurrin, which have not previously been characterized in blueberries. There are more than 44,500 spider species with distinct habitats and unique characteristics. Spiders are masters of producing silk webs to catch prey and using venom to neutralize it. The exploration of the genetics behind these properties has just started. We have assembled and annotated the first two spider genomes to facilitate our understanding of spiders at the molecular level. The need for analyzing the large and increasing amount of sequencing data has increased the demand for efficient, user-friendly, and broadly applicable…
Schjoldager, Anne Gram; Gottlieb, Henrik; Klitgård, Ida
Understanding Translation is designed as a textbook for courses on the theory and practice of translation in general and of particular types of translation - such as interpreting, screen translation and literary translation. The aim of the book is to help you gain an in-depth understanding of the phenomenon of translation and to provide you with a conceptual framework for the analysis of various aspects of professional translation. Intended readers are students of translation and languages, but the book will also be relevant for others who are interested in the theory and practice of translation - translators, language teachers, translation users and literary, TV and film critics, for instance. Discussions focus on translation between Danish and English.
Oliver, Jeffrey C
Health sciences research is increasingly focusing on big data applications, such as genomic technologies and precision medicine, to address key issues in human health. These approaches rely on biological data repositories and bioinformatic analyses, both of which are growing rapidly in size and scope. Libraries play a key role in supporting researchers in navigating these and other information resources. With the goal of supporting bioinformatics research in the health sciences, the University of Arizona Health Sciences Library established a Bioinformation program. To shape the support provided by the library, I developed and administered a needs assessment survey to the University of Arizona Health Sciences campus in Tucson, Arizona. The survey was designed to identify the training topics of interest to health sciences researchers and the preferred modes of training. Survey respondents expressed an interest in a broad array of potential training topics, including "traditional" information seeking as well as interest in analytical training. Of particular interest were training in transcriptomic tools and the use of databases linking genotypes and phenotypes. Staff were most interested in bioinformatics training topics, while faculty were the least interested. Hands-on workshops were significantly preferred over any other mode of training. The University of Arizona Health Sciences Library is meeting those needs through internal programming and external partnerships. The results of the survey demonstrate a keen interest in a variety of bioinformatic resources; the challenge to the library is how to address those training needs. The mode of support depends largely on library staff expertise in the numerous subject-specific databases and tools. Librarian-led bioinformatic training sessions provide opportunities for engagement with researchers at multiple points of the research life cycle. When training needs exceed library capacity, partnering with intramural and
Oakley, Mark T; Barthel, Daniel; Bykov, Yuri; Garibaldi, Jonathan M; Burke, Edmund K; Krasnogor, Natalio; Hirst, Jonathan D
Optimisation problems pervade structural bioinformatics. In this review, we describe recent work addressing a selection of bioinformatics challenges. We begin with a discussion of research into protein structure comparison, and highlight the utility of Kolmogorov complexity as a measure of structural similarity. We then turn to research into de novo protein structure prediction, in which structures are generated from first principles. In this endeavour, there is a compromise between the detail of the model and the extent to which the conformational space of the protein can be sampled. We discuss some developments in this area, including off-lattice structure prediction using the great deluge algorithm. One strategy to reduce the size of the search space is to restrict the protein chain to sites on a regular lattice. In this context, we highlight the use of memetic algorithms, which combine genetic algorithms with local optimisation, to the study of simple protein models on the two-dimensional square lattice and the face-centred cubic lattice.
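The lattice models surveyed above can be made concrete with a short sketch. The function below is a hypothetical illustration (not code from the reviewed work) of the two-dimensional square-lattice HP model: a conformation is a self-avoiding walk, and its energy is minus the number of non-bonded H-H contacts; the `hp_energy` name and the U/D/L/R move encoding are assumptions made for this example.

```python
# Sketch of energy evaluation in the 2D square-lattice HP protein model.
# sequence: string over {'H', 'P'}; directions: moves placing residues 1..n-1.

def hp_energy(sequence, directions):
    """Return -(number of non-bonded H-H contacts), or None if the walk
    is not self-avoiding (an invalid conformation)."""
    moves = {'U': (0, 1), 'D': (0, -1), 'L': (-1, 0), 'R': (1, 0)}
    coords = [(0, 0)]
    for d in directions:
        dx, dy = moves[d]
        x, y = coords[-1]
        coords.append((x + dx, y + dy))
    if len(set(coords)) != len(coords):   # self-avoidance violated
        return None
    pos = {c: i for i, c in enumerate(coords)}
    energy = 0
    for i, (x, y) in enumerate(coords):
        if sequence[i] != 'H':
            continue
        for dx, dy in moves.values():
            j = pos.get((x + dx, y + dy))
            # j > i + 1 counts each topological (non-chain) contact once
            if j is not None and j > i + 1 and sequence[j] == 'H':
                energy -= 1
    return energy
```

A memetic algorithm as described in the review would use such an evaluation inside its fitness function, combining a genetic search over direction strings with local moves that perturb a few directions at a time.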
Ranganathan, Shoba; Hsu, Wen-Lian; Yang, Ueng-Cheng; Tan, Tin Wee
The 2008 annual conference of the Asia Pacific Bioinformatics Network (APBioNet), Asia's oldest bioinformatics organisation set up in 1998, was organized as the 7th International Conference on Bioinformatics (InCoB), jointly with the Bioinformatics and Systems Biology in Taiwan (BIT 2008) Conference, Oct. 20-23, 2008 at Taipei, Taiwan. Besides bringing together scientists from the field of bioinformatics in this region, InCoB is actively involving researchers from the area of systems biology, to facilitate greater synergy between these two groups. Marking the 10th Anniversary of APBioNet, this InCoB 2008 meeting followed on from a series of successful annual events in Bangkok (Thailand), Penang (Malaysia), Auckland (New Zealand), Busan (South Korea), New Delhi (India) and Hong Kong. Additionally, tutorials and the Workshop on Education in Bioinformatics and Computational Biology (WEBCB) immediately prior to the 20th Federation of Asian and Oceanian Biochemists and Molecular Biologists (FAOBMB) Taipei Conference provided ample opportunity for inducting mainstream biochemists and molecular biologists from the region into a greater level of awareness of the importance of bioinformatics in their craft. In this editorial, we provide a brief overview of the peer-reviewed manuscripts accepted for publication herein, grouped into thematic areas. As the regional research expertise in bioinformatics matures, the papers fall into thematic areas, illustrating the specific contributions made by APBioNet to global bioinformatics efforts.
Abstract Background There is a need for software applications that provide users with a complete and extensible toolkit for chemo- and bioinformatics accessible from a single workbench. Commercial packages are expensive and closed source, hence they do not allow end users to modify algorithms and add custom functionality. Existing open source projects are more focused on providing a framework for integrating existing, separately installed bioinformatics packages, rather than providing user-friendly interfaces. No open source chemoinformatics workbench has previously been published, and no successful attempts have been made to integrate chemo- and bioinformatics into a single framework. Results Bioclipse is an advanced workbench for resources in chemo- and bioinformatics, such as molecules, proteins, sequences, spectra, and scripts. It provides 2D-editing, 3D-visualization, file format conversion, calculation of chemical properties, and much more; all fully integrated into a user-friendly desktop application. Editing supports standard functions such as cut and paste, drag and drop, and undo/redo. Bioclipse is written in Java and based on the Eclipse Rich Client Platform with a state-of-the-art plugin architecture. This gives Bioclipse an advantage over other systems, as it can easily be extended with functionality in any desired direction. Conclusion Bioclipse is a powerful workbench for bio- and chemoinformatics as well as an advanced integration platform. The rich functionality, intuitive user interface, and powerful plugin architecture make Bioclipse the most advanced and user-friendly open source workbench for chemo- and bioinformatics. Bioclipse is released under the Eclipse Public License (EPL), an open source license which sets no constraints on external plugin licensing; it is totally open for both open source plugins as well as commercial ones. Bioclipse is freely available at http://www.bioclipse.net.
Ferruzzi, Mario G.; Peterson, Devin G.; Singh, R. Paul; Schwartz, Steven J.; Freedman, Marjorie R.
This paper, based on the symposium “Real-World Nutritional Translation Blended With Food Science,” describes how an integrated “farm-to-cell” approach would create the framework necessary to address pressing public health issues. The paper describes current research that examines chemical reactions that may influence food flavor (and ultimately food consumption) and posits how these reactions can be used in health promotion; it explains how mechanical engineering and computer modeling can study digestive processes and provide better understanding of how physical properties of food influence nutrient bioavailability and posits how this research can also be used in the fight against obesity and diabetes; and it illustrates how an interdisciplinary scientific collaboration led to the development of a novel functional food that may be used clinically in the prevention and treatment of prostate cancer. PMID:23153735
Suplatov, Dmitry; Voevodin, Vladimir; Švedas, Vytas
The ability of proteins and enzymes to maintain a functionally active conformation under adverse environmental conditions is an important feature of biocatalysts, vaccines, and biopharmaceutical proteins. From an evolutionary perspective, robust stability of proteins improves their biological fitness and allows for further optimization. Viewed from an industrial perspective, enzyme stability is crucial for the practical application of enzymes under the required reaction conditions. In this review, we analyze bioinformatic-driven strategies that are used to predict structural changes that can be applied to wild type proteins in order to produce more stable variants. The most commonly employed techniques can be classified into stochastic approaches, empirical or systematic rational design strategies, and design of chimeric proteins. We conclude that bioinformatic analysis can be efficiently used to study large protein superfamilies systematically as well as to predict particular structural changes which increase enzyme stability. Evolution has created a diversity of protein properties that are encoded in genomic sequences and structural data. Bioinformatics has the power to uncover this evolutionary code and provide a reproducible selection of hotspots - key residues to be mutated in order to produce more stable and functionally diverse proteins and enzymes. Further development of systematic bioinformatic procedures is needed to organize and analyze sequences and structures of proteins within large superfamilies and to link them to function, as well as to provide knowledge-based predictions for experimental evaluation. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Deepti D. Deobagkar
Bioinformatics software and visualisation tools have been a key factor in the rapid and phenomenal advances in genomics, proteomics, medicine, drug discovery, systems approaches and in fact in every area of new development. Indian scientists have also made a mark in a few specific areas. India has the advantage of an early start and an extensive, organised network in bioinformatics education and research, with substantial inputs from the Indian government. India has a stronghold in computation and IT and has a pool of bright young talent with a demographic dividend, along with experienced and excellent mentors and researchers. Although small in number and scale, the bioinformatics industry also has a presence and is making its mark in India. There are a number of high-throughput and extremely useful resources available which are critical in biological data analysis and interpretation. This has made a paradigm shift in the way research can be carried out and discoveries can be made in any area of biological, biochemical and chemical research. This article summarises the current status and contributions from India in the development of software and web servers for bioinformatics applications.
Bioinformatics is a scientific discipline that applies computer science and information technology to help understand biological processes. The NIH provides a list of free online bioinformatics tutorials, either generated by the NIH Library or other institutes, which includes introductory lectures and "how to" videos on using various tools.
This article describes a new approach to teaching bioinformatics using "Arabidopsis" genetic sequences. Several open-ended and inquiry-based laboratory exercises have been designed to help students grasp key concepts and gain practical skills in bioinformatics, using "Arabidopsis" leucine-rich repeat receptor-like kinase (LRR…
Heyer, Laurie J.
This article describes the sequence alignment problem in bioinformatics. Through examples, we formulate sequence alignment as an optimization problem and show how to compute the optimal alignment with dynamic programming. The examples and sample exercises have been used by the author in a specialized course in bioinformatics, but could be adapted…
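The dynamic-programming formulation described in that abstract can be illustrated with a short sketch (not the article's own code) of the classic Needleman-Wunsch recurrence for global alignment; the scoring values for match, mismatch, and gap are illustrative assumptions.

```python
# Sketch: global sequence alignment score via dynamic programming
# (Needleman-Wunsch). Scoring parameters are illustrative choices.

def align_score(a, b, match=1, mismatch=-1, gap=-2):
    """Return the optimal global alignment score of strings a and b."""
    n, m = len(a), len(b)
    # dp[i][j] = best score aligning the prefixes a[:i] and b[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap            # a-prefix aligned against gaps
    for j in range(1, m + 1):
        dp[0][j] = j * gap            # b-prefix aligned against gaps
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            dp[i][j] = max(diag,                 # align a[i-1] with b[j-1]
                           dp[i - 1][j] + gap,   # gap in b
                           dp[i][j - 1] + gap)   # gap in a
    return dp[n][m]
```

Recovering the alignment itself, rather than just the score, requires a standard traceback through the dp table, which the same recurrence supports.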
Dai, Lin; Gao, Xin; Guo, Yan; Xiao, Jingfa; Zhang, Zhang
As advances in life sciences and information technology bring profound influences on bioinformatics due to its interdisciplinary nature, bioinformatics is experiencing a new leap-forward from in-house computing infrastructure into utility-supplied cloud computing delivered over the Internet, in order to handle the vast quantities of biological data generated by high-throughput experimental technologies. Albeit relatively new, cloud computing promises to address big data storage and analysis issues in the bioinformatics field. Here we review extant cloud-based services in bioinformatics, classify them into Data as a Service (DaaS), Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS), and present our perspectives on the adoption of cloud computing in bioinformatics. This article was reviewed by Frank Eisenhaber, Igor Zhulin, and Sandor Pongor.
Harris, Nomi L; Cock, Peter J A; Chapman, Brad; Fields, Christopher J; Hokamp, Karsten; Lapp, Hilmar; Muñoz-Torres, Monica; Wiencko, Heather
Message from the ISCB: The Bioinformatics Open Source Conference (BOSC) is a yearly meeting organized by the Open Bioinformatics Foundation (OBF), a non-profit group dedicated to promoting the practice and philosophy of Open Source software development and Open Science within the biological research community. BOSC has been run since 2000 as a two-day Special Interest Group (SIG) before the annual ISMB conference. The 17th annual BOSC (http://www.open-bio.org/wiki/BOSC_2016) took place in Orlando, Florida in July 2016. As in previous years, the conference was preceded by a two-day collaborative coding event open to the bioinformatics community. The conference brought together nearly 100 bioinformatics researchers, developers and users of open source software to interact and share ideas about standards, bioinformatics software development, and open and reproducible science.
Chakraborty, Chiranjib; George Priya Doss, C; Zhu, Hailong; Agoramoorthy, Govindasamy
Hong Kong's bioinformatics sector is attaining new heights in combination with its economic boom and the predominance of the working-age group in its population. Factors such as a knowledge-based and free-market economy have contributed towards a prominent position on the world map of bioinformatics. In this review, we consider the educational measures, landmark research activities, the achievements of bioinformatics companies and the role of the Hong Kong government in the establishment of bioinformatics as a strength. However, several hurdles remain. New government policies will assist computational biologists to overcome these hurdles and further raise the profile of the field. There is a high expectation that bioinformatics in Hong Kong will be a promising area for the next generation.
A long-established approach to legal translation focuses on terminological equivalence, making translators strictly follow the words of source texts. Recent research suggests that there is room for some creativity, allowing translators to deviate from the source texts. However, little attention is given to genre conventions in source texts and the ways in which they can best be translated. I propose that translators of statutes with an informative function in expert-to-expert communication may be allowed limited translational creativity when translating specific types of genre convention. This creativity is a result of translators adopting either a source-language or a target-language oriented strategy and is limited by the pragmatic principle of co-operation. Examples of translation options are provided illustrating the different results in target texts. The use of a target-language oriented…
López, Vivian F.; Aguilar, Ramiro; Alonso, Luis; Moreno, María N.; Corchado, Juan M.
In this paper we describe both theoretical and practical results of a novel data mining process that combines hybrid techniques of association analysis and classical sequencing algorithms of genomics to generate grammatical structures of a specific language. We used an application of a compiler-generator system that allows the development of a practical application within the area of grammarware, where the concepts of language analysis are applied to other disciplines, such as bioinformatics. The tool allows the complexity of the obtained grammar to be measured automatically from textual data. A technique of incremental discovery of sequential patterns is presented to obtain simplified production rules, which are compacted with bioinformatics criteria to make up a grammar.
Vassilev, D.; Leunissen, J.; Atanassov, A.; Nenov, A.; Dimov, G.
The goal of plant genomics is to understand the genetic and molecular basis of all biological processes in plants that are relevant to the species. This understanding is fundamental to allow efficient exploitation of plants as biological resources in the development of new cultivars with improved
[Fragmentary record: a citation, "Carriage of OXA-58 but not of OXA-51 beta-lactamase gene correlates with carbapenem resistance", and remnants of a table of A. baumannii genome properties (accession numbers, genome sizes, beta-lactamase and resistance genes).]
Stringer-Calvert David WJ
Abstract Background This article addresses the problem of interoperation of heterogeneous bioinformatics databases. Results We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL), but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and Java languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. Conclusion BioWarehouse embodies significant progress on the
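The warehousing idea behind that abstract, several source databases loaded into one relational schema so that a single SQL query can span them, can be sketched in miniature. The tables and data below are a hypothetical toy, not BioWarehouse's actual schema; the query mirrors the orphan-enzyme analysis the abstract describes.

```python
# Sketch: a toy "warehouse" with two component tables in one schema,
# queried together with a single SQL statement (hypothetical schema).
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# Two component databases loaded into one relational schema.
cur.execute("CREATE TABLE enzyme (ec TEXT PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE sequence (acc TEXT PRIMARY KEY, ec TEXT)")
cur.executemany("INSERT INTO enzyme VALUES (?, ?)",
                [("1.1.1.1", "alcohol dehydrogenase"),
                 ("4.2.1.20", "tryptophan synthase"),
                 ("9.9.9.9", "orphan activity")])
cur.executemany("INSERT INTO sequence VALUES (?, ?)",
                [("P00330", "1.1.1.1"), ("P0A877", "4.2.1.20")])
# Cross-database query: enzyme activities with no known sequence,
# analogous to the orphan-enzyme analysis in the abstract.
cur.execute("""SELECT e.ec FROM enzyme e
               LEFT JOIN sequence s ON s.ec = e.ec
               WHERE s.acc IS NULL""")
orphans = [row[0] for row in cur.fetchall()]
```

Once the loaders have normalized the component databases into one schema, analyses like the 36% orphan-enzyme figure reduce to joins of this kind.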
Fatumo, Segun A.; Adoga, Moses P.; Ojo, Opeolu O.; Oluwagbemi, Olugbenga; Adeoye, Tolulope; Ewejobi, Itunuoluwa; Adebiyi, Marion; Adebiyi, Ezekiel; Bewaji, Clement; Nashiru, Oyekanmi
Over the past few decades, major advances in the field of molecular biology, coupled with advances in genomic technologies, have led to an explosive growth in the biological data generated by the scientific community. The critical need to process and analyze such a deluge of data and turn it into useful knowledge has caused bioinformatics to gain prominence and importance. Bioinformatics is an interdisciplinary research area that applies techniques, methodologies, and tools in computer and information science to solve biological problems. In Nigeria, bioinformatics has recently played a vital role in the advancement of biological sciences. As a developing country, the importance of bioinformatics is rapidly gaining acceptance, and bioinformatics groups comprised of biologists, computer scientists, and computer engineers are being constituted at Nigerian universities and research institutes. In this article, we present an overview of bioinformatics education and research in Nigeria. We also discuss professional societies and academic and research institutions that play central roles in advancing the discipline in Nigeria. Finally, we propose strategies that can bolster bioinformatics education and support from policy makers in Nigeria, with potential positive implications for other developing countries. PMID:24763310
Peterson, Janey C.; Czajkowski, Susan; Charlson, Mary E.; Link, Alissa R.; Wells, Martin T.; Isen, Alice M.; Mancuso, Carol A.; Allegrante, John P.; Boutin-Foster, Carla; Ogedegbe, Gbenga; Jobe, Jared B.
Objective: To describe a mixed-methods approach to develop and test a basic behavioral science-informed intervention to motivate behavior change in three high-risk clinical populations. Our theoretically derived intervention comprised a combination of positive affect and self-affirmation (PA/SA), which we applied to three clinical chronic disease populations. Methods: We employed a sequential mixed-methods model (EVOLVE) to design and test the PA/SA intervention to increase physical activity in people with coronary artery disease (post-percutaneous coronary intervention [PCI]) or asthma (ASM), and to improve medication adherence in African Americans with hypertension (HTN). In an initial qualitative phase, we explored participant values and beliefs. We next pilot tested and refined the intervention, and then conducted three randomized controlled trials (RCTs) with parallel study design. Participants were randomized to combined PA/SA vs. an informational control (IC) and followed bimonthly for 12 months, assessing for health behaviors and interval medical events. Results: Over 4.5 years, we enrolled 1,056 participants. Changes were sequentially made to the intervention during the qualitative and pilot phases. The three RCTs enrolled 242 PCI, 258 ASM and 256 HTN participants (n = 756). Overall, 45.1% of PA/SA participants versus 33.6% of IC participants achieved successful behavior change (p = 0.001). In multivariate analysis, the PA/SA intervention remained a significant predictor of achieving behavior change. These results show that basic behavioral science research can be translated into efficacious interventions for chronic disease populations. PMID:22963594
Translating feminism Pointing to manifold and long-lasting connections between feminism and translation, the article first presents a selection of multilingual writers (Narcyza Żmichowska and Deborah Vogel), translators (Zofia Żeleńska and Kazimiera Iłłakowiczówna) and translation commentators (Joanna Lisek and Karolina Szymaniak) to ponder why the work of early Polish feminists is neglected. It seems that one of the reasons might be the current colonization of Polish femini...
Levac, Danielle; Glegg, Stephanie M N; Sveistrup, Heidi; Colquhoun, Heather; Miller, Patricia A; Finestone, Hillel; DePaul, Vincent; Harris, Jocelyn E; Velikonja, Diana
Despite increasing evidence for the effectiveness of virtual reality (VR)-based therapy in stroke rehabilitation, few knowledge translation (KT) resources exist to support clinical integration. KT interventions addressing known barriers and facilitators to VR use are required. When environmental barriers to VR integration are less amenable to change, KT interventions can target modifiable barriers related to therapist knowledge and skills. A multi-faceted KT intervention was designed and implemented to support physical and occupational therapists in two stroke rehabilitation units in acquiring proficiency with use of the Interactive Exercise Rehabilitation System (IREX; GestureTek). The KT intervention consisted of interactive e-learning modules, hands-on workshops and experiential practice. Evaluation included the Assessing Determinants of Prospective Take Up of Virtual Reality (ADOPT-VR) Instrument and self-report confidence ratings of knowledge and skills pre- and post-study. Usability of the IREX was measured with the System Usability Scale (SUS). A focus group gathered therapist experiences. Frequency of IREX use was recorded for 6 months post-study. Eleven therapists delivered a total of 107 sessions of VR-based therapy to 34 clients with stroke. On the ADOPT-VR, significant pre-post improvements in therapist perceived behavioral control (p = 0.003), self-efficacy (p = 0.005) and facilitating conditions (p = 0.019) related to VR use were observed. Therapist intention to use VR did not change. Knowledge and skills improved significantly following e-learning completion (p = 0.001) and were sustained 6 months post-study. Below average perceived usability of the IREX (19th percentile) was reported. Lack of time was the most frequently reported barrier to VR use. A decrease in frequency of perceived barriers to VR use was not significant (p = 0.159). Two therapists used the IREX sparingly in the 6 months following the study. Therapists reported...
The drastic increase in the number of coronaviruses discovered and coronavirus genomes being sequenced has given us an unprecedented opportunity to perform genomics and bioinformatics analysis on this family of viruses. Coronaviruses possess the largest genomes (26.4 to 31.7 kb) among all known RNA viruses, with G + C contents varying from 32% to 43%. Variable numbers of small ORFs are present between the various conserved genes (ORF1ab, spike, envelope, membrane and nucleocapsid) and downstream to the nucleocapsid gene in different coronavirus lineages. Phylogenetically, three genera, Alphacoronavirus, Betacoronavirus and Gammacoronavirus, with Betacoronavirus consisting of subgroups A, B, C and D, exist. A fourth genus, Deltacoronavirus, which includes bulbul coronavirus HKU11, thrush coronavirus HKU12 and munia coronavirus HKU13, is emerging. Molecular clock analysis using various gene loci revealed the time of the most recent common ancestor of human/civet SARS-related coronavirus to be 1999-2002, with an estimated substitution rate of 4×10⁻⁴ to 2×10⁻² substitutions per site per year. Recombination in coronaviruses was most notable between different strains of murine hepatitis virus (MHV), between different strains of infectious bronchitis virus, between MHV and bovine coronavirus, between feline coronavirus (FCoV) type I and canine coronavirus generating FCoV type II, and between the three genotypes of human coronavirus HKU1 (HCoV-HKU1). Codon usage bias in coronaviruses was observed, with HCoV-HKU1 showing the most extreme bias, and cytosine deamination and selection of CpG-suppressed clones are the two major independent biological forces that shape such codon usage bias in coronaviruses.
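The genome-composition statistics this record mentions (G + C content, codon usage counts) amount to simple sequence arithmetic; a minimal sketch on a made-up ORF (the sequence and numbers are illustrative, not taken from any coronavirus genome):

```python
from collections import Counter

def gc_content(seq: str) -> float:
    """Fraction of G/C bases in a nucleotide sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def codon_counts(orf: str) -> Counter:
    """Count codon usage in an open reading frame, reading in-frame triplets."""
    usable = len(orf) - len(orf) % 3
    return Counter(orf[i:i + 3] for i in range(0, usable, 3))

# Hypothetical toy ORF: start codon, a few codons, stop codon.
toy_orf = "ATGGCTGCAAAATTTGGCTAA"
print(round(gc_content(toy_orf), 2))
print(codon_counts(toy_orf).most_common(3))
```

Real analyses of codon usage bias would compare such counts against synonymous-codon expectations (e.g., relative synonymous codon usage) across whole genomes.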
Melero, Juan L; Andrades, Sergi; Arola, Lluís; Romeu, Antoni
Psoriasis is an immune-mediated, inflammatory and hyperproliferative disease of the skin and joints. The cause of psoriasis is still unknown. The fundamental feature of the disease is the hyperproliferation of keratinocytes and the recruitment of cells from the immune system in the region of the affected skin, which leads to deregulation of the expression of many well-known genes. Based on data mining and bioinformatic scripting, here we show a new dimension of the effect of psoriasis at the genomic level. Using our own pipeline of scripts in Perl and MySQL and based on the freely available NCBI Gene Expression Omnibus (GEO) database: DataSet Record GDS4602 (Series GSE13355), we explore the extent of the effect of psoriasis on gene expression in the affected tissue. We give greater insight into the effects of psoriasis on the up-regulation of some genes in the cell cycle (CCNB1, CCNA2, CCNE2, CDK1) or the dynamin system (GBPs, MXs, MFN1), as well as the down-regulation of typical antioxidant genes (catalase, CAT; superoxide dismutases, SOD1-3; and glutathione reductase, GSR). We also provide a complete list of the human genes and how they respond in a state of psoriasis. Our results show that psoriasis affects all chromosomes and many biological functions. If we further consider the stable and mitotically inheritable character of the psoriasis phenotype, and the influence of environmental factors, then it seems that psoriasis has an epigenetic origin. This fits well with the strong hereditary character of the disease as well as its complex genetic background. Copyright © 2017 Japanese Society for Investigative Dermatology. Published by Elsevier B.V. All rights reserved.
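The up-/down-regulation screen this record describes can be sketched as a log2 fold-change computation; the gene names below reuse those in the abstract, but the expression values are invented toy numbers, not data from GDS4602:

```python
import math

# Hypothetical expression intensities: lesional (psoriatic) vs. normal skin.
lesional = {"CCNB1": 520.0, "CAT": 80.0, "GAPDH": 1000.0}
normal = {"CCNB1": 130.0, "CAT": 320.0, "GAPDH": 980.0}

def log2_fold_changes(case, control):
    """log2(case/control) per gene; > 1 suggests up-, < -1 down-regulation."""
    return {g: math.log2(case[g] / control[g]) for g in case}

fc = log2_fold_changes(lesional, normal)
up = [g for g, v in fc.items() if v > 1]
down = [g for g, v in fc.items() if v < -1]
print(up, down)
```

A real pipeline would add replicate handling and a significance test on top of the fold-change filter; this only illustrates the arithmetic of the screen.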
Cristancho, Marco; Isaza, Gustavo; Pinzón, Andrés; Rodríguez, Juan
This volume compiles accepted contributions for the 2nd Edition of the Colombian Computational Biology and Bioinformatics Congress CCBCOL, after a rigorous review process in which 54 papers were accepted for publication from 119 submitted contributions. Bioinformatics and Computational Biology are areas of knowledge that have emerged due to advances that have taken place in the Biological Sciences and its integration with Information Sciences. The expansion of projects involving the study of genomes has led the way in the production of vast amounts of sequence data which needs to be organized, analyzed and stored to understand phenomena associated with living organisms related to their evolution, behavior in different ecosystems, and the development of applications that can be derived from this analysis.
Olsen, Lars Rønn; Campos, Benito; Barnkob, Mike Stein
...the potential of cancer immunotherapies has yet to be fulfilled. The insufficient efficacy of existing treatments can be attributed to a number of biological and technical issues. In this review, we detail the current limitations of immunotherapy target selection and design, and review computational methods to streamline therapy target discovery in a bioinformatics analysis pipeline. We describe specialized bioinformatics tools and databases for three main bottlenecks in immunotherapy target discovery: the cataloging of potentially antigenic proteins, the identification of potential HLA binders, and the selection of epitopes...
Drawing on the Bourdieusian concept of habitus and its applicability in the field of translation, this article discusses Antjie Krog's profile in the practice of translation in. South Africa. Bourdieu's conceptualisation of the relationship between the initiating activities of translators and the structures which constrain and enable ...
Often just any bilingual person is approached to translate or a free web-based translation application such as Google Translate is employed. This article describes a study in which the quality of translation products created by Google Translate, a translation student and a professional translator were assessed and compared ...
Nakamura Satoshi; Morishima Shigeo
We introduce a multimodal English-to-Japanese and Japanese-to-English translation system that also translates the speaker's speech motion by synchronizing it to the translated speech. This system also introduces both a face synthesis technique that can generate any viseme lip shape and a face tracking technique that can estimate the original position and rotation of a speaker's face in an image sequence. To retain the speaker's facial expression, we substitute only the speech organ's image with the synthesized one...
Cramer, Alona O; MacLaren, Robert E
Induced pluripotent stem cells (iPSc) are a scientific and medical frontier. Application of reprogrammed somatic cells for clinical trials is in its dawn period; advances in research with animal and human iPSc are paving the way for retinal therapies with the ongoing development of safe animal cell transplantation studies and characterization of patient- specific and disease-specific human iPSc. The retina is an optimal model for investigation of neural regeneration; amongst other advantageous attributes, it is the most accessible part of the CNS for surgery and outcome monitoring. A recent clinical trial showing a degree of visual restoration via a subretinal electronic prosthesis implies that even a severely degenerate retina may have the capacity for repair after cell replacement through potential plasticity of the visual system. Successful differentiation of neural retina from iPSc and the recent generation of an optic cup from human ESc invitro increase the feasibility of generating an expandable and clinically suitable source of cells for human clinical trials. In this review we shall present recent studies that have propelled the field forward and discuss challenges in utilizing iPS cell derived retinal cells as reliable models for clinical therapies and as a source for clinical cell transplantation treatment for patients suffering from genetic retinal disease.
Tucker, Laura B; Velosky, Alexander G; McCabe, Joseph T
Acquired traumatic brain injury (TBI) is frequently accompanied by persistent cognitive symptoms, including executive function disruptions and memory deficits. The Morris Water Maze (MWM) is the most widely-employed laboratory behavioral test for assessing cognitive deficits in rodents after experimental TBI. Numerous protocols exist for performing the test, which has shown great robustness in detecting learning and memory deficits in rodents after infliction of TBI. We review applications of the MWM for the study of cognitive deficits following TBI in pre-clinical studies, describing multiple ways in which the test can be employed to examine specific aspects of learning and memory. Emphasis is placed on dependent measures that are available and important controls that must be considered in the context of TBI. Finally, caution is given regarding interpretation of deficits as being indicative of dysfunction of a single brain region (hippocampus), as experimental models of TBI most often result in more diffuse damage that disrupts multiple neural pathways and larger functional networks that participate in complex behaviors required in MWM performance. Published by Elsevier Ltd.
Fallov, Mia Arp; Birk, Rasmus
The purpose of this paper is to explore how practices of translation shape particular paths of inclusion for people living in marginalized residential areas in Denmark. Inclusion, we argue, is not an end-state, but rather something which must be constantly performed. Active citizenship, today, is not merely a question of participation, but of learning to become active in all spheres of life. The paper draws on empirical examples from multi-sited fieldwork in six different sites of local community work in Denmark to demonstrate how different dimensions of translation are involved in shaping active citizenship. We propose the following dimensions of translation: translating authority, translating language, and translating social problems. The paper takes its theoretical point of departure from assemblage urbanism, arguing that cities are heterogeneous assemblages of socio-material interactions...
Zhou, Yinhua; Datta, Saheli; Salter, Charlotte
The governments of China, India, and the United Kingdom are unanimous in their belief that bioinformatics should supply the link between basic life sciences research and its translation into health benefits for the population and the economy. Yet at the same time, as ambitious states vying for position in the future global bioeconomy they differ considerably in the strategies adopted in pursuit of this goal. At the heart of these differences lies the interaction between epistemic change within the scientific community itself and the apparatus of the state. Drawing on desk-based research and thirty-two interviews with scientists and policy makers in the three countries, this article analyzes the politics that shape this interaction. From this analysis emerges an understanding of the variable capacities of different kinds of states and political systems to work with science in harnessing the potential of new epistemic territories in global life sciences innovation. PMID:27546935
Scheuermann, Richard H; Sinkovits, Robert S; Schenkelberg, Theodore; Koff, Wayne C
Biomedical research has become a data intensive science in which high throughput experimentation is producing comprehensive data about biological systems at an ever-increasing pace. The Human Vaccines Project is a new public-private partnership, with the goal of accelerating development of improved vaccines and immunotherapies for global infectious diseases and cancers by decoding the human immune system. To achieve its mission, the Project is developing a Bioinformatics Hub as an open-source, multidisciplinary effort with the overarching goal of providing an enabling infrastructure to support the data processing, analysis and knowledge extraction procedures required to translate high throughput, high complexity human immunology research data into biomedical knowledge, to determine the core principles driving specific and durable protective immune responses.
Díaz-Del-Pino, Sergio; Falgueras, Juan; Perez-Wohlfeil, Esteban; Trelles, Oswaldo
Nearly 10 years have passed since the first mobile apps appeared. Given that bioinformatics is a web-based world and that mobile devices are endowed with web browsers, it seemed natural that bioinformatics would transition from personal computers to mobile devices, but nothing could be further from the truth. The transition demands new paradigms, designs and novel implementations. Through an in-depth analysis of the requirements of existing bioinformatics applications, we designed and deployed an easy-to-use, web-based, lightweight mobile client. This client is able to browse, select, automatically compose interface parameters, invoke services and monitor the execution of Web Services using the services' metadata stored in catalogs or repositories. mORCA is available at http://bitlab-es.com/morca/app as a web-app. It is also available in the App Store by Apple and Play Store by Google. The software will be available for at least 2 years. Source code, final web-app, training material and documentation are available at http://bitlab-es.com/morca. © The Author(s) 2017. Published by Oxford University Press.
Brazas, Michelle D.; Ouellette, B. F. Francis
With the advent of YouTube channels in bioinformatics, open platforms for problem solving in bioinformatics, active web forums in computing analyses and online resources for learning to code or use a bioinformatics tool, the more traditional continuing education bioinformatics training programs have had to adapt. Bioinformatics training programs that solely rely on traditional didactic methods are being superseded by these newer resources. Yet such face-to-face instruction is still invaluable...
Mulder, Nicola J; Adebiyi, Ezekiel; Adebiyi, Marion; Adeyemi, Seun; Ahmed, Azza; Ahmed, Rehab; Akanle, Bola; Alibi, Mohamed; Armstrong, Don L; Aron, Shaun; Ashano, Efejiro; Baichoo, Shakuntala; Benkahla, Alia; Brown, David K; Chimusa, Emile R; Fadlelmola, Faisal M; Falola, Dare; Fatumo, Segun; Ghedira, Kais; Ghouila, Amel; Hazelhurst, Scott; Isewon, Itunuoluwa; Jung, Segun; Kassim, Samar Kamal; Kayondo, Jonathan K; Mbiyavanga, Mamana; Meintjes, Ayton; Mohammed, Somia; Mosaku, Abayomi; Moussa, Ahmed; Muhammd, Mustafa; Mungloo-Dilmohamud, Zahra; Nashiru, Oyekanmi; Odia, Trust; Okafor, Adaobi; Oladipo, Olaleye; Osamor, Victor; Oyelade, Jellili; Sadki, Khalid; Salifu, Samson Pandam; Soyemi, Jumoke; Panji, Sumir; Radouani, Fouzia; Souiai, Oussama; Tastan Bishop, Özlem
Although pockets of bioinformatics excellence have developed in Africa, generally, large-scale genomic data analysis has been limited by the availability of expertise and infrastructure. H3ABioNet, a pan-African bioinformatics network, was established to build capacity specifically to enable H3Africa (Human Heredity and Health in Africa) researchers to analyze their data in Africa. Since the inception of the H3Africa initiative, H3ABioNet's role has evolved in response to changing needs from the consortium and the African bioinformatics community. H3ABioNet set out to develop core bioinformatics infrastructure and capacity for genomics research in various aspects of data collection, transfer, storage, and analysis. Various resources have been developed to address genomic data management and analysis needs of H3Africa researchers and other scientific communities on the continent. NetMap was developed and used to build an accurate picture of network performance within Africa and between Africa and the rest of the world, and Globus Online has been rolled out to facilitate data transfer. A participant recruitment database was developed to monitor participant enrollment, and data is being harmonized through the use of ontologies and controlled vocabularies. The standardized metadata will be integrated to provide a search facility for H3Africa data and biospecimens. Because H3Africa projects are generating large-scale genomic data, facilities for analysis and interpretation are critical. H3ABioNet is implementing several data analysis platforms that provide a large range of bioinformatics tools or workflows, such as Galaxy, the Job Management System, and eBiokits. A set of reproducible, portable, and cloud-scalable pipelines to support the multiple H3Africa data types are also being developed and dockerized to enable execution on multiple computing infrastructures. In addition, new tools have been developed for analysis of the uniquely divergent African data and for
Shaikh, Faiq; Franc, Benjamin; Allen, Erastus; Sala, Evis; Awan, Omer; Hendrata, Kenneth; Halabi, Safwan; Mohiuddin, Sohaib; Malik, Sana; Hadley, Dexter; Shrestha, Rasu
Enterprise imaging has channeled various technological innovations to the field of clinical radiology, ranging from advanced imaging equipment and postacquisition iterative reconstruction tools to image analysis and computer-aided detection tools. More recently, the advancement in the field of quantitative image analysis coupled with machine learning-based data analytics, classification, and integration has ushered in the era of radiomics, a paradigm shift that holds tremendous potential in clinical decision support as well as drug discovery. However, there are important issues to consider to incorporate radiomics into a clinically applicable system and a commercially viable solution. In this two-part series, we offer insights into the development of the translational pipeline for radiomics from methodology to clinical implementation (Part 1) and from that point to enterprise development (Part 2). In Part 2 of this two-part series, we study the components of the strategy pipeline, from clinical implementation to building enterprise solutions. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Rasmussen, Kirsten Wølch; Schjoldager, Anne
...revision is not carried out by specialised revisers, but by staff translators, who revise the work of colleagues and freelancers on an ad hoc basis. Corrections are mostly given in a peer-to-peer fashion, though the work of freelancers and inexperienced in-house translators is often revised in an authoritative (non-negotiable) manner...
...grassroots activists in social movements use translation as a novel practice to debate political alternatives in the European Union's (EU) multilingual public sphere. In recent years, new cross-European protest movements have created the multilingual discursive democracy arena known as the European Social Forum (ESF) ... to the national context. In the ESF, grassroots deliberators work using a novel practice of translation that has the potential to include marginalized groups. It is, however, a distinct kind of translation that activists use. Translation, compared to EU-official practices of multilingualism, effects a change in institutionalized habits and norms of deliberation. Addressing democratic theorists, my findings suggest that translation could be a way to think about difference not as a hindrance but as a resource for democracy in linguistically heterogeneous societies and public spaces, without presupposing a shared language...
Morishima, Shigeo; Nakamura, Satoshi
We introduce a multimodal English-to-Japanese and Japanese-to-English translation system that also translates the speaker's speech motion by synchronizing it to the translated speech. This system also introduces both a face synthesis technique that can generate any viseme lip shape and a face tracking technique that can estimate the original position and rotation of a speaker's face in an image sequence. To retain the speaker's facial expression, we substitute only the speech organ's image with the synthesized one, which is made by a 3D wire-frame model that is adaptable to any speaker. Our approach provides translated image synthesis with an extremely small database. The tracking motion of the face from a video image is performed by template matching. In this system, the translation and rotation of the face are detected by using a 3D personal face model whose texture is captured from a video frame. We also propose a method to customize the personal face model by using our GUI tool. By combining these techniques and the translated voice synthesis technique, an automatic multimodal translation can be achieved that is suitable for video mail or automatic dubbing systems into other languages.
Chen, Xiaoling; Chang, Jeffrey T
Bioinformatic analyses are becoming formidably more complex due to the increasing number of steps required to process the data, as well as the proliferation of methods that can be used in each step. To alleviate this difficulty, pipelines are commonly employed. However, pipelines are typically implemented to automate a specific analysis, and thus are difficult to use for exploratory analyses requiring systematic changes to the software or parameters used. To automate the development of pipelines, we have investigated expert systems. We created the Bioinformatics ExperT SYstem (BETSY) that includes a knowledge base where the capabilities of bioinformatics software are explicitly and formally encoded. BETSY is a backwards-chaining rule-based expert system comprising a data model that can capture the richness of biological data, and an inference engine that reasons on the knowledge base to produce workflows. Currently, the knowledge base is populated with rules to analyze microarray and next generation sequencing data. We evaluated BETSY and found that it could generate workflows that reproduce and go beyond previously published bioinformatics results. Finally, a meta-investigation of the workflows generated from the knowledge base produced a quantitative measure of the technical burden imposed by each step of bioinformatics analyses, revealing the large number of steps devoted to the pre-processing of data. In sum, an expert system approach can facilitate exploratory bioinformatic analysis by automating the development of workflows, a task that requires significant domain expertise. https://github.com/jefftc/changlab. © The Author 2017. Published by Oxford University Press. All rights reserved.
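A backwards-chaining planner of the kind BETSY implements can be sketched in a few lines: start from a goal data type and recurse through rules until only available inputs remain. The rules, tool names and data types below are invented for illustration and are not BETSY's actual knowledge base:

```python
# Each rule maps a goal data type to (tool that produces it, input types it needs).
RULES = {
    "aligned_reads": ("align", ["fastq", "reference_genome"]),
    "variant_calls": ("call_variants", ["aligned_reads", "reference_genome"]),
}

def plan(goal, available, steps=None):
    """Backwards-chain from `goal`, returning an ordered list of tool invocations."""
    if steps is None:
        steps = []
    if goal in available:          # already have this data type: nothing to do
        return steps
    tool, inputs = RULES[goal]     # find the rule that produces the goal
    for inp in inputs:             # first satisfy each prerequisite
        plan(inp, available, steps)
        available.add(inp)
    if tool not in steps:
        steps.append(tool)         # then run the tool itself
    available.add(goal)
    return steps

print(plan("variant_calls", {"fastq", "reference_genome"}))
```

The real system additionally attaches attributes to data types and scores alternative workflows; this sketch only shows the chaining mechanism.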
Chen, Xiaoling; Chang, Jeffrey T.
Abstract Motivation: Bioinformatic analyses are becoming formidably more complex due to the increasing number of steps required to process the data, as well as the proliferation of methods that can be used in each step. To alleviate this difficulty, pipelines are commonly employed. However, pipelines are typically implemented to automate a specific analysis, and thus are difficult to use for exploratory analyses requiring systematic changes to the software or parameters used. Results: To automate the development of pipelines, we have investigated expert systems. We created the Bioinformatics ExperT SYstem (BETSY) that includes a knowledge base where the capabilities of bioinformatics software is explicitly and formally encoded. BETSY is a backwards-chaining rule-based expert system comprised of a data model that can capture the richness of biological data, and an inference engine that reasons on the knowledge base to produce workflows. Currently, the knowledge base is populated with rules to analyze microarray and next generation sequencing data. We evaluated BETSY and found that it could generate workflows that reproduce and go beyond previously published bioinformatics results. Finally, a meta-investigation of the workflows generated from the knowledge base produced a quantitative measure of the technical burden imposed by each step of bioinformatics analyses, revealing the large number of steps devoted to the pre-processing of data. In sum, an expert system approach can facilitate exploratory bioinformatic analysis by automating the development of workflows, a task that requires significant domain expertise. Availability and Implementation: https://github.com/jefftc/changlab Contact: email@example.com PMID:28052928
Schjoldager, Anne Gram; Christensen, Tina Paulsen
It is no exaggeration to say that the advent of translation-memory (TM) systems in the translation profession has led to drastic changes in translators' processes and workflow, and yet, though many professional translators nowadays depend on some form of TM system, this has not been the object of much research. Our paper attempts to find out what we know about the nature, applications and influences of TM technology, including translators' interaction with TMs, and also how we know it. An essential part of the analysis is based on a selection of empirical TM studies, which we assume to be representative of the research field as a whole. Our analysis suggests that, while considerable knowledge is available about the technical side of TMs, more research is needed to understand how translators interact with TM technology and how TMs influence translators' cognitive translation processes.
Bansal, Sorav; Aiken, Alex
An efficient binary translator uses peephole translation rules to directly translate executable code from one instruction set to another. In a preferred embodiment, the translation rules are generated using superoptimization techniques that enable the translator to automatically learn translation rules for translating code from the source to target instruction set architecture.
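Peephole translation of the kind this record describes can be sketched as pattern-to-replacement rewriting over short instruction sequences; the mnemonics and rules below are invented for illustration, not taken from the patent:

```python
# Each rule: (source-ISA instruction pattern) -> (target-ISA replacement).
RULES = [
    (("push r1", "pop r2"), ("mov r2, r1",)),  # stack round-trip -> register move
    (("mov r1, 0",), ("xor r1, r1",)),         # zeroing idiom
]

def translate(code):
    """Scan the instruction list, applying the first matching peephole rule."""
    out, i = [], 0
    while i < len(code):
        for pattern, replacement in RULES:
            if tuple(code[i:i + len(pattern)]) == pattern:
                out.extend(replacement)
                i += len(pattern)
                break
        else:
            out.append(code[i])  # no rule matched: copy the instruction through
            i += 1
    return out

print(translate(["push r1", "pop r2", "mov r1, 0", "add r3, r4"]))
```

In the patented approach the rule table itself is learned by superoptimization rather than written by hand; only the matching loop is sketched here.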
Weissman, Myrna M.; Brown, Alan S.; Talati, Ardesheer
Translational research generally refers to the application of knowledge generated by advances in basic sciences research translated into new approaches for diagnosis, prevention, and treatment of disease. This direction is called bench-to-bedside. Psychiatry has similarly emphasized the basic sciences as the starting point of translational research. This article introduces the term translational epidemiology for psychiatry research as a bidirectional concept in which the knowledge generated from the bedside or the population can also be translated to the benches of laboratory science. Epidemiologic studies are primarily observational but can generate representative samples, novel designs, and hypotheses that can be translated into more tractable experimental approaches in the clinical and basic sciences. This bedside-to-bench concept has not been explicated in psychiatry, although there are an increasing number of examples in the research literature. This article describes selected epidemiologic designs, providing examples and opportunities for translational research from community surveys and prospective, birth cohort, and family-based designs. Rapid developments in informatics, emphases on large sample collection for genetic and biomarker studies, and interest in personalized medicine—which requires information on relative and absolute risk factors—make this topic timely. The approach described has implications for providing fresh metaphors to communicate complex issues in interdisciplinary collaborations and for training in epidemiology and other sciences in psychiatry. PMID:21646577
The thesis focuses on two bioinformatics research topics: the development of tools for an efficient and reliable identification of single nucleotides polymorphisms (SNPs) and polymorphic simple sequence repeats (SSRs) from expressed sequence tags (ESTs) (Chapter 2, 3 and 4), and the subsequent
Biological databases are having a growth spurt. Much of this results from research in genetics and biodiversity, coupled with fast-paced developments in information technology. The revolution in bioinformatics, defined by Sugden and Pennisi (2000) as the "tools and techniques for...
An integrative bioinformatics pipeline for the genomewide identification of novel porcine microRNA genes. Wei Fang, Na Zhou, Dengyun Li, Zhigang Chen, Pengfei Jiang and Deli Zhang. J. Genet. 92, 587-593. Figure 1. Primary sequence of the predicted SSc-mir-2053 precursor and locations of some terms in the secondary ...
Lelieveld, S.H.; Veltman, J.A.; Gilissen, C.F.
With the widespread adoption of next generation sequencing technologies by the genetics community and the rapid decrease in costs per base, exome sequencing has become a standard within the repertoire of genetic experiments for both research and diagnostics. Although bioinformatics now offers
Dec 6, 2013 ... The majority of miRNAs in pig (Sus scrofa), an important domestic animal, remain unknown. From this perspective, we attempted the genomewide identification of novel porcine miRNAs. Here, we propose a novel integrative bioinformatics pipeline to identify conservative and non-conservative novel ...
Thus, there is a need for appropriate strategies for introducing the basic components of this emerging scientific field to part of the African populace through the development of an online distance education learning tool. This study involved the design of a bioinformatics online distance education tool and an implementation of ...
polymerase chain reaction (PCR), oligo hybridization and DNA sequencing. Proper primer design is one of the most important factors/steps in successful DNA sequencing. Various bioinformatics programs are available for selecting primer pairs from a template sequence. The plethora of programs for PCR primer design reflects the ...
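As an illustration of the kind of criteria primer-design programs evaluate, here is a minimal Python sketch of two standard heuristics, the Wallace-rule melting temperature and GC fraction; the function names and cutoff values are illustrative, not drawn from any specific tool mentioned here.

```python
def wallace_tm(primer: str) -> int:
    """Estimate melting temperature (deg C) with the Wallace rule:
    Tm = 2*(A+T) + 4*(G+C). A rough heuristic, reasonable only for
    short primers (under ~14 nt)."""
    p = primer.upper()
    at = p.count("A") + p.count("T")
    gc = p.count("G") + p.count("C")
    return 2 * at + 4 * gc

def gc_fraction(primer: str) -> float:
    """Fraction of G/C bases; many design tools target roughly 0.4-0.6."""
    p = primer.upper()
    return (p.count("G") + p.count("C")) / len(p) if p else 0.0

print(wallace_tm("ATGCGC"))  # 2*2 + 4*4 = 20
```

Real primer-design programs combine many more checks (self-complementarity, hairpins, nearest-neighbor thermodynamics); this shows only the simplest scoring step.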
Computational workflows in bioinformatics are becoming increasingly important in the achievement of scientific advances. These workflows typically require the integrated use of multiple, distributed data sources and analytic tools. The BioExtract Server (http://bioextract.org) is a distributed servi...
Kelley, Scott; Alger, Christianna; Deutschman, Douglas
The importance of Bioinformatics tools and methodology in modern biological research underscores the need for robust and effective courses at the college level. This paper describes such a course designed on the principles of cooperative learning based on a computer software industry production model called "Extreme Programming" (EP)…
Nielsen, Henrik; Sperotto, Maria Maddalena
An artificial neural network (ANN)-based bioinformatics approach. The ANN was trained to recognize feature-based patterns in proteins that are considered to be associated with lipid rafts. The trained ANN was then used to predict protein raftophilicity. We found that, in the case of α-helical membrane proteins, their hydrophobic length does not affect...
Ondrej, Vladan; Dvorak, Petr
Bioinformatics, biological databases, and the worldwide use of computers have accelerated biological research in many fields, such as evolutionary biology. Here, we describe a primer of nucleotide sequence management and the construction of a phylogenetic tree with two examples; the two selected are from completely different groups of organisms:…
In recent years, new bioinformatics technologies, such as gene expression microarray, genome-wide association study, proteomics, and metabolomics, have been widely used to simultaneously identify a huge number of human genomic/genetic biomarkers, generate a tremendously large amount of data, and dramatically increase the knowledge on human…
Boyle, John A.
Bioinformatics has emerged as an important research tool in recent years. The ability to mine large databases for relevant information has become increasingly central to many different aspects of biochemistry and molecular biology. It is important that undergraduates be introduced to the available information and methodologies. We present a…
Bussery, Justin; Denis, Leslie-Alexandre; Guillon, Benjamin; Liu, Pengfeï; Marchetti, Gino; Rahal, Ghita
We describe the genesis, design and evolution of a computing platform designed and built to improve the success rate of biomedical translational research. The eTRIKS project platform was developed with the aim of building a platform that can securely host heterogeneous types of data and provide an optimal environment to run tranSMART analytical applications. Many types of data can now be hosted, including multi-OMICS data, preclinical laboratory data and clinical information, including longitudinal data sets. During the last two years, the platform has matured into a robust translational research knowledge management system that is able to host other data mining applications and support the development of new analytical tools. Copyright © 2018 Elsevier Ltd. All rights reserved.
This article sets out to illustrate possible applications of electronic corpora in the translation classroom. Starting with a survey of corpus use within corpus-based translation studies, the didactic value of corpora in the translation classroom and their epistemic value in translation teaching and practice will be elaborated. A typology of…
Brazas, Michelle D; Ouellette, B F Francis
With the advent of YouTube channels in bioinformatics, open platforms for problem solving in bioinformatics, active web forums in computing analyses and online resources for learning to code or use a bioinformatics tool, the more traditional continuing education bioinformatics training programs have had to adapt. Bioinformatics training programs that solely rely on traditional didactic methods are being superseded by these newer resources. Yet such face-to-face instruction is still invaluable in the learning continuum. Bioinformatics.ca, which hosts the Canadian Bioinformatics Workshops, has blended more traditional learning styles with current online and social learning styles. Here we share our growing experiences over the past 12 years and look toward what the future holds for bioinformatics training programs.
Hiew, Hong Liang; Bellgard, Matthew
Life Science research faces the constant challenge of how to effectively handle an ever-growing body of bioinformatics software and online resources. The users and developers of bioinformatics resources have a diverse set of competing demands on how these resources need to be developed and organised. Unfortunately, there does not exist an adequate community-wide framework to integrate such competing demands. The problems that arise from this include unstructured standards development, the emergence of tools that do not meet specific needs of researchers, and often times a communications gap between those who use the tools and those who supply them. This paper presents an overview of the different functions and needs of bioinformatics stakeholders to determine what may be required in a community-wide framework. A Bioinformatics Reference Model is proposed as a basis for such a framework. The reference model outlines the functional relationship between research usage and technical aspects of bioinformatics resources. It separates important functions into multiple structured layers, clarifies how they relate to each other, and highlights the gaps that need to be addressed for progress towards a diverse, manageable, and sustainable body of resources. The relevance of this reference model to the bioscience research community, and its implications in progress for organising our bioinformatics resources, are discussed.
Cho, Kyunghyun; Esipova, Masha
We investigate the potential of attention-based neural machine translation in simultaneous translation. We introduce a novel decoding algorithm, called simultaneous greedy decoding, that allows an existing neural machine translation model to begin translating before a full source sentence is received. This approach differs from previous work on simultaneous translation in that segmentation and translation are done jointly to maximize translation quality and that translating each segmen...
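The READ/WRITE control loop underlying simultaneous decoding can be sketched in a few lines. Everything here is invented for illustration: the toy lexicon with per-word confidences stands in for an attention-based NMT model's output distribution, and the threshold rule is a simplification of the paper's actual algorithm.

```python
# Fake word-level "model": source word -> (target word, confidence).
TOY_LEXICON = {"bonjour": ("hello", 0.9), "le": ("the", 0.4), "monde": ("world", 0.95)}

def simultaneous_decode(source_tokens, threshold=0.5):
    """Interleave READ (consume a source token) and WRITE (emit a target
    token), writing greedily only once the toy model is confident enough,
    so segmentation and translation are decided jointly."""
    output, pending = [], []
    for idx, token in enumerate(source_tokens):   # READ one token at a time
        pending.append(token)
        is_last = idx == len(source_tokens) - 1
        while pending:
            word, conf = TOY_LEXICON.get(pending[0], (pending[0], 1.0))
            if conf < threshold and not is_last:
                break                              # defer: READ more context first
            output.append(word)                    # WRITE the greedy best token
            pending.pop(0)
    return output

print(simultaneous_decode(["bonjour", "le", "monde"]))  # ['hello', 'the', 'world']
```

Note how "le" is held back until more context arrives, while confident words are emitted before the full source sentence has been read.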
Poe, D.; Venkatraman, N.; Hansen, C.; Singh, G.
There is an increasing need for an effective method of teaching bioinformatics. Increased progress and availability of computer-based tools for educating students have led to the implementation of a computer-based system for teaching bioinformatics as described in this paper. Bioinformatics is a recent, hybrid field of study combining elements of…
Schönbach, Christian; Verma, Chandra; Bond, Peter J; Ranganathan, Shoba
The International Conference on Bioinformatics (InCoB) has been publishing peer-reviewed conference papers in BMC Bioinformatics since 2006. Of the 44 articles accepted for publication in supplement issues of BMC Bioinformatics, BMC Genomics, BMC Medical Genomics and BMC Systems Biology, 24 articles with a bioinformatics or systems biology focus are reviewed in this editorial. InCoB2017 is scheduled to be held in Shenzhen, China, September 20-22, 2017.
Batman, Angela M.; Miles, Michael F.
Alcohol use disorder (AUD) and its sequelae impose a major burden on the public health of the United States, and adequate long-term control of this disorder has not been achieved. Molecular and behavioral basic science research findings are providing the groundwork for understanding the mechanisms underlying AUD and have identified multiple candidate targets for ongoing clinical trials. However, the translation of basic research or clinical findings into improved therapeutic approaches for AUD must become more efficient. Translational research is a multistage process of streamlining the movement of basic biomedical research findings into clinical research and then to the clinical target populations. This process demands efficient bidirectional communication across basic, applied, and clinical science as well as with clinical practitioners. Ongoing work suggests rapid progress is being made with an evolving translational framework within the alcohol research field. This is helped by multiple interdisciplinary collaborative research structures that have been developed to advance translational work on AUD. Moreover, the integration of systems biology approaches with collaborative clinical studies may yield novel insights for future translational success. Finally, appreciation of genetic variation in pharmacological or behavioral treatment responses and optimal communication from bench to bedside and back may strengthen the success of translational research applications to AUD. PMID:26259085
Accardi, Luigi; Freudenberg, Wolfgang; Ohya, Masanori
The QP-DYN algorithms / L. Accardi, M. Regoli and M. Ohya -- Study of transcriptional regulatory network based on Cis module database / S. Akasaka ... [et al.] -- On Lie group-Lie algebra correspondences of unitary groups in finite von Neumann algebras / H. Ando, I. Ojima and Y. Matsuzawa -- On a general form of time operators of a Hamiltonian with purely discrete spectrum / A. Arai -- Quantum uncertainty and decision-making in game theory / M. Asano ... [et al.] -- New types of quantum entropies and additive information capacities / V. P. Belavkin -- Non-Markovian dynamics of quantum systems / D. Chruscinski and A. Kossakowski -- Self-collapses of quantum systems and brain activities / K.-H. Fichtner ... [et al.] -- Statistical analysis of random number generators / L. Accardi and M. Gabler -- Entangled effects of two consecutive pairs in residues and its use in alignment / T. Ham, K. Sato and M. Ohya -- The passage from digital to analogue in white noise analysis and applications / T. Hida -- Remarks on the degree of entanglement / D. Chruscinski ... [et al.] -- A completely discrete particle model derived from a stochastic partial differential equation by point systems / K.-H. Fichtner, K. Inoue and M. Ohya -- On quantum algorithm for exptime problem / S. Iriyama and M. Ohya -- On sufficient algebraic conditions for identification of quantum states / A. Jamiolkowski -- Concurrence and its estimations by entanglement witnesses / J. Jurkowski -- Classical wave model of quantum-like processing in brain / A. Khrennikov -- Entanglement mapping vs. quantum conditional probability operator / D. Chruscinski ... [et al.] -- Constructing multipartite entanglement witnesses / M. Michalski -- On Kadison-Schwarz property of quantum quadratic operators on M[symbol](C) / F. Mukhamedov and A. Abduganiev -- On phase transitions in quantum Markov chains on Cayley Tree / L. Accardi, F. Mukhamedov and M. Saburov -- Space(-time) emergence as symmetry breaking effect / I. Ojima
The human microbiome has received much attention because many studies have reported that the human gut microbiome is associated with several diseases. The very large datasets that are produced by these kinds of studies mean that bioinformatics approaches are crucial for their analysis. Here, we systematically reviewed bioinformatics tools that are commonly used in microbiome research, including a typical pipeline and software for sequence alignment, abundance profiling, enterotype determination, taxonomic diversity, identifying differentially abundant species/genes, gene cataloging, and functional analyses. We also summarized the algorithms and methods used to define metagenomic species and co-abundance gene groups to expand our understanding of unclassified and poorly understood gut microbes that are undocumented in the current genome databases. Additionally, we examined the methods used to identify metagenomic biomarkers based on the gut microbiome, which might help to expand the knowledge and approaches for disease detection and monitoring.
Gorodkin, Jan; Hofacker, Ivo L.; Ruzzo, Walter L.
RNA bioinformatics and computational RNA biology have emerged from implementing methods for predicting the secondary structure of single sequences. The field has evolved to exploit multiple sequences to take evolutionary information into account, such as compensating (and structure preserving) base changes. These methods have been developed further and applied for computational screens of genomic sequence. Furthermore, a number of additional directions have emerged. These include methods to search for RNA 3D structure, RNA-RNA interactions, and design of interfering RNAs (RNAi), as well as methods for interactions between RNA and proteins. Here, we introduce the basic concepts of predicting RNA secondary structure relevant to the further analyses of RNA sequences. We also provide pointers to methods addressing various aspects of RNA bioinformatics and computational RNA biology.
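The single-sequence secondary-structure prediction the field grew out of can be made concrete with the classic Nussinov base-pair-maximization recurrence. This is a toy model, assuming only canonical and wobble pairs and a minimum hairpin loop; production tools minimize thermodynamic free energy rather than counting pairs.

```python
def nussinov(seq: str, min_loop: int = 3) -> int:
    """Maximum number of nested complementary base pairs (Nussinov DP).
    dp[i][j] = best pair count for subsequence seq[i..j]."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
             ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):           # grow subsequence length
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                   # case: j left unpaired
            for k in range(i, j - min_loop):      # case: j pairs with k
                if (seq[k], seq[j]) in pairs:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1] if n else 0

print(nussinov("GGGAAACCC"))  # 3: a hairpin with three G-C pairs
```

The multiple-sequence and free-energy methods the abstract surveys build on exactly this kind of dynamic program, with richer scoring.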
Fang, Wai-Chi; Lue, Jaw-Chyng
A system comprising very-large-scale integrated (VLSI) circuits is being developed as a means of bioinformatics-oriented analysis and recognition of patterns of fluorescence generated in a microarray in an advanced, highly miniaturized, portable genetic-expression-assay instrument. Such an instrument implements an on-chip combination of polymerase chain reactions and electrochemical transduction for amplification and detection of deoxyribonucleic acid (DNA).
Stiglic, Gregor; Kocbek, Simon; Pernek, Igor; Kokol, Peter
Classification is an important and widely used machine learning technique in bioinformatics. Researchers and other end-users of machine learning software often prefer to work with comprehensible models where knowledge extraction and explanation of the reasoning behind the classification model are possible. This paper presents an extension to an existing machine learning environment and a study on visual tuning of decision tree classifiers. The motivation for this research comes from the need to build effective and easily interpretable decision tree models by a so-called one-button data mining approach where no parameter tuning is needed. To avoid bias in classification, no classification performance measure is used during the tuning of the model, which is constrained exclusively by the dimensions of the produced decision tree. The proposed visual tuning of decision trees was evaluated on 40 datasets containing classical machine learning problems and 31 datasets from the field of bioinformatics. Although we did not expect significant differences in classification performance, the results demonstrate a significant increase of accuracy in less complex visually tuned decision trees. In contrast to classical machine learning benchmarking datasets, we observe higher accuracy gains in bioinformatics datasets. Additionally, a user study was carried out to confirm the assumption that tree tuning times are significantly lower for the proposed method in comparison to manual tuning of the decision tree. The empirical results demonstrate that by building simple models constrained by predefined visual boundaries, one not only achieves good comprehensibility, but also very good classification performance that does not differ from the usually more complex models built using default settings of the classical decision tree algorithm. In addition, our study demonstrates the suitability of visually tuned decision trees for datasets with binary class attributes and a high number of possibly
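The central idea in the study above, constraining the size of the tree up front rather than tuning parameters for accuracy, can be sketched with a toy depth-limited decision tree on a single feature. This is a from-scratch illustration under my own simplifications (Gini splits, one feature); it is not the authors' environment or algorithm.

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels)) if n else 0.0

def best_split(xs, ys):
    """Best threshold on one feature by weighted Gini impurity (or None)."""
    best_t, best_score = None, float("inf")
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best_score:
            best_t, best_score = t, score
    return best_t

def build_tree(xs, ys, max_depth):
    """Recursively split, but stop at max_depth: the predefined 'boundary'
    that keeps the final model small enough to inspect by eye."""
    majority = max(set(ys), key=ys.count)
    t = best_split(xs, ys) if max_depth > 0 else None
    if t is None or len(set(ys)) == 1:
        return majority                          # leaf: majority class
    left = [(x, y) for x, y in zip(xs, ys) if x <= t]
    right = [(x, y) for x, y in zip(xs, ys) if x > t]
    return (t,
            build_tree([x for x, _ in left], [y for _, y in left], max_depth - 1),
            build_tree([x for x, _ in right], [y for _, y in right], max_depth - 1))

def predict(tree, x):
    """Follow thresholds down to a leaf label."""
    while isinstance(tree, tuple):
        t, l, r = tree
        tree = l if x <= t else r
    return tree
```

With `max_depth=1` the model is a single human-readable rule, which is the comprehensibility/accuracy trade-off the study measures at scale.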
Christopher L Williams
Objective: Within the information technology (IT) industry, best practices and standards are constantly evolving and being refined. In contrast, computer technology utilized within the healthcare industry often evolves at a glacial pace, with reduced opportunities for justified innovation. Although the use of timely technology refreshes within an enterprise's overall technology stack can be costly, thoughtful adoption of select technologies with a demonstrated return on investment can be very effective in increasing productivity and, at the same time, reducing the burden of maintenance often associated with older and legacy systems. In this brief technical communication, we introduce the concept of microservices as applied to the ecosystem of data analysis pipelines. Microservice architecture is a framework for dividing complex systems into easily managed parts. Each individual service is limited in functional scope, thereby conferring a higher measure of functional isolation and reliability to the collective solution. Moreover, maintenance challenges are greatly simplified by virtue of the reduced architectural complexity of each constitutive module. This fact notwithstanding, overall solutions rendered using a microservices-based approach provide equal or greater levels of functionality as compared to conventional programming approaches. Bioinformatics, with its ever-increasing demand for performance and new testing algorithms, is the perfect use-case for such a solution. Moreover, if promulgated within the greater development community as an open-source solution, such an approach holds potential to be transformative to current bioinformatics software development. Context: Bioinformatics relies on a nimble IT framework which can adapt to changing requirements. Aims: To present a well-established software design and deployment strategy as a solution for current challenges within bioinformatics. Conclusions: Use of the microservices framework
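The "limited functional scope" idea can be sketched as a single-purpose service. This is a minimal illustration using only the Python standard library; the `/gc` endpoint, port, and GC-content task are invented for the example, not taken from the communication above.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

def gc_content(seq: str) -> float:
    """Fraction of G/C bases in a DNA sequence; the one job this service does."""
    s = seq.upper()
    return (s.count("G") + s.count("C")) / len(s) if s else 0.0

class GCHandler(BaseHTTPRequestHandler):
    """Hypothetical endpoint: GET /gc?seq=ATGC -> {"seq": ..., "gc": ...}."""
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        seq = query.get("seq", [""])[0]
        body = json.dumps({"seq": seq, "gc": gc_content(seq)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

def serve(port: int = 8080):
    """Blocking entry point; a real deployment would add logging, auth,
    health checks, and containerization around this narrow core."""
    HTTPServer(("localhost", port), GCHandler).serve_forever()
```

Because the service does exactly one thing, it can be replaced, scaled, or retired without touching the rest of a pipeline, which is the maintenance argument made above.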
LANT has a suite of related Language Technology products: LANT-Master, a language checker, integrates into existing word processors like MS-Word and allows the vocabulary and style of texts to be in a controlled language which can then be automatically translated; Pangaea is an electronic dictionary that allows the.
Vandepitte, Sonia; Mousten, Birthe; Maylath, Bruce
After Kiraly (2000) introduced the collaborative form of translation in classrooms, Pavlovic (2007), Kenny (2008), and Huertas Barros (2011) provided empirical evidence that testifies to the impact of collaborative learning. This chapter sets out to describe the collaborative forms of learning...
Greene, Anna C.; Giffin, Kristine A.; Greene, Casey S.
Modern technologies are capable of generating enormous amounts of data that measure complex biological systems. Computational biologists and bioinformatics scientists are increasingly being asked to use these data to reveal key systems-level properties. We review the extent to which curricula are changing in the era of big data. We identify key competencies that scientists dealing with big data are expected to possess across fields, and we use this information to propose courses to meet these growing needs. While bioinformatics programs have traditionally trained students in data-intensive science, we identify areas of particular biological, computational and statistical emphasis important for this era that can be incorporated into existing curricula. For each area, we propose a course structured around these topics, which can be adapted in whole or in parts into existing curricula. In summary, specific challenges associated with big data provide an important opportunity to update existing curricula, but we do not foresee a wholesale redesign of bioinformatics training programs. PMID:25829469
Zhang, Zhang; Cheung, Kei-Hoi; Townsend, Jeffrey P
Enabling deft data integration from numerous, voluminous and heterogeneous data sources is a major bioinformatic challenge. Several approaches have been proposed to address this challenge, including data warehousing and federated databasing. Yet despite the rise of these approaches, integration of data from multiple sources remains problematic and toilsome. These two approaches follow a user-to-computer communication model for data exchange, and do not facilitate a broader concept of data sharing or collaboration among users. In this report, we discuss the potential of Web 2.0 technologies to transcend this model and enhance bioinformatics research. We propose a Web 2.0-based Scientific Social Community (SSC) model for the implementation of these technologies. By establishing a social, collective and collaborative platform for data creation, sharing and integration, we promote a web services-based pipeline featuring web services for computer-to-computer data exchange as users add value. This pipeline aims to simplify data integration and creation, to realize automatic analysis, and to facilitate reuse and sharing of data. SSC can foster collaboration and harness collective intelligence to create and discover new knowledge. In addition to its research potential, we also describe its potential role as an e-learning platform in education. We discuss lessons from information technology, predict the next generation of Web (Web 3.0), and describe its potential impact on the future of bioinformatics studies.
The 9th Wound Healing and Tissue Repair and Regeneration Annual Meeting of the Chinese Tissue Repair Society was held in Wuhan, China. This meeting focused on innovation, translational application, and cooperation in wound care both in China and other countries. More than 400 delegates took part in this meeting and communicated successfully. © The Author(s) 2014.
Schneider, Maria Victoria; Watson, James; Attwood, Teresa; Rother, Kristian; Budd, Aidan; McDowall, Jennifer; Via, Allegra; Fernandes, Pedro; Nyronen, Tommy; Blicher, Thomas; Jones, Phil; Blatter, Marie-Claude; De Las Rivas, Javier; Judge, David Phillip; van der Gool, Wouter; Brooksbank, Cath
As bioinformatics becomes increasingly central to research in the molecular life sciences, the need to train non-bioinformaticians to make the most of bioinformatics resources is growing. Here, we review the key challenges and pitfalls to providing effective training for users of bioinformatics services, and discuss successful training strategies shared by a diverse set of bioinformatics trainers. We also identify steps that trainers in bioinformatics could take together to advance the state of the art in current training practices. The ideas presented in this article derive from the first Trainer Networking Session held under the auspices of the EU-funded SLING Integrating Activity, which took place in November 2009.
Bansal Arvind K
Abstract The revolutionary growth in computation speed and memory storage capability has fueled a new era in the analysis of biological data. Hundreds of microbial genomes and many eukaryotic genomes, including a cleaner draft of the human genome, have been sequenced, raising the expectation of better control of microorganisms. The goals are as lofty as the development of rational drugs and antimicrobial agents, development of new enhanced bacterial strains for bioremediation and pollution control, development of better and easy-to-administer vaccines, the development of protein biomarkers for various bacterial diseases, and better understanding of host-bacteria interaction to prevent bacterial infections. In the last decade the development of many new bioinformatics techniques and integrated databases has facilitated the realization of these goals. Current research in bioinformatics can be classified into: (i) genomics – sequencing and comparative study of genomes to identify gene and genome functionality; (ii) proteomics – identification and characterization of protein-related properties and reconstruction of metabolic and regulatory pathways; (iii) cell visualization and simulation to study and model cell behavior; and (iv) application to the development of drugs and anti-microbial agents. In this article, we will focus on the techniques and their limitations in genomics and proteomics. Bioinformatics research can be classified under three major approaches: (1) analysis based upon the available experimental wet-lab data, (2) the use of mathematical modeling to derive new information, and (3) an integrated approach that combines search techniques with mathematical modeling. The major impact of bioinformatics research has been to automate genome sequencing, the development of integrated genomics and proteomics databases, genome comparisons to identify genome function, the derivation of metabolic pathways, gene
Lass, Wiebke; Reusswig, Fritz
Lost in Translation? Introducing Planetary Boundaries into Social Systems. Fritz Reusswig, Wiebke Lass Potsdam Institute for Climate Impact Research, Potsdam, Germany Identifying and quantifying planetary boundaries by interdisciplinary science efforts is a challenging task—and a risky one, as the 1972 Limits to Growth publication has shown. Even if we may be assured that scientific understanding of underlying processes of the Earth system has significantly improved since then, the challenge of translating these findings into the social systems of the planet remains crucial for any kind of action, and in many respects far more challenging. We would like to conceptualize what could also be termed a problem of coupling social and natural systems as a nested set of social translation processes, well aware of the limited applicability of the language-related translation metaphor. Societies must, first, perceive these boundaries, and they have to understand their relevance. This includes, among many other things, the organization of transdisciplinary scientific cooperation. They will then have to translate this understood perception into possible actions, i.e. strategies for different local bodies, actors, and institutional settings. This implies a lot of 'internal' translation processes, e.g. from the scientific subsystem to the mass media, the political and the economic subsystem. And it implies to develop subsystem-specific schemes of evaluation for these alternatives, e.g. convincing narratives, cost-benefit analyses, or ethical legitimacy considerations. And, finally, societies do have to translate chosen action alternatives into monitoring and evaluation schemes, e.g. for agricultural production or renewable energies. This process includes the continuation of observing and re-analyzing the planetary boundary concept itself, as a re-adjustment of these boundaries in the light of new scientific insights cannot be excluded. Taken all together, societies may well
Ivakhno, S S
The paper reviews the 12th International Conference on Intelligent Systems for Molecular Biology/Third European Conference on Computational Biology 2004, held in Glasgow, UK, during July 31-August 4. A number of talks, papers and software demos from the conference in bioinformatics, genomics, proteomics, transcriptomics and systems biology are described. Recent applications of liquid chromatography-tandem mass spectrometry, comparative genomics and DNA microarrays are given, along with a discussion of bioinformatics curricula in higher education.
Mulder, Nicola J.; Adebiyi, Ezekiel; Alami, Raouf; Benkahla, Alia; Brandful, James; Doumbia, Seydou; Everett, Dean; Fadlelmola, Faisal M.; Gaboun, Fatima; Gaseitsiwe, Simani; Ghazal, Hassan; Hazelhurst, Scott; Hide, Winston; Ibrahimi, Azeddine; Jaufeerally Fakim, Yasmina; Jongeneel, C. Victor; Joubert, Fourie; Kassim, Samar; Kayondo, Jonathan; Kumuthini, Judit; Lyantagaye, Sylvester; Makani, Julie; Mansour Alzohairy, Ahmed; Masiga, Daniel; Moussa, Ahmed; Nash, Oyekanmi; Ouwe Missi Oukem-Boyer, Odile; Owusu-Dabo, Ellis; Panji, Sumir; Patterton, Hugh; Radouani, Fouzia; Sadki, Khalid; Seghrouchni, Fouad; Tastan Bishop, Özlem; Tiffin, Nicki; Ulenga, Nzovu
The application of genomics technologies to medicine and biomedical research is increasing in popularity, made possible by new high-throughput genotyping and sequencing technologies and improved data analysis capabilities. Some of the greatest genetic diversity among humans, animals, plants, and microbiota occurs in Africa, yet genomic research outputs from the continent are limited. The Human Heredity and Health in Africa (H3Africa) initiative was established to drive the development of genomic research for human health in Africa, and through recognition of the critical role of bioinformatics in this process, spurred the establishment of H3ABioNet, a pan-African bioinformatics network for H3Africa. The limitations in bioinformatics capacity on the continent have been a major contributory factor to the lack of notable outputs in high-throughput biology research. Although pockets of high-quality bioinformatics teams have existed previously, the majority of research institutions lack experienced faculty who can train and supervise bioinformatics students. H3ABioNet aims to address this dire need, specifically in the area of human genetics and genomics, but knock-on effects are ensuring this extends to other areas of bioinformatics. Here, we describe the emergence of genomics research and the development of bioinformatics in Africa through H3ABioNet. PMID:26627985
Luscombe, N M; Greenbaum, D; Gerstein, M
The recent flood of data from genome sequences and functional genomics has given rise to a new field, bioinformatics, which combines elements of biology and computer science. Here we propose a definition for this new field and review some of the research that is being pursued, particularly in relation to transcriptional regulatory systems. Our definition is as follows: Bioinformatics is conceptualizing biology in terms of macromolecules (in the sense of physical chemistry) and then applying "informatics" techniques (derived from disciplines such as applied maths, computer science, and statistics) to understand and organize the information associated with these molecules, on a large scale. Analyses in bioinformatics predominantly focus on three types of large datasets available in molecular biology: macromolecular structures, genome sequences, and the results of functional genomics experiments (e.g. expression data). Additional information includes the text of scientific papers and "relationship data" from metabolic pathways, taxonomy trees, and protein-protein interaction networks. Bioinformatics employs a wide range of computational techniques including sequence and structural alignment, database design and data mining, macromolecular geometry, phylogenetic tree construction, prediction of protein structure and function, gene finding, and expression data clustering. The emphasis is on approaches integrating a variety of computational methods and heterogeneous data sources. Finally, bioinformatics is a practical discipline. We survey some representative applications, such as finding homologues, designing drugs, and performing large-scale censuses. Additional information pertinent to the review is available over the web at http://bioinfo.mbb.yale.edu/what-is-it.
In this thesis I focus on the application of bioinformatics to analyze RNA. The experimental data of interest are sequencing data generated with various next-generation sequencing techniques: nuclear RNA, cytoplasmic RNA, captured polyadenylated RNA fragments, etc. I highlight the necessity in
Liu, Yao-Yuan; Harbison, SallyAnn
Short tandem repeats, single nucleotide polymorphisms, and whole mitochondrial analyses are three classes of markers which will play an important role in the future of forensic DNA typing. The arrival of massively parallel sequencing platforms in forensic science reveals new information such as insights into the complexity and variability of the markers that were previously unseen, along with amounts of data too immense for analyses by manual means. Along with the sequencing chemistries employed, bioinformatic methods are required to process and interpret this new and extensive data. As more is learnt about the use of these new technologies for forensic applications, development and standardization of efficient, favourable tools for each stage of data processing is being carried out, and faster, more accurate methods that improve on the original approaches have been developed. As forensic laboratories search for the optimal pipeline of tools, sequencer manufacturers have incorporated pipelines into sequencer software to make analyses convenient. This review explores the current state of bioinformatic methods and tools used for the analyses of forensic markers sequenced on the massively parallel sequencing (MPS) platforms currently most widely used. Copyright © 2017 Elsevier B.V. All rights reserved.
Background: Bioinformatics is commonly featured as a well-assorted list of available web resources. Although diversity of services is positive in general, the proliferation of tools, their dispersion, and their heterogeneity complicate the integrated exploitation of such data processing capacity. Results: To facilitate the construction of software clients and make integrated use of this variety of tools, we present a modular programmatic application interface (MAPI) that provides the necessary functionality for uniform representation of Web Services metadata descriptors, including the management and invocation protocols of the services they represent. This document describes the main functionality of the framework and how it can be used to facilitate the deployment of new software under a unified structure of bioinformatics Web Services. A notable feature of MAPI is the modular organization of the functionality into different modules associated with specific tasks. This means that only the modules needed for the client have to be installed, and that the module functionality can be extended without the need for re-writing the software client. Conclusions: The potential utility and versatility of the software library has been demonstrated by the implementation of several currently available clients that cover different aspects of integrated data processing, ranging from service discovery to service invocation with advanced features such as workflow composition and asynchronous service calls to multiple types of Web Services, including those registered in repositories (e.g., GRID-based, SOAP, BioMOBY, R-bioconductor, and others).
Yuen Macaire MS
Background: We present a biological data warehouse called Atlas that locally stores and integrates biological sequences, molecular interactions, homology information, functional annotations of genes, and biological ontologies. The goal of the system is to provide data, as well as a software infrastructure, for bioinformatics research and development. Description: The Atlas system is based on relational data models that we developed for each of the source data types. Data stored within these relational models are managed through Structured Query Language (SQL) calls that are implemented in a set of Application Programming Interfaces (APIs). The APIs include three languages: C++, Java, and Perl. The methods in these API libraries are used to construct a set of loader applications, which parse and load the source datasets into the Atlas database, and a set of toolbox applications which facilitate data retrieval. Atlas stores and integrates local instances of GenBank, RefSeq, UniProt, Human Protein Reference Database (HPRD), Biomolecular Interaction Network Database (BIND), Database of Interacting Proteins (DIP), Molecular Interactions Database (MINT), IntAct, NCBI Taxonomy, Gene Ontology (GO), Online Mendelian Inheritance in Man (OMIM), LocusLink, Entrez Gene and HomoloGene. The retrieval APIs and toolbox applications are critical components that offer end-users flexible, easy, integrated access to this data. We present use cases that use Atlas to integrate these sources for genome annotation, inference of molecular interactions across species, and gene-disease associations. Conclusion: The Atlas biological data warehouse serves as data infrastructure for bioinformatics research and development. It forms the backbone of the research activities in our laboratory and facilitates the integration of disparate, heterogeneous biological sources of data enabling new scientific inferences. Atlas achieves integration of diverse data sets at two levels. First
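The warehouse pattern described above, relational tables queried through a small retrieval API, can be sketched with Python's built-in sqlite3 module; the schema, identifiers, and data below are invented for illustration and are not Atlas's actual models.

```python
# Sketch of a relational warehouse with a toolbox-style retrieval function.
# Schema and rows are hypothetical; Atlas uses far richer models and C++/Java/Perl APIs.
import sqlite3

def build_demo_db():
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE gene (id TEXT PRIMARY KEY, symbol TEXT)")
    db.execute("CREATE TABLE interaction (a TEXT, b TEXT)")
    db.executemany("INSERT INTO gene VALUES (?, ?)",
                   [("g1", "TP53"), ("g2", "MDM2")])
    db.execute("INSERT INTO interaction VALUES ('g1', 'g2')")
    return db

def partners(db, symbol):
    # Retrieval API: join genes through the interaction table.
    query = """SELECT g2.symbol
               FROM gene g1
               JOIN interaction i ON i.a = g1.id
               JOIN gene g2 ON g2.id = i.b
               WHERE g1.symbol = ?"""
    return [row[0] for row in db.execute(query, (symbol,))]
```

The point of the design is that loaders populate the tables once, while many lightweight retrieval functions like `partners` serve downstream analyses.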
Morales, Hernán F; Giovambattista, Guillermo
We have developed BioSmalltalk, a new environment system for pure object-oriented bioinformatics programming. Adaptive end-user programming systems tend to become more important for discovering biological knowledge, as is demonstrated by the emergence of open-source programming toolkits for bioinformatics in the past years. Our software is intended to bridge the gap between bioscientists and rapid software prototyping while preserving the possibility of scaling to whole-system biology applications. BioSmalltalk performs better in terms of execution time and memory usage than Biopython and BioPerl for some classical situations. BioSmalltalk is cross-platform and freely available (MIT license) through Google Project Hosting at http://code.google.com/p/biosmalltalk. Supplementary data are available at Bioinformatics online.
Oluwagbemi, Olugbenga Oluseun
New scientific research fields evolve on a yearly basis, but awareness of them remains limited in some parts of the African continent. Thus arises the need for a suitable implementation strategy for introducing the basic components of an emerging scientific field to part of the African populace through the development of an online distance education learning tool. This emerging field is known as bioinformatics. This research work was instrumental in elucidating the need for a suitable implementation platform for bioinformatics education in parts of the African continent that are less aware of this innovative and interesting field. The aim of this research work was to disseminate the basic knowledge and applications of bioinformatics to these parts of the African continent.
Allergies and/or food intolerances are a growing problem of the modern world. Difficulties associated with the correct diagnosis of food allergies result in the need to classify the factors causing allergies and the allergens themselves. Therefore, internet databases and other bioinformatic tools play a special role in deepening knowledge of biologically important compounds. Internet repositories, as a source of information on different chemical compounds, including those related to allergy and intolerance, are increasingly being used by scientists. Bioinformatic methods play a significant role in biological and medical sciences, and their importance in food science is increasing. This study aimed at presenting selected databases and tools of bioinformatic analysis useful in research on food allergies, allergens (11 databases), epitopes (7 databases), and haptens (2 databases). It also presents examples of the application of computer methods in studies related to allergies.
Kamlet, Adam S.; Neumann, Constanze N.; Lee, Eunsung; Carlin, Stephen M.; Moseley, Christian K.; Stephenson, Nickeisha; Hooker, Jacob M.; Ritter, Tobias
New chemistry methods for the synthesis of radiolabeled small molecules have the potential to impact clinical positron emission tomography (PET) imaging, if they can be successfully translated. However, progression of modern reactions from the stage of synthetic chemistry development to the preparation of radiotracer doses ready for use in human PET imaging is challenging and rare. Here we describe the process of and the successful translation of a modern palladium-mediated fluorination reaction to non-human primate (NHP) baboon PET imaging–an important milestone on the path to human PET imaging. The method, which transforms [18F]fluoride into an electrophilic fluorination reagent, provides access to aryl–18F bonds that would be challenging to synthesize via conventional radiochemistry methods. PMID:23554994
Award Number: W81XWH-14-1-0603. Title: Development of a PET Prostate-Specific Membrane Antigen Imaging Agent: Preclinical Translation for Future...
Schaeffer, Moritz; Dragsted, Barbara; Hvelplund, Kristian Tangsgaard
This study reports on an investigation into the relationship between the number of translation alternatives for a single word and eye movements on the source text. In addition, the effect of word order differences between source and target text on eye movements on the source text is studied. In p...
Olwig, Mette Fog
This article contributes to the growing scholarship on local development practitioners by re-examining conceptualizations of practitioners as 'brokers' strategically translating between 'travelling' (development institution) rationalities and 'placed' (recipient area) rationalities in relation… and practice spurred by new challenges deriving from climate change anxiety, the study shows how local practitioners often make local activities fit into travelling development rationalities as a matter of habit, rather than as a conscious strategy. They may therefore cease to 'translate' between different… rationalities. This is shown to have important implications for theory, research and practice concerning disaster risk reduction and climate change adaptation in which such translation is often expected.
Karen Jean Day
Background: New Zealand is becoming more ethnically diverse, with rising numbers of people with limited English language proficiency. Consequently, hospital interactions are increasing where patients have insufficient English to communicate adequately with doctors or nurses for appropriate, effective, and safe care. Translation technology is rapidly evolving, but evidence is limited regarding its usefulness to clinicians. Objective: To examine the acceptability to doctors and nurses of a translation application (app) used on a tablet in brief interactions with Korean patients. Method: An app was developed to facilitate brief conversations between patients and clinicians as part of clinical care. We used the Technology Acceptance Model 2 to develop semi-structured interview questions for 15 junior and senior doctors and nurses in an urban hospital. Participants used the app to interact with the interviewer as part of a scenario. The interviews were analysed thematically. Results: The app was easy to use, learn to use, and to memorise for future use. It was considered useful for everyday brief interactions, urgent situations where there is no time to call an interpreter, and after hours, to augment the work of interpreters. Subject to perceived usefulness, there appears to be little need for social normalisation of a translation app, other than management support for the costs, maintenance, and implementation of the app for everyday use. Conclusion: Guidelines are required for the use of a translation app by doctors and nurses to augment the interpreter role. A larger study and future research on the patient's perspective are required.
Shi, Lizhen [Florida State Univ., Tallahassee, FL (United States); Wang, Zhong [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Yu, Weikuan [Florida State Univ., Tallahassee, FL (United States); Meng, Xiandong [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
The combination of the Hadoop MapReduce programming model and cloud computing allows biological scientists to analyze next-generation sequencing (NGS) data in a timely and cost-effective manner. Cloud computing platforms remove the burden of IT facility procurement and management from end users and provide ease of access to Hadoop clusters. However, biological scientists are still expected to choose appropriate Hadoop parameters for running their jobs. More importantly, the available Hadoop tuning guidelines are either obsolete or too general to capture the particular characteristics of bioinformatics applications. In this paper, we aim to minimize the cloud computing cost spent on bioinformatics data analysis by optimizing the extracted significant Hadoop parameters. When using MapReduce-based bioinformatics tools in the cloud, the default settings often lead to resource underutilization and wasteful expenses. We choose k-mer counting, a representative application used in a large number of NGS data analysis tools, as our study case. Experimental results show that, with the fine-tuned parameters, we achieve a total of 4× speedup compared with the original performance (using the default settings). Finally, this paper presents an exemplary case for tuning MapReduce-based bioinformatics applications in the cloud, and documents the key parameters that could lead to significant performance benefits.
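The k-mer counting workload that the abstract above uses as its study case can be illustrated with a minimal in-memory map/shuffle/reduce sketch; this is purely illustrative of the MapReduce pattern, while the paper's actual experiments run on tuned Hadoop clusters.

```python
# Toy map/shuffle/reduce k-mer count, mirroring the MapReduce pattern the
# abstract's Hadoop tuning targets. Runs in one process for illustration.
from collections import Counter
from itertools import chain

def map_kmers(read, k):
    """Map step: emit every overlapping k-mer of a single read."""
    return [read[i:i + k] for i in range(len(read) - k + 1)]

def count_kmers(reads, k):
    """Shuffle + reduce: group identical k-mers and sum their counts."""
    return Counter(chain.from_iterable(map_kmers(r, k) for r in reads))
```

In a real Hadoop job the map step runs per input split and the shuffle distributes identical k-mer keys to reducers; parameters such as split size and reducer count are among those worth tuning.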
Accardi, L.; Freudenberg, Wolfgang; Ohya, Masanori
The problem of quantum-like representation in economy, cognitive science, and genetics / L. Accardi, A. Khrennikov and M. Ohya -- Chaotic behavior observed in linear dynamics / M. Asano, T. Yamamoto and Y. Togawa -- Complete m-level quantum teleportation based on Kossakowski-Ohya scheme / M. Asano, M. Ohya and Y. Tanaka -- Towards quantum cybernetics: optimal feedback control in quantum bio-informatics / V. P. Belavkin -- Quantum entanglement and circulant states / D. Chruściński -- The compound Fock space and its application in brain models / K.-H. Fichtner and W. Freudenberg -- Characterisation of beam splitters / L. Fichtner and M. Gäbler -- Application of entropic chaos degree to a combined quantum baker's map / K. Inoue, M. Ohya and I. V. Volovich -- On quantum algorithm for multiple alignment of amino acid sequences / S. Iriyama and M. Ohya -- Quantum-like models for decision making in psychology and cognitive science / A. Khrennikov -- On completely positive non-Markovian evolution of a d-level system / A. Kossakowski and R. Rebolledo -- Measures of entanglement - a Hilbert space approach / W. A. Majewski -- Some characterizations of PPT states and their relation / T. Matsuoka -- On the dynamics of entanglement and characterization of entangling properties of quantum evolutions / M. Michalski -- Perspective from micro-macro duality - towards non-perturbative renormalization scheme / I. Ojima -- A simple symmetric algorithm using a likeness with introns behavior in RNA sequences / M. Regoli -- Some aspects of quadratic generalized white noise functionals / Si Si and T. Hida -- Analysis of several social mobility data using measure of departure from symmetry / K. Tahata ... [et al.] -- Time in physics and life science / I. V. Volovich -- Note on entropies in quantum processes / N. Watanabe -- Basics of molecular simulation and its application to biomolecules / T. Ando and I. Yamato -- Theory of proton-induced superionic conduction in hydrogen-bonded systems
The Case of Translating "Al-Afaal-e naqesa". Ali Saidavi; Mansooreh Zarkoob. Abstract: In the textbooks of translation used in B.A. courses of Arabic language and literature, prescriptive methods of teaching translation and of presenting strategies for the translation of syntactic structures have been adopted. In the present study, the methods for translating al-afaal-e naqesa presented in some of these textbooks are investigated and compared with the work of several translators. The results indicated that there are mostly discrepancies between the approaches adopted by translation textbook writers and those of translators. The results also indicated that the strategies adopted in these textbooks were not applicable and that the prescriptive method of teaching translation was not viable. Finally, a method of teaching translation to B.A. students is presented which seems more viable and applicable. Key words: translation, translation textbooks, translation of al-afaal-e naqesa, translation and syntax, prescriptive method
Sollberger, David; Greenhalgh, Stewart A.; Schmelzbach, Cedric; Van Renterghem, Cédéric; Robertsson, Johan O. A.
We provide a six-component (6-C) polarization model for P-, SV-, SH-, Rayleigh-, and Love-waves both inside an elastic medium as well as at the free surface. It is shown that single-station 6-C data comprised of three components of rotational motion and three components of translational motion provide the opportunity to unambiguously identify the wave type, propagation direction, and local P- and S-wave velocities at the receiver location by use of polarization analysis. To extract such information by conventional processing of three-component (3-C) translational data would require large and dense receiver arrays. The additional rotational components allow the extension of the rank of the coherency matrix used for polarization analysis. This enables us to accurately determine the wave type and wave parameters (propagation direction and velocity) of seismic phases, even if more than one wave is present in the analysis time window. This is not possible with standard, pure-translational 3-C recordings. In order to identify modes of vibration and to extract the accompanying wave parameters, we adapt the multiple signal classification algorithm (MUSIC). Due to the strong nonlinearity of the MUSIC estimator function, it can be used to detect the presence of specific wave types within the analysis time window at very high resolution. We show how the extracted wavefield properties can be used, in a fully automated way, to separate the wavefield into its different wave modes using only a single 6-C recording station. As an example, we apply the method to remove surface wave energy while preserving the underlying reflection signal and to suppress energy originating from undesired directions, such as side-scattered waves.
Evolution has shaped life forms for billions of years. Domestication is an accelerated process that can be used as a model for evolutionary change. The aim of this thesis project has been to carry out extensive bioinformatic analyses of whole-genome sequencing data to reveal SNPs, InDels and selective sweeps in the chicken, pig and dog genomes. Pig genome sequencing revealed loci under selection for elongation of the back and increased number of vertebrae, associated with NR6A1, PLAG1,...
Karimzadeh, Mehran; Hoffman, Michael M
Investing in documenting your bioinformatics software well can increase its impact and save you time. To maximize the effectiveness of your documentation, we suggest following the guidelines we propose here. We recommend providing multiple avenues for users to use your research software, including a navigable HTML interface with a quick start, useful help messages with detailed explanations, and thorough examples for each feature of your software. By following these guidelines, you can ensure that your hard work maximally benefits yourself and others. © The Author 2017. Published by Oxford University Press.
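A minimal sketch of the "useful help message" guideline, using Python's standard argparse module; the tool name, flags, and defaults below are hypothetical and are not taken from the paper.

```python
# Sketch: a CLI whose --help output carries a description, per-flag
# explanations, and a worked example, as the guidelines recommend.
import argparse

def build_parser():
    parser = argparse.ArgumentParser(
        prog="mytool",  # hypothetical tool name
        description="Annotate variants in a VCF file (illustrative example).",
        epilog="Example: mytool --input calls.vcf --min-qual 30",
    )
    parser.add_argument("--input", required=True,
                        help="path to the input VCF file")
    parser.add_argument("--min-qual", type=int, default=20,
                        help="minimum variant quality to keep (default: 20)")
    return parser
```

Running `mytool --help` then prints the description, one explained line per flag with its default, and a copy-pastable example, covering three of the recommended avenues at once.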
The general audience for these lectures is mainly physicists, computer scientists, engineers or the general public wanting to know more about what’s going on in the biosciences. What’s bioinformatics, and why is all this fuss being made about it? What’s this revolution triggered by the human genome project? Are there any results yet? What are the problems? What new avenues of research have been opened up? What about the technology? These new developments will be compared with what happened at CERN earlier in its evolution, and it is hoped that the similarities and contrasts will stimulate new curiosity and provoke new thoughts.
Lue, Jaw-Chyng L.; Fang, Wai-Chi
A microsystem architecture for real-time, on-site, robust bioinformatic pattern recognition and analysis has been proposed. This system is compatible with on-chip DNA analysis means such as polymerase chain reaction (PCR) amplification. A corresponding novel artificial neural network (ANN) learning algorithm, using a new sigmoid-logarithmic transfer function based on the error backpropagation (EBP) algorithm, is invented. Our results show that the trained new ANN can recognize low-fluorescence patterns better than the conventional sigmoidal ANN does. A differential logarithmic imaging chip is designed for calculating the logarithm of the relative intensities of fluorescence signals. The single-rail logarithmic circuit and a prototype ANN chip are designed, fabricated and characterized.
Errea, L.F.; Gomez-Llorente, J.M.; Mendez, L.; Riera, A.
An illustration is reported on the use of the Euclidean norm as a criterion of the quality of translation factors in the molecular model of atomic collisions. The relation between our norm and the deviation vector of Chang and Rapp (J. Chem. Phys. 59, 572 (1973)), and the computational simplicity of the calculation and minimization of the former quantity, are very appealing features of our approach. To show how the norm method can be applied, the He²⁺ + H(1s) → He⁺(2s,2p) + H⁺ reaction is treated.
He, Yongqun; Xiang, Zuoshuang
Brucella spp. are Gram-negative, facultative intracellular bacteria that cause brucellosis, one of the commonest zoonotic diseases found worldwide in humans and a variety of animal species. While several animal vaccines are available, there is no effective and safe vaccine for prevention of brucellosis in humans. VIOLIN (http://www.violinet.org) is a web-based vaccine database and analysis system that curates, stores, and analyzes published data of commercialized vaccines, and vaccines in clinical trials or in research. VIOLIN contains information for 454 vaccines or vaccine candidates for 73 pathogens. VIOLIN also contains many bioinformatics tools for vaccine data analysis, data integration, and vaccine target prediction. To demonstrate the applicability of VIOLIN for vaccine research, VIOLIN was used for bioinformatics analysis of existing Brucella vaccines and prediction of new Brucella vaccine targets. VIOLIN contains many literature mining programs (e.g., Vaxmesh) that provide in-depth analysis of Brucella vaccine literature. As a result of manual literature curation, VIOLIN contains information for 38 Brucella vaccines or vaccine candidates, 14 protective Brucella antigens, and 68 host response studies to Brucella vaccines from 97 peer-reviewed articles. These Brucella vaccines are classified in the Vaccine Ontology (VO) system and used for different ontological applications. The web-based VIOLIN vaccine target prediction program Vaxign was used to predict new Brucella vaccine targets. Vaxign identified 14 outer membrane proteins that are conserved in six virulent strains from B. abortus, B. melitensis, and B. suis that are pathogenic in humans. Of the 14 membrane proteins, two proteins (Omp2b and Omp31-1) are not present in B. ovis, a Brucella species that is not pathogenic in humans. Brucella vaccine data stored in VIOLIN were compared and analyzed using the VIOLIN query system. Bioinformatics curation and ontological representation of Brucella vaccines
Huynh, Tien; Rigoutsos, Isidore; Parida, Laxmi; Platt, Daniel; Shibuya, Tetsuo
We herein present and discuss the services and content which are available on the web server of IBM's Bioinformatics and Pattern Discovery group. The server is operational around the clock and provides access to a variety of methods that have been published by the group's members and collaborators. The available tools correspond to applications ranging from the discovery of patterns in streams of events and the computation of multiple sequence alignments, to the discovery of genes in nucleic acid sequences and the interactive annotation of amino acid sequences. Additionally, annotations for more than 70 archaeal, bacterial, eukaryotic and viral genomes are available on-line and can be searched interactively. The tools and code bundles can be accessed beginning at http://cbcsrv.watson.ibm.com/Tspd.html whereas the genomics annotations are available at http://cbcsrv.watson.ibm.com/Annotations/.
Sequence alignment algorithms such as the Smith-Waterman algorithm are among the most important applications in the development of bioinformatics. Sequence alignment algorithms must process large amounts of data, which may take a long time. Here, we introduce our Adaptive Hybrid Multiprocessor technique to accelerate the implementation of the Smith-Waterman algorithm. Our technique utilizes both the graphics processing unit (GPU) and the central processing unit (CPU). It adapts the implementation according to the number of CPUs given as input by efficiently distributing the workload between the processing units. Using existing resources (GPU and CPU) in an efficient way is a novel approach. The peak performance achieved for the platforms GPU + CPU, GPU + 2CPUs, and GPU + 3CPUs is 10.4 GCUPS, 13.7 GCUPS, and 18.6 GCUPS, respectively (with a query length of 511 amino acids). © 2010 IEEE.
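For reference, the scoring recurrence of the Smith-Waterman algorithm that the abstract above accelerates can be written as a plain single-threaded sketch with a linear gap penalty; the GPU/CPU workload distribution itself, which is the paper's contribution, is not shown.

```python
# Minimal Smith-Waterman local alignment score with a linear gap penalty.
# Purely illustrative; real implementations add traceback and affine gaps.
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment: cell scores never drop below zero.
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best
```

Each cell of H depends only on its three upper-left neighbours, which is why anti-diagonals can be computed in parallel; GCUPS (giga cell updates per second) measures exactly how fast these H-cells are filled.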
Colucci, S; Donini, F M; Di Sciascio, E
Comparison of resources is a frequent task in different bio-informatics applications, including drug-target interaction, drug repositioning and mechanism of action understanding, among others. This paper proposes a general method for the logical comparison of resources modeled in Resource Description Framework and shows its distinguishing features with reference to the comparison of drugs. In particular, the method returns a description of the commonalities between resources, rather than a numerical value estimating their similarity and/or relatedness. The approach is domain-independent and may be flexibly adapted to heterogeneous use cases, according to a process for setting parameters which is completely explicit. The paper also presents an experiment using the dataset Bioportal as knowledge source; the experiment is fully reproducible, thanks to the elicitation of criteria and values for parameter customization. Copyright © 2017 Elsevier Inc. All rights reserved.
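The idea above of returning a description of commonalities rather than a similarity score can be illustrated with a toy triple store; the triples and predicate names below are invented, and the paper's actual method operates over full RDF with explicit parameters, which this sketch does not reproduce.

```python
# Toy "commonality description": the (predicate, object) pairs shared by two
# resources, instead of a single numeric similarity value.
def commonalities(triples, r1, r2):
    def describe(resource):
        # The set of (predicate, object) pairs asserted about a resource.
        return {(p, o) for s, p, o in triples if s == resource}
    return describe(r1) & describe(r2)
```

The returned set is itself a readable explanation ("both drugs target COX1 and are NSAIDs"), whereas a similarity score would collapse that information into one number.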
The reinforcement learning techniques developed at Ames Research Center are being applied to proximity and docking operations using the Shuttle and Solar Maximum Mission (SMM) satellite simulation. In utilizing these fuzzy learning techniques, we also use the Approximate Reasoning based Intelligent Control (ARIC) architecture, and so we use the two terms interchangeably. This activity is carried out in the Software Technology Laboratory utilizing the Orbital Operations Simulator (OOS). This report is deliverable D3 in our project activity and provides the test results of the fuzzy learning translational controller. This report is organized in six sections. Based on our experience and analysis with the attitude controller, we have modified the basic configuration of the reinforcement learning algorithm in ARIC as described in section 2. The shuttle translational controller and its implementation in the fuzzy learning architecture are described in section 3. Two test cases that we have performed are described in section 4. Our results and conclusions are discussed in section 5, and section 6 provides future plans and a summary for the project.
The dichotomy of function and content words has long, at least since Fries (1952), positioned the preposition as a subject of marginal interest in linguistic studies from the perspective of both the formal and the functional schools of linguistics. Such studies, where they exist, have generally resulted merely in descriptions of its function and position. Yet, in English for instance, function words are not stressed in utterances and are therefore considered to play a minor and unimportant role in conveying messages in communication. The paper does not discuss all types of preposition but focuses its discussion on the spatial preposition. It discusses (i) what cognitive aspects drive and motivate the emergence of the lexical meaning of spatial prepositions, (ii) how English and Bahasa Indonesia differ and overlap in their use of spatial prepositions, and (iii) how TEFL and the teaching of translation can take advantage of the answer to the second question. The first question forms the theoretical foundation of the discussion, based on the Cognitive Linguistics perspective. The second question addresses the differences and similarities of spatial prepositions in English and Bahasa Indonesia on that theoretical foundation. The third question relates to how TEFL and the teaching of translation can benefit from this comparative study of English and Indonesian spatial prepositions. Finally, the discussion also shows that the lexical meaning of spatial prepositions demonstrates how language, culture, and mind are intertwined.
With the development of the Internet and the growth of online resources, bioinformatics training for wet-lab biologists became necessary as a part of their education. This article describes a one-semester course 'Applied Bioinformatics Course' (ABC, http://abc.cbi.pku.edu.cn/) that the author has been teaching to biological graduate students at the Peking University and the Chinese Academy of Agricultural Sciences for the past 13 years. ABC is a hands-on practical course that teaches students to use online bioinformatics resources to solve biological problems related to their ongoing research projects in molecular biology. After a brief introduction to the background of the course, detailed information about the teaching strategies of the course is outlined in the 'How to teach' section. The contents of the course are briefly described in the 'What to teach' section with some real examples. The author wishes to share his teaching experiences and the online teaching materials with colleagues working in bioinformatics education in both local and international universities. © The Author 2013. Published by Oxford University Press.
Magana, Alejandra J.; Taleyarkhan, Manaz; Alvarado, Daniela Rivera; Kane, Michael; Springer, John; Clase, Kari
Bioinformatics education can be broadly defined as the teaching and learning of the use of computer and information technology, along with mathematical and statistical analysis for gathering, storing, analyzing, interpreting, and integrating data to solve biological problems. The recent surge of genomics, proteomics, and structural biology in the…
The automatic classification of GPCRs by bioinformatics methodology can provide functional information for new GPCRs across the whole 'GPCR proteome', and this information is important for the development of novel drugs. Since the GPCR proteome is classified hierarchically, general approaches to GPCR function prediction are based on hierarchical classification. Various computational tools have been developed to predict GPCR functions; those tools use not simple sequence searches but more powerful methods, such as alignment-free methods, statistical model methods, and machine learning methods used in protein sequence analysis, based on learning datasets. The first stage of hierarchical function prediction involves the discrimination of GPCRs from non-GPCRs, and the second stage involves the classification of the predicted GPCR candidates into family, subfamily, and sub-subfamily levels. Further classification is then performed according to protein-protein interaction type: binding G-protein type, oligomerization partner type, etc. Those methods have achieved predictive accuracies of around 90%. Finally, future research directions for the bioinformatic prediction of GPCR function are described.
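The two-stage scheme described above (first GPCR vs. non-GPCR, then family level) can be sketched with an alignment-free k-mer profile and a nearest-reference rule. This is a toy illustration with made-up sequences and labels, not any of the published predictors:

```python
from collections import Counter

def kmer_profile(seq, k=2):
    """Alignment-free feature vector: normalized k-mer frequencies."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {kmer: n / total for kmer, n in counts.items()}

def sq_distance(p, q):
    """Squared Euclidean distance between two sparse frequency vectors."""
    return sum((p.get(m, 0.0) - q.get(m, 0.0)) ** 2 for m in set(p) | set(q))

def nearest(profile, references):
    """Assign the label of the closest reference profile."""
    return min(references, key=lambda label: sq_distance(profile, references[label]))

# Toy reference profiles (hypothetical sequences, for illustration only).
stage1 = {  # stage 1: discriminate GPCRs from non-GPCRs
    "GPCR": kmer_profile("MNGTEGLNFYVPFSN"),
    "non-GPCR": kmer_profile("MKKLLPTAAAGLLLL"),
}
stage2 = {  # stage 2: family level, applied only to predicted GPCRs
    "rhodopsin-like": kmer_profile("MNGTEGLNFYVPFSN"),
    "secretin-like": kmer_profile("MNHTEGAWWVPQSSN"),
}

query = "MNGTEGLNFYVPFSA"
label = nearest(kmer_profile(query), stage1)
if label == "GPCR":
    label = nearest(kmer_profile(query), stage2)
print(label)
```

Real tools replace the nearest-reference rule with statistical models or trained classifiers (e.g. SVMs or HMMs) at each level of the hierarchy, but the cascade structure is the same.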
This study focuses on strategies and statistical considerations for the assessment of translation in language (e.g. translation of case report forms in multinational clinical trials), information (e.g. translation of basic discoveries to the clinic) and technology (e.g. translation of Chinese diagnostic techniques to well-established clinical study endpoints) in pharmaceutical/clinical research and development. However, most of our efforts are directed to statistical considerations for translation in information. Translational medicine has been defined as bench-to-bedside research, where a basic laboratory discovery becomes applicable to the diagnosis, treatment or prevention of a specific disease, and is brought forth either by a physician-scientist who works at the interface between the research laboratory and patient care, or by a team of basic and clinical science investigators. Statistics plays an important role in translational medicine in ensuring that the translational process is accurate and reliable with certain statistical assurance. Statistical inference for the applicability of an animal model to a human model is also discussed. Strategies for the selection of clinical study endpoints (e.g. absolute changes, relative changes, or responder-defined endpoints based on either absolute or relative change) are reviewed.
Taboada, Eduardo N; Graham, Morag R; Carriço, João A; Van Domselaar, Gary
Public health labs and food regulatory agencies globally are embracing whole genome sequencing (WGS) as a revolutionary new method that is positioned to replace numerous existing diagnostic and microbial typing technologies with a single new target: the microbial draft genome. The ability to cheaply generate large amounts of microbial genome sequence data, combined with emerging policies of food regulatory and public health institutions making their microbial sequences increasingly available and public, has served to open up the field to the general scientific community. This open data access policy shift has resulted in a proliferation of data being deposited into sequence repositories and of novel bioinformatics software designed to analyze these vast datasets. There also has been a more recent drive for improved data sharing to achieve more effective global surveillance, public health and food safety. Such developments have heightened the need for enhanced analytical systems in order to process and interpret this new type of data in a timely fashion. In this review we outline the emergence of genomics, bioinformatics and open data in the context of food safety. We also survey major efforts to translate genomics and bioinformatics technologies out of the research lab and into routine use in modern food safety labs. We conclude by discussing the challenges and opportunities that remain, including those expected to play a major role in the future of food safety science.
Suh, K. Stephen; Sarojini, Sreeja; Youssif, Maher; Nalley, Kip; Milinovikj, Natasha; Elloumi, Fathi; Russell, Steven; Pecora, Andrew; Schecter, Elyssa; Goy, Andre
Personalized medicine promises patient-tailored treatments that enhance patient care and decrease overall treatment costs by focusing on genetics and “-omics” data obtained from patient biospecimens and records to guide therapy choices that generate good clinical outcomes. The approach relies on diagnostic and prognostic use of novel biomarkers discovered through combinations of tissue banking, bioinformatics, and electronic medical records (EMRs). The analytical power of bioinformatic platforms combined with patient clinical data from EMRs can reveal potential biomarkers and clinical phenotypes that allow researchers to develop experimental strategies using selected patient biospecimens stored in tissue banks. For cancer, high-quality biospecimens collected at diagnosis, first relapse, and various treatment stages provide crucial resources for study designs. To enlarge biospecimen collections, patient education regarding the value of specimen donation is vital. One approach for increasing consent is to offer publicly available illustrations and game-like engagements demonstrating how wider sample availability facilitates development of novel therapies. The critical value of tissue bank samples, bioinformatics, and EMR in the early stages of the biomarker discovery process for personalized medicine is often overlooked. The data obtained also require cross-disciplinary collaborations to translate experimental results into clinical practice and diagnostic and prognostic use in personalized medicine. PMID:23818899
N-glycosylation is one of the most important post-translational modifications that influence protein polymorphism, including protein structures and their functions. Although this important biological process has been extensively studied in mammals, only limited knowledge exists regarding glycosylation in algae. The current research is focused on the red microalga Porphyridium sp., which is a potentially valuable source for various applications, such as skin therapy, food, and pharmaceuticals. The enzymes involved in the biosynthesis and processing of N-glycans remain undefined in this species, and the mechanism(s) of their genetic regulation is completely unknown. In this study, we describe our pioneering attempt to understand the endoplasmic reticulum (ER) N-glycosylation pathway in Porphyridium sp. using a bioinformatic approach. Homology searches, based on sequence similarities with genes encoding proteins involved in the ER N-glycosylation pathway (including their conserved parts), were conducted using the TBLASTN function on the algal DNA scaffold contigs database. This approach led to the identification of 24 encoded genes implicated in the ER N-glycosylation pathway in Porphyridium sp. Homologs were found for almost all known N-glycosylation protein sequences in the ER pathway of Porphyridium sp., suggesting that the ER pathway is conserved, as it is in other organisms (animals, plants, yeasts, etc.).
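TBLASTN, the search strategy named above, compares a protein query against all six reading-frame translations of a DNA database. A self-contained sketch of that core idea follows (a real search would of course use BLAST itself; the codon table is the standard genetic code in the conventional TCAG ordering, and the DNA fragment is a made-up example):

```python
BASES = "TCAG"
# Standard genetic code in TCAG order; '*' marks a stop codon.
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {a + b + c: AMINO[16 * i + 4 * j + k]
               for i, a in enumerate(BASES)
               for j, b in enumerate(BASES)
               for k, c in enumerate(BASES)}

def revcomp(dna):
    """Reverse complement of a DNA string."""
    return dna.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def translate(dna):
    """Translate a DNA string codon by codon, ignoring a trailing remainder."""
    return "".join(CODON_TABLE[dna[i:i + 3]] for i in range(0, len(dna) - 2, 3))

def six_frame_hits(dna, protein_query):
    """Return the frame indices (0-5) whose translation contains the query."""
    frames = [translate(strand[offset:])
              for strand in (dna, revcomp(dna))
              for offset in range(3)]
    return [f for f, aa in enumerate(frames) if protein_query in aa]

# A DNA fragment encoding the peptide MKV on the forward strand, frame 0.
print(six_frame_hits("ATGAAAGTT", "MKV"))
```

BLAST adds seeded heuristics, scoring matrices and statistics on top of this; the sketch only shows why a protein query can be matched against unannotated DNA scaffolds at all.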
The ICNP BaT has been developed as a web application to support the collaborative translation of different versions of the ICNP into different languages. A prototype of a web service is described that could reuse the translations in the database of the ICNP BaT to provide automatic translations of nursing content based on the ICNP terminology globally. The translation web service is based on a service-oriented architecture making it easy to interoperate with different applications. Such a global translation server would free individual institutions from the maintenance costs of realizing their own translation services.
Yalcin, Dicle; Hakguder, Zeynep M; Otu, Hasan H
Individual cells within the same population show various degrees of heterogeneity, which may be better handled with single-cell analysis to address biological and clinical questions. Single-cell analysis is especially important in developmental biology as subtle spatial and temporal differences in cells have significant associations with cell fate decisions during differentiation and with the description of a particular state of a cell exhibiting an aberrant phenotype. Biotechnological advances, especially in the area of microfluidics, have led to robust, massively parallel and multi-dimensional capture, sorting, and lysis of single cells and amplification of related macromolecules, which have enabled the use of imaging and omics techniques on single cells. There have been improvements in computational single-cell image analysis in developmental biology regarding feature extraction, segmentation, image enhancement and machine learning, handling limitations of optical resolution to gain new perspectives from the raw microscopy images. Omics approaches, such as transcriptomics, genomics and epigenomics, targeting gene and small RNA expression, single nucleotide and structural variations and methylation and histone modifications, rely heavily on high-throughput sequencing technologies. Although there are well-established bioinformatics methods for analysis of sequence data, there are limited bioinformatics approaches which address experimental design, sample size considerations, amplification bias, normalization, differential expression, coverage, clustering and classification issues, specifically applied at the single-cell level. In this review, we summarize biological and technological advancements, discuss challenges faced in the aforementioned data acquisition and analysis issues and present future prospects for application of single-cell analyses to developmental biology. © The Author 2015. Published by Oxford University Press on behalf of the European
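One of the single-cell analysis issues listed above, correcting amplification and sequencing-depth bias before clustering, can be illustrated with the common counts-per-million plus log transform. This is a generic sketch, not any specific published pipeline:

```python
import math

def cpm_log_normalize(counts):
    """Scale one cell's gene counts to counts-per-million, then apply log1p,
    so cells sequenced at very different depths become comparable."""
    total = sum(counts)
    return [math.log1p(1e6 * c / total) for c in counts]

# Two hypothetical cells with identical expression proportions
# but a 10x difference in sequencing depth.
cell_a = cpm_log_normalize([10, 20, 30])
cell_b = cpm_log_normalize([100, 200, 300])
# After normalization the depth difference is gone and the profiles match,
# so downstream clustering groups the cells by biology, not by depth.
```

Single-cell data add further complications (dropouts, zero inflation) that dedicated methods model explicitly; depth normalization is only the first step.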
japonicus (Lotus), Vaccinium corymbosum (blueberry), Stegodyphus mimosarum (spider) and Trifolium occidentale (clover). From a bioinformatics data analysis perspective, my work can be divided into three parts: genome annotation, small RNA, and gene expression analysis. Lotus is a legume of significant … biology and genetics studies. We present an improved Lotus genome assembly and annotation, a catalog of natural variation based on re-sequencing of 29 accessions, and describe the involvement of small RNAs in the plant-bacteria symbiosis. Blueberries contain anthocyanins, other pigments and various … polyphenolic compounds, which have been linked to protection against diabetes, cardiovascular disease and age-related cognitive decline. We present the first genome-guided approach in blueberry to identify genes involved in the synthesis of health-protective compounds. Using RNA-Seq data from five stages …
ACADEMIC TRAINING LECTURE SERIES 27, 28 February 1, 2, 3 March 2006 from 11:00 to 12:00 - Auditorium, bldg. 500 Decoding the Genome A special series of 5 lectures on: Recent extraordinary advances in the life sciences arising through new detection technologies and bioinformatics The past five years have seen an extraordinary change in the information and tools available in the life sciences. The sequencing of the human genome, the discovery that we possess far fewer genes than foreseen, the measurement of the tiny changes in the genomes that differentiate us, the sequencing of the genomes of many pathogens that lead to diseases such as malaria are all examples of completely new information that is now available in the quest for improved healthcare. New tools have allowed similar strides in the discovery of the associated protein structures, providing invaluable information for those searching for new drugs. New DNA microarray chips permit simultaneous measurement of the state of expression of tens...
Wang, Xiran; Jiang, Leiyu; Tang, Haoru
GSTF12 is known as a key factor in proanthocyanin accumulation in the plant testa. Bioinformatic analysis of the nucleotide and encoded protein sequences of GSTF12 is advantageous for the study of genes related to the anthocyanin biosynthesis and accumulation pathway. We therefore chose the GSTF12 genes of 11 species, downloaded their nucleotide and protein sequences from NCBI as the research object, examined the strawberry GSTF12 gene through bioinformatic analysis, and constructed a phylogenetic tree. At the same time, we analysed the physical and chemical properties of the strawberry GSTF12 gene and the structure of its protein. The phylogenetic tree showed that strawberry and petunia were the closest relatives. By protein prediction, we found that the protein possesses a signal peptide without obvious transmembrane regions.
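Physicochemical characterization of the kind mentioned (hydropathy, signal peptides, transmembrane regions) typically starts from per-residue scales. A sketch computing the GRAVY score with the standard Kyte-Doolittle hydropathy values follows; the example peptides are made up, and real analyses would use a tool such as ProtParam:

```python
# Kyte-Doolittle hydropathy scale (standard published values).
KD = {"I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "C": 2.5, "M": 1.9, "A": 1.8,
      "G": -0.4, "T": -0.7, "S": -0.8, "W": -0.9, "Y": -1.3, "P": -1.6,
      "H": -3.2, "E": -3.5, "Q": -3.5, "D": -3.5, "N": -3.5, "K": -3.9,
      "R": -4.5}

def gravy(protein):
    """Grand average of hydropathy: mean Kyte-Doolittle value per residue.
    Positive scores suggest hydrophobic stretches, e.g. candidate
    transmembrane regions; negative scores suggest soluble regions."""
    return sum(KD[aa] for aa in protein) / len(protein)

print(round(gravy("IVL"), 3))   # strongly hydrophobic tripeptide
print(round(gravy("DEKR"), 3))  # strongly hydrophilic, charged residues
```

Signal peptide and transmembrane predictors (SignalP, TMHMM and successors) build far richer models, but windowed hydropathy averages of exactly this kind were the historical starting point.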
Guingab-Cagmat, J.D.; Cagmat, E.B.; Hayes, R.L.; Anagli, J.
Traumatic brain injury (TBI) is a major medical crisis without any FDA-approved pharmacological therapies that have been demonstrated to improve functional outcomes. It has been argued that discovery of disease-relevant biomarkers might help to guide successful clinical trials for TBI. Major advances in mass spectrometry (MS) have revolutionized the field of proteomic biomarker discovery and facilitated the identification of several candidate markers that are being further evaluated for their efficacy as TBI biomarkers. However, several hurdles have to be overcome even during the discovery phase which is only the first step in the long process of biomarker development. The high-throughput nature of MS-based proteomic experiments generates a massive amount of mass spectral data presenting great challenges in downstream interpretation. Currently, different bioinformatics platforms are available for functional analysis and data mining of MS-generated proteomic data. These tools provide a way to convert data sets to biologically interpretable results and functional outcomes. A strategy that has promise in advancing biomarker development involves the triad of proteomics, bioinformatics, and systems biology. In this review, a brief overview of how bioinformatics and systems biology tools analyze, transform, and interpret complex MS datasets into biologically relevant results is discussed. In addition, challenges and limitations of proteomics, bioinformatics, and systems biology in TBI biomarker discovery are presented. A brief survey of researches that utilized these three overlapping disciplines in TBI biomarker discovery is also presented. Finally, examples of TBI biomarkers and their applications are discussed. PMID:23750150
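At the base of every MS-driven proteomics pipeline sits the arithmetic relating a peptide sequence to the masses observed in a spectrum. A minimal sketch using standard monoisotopic residue masses (the constants are the usual published values; the peptide is an arbitrary example, not a TBI biomarker):

```python
# Monoisotopic residue masses in daltons (standard values).
RESIDUE = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
           "V": 99.06841, "T": 101.04768, "C": 103.00919, "L": 113.08406,
           "I": 113.08406, "N": 114.04293, "D": 115.02694, "Q": 128.05858,
           "K": 128.09496, "E": 129.04259, "M": 131.04049, "H": 137.05891,
           "F": 147.06841, "R": 156.10111, "Y": 163.06333, "W": 186.07931}
WATER = 18.010565   # mass of H2O added on peptide-bond hydrolysis
PROTON = 1.007276   # mass of a proton, for charged-ion m/z

def peptide_mass(seq):
    """Neutral monoisotopic mass: sum of residue masses plus one water."""
    return sum(RESIDUE[aa] for aa in seq) + WATER

def mz(seq, charge):
    """m/z of the [M + zH]z+ ion, as observed by the mass spectrometer."""
    return (peptide_mass(seq) + charge * PROTON) / charge

print(round(peptide_mass("PEPTIDE"), 4))
print(round(mz("PEPTIDE", 2), 4))
```

Search engines such as Mascot or Sequest extend this arithmetic to fragment-ion series and score theoretical spectra against the measured ones; the downstream bioinformatics platforms discussed above then interpret the resulting identifications.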
Beck, Tim N; Chikwem, Adaeze J; Solanki, Nehal R; Golemis, Erica A
Bioinformatic approaches are intended to provide systems level insight into the complex biological processes that underlie serious diseases such as cancer. In this review we describe current bioinformatic resources, and illustrate how they have been used to study a clinically important example: epithelial-to-mesenchymal transition (EMT) in lung cancer. Lung cancer is the leading cause of cancer-related deaths and is often diagnosed at advanced stages, leading to limited therapeutic success. While EMT is essential during development and wound healing, pathological reactivation of this program by cancer cells contributes to metastasis and drug resistance, both major causes of death from lung cancer. Challenges of studying EMT include its transient nature, its molecular and phenotypic heterogeneity, and the complicated networks of rewired signaling cascades. Given the biology of lung cancer and the role of EMT, it is critical to better align the two in order to advance the impact of precision oncology. This task relies heavily on the application of bioinformatic resources. Besides summarizing recent work in this area, we use four EMT-associated genes, TGF-β (TGFB1), NEDD9/HEF1, β-catenin (CTNNB1) and E-cadherin (CDH1), as exemplars to demonstrate the current capacities and limitations of probing bioinformatic resources to inform hypothesis-driven studies with therapeutic goals. Copyright © 2014 the American Physiological Society.
J. Köster (Johannes)
We present Rust-Bio, the first general purpose bioinformatics library for the innovative Rust programming language. Rust-Bio leverages the unique combination of speed, memory safety and high-level syntax offered by Rust to provide a fast and safe set of bioinformatics algorithms and data
The main bottleneck in advancing genomics in present times is the lack of expertise in using bioinformatics tools and approaches for data mining in raw DNA sequences generated by modern high throughput technologies such as next generation sequencing. Although bioinformatics has been making major progress and ...
Life sciences research and development has opened up new challenges and opportunities for bioinformatics. Advances in bioinformatics made possible the mapping of the entire human genome and the genomes of many other organisms in just over a decade. These discoveries, along with current efforts to ...
Buttigieg, Pier Luigi
Using live presentation to communicate the interdisciplinary and abstract content of bioinformatics to its educationally diverse studentship is a sizeable challenge. This review collects a number of perspectives on multimedia presentation, visual communication and pedagogy. The aim is to encourage educators to reflect on the great potential of live presentation in facilitating bioinformatics education.
Howard, David R.; Miskowski, Jennifer A.; Grunwald, Sandra K.; Abler, Michael L.
At the University of Wisconsin-La Crosse, we have undertaken a program to integrate the study of bioinformatics across the undergraduate life science curricula. Our efforts have included incorporating bioinformatics exercises into courses in the biology, microbiology, and chemistry departments, as well as coordinating the efforts of faculty within…
When bioinformatics education is considered, several issues are addressed. At the undergraduate level, the main issue revolves around conveying information from two main and different fields: biology and computer science. At the graduate level, the main issue is bridging the gap between biology students and computer science students. However, there is an educational component that is rarely addressed within the context of bioinformatics education: the ethics component. Here, a different perspective is provided on bioinformatics education, and the current status of ethics is analyzed within the existing bioinformatics programs. Analysis of the existing undergraduate and graduate programs, in both Europe and the United States, reveals the minimal attention given to ethics within bioinformatics education. Given that bioinformaticians speedily and effectively shape the biomedical sciences and hence their implications for society, here redesigning of the bioinformatics curricula is suggested in order to integrate the necessary ethics education. Unique ethical problems awaiting bioinformaticians and bioinformatics ethics as a separate field of study are discussed. In addition, a template for an "Ethics in Bioinformatics" course is provided.
Nowadays the accurate translation of legal texts has become highly important as the mistranslation of a passage in a contract, for example, could lead to lawsuits and loss of money. Consequently, the translation of legal texts to other languages faces many difficulties and only professional translators specialised in legal translation should deal with the translation of legal documents and scholarly writings. The purpose of this paper is to analyze translation from three perspectives: translation quality, errors and difficulties encountered in translating legal texts and consequences of such errors in professional translation. First of all, the paper points out the importance of performing a good and correct translation, which is one of the most important elements to be considered when discussing translation. Furthermore, the paper presents an overview of the errors and difficulties in translating texts and of the consequences of errors in professional translation, with applications to the field of law. The paper is also an approach to the differences between languages (English and Romanian) that can hinder comprehension for those who have embarked upon the difficult task of translation. The research method that I have used to achieve the objectives of the paper was the content analysis of various Romanian and foreign authors' works.
Yu, Guangchuang; Wang, Li-Gen; Meng, Xiao-Hua; He, Qing-Yu
Recent advances in high-throughput technologies dramatically increase biological data generation. However, many research groups lack computing facilities and specialists. This is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Unlike most of the existing live Linux distributions for bioinformatics limiting their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high performance computing. LXtoo aims to provide well-supported computing environment tailored for bioinformatics research, reducing duplication of efforts in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo.
Background Teaching bioinformatics at universities is complicated by typical computer classroom settings. As well as running software locally and online, students should gain experience of systems administration. For a future career in biology or bioinformatics, the installation of software is a useful skill. We propose that this may be taught by running the course on GNU/Linux running on inexpensive Raspberry Pi computer hardware, for which students may be granted full administrator access. Results We release 4273π, an operating system image for Raspberry Pi based on Raspbian Linux. This includes minor customisations for classroom use and includes our Open Access bioinformatics course, 4273π Bioinformatics for Biologists. This is based on the final-year undergraduate module BL4273, run on Raspberry Pi computers at the University of St Andrews, Semester 1, academic year 2012–2013. Conclusions 4273π is a means to teach bioinformatics, including systems administration tasks, to undergraduates at low cost. PMID:23937194
Barker, Daniel; Ferrier, David Ek; Holland, Peter Wh; Mitchell, John Bo; Plaisier, Heleen; Ritchie, Michael G; Smart, Steven D
Daniel, C; Choquet, R
To summarize excellent current research in the field of Bioinformatics and Translational Informatics with application in the health domain and clinical care. We provide a synopsis of the articles selected for the IMIA Yearbook 2015, from which we attempt to derive a synthetic overview of current and future activities in the field. As in the previous year, a first selection step was performed by querying MEDLINE with a list of MeSH descriptors completed by a list of terms adapted to the section. Each section editor evaluated the set of 1,594 articles separately, and the evaluation results were merged to retain 15 articles for peer review. The selection and evaluation process of this Yearbook's section on Bioinformatics and Translational Informatics yielded four excellent articles regarding data management and genome medicine that are mainly tool-based papers. In the first article, the authors present PPISURV, a tool for uncovering the role of specific genes in cancer survival outcome. The second article describes the classifier PredictSNP, which combines six performing tools for predicting disease-related mutations. In the third article, by presenting a high-coverage map of the human proteome using high-resolution mass spectrometry, the authors highlight the need for using mass spectrometry to complement genome annotation. The fourth article is also related to patient survival and decision support; the authors present data-mining methods for large-scale datasets of past transplants, with the objective of identifying chances of survival. The current research activities still attest to the continuous convergence of Bioinformatics and Medical Informatics, with a focus this year on dedicated tools and methods to advance clinical care. Indeed, there is a need for powerful tools for managing and interpreting complex, large-scale genomic and biological datasets, but also a need for user-friendly tools developed for the clinicians in their daily practice. All the recent research and
Background: The Berlin Questionnaire (BQ), an English-language screening tool for obstructive sleep apnea (OSA) in primary care, has been applied in tertiary settings with variable results. Aims: Development of a Portuguese version of the BQ and evaluation of its utility in a sleep disordered breathing clinic (SDBC). Material and methods: The BQ was translated using back-translation methodology and applied prospectively, prior to the cardiorespiratory sleep study, to 95 consecutive subjects referred to an SDBC with suspected OSA. OSA risk assessment was based on responses to 10 items organized in 3 categories: snoring and witnessed apneas (category 1), daytime sleepiness (category 2), and high blood pressure (HBP)/obesity (category 3). Results: In the studied sample, 67.4% were males, with a mean age of 51 ± 13 years. Categories 1, 2 and 3 were positive in 91.6%, 24.2% and 66.3%, respectively. The BQ identified 68.4% of the patients as being at high risk for OSA and the remaining 31.6% as at low risk. BQ sensitivity and specificity were 72.1% and 50%, respectively, for an apnea-hypopnea index (AHI) > 5; 82.6% and 44.8% for AHI > 15; and 88.4% and 39.1% for AHI > 30. Being in the high-risk group for OSA did not significantly influence the probability of having the disease (positive likelihood ratio [LR] between 1.44 and 1.49). Only the items related to snoring loudness, witnessed apneas and HBP/obesity presented a statistically significant positive association with AHI, with the model constituted by their combination presenting greater discriminative capability, especially for an AHI > 5 (sensitivity 65.2%, specificity 80%, positive LR 3.26). Conclusions: The BQ is not an appropriate screening tool for OSA in an SDBC, although snoring loudness, witnessed apneas and HBP/obesity were shown to be significant questionnaire elements in this population.
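The reported likelihood ratios follow directly from the sensitivity and specificity figures via LR+ = sensitivity / (1 - specificity). A small calculation with the values given for each AHI threshold:

```python
def positive_lr(sensitivity, specificity):
    """Positive likelihood ratio: how much a positive screen
    raises the odds of actually having the disease."""
    return sensitivity / (1.0 - specificity)

# Sensitivity/specificity reported for each apnea-hypopnea index cut-off.
thresholds = {
    "AHI > 5": (0.721, 0.500),
    "AHI > 15": (0.826, 0.448),
    "AHI > 30": (0.884, 0.391),
}
for name, (sens, spec) in thresholds.items():
    print(name, round(positive_lr(sens, spec), 2))
```

Values this close to 1 mean a positive screen barely shifts the pre-test odds, which is exactly the basis for the authors' conclusion that the BQ performs poorly in this referral population.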
Full Text Available Performing bioinformatics experiments involves intensive access to distributed services and information resources over the Internet. Although existing tools facilitate the implementation of workflow-oriented applications, they lack the capability to integrate services beyond small-scale applications, particularly services with heterogeneous interaction patterns and at larger scale. This is especially required to enable large-scale distributed processing of the biological data generated by massive sequencing technologies. Such integration mechanisms are, on the other hand, provided by middleware products such as Enterprise Service Buses (ESBs), which enable the integration of distributed systems following a Service-Oriented Architecture. This paper proposes an integration platform, based on enterprise middleware, to integrate bioinformatics services. It presents a multi-level reference architecture and focuses on ESB-based mechanisms that provide asynchronous communication, event-based interaction, and data transformation capabilities. The paper also presents a formal specification of the platform using the Event-B model.
Manning, Timmy; Sleator, Roy D; Walsh, Paul
Artificial neural networks (ANNs) are a class of powerful machine learning models for classification and function approximation which have analogs in nature. An ANN learns to map stimuli to responses through repeated evaluation of exemplars of the mapping. This learning approach results in networks which are recognized for their noise tolerance and ability to generalize meaningful responses for novel stimuli. It is these properties of ANNs which make them appealing for applications to bioinformatics problems where interpretation of data may not always be obvious, and where the domain knowledge required for deductive techniques is incomplete or can cause a combinatorial explosion of rules. In this paper, we provide an introduction to artificial neural network theory and review some interesting recent applications to bioinformatics problems.
Near-infrared light is already successfully used for a variety of applications in medical health care, such as pulse oximetry, optical coherence tomography, and near-infrared fluorescence. This thesis examines the potential of near-infrared light, as used in near-infrared spectroscopy (NIRS), in the detection of (patho)physiological
Alina Buzarna-Tihenea (Galbeaza)
Full Text Available The aim of this paper is to provide an analysis of the textile industry vocabulary, in order to highlight the variety of the terms used to describe the respective field and to emphasize the difficulties that hinder the translation of a specialized text. Firstly, this paper briefly tackles several general elements related to the translation process, such as the definition of translation, the difference between general and specialized translation, and translation methods and techniques. The second part of our study focuses on the difficulties triggered by specialized translation in the field of textile manufacture and industry. For the purpose of our analysis, we tackled the issues raised by the application of several direct translation techniques, such as borrowing, calque, and literal translation, described in the first part of the study.
Pawełkowicz, Magdalena E.; Skarzyńska, Agnieszka; Posyniak, Kacper; Ziąbska, Karolina; Pląder, Wojciech; Przybecki, Zbigniew
An important computational challenge is finding regulatory elements across the promoter region. In this work we present the advantages and disadvantages of applying different bioinformatics programs for the localization of transcription factor binding sites in the upstream regions of genes connected with sex determination in cucumber. We used PlantCARE, PlantPAN, and SignalScan to find motifs in the promoter regions. The results were compared, and the possible functions of selected motifs are described.
Pawełkowicz, Magdalena; Nowak, Robert; Osipowski, Paweł; Rymuszka, Jacek; Świerkula, Katarzyna; Wojcieszek, Michał; Przybecki, Zbigniew
A major focus of any sequencing project is to identify genes in genomes. However, it is necessary to define the variety of genes and the criteria for identifying them. In this work we present the discrepancies and dependencies arising from the application of different bioinformatics programs for structural annotation, performed on the cucumber data set from the Polish Consortium of Cucumber Genome Sequencing. We used Fgenesh, GenScan, and GeneMark for automated structural annotation, and the results were compared to the reference annotation.
Schjoldager, Anne; Christensen, Tina Paulsen; Flanagan, Marian
Due to the growing uptake of translation technology in the language industry and its documented impact on the translation profession, translation students and scholars need in-depth and empirically founded knowledge of the nature and influences of translation technology (e.g. Christensen/Schjoldager 2010, 2011; Christensen 2011). Unfortunately, the increasing professional use of translation technology has not been mirrored within translation studies (TS) by a similar increase in research projects on translation technology (Munday 2009: 15; O'Hagan 2013; Doherty 2016: 952). … technology research as a subdiscipline of TS, and we define and discuss some basic concepts and models of the field that we use in the rest of the paper. Based on a small-scale study of papers published in TS journals between 2006 and 2016, Section 3 attempts to map relevant developments of translation… The current thematic…
Daniel P. Faith
Full Text Available Biodiversity conservation addresses information challenges through estimations encapsulated in measures of diversity. A quantitative measure of phylogenetic diversity, "PD", has been defined as the minimum total length of all the phylogenetic branches required to span a given set of taxa on the phylogenetic tree (Faith 1992a). While a recent paper incorrectly characterizes PD as not including information about deeper phylogenetic branches, PD applications over the past decade document the proper incorporation of shared deep branches when assessing the total PD of a set of taxa. Current PD applications to macroinvertebrate taxa in streams of New South Wales, Australia, illustrate the practical importance of this definition. Phylogenetic lineages, often corresponding to new, "cryptic" taxa, are restricted to a small number of stream localities. A recent case of human impact causing loss of taxa in one locality implies a higher PD value for another locality, because it now uniquely represents a deeper branch. This molecular-based phylogenetic pattern supports the use of DNA barcoding programs for biodiversity conservation planning. Here, PD assessments side-step the contentious use of barcoding-based "species" designations. Bioinformatics challenges include combining different phylogenetic evidence, optimization problems for conservation planning, and effective integration of phylogenetic information with environmental and socio-economic data.
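Faith's PD, as defined in this abstract, is the total length of the branches needed to span a set of taxa, with shared deep branches counted exactly once. The accounting can be sketched in a few lines; the toy tree, taxon names, and branch lengths below are invented for illustration, and the sketch uses a rooted convention that includes each taxon's path to the root.

```python
# Sketch of Faith's (1992) phylogenetic diversity (PD) on a toy rooted tree.
# parent[child] gives the parent node; branch_len[child] is the length of
# the branch above `child`. The root "R" has no entry in PARENT.
PARENT = {"x": "A", "y": "A", "A": "R", "z": "R"}
BRANCH_LEN = {"x": 2.0, "y": 3.0, "A": 1.0, "z": 4.0}

def phylogenetic_diversity(taxa, parent=PARENT, branch_len=BRANCH_LEN):
    """Total length of the branches spanning `taxa` (rooted convention)."""
    spanning = set()
    for taxon in taxa:
        node = taxon
        while node in parent:   # walk up to the root
            spanning.add(node)  # record the branch above `node` once
            node = parent[node]
    return sum(branch_len[b] for b in spanning)

print(phylogenetic_diversity({"x", "y"}))       # 2 + 3 + 1 = 6.0
print(phylogenetic_diversity({"x", "y", "z"}))  # whole tree: 10.0
```

Because the branches are collected into a set, the deep branch above "A" shared by taxa "x" and "y" contributes its length only once, which is exactly the deep-branch behavior the abstract defends.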
Pabinger, Stephan; Rader, Robert; Agren, Rasmus; Nielsen, Jens; Trajanoski, Zlatko
Recent advances in genomic sequencing have enabled the use of genome sequencing in standard biological and biotechnological research projects. The challenge is how to integrate the large amount of data in order to gain novel biological insights. One way to leverage sequence data is to use genome-scale metabolic models. We have therefore designed and implemented a bioinformatics platform which supports the development of such metabolic models. MEMOSys (MEtabolic MOdel research and development System) is a versatile platform for the management, storage, and development of genome-scale metabolic models. It supports the development of new models by providing a built-in version control system which offers access to the complete developmental history. Moreover, the integrated web board, the authorization system, and the definition of user roles allow collaborations across departments and institutions. Research on existing models is facilitated by a search system, references to external databases, and a feature-rich comparison mechanism. MEMOSys provides customizable data exchange mechanisms using the SBML format to enable analysis in external tools. The web application is based on the Java EE framework and offers an intuitive user interface. It currently contains six annotated microbial metabolic models. We have developed a web-based system designed to provide researchers a novel application facilitating the management and development of metabolic models. The system is freely available at http://www.icbi.at/MEMOSys.
Brazas, Michelle D; Ouellette, B F Francis
Bioinformatics.ca has been hosting continuing education programs in introductory and advanced bioinformatics topics in Canada since 1999 and has trained more than 2,000 participants to date. These workshops have been adapted over the years to keep pace with advances in both science and technology as well as the changing landscape in available learning modalities and the bioinformatics training needs of our audience. Post-workshop surveys have been a mandatory component of each workshop and are used to ensure appropriate adjustments are made to workshops to maximize learning. However, neither bioinformatics.ca nor others offering similar training programs have explored the long-term impact of bioinformatics continuing education training. Bioinformatics.ca recently initiated a look back on the impact its workshops have had on the career trajectories, research outcomes, publications, and collaborations of its participants. Using an anonymous online survey, bioinformatics.ca analyzed responses from those surveyed and discovered its workshops have had a positive impact on collaborations, research, publications, and career progression.
Izak, Dariusz; Klim, Joanna; Kaczanowski, Szymon
Malaria remains one of the highest mortality infectious diseases. Malaria is caused by parasites from the genus Plasmodium. Most deaths are caused by infections involving Plasmodium falciparum, which has a complex life cycle. Malaria parasites are extremely well adapted for interactions with their host and their host's immune system and are able to suppress the human immune system, erase immunological memory and rapidly alter exposed antigens. Owing to this rapid evolution, parasites develop drug resistance and express novel forms of antigenic proteins that are not recognized by the host immune system. There is an emerging need for novel interventions, including novel drugs and vaccines. Designing novel therapies requires knowledge about host-parasite interactions, which is still limited. However, significant progress has recently been achieved in this field through the application of bioinformatics analysis of parasite genome sequences. In this review, we describe the main achievements in 'malarial' bioinformatics and provide examples of successful applications of protein sequence analysis. These examples include the prediction of protein functions based on homology and the prediction of protein surface localization via domain and motif analysis. Additionally, we describe PlasmoDB, a database that stores accumulated experimental data. This tool allows data mining of the stored information and will play an important role in the development of malaria science. Finally, we illustrate the application of bioinformatics in the development of population genetics research on malaria parasites, an approach referred to as reverse ecology.
Ison, Jon; Kalaš, Matúš; Jonassen, Inge; Bolser, Dan; Uludag, Mahmut; McWilliam, Hamish; Malone, James; Lopez, Rodrigo; Pettifer, Steve; Rice, Peter
Motivation: Advancing the search, publication and integration of bioinformatics tools and resources demands consistent machine-understandable descriptions. A comprehensive ontology allowing such descriptions is therefore required. Results: EDAM is an ontology of bioinformatics operations (tool or workflow functions), types of data and identifiers, application domains and data formats. EDAM supports semantic annotation of diverse entities such as Web services, databases, programmatic libraries, standalone tools, interactive applications, data schemas, datasets and publications within bioinformatics. EDAM applies to organizing and finding suitable tools and data and to automating their integration into complex applications or workflows. It includes over 2200 defined concepts and has successfully been used for annotations and implementations. Availability: The latest stable version of EDAM is available in OWL format from http://edamontology.org/EDAM.owl and in OBO format from http://edamontology.org/EDAM.obo. It can be viewed online at the NCBO BioPortal and the EBI Ontology Lookup Service. For documentation and license please refer to http://edamontology.org. This article describes version 1.2 available at http://edamontology.org/EDAM_1.2.owl. Contact: email@example.com PMID:23479348
This dissertation is a comparative study of "translation theory" and "translation studies" in China and the West. Its focus is to investigate whether there is translation theory in the Chinese tradition. My study begins with an examination of the debate in China over whether there has already existed a system of translation…
Keerthikumar, Shivakumar; Gangoda, Lahiru; Gho, Yong Song; Mathivanan, Suresh
Extracellular vesicles (EVs) are a class of membranous vesicles that are released by multiple cell types into the extracellular environment. This unique class of extracellular organelles, which play a pivotal role in intercellular communication, is conserved across prokaryotes and eukaryotes. Depending upon the cell of origin and its functional state, the molecular cargo within the EVs, including proteins, lipids, and RNA, is modulated. Owing to this, EVs are considered a subrepertoire of the host cell and rich reservoirs of disease biomarkers. In addition, the availability of EVs in multiple bodily fluids, including blood, has created significant interest in biomarker and signaling research. With advancements in high-throughput techniques, multiple EV studies have embarked on profiling the molecular cargo. To benefit the scientific community, existing free Web-based resources, including ExoCarta, EVpedia, and Vesiclepedia, catalog multiple datasets. These resources aid in elucidating the molecular mechanisms and pathophysiology underlying the different disease conditions from which EVs are isolated. Here, the existing bioinformatics tools for performing integrated analyses to identify key functional components in EV datasets are discussed.
Basyuni, M.; Wasilah, M.; Sumardi
This study describes bioinformatics methods used to analyze eight actin genes from mangrove plants deposited in DDBJ/EMBL/GenBank, and to predict their structure, composition, subcellular localization, similarity, and phylogeny. The physical and chemical properties of the eight mangrove genes varied among the genes. The percentages of secondary-structure elements followed the order α-helix > random coil > extended chain structure for BgAct1, KcAct1, RsAct1, and A. corniculatum Act. In contrast, for the remaining actin genes the order was random coil > extended chain structure > α-helix. Prediction of secondary structure was therefore performed to obtain the necessary structural information. The predicted values for chloroplast transit peptide, signal peptide, and mitochondrial targeting were very small, indicating that mangrove actin genes carry no chloroplast or mitochondrial transit peptide and no signal peptide for the secretion pathway. These results suggest the importance of understanding the diversity and functional properties of the different amino acids in mangrove actin genes. To clarify the relationships among the mangrove actin genes, a phylogenetic tree was constructed. Three groups of mangrove actin genes were formed: the first group contains B. gymnorrhiza BgAct and R. stylosa RsAct1; the second cluster, consisting of 5 actin genes, is the largest group; and the last branch consists of one gene, B. sexangula Act. The present study therefore supports previous results showing that plant actin genes form distinct clusters in the tree.
Horbach, D.Y.; Usanov, S.A.
One of the mechanisms of external signal transduction (ionizing radiation, toxicants, stress) to the target cell is the existence of membrane and intracellular proteins with intrinsic tyrosine kinase activity. It is no wonder that the etiology of malignant growth is linked to abnormalities in signal transduction through tyrosine kinases. The epidermal growth factor receptor (EGFR) tyrosine kinases play fundamental roles in the development, proliferation, and differentiation of tissues of epithelial, mesenchymal, and neuronal origin. There are four types of EGFR: the EGF receptor (ErbB1/HER1), ErbB2/Neu/HER2, ErbB3/HER3, and ErbB4/HER4. Abnormal expression of EGFR, and the appearance of receptor mutants with an altered capacity for protein-protein interaction or with increased tyrosine kinase activity, have been implicated in the malignancy of different types of human tumors. Bioinformatics is currently used in investigations of the design and selection of drugs that can alter the structure of, or competitively bind to, these receptors and so display antagonistic characteristics. (authors)
Neerincx, Pieter B T; Leunissen, Jack A M
Bioinformaticians have developed large collections of tools to make sense of the rapidly growing pool of molecular biological data. Biological systems tend to be complex and in order to understand them, it is often necessary to link many data sets and use more than one tool. Therefore, bioinformaticians have experimented with several strategies to try to integrate data sets and tools. Owing to the lack of standards for data sets and the interfaces of the tools this is not a trivial task. Over the past few years building services with web-based interfaces has become a popular way of sharing the data and tools that have resulted from many bioinformatics projects. This paper discusses the interoperability problem and how web services are being used to try to solve it, resulting in the evolution of tools with web interfaces from HTML/web form-based tools not suited for automatic workflow generation to a dynamic network of XML-based web services that can easily be used to create pipelines.
Full Text Available This article sets out to illustrate possible applications of electronic corpora in the translation classroom. Starting with a survey of corpus use within corpus-based translation studies, the didactic value of corpora in the translation classroom and their epistemic value in translation teaching and practice will be elaborated. A typology of translation practice-oriented corpora will be presented, and the use of corpora in translation will be positioned within two general models of translation competence. Special consideration will then be given to the design and application of so-called Do-it-yourself (DIY) corpora, which are compiled ad hoc with the aim of completing a specific translation task. In this context, possible sources for retrieving corpus texts will be presented and evaluated, and it will be argued that, owing to time and availability constraints in real-life translation, the Internet should be used as a major source of corpus data. After a brief discussion of possible Internet research techniques for targeted and quality-focused corpus compilation, the possible use of the Internet itself as a macro-corpus will be elaborated. The article concludes with a brief presentation of corpus use in translation teaching in the MA in Specialised Translation Programme offered at Cologne University of Applied Sciences, Germany.
Tenenbaum, Jessica D; Bhuvaneshwar, Krithika; Gagliardi, Jane P; Fultz Hollis, Kate; Jia, Peilin; Ma, Liang; Nagarajan, Radhakrishnan; Rakesh, Gopalkumar; Subbian, Vignesh; Visweswaran, Shyam; Zhao, Zhongming; Rozenblit, Leon
Mental illness is increasingly recognized as both a significant cost to society and a significant area of opportunity for biological breakthrough. As -omics and imaging technologies enable researchers to probe molecular and physiological underpinnings of multiple diseases, opportunities arise to explore the biological basis for behavioral health and disease. From individual investigators to large international consortia, researchers have generated rich data sets in the area of mental health, including genomic, transcriptomic, metabolomic, proteomic, clinical and imaging resources. General data repositories such as the Gene Expression Omnibus (GEO) and Database of Genotypes and Phenotypes (dbGaP) and mental health (MH)-specific initiatives, such as the Psychiatric Genomics Consortium, MH Research Network and PsychENCODE represent a wealth of information yet to be gleaned. At the same time, novel approaches to integrate and analyze data sets are enabling important discoveries in the area of mental and behavioral health. This review will discuss and catalog into an organizing framework the increasingly diverse set of MH data resources available, using schizophrenia as a focus area, and will describe novel and integrative approaches to molecular biomarker discovery that make use of mental health data. © The Author 2017. Published by Oxford University Press.
Revote, Jerico; Watson-Haigh, Nathan S.; Quenette, Steve; Bethwaite, Blair; McGrath, Annette
Abstract The Bioinformatics Training Platform (BTP) has been developed to provide access to the computational infrastructure required to deliver sophisticated hands-on bioinformatics training courses. The BTP is a cloud-based solution that is in active use for delivering next-generation sequencing training to Australian researchers at geographically dispersed locations. The BTP was built to provide an easy, accessible, consistent and cost-effective approach to delivering workshops at host universities and organizations with a high demand for bioinformatics training but lacking the dedicated bioinformatics training suites required. To support broad uptake of the BTP, the platform has been made compatible with multiple cloud infrastructures. The BTP is an open-source and open-access resource. To date, 20 training workshops have been delivered to over 700 trainees at over 10 venues across Australia using the BTP. PMID:27084333
Phosphoenolpyruvate carboxykinase (PEPCK), a critical gluconeogenic enzyme, catalyzes the first committed step in the diversion of tricarboxylic acid cycle intermediates toward gluconeogenesis. Based on the relative conservation of homologous genes, a bioinformatics strategy was applied to clone Fusarium ...
Michael R Clay
Full Text Available Training anatomic and clinical pathology residents in the principles of bioinformatics is a challenging endeavor. Most residents receive little to no formal exposure to bioinformatics during medical education, and most of the pathology training is spent interpreting histopathology slides using light microscopy or focused on laboratory regulation, management, and interpretation of discrete laboratory data. At a minimum, residents should be familiar with data structure, data pipelines, data manipulation, and data regulations within clinical laboratories. Fellowship-level training should incorporate advanced principles unique to each subspecialty. Barriers to bioinformatics education include the clinical apprenticeship training model, ill-defined educational milestones, inadequate faculty expertise, and limited exposure during medical training. Online educational resources, case-based learning, and incorporation into molecular genomics education could serve as effective educational strategies. Overall, pathology bioinformatics training can be incorporated into pathology resident curricula, provided there is motivation to incorporate, institutional support, educational resources, and adequate faculty expertise.
BASIC QUALIFICATIONS To be considered for this position, you must minimally meet the knowledge, skills, and abilities listed below: Bachelor’s degree in a life science/bioinformatics/math/physics/computer-related field from an accredited college or university according to the Council for Higher Education Accreditation (CHEA). (Additional qualifying experience may be substituted for the required education). Foreign degrees must be evaluated for U.S. equivalency. In addition to the educational requirements, a minimum of five (5) years of progressively responsible relevant experience. Must be able to obtain and maintain a security clearance. PREFERRED QUALIFICATIONS Candidates with these desired skills will be given preferential consideration: A Master’s or PhD degree in any quantitative science is preferred. Commitment to solving biological problems and communicating these solutions. Ability to multi-task across projects. Experience in submitting data sets to public repositories. Management of large genomic data sets, including integration with data available from public sources. Prior customer-facing role. Record of scientific achievements, including journal publications and conference presentations. Expected Competencies: Deep understanding of and experience in processing high-throughput biomedical data: data cleaning, normalization, analysis, interpretation, and visualization. Ability to understand and analyze data from complex experimental designs. Proficiency in at least two of the following programming languages: Perl, Python, R, Java, and C/C++. Experience in at least two of the following areas: metagenomics, ChIP-Seq, RNA-Seq, Exome-Seq, DHS-Seq, microarray analysis. Familiarity with public databases: NCBI, Ensembl, TCGA, cBioPortal, Broad FireHose. Knowledge of working in a cluster environment.
Oliver, J; Pisano, M E; Alonso, T; Roca, P
Statistics provides essential tools in bioinformatics for interpreting the results of a database search and for managing the enormous amounts of information produced by genomics, proteomics, and metabolomics. The goal of this project was the development of a software tool, as simple as possible, to demonstrate the use of statistics in bioinformatics. Computer Simulation Methods (CSMs) developed using Microsoft Excel were chosen for their broad range of applications, immediate and easy formula calculation, immediate testing, easy graphical representation, and general use and acceptance by the scientific community. The result of these endeavours is a set of utilities which can be accessed from the following URL: http://gmein.uib.es/bioinformatica/statistics. When tested on students with previous coursework using traditional statistical teaching methods, the general consensus was that Web-based instruction had numerous advantages, but that traditional methods with manual calculations were also needed for theory and practice. Once the basic statistical formulas had been mastered, Excel spreadsheets and graphics proved very useful for trying many parameters rapidly without tedious calculations. CSMs will be of great importance for the training of students and professionals in the field of bioinformatics, and for upcoming applications in self-learning and continuing education.
Hodor, Paul; Chawla, Amandeep; Clark, Andrew; Neal, Lauren
One of the solutions proposed for addressing the challenge of the overwhelming abundance of genomic sequence and other biological data is the use of the Hadoop computing framework. Appropriate tools are needed to set up computational environments that facilitate research of novel bioinformatics methodology using Hadoop. Here, we present cl-dash, a complete starter kit for setting up such an environment. Configuring and deploying new Hadoop clusters can be done in minutes. Use of Amazon Web Services ensures no initial investment and minimal operation costs. Two sample bioinformatics applications help the researcher understand and learn the principles of implementing an algorithm using the MapReduce programming pattern. Source code is available at https://bitbucket.org/booz-allen-sci-comp-team/cl-dash.git. firstname.lastname@example.org. © The Author 2015. Published by Oxford University Press.
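The MapReduce programming pattern that cl-dash's sample applications teach can be illustrated without Hadoop at all. Below is a minimal in-process sketch on a toy bioinformatics task, counting k-mers across sequencing reads; the task and data are invented for illustration and are not one of cl-dash's bundled applications.

```python
# Minimal illustration of the MapReduce pattern: count k-mers in reads.
# Hadoop distributes the map, shuffle, and reduce phases across a cluster;
# here they run in a single process so the data flow is easy to follow.
from collections import defaultdict

def map_phase(read, k=3):
    """Mapper: emit a (k-mer, 1) pair for every k-length window in a read."""
    for i in range(len(read) - k + 1):
        yield read[i:i + k], 1

def shuffle(pairs):
    """Group values by key, as Hadoop does between the map and reduce phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    """Reducer: combine all values for one key into a single count."""
    return key, sum(values)

reads = ["ACGTACGT", "CGTAC"]
pairs = (kv for read in reads for kv in map_phase(read))
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(counts["CGT"])  # "CGT" appears twice in read 1 and once in read 2 -> 3
```

On a real cluster the mapper and reducer are the only parts the researcher writes; the framework supplies the shuffle, fault tolerance, and distribution.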
Gentleman, R.C.; Carey, V.J.; Bates, D.M.
The Bioconductor project is an initiative for the collaborative creation of extensible software for computational biology and bioinformatics. The goals of the project include: fostering collaborative development and widespread use of innovative software, reducing barriers to entry into interdisciplinary scientific research, and promoting the achievement of remote reproducibility of research results. We describe details of our aims and methods, identify current challenges, compare Bioconductor to other open bioinformatics projects, and provide working examples.
Gasparoviča-Asīte, M; Aleksejeva, L; Gersons, V
This article studies the possibilities of the BEXA family of classification algorithms (BEXA, FuzzyBexa, and FuzzyBexa II) in the classification of data, especially bioinformatics data. Three different types of data sets were used in the study: data sets often used in the literature (like the Iris data set), real-life data sets from the UCI data repository (like the breast cancer data set), and real bioinformatics data sets with a specific character: a large number of attributes (several thousand) and a small numb...
Sánchez-Burgos, Gilma; Ramos-Castañeda, José; Cedillo-Rivera, Roberto; Dumonteil, Eric
We used T cell epitope prediction tools to identify epitopes from Dengue virus polyprotein sequences, and evaluated in vivo and in vitro the immunogenicity and antigenicity of the corresponding synthetic vaccine candidates. Twenty-two epitopes were predicted to have a high affinity for MHC class I (H-2Kd, H-2Dd, H-2Ld alleles) or class II (IAd alleles). These epitopes were conserved between the four virus serotypes, but showed no similarity to human or mouse sequences. Thirteen synthetic peptides induced specific antibody production, with or without T cell activation, in mice. Three synthetic peptides induced mostly IgG antibodies, and one of these, from the E gene, induced a neutralizing response. Ten peptides induced a combination of humoral and cellular responses by CD4+ and CD8+ T cells. Twelve peptides were novel B and T cell epitopes. These results indicate that our bioinformatics strategy is a powerful tool for the identification of novel antigens, and its application to human HLA may lead to a potent epitope-based vaccine against Dengue virus and many other pathogens. (c) 2010 Elsevier B.V. All rights reserved.
Payne, Joshua L; Sinnott-Armstrong, Nicholas A; Moore, Jason H
Advances in the video gaming industry have led to the production of low-cost, high-performance graphics processing units (GPUs) that possess more memory bandwidth and computational capability than central processing units (CPUs), the standard workhorses of scientific computing. With the recent release of general-purpose GPUs and NVIDIA's GPU programming language, CUDA, graphics engines are being adopted widely in scientific computing applications, particularly in the fields of computational biology and bioinformatics. The goal of this article is to concisely present an introduction to GPU hardware and programming, aimed at the computational biologist or bioinformaticist. To this end, we discuss the primary differences between GPU and CPU architecture, introduce the basics of the CUDA programming language, and discuss important CUDA programming practices, such as the proper use of coalesced reads, data types, and memory hierarchies. We highlight each of these topics in the context of computing the all-pairs distance between instances in a dataset, a common procedure in numerous disciplines of scientific computing. We conclude with a runtime analysis of the GPU and CPU implementations of the all-pairs distance calculation. We show our final GPU implementation to outperform the CPU implementation by a factor of 1700.
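The all-pairs distance computation used above as the running example can be sketched on the CPU side as follows. This is an illustrative pure-Python reference, not the authors' CUDA code; a GPU kernel would parallelize over the (i, j) index pairs that the nested loops here visit serially:

```python
import math

def all_pairs_distances(data):
    """CPU reference for the all-pairs Euclidean distance: for n instances
    (equal-length feature vectors), return the symmetric n x n matrix of
    pairwise distances. A CUDA kernel assigns one (i, j) pair per thread."""
    n = len(data)
    d = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(data[i], data[j])))
            d[i][j] = d[j][i] = dist  # exploit symmetry on the CPU side
    return d
```

On the GPU, the symmetry shortcut is usually dropped: computing both triangles keeps every thread's work identical, which matters more than halving the arithmetic.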
Nobile, Marco S; Cazzaniga, Paolo; Tangherloni, Andrea; Besozzi, Daniela
Several studies in Bioinformatics, Computational Biology and Systems Biology rely on the definition of physico-chemical or mathematical models of biological systems at different scales and levels of complexity, ranging from the interaction of atoms in single molecules up to genome-wide interaction networks. Traditional computational methods and software tools developed in these research fields share a common trait: they can be computationally demanding on Central Processing Units (CPUs), therefore limiting their applicability in many circumstances. To overcome this issue, general-purpose Graphics Processing Units (GPUs) are gaining increasing attention from the scientific community, as they can considerably reduce the running time required by standard CPU-based software, and allow more intensive investigations of biological systems. In this review, we present a collection of GPU tools recently developed to perform computational analyses in life science disciplines, emphasizing the advantages and the drawbacks in the use of these parallel architectures. The complete list of GPU-powered tools here reviewed is available at http://bit.ly/gputools. © The Author 2016. Published by Oxford University Press.
As a focal point of biotechnology, bioinformatics integrates knowledge from biology, mathematics, physics, chemistry, computer science and information science. It generally deals with genome informatics, protein structure and drug design. However, the data or information thus acquired from the main areas of bioinformatics may not be effective. Some researchers have combined bioinformatics with wireless sensor networks (WSNs) into biosensors and other tools, and applied them to such areas as fermentation, environmental monitoring, food engineering, clinical medicine and the military. In this combination, the WSN is used to collect data and information. The reliability of the WSN in bioinformatics is a prerequisite to effective utilization of the information. It is greatly influenced by factors like quality, benefits, service, timeliness and stability, some of which are qualitative and some quantitative. Hence, it is necessary to develop a method that can handle both qualitative and quantitative assessment of information. A viable option is the fuzzy linguistic method, especially the 2-tuple linguistic model, which has been extensively used to cope with such issues. As a result, this paper introduces 2-tuple linguistic representation to assist experts in giving their opinions on different WSNs in bioinformatics when multiple factors are involved. Moreover, the author proposes a novel way to determine attribute weights and uses the method to weigh the relative importance of the different influencing factors, which can be considered as attributes in the assessment of the WSN in bioinformatics. Finally, an illustrative example is given to provide a reasonable solution for the assessment.
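The 2-tuple linguistic representation mentioned above can be sketched as follows. The term set and values are illustrative, and the operators follow the standard Herrera-Martínez formulation, which is assumed here since the paper's exact notation is not reproduced in the abstract:

```python
import math

def to_two_tuple(beta, terms):
    """Delta operator of the 2-tuple linguistic model: map an aggregated
    value beta in [0, g], with g = len(terms) - 1, to a pair (s_i, alpha)
    where s_i is the closest linguistic term and alpha in [-0.5, 0.5) is
    the symbolic translation (the rounding remainder, kept rather than
    discarded, so aggregation loses no information)."""
    i = math.floor(beta + 0.5)  # nearest term index
    return terms[i], beta - i

def from_two_tuple(term, alpha, terms):
    """Inverse Delta operator: recover the numeric value from a 2-tuple."""
    return terms.index(term) + alpha
```

For example, with the (hypothetical) term set ["none", "low", "medium", "high", "perfect"], an aggregated expert score of 3.25 becomes ("high", 0.25), and the 0.25 remainder survives for later aggregation steps.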
Within recent years, clock rates of modern processors have stagnated while the demand for computing power continues to grow. This applies particularly to the fields of life sciences and bioinformatics, where new technologies keep creating rapidly growing piles of raw data with increasing speed. The number of cores per processor has increased in an attempt to compensate for slight increments in clock rates. This technological shift demands changes in software development, especially in the field of high-performance computing, where parallelization techniques are gaining in importance due to the pressing issue of large datasets generated by, e.g., modern genomics. This paper presents an overview of state-of-the-art manual and automatic acceleration techniques and lists some applications employing these in different areas of sequence informatics. Furthermore, we provide examples of automatic acceleration of two use cases to show typical problems and gains of transforming a serial application into a parallel one. The paper should aid the reader in deciding on a technique for the problem at hand. We compare four different state-of-the-art automatic acceleration approaches (OpenMP, PluTo-SICA, PPCG, and OpenACC). Their performance as well as their applicability for selected use cases is discussed. While optimizations targeting the CPU worked better in the complex k-mer use case, optimizers for Graphics Processing Units (GPUs) performed better in the matrix multiplication example. But performance is only superior at a certain problem size, due to data migration overhead. We show that automatic code parallelization is feasible with current compiler software and yields significant increases in execution speed. Automatic optimizers for CPU are mature and usually
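The serial-to-parallel transformation discussed above has the same split/compute/merge shape regardless of the backend. The sketch below illustrates it for k-mer counting in Python; it is a hypothetical mini-example, not code from the paper, and it uses a thread pool for brevity. For a CPU-bound pure-Python workload one would use processes rather than threads, just as the C/C++ use cases in the paper rely on OpenMP or GPU offloading:

```python
from collections import Counter
from multiprocessing.pool import ThreadPool

def count_kmers(seq, k):
    """The serial kernel: count all k-mers in one sequence chunk."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def count_kmers_parallel(seqs, k, workers=4):
    """Map the serial kernel over independent sequences in a worker pool,
    then reduce the partial Counters - the same split/compute/merge shape
    an OpenMP parallel-for applies to loop iterations."""
    with ThreadPool(workers) as pool:
        partials = pool.map(lambda s: count_kmers(s, k), seqs)
    total = Counter()
    for p in partials:
        total.update(p)  # the reduction step
    return total
```

The data-migration overhead noted in the abstract shows up here too: splitting and merging only pays off once each chunk's compute time dominates the cost of distributing the work.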
Shimoda, Sandra; de Camargo, Beatriz; Horsman, John; Furlong, William; Lopes, Luiz Fernando; Seber, Adriana; Barr, Ronald D
There are few publications reporting health-related quality of life (HRQL) in developing nations. Most instruments measuring HRQL have been developed in English-speaking countries. These instruments need to be culturally adapted for use in non-English-speaking countries. The HUI2 and HUI3 are generic, preference-based systems for describing health status and HRQL. Developed in Canada, the systems have been translated into more than a dozen languages and used worldwide in hundreds of studies of clinical and general populations. The Brazilian-Portuguese translation of the HUI systems was supervised by senior HUInc staff having experience with both the HUI systems and translations. The process included two independent forward translations of the multi-attribute health status classification systems and related questionnaires, consensus between translators on a forward translation, back-translation by two independent translators of the forward translation, and review of the back-translations by original developers of the HUI. The final questionnaires were tested by surveying a convenience sample of 50 patients recruited at the Centro de Tratamento e Pesquisa-Hospital do Cancer in São Paulo, Brazil. Fifty patients were enrolled in the study. No assessor (patient, nurse or physician) reported problems answering the HUI questionnaires. No significant differences were found in mean overall HUI2 or HUI3 utility scores among types of assessors. Variability in scores is similar to that found in other studies in Latin America and Canada. Test results provide preliminary evidence that the Brazilian-Portuguese translation is acceptable, understandable, reliable and valid for assessing health status and HRQL among survivors of cancer in childhood in Brazil.
Carl, Michael; Schaeffer, Moritz Jonas
translations we investigate the effects of cross-lingual syntactic and semantic distance on translation production times and find that non-literality makes from-scratch translation and post-editing difficult. We show that statistical machine translation systems encounter even more difficulties with non-literality....
Several proof translations of classical mathematics into intuitionistic mathematics have been proposed in the literature over the past century. These are normally referred to as negative translations or double-negation translations. Among those, the most commonly cited are translations due to Kolmogorov, Gödel, Gentzen, Kuroda and Krivine (in chronological order). In this paper we propose a framework for explaining how these different translations are related to each other. More precisely, we define a notion of a (modular) simplification starting from the Kolmogorov translation, which leads to a partial order between different negative translations. In this derived ordering, Kuroda and Krivine are minimal elements. Two new minimal translations are introduced, with the Gödel and Gentzen translations sitting in between Kolmogorov and one of these new translations.
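As a concrete anchor for the ordering described above, the Kolmogorov translation double-negates every subformula, while Kuroda's translation inserts double negations far more sparingly. These are the standard textbook formulations, included here for illustration:

```latex
% Kolmogorov translation: double-negate every subformula.
\begin{align*}
P^{K} &= \neg\neg P \quad \text{($P$ atomic)} \\
(A \wedge B)^{K} &= \neg\neg\,(A^{K} \wedge B^{K}) \\
(A \vee B)^{K}   &= \neg\neg\,(A^{K} \vee B^{K}) \\
(A \to B)^{K}    &= \neg\neg\,(A^{K} \to B^{K}) \\
(\forall x\,A)^{K} &= \neg\neg\,\forall x\,A^{K} \\
(\exists x\,A)^{K} &= \neg\neg\,\exists x\,A^{K}
\end{align*}
% Kuroda translation: one double negation up front, plus one after
% each universal quantifier; all other connectives are left unchanged.
\begin{align*}
A^{Ku} = \neg\neg A^{*}, \qquad (\forall x\,B)^{*} = \forall x\,\neg\neg B^{*}
\end{align*}
```

For both translations, classical logic proves $A$ exactly when intuitionistic logic proves the translated formula; the simplification order studied in the paper compares how much of Kolmogorov's blanket double negation each translation manages to omit.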
Bhuvaneshwar, Krithika; Belouali, Anas; Singh, Varun; Johnson, Robert M; Song, Lei; Alaoui, Adil; Harris, Michael A; Clarke, Robert; Weiner, Louis M; Gusev, Yuriy; Madhavan, Subha
G-DOC Plus is a data integration and bioinformatics platform that uses cloud computing and other advanced computational tools to handle a variety of biomedical big data, including gene expression arrays, NGS and medical images, so that they can be analyzed in the full context of other omics and clinical information. G-DOC Plus currently holds data from over 10,000 patients selected from private and public resources including Gene Expression Omnibus (GEO), The Cancer Genome Atlas (TCGA) and the recently added datasets from the REpository for Molecular BRAin Neoplasia DaTa (REMBRANDT), caArray studies of lung and colon cancer, ImmPort and the 1000 Genomes data sets. The system allows researchers to explore clinical-omic data one sample at a time, as a cohort of samples, or at the population level, providing the user with a comprehensive view of the data. G-DOC Plus tools have been leveraged in cancer and non-cancer studies for hypothesis generation and validation, biomarker discovery and multi-omics analysis, to explore somatic mutations and cancer MRI images, as well as for training and graduate education in bioinformatics, data and computational sciences. Several of these use cases are described in this paper to demonstrate its multifaceted usability. G-DOC Plus can be used to support a variety of user groups in multiple domains to enable hypothesis generation for precision medicine research. The long-term vision of G-DOC Plus is to extend this translational bioinformatics platform to stay current with emerging omics technologies and analysis methods to continue supporting novel hypothesis generation, analysis and validation for integrative biomedical research. By integrating several aspects of the disease and exposing various data elements, such as outpatient lab workup, pathology, radiology, current treatments, molecular signatures and expected outcomes over a web interface, G-DOC Plus will continue to strengthen precision medicine research. G-DOC Plus is available
Sukal-Moulton, Theresa; Clancy, Theresa; Zhang, Li-Qun; Gaebler-Spira, Deborah
To determine the clinical efficacy of an ankle robotic rehabilitation protocol for patients with cerebral palsy. The clinic cohort was identified from a retrospective chart review in a before-after intervention trial design and compared with a previously published prospective research cohort. Rehabilitation hospital. Children (N=28; mean age, 8.2±3.62 y) with Gross Motor Function Classification System levels I, II, or III who were referred for ankle stretching and strengthening used a robotic ankle device in a clinic setting. Clinic results were compared with a previously published cohort of participants (N=12; mean age, 7.8±2.91 y) seen in a research laboratory-based intervention protocol. Patients in the clinic cohort were seen 2 times per week for 75-minute sessions for a total of 6 weeks. The first 30 minutes of the session were spent using the robotic ankle device for ankle stretching and strengthening, and the remaining 45 minutes were spent on functional movement activities. There was no control group. We compared pre- and postintervention measures of plantarflexor and dorsiflexor range of motion, strength, spasticity, mobility (Timed Up and Go test, 6-minute walk test, 10-m walk test), balance (Pediatric Balance Scale), Selective Control Assessment of the Lower Extremity (SCALE), and gross motor function measure (GMFM). Significant improvements were found for the clinic cohort in all main outcome measures except for the GMFM. These improvements were equivalent to those reported in the research cohort, except for larger SCALE test changes in the research cohort. These findings suggest that translation of repetitive, goal-directed biofeedback training into the clinic setting is both feasible and beneficial for patients with cerebral palsy. Copyright © 2014 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Saraiva, Daniela; de Camargo, Beatriz; Davis, Aileen M
Evaluation of physical functioning is an important tool for planning rehabilitation. Instruments need to be culturally adapted for use in non-English-speaking countries. The aim of this study was to culturally adapt, including translation and preliminary validation, the Toronto extremity salvage score (TESS) for Brazil, in a sample of adolescents and young adults treated for lower extremity osteosarcoma. The process included two independent forward translations of the TESS questionnaire, consensus between translators on a forward translation, back-translation by two independent translators, and a review of the back-translations. Internal consistency of the TESS and known-groups validity were also evaluated. Internal consistency for the 30-item TESS was high (coefficient alpha = 0.87). The TESS score ranges from 0 to 100. Forty-eight patients completed the questionnaire, and scores ranged from 56 to 100 (mean score: 89.6). Patients receiving no pain medications scored higher on the TESS than those who were receiving pain medication (P = 0.014), and patients using walking aids had slightly higher but not statistically different scores. Those who were treated with amputation had higher scores than those who were treated with limb salvage procedures (P = 0.003). Preliminary evidence suggests that the Brazilian-Portuguese translation is acceptable, understandable, reliable, and valid for evaluating function in adolescents and young adults with osteosarcoma of the lower extremity in Brazil. (c) 2008 Wiley-Liss, Inc.
The aim of this article is to consider the issue of quality in translation. Specifically, the question under consideration is whether quality assurance in relation to translation is feasible and, if so, what some of the implications for translation theory, translation practice and the teaching of...... under the ISO 9001 standard, and section 4. discusses the implications which quality management seems to hold for the field of translation in a broad sense. Finally, section 5. concludes the article....
Pfleger, Brian; Mendez-Perez, Daniel
Disclosed are systems and methods for coupling translation of a target gene to a detectable response gene. A version of the invention includes a translation-coupling cassette. The translation-coupling cassette includes a target gene, a response gene, a response-gene translation control element, and a secondary structure-forming sequence that reversibly forms a secondary structure masking the response-gene translation control element. Masking of the response-gene translation control element inhibits translation of the response gene. Full translation of the target gene results in unfolding of the secondary structure and consequent translation of the response gene. Translation of the target gene is determined by detecting presence of the response-gene protein product. The invention further includes RNA transcripts of the translation-coupling cassettes, vectors comprising the translation-coupling cassettes, hosts comprising the translation-coupling cassettes, methods of using the translation-coupling cassettes, and gene products produced with the translation-coupling cassettes.
Cui, Zhihua; Zhang, Yi
As a promising and innovative research field, bioinformatics has attracted increasing attention recently. Among the enormous number of open problems in this field, one fundamental issue concerns accurate and efficient computational methodology that can deal with tremendous amounts of data. In this paper, we survey some applications of swarm intelligence to discovering patterns in multiple sequences. To provide a deep insight, ant colony optimization, particle swarm optimization, artificial bee colony and the artificial fish swarm algorithm are selected, and their applications to multiple sequence alignment and the motif detection problem are discussed.
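Of the four algorithms surveyed, particle swarm optimization is perhaps the simplest to sketch. The toy below minimizes a generic continuous function; in the motif-detection setting the particle position would instead encode candidate motif start positions and the objective would be a motif score. This is a generic illustration, not any specific system from the survey:

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100, bounds=(-5.0, 5.0), seed=0):
    """Minimal particle swarm optimization sketch: each particle tracks its
    personal best, the swarm tracks a global best, and velocity updates
    blend inertia with cognitive (personal) and social (global) pull."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive and social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

The same personal-best/global-best bookkeeping carries over to the discrete encodings used for sequence problems; only the position representation and the velocity update change.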
This essay exists as a segment in a line of study and writing practice that moves between a critical theory analysis of translation studies conceptions of language, and the practical questions of what those ideas might mean for contemporary translation and writing practice. Although the underlying preoccupation of this essay, and my more general line of inquiry, is translation studies and practice, in many ways translation is merely a way into a discussion on language. For this essay, translation is the threshold of language. But the two trails of the discussion never manage to elude each other, and these concatenations have informed two experimental translation methods, referred to here as Live Translations and Series Translations. Following the essay are a number of poems in translation, all of which come from Blanco Nuclear by the contemporary Spanish poet, Esteban Pujals Gesalí. The first group, the Live Translations, consist of transcriptions I made from audio recordings read in a public setting, in which the texts were translated in situ, either off the page of original Spanish-language poems, or through a process very much like that carried out by simultaneous translators, for which readings of the poems were played back to me through headphones at varying speeds to be translated before the audience. The translations collected are imperfect renderings, attesting to a moment in language practice rather than language objects. The second method involves an iterative translation process, by which three versions of any one poem are rendered, with varying levels of fluency, fidelity and servility. All three translations are presented one after the other as a series, with no version asserting itself as the primary translation. These examples, as well as the translation methods themselves, are intended as preliminary experiments within an endlessly divergent continuum of potential methods and translations, and not as a complete representation of
Littman, Bruce H; Krishna, Rajesh
..., and examples of their application to real-life drug discovery and development. The latest thinking is presented by researchers from many of the world's leading pharmaceutical companies, including Pfizer, Merck, Eli Lilly, Abbott, and Novartis, as well as from academic institutions and public-private partnerships that support translational research...
Abernethy, Amy P; Wheeler, Jane L
Translational medicine has yet to deliver on its vast potential. Obstacles, or "blocks," to translation at three phases of research have impeded the application of research findings to clinical needs and, subsequently, the implementation of newly developed interventions in patient care. Recent federal support for comparative effectiveness research focuses attention on the clinical relevance of already-developed diagnostic and therapeutic interventions and on translating interventions found to be effective into new population-level strategies for improving health-thereby overcoming blocks at one end of the translational continuum. At the other end, while there is a preponderance of federal funding underwriting basic science research, further improvement is warranted in translating results of basic research into clinical applications and in integrating the basic sciences into the translational continuum. With its focus on the human and interactional aspects of health, medical practice, and healthcare delivery systems, behavioral medicine, itself a component of translational medicine, can inform this process.
... allotted to produce the subtitles have both decreased. Therefore, this market is recognised as a potential real-world application of MT. Recent publications have introduced Corpus-Based MT approaches to translate subtitles. An SMT system has been implemented in a Swedish subtitling company to translate between Swedish and Danish and Swedish and Norwegian subtitles, with the company already reporting a successful return on their investment. The hybrid EBMT/SMT system used in the current research, on the other hand, remains within the confines of academic research, and the real potential of the system...
The software industry has undergone rapid development since the beginning of the twenty-first century. These changes have had a profound impact on translators who, due to the evolving nature of digital content, are under increasing pressure to adapt their ways of working. Localizing Apps looks at these challenges by focusing on the localization of software applications, or apps. In each of the five core chapters, Johann Roturier examines: the role of translation and other linguistic activities in adapting software to the needs of different cultures (localization); the procedures required to prep
CERN. Geneva; Dana, Jose
Part 1: Big data for biomedical sciences (Tom Hancocks). Ten years ago, the first international 'Big Biology' project, which sequenced the human genome, was completed. In the years since, the biological sciences have seen a vast growth in data. In the coming years, advances will come from the integration of experimental approaches and their translation into applied technologies in the hospital, the clinic and even at home. This talk will examine the development of the infrastructure, physical and virtual, that will allow millions of life scientists across Europe better access to biological data. Tom studied Human Genetics at the University of Leeds and McMaster University, before completing an MSc in Analytical Genomics at the University of Birmingham. He has worked for the UK National Health Service in diagnostic genetics and in training healthcare scientists and clinicians in bioinformatics. Tom joined the EBI in 2012 and is responsible for the scientific development and delivery of training for the BioMedBridges pr...
Mees, Inger M.; Dragsted, Barbara; Gorm Hansen, Inge
On the basis of a pilot study using speech recognition (SR) software, this paper attempts to illustrate the benefits of adopting an interdisciplinary approach in translator training. It shows how the collaboration between phoneticians, translators and interpreters can (1) advance research, (2) have implications for the curriculum, (3) be pedagogically motivating, and (4) prepare students for employing translation technology in their future practice as translators. In a two-phase study in which 14 MA students translated texts in three modalities (sight, written, and oral translation using an SR program), Translog was employed to measure task times. The quality of the products was assessed by three experienced translators, and the number and types of misrecognitions were identified by a phonetician. Results indicate that SR translation provides a potentially useful supplement to written translation...
Introduction: contemporary translation studies and Bible translation. J.A. Naude, C.H.J. Van der Merwe. (Acta Theologica, Supplementum 2, 2002: 1-5). http://dx.doi.org/10.4314/actat.v22i1.5450.
The paper investigates the feasibility and some of the possible consequences of applying quality management to translation. It first gives an introduction to two different schools of translation and to (total) quality management. It then examines whether quality management may, in theory, be applied to translation, and goes on to present a case study which involves a firm in the translation industry and which illustrates quality management in practice. The paper shows that applying quality management to translation is feasible and that doing so may translate into sustained growth.
SILVA, Lucas Gonçalves; SANTOS, Leandro Olegário
Technological growth places new tools in the hands of researchers and the general public. Biological databases are spread across the worldwide computer network, and biological information sources are more accessible all the time. Categorizing and analyzing this information and its composition, with the aim of facilitating access, brings new perspectives to scientific dissemination. The databases "Biodiversity Hotspots" and "Species 2000" hold much important biological information, like other databases, molecular and geneti...
Several approaches have been developed to retrieve data automatically from one or more remote biological data sources. However, most of them require researchers to remain online and wait for returned results. The latter not only requires a highly available network connection, but may also cause network overload. Moreover, so far none of the existing approaches has been designed to address the following problems when retrieving remote data in a mobile network environment: (1) the resources of mobile devices are limited; (2) the network connection is of relatively low quality; and (3) mobile users are not always online. To address the aforementioned problems, we integrate an agent migration approach with a multi-agent system to overcome the high-latency or limited-bandwidth problem by moving computations to the required resources or services. More importantly, the approach is fit for mobile computing environments. Presented in this paper are also the system architecture, the migration strategy, as well as the security authentication of agent migration. As a demonstration, remote data retrieval from GenBank was used to illustrate the feasibility of the proposed approach.
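The core idea above, moving the computation to the data rather than streaming raw records back over a weak mobile link, can be illustrated with a toy sketch. The function and data names are hypothetical and stand in for the paper's actual GenBank agent:

```python
def migrate_and_run(data_host, agent_code):
    """Agent-migration sketch: the mobile client ships a small piece of
    computation (the 'agent') to the host holding the data; only the
    agent's small result crosses the network, not the raw records."""
    return agent_code(data_host["records"])

# A mock data host standing in for a remote sequence database.
genbank_mock = {"records": [
    {"accession": "AB000001", "length": 1200},
    {"accession": "AB000002", "length": 350},
    {"accession": "AB000003", "length": 2100},
]}

# The agent filters on the host side and returns only matching accessions,
# so a low-bandwidth or intermittently connected client never downloads
# the full record set.
long_sequences = migrate_and_run(
    genbank_mock,
    lambda records: [r["accession"] for r in records if r["length"] > 1000],
)
```

A real system adds the pieces the abstract lists on top of this shape: serializing the agent for transport, a migration strategy for choosing hosts, and authentication of the migrating code.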
Conclusively, researchers are encouraged to pursue proteomics assistance and guidance in modern molecular diagnostic approaches for understanding and controlling mucosal lesions, especially in arresting malignant progression. Keywords: Oral squamous cell carcinoma (OSCC); oral leukoplakia (OLK), ...
... (OSCC) from oral leukoplakia ... study the sera proteomes of 32 healthy volunteers, 6 patients with oral mucosa leukoplakia, 28 OSCC patients, and 8 ... American Ciphergen SELDI Protein Biology System II plus (PBS II plus) and ...
Acikgoz, Firat; Sert, Olcay
This study, in an attempt to rise above the intricacy of "being informed on the verge of globalization," is founded on the premise that Machine Translation (MT) applications searching for an ideal key to find a universal foundation for all natural languages have a restricted say over the translation process at various discourse levels. Our paper…
A. Ruttenberg; T. Clark (Tim); W. Bug; M. Samwald; O. Bodenreider; D. Doherty; H. Chen (Helen); K. Forsberg; Y. Gao; V. Kashyap; J. Kinoshita; J. Luciano; M.S. Marshall (Scott); C. Ogbuji; J. Rees (Jonathan); S. Stephens; E. Wu; D. Zaccagninni; T. Hongsermeier; E. Neumann; I. Herman (Ivan); K.-H. Cheung
A fundamental goal of the U.S. National Institutes of Health (NIH) "Roadmap" is to strengthen Translational Research, defined as the movement of discoveries in basic research to application at the clinical level. A significant barrier to translational research is the lack of uniformly
We introduce an open-source implementation of a machine translation API server. The aim of this software package is to enable anyone to run their own multi-engine translation server with neural machine translation engines, supporting an open API for client applications. Besides the hub, with the implementation of the client API and the translation service providers running in the background, we also describe an open-source demo web application that uses our software package and implements an online translation tool that supports collecting translation quality comparisons from users.
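The hub-plus-engines architecture described above can be sketched as follows. The JSON request/response shape and class names here are illustrative assumptions, not the actual API of the package in the abstract:

```python
import json

class TranslationHub:
    """Minimal multi-engine hub sketch: engines register under a name, and
    the hub routes JSON API requests of the (hypothetical) form
    {"src": ..., "tgt": ..., "text": ..., "engine": ...} to the chosen
    backend, returning a JSON response to the client."""

    def __init__(self):
        self.engines = {}

    def register(self, name, fn):
        """fn: (src_lang, tgt_lang, text) -> translated text."""
        self.engines[name] = fn

    def handle(self, raw_request):
        req = json.loads(raw_request)
        engine = self.engines.get(req.get("engine"))
        if engine is None:
            return json.dumps({"error": "unknown engine"})
        out = engine(req["src"], req["tgt"], req["text"])
        return json.dumps({"translation": out, "engine": req["engine"]})
```

In a real deployment the `handle` method would sit behind an HTTP endpoint and the registered callables would wrap neural MT engines; keeping the routing layer this thin is what lets clients swap engines through one open API.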
Francisco J. Vigier Moreno
This teaching experience was designed for and implemented in a university module on legal translator training in the German (second foreign language, or C language) to Spanish (first language, or A language) language pair. It describes the teaching applications derived from a simulated yet realistic translation project, in which students were asked to translate a German court summons into Spanish. Following the main theories of textual genre (Borja, 2013; Orts, 2017), as applied to the pre-translation analysis of the communicative situation in legal translation (Prieto, 2013), trainee translators are able to develop their legal translation competence and to establish a decision-making framework (Way, 2014) that allows them to justify their solutions to the translation problems identified. In this way, they deepen their specialization in German-Spanish legal translation with greater autonomy and self-confidence, as they become more aware of the factors that influence the suitability of the decisions taken by a legal translator.
Moretti, F.; Belardelli, F.; Romero, M.
This multidisciplinary international meeting is organized by the Istituto Superiore di Sanità, in collaboration with Alleanza Contro il Cancro (Alliance Against Cancer, the network of the Italian Comprehensive Cancer Centres) and EATRIS (the European Advanced Translational Research Infrastructure in Medicine). The primary goal of the meeting is to provide a scientific forum to discuss recent progress in translational research. Moreover, a particular focus will be devoted to the identification of needs, obstacles and new opportunities to promote translational research in biomedicine. The scientific programme will cover a broad range of fields including: cancer; neurosciences; rare diseases; cardiovascular diseases; and infectious and autoimmune diseases. Furthermore, special attention will be given to the discussion of how comprehensive initiatives addressing critical regulatory issues for First-In-Man (Phase I) clinical studies can potentially improve the efficiency and quality of biomedical and translational research at an international level.
Lue, Jaw-Chyng (Inventor); Fang, Wai-Chi (Inventor)
A system with applications in pattern recognition, or classification, of DNA assay samples. Because DNA reference and sample material in wells of an assay may be caused to fluoresce depending upon dye added to the material, the resulting light may be imaged onto an embodiment comprising an array of photodetectors and an adaptive neural network, with applications to DNA analysis. Other embodiments are described and claimed.
Jaramillo, Claudia A Castro; Belli, Sara; Cascais, Anne-Christine; Dudal, Sherri; Edelmann, Martin R; Haak, Markus; Brun, Marie-Elise; Otteneder, Michael B; Ullah, Mohammed; Funk, Christoph; Schuler, Franz; Simon, Silke
Monoclonal antibodies (mAbs) are a rapidly growing drug class for which great efforts have been made to optimize certain molecular features to achieve the desired pharmacokinetic (PK) properties. One approach is to engineer the interactions of the mAb with the neonatal Fc receptor (FcRn) by introducing specific amino acid sequence mutations, and to assess their effect on the PK profile with in vivo studies. Indeed, FcRn protects mAbs from intracellular degradation, thereby prolonging antibody circulation time in plasma and modulating its systemic clearance. To allow more efficient and focused mAb optimization, it is essential to have in vitro input that helps to identify and quantitatively predict the contribution of the different processes driving non-target-mediated mAb clearance in vivo, and that supports translational PK modeling activities. With this aim, we evaluated the applicability and in vivo relevance of an in vitro cellular FcRn-mediated transcytosis assay to explain the PK behavior of 25 mAbs in rat or monkey. The assay was able to capture species-specific differences in IgG-FcRn interactions and overall correctly ranked Fc mutants according to their in vivo clearance. However, it could not explain the PK behavior of all tested IgGs, indicating that mAb disposition in vivo is a complex interplay of additional processes besides the FcRn interaction. Overall, the transcytosis assay was considered suitable to rank mAb candidates for their FcRn-mediated clearance component before extensive in vivo testing, and represents a first step toward a multi-factorial in vivo clearance prediction approach based on in vitro data.
Krampis, Konstantinos; Booth, Tim; Chapman, Brad; Tiwari, Bela; Bicak, Mesude; Field, Dawn; Nelson, Karen E
A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the scale of sequence datasets, scientific value cannot be obtained from acquiring a sequencer unless it is accompanied by an equal investment in informatics infrastructure. Cloud BioLinux is a publicly accessible Virtual Machine (VM) that enables scientists to quickly provision on-demand infrastructures for high-performance bioinformatics computing using cloud platforms. Users have instant access to a range of pre-configured command line and graphical software applications, including a full-featured desktop interface, documentation and over 135 bioinformatics packages for applications including sequence alignment, clustering, assembly, display, editing, and phylogeny. Each tool's functionality is fully described in the documentation directly accessible from the graphical interface of the VM. Besides the Amazon EC2 cloud, we have started instances of Cloud BioLinux on a private Eucalyptus cloud installed at the J. Craig Venter Institute, and demonstrated access to the bioinformatic tools interface through a remote connection to EC2 instances from a local desktop computer. Documentation for using Cloud BioLinux on EC2 is available from our project website, while a Eucalyptus cloud image and VirtualBox Appliance is also publicly available for download and use by researchers with access to private clouds. Cloud BioLinux provides a platform for developing bioinformatics infrastructures on the cloud. An automated and configurable process builds Virtual Machines, allowing the development of highly
Lykke Jakobsen, Arnt
Exaugural presentation. A retrospect of my personal itinerary from literature, across translation studies, to translation process research, and a look ahead. In the retrospect, I range over diverse topics, all of which have sprung from my concern with the phenomenon of translation. I reflect on how, as humans, we generate meaning, interpret meaning, and reformulate or translate meaning. I also reflect on the way computing has influenced research into these phenomena, as seen e.g. in my creation of the Translog program and in projects I have been involved in, such as OFT (Translation of Professional …) for global communication purposes, and for improving research into translation, the phenomenon of translation and the world of translation in which we all live.
Degani, Tamar; Prior, Anat; Eddington, Chelsea M.; Arêas da Luz Fontes, Ana B.; Tokowicz, Natasha
Ambiguity in translation is highly prevalent, and has consequences for second-language learning and for bilingual lexical processing. To better understand this phenomenon, the current study compared the determinants of translation ambiguity across four sets of translation norms from English to Spanish, Dutch, German and Hebrew. The number of translations an English word received was correlated across these different languages, and was also correlated with the number of senses the word has in English, demonstrating that translation ambiguity is partially determined by within-language semantic ambiguity. For semantically-ambiguous English words, the probability of the different translations in Spanish and Hebrew was predicted by the meaning-dominance structure in English, beyond the influence of other lexical and semantic factors, for bilinguals translating from their L1, and translating from their L2. These findings are consistent with models postulating direct access to meaning from L2 words for moderately-proficient bilinguals. PMID:27882188
Nagy Imola Katalin
Full Text Available The problem of translation in foreign language classes cannot be dealt with unless we attempt to make an overview of what translation meant for language teaching in different periods of language pedagogy. From the translation-oriented grammar-translation method through the complete ban on translation and mother tongue during the times of the audio-lingual approaches, we have come today to reconsider the role and status of translation in ESL classes. This article attempts to advocate for translation as a useful ESL class activity, which can completely fulfil the requirements of communicativeness. We also attempt to identify some activities and games, which rely on translation in some books published in the 1990s and the 2000s.
Karikari, Thomas K; Quansah, Emmanuel; Mohamed, Wael M Y
Research in bioinformatics has a central role in helping to advance biomedical research. However, its introduction to Africa has been met with some challenges (such as inadequate infrastructure, training opportunities, research funding, human resources, biorepositories and databases) that have contributed to the slow pace of development in this field across the continent. Fortunately, recent improvements in areas such as research funding, infrastructural support and capacity building are helping to develop bioinformatics into an important discipline in Africa. These contributions are leading to the establishment of world-class research facilities, biorepositories, training programmes, scientific networks and funding schemes to improve studies into disease and health in Africa. With increased contribution from all stakeholders, these developments could be further enhanced. Here, we discuss how the recent developments are contributing to the advancement of bioinformatics in Africa.
Attwood, Teresa K; Bongcam-Rudloff, Erik; Brazas, Michelle E; Corpas, Manuel; Gaudet, Pascale; Lewitter, Fran; Mulder, Nicola; Palagi, Patricia M; Schneider, Maria Victoria; van Gelder, Celia W G
In recent years, high-throughput technologies have brought big data to the life sciences. The march of progress has been rapid, leaving in its wake a demand for courses in data analysis, data stewardship, computing fundamentals, etc., a need that universities have not yet been able to satisfy--paradoxically, many are actually closing "niche" bioinformatics courses at a time of critical need. The impact of this is being felt across continents, as many students and early-stage researchers are being left without appropriate skills to manage, analyse, and interpret their data with confidence. This situation has galvanised a group of scientists to address the problems on an international scale. For the first time, bioinformatics educators and trainers across the globe have come together to address common needs, rising above institutional and international boundaries to cooperate in sharing bioinformatics training expertise, experience, and resources, aiming to put ad hoc training practices on a more professional footing for the benefit of all.
Stephan, Klaas E; Iglesias, Sandra; Heinzle, Jakob; Diaconescu, Andreea O
Functional neuroimaging has made fundamental contributions to our understanding of brain function. It remains challenging, however, to translate these advances into diagnostic tools for psychiatry. Promising new avenues for translation are provided by computational modeling of neuroimaging data. This article reviews contemporary frameworks for computational neuroimaging, with a focus on forward models linking unobservable brain states to measurements. These approaches-biophysical network models, generative models, and model-based fMRI analyses of neuromodulation-strive to move beyond statistical characterizations and toward mechanistic explanations of neuroimaging data. Focusing on schizophrenia as a paradigmatic spectrum disease, we review applications of these models to psychiatric questions, identify methodological challenges, and highlight trends of convergence among computational neuroimaging approaches. We conclude by outlining a translational neuromodeling strategy, highlighting the importance of openly available datasets from prospective patient studies for evaluating the clinical utility of computational models. Copyright © 2015 Elsevier Inc. All rights reserved.
Full Text Available This paper examines the work of Samuel Beckett in the light of his early work as a translator of the works of other writers. In his translations for Negro: An Anthology (1934), the Anthology of Mexican Poetry (1958), or commissioned translations for journals such as "This Quarter", early pre-figurings of Beckett's own thematic and linguistic concerns abound. Rarely viewed as more than acts of raising money for himself, Beckett's acts of translation, examined chronologically, demonstrate a writer discovering his craft, and developing his unique voice, unencumbered by the expectations of originality. This essay posits that Beckett's works, with their distinctive voice and characterisation, owe much to the global perspective he gained through translating across cultural, continental divides, as well as experimenting with form, which became a staple of Beckett's own work. Without formal training or theoretical grounding in translation, Beckett utilises the act of translation as a means of finding himself, revisiting it as a means of shaping his own unique literary voice.
Full Text Available Dedukti is a logical framework based on the lambda-Pi-calculus modulo rewriting, which extends the lambda-Pi-calculus with rewrite rules. In this paper, we show how to translate the proofs of a family of HOL proof assistants to Dedukti. The translation preserves binding, typing, and reduction. We implemented this translation in an automated tool and used it to successfully translate the OpenTheory standard library.
Draft of a textbook chapter on neural machine translation: a comprehensive treatment of the topic, ranging from an introduction to neural networks and computation graphs, through a description of the currently dominant attentional sequence-to-sequence model, to recent refinements, alternative architectures, and open challenges. Written as a chapter for the textbook Statistical Machine Translation. Used in the JHU Fall 2017 class on machine translation.
The characteristics and capabilities of existing machine translation systems were examined and procurement recommendations were developed. Four systems, SYSTRAN, GLOBALINK, PC TRANSLATOR, and STYLUS, were determined to meet the NASA requirements for a machine translation system. Initially, four language pairs were selected for implementation. These are Russian-English, French-English, German-English, and Japanese-English.
Translation studies stem from comparative literature and contrastive analysis. It involves the transfer of messages between two different language systems and cultures, and Munday (2001, p.1) notes that translation "by its nature" "is multilingual and also interdisciplinary". Translation subjects are the texts in various…
Phillips, J. C.
Allosteric (long-range) interactions can be surprisingly strong in proteins of biomedical interest. Here we use bioinformatic scaling to connect prior results on nonsteroidal anti-inflammatory drugs to promising new drugs that inhibit cancer cell metabolism. Many parallel features are apparent, which explain how even one amino acid mutation, remote from active sites, can alter medical results. The enzyme twins involved are cyclooxygenase (aspirin) and isocitrate dehydrogenase (IDH). The IDH results are accurate to 1% and are overdetermined by adjusting a single bioinformatic scaling parameter. It appears that the final stage in optimizing protein functionality may involve leveling of the hydrophobic limits of the arms of conformational hydrophilic hinges.
Leclère, Valérie; Weber, Tilmann; Jacques, Philippe
This chapter helps in the use of bioinformatics tools relevant to the discovery of new nonribosomal peptides (NRPs) produced by microorganisms. The strategy described can be applied to draft or fully assembled genome sequences. It relies on the identification of the synthetase genes and the deciphering …
de Miranda Antonio B
Full Text Available Abstract Background: BLAST is a widely used genetic research tool for analysis of similarity between nucleotide and protein sequences. This paper presents a software application entitled "Squid" that makes use of grid technology. The current version, as an example, is configured for BLAST applications, but adaptation for other computing-intensive repetitive tasks can be easily accomplished in the open source version. This enables the allocation of remote resources to perform distributed computing, making large BLAST queries viable without the need of high-end computers. Results: Most distributed computing/grid solutions have complex installation procedures requiring a computer specialist, or have limitations regarding operating systems. Squid is a multi-platform, open-source program designed to "keep things simple" while offering high-end computing power for large-scale applications. Squid also has an efficient fault tolerance and crash recovery system against data loss, being able to re-route jobs upon node failure and recover even if the master machine fails. Our results show that a Squid application, working with N nodes and proper network resources, can process BLAST queries almost N times faster than if working with only one computer. Conclusion: Squid offers high-end computing, even for the non-specialist, and is freely available at the project web site. Its open-source and binary Windows distributions contain detailed instructions and a "plug-n-play" installation containing a pre-configured example.
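The near-linear speed-up the Squid abstract reports comes from an embarrassingly parallel decomposition: a large query set is split into chunks that independent nodes process concurrently. A minimal single-machine sketch of that idea (the chunking and worker functions below are illustrative, not Squid's actual code, and a toy scoring stand-in replaces the real BLAST call):

```python
from multiprocessing import Pool

def split_queries(queries, n_chunks):
    """Partition a list of query sequences into roughly equal chunks."""
    k, m = divmod(len(queries), n_chunks)
    return [queries[i * k + min(i, m):(i + 1) * k + min(i + 1, m)]
            for i in range(n_chunks)]

def process_chunk(chunk):
    """Stand-in for running BLAST on one chunk; here it just records lengths."""
    return [(seq, len(seq)) for seq in chunk]

if __name__ == "__main__":
    queries = ["ATGCGT", "GGGTACA", "TTAC", "CCGTA", "ATATATGC"]
    with Pool(2) as pool:  # two workers play the role of two grid nodes
        results = pool.map(process_chunk, split_queries(queries, 2))
    # flatten the per-chunk results back into one list
    flat = [r for chunk in results for r in chunk]
    print(len(flat))
```

With N workers and a chunk per worker, wall-clock time approaches 1/N of the serial time, which is the "almost N times faster" behaviour the paper measures; the fault-tolerance and job re-routing that Squid adds are what make this safe on unreliable remote nodes.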
Full Text Available Translational medicine has been defined as bench-to-bedside research, where a basic laboratory discovery becomes applicable to the diagnosis, treatment or prevention of a specific disease, and is brought forth by either a physician/scientist who works at the interface between the research laboratory and patient care, or by a team of basic and clinical science investigators. Statistics plays an important role in translational medicine to ensure that the translational process is accurate and reliable, with statistical assurance. For this purpose, statistical criteria for assessment of one-way and two-way translation are proposed. Under a well established and validated translational model, statistical tests for one-way and two-way translation are discussed. Some discussion on lost in translation is also given.
Ditty, Jayna L.; Kvaal, Christopher A.; Goodner, Brad; Freyermuth, Sharyn K.; Bailey, Cheryl; Britton, Robert A.; Gordon, Stuart G.; Heinhorst, Sabine; Reed, Kelynne; Xu, Zhaohui; Sanders-Lorenz, Erin R.; Axen, Seth; Kim, Edwin; Johns, Mitrick; Scott, Kathleen; Kerfeld, Cheryl A.
Undergraduate life sciences education needs an overhaul, as clearly described in the National Research Council of the National Academies publication BIO 2010: Transforming Undergraduate Education for Future Research Biologists. Among BIO 2010's top recommendations is the need to involve students in working with real data and tools that reflect the nature of life sciences research in the 21st century. Education research studies support the importance of utilizing primary literature, designing and implementing experiments, and analyzing results in the context of a bona fide scientific question in cultivating the analytical skills necessary to become a scientist. Incorporating these basic scientific methodologies in undergraduate education leads to increased undergraduate and post-graduate retention in the sciences. Toward this end, many undergraduate teaching organizations offer training and suggestions for faculty to update and improve their teaching approaches to help students learn as scientists, through design and discovery (e.g., Council of Undergraduate Research [www.cur.org] and Project Kaleidoscope [www.pkal.org]). With the advent of genome sequencing and bioinformatics, many scientists now formulate biological questions and interpret research results in the context of genomic information. Just as the use of bioinformatic tools and databases changed the way scientists investigate problems, it must change how scientists teach to create new opportunities for students to gain experiences reflecting the influence of genomics, proteomics, and bioinformatics on modern life sciences research. Educators have responded by incorporating bioinformatics into diverse life science curricula. While these published exercises in, and guidelines for, bioinformatics curricula are helpful and inspirational, faculty new to the area of bioinformatics inevitably need training in the theoretical underpinnings of the algorithms. Moreover, effectively integrating bioinformatics
Dragsted, Barbara; Mees, Inger M.; Gorm Hansen, Inge
In this article we discuss the translation processes and products of 14 MA students who produced translations from Danish (L1) into English (L2) under different working conditions: (1) written translation, (2) sight translation, and (3) sight translation with a speech recognition (SR) tool. Audio … Since students were dictating in their L2, we looked into the number and types of error that occurred when using the SR software. Items that were misrecognised by the program could be divided into three categories: homophones, hesitations, and incorrectly pronounced words. Well over fifty per cent …
Hedegaard, Steffen; Simonsen, Jakob Grue
… of translated texts. Our results suggest (i) that frame-based classifiers are usable for author attribution of both translated and untranslated texts; (ii) that frame-based classifiers generally perform worse than the baseline classifiers for untranslated texts, but (iii) perform as well as, or superior to, the baseline classifiers on translated texts; and (iv) that—contrary to current belief—naïve classifiers based on lexical markers may perform tolerably on translated texts if the combination of author and translator is present in the training set of a classifier.
Martha Pulido
Full Text Available I copied the title from Foucault’s text, "Qu'est-ce qu'un auteur" in Dits et écrits , Paris, Gallimard, 1994, that I read in French, then in English in Donald F. Bouchard’s and Sherry Simon’s translation, and finally in Spanish in Yturbe Corina’s translation, and applied for the translator some of the analysis that Foucault presents to define the author. Foucault suggests that if we cannot define an author, at least we can see where their function is reflected. My purpose in this paper is to present those surfaces where the function of the translator is reflected or where it can be revealed, and to analyse the categories that could lead us to the elaboration of a suitable definition of a Translator. I dare already give a compound noun for the translator: Translator-Function.
Full Text Available In order to achieve visibility in the media and a position recognized by both the public and their peers, translators are compelled to take advantage of spaces of enunciation such as those provided by prefaces, criticism, or biographical notes. Thanks to these spaces, in which translators deploy discursive and institutional strategies that allow them to position themselves and their translation project, translators acquire the status of translator-auctoritas, that is, a level of symbolic authority capable of endowing them with a public image. Through the detailed analysis of the editorial strategies and institutional calculations implemented by Baudelaire in order to position his project of translating Edgar Allan Poe, we show how the poet achieves the status of translator-auctoritas and the role the latter played in the construction of his own literary identity.
Granas, Anne Gerd; Nørgaard, Lotte Stig; Sporrong, Sofia Kälvemark
OBJECTIVE: The "Beliefs about Medicines Questionnaire" (BMQ) assesses the balance between the perceived necessity of medicines and concerns about them. The BMQ has been translated from English into many languages. However, the original meaning of statements, such as "My medicine is a mystery to me", may be lost in translation. The aim of this study is to compare three Scandinavian translations of the BMQ: (1) How reliable are the translations? (2) Are they still valid after translation? METHODS: Translated Norwegian, Swedish and Danish versions of the BMQ were scrutinized by three native Scandinavian researchers. Linguistic differences and ambiguities in the 5-point Likert scale and the BMQ statements were compared. RESULTS: In the Scandinavian translations, the Likert scale expanded beyond the original version at one endpoint (Swedish) or both endpoints (Danish). In the BMQ statements, discrepancies ranged from smaller inaccuracies toward …
Tao, Lin; Wang, Bohua; Zhong, Yafen; Pow, Siok Hoon; Zeng, Xian; Qin, Chu; Zhang, Peng; Chen, Shangying; He, Weidong; Tan, Ying; Liu, Hongxia; Jiang, Yuyang; Chen, Weiping; Chen, Yu Zong
Probiotics have been widely explored for health benefits, animal cares, and agricultural applications. Recent advances in microbiome, microbiota, and microbial dark matter research have fueled greater interests in and paved ways for the study of the mechanisms of probiotics and the discovery of new probiotics from uncharacterized microbial sources. A probiotics database named PROBIO was developed to facilitate these efforts and the need for the information on the known probiotics, which provides the comprehensive information about the probiotic functions of 448 marketed, 167 clinical trial/field trial, and 382 research probiotics for use or being studied for use in humans, animals, and plants. The potential applications of the probiotics data are illustrated by several literature-reported investigations, which have used the relevant information for probing the function and mechanism of the probiotics and for discovering new probiotics. PROBIO can be accessed free of charge at http://bidd2.nus.edu.sg/probio/homepage.htm .
The notion of paratext is an unquestionably important consideration for many lines of research in translation studies: the history of translation, literary translation, audiovisual translation, and the analysis of ideological discourse in translation or self-translation. This inexplicable shortage of studies on paratexts in translations was one of the reasons why the Department of Translation and Interpreting at the Universitat Autònoma de Barcelona decided to organise the 7th International …
Structural bioinformatics is concerned with the molecular structure of biomacromolecules on a genomic scale, using computational methods. Classic problems in structural bioinformatics include the prediction of protein and RNA structure from sequence, the design of artificial proteins or enzymes, and the automated analysis and comparison of biomacromolecules in atomic detail. The determination of macromolecular structure from experimental data (for example coming from nuclear magnetic resonance, X-ray crystallography or small angle X-ray scattering) has close ties with the field of structural bioinformatics. Recently, probabilistic models and machine learning methods based on Bayesian principles are providing efficient and rigorous solutions to challenging problems that were long regarded as intractable. In this review, I will highlight some important recent developments in the prediction, analysis and experimental determination of macromolecular structure that are based on such methods. These developments include generative models of protein structure, the estimation of the parameters of energy functions that are used in structure prediction, the superposition of macromolecules and structure determination methods that are based on inference. Although this review is not exhaustive, I believe the selected topics give a good impression of the exciting new, probabilistic road the field of structural bioinformatics is taking.
Jungck, John R; Weisstein, Anton E
The patterns of variation within a molecular sequence data set result from the interplay between population genetic, molecular evolutionary and macroevolutionary processes-the standard purview of evolutionary biologists. Elucidating these patterns, particularly for large data sets, requires an understanding of the structure, assumptions and limitations of the algorithms used by bioinformatics software-the domain of mathematicians and computer scientists. As a result, bioinformatics often suffers a 'two-culture' problem because of the lack of broad overlapping expertise between these two groups. Collaboration among specialists in different fields has greatly mitigated this problem among active bioinformaticians. However, science education researchers report that much of bioinformatics education does little to bridge the cultural divide, the curriculum too focused on solving narrow problems (e.g. interpreting pre-built phylogenetic trees) rather than on exploring broader ones (e.g. exploring alternative phylogenetic strategies for different kinds of data sets). Herein, we present an introduction to the mathematics of tree enumeration, tree construction, split decomposition and sequence alignment. We also introduce off-line downloadable software tools developed by the BioQUEST Curriculum Consortium to help students learn how to interpret and critically evaluate the results of standard bioinformatics analyses.
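The combinatorial explosion that motivates the tree-enumeration material above can be made concrete: the number of distinct unrooted binary (fully bifurcating) trees on n labeled taxa is the double factorial (2n−5)!!, which already exceeds two million at n = 10. A short sketch of that count:

```python
def num_unrooted_trees(n):
    """Count unrooted binary trees on n labeled taxa: (2n-5)!! for n >= 3."""
    if n < 3:
        raise ValueError("need at least 3 taxa")
    count = 1
    for k in range(3, 2 * n - 4, 2):  # product 3 * 5 * ... * (2n-5)
        count *= k
    return count

# The count grows super-exponentially with the number of taxa:
for n in (4, 5, 10):
    print(n, num_unrooted_trees(n))
```

This growth is exactly why exhaustive tree search is infeasible beyond small data sets, and why the heuristics implemented in phylogenetics software need the critical evaluation the authors advocate.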
A Refresher Course on 'Bioinformatics in Modern Biology' for graduate and postgraduate college/university teachers will be held at School of Life Sciences, Manipal University, Manipal for two weeks from 5 to 17 May 2014. The objective of this Course is to improvise on teaching methodologies incorporating online teaching ...
Budd, Aidan; Corpas, Manuel; Brazas, Michelle D; Fuller, Jonathan C; Goecks, Jeremy; Mulder, Nicola J; Michaut, Magali; Ouellette, B F Francis; Pawlik, Aleksandra; Blomberg, Niklas
"Scientific community" refers to a group of people collaborating together on scientific-research-related activities who also share common goals, interests, and values. Such communities play a key role in many bioinformatics activities. Communities may be linked to a specific location or institute, or involve people working at many different institutions and locations. Education and training is typically an important component of these communities, providing a valuable context in which to develop skills and expertise, while also strengthening links and relationships within the community. Scientific communities facilitate: (i) the exchange and development of ideas and expertise; (ii) career development; (iii) coordinated funding activities; (iv) interactions and engagement with professionals from other fields; and (v) other activities beneficial to individual participants, communities, and the scientific field as a whole. It is thus beneficial at many different levels to understand the general features of successful, high-impact bioinformatics communities; how individual participants can contribute to the success of these communities; and the role of education and training within these communities. We present here a quick guide to building and maintaining a successful, high-impact bioinformatics community, along with an overview of the general benefits of participating in such communities. This article grew out of contributions made by organizers, presenters, panelists, and other participants of the ISMB/ECCB 2013 workshop "The 'How To Guide' for Establishing a Successful Bioinformatics Network" at the 21st Annual International Conference on Intelligent Systems for Molecular Biology (ISMB) and the 12th European Conference on Computational Biology (ECCB).
van Gelder, Celia W.G.; Hooft, Rob; van Rijswijk, Merlijn; van den Berg, Linda; Kok, Ruben; Reinders, M.J.T.; Mons, Barend; Heringa, Jaap
This review provides a historical overview of the inception and development of bioinformatics research in the Netherlands. Rooted in theoretical biology by foundational figures such as Paulien Hogeweg (at Utrecht University since the 1970s), the developments leading to organizational structures
Bioinformatics has become an essential tool not only for basic research but also for applied research in biotechnology and biomedical sciences. Optimal primer sequence and appropriate primer concentration are essential for maximal specificity and efficiency of PCR. A poorly designed primer can result in little or no ...
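Since the abstract stresses that an optimal primer sequence is essential for PCR specificity and efficiency, a minimal sketch of one textbook check may help: the Wallace rule for estimating the melting temperature of short (<14 nt) primers. The primer string is a hypothetical example and the rule is a rough estimate, not the method of any specific design tool:

```python
def melting_temp(primer: str) -> int:
    # Wallace rule: Tm ~= 2*(A+T) + 4*(G+C) degrees C, valid only for short primers
    p = primer.upper()
    at = p.count("A") + p.count("T")
    gc = p.count("G") + p.count("C")
    return 2 * at + 4 * gc

def gc_content(primer: str) -> float:
    # fraction of G/C bases; roughly 0.4-0.6 is a common design guideline
    p = primer.upper()
    return (p.count("G") + p.count("C")) / len(p)

primer = "ATGCGTAGCT"  # hypothetical 10-mer
print(melting_temp(primer))  # -> 30
print(gc_content(primer))    # -> 0.5
```

Real design tools use nearest-neighbor thermodynamic models instead; the Wallace rule is shown only because it makes the sequence-to-property dependence explicit in a few lines.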
A Bioinformatic Strategy to Rapidly Characterize cDNA Libraries. G. Charles Ostermeier, David J. Dix and Stephen A. Krawetz. Departments of Obstetrics and Gynecology, Center for Molecular Medicine and Genetics, & Institute for Scientific Computing, Wayne State University...
Science Academies' Refresher Course on Bioinformatics in Modern Biology. Information and Announcements, Resonance – Journal of Science Education, Volume 19, Issue 2, February 2014, p. 192.
Weisstein, Anton E.
The patterns of variation within a molecular sequence data set result from the interplay between population genetic, molecular evolutionary and macroevolutionary processes—the standard purview of evolutionary biologists. Elucidating these patterns, particularly for large data sets, requires an understanding of the structure, assumptions and limitations of the algorithms used by bioinformatics software—the domain of mathematicians and computer scientists. As a result, bioinformatics often suffers a ‘two-culture’ problem because of the lack of broad overlapping expertise between these two groups. Collaboration among specialists in different fields has greatly mitigated this problem among active bioinformaticians. However, science education researchers report that much of bioinformatics education does little to bridge the cultural divide, the curriculum too focused on solving narrow problems (e.g. interpreting pre-built phylogenetic trees) rather than on exploring broader ones (e.g. exploring alternative phylogenetic strategies for different kinds of data sets). Herein, we present an introduction to the mathematics of tree enumeration, tree construction, split decomposition and sequence alignment. We also introduce off-line downloadable software tools developed by the BioQUEST Curriculum Consortium to help students learn how to interpret and critically evaluate the results of standard bioinformatics analyses. PMID:23821621
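The "mathematics of tree enumeration" referenced above begins with a standard result: there are (2n - 5)!! distinct unrooted bifurcating tree topologies on n labeled taxa, so the search space explodes rapidly. A short sketch of that count (a standard textbook formula, independent of the BioQUEST tools):

```python
def num_unrooted_trees(n: int) -> int:
    # (2n - 5)!! = 3 * 5 * ... * (2n - 5): the number of distinct
    # unrooted bifurcating topologies on n labeled taxa (n >= 3)
    count = 1
    for k in range(3, 2 * n - 4, 2):
        count *= k
    return count

for n in (4, 5, 10):
    print(n, num_unrooted_trees(n))
# 4 taxa -> 3 trees, 5 -> 15, 10 -> 2027025: why heuristic search is unavoidable
```

Mentally stepping through the super-exponential growth is exactly the kind of "broader exploration" the authors argue bioinformatics curricula should include.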
Lima, Andre O. S.; Garces, Sergio P. S.
Bioinformatics is one of the fastest growing scientific areas over the last decade. It focuses on the use of informatics tools for the organization and analysis of biological data. An example of their importance is the availability nowadays of dozens of software programs for genomic and proteomic studies. Thus, there is a growing field (private…
Gelbart, Hadas; Yarden, Anat
Following the rationale that learning is an active process of knowledge construction as well as enculturation into a community of experts, we developed a novel web-based learning environment in bioinformatics for high-school biology majors in Israel. The learning environment enables the learners to actively participate in a guided inquiry process…
Kappa casein (CSN3) gene is a variant of the milk protein highly conserved in mammalian species. Genetic variations in CSN3 gene of six mammalian livestock species were investigated using bioinformatics approach. A total of twenty-seven CSN3 gene sequences with corresponding amino acids belonging to the six ...
Lewis, Jamie; Bartlett, Andrew; Atkinson, Paul
Bioinformatics--the so-called shotgun marriage between biology and computer science--is an interdiscipline. Despite interdisciplinarity being seen as a virtue, for having the capacity to solve complex problems and foster innovation, it has the potential to place projects and people in anomalous categories. For example, valorised…
Bioinformatics and Biotechnology, DES, FBAS, International Islamic University, Islamabad, Pakistan. Accepted 26 April 2013. The Tp73 ... New discoveries about the control and function of p73 are still in progress and it is ... modern research for diagnostics and evolutionary history of p73.
Perry, William L
Wright, Victoria Ann; Vaughan, Brendan W; Laurent, Thomas; Lopez, Rodrigo; Brooksbank, Cath; Schneider, Maria Victoria
Today's molecular life scientists are well educated in the emerging experimental tools of their trade, but when it comes to training on the myriad of resources and tools for dealing with biological data, a less ideal situation emerges. Often bioinformatics users receive no formal training on how to make the most of the bioinformatics resources and tools available in the public domain. The European Bioinformatics Institute, which is part of the European Molecular Biology Laboratory (EMBL-EBI), holds the world's most comprehensive collection of molecular data, and training the research community to exploit this information is embedded in the EBI's mission. We have evaluated eLearning, in parallel with face-to-face courses, as a means of training users of our data resources and tools. We anticipate that eLearning will become an increasingly important vehicle for delivering training to our growing user base, so we have undertaken an extensive review of Learning Content Management Systems (LCMSs). Here, we describe the process that we used, which considered the requirements of trainees, trainers and systems administrators, as well as taking into account our organizational values and needs. This review describes the literature survey, user discussions and scripted platform testing that we performed to narrow down our choice of platform from 36 to a single platform. We hope that it will serve as guidance for others who are seeking to incorporate eLearning into their bioinformatics training programmes.
Nehm, Ross H.; Budd, Ann F.
NMITA is a reef coral biodiversity database that we use to introduce students to the expansive realm of bioinformatics beyond genetics. We introduce a series of lessons that have students use this database, thereby accessing real data that can be used to test hypotheses about biodiversity and evolution while targeting the "National Science …
Pal, Sankar K; Ganivada, Avatharam
This book provides a uniform framework describing how fuzzy rough granular neural network technologies can be formulated and used in building efficient pattern recognition and mining models. It also discusses the formation of granules in the notion of both fuzzy and rough sets. Judicious integration in forming fuzzy-rough information granules based on lower approximate regions enables the network to determine the exactness in class shape as well as to handle the uncertainties arising from overlapping regions, resulting in efficient and speedy learning with enhanced performance. Layered network and self-organizing analysis maps, which have a strong potential in big data, are considered as basic modules. The book is structured according to the major phases of a pattern recognition system (e.g., classification, clustering, and feature selection) with a balanced mixture of theory, algorithm, and application. It covers the latest findings as well as directions for future research, particularly highlighting bioinf...
The aim of the graduation thesis is to develop an application for smartphones with the Android operating system, which will enable translation of text from a selected language into a target language. The basic idea of the thesis is to transform the experience of the existing language translator (Bing Translator) into a functional smartphone application. The database contains the 12,000 most commonly used words in everyday English. On application installation the users are limited to smaller nu...
Carl, Michael; Schaeffer, Moritz
Tirkkonen-Condit (2005: 407–408) argues that “It looks as if literal translation is [the result of] a default rendering procedure”. As a corollary, more literal translations should be easier to process, and less literal ones should be associated with more cognitive effort. In order to assess this hypothesis, we operationalize translation literality as 1. the word-order similarity of the source and the target text and 2. the number of possible different translation renderings. We develop a literality metric and apply it on a set of manually word and sentence aligned alternative translations. Drawing on the monitor hypothesis (Tirkkonen-Condit 2005) and a model of shared syntax (Hartsuiker et al. 2004) we develop a model of translation effort based on priming strength: shared combinatorial nodes and meaning representations are activated through automatized bilingual priming processes where more strongly...
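The first component of the literality metric, word-order similarity, can be illustrated by counting crossing alignment links. The sketch below assumes a simplified 1:1 word alignment (alignment[i] gives the target position of source word i) and is an illustration, not the authors' exact metric:

```python
def crossings(alignment):
    # number of pairs of alignment links that cross, i.e. word-order changes
    n = len(alignment)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if alignment[i] > alignment[j])

def order_similarity(alignment):
    # 1.0 = monotone ("literal" word order), 0.0 = fully reversed
    n = len(alignment)
    max_cross = n * (n - 1) // 2
    return 1.0 - crossings(alignment) / max_cross if max_cross else 1.0

print(order_similarity([0, 1, 2, 3]))  # -> 1.0 (monotone translation)
print(order_similarity([3, 2, 1, 0]))  # -> 0.0 (completely reordered)
```

Under the monitor hypothesis, lower similarity values would be expected to correlate with greater cognitive effort.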
Obed Madsen, Søren
This paper shows empirically how actors have difficulties with translating strategy texts. The paper uses four cases as different examples of what happens, and what might be difficult, when actors translate organizational texts. In order to explore this, it draws on a translation training method from translation theory. The study shows that for those who have produced the text, it is difficult to translate a strategy where they have to change the words so others who don't understand the language in the text can understand it. It also shows that for those who haven't been a part of the production, it is very difficult to translate the text at all. The findings challenge the notion that actors understand all texts and that managers per se can translate a text.
Schlesinger, William H
Translational ecology--a special discipline aimed to improve the accessibility of science to policy makers--will help hydrogeologists contribute to the solution of pressing environmental problems. Patterned after translational medicine, translational ecology is a partnership to ensure that the right science gets done in a timely fashion, so that it can be communicated to those who need it. © 2013, National Ground Water Association.
Mercedes Amanda Case
The Metalanguage of Translation, sections of which contain materials originally published in volume nineteen of the international translation studies journal Target (2007), presents a compilation of eleven position articles, written by eleven contributors who draw attention to the often diametric variations between the practice and conceptualization of translation studies and the language we use to describe it. This volume provides a multiplicity of metalinguistic topics covering everything from terminology and bibliography to epistemology and localization.
Zhang, Biao; Xiong, Deyi; Su, Jinsong; Duan, Hong; Zhang, Min
Models of neural machine translation are often from a discriminative family of encoder-decoders that learn a conditional distribution of a target sentence given a source sentence. In this paper, we propose a variational model to learn this conditional distribution for neural machine translation: a variational encoder-decoder model that can be trained end-to-end. Different from the vanilla encoder-decoder model that generates target translations from hidden representations of source sentences al...
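The conditional distribution described here is typically trained by maximizing the standard variational evidence lower bound; with latent variable $z$, source $x$ and target $y$, it takes the form below (notation follows the usual VAE convention and is not necessarily the paper's exact formulation):

```latex
\log p_\theta(y \mid x) \;\ge\;
\mathbb{E}_{q_\phi(z \mid x, y)}\!\left[ \log p_\theta(y \mid z, x) \right]
\;-\; \mathrm{KL}\!\left( q_\phi(z \mid x, y) \,\middle\|\, p_\theta(z \mid x) \right)
```

The first term rewards reconstructing the target from the latent code, while the KL term keeps the inference distribution close to the prior conditioned on the source, which is what makes end-to-end training tractable.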
Gompel, M. van; Bosch, A.P.J. van den
In this paper we present new research in translation assistance. We describe a system capable of translating native language (L1) fragments to foreign language (L2) fragments in an L2 context. Practical applications of this research can be framed in the context of second language learning. The type
This paper presents an analysis of the structures the discourse marker vajon forms in translated Hungarian fiction. Although translation data has been deployed in the study of discourse markers (Aijmer & Simon-Vandenbergen, 2004), such studies do not account for translation-specific phenomena which can influence the data of their analysis. In addition, translated discourse markers could offer insights into the idiosyncratic properties of translated texts as well as the culturally defined norms of translation that guide the creation of target texts. The analysis presented in this paper extends the cross-linguistic approach beyond contrastive analysis with a detailed investigation of two corpora of translated texts in order to identify patterns which could be a sign of translation or genre norms impacting the target texts. As a result, a distinct, diverging pattern emerges between the two corpora: patterns of explicit polarity show a marked difference. However, further research is needed to clarify whether these are due to language, genre, or translation norms.
The aim of this paper is to analyze the role that translation plays as cultural mediator, as it is already widely accepted that translation involves not just two languages, but two cultures, two worlds that are brought into close contact with each other. Obviously, between the two cultures, the two worlds that translation compares and contrasts, there are both similarities and dissimilarities. What is of interest to us is the way in which dissimilarities should be approached in the process of translation, whether they should be domesticated or foreignized as Venuti put it, whether the reader should be brought closer to the text or the text closer to the reader.
Inlow, Jennifer K.; Miller, Paige; Pittman, Bethany
We describe two bioinformatics exercises intended for use in a computer laboratory setting in an upper-level undergraduate biochemistry course. To introduce students to bioinformatics, the exercises incorporate several commonly used bioinformatics tools, including BLAST, that are freely available online. The exercises build upon the students'…
Furge, Laura Lowe; Stevens-Truss, Regina; Moore, D. Blaine; Langeland, James A.
Bioinformatics education for undergraduates has been approached primarily in two ways: introduction of new courses with largely bioinformatics focus or introduction of bioinformatics experiences into existing courses. For small colleges such as Kalamazoo, creation of new courses within an already resource-stretched setting has not been an option.…
Attwood, Terri K.; Selimas, Ioannis; Buis, Rob; Altenburg, Ruud; Herzog, Robert; Ledent, Valerie; Ghita, Viorica; Fernandes, Pedro; Marques, Isabel; Brugman, Marc
EMBER was a European project aiming to develop bioinformatics teaching materials on the Web and CD-ROM to help address the recognised skills shortage in bioinformatics. The project grew out of pilot work on the development of an interactive web-based bioinformatics tutorial and the desire to repackage that resource with the help of a professional…
Shachak, Aviv; Ophir, Ron; Rubin, Eitan
The need to support bioinformatics training has been widely recognized by scientists, industry, and government institutions. However, the discussion of instructional methods for teaching bioinformatics is only beginning. Here we report on a systematic attempt to design two bioinformatics workshops for graduate biology students on the basis of…
Burgess Shane C
Background: This paper describes techniques for accelerating the performance of the string set matching problem with particular emphasis on applications in computational proteomics. The process of matching peptide sequences against a genome translated in six reading frames is part of a proteogenomic mapping pipeline that is used as a case study. The Aho-Corasick algorithm is adapted for execution in field programmable gate array (FPGA) devices in a manner that optimizes space and performance. In this approach, the traditional Aho-Corasick finite state machine (FSM) is split into smaller FSMs, operating in parallel, each of which matches up to 20 peptides in the input translated genome. Each of the smaller FSMs is further divided into five simpler FSMs such that each simple FSM operates on a single bit position in the input (five bits are sufficient for representing all amino acids and special symbols in protein sequences). Results: This bit-split organization of the Aho-Corasick implementation enables efficient utilization of the limited random access memory (RAM) resources available in typical FPGAs. The use of on-chip RAM as opposed to FPGA logic resources for FSM implementation also enables rapid reconfiguration of the FPGA without the place and routing delays associated with complex digital designs. Conclusion: Experimental results show storage efficiencies of over 80% for several data sets. Furthermore, the FPGA implementation executing at 100 MHz is nearly 20 times faster than an implementation of the traditional Aho-Corasick algorithm executing on a 2.67 GHz workstation.
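The baseline that the FPGA design splits apart is the classic Aho-Corasick automaton: a trie with failure links that matches a whole pattern set in one pass over the text. A minimal software sketch (the peptide patterns and toy sequence are hypothetical examples, not the paper's data):

```python
from collections import deque

def build_automaton(patterns):
    # build the Aho-Corasick FSM: a trie (goto), failure links,
    # and an output function listing the patterns ending at each state
    goto = [{}]
    out = [set()]
    for pat in patterns:
        s = 0
        for ch in pat:
            if ch not in goto[s]:
                goto.append({})
                out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(pat)
    fail = [0] * len(goto)
    queue = deque(goto[0].values())      # depth-1 states fail to the root
    while queue:
        s = queue.popleft()
        for ch, t in goto[s].items():
            queue.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]       # inherit matches from the fail state
    return goto, fail, out

def search(text, patterns):
    # scan the text once, reporting (start_index, pattern) for every match
    goto, fail, out = build_automaton(patterns)
    s, hits = 0, []
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        for pat in sorted(out[s]):
            hits.append((i - len(pat) + 1, pat))
    return hits

# hypothetical peptides matched against a toy translated sequence
print(search("MKTAYIAKQR", ["MKT", "KTA", "AKQ"]))
# -> [(0, 'MKT'), (1, 'KTA'), (6, 'AKQ')]
```

The bit-split variant in the paper replaces each character-indexed transition table with five single-bit FSMs whose results are intersected, which is what makes the design fit FPGA block RAM.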
Roche-Lima, Abiel; Thulasiram, Ruppa K
Finite automata in which each transition is augmented with an output label in addition to the familiar input label are considered finite-state transducers. Transducers have been used to analyze some fundamental issues in bioinformatics. Weighted finite-state transducers have been proposed for pairwise alignment of DNA and protein sequences, as well as for developing kernels for computational biology. Machine learning algorithms for conditional transducers have been implemented and used for DNA sequence analysis. Transducer learning algorithms are based on conditional probability computation, which is calculated using techniques such as pair-database creation, normalization (with Maximum-Likelihood normalization) and parameter optimization (with Expectation-Maximization, EM). These techniques are intrinsically costly to compute, and even more so when applied to bioinformatics, because the database sizes are large. In this work, we describe a parallel implementation of an algorithm that learns conditional transducers using these techniques. The algorithm is oriented to bioinformatics applications, such as alignments, phylogenetic trees, and other genome evolution studies. Several experiments were performed using the parallel and sequential algorithms on WestGrid (specifically, on the Breeze cluster). The results show that our parallel algorithm is scalable, because execution times are reduced considerably when the data size parameter is increased. Another experiment varied the precision parameter; in this case, we obtained smaller execution times with the parallel algorithm. Finally, the number of threads used to execute the parallel algorithm on the Breeze cluster was varied. In this last experiment, we found that speedup increases considerably when more threads are used; however, there is a convergence for thread counts equal to or greater than 16.
Indonesia has a huge amount of biodiversity, which may contain many biomaterials for pharmaceutical application. The potency of these resources should be explored to discover new drugs for human health. However, bioactive screening using conventional methods is very expensive and time-consuming. Therefore, we developed a methodology for screening the potential of natural resources based on bioinformatics. The method is built on the fact that organisms in the same taxon will have similar genes, metabolism and secondary metabolite products. We then employ bioinformatics to explore the potency of biomaterials from Indonesian biodiversity by comparing species with well-known taxa containing the active compound, through published papers or chemical databases. We then analyze the drug-likeness, bioactivity and target proteins of the active compound based on its molecular structure. The target protein is examined for its interactions with other proteins in the cell to determine the action mechanism of the active compound at the cellular level, as well as to predict its side effects and toxicity. Using this method, we succeeded in screening anti-cancer, immunomodulatory and anti-inflammatory agents from Indonesian biodiversity. For example, we found an anti-cancer agent from a marine invertebrate by employing the method. The anti-cancer agent was explored based on the isolated compounds of the marine invertebrate from published articles and databases; we then identified the protein target, followed by molecular pathway analysis. The data suggested that the active compound of the invertebrate is able to kill cancer cells. Further, we collected and extracted the active compound from the invertebrate and examined its activity on a cancer cell line (MCF7). The MTT result showed that the methanol extract of the marine invertebrate was highly potent in killing MCF7 cells. Therefore, we concluded that bioinformatics is a cheap and robust way to explore bioactives from Indonesian biodiversity as a source of drugs and another
Translating Calvino: challenges and opportunities. Five proposals for Calvino's translators in the new millennium. This paper suggests an application of the concepts Calvino described in Six Memos for the Next Millennium to the translation of three of his novels – Le città invisibili, Il sentiero dei nidi di ragno and Ultimo viene il corvo. Each of Calvino's concepts (lightness, quickness, exactitude, visibility, multiplicity and consistency) is defined according to the Italian writer, then identified in its practical manifestations in the three novels, and afterwards adapted to the translation process. The result is a possible mechanism of translation and a set of suggestions for Calvino's translators.
Iwaki, Aya; Ohnuki, Shinsuke; Suga, Yohei; Izawa, Shingo; Ohya, Yoshikazu
Vanillin, generated by acid hydrolysis of lignocellulose, acts as a potent inhibitor of the growth of the yeast Saccharomyces cerevisiae. Here, we investigated the cellular processes affected by vanillin using high-content, image-based profiling. Among 4,718 non-essential yeast deletion mutants, the morphology of those defective in the large ribosomal subunit showed significant similarity to that of vanillin-treated cells. The defects in these mutants were clustered in three domains of the ribosome: the mRNA tunnel entrance, exit and backbone required for small subunit attachment. To confirm that vanillin inhibited ribosomal function, we assessed polysome and messenger ribonucleoprotein granule formation after treatment with vanillin. Analysis of polysome profiles showed disassembly of the polysomes in the presence of vanillin. Processing bodies and stress granules, which are composed of non-translating mRNAs and various proteins, were formed after treatment with vanillin. These results suggest that vanillin represses translation in yeast cells.
Trutschl, Marjan; Dinkova, Tzvetanka D; Rhoads, Robert E
The relationships between genes in neighboring clusters in a self-organizing map (SOM) and properties attributed to them are sometimes difficult to discern, especially when heterogeneous datasets are used. We report a novel approach to identify correlations between heterogeneous datasets. One dataset, derived from microarray analysis of polysomal distribution, contained changes in the translational efficiency of Caenorhabditis elegans mRNAs resulting from loss of specific eIF4E isoform. The other dataset contained expression patterns of mRNAs across all developmental stages. Two algorithms were applied to these datasets: a classical scatter plot and an SOM. The outputs were linked using a two-dimensional color scale. This revealed that an mRNA's eIF4E-dependent translational efficiency is strongly dependent on its expression during development. This correlation was not detectable with a traditional one-dimensional color scale.
Vetrivel, Umashankar; Pilla, Kalabharath
Historically, live Linux distributions for bioinformatics have paved the way for portability of the bioinformatics workbench in a platform-independent manner. However, most existing live Linux distributions limit their usage to sequence analysis and basic molecular visualization programs and are devoid of data persistence. Hence, Open Discovery, a live Linux distribution, has been developed with the capability to perform complex tasks like molecular modeling, docking and molecular dynamics in a swift manner. Furthermore, it is also equipped with a complete sequence analysis environment and is capable of running Windows executable programs in a Linux environment. Open Discovery portrays the advanced customizable configuration of Fedora, with data persistency accessible via USB drive or DVD. Open Discovery is distributed free under the Academic Free License (AFL) and can be downloaded from http://www.OpenDiscovery.org.in.
This book presents selected papers on statistical model development related mainly to the fields of Biostatistics and Bioinformatics. The coverage of the material falls squarely into the following categories: (a) Survival analysis and multivariate survival analysis, (b) Time series and longitudinal data analysis, (c) Statistical model development and (d) Applied statistical modelling. Innovations in statistical modelling are presented throughout each of the four areas, with some intriguing new ideas on hierarchical generalized non-linear models and on frailty models with structural dispersion, just to mention two examples. The contributors include distinguished international statisticians such as Philip Hougaard, John Hinde, Il Do Ha, Roger Payne and Alessandra Durio, among others, as well as promising newcomers. Some of the contributions have come from researchers working in the BIO-SI research programme on Biostatistics and Bioinformatics, centred on the Universities of Limerick and Galway in Ireland and fu...
Li, Xiao; Zhang, Yizheng
It is widely recognized that the exchange, distribution, and integration of biological data are the keys to improving bioinformatics and genome biology in the post-genomic era. However, the problem of exchanging and integrating biological data has not been solved satisfactorily. The eXtensible Markup Language (XML) is rapidly spreading as an emerging standard for structuring documents to exchange and integrate data on the World Wide Web (WWW). Web services are the next generation of the WWW and are founded upon the open standards of the W3C (World Wide Web Consortium) and IETF (Internet Engineering Task Force). This paper presents XML and Web services technologies and their use as an appropriate solution to the problem of bioinformatics data exchange and integration.
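As a concrete illustration of the XML-based exchange the paper advocates, a toy sequence record can be serialized and re-parsed with the Python standard library alone; the element names below are invented for the example and are not any published bioinformatics schema:

```python
import xml.etree.ElementTree as ET

# build a minimal, hypothetical sequence record as an XML tree
record = ET.Element("sequenceRecord", id="seq001")
ET.SubElement(record, "organism").text = "Escherichia coli"
ET.SubElement(record, "sequence", type="DNA").text = "ATGGCTAGC"

# serialize for exchange; any consumer can parse it back without
# knowing the producer's internal data structures
xml_text = ET.tostring(record, encoding="unicode")
parsed = ET.fromstring(xml_text)
print(parsed.get("id"), parsed.find("organism").text)  # -> seq001 Escherichia coli
```

The point of the paper is exactly this decoupling: once the structure is declared in the document itself, integration across tools and sites reduces to agreeing on a schema.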
At the end of January I travelled to the States to speak at and attend the first O'Reilly Bioinformatics Technology Conference. It was a large, well-organized and diverse meeting with an interesting history. Although the meeting was not a typical academic conference, its style will, I am sure, become more typical of meetings in both biological and computational sciences.Speakers at the event included prominent bioinformatics researchers such as Ewan Birney, Terry Gaasterland and Lincoln Stein; authors and leaders in the open source programming community like Damian Conway and Nat Torkington; and representatives from several publishing companies including the Nature Publishing Group, Current Science Group and the President of O'Reilly himself, Tim O'Reilly. There were presentations, tutorials, debates, quizzes and even a 'jam session' for musical bioinformaticists.
Rogowski, Wolf; John, Jürgen; IJzerman, Maarten Joost; Scheffler, Richard M.
Translational health economics (THE) can be defined as the use of theoretical concepts and empirical methods in health economics to bridge the gap between the decision to fund and use a new health technology in clinical practice (the backend of translational medicine) and the decision to invest into
specifications. However, to take advantage of the automated analysis of Alloy, the model-oriented VDM specifications must be translated into constraint-based Alloy specifications. We describe how a subset of VDM can be translated into Alloy and how assertions can be expressed in VDM and checked by the Alloy...
Culhane, P. T.
Recent experiments in machine translation have given the semantic elements of collocation in Russian more objective criteria. Soviet linguists in search of semantic relationships have attempted to devise a semantic synthesis for construction of a basic language for machine translation. One such effort is summarized. (CHK)
The challenges of intercultural communication are an integral part of many undergraduate business communication courses. Marketing gaffes clearly illustrate the pitfalls of translation and underscore the importance of a knowledge of the culture with which one is attempting to communicate. A good way to approach the topic of translation pitfalls in…
Bossé, Michael J.; Adu-Gyamfi, Kwaku; Chandler, Kayla
Understanding how students translate between mathematical representations is of both practical and theoretical importance. This study examined students' processes in their generation of symbolic and graphic representations of given polynomial functions. The purpose was to investigate how students perform these translations. The result of the study…
Objectifying the cultural diversity of visual field methods, and the analysis of balancing the cultural known and unknown through anthropological analysis (aided by the analytical concept of translation (Edwin Ardener 1989)).
Horner, Bruce; Tetreault, Laura
This article explores translation as a useful point of departure and framework for taking a translingual approach to writing engaging globalization. Globalization and the knowledge economy are putting renewed emphasis on translation as a key site of contest between a dominant language ideology of monolingualism aligned with fast capitalist…
Babaee, Siamak; Wan Yahya, Wan Roselezam; Babaee, Ruzbeh
Some scholars (Bassnett-McGuire, Catford, Brislin) suggest that a good piece of translation should be a strict reflection of the style of the original text while some others (Gui, Newmark, Wilss) consider the original text untranslatable unless it is reproduced. Opposing views by different critics suggest that translation is still a challenging…
This article looks at issues affecting Robert Garioch's translation into Scots of a sonnet from Giuseppe Gioachino Belli's Romaneschi collection. It begins with the discussion of a problem involved in writing in dialects with no settled written standard. This 'standardizing' poetry is then looked at in terms of translation and theories of the…
Henrique de Oliveira Lee
This article will question the pertinence of understanding interculturality in terms of translation between cultures. I shall study this hypothesis in two ways: (1) the cosmopolitan horizon, which the idea of translation may implicate; (2) the critique of the premises of unique origin and homogeneity of cultures which this hypothesis makes possible.
Mees, Inger M.; Dragsted, Barbara; Gorm Hansen, Inge
On the basis of a pilot study using speech recognition (SR) software, this paper attempts to illustrate the benefits of adopting an interdisciplinary approach in translator training. It shows how the collaboration between phoneticians, translators and interpreters can (1) advance research, (2) ha...
Perez-Diaz, Sonia; Shen, Liyong
The algebraic translational surface is a typical modeling surface in computer-aided design and the architecture industry. In this paper, we give a necessary and sufficient condition for an algebraic surface to have a standard parametric representation, and our proof is constructive. If the given algebraic surface is translational, then we can compute a standard parametric representation for the surface.
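In parametric form, a translational surface is generated by translating one space curve along another; this is the standard definition, and the example curves below are illustrative rather than taken from the paper:

```latex
S(u, v) = P(u) + Q(v), \qquad
\text{e.g. } P(u) = (u,\, 0,\, u^2),\quad Q(v) = (0,\, v,\, v^3)
\;\Longrightarrow\; z = x^2 + y^3 .
```

The paper's question is the converse: given the implicit algebraic surface (here $z = x^2 + y^3$), decide whether such a decomposition $P(u) + Q(v)$ exists and, if so, construct it.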
Zheng, Li-Wei; Wang, Qi; Zhou, Xue-Dong
Over the last decade, as tremendous innovations have been achieved in scientific technology, translational medicine has come into the focus of academic medicine, and significant intellectual and financial efforts have been made to initiate a multitude of bench-to-bedside projects. Translational medicine is described as the transfer of new understandings of disease mechanisms gained in the laboratory into the development of new methods for diagnosis, therapy, and prevention, and their first testing in humans; it is also described as patient-oriented population research and the translation of results from clinical studies into everyday clinical practice and health decision making. Translational medicine is a focal point of the current academic field, and it is crucial for improving the living standards of populations and for renewing research ideas and technologies. There are, however, significant obstacles to the practice of translational medicine. Here we review the background, concept, and current situation of translational dental medicine, along with the key components of and obstacles to translational medicine.
Bentires-Alj, Mohamed; Rajan, Abinaya; van Harten, Wim
Translational research leaves no one indifferent, and everyone expects a particular benefit. We as EU-LIFE (www.eu-life.eu), an alliance of 13 research institutes in European life sciences, would like to share our experience in an attempt to identify measures to promote translational research...
With the ever-increasing availability of linked multilingual lexical resources, there is renewed interest in extending Natural Language Processing (NLP) applications so that they can make use of the vast set of lexical knowledge bases available in the Semantic Web. In the case of Machine Translation, MT systems can potentially benefit from such resources. Unknown words and ambiguous translations are among the most common sources of error. In this paper, we attempt to minimise these types of errors by interfacing Statistical Machine Translation (SMT) models with Linked Open Data (LOD) resources such as DBpedia and BabelNet. We perform several experiments based on the SMT system Moses and evaluate multiple strategies for exploiting knowledge from multilingual linked data in automatically translating named entities. We conclude with an analysis of best practices for multilingual linked data sets in order to optimise their benefit to multilingual and cross-lingual applications.
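The core idea of the abstract above — preferring entity translations harvested from linked data over the statistical model's guess — can be sketched as follows. All names, the toy lexicon, and the fallback function are hypothetical illustrations; the paper's actual integration with Moses is more involved.

```python
# Hypothetical sketch: translate named entities via a lexicon of
# multilingual labels (standing in for DBpedia/BabelNet lookups),
# falling back to the SMT model only for tokens the lexicon misses.

def translate_entities(tokens, lod_lexicon, smt_fallback):
    """Prefer linked-data translations; otherwise use the SMT model."""
    return [lod_lexicon.get(tok, smt_fallback(tok)) for tok in tokens]

# toy EN -> DE entity labels, as might be harvested from DBpedia
lexicon = {"Vienna": "Wien", "Danube": "Donau"}

result = translate_entities(
    ["Vienna", "lies", "on", "the", "Danube"],
    lexicon,
    lambda tok: tok,  # placeholder for the real Moses/SMT lookup
)
print(result)  # ['Wien', 'lies', 'on', 'the', 'Donau']
```

In a real pipeline the lexicon lookup would typically be injected into the decoder (e.g. as forced translation options) rather than applied token by token after the fact.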
"Scientific community" refers to a group of people collaborating on scientific-research-related activities who also share common goals, interests, and values. Such communities play a key role in many bioinformatics activities. Communities may be linked to a specific location or institute, or involve people working at many different institutions and locations. Education and training is typically an important component of these communities, providing a valuable context in which to develop skills and expertise, while also strengthening links and relationships within the community. Scientific communities facilitate: (i) the exchange and development of ideas and expertise; (ii) career development; (iii) coordinated funding activities; (iv) interactions and engagement with professionals from other fields; and (v) other activities beneficial to individual participants, communities, and the scientific field as a whole. It is thus beneficial at many different levels to understand the general features of successful, high-impact bioinformatics communities; how individual participants can contribute to the success of these communities; and the role of education and training within these communities. We present here a quick guide to building and maintaining a successful, high-impact bioinformatics community, along with an overview of the general benefits of participating in such communities. This article grew out of contributions made by organizers, presenters, panelists, and other participants of the ISMB/ECCB 2013 workshop "The 'How To Guide' for Establishing a Successful Bioinformatics Network" at the 21st Annual International Conference on Intelligent Systems for Molecular Biology (ISMB) and the 12th European Conference on Computational Biology (ECCB).
Designers have a saying that "the joy of an early release lasts but a short time. The bitterness of an unusable system lasts for years." It is indeed disappointing to discover that your data resources are not being used to their full potential. Not only have you invested your time, effort, and research grant in the project, but you may face costly redesigns if you want to improve the system later. This scenario would be less likely if the product were designed to provide users with exactly what they need, so that it is fit for purpose before its launch. We work at the EMBL-European Bioinformatics Institute (EMBL-EBI), and we consult extensively with life science researchers to find out what they need from biological data resources. We have found that although users believe that the bioinformatics community is providing accurate and valuable data, they often find the interfaces to these resources tricky to use and navigate. We believe that if you can find out what your users want even before you create the first mock-up of a system, the final product will provide a better user experience. This would encourage more people to use the resource and give them greater access to the data, which could ultimately lead to more scientific discoveries. In this paper, we explore the need for a user-centred design (UCD) strategy when designing bioinformatics resources and illustrate this with examples from our work at EMBL-EBI. Our aim is to introduce the reader to how selected UCD techniques may be successfully applied to software design for bioinformatics.