WorldWideScience

Sample records for translational bioinformatics applications

  1. Translational Bioinformatics and Clinical Research (Biomedical) Informatics.

    Science.gov (United States)

    Sirintrapun, S Joseph; Zehir, Ahmet; Syed, Aijazuddin; Gao, JianJiong; Schultz, Nikolaus; Cheng, Donavan T

    2015-06-01

    Translational bioinformatics and clinical research (biomedical) informatics are the primary domains related to informatics activities that support translational research. Translational bioinformatics focuses on computational techniques in genetics, molecular biology, and systems biology. Clinical research (biomedical) informatics involves the use of informatics in discovery and management of new knowledge relating to health and disease. This article details 3 projects that are hybrid applications of translational bioinformatics and clinical research (biomedical) informatics: The Cancer Genome Atlas, the cBioPortal for Cancer Genomics, and the Memorial Sloan Kettering Cancer Center clinical variants and results database, all designed to facilitate insights into cancer biology and clinical/therapeutic correlations. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. Bioinformatics in translational drug discovery.

    Science.gov (United States)

    Wooller, Sarah K; Benstead-Hume, Graeme; Chen, Xiangrong; Ali, Yusuf; Pearl, Frances M G

    2017-08-31

    Bioinformatics approaches are becoming ever more essential in translational drug discovery both in academia and within the pharmaceutical industry. Computational exploitation of the increasing volumes of data generated during all phases of drug discovery is enabling key challenges of the process to be addressed. Here, we highlight some of the areas in which bioinformatics resources and methods are being developed to support the drug discovery pipeline. These include the creation of large data warehouses, bioinformatics algorithms to analyse 'big data' that identify novel drug targets and/or biomarkers, programs to assess the tractability of targets, and prediction of repositioning opportunities that use licensed drugs to treat additional indications. © 2017 The Author(s).

  3. Chapter 16: text mining for translational bioinformatics.

    Science.gov (United States)

    Cohen, K Bretonnel; Hunter, Lawrence E

    2013-04-01

    Text mining for translational bioinformatics is a new field with tremendous research potential. It is a subfield of biomedical natural language processing that concerns itself directly with the problem of relating basic biomedical research to clinical practice, and vice versa. Applications of text mining fall both into the category of T1 translational research (translating basic science results into new interventions) and that of T2 translational research, or translational research for public health. Potential use cases include better phenotyping of research subjects and pharmacogenomic research. A variety of methods for evaluating text mining applications exist, including corpora, structured test suites, and post hoc judging. Two basic principles of linguistic structure are relevant for building text mining applications. One is that linguistic structure consists of multiple levels. The other is that every level of linguistic structure is characterized by ambiguity. There are two basic approaches to text mining: rule-based, also known as knowledge-based; and machine-learning-based, also known as statistical. Many systems are hybrids of the two approaches. Shared tasks have had a strong effect on the direction of the field. Like all translational bioinformatics software, text mining software for translational bioinformatics can be considered health-critical and should be subject to the strictest standards of quality assurance and software testing.
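
    As a minimal illustration of the rule-based approach mentioned above, the sketch below (in Python, with an invented pattern and example sentence that are not from the chapter) tags candidate gene mentions with a regular expression; its false positives also illustrate the ambiguity problem the authors highlight.

        import re

        # Very rough illustrative pattern: an uppercase letter followed by at least two
        # more uppercase letters or digits (a crude stand-in for a gene-symbol rule).
        GENE_PATTERN = re.compile(r"\b[A-Z][A-Z0-9]{2,}\b")

        def tag_gene_mentions(sentence):
            """Return (mention, start, end) tuples for candidate gene symbols."""
            return [(m.group(), m.start(), m.end()) for m in GENE_PATTERN.finditer(sentence)]

        if __name__ == "__main__":
            text = "Mutations in BRCA1 and TP53 were associated with the phenotype."
            # Note the ambiguity problem: tokens such as DNA or PCR would also match.
            print(tag_gene_mentions(text))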

  4. Data mining for bioinformatics applications

    CERN Document Server

    Zengyou, He

    2015-01-01

    Data Mining for Bioinformatics Applications provides valuable information on data mining methods that have been widely used for solving real bioinformatics problems, including problem definition, data collection, data preprocessing, modeling, and validation. The text uses an example-based method to illustrate how to apply data mining techniques to solve real bioinformatics problems, containing 45 bioinformatics problems that have been investigated in recent research. For each example, the entire data mining process is described, ranging from data preprocessing to modeling and result validation. The book provides valuable information on data mining methods that have been widely used for solving real bioinformatics problems; uses an example-based method to illustrate how to apply data mining techniques to solve real bioinformatics problems; and contains 45 bioinformatics problems that have been investigated in recent research.

  5. Neonatal Informatics: Transforming Neonatal Care Through Translational Bioinformatics

    Science.gov (United States)

    Palma, Jonathan P.; Benitz, William E.; Tarczy-Hornoch, Peter; Butte, Atul J.; Longhurst, Christopher A.

    2012-01-01

    The future of neonatal informatics will be driven by the availability of increasingly vast amounts of clinical and genetic data. The field of translational bioinformatics is concerned with linking and learning from these data and applying new findings to clinical care to transform the data into proactive, predictive, preventive, and participatory health. As a result of advances in translational informatics, the care of neonates will become more data driven, evidence based, and personalized. PMID:22924023

  6. Genomics and bioinformatics resources for translational science in Rosaceae.

    Science.gov (United States)

    Jung, Sook; Main, Dorrie

    2014-01-01

    Recent technological advances in biology promise unprecedented opportunities for rapid and sustainable advancement of crop quality. Following this trend, the Rosaceae research community continues to generate large amounts of genomic, genetic and breeding data. These include annotated whole genome sequences, transcriptome and expression data, proteomic and metabolomic data, genotypic and phenotypic data, and genetic and physical maps. Analysis, storage, integration and dissemination of these data using bioinformatics tools and databases are essential to provide utility of the data for basic, translational and applied research. This review discusses the currently available genomics and bioinformatics resources for the Rosaceae family.

  7. Application of machine learning methods in bioinformatics

    Science.gov (United States)

    Yang, Haoyu; An, Zheng; Zhou, Haotian; Hou, Yawen

    2018-05-01

    With the development of bioinformatics, high-throughput genomic technologies have enabled biology to enter the era of big data [1]. Bioinformatics is an interdisciplinary field covering the acquisition, management, analysis, interpretation and application of biological information; it derives from the Human Genome Project. The field of machine learning, which aims to develop computer algorithms that improve with experience, holds promise to enable computers to assist humans in the analysis of large, complex data sets [2]. This paper analyzes and compares various machine learning algorithms and their applications in bioinformatics.
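
    The following is an illustrative sketch only, not code from the paper: it uses scikit-learn to compare two common machine learning algorithms by cross-validation on a synthetic, randomly generated "gene expression"-style matrix.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 50))      # 100 samples x 50 "genes" (random placeholder data)
        y = rng.integers(0, 2, size=100)    # binary phenotype labels

        # Compare two algorithms by 5-fold cross-validation, as a toy version of the
        # kind of comparison the paper discusses.
        for name, model in [("random forest", RandomForestClassifier(n_estimators=100)),
                            ("SVM (RBF kernel)", SVC())]:
            scores = cross_val_score(model, X, y, cv=5)
            print(f"{name}: mean CV accuracy = {scores.mean():.2f}")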

  8. Bioinformatics

    DEFF Research Database (Denmark)

    Baldi, Pierre; Brunak, Søren

    , and medicine will be particularly affected by the new results and the increased understanding of life at the molecular level. Bioinformatics is the development and application of computer methods for analysis, interpretation, and prediction, as well as for the design of experiments. It has emerged...

  9. A Review of Recent Advances in Translational Bioinformatics: Bridges from Biology to Medicine.

    Science.gov (United States)

    Vamathevan, J; Birney, E

    2017-08-01

    Objectives: To highlight and provide insights into key developments in translational bioinformatics between 2014 and 2016. Methods: This review describes some of the most influential bioinformatics papers and resources that have been published between 2014 and 2016 as well as the national genome sequencing initiatives that utilize these resources to routinely embed genomic medicine into healthcare. Also discussed are some applications of the secondary use of patient data followed by a comprehensive view of the open challenges and emergent technologies. Results: Although data generation can be performed routinely, analyses and data integration methods still require active research and standardization to improve streamlining of clinical interpretation. The secondary use of patient data has resulted in the development of novel algorithms and has enabled a refined understanding of cellular and phenotypic mechanisms. New data storage and data sharing approaches are required to enable diverse biomedical communities to contribute to genomic discovery. Conclusion: The translation of genomics data into actionable knowledge for use in healthcare is transforming the clinical landscape in an unprecedented way. Exciting and innovative models that bridge the gap between clinical and academic research are set to open up the field of translational bioinformatics for rapid growth in a digital era. Georg Thieme Verlag KG Stuttgart.

  10. Molecular bioinformatics: algorithms and applications

    National Research Council Canada - National Science Library

    Schulze-Kremer, S

    1996-01-01

    ... on molecular biology, especially DNA sequence analysis and protein structure prediction. These two issues are also central to this book. Other application areas covered here are: interpretation of spectroscopic data and discovery of structure-function relationships in DNA and proteins. Figure 1 depicts the interdependence of computer science,...

  11. Facilitating the use of large-scale biological data and tools in the era of translational bioinformatics

    DEFF Research Database (Denmark)

    Kouskoumvekaki, Irene; Shublaq, Nour; Brunak, Søren

    2014-01-01

    As both the amount of generated biological data and the available processing power increase, computational experimentation is no longer the exclusive domain of bioinformaticians but is moving across all biomedical domains. For bioinformatics to realize its translational potential, domain experts need ... access to user-friendly solutions to navigate, integrate and extract information out of biological databases, as well as to combine tools and data resources in bioinformatics workflows. In this review, we present services that assist biomedical scientists in incorporating bioinformatics tools ... into their research. We review recent applications of Cytoscape, BioGPS and DAVID for data visualization, integration and functional enrichment. Moreover, we illustrate the use of Taverna, Kepler, GenePattern, and Galaxy as open-access workbenches for bioinformatics workflows. Finally, we mention services...

  12. Application of Bioinformatics in Chronobiology Research

    Directory of Open Access Journals (Sweden)

    Robson da Silva Lopes

    2013-01-01

    Bioinformatics and other well-established sciences, such as molecular biology, genetics, and biochemistry, provide a scientific approach for the analysis of data generated through “omics” projects that may be used in studies of chronobiology. The results of studies that apply these techniques demonstrate how they significantly aided the understanding of chronobiology. However, bioinformatics tools alone cannot eliminate the need for an understanding of the field of research or the data to be considered, nor can such tools replace analysts and researchers. It is often necessary to conduct an evaluation of the results of a data mining effort to determine the degree of reliability. To this end, familiarity with the field of investigation is necessary. It is evident that the knowledge that has been accumulated through chronobiology and the use of tools derived from bioinformatics has contributed to the recognition and understanding of the patterns and biological rhythms found in living organisms. The current work aims to develop new and important applications in the near future through chronobiology research.

  13. Bioinformatics and its application in animal health: a review | Soetan ...

    African Journals Online (AJOL)

    Bioinformatics is an interdisciplinary subject that uses computer applications, statistics, mathematics and engineering for the analysis and management of biological information. It has become an important tool for basic and applied research in the veterinary sciences. Bioinformatics has brought about advancements into ...

  14. Combining multiple decisions: applications to bioinformatics

    International Nuclear Information System (INIS)

    Yukinawa, N; Ishii, S; Takenouchi, T; Oba, S

    2008-01-01

    Multi-class classification is one of the fundamental tasks in bioinformatics and typically arises in cancer diagnosis studies by gene expression profiling. This article reviews two recent approaches to multi-class classification by combining multiple binary classifiers, which are formulated based on a unified framework of error-correcting output coding (ECOC). The first approach is to construct a multi-class classifier in which each binary classifier to be aggregated has a weight value to be optimally tuned based on the observed data. In the second approach, misclassification of each binary classifier is formulated as a bit inversion error with a probabilistic model by making an analogy to the context of information transmission theory. Experimental studies using various real-world datasets including cancer classification problems reveal that both of the new methods are superior or comparable to other multi-class classification methods
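
    The general ECOC idea described above is available in scikit-learn as OutputCodeClassifier; the sketch below applies it to synthetic data and is not an implementation of the weighted or probabilistic decoding schemes proposed in the article.

        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.multiclass import OutputCodeClassifier

        # Synthetic 4-class data standing in for, e.g., tumour subtypes from expression profiles.
        X, y = make_classification(n_samples=300, n_features=30, n_informative=10,
                                   n_classes=4, n_clusters_per_class=1, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        # ECOC: each class gets a binary code word; one binary classifier is trained per bit,
        # and a test sample is assigned to the class with the nearest code word.
        ecoc = OutputCodeClassifier(LogisticRegression(max_iter=1000),
                                    code_size=2.0, random_state=0)
        ecoc.fit(X_tr, y_tr)
        print("test accuracy:", ecoc.score(X_te, y_te))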

  15. The development and application of bioinformatics core competencies to improve bioinformatics training and education.

    Science.gov (United States)

    Mulder, Nicola; Schwartz, Russell; Brazas, Michelle D; Brooksbank, Cath; Gaeta, Bruno; Morgan, Sarah L; Pauley, Mark A; Rosenwald, Anne; Rustici, Gabriella; Sierk, Michael; Warnow, Tandy; Welch, Lonnie

    2018-02-01

    Bioinformatics is recognized as part of the essential knowledge base of numerous career paths in biomedical research and healthcare. However, there is little agreement in the field over what that knowledge entails or how best to provide it. These disagreements are compounded by the wide range of populations in need of bioinformatics training, with divergent prior backgrounds and intended application areas. The Curriculum Task Force of the International Society of Computational Biology (ISCB) Education Committee has sought to provide a framework for training needs and curricula in terms of a set of bioinformatics core competencies that cut across many user personas and training programs. The initial competencies developed based on surveys of employers and training programs have since been refined through a multiyear process of community engagement. This report describes the current status of the competencies and presents a series of use cases illustrating how they are being applied in diverse training contexts. These use cases are intended to demonstrate how others can make use of the competencies and engage in the process of their continuing refinement and application. The report concludes with a consideration of remaining challenges and future plans.

  16. The development and application of bioinformatics core competencies to improve bioinformatics training and education

    Science.gov (United States)

    Brooksbank, Cath; Morgan, Sarah L.; Rosenwald, Anne; Warnow, Tandy; Welch, Lonnie

    2018-01-01

    Bioinformatics is recognized as part of the essential knowledge base of numerous career paths in biomedical research and healthcare. However, there is little agreement in the field over what that knowledge entails or how best to provide it. These disagreements are compounded by the wide range of populations in need of bioinformatics training, with divergent prior backgrounds and intended application areas. The Curriculum Task Force of the International Society of Computational Biology (ISCB) Education Committee has sought to provide a framework for training needs and curricula in terms of a set of bioinformatics core competencies that cut across many user personas and training programs. The initial competencies developed based on surveys of employers and training programs have since been refined through a multiyear process of community engagement. This report describes the current status of the competencies and presents a series of use cases illustrating how they are being applied in diverse training contexts. These use cases are intended to demonstrate how others can make use of the competencies and engage in the process of their continuing refinement and application. The report concludes with a consideration of remaining challenges and future plans. PMID:29390004

  17. Report on emerging technologies for translational bioinformatics: a symposium on gene expression profiling for archival tissues

    Directory of Open Access Journals (Sweden)

    Waldron Levi

    2012-03-01

    Background With over 20 million formalin-fixed, paraffin-embedded (FFPE) tissue samples archived each year in the United States alone, archival tissues remain a vast and under-utilized resource in the genomic study of cancer. Technologies have recently been introduced for whole-transcriptome amplification and microarray analysis of degraded mRNA fragments from FFPE samples, and studies of these platforms have only recently begun to enter the published literature. Results The Emerging Technologies for Translational Bioinformatics symposium on gene expression profiling for archival tissues featured presentations of two large-scale FFPE expression profiling studies (each involving over 1,000 samples), overviews of several smaller studies, and representatives from three leading companies in the field (Illumina, Affymetrix, and NuGEN). The meeting highlighted challenges in the analysis of expression data from archival tissues and strategies being developed to overcome them. In particular, speakers reported higher rates of clinical sample failure (from 10% to 70%) than are typical for fresh-frozen tissues, as well as more frequent probe failure for individual samples. The symposium program is available at http://www.hsph.harvard.edu/ffpe. Conclusions Multiple solutions now exist for whole-genome expression profiling of FFPE tissues, including both microarray- and sequencing-based platforms. Several studies have reported their successful application, but substantial challenges and risks still exist. Symposium speakers presented novel methodology for analysis of FFPE expression data and suggestions for improving data recovery and quality assessment in pre-analytical stages. Research presentations emphasized the need for careful study design, including the use of pilot studies, replication, and randomization of samples among batches, as well as careful attention to data quality control. Regardless of any limitations in quantitative transcriptomics for

  18. Parallel evolutionary computation in bioinformatics applications.

    Science.gov (United States)

    Pinho, Jorge; Sobral, João Luis; Rocha, Miguel

    2013-05-01

    A large number of optimization problems within the field of Bioinformatics require methods able to handle their inherent complexity (e.g. NP-hard problems) and also demand increased computational effort. In this context, the use of parallel architectures is a necessity. In this work, we propose ParJECoLi, a Java-based library that offers a large set of metaheuristic methods (such as Evolutionary Algorithms) and also addresses the issue of their efficient execution on a wide range of parallel architectures. The proposed approach focuses on ease of use, making adaptation to distinct parallel environments (multicore, cluster, grid) transparent to the user. Indeed, this work shows how the development of the optimization library can proceed independently of its adaptation for several architectures, making use of Aspect-Oriented Programming. The pluggable nature of the parallelism-related modules allows users to easily configure their environment, adding parallelism modules to the base source code when needed. The performance of the platform is validated with two case studies within biological model optimization. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
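
    ParJECoLi itself is a Java library; as a language-neutral illustration of the core idea, the Python sketch below parallelises the fitness-evaluation step of a toy evolutionary algorithm across CPU cores with multiprocessing. The fitness function and parameters are placeholders, not part of the published work.

        import random
        from multiprocessing import Pool

        def fitness(individual):
            """Toy objective (OneMax): maximise the number of 1s in the genome."""
            return sum(individual)

        def evolve(pop_size=40, genome_len=32, generations=20, workers=4):
            pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
            with Pool(workers) as pool:
                for _ in range(generations):
                    scores = pool.map(fitness, pop)            # fitness evaluated in parallel
                    ranked = [ind for _, ind in sorted(zip(scores, pop), reverse=True)]
                    parents = ranked[: pop_size // 2]
                    # Mutation-only reproduction keeps the sketch short.
                    children = [[g ^ (random.random() < 0.05) for g in p] for p in parents]
                    pop = parents + children
            return max(pop, key=fitness)

        if __name__ == "__main__":
            print("best fitness:", fitness(evolve()))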

  19. Architecture exploration of FPGA based accelerators for bioinformatics applications

    CERN Document Server

    Varma, B Sharat Chandra; Balakrishnan, M

    2016-01-01

    This book presents an evaluation methodology to design future FPGA fabrics incorporating hard embedded blocks (HEBs) to accelerate applications. This methodology will be useful for selection of blocks to be embedded into the fabric and for evaluating the performance gain that can be achieved by such an embedding. The authors illustrate the use of their methodology by studying the impact of HEBs on two important bioinformatics applications: protein docking and genome assembly. The book also explains how the respective HEBs are designed and how hardware implementation of the application is done using these HEBs. It shows that significant speedups can be achieved over pure software implementations by using such FPGA-based accelerators. The methodology presented in this book may also be used for designing HEBs for accelerating software implementations in other domains besides bioinformatics. This book will prove useful to students, researchers, and practicing engineers alike.

  20. Decoding options and accuracy of translation of developmentally regulated UUA codon in Streptomyces: bioinformatic analysis.

    Science.gov (United States)

    Rokytskyy, Ihor; Koshla, Oksana; Fedorenko, Victor; Ostash, Bohdan

    2016-01-01

    The gene bldA for leucyl [Formula: see text] has been known for almost 30 years as a key regulator of morphogenesis and secondary metabolism in the genus Streptomyces. The codon UUA is the rarest one in Streptomyces genomes and is present exclusively in genes with auxiliary functions. Delayed accumulation of translation-competent [Formula: see text] is believed to confine the expression of UUA-containing transcripts to stationary phase. Implicit in the regulatory function of the UUA codon is the assumption of high accuracy of its translation, i.e. the latter should not occur in the absence of the cognate [Formula: see text]. However, a growing body of evidence points to the possibility of mistranslation of UUA-containing transcripts in bldA-deficient mutants. It is not known what type of near-cognate tRNA(s) may decode UUA in the absence of the cognate tRNA in Streptomyces, or whether UUA possesses certain inherent properties (such as increased/decreased accuracy of decoding) that would favor its use for regulatory purposes. Here we took a bioinformatic approach to address these questions. We catalogued the entire complement of tRNA genes from several relevant Streptomyces and identified genes for posttranscriptional modifications of tRNA that might be involved in UUA decoding by cognate and near-cognate tRNAs. Based on tRNA gene content in Streptomyces genomes, we propose possible scenarios of UUA codon mistranslation. UUA is not associated with an increased rate of missense errors compared to other leucyl codons, contradicting the general belief that low-abundance codons are more error-prone than high-abundance ones.
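
    As a minimal sketch of the kind of codon-usage tally that underlies such an analysis, the Python snippet below counts how often the leucine codon TTA (UUA in mRNA) occurs among all leucine codons in a set of coding sequences; the input file name is a placeholder and the snippet is not the authors' pipeline.

        from collections import Counter

        def read_fasta(path):
            """Yield (header, sequence) pairs from a FASTA file."""
            header, seq = None, []
            with open(path) as handle:
                for line in handle:
                    line = line.strip()
                    if line.startswith(">"):
                        if header is not None:
                            yield header, "".join(seq)
                        header, seq = line[1:], []
                    else:
                        seq.append(line.upper())
                if header is not None:
                    yield header, "".join(seq)

        codon_counts = Counter()
        for name, cds in read_fasta("cds.fasta"):          # placeholder file of in-frame CDSs
            codons = (cds[i:i + 3] for i in range(0, len(cds) - len(cds) % 3, 3))
            codon_counts.update(codons)

        leucine_codons = ["TTA", "TTG", "CTT", "CTC", "CTA", "CTG"]
        total_leu = sum(codon_counts[c] for c in leucine_codons)
        share = codon_counts["TTA"] / total_leu if total_leu else 0.0
        print("TTA (UUA) share of leucine codons:", share)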

  1. Biowep: a workflow enactment portal for bioinformatics applications.

    Science.gov (United States)

    Romano, Paolo; Bartocci, Ezio; Bertolini, Guglielmo; De Paoli, Flavio; Marra, Domenico; Mauri, Giancarlo; Merelli, Emanuela; Milanesi, Luciano

    2007-03-08

    The huge amount of biological information, its distribution over the Internet and the heterogeneity of available software tools make the adoption of new data integration and analysis network tools a necessity in bioinformatics. ICT standards and tools, like Web Services and Workflow Management Systems (WMS), can support the creation and deployment of such systems. Many Web Services are already available and some WMS have been proposed. They assume that researchers know which bioinformatics resources can be reached through a programmatic interface and that they are skilled in programming and building workflows. Therefore, they are not viable for the majority of researchers without such skills. A portal enabling these researchers to benefit from the new technologies is still missing. We designed biowep, a web-based client application that allows for the selection and execution of a set of predefined workflows. The system is available on-line. The biowep architecture includes a Workflow Manager, a User Interface and a Workflow Executor. The task of the Workflow Manager is the creation and annotation of workflows. These can be created by using either the Taverna Workbench or BioWMS. Enactment of workflows is carried out by FreeFluo for Taverna workflows and by BioAgent/Hermes, a mobile agent-based middleware, for BioWMS ones. The main processing steps of workflows are annotated on the basis of their input and output, elaboration type and application domain, using a classification of bioinformatics data and tasks. The interface supports user authentication and profiling. Workflows can be selected on the basis of users' profiles and can be searched through their annotations. Results can be saved. We developed a web system that supports the selection and execution of predefined workflows, thus simplifying access for all researchers. The implementation of Web Services allowing specialized software to interact with an exhaustive set of biomedical databases and analysis software and the creation of

  2. Biowep: a workflow enactment portal for bioinformatics applications

    Directory of Open Access Journals (Sweden)

    Romano Paolo

    2007-03-01

    Background The huge amount of biological information, its distribution over the Internet and the heterogeneity of available software tools make the adoption of new data integration and analysis network tools a necessity in bioinformatics. ICT standards and tools, like Web Services and Workflow Management Systems (WMS), can support the creation and deployment of such systems. Many Web Services are already available and some WMS have been proposed. They assume that researchers know which bioinformatics resources can be reached through a programmatic interface and that they are skilled in programming and building workflows. Therefore, they are not viable for the majority of researchers without such skills. A portal enabling these researchers to benefit from the new technologies is still missing. Results We designed biowep, a web-based client application that allows for the selection and execution of a set of predefined workflows. The system is available on-line. The biowep architecture includes a Workflow Manager, a User Interface and a Workflow Executor. The task of the Workflow Manager is the creation and annotation of workflows. These can be created by using either the Taverna Workbench or BioWMS. Enactment of workflows is carried out by FreeFluo for Taverna workflows and by BioAgent/Hermes, a mobile agent-based middleware, for BioWMS ones. The main processing steps of workflows are annotated on the basis of their input and output, elaboration type and application domain, using a classification of bioinformatics data and tasks. The interface supports user authentication and profiling. Workflows can be selected on the basis of users' profiles and can be searched through their annotations. Results can be saved. Conclusion We developed a web system that supports the selection and execution of predefined workflows, thus simplifying access for all researchers. The implementation of Web Services allowing specialized software to interact with an exhaustive set of biomedical

  3. Translational Bioinformatics for Diagnostic and Prognostic Prediction of Prostate Cancer in the Next-Generation Sequencing Era

    Directory of Open Access Journals (Sweden)

    Jiajia Chen

    2013-01-01

    The discovery of prostate cancer biomarkers has been boosted by the advent of next-generation sequencing (NGS) technologies. Nevertheless, many challenges still exist in exploiting the flood of sequence data and translating them into routine diagnostics and prognosis of prostate cancer. Here we review recent developments in prostate cancer biomarkers identified by high-throughput sequencing technologies. We highlight some fundamental issues of translational bioinformatics and the potential use of cloud computing in NGS data processing for the improvement of prostate cancer treatment.

  4. A web services choreography scenario for interoperating bioinformatics applications

    Directory of Open Access Journals (Sweden)

    Cheung David W

    2004-03-01

    Background Very often, genome-wide data analysis requires the interoperation of multiple databases and analytic tools. A large number of genome databases and bioinformatics applications are available through the web, but it is difficult to automate interoperation because: (1) the platforms on which the applications run are heterogeneous, (2) their web interfaces are not machine-friendly, (3) they use non-standard formats for data input and output, (4) they do not exploit standards to define application interfaces and message exchange, and (5) existing protocols for remote messaging are often not firewall-friendly. To overcome these issues, web services have emerged as a standard XML-based model for message exchange between heterogeneous applications. Web services engines have been developed to manage the configuration and execution of a web services workflow. Results To demonstrate the benefit of using web services over traditional web interfaces, we compare the two implementations of HAPI, a gene expression analysis utility developed by the University of California San Diego (UCSD) that allows visual characterization of groups or clusters of genes based on the biomedical literature. This utility takes a set of microarray spot IDs as input and outputs a hierarchy of MeSH Keywords that correlates to the input and is grouped by Medical Subject Heading (MeSH) category. While the HTML output is easy for humans to visualize, it is difficult for computer applications to interpret semantically. To facilitate machine processing, we have created a workflow of three web services that replicates the HAPI functionality. These web services use document-style messages, which means that messages are encoded in an XML-based format. We compared three approaches to the implementation of an XML-based workflow: a hard-coded Java application, Collaxa BPEL Server and Taverna Workbench. The Java program functions as a web services engine and interoperates

  5. Concepts Of Bioinformatics And Its Application In Veterinary ...

    African Journals Online (AJOL)

    Bioinformatics has advanced the course of research and future veterinary vaccine development because it has provided new tools for identification of vaccine targets from sequenced biological data of organisms. In Nigeria, there is a lack of bioinformatics training in the universities, except for short training courses in which ...

  6. WeBIAS: a web server for publishing bioinformatics applications.

    Science.gov (United States)

    Daniluk, Paweł; Wilczyński, Bartek; Lesyng, Bogdan

    2015-11-02

    One of the requirements for a successful scientific tool is its availability. Developing a functional web service, however, is usually considered a mundane and ungratifying task, and is quite often neglected. When publishing bioinformatic applications, such an attitude puts an additional burden on the reviewers, who have to cope with poorly designed interfaces in order to assess the quality of the presented methods, and it impairs actual usefulness to the scientific community at large. In this note we present WeBIAS, a simple, self-contained solution for making command-line programs accessible through web forms. It comprises a web portal capable of serving several applications and backend schedulers that carry out computations. The server handles user registration and authentication, stores queries and results, and provides a convenient administrator interface. WeBIAS is implemented in Python and available under the GNU Affero General Public License. It has been developed and tested on GNU/Linux-compatible platforms covering the vast majority of operational WWW servers. Since it is written in pure Python, it should be easy to deploy on all other platforms supporting Python (e.g. Windows, Mac OS X). Documentation and source code, as well as a demonstration site, are available at http://bioinfo.imdik.pan.pl/webias. WeBIAS has been designed specifically with ease of installation and deployment of services in mind. Setting up a simple application requires minimal effort, yet it is possible to create visually appealing, feature-rich interfaces for query submission and presentation of results.
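
    The sketch below is not WeBIAS itself but a bare-bones Flask illustration of the general pattern it implements, namely exposing a command-line program through a web form; the wrapped command (wc -c) is a trivial stand-in for a real bioinformatics tool.

        import subprocess
        from flask import Flask, request

        app = Flask(__name__)

        FORM = """
        <form method="post">
          <textarea name="sequence" rows="6" cols="60"></textarea><br>
          <input type="submit" value="Submit">
        </form>
        """

        @app.route("/", methods=["GET", "POST"])
        def run_tool():
            if request.method == "POST":
                data = request.form.get("sequence", "")
                # Pass user input on stdin; never interpolate it into a shell string.
                result = subprocess.run(["wc", "-c"], input=data.encode(),
                                        capture_output=True, check=True)
                return f"<pre>tool output: {result.stdout.decode()}</pre>"
            return FORM

        if __name__ == "__main__":
            app.run(debug=True)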

  7. Bioinformatics for Precision Medicine in Oncology: principles and application to the SHIVA clinical trial

    Directory of Open Access Journals (Sweden)

    Nicolas eServant

    2014-05-01

    Precision medicine (PM) requires the delivery of individually adapted medical care based on the genetic characteristics of each patient and his/her tumor. The last decade witnessed the development of high-throughput technologies such as microarrays and next-generation sequencing which paved the way to PM in the field of oncology. While the cost of these technologies decreases, we are facing an exponential increase in the amount of data produced. Our ability to use this information in daily practice relies strongly on the availability of an efficient bioinformatics system that assists in the translation of knowledge from the bench towards molecular targeting and diagnosis. Clinical trials and routine diagnoses constitute different approaches, both requiring a strong bioinformatics environment capable of (i) warranting the integration and the traceability of data, (ii) ensuring the correct processing and analyses of genomic data and (iii) applying well-defined and reproducible procedures for workflow management and decision-making. To address these issues, a seamless information system was developed at Institut Curie which facilitates the data integration and tracks in real time the processing of individual samples. Moreover, computational pipelines were developed to reliably identify genomic alterations and mutations from the molecular profiles of each patient. After a rigorous quality control, a meaningful report is delivered to the clinicians and biologists for the therapeutic decision. The complete bioinformatics environment and the key points of its implementation are presented in the context of the SHIVA clinical trial, a multicentric randomized phase II trial comparing targeted therapy based on tumor molecular profiling versus conventional therapy in patients with refractory cancer. The numerous challenges faced in practice during the setting up and the conduct of this trial are discussed as an illustration of PM application.

  8. 9th International Conference on Practical Applications of Computational Biology and Bioinformatics

    CERN Document Server

    Rocha, Miguel; Fdez-Riverola, Florentino; Paz, Juan

    2015-01-01

    This volume presents recent practical applications of Computational Biology and Bioinformatics. It contains the proceedings of the 9th International Conference on Practical Applications of Computational Biology & Bioinformatics, held at the University of Salamanca, Spain, on June 3rd-5th, 2015. The International Conference on Practical Applications of Computational Biology & Bioinformatics (PACBB) is an annual international meeting dedicated to emerging and challenging applied research in Bioinformatics and Computational Biology. Biological and biomedical research are increasingly driven by experimental techniques that challenge our ability to analyse, process and extract meaningful knowledge from the underlying data. The impressive capabilities of next-generation sequencing technologies, together with novel and ever-evolving distinct types of omics data technologies, have posed an increasingly complex set of challenges for the growing fields of Bioinformatics and Computational Biology. The analysis o...

  9. Application of bioinformatics on the detection of pathogens by PCR

    International Nuclear Information System (INIS)

    Rezig, Slim; Sakhri, Saber

    2007-01-01

    Salmonella species are the main agents responsible for frequent food-borne gastrointestinal diseases. Their detection using classical methods is laborious, and the results take a long time to obtain. In this context, we set out to develop a detection technique for the invA virulence gene, found in the majority of Salmonella species. After PCR amplification using specific primers designed and verified with bioinformatics programs, two primer pairs were established that proved to be highly specific and sensitive for the detection of the invA gene. (Author)
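
    As an illustration of the kind of in silico primer check described, the Python sketch below locates a primer and its reverse complement in a target sequence; the primer and target are made-up placeholders, not the actual invA primers from the study.

        COMPLEMENT = str.maketrans("ACGT", "TGCA")

        def reverse_complement(seq):
            return seq.translate(COMPLEMENT)[::-1]

        def find_primer_sites(target, primer):
            """Return 0-based start positions of the primer on the forward and reverse strands."""
            hits = {"forward": [], "reverse": []}
            rc = reverse_complement(primer)
            for i in range(len(target) - len(primer) + 1):
                window = target[i:i + len(primer)]
                if window == primer:
                    hits["forward"].append(i)
                if window == rc:
                    hits["reverse"].append(i)
            return hits

        if __name__ == "__main__":
            target = "ATGCCGTTAGGCTTACGGATCCGTAAGCCTAACGGCAT"   # placeholder sequence
            primer = "GGCTTACG"                                  # placeholder primer
            print(find_primer_sites(target, primer))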

  10. An overview of topic modeling and its current applications in bioinformatics.

    Science.gov (United States)

    Liu, Lin; Tang, Lin; Dong, Wen; Yao, Shaowen; Zhou, Wei

    2016-01-01

    With the rapid accumulation of biological datasets, machine learning methods designed to automate data analysis are urgently needed. In recent years, so-called topic models that originated from the field of natural language processing have been receiving much attention in bioinformatics because of their interpretability. Our aim was to review the application and development of topic models for bioinformatics. This paper starts with the description of a topic model, with a focus on the understanding of topic modeling. A general outline is provided on how to build an application in a topic model and how to develop a topic model. Meanwhile, the literature on application of topic models to biological data was searched and analyzed in depth. According to the types of models and the analogy between the concept of document-topic-word and a biological object (as well as the tasks of a topic model), we categorized the related studies and provided an outlook on the use of topic models for the development of bioinformatics applications. Topic modeling is a useful method (in contrast to the traditional means of data reduction in bioinformatics) and enhances researchers' ability to interpret biological information. Nevertheless, due to the lack of topic models optimized for specific biological data, the studies on topic modeling in biological data still have a long and challenging road ahead. We believe that topic models are a promising method for various applications in bioinformatics research.
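
    As a minimal sketch of topic modeling on count data, the snippet below runs latent Dirichlet allocation with scikit-learn; in the document-topic-word analogy discussed above, rows can be read as samples ("documents") and columns as genes ("words"). The matrix is random placeholder data.

        import numpy as np
        from sklearn.decomposition import LatentDirichletAllocation

        rng = np.random.default_rng(0)
        counts = rng.poisson(lam=2.0, size=(50, 200))     # 50 "documents" x 200 "words"

        lda = LatentDirichletAllocation(n_components=5, random_state=0)
        doc_topics = lda.fit_transform(counts)            # per-document topic proportions

        print("document-topic matrix shape:", doc_topics.shape)          # (50, 5)
        top_words = lda.components_[0].argsort()[::-1][:10]
        print("indices of the 10 highest-weight 'words' in topic 0:", top_words)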

  11. G2LC: Resources Autoscaling for Real Time Bioinformatics Applications in IaaS

    Directory of Open Access Journals (Sweden)

    Rongdong Hu

    2015-01-01

    Cloud computing has started to change the way bioinformatics research is carried out. Researchers who have taken advantage of this technology can process larger amounts of data and speed up scientific discovery. The variability in data volume results in variable computing requirements. Therefore, bioinformatics researchers are pursuing more reliable and efficient methods for conducting sequencing analyses. This paper proposes an automated resource provisioning method, G2LC, for bioinformatics applications in IaaS. It enables applications to output results in real time. Its main purpose is to guarantee application performance while improving resource utilization. Real BLAST sequence-search data are used to evaluate the effectiveness of G2LC. Experimental results show that G2LC guarantees application performance while saving up to 20.14% of resources.
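
    The sketch below is not the G2LC method itself but a generic, threshold-based autoscaling loop that illustrates the kind of decision an IaaS resource manager makes; the load metric and the provisioning call are stubs standing in for a real cloud API.

        import random
        import time

        SCALE_UP_AT, SCALE_DOWN_AT = 0.80, 0.30
        MIN_VMS, MAX_VMS = 1, 16

        def current_load(n_vms):
            """Stub: fraction of capacity currently in use (random placeholder)."""
            return random.uniform(0.0, 2.0) / n_vms

        def set_vm_count(n):
            """Stub standing in for a cloud provider's provisioning call."""
            print(f"provisioning -> {n} VM(s)")

        def autoscale(iterations=10, n_vms=2):
            for _ in range(iterations):
                load = current_load(n_vms)
                if load > SCALE_UP_AT and n_vms < MAX_VMS:
                    n_vms += 1
                    set_vm_count(n_vms)
                elif load < SCALE_DOWN_AT and n_vms > MIN_VMS:
                    n_vms -= 1
                    set_vm_count(n_vms)
                time.sleep(0.1)     # stand-in for the monitoring interval
            return n_vms

        if __name__ == "__main__":
            autoscale()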

  12. XML schemas for common bioinformatic data types and their application in workflow systems.

    Science.gov (United States)

    Seibel, Philipp N; Krüger, Jan; Hartmeier, Sven; Schwarzer, Knut; Löwenthal, Kai; Mersch, Henning; Dandekar, Thomas; Giegerich, Robert

    2006-11-06

    Today, there is a growing need in bioinformatics to combine available software tools into chains, thus building complex applications from existing single-task tools. To create such workflows, the tools involved have to be able to work with each other's data--therefore, a common set of well-defined data formats is needed. Unfortunately, current bioinformatic tools use a great variety of heterogeneous formats. Acknowledging the need for common formats, the Helmholtz Open BioInformatics Technology network (HOBIT) identified several basic data types used in bioinformatics and developed appropriate format descriptions, formally defined by XML schemas, and incorporated them in a Java library (BioDOM). These schemas currently cover sequence, sequence alignment, RNA secondary structure and RNA secondary structure alignment formats in a form that is independent of any specific program, thus enabling seamless interoperation of different tools. All XML formats are available at http://bioschemas.sourceforge.net, the BioDOM library can be obtained at http://biodom.sourceforge.net. The HOBIT XML schemas and the BioDOM library simplify adding XML support to newly created and existing bioinformatic tools, enabling these tools to interoperate seamlessly in workflow scenarios.
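
    As an illustration of program-independent XML exchange of a sequence record, the Python sketch below serialises and re-reads a small record with the standard library; the element names are assumptions for illustration and do not reproduce the actual HOBIT/BioDOM schemas.

        import xml.etree.ElementTree as ET

        def sequence_to_xml(seq_id, alphabet, residues):
            """Serialise a sequence record as XML (illustrative element names)."""
            root = ET.Element("sequenceRecord")
            ET.SubElement(root, "id").text = seq_id
            ET.SubElement(root, "alphabet").text = alphabet
            ET.SubElement(root, "residues").text = residues
            return ET.tostring(root, encoding="unicode")

        def xml_to_sequence(xml_text):
            """Read the record back into a plain dictionary."""
            root = ET.fromstring(xml_text)
            return {child.tag: child.text for child in root}

        if __name__ == "__main__":
            doc = sequence_to_xml("demo_1", "DNA", "ATGCGTACGTTAG")
            print(doc)
            print(xml_to_sequence(doc))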

  13. XML schemas for common bioinformatic data types and their application in workflow systems

    Science.gov (United States)

    Seibel, Philipp N; Krüger, Jan; Hartmeier, Sven; Schwarzer, Knut; Löwenthal, Kai; Mersch, Henning; Dandekar, Thomas; Giegerich, Robert

    2006-01-01

    Background Today, there is a growing need in bioinformatics to combine available software tools into chains, thus building complex applications from existing single-task tools. To create such workflows, the tools involved have to be able to work with each other's data – therefore, a common set of well-defined data formats is needed. Unfortunately, current bioinformatic tools use a great variety of heterogeneous formats. Results Acknowledging the need for common formats, the Helmholtz Open BioInformatics Technology network (HOBIT) identified several basic data types used in bioinformatics and developed appropriate format descriptions, formally defined by XML schemas, and incorporated them in a Java library (BioDOM). These schemas currently cover sequence, sequence alignment, RNA secondary structure and RNA secondary structure alignment formats in a form that is independent of any specific program, thus enabling seamless interoperation of different tools. All XML formats are available at http://bioschemas.sourceforge.net; the BioDOM library can be obtained at http://biodom.sourceforge.net. Conclusion The HOBIT XML schemas and the BioDOM library simplify adding XML support to newly created and existing bioinformatic tools, enabling these tools to interoperate seamlessly in workflow scenarios. PMID:17087823

  14. Rough-fuzzy pattern recognition applications in bioinformatics and medical imaging

    CERN Document Server

    Maji, Pradipta

    2012-01-01

    Learn how to apply rough-fuzzy computing techniques to solve problems in bioinformatics and medical image processing Emphasizing applications in bioinformatics and medical image processing, this text offers a clear framework that enables readers to take advantage of the latest rough-fuzzy computing techniques to build working pattern recognition models. The authors explain step by step how to integrate rough sets with fuzzy sets in order to best manage the uncertainties in mining large data sets. Chapters are logically organized according to the major phases of pattern recognition systems dev

  15. An overview of the Hadoop/MapReduce/HBase framework and its current applications in bioinformatics

    International Nuclear Information System (INIS)

    Taylor, Ronald C.

    2010-01-01

    Bioinformatics researchers are increasingly confronted with analysis of ultra large-scale data sets, a problem that will only increase at an alarming rate in coming years. Recent developments in open source software, that is, the Hadoop project and associated software, provide a foundation for scaling to petabyte scale data warehouses on Linux clusters, providing fault-tolerant parallelized analysis on such data using a programming style named MapReduce. An overview is given of the current usage within the bioinformatics community of Hadoop, a top-level Apache Software Foundation project, and of associated open source software projects. The concepts behind Hadoop and the associated HBase project are defined, and current bioinformatics software that employ Hadoop is described. The focus is on next-generation sequencing, as the leading application area to date.
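
    The pure-Python sketch below mimics the MapReduce programming style in a single process, using k-mer counting (a common next-generation sequencing task) as the example; Hadoop distributes the same map/shuffle/reduce pattern across a cluster with fault tolerance.

        from collections import defaultdict

        READS = ["ATGCGT", "GCGTAT", "TATGCG"]     # placeholder sequencing reads
        K = 3

        def map_phase(read):
            """Emit (k-mer, 1) pairs, like a Hadoop mapper."""
            return [(read[i:i + K], 1) for i in range(len(read) - K + 1)]

        def shuffle(pairs):
            """Group values by key, like the Hadoop shuffle/sort step."""
            groups = defaultdict(list)
            for key, value in pairs:
                groups[key].append(value)
            return groups

        def reduce_phase(key, values):
            """Sum the counts for one k-mer, like a Hadoop reducer."""
            return key, sum(values)

        if __name__ == "__main__":
            mapped = [pair for read in READS for pair in map_phase(read)]
            counts = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
            print(counts)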

  16. Application of LSP texts in translator training

    Directory of Open Access Journals (Sweden)

    Larisa Ilynska

    2017-06-01

    The paper presents a discussion of the results of extensive empirical research into efficient methods of educating and training translators of LSP (language for special purposes) texts. The methodology is based on using popular LSP texts in the respective fields as one of the main media for translator training. The aim of the paper is to investigate the efficiency of this methodology in developing the thematic, linguistic and cultural competences of the students, following Bloom’s revised taxonomy and the European Master in Translation (EMT) network translator training competences. The methodology has been tested on the students of a professional Master study programme called Technical Translation implemented by the Institute of Applied Linguistics, Riga Technical University, Latvia. The group of students included representatives of different nationalities, translating from English into Latvian, Russian and French. Analysis of popular LSP texts provides an opportunity to structure student background knowledge and expand it to account for linguistic innovation. Application of popular LSP texts instead of purely technical or scientific texts characterised by neutral style and rigid genre conventions provides an opportunity for student translators to develop advanced text processing and decoding skills, to develop awareness of expressive resources of the source and target languages and to develop understanding of socio-pragmatic language use.

  17. GeneDig: a web application for accessing genomic and bioinformatics knowledge.

    Science.gov (United States)

    Suciu, Radu M; Aydin, Emir; Chen, Brian E

    2015-02-28

    With the exponential increase and widespread availability of genomic, transcriptomic, and proteomic data, accessing these '-omics' data is becoming increasingly difficult. The current resources for accessing and analyzing these data have been created to perform highly specific functions intended for specialists, and thus typically emphasize functionality over user experience. We have developed a web-based application, GeneDig.org, that allows any general user access to genomic information with ease and efficiency. GeneDig allows for searching and browsing genes and genomes, while a dynamic navigator displays genomic, RNA, and protein information simultaneously for co-navigation. We demonstrate that our application allows access to genomic information more than five times faster than any currently available method. We have developed GeneDig as a platform for bioinformatics integration with usability as its central design principle. This platform will introduce genomic navigation to broader audiences while aiding the bioinformatics analyses performed in everyday biology research.

  18. ClusterControl: a web interface for distributing and monitoring bioinformatics applications on a Linux cluster.

    Science.gov (United States)

    Stocker, Gernot; Rieder, Dietmar; Trajanoski, Zlatko

    2004-03-22

    ClusterControl is a web interface that simplifies distributing and monitoring bioinformatics applications on Linux cluster systems. We have developed a modular concept that enables integration of command-line-oriented programs into the application framework of ClusterControl. The system facilitates integration of different applications accessed through one interface and executed on a distributed cluster system. The package is based on freely available technologies like Apache as web server, PHP as server-side scripting language and OpenPBS as queuing system, and is available free of charge for academic and non-profit institutions. http://genome.tugraz.at/Software/ClusterControl

  19. 6th International Conference on Practical Applications of Computational Biology & Bioinformatics

    CERN Document Server

    Luscombe, Nicholas; Fdez-Riverola, Florentino; Rodríguez, Juan

    2012-01-01

    The growth in the Bioinformatics and Computational Biology fields over the last few years has been remarkable. The analysis of Next Generation Sequencing datasets needs new algorithms and approaches from fields such as Databases, Statistics, Data Mining, Machine Learning, Optimization, Computer Science and Artificial Intelligence. Systems Biology has also been emerging as an alternative to the reductionist view that dominated biological research in the last decades. This book presents the results of the 6th International Conference on Practical Applications of Computational Biology & Bioinformatics, held at the University of Salamanca, Spain, on 28-30th March, 2012, which brought together interdisciplinary scientists with a strong background in the biological and computational sciences.

  20. Experimental Design and Bioinformatics Analysis for the Application of Metagenomics in Environmental Sciences and Biotechnology.

    Science.gov (United States)

    Ju, Feng; Zhang, Tong

    2015-11-03

    Recent advances in DNA sequencing technologies have prompted the widespread application of metagenomics for the investigation of novel bioresources (e.g., industrial enzymes and bioactive molecules) and unknown biohazards (e.g., pathogens and antibiotic resistance genes) in natural and engineered microbial systems across multiple disciplines. This review discusses the rigorous experimental design and sample preparation in the context of applying metagenomics in environmental sciences and biotechnology. Moreover, this review summarizes the principles, methodologies, and state-of-the-art bioinformatics procedures, tools and database resources for metagenomics applications and discusses two popular strategies (analysis of unassembled reads versus assembled contigs/draft genomes) for quantitative or qualitative insights of microbial community structure and functions. Overall, this review aims to facilitate more extensive application of metagenomics in the investigation of uncultured microorganisms, novel enzymes, microbe-environment interactions, and biohazards in biotechnological applications where microbial communities are engineered for bioenergy production, wastewater treatment, and bioremediation.

  1. MACBenAbim: A Multi-platform Mobile Application for searching keyterms in Computational Biology and Bioinformatics.

    Science.gov (United States)

    Oluwagbemi, Olugbenga O; Adewumi, Adewole; Esuruoso, Abimbola

    2012-01-01

    Computational biology and bioinformatics are gradually gaining ground in Africa and other developing nations of the world. However, in these countries, the challenges of computational biology and bioinformatics education include inadequate infrastructure and a lack of readily available complementary and motivational tools to support learning as well as research. This has lowered the morale of many promising undergraduates, postgraduates and researchers and discouraged them from aspiring to undertake future study in these fields. In this paper, we developed and described MACBenAbim (Multi-platform Mobile Application for Computational Biology and Bioinformatics), a flexible, user-friendly tool to search for, define and describe the meanings of keyterms in computational biology and bioinformatics, thus expanding the frontiers of knowledge of its users. The tool is also capable of visualizing results in a mobile, multi-platform context. MACBenAbim is available from the authors for non-commercial purposes.

  2. A bioinformatics potpourri.

    Science.gov (United States)

    Schönbach, Christian; Li, Jinyan; Ma, Lan; Horton, Paul; Sjaugi, Muhammad Farhan; Ranganathan, Shoba

    2018-01-19

    The 16th International Conference on Bioinformatics (InCoB) was held at Tsinghua University, Shenzhen from September 20 to 22, 2017. The annual conference of the Asia-Pacific Bioinformatics Network featured six keynotes, two invited talks, a panel discussion on big data driven bioinformatics and precision medicine, and 66 oral presentations of accepted research articles or posters. Fifty-seven articles comprising a topic assortment of algorithms, biomolecular networks, cancer and disease informatics, drug-target interactions and drug efficacy, gene regulation and expression, imaging, immunoinformatics, metagenomics, next generation sequencing for genomics and transcriptomics, ontologies, post-translational modification, and structural bioinformatics are the subject of this editorial for the InCoB2017 supplement issues in BMC Genomics, BMC Bioinformatics, BMC Systems Biology and BMC Medical Genomics. New Delhi will be the location of InCoB2018, scheduled for September 26-28, 2018.

  3. Application of bioinformatics tools and databases in microbial dehalogenation research (a review).

    Science.gov (United States)

    Satpathy, R; Konkimalla, V B; Ratha, J

    2015-01-01

    Microbial dehalogenation is a biochemical process in which halogenated substances are converted enzymatically into their non-halogenated form. Microorganisms have a wide range of organohalogen degradation abilities, both specific and non-specific in nature. Most of these halogenated organic compounds are pollutants that need to be remediated; therefore, current approaches explore the potential of microbes at the molecular level for effective biodegradation of these substances. Several microorganisms with dehalogenation activity have been identified and characterized. In this respect, bioinformatics plays a key role in gaining deeper knowledge in the field of dehalogenation. To facilitate data mining, many tools have been developed to annotate these data from databases. Therefore, upon the discovery of a microorganism, one can perform gene/protein prediction, sequence analysis, structural modelling, metabolic pathway analysis, biodegradation studies and so on. This review highlights various bioinformatics approaches and describes the application of various databases and specific tools in the microbial dehalogenation field, with special focus on dehalogenase enzymes. Attempts have also been made to describe some recent applications of in silico modeling methods, comprising gene finding, protein modelling, Quantitative Structure Biodegradability Relationship (QSBR) studies and reconstruction of metabolic pathways, employed in the dehalogenation research area.

  4. How the strengths of Lisp-family languages facilitate building complex and flexible bioinformatics applications.

    Science.gov (United States)

    Khomtchouk, Bohdan B; Weitz, Edmund; Karp, Peter D; Wahlestedt, Claes

    2016-12-31

    We present a rationale for expanding the presence of the Lisp family of programming languages in bioinformatics and computational biology research. Put simply, Lisp-family languages enable programmers to more quickly write programs that run faster than in other languages. Languages such as Common Lisp, Scheme and Clojure facilitate the creation of powerful and flexible software that is required for complex and rapidly evolving domains like biology. We will point out several important key features that distinguish languages of the Lisp family from other programming languages, and we will explain how these features can aid researchers in becoming more productive and creating better code. We will also show how these features make these languages ideal tools for artificial intelligence and machine learning applications. We will specifically stress the advantages of domain-specific languages (DSLs): languages that are specialized to a particular area, and thus not only facilitate easier research problem formulation, but also aid in the establishment of standards and best programming practices as applied to the specific research field at hand. DSLs are particularly easy to build in Common Lisp, the most comprehensive Lisp dialect, which is commonly referred to as the 'programmable programming language'. We are convinced that Lisp grants programmers unprecedented power to build increasingly sophisticated artificial intelligence systems that may ultimately transform machine learning and artificial intelligence research in bioinformatics and computational biology. © The Author 2016. Published by Oxford University Press.

  5. A P2P Framework for Developing Bioinformatics Applications in Dynamic Cloud Environments

    Directory of Open Access Journals (Sweden)

    Chun-Hung Richard Lin

    2013-01-01

    Full Text Available Bioinformatics has advanced from in-house computing infrastructure to cloud computing in order to tackle the vast quantity of biological data, an advance that enables large numbers of collaborating researchers to share their work around the world. At the same time, retrieving biological data over the internet becomes more and more difficult because of its explosive growth and frequent changes. Various efforts have been made to address the problems of data discovery and delivery in the cloud framework, but most of them are hindered by the need for a MapReduce master server to track all available data. In this paper, we propose an alternative approach, called PRKad, which exploits a Peer-to-Peer (P2P) model to achieve efficient data discovery and delivery. PRKad is a Kademlia-based implementation with Round-Trip Time (RTT) as the associated key, and it locates data according to a Distributed Hash Table (DHT) and the XOR metric. The simulation results show that PRKad retrieves data with low link latency. As an interdisciplinary application of P2P computing to bioinformatics, PRKad also provides good scalability for servicing a greater number of users in dynamic cloud environments.
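
    The sketch below illustrates the XOR-metric lookup idea that Kademlia-style DHTs such as PRKad build on: node IDs and data keys live in the same identifier space, and a datum is stored on (and retrieved from) the nodes whose IDs are closest to the key under the XOR distance. The node names and the data key are invented, and this is not the PRKad implementation itself (which additionally uses RTT as the associated key).

    ```python
    # XOR-metric lookup sketch (Kademlia-style); node names and the data key
    # below are invented and this is not the PRKad implementation itself.
    import hashlib

    def node_id(name: str) -> int:
        """Derive a 160-bit identifier from a name, as Kademlia does with SHA-1."""
        return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

    def xor_distance(a: int, b: int) -> int:
        return a ^ b

    nodes = {name: node_id(name) for name in ["peer-A", "peer-B", "peer-C", "peer-D"]}
    key = node_id("GenBank:AB123456")   # illustrative data key

    # The k closest nodes (here k = 2) are responsible for storing/serving the key.
    closest = sorted(nodes, key=lambda n: xor_distance(nodes[n], key))[:2]
    print("store/lookup on:", closest)
    ```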

  6. Application of proteomics to translational research

    International Nuclear Information System (INIS)

    Liotta, L.A.; Petricoin, E.; Garaci, E.; De Maria, R.; Belluco, C.

    2009-01-01

    Deriving public benefit from basic biomedical research requires a dedicated and highly coordinated effort between basic scientists, physicians, bioinformaticians, clinical trial coordinators, MD and PhD trainees and fellows, and a host of other skilled participants. The Istituto Superiore di Sanita/George Mason University US-Italy Oncoproteomics program, established in 2005, is a successful example of a synergistic creative collaboration between basic scientists and clinical investigators conducting translational research. This program focuses on the application of the new field of proteomics to three urgent and fundamental clinical needs in cancer medicine: 1.) Biomarkers for early diagnosis of cancer, when it is still treatable, 2.) Individualizing patient therapy for molecular targeted inhibitors that block signal pathways driving cancer pathogenesis and 3.) Cancer Progenitor Cells (CSCs): When do the lethal progenitors of cancer first emerge, and how can we treat these CSCs with molecular targeted inhibitors?

  7. Impact of Online Versus Hardcopy Dictionaries' Application on Translation Quality of Iranian M.A. Translation Students

    Directory of Open Access Journals (Sweden)

    Sheida Zarei

    2017-12-01

    Full Text Available The study aimed at investigating the impact of online versus hardcopy dictionary use on the translation quality of senior M.A. students of translation, based on the Bleu model introduced by Papineni et al. (2002). To this end, using an Oxford Proficiency test, 50 (out of 70) female senior M.A. students of translation were selected and assigned to two groups, online and hardcopy, using systematic sampling. Next, an English text was selected as the reference text. This reference text was given to three translators: 1) a male English translation expert with a Ph.D. degree in Computational Linguistics (Ref. 1); 2) a female English translation expert with an M.A. degree, working at an English translation center and with more than 5 years of experience (Ref. 2); and 3) a male Ph.D. candidate in English translation (Ref. 3). These three versions were used as the reference Persian standard translations to be entered into Bleu. Later, the English text was given to the hardcopy and online groups. The translations of the participants were then compared with the three reference Persian translations using Bleu. The time taken by each student to translate the text into Persian was also recorded. The results indicated that there was no statistically significant difference between the translations of the hardcopy and online groups from the fluency/precision point of view. Comparison of the translation speed of the two groups indicated that the online group was meaningfully faster. The possible beneficiaries of the findings of this research are university professors, policy makers, and students in the realm of translation.
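
    A minimal sketch of the BLEU-style comparison described above: each student translation (the hypothesis) is scored against several reference translations. The sentences are placeholder English tokens rather than the actual Persian texts used in the study, and NLTK is assumed to be installed.

    ```python
    # BLEU scoring of one hypothetical "student translation" against three
    # hypothetical reference translations; assumes NLTK is installed.
    from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

    references = [
        "the committee approved the new budget yesterday".split(),
        "the committee passed the new budget yesterday".split(),
        "yesterday the committee approved the new budget".split(),
    ]
    hypothesis = "the committee approved a new budget yesterday".split()

    score = sentence_bleu(references, hypothesis,
                          smoothing_function=SmoothingFunction().method1)
    print(f"BLEU = {score:.3f}")
    ```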

  8. A generally applicable lightweight method for calculating a value structure for tools and services in bioinformatics infrastructure projects.

    Science.gov (United States)

    Mayer, Gerhard; Quast, Christian; Felden, Janine; Lange, Matthias; Prinz, Manuel; Pühler, Alfred; Lawerenz, Chris; Scholz, Uwe; Glöckner, Frank Oliver; Müller, Wolfgang; Marcus, Katrin; Eisenacher, Martin

    2017-10-30

    Sustainable noncommercial bioinformatics infrastructures are a prerequisite for using and taking advantage of the potential of big data analysis for research and the economy. Consequently, funders, universities and institutes, as well as users, ask for a transparent value model for the tools and services offered. In this article, a generally applicable lightweight method is described by which bioinformatics infrastructure projects can estimate the value of the tools and services they offer without determining the total cost of ownership exactly. Five representative scenarios for value estimation, from a rough estimate to a detailed breakdown of costs, are presented. To account for the diversity of bioinformatics applications and services, the notion of service-specific 'service provision units' is introduced, together with the factors influencing them and the main underlying assumptions for these 'value influencing factors'. Special attention is given to how to handle personnel costs and indirect costs such as electricity. Four examples are presented for the calculation of the value of tools and services provided by the German Network for Bioinformatics Infrastructure (de.NBI): one for tool usage, one for (Web-based) database analyses, one for consulting services and one for bioinformatics training events. Finally, from the discussed values, the costs of direct funding and the costs of payment of services by funded projects are calculated and compared. © The Author 2017. Published by Oxford University Press.
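
    A back-of-the-envelope sketch of the kind of calculation the article describes: the value of a service is estimated per 'service provision unit' from personnel costs plus a share of indirect costs such as electricity. All figures are invented placeholders, not de.NBI numbers.

    ```python
    # Rough value-per-unit estimate in the spirit of the article; every figure
    # below is an invented placeholder, not a de.NBI number.
    personnel_cost_per_year = 65000.0   # EUR: fraction of staff time on the service
    indirect_cost_per_year = 8000.0     # EUR: electricity, hosting, admin share
    units_provided_per_year = 2500      # service provision units, e.g. analyses run

    value_per_unit = (personnel_cost_per_year + indirect_cost_per_year) / units_provided_per_year
    print(f"estimated value per service provision unit: {value_per_unit:.2f} EUR")
    ```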

  9. Application of LSP Texts in Translator Training

    Science.gov (United States)

    Ilynska, Larisa; Smirnova, Tatjana; Platonova, Marina

    2017-01-01

    The paper presents discussion of the results of extensive empirical research into efficient methods of educating and training translators of LSP (language for special purposes) texts. The methodology is based on using popular LSP texts in the respective fields as one of the main media for translator training. The aim of the paper is to investigate…

  10. Application and Translation of Idioms in Chinese Advertisements

    Institute of Scientific and Technical Information of China (English)

    齐秋月

    2014-01-01

    In modern society, advertising has become increasingly important and indispensable. It exists everywhere and influences everyone. Marketing competition today is ever more acute, and under this condition the use of proper advertising language is necessary if manufacturers want to make their commodities appealing and attractive to consumers. This research therefore mainly discusses the application of idioms in Chinese advertisements; through the analysis of some advertisements in both Chinese and English, it finds that the methods usually used in advertisement translation are loan translation, literal translation and free translation.

  11. The 2nd DBCLS BioHackathon: interoperable bioinformatics Web services for integrated applications

    Directory of Open Access Journals (Sweden)

    Katayama Toshiaki

    2011-08-01

    Full Text Available Abstract Background The interaction between biological researchers and the bioinformatics tools they use is still hampered by incomplete interoperability between such tools. To ensure interoperability initiatives are effectively deployed, end-user applications need to be aware of, and support, best practices and standards. Here, we report on an initiative in which software developers and genome biologists came together to explore and raise awareness of these issues: BioHackathon 2009. Results Developers in attendance came from diverse backgrounds, with experts in Web services, workflow tools, text mining and visualization. Genome biologists provided expertise and exemplar data from the domains of sequence and pathway analysis and glyco-informatics. One goal of the meeting was to evaluate the ability to address real world use cases in these domains using the tools that the developers represented. This resulted in i) a workflow to annotate 100,000 sequences from an invertebrate species; ii) an integrated system for analysis of the transcription factor binding sites (TFBSs) enriched based on differential gene expression data obtained from a microarray experiment; iii) a workflow to enumerate putative physical protein interactions among enzymes in a metabolic pathway using protein structure data; iv) a workflow to analyze glyco-gene-related diseases by searching for human homologs of glyco-genes in other species, such as fruit flies, and retrieving their phenotype-annotated SNPs. Conclusions Beyond deriving prototype solutions for each use-case, a second major purpose of the BioHackathon was to highlight areas of insufficiency. We discuss the issues raised by our exploration of the problem/solution space, concluding that there are still problems with the way Web services are modeled and annotated, including: i) the absence of several useful data or analysis functions in the Web service "space"; ii) the lack of documentation of methods; iii) lack of

  12. The 2nd DBCLS BioHackathon: interoperable bioinformatics Web services for integrated applications

    Science.gov (United States)

    2011-01-01

    Background The interaction between biological researchers and the bioinformatics tools they use is still hampered by incomplete interoperability between such tools. To ensure interoperability initiatives are effectively deployed, end-user applications need to be aware of, and support, best practices and standards. Here, we report on an initiative in which software developers and genome biologists came together to explore and raise awareness of these issues: BioHackathon 2009. Results Developers in attendance came from diverse backgrounds, with experts in Web services, workflow tools, text mining and visualization. Genome biologists provided expertise and exemplar data from the domains of sequence and pathway analysis and glyco-informatics. One goal of the meeting was to evaluate the ability to address real world use cases in these domains using the tools that the developers represented. This resulted in i) a workflow to annotate 100,000 sequences from an invertebrate species; ii) an integrated system for analysis of the transcription factor binding sites (TFBSs) enriched based on differential gene expression data obtained from a microarray experiment; iii) a workflow to enumerate putative physical protein interactions among enzymes in a metabolic pathway using protein structure data; iv) a workflow to analyze glyco-gene-related diseases by searching for human homologs of glyco-genes in other species, such as fruit flies, and retrieving their phenotype-annotated SNPs. Conclusions Beyond deriving prototype solutions for each use-case, a second major purpose of the BioHackathon was to highlight areas of insufficiency. We discuss the issues raised by our exploration of the problem/solution space, concluding that there are still problems with the way Web services are modeled and annotated, including: i) the absence of several useful data or analysis functions in the Web service "space"; ii) the lack of documentation of methods; iii) lack of compliance with the SOAP

  13. Selecting Feature Subsets Based on SVM-RFE and the Overlapping Ratio with Applications in Bioinformatics.

    Science.gov (United States)

    Lin, Xiaohui; Li, Chao; Zhang, Yanhui; Su, Benzhe; Fan, Meng; Wei, Hai

    2017-12-26

    Feature selection is an important topic in bioinformatics. Defining informative features from complex high dimensional biological data is critical in disease study, drug development, etc. Support vector machine-recursive feature elimination (SVM-RFE) is an efficient feature selection technique that has shown its power in many applications. It ranks the features according to the recursive feature deletion sequence based on SVM. In this study, we propose a method, SVM-RFE-OA, which combines the classification accuracy rate and the average overlapping ratio of the samples to determine the number of features to be selected from the feature rank of SVM-RFE. Meanwhile, to measure the feature weights more accurately, we propose a modified SVM-RFE-OA (M-SVM-RFE-OA) algorithm that temporarily screens out the samples lying in a heavy overlapping area in each iteration. The experiments on the eight public biological datasets show that the discriminative ability of the feature subset could be measured more accurately by combining the classification accuracy rate with the average overlapping degree of the samples compared with using the classification accuracy rate alone, and shielding the samples in the overlapping area made the calculation of the feature weights more stable and accurate. The methods proposed in this study can also be used with other RFE techniques to define potential biomarkers from big biological data.
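
    A minimal sketch of the SVM-RFE ranking step that SVM-RFE-OA builds on, using scikit-learn on synthetic data: a linear SVM ranks features by recursive elimination. The subsequent SVM-RFE-OA step of choosing how many top-ranked features to keep from the classification accuracy and the samples' overlapping ratio is not reproduced here.

    ```python
    # SVM-RFE feature ranking with scikit-learn on synthetic data; the
    # overlapping-ratio-based cutoff selection of the paper is not reproduced.
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import RFE
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=100, n_features=50,
                               n_informative=5, random_state=0)

    ranker = RFE(estimator=SVC(kernel="linear"), n_features_to_select=1, step=1)
    ranker.fit(X, y)

    # ranking_[i] == 1 marks the last surviving (top-ranked) feature; lower is better.
    order = sorted(range(X.shape[1]), key=lambda i: ranker.ranking_[i])
    print("feature rank (best first):", order[:10])
    ```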

  14. Selecting Feature Subsets Based on SVM-RFE and the Overlapping Ratio with Applications in Bioinformatics

    Directory of Open Access Journals (Sweden)

    Xiaohui Lin

    2017-12-01

    Full Text Available Feature selection is an important topic in bioinformatics. Defining informative features from complex high dimensional biological data is critical in disease study, drug development, etc. Support vector machine-recursive feature elimination (SVM-RFE) is an efficient feature selection technique that has shown its power in many applications. It ranks the features according to the recursive feature deletion sequence based on SVM. In this study, we propose a method, SVM-RFE-OA, which combines the classification accuracy rate and the average overlapping ratio of the samples to determine the number of features to be selected from the feature rank of SVM-RFE. Meanwhile, to measure the feature weights more accurately, we propose a modified SVM-RFE-OA (M-SVM-RFE-OA) algorithm that temporarily screens out the samples lying in a heavy overlapping area in each iteration. The experiments on the eight public biological datasets show that the discriminative ability of the feature subset could be measured more accurately by combining the classification accuracy rate with the average overlapping degree of the samples compared with using the classification accuracy rate alone, and shielding the samples in the overlapping area made the calculation of the feature weights more stable and accurate. The methods proposed in this study can also be used with other RFE techniques to define potential biomarkers from big biological data.

  15. Deep Artificial Neural Networks and Neuromorphic Chips for Big Data Analysis: Pharmaceutical and Bioinformatics Applications

    Science.gov (United States)

    Pastur-Romay, Lucas Antón; Cedrón, Francisco; Pazos, Alejandro; Porto-Pazos, Ana Belén

    2016-01-01

    Over the past decade, Deep Artificial Neural Networks (DNNs) have become the state-of-the-art algorithms in Machine Learning (ML), speech recognition, computer vision, natural language processing and many other tasks. This was made possible by the advancement in Big Data, Deep Learning (DL) and drastically increased chip processing abilities, especially general-purpose graphical processing units (GPGPUs). All this has created a growing interest in making the most of the potential offered by DNNs in almost every field. An overview of the main architectures of DNNs, and their usefulness in Pharmacology and Bioinformatics are presented in this work. The featured applications are: drug design, virtual screening (VS), Quantitative Structure–Activity Relationship (QSAR) research, protein structure prediction and genomics (and other omics) data mining. The future need of neuromorphic hardware for DNNs is also discussed, and the two most advanced chips are reviewed: IBM TrueNorth and SpiNNaker. In addition, this review points out the importance of considering not only neurons, as DNNs and neuromorphic chips should also include glial cells, given the proven importance of astrocytes, a type of glial cell which contributes to information processing in the brain. The Deep Artificial Neuron–Astrocyte Networks (DANAN) could overcome the difficulties in architecture design, learning process and scalability of the current ML methods. PMID:27529225
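
    A toy illustration of the kind of feed-forward DNN the review surveys, applied to a QSAR-like task of predicting activity from numeric molecular descriptors. The data are random placeholders and the network is deliberately small; a real study would use curated descriptors and far more careful validation.

    ```python
    # Small feed-forward network on random, QSAR-like placeholder data.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 32))                  # 32 made-up molecular descriptors
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic "active" label

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
    net.fit(X_tr, y_tr)
    print("held-out accuracy:", net.score(X_te, y_te))
    ```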

  16. Deep Artificial Neural Networks and Neuromorphic Chips for Big Data Analysis: Pharmaceutical and Bioinformatics Applications.

    Science.gov (United States)

    Pastur-Romay, Lucas Antón; Cedrón, Francisco; Pazos, Alejandro; Porto-Pazos, Ana Belén

    2016-08-11

    Over the past decade, Deep Artificial Neural Networks (DNNs) have become the state-of-the-art algorithms in Machine Learning (ML), speech recognition, computer vision, natural language processing and many other tasks. This was made possible by the advancement in Big Data, Deep Learning (DL) and drastically increased chip processing abilities, especially general-purpose graphical processing units (GPGPUs). All this has created a growing interest in making the most of the potential offered by DNNs in almost every field. An overview of the main architectures of DNNs, and their usefulness in Pharmacology and Bioinformatics are presented in this work. The featured applications are: drug design, virtual screening (VS), Quantitative Structure-Activity Relationship (QSAR) research, protein structure prediction and genomics (and other omics) data mining. The future need of neuromorphic hardware for DNNs is also discussed, and the two most advanced chips are reviewed: IBM TrueNorth and SpiNNaker. In addition, this review points out the importance of considering not only neurons, as DNNs and neuromorphic chips should also include glial cells, given the proven importance of astrocytes, a type of glial cell which contributes to information processing in the brain. The Deep Artificial Neuron-Astrocyte Networks (DANAN) could overcome the difficulties in architecture design, learning process and scalability of the current ML methods.

  17. Deep Artificial Neural Networks and Neuromorphic Chips for Big Data Analysis: Pharmaceutical and Bioinformatics Applications

    Directory of Open Access Journals (Sweden)

    Lucas Antón Pastur-Romay

    2016-08-01

    Full Text Available Over the past decade, Deep Artificial Neural Networks (DNNs) have become the state-of-the-art algorithms in Machine Learning (ML), speech recognition, computer vision, natural language processing and many other tasks. This was made possible by the advancement in Big Data, Deep Learning (DL) and drastically increased chip processing abilities, especially general-purpose graphical processing units (GPGPUs). All this has created a growing interest in making the most of the potential offered by DNNs in almost every field. An overview of the main architectures of DNNs, and their usefulness in Pharmacology and Bioinformatics are presented in this work. The featured applications are: drug design, virtual screening (VS), Quantitative Structure–Activity Relationship (QSAR) research, protein structure prediction and genomics (and other omics) data mining. The future need of neuromorphic hardware for DNNs is also discussed, and the two most advanced chips are reviewed: IBM TrueNorth and SpiNNaker. In addition, this review points out the importance of considering not only neurons, as DNNs and neuromorphic chips should also include glial cells, given the proven importance of astrocytes, a type of glial cell which contributes to information processing in the brain. The Deep Artificial Neuron–Astrocyte Networks (DANAN) could overcome the difficulties in architecture design, learning process and scalability of the current ML methods.

  18. COMPARISON OF POPULAR BIOINFORMATICS DATABASES

    OpenAIRE

    Abdulganiyu Abdu Yusuf; Zahraddeen Sufyanu; Kabir Yusuf Mamman; Abubakar Umar Suleiman

    2016-01-01

    Bioinformatics is the application of computational tools to capture and interpret biological data. It has wide applications in drug development, crop improvement, agricultural biotechnology and forensic DNA analysis. There are various databases available to researchers in bioinformatics. These databases are customized for specific needs and range in size, scope, and purpose. The main drawbacks of bioinformatics databases include redundant information, constant change, data spread over m...

  19. Deep learning in bioinformatics.

    Science.gov (United States)

    Min, Seonwoo; Lee, Byunghan; Yoon, Sungroh

    2017-09-01

    In the era of big data, transformation of biomedical big data into valuable knowledge has been one of the most important challenges in bioinformatics. Deep learning has advanced rapidly since the early 2000s and now demonstrates state-of-the-art performance in various fields. Accordingly, application of deep learning in bioinformatics to gain insight from data has been emphasized in both academia and industry. Here, we review deep learning in bioinformatics, presenting examples of current research. To provide a useful and comprehensive perspective, we categorize research both by the bioinformatics domain (i.e. omics, biomedical imaging, biomedical signal processing) and deep learning architecture (i.e. deep neural networks, convolutional neural networks, recurrent neural networks, emergent architectures) and present brief descriptions of each study. Additionally, we discuss theoretical and practical issues of deep learning in bioinformatics and suggest future research directions. We believe that this review will provide valuable insights and serve as a starting point for researchers to apply deep learning approaches in their bioinformatics studies. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  20. Challenge Based Innovation: Translating Fundamental Research into Societal Applications

    Science.gov (United States)

    Kurikka, Joona; Utriainen, Tuuli; Repokari, Lauri

    2016-01-01

    This paper is based on work done at IdeaSquare, a new innovation experiment at CERN, the European Organization for Nuclear Research. The paper explores the translation of fundamental research into societal applications with the help of multidisciplinary student teams, project- and problem-based learning and design thinking methods. The theme is…

  1. Pengembangan Smart Application Translation Aneka Bahasa Sulawesi Berbasis Android

    Directory of Open Access Journals (Sweden)

    Maslan - Maslan

    2016-04-01

    Full Text Available The Indonesian people comprise diverse tribes, and the same diversity holds for the regional languages spread across Indonesia: each tribe has its own language, including those in South Sulawesi. Many local and foreign travellers visit tourist attractions in the city and so will indirectly communicate with the locals during their visits. This research aims to develop a smart Android-based translator application from Indonesian into regional languages of Sulawesi. The application can translate three regional languages, namely Konjo, Makassar and Bugis. The method used is research and development using the Linear Sequential Model. The application runs well and has passed a testing phase in which a vocabulary of 100 words was entered for each regional language, so that it can be used by people in Indonesia, who can obtain it through the Google Play Store.

  2. Genetic Algorithms for Optimization of Machine-learning Models and their Applications in Bioinformatics

    KAUST Repository

    Magana-Mora, Arturo

    2017-04-29

    Machine-learning (ML) techniques have been widely applied to solve different problems in biology. However, biological data are large and complex, which often result in extremely intricate ML models. Frequently, these models may have a poor performance or may be computationally unfeasible. This study presents a set of novel computational methods and focuses on the application of genetic algorithms (GAs) for the simplification and optimization of ML models and their applications to biological problems. The dissertation addresses the following three challenges. The first is to develop a generalizable classification methodology able to systematically derive competitive models despite the complexity and nature of the data. Although several algorithms for the induction of classification models have been proposed, the algorithms are data dependent. Consequently, we developed OmniGA, a novel and generalizable framework that uses different classification models in a tree-like decision structure, along with a parallel GA for the optimization of the OmniGA structure. Results show that OmniGA consistently outperformed existing commonly used classification models. The second challenge is the prediction of translation initiation sites (TIS) in plants genomic DNA. We performed a statistical analysis of the genomic DNA and proposed a new set of discriminant features for this problem. We developed a wrapper method based on GAs for selecting an optimal feature subset, which, in conjunction with a classification model, produced the most accurate framework for the recognition of TIS in plants. Finally, results demonstrate that despite the evolutionary distance between different plants, our approach successfully identified conserved genomic elements that may serve as the starting point for the development of a generic model for prediction of TIS in eukaryotic organisms. Finally, the third challenge is the accurate prediction of polyadenylation signals in human genomic DNA. To achieve
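
    A much-simplified sketch of a GA-based wrapper for feature selection in the spirit of, but not identical to, the approach described above: each individual is a bit mask over features, and its fitness is the cross-validated accuracy of a classifier trained on the selected features. The data are synthetic, and the population size and number of generations are kept tiny for illustration.

    ```python
    # Toy GA wrapper for feature selection on synthetic data: individuals are
    # boolean masks, fitness is 3-fold cross-validated accuracy.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=200, n_features=30,
                               n_informative=4, random_state=0)

    def fitness(mask):
        if not mask.any():
            return 0.0
        clf = LogisticRegression(max_iter=1000)
        return cross_val_score(clf, X[:, mask], y, cv=3).mean()

    pop = rng.integers(0, 2, size=(12, X.shape[1])).astype(bool)
    for _ in range(10):                                    # generations
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-6:]]             # keep the best half
        children = []
        for _ in range(len(pop) - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, X.shape[1])
            child = np.concatenate([a[:cut], b[cut:]])     # one-point crossover
            flip = rng.random(X.shape[1]) < 0.05           # bit-flip mutation
            children.append(np.where(flip, ~child, child))
        pop = np.vstack([parents] + children)

    best = pop[np.argmax([fitness(ind) for ind in pop])]
    print("selected features:", np.flatnonzero(best))
    ```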

  3. 8th International Conference on Practical Applications of Computational Biology & Bioinformatics

    CERN Document Server

    Rocha, Miguel; Fdez-Riverola, Florentino; Santana, Juan

    2014-01-01

    Biological and biomedical research are increasingly driven by experimental techniques that challenge our ability to analyse, process and extract meaningful knowledge from the underlying data. The impressive capabilities of next generation sequencing technologies, together with novel and ever evolving distinct types of omics data technologies, have posed an increasingly complex set of challenges to the growing fields of Bioinformatics and Computational Biology. The analysis of the datasets produced and their integration call for new algorithms and approaches from fields such as Databases, Statistics, Data Mining, Machine Learning, Optimization, Computer Science and Artificial Intelligence. Clearly, Biology is more and more a science of information requiring tools from the computational sciences. In the last few years, we have seen the surge of a new generation of interdisciplinary scientists that have a strong background in the biological and computational sciences. In this context, the interaction of researche...

  4. 7th International Conference on Practical Applications of Computational Biology & Bioinformatics

    CERN Document Server

    Nanni, Loris; Rocha, Miguel; Fdez-Riverola, Florentino

    2013-01-01

    The growth in the Bioinformatics and Computational Biology fields over the last few years has been remarkable and the trend is to increase its pace. In fact, the need for computational techniques that can efficiently handle the huge amounts of data produced by the new experimental techniques in Biology is still increasing, driven by new advances in Next Generation Sequencing, several types of the so-called omics data and image acquisition, just to name a few. The analysis of the datasets produced and their integration calls for new algorithms and approaches from fields such as Databases, Statistics, Data Mining, Machine Learning, Optimization, Computer Science and Artificial Intelligence. Within this scenario of increasing data availability, Systems Biology has also been emerging as an alternative to the reductionist view that dominated biological research in the last decades. Indeed, Biology is more and more a science of information requiring tools from the computational sciences. In the last few years, we ...

  5. 11th International Conference on Practical Applications of Computational Biology & Bioinformatics

    CERN Document Server

    Mohamad, Mohd; Rocha, Miguel; Paz, Juan; Pinto, Tiago

    2017-01-01

    Biological and biomedical research are increasingly driven by experimental techniques that challenge our ability to analyse, process and extract meaningful knowledge from the underlying data. The impressive capabilities of next-generation sequencing technologies, together with novel and constantly evolving, distinct types of omics data technologies, have created an increasingly complex set of challenges for the growing fields of Bioinformatics and Computational Biology. The analysis of the datasets produced and their integration call for new algorithms and approaches from fields such as Databases, Statistics, Data Mining, Machine Learning, Optimization, Computer Science and Artificial Intelligence. Clearly, Biology is more and more a science of information and requires tools from the computational sciences. In the last few years, we have seen the rise of a new generation of interdisciplinary scientists with a strong background in the biological and computational sciences. In this context, the interaction of r...

  6. 10th International Conference on Practical Applications of Computational Biology & Bioinformatics

    CERN Document Server

    Rocha, Miguel; Fdez-Riverola, Florentino; Mayo, Francisco; Paz, Juan

    2016-01-01

    Biological and biomedical research are increasingly driven by experimental techniques that challenge our ability to analyse, process and extract meaningful knowledge from the underlying data. The impressive capabilities of next generation sequencing technologies, together with novel and ever evolving distinct types of omics data technologies, have posed an increasingly complex set of challenges to the growing fields of Bioinformatics and Computational Biology. The analysis of the datasets produced and their integration call for new algorithms and approaches from fields such as Databases, Statistics, Data Mining, Machine Learning, Optimization, Computer Science and Artificial Intelligence. Clearly, Biology is more and more a science of information requiring tools from the computational sciences. In the last few years, we have seen the surge of a new generation of interdisciplinary scientists that have a strong background in the biological and computational sciences. In this context, the interaction of researche...

  7. 47 CFR 74.1233 - Processing FM translator and booster station applications.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 4 2010-10-01 2010-10-01 false Processing FM translator and booster station... SERVICES FM Broadcast Translator Stations and FM Broadcast Booster Stations § 74.1233 Processing FM translator and booster station applications. (a) Applications for FM translator and booster stations are...

  8. Identification and Evaluation of Medical Translator Mobile Applications Using an Adapted APPLICATIONS Scoring System.

    Science.gov (United States)

    Khander, Amrin; Farag, Sara; Chen, Katherine T

    2017-12-22

    With an increasing number of patients requiring translator services, many providers are turning to mobile applications (apps) for assistance. However, there have been no published reviews of medical translator apps. To identify and evaluate medical translator mobile apps using an adapted APPLICATIONS scoring system. A list of apps was identified from the Apple iTunes and Google Play stores, using the search term, "medical translator." Apps not found on two different searches, not in an English-based platform, not used for translation, or not functional after purchase, were excluded. The remaining apps were evaluated using an adapted APPLICATIONS scoring system, which included both objective and subjective criteria. App comprehensiveness was a weighted score defined by the number of non-English languages included in each app relative to the proportion of non-English speakers in the United States. The Apple iTunes and Google Play stores. Medical translator apps identified using the search term "medical translator." Main Outcomes and Measures: Compilation of medical translator apps for provider usage. A total of 524 apps were initially found. After applying the exclusion criteria, 20 (8.2%) apps from the Google Play store and 26 (9.2%) apps from the Apple iTunes store remained for evaluation. The highest scoring apps, Canopy Medical Translator, Universal Doctor Speaker, and Vocre Translate, scored 13.5 out of 18.7 possible points. A large proportion of apps initially found did not function as medical translator apps. Using the APPLICATIONS scoring system, we have identified and evaluated medical translator apps for providers who care for non-English speaking patients.

  9. Future Translational Applications From the Contemporary Genomics Era

    Science.gov (United States)

    Fox, Caroline S.; Hall, Jennifer L.; Arnett, Donna K.; Ashley, Euan A.; Delles, Christian; Engler, Mary B.; Freeman, Mason W.; Johnson, Julie A.; Lanfear, David E.; Liggett, Stephen B.; Lusis, Aldons J.; Loscalzo, Joseph; MacRae, Calum A.; Musunuru, Kiran; Newby, L. Kristin; O’Donnell, Christopher J.; Rich, Stephen S.; Terzic, Andre

    2016-01-01

    The field of genetics and genomics has advanced considerably with the achievement of recent milestones encompassing the identification of many loci for cardiovascular disease and variable drug responses. Despite this achievement, a gap exists in the understanding and advancement to meaningful translation that directly affects disease prevention and clinical care. The purpose of this scientific statement is to address the gap between genetic discoveries and their practical application to cardiovascular clinical care. In brief, this scientific statement assesses the current timeline for effective translation of basic discoveries to clinical advances, highlighting past successes. Current discoveries in the area of genetics and genomics are covered next, followed by future expectations, tools, and competencies for achieving the goal of improving clinical care. PMID:25882488

  10. Sign Language Translator Application Using OpenCV

    Science.gov (United States)

    Triyono, L.; Pratisto, E. H.; Bawono, S. A. T.; Purnomo, F. A.; Yudhanto, Y.; Raharjo, B.

    2018-03-01

    This research focuses on the development of an Android-based sign language translator application using OpenCV; the application is based on colour differences, and the authors also utilize support vector machine learning to predict the label. The results show that a fingertip-coordinate search method can be used to recognize gestures made with an open hand, while gestures made with a clenched hand are recognized using the Hu Moments value. The fingertip method is the more resilient of the two, with a gesture-recognition success rate of 95% at distances of 35 cm and 55 cm, light intensities of approximately 90 lux and 100 lux, and a plain light-green background, compared with a success rate of 40% for the Hu Moments method under the same parameters. Against an outdoor background the application still cannot be used, with only 6 successful recognitions and the rest failing.
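
    A minimal sketch of the Hu Moments part of the pipeline described above: segment the hand by colour, take the largest contour and compute its Hu Moments as a (largely) rotation- and scale-invariant shape descriptor. OpenCV is assumed to be installed, 'hand.jpg' is a hypothetical test image, and the HSV skin-colour bounds are rough placeholders that would need tuning.

    ```python
    # Hu Moments of the largest skin-coloured contour; 'hand.jpg' is a
    # hypothetical image and the HSV bounds are rough placeholders.
    import cv2
    import numpy as np

    frame = cv2.imread("hand.jpg")                       # hypothetical test image
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([0, 30, 60]), np.array([20, 150, 255]))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    hand = max(contours, key=cv2.contourArea)            # assume the hand is the largest blob

    hu = cv2.HuMoments(cv2.moments(hand)).flatten()
    print("Hu Moments:", hu)   # such values would be fed to the gesture classifier
    ```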

  11. Bioinformatics for Exploration

    Science.gov (United States)

    Johnson, Kathy A.

    2006-01-01

    For the purpose of this paper, bioinformatics is defined as the application of computer technology to the management of biological information. It can be thought of as the science of developing computer databases and algorithms to facilitate and expedite biological research. This is a crosscutting capability that supports nearly all human health areas ranging from computational modeling, to pharmacodynamics research projects, to decision support systems within autonomous medical care. Bioinformatics serves to increase the efficiency and effectiveness of the life sciences research program. It provides data, information, and knowledge capture which further supports management of the bioastronautics research roadmap - identifying gaps that still remain and enabling the determination of which risks have been addressed.

  12. The Development of Bayesian Theory and Its Applications in Business and Bioinformatics

    Science.gov (United States)

    Zhang, Yifei

    2018-03-01

    Bayesian Theory originated from an essay by the British mathematician Thomas Bayes published in 1763, and after its development in the 20th century, Bayesian Statistics has come to play a significant part in statistical studies across all fields. Due to recent breakthroughs in high-dimensional integration, Bayesian Statistics has been further improved and refined, and it can now be used to solve problems that Classical Statistics failed to solve. This paper summarizes the history, concepts and applications of Bayesian Statistics, illustrated in five parts: the history of Bayesian Statistics, the weaknesses of Classical Statistics, Bayesian Theory, its development, and its applications. The first two parts compare Bayesian Statistics and Classical Statistics from a macroscopic perspective, and the last three parts focus on Bayesian Theory specifically, from introducing particular Bayesian concepts to outlining their development and, finally, their applications.
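
    A small worked example of the Bayesian updating idea discussed above: a Beta prior on a success probability is combined with observed data to give a Beta posterior (conjugacy keeps this toy case exact, with no high-dimensional integration needed). The numbers are illustrative only, and SciPy is assumed to be installed.

    ```python
    # Beta-Binomial posterior update with invented numbers; assumes SciPy.
    from scipy.stats import beta

    prior_a, prior_b = 2, 2          # mild prior belief centred near 0.5
    successes, failures = 18, 7      # observed data

    posterior = beta(prior_a + successes, prior_b + failures)
    print("posterior mean:", posterior.mean())
    print("95% credible interval:", posterior.interval(0.95))
    ```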

  13. Bioinformatics-Aided Venomics

    Directory of Open Access Journals (Sweden)

    Quentin Kaas

    2015-06-01

    Full Text Available Venomics is a modern approach that combines transcriptomics and proteomics to explore the toxin content of venoms. This review will give an overview of computational approaches that have been created to classify and consolidate venomics data, as well as algorithms that have helped discovery and analysis of toxin nucleic acid and protein sequences, toxin three-dimensional structures and toxin functions. Bioinformatics is used to tackle specific challenges associated with the identification and annotations of toxins. Recognizing toxin transcript sequences among second generation sequencing data cannot rely only on basic sequence similarity because toxins are highly divergent. Mass spectrometry sequencing of mature toxins is challenging because toxins can display a large number of post-translational modifications. Identifying the mature toxin region in toxin precursor sequences requires the prediction of the cleavage sites of proprotein convertases, most of which are unknown or not well characterized. Tracing the evolutionary relationships between toxins should consider specific mechanisms of rapid evolution as well as interactions between predatory animals and prey. Rapidly determining the activity of toxins is the main bottleneck in venomics discovery, but some recent bioinformatics and molecular modeling approaches give hope that accurate predictions of toxin specificity could be made in the near future.

  14. The Application of Equivalence Theory to Advertising Translation

    Institute of Scientific and Technical Information of China (English)

    张颖

    2017-01-01

    Through analyzing equivalence theory, the author tries to find a solution to the problems arising in the process of advertising translation. These problems include cultural diversity, language diversity and the special requirements of advertisement. The author argues that Nida's functional equivalence is one of the most appropriate theories for dealing with these problems. In this paper, the author introduces the principles of advertising translation and cultural divergences in advertising translation, and then gives some advertising translation practices to explain and analyze how to create good advertising translations by using functional equivalence. Finally, the author introduces some strategies in advertising translation.

  15. Percussive drilling application of translational motion permanent magnet machine

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Shujun

    2012-07-01

    It is clear that percussive drills are very promising, since they can increase the rate of penetration in hard rock formations. Any small improvement to percussive drills can make a big contribution to lowering drilling costs, since drilling a well for the oil and gas industry is very costly. This thesis presents a percussive drilling system driven mainly by a tubular reciprocating translational motion permanent magnet synchronous motor (RTPMSM), which efficiently converts electric energy to kinetic energy for crushing hard rock, since there is no intermediate mechanical medium. The thesis starts from the state of the art of percussive drilling techniques, reciprocating translational motion motors, and self-sensing control of electric motors and its implementation issues. The following chapters present modeling of the hard rock, modeling of the drill, the design issues of the drill, the RTPMSM and its control. A single-phase RTPMSM prototype is tested for hard rock drilling. The presented variable-voltage variable-frequency control is also validated on it. Space vector control and self-sensing control are also explored on a three-phase RTPMSM prototype. The results show that the percussive drill can be applied to hard rock drilling applications. A detailed summary of contributions and future work is presented at the end of the thesis. (Author)

  16. Translational Applications of Molecular Imaging and Radionuclide Therapy

    International Nuclear Information System (INIS)

    Welch, Michael J.; Eckelman, William C.; Vera, David

    2005-01-01

    Molecular imaging is becoming a larger part of imaging research and practice. The Office of Biological and Environmental Research of the Department of Energy funds a significant number of researchers in this area. The proposal is to partially fund a workshop to inform scientists working in nuclear medicine and nuclear medicine practitioners of the recent advances of molecular imaging in nuclear medicine as well as other imaging modalities. A limited number of topics related to radionuclide therapy will also be discussed. The proposal is to request partial funds for the workshop entitled "Translational Applications of Molecular Imaging and Radionuclide Therapy" to be held prior to the Society of Nuclear Medicine Annual Meeting in Toronto, Canada in June 2005. The meeting will be held on June 17-18. This will allow scientists interested in all aspects of nuclear medicine imaging to attend. The chair of the organizing group is Dr. Michael J. Welch. The organizing committee consists of Dr. Welch, Dr. William C. Eckelman and Dr. David Vera. The goal is to invite speakers to discuss the most recent advances of modern molecular imaging and therapy. Speakers will present advances made in in vivo tagging imaging assays, technical aspects of small animal imaging, in vivo imaging and bench to bedside translational study; and the role of a diagnostic scan on therapy selection. This latter topic will include discussions on therapy and new approaches to dosimetry. Several of these topics are those funded by the Department of Energy Office of Biological and Environmental Research.

  17. Engineering bioinformatics: building reliability, performance and productivity into bioinformatics software.

    Science.gov (United States)

    Lawlor, Brendan; Walsh, Paul

    2015-01-01

    There is a lack of software engineering skills in bioinformatic contexts. We discuss the consequences of this lack, examine existing explanations and remedies to the problem, point out their shortcomings, and propose alternatives. Previous analyses of the problem have tended to treat the use of software in scientific contexts as categorically different from the general application of software engineering in commercial settings. In contrast, we describe bioinformatic software engineering as a specialization of general software engineering, and examine how it should be practiced. Specifically, we highlight the difference between programming and software engineering, list elements of the latter and present the results of a survey of bioinformatic practitioners which quantifies the extent to which those elements are employed in bioinformatics. We propose that the ideal way to bring engineering values into research projects is to bring engineers themselves. We identify the role of Bioinformatic Engineer and describe how such a role would work within bioinformatic research teams. We conclude by recommending an educational emphasis on cross-training software engineers into life sciences, and propose research on Domain Specific Languages to facilitate collaboration between engineers and bioinformaticians.

  18. Engineering bioinformatics: building reliability, performance and productivity into bioinformatics software

    Science.gov (United States)

    Lawlor, Brendan; Walsh, Paul

    2015-01-01

    There is a lack of software engineering skills in bioinformatic contexts. We discuss the consequences of this lack, examine existing explanations and remedies to the problem, point out their shortcomings, and propose alternatives. Previous analyses of the problem have tended to treat the use of software in scientific contexts as categorically different from the general application of software engineering in commercial settings. In contrast, we describe bioinformatic software engineering as a specialization of general software engineering, and examine how it should be practiced. Specifically, we highlight the difference between programming and software engineering, list elements of the latter and present the results of a survey of bioinformatic practitioners which quantifies the extent to which those elements are employed in bioinformatics. We propose that the ideal way to bring engineering values into research projects is to bring engineers themselves. We identify the role of Bioinformatic Engineer and describe how such a role would work within bioinformatic research teams. We conclude by recommending an educational emphasis on cross-training software engineers into life sciences, and propose research on Domain Specific Languages to facilitate collaboration between engineers and bioinformaticians. PMID:25996054

  19. 47 CFR 73.3521 - Mutually exclusive applications for low power television, television translators and television...

    Science.gov (United States)

    2010-10-01

    ... television, television translators and television booster stations. 73.3521 Section 73.3521 Telecommunication... Applicable to All Broadcast Stations § 73.3521 Mutually exclusive applications for low power television, television translators and television booster stations. When there is a pending application for a new low...

  20. Translational Applications of Nanodiamonds: From Biocompatibility to Theranostics

    Science.gov (United States)

    Moore, Laura Kent

    Nanotechnology marks the next phase of development for drug delivery, contrast agents and gene therapy. For these novel systems to achieve success in clinical translation we must see that they are both effective and safe. Diamond nanoparticles, also known as nanodiamonds (NDs), have been gaining popularity as molecular delivery vehicles over the last decade. The uniquely faceted, carbon nanoparticles possess a number of beneficial properties that are being harnessed for applications ranging from small-molecule drug delivery to biomedical imaging and gene therapy. In addition to improving the effectiveness of a variety of therapeutics and contrast agents, initial studies indicate that NDs are biocompatible. In this work we evaluate the translational potential of NDs by demonstrating efficacy in molecular delivery and scrutinizing particle tolerance. Previous work has demonstrated that NDs are effective vehicles for the delivery of anthracycline chemotherapeutics and gadolinium(III) based contrast agents. We have sought to enhance the gains made in both areas through the addition of active targeting. We find that ND-mediated targeted delivery of epirubicin to triple negative breast cancers induces tumor regression and virtually eliminates drug toxicities. Additionally, ND-mediated delivery of the MRI contrast agent ProGlo boosts the per gadolinium relaxivity four fold, eliminates water solubility issues and effectively labels progesterone receptor expressing breast cancer cells. Both strategies open the door to the development of targeted, theranostic constructs based on NDs, capable of treating and labeling breast cancers at the same time. Although we have seen that NDs are effective vehicles for molecular delivery, for any nanoparticle to achieve clinical utility it must be biocompatible. Preliminary research has shown that NDs are non-toxic, however only a fraction of the ND-subtypes have been evaluated. Here we present an in depth analysis of the cellular

  1. Towards a HPC-oriented parallel implementation of a learning algorithm for bioinformatics applications.

    Science.gov (United States)

    D'Angelo, Gianni; Rampone, Salvatore

    2014-01-01

    The huge quantity of data produced in Biomedical research needs sophisticated algorithmic methodologies for its storage, analysis, and processing. High Performance Computing (HPC) appears as a magic bullet in this challenge. However, several hard-to-solve parallelization and load balancing problems arise in this context. Here we discuss the HPC-oriented implementation of a general purpose learning algorithm, originally conceived for DNA analysis and recently extended to treat uncertainty on data (U-BRAIN). The U-BRAIN algorithm is a learning algorithm that finds a Boolean formula in disjunctive normal form (DNF), of approximately minimum complexity, that is consistent with a set of data (instances) which may have missing bits. The conjunctive terms of the formula are computed in an iterative way by identifying, from the given data, a family of sets of conditions that must be satisfied by all the positive instances and violated by all the negative ones; such conditions allow the computation of a set of coefficients (relevances) for each attribute (literal), which form a probability distribution and allow the selection of the term literals. The great versatility that characterizes it makes U-BRAIN applicable in many fields in which there are data to be analyzed. However, the memory and the execution time required by the running are of order O(n^3) and O(n^5), respectively, and so the algorithm is unaffordable for huge data sets. We find mathematical and programming solutions able to lead us towards the implementation of the U-BRAIN algorithm on parallel computers. First we give a Dynamic Programming model of the U-BRAIN algorithm, then we minimize the representation of the relevances. When the data are of great size we are forced to use mass memory, and depending on where the data are actually stored, the access times can be quite different. According to the evaluation of algorithmic efficiency based on the Disk Model, in order to reduce the costs of
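
    The sketch below is a heavily simplified, greedy illustration of the kind of DNF learning U-BRAIN performs: it repeatedly builds a conjunctive term that covers some positive instances while excluding all negatives, until every positive is covered. This toy version ignores missing bits, the relevance coefficients and all of the parallelisation issues the paper is actually about.

    ```python
    # Greedy DNF learning on a tiny, complete (no missing bits) toy data set;
    # this is an illustration of the idea, not the U-BRAIN algorithm.
    positives = [(1, 0, 1), (1, 1, 1)]
    negatives = [(0, 0, 1), (0, 1, 0)]

    def satisfies(term, instance):
        return all(instance[i] == v for i, v in term)

    def learn_dnf(pos, neg):
        formula, uncovered = [], list(pos)
        while uncovered:
            seed = uncovered[0]
            # start from the full conjunction describing one positive instance...
            term = [(i, v) for i, v in enumerate(seed)]
            # ...and drop literals as long as no negative instance becomes satisfied
            for literal in list(term):
                trial = [l for l in term if l != literal]
                if not any(satisfies(trial, n) for n in neg):
                    term = trial
            formula.append(term)
            uncovered = [p for p in uncovered if not satisfies(term, p)]
        return formula

    print(learn_dnf(positives, negatives))   # e.g. [[(0, 1)]], i.e. the term "x0"
    ```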

  2. Federated querying architecture with clinical & translational health IT application.

    Science.gov (United States)

    Livne, Oren E; Schultz, N Dustin; Narus, Scott P

    2011-10-01

    We present a software architecture that federates data from multiple heterogeneous health informatics data sources owned by multiple organizations. The architecture builds upon state-of-the-art open-source Java and XML frameworks in innovative ways. It consists of (a) federated query engine, which manages federated queries and result set aggregation via a patient identification service; and (b) data source facades, which translate the physical data models into a common model on-the-fly and handle large result set streaming. System modules are connected via reusable Apache Camel integration routes and deployed to an OSGi enterprise service bus. We present an application of our architecture that allows users to construct queries via the i2b2 web front-end, and federates patient data from the University of Utah Enterprise Data Warehouse and the Utah Population database. Our system can be easily adopted, extended and integrated with existing SOA Healthcare and HL7 frameworks such as i2b2 and caGrid.
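
    As an illustration of the facade idea described above, the tiny sketch below translates records from two differently shaped sources into a common model on the fly and merges the result sets by a shared patient identifier. The real system is Java-, Camel- and OSGi-based; the class and field names here are invented.

    ```python
    # Facade + merge-by-patient sketch; class and field names are invented and
    # this is Python rather than the Java/Camel/OSGi stack of the actual system.
    class WarehouseFacade:
        rows = [{"PAT_KEY": "p1", "ICD": "E11", "VAL": 1}]
        def query(self):
            for r in self.rows:   # translate the physical model to the common model
                yield {"patient_id": r["PAT_KEY"], "code": r["ICD"], "value": r["VAL"]}

    class PopulationDbFacade:
        rows = [{"person": "p1", "dx_code": "E11", "measure": 2}]
        def query(self):
            for r in self.rows:
                yield {"patient_id": r["person"], "code": r["dx_code"], "value": r["measure"]}

    def federated_query(facades):
        merged = {}
        for facade in facades:                      # aggregate result sets by patient
            for record in facade.query():
                merged.setdefault(record["patient_id"], []).append(record)
        return merged

    print(federated_query([WarehouseFacade(), PopulationDbFacade()]))
    ```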

  3. 47 CFR 1.572 - Processing TV broadcast and translator station applications.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Processing TV broadcast and translator station applications. 1.572 Section 1.572 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND... and translator station applications. See § 73.3572. ...

  4. 47 CFR 1.573 - Processing FM broadcast and translator station applications.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Processing FM broadcast and translator station applications. 1.573 Section 1.573 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND... and translator station applications. See § 73.3573. ...

  5. Fuzzy Logic in Medicine and Bioinformatics

    Directory of Open Access Journals (Sweden)

    Angela Torres

    2006-01-01

    Full Text Available The purpose of this paper is to present a general view of the current applications of fuzzy logic in medicine and bioinformatics. We particularly review the medical literature using fuzzy logic. We then recall the geometrical interpretation of fuzzy sets as points in a fuzzy hypercube and present two concrete illustrations in medicine (drug addictions) and in bioinformatics (comparison of genomes).
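
    A small illustration of the geometric view mentioned above: a fuzzy set over n elements is a point in the unit hypercube [0, 1]^n, and two such points can be compared with, for example, a Jaccard-style similarity built from fuzzy intersection and union. The membership vectors below are invented.

    ```python
    # Two fuzzy sets as points in [0, 1]^4, compared with a fuzzy Jaccard
    # similarity; the membership values are invented.
    set_a = [0.9, 0.4, 0.0, 0.7]   # memberships of 4 elements in fuzzy set A
    set_b = [0.8, 0.1, 0.2, 0.6]   # memberships of the same elements in fuzzy set B

    intersection = sum(min(a, b) for a, b in zip(set_a, set_b))
    union = sum(max(a, b) for a, b in zip(set_a, set_b))

    print("fuzzy similarity:", intersection / union)
    ```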

  6. The evolution and practical application of machine translation system (1)

    Science.gov (United States)

    Tominaga, Isao; Sato, Masayuki

    This paper describes the development, practical application, and problems of machine translation systems, the evaluation of practical systems, and development trends in machine translation. Most recent systems face the following four problems: 1) the vagueness of a text; 2) differences in the definition of terminology between different languages; 3) the preparation of a large-scale translation dictionary; 4) the development of software for logical inference. Machine translation systems are already used practically in many industrial fields. However, many problems remain unsolved, and the implementation of an ideal system is expected to lie some 15 years in the future. This paper also describes seven evaluation items in detail. This English abstract was made by the Mu system.

  7. Emergent Computation Emphasizing Bioinformatics

    CERN Document Server

    Simon, Matthew

    2005-01-01

    Emergent Computation is concerned with recent applications of Mathematical Linguistics or Automata Theory. This subject has a primary focus upon "Bioinformatics" (the Genome and arising interest in the Proteome), but the closing chapter also examines applications in Biology, Medicine, Anthropology, etc. The book is composed of an organized examination of DNA, RNA, and the assembly of amino acids into proteins. Rather than examine these areas from a purely mathematical viewpoint (that excludes much of the biochemical reality), the author uses scientific papers written mostly by biochemists based upon their laboratory observations. Thus while DNA may exist in its double stranded form, triple stranded forms are not excluded. Similarly, while bases exist in Watson-Crick complements, mismatched bases and abasic pairs are not excluded, nor are Hoogsteen bonds. Just as there are four bases naturally found in DNA, the existence of additional bases is not ignored, nor amino acids in addition to the usual complement of...

  8. Parallel translation in warped product spaces: application to the Reissner-Nordstroem spacetime

    International Nuclear Information System (INIS)

    Raposo, A P; Del Riego, L

    2005-01-01

    A formal treatment of the parallel translation transformations in warped product manifolds is presented and related to those parallel translation transformations in each of the factor manifolds. A straightforward application to the Schwarzschild and Reissner-Nordstroem geometries, considered here as particular examples, explains some apparently surprising properties of the holonomy in these manifolds

  9. The Application of Mobile Devices in the Translation Classroom

    Science.gov (United States)

    Bahri, Hossein; Mahadi, Tengku Sepora Tengku

    2016-01-01

    While the presence of mobile electronic devices in the classroom has posed real challenges to instructors, a growing number of teachers believe they should seize the chance to improve the quality of instruction. The advent of new mobile technologies (laptops, smartphones, tablets, etc.) in the translation classroom has opened up new opportunities…

  10. Application of speckle decorrelation method for small translation measurements

    Czech Academy of Sciences Publication Activity Database

    Horváth, Pavel; Hrabovský, Miroslav; Šmíd, Petr

    2004-01-01

    Roč. 34, č. 2 (2004), s. 203-218 ISSN 0078-5466 R&D Projects: GA MŠk LN00A015 Institutional research plan: CEZ:AV0Z1010921 Keywords : speckle * decorrelation * in-plane and normal translation Subject RIV: BH - Optics, Masers, Lasers Impact factor: 0.308, year: 2004

  11. Application of the Localisation Platform Crowdin in Translator Education

    Science.gov (United States)

    Kudla, Dominik

    2017-01-01

    This article is an attempt to briefly describe the potential of the online localisation platform Crowdin for the education of the future translators at universities and in private courses or workshops. This description is provided on the basis of the information gathered through the user experience of the author and uses the example of the Khan…

  12. When cloud computing meets bioinformatics: a review.

    Science.gov (United States)

    Zhou, Shuigeng; Liao, Ruiqi; Guan, Jihong

    2013-10-01

    In the past decades, with the rapid development of high-throughput technologies, biology research has generated an unprecedented amount of data. In order to store and process such a great amount of data, cloud computing and MapReduce were applied to many fields of bioinformatics. In this paper, we first introduce the basic concepts of cloud computing and MapReduce, and their applications in bioinformatics. We then highlight some problems challenging the applications of cloud computing and MapReduce to bioinformatics. Finally, we give a brief guideline for using cloud computing in biology research.
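    The MapReduce model mentioned in this record can be illustrated with a toy bioinformatics task, counting k-mers across sequencing reads; this is a minimal in-memory sketch, not tied to any particular cloud framework such as Hadoop or Spark, where the map and reduce functions would normally run in a distributed fashion.

```python
# Toy illustration of the MapReduce model on a bioinformatics task:
# counting k-mers across sequencing reads, with the shuffle simulated in memory.
from collections import defaultdict

def map_phase(read, k=4):
    """Emit (k-mer, 1) pairs for one read."""
    for i in range(len(read) - k + 1):
        yield read[i:i + k], 1

def reduce_phase(kmer, counts):
    """Sum the counts for one k-mer."""
    return kmer, sum(counts)

def mapreduce_kmer_count(reads, k=4):
    shuffled = defaultdict(list)
    for read in reads:                      # map + shuffle
        for kmer, one in map_phase(read, k):
            shuffled[kmer].append(one)
    return dict(reduce_phase(kmer, counts) for kmer, counts in shuffled.items())

if __name__ == "__main__":
    print(mapreduce_kmer_count(["ACGTACGT", "ACGTTTTT"], k=4))
```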

  13. Biggest challenges in bioinformatics.

    Science.gov (United States)

    Fuller, Jonathan C; Khoueiry, Pierre; Dinkel, Holger; Forslund, Kristoffer; Stamatakis, Alexandros; Barry, Joseph; Budd, Aidan; Soldatos, Theodoros G; Linssen, Katja; Rajput, Abdul Mateen

    2013-04-01

    The third Heidelberg Unseminars in Bioinformatics (HUB) was held on 18th October 2012, at Heidelberg University, Germany. HUB brought together around 40 bioinformaticians from academia and industry to discuss the 'Biggest Challenges in Bioinformatics' in a 'World Café' style event.

  14. Biggest challenges in bioinformatics

    OpenAIRE

    Fuller, Jonathan C; Khoueiry, Pierre; Dinkel, Holger; Forslund, Kristoffer; Stamatakis, Alexandros; Barry, Joseph; Budd, Aidan; Soldatos, Theodoros G; Linssen, Katja; Rajput, Abdul Mateen

    2013-01-01

    The third Heidelberg Unseminars in Bioinformatics (HUB) was held in October at Heidelberg University in Germany. HUB brought together around 40 bioinformaticians from academia and industry to discuss the ‘Biggest Challenges in Bioinformatics' in a ‘World Café' style event.

  15. The applicability of Lean and Six Sigma techniques to clinical and translational research.

    Science.gov (United States)

    Schweikhart, Sharon A; Dembe, Allard E

    2009-10-01

    Lean and Six Sigma are business management strategies commonly used in production industries to improve process efficiency and quality. During the past decade, these process improvement techniques increasingly have been applied outside the manufacturing sector, for example, in health care and in software development. This article concerns the potential use of Lean and Six Sigma in improving the processes involved in clinical and translational research. Improving quality, avoiding delays and errors, and speeding up the time to implementation of biomedical discoveries are prime objectives of the National Institutes of Health (NIH) Roadmap for Medical Research and the NIH's Clinical and Translational Science Award program. This article presents a description of the main principles, practices, and methods used in Lean and Six Sigma. Available literature involving applications of Lean and Six Sigma to health care, laboratory science, and clinical and translational research is reviewed. Specific issues concerning the use of these techniques in different phases of translational research are identified. Examples of Lean and Six Sigma applications that are being planned at a current Clinical and Translational Science Award site are provided, which could potentially be replicated elsewhere. We describe how different process improvement approaches are best adapted for particular translational research phases. Lean and Six Sigma process improvement methods are well suited to help achieve NIH's goal of making clinical and translational research more efficient and cost-effective, enhancing the quality of the research, and facilitating the successful adoption of biomedical research findings into practice.

  16. Bioinformatics and Cancer

    Science.gov (United States)

    Researchers take on challenges and opportunities to mine "Big Data" for answers to complex biological questions. Learn how bioinformatics uses advanced computing, mathematics, and technological platforms to store, manage, analyze, and understand data.

  17. Design Application Translates 2-D Graphics to 3-D Surfaces

    Science.gov (United States)

    2007-01-01

    Fabric Images Inc., specializing in the printing and manufacturing of fabric tension architecture for the retail, museum, and exhibit/tradeshow communities, designed software to translate 2-D graphics for 3-D surfaces prior to print production. Fabric Images' fabric-flattening design process models a 3-D surface based on computer-aided design (CAD) specifications. The surface geometry of the model is used to form a 2-D template, similar to a flattening process developed by NASA's Glenn Research Center. This template or pattern is then applied in the development of a 2-D graphic layout. Benefits of this process include 11.5 percent time savings per project, less material wasted, and the ability to improve upon graphic techniques and offer new design services. Partners include Exhibitgroup/Giltspur (end-user client: TAC Air, a division of Truman Arnold Companies Inc.), Jack Morton Worldwide (end-user client: Nickelodeon), as well as 3D Exhibits Inc., and MG Design Associates Corp.

  18. Machine Translation and Other Translation Technologies.

    Science.gov (United States)

    Melby, Alan

    1996-01-01

    Examines the application of linguistic theory to machine translation and translator tools, discusses the use of machine translation and translator tools in the real world of translation, and addresses the impact of translation technology on conceptions of language and other issues. Findings indicate that the human mind is flexible and linguistic…

  19. 47 CFR 74.789 - Broadcast regulations applicable to digital low power television and television translator stations.

    Science.gov (United States)

    2010-10-01

    ... power television and television translator stations. 74.789 Section 74.789 Telecommunication FEDERAL... AND OTHER PROGRAM DISTRIBUTIONAL SERVICES Low Power TV, TV Translator, and TV Booster Stations § 74.789 Broadcast regulations applicable to digital low power television and television translator...

  20. Improving Utility of GPU in Accelerating Industrial Applications with User-centred Automatic Code Translation

    DEFF Research Database (Denmark)

    Yang, Po; Dong, Feng; Codreanu, Valeriu

    2018-01-01

    design and hard-to-use. Little attention has been paid to the applicability, usability and learnability of these tools for normal users. In this paper, we present an online automated CPU-to-GPU source translation system (GPSME) for inexperienced users to utilize GPU capability in accelerating general...... SME applications. This system designs and implements a directive programming model with a new kernel generation scheme and memory management hierarchy to optimize its performance. A web service interface is designed for inexperienced users to easily and flexibly invoke the automatic resource translator...

  1. MODEL OF MOBILE TRANSLATOR APPLICATION OF ENGLISH TO BAHASA INDONESIA WITH RULE-BASED AND J2ME

    Directory of Open Access Journals (Sweden)

    Dian Puspita Tedjosurya

    2014-05-01

    Full Text Available Along with the development of information technology in recent years, a number of new applications have emerged, especially on mobile phones. Mobile phones are used not only as communication media but also as learning media, for example through translator applications. A translator application can be a tool for learning a language, such as an English to Bahasa Indonesia translator. The purpose of this research is to allow users to translate English to Bahasa Indonesia easily on a mobile phone. The translator application in this research was developed using the Java programming language (specifically J2ME), chosen because it runs on various operating systems and is open source, making it easy to develop and distribute. In this research, data collection was done through literature study, observation, and browsing similar applications. Development of the system used object-oriented analysis and design, described using use case diagrams, class diagrams, sequence diagrams, and activity diagrams. The translation process used a rule-based method. The result of this research is a Java-based translator application that can translate English sentences into Indonesian sentences. The application can be accessed using a mobile phone with an Internet connection. It has a spell-check feature that detects misspelled words and provides alternative words close to the input word. The conclusion of this research is that the application translates everyday conversational sentences quite well, with sentence structure that corresponds to and stays close to the original meaning.
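    The rule-based translation idea described in this record can be sketched in a few lines; the published application is a Java (J2ME) program with a full dictionary and spell checking, so the tiny lexicon and the single noun-adjective reordering rule below are illustrative assumptions only.

```python
# Minimal sketch of a rule-based English-to-Bahasa Indonesia translator.
# The lexicon and the single reordering rule are illustrative only.
LEXICON = {"i": "saya", "read": "membaca", "a": "sebuah",
           "red": "merah", "book": "buku"}
ADJECTIVES = {"red"}

def translate(sentence):
    words = sentence.lower().split()
    # Rule: Indonesian places adjectives after the noun ("red book" -> "buku merah").
    reordered = []
    i = 0
    while i < len(words):
        if i + 1 < len(words) and words[i] in ADJECTIVES:
            reordered.extend([words[i + 1], words[i]])
            i += 2
        else:
            reordered.append(words[i])
            i += 1
    # Dictionary lookup; unknown words are kept as-is (a spell checker could
    # suggest near matches here, as the paper's application does).
    return " ".join(LEXICON.get(w, w) for w in reordered)

print(translate("I read a red book"))   # -> "saya membaca sebuah buku merah"
```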

  2. Agmatine: clinical applications after 100 years in translation.

    Science.gov (United States)

    Piletz, John E; Aricioglu, Feyza; Cheng, Juei-Tang; Fairbanks, Carolyn A; Gilad, Varda H; Haenisch, Britta; Halaris, Angelos; Hong, Samin; Lee, Jong Eun; Li, Jin; Liu, Ping; Molderings, Gerhard J; Rodrigues, Ana Lúcia S; Satriano, Joseph; Seong, Gong Je; Wilcox, George; Wu, Ning; Gilad, Gad M

    2013-09-01

    Agmatine (decarboxylated arginine) has been known as a natural product for over 100 years, but its biosynthesis in humans was left unexplored owing to long-standing controversy. Only recently has the demonstration of agmatine biosynthesis in mammals revived research, indicating its exceptional modulatory action at multiple molecular targets, including neurotransmitter systems, nitric oxide (NO) synthesis and polyamine metabolism, thus providing bases for broad therapeutic applications. This timely review, a concerted effort by 16 independent research groups, draws attention to the substantial preclinical and initial clinical evidence, and highlights challenges and opportunities, for the use of agmatine in treating a spectrum of complex diseases with unmet therapeutic needs, including diabetes mellitus, neurotrauma and neurodegenerative diseases, opioid addiction, mood disorders, cognitive disorders and cancer. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Comments on lightlike translations and applications in relativistic quantum field theory

    International Nuclear Information System (INIS)

    Driessler, W.

    1975-01-01

    In the algebraic framework of quantum field theory we consider one parameter subgroups of lightlike translations. After establishing a few preliminary properties we prove a certain cluster property and then exhibit the close connection between such subgroups and a class of type III factors. A few applications of this connection are also discussed. (orig.) [de

  4. Shielding Flowers Developing under Stress: Translating Theory to Field Application

    Directory of Open Access Journals (Sweden)

    Noam Chayut

    2014-07-01

    Full Text Available Developing reproductive organs within a flower are sensitive to environmental stress. A higher incidence of environmental stress during this stage of a crop plant's developmental cycle will lead to major breaches in food security. Clearly, we need to understand this sensitivity and try to overcome it through agricultural practices and/or the breeding of more tolerant cultivars. Although passion fruit vines initiate flowers all year round, flower primordia abort during warm summers. This restricts the season of fruit production in regions with warm summers. Previously, using controlled chambers, stages in flower development that are sensitive to heat were identified. Based on genetic analysis and physiological experiments in controlled environments, gibberellin activity appeared to be a possible point of horticultural intervention. Here, we aimed to shield flowers of a commercial cultivar from end-of-summer conditions, thus allowing fruit production in new seasons. We conducted experiments over three years in different settings, and our findings consistently show that a single application of an inhibitor of gibberellin biosynthesis to vines in mid-August can advance flowering by ~2–4 weeks, leading to fruit production ~1 month earlier. In this case, knowledge obtained on phenology, environmental constraints and genetic variation allowed us to reach a practical solution.

  5. Introduction to bioinformatics.

    Science.gov (United States)

    Can, Tolga

    2014-01-01

    Bioinformatics is an interdisciplinary field mainly involving molecular biology and genetics, computer science, mathematics, and statistics. Data-intensive, large-scale biological problems are addressed from a computational point of view. The most common problems are modeling biological processes at the molecular level and making inferences from collected data. A bioinformatics solution usually involves the following steps: Collect statistics from biological data. Build a computational model. Solve a computational modeling problem. Test and evaluate a computational algorithm. This chapter gives a brief introduction to bioinformatics by first providing an introduction to biological terminology and then discussing some classical bioinformatics problems organized by the types of data sources. Sequence analysis is the analysis of DNA and protein sequences for clues regarding function and includes subproblems such as identification of homologs, multiple sequence alignment, searching sequence patterns, and evolutionary analyses. Protein structures are three-dimensional data and the associated problems are structure prediction (secondary and tertiary), analysis of protein structures for clues regarding function, and structural alignment. Gene expression data is usually represented as matrices and analysis of microarray data mostly involves statistical analysis, classification, and clustering approaches. Biological networks such as gene regulatory networks, metabolic pathways, and protein-protein interaction networks are usually modeled as graphs and graph-theoretic approaches are used to solve associated problems such as construction and analysis of large-scale networks.
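    As a concrete taste of the sequence-analysis category described above, the following minimal sketch computes GC content and performs a naive motif scan on a DNA string; it is illustrative only and not drawn from the chapter itself.

```python
# Two of the simplest sequence-analysis tasks, as a sketch:
# GC content and naive motif searching in a DNA sequence.
def gc_content(seq):
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def find_motif(seq, motif):
    """Return all 0-based start positions of motif in seq (naive scan)."""
    return [i for i in range(len(seq) - len(motif) + 1)
            if seq[i:i + len(motif)] == motif]

seq = "ATGCGCGATATAGCGC"
print(round(gc_content(seq), 2))      # 0.56
print(find_motif(seq, "GCGC"))        # [2, 12]
```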

  6. Managing Innovation to Maximize Value Along the Discovery-Translation-Application Continuum.

    Science.gov (United States)

    Waldman, S A; Terzic, A

    2017-01-01

    Success in pharmaceutical development led to a record 51 drugs approved in the past year, surpassing every previous year since 1950. Technology innovation enabled identification and exploitation of increasingly precise disease targets ensuring next generation diagnostic and therapeutic products for patient management. The expanding biopharmaceutical portfolio stands, however, in contradistinction to the unsustainable costs that reflect remarkable challenges of clinical development programs. This annual Therapeutic Innovations issue juxtaposes advances in translating molecular breakthroughs into transformative therapies with essential considerations for lowering attrition and improving the cost-effectiveness of the drug-development paradigm. Realizing the discovery-translation-application continuum mandates a congruent approval, adoption, and access triad. © 2016 ASCPT.

  7. The 2015 Bioinformatics Open Source Conference (BOSC 2015).

    Science.gov (United States)

    Harris, Nomi L; Cock, Peter J A; Lapp, Hilmar; Chapman, Brad; Davey, Rob; Fields, Christopher; Hokamp, Karsten; Munoz-Torres, Monica

    2016-02-01

    The Bioinformatics Open Source Conference (BOSC) is organized by the Open Bioinformatics Foundation (OBF), a nonprofit group dedicated to promoting the practice and philosophy of open source software development and open science within the biological research community. Since its inception in 2000, BOSC has provided bioinformatics developers with a forum for communicating the results of their latest efforts to the wider research community. BOSC offers a focused environment for developers and users to interact and share ideas about standards; software development practices; practical techniques for solving bioinformatics problems; and approaches that promote open science and sharing of data, results, and software. BOSC is run as a two-day special interest group (SIG) before the annual Intelligent Systems in Molecular Biology (ISMB) conference. BOSC 2015 took place in Dublin, Ireland, and was attended by over 125 people, about half of whom were first-time attendees. Session topics included "Data Science;" "Standards and Interoperability;" "Open Science and Reproducibility;" "Translational Bioinformatics;" "Visualization;" and "Bioinformatics Open Source Project Updates". In addition to two keynote talks and dozens of shorter talks chosen from submitted abstracts, BOSC 2015 included a panel, titled "Open Source, Open Door: Increasing Diversity in the Bioinformatics Open Source Community," that provided an opportunity for open discussion about ways to increase the diversity of participants in BOSC in particular, and in open source bioinformatics in general. The complete program of BOSC 2015 is available online at http://www.open-bio.org/wiki/BOSC_2015_Schedule.

  8. Conical Refraction of Elastic Waves by Anisotropic Metamaterials and Application for Parallel Translation of Elastic Waves.

    Science.gov (United States)

    Ahn, Young Kwan; Lee, Hyung Jin; Kim, Yoon Young

    2017-08-30

    Conical refraction, which is quite well-known in electromagnetic waves, has not been explored well in elastic waves due to the lack of proper natural elastic media. Here, we propose and design a unique anisotropic elastic metamaterial slab that realizes conical refraction for horizontally incident longitudinal or transverse waves; the single-mode wave is split into two oblique coupled longitudinal-shear waves. As an interesting application, we carried out an experiment of parallel translation of an incident elastic wave system through the anisotropic metamaterial slab. The parallel translation can be useful for ultrasonic non-destructive testing of a system hidden by obstacles. While the parallel translation resembles light refraction through a parallel plate without angle deviation between entry and exit beams, this wave behavior cannot be achieved without the engineered metamaterial because an elastic wave incident upon a dissimilar medium is always split at different refraction angles into two different modes, longitudinal and shear.

  9. Applications and Methods Utilizing the Simple Semantic Web Architecture and Protocol (SSWAP) for Bioinformatics Resource Discovery and Disparate Data and Service Integration

    Science.gov (United States)

    Scientific data integration and computational service discovery are challenges for the bioinformatic community. This process is made more difficult by the separate and independent construction of biological databases, which makes the exchange of scientific data between information resources difficu...

  10. Advance in structural bioinformatics

    CERN Document Server

    Wei, Dongqing; Zhao, Tangzhen; Dai, Hao

    2014-01-01

    This text examines in detail mathematical and physical modeling, computational methods and systems for obtaining and analyzing biological structures, using pioneering research cases as examples. As such, it emphasizes programming and problem-solving skills. It provides information on structure bioinformatics at various levels, with individual chapters covering introductory to advanced aspects, from fundamental methods and guidelines on acquiring and analyzing genomics and proteomics sequences, the structures of protein, DNA and RNA, to the basics of physical simulations and methods for conform

  11. Crowdsourcing for bioinformatics.

    Science.gov (United States)

    Good, Benjamin M; Su, Andrew I

    2013-08-15

    Bioinformatics is faced with a variety of problems that require human involvement. Tasks like genome annotation, image analysis, knowledge-base population and protein structure determination all benefit from human input. In some cases, people are needed in vast quantities, whereas in others, we need just a few with rare abilities. Crowdsourcing encompasses an emerging collection of approaches for harnessing such distributed human intelligence. Recently, the bioinformatics community has begun to apply crowdsourcing in a variety of contexts, yet few resources are available that describe how these human-powered systems work and how to use them effectively in scientific domains. Here, we provide a framework for understanding and applying several different types of crowdsourcing. The framework considers two broad classes: systems for solving large-volume 'microtasks' and systems for solving high-difficulty 'megatasks'. Within these classes, we discuss system types, including volunteer labor, games with a purpose, microtask markets and open innovation contests. We illustrate each system type with successful examples in bioinformatics and conclude with a guide for matching problems to crowdsourcing solutions that highlights the positives and negatives of different approaches.

  12. Phylogenetic trees in bioinformatics

    Energy Technology Data Exchange (ETDEWEB)

    Burr, Tom L [Los Alamos National Laboratory

    2008-01-01

    Genetic data is often used to infer evolutionary relationships among a collection of viruses, bacteria, animal or plant species, or other operational taxonomic units (OTU). A phylogenetic tree depicts such relationships and provides a visual representation of the estimated branching order of the OTUs. Tree estimation is unique for several reasons, including: the types of data used to represent each OTU; the use of probabilistic nucleotide substitution models; the inference goals involving both tree topology and branch length; and the huge number of possible trees for a given sample of a very modest number of OTUs, which implies that finding the best tree(s) to describe the genetic data for each OTU is computationally demanding. Bioinformatics is too large a field to review here. We focus on that aspect of bioinformatics that includes study of similarities in genetic data from multiple OTUs. Although research questions are diverse, a common underlying challenge is to estimate the evolutionary history of the OTUs. Therefore, this paper reviews the role of phylogenetic tree estimation in bioinformatics, available methods and software, and identifies areas for additional research and development.
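    A minimal sketch of the distance-based end of tree estimation follows: pairwise p-distances from aligned sequences and a UPGMA-style agglomeration into a nested-tuple topology. This is illustrative only; real analyses use probabilistic substitution models and dedicated software, as the record notes.

```python
# Sketch of the simplest distance-based approach to tree estimation:
# pairwise p-distances followed by UPGMA-style agglomeration.
def p_distance(a, b):
    """Fraction of differing sites between two aligned sequences."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def upgma(otus):
    """otus: dict of name -> aligned sequence. Returns a nested-tuple topology."""
    clusters = {(name,): [seq] for name, seq in otus.items()}
    tree = {(name,): name for name in otus}
    while len(clusters) > 1:
        candidates = [(ci, cj) for ci in clusters for cj in clusters if ci < cj]
        def avg_dist(pair):
            ci, cj = pair
            total = sum(p_distance(a, b) for a in clusters[ci] for b in clusters[cj])
            return total / (len(clusters[ci]) * len(clusters[cj]))
        ci, cj = min(candidates, key=avg_dist)   # join the two closest clusters
        merged = ci + cj
        clusters[merged] = clusters.pop(ci) + clusters.pop(cj)
        tree[merged] = (tree.pop(ci), tree.pop(cj))
    return tree[next(iter(tree))]

seqs = {"A": "ACGTACGT", "B": "ACGTACGA", "C": "TCGAACGA", "D": "TCGAAGGA"}
print(upgma(seqs))   # e.g. (('A', 'B'), ('C', 'D'))
```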

  13. Evaluating an Inquiry-Based Bioinformatics Course Using Q Methodology

    Science.gov (United States)

    Ramlo, Susan E.; McConnell, David; Duan, Zhong-Hui; Moore, Francisco B.

    2008-01-01

    Faculty at a Midwestern metropolitan public university recently developed a course on bioinformatics that emphasized collaboration and inquiry. Bioinformatics, essentially the application of computational tools to biological data, is inherently interdisciplinary. Thus part of the challenge of creating this course was serving the needs and…

  14. Generative Topic Modeling in Image Data Mining and Bioinformatics Studies

    Science.gov (United States)

    Chen, Xin

    2012-01-01

    Probabilistic topic models have been developed for applications in various domains such as text mining, information retrieval and computer vision and bioinformatics domain. In this thesis, we focus on developing novel probabilistic topic models for image mining and bioinformatics studies. Specifically, a probabilistic topic-connection (PTC) model…

  15. Closing the gap between knowledge and clinical application: challenges for genomic translation.

    Science.gov (United States)

    Burke, Wylie; Korngiebel, Diane M

    2015-01-01

    Despite early predictions and rapid progress in research, the introduction of personal genomics into clinical practice has been slow. Several factors contribute to this translational gap between knowledge and clinical application. The evidence available to support genetic test use is often limited, and implementation of new testing programs can be challenging. In addition, the heterogeneity of genomic risk information points to the need for strategies to select and deliver the information most appropriate for particular clinical needs. Accomplishing these tasks also requires recognition that some expectations for personal genomics are unrealistic, notably expectations concerning the clinical utility of genomic risk assessment for common complex diseases. Efforts are needed to improve the body of evidence addressing clinical outcomes for genomics, apply implementation science to personal genomics, and develop realistic goals for genomic risk assessment. In addition, translational research should emphasize the broader benefits of genomic knowledge, including applications of genomic research that provide clinical benefit outside the context of personal genomic risk.

  16. Future Translational Applications From the Contemporary Genomics Era: A Scientific Statement From the American Heart Association

    OpenAIRE

    Fox, Caroline S.; Hall, Jennifer L.; Arnett, Donna K.; Ashley, Euan A.; Delles, Christian; Engler, Mary B.; Freeman, Mason W.; Johnson, Julie A.; Lanfear, David E.; Liggett, Stephen B.; Lusis, Aldons J.; Loscalzo, Joseph; MacRae, Calum A.; Musunuru, Kiran; Newby, L. Kristin

    2015-01-01

    The field of genetics and genomics has advanced considerably with the achievement of recent milestones encompassing the identification of many loci for cardiovascular disease and variable drug responses. Despite this achievement, a gap exists in the understanding and advancement to meaningful translation that directly affects disease prevention and clinical care. The purpose of this scientific statement is to address the gap between genetic discoveries and their practical application to cardi...

  17. The GMOD Drupal Bioinformatic Server Framework

    Science.gov (United States)

    Papanicolaou, Alexie; Heckel, David G.

    2010-01-01

    Motivation: Next-generation sequencing technologies have led to the widespread use of -omic applications. As a result, there is now a pronounced bioinformatic bottleneck. The general model organism database (GMOD) tool kit (http://gmod.org) has produced a number of resources aimed at addressing this issue. It lacks, however, a robust online solution that can deploy heterogeneous data and software within a Web content management system (CMS). Results: We present a bioinformatic framework for the Drupal CMS. It consists of three modules. First, GMOD-DBSF is an application programming interface module for the Drupal CMS that simplifies the programming of bioinformatic Drupal modules. Second, the Drupal Bioinformatic Software Bench (biosoftware_bench) allows for a rapid and secure deployment of bioinformatic software. An innovative graphical user interface (GUI) guides both use and administration of the software, including the secure provision of pre-publication datasets. Third, we present genes4all_experiment, which exemplifies how our work supports the wider research community. Conclusion: Given the infrastructure presented here, the Drupal CMS may become a powerful new tool set for bioinformaticians. The GMOD-DBSF base module is an expandable community resource that decreases development time of Drupal modules for bioinformatics. The biosoftware_bench module can already enhance biologists' ability to mine their own data. The genes4all_experiment module has already been responsible for archiving of more than 150 studies of RNAi from Lepidoptera, which were previously unpublished. Availability and implementation: Implemented in PHP and Perl. Freely available under the GNU Public License 2 or later from http://gmod-dbsf.googlecode.com Contact: alexie@butterflybase.org PMID:20971988

  18. The GMOD Drupal bioinformatic server framework.

    Science.gov (United States)

    Papanicolaou, Alexie; Heckel, David G

    2010-12-15

    Next-generation sequencing technologies have led to the widespread use of -omic applications. As a result, there is now a pronounced bioinformatic bottleneck. The general model organism database (GMOD) tool kit (http://gmod.org) has produced a number of resources aimed at addressing this issue. It lacks, however, a robust online solution that can deploy heterogeneous data and software within a Web content management system (CMS). We present a bioinformatic framework for the Drupal CMS. It consists of three modules. First, GMOD-DBSF is an application programming interface module for the Drupal CMS that simplifies the programming of bioinformatic Drupal modules. Second, the Drupal Bioinformatic Software Bench (biosoftware_bench) allows for a rapid and secure deployment of bioinformatic software. An innovative graphical user interface (GUI) guides both use and administration of the software, including the secure provision of pre-publication datasets. Third, we present genes4all_experiment, which exemplifies how our work supports the wider research community. Given the infrastructure presented here, the Drupal CMS may become a powerful new tool set for bioinformaticians. The GMOD-DBSF base module is an expandable community resource that decreases development time of Drupal modules for bioinformatics. The biosoftware_bench module can already enhance biologists' ability to mine their own data. The genes4all_experiment module has already been responsible for archiving of more than 150 studies of RNAi from Lepidoptera, which were previously unpublished. Implemented in PHP and Perl. Freely available under the GNU Public License 2 or later from http://gmod-dbsf.googlecode.com.

  19. Bioinformatics approaches for identifying new therapeutic bioactive peptides in food

    Directory of Open Access Journals (Sweden)

    Nora Khaldi

    2012-10-01

    Full Text Available ABSTRACT: The traditional methods for mining foods for bioactive peptides are tedious and time-consuming. As in the drug industry, identifying and delivering a commercial health ingredient that reduces disease symptoms can take anywhere between 5 and 10 years. Reducing this time and effort is crucial in order to create new commercially viable products with clear and important health benefits. In the past few years, bioinformatics, the science that brings together fast computational biology and efficient genome mining, has appeared as the long-awaited solution to this problem. By quickly mining food genomes for the characteristics of certain therapeutic food ingredients, researchers can potentially find new ones in a matter of weeks. Yet, surprisingly, very little success has been achieved so far using bioinformatics to mine for food bioactives. The absence of food-specific bioinformatic mining tools, the slow integration of experimental mining and bioinformatics, and the important differences between experimental platforms are some of the reasons for the slow progress of bioinformatics in the field of functional food and, more specifically, in bioactive peptide discovery. In this paper I discuss some methods that could be easily translated, using a rational peptide bioinformatics design, to food bioactive peptide mining. I highlight the need for an integrated food peptide database. I also discuss how to better integrate experimental work with bioinformatics in order to improve the mining of food for bioactive peptides, thereby achieving higher success rates.
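    One basic building block of such in-silico peptide mining is a simulated enzymatic digest. The sketch below performs a tryptic digestion (cleavage after K or R, except before P) and applies a crude length filter; the filter and the example sequence are placeholders, since real pipelines score peptides against curated bioactive-peptide databases. It assumes Python 3.7+ (splitting on a zero-width regex match).

```python
# Simulated tryptic digest followed by a crude, placeholder filter.
import re

def tryptic_digest(protein):
    """Split a protein sequence into tryptic peptides (cleave after K/R, not before P)."""
    return [p for p in re.split(r"(?<=[KR])(?!P)", protein) if p]

def looks_interesting(peptide, min_len=4, max_len=12):
    """Stand-in filter; real pipelines score peptides against bioactive databases."""
    return min_len <= len(peptide) <= max_len

protein = "MKWVTFISLLFLFSSAYSRGVFRRDAHK"   # example sequence only
peptides = tryptic_digest(protein)
print(peptides)
print([p for p in peptides if looks_interesting(p)])
```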

  20. Using Corpus Management Tools in Public Service Translator Training: An Example of Its Application in the Translation of Judgments

    Science.gov (United States)

    Sánchez Ramos, María Del Mar; Vigier Moreno, Francisco J.

    2016-01-01

    As stated by Valero-Garcés (2006, p. 38), the new scenario including public service providers and users who are not fluent in the language used by the former has opened up new ways of linguistic and cultural mediation in current multicultural and multilingual societies. As a consequence, there is an ever increasing need for translators and…

  1. Experimental psychopathology paradigms for alcohol use disorders: Applications for translational research.

    Science.gov (United States)

    Bujarski, Spencer; Ray, Lara A

    2016-11-01

    In spite of high prevalence and disease burden, scientific consensus on the etiology and treatment of Alcohol Use Disorder (AUD) has yet to be reached. The development and utilization of experimental psychopathology paradigms in the human laboratory represents a cornerstone of AUD research. In this review, we describe and critically evaluate the major experimental psychopathology paradigms developed for AUD, with an emphasis on their implications, strengths, weaknesses, and methodological considerations. Specifically we review alcohol administration, self-administration, cue-reactivity, and stress-reactivity paradigms. We also provide an introduction to the application of experimental psychopathology methods to translational research including genetics, neuroimaging, pharmacological and behavioral treatment development, and translational science. Through refining and manipulating key phenotypes of interest, these experimental paradigms have the potential to elucidate AUD etiological factors, improve the efficiency of treatment developments, and refine treatment targets thus advancing precision medicine. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Future translational applications from the contemporary genomics era: a scientific statement from the American Heart Association.

    Science.gov (United States)

    Fox, Caroline S; Hall, Jennifer L; Arnett, Donna K; Ashley, Euan A; Delles, Christian; Engler, Mary B; Freeman, Mason W; Johnson, Julie A; Lanfear, David E; Liggett, Stephen B; Lusis, Aldons J; Loscalzo, Joseph; MacRae, Calum A; Musunuru, Kiran; Newby, L Kristin; O'Donnell, Christopher J; Rich, Stephen S; Terzic, Andre

    2015-05-12

    The field of genetics and genomics has advanced considerably with the achievement of recent milestones encompassing the identification of many loci for cardiovascular disease and variable drug responses. Despite this achievement, a gap exists in the understanding and advancement to meaningful translation that directly affects disease prevention and clinical care. The purpose of this scientific statement is to address the gap between genetic discoveries and their practical application to cardiovascular clinical care. In brief, this scientific statement assesses the current timeline for effective translation of basic discoveries to clinical advances, highlighting past successes. Current discoveries in the area of genetics and genomics are covered next, followed by future expectations, tools, and competencies for achieving the goal of improving clinical care. © 2015 American Heart Association, Inc.

  3. Data Integration Tool: From Permafrost Data Translation Research Tool to A Robust Research Application

    Science.gov (United States)

    Wilcox, H.; Schaefer, K. M.; Jafarov, E. E.; Strawhacker, C.; Pulsifer, P. L.; Thurmes, N.

    2016-12-01

    The United States National Science Foundation-funded PermaData project, led by the National Snow and Ice Data Center (NSIDC) with a team from the Global Terrestrial Network for Permafrost (GTN-P), aimed to improve permafrost data access and discovery. We developed a Data Integration Tool (DIT) to significantly speed up the manual processing needed to translate inconsistent, scattered historical permafrost data into files ready to ingest directly into the GTN-P. We leverage this data to support science research and policy decisions. DIT is a workflow manager that divides data preparation and analysis into a series of steps or operations called widgets. Each widget performs a specific operation, such as read, multiply by a constant, sort, plot, or write data. DIT allows the user to select and order the widgets as desired to meet their specific needs. Originally it was written to capture a scientist's personal, iterative data manipulation and quality control process of visually and programmatically iterating through inconsistent input data, examining it to find problems, adding operations to address the problems, and rerunning until the data could be translated into the GTN-P standard format. Iterative development of this tool led first to a Fortran/Python hybrid and then, with consideration of users, licensing, version control, packaging, and workflow, to a publicly available, robust, usable application. Transitioning to Python allowed the use of open source frameworks for the workflow core and integration with a JavaScript graphical workflow interface. DIT is targeted to automatically handle 90% of the data processing for field scientists, modelers, and non-discipline scientists. It is available as an open source tool on GitHub, packaged for a subset of Mac, Windows, and UNIX systems as a desktop application with a graphical workflow manager. DIT was used to completely translate one dataset (133 sites) that was successfully added to GTN-P, nearly translate three datasets
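    The widget-chaining idea behind a workflow manager of this kind can be sketched as follows; the widget names and data layout are illustrative assumptions and do not reflect DIT's actual API.

```python
# Sketch of the widget-chaining idea behind a workflow manager like DIT.
# Each widget is a small callable that transforms tabular data, and a
# workflow is just an ordered list of widgets; all names here are illustrative.
def constant_rows_widget(rows):
    """Source widget: inject in-memory rows (stands in for a file reader)."""
    return lambda _: [dict(r) for r in rows]

def convert_units_widget(column, factor):
    """Multiply one column by a constant, e.g. to convert units."""
    def run(rows):
        for row in rows:
            row[column] = float(row[column]) * factor
        return rows
    return run

def sort_widget(column):
    """Sort rows by a column."""
    def run(rows):
        return sorted(rows, key=lambda r: r[column])
    return run

def run_workflow(widgets):
    """Feed each widget's output into the next one."""
    data = None
    for widget in widgets:
        data = widget(data)
    return data

workflow = [constant_rows_widget([{"site": "B2", "depth_cm": "150"},
                                  {"site": "A1", "depth_cm": "30"}]),
            convert_units_widget("depth_cm", 0.01),   # cm -> m
            sort_widget("site")]
print(run_workflow(workflow))
```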

  4. Bioinformatics education dissemination with an evolutionary problem solving perspective.

    Science.gov (United States)

    Jungck, John R; Donovan, Samuel S; Weisstein, Anton E; Khiripet, Noppadon; Everse, Stephen J

    2010-11-01

    Bioinformatics is central to biology education in the 21st century. With the generation of terabytes of data per day, the application of computer-based tools to stored and distributed data is fundamentally changing research and its application to problems in medicine, agriculture, conservation and forensics. In light of this 'information revolution,' undergraduate biology curricula must be redesigned to prepare the next generation of informed citizens as well as those who will pursue careers in the life sciences. The BEDROCK initiative (Bioinformatics Education Dissemination: Reaching Out, Connecting and Knitting together) has fostered an international community of bioinformatics educators. The initiative's goals are to: (i) Identify and support faculty who can take leadership roles in bioinformatics education; (ii) Highlight and distribute innovative approaches to incorporating evolutionary bioinformatics data and techniques throughout undergraduate education; (iii) Establish mechanisms for the broad dissemination of bioinformatics resource materials and teaching models; (iv) Emphasize phylogenetic thinking and problem solving; and (v) Develop and publish new software tools to help students develop and test evolutionary hypotheses. Since 2002, BEDROCK has offered more than 50 faculty workshops around the world, published many resources and supported an environment for developing and sharing bioinformatics education approaches. The BEDROCK initiative builds on the established pedagogical philosophy and academic community of the BioQUEST Curriculum Consortium to assemble the diverse intellectual and human resources required to sustain an international reform effort in undergraduate bioinformatics education.

  5. Translating Behavioral Science into Practice: A Framework to Determine Science Quality and Applicability for Police Organizations.

    Science.gov (United States)

    McClure, Kimberley A; McGuire, Katherine L; Chapan, Denis M

    2018-05-07

    Policy on officer-involved shootings is critically reviewed and errors in applying scientific knowledge are identified. Identifying and evaluating the science most relevant to a field-based problem is challenging. Law enforcement administrators with a clear understanding of valid science and its application are in a better position to utilize scientific knowledge for the benefit of their organizations and officers. A framework is proposed for considering the validity of science and its application. Valid science emerges via hypothesis testing, replication, and extension, and is marked by peer review, known error rates, and general acceptance in its field of origin. Valid application of behavioral science requires an understanding of the methodology employed, the measures used, and the participants recruited in order to determine whether the science is ready for application. Fostering a science-practitioner partnership and an organizational culture that embraces quality, empirically based policy and practices improves science-to-practice translation.

  6. Yves Le Grand on matrices in optics with application to vision: Translation and critical analysis

    Directory of Open Access Journals (Sweden)

    William Frith Harris

    2013-12-01

    Full Text Available An appendix to Le Grand's 1945 book, Optique Physiologique: Tome Premier: La Dioptrique de l'Œil et Sa Correction, briefly dealt with the application of matrices in optics. However, the appendix was omitted from the well-known English translation, Physiological Optics, which appeared in 1980. Consequently, the material is all but forgotten. This is unfortunate in view of the importance of the dioptric power matrix and the ray transference, which entered the optometric literature many years later. Motivated by the perception that there has not been enough care in optometry to attribute concepts appropriately, this paper attempts a careful analysis of Le Grand's thinking as reflected in his appendix. A translation into English is provided in the appendix to this paper. The paper opens with a summary of the basics of Gaussian and linear optics sufficient for the interpretation of Le Grand's appendix, which follows. The paper looks more particularly at what Le Grand says in relation to the transference and the dioptric power matrix, though many other issues are also touched on, including the conditions under which distant objects will map to clear images on the retina and, more particularly, to clear images that are undistorted. Detailed annotations of Le Grand's translated appendix are provided. (S Afr Optom 2013 72(4) 145-166)

  7. An Adaptive Hybrid Multiprocessor technique for bioinformatics sequence alignment

    KAUST Repository

    Bonny, Talal; Salama, Khaled N.; Zidan, Mohammed A.

    2012-01-01

    Sequence alignment algorithms such as the Smith-Waterman algorithm are among the most important applications in the development of bioinformatics. Sequence alignment algorithms must process large amounts of data which may take a long time. Here, we
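    For reference, a plain (unaccelerated) version of the Smith-Waterman local-alignment scoring recurrence is sketched below; hardware- or multiprocessor-accelerated implementations such as the one this record describes parallelize the same dynamic-programming matrix, typically along its anti-diagonals. The scoring parameters are arbitrary illustrative choices.

```python
# Plain (unaccelerated) Smith-Waterman local alignment scoring, to make the
# dynamic programming concrete.
def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]     # scoring matrix, first row/col = 0
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman_score("ACACACTA", "AGCACACA"))
```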

  8. Translating research into practice through user-centered design: An application for osteoarthritis healthcare planning.

    Science.gov (United States)

    Carr, Eloise Cj; Babione, Julie N; Marshall, Deborah

    2017-08-01

    To identify the needs and requirements of the end users, to inform the development of a user-interface to translate an existing evidence-based decision support tool into a practical and usable interface for health service planning for osteoarthritis (OA) care. We used a user-centered design (UCD) approach that emphasized the role of the end-users and is well-suited to knowledge translation (KT). The first phase used a needs assessment focus group (n=8) and interviews (n=5) with target users (health care planners) within a provincial health care organization. The second phase used a participatory design approach, with two small group sessions (n=6) to explore workflow, thought processes, and needs of intended users. The needs assessment identified five design recommendations: ensuring the user-interface supports the target user group, allowing for user-directed data explorations, input parameter flexibility, clear presentation, and provision of relevant definitions. The second phase identified workflow insights from a proposed scenario. Graphs, the need for a visual overview of the data, and interactivity were key considerations to aid in meaningful use of the model and knowledge translation. A UCD approach is well suited to identify health care planners' requirements when using a decision support tool to improve health service planning and management of OA. We believe this is one of the first applications to be used in planning for health service delivery. We identified specific design recommendations that will increase user acceptability and uptake of the user-interface and underlying decision support tool in practice. Our approach demonstrated how UCD can be used to enable knowledge translation. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Promoting synergistic research and education in genomics and bioinformatics.

    Science.gov (United States)

    Yang, Jack Y; Yang, Mary Qu; Zhu, Mengxia Michelle; Arabnia, Hamid R; Deng, Youping

    2008-01-01

    Bioinformatics and Genomics are closely related disciplines that hold great promise for the advancement of research and development in complex biomedical systems, as well as public health, drug design, comparative genomics, personalized medicine and so on. Research and development in these two important areas are impacting science and technology. High-throughput sequencing and molecular imaging technologies marked the beginning of a new era for modern translational medicine and personalized healthcare. The impact of having the human sequence and personalized digital images in hand has also created tremendous demand for developing powerful supercomputing, statistical learning and artificial intelligence approaches to handle the massive bioinformatics and personalized healthcare data, which will obviously have a profound effect on how biomedical research will be conducted toward the improvement of human health and prolonging of human life in the future. The International Society of Intelligent Biological Medicine (http://www.isibm.org) and its official journals, the International Journal of Functional Informatics and Personalized Medicine (http://www.inderscience.com/ijfipm) and the International Journal of Computational Biology and Drug Design (http://www.inderscience.com/ijcbdd), in collaboration with the International Conference on Bioinformatics and Computational Biology (Biocomp), touch tomorrow's bioinformatics and personalized medicine through today's efforts in promoting the research, education and awareness of the upcoming integrated inter/multidisciplinary field. The 2007 International Conference on Bioinformatics and Computational Biology (BIOCOMP07) was held in Las Vegas, the United States of America, on June 25-28, 2007. The conference attracted over 400 papers, covering broad research areas in genomics, biomedicine and bioinformatics. Biocomp 2007 provided a common platform for the cross-fertilization of ideas, and to help shape knowledge and

  10. Bioinformatics and moonlighting proteins

    Directory of Open Access Journals (Sweden)

    Sergio eHernández

    2015-06-01

    Full Text Available Multitasking or moonlighting is the capability of some proteins to execute two or more biochemical functions. Usually, moonlighting proteins are revealed experimentally by serendipity. For this reason, it would be helpful if bioinformatics could predict this multifunctionality, especially given the large number of sequences from genome projects. In the present work, we analyse and describe several approaches that use sequences, structures, interactomics and current bioinformatics algorithms and programs to try to overcome this problem. Among these approaches are: (a) remote homology searches using Psi-Blast; (b) detection of functional motifs and domains; (c) analysis of data from protein-protein interaction (PPI) databases; (d) matching the query protein sequence to 3D databases (i.e., algorithms such as PISITE); and (e) mutation correlation analysis between amino acids by algorithms such as MISTIC. Programs designed to identify functional motifs/domains detect mainly the canonical function but usually fail to detect the moonlighting one, Pfam and ProDom being the best methods. Remote homology search by Psi-Blast combined with data from interactomics (PPI) databases has the best performance. Structural information and mutation correlation analysis can help us to map the functional sites. Mutation correlation analysis can only be used in very specific situations (it requires the existence of multiply aligned family protein sequences) but can suggest how the evolutionary process of second function acquisition took place. The multitasking protein database MultitaskProtDB (http://wallace.uab.es/multitask/), previously published by our group, has been used as a benchmark for all of the analyses.

  11. Interdisciplinary Introductory Course in Bioinformatics

    Science.gov (United States)

    Kortsarts, Yana; Morris, Robert W.; Utell, Janine M.

    2010-01-01

    Bioinformatics is a relatively new interdisciplinary field that integrates computer science, mathematics, biology, and information technology to manage, analyze, and understand biological, biochemical and biophysical information. We present our experience in teaching an interdisciplinary course, Introduction to Bioinformatics, which was developed…

  12. Virtual Bioinformatics Distance Learning Suite

    Science.gov (United States)

    Tolvanen, Martti; Vihinen, Mauno

    2004-01-01

    Distance learning as a computer-aided concept allows students to take courses from anywhere at any time. In bioinformatics, computers are needed to collect, store, process, and analyze massive amounts of biological and biomedical data. We have applied the concept of distance learning in virtual bioinformatics to provide university course material…

  13. Pharmaco-EEG Studies in Animals: A History-Based Introduction to Contemporary Translational Applications.

    Science.gov (United States)

    Drinkenburg, Wilhelmus H I M; Ahnaou, Abdallah; Ruigt, Gé S F

    2015-01-01

    Current research on the effects of pharmacological agents on human neurophysiology finds its roots in animal research, which is also reflected in contemporary animal pharmaco-electroencephalography (p-EEG) applications. The contributions, present value and translational appreciation of animal p-EEG-based applications are strongly interlinked with progress in recording and neuroscience analysis methodology. After the pioneering years in the late 19th and early 20th century, animal p-EEG research flourished in the pharmaceutical industry in the early 1980s. However, around the turn of the millennium the emergence of structurally and functionally revealing imaging techniques and the increasing application of molecular biology caused a temporary reduction in the use of EEG as a window into the brain for the prediction of drug efficacy. Today, animal p-EEG is applied again for its biomarker potential - extensive databases of p-EEG and polysomnography studies in rats and mice hold EEG signatures of a broad collection of psychoactive reference and test compounds. A multitude of functional EEG measures has been investigated, ranging from simple spectral power and sleep-wake parameters to advanced neuronal connectivity and plasticity parameters. Compared to clinical p-EEG studies, where the level of vigilance can be well controlled, changes in sleep-waking behaviour are generally a prominent confounding variable in animal p-EEG studies and need to be dealt with. Contributions of rodent pharmaco-sleep EEG research are outlined to illustrate the value and limitations of such preclinical p-EEG data for pharmacodynamic and chronopharmacological drug profiling. Contemporary applications of p-EEG and pharmaco-sleep EEG recordings in animals provide a common and relatively inexpensive window into the functional brain early in the preclinical and clinical development of psychoactive drugs in comparison to other brain imaging techniques. They provide information on the impact of

  14. Functional proteomics with new mass spectrometric and bioinformatics tools

    International Nuclear Information System (INIS)

    Kesners, P.W.A.

    2001-01-01

    A comprehensive range of mass spectrometric tools is required to investigate today's life science applications, and a strong focus is on addressing the needs of functional proteomics. Application examples are given showing the streamlined process of protein identification from low-femtomole amounts of digests. Sample preparation is achieved with a convertible robot for automated 2D gel picking and MALDI target dispensing, followed by MALDI-TOF or ESI-MS subsequent to enzymatic digestion. A choice of mass spectrometers, including Q-q-TOF with multipass capability, MALDI-MS/MS with unsegmented PSD, ion trap and FT-MS, is discussed for their respective strengths and applications. Bioinformatics software that allows both database work and interpretation of novel peptide mass spectra is reviewed. The automated database searching uses either entire-digest LC-MS(n) ESI ion trap data or MALDI MS and MS/MS spectra. It is shown how post-translational modifications are interactively uncovered and de novo sequencing of peptides is facilitated

  15. An Overview of Bioinformatics Tools and Resources in Allergy.

    Science.gov (United States)

    Fu, Zhiyan; Lin, Jing

    2017-01-01

    The rapidly increasing number of characterized allergens has created huge demands for advanced information storage, retrieval, and analysis. Bioinformatics and machine learning approaches provide useful tools for the study of allergens and epitopes prediction, which greatly complement traditional laboratory techniques. The specific applications mainly include identification of B- and T-cell epitopes, and assessment of allergenicity and cross-reactivity. In order to facilitate the work of clinical and basic researchers who are not familiar with bioinformatics, we review in this chapter the most important databases, bioinformatic tools, and methods with relevance to the study of allergens.

  16. Microbial bioinformatics 2020.

    Science.gov (United States)

    Pallen, Mark J

    2016-09-01

    Microbial bioinformatics in 2020 will remain a vibrant, creative discipline, adding value to the ever-growing flood of new sequence data, while embracing novel technologies and fresh approaches. Databases and search strategies will struggle to cope and manual curation will not be sustainable during the scale-up to the million-microbial-genome era. Microbial taxonomy will have to adapt to a situation in which most microorganisms are discovered and characterised through the analysis of sequences. Genome sequencing will become a routine approach in clinical and research laboratories, with fresh demands for interpretable user-friendly outputs. The "internet of things" will penetrate healthcare systems, so that even a piece of hospital plumbing might have its own IP address that can be integrated with pathogen genome sequences. Microbiome mania will continue, but the tide will turn from molecular barcoding towards metagenomics. Crowd-sourced analyses will collide with cloud computing, but eternal vigilance will be the price of preventing the misinterpretation and overselling of microbial sequence data. Output from hand-held sequencers will be analysed on mobile devices. Open-source training materials will address the need for the development of a skilled labour force. As we boldly go into the third decade of the twenty-first century, microbial sequence space will remain the final frontier! © 2016 The Author. Microbial Biotechnology published by John Wiley & Sons Ltd and Society for Applied Microbiology.

  17. Multiobjective optimization in bioinformatics and computational biology.

    Science.gov (United States)

    Handl, Julia; Kell, Douglas B; Knowles, Joshua

    2007-01-01

    This paper reviews the application of multiobjective optimization in the fields of bioinformatics and computational biology. A survey of existing work, organized by application area, forms the main body of the review, following an introduction to the key concepts in multiobjective optimization. An original contribution of the review is the identification of five distinct "contexts" that give rise to multiple objectives; these are used to explain the reasons behind the use of multiobjective optimization in each application area and also to point the way to potential future uses of the technique.
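
    The sketch below shows the central primitive behind such methods: extracting the Pareto (non-dominated) front from a set of candidate solutions, assuming all objectives are to be minimized. The candidate objective vectors are invented and not tied to any application area in the review.

```python
# Generic sketch of extracting the Pareto (non-dominated) front from candidate
# solutions, assuming all objectives are to be minimized.

def dominates(a, b):
    """a dominates b if a is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the subset of objective vectors not dominated by any other."""
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other is not s)]

if __name__ == '__main__':
    # e.g. (alignment error, model complexity) pairs for hypothetical candidates
    candidates = [(0.10, 9), (0.12, 5), (0.30, 2), (0.11, 9), (0.05, 12)]
    print(pareto_front(candidates))   # (0.11, 9) is dominated by (0.10, 9)
```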

  18. Bioinformatics in the Netherlands: the value of a nationwide community.

    Science.gov (United States)

    van Gelder, Celia W G; Hooft, Rob W W; van Rijswijk, Merlijn N; van den Berg, Linda; Kok, Ruben G; Reinders, Marcel; Mons, Barend; Heringa, Jaap

    2017-09-15

    This review provides a historical overview of the inception and development of bioinformatics research in the Netherlands. Rooted in theoretical biology by foundational figures such as Paulien Hogeweg (at Utrecht University since the 1970s), the developments leading to organizational structures supporting a relatively large Dutch bioinformatics community will be reviewed. We will show that the most valuable resource that we have built over these years is the close-knit national expert community that is well engaged in basic and translational life science research programmes. The Dutch bioinformatics community is accustomed to facing the ever-changing landscape of data challenges and working towards solutions together. In addition, this community is the stable factor on the road towards sustainability, especially in times where existing funding models are challenged and change rapidly. © The Author 2017. Published by Oxford University Press.

  19. Designing XML schemas for bioinformatics.

    Science.gov (United States)

    Bruhn, Russel Elton; Burton, Philip John

    2003-06-01

    Data interchange between bioinformatics databases will, in the future, most likely take place using extensible markup language (XML). The document structure will be described by an XML Schema rather than a document type definition (DTD). To ensure flexibility, the XML Schema must incorporate aspects of Object-Oriented Modeling. This impinges on the choice of the data model, which, in turn, is based on how biologists organize bioinformatics data. Thus, there is a need for the general bioinformatics community to be aware of the design issues relating to XML Schema. This paper, which is aimed at a general bioinformatics audience, uses examples to describe the differences between a DTD and an XML Schema and indicates how Unified Modeling Language diagrams may be used to incorporate Object-Oriented Modeling in the design of schemas.
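
    To make the DTD-versus-Schema contrast concrete, the hedged sketch below validates a toy sequence record against an inline XML Schema using Python's lxml; unlike a DTD, the schema can constrain data types such as xs:integer. The element names and types are invented for illustration.

```python
# Minimal sketch: validating a toy XML record against an inline XML Schema using
# lxml. The element names and types are invented purely for illustration; an XSD
# (unlike a DTD) lets us constrain data types such as xs:integer.
from lxml import etree

XSD = b"""<?xml version="1.0"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="sequenceRecord">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="accession" type="xs:string"/>
        <xs:element name="length" type="xs:integer"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>"""

DOC = b"""<sequenceRecord>
  <accession>AB123456</accession>
  <length>1042</length>
</sequenceRecord>"""

schema = etree.XMLSchema(etree.fromstring(XSD))
doc = etree.fromstring(DOC)
print("valid:", schema.validate(doc))   # True; change <length> to "long" to see it fail
```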

  20. When process mining meets bioinformatics

    NARCIS (Netherlands)

    Jagadeesh Chandra Bose, R.P.; Aalst, van der W.M.P.; Nurcan, S.

    2011-01-01

    Process mining techniques can be used to extract non-trivial process related knowledge and thus generate interesting insights from event logs. Similarly, bioinformatics aims at increasing the understanding of biological processes through the analysis of information associated with biological
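
    A minimal sketch of one basic process-mining step is given below: counting the directly-follows relation between activities in a toy event log. The case identifiers, activities and timestamps are invented and are not drawn from the chapter.

```python
# Sketch of one basic process-mining step: counting the "directly-follows"
# relation between activities from a toy event log. Case IDs, activities and
# timestamps are invented; real logs would come from an information system.
from collections import Counter, defaultdict

event_log = [  # (case_id, activity, timestamp)
    ("c1", "sample received", 1), ("c1", "DNA extraction", 2), ("c1", "sequencing", 3),
    ("c2", "sample received", 1), ("c2", "sequencing", 2),
]

traces = defaultdict(list)
for case, activity, ts in sorted(event_log, key=lambda e: (e[0], e[2])):
    traces[case].append(activity)

directly_follows = Counter(
    (a, b) for trace in traces.values() for a, b in zip(trace, trace[1:])
)
for (a, b), n in directly_follows.items():
    print(f"{a} -> {b}: {n}")
```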

  1. Automated Translation of Safety Critical Application Software Specifications into PLC Ladder Logic

    Science.gov (United States)

    Leucht, Kurt W.; Semmel, Glenn S.

    2008-01-01

    The numerous benefits of automatic application code generation are widely accepted within the software engineering community. A few of these benefits include raising the abstraction level of application programming, shorter product development time, lower maintenance costs, and increased code quality and consistency. Surprisingly, code generation concepts have not yet found wide acceptance and use in the field of programmable logic controller (PLC) software development. Software engineers at the NASA Kennedy Space Center (KSC) recognized the need for PLC code generation while developing their new ground checkout and launch processing system. They developed a process and a prototype software tool that automatically translates a high-level representation or specification of safety critical application software into ladder logic that executes on a PLC. This process and tool are expected to increase the reliability of the PLC code over that which is written manually, and may even lower life-cycle costs and shorten the development schedule of the new control system at KSC. This paper examines the problem domain and discusses the process and software tool that were prototyped by the KSC software engineers.
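
    Purely as an illustration of the idea (and not a description of the KSC tool), the sketch below translates a tiny, invented Boolean interlock specification into textual ladder-logic-style rungs; the spec format and the rung rendering are assumptions of this sketch.

```python
# Toy illustration (not the KSC tool) of translating a tiny high-level interlock
# specification into textual ladder-logic-style rungs. The spec format and the
# rung rendering are invented for this sketch.

spec = [
    # (output coil, list of input contacts that must all be TRUE)
    ("OPEN_VENT_VALVE", ["PRESSURE_HIGH", "SAFETY_ARMED"]),
    ("ABORT_SEQUENCE",  ["PRESSURE_HIGH", "VALVE_FAULT"]),
]

def to_rung(coil, contacts):
    """Render one rung: series (ANDed) normally-open contacts driving a coil."""
    contact_txt = "--".join(f"[ {c} ]" for c in contacts)
    return f"|--{contact_txt}--( {coil} )--|"

for coil, contacts in spec:
    print(to_rung(coil, contacts))
```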

  2. Modeling workflow to design machine translation applications for public health practice.

    Science.gov (United States)

    Turner, Anne M; Brownstein, Megumu K; Cole, Kate; Karasz, Hilary; Kirchhoff, Katrin

    2015-02-01

    Provide a detailed understanding of the information workflow processes related to translating health promotion materials for limited English proficiency individuals in order to inform the design of context-driven machine translation (MT) tools for public health (PH). We applied a cognitive work analysis framework to investigate the translation information workflow processes of two large health departments in Washington State. Researchers conducted interviews, performed a task analysis, and validated results with PH professionals to model translation workflow and identify functional requirements for a translation system for PH. The study resulted in a detailed description of work related to translation of PH materials, an information workflow diagram, and a description of attitudes towards MT technology. We identified a number of themes that hold design implications for incorporating MT in PH translation practice. A PH translation tool prototype was designed based on these findings. This study underscores the importance of understanding the work context and information workflow for which systems will be designed. Based on themes and translation information workflow processes, we identified key design guidelines for incorporating MT into PH translation work. Primary amongst these is that MT should be followed by human review for translations to be of high quality and for the technology to be adopted into practice. The time and costs of creating multilingual health promotion materials are barriers to translation. PH personnel were interested in MT's potential to improve access to low-cost translated PH materials, but expressed concerns about ensuring quality. We outline design considerations and a potential machine translation tool to best fit MT systems into PH practice. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Microsoft Biology Initiative: .NET Bioinformatics Platform and Tools

    Science.gov (United States)

    Diaz Acosta, B.

    2011-01-01

    The Microsoft Biology Initiative (MBI) is an effort in Microsoft Research to bring new technology and tools to the area of bioinformatics and biology. This initiative is comprised of two primary components, the Microsoft Biology Foundation (MBF) and the Microsoft Biology Tools (MBT). MBF is a language-neutral bioinformatics toolkit built as an extension to the Microsoft .NET Framework—initially aimed at the area of Genomics research. Currently, it implements a range of parsers for common bioinformatics file formats; a range of algorithms for manipulating DNA, RNA, and protein sequences; and a set of connectors to biological web services such as NCBI BLAST. MBF is available under an open source license, and executables, source code, demo applications, documentation and training materials are freely downloadable from http://research.microsoft.com/bio. MBT is a collection of tools that enable biology and bioinformatics researchers to be more productive in making scientific discoveries.

  4. Bioinformatics on the Cloud Computing Platform Azure

    Science.gov (United States)

    Shanahan, Hugh P.; Owen, Anne M.; Harrison, Andrew P.

    2014-01-01

    We discuss the applicability of the Microsoft cloud computing platform, Azure, for bioinformatics. We focus on the usability of the resource rather than its performance. We provide an example of how R can be used on Azure to analyse a large amount of microarray expression data deposited at the public database ArrayExpress. We provide a walk through to demonstrate explicitly how Azure can be used to perform these analyses in Appendix S1 and we offer a comparison with a local computation. We note that the use of the Platform as a Service (PaaS) offering of Azure can represent a steep learning curve for bioinformatics developers who will usually have a Linux and scripting language background. On the other hand, the presence of an additional set of libraries makes it easier to deploy software in a parallel (scalable) fashion and explicitly manage such a production run with only a few hundred lines of code, most of which can be incorporated from a template. We propose that this environment is best suited for running stable bioinformatics software by users not involved with its development. PMID:25050811

  5. Translational Application of Microfluidics and Bioprinting for Stem Cell-Based Cartilage Repair

    Directory of Open Access Journals (Sweden)

    Silvia Lopa

    2018-01-01

    Full Text Available Cartilage defects can impair the most elementary daily activities and, if not properly treated, can lead to the complete loss of articular function. The limitations of standard treatments for cartilage repair have triggered the development of stem cell-based therapies. In this scenario, the development of efficient cell differentiation protocols and the design of proper biomaterial-based supports to deliver cells to the injury site need to be addressed through basic and applied research to fully exploit the potential of stem cells. Here, we discuss the use of microfluidics and bioprinting approaches for the translation of stem cell-based therapy for cartilage repair in clinics. In particular, we will focus on the optimization of hydrogel-based materials to mimic the articular cartilage triggered by their use as bioinks in 3D bioprinting applications, on the screening of biochemical and biophysical factors through microfluidic devices to enhance stem cell chondrogenesis, and on the use of microfluidic technology to generate implantable constructs with a complex geometry. Finally, we will describe some new bioprinting applications that pave the way to the clinical use of stem cell-based therapies, such as scaffold-free bioprinting and the development of a 3D handheld device for the in situ repair of cartilage defects.

  6. Translational Application of Microfluidics and Bioprinting for Stem Cell-Based Cartilage Repair

    Science.gov (United States)

    Mondadori, Carlotta; Mainardi, Valerio Luca; Talò, Giuseppe; Candrian, Christian; Święszkowski, Wojciech

    2018-01-01

    Cartilage defects can impair the most elementary daily activities and, if not properly treated, can lead to the complete loss of articular function. The limitations of standard treatments for cartilage repair have triggered the development of stem cell-based therapies. In this scenario, the development of efficient cell differentiation protocols and the design of proper biomaterial-based supports to deliver cells to the injury site need to be addressed through basic and applied research to fully exploit the potential of stem cells. Here, we discuss the use of microfluidics and bioprinting approaches for the translation of stem cell-based therapy for cartilage repair in clinics. In particular, we will focus on the optimization of hydrogel-based materials to mimic the articular cartilage triggered by their use as bioinks in 3D bioprinting applications, on the screening of biochemical and biophysical factors through microfluidic devices to enhance stem cell chondrogenesis, and on the use of microfluidic technology to generate implantable constructs with a complex geometry. Finally, we will describe some new bioprinting applications that pave the way to the clinical use of stem cell-based therapies, such as scaffold-free bioprinting and the development of a 3D handheld device for the in situ repair of cartilage defects. PMID:29535776

  7. Taking Bioinformatics to Systems Medicine.

    Science.gov (United States)

    van Kampen, Antoine H C; Moerland, Perry D

    2016-01-01

    Systems medicine promotes a range of approaches and strategies to study human health and disease at a systems level with the aim of improving the overall well-being of (healthy) individuals, and preventing, diagnosing, or curing disease. In this chapter we discuss how bioinformatics critically contributes to systems medicine. First, we explain the role of bioinformatics in the management and analysis of data. In particular we show the importance of publicly available biological and clinical repositories to support systems medicine studies. Second, we discuss how the integration and analysis of multiple types of omics data through integrative bioinformatics may facilitate the determination of more predictive and robust disease signatures, lead to a better understanding of (patho)physiological molecular mechanisms, and facilitate personalized medicine. Third, we focus on network analysis and discuss how gene networks can be constructed from omics data and how these networks can be decomposed into smaller modules. We discuss how the resulting modules can be used to generate experimentally testable hypotheses, provide insight into disease mechanisms, and lead to predictive models. Throughout, we provide several examples demonstrating how bioinformatics contributes to systems medicine and discuss future challenges in bioinformatics that need to be addressed to enable the advancement of systems medicine.
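
    As a hedged sketch of the network step described above, the code below thresholds pairwise gene correlations into a co-expression graph and reports connected components as crude modules. The expression matrix is synthetic and the cutoff is arbitrary; real pipelines use more principled module-detection methods.

```python
# Sketch of the network-analysis step described above: threshold pairwise gene
# correlations into a co-expression graph and report connected components as
# crude "modules". Expression data are random and the 0.8 cutoff is arbitrary.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
expr = rng.normal(size=(20, 30))          # 20 genes x 30 samples (synthetic)
genes = [f"gene{i}" for i in range(expr.shape[0])]

corr = np.corrcoef(expr)                  # gene-by-gene Pearson correlation
graph = nx.Graph()
graph.add_nodes_from(genes)
for i in range(len(genes)):
    for j in range(i + 1, len(genes)):
        if abs(corr[i, j]) > 0.8:
            graph.add_edge(genes[i], genes[j])

modules = [sorted(c) for c in nx.connected_components(graph)]
print(f"{graph.number_of_edges()} edges, {len(modules)} modules")
```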

  8. Generalized Centroid Estimators in Bioinformatics

    Science.gov (United States)

    Hamada, Michiaki; Kiryu, Hisanori; Iwasaki, Wataru; Asai, Kiyoshi

    2011-01-01

    In a number of estimation problems in bioinformatics, accuracy measures of the target problem are usually given, and it is important to design estimators that are suitable for those accuracy measures. However, there is often a discrepancy between an employed estimator and a given accuracy measure of the problem. In this study, we introduce a general class of efficient estimators for estimation problems on high-dimensional binary spaces, which represent many fundamental problems in bioinformatics. Theoretical analysis reveals that the proposed estimators generally fit with commonly used accuracy measures (e.g. sensitivity, PPV, MCC and F-score), can be computed efficiently in many cases, and cover a wide range of problems in bioinformatics from the viewpoint of the principle of maximum expected accuracy (MEA). It is also shown that some important algorithms in bioinformatics can be interpreted in a unified manner. Not only does the concept presented in this paper give a useful framework for designing MEA-based estimators, but it is also highly extendable and sheds new light on many problems in bioinformatics. PMID:21365017
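
    In generic notation (not necessarily the paper's exact formulation), a maximum expected accuracy estimator over a high-dimensional binary space chooses the prediction that maximizes the posterior-expected value of a gain function G, such as one of the accuracy measures listed above:

```latex
% Generic MEA/centroid-type estimator over a binary space (notation illustrative).
\hat{y}
  = \operatorname*{arg\,max}_{y \in \{0,1\}^n}
    \mathbb{E}_{\theta \sim p(\theta \mid D)}\!\left[ G(\theta, y) \right]
  = \operatorname*{arg\,max}_{y \in \{0,1\}^n}
    \sum_{\theta} p(\theta \mid D)\, G(\theta, y)
```

    Here G is the gain (accuracy) function and p(θ | D) is the posterior over the unknown truth given the data; a simple Hamming-type gain recovers the familiar centroid estimator.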

  9. An innovative approach for testing bioinformatics programs using metamorphic testing

    Directory of Open Access Journals (Sweden)

    Liu Huai

    2009-01-01

    Full Text Available Abstract Background Recent advances in experimental and computational technologies have fueled the development of many sophisticated bioinformatics programs. The correctness of such programs is crucial, as incorrectly computed results may lead to wrong biological conclusions or misguide downstream experimentation. Common software testing procedures involve executing the target program with a set of test inputs and then verifying the correctness of the test outputs. However, due to the complexity of many bioinformatics programs, it is often difficult to verify the correctness of the test outputs. Therefore our ability to perform systematic software testing is greatly hindered. Results We propose to use a novel software testing technique, metamorphic testing (MT), to test a range of bioinformatics programs. Instead of requiring a mechanism to verify whether an individual test output is correct, the MT technique verifies whether a pair of test outputs conform to a set of domain-specific properties, called metamorphic relations (MRs), thus greatly increasing the number and variety of test cases that can be applied. To demonstrate how MT is used in practice, we applied MT to test two open-source bioinformatics programs, namely GNLab and SeqMap. In particular we show that MT is simple to implement, and is effective in detecting faults in a real-life program and some artificially fault-seeded programs. Further, we discuss how MT can be applied to test programs from various domains of bioinformatics. Conclusion This paper describes the application of a simple, effective and automated technique to systematically test a range of bioinformatics programs. We show how MT can be implemented in practice through two real-life case studies. Since many bioinformatics programs, particularly those for large scale simulation and data analysis, are hard to test systematically, their developers may benefit from using MT as part of the testing strategy. Therefore our work
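
    A concrete metamorphic relation, not taken from the paper's GNLab/SeqMap case studies, is sketched below for a GC-content routine: the GC content of a concatenation must equal the length-weighted average of its parts, so pairs of outputs can be checked without knowing any single correct output.

```python
# Concrete metamorphic-relation example (not one of the paper's case studies):
# for a GC-content routine, GC(s1 + s2) must equal the length-weighted average
# of GC(s1) and GC(s2). The relation is checked without knowing any "correct"
# individual output, which is the point of metamorphic testing.
import random

def gc_content(seq):
    """Program under test: fraction of G/C bases in a DNA sequence."""
    return sum(base in "GC" for base in seq) / len(seq)

def check_concatenation_mr(s1, s2, tol=1e-9):
    lhs = gc_content(s1 + s2) * (len(s1) + len(s2))
    rhs = gc_content(s1) * len(s1) + gc_content(s2) * len(s2)
    return abs(lhs - rhs) < tol

random.seed(1)
for _ in range(1000):
    a = "".join(random.choice("ACGT") for _ in range(random.randint(1, 50)))
    b = "".join(random.choice("ACGT") for _ in range(random.randint(1, 50)))
    assert check_concatenation_mr(a, b), (a, b)
print("metamorphic relation held on 1000 random input pairs")
```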

  10. Bioinformatics Training Network (BTN): a community resource for bioinformatics trainers

    DEFF Research Database (Denmark)

    Schneider, Maria V.; Walter, Peter; Blatter, Marie-Claude

    2012-01-01

    and clearly tagged in relation to target audiences, learning objectives, etc. Ideally, they would also be peer reviewed, and easily and efficiently accessible for downloading. Here, we present the Bioinformatics Training Network (BTN), a new enterprise that has been initiated to address these needs and review...

  11. Optimum design of 6-DOF parallel manipulator with translational/rotational workspaces for haptic device application

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Jung Won; Hwang, Yoon Kwon [Gyeongsang National University, Jinju (Korea, Republic of); Ryu, Je Ha [Gwangju Institute of Science and Technology, Gwangju (Korea, Republic of)

    2010-05-15

    This paper proposes an optimum design method that satisfies the desired orientation workspace at the boundary of the translation workspace while maximizing the mechanism isotropy for parallel manipulators. A simple genetic algorithm is used to obtain the optimal linkage parameters of a six-degree-of-freedom parallel manipulator that can be used as a haptic device. The objective function is composed of a desired spherical shape translation workspace and a desired orientation workspace located on the boundaries of the desired translation workspace, along with a global conditioning index based on a homogeneous Jacobian matrix. The objective function was optimized to satisfy the desired orientation workspace at the boundary positions as translated from a neutral position of the increased entropy mechanism. An optimization result with desired translation and orientation workspaces for a haptic device was obtained to show the effectiveness of the suggested scheme, and the kinematic performances of the proposed model were compared with those of a preexisting base model
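
    The sketch below isolates one ingredient of such an objective function: a global conditioning index computed by averaging the isotropy measure 1/cond(J) over sampled poses. The Jacobians are random stand-ins for the manipulator's actual homogeneous Jacobians, so only the arithmetic is illustrated.

```python
# Sketch of the "conditioning index" ingredient of the objective: average the
# isotropy measure 1/cond(J) over sampled workspace poses. The Jacobians below
# are random stand-ins for the manipulator's actual (homogeneous) Jacobians.
import numpy as np

rng = np.random.default_rng(7)

def isotropy(jacobian):
    """Inverse 2-norm condition number: 1 = perfectly isotropic, 0 = singular."""
    return 1.0 / np.linalg.cond(jacobian)

sampled_jacobians = [rng.normal(size=(6, 6)) for _ in range(500)]   # stand-ins
global_conditioning_index = np.mean([isotropy(J) for J in sampled_jacobians])
print(f"GCI over sampled poses: {global_conditioning_index:.3f}")
```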

  12. Optimum design of 6-DOF parallel manipulator with translational/rotational workspaces for haptic device application

    International Nuclear Information System (INIS)

    Yoon, Jung Won; Hwang, Yoon Kwon; Ryu, Je Ha

    2010-01-01

    This paper proposes an optimum design method that satisfies the desired orientation workspace at the boundary of the translation workspace while maximizing the mechanism isotropy for parallel manipulators. A simple genetic algorithm is used to obtain the optimal linkage parameters of a six-degree-of-freedom parallel manipulator that can be used as a haptic device. The objective function is composed of a desired spherical shape translation workspace and a desired orientation workspace located on the boundaries of the desired translation workspace, along with a global conditioning index based on a homogeneous Jacobian matrix. The objective function was optimized to satisfy the desired orientation workspace at the boundary positions as translated from a neutral position of the increased entropy mechanism. An optimization result with desired translation and orientation workspaces for a haptic device was obtained to show the effectiveness of the suggested scheme, and the kinematic performances of the proposed model were compared with those of a preexisting base model

  13. Metabolomics and Type 2 Diabetes: Translating Basic Research into Clinical Application.

    Science.gov (United States)

    Klein, Matthias S; Shearer, Jane

    2016-01-01

    Type 2 diabetes (T2D) and its comorbidities have reached epidemic proportions, with more than half a billion cases expected by 2030. Metabolomics is a fairly new approach for studying metabolic changes connected to disease development and progression and for finding predictive biomarkers to enable early interventions, which are most effective against T2D and its comorbidities. In metabolomics, the abundance of a comprehensive set of small biomolecules (metabolites) is measured, thus giving insight into disease-related metabolic alterations. This review shall give an overview of basic metabolomics methods and will highlight current metabolomics research successes in the prediction and diagnosis of T2D. We summarized key metabolites changing in response to T2D. Despite large variations in predictive biomarkers, many studies have replicated elevated plasma levels of branched-chain amino acids and their derivatives, aromatic amino acids and α-hydroxybutyrate ahead of T2D manifestation. In contrast, glycine levels and lysophosphatidylcholine C18:2 are depressed in both predictive studies and with overt disease. The use of metabolomics for predicting T2D comorbidities is gaining momentum, as are our approaches for translating basic metabolomics research into clinical applications. As a result, metabolomics has the potential to enable informed decision-making in the realm of personalized medicine.

  14. Metabolomics and Type 2 Diabetes: Translating Basic Research into Clinical Application

    Directory of Open Access Journals (Sweden)

    Matthias S. Klein

    2016-01-01

    Full Text Available Type 2 diabetes (T2D) and its comorbidities have reached epidemic proportions, with more than half a billion cases expected by 2030. Metabolomics is a fairly new approach for studying metabolic changes connected to disease development and progression and for finding predictive biomarkers to enable early interventions, which are most effective against T2D and its comorbidities. In metabolomics, the abundance of a comprehensive set of small biomolecules (metabolites) is measured, thus giving insight into disease-related metabolic alterations. This review shall give an overview of basic metabolomics methods and will highlight current metabolomics research successes in the prediction and diagnosis of T2D. We summarized key metabolites changing in response to T2D. Despite large variations in predictive biomarkers, many studies have replicated elevated plasma levels of branched-chain amino acids and their derivatives, aromatic amino acids and α-hydroxybutyrate ahead of T2D manifestation. In contrast, glycine levels and lysophosphatidylcholine C18:2 are depressed in both predictive studies and with overt disease. The use of metabolomics for predicting T2D comorbidities is gaining momentum, as are our approaches for translating basic metabolomics research into clinical applications. As a result, metabolomics has the potential to enable informed decision-making in the realm of personalized medicine.

  15. Translating Knowledge: The role of Shared Learning in Bridging the Science-Application Divide

    Science.gov (United States)

    Moench, M.

    2014-12-01

    As the organizers of this session state: "Understanding and managing our future relation with the Earth requires research and knowledge spanning diverse fields, and integrated, societally-relevant science that is geared toward solutions." In most cases, however, integration is weak and scientific outputs do not match decision maker requirements. As a result, while scientific results may be highly relevant to society that relevance is operationally far from clear. This paper explores the use of shared learning processes to bridge the gap between the evolving body of scientific information on climate change and its relevance for resilience planning in cities across Asia. Examples related to understanding uncertainty, the evolution of scientific knowledge from different sources, and data extraction and presentation are given using experiences generated over five years of work as part of the Rockefeller Foundation supported Asian Cities Climate Change Resilience Network and other programs. Results suggest that processes supporting effective translation of knowledge between different sources and different applications are essential for the identification of solutions that respond to the dynamics and uncertainties inherent in global change processes.

  16. Novel Bioinformatics-Based Approach for Proteomic Biomarkers Prediction of Calpain-2 & Caspase-3 Protease Fragmentation: Application to βII-Spectrin Protein

    Science.gov (United States)

    El-Assaad, Atlal; Dawy, Zaher; Nemer, Georges; Kobeissy, Firas

    2017-01-01

    The crucial biological role of proteases has become visible with the development of the degradomics discipline, which determines the protease/substrate pairs whose cleavage results in breakdown products (BDPs) that can be utilized as putative biomarkers with different biological-clinical significance. In the field of cancer biology, matrix metalloproteinases (MMPs) have been shown to produce MMP-generated protein BDPs that are indicative of malignant growth in cancer, while in the field of neural injury, the calpain-2 and caspase-3 proteases generate BDP fragments that are indicative of different neural cell death mechanisms in different injury scenarios. Advanced proteomic techniques have shown remarkable progress in identifying these BDPs experimentally. In this work, we present a bioinformatics-based prediction method that identifies protease-associated BDPs with high precision and efficiency. The method utilizes state-of-the-art sequence matching and alignment algorithms. It starts by locating consensus sequence occurrences and their variants in any set of protein substrates, generating all fragments resulting from cleavage. The method requires O(mn) space and O(Nmn) time, where N, m, and n are the number of protein sequences, the length of the consensus sequence, and the length per protein sequence, respectively. Finally, the proposed methodology is validated against the βII-spectrin protein, a validated brain injury biomarker.
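
    A toy version of the core idea is sketched below: locate occurrences of a cleavage motif in protein substrates and emit the resulting breakdown-product fragments. The pattern uses a simplified caspase-3-like tetrapeptide (DEVD) and a made-up substrate; the published method handles consensus variants, alignment and the complexity guarantees far more carefully.

```python
# Toy version of the core idea: locate occurrences of a protease cleavage motif
# in protein substrates and emit the resulting breakdown-product (BDP) fragments.
# The regex is a simplified caspase-3-like motif used only for illustration.
import re

CONSENSUS = re.compile(r"DEVD")   # simplified cleavage motif (cut after the match)

def cleavage_fragments(protein):
    """Return BDP fragments produced by cutting after each consensus match."""
    cut_sites = [m.end() for m in CONSENSUS.finditer(protein)]
    bounds = [0] + cut_sites + [len(protein)]
    return [protein[i:j] for i, j in zip(bounds, bounds[1:]) if i != j]

if __name__ == '__main__':
    substrate = "MAGDEVDKLQTRDEVDSPECTRIN"   # made-up substrate sequence
    for frag in cleavage_fragments(substrate):
        print(frag)
```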

  17. Translation Theory 'Translated'

    DEFF Research Database (Denmark)

    Wæraas, Arild; Nielsen, Jeppe

    2016-01-01

    Translation theory has proved to be a versatile analytical lens used by scholars working from different traditions. On the basis of a systematic literature review, this study adds to our understanding of the 'translations' of translation theory by identifying the distinguishing features of the most common theoretical approaches to translation within the organization and management discipline: actor-network theory, knowledge-based theory, and Scandinavian institutionalism. Although each of these approaches already has borne much fruit in research, the literature is diverse and somewhat fragmented, but also overlapping. We discuss the ways in which the three versions of translation theory may be combined and enrich each other so as to inform future research, thereby offering a more complete understanding of translation in and across organizational settings.

  18. Peer Mentoring for Bioinformatics presentation

    OpenAIRE

    Budd, Aidan

    2014-01-01

    A handout used in a HUB (Heidelberg Unseminars in Bioinformatics) meeting focused on career development for bioinformaticians. It describes an activity used to help introduce the idea of peer mentoring, potentially acting as an opportunity to create peer-mentoring groups.

  19. Reproducible Bioinformatics Research for Biologists

    Science.gov (United States)

    This book chapter describes the current Big Data problem in Bioinformatics and the resulting issues with performing reproducible computational research. The core of the chapter provides guidelines and summaries of current tools/techniques that a noncomputational researcher would need to learn to pe...

  20. Taking Bioinformatics to Systems Medicine

    NARCIS (Netherlands)

    van Kampen, Antoine H. C.; Moerland, Perry D.

    2016-01-01

    Systems medicine promotes a range of approaches and strategies to study human health and disease at a systems level with the aim of improving the overall well-being of (healthy) individuals, and preventing, diagnosing, or curing disease. In this chapter we discuss how bioinformatics critically

  1. Bioinformatics and the Undergraduate Curriculum

    Science.gov (United States)

    Maloney, Mark; Parker, Jeffrey; LeBlanc, Mark; Woodard, Craig T.; Glackin, Mary; Hanrahan, Michael

    2010-01-01

    Recent advances involving high-throughput techniques for data generation and analysis have made familiarity with basic bioinformatics concepts and programs a necessity in the biological sciences. Undergraduate students increasingly need training in methods related to finding and retrieving information stored in vast databases. The rapid rise of…

  2. Bioinformatics of genomic association mapping

    NARCIS (Netherlands)

    Vaez Barzani, Ahmad

    2015-01-01

    In this thesis we present an overview of bioinformatics-based approaches for genomic association mapping, with emphasis on human quantitative traits and their contribution to complex diseases. We aim to provide a comprehensive walk-through of the classic steps of genomic association mapping

  3. The structural bioinformatics library: modeling in biomolecular science and beyond.

    Science.gov (United States)

    Cazals, Frédéric; Dreyfus, Tom

    2017-04-01

    Software in structural bioinformatics has mainly been application driven. To favor practitioners seeking off-the-shelf applications, but also developers seeking advanced building blocks to develop novel applications, we undertook the design of the Structural Bioinformatics Library (SBL, http://sbl.inria.fr), a generic C++/Python cross-platform software library targeting complex problems in structural bioinformatics. Its tenet is based on a modular design offering a rich and versatile framework allowing the development of novel applications requiring well specified complex operations, without compromising robustness and performances. The SBL involves four software components (1-4 thereafter). For end-users, the SBL provides ready to use, state-of-the-art (1) applications to handle molecular models defined by unions of balls, to deal with molecular flexibility, to model macro-molecular assemblies. These applications can also be combined to tackle integrated analysis problems. For developers, the SBL provides a broad C++ toolbox with modular design, involving core (2) algorithms, (3) biophysical models and (4) modules, the latter being especially suited to develop novel applications. The SBL comes with a thorough documentation consisting of user and reference manuals, and a bugzilla platform to handle community feedback. The SBL is available from http://sbl.inria.fr. Frederic.Cazals@inria.fr. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  4. Technological Devices Improving System of Translating Languages: What About their Usefulness on the Applicability in Medicine and Health Sciences?

    Directory of Open Access Journals (Sweden)

    Adilia Maria Pires Sciarra

    2015-12-01

    Full Text Available ABSTRACT INTRODUCTION: In a world in which global communication is becoming ever more important, English is increasingly positioned as the pre-eminent international language; English as a Lingua Franca refers to the use of English as a medium of communication between peoples of different languages. It is important to highlight the positive advances in health communication provided by technology. OBJECTIVE: To present an overview of some Web-based technological devices for translating languages, and to point out some advantages and disadvantages, especially of using Google Translate in Medicine and Health Sciences. METHODS: A bibliographical survey was performed to provide an overview of the usefulness of online translators for written and spoken languages. RESULTS: Although this question requires further study, some advantages and disadvantages of using online translating devices could be identified. CONCLUSION: Considering that Medicine and Health Sciences represent human scientific knowledge to be spread worldwide, technological communication devices should be used to overcome language barriers, whether written or spoken, but with some caution depending on the context of their applicability.

  5. Technological Devices Improving System of Translating Languages: What About their Usefulness on the Applicability in Medicine and Health Sciences?

    Science.gov (United States)

    Sciarra, Adilia Maria Pires; Batigália, Fernando; Oliveira, Marcos Aurélio Barboza de

    2015-01-01

    In a world in which global communication is becoming ever more important, English is increasingly positioned as the pre-eminent international language; English as a Lingua Franca refers to the use of English as a medium of communication between peoples of different languages. It is important to highlight the positive advances in health communication provided by technology. The aim is to present an overview of some Web-based technological devices for translating languages, and to point out some advantages and disadvantages, especially of using Google Translate in Medicine and Health Sciences. A bibliographical survey was performed to provide an overview of the usefulness of online translators for written and spoken languages. Although this question requires further study, some advantages and disadvantages of using online translating devices could be identified. Considering that Medicine and Health Sciences represent human scientific knowledge to be spread worldwide, technological communication devices should be used to overcome language barriers, whether written or spoken, but with some caution depending on the context of their applicability.

  6. Bioinformatics for cancer immunotherapy target discovery

    DEFF Research Database (Denmark)

    Olsen, Lars Rønn; Campos, Benito; Barnkob, Mike Stein

    2014-01-01

    therapy target discovery in a bioinformatics analysis pipeline. We describe specialized bioinformatics tools and databases for three main bottlenecks in immunotherapy target discovery: the cataloging of potentially antigenic proteins, the identification of potential HLA binders, and the selection of epitopes...

  7. EURASIP journal on bioinformatics & systems biology

    National Research Council Canada - National Science Library

    2006-01-01

    "The overall aim of "EURASIP Journal on Bioinformatics and Systems Biology" is to publish research results related to signal processing and bioinformatics theories and techniques relevant to a wide...

  8. Cambio: a file format translation and analysis application for the nuclear response emergency community

    International Nuclear Information System (INIS)

    Lasche, George P.

    2009-01-01

    Cambio is an application intended to automatically read and display any spectrum file of any format in the world that the nuclear emergency response community might encounter. Cambio also provides an analysis capability suitable for HPGe spectra when detector response and scattering environment are not well known. Why is Cambio needed: (1) Cambio solves the following problem - With over 50 types of formats from instruments used in the field and new format variations appearing frequently, it is impractical for every responder to have current versions of the manufacturer's software from every instrument used in the field; (2) Cambio converts field spectra to any one of several common formats that are used for analysis, saving valuable time in an emergency situation; (3) Cambio provides basic tools for comparing spectra, calibrating spectra, and isotope identification with analysis suited especially for HPGe spectra; and (4) Cambio has a batch processing capability to automatically translate a large number of archival spectral files of any format to one of several common formats, such as the IAEA SPE or the DHS N42. Currently over 540 analysts and members of the nuclear emergency response community worldwide are on the distribution list for updates to Cambio. Cambio users come from all levels of government, university, and commercial partners around the world that support efforts to counter terrorist nuclear activities. Cambio is Unclassified Unlimited Release (UUR) and distributed by internet downloads with email notifications whenever a new build of Cambio provides for new formats, bug fixes, or new or improved capabilities. Cambio is also provided as a DLL to the Karlsruhe Institute for Transuranium Elements so that Cambio's automatic file-reading capability can be included at the Nucleonica web site.

  9. Preface to Introduction to Structural Bioinformatics

    NARCIS (Netherlands)

    Feenstra, K. Anton; Abeln, Sanne

    2018-01-01

    While many good textbooks are available on Protein Structure, Molecular Simulations, Thermodynamics and Bioinformatics methods in general, there is no good introductory level book for the field of Structural Bioinformatics. This book aims to give an introduction into Structural Bioinformatics, which

  10. Applications of social constructivist learning theories in knowledge translation for healthcare professionals: a scoping review.

    Science.gov (United States)

    Thomas, Aliki; Menon, Anita; Boruff, Jill; Rodriguez, Ana Maria; Ahmed, Sara

    2014-05-06

    Use of theory is essential for advancing the science of knowledge translation (KT) and for increasing the likelihood that KT interventions will be successful in reducing existing research-practice gaps in health care. As a sociological theory of knowledge, social constructivist theory may be useful for informing the design and evaluation of KT interventions. As such, this scoping review explored the extent to which social constructivist theory has been applied in the KT literature for healthcare professionals. Searches were conducted in six databases: Ovid MEDLINE (1948 - May 16, 2011), Ovid EMBASE, CINAHL, ERIC, PsycInfo, and AMED. Inclusion criteria were: publications from all health professions, research methodologies, as well as conceptual and theoretical papers related to KT. To be included in the review, key words such as constructivism, social constructivism, or social constructivist theories had to be included within the title or abstract. Papers that discussed the use of social constructivist theories in the context of undergraduate learning in academic settings were excluded from the review. An analytical framework of quantitative (numerical) and thematic analysis was used to examine and combine study findings. Of the 514 articles screened, 35 papers published between 1992 and 2011 were deemed eligible and included in the review. This review indicated that use of social constructivist theory in the KT literature was limited and haphazard. The lack of justification for the use of theory continues to represent a shortcoming of the papers reviewed. Potential applications and relevance of social constructivist theory in KT in general and in the specific studies were not made explicit in most papers. For the acquisition, expression and application of knowledge in practice, there was emphasis on how the social constructivist theory supports clinicians in expressing this knowledge in their professional interactions. This scoping review was the first to examine

  11. Applications of social constructivist learning theories in knowledge translation for healthcare professionals: a scoping review

    Science.gov (United States)

    2014-01-01

    Background Use of theory is essential for advancing the science of knowledge translation (KT) and for increasing the likelihood that KT interventions will be successful in reducing existing research-practice gaps in health care. As a sociological theory of knowledge, social constructivist theory may be useful for informing the design and evaluation of KT interventions. As such, this scoping review explored the extent to which social constructivist theory has been applied in the KT literature for healthcare professionals. Methods Searches were conducted in six databases: Ovid MEDLINE (1948 – May 16, 2011), Ovid EMBASE, CINAHL, ERIC, PsycInfo, and AMED. Inclusion criteria were: publications from all health professions, research methodologies, as well as conceptual and theoretical papers related to KT. To be included in the review, key words such as constructivism, social constructivism, or social constructivist theories had to be included within the title or abstract. Papers that discussed the use of social constructivist theories in the context of undergraduate learning in academic settings were excluded from the review. An analytical framework of quantitative (numerical) and thematic analysis was used to examine and combine study findings. Results Of the 514 articles screened, 35 papers published between 1992 and 2011 were deemed eligible and included in the review. This review indicated that use of social constructivist theory in the KT literature was limited and haphazard. The lack of justification for the use of theory continues to represent a shortcoming of the papers reviewed. Potential applications and relevance of social constructivist theory in KT in general and in the specific studies were not made explicit in most papers. For the acquisition, expression and application of knowledge in practice, there was emphasis on how the social constructivist theory supports clinicians in expressing this knowledge in their professional interactions. Conclusions This

  12. Applications and methods utilizing the Simple Semantic Web Architecture and Protocol (SSWAP) for bioinformatics resource discovery and disparate data and service integration

    Directory of Open Access Journals (Sweden)

    Nelson Rex T

    2010-06-01

    Full Text Available Abstract Background Scientific data integration and computational service discovery are challenges for the bioinformatic community. This process is made more difficult by the separate and independent construction of biological databases, which makes the exchange of data between information resources difficult and labor intensive. A recently described semantic web protocol, the Simple Semantic Web Architecture and Protocol (SSWAP; pronounced "swap"), offers the ability to describe data and services in a semantically meaningful way. We report how three major information resources (Gramene, SoyBase and the Legume Information System [LIS]) used SSWAP to semantically describe selected data and web services. Methods We selected high-priority Quantitative Trait Locus (QTL), genomic mapping, trait, phenotypic, and sequence data and associated services such as BLAST for publication, data retrieval, and service invocation via semantic web services. Data and services were mapped to concepts and categories as implemented in legacy and de novo community ontologies. We used SSWAP to express these offerings in Web Ontology Language (OWL), Resource Description Framework (RDF) and eXtensible Markup Language (XML) documents, which are appropriate for their semantic discovery and retrieval. We implemented SSWAP services to respond to web queries and return data. These services are registered with the SSWAP Discovery Server and are available for semantic discovery at http://sswap.info. Results A total of ten services delivering QTL information from Gramene were created. From SoyBase, we created six services delivering information about soybean QTLs, and seven services delivering genetic locus information. For LIS we constructed three services, two of which allow the retrieval of DNA and RNA FASTA sequences with the third service providing nucleic acid sequence comparison capability (BLAST). Conclusions The need for semantic integration technologies has preceded
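
    As a generic illustration (not SSWAP's actual ontology or vocabulary), the sketch below uses rdflib to describe a hypothetical QTL-lookup service as RDF triples, the kind of machine-readable description that a semantic discovery server can index. The namespace and property names are made up.

```python
# Generic illustration (not SSWAP's actual ontology) of describing a QTL-lookup
# web service as RDF triples with rdflib. Namespace and property names are
# invented; only the idea of a machine-readable service description is shown.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/services#")

g = Graph()
g.bind("ex", EX)
service = URIRef("http://example.org/services/qtl-lookup")
g.add((service, RDF.type, EX.WebService))
g.add((service, RDFS.label, Literal("QTL lookup service")))
g.add((service, EX.consumes, EX.TraitName))
g.add((service, EX.produces, EX.QTLRecord))

print(g.serialize(format="turtle"))
```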

  13. A Bioinformatics Facility for NASA

    Science.gov (United States)

    Schweighofer, Karl; Pohorille, Andrew

    2006-01-01

    Building on an existing prototype, we have fielded a facility with bioinformatics technologies that will help NASA meet its unique requirements for biological research. This facility consists of a cluster of computers capable of performing computationally intensive tasks, software tools, databases and knowledge management systems. Novel computational technologies for analyzing and integrating new biological data and already existing knowledge have been developed. With continued development and support, the facility will fulfill NASA's strategic bioinformatics needs in astrobiology and space exploration. As a demonstration of these capabilities, we will present a detailed analysis of how spaceflight factors impact gene expression in the liver and kidney for mice flown aboard shuttle flight STS-108. We have found that many genes involved in signal transduction, cell cycle, and development respond to changes in microgravity, but that most metabolic pathways appear unchanged.

  14. Translation, cross-cultural adaptation and applicability of the Brazilian version of the Frontotemporal Dementia Rating Scale (FTD-FRS)

    Directory of Open Access Journals (Sweden)

    Thais Bento Lima-Silva

    Full Text Available ABSTRACT Background: Staging scales for dementia have been devised for grading Alzheimer's disease (AD) but do not include the specific symptoms of frontotemporal lobar degeneration (FTLD). Objective: To translate and adapt the Frontotemporal Dementia Rating Scale (FTD-FRS) to Brazilian Portuguese. Methods: The cross-cultural adaptation process consisted of the following steps: translation, back-translation (prepared by independent translators), discussion with specialists, and development of a final version after minor adjustments. A pilot application was carried out with 12 patients diagnosed with bvFTD and 11 with AD, matched for disease severity (CDR=1.0). The evaluation protocol included: Addenbrooke's Cognitive Examination-Revised (ACE-R), Mini-Mental State Examination (MMSE), Executive Interview (EXIT-25), Neuropsychiatric Inventory (NPI), Frontotemporal Dementia Rating Scale (FTD-FRS) and Clinical Dementia Rating scale (CDR). Results: The Brazilian version of the FTD-FRS seemed appropriate for use in this country. Preliminary results revealed greater levels of disability in bvFTD than in AD patients (bvFTD: 25% mild, 50% moderate and 25% severe; AD: 36.36% mild, 63.64% moderate). It appears that the CDR underrates disease severity in bvFTD since a relevant proportion of patients rated as having mild dementia (CDR=1.0) in fact had moderate or severe levels of disability according to the FTD-FRS. Conclusion: The Brazilian version of the FTD-FRS seems suitable to aid staging and determining disease progression.

  15. Establishing bioinformatics research in the Asia Pacific

    OpenAIRE

    Ranganathan, Shoba; Tammi, Martti; Gribskov, Michael; Tan, Tin Wee

    2006-01-01

    Abstract In 1998, the Asia Pacific Bioinformatics Network (APBioNet), Asia's oldest bioinformatics organisation was set up to champion the advancement of bioinformatics in the Asia Pacific. By 2002, APBioNet was able to gain sufficient critical mass to initiate the first International Conference on Bioinformatics (InCoB) bringing together scientists working in the field of bioinformatics in the region. This year, the InCoB2006 Conference was organized as the 5th annual conference of the Asia-...

  16. Bioinformatics Methods for Interpreting Toxicogenomics Data: The Role of Text-Mining

    NARCIS (Netherlands)

    Hettne, K.M.; Kleinjans, J.; Stierum, R.H.; Boorsma, A.; Kors, J.A.

    2014-01-01

    This chapter concerns the application of bioinformatics methods to the analysis of toxicogenomics data. The chapter starts with an introduction covering how bioinformatics has been applied in toxicogenomics data analysis, and continues with a description of the foundations of a specific

  17. Computer Programming and Biomolecular Structure Studies: A Step beyond Internet Bioinformatics

    Science.gov (United States)

    Likic, Vladimir A.

    2006-01-01

    This article describes the experience of teaching structural bioinformatics to third year undergraduate students in a subject titled "Biomolecular Structure and Bioinformatics." Students were introduced to computer programming and used this knowledge in a practical application as an alternative to the well established Internet bioinformatics…

  18. Foreign currency-related translation complexities in cross-border healthcare applications.

    Science.gov (United States)

    Kumar, Anand; Rodrigues, Jean M

    2009-01-01

    International cross-border private hospital chains need to apply the standards for foreign currency translation in order to consolidate the balance sheet and income statements. This not only exposes such chains to exchange rate fluctuations in different ways, but also creates added requirements for enterprise-level IT systems especially when they produce parameters which are used to measure the financial and operational performance of the foreign subsidiary or the parent hospital. Such systems would need to come to terms with the complexities involved in such currency-related translations in order to provide the correct data for performance benchmarking.
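
    As a simplified numeric illustration of one common translation approach alluded to above (income-statement items at the period-average rate, balance-sheet items at the closing rate), the sketch below uses invented figures and rates; real consolidation follows the applicable accounting standard in full detail.

```python
# Simplified numeric illustration of one common translation approach (income
# statement at the period-average rate, balance sheet at the closing rate).
# Figures, currencies and rates are invented.

subsidiary = {            # amounts in the subsidiary's local currency
    "revenue": 1_000_000,
    "expenses": 800_000,
    "total_assets": 2_500_000,
    "total_liabilities": 1_400_000,
}
average_rate = 0.92       # local currency -> reporting currency, period average
closing_rate = 0.95       # rate at the balance-sheet date

income_reporting = (subsidiary["revenue"] - subsidiary["expenses"]) * average_rate
net_assets_reporting = (subsidiary["total_assets"]
                        - subsidiary["total_liabilities"]) * closing_rate

print(f"translated net income : {income_reporting:,.0f}")
print(f"translated net assets : {net_assets_reporting:,.0f}")
```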

  19. Naturally selecting solutions: the use of genetic algorithms in bioinformatics.

    Science.gov (United States)

    Manning, Timmy; Sleator, Roy D; Walsh, Paul

    2013-01-01

    For decades, computer scientists have looked to nature for biologically inspired solutions to computational problems, ranging from robotic control to scheduling optimization. Paradoxically, as we move deeper into the post-genomics era, the reverse is occurring, as biologists and bioinformaticians look to computational techniques to solve a variety of biological problems. One of the most common biologically inspired techniques is the genetic algorithm (GA), which takes the Darwinian concept of natural selection as the driving force behind systems for solving real-world problems, including those in the bioinformatics domain. Herein, we provide an overview of genetic algorithms and survey some of the most recent applications of this approach to bioinformatics based problems.
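
    A minimal genetic-algorithm sketch is given below: tournament selection, one-point crossover and point mutation evolving random DNA strings toward a target motif. It is purely illustrative of the GA mechanics the review surveys; the target string and parameters are arbitrary choices.

```python
# Minimal genetic-algorithm sketch (tournament selection, one-point crossover,
# point mutation) evolving random DNA strings toward a target motif. Purely
# illustrative of GA mechanics; target and parameters are arbitrary.
import random

TARGET = "ATGCGTACGTTAGC"
ALPHABET = "ACGT"
random.seed(42)

def fitness(ind):
    return sum(a == b for a, b in zip(ind, TARGET))

def mutate(ind, rate=0.02):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c for c in ind)

def crossover(a, b):
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def tournament(population, k=3):
    return max(random.sample(population, k), key=fitness)

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(100)]
for generation in range(200):
    best = max(population, key=fitness)
    if best == TARGET:
        break
    population = [mutate(crossover(tournament(population), tournament(population)))
                  for _ in range(100)]
print(f"generation {generation}: best individual {best}")
```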

  20. Bioinformatic and Biometric Methods in Plant Morphology

    Directory of Open Access Journals (Sweden)

    Surangi W. Punyasena

    2014-08-01

    Full Text Available Recent advances in microscopy, imaging, and data analyses have permitted both the greater application of quantitative methods and the collection of large data sets that can be used to investigate plant morphology. This special issue, the first for Applications in Plant Sciences, presents a collection of papers highlighting recent methods in the quantitative study of plant form. These emerging biometric and bioinformatic approaches to plant sciences are critical for better understanding how morphology relates to ecology, physiology, genotype, and evolutionary and phylogenetic history. From microscopic pollen grains and charcoal particles, to macroscopic leaves and whole root systems, the methods presented include automated classification and identification, geometric morphometrics, and skeleton networks, as well as tests of the limits of human assessment. All demonstrate a clear need for these computational and morphometric approaches in order to increase the consistency, objectivity, and throughput of plant morphological studies.
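
    One of the geometric-morphometrics building blocks mentioned above is Procrustes superimposition of landmark configurations; the sketch below aligns two invented 2-D landmark sets with SciPy and reports the residual shape disparity.

```python
# One geometric-morphometrics building block mentioned above: Procrustes
# superimposition of two 2-D landmark configurations (translation, scaling and
# rotation removed) using SciPy. The leaf-outline landmarks are invented.
import numpy as np
from scipy.spatial import procrustes

leaf_a = np.array([[0, 0], [2, 0], [2, 1], [1, 2], [0, 1]], dtype=float)
# A rotated, scaled and shifted copy of the same shape:
leaf_b = 1.7 * leaf_a @ np.array([[0, -1], [1, 0]]) + np.array([5.0, 3.0])

aligned_a, aligned_b, disparity = procrustes(leaf_a, leaf_b)
print(f"Procrustes disparity: {disparity:.6f}")   # ~0: shapes differ only by a similarity transform
```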

  1. CROSSWORK for Glycans: Glycan Identification Through Mass Spectrometry and Bioinformatics

    DEFF Research Database (Denmark)

    Rasmussen, Morten; Thaysen-Andersen, Morten; Højrup, Peter

    We have developed "GLYCANthrope" - CROSSWORKS for glycans: a bioinformatics tool, which assists in identifying N-linked glycosylated peptides as well as their glycan moieties from MS2 data of enzymatically digested glycoproteins. The program runs either as a stand-alone application or as a plug...

  2. BOWS (bioinformatics open web services) to centralize bioinformatics tools in web services.

    Science.gov (United States)

    Velloso, Henrique; Vialle, Ricardo A; Ortega, J Miguel

    2015-06-02

    Bioinformaticians face a range of difficulties in getting locally installed tools running and producing results; they would greatly benefit from a system that could centralize most of the tools, using an easy interface for input and output. Web services, due to their universal nature and widely known interface, constitute a very good option to achieve this goal. Bioinformatics open web services (BOWS) is a system based on generic web services produced to allow programmatic access to applications running on high-performance computing (HPC) clusters. BOWS mediates access to registered tools by providing front-end and back-end web services. Programmers can install applications on HPC clusters in any programming language and use the back-end service to check for new jobs and their parameters, and then to send the results to BOWS. Programs running on ordinary computers consume the BOWS front-end service to submit new processes and read results. BOWS compiles Java clients, which encapsulate the front-end web service requests, and automatically creates a web page that lists the registered applications and clients. Applications registered with BOWS can be accessed from virtually any programming language through web services, or using the standard Java clients. The back-end can run on HPC clusters, allowing bioinformaticians to remotely run high-processing-demand applications directly from their machines.
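
    The sketch below illustrates the submit-then-poll pattern that such a front-end service exposes; the URLs, payload fields and status values are hypothetical and do not describe BOWS's real API.

```python
# Hypothetical sketch of the submit-then-poll pattern a front-end service like
# BOWS exposes. The URLs, payload fields and status values are invented; they
# do NOT describe BOWS's real API.
import time
import requests

BASE = "http://example.org/bows"          # hypothetical front-end endpoint

job = requests.post(f"{BASE}/jobs", json={
    "tool": "blast",                       # a registered back-end application
    "params": {"query": ">q1\nACGTACGT", "db": "nr"},
}).json()

while True:
    status = requests.get(f"{BASE}/jobs/{job['id']}").json()
    if status["state"] in ("finished", "failed"):
        break
    time.sleep(10)                         # poll until the HPC back end reports back

print(status["state"], status.get("result_url"))
```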

  3. Lost in Translation: The Gap in Scientific Advancements and Clinical Application

    Directory of Open Access Journals (Sweden)

    Joseph Fernandez-Moure

    2016-06-01

    Full Text Available The evolution of medicine and medical technology hinges on the successful translation of basic science research from the bench to clinical implementation at the bedside. Born out of the increasing need to facilitate the transfer of scientific knowledge to patients, translational research has emerged. Significant leaps in improving global health, such as antibiotics, vaccinations, and cancer therapies, have all seen successes under this paradigm, yet today it has become increasingly difficult to realize this ideal scenario. As hospital revenue demands increase and financial support declines, clinicians' protected research time has been limited. Researchers, likewise, have been forced to abandon time- and resource-consuming translational research to focus on publication-generating work to maintain funding and professional advancement. While scientific innovation has surged and new fields of science have emerged, the realization of transformational scientific findings in device development and the materials sciences has lagged significantly behind. Herein, we describe: how the current scientific paradigm struggles in the new health-care landscape; the obstacles met by translational researchers; and solutions, both public and private, to overcoming those obstacles. We must rethink the old dogma of academia and reinvent the traditional pathways of research in order to truly impact the health-care arena and ultimately those who matter most: the patient.

  4. CONNJUR spectrum translator: an open source application for reformatting NMR spectral data.

    Science.gov (United States)

    Nowling, Ronald J; Vyas, Jay; Weatherby, Gerard; Fenwick, Matthew W; Ellis, Heidi J C; Gryk, Michael R

    2011-05-01

    NMR spectroscopists are hindered by the lack of standardization for spectral data among the file formats for various NMR data processing tools. This lack of standardization is cumbersome as researchers must perform their own file conversion in order to switch between processing tools and also restricts the combination of tools employed if no conversion option is available. The CONNJUR Spectrum Translator introduces a new, extensible architecture for spectrum translation and provides two key algorithmic improvements. The first is translation of NMR spectral data (time and frequency domain) to a single in-memory data model to allow addition of new file formats with two converter modules, a reader and a writer, instead of writing a separate converter for each existing format. Secondly, the use of layout descriptors allows a single fid data translation engine to be used for all formats. For the end user, sophisticated metadata readers allow conversion of the majority of files with minimum user configuration. The open source code is freely available at http://connjur.sourceforge.net for inspection and extension.
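    The reader/writer design around a single in-memory model can be illustrated with a short sketch. The class names, toy formats and data model below are hypothetical stand-ins, not the actual CONNJUR code available at the project site; the point is only the architectural idea that n formats need n readers and n writers rather than a converter for every pair of formats.

```python
from dataclasses import dataclass, field

@dataclass
class Spectrum:
    """Hypothetical in-memory model shared by all readers and writers."""
    points: list = field(default_factory=list)     # spectral intensities
    metadata: dict = field(default_factory=dict)   # axis layout, sweep widths, ...

class PlainTextReader:
    """Toy reader: one intensity per line (stand-in for a real NMR format)."""
    def read(self, path):
        with open(path) as fh:
            return Spectrum(points=[float(line) for line in fh if line.strip()])

class CsvWriter:
    """Toy writer: index,intensity rows (stand-in for another NMR format)."""
    def write(self, spectrum, path):
        with open(path, "w") as fh:
            for i, value in enumerate(spectrum.points):
                fh.write(f"{i},{value}\n")

# Registries: adding a new format means adding one reader and one writer,
# not a separate converter for every existing format.
READERS = {"txt": PlainTextReader()}
WRITERS = {"csv": CsvWriter()}

def convert(src_path, src_fmt, dst_path, dst_fmt):
    """Any-to-any conversion routed through the shared in-memory model."""
    WRITERS[dst_fmt].write(READERS[src_fmt].read(src_path), dst_path)
```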

  5. CONNJUR spectrum translator: an open source application for reformatting NMR spectral data

    Energy Technology Data Exchange (ETDEWEB)

    Nowling, Ronald J.; Vyas, Jay [University of Connecticut Health Center, Department of Molecular, Microbial and Structural Biology (United States); Weatherby, Gerard [Western New England College, Department of Computer Science/Information Technology (United States); Fenwick, Matthew W. [University of Connecticut Health Center, Department of Molecular, Microbial and Structural Biology (United States); Ellis, Heidi J. C. [Western New England College, Department of Computer Science/Information Technology (United States); Gryk, Michael R., E-mail: gryk@uchc.edu [University of Connecticut Health Center, Department of Molecular, Microbial and Structural Biology (United States)

    2011-05-15

    NMR spectroscopists are hindered by the lack of standardization for spectral data among the file formats for various NMR data processing tools. This lack of standardization is cumbersome as researchers must perform their own file conversion in order to switch between processing tools and also restricts the combination of tools employed if no conversion option is available. The CONNJUR Spectrum Translator introduces a new, extensible architecture for spectrum translation and provides two key algorithmic improvements. The first is translation of NMR spectral data (time and frequency domain) to a single in-memory data model to allow addition of new file formats with two converter modules, a reader and a writer, instead of writing a separate converter for each existing format. Secondly, the use of layout descriptors allows a single fid data translation engine to be used for all formats. For the end user, sophisticated metadata readers allow conversion of the majority of files with minimum user configuration. The open source code is freely available at http://connjur.sourceforge.net for inspection and extension.

  6. Biomedical informatics and translational medicine

    Directory of Open Access Journals (Sweden)

    Sarkar Indra

    2010-02-01

    Full Text Available Abstract Biomedical informatics involves a core set of methodologies that can provide a foundation for crossing the "translational barriers" associated with translational medicine. To this end, the fundamental aspects of biomedical informatics (e.g., bioinformatics, imaging informatics, clinical informatics, and public health informatics) may be essential in helping improve the ability to bring basic research findings to the bedside, evaluate the efficacy of interventions across communities, and enable the assessment of the eventual impact of translational medicine innovations on health policies. Here, a brief description is provided for a selection of key biomedical informatics topics (Decision Support, Natural Language Processing, Standards, Information Retrieval, and Electronic Health Records) and their relevance to translational medicine. Based on contributions and advancements in each of these topic areas, the article proposes that biomedical informatics practitioners ("biomedical informaticians") can be essential members of translational medicine teams.

  7. Translational genomics

    Directory of Open Access Journals (Sweden)

    Martin Kussmann

    2014-09-01

    Full Text Available The term "Translational Genomics" reflects both the title and the mission of this new journal. "Translational" has traditionally been understood as "applied research" or "development", different from or even opposed to "basic research". Recent scientific and societal developments have triggered a re-assessment of the connotation that "translational" and "basic" are either/or activities: translational research nowadays aims at feeding the best science into applications and solutions for human society. We therefore argue here that basic science should be challenged and leveraged for its relevance to human health and societal benefits. This more recent approach and attitude are catalyzed by four trends or developments: evidence-based solutions; large-scale, high-dimensional data; consumer/patient empowerment; and systems-level understanding.

  8. Definition, translation and application of user-oriented languages as extensions of PL/1 in the CAD-systems REGENT

    International Nuclear Information System (INIS)

    Enderle, G.

    1975-09-01

    The integrated CAD system REGENT supports the modular development of programs for different application areas, the management of data storage and data transfer, and the easy and safe use of programs by means of user-oriented languages. Programs, data and language for an application area form a REGENT subsystem. The problem language system of REGENT, PLS (Problem Language System), provides facilities for developing a problem-oriented language for every subsystem as an extension of the base language PL/1. In this paper, the translation of problem-oriented languages into PL/1 by a precompiler and the definition of language extensions and data structures for subsystems are described. The development and application of the language for a fluid-dynamics subsystem is shown. (orig.) [de
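    As a toy illustration of the precompiler idea (a problem-oriented statement is recognized in the source text and rewritten into a call in the base language before compilation), the sketch below rewrites an invented PIPE statement into an ordinary function call. The statement syntax and the target call are hypothetical; REGENT itself extends PL/1, not Python.

```python
import re

# Hypothetical problem-oriented statement, e.g.:  PIPE diameter=0.3 length=12.5;
PIPE_STMT = re.compile(r"PIPE\s+diameter=(\S+)\s+length=(\S+);")

def precompile(source: str) -> str:
    """Rewrite problem-oriented statements into base-language calls."""
    return PIPE_STMT.sub(r"define_pipe(diameter=\1, length=\2)", source)

program = "x = 1\nPIPE diameter=0.3 length=12.5;\n"
print(precompile(program))
# x = 1
# define_pipe(diameter=0.3, length=12.5)
```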

  9. Understanding Translation

    DEFF Research Database (Denmark)

    Schjoldager, Anne Gram; Gottlieb, Henrik; Klitgård, Ida

    Understanding Translation is designed as a textbook for courses on the theory and practice of translation in general and of particular types of translation - such as interpreting, screen translation and literary translation. The aim of the book is to help you gain an in-depth understanding...... of the phenomenon of translation and to provide you with a conceptual framework for the analysis of various aspects of professional translation. Intended readers are students of translation and languages, but the book will also be relevant for others who are interested in the theory and practice of translation...... - translators, language teachers, translation users and literary, TV and film critics, for instance. Discussions focus on translation between Danish and English....

  10. Translation Techniques

    OpenAIRE

    Marcia Pinheiro

    2015-01-01

    In this paper, we discuss three translation techniques: literal, cultural, and artistic. Literal translation is a well-known technique, which means that it is quite easy to find sources on the topic. Cultural and artistic translation may be new terms. Whilst cultural translation focuses on matching contexts, artistic translation focuses on matching reactions. Because literal translation matches only words, it is not hard to find situations in which we should not use this technique.  Because a...

  11. The semantic web in translational medicine: current applications and future directions.

    Science.gov (United States)

    Machado, Catia M; Rebholz-Schuhmann, Dietrich; Freitas, Ana T; Couto, Francisco M

    2015-01-01

    Semantic web technologies offer an approach to data integration and sharing, even for resources developed independently or broadly distributed across the web. This approach is particularly suitable for scientific domains that profit from large amounts of data that reside in the public domain and that have to be exploited in combination. Translational medicine is such a domain, which in addition has to integrate private data from the clinical domain with proprietary data from the pharmaceutical domain. In this survey, we present the results of our analysis of translational medicine solutions that follow a semantic web approach. We assessed these solutions in terms of their target medical use case; the resources covered to achieve their objectives; and their use of existing semantic web resources for the purposes of data sharing, data interoperability and knowledge discovery. Semantic web technologies seem to fulfill their role in facilitating the integration and exploration of data from disparate sources, but it is also clear that simply using them is not enough. It is fundamental to reuse resources, to define mappings between resources, and to share data and knowledge. All these aspects allow the instantiation of translational medicine at the semantic web scale, thus resulting in a network of solutions that can share resources for a faster transfer of new scientific results into clinical practice. The envisioned network of translational medicine solutions is on its way, but it still requires resolving the challenges of sharing protected data and of integrating semantic-driven technologies into clinical practice. © The Author 2013. Published by Oxford University Press.
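    A minimal sketch of the kind of integration the survey describes, using the Python rdflib library: triples from two notional sources are merged into one RDF graph and queried with SPARQL. The namespace and predicates are invented for the example and are not taken from any of the surveyed solutions.

```python
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/tm/")   # hypothetical vocabulary

g = Graph()
# "Clinical" source: a patient diagnosed with a condition.
g.add((EX.patient42, RDF.type, EX.Patient))
g.add((EX.patient42, EX.hasDiagnosis, EX.psoriasis))
# "Pharmaceutical" source: a compound indicated for the same condition.
g.add((EX.compound7, RDF.type, EX.Compound))
g.add((EX.compound7, EX.indicatedFor, EX.psoriasis))

# SPARQL query spanning both sources: which compounds match which patients?
query = """
PREFIX ex: <http://example.org/tm/>
SELECT ?patient ?compound WHERE {
    ?patient ex:hasDiagnosis ?condition .
    ?compound ex:indicatedFor ?condition .
}
"""
for patient, compound in g.query(query):
    print(patient, compound)
```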

  12. The AppComposer Web application for school teachers: A platform for translating and adapting educational web applications

    NARCIS (Netherlands)

    Rodriguez-Gil, Luis; Orduna, Pablo; Bollen, Lars; Govaerts, Sten; Holzer, Adrian; Gillet, Dennis; Lopez-de-Ipina, Diego; Garcia-Zubia, Javier

    2015-01-01

    Developing educational apps that cover a wide range of learning contexts and languages is a challenging task. In this paper, we introduce the AppComposer Web app to address this issue. The AppComposer aims at empowering teachers to easily translate and adapt existing apps that fit their educational

  13. 47 CFR 74.780 - Broadcast regulations applicable to translators, low power, and booster stations.

    Science.gov (United States)

    2010-10-01

    Cross-referenced rules applicable to these stations include: Section 73.3540—Application for voluntary assignment or transfer of control; Section 73.3541—Application for involuntary assignment or transfer of control; Section 73.3542—Application for emergency authorization; Section 73.3561—Staff consideration of applications requiring Commission action; Section 73.3562—...

  14. Establishing bioinformatics research in the Asia Pacific

    Directory of Open Access Journals (Sweden)

    Tammi Martti

    2006-12-01

    Full Text Available Abstract In 1998, the Asia Pacific Bioinformatics Network (APBioNet), Asia's oldest bioinformatics organisation, was set up to champion the advancement of bioinformatics in the Asia Pacific. By 2002, APBioNet was able to gain sufficient critical mass to initiate the first International Conference on Bioinformatics (InCoB), bringing together scientists working in the field of bioinformatics in the region. This year, the InCoB2006 Conference was organized as the 5th annual conference of the Asia-Pacific Bioinformatics Network, on Dec. 18–20, 2006 in New Delhi, India, following a series of successful events in Bangkok (Thailand), Penang (Malaysia), Auckland (New Zealand) and Busan (South Korea). This Introduction provides a brief overview of the peer-reviewed manuscripts accepted for publication in this Supplement. It exemplifies a typical snapshot of the growing research excellence in bioinformatics of the region as we embark on a trajectory of establishing a solid bioinformatics research culture in the Asia Pacific that is able to contribute fully to the global bioinformatics community.

  15. Emerging strengths in Asia Pacific bioinformatics.

    Science.gov (United States)

    Ranganathan, Shoba; Hsu, Wen-Lian; Yang, Ueng-Cheng; Tan, Tin Wee

    2008-12-12

    The 2008 annual conference of the Asia Pacific Bioinformatics Network (APBioNet), Asia's oldest bioinformatics organisation set up in 1998, was organized as the 7th International Conference on Bioinformatics (InCoB), jointly with the Bioinformatics and Systems Biology in Taiwan (BIT 2008) Conference, Oct. 20-23, 2008 at Taipei, Taiwan. Besides bringing together scientists from the field of bioinformatics in this region, InCoB is actively involving researchers from the area of systems biology, to facilitate greater synergy between these two groups. Marking the 10th Anniversary of APBioNet, this InCoB 2008 meeting followed on from a series of successful annual events in Bangkok (Thailand), Penang (Malaysia), Auckland (New Zealand), Busan (South Korea), New Delhi (India) and Hong Kong. Additionally, tutorials and the Workshop on Education in Bioinformatics and Computational Biology (WEBCB) immediately prior to the 20th Federation of Asian and Oceanian Biochemists and Molecular Biologists (FAOBMB) Taipei Conference provided ample opportunity for inducting mainstream biochemists and molecular biologists from the region into a greater level of awareness of the importance of bioinformatics in their craft. In this editorial, we provide a brief overview of the peer-reviewed manuscripts accepted for publication herein, grouped into thematic areas. As the regional research expertise in bioinformatics matures, the papers fall into thematic areas, illustrating the specific contributions made by APBioNet to global bioinformatics efforts.

  16. Combining medical informatics and bioinformatics toward tools for personalized medicine.

    Science.gov (United States)

    Sarachan, B D; Simmons, M K; Subramanian, P; Temkin, J M

    2003-01-01

    Key bioinformatics and medical informatics research areas need to be identified to advance knowledge and understanding of disease risk factors and molecular disease pathology in the 21st century toward new diagnoses, prognoses, and treatments. Three high-impact informatics areas are identified: predictive medicine (to identify significant correlations within clinical data using statistical and artificial intelligence methods), along with pathway informatics and cellular simulations (that combine biological knowledge with advanced informatics to elucidate molecular disease pathology). Initial predictive models have been developed for a pilot study in Huntington's disease. An initial bioinformatics platform has been developed for the reconstruction and analysis of pathways, and work has begun on pathway simulation. A bioinformatics research program has been established at GE Global Research Center as an important technology toward next generation medical diagnostics. We anticipate that 21st century medical research will be a combination of informatics tools with traditional biology wet lab research, and that this will translate to increased use of informatics techniques in the clinic.
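    As a toy illustration of the "predictive medicine" theme (finding correlations in clinical data with statistical and machine-learning methods), the sketch below fits a regularized logistic regression on synthetic clinical-style features. The data are randomly generated; nothing here reflects the Huntington's disease pilot study mentioned in the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "clinical" matrix: 500 patients x 10 features (age, labs, ...).
X = rng.normal(size=(500, 10))
# Outcome depends on two of the features plus noise.
risk = 1.2 * X[:, 0] - 0.8 * X[:, 3] + rng.normal(scale=0.5, size=500)
y = (risk > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
print("coefficients:", model.coef_.round(2))
```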

  17. The secondary metabolite bioinformatics portal

    DEFF Research Database (Denmark)

    Weber, Tilmann; Kim, Hyun Uk

    2016-01-01

    Natural products are among the most important sources of lead molecules for drug discovery. With the development of affordable whole-genome sequencing technologies and other ‘omics tools, the field of natural products research is currently undergoing a shift in paradigms. While, for decades, mainly analytical and chemical methods gave access to this group of compounds, nowadays genomics-based methods offer complementary approaches to find, identify and characterize such molecules. This paradigm shift also resulted in a high demand for computational tools to assist researchers in their daily work. In this context, this review gives a summary of tools and databases that currently are available to mine, identify and characterize natural product biosynthesis pathways and their producers based on ‘omics data. A web portal called Secondary Metabolite Bioinformatics Portal (SMBP at http...

  18. Bioclipse: an open source workbench for chemo- and bioinformatics

    Directory of Open Access Journals (Sweden)

    Wagener Johannes

    2007-02-01

    Full Text Available Abstract Background There is a need for software applications that provide users with a complete and extensible toolkit for chemo- and bioinformatics accessible from a single workbench. Commercial packages are expensive and closed source, hence they do not allow end users to modify algorithms and add custom functionality. Existing open source projects are more focused on providing a framework for integrating existing, separately installed bioinformatics packages, rather than providing user-friendly interfaces. No open source chemoinformatics workbench has previously been published, and no successful attempts have been made to integrate chemo- and bioinformatics into a single framework. Results Bioclipse is an advanced workbench for resources in chemo- and bioinformatics, such as molecules, proteins, sequences, spectra, and scripts. It provides 2D-editing, 3D-visualization, file format conversion, calculation of chemical properties, and much more; all fully integrated into a user-friendly desktop application. Editing supports standard functions such as cut and paste, drag and drop, and undo/redo. Bioclipse is written in Java and based on the Eclipse Rich Client Platform with a state-of-the-art plugin architecture. This gives Bioclipse an advantage over other systems as it can easily be extended with functionality in any desired direction. Conclusion Bioclipse is a powerful workbench for bio- and chemoinformatics as well as an advanced integration platform. The rich functionality, intuitive user interface, and powerful plugin architecture make Bioclipse the most advanced and user-friendly open source workbench for chemo- and bioinformatics. Bioclipse is released under the Eclipse Public License (EPL), an open source license which sets no constraints on external plugin licensing; it is fully open to both open source and commercial plugins. Bioclipse is freely available at http://www.bioclipse.net.

  19. The application of observational data in translational medicine: analyzing tobacco-use behaviors of adolescents

    Directory of Open Access Journals (Sweden)

    Siciliano Valeria

    2012-05-01

    Full Text Available Abstract Background Translational Medicine focuses on "bench to bedside", converting experimental results into clinical use. The "bedside to bench" transition remains challenging, requiring clinicians to define true clinical need for laboratory study. In this study, we show how observational data (an eleven-year data survey program on adolescent smoking behaviours) can identify knowledge gaps and research questions leading directly to clinical implementation and improved health care. We studied gender-specific trends (2000–2010) in Italian students to evaluate the specific impact of various anti-smoking programs, including evaluation of perceptions of access to cigarettes and health risk. Methods The study used ESPAD-Italia® (European School Survey Project on Alcohol and other Drugs), a nationally representative sample of high-school students. The permutation test for joinpoint regression was used to calculate the annual percent change in smoking. Changes in smoking habits by age, perceived availability and risk over an 11-year period were tested using a gender-specific logistic model and a multinomial model. Results Gender-stratified analysis showed (1) decrease of lifetime prevalence, then stabilization (both genders); (2) decrease in last month and occasional use (both genders); (3) reduction of moderate use (females); (4) no significant change in moderate use (males) and in heavy use (both genders). Perceived availability positively associates with prevalence, while perceived risk negatively associates, but they interact with different effects depending on smoking patterns. In addition, government implementation of public policies concerning access to tobacco products in this age group during this period presented a unique background to examine their specific impact on behaviours. Conclusion Large observational databases are a rich resource in support of translational research. From these observations, key clinically relevant issues can be

  20. Biology in 'silico': The Bioinformatics Revolution.

    Science.gov (United States)

    Bloom, Mark

    2001-01-01

    Explains the Human Genome Project (HGP) and efforts to sequence the human genome. Describes the role of bioinformatics in the project and considers it the genetics Swiss Army Knife, with many different uses in forensic science, medicine, agriculture, and environmental sciences. Discusses the use of bioinformatics in the high school…

  1. Using "Arabidopsis" Genetic Sequences to Teach Bioinformatics

    Science.gov (United States)

    Zhang, Xiaorong

    2009-01-01

    This article describes a new approach to teaching bioinformatics using "Arabidopsis" genetic sequences. Several open-ended and inquiry-based laboratory exercises have been designed to help students grasp key concepts and gain practical skills in bioinformatics, using "Arabidopsis" leucine-rich repeat receptor-like kinase (LRR…
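    Exercises of this kind typically begin by retrieving a sequence record and inspecting it programmatically. A minimal sketch with Biopython is shown below; the accession number is a placeholder chosen for illustration, not one prescribed by the article.

```python
from Bio import Entrez, SeqIO

Entrez.email = "student@example.edu"   # NCBI requires a contact address
ACCESSION = "NM_000000"                # placeholder accession for illustration

# Fetch a GenBank record for a gene of interest and parse it.
with Entrez.efetch(db="nucleotide", id=ACCESSION,
                   rettype="gb", retmode="text") as handle:
    record = SeqIO.read(handle, "genbank")

print(record.id, len(record.seq), "bp")
for feature in record.features:
    if feature.type == "CDS":
        print("CDS:", feature.qualifiers.get("product", ["?"])[0])
```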

  2. A Mathematical Optimization Problem in Bioinformatics

    Science.gov (United States)

    Heyer, Laurie J.

    2008-01-01

    This article describes the sequence alignment problem in bioinformatics. Through examples, we formulate sequence alignment as an optimization problem and show how to compute the optimal alignment with dynamic programming. The examples and sample exercises have been used by the author in a specialized course in bioinformatics, but could be adapted…
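    For readers who want to see the dynamic-programming formulation in code, here is a compact global-alignment (Needleman-Wunsch) scorer under simple assumptions: match +1, mismatch -1, linear gap penalty -2. The scoring values are illustrative choices, not ones mandated by the article.

```python
def global_alignment_score(a, b, match=1, mismatch=-1, gap=-2):
    """Needleman-Wunsch: score of the best global alignment of a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    # dp[i][j] = best score aligning a[:i] with b[:j]
    dp = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        dp[i][0] = i * gap
    for j in range(1, cols):
        dp[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            dp[i][j] = max(diag,                 # align a[i-1] with b[j-1]
                           dp[i - 1][j] + gap,   # gap in b
                           dp[i][j - 1] + gap)   # gap in a
    return dp[-1][-1]

print(global_alignment_score("GATTACA", "GCATGCU"))
```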

  3. Online Bioinformatics Tutorials | Office of Cancer Genomics

    Science.gov (United States)

    Bioinformatics is a scientific discipline that applies computer science and information technology to help understand biological processes. The NIH provides a list of free online bioinformatics tutorials, either generated by the NIH Library or other institutes, which includes introductory lectures and "how to" videos on using various tools.

  4. Translational Creativity

    DEFF Research Database (Denmark)

    Nielsen, Sandro

    2010-01-01

    A long-established approach to legal translation focuses on terminological equivalence making translators strictly follow the words of source texts. Recent research suggests that there is room for some creativity allowing translators to deviate from the source texts. However, little attention...... is given to genre conventions in source texts and the ways in which they can best be translated. I propose that translators of statutes with an informative function in expert-to-expert communication may be allowed limited translational creativity when translating specific types of genre convention....... This creativity is a result of translators adopting either a source-language or a target-language oriented strategy and is limited by the pragmatic principle of co-operation. Examples of translation options are provided illustrating the different results in target texts. The use of a target-language oriented...

  5. Rising Strengths Hong Kong SAR in Bioinformatics.

    Science.gov (United States)

    Chakraborty, Chiranjib; George Priya Doss, C; Zhu, Hailong; Agoramoorthy, Govindasamy

    2017-06-01

    Hong Kong's bioinformatics sector is attaining new heights in combination with its economic boom and the predominance of the working-age group in its population. Factors such as a knowledge-based and free-market economy have contributed towards a prominent position on the world map of bioinformatics. In this review, we have considered the educational measures, landmark research activities, the achievements of bioinformatics companies, and the role of the Hong Kong government in the establishment of bioinformatics as a strength. However, several hurdles remain. New government policies will assist computational biologists to overcome these hurdles and further raise the profile of the field. There is a high expectation that bioinformatics in Hong Kong will be a promising area for the next generation.

  6. Bioinformatics clouds for big data manipulation

    Directory of Open Access Journals (Sweden)

    Dai Lin

    2012-11-01

    Full Text Available Abstract As advances in life sciences and information technology bring profound influences on bioinformatics due to its interdisciplinary nature, bioinformatics is experiencing a new leap-forward from in-house computing infrastructure into utility-supplied cloud computing delivered over the Internet, in order to handle the vast quantities of biological data generated by high-throughput experimental technologies. Albeit relatively new, cloud computing promises to address big data storage and analysis issues in the bioinformatics field. Here we review extant cloud-based services in bioinformatics, classify them into Data as a Service (DaaS), Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS), and present our perspectives on the adoption of cloud computing in bioinformatics. Reviewers This article was reviewed by Frank Eisenhaber, Igor Zhulin, and Sandor Pongor.

  7. The 2016 Bioinformatics Open Source Conference (BOSC).

    Science.gov (United States)

    Harris, Nomi L; Cock, Peter J A; Chapman, Brad; Fields, Christopher J; Hokamp, Karsten; Lapp, Hilmar; Muñoz-Torres, Monica; Wiencko, Heather

    2016-01-01

    Message from the ISCB: The Bioinformatics Open Source Conference (BOSC) is a yearly meeting organized by the Open Bioinformatics Foundation (OBF), a non-profit group dedicated to promoting the practice and philosophy of Open Source software development and Open Science within the biological research community. BOSC has been run since 2000 as a two-day Special Interest Group (SIG) before the annual ISMB conference. The 17th annual BOSC ( http://www.open-bio.org/wiki/BOSC_2016) took place in Orlando, Florida in July 2016. As in previous years, the conference was preceded by a two-day collaborative coding event open to the bioinformatics community. The conference brought together nearly 100 bioinformatics researchers, developers and users of open source software to interact and share ideas about standards, bioinformatics software development, and open and reproducible science.

  8. Bioinformatics clouds for big data manipulation

    KAUST Repository

    Dai, Lin

    2012-11-28

    As advances in life sciences and information technology bring profound influences on bioinformatics due to its interdisciplinary nature, bioinformatics is experiencing a new leap-forward from in-house computing infrastructure into utility-supplied cloud computing delivered over the Internet, in order to handle the vast quantities of biological data generated by high-throughput experimental technologies. Albeit relatively new, cloud computing promises to address big data storage and analysis issues in the bioinformatics field. Here we review extant cloud-based services in bioinformatics, classify them into Data as a Service (DaaS), Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS), and present our perspectives on the adoption of cloud computing in bioinformatics. This article was reviewed by Frank Eisenhaber, Igor Zhulin, and Sandor Pongor. 2012 Dai et al.; licensee BioMed Central Ltd.

  9. Bioinformatics clouds for big data manipulation.

    Science.gov (United States)

    Dai, Lin; Gao, Xin; Guo, Yan; Xiao, Jingfa; Zhang, Zhang

    2012-11-28

    As advances in life sciences and information technology bring profound influences on bioinformatics due to its interdisciplinary nature, bioinformatics is experiencing a new leap-forward from in-house computing infrastructure into utility-supplied cloud computing delivered over the Internet, in order to handle the vast quantities of biological data generated by high-throughput experimental technologies. Albeit relatively new, cloud computing promises to address big data storage and analysis issues in the bioinformatics field. Here we review extant cloud-based services in bioinformatics, classify them into Data as a Service (DaaS), Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS), and present our perspectives on the adoption of cloud computing in bioinformatics. This article was reviewed by Frank Eisenhaber, Igor Zhulin, and Sandor Pongor.

  10. TRANSAUTOPHAGY: European network for multidisciplinary research and translation of autophagy knowledge

    Science.gov (United States)

    Casas, Caty; Codogno, Patrice; Pinti, Marcello; Batoko, Henri; Morán, María; Proikas-Cezanne, Tassula; Reggiori, Fulvio; Sirko, Agnieszka; Soengas, María S; Velasco, Guillermo; Lafont, Frank; Lane, Jon; Faure, Mathias; Cossarizza, Andrea

    2016-01-01

    A collaborative consortium, named “TRANSAUTOPHAGY,” has been created among European research groups, comprising more than 150 scientists from 21 countries studying diverse branches of basic and translational autophagy. The consortium was approved in the framework of the Horizon 2020 Program in November 2015 as a COST Action of the European Union (COST means: CO-operation in Science and Technology), and will be sponsored for 4 years. TRANSAUTOPHAGY will form an interdisciplinary platform for basic and translational researchers, enterprises and stakeholders of diverse disciplines (including nanotechnology, bioinformatics, physics, chemistry, biology and various medical disciplines). TRANSAUTOPHAGY will establish 5 different thematic working groups, formulated to cooperate in research projects, share ideas, and results through workshops, meetings and short term exchanges of personnel (among other initiatives). TRANSAUTOPHAGY aims to generate breakthrough multidisciplinary knowledge about autophagy regulation, and to boost translation of this knowledge into biomedical and biotechnological applications. PMID:27046256

  11. TRANSAUTOPHAGY: European network for multidisciplinary research and translation of autophagy knowledge.

    Science.gov (United States)

    Casas, Caty; Codogno, Patrice; Pinti, Marcello; Batoko, Henri; Morán, María; Proikas-Cezanne, Tassula; Reggiori, Fulvio; Sirko, Agnieszka; Soengas, María S; Velasco, Guillermo; Lafont, Frank; Lane, Jon; Faure, Mathias; Cossarizza, Andrea

    2016-01-01

    A collaborative consortium, named "TRANSAUTOPHAGY," has been created among European research groups, comprising more than 150 scientists from 21 countries studying diverse branches of basic and translational autophagy. The consortium was approved in the framework of the Horizon 2020 Program in November 2015 as a COST Action of the European Union (COST means: CO-operation in Science and Technology), and will be sponsored for 4 years. TRANSAUTOPHAGY will form an interdisciplinary platform for basic and translational researchers, enterprises and stakeholders of diverse disciplines (including nanotechnology, bioinformatics, physics, chemistry, biology and various medical disciplines). TRANSAUTOPHAGY will establish 5 different thematic working groups, formulated to cooperate in research projects, share ideas, and results through workshops, meetings and short term exchanges of personnel (among other initiatives). TRANSAUTOPHAGY aims to generate breakthrough multidisciplinary knowledge about autophagy regulation, and to boost translation of this knowledge into biomedical and biotechnological applications.

  12. TRANSLATING SERVICE TECHNICAL PROSE

    African Journals Online (AJOL)

    language. The Application of Technical Service Prose. To form a good idea of the application ... cost lives. In this particular domain, translators must have a sound technical ... These semantic ... another language and often, in doing so, changing its meaning. The words ... He will hand out tasks to each translator and after.

  13. A Survey on Evolutionary Algorithm Based Hybrid Intelligence in Bioinformatics

    Directory of Open Access Journals (Sweden)

    Shan Li

    2014-01-01

    Full Text Available With the rapid advance in genomics, proteomics, metabolomics, and other types of omics technologies during the past decades, a tremendous amount of data related to molecular biology has been produced. It is becoming a big challenge for bioinformaticians to analyze and interpret these data with conventional intelligent techniques, for example, support vector machines. Recently, hybrid intelligent methods, which integrate several standard intelligent approaches, are becoming more and more popular due to their robustness and efficiency. Specifically, the hybrid intelligent approaches based on evolutionary algorithms (EAs) are widely used in various fields due to the efficiency and robustness of EAs. In this review, we give an introduction to the applications of hybrid intelligent methods, in particular those based on evolutionary algorithms, in bioinformatics. In particular, we focus on their applications to three common problems that arise in bioinformatics, that is, feature selection, parameter estimation, and reconstruction of biological networks.
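    As a concrete and deliberately small example of the EA-based hybrid idea applied to feature selection, the sketch below evolves a binary mask over synthetic features, scoring each mask with a cross-validated logistic regression. All data and parameter choices are illustrative; they are not taken from any study in the review.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Synthetic data: only the first 3 of 20 features are informative.
n, p = 300, 20
X = rng.normal(size=(n, p))
y = (X[:, 0] + X[:, 1] - X[:, 2] + rng.normal(scale=0.5, size=n) > 0).astype(int)

def fitness(mask):
    """Cross-validated accuracy of a classifier restricted to the masked features."""
    if not mask.any():
        return 0.0
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

pop = rng.integers(0, 2, size=(20, p)).astype(bool)   # 20 random feature masks
for generation in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]            # keep the best half
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        child = np.where(rng.random(p) < 0.5, a, b)    # uniform crossover
        child ^= rng.random(p) < 0.05                  # bit-flip mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
```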

  14. BioWarehouse: a bioinformatics database warehouse toolkit

    Directory of Open Access Journals (Sweden)

    Stringer-Calvert David WJ

    2006-03-01

    Full Text Available Abstract Background This article addresses the problem of interoperation of heterogeneous bioinformatics databases. Results We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. Conclusion BioWarehouse embodies significant progress on the

  15. BioWarehouse: a bioinformatics database warehouse toolkit.

    Science.gov (United States)

    Lee, Thomas J; Pouliot, Yannick; Wagner, Valerie; Gupta, Priyanka; Stringer-Calvert, David W J; Tenenbaum, Jessica D; Karp, Peter D

    2006-03-23

    This article addresses the problem of interoperation of heterogeneous bioinformatics databases. We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. BioWarehouse embodies significant progress on the database integration problem for bioinformatics.
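    To make the "complex multi-database query" idea concrete, here is a small self-contained sketch: an in-memory SQLite database with two toy tables standing in for warehouse-loaded enzyme and sequence data, and a query counting enzyme activities that lack any sequence. The table and column names, and the toy data, are invented for illustration and do not match the actual BioWarehouse schema or its reported results.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE enzyme_activity (ec_number TEXT PRIMARY KEY, name TEXT);
    CREATE TABLE protein_sequence (id INTEGER PRIMARY KEY, ec_number TEXT);

    INSERT INTO enzyme_activity VALUES ('1.1.1.1', 'alcohol dehydrogenase');
    INSERT INTO enzyme_activity VALUES ('2.7.7.7', 'DNA polymerase');
    INSERT INTO enzyme_activity VALUES ('4.2.1.99', 'hypothetical lyase');

    INSERT INTO protein_sequence VALUES (1, '1.1.1.1');
    INSERT INTO protein_sequence VALUES (2, '2.7.7.7');
""")

# Percentage of enzyme activities (EC numbers) with no sequence in the toy "warehouse".
row = conn.execute("""
    SELECT COUNT(*) * 100.0 / (SELECT COUNT(*) FROM enzyme_activity)
    FROM enzyme_activity e
    LEFT JOIN protein_sequence s ON s.ec_number = e.ec_number
    WHERE s.id IS NULL
""").fetchone()
print(f"{row[0]:.0f}% of enzyme activities have no sequence")
```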

  16. Machine Translation

    Indian Academy of Sciences (India)

    Research MT System Example: The 'Janus' Translating Phone Project. The Janus ... based on laptops, and simultaneous translation of two speakers in a dialogue. For more ... The current focus in MT research is on using machine learning.

  17. Osteochondral Allograft Transplantation in Cartilage Repair: Graft Storage Paradigm, Translational Models, and Clinical Applications

    Science.gov (United States)

    Bugbee, William D.; Pallante-Kichura, Andrea L.; Görtz, Simon; Amiel, David; Sah, Robert

    2016-01-01

    The treatment of articular cartilage injury and disease has become an increasingly relevant part of orthopaedic care. Articular cartilage transplantation, in the form of osteochondral allografting, is one of the most established techniques for restoration of articular cartilage. Our research efforts over the last two decades have supported the transformation of this procedure from experimental “niche” status to a cornerstone of orthopaedic practice. In this Kappa Delta paper, we describe our translational and clinical science contributions to this transformation: (1) to enhance the ability of tissue banks to process and deliver viable tissue to surgeons and patients, (2) to improve the biological understanding of in vivo cartilage and bone remodeling following osteochondral allograft (OCA) transplantation in an animal model system, (3) to define effective surgical techniques and pitfalls, and (4) to identify and clarify clinical indications and outcomes. The combination of coordinated basic and clinical studies is part of our continuing comprehensive academic OCA transplant program. Taken together, the results have led to the current standards for OCA processing and storage prior to implantation and also novel observations and mechanisms of the biological and clinical behavior of OCA transplants in vivo. Thus, OCA transplantation is now a successful and increasingly available treatment for patients with disabling osteoarticular cartilage pathology. PMID:26234194

  18. Kubernetes as an approach for solving bioinformatic problems.

    OpenAIRE

    Markstedt, Olof

    2017-01-01

    The cluster orchestration tool Kubernetes enables easy deployment and reproducibility of life science research by utilizing the advantages of container technology. Container technology allows for easy tool creation and sharing, and a container runs on any Linux system once it has been built. The applicability of Kubernetes as an approach to run bioinformatic workflows was evaluated and resulted in some examples of how Kubernetes and containers could be used within the field of life science and how th...
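    A sketch of the basic pattern, submitting a containerized bioinformatics tool as a Kubernetes Job with the official Python client, assuming a working kubeconfig and cluster access; the job name, image tag and command are placeholders chosen for illustration.

```python
from kubernetes import client, config

config.load_kube_config()   # assumes a configured kubeconfig / cluster access

job = client.V1Job(
    api_version="batch/v1",
    kind="Job",
    metadata=client.V1ObjectMeta(name="fastqc-demo"),   # placeholder name
    spec=client.V1JobSpec(
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[client.V1Container(
                    name="fastqc",
                    image="biocontainers/fastqc:v0.11.9_cv8",   # placeholder image tag
                    command=["fastqc", "--version"],
                )],
            )
        ),
        backoff_limit=1,
    ),
)

client.BatchV1Api().create_namespaced_job(namespace="default", body=job)
print("Job submitted; inspect it with: kubectl get jobs")
```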

  19. Computational biology and bioinformatics in Nigeria.

    Science.gov (United States)

    Fatumo, Segun A; Adoga, Moses P; Ojo, Opeolu O; Oluwagbemi, Olugbenga; Adeoye, Tolulope; Ewejobi, Itunuoluwa; Adebiyi, Marion; Adebiyi, Ezekiel; Bewaji, Clement; Nashiru, Oyekanmi

    2014-04-01

    Over the past few decades, major advances in the field of molecular biology, coupled with advances in genomic technologies, have led to an explosive growth in the biological data generated by the scientific community. The critical need to process and analyze such a deluge of data and turn it into useful knowledge has caused bioinformatics to gain prominence and importance. Bioinformatics is an interdisciplinary research area that applies techniques, methodologies, and tools in computer and information science to solve biological problems. In Nigeria, bioinformatics has recently played a vital role in the advancement of biological sciences. As a developing country, the importance of bioinformatics is rapidly gaining acceptance, and bioinformatics groups comprised of biologists, computer scientists, and computer engineers are being constituted at Nigerian universities and research institutes. In this article, we present an overview of bioinformatics education and research in Nigeria. We also discuss professional societies and academic and research institutions that play central roles in advancing the discipline in Nigeria. Finally, we propose strategies that can bolster bioinformatics education and support from policy makers in Nigeria, with potential positive implications for other developing countries.

  20. Computational biology and bioinformatics in Nigeria.

    Directory of Open Access Journals (Sweden)

    Segun A Fatumo

    2014-04-01

    Full Text Available Over the past few decades, major advances in the field of molecular biology, coupled with advances in genomic technologies, have led to an explosive growth in the biological data generated by the scientific community. The critical need to process and analyze such a deluge of data and turn it into useful knowledge has caused bioinformatics to gain prominence and importance. Bioinformatics is an interdisciplinary research area that applies techniques, methodologies, and tools in computer and information science to solve biological problems. In Nigeria, bioinformatics has recently played a vital role in the advancement of biological sciences. As a developing country, the importance of bioinformatics is rapidly gaining acceptance, and bioinformatics groups comprised of biologists, computer scientists, and computer engineers are being constituted at Nigerian universities and research institutes. In this article, we present an overview of bioinformatics education and research in Nigeria. We also discuss professional societies and academic and research institutions that play central roles in advancing the discipline in Nigeria. Finally, we propose strategies that can bolster bioinformatics education and support from policy makers in Nigeria, with potential positive implications for other developing countries.

  1. Systems Biology and Bioinformatics in Medical Applications

    Science.gov (United States)

    2009-10-01


  2. Bioinformatics process management: information flow via a computational journal

    Directory of Open Access Journals (Sweden)

    Lushington Gerald

    2007-12-01

    Full Text Available Abstract This paper presents the Bioinformatics Computational Journal (BCJ), a framework for conducting and managing computational experiments in bioinformatics and computational biology. These experiments often involve series of computations, data searches, filters, and annotations which can benefit from a structured environment. Systems to manage computational experiments exist, ranging from libraries with standard data models to elaborate schemes to chain together input and output between applications. Yet, although such frameworks are available, their use is not widespread; ad hoc scripts are often required to bind applications together. The BCJ explores another solution to this problem through a computer-based environment suitable for on-site use, which builds on the traditional laboratory notebook paradigm. It provides an intuitive, extensible paradigm designed for expressive composition of applications. Extensive features facilitate sharing data, computational methods, and entire experiments. By focusing on the bioinformatics and computational biology domain, the scope of the computational framework was narrowed, permitting us to implement a capable set of features for this domain. This report discusses the features identified as critical by our system and other projects, along with design issues. We illustrate the use of our implementation of the BCJ on two domain-specific examples.
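    The notebook-style record of a computational step can be captured with a very small data structure. The sketch below is a generic illustration of the idea (inputs, command, outputs and timestamp recorded per step), not the BCJ's actual data model; the field names are invented.

```python
import json
import subprocess
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class JournalEntry:
    """One recorded computational step: what ran, on what, producing what."""
    command: list
    inputs: list
    outputs: list
    started: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    returncode: Optional[int] = None

    def run(self):
        # Execute the step and record its exit status alongside the metadata.
        self.returncode = subprocess.run(self.command).returncode
        return self

entry = JournalEntry(command=["echo", "aligning reads"],
                     inputs=["reads.fq"], outputs=["aln.bam"])
entry.run()
print(json.dumps(asdict(entry), indent=2))   # shareable record of the step
```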

  3. Machine translation

    Energy Technology Data Exchange (ETDEWEB)

    Nagao, M

    1982-04-01

    Each language has its own structure. In translating one language into another one, language attributes and grammatical interpretation must be defined in an unambiguous form. In order to parse a sentence, it is necessary to recognize its structure. A so-called context-free grammar can help in this respect for machine translation and machine-aided translation. Problems to be solved in studying machine translation are taken up in the paper, which discusses subjects for semantics and for syntactic analysis and translation software. 14 references.

  4. Improving Utility of GPU in Accelerating Industrial Applications With User-Centered Automatic Code Translation

    NARCIS (Netherlands)

    Yang, Po; Dong, Feng; Codreanu, Valeriu; Williams, David; Roerdink, Jos B. T. M.; Liu, Baoquan; Anvari-Moghaddam, Amjad; Min, Geyong

    Small to medium enterprises (SMEs), particularly those whose business is focused on developing innovative products, are limited by a major bottleneck in the speed of computation in many applications. A recent development in GPUs has been the marked increase in their versatility in many

  5. A knowledge translation intervention to enhance clinical application of a virtual reality system in stroke rehabilitation.

    Science.gov (United States)

    Levac, Danielle; Glegg, Stephanie M N; Sveistrup, Heidi; Colquhoun, Heather; Miller, Patricia A; Finestone, Hillel; DePaul, Vincent; Harris, Jocelyn E; Velikonja, Diana

    2016-10-06

    Despite increasing evidence for the effectiveness of virtual reality (VR)-based therapy in stroke rehabilitation, few knowledge translation (KT) resources exist to support clinical integration. KT interventions addressing known barriers and facilitators to VR use are required. When environmental barriers to VR integration are less amenable to change, KT interventions can target modifiable barriers related to therapist knowledge and skills. A multi-faceted KT intervention was designed and implemented to support physical and occupational therapists in two stroke rehabilitation units in acquiring proficiency with use of the Interactive Exercise Rehabilitation System (IREX; GestureTek). The KT intervention consisted of interactive e-learning modules, hands-on workshops and experiential practice. Evaluation included the Assessing Determinants of Prospective Take Up of Virtual Reality (ADOPT-VR) Instrument and self-report confidence ratings of knowledge and skills pre- and post-study. Usability of the IREX was measured with the System Usability Scale (SUS). A focus group gathered therapist experiences. Frequency of IREX use was recorded for 6 months post-study. Eleven therapists delivered a total of 107 sessions of VR-based therapy to 34 clients with stroke. On the ADOPT-VR, significant pre-post improvements in therapist perceived behavioral control (p = 0.003), self-efficacy (p = 0.005) and facilitating conditions (p = 0.019) related to VR use were observed. Therapist intention to use VR did not change. Knowledge and skills improved significantly following e-learning completion (p = 0.001) and were sustained 6 months post-study. Below average perceived usability of the IREX (19th percentile) was reported. Lack of time was the most frequently reported barrier to VR use. A decrease in frequency of perceived barriers to VR use was not significant (p = 0.159). Two therapists used the IREX sparingly in the 6 months following the study. Therapists reported

  6. Bioinformatic tools for PCR Primer design

    African Journals Online (AJOL)

    ES

    Bioinformatics is an emerging scientific discipline that uses information ... complex biological questions. ... and computer programs for various purposes of primer ..... polymerase chain reaction: Human Immunodeficiency Virus 1 model studies.

  7. Challenge: A Multidisciplinary Degree Program in Bioinformatics

    Directory of Open Access Journals (Sweden)

    Mudasser Fraz Wyne

    2006-06-01

    Full Text Available Bioinformatics is a new field that is poorly served by any of the traditional science programs in Biology, Computer science or Biochemistry. Known to be a rapidly evolving discipline, Bioinformatics has emerged from experimental molecular biology and biochemistry as well as from the artificial intelligence, database, pattern recognition, and algorithms disciplines of computer science. While institutions are responding to this increased demand by establishing graduate programs in bioinformatics, entrance barriers for these programs are high, largely due to the significant prerequisite knowledge which is required, both in the fields of biochemistry and computer science. Although many schools currently have or are proposing graduate programs in bioinformatics, few are actually developing new undergraduate programs. In this paper I explore the blend of a multidisciplinary approach, discuss the response of academia and highlight challenges faced by this emerging field.

  8. Deciphering psoriasis. A bioinformatic approach.

    Science.gov (United States)

    Melero, Juan L; Andrades, Sergi; Arola, Lluís; Romeu, Antoni

    2018-02-01

    Psoriasis is an immune-mediated, inflammatory and hyperproliferative disease of the skin and joints. The cause of psoriasis is still unknown. The fundamental feature of the disease is the hyperproliferation of keratinocytes and the recruitment of cells from the immune system in the region of the affected skin, which leads to deregulation of the expression of many well-known genes. Based on data mining and bioinformatic scripting, here we show a new dimension of the effect of psoriasis at the genomic level. Using our own pipeline of scripts in Perl and MySQL and based on the freely available NCBI Gene Expression Omnibus (GEO) database: DataSet Record GDS4602 (Series GSE13355), we explore the extent of the effect of psoriasis on gene expression in the affected tissue. We give greater insight into the effects of psoriasis on the up-regulation of some genes in the cell cycle (CCNB1, CCNA2, CCNE2, CDK1) or the dynamin system (GBPs, MXs, MFN1), as well as the down-regulation of typical antioxidant genes (catalase, CAT; superoxide dismutases, SOD1-3; and glutathione reductase, GSR). We also provide a complete list of the human genes and how they respond in a state of psoriasis. Our results show that psoriasis affects all chromosomes and many biological functions. If we further consider the stable and mitotically inheritable character of the psoriasis phenotype, and the influence of environmental factors, then it seems that psoriasis has an epigenetic origin. This fits well with the strong hereditary character of the disease as well as its complex genetic background. Copyright © 2017 Japanese Society for Investigative Dermatology. Published by Elsevier B.V. All rights reserved.
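    A minimal sketch of the kind of scripting described (comparing expression between lesional and normal samples in a GEO series), assuming the series matrix file for GSE13355 has already been downloaded from GEO; the sample-to-group split used below is a placeholder that must be replaced with the real assignments from the series metadata.

```python
import pandas as pd
from scipy import stats

# Assumes the series matrix was downloaded from GEO beforehand, e.g.
# GSE13355_series_matrix.txt (tab-separated, '!'-prefixed metadata lines).
expr = pd.read_csv("GSE13355_series_matrix.txt", sep="\t",
                   comment="!", index_col=0)

# Placeholder grouping: the real lesional/normal mapping comes from the
# series metadata and must be supplied by the user.
lesional = expr.columns[:58]
normal = expr.columns[58:]

t, p = stats.ttest_ind(expr[lesional], expr[normal], axis=1)
results = pd.DataFrame({"t": t, "p": p}, index=expr.index).sort_values("p")
print(results.head(10))   # most differentially expressed probes
```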

  9. 'Inhabiting' the Translator's Habitus – Antjie Krog as Translator ...

    African Journals Online (AJOL)

    Drawing on the Bourdieusian concept of habitus and its applicability in the field of translation, this article discusses Antjie Krog's profile in the practice of translation in South Africa. Bourdieu's conceptualisation of the relationship between the initiating activities of translators and the structures which constrain and enable ...

  10. Rise and demise of bioinformatics? Promise and progress.

    Directory of Open Access Journals (Sweden)

    Christos A Ouzounis

    Full Text Available The field of bioinformatics and computational biology has gone through a number of transformations during the past 15 years, establishing itself as a key component of new biology. This spectacular growth has been challenged by a number of disruptive changes in science and technology. Despite the apparent fatigue of the linguistic use of the term itself, bioinformatics has grown perhaps to a point beyond recognition. We explore both historical aspects and future trends and argue that as the field expands, key questions remain unanswered and acquire new meaning while at the same time the range of applications is widening to cover an ever increasing number of biological disciplines. These trends appear to be pointing to a redefinition of certain objectives, milestones, and possibly the field itself.

  11. Bioinformatics and Microarray Data Analysis on the Cloud.

    Science.gov (United States)

    Calabrese, Barbara; Cannataro, Mario

    2016-01-01

    High-throughput platforms such as microarray, mass spectrometry, and next-generation sequencing are producing an increasing volume of omics data that needs large data storage and computing power. Cloud computing offers massive scalable computing and storage, data sharing, on-demand anytime and anywhere access to resources and applications, and thus it may represent the key technology for addressing those issues. In fact, in recent years it has been adopted for the deployment of different bioinformatics solutions and services both in academia and in industry. Despite this, cloud computing presents several issues regarding the security and privacy of data, which are particularly important when analyzing patients' data, such as in personalized medicine. This chapter reviews the main academic and industrial cloud-based bioinformatics solutions, with a special focus on microarray data analysis solutions, and underlines the main issues and problems related to the use of such platforms for the storage and analysis of patients' data.

  12. 2nd Colombian Congress on Computational Biology and Bioinformatics

    CERN Document Server

    Cristancho, Marco; Isaza, Gustavo; Pinzón, Andrés; Rodríguez, Juan

    2014-01-01

    This volume compiles accepted contributions for the 2nd Edition of the Colombian Computational Biology and Bioinformatics Congress CCBCOL, after a rigorous review process in which 54 papers were accepted for publication from 119 submitted contributions. Bioinformatics and Computational Biology are areas of knowledge that have emerged due to advances that have taken place in the Biological Sciences and its integration with Information Sciences. The expansion of projects involving the study of genomes has led the way in the production of vast amounts of sequence data which needs to be organized, analyzed and stored to understand phenomena associated with living organisms related to their evolution, behavior in different ecosystems, and the development of applications that can be derived from this analysis.

  13. Bioinformatics for whole-genome shotgun sequencing of microbial communities.

    Directory of Open Access Journals (Sweden)

    Kevin Chen

    2005-07-01

    Full Text Available The application of whole-genome shotgun sequencing to microbial communities represents a major development in metagenomics, the study of uncultured microbes via the tools of modern genomic analysis. In the past year, whole-genome shotgun sequencing projects of prokaryotic communities from an acid mine biofilm, the Sargasso Sea, Minnesota farm soil, three deep-sea whale falls, and deep-sea sediments have been reported, adding to previously published work on viral communities from marine and fecal samples. The interpretation of this new kind of data poses a wide variety of exciting and difficult bioinformatics problems. The aim of this review is to introduce the bioinformatics community to this emerging field by surveying existing techniques and promising new approaches for several of the most interesting of these computational problems.
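
    One of the computational problems this survey points to is grouping assembled fragments by their organism of origin. As a hedged illustration of the general idea only (not of any specific method covered in the review), the sketch below clusters hypothetical contigs by tetranucleotide composition; the sequences and the number of bins are invented placeholders.

```python
# Minimal sketch of composition-based binning for metagenomic contigs:
# represent each contig by its tetranucleotide frequency vector and
# cluster the vectors. Contigs and the bin count are hypothetical.
from itertools import product
from collections import Counter
import numpy as np
from sklearn.cluster import KMeans

TETRAMERS = ["".join(p) for p in product("ACGT", repeat=4)]

def tetra_freq(seq):
    counts = Counter(seq[i:i + 4] for i in range(len(seq) - 3))
    total = sum(counts[t] for t in TETRAMERS) or 1
    return [counts[t] / total for t in TETRAMERS]

contigs = {
    "contig_1": "ATGCGTACGTTAGCATGCGTACGTTAGC" * 10,
    "contig_2": "GGCGCCGGCGCCTTAGGCGCCGGCGCC" * 10,
    "contig_3": "ATATATTATAATATATTATAATATATTA" * 10,
}

X = np.array([tetra_freq(s) for s in contigs.values()])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for name, label in zip(contigs, labels):
    print(name, "-> bin", label)
```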

  14. Concepts and introduction to RNA bioinformatics

    DEFF Research Database (Denmark)

    Gorodkin, Jan; Hofacker, Ivo L.; Ruzzo, Walter L.

    2014-01-01

    RNA bioinformatics and computational RNA biology have emerged from implementing methods for predicting the secondary structure of single sequences. The field has evolved to exploit multiple sequences to take evolutionary information into account, such as compensating (and structure preserving) base...... for interactions between RNA and proteins. Here, we introduce the basic concepts of predicting RNA secondary structure relevant to the further analyses of RNA sequences. We also provide pointers to methods addressing various aspects of RNA bioinformatics and computational RNA biology....
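
    As a minimal, self-contained companion to the basic concepts mentioned here, the sketch below implements Nussinov-style base-pair maximization for a single sequence. It ignores energies, pseudoknots and the comparative methods the chapter actually covers; the input sequence is a toy example.

```python
# Minimal Nussinov-style dynamic program: maximize the number of
# Watson-Crick / wobble base pairs in a single RNA sequence.
# This ignores loop energies, so it is only a teaching sketch.
def nussinov_pairs(seq, min_loop=3):
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                      # j left unpaired
            for k in range(i, j - min_loop):         # j paired with k
                if (seq[k], seq[j]) in pairs:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov_pairs("GGGAAAUCC"))  # small toy hairpin
```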

  15. Translating India

    CERN Document Server

    Kothari, Rita

    2014-01-01

    The cultural universe of urban, English-speaking middle class in India shows signs of growing inclusiveness as far as English is concerned. This phenomenon manifests itself in increasing forms of bilingualism (combination of English and one Indian language) in everyday forms of speech - advertisement jingles, bilingual movies, signboards, and of course conversations. It is also evident in the startling prominence of Indian Writing in English and somewhat less visibly, but steadily rising, activity of English translation from Indian languages. Since the eighties this has led to a frenetic activity around English translation in India's academic and literary circles. Kothari makes this very current phenomenon her chief concern in Translating India.   The study covers aspects such as the production, reception and marketability of English translation. Through an unusually multi-disciplinary approach, this study situates English translation in India amidst local and global debates on translation, representation an...

  16. Translating Inclusion

    DEFF Research Database (Denmark)

    Fallov, Mia Arp; Birk, Rasmus

    2018-01-01

    The purpose of this paper is to explore how practices of translation shape particular paths of inclusion for people living in marginalized residential areas in Denmark. Inclusion, we argue, is not an end-state, but rather something which must be constantly performed. Active citizenship, today......, is not merely a question of participation, but of learning to become active in all spheres of life. The paper draws on empirical examples from a multi-sited field work in 6 different sites of local community work in Denmark, to demonstrate how different dimensions of translation are involved in shaping active...... citizenship. We propose the following different dimensions of translation: translating authority, translating language, translating social problems. The paper takes its theoretical point of departure from assemblage urbanism, arguing that cities are heterogeneous assemblages of socio-material interactions...

  17. Translational plant proteomics: A perspective

    NARCIS (Netherlands)

    Agrawal, G.K.; Pedreschi, R.; Barkla, B.J.; Bindschedler, L.V.; Cramer, R.; Sarkar, A.; Renaut, J.; Job, D.; Rakwal, R.

    2012-01-01

    Translational proteomics is an emerging sub-discipline of the proteomics field in the biological sciences. Translational plant proteomics aims to integrate knowledge from basic sciences to translate it into field applications to solve issues related but not limited to the recreational and economic

  18. Translating induced pluripotent stem cells from bench to bedside: application to retinal diseases.

    Science.gov (United States)

    Cramer, Alona O; MacLaren, Robert E

    2013-04-01

    Induced pluripotent stem cells (iPSc) are a scientific and medical frontier. Application of reprogrammed somatic cells for clinical trials is in its dawn period; advances in research with animal and human iPSc are paving the way for retinal therapies with the ongoing development of safe animal cell transplantation studies and characterization of patient-specific and disease-specific human iPSc. The retina is an optimal model for investigation of neural regeneration; amongst other advantageous attributes, it is the most accessible part of the CNS for surgery and outcome monitoring. A recent clinical trial showing a degree of visual restoration via a subretinal electronic prosthesis implies that even a severely degenerate retina may have the capacity for repair after cell replacement through potential plasticity of the visual system. Successful differentiation of neural retina from iPSc and the recent generation of an optic cup from human ESc in vitro increase the feasibility of generating an expandable and clinically suitable source of cells for human clinical trials. In this review we shall present recent studies that have propelled the field forward and discuss challenges in utilizing iPS cell-derived retinal cells as reliable models for clinical therapies and as a source for clinical cell transplantation treatment for patients suffering from genetic retinal disease.

  19. Applications of the Morris water maze in translational traumatic brain injury research.

    Science.gov (United States)

    Tucker, Laura B; Velosky, Alexander G; McCabe, Joseph T

    2018-05-01

    Acquired traumatic brain injury (TBI) is frequently accompanied by persistent cognitive symptoms, including executive function disruptions and memory deficits. The Morris Water Maze (MWM) is the most widely-employed laboratory behavioral test for assessing cognitive deficits in rodents after experimental TBI. Numerous protocols exist for performing the test, which has shown great robustness in detecting learning and memory deficits in rodents after infliction of TBI. We review applications of the MWM for the study of cognitive deficits following TBI in pre-clinical studies, describing multiple ways in which the test can be employed to examine specific aspects of learning and memory. Emphasis is placed on dependent measures that are available and important controls that must be considered in the context of TBI. Finally, caution is given regarding interpretation of deficits as being indicative of dysfunction of a single brain region (hippocampus), as experimental models of TBI most often result in more diffuse damage that disrupts multiple neural pathways and larger functional networks that participate in complex behaviors required in MWM performance. Published by Elsevier Ltd.
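
    For readers unfamiliar with the dependent measures referred to above, the toy sketch below computes two of the most common ones, escape latency and path length, from hypothetical tracking samples. Real analyses work from video-tracking exports and include many additional measures (for example thigmotaxis and platform crossings).

```python
# Toy computation of two common MWM dependent measures from tracking
# samples (time, x, y): escape latency and cumulative path length.
# The coordinates below are invented; real trials come from video tracking.
import math

track = [(0.0, 0.0, 0.0), (1.0, 10.0, 5.0), (2.0, 18.0, 12.0), (3.0, 25.0, 20.0)]  # (s, cm, cm)

latency = track[-1][0] - track[0][0]
path_length = sum(math.dist(a[1:], b[1:]) for a, b in zip(track, track[1:]))
print(f"escape latency: {latency:.1f} s, path length: {path_length:.1f} cm")
```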

  20. Bioinformatics and the Politics of Innovation in the Life Sciences

    Science.gov (United States)

    Zhou, Yinhua; Datta, Saheli; Salter, Charlotte

    2016-01-01

    The governments of China, India, and the United Kingdom are unanimous in their belief that bioinformatics should supply the link between basic life sciences research and its translation into health benefits for the population and the economy. Yet at the same time, as ambitious states vying for position in the future global bioeconomy they differ considerably in the strategies adopted in pursuit of this goal. At the heart of these differences lies the interaction between epistemic change within the scientific community itself and the apparatus of the state. Drawing on desk-based research and thirty-two interviews with scientists and policy makers in the three countries, this article analyzes the politics that shape this interaction. From this analysis emerges an understanding of the variable capacities of different kinds of states and political systems to work with science in harnessing the potential of new epistemic territories in global life sciences innovation. PMID:27546935

  1. A bioinformatics roadmap for the human vaccines project.

    Science.gov (United States)

    Scheuermann, Richard H; Sinkovits, Robert S; Schenkelberg, Theodore; Koff, Wayne C

    2017-06-01

    Biomedical research has become a data intensive science in which high throughput experimentation is producing comprehensive data about biological systems at an ever-increasing pace. The Human Vaccines Project is a new public-private partnership, with the goal of accelerating development of improved vaccines and immunotherapies for global infectious diseases and cancers by decoding the human immune system. To achieve its mission, the Project is developing a Bioinformatics Hub as an open-source, multidisciplinary effort with the overarching goal of providing an enabling infrastructure to support the data processing, analysis and knowledge extraction procedures required to translate high throughput, high complexity human immunology research data into biomedical knowledge, to determine the core principles driving specific and durable protective immune responses.

  2. Navigating the changing learning landscape: perspective from bioinformatics.ca

    OpenAIRE

    Brazas, Michelle D.; Ouellette, B. F. Francis

    2013-01-01

    With the advent of YouTube channels in bioinformatics, open platforms for problem solving in bioinformatics, active web forums in computing analyses and online resources for learning to code or use a bioinformatics tool, the more traditional continuing education bioinformatics training programs have had to adapt. Bioinformatics training programs that solely rely on traditional didactic methods are being superseded by these newer resources. Yet such face-to-face instruction is still invaluable...

  3. mORCA: sailing bioinformatics world with mobile devices.

    Science.gov (United States)

    Díaz-Del-Pino, Sergio; Falgueras, Juan; Perez-Wohlfeil, Esteban; Trelles, Oswaldo

    2018-03-01

    Nearly 10 years have passed since the first mobile apps appeared. Given that bioinformatics is a web-based world and that mobile devices are endowed with web browsers, it seemed natural that bioinformatics would transition from personal computers to mobile devices, but nothing could be further from the truth. The transition demands new paradigms, designs and novel implementations. Through an in-depth analysis of the requirements of existing bioinformatics applications, we designed and deployed an easy-to-use, web-based, lightweight mobile client. The client is able to browse, select, automatically compose interface parameters, invoke services and monitor the execution of Web Services using the service's metadata stored in catalogs or repositories. mORCA is available at http://bitlab-es.com/morca/app as a web-app. It is also available in the App Store by Apple and the Play Store by Google. The software will be available for at least 2 years. ortrelles@uma.es. Source code, final web-app, training material and documentation are available at http://bitlab-es.com/morca. © The Author(s) 2017. Published by Oxford University Press.

  4. Compositional translation

    NARCIS (Netherlands)

    Appelo, Lisette; Janssen, Theo; Jong, de F.M.G.; Landsbergen, S.P.J.

    1994-01-01

    This book provides an in-depth review of machine translation by discussing in detail a particular method, called compositional translation, and a particular system, Rosetta, which is based on this method. The Rosetta project is a unique combination of fundamental research and large-scale

  5. Development of Bioinformatics Infrastructure for Genomics Research.

    Science.gov (United States)

    Mulder, Nicola J; Adebiyi, Ezekiel; Adebiyi, Marion; Adeyemi, Seun; Ahmed, Azza; Ahmed, Rehab; Akanle, Bola; Alibi, Mohamed; Armstrong, Don L; Aron, Shaun; Ashano, Efejiro; Baichoo, Shakuntala; Benkahla, Alia; Brown, David K; Chimusa, Emile R; Fadlelmola, Faisal M; Falola, Dare; Fatumo, Segun; Ghedira, Kais; Ghouila, Amel; Hazelhurst, Scott; Isewon, Itunuoluwa; Jung, Segun; Kassim, Samar Kamal; Kayondo, Jonathan K; Mbiyavanga, Mamana; Meintjes, Ayton; Mohammed, Somia; Mosaku, Abayomi; Moussa, Ahmed; Muhammd, Mustafa; Mungloo-Dilmohamud, Zahra; Nashiru, Oyekanmi; Odia, Trust; Okafor, Adaobi; Oladipo, Olaleye; Osamor, Victor; Oyelade, Jellili; Sadki, Khalid; Salifu, Samson Pandam; Soyemi, Jumoke; Panji, Sumir; Radouani, Fouzia; Souiai, Oussama; Tastan Bishop, Özlem

    2017-06-01

    Although pockets of bioinformatics excellence have developed in Africa, generally, large-scale genomic data analysis has been limited by the availability of expertise and infrastructure. H3ABioNet, a pan-African bioinformatics network, was established to build capacity specifically to enable H3Africa (Human Heredity and Health in Africa) researchers to analyze their data in Africa. Since the inception of the H3Africa initiative, H3ABioNet's role has evolved in response to changing needs from the consortium and the African bioinformatics community. H3ABioNet set out to develop core bioinformatics infrastructure and capacity for genomics research in various aspects of data collection, transfer, storage, and analysis. Various resources have been developed to address genomic data management and analysis needs of H3Africa researchers and other scientific communities on the continent. NetMap was developed and used to build an accurate picture of network performance within Africa and between Africa and the rest of the world, and Globus Online has been rolled out to facilitate data transfer. A participant recruitment database was developed to monitor participant enrollment, and data is being harmonized through the use of ontologies and controlled vocabularies. The standardized metadata will be integrated to provide a search facility for H3Africa data and biospecimens. Because H3Africa projects are generating large-scale genomic data, facilities for analysis and interpretation are critical. H3ABioNet is implementing several data analysis platforms that provide a large range of bioinformatics tools or workflows, such as Galaxy, the Job Management System, and eBiokits. A set of reproducible, portable, and cloud-scalable pipelines to support the multiple H3Africa data types are also being developed and dockerized to enable execution on multiple computing infrastructures. In addition, new tools have been developed for analysis of the uniquely divergent African data and for

  6. Translational Radiomics: Defining the Strategy Pipeline and Considerations for Application-Part 2: From Clinical Implementation to Enterprise.

    Science.gov (United States)

    Shaikh, Faiq; Franc, Benjamin; Allen, Erastus; Sala, Evis; Awan, Omer; Hendrata, Kenneth; Halabi, Safwan; Mohiuddin, Sohaib; Malik, Sana; Hadley, Dexter; Shrestha, Rasu

    2018-03-01

    Enterprise imaging has channeled various technological innovations to the field of clinical radiology, ranging from advanced imaging equipment and postacquisition iterative reconstruction tools to image analysis and computer-aided detection tools. More recently, the advancement in the field of quantitative image analysis coupled with machine learning-based data analytics, classification, and integration has ushered in the era of radiomics, a paradigm shift that holds tremendous potential in clinical decision support as well as drug discovery. However, there are important issues to consider to incorporate radiomics into a clinically applicable system and a commercially viable solution. In this two-part series, we offer insights into the development of the translational pipeline for radiomics from methodology to clinical implementation (Part 1) and from that point to enterprise development (Part 2). In Part 2 of this two-part series, we study the components of the strategy pipeline, from clinical implementation to building enterprise solutions. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  7. Machine Translation from Text

    Science.gov (United States)

    Habash, Nizar; Olive, Joseph; Christianson, Caitlin; McCary, John

    Machine translation (MT) from text, the topic of this chapter, is perhaps the heart of the GALE project. Beyond being a well-defined application that stands on its own, MT from text is the link between the automatic speech recognition component and the distillation component. The focus of MT in GALE is on translating from Arabic or Chinese to English. The three languages represent a wide range of linguistic diversity and make the GALE MT task rather challenging and exciting.

  8. Translational Biomedical Informatics in the Cloud: Present and Future

    Directory of Open Access Journals (Sweden)

    Jiajia Chen

    2013-01-01

    Full Text Available Next generation sequencing and other high-throughput experimental techniques of recent decades have driven the exponential growth in publicly available molecular and clinical data. This information explosion has prepared the ground for the development of translational bioinformatics. The scale and dimensionality of data, however, pose obvious challenges in data mining, storage, and integration. In this paper we demonstrate the utility and promise of cloud computing for tackling these big data problems. We also outline our vision that cloud computing could be an enabling tool to facilitate translational bioinformatics research.

  9. Translation-Memory (TM) Research

    DEFF Research Database (Denmark)

    Schjoldager, Anne Gram; Christensen, Tina Paulsen

    2010-01-01

    It is no exaggeration to say that the advent of translation-memory (TM) systems in the translation profession has led to drastic changes in translators' processes and workflow, and yet, though many professional translators nowadays depend on some form of TM system, this has not been the object of much research. Our paper attempts to find out what we know about the nature, applications and influences of TM technology, including translators' interaction with TMs, and also how we know it. An essential part of the analysis is based on a selection of empirical TM studies, which we assume to be representative of the research field as a whole. Our analysis suggests that, while considerable knowledge is available about the technical side of TMs, more research is needed to understand how translators interact with TM technology and how TMs influence translators' cognitive translation processes.

  10. Multimodal Translation System Using Texture-Mapped Lip-Sync Images for Video Mail and Automatic Dubbing Applications

    Directory of Open Access Journals (Sweden)

    Nakamura Satoshi

    2004-01-01

    Full Text Available We introduce a multimodal English-to-Japanese and Japanese-to-English translation system that also translates the speaker's speech motion by synchronizing it to the translated speech. This system also introduces both a face synthesis technique that can generate any viseme lip shape and a face tracking technique that can estimate the original position and rotation of a speaker's face in an image sequence. To retain the speaker's facial expression, we substitute only the speech organ's image with the synthesized one, which is made by a 3D wire-frame model that is adaptable to any speaker. Our approach provides translated image synthesis with an extremely small database. The tracking motion of the face from a video image is performed by template matching. In this system, the translation and rotation of the face are detected by using a 3D personal face model whose texture is captured from a video frame. We also propose a method to customize the personal face model by using our GUI tool. By combining these techniques and the translated voice synthesis technique, an automatic multimodal translation can be achieved that is suitable for video mail or automatic dubbing systems into other languages.

  11. Multimodal Translation System Using Texture-Mapped Lip-Sync Images for Video Mail and Automatic Dubbing Applications

    Science.gov (United States)

    Morishima, Shigeo; Nakamura, Satoshi

    2004-12-01

    We introduce a multimodal English-to-Japanese and Japanese-to-English translation system that also translates the speaker's speech motion by synchronizing it to the translated speech. This system also introduces both a face synthesis technique that can generate any viseme lip shape and a face tracking technique that can estimate the original position and rotation of a speaker's face in an image sequence. To retain the speaker's facial expression, we substitute only the speech organ's image with the synthesized one, which is made by a 3D wire-frame model that is adaptable to any speaker. Our approach provides translated image synthesis with an extremely small database. The tracking motion of the face from a video image is performed by template matching. In this system, the translation and rotation of the face are detected by using a 3D personal face model whose texture is captured from a video frame. We also propose a method to customize the personal face model by using our GUI tool. By combining these techniques and the translated voice synthesis technique, an automatic multimodal translation can be achieved that is suitable for video mail or automatic dubbing systems into other languages.
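
    The face tracking described above relies on template matching; as a generic illustration of that building block only (not the paper's 3D-model-based tracker), the sketch below locates a patch in a synthetic grayscale frame by a sum-of-squared-differences search.

```python
# Generic template matching by sum of squared differences on a grayscale
# frame, the kind of low-level step used to track a face region.
# Arrays here are synthetic; the paper's 3D-model-based tracking is richer.
import numpy as np

def match_template(frame, template):
    fh, fw = frame.shape
    th, tw = template.shape
    best, best_pos = np.inf, (0, 0)
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            ssd = np.sum((frame[y:y + th, x:x + tw] - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos

rng = np.random.default_rng(0)
frame = rng.random((40, 40))
template = frame[12:20, 18:26].copy()      # crop a patch to act as the template
print(match_template(frame, template))     # expected: (12, 18)
```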

  12. Planning bioinformatics workflows using an expert system

    Science.gov (United States)

    Chen, Xiaoling; Chang, Jeffrey T.

    2017-01-01

    Motivation: Bioinformatic analyses are becoming formidably more complex due to the increasing number of steps required to process the data, as well as the proliferation of methods that can be used in each step. To alleviate this difficulty, pipelines are commonly employed. However, pipelines are typically implemented to automate a specific analysis, and thus are difficult to use for exploratory analyses requiring systematic changes to the software or parameters used. Results: To automate the development of pipelines, we have investigated expert systems. We created the Bioinformatics ExperT SYstem (BETSY) that includes a knowledge base where the capabilities of bioinformatics software are explicitly and formally encoded. BETSY is a backwards-chaining rule-based expert system comprising a data model that can capture the richness of biological data, and an inference engine that reasons on the knowledge base to produce workflows. Currently, the knowledge base is populated with rules to analyze microarray and next generation sequencing data. We evaluated BETSY and found that it could generate workflows that reproduce and go beyond previously published bioinformatics results. Finally, a meta-investigation of the workflows generated from the knowledge base produced a quantitative measure of the technical burden imposed by each step of bioinformatics analyses, revealing the large number of steps devoted to the pre-processing of data. In sum, an expert system approach can facilitate exploratory bioinformatic analysis by automating the development of workflows, a task that requires significant domain expertise. Availability and Implementation: https://github.com/jefftc/changlab Contact: jeffrey.t.chang@uth.tmc.edu PMID:28052928
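
    To make the backwards-chaining idea concrete, the toy sketch below chains over a hand-written rule set in which each (hypothetical) tool declares the data type it consumes and produces. It illustrates the reasoning style only; it is not BETSY's knowledge base or data model.

```python
# Toy backward-chaining planner: each rule says which input data type a
# (hypothetical) tool consumes and which output type it produces. Starting
# from a goal type, we chain backwards until we reach an available type.
RULES = [
    {"tool": "align_reads",   "consumes": "fastq_reads",  "produces": "aligned_bam"},
    {"tool": "call_variants", "consumes": "aligned_bam",  "produces": "vcf_variants"},
    {"tool": "annotate",      "consumes": "vcf_variants", "produces": "annotated_vcf"},
]

def plan(goal, available, rules, seen=None):
    """Return an ordered list of tools that produces `goal` from `available`."""
    seen = seen or set()
    if goal in available:
        return []
    for rule in rules:
        if rule["produces"] == goal and rule["tool"] not in seen:
            upstream = plan(rule["consumes"], available, rules, seen | {rule["tool"]})
            if upstream is not None:
                return upstream + [rule["tool"]]
    return None  # goal cannot be derived from the available data

print(plan("annotated_vcf", {"fastq_reads"}, RULES))
# -> ['align_reads', 'call_variants', 'annotate']
```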

  13. Binary translation using peephole translation rules

    Science.gov (United States)

    Bansal, Sorav; Aiken, Alex

    2010-05-04

    An efficient binary translator uses peephole translation rules to directly translate executable code from one instruction set to another. In a preferred embodiment, the translation rules are generated using superoptimization techniques that enable the translator to automatically learn translation rules for translating code from the source to target instruction set architecture.
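
    As a schematic illustration of rule-driven translation (not the patented translator or its superoptimization-based rule learning), the sketch below applies a tiny table of invented peephole rules to a made-up source instruction stream, preferring the longest matching pattern.

```python
# Toy peephole translator: scan the source instruction stream and replace
# known patterns with target-ISA equivalents. Both instruction notations
# (x86-like source, ARM-like target) are simplified for illustration only.
PEEPHOLE_RULES = {
    ("push eax", "pop ebx"): ["mov r1, r0"],   # source pair -> target move
    ("mov eax, 0",):         ["mov r0, #0"],
}

def translate(source, rules, max_window=2):
    out, i = [], 0
    while i < len(source):
        for width in range(max_window, 0, -1):   # prefer the longest match
            window = tuple(source[i:i + width])
            if window in rules:
                out.extend(rules[window])
                i += width
                break
        else:
            out.append(source[i])                # no rule: copy through
            i += 1                               # (a real translator needs full coverage)
    return out

print(translate(["mov eax, 0", "push eax", "pop ebx", "ret"], PEEPHOLE_RULES))
```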

  14. Precision translator

    Science.gov (United States)

    Reedy, Robert P.; Crawford, Daniel W.

    1984-01-01

    A precision translator for focusing a beam of light on the end of a glass fiber which includes two tuning fork-like members rigidly connected to each other. These members have two prongs each, with their separation adjusted by a screw, thereby adjusting the orthogonal positioning of a glass fiber attached to one of the members. This translator is made of simple parts with the capability to keep its adjustment even under rough handling.

  15. Translational Epidemiology in Psychiatry

    Science.gov (United States)

    Weissman, Myrna M.; Brown, Alan S.; Talati, Ardesheer

    2012-01-01

    Translational research generally refers to the application of knowledge generated by advances in basic sciences research translated into new approaches for diagnosis, prevention, and treatment of disease. This direction is called bench-to-bedside. Psychiatry has similarly emphasized the basic sciences as the starting point of translational research. This article introduces the term translational epidemiology for psychiatry research as a bidirectional concept in which the knowledge generated from the bedside or the population can also be translated to the benches of laboratory science. Epidemiologic studies are primarily observational but can generate representative samples, novel designs, and hypotheses that can be translated into more tractable experimental approaches in the clinical and basic sciences. This bedside-to-bench concept has not been explicated in psychiatry, although there are an increasing number of examples in the research literature. This article describes selected epidemiologic designs, providing examples and opportunities for translational research from community surveys and prospective, birth cohort, and family-based designs. Rapid developments in informatics, emphases on large sample collection for genetic and biomarker studies, and interest in personalized medicine—which requires information on relative and absolute risk factors—make this topic timely. The approach described has implications for providing fresh metaphors to communicate complex issues in interdisciplinary collaborations and for training in epidemiology and other sciences in psychiatry. PMID:21646577

  16. Minimizing the Translation Error in the Application of an Oblique Single-Cut Rotation Osteotomy: Where to Cut?

    Science.gov (United States)

    Dobbe, Johannes G G; Strackee, Simon D; Streekstra, Geert J

    2018-04-01

    An oblique single cut rotation osteotomy enables correcting angular bone alignment in the coronal, sagittal, and transverse planes, with just a single oblique osteotomy, and by rotating one bone segment in the osteotomy plane. However, translational malalignment is likely to exist if the bone is curved or deformed and the location of the oblique osteotomy is not obvious. In this paper, we investigate how translational malalignment depends on the osteotomy location. We further propose and evaluate by simulation in 3-D, a method that minimizes translational malalignment by varying the osteotomy location and by sliding the distal bone segment with respect to the proximal bone segment within the oblique osteotomy plane. The method is finally compared to what three surgeons achieve by manually selecting the osteotomy location in 3-D virtual space without planning in-plane translations. The minimization method optimized for length better than the surgeons did, by 3.2 mm on average, range (0.1, 9.4) mm, in 82% of the cases. A better translation in the axial plane was achieved by 4.1 mm on average, range (0.3, 14.4) mm, in 77% of the cases. The proposed method generally performs better than subjectively choosing an osteotomy position along the bone axis. The proposed method is considered a valuable tool for future alignment planning of an oblique single-cut rotation osteotomy since it helps minimizing translational malalignment.

  17. Bioinformatic tools for PCR Primer design

    African Journals Online (AJOL)

    ES

    ... polymerase chain reaction (PCR), oligo hybridization and DNA sequencing. Proper primer design is actually one of the most important factors/steps in successful DNA sequencing. Various bioinformatics programs are available for selection of primer pairs from a template sequence. The plethora of programs for PCR primer design reflects the ...
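
    The record above is truncated, but the kind of checks primer-design programs automate can be illustrated briefly: the sketch below computes GC content and a Wallace-rule melting temperature for a hypothetical candidate primer. Real tools add checks for hairpins, primer dimers, specificity and more.

```python
# Minimal primer sanity checks: GC content and the Wallace-rule melting
# temperature (Tm = 2*(A+T) + 4*(G+C)), a rough estimate for short primers.
def gc_content(primer):
    primer = primer.upper()
    return (primer.count("G") + primer.count("C")) / len(primer)

def wallace_tm(primer):
    primer = primer.upper()
    at = primer.count("A") + primer.count("T")
    gc = primer.count("G") + primer.count("C")
    return 2 * at + 4 * gc

candidate = "ATGCGTACGTTAGCATGCGT"          # hypothetical 20-mer
print(f"GC content: {gc_content(candidate):.2f}")
print(f"Wallace Tm: {wallace_tm(candidate)} °C")
print("length OK:", 18 <= len(candidate) <= 24)
```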

  18. "Extreme Programming" in a Bioinformatics Class

    Science.gov (United States)

    Kelley, Scott; Alger, Christianna; Deutschman, Douglas

    2009-01-01

    The importance of Bioinformatics tools and methodology in modern biological research underscores the need for robust and effective courses at the college level. This paper describes such a course designed on the principles of cooperative learning based on a computer software industry production model called "Extreme Programming" (EP).…

  19. Bioinformatics: A History of Evolution "In Silico"

    Science.gov (United States)

    Ondrej, Vladan; Dvorak, Petr

    2012-01-01

    Bioinformatics, biological databases, and the worldwide use of computers have accelerated biological research in many fields, such as evolutionary biology. Here, we describe a primer of nucleotide sequence management and the construction of a phylogenetic tree with two examples; the two selected are from completely different groups of organisms:…

  20. Protein raftophilicity. How bioinformatics can help membranologists

    DEFF Research Database (Denmark)

    Nielsen, Henrik; Sperotto, Maria Maddalena

    ... artificial neural network (ANN)-based bioinformatics approach. The ANN was trained to recognize feature-based patterns in proteins that are considered to be associated with lipid rafts. The trained ANN was then used to predict protein raftophilicity. We found that, in the case of α-helical membrane proteins, their hydrophobic length does not affect...

  1. Bioinformatics in Undergraduate Education: Practical Examples

    Science.gov (United States)

    Boyle, John A.

    2004-01-01

    Bioinformatics has emerged as an important research tool in recent years. The ability to mine large databases for relevant information has become increasingly central to many different aspects of biochemistry and molecular biology. It is important that undergraduates be introduced to the available information and methodologies. We present a…

  2. Implementing bioinformatic workflows within the bioextract server

    Science.gov (United States)

    Computational workflows in bioinformatics are becoming increasingly important in the achievement of scientific advances. These workflows typically require the integrated use of multiple, distributed data sources and analytic tools. The BioExtract Server (http://bioextract.org) is a distributed servi...

  3. Privacy Preserving PCA on Distributed Bioinformatics Datasets

    Science.gov (United States)

    Li, Xin

    2011-01-01

    In recent years, new bioinformatics technologies, such as gene expression microarray, genome-wide association study, proteomics, and metabolomics, have been widely used to simultaneously identify a huge number of human genomic/genetic biomarkers, generate a tremendously large amount of data, and dramatically increase the knowledge on human…
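
    One common strategy for PCA over distributed datasets, sketched below on synthetic data, is to exchange only per-site summary statistics (counts, feature sums and Gram matrices) rather than raw records. This illustrates the general distributed setting only; it is not necessarily the privacy-preserving protocol developed in this work.

```python
# Sketch of a simple distributed PCA: each site shares only its feature sums
# and Gram matrix (X^T X), never the raw records. The coordinator pools the
# statistics, forms the global covariance and extracts principal components.
import numpy as np

rng = np.random.default_rng(0)
sites = [rng.normal(size=(100, 5)) for _ in range(3)]   # synthetic local datasets

# Each site computes local summaries.
summaries = [(x.shape[0], x.sum(axis=0), x.T @ x) for x in sites]

# Coordinator pools the summaries into a global covariance matrix.
n = sum(s[0] for s in summaries)
total = sum(s[1] for s in summaries)
gram = sum(s[2] for s in summaries)
mean = total / n
cov = (gram - n * np.outer(mean, mean)) / (n - 1)

eigvals, eigvecs = np.linalg.eigh(cov)
components = eigvecs[:, ::-1][:, :2]                    # top-2 principal axes
print(components.shape)
```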

  4. Bioboxes: standardised containers for interchangeable bioinformatics software.

    Science.gov (United States)

    Belmann, Peter; Dröge, Johannes; Bremges, Andreas; McHardy, Alice C; Sczyrba, Alexander; Barton, Michael D

    2015-01-01

    Software is now both central and essential to modern biology, yet lack of availability, difficult installations, and complex user interfaces make software hard to obtain and use. Containerisation, as exemplified by the Docker platform, has the potential to solve the problems associated with sharing software. We propose bioboxes: containers with standardised interfaces to make bioinformatics software interchangeable.

  5. Development and implementation of a bioinformatics online ...

    African Journals Online (AJOL)

    Thus, there is a need for appropriate strategies for introducing the basic components of this emerging scientific field to part of the African populace through the development of an online distance education learning tool. This study involved the design of a bioinformatics online distance education tool and an implementation of ...

  6. SPECIES DATABASES AND THE BIOINFORMATICS REVOLUTION.

    Science.gov (United States)

    Biological databases are having a growth spurt. Much of this results from research in genetics and biodiversity, coupled with fast-paced developments in information technology. The revolution in bioinformatics, defined by Sugden and Pennisi (2000) as the "tools and techniques for...

  7. Working with Corpora in the Translation Classroom

    Science.gov (United States)

    Krüger, Ralph

    2012-01-01

    This article sets out to illustrate possible applications of electronic corpora in the translation classroom. Starting with a survey of corpus use within corpus-based translation studies, the didactic value of corpora in the translation classroom and their epistemic value in translation teaching and practice will be elaborated. A typology of…

  8. eTRIKS platform: Conception and operation of a highly scalable cloud-based platform for translational research and applications development.

    Science.gov (United States)

    Bussery, Justin; Denis, Leslie-Alexandre; Guillon, Benjamin; Liu, Pengfeï; Marchetti, Gino; Rahal, Ghita

    2018-04-01

    We describe the genesis, design and evolution of a computing platform designed and built to improve the success rate of biomedical translational research. The eTRIKS project platform was developed with the aim of building a platform that can securely host heterogeneous types of data and provide an optimal environment to run tranSMART analytical applications. Many types of data can now be hosted, including multi-OMICS data, preclinical laboratory data and clinical information, including longitudinal data sets. During the last two years, the platform has matured into a robust translational research knowledge management system that is able to host other data mining applications and support the development of new analytical tools. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. Navigating the changing learning landscape: perspective from bioinformatics.ca.

    Science.gov (United States)

    Brazas, Michelle D; Ouellette, B F Francis

    2013-09-01

    With the advent of YouTube channels in bioinformatics, open platforms for problem solving in bioinformatics, active web forums in computing analyses and online resources for learning to code or use a bioinformatics tool, the more traditional continuing education bioinformatics training programs have had to adapt. Bioinformatics training programs that solely rely on traditional didactic methods are being superseded by these newer resources. Yet such face-to-face instruction is still invaluable in the learning continuum. Bioinformatics.ca, which hosts the Canadian Bioinformatics Workshops, has blended more traditional learning styles with current online and social learning styles. Here we share our growing experiences over the past 12 years and look toward what the future holds for bioinformatics training programs.

  10. Component-Based Approach for Educating Students in Bioinformatics

    Science.gov (United States)

    Poe, D.; Venkatraman, N.; Hansen, C.; Singh, G.

    2009-01-01

    There is an increasing need for an effective method of teaching bioinformatics. Increased progress and availability of computer-based tools for educating students have led to the implementation of a computer-based system for teaching bioinformatics as described in this paper. Bioinformatics is a recent, hybrid field of study combining elements of…

  11. Shared Bioinformatics Databases within the Unipro UGENE Platform

    Directory of Open Access Journals (Sweden)

    Protsyuk Ivan V.

    2015-03-01

    Full Text Available Unipro UGENE is an open-source bioinformatics toolkit that integrates popular tools along with original instruments for molecular biologists within a unified user interface. Nowadays, most bioinformatics desktop applications, including UGENE, make use of a local data model while processing different types of data. Such an approach causes inconvenience for scientists working cooperatively and relying on the same data, since multiple copies of certain files have to be made for every workplace and kept synchronized whenever modifications occur. Therefore, we focused on bringing collaborative work into the UGENE user experience. Currently, several UGENE installations can be connected to a designated shared database and users can interact with it simultaneously. Such databases can be created by UGENE users and be used at their discretion. Objects of each data type supported by UGENE, such as sequences, annotations, multiple alignments, etc., can now be easily imported from or exported to a remote storage. One of the main advantages of this system, compared to existing ones, is the almost simultaneous access of client applications to shared data regardless of their volume. Moreover, the system is capable of storing millions of objects. The storage itself is a regular database server, so even an inexperienced user is able to deploy it. Thus, UGENE may provide access to shared data for users located, for example, in the same laboratory or institution. UGENE is available at: http://ugene.net/download.html.

  12. Bioinformatics tools for the analysis of NMR metabolomics studies focused on the identification of clinically relevant biomarkers.

    Science.gov (United States)

    Puchades-Carrasco, Leonor; Palomino-Schätzlein, Martina; Pérez-Rambla, Clara; Pineda-Lucena, Antonio

    2016-05-01

    Metabolomics, a systems biology approach focused on the global study of the metabolome, offers tremendous potential in the analysis of clinical samples. Among other applications, metabolomics enables mapping of biochemical alterations involved in the pathogenesis of diseases, and offers the opportunity to noninvasively identify diagnostic, prognostic and predictive biomarkers that could translate into early therapeutic interventions. Particularly, metabolomics by Nuclear Magnetic Resonance (NMR) has the ability to simultaneously detect and structurally characterize an abundance of metabolic components, even when their identities are unknown. Analysis of the data generated using this experimental approach requires the application of statistical and bioinformatics tools for the correct interpretation of the results. This review focuses on the different steps involved in the metabolomics characterization of biofluids for clinical applications, ranging from the design of the study to the biological interpretation of the results. Particular emphasis is devoted to the specific procedures required for the processing and interpretation of NMR data with a focus on the identification of clinically relevant biomarkers. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
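
    As a minimal illustration of the unsupervised step that typically follows NMR data processing, the sketch below runs PCA on synthetic binned spectra for two hypothetical groups. Real metabolomics workflows add normalization, scaling choices, supervised models and statistical validation before any biomarker claim.

```python
# Minimal sketch of one common step in NMR metabolomics data analysis:
# PCA on binned spectra to look for group separation. The spectra here are
# synthetic stand-ins for processed NMR data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
controls = rng.normal(0.0, 1.0, size=(20, 200))   # 20 samples x 200 spectral bins
cases = rng.normal(0.0, 1.0, size=(20, 200))
cases[:, 42] += 3.0                                # one perturbed "metabolite" bin

X = np.vstack([controls, cases])
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
print(scores[:3])                                  # PC1/PC2 scores per sample
```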

  13. Translating Alcohol Research

    Science.gov (United States)

    Batman, Angela M.; Miles, Michael F.

    2015-01-01

    Alcohol use disorder (AUD) and its sequelae impose a major burden on the public health of the United States, and adequate long-term control of this disorder has not been achieved. Molecular and behavioral basic science research findings are providing the groundwork for understanding the mechanisms underlying AUD and have identified multiple candidate targets for ongoing clinical trials. However, the translation of basic research or clinical findings into improved therapeutic approaches for AUD must become more efficient. Translational research is a multistage process of streamlining the movement of basic biomedical research findings into clinical research and then to the clinical target populations. This process demands efficient bidirectional communication across basic, applied, and clinical science as well as with clinical practitioners. Ongoing work suggests rapid progress is being made with an evolving translational framework within the alcohol research field. This is helped by multiple interdisciplinary collaborative research structures that have been developed to advance translational work on AUD. Moreover, the integration of systems biology approaches with collaborative clinical studies may yield novel insights for future translational success. Finally, appreciation of genetic variation in pharmacological or behavioral treatment responses and optimal communication from bench to bedside and back may strengthen the success of translational research applications to AUD. PMID:26259085

  14. Translation Competence

    DEFF Research Database (Denmark)

    Vandepitte, Sonia; Mousten, Birthe; Maylath, Bruce

    2014-01-01

    After Kiraly (2000) introduced the collaborative form of translation in classrooms, Pavlovic (2007), Kenny (2008), and Huertas Barros (2011) provided empirical evidence that testifies to the impact of collaborative learning. This chapter sets out to describe the collaborative forms of learning at...

  15. Translating Harbourscapes

    DEFF Research Database (Denmark)

    Diedrich, Lisa Babette

    -specific design are proposed for all actors involved in harbour transformation. The study ends with an invitation to further investigate translation as a powerful metaphor for the way existing qualities of a site can be transformed, rather than erased or rewritten, and to explore how this metaphor can foster new...

  16. Bioinformatics and systems biology research update from the 15th International Conference on Bioinformatics (InCoB2016).

    Science.gov (United States)

    Schönbach, Christian; Verma, Chandra; Bond, Peter J; Ranganathan, Shoba

    2016-12-22

    The International Conference on Bioinformatics (InCoB) has been publishing peer-reviewed conference papers in BMC Bioinformatics since 2006. Of the 44 articles accepted for publication in supplement issues of BMC Bioinformatics, BMC Genomics, BMC Medical Genomics and BMC Systems Biology, 24 articles with a bioinformatics or systems biology focus are reviewed in this editorial. InCoB2017 is scheduled to be held in Shenzhen, China, September 20-22, 2017.

  17. Word translation entropy in translation

    DEFF Research Database (Denmark)

    Schaeffer, Moritz; Dragsted, Barbara; Hvelplund, Kristian Tangsgaard

    2016-01-01

    This study reports on an investigation into the relationship between the number of translation alternatives for a single word and eye movements on the source text. In addition, the effect of word order differences between source and target text on eye movements on the source text is studied....... In particular, the current study investigates the effect of these variables on early and late eye movement measures. Early eye movement measures are indicative of processes that are more automatic while late measures are more indicative of conscious processing. Most studies that found evidence of target...... language activation during source text reading in translation, i.e. co-activation of the two linguistic systems, employed late eye movement measures or reaction times. The current study therefore aims to investigate if and to what extent earlier eye movement measures in reading for translation show...

  18. Minimizing the translation error in the application of an oblique single-cut rotation osteotomy: Where to cut?

    NARCIS (Netherlands)

    Dobbe, Johannes G. G.; Strackee, Simon D.; Streekstra, Geert J.

    2017-01-01

    An oblique single cut rotation osteotomy enables correcting angular bone alignment in the coronal, sagittal and transverse planes, with just a single oblique osteotomy, and by rotating one bone segment in the osteotomy plane. However, translational malalignment is likely to exist if the bone is

  19. Lost in Translation

    Science.gov (United States)

    Lass, Wiebke; Reusswig, Fritz

    2014-05-01

    Identifying and quantifying planetary boundaries by interdisciplinary science efforts is a challenging task, and a risky one, as the 1972 Limits to Growth publication has shown. Even if we may be assured that scientific understanding of underlying processes of the Earth system has significantly improved since then, the challenge of translating these findings into the social systems of the planet remains crucial for any kind of action, and in many respects far more challenging. We would like to conceptualize what could also be termed a problem of coupling social and natural systems as a nested set of social translation processes, well aware of the limited applicability of the language-related translation metaphor. Societies must, first, perceive these boundaries, and they have to understand their relevance. This includes, among many other things, the organization of transdisciplinary scientific cooperation. They will then have to translate this understood perception into possible actions, i.e. strategies for different local bodies, actors, and institutional settings. This implies a lot of 'internal' translation processes, e.g. from the scientific subsystem to the mass media, the political and the economic subsystem. And it implies developing subsystem-specific schemes of evaluation for these alternatives, e.g. convincing narratives, cost-benefit analyses, or ethical legitimacy considerations. And, finally, societies do have to translate chosen action alternatives into monitoring and evaluation schemes, e.g. for agricultural production or renewable energies. This process includes the continuation of observing and re-analyzing the planetary boundary concept itself, as a re-adjustment of these boundaries in the light of new scientific insights cannot be excluded. Taken all together, societies may well

  20. Bioinformatics in New Generation Flavivirus Vaccines

    Directory of Open Access Journals (Sweden)

    Penelope Koraka

    2010-01-01

    Full Text Available Flavivirus infections are the most prevalent arthropod-borne infections world wide, often causing severe disease especially among children, the elderly, and the immunocompromised. In the absence of effective antiviral treatment, prevention through vaccination would greatly reduce morbidity and mortality associated with flavivirus infections. Despite the success of the empirically developed vaccines against yellow fever virus, Japanese encephalitis virus and tick-borne encephalitis virus, there is an increasing need for a more rational design and development of safe and effective vaccines. Several bioinformatic tools are available to support such rational vaccine design. In doing so, several parameters have to be taken into account, such as safety for the target population, overall immunogenicity of the candidate vaccine, and efficacy and longevity of the immune responses triggered. Examples of how bio-informatics is applied to assist in the rational design and improvements of vaccines, particularly flavivirus vaccines, are presented and discussed.

  1. The growing need for microservices in bioinformatics

    Directory of Open Access Journals (Sweden)

    Christopher L Williams

    2016-01-01

    Full Text Available Objective: Within the information technology (IT) industry, best practices and standards are constantly evolving and being refined. In contrast, computer technology utilized within the healthcare industry often evolves at a glacial pace, with reduced opportunities for justified innovation. Although the use of timely technology refreshes within an enterprise's overall technology stack can be costly, thoughtful adoption of select technologies with a demonstrated return on investment can be very effective in increasing productivity and at the same time, reducing the burden of maintenance often associated with older and legacy systems. In this brief technical communication, we introduce the concept of microservices as applied to the ecosystem of data analysis pipelines. Microservice architecture is a framework for dividing complex systems into easily managed parts. Each individual service is limited in functional scope, thereby conferring a higher measure of functional isolation and reliability to the collective solution. Moreover, maintenance challenges are greatly simplified by virtue of the reduced architectural complexity of each constitutive module. This fact notwithstanding, rendered overall solutions utilizing a microservices-based approach provide equal or greater levels of functionality as compared to conventional programming approaches. Bioinformatics, with its ever-increasing demand for performance and new testing algorithms, is the perfect use-case for such a solution. Moreover, if promulgated within the greater development community as an open-source solution, such an approach holds potential to be transformative to current bioinformatics software development. Context: Bioinformatics relies on nimble IT framework which can adapt to changing requirements. Aims: To present a well-established software design and deployment strategy as a solution for current challenges within bioinformatics. Conclusions: Use of the microservices framework

  2. The growing need for microservices in bioinformatics.

    Science.gov (United States)

    Williams, Christopher L; Sica, Jeffrey C; Killen, Robert T; Balis, Ulysses G J

    2016-01-01

    Within the information technology (IT) industry, best practices and standards are constantly evolving and being refined. In contrast, computer technology utilized within the healthcare industry often evolves at a glacial pace, with reduced opportunities for justified innovation. Although the use of timely technology refreshes within an enterprise's overall technology stack can be costly, thoughtful adoption of select technologies with a demonstrated return on investment can be very effective in increasing productivity and at the same time, reducing the burden of maintenance often associated with older and legacy systems. In this brief technical communication, we introduce the concept of microservices as applied to the ecosystem of data analysis pipelines. Microservice architecture is a framework for dividing complex systems into easily managed parts. Each individual service is limited in functional scope, thereby conferring a higher measure of functional isolation and reliability to the collective solution. Moreover, maintenance challenges are greatly simplified by virtue of the reduced architectural complexity of each constitutive module. This fact notwithstanding, rendered overall solutions utilizing a microservices-based approach provide equal or greater levels of functionality as compared to conventional programming approaches. Bioinformatics, with its ever-increasing demand for performance and new testing algorithms, is the perfect use-case for such a solution. Moreover, if promulgated within the greater development community as an open-source solution, such an approach holds potential to be transformative to current bioinformatics software development. Bioinformatics relies on nimble IT framework which can adapt to changing requirements. To present a well-established software design and deployment strategy as a solution for current challenges within bioinformatics. Use of the microservices framework is an effective methodology for the fabrication and

  3. The growing need for microservices in bioinformatics

    Science.gov (United States)

    Williams, Christopher L.; Sica, Jeffrey C.; Killen, Robert T.; Balis, Ulysses G. J.

    2016-01-01

    Objective: Within the information technology (IT) industry, best practices and standards are constantly evolving and being refined. In contrast, computer technology utilized within the healthcare industry often evolves at a glacial pace, with reduced opportunities for justified innovation. Although the use of timely technology refreshes within an enterprise's overall technology stack can be costly, thoughtful adoption of select technologies with a demonstrated return on investment can be very effective in increasing productivity and at the same time, reducing the burden of maintenance often associated with older and legacy systems. In this brief technical communication, we introduce the concept of microservices as applied to the ecosystem of data analysis pipelines. Microservice architecture is a framework for dividing complex systems into easily managed parts. Each individual service is limited in functional scope, thereby conferring a higher measure of functional isolation and reliability to the collective solution. Moreover, maintenance challenges are greatly simplified by virtue of the reduced architectural complexity of each constitutive module. This fact notwithstanding, rendered overall solutions utilizing a microservices-based approach provide equal or greater levels of functionality as compared to conventional programming approaches. Bioinformatics, with its ever-increasing demand for performance and new testing algorithms, is the perfect use-case for such a solution. Moreover, if promulgated within the greater development community as an open-source solution, such an approach holds potential to be transformative to current bioinformatics software development. Context: Bioinformatics relies on nimble IT framework which can adapt to changing requirements. Aims: To present a well-established software design and deployment strategy as a solution for current challenges within bioinformatics. Conclusions: Use of the microservices framework is an effective
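
    As a concrete, hypothetical illustration of the microservice idea applied to a bioinformatics pipeline, the sketch below wraps one narrowly scoped function (GC-content calculation) behind a small Flask HTTP endpoint. The route and payload format are invented for this example and are not taken from the article.

```python
# Hypothetical single-purpose bioinformatics microservice: one endpoint,
# one narrowly scoped function (GC content), illustrating the limited
# functional scope the microservice approach advocates.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/gc-content", methods=["POST"])
def gc_content():
    seq = request.get_json(force=True).get("sequence", "").upper()
    if not seq or set(seq) - set("ACGTN"):
        return jsonify(error="expected a DNA sequence over A/C/G/T/N"), 400
    gc = (seq.count("G") + seq.count("C")) / len(seq)
    return jsonify(sequence_length=len(seq), gc_content=round(gc, 4))

if __name__ == "__main__":
    # e.g. curl -X POST localhost:5000/gc-content \
    #        -H 'Content-Type: application/json' -d '{"sequence": "ACGT"}'
    app.run(port=5000)
```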

  4. Bioinformatics of cardiovascular miRNA biology.

    Science.gov (United States)

    Kunz, Meik; Xiao, Ke; Liang, Chunguang; Viereck, Janika; Pachel, Christina; Frantz, Stefan; Thum, Thomas; Dandekar, Thomas

    2015-12-01

    MicroRNAs (miRNAs) are small ~22 nucleotide non-coding RNAs and are highly conserved among species. Moreover, miRNAs regulate gene expression of a large number of genes associated with important biological functions and signaling pathways. Recently, several miRNAs have been found to be associated with cardiovascular diseases. Thus, investigating the complex regulatory effect of miRNAs may lead to a better understanding of their functional role in the heart. To achieve this, bioinformatics approaches have to be coupled with validation and screening experiments to understand the complex interactions of miRNAs with the genome. This will boost the subsequent development of diagnostic markers and our understanding of the physiological and therapeutic role of miRNAs in cardiac remodeling. In this review, we focus on and explain different bioinformatics strategies and algorithms for the identification and analysis of miRNAs and their regulatory elements to better understand cardiac miRNA biology. Starting with the biogenesis of miRNAs, we present approaches such as LocARNA and miRBase for combining sequence and structure analysis including phylogenetic comparisons as well as detailed analysis of RNA folding patterns, functional target prediction, signaling pathway as well as functional analysis. We also show how far bioinformatics helps to tackle the unprecedented level of complexity and systemic effects by miRNA, underlining the strong therapeutic potential of miRNA and miRNA target structures in cardiovascular disease. In addition, we discuss drawbacks and limitations of bioinformatics algorithms and the necessity of experimental approaches for miRNA target identification. This article is part of a Special Issue entitled 'Non-coding RNAs'. Copyright © 2014 Elsevier Ltd. All rights reserved.
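
    Functional target prediction, mentioned above, usually starts from complementarity to the miRNA seed. The toy sketch below scans a hypothetical 3'UTR for sites complementary to seed positions 2-8; the tools discussed in the review layer conservation, site context and thermodynamic scoring on top of this kind of match.

```python
# Toy seed-match scan: find 3'UTR sites complementary to the miRNA seed
# (positions 2-8). Sequences are illustrative examples in the RNA alphabet.
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_sites(mirna, utr):
    seed = mirna[1:8]                                   # nucleotides 2-8
    site = "".join(COMPLEMENT[b] for b in reversed(seed))
    return [i for i in range(len(utr) - len(site) + 1) if utr[i:i + len(site)] == site]

mirna = "UAGCUUAUCAGACUGAUGUUGA"                         # miR-21-like toy sequence
utr = "AAAGCUAUAAGCUAAGCUACCCAUAAGCUAUU"                 # hypothetical 3'UTR
print(seed_sites(mirna, utr))                            # start positions of seed matches
```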

  5. Comprehensive decision tree models in bioinformatics.

    Directory of Open Access Journals (Sweden)

    Gregor Stiglic

    Full Text Available PURPOSE: Classification is an important and widely used machine learning technique in bioinformatics. Researchers and other end-users of machine learning software often prefer to work with comprehensible models where knowledge extraction and explanation of reasoning behind the classification model are possible. METHODS: This paper presents an extension to an existing machine learning environment and a study on visual tuning of decision tree classifiers. The motivation for this research comes from the need to build effective and easily interpretable decision tree models by a so-called one-button data mining approach where no parameter tuning is needed. To avoid bias in classification, no classification performance measure is used during the tuning of the model that is constrained exclusively by the dimensions of the produced decision tree. RESULTS: The proposed visual tuning of decision trees was evaluated on 40 datasets containing classical machine learning problems and 31 datasets from the field of bioinformatics. Although we did not expect significant differences in classification performance, the results demonstrate a significant increase in accuracy in less complex visually tuned decision trees. In contrast to classical machine learning benchmarking datasets, we observe higher accuracy gains in bioinformatics datasets. Additionally, a user study was carried out to confirm the assumption that the tree tuning times are significantly lower for the proposed method in comparison to manual tuning of the decision tree. CONCLUSIONS: The empirical results demonstrate that by building simple models constrained by predefined visual boundaries, one not only achieves good comprehensibility, but also very good classification performance that does not differ from usually more complex models built using default settings of the classical decision tree algorithm. In addition, our study demonstrates the suitability of visually tuned decision trees for datasets

  6. Comprehensive decision tree models in bioinformatics.

    Science.gov (United States)

    Stiglic, Gregor; Kocbek, Simon; Pernek, Igor; Kokol, Peter

    2012-01-01

    Classification is an important and widely used machine learning technique in bioinformatics. Researchers and other end-users of machine learning software often prefer to work with comprehensible models where knowledge extraction and explanation of reasoning behind the classification model are possible. This paper presents an extension to an existing machine learning environment and a study on visual tuning of decision tree classifiers. The motivation for this research comes from the need to build effective and easily interpretable decision tree models by a so-called one-button data mining approach where no parameter tuning is needed. To avoid bias in classification, no classification performance measure is used during the tuning of the model, which is constrained exclusively by the dimensions of the produced decision tree. The proposed visual tuning of decision trees was evaluated on 40 datasets containing classical machine learning problems and 31 datasets from the field of bioinformatics. Although we did not expect significant differences in classification performance, the results demonstrate a significant increase of accuracy in less complex visually tuned decision trees. In contrast to classical machine learning benchmarking datasets, we observe higher accuracy gains in bioinformatics datasets. Additionally, a user study was carried out to confirm the assumption that the tree tuning times are significantly lower for the proposed method in comparison to manual tuning of the decision tree. The empirical results demonstrate that by building simple models constrained by predefined visual boundaries, one not only achieves good comprehensibility, but also very good classification performance that does not differ from usually more complex models built using default settings of the classical decision tree algorithm. In addition, our study demonstrates the suitability of visually tuned decision trees for datasets with binary class attributes and a high number of possibly
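
    The study constrains trees by their visual dimensions rather than by an accuracy criterion. A rough analogue, using scikit-learn as a stand-in for the (different) machine learning environment the authors extend, is to cap tree depth and leaf count and compare against an unconstrained tree:

```python
# Rough analogue of size-constrained ("visually tuned") decision trees using
# scikit-learn; this only illustrates the idea of constraining tree dimensions,
# not the authors' actual tool.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)   # stand-in for a bioinformatics dataset

# Default (unconstrained) tree vs. a tree constrained to fit a small display area.
default_tree = DecisionTreeClassifier(random_state=0)
small_tree = DecisionTreeClassifier(max_depth=3, max_leaf_nodes=8, random_state=0)

for name, clf in [("default", default_tree), ("size-constrained", small_tree)]:
    acc = cross_val_score(clf, X, y, cv=10).mean()
    print(f"{name:>16}: mean 10-fold CV accuracy = {acc:.3f}")
```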

  7. Penalized feature selection and classification in bioinformatics

    OpenAIRE

    Ma, Shuangge; Huang, Jian

    2008-01-01

    In bioinformatics studies, supervised classification with high-dimensional input variables is frequently encountered. Examples routinely arise in genomic, epigenetic and proteomic studies. Feature selection can be employed along with classifier construction to avoid over-fitting, to generate more reliable classifiers and to provide more insights into the underlying causal relationships. In this article, we provide a review of several recently developed penalized feature selection and classific...
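
    As a minimal sketch of one widely used penalized approach (an L1-regularised, lasso-type logistic regression; not necessarily the specific methods this review covers), the example below selects a small subset of features from a synthetic high-dimensional dataset:

```python
# Sketch of penalized feature selection via L1-regularised logistic regression.
# Synthetic data stand in for a high-dimensional genomic classification problem.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# 100 samples, 2000 features, only 15 of which carry signal.
X, y = make_classification(n_samples=100, n_features=2000, n_informative=15,
                           random_state=0)

model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
)
model.fit(X, y)

coef = model.named_steps["logisticregression"].coef_.ravel()
selected = np.flatnonzero(coef)        # non-zero coefficients = retained features
print(f"{selected.size} of {X.shape[1]} features kept:", selected[:20])
```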

  8. Bioinformatics Training: A Review of Challenges, Actions and Support Requirements

    DEFF Research Database (Denmark)

    Schneider, M.V.; Watson, J.; Attwood, T.

    2010-01-01

    As bioinformatics becomes increasingly central to research in the molecular life sciences, the need to train non-bioinformaticians to make the most of bioinformatics resources is growing. Here, we review the key challenges and pitfalls to providing effective training for users of bioinformatics...... services, and discuss successful training strategies shared by a diverse set of bioinformatics trainers. We also identify steps that trainers in bioinformatics could take together to advance the state of the art in current training practices. The ideas presented in this article derive from the first...

  9. Adapting bioinformatics curricula for big data.

    Science.gov (United States)

    Greene, Anna C; Giffin, Kristine A; Greene, Casey S; Moore, Jason H

    2016-01-01

    Modern technologies are capable of generating enormous amounts of data that measure complex biological systems. Computational biologists and bioinformatics scientists are increasingly being asked to use these data to reveal key systems-level properties. We review the extent to which curricula are changing in the era of big data. We identify key competencies that scientists dealing with big data are expected to possess across fields, and we use this information to propose courses to meet these growing needs. While bioinformatics programs have traditionally trained students in data-intensive science, we identify areas of particular biological, computational and statistical emphasis important for this era that can be incorporated into existing curricula. For each area, we propose a course structured around these topics, which can be adapted in whole or in parts into existing curricula. In summary, specific challenges associated with big data provide an important opportunity to update existing curricula, but we do not foresee a wholesale redesign of bioinformatics training programs. © The Author 2015. Published by Oxford University Press.

  10. Bringing Web 2.0 to bioinformatics.

    Science.gov (United States)

    Zhang, Zhang; Cheung, Kei-Hoi; Townsend, Jeffrey P

    2009-01-01

    Enabling deft data integration from numerous, voluminous and heterogeneous data sources is a major bioinformatic challenge. Several approaches have been proposed to address this challenge, including data warehousing and federated databasing. Yet despite the rise of these approaches, integration of data from multiple sources remains problematic and toilsome. These two approaches follow a user-to-computer communication model for data exchange, and do not facilitate a broader concept of data sharing or collaboration among users. In this report, we discuss the potential of Web 2.0 technologies to transcend this model and enhance bioinformatics research. We propose a Web 2.0-based Scientific Social Community (SSC) model for the implementation of these technologies. By establishing a social, collective and collaborative platform for data creation, sharing and integration, we promote a web services-based pipeline featuring web services for computer-to-computer data exchange as users add value. This pipeline aims to simplify data integration and creation, to realize automatic analysis, and to facilitate reuse and sharing of data. SSC can foster collaboration and harness collective intelligence to create and discover new knowledge. In addition to its research potential, we also describe its potential role as an e-learning platform in education. We discuss lessons from information technology, predict the next generation of Web (Web 3.0), and describe its potential impact on the future of bioinformatics studies.

  11. Adapting bioinformatics curricula for big data

    Science.gov (United States)

    Greene, Anna C.; Giffin, Kristine A.; Greene, Casey S.

    2016-01-01

    Modern technologies are capable of generating enormous amounts of data that measure complex biological systems. Computational biologists and bioinformatics scientists are increasingly being asked to use these data to reveal key systems-level properties. We review the extent to which curricula are changing in the era of big data. We identify key competencies that scientists dealing with big data are expected to possess across fields, and we use this information to propose courses to meet these growing needs. While bioinformatics programs have traditionally trained students in data-intensive science, we identify areas of particular biological, computational and statistical emphasis important for this era that can be incorporated into existing curricula. For each area, we propose a course structured around these topics, which can be adapted in whole or in parts into existing curricula. In summary, specific challenges associated with big data provide an important opportunity to update existing curricula, but we do not foresee a wholesale redesign of bioinformatics training programs. PMID:25829469

  12. H3ABioNet, a sustainable pan-African bioinformatics network for human heredity and health in Africa

    Science.gov (United States)

    Mulder, Nicola J.; Adebiyi, Ezekiel; Alami, Raouf; Benkahla, Alia; Brandful, James; Doumbia, Seydou; Everett, Dean; Fadlelmola, Faisal M.; Gaboun, Fatima; Gaseitsiwe, Simani; Ghazal, Hassan; Hazelhurst, Scott; Hide, Winston; Ibrahimi, Azeddine; Jaufeerally Fakim, Yasmina; Jongeneel, C. Victor; Joubert, Fourie; Kassim, Samar; Kayondo, Jonathan; Kumuthini, Judit; Lyantagaye, Sylvester; Makani, Julie; Mansour Alzohairy, Ahmed; Masiga, Daniel; Moussa, Ahmed; Nash, Oyekanmi; Ouwe Missi Oukem-Boyer, Odile; Owusu-Dabo, Ellis; Panji, Sumir; Patterton, Hugh; Radouani, Fouzia; Sadki, Khalid; Seghrouchni, Fouad; Tastan Bishop, Özlem; Tiffin, Nicki; Ulenga, Nzovu

    2016-01-01

    The application of genomics technologies to medicine and biomedical research is increasing in popularity, made possible by new high-throughput genotyping and sequencing technologies and improved data analysis capabilities. Some of the greatest genetic diversity among humans, animals, plants, and microbiota occurs in Africa, yet genomic research outputs from the continent are limited. The Human Heredity and Health in Africa (H3Africa) initiative was established to drive the development of genomic research for human health in Africa, and through recognition of the critical role of bioinformatics in this process, spurred the establishment of H3ABioNet, a pan-African bioinformatics network for H3Africa. The limitations in bioinformatics capacity on the continent have been a major contributory factor to the lack of notable outputs in high-throughput biology research. Although pockets of high-quality bioinformatics teams have existed previously, the majority of research institutions lack experienced faculty who can train and supervise bioinformatics students. H3ABioNet aims to address this dire need, specifically in the area of human genetics and genomics, but knock-on effects are ensuring this extends to other areas of bioinformatics. Here, we describe the emergence of genomics research and the development of bioinformatics in Africa through H3ABioNet. PMID:26627985

  13. What is bioinformatics? A proposed definition and overview of the field.

    Science.gov (United States)

    Luscombe, N M; Greenbaum, D; Gerstein, M

    2001-01-01

    The recent flood of data from genome sequences and functional genomics has given rise to a new field, bioinformatics, which combines elements of biology and computer science. Here we propose a definition for this new field and review some of the research that is being pursued, particularly in relation to transcriptional regulatory systems. Our definition is as follows: Bioinformatics is conceptualizing biology in terms of macromolecules (in the sense of physical-chemistry) and then applying "informatics" techniques (derived from disciplines such as applied maths, computer science, and statistics) to understand and organize the information associated with these molecules, on a large scale. Analyses in bioinformatics predominantly focus on three types of large datasets available in molecular biology: macromolecular structures, genome sequences, and the results of functional genomics experiments (e.g. expression data). Additional information includes the text of scientific papers and "relationship data" from metabolic pathways, taxonomy trees, and protein-protein interaction networks. Bioinformatics employs a wide range of computational techniques including sequence and structural alignment, database design and data mining, macromolecular geometry, phylogenetic tree construction, prediction of protein structure and function, gene finding, and expression data clustering. The emphasis is on approaches integrating a variety of computational methods and heterogeneous data sources. Finally, bioinformatics is a practical discipline. We survey some representative applications, such as finding homologues, designing drugs, and performing large-scale censuses. Additional information pertinent to the review is available over the web at http://bioinfo.mbb.yale.edu/what-is-it.

  14. Beyond Translation

    DEFF Research Database (Denmark)

    Olwig, Mette Fog

    2013-01-01

    This article contributes to the growing scholarship on local development practitioners by re-examining conceptualizations of practitioners as ‘brokers’ strategically translating between ‘travelling’ (development institution) rationalities and ‘placed’ (recipient area) rationalities in relation...... and practice spurred by new challenges deriving from climate change anxiety, the study shows how local practitioners often make local activities fit into travelling development rationalities as a matter of habit, rather than as a conscious strategy. They may therefore cease to ‘translate’ between different...... rationalities. This is shown to have important implications for theory, research and practice concerning disaster risk reduction and climate change adaptation in which such translation is often expected....

  15. Revising Translations

    DEFF Research Database (Denmark)

    Rasmussen, Kirsten Wølch; Schjoldager, Anne

    2011-01-01

    The paper explains the theoretical background and findings of an empirical study of revision policies, using Denmark as a case in point. After an overview of important definitions, types and parameters, the paper explains the methods and data gathered from a questionnaire survey and an interview...... survey. Results clearly show that most translation companies regard both unilingual and comparative revisions as essential components of professional quality assurance. Data indicate that revision is rarely fully comparative, as the preferred procedure seems to be a unilingual revision followed by a more...... or less comparative rereading. Though questionnaire data seem to indicate that translation companies use linguistic correctness and presentation as the only revision parameters, interview data reveal that textual and communicative aspects are also considered. Generally speaking, revision is not carried...

  16. Databases and Associated Bioinformatic Tools in Studies of Food Allergens, Epitopes and Haptens – a Review

    Directory of Open Access Journals (Sweden)

    Bucholska Justyna

    2018-06-01

    Full Text Available Allergies and/or food intolerances are a growing problem of the modern world. Difficulties associated with the correct diagnosis of food allergies result in the need to classify the factors causing allergies and allergens themselves. Therefore, internet databases and other bioinformatic tools play a special role in deepening knowledge of biologically-important compounds. Internet repositories, as a source of information on different chemical compounds, including those related to allergy and intolerance, are increasingly being used by scientists. Bioinformatic methods play a significant role in biological and medical sciences, and their importance in food science is increasing. This study aimed at presenting selected databases and tools of bioinformatic analysis useful in research on food allergies, allergens (11 databases), epitopes (7 databases), and haptens (2 databases). It also presents examples of the application of computer methods in studies related to allergies.

  17. BioSmalltalk: a pure object system and library for bioinformatics.

    Science.gov (United States)

    Morales, Hernán F; Giovambattista, Guillermo

    2013-09-15

    We have developed BioSmalltalk, a new environment system for pure object-oriented bioinformatics programming. Adaptive end-user programming systems tend to become more important for discovering biological knowledge, as is demonstrated by the emergence of open-source programming toolkits for bioinformatics in the past years. Our software is intended to bridge the gap between bioscientists and rapid software prototyping while preserving the possibility of scaling to whole-system biology applications. BioSmalltalk performs better in terms of execution time and memory usage than Biopython and BioPerl for some classical situations. BioSmalltalk is cross-platform and freely available (MIT license) through the Google Project Hosting at http://code.google.com/p/biosmalltalk hernan.morales@gmail.com Supplementary data are available at Bioinformatics online.

  18. jORCA: easily integrating bioinformatics Web Services.

    Science.gov (United States)

    Martín-Requena, Victoria; Ríos, Javier; García, Maximiliano; Ramírez, Sergio; Trelles, Oswaldo

    2010-02-15

    Web services technology is becoming the option of choice to deploy bioinformatics tools that are universally available. One of the major strengths of this approach is that it supports machine-to-machine interoperability over a network. However, a weakness of this approach is that various Web Services differ in their definition and invocation protocols, as well as their communication and data formats-and this presents a barrier to service interoperability. jORCA is a desktop client aimed at facilitating seamless integration of Web Services. It does so by making a uniform representation of the different web resources, supporting scalable service discovery, and automatic composition of workflows. Usability is at the top of the jORCA agenda; thus it is a highly customizable and extensible application that accommodates a broad range of user skills featuring double-click invocation of services in conjunction with advanced execution-control, on-the-fly data standardization, extensibility of viewer plug-ins, drag-and-drop editing capabilities, plus a file-based browsing style and organization of favourite tools. The integration of bioinformatics Web Services is made easier to support a wider range of users.

  19. MAPI: towards the integrated exploitation of bioinformatics Web Services.

    Science.gov (United States)

    Ramirez, Sergio; Karlsson, Johan; Trelles, Oswaldo

    2011-10-27

    Bioinformatics is commonly featured as a well assorted list of available web resources. Although diversity of services is positive in general, the proliferation of tools, their dispersion and heterogeneity complicate the integrated exploitation of such data processing capacity. To facilitate the construction of software clients and make integrated use of this variety of tools, we present a modular programmatic application interface (MAPI) that provides the necessary functionality for uniform representation of Web Services metadata descriptors including their management and invocation protocols of the services which they represent. This document describes the main functionality of the framework and how it can be used to facilitate the deployment of new software under a unified structure of bioinformatics Web Services. A notable feature of MAPI is the modular organization of the functionality into different modules associated with specific tasks. This means that only the modules needed for the client have to be installed, and that the module functionality can be extended without the need for re-writing the software client. The potential utility and versatility of the software library has been demonstrated by the implementation of several currently available clients that cover different aspects of integrated data processing, ranging from service discovery to service invocation with advanced features such as workflows composition and asynchronous services calls to multiple types of Web Services including those registered in repositories (e.g. GRID-based, SOAP, BioMOBY, R-bioconductor, and others).

  20. A review of bioinformatic methods for forensic DNA analyses.

    Science.gov (United States)

    Liu, Yao-Yuan; Harbison, SallyAnn

    2018-03-01

    Short tandem repeats, single nucleotide polymorphisms, and whole mitochondrial analyses are three classes of markers which will play an important role in the future of forensic DNA typing. The arrival of massively parallel sequencing platforms in forensic science reveals new information such as insights into the complexity and variability of the markers that were previously unseen, along with amounts of data too immense for analyses by manual means. Along with the sequencing chemistries employed, bioinformatic methods are required to process and interpret this new and extensive data. As more is learnt about the use of these new technologies for forensic applications, development and standardization of efficient, favourable tools for each stage of data processing is being carried out, and faster, more accurate methods that improve on the original approaches have been developed. As forensic laboratories search for the optimal pipeline of tools, sequencer manufacturers have incorporated pipelines into sequencer software to make analyses convenient. This review explores the current state of bioinformatic methods and tools used for the analyses of forensic markers sequenced on the massively parallel sequencing (MPS) platforms currently most widely used. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Atlas – a data warehouse for integrative bioinformatics

    Directory of Open Access Journals (Sweden)

    Yuen Macaire MS

    2005-02-01

    Full Text Available Abstract Background We present a biological data warehouse called Atlas that locally stores and integrates biological sequences, molecular interactions, homology information, functional annotations of genes, and biological ontologies. The goal of the system is to provide data, as well as a software infrastructure for bioinformatics research and development. Description The Atlas system is based on relational data models that we developed for each of the source data types. Data stored within these relational models are managed through Structured Query Language (SQL) calls that are implemented in a set of Application Programming Interfaces (APIs). The APIs include three languages: C++, Java, and Perl. The methods in these API libraries are used to construct a set of loader applications, which parse and load the source datasets into the Atlas database, and a set of toolbox applications which facilitate data retrieval. Atlas stores and integrates local instances of GenBank, RefSeq, UniProt, Human Protein Reference Database (HPRD), Biomolecular Interaction Network Database (BIND), Database of Interacting Proteins (DIP), Molecular Interactions Database (MINT), IntAct, NCBI Taxonomy, Gene Ontology (GO), Online Mendelian Inheritance in Man (OMIM), LocusLink, Entrez Gene and HomoloGene. The retrieval APIs and toolbox applications are critical components that offer end-users flexible, easy, integrated access to this data. We present use cases that use Atlas to integrate these sources for genome annotation, inference of molecular interactions across species, and gene-disease associations. Conclusion The Atlas biological data warehouse serves as data infrastructure for bioinformatics research and development. It forms the backbone of the research activities in our laboratory and facilitates the integration of disparate, heterogeneous biological sources of data enabling new scientific inferences. Atlas achieves integration of diverse data sets at two levels. First
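
    Atlas itself exposes C++, Java and Perl APIs over its own relational models; the sqlite3 sketch below only illustrates, on an invented miniature schema, the kind of cross-source join (gene, ontology annotation, interaction) that such a warehouse makes possible:

```python
# Hedged sketch of the kind of cross-source join a warehouse like Atlas enables.
# The schema and rows here are invented toy examples, not Atlas's actual data models.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE gene        (gene_id TEXT PRIMARY KEY, symbol TEXT, taxon_id INTEGER);
CREATE TABLE go_term     (go_id TEXT PRIMARY KEY, name TEXT);
CREATE TABLE annotation  (gene_id TEXT, go_id TEXT);
CREATE TABLE interaction (gene_a TEXT, gene_b TEXT, source_db TEXT);
""")
con.executemany("INSERT INTO gene VALUES (?,?,?)",
                [("G1", "TP53", 9606), ("G2", "MDM2", 9606)])
con.execute("INSERT INTO go_term VALUES ('GO:0006915', 'apoptotic process')")
con.execute("INSERT INTO annotation VALUES ('G1', 'GO:0006915')")
con.execute("INSERT INTO interaction VALUES ('G1', 'G2', 'BIND')")

# Which interaction partners of apoptosis-annotated human genes are on record?
rows = con.execute("""
    SELECT g.symbol, p.symbol AS partner, i.source_db
    FROM annotation a
    JOIN gene g        ON g.gene_id = a.gene_id
    JOIN interaction i ON i.gene_a = g.gene_id
    JOIN gene p        ON p.gene_id = i.gene_b
    WHERE a.go_id = 'GO:0006915' AND g.taxon_id = 9606
""").fetchall()
print(rows)   # [('TP53', 'MDM2', 'BIND')]
```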

  2. Development of the Multilingual Collaboration System for Farmers of Several Countries (1): Application of Basic Terminology Translation Dictionary

    OpenAIRE

    Lee, Kang Oh; Nakaji, Kei; Nada, Yoichi

    2004-01-01

    In order to share agricultural information through the Internet, the multilingual collaboration system of agricultural production was developed for farmers of many countries. The basic terminology translation dictionary was developed by using several open source programs and free software to translate the basic terminology of the multilingual collaboration system. The basic terminology translation dictionary was composed of about 4200 terms in Japanese, Korean and English including 2700 horti...

  3. Theoretical analysis of the distribution of isolated particles in totally asymmetric exclusion processes: Application to mRNA translation rate estimation

    Science.gov (United States)

    Dao Duc, Khanh; Saleem, Zain H.; Song, Yun S.

    2018-01-01

    The Totally Asymmetric Exclusion Process (TASEP) is a classical stochastic model for describing the transport of interacting particles, such as ribosomes moving along the messenger ribonucleic acid (mRNA) during translation. Although this model has been widely studied in the past, the extent of collision between particles and the average distance between a particle to its nearest neighbor have not been quantified explicitly. We provide here a theoretical analysis of such quantities via the distribution of isolated particles. In the classical form of the model in which each particle occupies only a single site, we obtain an exact analytic solution using the matrix ansatz. We then employ a refined mean-field approach to extend the analysis to a generalized TASEP with particles of an arbitrary size. Our theoretical study has direct applications in mRNA translation and the interpretation of experimental ribosome profiling data. In particular, our analysis of data from Saccharomyces cerevisiae suggests a potential bias against the detection of nearby ribosomes with a gap distance of less than approximately three codons, which leads to some ambiguity in estimating the initiation rate and protein production flux for a substantial fraction of genes. Despite such ambiguity, however, we demonstrate theoretically that the interference rate associated with collisions can be robustly estimated and show that approximately 1% of the translating ribosomes get obstructed.
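
    A minimal Monte Carlo simulation of the classical open-boundary, single-site TASEP gives a feel for the quantities analysed here (bulk density and the fraction of isolated particles); the parameters are arbitrary and this is not the paper's matrix-ansatz or mean-field treatment:

```python
# Minimal Monte Carlo of the open-boundary TASEP with single-site particles,
# tracking density and the fraction of "isolated" particles (no occupied neighbour).
import random

def simulate_tasep(L=200, alpha=0.3, beta=0.5, sweeps=5000, seed=0):
    """Random-sequential updates; returns (mean density, isolated-particle fraction)."""
    random.seed(seed)
    lattice = [0] * L
    occupied = isolated = samples = 0
    for sweep in range(sweeps):
        for _ in range(L + 1):
            i = random.randrange(-1, L)            # -1 encodes the entry reservoir
            if i == -1:                            # initiation at site 0
                if lattice[0] == 0 and random.random() < alpha:
                    lattice[0] = 1
            elif i == L - 1:                       # termination at the last site
                if lattice[i] == 1 and random.random() < beta:
                    lattice[i] = 0
            elif lattice[i] == 1 and lattice[i + 1] == 0:    # hop one site forward
                lattice[i], lattice[i + 1] = 0, 1
        if sweep > sweeps // 2:                    # measure after burn-in
            samples += 1
            occupied += sum(lattice)
            isolated += sum(1 for j in range(L)
                            if lattice[j] == 1
                            and (j == 0 or lattice[j - 1] == 0)
                            and (j == L - 1 or lattice[j + 1] == 0))
    return occupied / (samples * L), isolated / max(occupied, 1)

density, isolated_fraction = simulate_tasep()
print(f"bulk density ~ {density:.3f}, isolated-particle fraction ~ {isolated_fraction:.3f}")
```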

  4. Translational plant proteomics: a perspective.

    Science.gov (United States)

    Agrawal, Ganesh Kumar; Pedreschi, Romina; Barkla, Bronwyn J; Bindschedler, Laurence Veronique; Cramer, Rainer; Sarkar, Abhijit; Renaut, Jenny; Job, Dominique; Rakwal, Randeep

    2012-08-03

    Translational proteomics is an emerging sub-discipline of the proteomics field in the biological sciences. Translational plant proteomics aims to integrate knowledge from basic sciences to translate it into field applications to solve issues related but not limited to the recreational and economic values of plants, food security and safety, and energy sustainability. In this review, we highlight the substantial progress reached in plant proteomics during the past decade which has paved the way for translational plant proteomics. Increasing proteomics knowledge in plants is not limited to model and non-model plants, proteogenomics, crop improvement, and food analysis, safety, and nutrition but to many more potential applications. Given the wealth of information generated and to some extent applied, there is the need for more efficient and broader channels to freely disseminate the information to the scientific community. This article is part of a Special Issue entitled: Translational Proteomics. Copyright © 2012 Elsevier B.V. All rights reserved.

  5. Quantum Bio-Informatics II From Quantum Information to Bio-Informatics

    Science.gov (United States)

    Accardi, L.; Freudenberg, Wolfgang; Ohya, Masanori

    2009-02-01

    The problem of quantum-like representation in economy, cognitive science, and genetics / L. Accardi, A. Khrennikov and M. Ohya -- Chaotic behavior observed in linear dynamics / M. Asano, T. Yamamoto and Y. Togawa -- Complete m-level quantum teleportation based on Kossakowski-Ohya scheme / M. Asano, M. Ohya and Y. Tanaka -- Towards quantum cybernetics: optimal feedback control in quantum bio-informatics / V. P. Belavkin -- Quantum entanglement and circulant states / D. Chruściński -- The compound Fock space and its application in brain models / K.-H. Fichtner and W. Freudenberg -- Characterisation of beam splitters / L. Fichtner and M. Gäbler -- Application of entropic chaos degree to a combined quantum baker's map / K. Inoue, M. Ohya and I. V. Volovich -- On quantum algorithm for multiple alignment of amino acid sequences / S. Iriyama and M. Ohya -- Quantum-like models for decision making in psychology and cognitive science / A. Khrennikov -- On completely positive non-Markovian evolution of a d-level system / A. Kossakowski and R. Rebolledo -- Measures of entanglement - a Hilbert space approach / W. A. Majewski -- Some characterizations of PPT states and their relation / T. Matsuoka -- On the dynamics of entanglement and characterization of entangling properties of quantum evolutions / M. Michalski -- Perspective from micro-macro duality - towards non-perturbative renormalization scheme / I. Ojima -- A simple symmetric algorithm using a likeness with introns behavior in RNA sequences / M. Regoli -- Some aspects of quadratic generalized white noise functionals / Si Si and T. Hida -- Analysis of several social mobility data using measure of departure from symmetry / K. Tahata ... [et al.] -- Time in physics and life science / I. V. Volovich -- Note on entropies in quantum processes / N. Watanabe -- Basics of molecular simulation and its application to biomolecules / T. Ando and I. Yamato -- Theory of proton-induced superionic conduction in hydrogen-bonded systems

  6. Modern bioinformatics meets traditional Chinese medicine.

    Science.gov (United States)

    Gu, Peiqin; Chen, Huajun

    2014-11-01

    Traditional Chinese medicine (TCM) is gaining increasing attention with the emergence of integrative medicine and personalized medicine, characterized by pattern differentiation on individual variance and treatments based on natural herbal synergism. Investigating the effectiveness and safety of the potential mechanisms of TCM and the combination principles of drug therapies will bridge the cultural gap with Western medicine and improve the development of integrative medicine. Dealing with rapidly growing amounts of biomedical data and their heterogeneous nature are two important tasks among modern biomedical communities. Bioinformatics, as an emerging interdisciplinary field of computer science and biology, has become a useful tool for easing the data deluge pressure by automating the computation processes with informatics methods. Using these methods to retrieve, store and analyze the biomedical data can effectively reveal the associated knowledge hidden in the data, and thus promote the discovery of integrated information. Recently, these techniques of bioinformatics have been used for facilitating the interactional effects of both Western medicine and TCM. The analysis of TCM data using computational technologies provides biological evidence for the basic understanding of TCM mechanisms, safety and efficacy of TCM treatments. At the same time, the carrier and targets associated with TCM remedies can inspire the rethinking of modern drug development. This review summarizes the significant achievements of applying bioinformatics techniques to many aspects of the research in TCM, such as analysis of TCM-related '-omics' data and techniques for analyzing biological processes and pharmaceutical mechanisms of TCM, which have shown certain potential of bringing new thoughts to both sides. © The Author 2013. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  7. 6-C polarization analysis using point measurements of translational and rotational ground-motion: theory and applications

    Science.gov (United States)

    Sollberger, David; Greenhalgh, Stewart A.; Schmelzbach, Cedric; Van Renterghem, Cédéric; Robertsson, Johan O. A.

    2018-04-01

    We provide a six-component (6-C) polarization model for P-, SV-, SH-, Rayleigh-, and Love-waves both inside an elastic medium as well as at the free surface. It is shown that single-station 6-C data comprised of three components of rotational motion and three components of translational motion provide the opportunity to unambiguously identify the wave type, propagation direction, and local P- and S-wave velocities at the receiver location by use of polarization analysis. To extract such information by conventional processing of three-component (3-C) translational data would require large and dense receiver arrays. The additional rotational components allow the extension of the rank of the coherency matrix used for polarization analysis. This enables us to accurately determine the wave type and wave parameters (propagation direction and velocity) of seismic phases, even if more than one wave is present in the analysis time window. This is not possible with standard, pure-translational 3-C recordings. In order to identify modes of vibration and to extract the accompanying wave parameters, we adapt the multiple signal classification algorithm (MUSIC). Due to the strong nonlinearity of the MUSIC estimator function, it can be used to detect the presence of specific wave types within the analysis time window at very high resolution. We show how the extracted wavefield properties can be used, in a fully automated way, to separate the wavefield into its different wave modes using only a single 6-C recording station. As an example, we apply the method to remove surface wave energy while preserving the underlying reflection signal and to suppress energy originating from undesired directions, such as side-scattered waves.
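
    The subspace machinery behind MUSIC can be sketched generically: estimate the coherency (covariance) matrix, split it into signal and noise subspaces, and scan candidate steering vectors against the noise subspace. The example below uses a plain uniform-linear-array steering model as a stand-in for the paper's 6-C polarization vectors, purely to show the estimator's structure:

```python
# Generic MUSIC sketch on synthetic narrowband array data. The paper adapts MUSIC to
# 6-C polarization vectors; a uniform linear array steering model stands in here.
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_snapshots, wavelength, spacing = 6, 400, 1.0, 0.5
true_angles = np.deg2rad([20.0, 55.0])

def steering(theta):
    """Narrowband steering vector of a uniform linear array (stand-in model)."""
    k = 2 * np.pi / wavelength
    return np.exp(1j * k * spacing * np.arange(n_sensors) * np.sin(theta))

# Two incoherent plane waves plus additive noise.
A = np.column_stack([steering(t) for t in true_angles])
S = rng.standard_normal((2, n_snapshots)) + 1j * rng.standard_normal((2, n_snapshots))
N = rng.standard_normal((n_sensors, n_snapshots)) + 1j * rng.standard_normal((n_sensors, n_snapshots))
X = A @ S + 0.1 * N

R = X @ X.conj().T / n_snapshots              # sample coherency (covariance) matrix
eigvals, eigvecs = np.linalg.eigh(R)          # eigenvalues in ascending order
En = eigvecs[:, : n_sensors - 2]              # noise subspace (2 sources assumed known)

angles = np.deg2rad(np.linspace(-90, 90, 721))
spectrum = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in angles])

# Pick the two largest local maxima of the pseudospectrum.
is_peak = np.r_[False, (spectrum[1:-1] > spectrum[:-2]) & (spectrum[1:-1] > spectrum[2:]), False]
peaks = np.flatnonzero(is_peak)
best = peaks[np.argsort(spectrum[peaks])[-2:]]
print("estimated arrival angles (deg):", np.sort(np.rad2deg(angles[best])))
```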

  8. Robust Bioinformatics Recognition with VLSI Biochip Microsystem

    Science.gov (United States)

    Lue, Jaw-Chyng L.; Fang, Wai-Chi

    2006-01-01

    A microsystem architecture for real-time, on-site, robust bioinformatic pattern recognition and analysis has been proposed. This system is compatible with on-chip DNA analysis means such as polymerase chain reaction (PCR) amplification. A corresponding novel artificial neural network (ANN) learning algorithm using a new sigmoid-logarithmic transfer function based on the error backpropagation (EBP) algorithm is invented. Our results show the trained new ANN can recognize low fluorescence patterns better than the conventional sigmoidal ANN does. A differential logarithmic imaging chip is designed for calculating the logarithm of relative intensities of fluorescence signals. The single-rail logarithmic circuit and a prototype ANN chip are designed, fabricated and characterized.

  9. Introducing bioinformatics, the biosciences' genomic revolution

    CERN Document Server

    Zanella, Paolo

    1999-01-01

    The general audience for these lectures is mainly physicists, computer scientists, engineers or the general public wanting to know more about what’s going on in the biosciences. What’s bioinformatics and why is all this fuss being made about it? What’s this revolution triggered by the human genome project? Are there any results yet? What are the problems? What new avenues of research have been opened up? What about the technology? These new developments will be compared with what happened at CERN earlier in its evolution, and it is hoped that the similarities and contrasts will stimulate new curiosity and provoke new thoughts.

  10. Practical criterion for the determination of translation factors. II. Application to He2+ + H(1s) collisions

    International Nuclear Information System (INIS)

    Errea, L.F.; Gomez-Llorente, J.M.; Mendez, L.; Riera, A.

    1985-01-01

    An illustration is reported on the use of the Euclidean norm as a criterion of the quality of translation factors in the molecular model of atomic collisions. The relation between our norm and the deviation vector of Chang and Rapp [J. Chem. Phys. 59, 572 (1973)], and the computational simplicity of the calculation and minimization of the former quantity, are very appealing features of our approach. To show how the norm method can be applied, the He2+ + H(1s) → He+(2s,2p) + H+ reaction is treated.

  11. Practical criterion for the determination of translation factors. II. Application to He2+ + H(1s) collisions

    Energy Technology Data Exchange (ETDEWEB)

    Errea, L.F.; Gomez-Llorente, J.M.; Mendez, L.; Riera, A.

    1985-10-01

    An illustration is reported on the use of the Euclidean norm as a criterion of the quality of translation factors in the molecular model of atomic collisions. The relation between our norm and the deviation vector of Chang and Rapp (J. Chem. Phys. 59, 572 (1973)), and the computational simplicity of the calculation and minimization of the former quantity, are very appealing features of our approach. To show how the norm method can be applied, the He2+ + H(1s) → He+(2s,2p) + H+ reaction is treated.

  12. Bioinformatics analysis of Brucella vaccines and vaccine targets using VIOLIN.

    Science.gov (United States)

    He, Yongqun; Xiang, Zuoshuang

    2010-09-27

    Brucella spp. are Gram-negative, facultative intracellular bacteria that cause brucellosis, one of the commonest zoonotic diseases found worldwide in humans and a variety of animal species. While several animal vaccines are available, there is no effective and safe vaccine for prevention of brucellosis in humans. VIOLIN (http://www.violinet.org) is a web-based vaccine database and analysis system that curates, stores, and analyzes published data of commercialized vaccines, and vaccines in clinical trials or in research. VIOLIN contains information for 454 vaccines or vaccine candidates for 73 pathogens. VIOLIN also contains many bioinformatics tools for vaccine data analysis, data integration, and vaccine target prediction. To demonstrate the applicability of VIOLIN for vaccine research, VIOLIN was used for bioinformatics analysis of existing Brucella vaccines and prediction of new Brucella vaccine targets. VIOLIN contains many literature mining programs (e.g., Vaxmesh) that provide in-depth analysis of Brucella vaccine literature. As a result of manual literature curation, VIOLIN contains information for 38 Brucella vaccines or vaccine candidates, 14 protective Brucella antigens, and 68 host response studies to Brucella vaccines from 97 peer-reviewed articles. These Brucella vaccines are classified in the Vaccine Ontology (VO) system and used for different ontological applications. The web-based VIOLIN vaccine target prediction program Vaxign was used to predict new Brucella vaccine targets. Vaxign identified 14 outer membrane proteins that are conserved in six virulent strains from B. abortus, B. melitensis, and B. suis that are pathogenic in humans. Of the 14 membrane proteins, two proteins (Omp2b and Omp31-1) are not present in B. ovis, a Brucella species that is not pathogenic in humans. Brucella vaccine data stored in VIOLIN were compared and analyzed using the VIOLIN query system. Bioinformatics curation and ontological representation of Brucella vaccines
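
    The conservation filter described above reduces, at its core, to set operations over per-strain protein inventories. In the toy sketch below the strain compositions are invented placeholders (only Omp2b and Omp31-1 are named in the record itself):

```python
# Toy illustration of the conservation filter described above: keep candidate
# outer-membrane proteins present in all human-pathogenic strains and absent from
# B. ovis. The per-strain contents are invented placeholder data.
virulent_strains = {
    "B. abortus 2308":   {"Omp2b", "Omp31-1", "OmpA", "OmpX"},
    "B. melitensis 16M": {"Omp2b", "Omp31-1", "OmpA"},
    "B. suis 1330":      {"Omp2b", "Omp31-1", "OmpA", "OmpW"},
}
b_ovis = {"OmpA"}

conserved = set.intersection(*virulent_strains.values())
candidates = conserved - b_ovis
print("conserved in virulent strains:", sorted(conserved))
print("vaccine target candidates (absent from B. ovis):", sorted(candidates))
```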

  13. A Survey of Scholarly Literature Describing the Field of Bioinformatics Education and Bioinformatics Educational Research

    Science.gov (United States)

    Magana, Alejandra J.; Taleyarkhan, Manaz; Alvarado, Daniela Rivera; Kane, Michael; Springer, John; Clase, Kari

    2014-01-01

    Bioinformatics education can be broadly defined as the teaching and learning of the use of computer and information technology, along with mathematical and statistical analysis for gathering, storing, analyzing, interpreting, and integrating data to solve biological problems. The recent surge of genomics, proteomics, and structural biology in the…

  14. An Adaptive Hybrid Multiprocessor technique for bioinformatics sequence alignment

    KAUST Repository

    Bonny, Talal

    2012-07-28

    Sequence alignment algorithms such as the Smith-Waterman algorithm are among the most important applications in the development of bioinformatics. Sequence alignment algorithms must process large amounts of data, which may take a long time. Here, we introduce our Adaptive Hybrid Multiprocessor technique to accelerate the implementation of the Smith-Waterman algorithm. Our technique utilizes both the graphics processing unit (GPU) and the central processing unit (CPU). It adapts the implementation to the number of CPUs given as input by efficiently distributing the workload between the processing units. Using existing resources (GPU and CPU) in an efficient way is a novel approach. The peak performance achieved for the platforms GPU + CPU, GPU + 2CPUs, and GPU + 3CPUs is 10.4 GCUPS, 13.7 GCUPS, and 18.6 GCUPS, respectively (with a query length of 511 amino acids). © 2010 IEEE.
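
    For reference, the dynamic programming that such GPU/CPU implementations accelerate is compact; a plain-Python version of Smith-Waterman local alignment scoring with a linear gap penalty is shown below (each cell of the matrix H is one of the "cell updates" counted by the GCUPS figures above):

```python
# Plain-Python reference of Smith-Waterman local alignment scoring with a linear
# gap penalty; this is what hardware-accelerated implementations speed up.
def smith_waterman(a: str, b: str, match=2, mismatch=-1, gap=-2) -> int:
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("HEAGAWGHEE", "PAWHEAE"))   # small toy protein fragments
```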

  15. The web server of IBM's Bioinformatics and Pattern Discovery group.

    Science.gov (United States)

    Huynh, Tien; Rigoutsos, Isidore; Parida, Laxmi; Platt, Daniel; Shibuya, Tetsuo

    2003-07-01

    We herein present and discuss the services and content which are available on the web server of IBM's Bioinformatics and Pattern Discovery group. The server is operational around the clock and provides access to a variety of methods that have been published by the group's members and collaborators. The available tools correspond to applications ranging from the discovery of patterns in streams of events and the computation of multiple sequence alignments, to the discovery of genes in nucleic acid sequences and the interactive annotation of amino acid sequences. Additionally, annotations for more than 70 archaeal, bacterial, eukaryotic and viral genomes are available on-line and can be searched interactively. The tools and code bundles can be accessed beginning at http://cbcsrv.watson.ibm.com/Tspd.html whereas the genomics annotations are available at http://cbcsrv.watson.ibm.com/Annotations/.

  16. Single-Cell Transcriptomics Bioinformatics and Computational Challenges

    Directory of Open Access Journals (Sweden)

    Lana Garmire

    2016-09-01

    Full Text Available The emerging single-cell RNA-Seq (scRNA-Seq) technology holds the promise to revolutionize our understanding of diseases and associated biological processes at an unprecedented resolution. It opens the door to revealing intercellular heterogeneity and has been employed in a variety of applications, ranging from characterizing cancer cell subpopulations to elucidating tumor resistance mechanisms. Parallel to improving experimental protocols to deal with technological issues, deriving new analytical methods to reveal the complexity in scRNA-Seq data is just as challenging. Here we review the current state-of-the-art bioinformatics tools and methods for scRNA-Seq analysis, as well as addressing some critical analytical challenges that the field faces.
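
    A minimal sketch of a common downstream step, assuming a synthetic cells-by-genes count matrix, is library-size normalisation, log transformation, PCA and clustering; production analyses use dedicated single-cell toolkits and far more careful quality control:

```python
# Minimal sketch of a common scRNA-Seq downstream step: library-size normalisation,
# log transform, PCA, then clustering. The count matrix is synthetic toy data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
counts = rng.poisson(lam=rng.gamma(2.0, 1.0, size=(300, 2000)))   # cells x genes

lib_size = counts.sum(axis=1, keepdims=True)
norm = np.log1p(1e4 * counts / lib_size)          # counts-per-10k, then log1p

pcs = PCA(n_components=20, random_state=0).fit_transform(norm)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(pcs)
print("cells per cluster:", np.bincount(labels))
```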

  17. OpenHelix: bioinformatics education outside of a different box.

    Science.gov (United States)

    Williams, Jennifer M; Mangan, Mary E; Perreault-Micale, Cynthia; Lathe, Scott; Sirohi, Neeraj; Lathe, Warren C

    2010-11-01

    The amount of biological data is increasing rapidly, and will continue to increase as new rapid technologies are developed. Professionals in every area of bioscience will have data management needs that require publicly available bioinformatics resources. Not all scientists desire a formal bioinformatics education but would benefit from more informal educational sources of learning. Effective bioinformatics education formats will address a broad range of scientific needs, will be aimed at a variety of user skill levels, and will be delivered in a number of different formats to address different learning styles. Informal sources of bioinformatics education that are effective are available, and will be explored in this review.

  18. Microbase2.0: A Generic Framework for Computationally Intensive Bioinformatics Workflows in the Cloud

    OpenAIRE

    Flanagan Keith; Nakjang Sirintra; Hallinan Jennifer; Harwood Colin; Hirt Robert P.; Pocock Matthew R.; Wipat Anil

    2012-01-01

    As bioinformatics datasets grow ever larger, and analyses become increasingly complex, there is a need for data handling infrastructures to keep pace with developing technology. One solution is to apply Grid and Cloud technologies to address the computational requirements of analysing high throughput datasets. We present an approach for writing new applications, or wrapping existing ones, and a reference implementation of a framework, Microbase2.0, for executing those applications using Grid and C...

  19. Tissue Banking, Bioinformatics, and Electronic Medical Records: The Front-End Requirements for Personalized Medicine

    Science.gov (United States)

    Suh, K. Stephen; Sarojini, Sreeja; Youssif, Maher; Nalley, Kip; Milinovikj, Natasha; Elloumi, Fathi; Russell, Steven; Pecora, Andrew; Schecter, Elyssa; Goy, Andre

    2013-01-01

    Personalized medicine promises patient-tailored treatments that enhance patient care and decrease overall treatment costs by focusing on genetics and “-omics” data obtained from patient biospecimens and records to guide therapy choices that generate good clinical outcomes. The approach relies on diagnostic and prognostic use of novel biomarkers discovered through combinations of tissue banking, bioinformatics, and electronic medical records (EMRs). The analytical power of bioinformatic platforms combined with patient clinical data from EMRs can reveal potential biomarkers and clinical phenotypes that allow researchers to develop experimental strategies using selected patient biospecimens stored in tissue banks. For cancer, high-quality biospecimens collected at diagnosis, first relapse, and various treatment stages provide crucial resources for study designs. To enlarge biospecimen collections, patient education regarding the value of specimen donation is vital. One approach for increasing consent is to offer publically available illustrations and game-like engagements demonstrating how wider sample availability facilitates development of novel therapies. The critical value of tissue bank samples, bioinformatics, and EMR in the early stages of the biomarker discovery process for personalized medicine is often overlooked. The data obtained also require cross-disciplinary collaborations to translate experimental results into clinical practice and diagnostic and prognostic use in personalized medicine. PMID:23818899

  20. Translating democracy

    DEFF Research Database (Denmark)

    Doerr, Nicole

    2012-01-01

    Linguistic barriers may pose problems for politicians trying to communicate delicate decisions to a European-wide public, as well as for citizens wishing to protest at the European level. In this article I present a counter-intuitive position on the language question, one that explores how...... Forum (ESF). I compare deliberative practices in the multilingual ESF preparatory meetings with those in monolingual national Social Forum meetings in three Western European countries. My comparison shows that multilingualism does not reduce the inclusivity of democratic deliberation as compared...... in institutionalized habits and norms of deliberation. Addressing democratic theorists, my findings suggest that translation could be a way to think about difference not as a hindrance but as a resource for democracy in linguistically heterogeneous societies and public spaces, without presupposing a shared language...

  1. Translator's preface.

    Science.gov (United States)

    Lamiell, James T

    2013-08-01

    Presents a preface from James T. Lamiell, who translates Wilhelm Wundt's Psychology's Struggle for Existence (Die Psychologie im Kampf ums Dasein), in which Wundt advised against the impending divorce of psychology from philosophy, into English. Lamiell comments that more than a decade into the 21st century, it appears that very few psychologists have any interest at all in work at the interface of psychology and philosophy. He notes that one clear indication of this is that the Society for Theoretical and Philosophical Psychology, which is Division 24 of the American Psychological Association (APA), remains one of the smallest of the APA's nearly 60 divisions. (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  2. Translation and validation of the German version of the Mother-Generated Index and its application during the postnatal period.

    Science.gov (United States)

    Grylka-Baeschlin, Susanne; van Teijlingen, Edwin; Stoll, Kathrin; Gross, Mechthild M

    2015-01-01

    The Mother-Generated Index (MGI) is a validated tool to assess postnatal quality of life. It is usually administered several weeks or months after birth and correlates with indices of post partum mood states and physical complaints. The instrument had not been translated into German before or validated for use among German-speaking women, nor have the results of the tool been assessed specifically for administration directly after birth. This paper aims to describe the systematic translation process of the MGI into German and to assess the convergent validity of the German version of the instrument directly after birth and seven weeks post partum. Prospective two-stage survey. Two rural hospitals in the south of Germany and in the north of Switzerland. All women giving birth between 1st October and 15th December 2012 with sufficient knowledge of German and whose babies were not referred to a neonatal care unit; 226 women were eligible to participate. Two questionnaires including questions relating to socio-demographic factors and perinatal care, and incorporating the MGI, the Hospital Anxiety and Depression Scale (HADS) and the Postnatal Morbidity Index (PMI). All instruments were subjected to forward and back translation and pilot-tested; the first questionnaire was then administered in the first two days after birth and the second six weeks post partum. Parametric and non-parametric tests were computed using SPSS. 129 surveys were returned an average of three days after birth and 83 after seven weeks. Higher postnatal quality of life showed a significant correlation with lower anxiety and depression scores. The German version of the MGI is a valid indicator of physical and emotional post partum well-being. The German version of the MGI can be used in the post partum period to identify women whose quality of life is impaired during the first days after birth, in order to initiate extended midwifery care and referral if necessary. Copyright © 2014 The Authors

  3. Mapping Translation Technology Research in Translation Studies

    DEFF Research Database (Denmark)

    Schjoldager, Anne; Christensen, Tina Paulsen; Flanagan, Marian

    2017-01-01

    Due to the growing uptake of translation technology in the language industry and its documented impact on the translation profession, translation students and scholars need in-depth and empirically founded knowledge of the nature and influences of translation technology (e.g. Christensen/Schjoldager 2010, 2011; Christensen 2011). Unfortunately, the increasing professional use of translation technology has not been mirrored within translation studies (TS) by a similar increase in research projects on translation technology (Munday 2009: 15; O’Hagan 2013; Doherty 2016: 952). The current thematic section aims to improve this situation by presenting new and innovative research papers that reflect on recent technological advances and their impact on the translation profession and translators from a diversity of perspectives and using a variety of methods. In Section 2, we present translation...

  4. Pattern-based Automatic Translation of Structured Power System Data to Functional Models for Decision Support Applications

    DEFF Research Database (Denmark)

    Heussen, Kai; Weckesser, Johannes Tilman Gabriel; Kullmann, Daniel

    2013-01-01

    Improved information and insight for decision support in operations and design are central promises of a smart grid. Well-structured information about the composition of power systems is increasingly becoming available in the domain, e.g. due to standard information models (e.g. CIM or IEC61850......) or otherwise structured databases. More measurements and data do not automatically improve decisions, but there is an opportunity to capitalize on this information for decision support. With suitable reasoning strategies data can be contextualized and decision-relevant events can be promoted and identified....... This paper presents an approach to link available structured power system data directly to a functional representation suitable for diagnostic reasoning. The translation method is applied to test cases also illustrating decision support....

  5. Genes Involved in the Endoplasmic Reticulum N-Glycosylation Pathway of the Red Microalga Porphyridium sp.: A Bioinformatic Study

    Directory of Open Access Journals (Sweden)

    Oshrat Levy-Ontman

    2014-02-01

    Full Text Available N-glycosylation is one of the most important post-translational modifications that influence protein polymorphism, including protein structures and their functions. Although this important biological process has been extensively studied in mammals, only limited knowledge exists regarding glycosylation in algae. The current research is focused on the red microalga Porphyridium sp., which is a potentially valuable source for various applications, such as skin therapy, food, and pharmaceuticals. The enzymes involved in the biosynthesis and processing of N-glycans remain undefined in this species, and the mechanism(s) of their genetic regulation is completely unknown. In this study, we describe our pioneering attempt to understand the endoplasmic reticulum N-glycosylation pathway in Porphyridium sp., using a bioinformatic approach. Homology searches, based on sequence similarities with genes encoding proteins involved in the ER N-glycosylation pathway (including their conserved parts), were conducted using the TBLASTN function on the algal DNA scaffold contigs database. This approach led to the identification of 24 encoded genes implicated in the ER N-glycosylation pathway in Porphyridium sp. Homologs were found for almost all known N-glycosylation protein sequences in the ER pathway of Porphyridium sp., thus suggesting that the ER pathway is conserved, as it is in other organisms (animals, plants, yeasts, etc.).
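
    Searches of this kind are typically scripted against a local BLAST+ installation; in the hedged sketch below the query file and database names are hypothetical, and the scaffold contigs are assumed to have been indexed beforehand with makeblastdb:

```python
# Sketch of scripting a TBLASTN homology search against a local BLAST+ installation.
# File and database names are hypothetical; it assumes the scaffold contigs were
# first indexed with `makeblastdb -dbtype nucl`.
import csv
import subprocess

result = subprocess.run(
    ["tblastn",
     "-query", "known_glycosylation_proteins.faa",   # hypothetical protein queries
     "-db", "porphyridium_scaffolds",                # hypothetical nucleotide database
     "-evalue", "1e-5",
     "-outfmt", "6 qseqid sseqid pident length evalue bitscore"],
    capture_output=True, text=True, check=True)

hits = list(csv.reader(result.stdout.splitlines(), delimiter="\t"))
for qseqid, sseqid, pident, length, evalue, bitscore in hits[:10]:
    print(f"{qseqid} -> {sseqid}: {pident}% identity over {length} positions (E={evalue})")
```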

  6. Integrative Bioinformatic Analysis of Transcriptomic Data Identifies Conserved Molecular Pathways Underlying Ionizing Radiation-Induced Bystander Effects (RIBE

    Directory of Open Access Journals (Sweden)

    Constantinos Yeles

    2017-11-01

    Full Text Available Ionizing radiation-induced bystander effects (RIBE) encompass a number of effects with the potential for a plethora of damage in adjacent non-irradiated tissue. The cascade of molecular events is initiated in response to exposure to ionizing radiation (IR), something that may occur during diagnostic or therapeutic medical applications. In order to better investigate these complex response mechanisms, we employed a unified framework integrating statistical microarray analysis, signal normalization, and translational bioinformatics functional analysis techniques. This approach was applied to several microarray datasets from Gene Expression Omnibus (GEO) related to RIBE. The analysis produced lists of differentially expressed genes, contrasting bystander and irradiated samples versus sham-irradiated controls. Furthermore, comparative molecular analysis through BioInfoMiner, which integrates advanced statistical enrichment and prioritization methodologies, revealed discrete biological processes at the cellular level, for example the negative regulation of growth, the cellular response to Zn2+-Cd2+, and Wnt and NIK/NF-kappaB signaling, thus refining the description of the phenotypic landscape of RIBE. Our results provide a more solid understanding of RIBE cell-specific response patterns, especially in the case of high-LET radiations, like α-particles and carbon ions.
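
    The first step of such a framework, per-gene testing of bystander samples against sham-irradiated controls with multiple-testing correction, can be sketched on synthetic data as below; published analyses use moderated statistics and the enrichment and prioritization stage described above:

```python
# Minimal per-gene differential expression test (bystander vs. sham-irradiated)
# with Benjamini-Hochberg correction. The expression matrix is synthetic toy data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_genes, n_per_group = 5000, 6
sham = rng.normal(8.0, 1.0, size=(n_genes, n_per_group))
bystander = rng.normal(8.0, 1.0, size=(n_genes, n_per_group))
bystander[:50] += 1.5                      # spike in 50 truly responsive genes

t, p = stats.ttest_ind(bystander, sham, axis=1)

# Benjamini-Hochberg adjustment.
order = np.argsort(p)
ranked = p[order] * n_genes / np.arange(1, n_genes + 1)
qvals = np.empty_like(p)
qvals[order] = np.minimum.accumulate(ranked[::-1])[::-1]

print("genes with q < 0.05:", int((qvals < 0.05).sum()))
```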

  7. The ICNP BaT - from translation tool to translation web service.

    Science.gov (United States)

    Schrader, Ulrich

    2009-01-01

    The ICNP BaT has been developed as a web application to support the collaborative translation of different versions of the ICNP into different languages. A prototype of a web service is described that could reuse the translations in the database of the ICNP BaT to provide automatic translations of nursing content based on the ICNP terminology globally. The translation web service is based on a service-oriented architecture making it easy to interoperate with different applications. Such a global translation server would free individual institutions from the maintenance costs of realizing their own translation services.

  8. Bioinformatics approaches to single-cell analysis in developmental biology.

    Science.gov (United States)

    Yalcin, Dicle; Hakguder, Zeynep M; Otu, Hasan H

    2016-03-01

    Individual cells within the same population show various degrees of heterogeneity, which may be better handled with single-cell analysis to address biological and clinical questions. Single-cell analysis is especially important in developmental biology as subtle spatial and temporal differences in cells have significant associations with cell fate decisions during differentiation and with the description of a particular state of a cell exhibiting an aberrant phenotype. Biotechnological advances, especially in the area of microfluidics, have led to a robust, massively parallel and multi-dimensional capturing, sorting, and lysis of single-cells and amplification of related macromolecules, which have enabled the use of imaging and omics techniques on single cells. There have been improvements in computational single-cell image analysis in developmental biology regarding feature extraction, segmentation, image enhancement and machine learning, handling limitations of optical resolution to gain new perspectives from the raw microscopy images. Omics approaches, such as transcriptomics, genomics and epigenomics, targeting gene and small RNA expression, single nucleotide and structural variations and methylation and histone modifications, rely heavily on high-throughput sequencing technologies. Although there are well-established bioinformatics methods for analysis of sequence data, there are limited bioinformatics approaches which address experimental design, sample size considerations, amplification bias, normalization, differential expression, coverage, clustering and classification issues, specifically applied at the single-cell level. In this review, we summarize biological and technological advancements, discuss challenges faced in the aforementioned data acquisition and analysis issues and present future prospects for application of single-cell analyses to developmental biology. © The Author 2015. Published by Oxford University Press on behalf of the European

  9. Bioinformatic Analysis of Strawberry GSTF12 Gene

    Science.gov (United States)

    Wang, Xiran; Jiang, Leiyu; Tang, Haoru

    2018-01-01

    GSTF12 is known as a key factor in proanthocyanin accumulation in the plant testa. Bioinformatics analysis of the nucleotide and encoded protein sequences of GSTF12 supports the study of genes related to the anthocyanin biosynthesis and accumulation pathway. We therefore chose GSTF12 genes from 11 species, downloaded their nucleotide and protein sequences from NCBI as the research object, identified the strawberry GSTF12 gene via bioinformatic analysis, and constructed a phylogenetic tree. At the same time, we analysed the physical and chemical properties of the strawberry GSTF12 gene product, its protein structure and related features. The phylogenetic tree showed that strawberry and petunia were the closest relatives. Protein prediction indicated that the protein carries one proper signal peptide and has no obvious transmembrane regions.
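
    The physicochemical characterisation referred to above (molecular weight, isoelectric point, secondary-structure fractions and so on) can be reproduced for any protein sequence with Biopython's ProtParam module. The sketch below is a generic illustration; the sequence shown is a short placeholder, not the actual strawberry GSTF12 protein.

```python
# Sketch of the kind of physico-chemical analysis described above, using
# Biopython's ProtParam. The sequence is a placeholder, not real GSTF12.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

seq = "MVKLHGASPTRAVLLELGEKYELVPVDLMKGEHKQPSYLALNPFGQIPALQDGDE"  # placeholder
protein = ProteinAnalysis(seq)

print("length:", len(seq))
print("molecular weight (Da):", round(protein.molecular_weight(), 1))
print("theoretical pI:", round(protein.isoelectric_point(), 2))
print("instability index:", round(protein.instability_index(), 2))
helix, turn, sheet = protein.secondary_structure_fraction()
print("helix / turn / sheet fractions:",
      round(helix, 2), round(turn, 2), round(sheet, 2))
```

    Signal peptide and transmembrane predictions of the kind mentioned in the abstract are usually obtained from dedicated servers (for example SignalP and TMHMM) rather than from ProtParam.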

  10. Bioinformatics for Next Generation Sequencing Data

    Directory of Open Access Journals (Sweden)

    Alberto Magi

    2010-09-01

    Full Text Available The emergence of next-generation sequencing (NGS) platforms imposes increasing demands on statistical methods and bioinformatic tools for the analysis and management of the huge amounts of data generated by these technologies. Even at the early stages of their commercial availability, a large number of software tools already exist for analyzing NGS data. These tools fall into several general categories, including alignment of sequence reads to a reference, base-calling and/or polymorphism detection, de novo assembly from paired or unpaired reads, structural variant detection and genome browsing. This manuscript aims to guide readers in the choice of the available computational tools that can be used to face the several steps of the data analysis workflow.
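
    As a small, concrete example from the "alignment of sequence reads to a reference" category, the sketch below uses the pysam library to scan an aligned BAM file and count mapped reads above a mapping-quality threshold. The file name is a placeholder, and this is only one tiny step of a real NGS workflow.

```python
# Post-alignment QC sketch with pysam: count mapped reads above a
# mapping-quality threshold. "sample.bam" is a placeholder path.
import pysam

MIN_MAPQ = 30
total = mapped = high_quality = 0

with pysam.AlignmentFile("sample.bam", "rb") as bam:
    for read in bam:                      # iterate over all records
        total += 1
        if read.is_unmapped:
            continue
        mapped += 1
        if read.mapping_quality >= MIN_MAPQ:
            high_quality += 1

print(f"{total} reads, {mapped} mapped, {high_quality} with MAPQ >= {MIN_MAPQ}")
```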

  11. Data mining in bioinformatics using Weka.

    Science.gov (United States)

    Frank, Eibe; Hall, Mark; Trigg, Len; Holmes, Geoffrey; Witten, Ian H

    2004-10-12

    The Weka machine learning workbench provides a general-purpose environment for automatic classification, regression, clustering and feature selection, which are common data mining problems in bioinformatics research. It contains an extensive collection of machine learning algorithms and data pre-processing methods complemented by graphical user interfaces for data exploration and the experimental comparison of different machine learning techniques on the same problem. Weka can process data given in the form of a single relational table. Its main objectives are to (a) assist users in extracting useful information from data and (b) enable them to easily identify a suitable algorithm for generating an accurate predictive model from it. http://www.cs.waikato.ac.nz/ml/weka.
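
    Weka itself is a Java workbench, but the workflow it supports (load a single relational table, train a classifier, estimate accuracy by cross-validation) can be sketched in Python with scikit-learn as a rough analogue. The code below is therefore not Weka's API; the dataset and the choice of a decision tree (loosely comparable to Weka's J48) are illustrative assumptions.

```python
# Rough Python/scikit-learn analogue of a typical Weka session: load a
# single table, train a decision tree, and cross-validate. Not Weka's API.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)     # stand-in for an ARFF table
clf = DecisionTreeClassifier(random_state=0)   # loosely comparable to J48
scores = cross_val_score(clf, X, y, cv=10)     # 10-fold cross-validation

print(f"mean accuracy over 10 folds: {scores.mean():.3f}")
```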

  12. Academic Training - Bioinformatics: Decoding the Genome

    CERN Multimedia

    Chris Jones

    2006-01-01

    ACADEMIC TRAINING LECTURE SERIES 27, 28 February 1, 2, 3 March 2006 from 11:00 to 12:00 - Auditorium, bldg. 500 Decoding the Genome A special series of 5 lectures on: Recent extraordinary advances in the life sciences arising through new detection technologies and bioinformatics The past five years have seen an extraordinary change in the information and tools available in the life sciences. The sequencing of the human genome, the discovery that we possess far fewer genes than foreseen, the measurement of the tiny changes in the genomes that differentiate us, the sequencing of the genomes of many pathogens that lead to diseases such as malaria are all examples of completely new information that is now available in the quest for improved healthcare. New tools have allowed similar strides in the discovery of the associated protein structures, providing invaluable information for those searching for new drugs. New DNA microarray chips permit simultaneous measurement of the state of expression of tens...

  13. ERRORS AND DIFFICULTIES IN TRANSLATING LEGAL TEXTS

    Directory of Open Access Journals (Sweden)

    Camelia, CHIRILA

    2014-11-01

    Full Text Available Nowadays the accurate translation of legal texts has become highly important as the mistranslation of a passage in a contract, for example, could lead to lawsuits and loss of money. Consequently, the translation of legal texts to other languages faces many difficulties and only professional translators specialised in legal translation should deal with the translation of legal documents and scholarly writings. The purpose of this paper is to analyze translation from three perspectives: translation quality, errors and difficulties encountered in translating legal texts and consequences of such errors in professional translation. First of all, the paper points out the importance of performing a good and correct translation, which is one of the most important elements to be considered when discussing translation. Furthermore, the paper presents an overview of the errors and difficulties in translating texts and of the consequences of errors in professional translation, with applications to the field of law. The paper is also an approach to the differences between languages (English and Romanian that can hinder comprehension for those who have embarked upon the difficult task of translation. The research method that I have used to achieve the objectives of the paper was the content analysis of various Romanian and foreign authors' works.

  14. Nanoinformatics: an emerging area of information technology at the intersection of bioinformatics, computational chemistry and nanobiotechnology

    Directory of Open Access Journals (Sweden)

    Fernando González-Nilo

    2011-01-01

    Full Text Available After the progress made during the genomics era, bioinformatics was tasked with supporting the flow of information generated by nanobiotechnology efforts. This challenge requires adapting classical bioinformatic and computational chemistry tools to store, standardize, analyze, and visualize nanobiotechnological information. Thus, old and new bioinformatic and computational chemistry tools have been merged into a new sub-discipline: nanoinformatics. This review takes a second look at the development of this new and exciting area as seen from the perspective of the evolution of nanobiotechnology applied to the life sciences. The knowledge obtained at the nano-scale level implies answers to new questions and the development of new concepts in different fields. The rapid convergence of technologies around nanobiotechnologies has spun off collaborative networks and web platforms created for sharing and discussing the knowledge generated in nanobiotechnology. The implementation of new database schemes suitable for storage, processing and integrating physical, chemical, and biological properties of nanoparticles will be a key element in achieving the promises in this convergent field. In this work, we will review some applications of nanobiotechnology to life sciences in generating new requirements for diverse scientific fields, such as bioinformatics and computational chemistry.

  15. Integration of Proteomics, Bioinformatics, and Systems Biology in Traumatic Brain Injury Biomarker Discovery

    Science.gov (United States)

    Guingab-Cagmat, J.D.; Cagmat, E.B.; Hayes, R.L.; Anagli, J.

    2013-01-01

    Traumatic brain injury (TBI) is a major medical crisis without any FDA-approved pharmacological therapies that have been demonstrated to improve functional outcomes. It has been argued that discovery of disease-relevant biomarkers might help to guide successful clinical trials for TBI. Major advances in mass spectrometry (MS) have revolutionized the field of proteomic biomarker discovery and facilitated the identification of several candidate markers that are being further evaluated for their efficacy as TBI biomarkers. However, several hurdles have to be overcome even during the discovery phase, which is only the first step in the long process of biomarker development. The high-throughput nature of MS-based proteomic experiments generates a massive amount of mass spectral data, presenting great challenges in downstream interpretation. Currently, different bioinformatics platforms are available for functional analysis and data mining of MS-generated proteomic data. These tools provide a way to convert data sets to biologically interpretable results and functional outcomes. A strategy that has promise in advancing biomarker development involves the triad of proteomics, bioinformatics, and systems biology. In this review, a brief overview of how bioinformatics and systems biology tools analyze, transform, and interpret complex MS datasets into biologically relevant results is discussed. In addition, challenges and limitations of proteomics, bioinformatics, and systems biology in TBI biomarker discovery are presented. A brief survey of studies that utilized these three overlapping disciplines in TBI biomarker discovery is also presented. Finally, examples of TBI biomarkers and their applications are discussed. PMID:23750150

  16. Recent developments in life sciences research: Role of bioinformatics

    African Journals Online (AJOL)

    Life sciences research and development has opened up new challenges and opportunities for bioinformatics. The contribution of bioinformatics advances made possible the mapping of the entire human genome and genomes of many other organisms in just over a decade. These discoveries, along with current efforts to ...

  17. Assessment of a Bioinformatics across Life Science Curricula Initiative

    Science.gov (United States)

    Howard, David R.; Miskowski, Jennifer A.; Grunwald, Sandra K.; Abler, Michael L.

    2007-01-01

    At the University of Wisconsin-La Crosse, we have undertaken a program to integrate the study of bioinformatics across the undergraduate life science curricula. Our efforts have included incorporating bioinformatics exercises into courses in the biology, microbiology, and chemistry departments, as well as coordinating the efforts of faculty within…

  18. Current status and future perspectives of bioinformatics in Tanzania ...

    African Journals Online (AJOL)

    The main bottleneck in advancing genomics in present times is the lack of expertise in using bioinformatics tools and approaches for data mining in raw DNA sequences generated by modern high throughput technologies such as next generation sequencing. Although bioinformatics has been making major progress and ...

  19. Is there room for ethics within bioinformatics education?

    Science.gov (United States)

    Taneri, Bahar

    2011-07-01

    When bioinformatics education is considered, several issues are addressed. At the undergraduate level, the main issue revolves around conveying information from two main and different fields: biology and computer science. At the graduate level, the main issue is bridging the gap between biology students and computer science students. However, there is an educational component that is rarely addressed within the context of bioinformatics education: the ethics component. Here, a different perspective is provided on bioinformatics education, and the current status of ethics is analyzed within the existing bioinformatics programs. Analysis of the existing undergraduate and graduate programs, in both Europe and the United States, reveals the minimal attention given to ethics within bioinformatics education. Given that bioinformaticians speedily and effectively shape the biomedical sciences and hence their implications for society, here redesigning of the bioinformatics curricula is suggested in order to integrate the necessary ethics education. Unique ethical problems awaiting bioinformaticians and bioinformatics ethics as a separate field of study are discussed. In addition, a template for an "Ethics in Bioinformatics" course is provided.

  20. Intelligent bioinformatics : the application of artificial intelligence techniques to bioinformatics problems

    National Research Council Canada - National Science Library

    Keedwell, Edward

    2005-01-01

    ... Intelligence and Computer Science: 3.1 Introduction to search; 3.2 Search algorithms; 3.3 Heuristic search methods; 3.4 Optimal search strategies; 3.5 Problems with search techniques; 3.6 Complexity of...

  1. 4273π: bioinformatics education on low cost ARM hardware.

    Science.gov (United States)

    Barker, Daniel; Ferrier, David Ek; Holland, Peter Wh; Mitchell, John Bo; Plaisier, Heleen; Ritchie, Michael G; Smart, Steven D

    2013-08-12

    Teaching bioinformatics at universities is complicated by typical computer classroom settings. As well as running software locally and online, students should gain experience of systems administration. For a future career in biology or bioinformatics, the installation of software is a useful skill. We propose that this may be taught by running the course on GNU/Linux running on inexpensive Raspberry Pi computer hardware, for which students may be granted full administrator access. We release 4273π, an operating system image for Raspberry Pi based on Raspbian Linux. This includes minor customisations for classroom use and includes our Open Access bioinformatics course, 4273π Bioinformatics for Biologists. This is based on the final-year undergraduate module BL4273, run on Raspberry Pi computers at the University of St Andrews, Semester 1, academic year 2012-2013. 4273π is a means to teach bioinformatics, including systems administration tasks, to undergraduates at low cost.

  2. LXtoo: an integrated live Linux distribution for the bioinformatics community.

    Science.gov (United States)

    Yu, Guangchuang; Wang, Li-Gen; Meng, Xiao-Hua; He, Qing-Yu

    2012-07-19

    Recent advances in high-throughput technologies dramatically increase biological data generation. However, many research groups lack computing facilities and specialists. This is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Unlike most of the existing live Linux distributions for bioinformatics limiting their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high performance computing. LXtoo aims to provide well-supported computing environment tailored for bioinformatics research, reducing duplication of efforts in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo.

  3. Translation of the Children Helping Out--Responsibilities, Expectations and Supports (CHORES) questionnaire into Brazilian-Portuguese: semantic, idiomatic, conceptual and experiential equivalences and application in normal children and adolescents and in children with cerebral palsy.

    Science.gov (United States)

    Amaral, Maíra; Paula, Rebeca L; Drummond, Adriana; Dunn, Louise; Mancini, Marisa C

    2012-01-01

    The participation of children with disabilities in daily chores in different environments has been a therapeutic goal shared by both parents and rehabilitation professionals, leading to increased demand for instrument development. The Children Helping Out: Responsibilities, Expectations and Supports (CHORES) questionnaire was created with the objective of measuring child and teenager participation in daily household tasks. To translate the CHORES questionnaire into Brazilian Portuguese, evaluate semantic, idiomatic, experiential, and conceptual equivalences, apply the questionnaire to children and teenagers with and without disabilities, and test its test-retest reliability. Methodological study developed through the following stages: (1) translation of the questionnaire by two different translators; (2) synthesis of translations; (3) back-translation into English; (4) analysis by an expert committee to develop the pre-final version; (5) test-retest reliability; (6) administration to a sample of 50 parents of children with and without disabilities. The CHORES translation was validated in all stages. The implemented adaptations aimed to improve the understanding of the instrument's content by families of different socioeconomic and educational levels. The questionnaire showed strong consistency within a 7 to 14-day interval (ICCs=0.93 a 0.97; p=0.0001). After application, there was no need to change any items in the questionnaire. The translation of the CHORES questionnaire into Brazilian Portuguese offers a unique instrument for health professionals in Brazil, enabling the documentation of child and teenager participation in daily household tasks and making it possible to develop scientific investigation on the topic.

  4. Translation of Berlin Questionnaire to Portuguese language and its application in OSA identification in a sleep disordered breathing clinic

    Directory of Open Access Journals (Sweden)

    A.P. Vaz

    2011-03-01

    Full Text Available Background: The Berlin Questionnaire (BQ), an English-language screening tool for obstructive sleep apnea (OSA) in primary care, has been applied in tertiary settings, with variable results. Aims: Development of a Portuguese version of the BQ and evaluation of its utility in a sleep disordered breathing clinic (SDBC). Material and methods: The BQ was translated using back-translation methodology and prospectively applied, prior to a cardiorespiratory sleep study, to 95 consecutive subjects referred to an SDBC with suspected OSA. OSA risk assessment was based on responses to 10 items, organized in 3 categories: snoring and witnessed apneas (category 1), daytime sleepiness (category 2), and high blood pressure (HBP)/obesity (category 3). Results: In the studied sample, 67.4% were males, with a mean age of 51 ± 13 years. Categories 1, 2 and 3 were positive in 91.6, 24.2 and 66.3%, respectively. The BQ identified 68.4% of the patients as being in the high-risk group for OSA and the remaining 31.6% in the low-risk group. BQ sensitivity and specificity were 72.1 and 50%, respectively, for an apnea-hypopnea index (AHI) > 5, 82.6 and 44.8% for AHI > 15, and 88.4 and 39.1% for AHI > 30. Being in the high-risk group for OSA did not significantly influence the probability of having the disease (positive likelihood ratio [LR] between 1.44 and 1.49). Only the items related to snoring loudness, witnessed apneas and HBP/obesity presented a statistically positive association with AHI, with the model constituted by their combination presenting a greater discrimination capability, especially for an AHI > 5 (sensitivity 65.2%, specificity 80%, positive LR 3.26). Conclusions: The BQ is not an appropriate screening tool for OSA in an SDBC, although snoring loudness, witnessed apneas and HBP/obesity proved to be significant questionnaire elements in this population. Abstract: Introduction: The Berlin Questionnaire (BQ), originally developed in the English language as an instrument for

  5. A middleware-based platform for the integration of bioinformatic services

    Directory of Open Access Journals (Sweden)

    Guzmán Llambías

    2015-08-01

    Full Text Available Performing bioinformatics experiments involves intensive access to distributed services and information resources through the Internet. Although existing tools facilitate the implementation of workflow-oriented applications, they lack the capabilities to integrate services beyond small-scale applications, particularly services with heterogeneous interaction patterns and at a larger scale. This is particularly required to enable large-scale distributed processing of the biological data generated by massive sequencing technologies. On the other hand, such integration mechanisms are provided by middleware products like Enterprise Service Buses (ESBs), which enable the integration of distributed systems following a Service Oriented Architecture. This paper proposes an integration platform, based on enterprise middleware, to integrate bioinformatics services. It presents a multi-level reference architecture and focuses on ESB-based mechanisms to provide asynchronous communications, event-based interactions and data transformation capabilities. The paper presents a formal specification of the platform using the Event-B model.
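
    The ESB-style mechanisms highlighted above (asynchronous communication, event-based interaction and data transformation between services) can be mimicked on a very small scale with an in-process queue, as in the sketch below. This is a toy illustration of the pattern, not the platform described in the paper; a production system would use real middleware such as an ESB or message broker.

```python
# Toy event-driven integration: a producer publishes events to a queue,
# a transformation step adapts the payload, and a consumer processes it
# asynchronously. Illustrative only; not the paper's platform.
import asyncio

async def sequence_producer(bus: asyncio.Queue) -> None:
    for seq_id in ("read_001", "read_002", "read_003"):
        await bus.put({"type": "NewSequence", "id": seq_id})  # publish event
    await bus.put(None)                                       # end-of-stream marker

def transform(event: dict) -> dict:
    # Data transformation: adapt the message to the consumer's schema.
    return {"sequence_id": event["id"].upper(), "source": "ingest-service"}

async def annotation_consumer(bus: asyncio.Queue) -> None:
    while (event := await bus.get()) is not None:
        message = transform(event)
        print("annotating", message["sequence_id"], "from", message["source"])

async def main() -> None:
    bus = asyncio.Queue()
    await asyncio.gather(sequence_producer(bus), annotation_consumer(bus))

asyncio.run(main())
```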

  6. Analyses of Brucella Pathogenesis, Host Immunity, and Vaccine Targets using Systems Biology and Bioinformatics

    OpenAIRE

    He, Yongqun

    2012-01-01

    Brucella is a Gram-negative, facultative intracellular bacterium that causes zoonotic brucellosis in humans and various animals. Out of 10 classified Brucella species, B. melitensis, B. abortus, B. suis, and B. canis are pathogenic to humans. In the past decade, the mechanisms of Brucella pathogenesis and host immunity have been extensively investigated using the cutting edge systems biology and bioinformatics approaches. This article provides a comprehensive review of the applications of Omi...

  7. Bioinformatics research in the Asia Pacific: a 2007 update.

    Science.gov (United States)

    Ranganathan, Shoba; Gribskov, Michael; Tan, Tin Wee

    2008-01-01

    We provide a 2007 update on the bioinformatics research in the Asia-Pacific from the Asia Pacific Bioinformatics Network (APBioNet), Asia's oldest bioinformatics organisation set up in 1998. From 2002, APBioNet has organized the first International Conference on Bioinformatics (InCoB) bringing together scientists working in the field of bioinformatics in the region. This year, the InCoB2007 Conference was organized as the 6th annual conference of the Asia-Pacific Bioinformatics Network, on Aug. 27-30, 2007 at Hong Kong, following a series of successful events in Bangkok (Thailand), Penang (Malaysia), Auckland (New Zealand), Busan (South Korea) and New Delhi (India). Besides a scientific meeting at Hong Kong, satellite events organized are a pre-conference training workshop at Hanoi, Vietnam and a post-conference workshop at Nansha, China. This Introduction provides a brief overview of the peer-reviewed manuscripts accepted for publication in this Supplement. We have organized the papers into thematic areas, highlighting the growing contribution of research excellence from this region, to global bioinformatics endeavours.

  8. Continuing Education Workshops in Bioinformatics Positively Impact Research and Careers.

    Science.gov (United States)

    Brazas, Michelle D; Ouellette, B F Francis

    2016-06-01

    Bioinformatics.ca has been hosting continuing education programs in introductory and advanced bioinformatics topics in Canada since 1999 and has trained more than 2,000 participants to date. These workshops have been adapted over the years to keep pace with advances in both science and technology as well as the changing landscape in available learning modalities and the bioinformatics training needs of our audience. Post-workshop surveys have been a mandatory component of each workshop and are used to ensure appropriate adjustments are made to workshops to maximize learning. However, neither bioinformatics.ca nor others offering similar training programs have explored the long-term impact of bioinformatics continuing education training. Bioinformatics.ca recently initiated a look back on the impact its workshops have had on the career trajectories, research outcomes, publications, and collaborations of its participants. Using an anonymous online survey, bioinformatics.ca analyzed responses from those surveyed and discovered its workshops have had a positive impact on collaborations, research, publications, and career progression.

  9. Development of a PET Prostate-Specific Membrane Antigen Imaging Agent: Preclinical Translation for Future Clinical Application

    Science.gov (United States)

    2017-10-01

    ...(phase 0) application to the FDA by the end of the funding period. The small molecule imaging agents under study home to prostate-specific membrane antigen (PSMA) that is prevalent on a majority of...

  10. EDAM: an ontology of bioinformatics operations, types of data and identifiers, topics and formats

    Science.gov (United States)

    Ison, Jon; Kalaš, Matúš; Jonassen, Inge; Bolser, Dan; Uludag, Mahmut; McWilliam, Hamish; Malone, James; Lopez, Rodrigo; Pettifer, Steve; Rice, Peter

    2013-01-01

    Motivation: Advancing the search, publication and integration of bioinformatics tools and resources demands consistent machine-understandable descriptions. A comprehensive ontology allowing such descriptions is therefore required. Results: EDAM is an ontology of bioinformatics operations (tool or workflow functions), types of data and identifiers, application domains and data formats. EDAM supports semantic annotation of diverse entities such as Web services, databases, programmatic libraries, standalone tools, interactive applications, data schemas, datasets and publications within bioinformatics. EDAM applies to organizing and finding suitable tools and data and to automating their integration into complex applications or workflows. It includes over 2200 defined concepts and has successfully been used for annotations and implementations. Availability: The latest stable version of EDAM is available in OWL format from http://edamontology.org/EDAM.owl and in OBO format from http://edamontology.org/EDAM.obo. It can be viewed online at the NCBO BioPortal and the EBI Ontology Lookup Service. For documentation and license please refer to http://edamontology.org. This article describes version 1.2 available at http://edamontology.org/EDAM_1.2.owl. Contact: jison@ebi.ac.uk PMID:23479348
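
    One way to use EDAM programmatically is to load the OBO release with a generic ontology library and query its concepts. The sketch below uses the pronto library and filters concepts whose identifiers start with "operation_"; both the library choice and that filtering heuristic are assumptions made for illustration, and the exact terms printed will depend on the EDAM version downloaded.

```python
# Sketch: download the EDAM OBO release and list a few operation concepts
# with the pronto ontology library. Library choice and the id-prefix
# filter are illustrative assumptions.
import urllib.request

import pronto

OBO_URL = "http://edamontology.org/EDAM.obo"
urllib.request.urlretrieve(OBO_URL, "EDAM.obo")

edam = pronto.Ontology("EDAM.obo")
operations = [t for t in edam.terms() if t.id.startswith("operation_")]

print(len(operations), "operation concepts found")
for term in operations[:5]:
    print(term.id, "-", term.name)
```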

  11. Host-parasite interactions and ecology of the malaria parasite-a bioinformatics approach.

    Science.gov (United States)

    Izak, Dariusz; Klim, Joanna; Kaczanowski, Szymon

    2018-04-25

    Malaria remains one of the highest mortality infectious diseases. Malaria is caused by parasites from the genus Plasmodium. Most deaths are caused by infections involving Plasmodium falciparum, which has a complex life cycle. Malaria parasites are extremely well adapted for interactions with their host and their host's immune system and are able to suppress the human immune system, erase immunological memory and rapidly alter exposed antigens. Owing to this rapid evolution, parasites develop drug resistance and express novel forms of antigenic proteins that are not recognized by the host immune system. There is an emerging need for novel interventions, including novel drugs and vaccines. Designing novel therapies requires knowledge about host-parasite interactions, which is still limited. However, significant progress has recently been achieved in this field through the application of bioinformatics analysis of parasite genome sequences. In this review, we describe the main achievements in 'malarial' bioinformatics and provide examples of successful applications of protein sequence analysis. These examples include the prediction of protein functions based on homology and the prediction of protein surface localization via domain and motif analysis. Additionally, we describe PlasmoDB, a database that stores accumulated experimental data. This tool allows data mining of the stored information and will play an important role in the development of malaria science. Finally, we illustrate the application of bioinformatics in the development of population genetics research on malaria parasites, an approach referred to as reverse ecology.
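
    The homology-based function prediction mentioned above typically starts from a sequence similarity search. The sketch below submits a protein sequence to NCBI BLAST through Biopython and lists the top hits; the sequence is a placeholder, and remote BLAST queries are slow and rate-limited, so large-scale analyses would run BLAST locally instead.

```python
# Homology-search sketch: submit a protein sequence to NCBI BLAST
# (blastp vs. nr) via Biopython and print the best hits. Placeholder
# sequence; remote queries are slow and rate-limited.
from Bio.Blast import NCBIWWW, NCBIXML

query = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQVKVKALPDAQ"
handle = NCBIWWW.qblast("blastp", "nr", query, hitlist_size=5)
record = NCBIXML.read(handle)

for alignment in record.alignments:
    best = alignment.hsps[0]
    print(f"{alignment.title[:60]}...  E-value: {best.expect:.2g}")
```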

  12. Bioinformatics in cancer therapy and drug design

    International Nuclear Information System (INIS)

    Horbach, D.Y.; Usanov, S.A.

    2005-01-01

    One of the mechanisms of external signal transduction (ionizing radiation, toxicants, stress) to the target cell is the existence of membrane and intracellular proteins with intrinsic tyrosine kinase activity. It is therefore not surprising that the etiology of malignant growth is linked to abnormalities in signal transduction through tyrosine kinases. The epidermal growth factor receptor (EGFR) tyrosine kinases play fundamental roles in the development, proliferation and differentiation of tissues of epithelial, mesenchymal and neuronal origin. There are four types of EGFR: the EGF receptor (ErbB1/HER1), ErbB2/Neu/HER2, ErbB3/HER3 and ErbB4/HER4. Abnormal expression of EGFR, and the appearance of receptor mutants with an altered ability to form protein-protein interactions or with increased tyrosine kinase activity, have been implicated in the malignancy of different types of human tumors. Bioinformatics is currently used in investigations of the design and selection of drugs that can alter the structure of, or competitively bind to, these receptors and so display antagonistic characteristics. (authors)

  13. Bioinformatics in cancer therapy and drug design

    Energy Technology Data Exchange (ETDEWEB)

    Horbach, D Y [International A. Sakharov environmental univ., Minsk (Belarus); Usanov, S A [Inst. of bioorganic chemistry, National academy of sciences of Belarus, Minsk (Belarus)

    2005-05-15

    One of the mechanisms of external signal transduction (ionizing radiation, toxicants, stress) to the target cell is the existence of membrane and intracellular proteins with intrinsic tyrosine kinase activity. It is therefore not surprising that the etiology of malignant growth is linked to abnormalities in signal transduction through tyrosine kinases. The epidermal growth factor receptor (EGFR) tyrosine kinases play fundamental roles in the development, proliferation and differentiation of tissues of epithelial, mesenchymal and neuronal origin. There are four types of EGFR: the EGF receptor (ErbB1/HER1), ErbB2/Neu/HER2, ErbB3/HER3 and ErbB4/HER4. Abnormal expression of EGFR, and the appearance of receptor mutants with an altered ability to form protein-protein interactions or with increased tyrosine kinase activity, have been implicated in the malignancy of different types of human tumors. Bioinformatics is currently used in investigations of the design and selection of drugs that can alter the structure of, or competitively bind to, these receptors and so display antagonistic characteristics. (authors)

  14. Bioinformatics study of the mangrove actin genes

    Science.gov (United States)

    Basyuni, M.; Wasilah, M.; Sumardi

    2017-01-01

    This study describes the bioinformatics methods used to analyze eight actin genes from mangrove plants deposited in DDBJ/EMBL/GenBank, and to predict their structure, composition, subcellular localization, similarity and phylogeny. The physical and chemical properties of the eight mangrove genes showed variation among the genes. The percentages of secondary structure elements of the mangrove actin genes followed the order α-helix > random coil > extended chain structure for BgActl, KcActl, RsActl and the A. corniculatum Act. In contrast, the remaining actin genes followed the order random coil > extended chain structure > α-helix. Prediction of secondary structure was therefore performed to obtain the necessary structural information. The scores for chloroplast transit peptides, mitochondrial target peptides and signal peptides were very small, indicating that the mangrove actin proteins carry no chloroplast or mitochondrial transit peptide and no signal peptide of the secretion pathway. These results suggest the importance of understanding the diversity and functional properties of the different amino acids in mangrove actin genes. To clarify the relationships among the mangrove actin genes, a phylogenetic tree was constructed. Three groups of mangrove actin genes were formed: the first group contains B. gymnorrhiza BgAct and R. stylosa RsActl; the second cluster, the largest group, consists of 5 actin genes; and the last branch consists of one gene, B. sexagula Act. The present study therefore supports the previous finding that plant actin genes form distinct clusters in the tree.
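
    A distance-based phylogenetic tree like the one described can be built from an existing multiple sequence alignment with Biopython, as in the sketch below. The alignment file name is a placeholder, and neighbour joining on identity distances is only one of many possible methods; it is not necessarily the procedure used in the study.

```python
# Sketch of distance-based tree construction with Biopython, of the kind
# used to group the actin genes above. "actin_aligned.fasta" is a
# placeholder for a pre-computed multiple sequence alignment.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("actin_aligned.fasta", "fasta")
distances = DistanceCalculator("identity").get_distance(alignment)
tree = DistanceTreeConstructor().nj(distances)   # neighbour-joining tree

Phylo.draw_ascii(tree)                           # quick text rendering
```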

  15. Mapping Translation Technology Research in Translation Studies

    DEFF Research Database (Denmark)

    Schjoldager, Anne; Christensen, Tina Paulsen; Flanagan, Marian

    2017-01-01

    /Schjoldager 2010, 2011; Christensen 2011). Unfortunately, the increasing professional use of translation technology has not been mirrored within translation studies (TS) by a similar increase in research projects on translation technology (Munday 2009: 15; O’Hagan 2013; Doherty 2016: 952). The current thematic section aims to improve this situation by presenting new and innovative research papers that reflect on recent technological advances and their impact on the translation profession and translators from a diversity of perspectives and using a variety of methods. In Section 2, we present translation technology research as a subdiscipline of TS, and we define and discuss some basic concepts and models of the field that we use in the rest of the paper. Based on a small-scale study of papers published in TS journals between 2006 and 2016, Section 3 attempts to map relevant developments of translation...

  16. Bioconductor: open software development for computational biology and bioinformatics

    DEFF Research Database (Denmark)

    Gentleman, R.C.; Carey, V.J.; Bates, D.M.

    2004-01-01

    The Bioconductor project is an initiative for the collaborative creation of extensible software for computational biology and bioinformatics. The goals of the project include: fostering collaborative development and widespread use of innovative software, reducing barriers to entry into interdisciplinary scientific research, and promoting the achievement of remote reproducibility of research results. We describe details of our aims and methods, identify current challenges, compare Bioconductor to other open bioinformatics projects, and provide working examples.

  17. Approaches to translational plant science

    DEFF Research Database (Denmark)

    Dresbøll, Dorte Bodin; Christensen, Brian; Thorup-Kristensen, Kristian

    2015-01-01

    Translational science deals with the dilemma between basic research and the practical application of scientific results. In translational plant science, focus is on the relationship between agricultural crop production and basic science in various research fields, but primarily in the basic plant science. Scientific and technological developments have allowed great progress in our understanding of plant genetics and molecular physiology, with potentials for improving agricultural production. However, this development has led to a separation of the laboratory-based research from the crop production ... is lessened. In our opinion, implementation of translational plant science is a necessity in order to solve the agricultural challenges of producing food and materials in the future. We suggest an approach to translational plant science forcing scientists to think beyond their own area and to consider higher...

  18. Precise machine translation of computer science study

    CSIR Research Space (South Africa)

    Marais, L

    2015-07-01

    Full Text Available A mobile (Android) application for translating discrete mathematics definitions between English and Afrikaans is described. The main component of the system is a Grammatical Framework (GF) application grammar which produces syntactically and semantically accurate...

  19. Working with corpora in the translation classroom

    Directory of Open Access Journals (Sweden)

    Ralph Krüger

    2012-10-01

    Full Text Available This article sets out to illustrate possible applications of electronic corpora in the translation classroom. Starting with a survey of corpus use within corpus-based translation studies, the didactic value of corpora in the translation classroom and their epistemic value in translation teaching and practice will be elaborated. A typology of translation practice-oriented corpora will be presented, and the use of corpora in translation will be positioned within two general models of translation competence. Special consideration will then be given to the design and application of so-called Do-it-yourself (DIY) corpora, which are compiled ad hoc with the aim of completing a specific translation task. In this context, possible sources for retrieving corpus texts will be presented and evaluated and it will be argued that, owing to time and availability constraints in real-life translation, the Internet should be used as a major source of corpus data. After a brief discussion of possible Internet research techniques for targeted and quality-focused corpus compilation, the possible use of the Internet itself as a macro-corpus will be elaborated. The article concludes with a brief presentation of corpus use in translation teaching in the MA in Specialised Translation Programme offered at Cologne University of Applied Sciences, Germany.

  20. Toward the Replacement of Animal Experiments through the Bioinformatics-driven Analysis of 'Omics' Data from Human Cell Cultures.

    Science.gov (United States)

    Grafström, Roland C; Nymark, Penny; Hongisto, Vesa; Spjuth, Ola; Ceder, Rebecca; Willighagen, Egon; Hardy, Barry; Kaski, Samuel; Kohonen, Pekka

    2015-11-01

    This paper outlines the work for which Roland Grafström and Pekka Kohonen were awarded the 2014 Lush Science Prize. The research activities of the Grafström laboratory have, for many years, covered cancer biology studies, as well as the development and application of toxicity-predictive in vitro models to determine chemical safety. Through the integration of in silico analyses of diverse types of genomics data (transcriptomic and proteomic), their efforts have proved to fit well into the recently-developed Adverse Outcome Pathway paradigm. Genomics analysis within state-of-the-art cancer biology research and Toxicology in the 21st Century concepts share many technological tools. A key category within the Three Rs paradigm is the Replacement of animals in toxicity testing with alternative methods, such as bioinformatics-driven analyses of data obtained from human cell cultures exposed to diverse toxicants. This work was recently expanded within the pan-European SEURAT-1 project (Safety Evaluation Ultimately Replacing Animal Testing), to replace repeat-dose toxicity testing with data-rich analyses of sophisticated cell culture models. The aims and objectives of the SEURAT project have been to guide the application, analysis, interpretation and storage of 'omics' technology-derived data within the service-oriented sub-project, ToxBank. Particularly addressing the Lush Science Prize focus on the relevance of toxicity pathways, a 'data warehouse' that is under continuous expansion, coupled with the development of novel data storage and management methods for toxicology, serve to address data integration across multiple 'omics' technologies. The prize winners' guiding principles and concepts for modern knowledge management of toxicological data are summarised. The translation of basic discovery results ranged from chemical-testing and material-testing data, to information relevant to human health and environmental safety. 2015 FRAME.

  1. Development of a cloud-based Bioinformatics Training Platform.

    Science.gov (United States)

    Revote, Jerico; Watson-Haigh, Nathan S; Quenette, Steve; Bethwaite, Blair; McGrath, Annette; Shang, Catherine A

    2017-05-01

    The Bioinformatics Training Platform (BTP) has been developed to provide access to the computational infrastructure required to deliver sophisticated hands-on bioinformatics training courses. The BTP is a cloud-based solution that is in active use for delivering next-generation sequencing training to Australian researchers at geographically dispersed locations. The BTP was built to provide an easy, accessible, consistent and cost-effective approach to delivering workshops at host universities and organizations with a high demand for bioinformatics training but lacking the dedicated bioinformatics training suites required. To support broad uptake of the BTP, the platform has been made compatible with multiple cloud infrastructures. The BTP is an open-source and open-access resource. To date, 20 training workshops have been delivered to over 700 trainees at over 10 venues across Australia using the BTP. © The Author 2016. Published by Oxford University Press.

  2. Virginia Bioinformatics Institute to expand cyberinfrastructure education and outreach project

    OpenAIRE

    Whyte, Barry James

    2008-01-01

    The National Science Foundation has awarded the Virginia Bioinformatics Institute at Virginia Tech $918,000 to expand its education and outreach program in Cyberinfrastructure - Training, Education, Advancement and Mentoring, commonly known as the CI-TEAM.

  3. Metagenomics and Bioinformatics in Microbial Ecology: Current Status and Beyond.

    Science.gov (United States)

    Hiraoka, Satoshi; Yang, Ching-Chia; Iwasaki, Wataru

    2016-09-29

    Metagenomic approaches are now commonly used in microbial ecology to study microbial communities in more detail, including many strains that cannot be cultivated in the laboratory. Bioinformatic analyses make it possible to mine huge metagenomic datasets and discover general patterns that govern microbial ecosystems. However, the findings of typical metagenomic and bioinformatic analyses still do not completely describe the ecology and evolution of microbes in their environments. Most analyses still depend on straightforward sequence similarity searches against reference databases. We herein review the current state of metagenomics and bioinformatics in microbial ecology and discuss future directions for the field. New techniques will allow us to go beyond routine analyses and broaden our knowledge of microbial ecosystems. We need to enrich reference databases, promote platforms that enable meta- or comprehensive analyses of diverse metagenomic datasets, devise methods that utilize long-read sequence information, and develop more powerful bioinformatic methods to analyze data from diverse perspectives.

  4. Bioinformatics Education in Pathology Training: Current Scope and Future Direction

    Directory of Open Access Journals (Sweden)

    Michael R Clay

    2017-04-01

    Full Text Available Training anatomic and clinical pathology residents in the principles of bioinformatics is a challenging endeavor. Most residents receive little to no formal exposure to bioinformatics during medical education, and most of the pathology training is spent interpreting histopathology slides using light microscopy or focused on laboratory regulation, management, and interpretation of discrete laboratory data. At a minimum, residents should be familiar with data structure, data pipelines, data manipulation, and data regulations within clinical laboratories. Fellowship-level training should incorporate advanced principles unique to each subspecialty. Barriers to bioinformatics education include the clinical apprenticeship training model, ill-defined educational milestones, inadequate faculty expertise, and limited exposure during medical training. Online educational resources, case-based learning, and incorporation into molecular genomics education could serve as effective educational strategies. Overall, pathology bioinformatics training can be incorporated into pathology resident curricula, provided there is motivation to incorporate, institutional support, educational resources, and adequate faculty expertise.

  5. In silico cloning and bioinformatic analysis of PEPCK gene in ...

    African Journals Online (AJOL)

    Phosphoenolpyruvate carboxykinase (PEPCK), a critical gluconeogenic enzyme, catalyzes the first committed step in the diversion of tricarboxylic acid cycle intermediates toward gluconeogenesis. According to the relative conservation of homologous gene, a bioinformatics strategy was applied to clone Fusarium ...

  6. Best practices in bioinformatics training for life scientists.

    KAUST Repository

    Via, Allegra; Blicher, Thomas; Bongcam-Rudloff, Erik; Brazas, Michelle D; Brooksbank, Cath; Budd, Aidan; De Las Rivas, Javier; Dreyer, Jacqueline; Fernandes, Pedro L; van Gelder, Celia; Jacob, Joachim; Jimenez, Rafael C; Loveland, Jane; Moran, Federico; Mulder, Nicola; Nyrönen, Tommi; Rother, Kristian; Schneider, Maria Victoria; Attwood, Teresa K

    2013-01-01

    concepts. Providing bioinformatics training to empower life scientists to handle and analyse their data efficiently, and progress their research, is a challenge across the globe. Delivering good training goes beyond traditional lectures and resource

  7. Bioinformatics tools for development of fast and cost effective simple ...

    African Journals Online (AJOL)

    Bioinformatics tools for development of fast and cost effective simple sequence repeat ... comparative mapping and exploration of functional genetic diversity in the ... Already, a number of computer programs have been implemented that aim at ...

  8. Skate Genome Project: Cyber-Enabled Bioinformatics Collaboration

    Science.gov (United States)

    Vincent, J.

    2011-01-01

    The Skate Genome Project, a pilot project of the North East Cyberinfrastructure Consortium, aims to produce a draft genome sequence of Leucoraja erinacea, the Little Skate. The pilot project was also designed to develop expertise in large-scale collaborations across the NECC region. An overview of the bioinformatics and infrastructure challenges faced during the first year of the project will be presented. Results to date and lessons learned from the perspective of a bioinformatics core will be highlighted.

  9. PubData: search engine for bioinformatics databases worldwide

    OpenAIRE

    Vand, Kasra; Wahlestedt, Thor; Khomtchouk, Kelly; Sayed, Mohammed; Wahlestedt, Claes; Khomtchouk, Bohdan

    2016-01-01

    We propose a search engine and file retrieval system for all bioinformatics databases worldwide. PubData searches biomedical data in a user-friendly fashion similar to how PubMed searches biomedical literature. PubData is built on novel network programming, natural language processing, and artificial intelligence algorithms that can patch into the file transfer protocol servers of any user-specified bioinformatics database, query its contents, retrieve files for download, and adapt to the use...

  10. On Various Negative Translations

    Directory of Open Access Journals (Sweden)

    Gilda Ferreira

    2011-01-01

    Full Text Available Several proof translations of classical mathematics into intuitionistic mathematics have been proposed in the literature over the past century. These are normally referred to as negative translations or double-negation translations. Among those, the most commonly cited are translations due to Kolmogorov, Gödel, Gentzen, Kuroda and Krivine (in chronological order). In this paper we propose a framework for explaining how these different translations are related to each other. More precisely, we define a notion of a (modular) simplification starting from the Kolmogorov translation, which leads to a partial order between different negative translations. In this derived ordering, Kuroda and Krivine are minimal elements. Two new minimal translations are introduced, with the Gödel and Gentzen translations sitting in between Kolmogorov and one of these new translations.
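
    For readers unfamiliar with these translations, the two most commonly cited ones can be stated compactly. The clauses below follow the standard textbook presentation of the Kolmogorov and Gödel-Gentzen translations; they are background material, not definitions taken from the paper itself.

```latex
% Kolmogorov translation: prefix double negation to every subformula.
\begin{align*}
  P^{K} &= \neg\neg P \quad (P\ \text{atomic}) &
  (A \circ B)^{K} &= \neg\neg\,(A^{K} \circ B^{K}),\quad \circ \in \{\wedge,\vee,\rightarrow\}\\
  (\forall x\, A)^{K} &= \neg\neg\,\forall x\, A^{K} &
  (\exists x\, A)^{K} &= \neg\neg\,\exists x\, A^{K}
\end{align*}
% Goedel-Gentzen translation: double-negate atoms and replace the
% "positive" connectives \vee and \exists by their De Morgan duals.
\begin{align*}
  P^{N} &= \neg\neg P &
  (A \wedge B)^{N} &= A^{N} \wedge B^{N} &
  (A \rightarrow B)^{N} &= A^{N} \rightarrow B^{N}\\
  (A \vee B)^{N} &= \neg(\neg A^{N} \wedge \neg B^{N}) &
  (\forall x\, A)^{N} &= \forall x\, A^{N} &
  (\exists x\, A)^{N} &= \neg\forall x\, \neg A^{N}
\end{align*}
% In both cases, a formula A is classically provable exactly when its
% translation is intuitionistically provable.
```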

  11. Assessment of Data Reliability of Wireless Sensor Network for Bioinformatics

    Directory of Open Access Journals (Sweden)

    Ting Dong

    2017-09-01

    Full Text Available As a focal point of biotechnology, bioinformatics integrates knowledge from biology, mathematics, physics, chemistry, computer science and information science. It generally deals with genome informatics, protein structure and drug design. However, the data or information thus acquired in the main areas of bioinformatics may not be effective. Some researchers have combined bioinformatics with wireless sensor networks (WSNs) in biosensors and other tools, and applied them to areas such as fermentation, environmental monitoring, food engineering, clinical medicine and the military. In this combination, the WSN is used to collect data and information. The reliability of the WSN in bioinformatics is the prerequisite for effective utilization of the information. It is greatly influenced by factors like quality, benefits, service, timeliness and stability, some of which are qualitative and some quantitative. Hence, it is necessary to develop a method that can handle both qualitative and quantitative assessment of information. A viable option is the fuzzy linguistic method, especially the 2-tuple linguistic model, which has been extensively used to cope with such issues. As a result, this paper introduces 2-tuple linguistic representation to assist experts in giving their opinions on different WSNs in bioinformatics involving multiple factors. Moreover, the author proposes a novel way to determine attribute weights and uses the method to weigh the relative importance of the different influencing factors, which can be considered as attributes in the assessment of the WSN in bioinformatics. Finally, an illustrative example is given to provide a reasonable solution for the assessment.
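
    The 2-tuple linguistic model mentioned above represents an assessment as a pair (s_i, alpha): a linguistic term from a fixed scale plus a numerical offset, so that aggregated values can be mapped back to words without losing information. The sketch below is a minimal illustration assuming a seven-term scale, example expert ratings and a plain weighted average; the paper's own weighting scheme is more elaborate.

```python
# Minimal sketch of 2-tuple linguistic aggregation as referred to above.
# The seven-term scale, the example ratings and the simple weighted
# average are illustrative assumptions.
TERMS = ["none", "very low", "low", "medium", "high", "very high", "perfect"]

def to_two_tuple(beta: float) -> tuple[str, float]:
    """Map a value in [0, len(TERMS) - 1] to (term, symbolic offset)."""
    i = min(len(TERMS) - 1, max(0, round(beta)))
    return TERMS[i], beta - i            # offset lies in [-0.5, 0.5)

def aggregate(scores: list[float], weights: list[float]) -> tuple[str, float]:
    """Weighted average of numeric assessments, returned as a 2-tuple."""
    beta = sum(s * w for s, w in zip(scores, weights)) / sum(weights)
    return to_two_tuple(beta)

# Example: assess one WSN on quality, timeliness and stability.
scores = [4.0, 5.0, 3.0]                 # expert ratings on the 0-6 scale
weights = [0.5, 0.3, 0.2]                # relative importance of the factors
print(aggregate(scores, weights))        # approximately ('high', 0.1)
```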

  12. High-throughput bioinformatics with the Cyrille2 pipeline system

    Directory of Open Access Journals (Sweden)

    de Groot Joost CW

    2008-02-01

    Full Text Available Background: Modern omics research involves the application of high-throughput technologies that generate vast volumes of data. These data need to be pre-processed, analyzed and integrated with existing knowledge through the use of diverse sets of software tools, models and databases. The analyses are often interdependent and chained together to form complex workflows or pipelines. Given the volume of the data used and the multitude of computational resources available, specialized pipeline software is required to make high-throughput analysis of large-scale omics datasets feasible. Results: We have developed a generic pipeline system called Cyrille2. The system is modular in design and consists of three functionally distinct parts: (1) a web-based, graphical user interface (GUI) that enables a pipeline operator to manage the system; (2) the Scheduler, which forms the functional core of the system and which tracks what data enters the system and determines what jobs must be scheduled for execution; and (3) the Executor, which searches for scheduled jobs and executes these on a compute cluster. Conclusion: The Cyrille2 system is an extensible, modular system, implementing the stated requirements. Cyrille2 enables easy creation and execution of high-throughput, flexible bioinformatics pipelines.
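
    The three-part design described above (GUI, Scheduler, Executor) can be caricatured in a few lines: a scheduler releases a job once all of its input data sets exist, and an executor runs released jobs and registers their outputs. The sketch below is an illustrative toy, not Cyrille2's actual implementation, and it omits the web GUI entirely.

```python
# Toy scheduler/executor in the spirit of the design described above.
# Jobs declare input and output data sets; the scheduler releases a job
# once its inputs are available, and the executor "runs" it. Illustrative
# only; a real executor would submit jobs to a compute cluster.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    inputs: set[str]
    outputs: set[str]
    done: bool = False

def schedule(jobs: list[Job], available: set[str]) -> list[Job]:
    """Return not-yet-run jobs whose inputs are all available."""
    return [j for j in jobs if not j.done and j.inputs <= available]

def execute(job: Job, available: set[str]) -> None:
    print("running", job.name)
    job.done = True
    available |= job.outputs             # outputs become available data

pipeline = [
    Job("align_reads", {"reads.fastq", "genome.fa"}, {"aligned.bam"}),
    Job("call_variants", {"aligned.bam", "genome.fa"}, {"variants.vcf"}),
    Job("annotate", {"variants.vcf"}, {"annotated.vcf"}),
]
available = {"reads.fastq", "genome.fa"}

while ready := schedule(pipeline, available):
    for job in ready:
        execute(job, available)
```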

  13. Exploiting graphics processing units for computational biology and bioinformatics.

    Science.gov (United States)

    Payne, Joshua L; Sinnott-Armstrong, Nicholas A; Moore, Jason H

    2010-09-01

    Advances in the video gaming industry have led to the production of low-cost, high-performance graphics processing units (GPUs) that possess more memory bandwidth and computational capability than central processing units (CPUs), the standard workhorses of scientific computing. With the recent release of general-purpose GPUs and NVIDIA's GPU programming language, CUDA, graphics engines are being adopted widely in scientific computing applications, particularly in the fields of computational biology and bioinformatics. The goal of this article is to concisely present an introduction to GPU hardware and programming, aimed at the computational biologist or bioinformaticist. To this end, we discuss the primary differences between GPU and CPU architecture, introduce the basics of the CUDA programming language, and discuss important CUDA programming practices, such as the proper use of coalesced reads, data types, and memory hierarchies. We highlight each of these topics in the context of computing the all-pairs distance between instances in a dataset, a common procedure in numerous disciplines of scientific computing. We conclude with a runtime analysis of the GPU and CPU implementations of the all-pairs distance calculation. We show our final GPU implementation to outperform the CPU implementation by a factor of 1700.
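
    The all-pairs distance computation used as the running example above is simple to state, which is what makes it a good target for GPU parallelisation. The sketch below gives a plain NumPy (CPU) version of the same computation, the kind of baseline against which the CUDA implementation discussed in the article is compared; it is not the authors' GPU code.

```python
# All-pairs Euclidean distances between instances in a dataset, the
# running example discussed above, in plain NumPy on the CPU.
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(size=(1000, 20))       # 1000 instances, 20 features each

# Vectorised identity: ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
sq_norms = (data ** 2).sum(axis=1)
sq_dist = sq_norms[:, None] + sq_norms[None, :] - 2.0 * data @ data.T
distances = np.sqrt(np.maximum(sq_dist, 0.0))  # clamp tiny negative round-off

print(distances.shape)                   # (1000, 1000)
print(float(distances[0, 1]))            # distance between instances 0 and 1
```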

  14. Graphics processing units in bioinformatics, computational biology and systems biology.

    Science.gov (United States)

    Nobile, Marco S; Cazzaniga, Paolo; Tangherloni, Andrea; Besozzi, Daniela

    2017-09-01

    Several studies in Bioinformatics, Computational Biology and Systems Biology rely on the definition of physico-chemical or mathematical models of biological systems at different scales and levels of complexity, ranging from the interaction of atoms in single molecules up to genome-wide interaction networks. Traditional computational methods and software tools developed in these research fields share a common trait: they can be computationally demanding on Central Processing Units (CPUs), therefore limiting their applicability in many circumstances. To overcome this issue, general-purpose Graphics Processing Units (GPUs) are gaining an increasing attention by the scientific community, as they can considerably reduce the running time required by standard CPU-based software, and allow more intensive investigations of biological systems. In this review, we present a collection of GPU tools recently developed to perform computational analyses in life science disciplines, emphasizing the advantages and the drawbacks in the use of these parallel architectures. The complete list of GPU-powered tools here reviewed is available at http://bit.ly/gputools. © The Author 2016. Published by Oxford University Press.

  15. Gender issues in translation

    OpenAIRE

    ERGASHEVA G.I.

    2015-01-01

    The following research concerns gender in translation, dealing specifically with the issue of translators' gender identity and its effect on their translations, as well as with how gender itself is translated and produced. We will try to clarify what gender is, how gender manifests itself in the system of language, and what problems translators encounter when translating or producing gender-related materials.

  16. Cultural Context and Translation

    Institute of Scientific and Technical Information of China (English)

    张敏

    2009-01-01

    Cultural context plays an important role in translation. Because translation is a cross-cultural activity, the cultural context that influences translating consists of the cultural contexts of both the source language and the target language. This article first analyzes the concepts of context and cultural context, then, following the procedure of translating, classifies cultural context into two stages and discusses how each of them influences translating.

  17. Translation-coupling systems

    Science.gov (United States)

    Pfleger, Brian; Mendez-Perez, Daniel

    2013-11-05

    Disclosed are systems and methods for coupling translation of a target gene to a detectable response gene. A version of the invention includes a translation-coupling cassette. The translation-coupling cassette includes a target gene, a response gene, a response-gene translation control element, and a secondary structure-forming sequence that reversibly forms a secondary structure masking the response-gene translation control element. Masking of the response-gene translation control element inhibits translation of the response gene. Full translation of the target gene results in unfolding of the secondary structure and consequent translation of the response gene. Translation of the target gene is determined by detecting presence of the response-gene protein product. The invention further includes RNA transcripts of the translation-coupling cassettes, vectors comprising the translation-coupling cassettes, hosts comprising the translation-coupling cassettes, methods of using the translation-coupling cassettes, and gene products produced with the translation-coupling cassettes.

  18. G-DOC Plus - an integrative bioinformatics platform for precision medicine.

    Science.gov (United States)

    Bhuvaneshwar, Krithika; Belouali, Anas; Singh, Varun; Johnson, Robert M; Song, Lei; Alaoui, Adil; Harris, Michael A; Clarke, Robert; Weiner, Louis M; Gusev, Yuriy; Madhavan, Subha

    2016-04-30

    G-DOC Plus is a data integration and bioinformatics platform that uses cloud computing and other advanced computational tools to handle a variety of biomedical BIG DATA including gene expression arrays, NGS and medical images so that they can be analyzed in the full context of other omics and clinical information. G-DOC Plus currently holds data from over 10,000 patients selected from private and public resources including Gene Expression Omnibus (GEO), The Cancer Genome Atlas (TCGA) and the recently added datasets from REpository for Molecular BRAin Neoplasia DaTa (REMBRANDT), caArray studies of lung and colon cancer, ImmPort and the 1000 genomes data sets. The system allows researchers to explore clinical-omic data one sample at a time, as a cohort of samples, or at the level of a population, providing the user with a comprehensive view of the data. G-DOC Plus tools have been leveraged in cancer and non-cancer studies for hypothesis generation and validation, biomarker discovery and multi-omics analysis, to explore somatic mutations and cancer MRI images, as well as for training and graduate education in bioinformatics, data and computational sciences. Several of these use cases are described in this paper to demonstrate its multifaceted usability. G-DOC Plus can be used to support a variety of user groups in multiple domains to enable hypothesis generation for precision medicine research. The long-term vision of G-DOC Plus is to extend this translational bioinformatics platform to stay current with emerging omics technologies and analysis methods to continue supporting novel hypothesis generation, analysis and validation for integrative biomedical research. By integrating several aspects of the disease and exposing various data elements, such as outpatient lab workup, pathology, radiology, current treatments, molecular signatures and expected outcomes over a web interface, G-DOC Plus will continue to strengthen precision medicine research. G-DOC Plus is available

  19. Cultural adaptation, translation and validation of a functional outcome questionnaire (TESS) to Portuguese with application to patients with lower extremity osteosarcoma.

    Science.gov (United States)

    Saraiva, Daniela; de Camargo, Beatriz; Davis, Aileen M

    2008-05-01

    Evaluation of physical functioning is an important tool for planning rehabilitation. Instruments need to be culturally adapted for use in non-English speaking countries. The aim of this study was to culturally adapt, including translation and preliminary validation, the Toronto extremity salvage score (TESS) for Brazil, in a sample of adolescents and young adults treated for lower extremity osteosarcoma. The process included two independent forward translations of TESS questionnaire, consensus between translators on a forward translation, back-translation by two independent translators, and a review of the back-translations. Internal consistency of the TESS and known groups validity were also evaluated. Internal consistency for the 30 item TESS was high (coefficient alpha = 0.87). TESS score ranges from 0 to 100. Forty-eight patients completed the questionnaire and scores ranged from 56 to 100 (mean score: 89.6). Patients receiving no pain medications scored higher on the TESS than those who were receiving pain medication (P = 0.014), and patients using walking aids had slightly higher but not statistically different scores. Those who were treated with amputation had higher scores than those who were treated with limb salvage procedures (P = 0.003). Preliminary evidence suggests that Brazilian-Portuguese translation is acceptable, understandable, reliable, and valid for evaluating the function in adolescents and young adults with osteosarcoma in lower extremity in Brazil. (c) 2008 Wiley-Liss, Inc.

  20. Writing Through: Practising Translation

    Directory of Open Access Journals (Sweden)

    Joel Scott

    2010-05-01

    Full Text Available This essay exists as a segment in a line of study and writing practice that moves between a critical theory analysis of translation studies conceptions of language, and the practical questions of what those ideas might mean for contemporary translation and writing practice. Although the underlying preoccupation of this essay, and my more general line of inquiry, is translation studies and practice, in many ways translation is merely a way into a discussion on language. For this essay, translation is the threshold of language. But the two trails of the discussion never manage to elude each other, and these concatenations have informed two experimental translation methods, referred to here as Live Translations and Series Translations. Following the essay are a number of poems in translation, all of which come from Blanco Nuclear by the contemporary Spanish poet, Esteban Pujals Gesalí. The first group, the Live Translations consist of transcriptions I made from audio recordings read in a public setting, in which the texts were translated in situ, either off the page of original Spanish-language poems, or through a process very much like that carried out by simultaneous translators, for which readings of the poems were played back to me through headphones at varying speeds to be translated before the audience. The translations collected are imperfect renderings, attesting to a moment in language practice rather than language objects. The second method involves an iterative translation process, by which three versions of any one poem are rendered, with varying levels of fluency, fidelity and servility. All three translations are presented one after the other as a series, with no version asserting itself as the primary translation. These examples, as well as the translation methods themselves, are intended as preliminary experiments within an endlessly divergent continuum of potential methods and translations, and not as a complete representation of

  1. Wild translation surfaces and infinite genus

    OpenAIRE

    Randecker, Anja

    2014-01-01

    The Gauss-Bonnet formula for classical translation surfaces relates the cone angle of the singularities (geometry) to the genus of the surface (topology). When considering more general translation surfaces, we observe so-called wild singularities for which the notion of cone angle is not applicable any more. In this article, we study whether there still exist relations between the geometry and the topology for translation surfaces with wild singularities. By considering short saddle connectio...

  2. Translational medicine and drug discovery

    National Research Council Canada - National Science Library

    Littman, Bruce H; Krishna, Rajesh

    2011-01-01

    ..., and examples of their application to real-life drug discovery and development. The latest thinking is presented by researchers from many of the world's leading pharmaceutical companies, including Pfizer, Merck, Eli Lilly, Abbott, and Novartis, as well as from academic institutions and public- private partnerships that support translational research...

  3. Localizing apps a practical guide for translators and translation students

    CERN Document Server

    Roturier, Johann

    2015-01-01

    The software industry has undergone rapid development since the beginning of the twenty-first century. These changes have had a profound impact on translators who, due to the evolving nature of digital content, are under increasing pressure to adapt their ways of working. Localizing Apps looks at these challenges by focusing on the localization of software applications, or apps. In each of the five core chapters, Johann Roturier examines: The role of translation and other linguistic activities in adapting software to the needs of different cultures (localization); The procedures required to prep

  4. Using example-based machine translation to translate DVD subtitles

    DEFF Research Database (Denmark)

    Flanagan, Marian

    between Swedish and Danish and Swedish and Norwegian subtitles, with the company already reporting a successful return on their investment. The hybrid EBMT/SMT system used in the current research, on the other hand, remains within the confines of academic research, and the real potential of the system...... allotted to produce the subtitles have both decreased. Therefore, this market is recognised as a potential real-world application of MT. Recent publications have introduced Corpus-Based MT approaches to translate subtitles. An SMT system has been implemented in a Swedish subtitling company to translate...

  5. Lecture 10: The European Bioinformatics Institute - "Big data" for biomedical sciences

    CERN Multimedia

    CERN. Geneva; Dana, Jose

    2013-01-01

    Part 1: Big data for biomedical sciences (Tom Hancocks) Ten years ago, the first international 'Big Biology' project, the sequencing of the human genome, was completed. In the years since, the biological sciences have seen a vast growth in data. In the coming years, advances will come from the integration of experimental approaches and their translation into applied technologies in the hospital, the clinic and even at home. This talk will examine the development of infrastructure, physical and virtual, that will allow millions of life scientists across Europe better access to biological data. Tom studied Human Genetics at the University of Leeds and McMaster University, before completing an MSc in Analytical Genomics at the University of Birmingham. He has worked for the UK National Health Service in diagnostic genetics and in training healthcare scientists and clinicians in bioinformatics. Tom joined the EBI in 2012 and is responsible for the scientific development and delivery of training for the BioMedBridges pr...

  6. Bioinformatics Meets Virology: The European Virus Bioinformatics Center's Second Annual Meeting.

    Science.gov (United States)

    Ibrahim, Bashar; Arkhipova, Ksenia; Andeweg, Arno C; Posada-Céspedes, Susana; Enault, François; Gruber, Arthur; Koonin, Eugene V; Kupczok, Anne; Lemey, Philippe; McHardy, Alice C; McMahon, Dino P; Pickett, Brett E; Robertson, David L; Scheuermann, Richard H; Zhernakova, Alexandra; Zwart, Mark P; Schönhuth, Alexander; Dutilh, Bas E; Marz, Manja

    2018-05-14

    The Second Annual Meeting of the European Virus Bioinformatics Center (EVBC), held in Utrecht, Netherlands, focused on computational approaches in virology, with topics including (but not limited to) virus discovery, diagnostics, (meta-)genomics, modeling, epidemiology, molecular structure, evolution, and viral ecology. The goals of the Second Annual Meeting were threefold: (i) to bring together virologists and bioinformaticians from across the academic, industrial, professional, and training sectors to share best practice; (ii) to provide a meaningful and interactive scientific environment to promote discussion and collaboration between students, postdoctoral fellows, and both new and established investigators; (iii) to inspire and suggest new research directions and questions. Approximately 120 researchers from around the world attended the Second Annual Meeting of the EVBC this year, including 15 renowned international speakers. This report presents an overview of new developments and novel research findings that emerged during the meeting.

  7. Why Translation Is Difficult

    DEFF Research Database (Denmark)

    Carl, Michael; Schaeffer, Moritz Jonas

    2017-01-01

    The paper develops a definition of translation literality that is based on the syntactic and semantic similarity of the source and the target texts. We provide theoretical and empirical evidence that absolute literal translations are easy to produce. Based on a multilingual corpus of alternative...... translations we investigate the effects of cross-lingual syntactic and semantic distance on translation production times and find that non-literality makes from-scratch translation and post-editing difficult. We show that statistical machine translation systems encounter even more difficulties with non-literality....

  8. Determinants of translation ambiguity

    Science.gov (United States)

    Degani, Tamar; Prior, Anat; Eddington, Chelsea M.; Arêas da Luz Fontes, Ana B.; Tokowicz, Natasha

    2016-01-01

    Ambiguity in translation is highly prevalent, and has consequences for second-language learning and for bilingual lexical processing. To better understand this phenomenon, the current study compared the determinants of translation ambiguity across four sets of translation norms from English to Spanish, Dutch, German and Hebrew. The number of translations an English word received was correlated across these different languages, and was also correlated with the number of senses the word has in English, demonstrating that translation ambiguity is partially determined by within-language semantic ambiguity. For semantically-ambiguous English words, the probability of the different translations in Spanish and Hebrew was predicted by the meaning-dominance structure in English, beyond the influence of other lexical and semantic factors, for bilinguals translating from their L1, and translating from their L2. These findings are consistent with models postulating direct access to meaning from L2 words for moderately-proficient bilinguals. PMID:27882188

  9. Translation in ESL Classes

    Directory of Open Access Journals (Sweden)

    Nagy Imola Katalin

    2015-12-01

    Full Text Available The problem of translation in foreign language classes cannot be dealt with unless we attempt to make an overview of what translation meant for language teaching in different periods of language pedagogy. From the translation-oriented grammar-translation method through the complete ban on translation and mother tongue during the times of the audio-lingual approaches, we have come today to reconsider the role and status of translation in ESL classes. This article attempts to advocate for translation as a useful ESL class activity, which can completely fulfil the requirements of communicativeness. We also attempt to identify some activities and games, which rely on translation in some books published in the 1990s and the 2000s.

  10. Translation and Quality Management

    DEFF Research Database (Denmark)

    Petersen, Margrethe

    1996-01-01

    theory which would seem likely to be of interest in this connection and section 2. gives a linguist's introduction to the part of the area of quality management which I consider relevant for present purposes. Section 3. is devoted to the case study of a small translation firm which has been certified......The aim of this article is to consider the issue of quality in translation. Specifically, the question under consideration is whether quality assurance in relation to translation is feasible and, if so, what some of the implications for translation theory, translation practice and the teaching...... of translation would be. To provide a backdrop against which the issue may be discussed, I present an overview of the two areas which seem most likely to hold potential answers, viz., that of translation theory and that of quality management. Section 1. gives a brief outline of some contributions to translation...

  11. International Meeting on Needs and Challenges in Translational Medicine: filling the gap between basic research and clinical applications. Book of abstract

    International Nuclear Information System (INIS)

    Moretti, F.; Belardelli, F.; Romero, M.

    2008-01-01

    This multidisciplinary international meeting is organized by the Istituto Superiore di Sanita, in collaboration with Alleanza Contro il Cancro (Alliance Against Cancer, the Network of the Italian Comprehensive Cancer Centres) and EATRIS (European Advanced Translational Research Infrastructure in Medicine). The primary goal of the meeting is to provide a scientific forum to discuss the recent progress in translational research. Moreover, a particular focus will be devoted to the identification of needs, obstacles and new opportunities to promote translational research in biomedicine. The scientific programme will cover a broad range of fields including: cancer; neurosciences; rare diseases; cardiovascular diseases and infectious and autoimmune diseases. Furthermore, special attention will be given to the discussion of how comprehensive initiatives for addressing critical regulatory issues for First-In-Man - Phase I clinical studies can potentially improve the efficiency and quality of biomedical and translational research at an international level [it

  12. Remote data retrieval for bioinformatics applications: an agent migration approach.

    Directory of Open Access Journals (Sweden)

    Lei Gao

    Full Text Available Several approaches have been developed to retrieve data automatically from one or multiple remote biological data sources. However, most of them require researchers to remain online and wait for returned results. The latter not only requires a highly available network connection, but may also cause network overload. Moreover, so far none of the existing approaches has been designed to address the following problems when retrieving remote data in a mobile network environment: (1) the resources of mobile devices are limited; (2) the network connection is of relatively low quality; and (3) mobile users are not always online. To address the aforementioned problems, we integrate an agent migration approach with a multi-agent system to overcome the high-latency or limited-bandwidth problem by moving computations to the required resources or services. More importantly, the approach is well suited to mobile computing environments. Presented in this paper are also the system architecture, the migration strategy, as well as the security authentication of agent migration. As a demonstration, remote data retrieval from GenBank was used to illustrate the feasibility of the proposed approach.
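
    The agent-migration system itself is not reproduced here; as a point of reference only, the sketch below shows the conventional online retrieval of a GenBank record that such approaches aim to improve upon, using Biopython's Entrez utilities (assumed installed). The e-mail address is a placeholder, and the accession number is just an example.

        # Conventional (non-agent) GenBank retrieval with Biopython's Entrez module,
        # shown only as a baseline for the agent-migration approach described above.
        # The e-mail address is a placeholder; the accession is an arbitrary example.
        from Bio import Entrez, SeqIO

        Entrez.email = "your.name@example.org"   # NCBI requires a contact address

        handle = Entrez.efetch(db="nucleotide", id="NM_000518",   # human beta-globin mRNA, as an example
                               rettype="gb", retmode="text")
        record = SeqIO.read(handle, "genbank")
        handle.close()

        print(record.id, record.description)
        print("sequence length:", len(record.seq), "bp")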

  13. Application of bioinformatics to optimization of serum proteome in ...

    African Journals Online (AJOL)

    SAM

    2014-05-14

    OSCC) from oral leukoplakia ... study the sera proteomes of 32 healthy volunteers, 6 patients with oral mucosa leukoplakia, 28 OSCC patients, and 8 .... American Ciphergen SELDI Protein Biology System II plus (PBS II plus) and ...

  14. Memetics and Translation Studies

    OpenAIRE

    Andrew, Chesterman

    2000-01-01

    Translation Studies is a branch of memetics. This is a claim, a hypothesis. More specifically, it is an interpretive hypothesis: I claim that Translation Studies can be thus interpreted, and that this is a useful thing to do because it offers a new and beneficial way of understanding translation.

  15. Sound Effects in Translation

    DEFF Research Database (Denmark)

    Mees, Inger M.; Dragsted, Barbara; Gorm Hansen, Inge

    2015-01-01

    ), Translog was employed to measure task times. The quality of the products was assessed by three experienced translators, and the number and types of misrecognitions were identified by a phonetician. Results indicate that SR translation provides a potentially useful supplement to written translation...

  16. Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community

    Science.gov (United States)

    2012-01-01

    Background A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the scale of sequence datasets, scientific value cannot be obtained from acquiring a sequencer unless it is accompanied by an equal investment in informatics infrastructure. Results Cloud BioLinux is a publicly accessible Virtual Machine (VM) that enables scientists to quickly provision on-demand infrastructures for high-performance bioinformatics computing using cloud platforms. Users have instant access to a range of pre-configured command line and graphical software applications, including a full-featured desktop interface, documentation and over 135 bioinformatics packages for applications including sequence alignment, clustering, assembly, display, editing, and phylogeny. Each tool's functionality is fully described in the documentation directly accessible from the graphical interface of the VM. Besides the Amazon EC2 cloud, we have started instances of Cloud BioLinux on a private Eucalyptus cloud installed at the J. Craig Venter Institute, and demonstrated access to the bioinformatic tools interface through a remote connection to EC2 instances from a local desktop computer. Documentation for using Cloud BioLinux on EC2 is available from our project website, while a Eucalyptus cloud image and VirtualBox Appliance is also publicly available for download and use by researchers with access to private clouds. Conclusions Cloud BioLinux provides a platform for developing bioinformatics infrastructures on the cloud. An automated and configurable process builds Virtual Machines, allowing the

  17. Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community.

    Science.gov (United States)

    Krampis, Konstantinos; Booth, Tim; Chapman, Brad; Tiwari, Bela; Bicak, Mesude; Field, Dawn; Nelson, Karen E

    2012-03-19

    A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the scale of sequence datasets, scientific value cannot be obtained from acquiring a sequencer unless it is accompanied by an equal investment in informatics infrastructure. Cloud BioLinux is a publicly accessible Virtual Machine (VM) that enables scientists to quickly provision on-demand infrastructures for high-performance bioinformatics computing using cloud platforms. Users have instant access to a range of pre-configured command line and graphical software applications, including a full-featured desktop interface, documentation and over 135 bioinformatics packages for applications including sequence alignment, clustering, assembly, display, editing, and phylogeny. Each tool's functionality is fully described in the documentation directly accessible from the graphical interface of the VM. Besides the Amazon EC2 cloud, we have started instances of Cloud BioLinux on a private Eucalyptus cloud installed at the J. Craig Venter Institute, and demonstrated access to the bioinformatic tools interface through a remote connection to EC2 instances from a local desktop computer. Documentation for using Cloud BioLinux on EC2 is available from our project website, while a Eucalyptus cloud image and VirtualBox Appliance is also publicly available for download and use by researchers with access to private clouds. Cloud BioLinux provides a platform for developing bioinformatics infrastructures on the cloud. An automated and configurable process builds Virtual Machines, allowing the development of highly
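
    As a rough illustration of the provisioning step described above (not an excerpt from the Cloud BioLinux documentation), the sketch below starts a single EC2 instance from a machine image using the boto3 AWS SDK. The AMI ID, key-pair name, instance type and region are placeholders that would need to be replaced with real values, such as the actual Cloud BioLinux image identifier.

        # Hypothetical sketch of launching one cloud VM from a machine image with boto3.
        # The AMI ID, key-pair name, instance type and region are placeholders, not the
        # actual Cloud BioLinux image identifiers.
        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        response = ec2.run_instances(
            ImageId="ami-0123456789abcdef0",   # placeholder: look up the desired public image
            InstanceType="t3.large",
            MinCount=1,
            MaxCount=1,
            KeyName="my-keypair",              # placeholder: an existing EC2 key pair
        )
        print("Launched instance:", response["Instances"][0]["InstanceId"])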

  18. Biochip microsystem for bioinformatics recognition and analysis

    Science.gov (United States)

    Lue, Jaw-Chyng (Inventor); Fang, Wai-Chi (Inventor)

    2011-01-01

    A system with applications in pattern recognition, or classification, of DNA assay samples. Because DNA reference and sample material in wells of an assay may be caused to fluoresce depending upon dye added to the material, the resulting light may be imaged onto an embodiment comprising an array of photodetectors and an adaptive neural network, with applications to DNA analysis. Other embodiments are described and claimed.
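
    The patented hardware itself is not reproduced here; purely to echo the pattern-recognition idea, the sketch below trains a small neural-network classifier on synthetic "fluorescence intensity" vectors with scikit-learn. All data, dimensions and parameters are made up for illustration.

        # Illustrative only: a small neural network classifying synthetic fluorescence-
        # intensity vectors, loosely echoing the pattern-recognition idea above.
        # The data are synthetic; this is not the patented biochip system.
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        n_wells, n_channels = 200, 16

        X = rng.normal(1.0, 0.3, size=(n_wells, n_channels))
        y = rng.integers(0, 2, size=n_wells)
        X[y == 1, 8:] += 1.5           # class-1 wells fluoresce more in the last channels

        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        clf.fit(X[:150], y[:150])
        print("held-out accuracy:", clf.score(X[150:], y[150:]))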

  19. GOBLET: the Global Organisation for Bioinformatics Learning, Education and Training.

    Science.gov (United States)

    Attwood, Teresa K; Atwood, Teresa K; Bongcam-Rudloff, Erik; Brazas, Michelle E; Corpas, Manuel; Gaudet, Pascale; Lewitter, Fran; Mulder, Nicola; Palagi, Patricia M; Schneider, Maria Victoria; van Gelder, Celia W G

    2015-04-01

    In recent years, high-throughput technologies have brought big data to the life sciences. The march of progress has been rapid, leaving in its wake a demand for courses in data analysis, data stewardship, computing fundamentals, etc., a need that universities have not yet been able to satisfy--paradoxically, many are actually closing "niche" bioinformatics courses at a time of critical need. The impact of this is being felt across continents, as many students and early-stage researchers are being left without appropriate skills to manage, analyse, and interpret their data with confidence. This situation has galvanised a group of scientists to address the problems on an international scale. For the first time, bioinformatics educators and trainers across the globe have come together to address common needs, rising above institutional and international boundaries to cooperate in sharing bioinformatics training expertise, experience, and resources, aiming to put ad hoc training practices on a more professional footing for the benefit of all.

  20. The Temple Translator's Workstation Project

    National Research Council Canada - National Science Library

    Vanni, Michelle; Zajac, Remi

    1996-01-01

    .... The Temple Translator's Workstation is incorporated into a Tipster document management architecture and it allows both translator/analysts and monolingual analysts to use the machine-translation...

  1. Sound Effects in Translation

    DEFF Research Database (Denmark)

    Mees, Inger M.; Dragsted, Barbara; Gorm Hansen, Inge

    2013-01-01

    On the basis of a pilot study using speech recognition (SR) software, this paper attempts to illustrate the benefits of adopting an interdisciplinary approach in translator training. It shows how the collaboration between phoneticians, translators and interpreters can (1) advance research, (2) have......), Translog was employed to measure task times. The quality of the products was assessed by three experienced translators, and the number and types of misrecognitions were identified by a phonetician. Results indicate that SR translation provides a potentially useful supplement to written translation...

  2. Lost in translation

    DEFF Research Database (Denmark)

    Hedegaard, Steffen; Simonsen, Jakob Grue

    2011-01-01

    of translated texts. Our results suggest (i) that frame-based classifiers are usable for author attribution of both translated and untranslated texts; (ii) that frame-based classifiers generally perform worse than the baseline classifiers for untranslated texts, but (iii) perform as well as, or superior...... to the baseline classifiers on translated texts; (iv) that—contrary to current belief—naïve classifiers based on lexical markers may perform tolerably on translated texts if the combination of author and translator is present in the training set of a classifier....

  3. Speaking your Translation

    DEFF Research Database (Denmark)

    Dragsted, Barbara; Mees, Inger M.; Gorm Hansen, Inge

    2011-01-01

    In this article we discuss the translation processes and products of 14 MA students who produced translations from Danish (L1) into English (L2) under different working conditions: (1) written translation, (2) sight translation, and (3) sight translation with a speech recognition (SR) tool. Audio......, since students were dictating in their L2, we looked into the number and types of error that occurred when using the SR software. Items that were misrecognised by the program could be divided into three categories: homophones, hesitations, and incorrectly pronounced words. Well over fifty per cent...

  4. Best practices in bioinformatics training for life scientists

    DEFF Research Database (Denmark)

    Via, Allegra; Blicher, Thomas; Bongcam-Rudloff, Erik

    2013-01-01

    their data efficiently, and progress their research, is a challenge across the globe. Delivering good training goes beyond traditional lectures and resource-centric demos, using interactivity, problem-solving exercises and cooperative learning to substantially enhance training quality and learning outcomes...... to environmental researchers, a common theme is the need not just to use, and gain familiarity with, bioinformatics tools and resources but also to understand their underlying fundamental theoretical and practical concepts. Providing bioinformatics training to empower life scientists to handle and analyse...

  5. Lost in translation?

    DEFF Research Database (Denmark)

    Granas, Anne Gerd; Nørgaard, Lotte Stig; Sporrong, Sofia Kälvemark

    2014-01-01

    OBJECTIVE: The "Beliefs about Medicines Questionnaire" (BMQ) assess balance of necessity and concern of medicines. The BMQ has been translated from English to many languages. However, the original meaning of statements, such as "My medicine is a mystery to me", may be lost in translation. The aim...... of this study is to compare three Scandinavian translations of the BMQ. (1) How reliable are the translations? (2) Are they still valid after translation? METHODS: Translated Norwegian, Swedish and Danish versions of the BMQ were scrutinized by three native Scandinavian researchers. Linguistic differences...... and ambiguities in the 5-point Likert scale and the BMQ statements were compared. RESULTS: In the Scandinavian translations, the Likert scale expanded beyond the original version at one endpoint (Swedish) or both endpoints (Danish). In the BMQ statements, discrepancies ranged from smaller inaccuracies toward...

  6. What is a translator?

    Directory of Open Access Journals (Sweden)

    Martha Pulido

    2016-08-01

    Full Text Available I copied the title from Foucault’s text, "Qu'est-ce qu'un auteur" in Dits et écrits [1969], Paris, Gallimard, 1994, that I read in French, then in English in Donald F. Bouchard’s and Sherry Simon’s translation, and finally in Spanish in Yturbe Corina’s translation, and applied for the translator some of the analysis that Foucault presents to define the author. Foucault suggests that if we cannot define an author, at least we can see where their function is reflected. My purpose in this paper is to present those surfaces where the function of the translator is reflected or where it can be revealed, and to analyse the categories that could lead us to the elaboration of a suitable definition of a Translator. I dare already give a compound noun for the translator: Translator-Function.

  7. What is a translator?

    Directory of Open Access Journals (Sweden)

    Martha Pulido

    2016-05-01

    Full Text Available I copied the title from Foucault’s text, "Qu'est-ce qu'un auteur" in Dits et écrits [1969], Paris, Gallimard, 1994, that I read in French, then in English in Donald F. Bouchard’s and Sherry Simon’s translation, and finally in Spanish in Yturbe Corina’s translation, and applied for the translator some of the analysis that Foucault presents to define the author. Foucault suggests that if we cannot define an author, at least we can see where their function is reflected. My purpose in this paper is to present those surfaces where the function of the translator is reflected or where it can be revealed, and to analyse the categories that could lead us to the elaboration of a suitable definition of a Translator. I dare already give a compound noun for the translator: Translator-Function.

  8. Incorporating Genomics and Bioinformatics across the Life Sciences Curriculum

    Energy Technology Data Exchange (ETDEWEB)

    Ditty, Jayna L.; Kvaal, Christopher A.; Goodner, Brad; Freyermuth, Sharyn K.; Bailey, Cheryl; Britton, Robert A.; Gordon, Stuart G.; Heinhorst, Sabine; Reed, Kelynne; Xu, Zhaohui; Sanders-Lorenz, Erin R.; Axen, Seth; Kim, Edwin; Johns, Mitrick; Scott, Kathleen; Kerfeld, Cheryl A.

    2011-08-01

    Undergraduate life sciences education needs an overhaul, as clearly described in the National Research Council of the National Academies publication BIO 2010: Transforming Undergraduate Education for Future Research Biologists. Among BIO 2010's top recommendations is the need to involve students in working with real data and tools that reflect the nature of life sciences research in the 21st century. Education research studies support the importance of utilizing primary literature, designing and implementing experiments, and analyzing results in the context of a bona fide scientific question in cultivating the analytical skills necessary to become a scientist. Incorporating these basic scientific methodologies in undergraduate education leads to increased undergraduate and post-graduate retention in the sciences. Toward this end, many undergraduate teaching organizations offer training and suggestions for faculty to update and improve their teaching approaches to help students learn as scientists, through design and discovery (e.g., Council of Undergraduate Research [www.cur.org] and Project Kaleidoscope [www.pkal.org]). With the advent of genome sequencing and bioinformatics, many scientists now formulate biological questions and interpret research results in the context of genomic information. Just as the use of bioinformatic tools and databases changed the way scientists investigate problems, it must change how scientists teach to create new opportunities for students to gain experiences reflecting the influence of genomics, proteomics, and bioinformatics on modern life sciences research. Educators have responded by incorporating bioinformatics into diverse life science curricula. While these published exercises in, and guidelines for, bioinformatics curricula are helpful and inspirational, faculty new to the area of bioinformatics inevitably need training in the theoretical underpinnings of the algorithms. Moreover, effectively integrating bioinformatics

  9. Open source tools and toolkits for bioinformatics: significance, and where are we?

    Science.gov (United States)

    Stajich, Jason E; Lapp, Hilmar

    2006-09-01

    This review summarizes important work in open-source bioinformatics software that has occurred over the past couple of years. The survey is intended to illustrate how programs and toolkits whose source code has been developed or released under an Open Source license have changed informatics-heavy areas of life science research. Rather than creating a comprehensive list of all tools developed over the last 2-3 years, we use a few selected projects encompassing toolkit libraries, analysis tools, data analysis environments and interoperability standards to show how freely available and modifiable open-source software can serve as the foundation for building important applications, analysis workflows and resources.

  10. 5th HUPO BPP Bioinformatics Meeting at the European Bioinformatics Institute in Hinxton, UK--Setting the analysis frame.

    Science.gov (United States)

    Stephan, Christian; Hamacher, Michael; Blüggel, Martin; Körting, Gerhard; Chamrad, Daniel; Scheer, Christian; Marcus, Katrin; Reidegeld, Kai A; Lohaus, Christiane; Schäfer, Heike; Martens, Lennart; Jones, Philip; Müller, Michael; Auyeung, Kevin; Taylor, Chris; Binz, Pierre-Alain; Thiele, Herbert; Parkinson, David; Meyer, Helmut E; Apweiler, Rolf

    2005-09-01

    The Bioinformatics Committee of the HUPO Brain Proteome Project (HUPO BPP) meets regularly to execute the post-lab analyses of the data produced in the HUPO BPP pilot studies. On July 7, 2005 the members came together for the 5th time at the European Bioinformatics Institute (EBI) in Hinxton, UK, hosted by Rolf Apweiler. As a main result, the parameter set of the semi-automated data re-analysis of MS/MS spectra has been elaborated and the subsequent work steps have been defined.

  11. Database and Bioinformatics Studies of Probiotics.

    Science.gov (United States)

    Tao, Lin; Wang, Bohua; Zhong, Yafen; Pow, Siok Hoon; Zeng, Xian; Qin, Chu; Zhang, Peng; Chen, Shangying; He, Weidong; Tan, Ying; Liu, Hongxia; Jiang, Yuyang; Chen, Weiping; Chen, Yu Zong

    2017-09-06

    Probiotics have been widely explored for health benefits, animal care, and agricultural applications. Recent advances in microbiome, microbiota, and microbial dark matter research have fueled greater interest in, and paved the way for, the study of the mechanisms of probiotics and the discovery of new probiotics from uncharacterized microbial sources. A probiotics database named PROBIO was developed to facilitate these efforts and to meet the need for information on known probiotics; it provides comprehensive information about the probiotic functions of 448 marketed, 167 clinical trial/field trial, and 382 research probiotics in use or being studied for use in humans, animals, and plants. The potential applications of the probiotics data are illustrated by several literature-reported investigations, which have used the relevant information for probing the function and mechanism of probiotics and for discovering new probiotics. PROBIO can be accessed free of charge at http://bidd2.nus.edu.sg/probio/homepage.htm.

  12. Intrageneric Primer Design: Bringing Bioinformatics Tools to the Class

    Science.gov (United States)

    Lima, Andre O. S.; Garces, Sergio P. S.

    2006-01-01

    Bioinformatics is one of the fastest growing scientific areas over the last decade. It focuses on the use of informatics tools for the organization and analysis of biological data. An example of their importance is the availability nowadays of dozens of software programs for genomic and proteomic studies. Thus, there is a growing field (private…

  13. A BIOINFORMATIC STRATEGY TO RAPIDLY CHARACTERIZE CDNA LIBRARIES

    Science.gov (United States)

    A Bioinformatic Strategy to Rapidly Characterize cDNA Libraries. G. Charles Ostermeier (1), David J. Dix (2) and Stephen A. Krawetz (1). (1) Departments of Obstetrics and Gynecology, Center for Molecular Medicine and Genetics, & Institute for Scientific Computing, Wayne State Univer...

  14. Bioinformatics in the Netherlands : The value of a nationwide community

    NARCIS (Netherlands)

    van Gelder, Celia W.G.; Hooft, Rob; van Rijswijk, Merlijn; van den Berg, Linda; Kok, Ruben; Reinders, M.J.T.; Mons, Barend; Heringa, Jaap

    2017-01-01

    This review provides a historical overview of the inception and development of bioinformatics research in the Netherlands. Rooted in theoretical biology by foundational figures such as Paulien Hogeweg (at Utrecht University since the 1970s), the developments leading to organizational structures

  15. Bioinformatic tools and guideline for PCR primer design | Abd ...

    African Journals Online (AJOL)

    Bioinformatics has become an essential tool not only for basic research but also for applied research in biotechnology and biomedical sciences. Optimal primer sequence and appropriate primer concentration are essential for maximal specificity and efficiency of PCR. A poorly designed primer can result in little or no ...
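
    As a worked example of two elementary checks that primer-design guidelines typically include (GC content and the Wallace-rule melting-temperature estimate), and not a procedure taken from the cited article, a short sketch follows; real primer design also weighs factors such as secondary structure, primer dimers and target specificity. The primer sequence used is hypothetical.

        # Two elementary primer checks: GC content and the Wallace-rule Tm estimate,
        # Tm ~ 2*(A+T) + 4*(G+C), a rough rule of thumb for short oligonucleotides.
        # Illustrative sketch only; the primer sequence below is hypothetical.
        def gc_content(primer: str) -> float:
            primer = primer.upper()
            return 100.0 * (primer.count("G") + primer.count("C")) / len(primer)

        def wallace_tm(primer: str) -> int:
            primer = primer.upper()
            at = primer.count("A") + primer.count("T")
            gc = primer.count("G") + primer.count("C")
            return 2 * at + 4 * gc

        primer = "ATGCGTACGTTAGCCTAGGA"                    # hypothetical 20-mer
        print(f"GC content: {gc_content(primer):.1f}%")    # commonly aimed at roughly 40-60%
        print(f"Wallace Tm: {wallace_tm(primer)} C")       # commonly aimed at roughly 50-65 C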

  16. Learning Genetics through an Authentic Research Simulation in Bioinformatics

    Science.gov (United States)

    Gelbart, Hadas; Yarden, Anat

    2006-01-01

    Following the rationale that learning is an active process of knowledge construction as well as enculturation into a community of experts, we developed a novel web-based learning environment in bioinformatics for high-school biology majors in Israel. The learning environment enables the learners to actively participate in a guided inquiry process…

  17. Hidden in the Middle: Culture, Value and Reward in Bioinformatics

    Science.gov (United States)

    Lewis, Jamie; Bartlett, Andrew; Atkinson, Paul

    2016-01-01

    Bioinformatics--the so-called shotgun marriage between biology and computer science--is an interdiscipline. Despite interdisciplinarity being seen as a virtue, for having the capacity to solve complex problems and foster innovation, it has the potential to place projects and people in anomalous categories. For example, valorised…

  18. Bioinformatics for Undergraduates: Steps toward a Quantitative Bioscience Curriculum

    Science.gov (United States)

    Chapman, Barbara S.; Christmann, James L.; Thatcher, Eileen F.

    2006-01-01

    We describe an innovative bioinformatics course developed under grants from the National Science Foundation and the California State University Program in Research and Education in Biotechnology for undergraduate biology students. The project has been part of a continuing effort to offer students classroom experiences focused on principles and…

  19. Mathematics and evolutionary biology make bioinformatics education comprehensible

    Science.gov (United States)

    Weisstein, Anton E.

    2013-01-01

    The patterns of variation within a molecular sequence data set result from the interplay between population genetic, molecular evolutionary and macroevolutionary processes—the standard purview of evolutionary biologists. Elucidating these patterns, particularly for large data sets, requires an understanding of the structure, assumptions and limitations of the algorithms used by bioinformatics software—the domain of mathematicians and computer scientists. As a result, bioinformatics often suffers a ‘two-culture’ problem because of the lack of broad overlapping expertise between these two groups. Collaboration among specialists in different fields has greatly mitigated this problem among active bioinformaticians. However, science education researchers report that much of bioinformatics education does little to bridge the cultural divide, the curriculum too focused on solving narrow problems (e.g. interpreting pre-built phylogenetic trees) rather than on exploring broader ones (e.g. exploring alternative phylogenetic strategies for different kinds of data sets). Herein, we present an introduction to the mathematics of tree enumeration, tree construction, split decomposition and sequence alignment. We also introduce off-line downloadable software tools developed by the BioQUEST Curriculum Consortium to help students learn how to interpret and critically evaluate the results of standard bioinformatics analyses. PMID:23821621

  20. Rapid cloning and bioinformatic analysis of spinach Y chromosome ...

    Indian Academy of Sciences (India)

    Rapid cloning and bioinformatic analysis of spinach Y chromosome-specific EST sequences. Chuan-Liang Deng, Wei-Li Zhang, Ying Cao, Shao-Jing Wang, ... [Table fragment from the record: homologous-sequence hits listed with score, E-value and percent identity, e.g. Arabidopsis thaliana mRNA for mitochondrial half-ABC transporter (STA1 gene), 389, 2.31E-13, 98.96; SP3-12, Betula pendula histidine kinase 3 (HK3) mRNA, ...]

  1. Staff Scientist - RNA Bioinformatics | Center for Cancer Research

    Science.gov (United States)

    The newly established RNA Biology Laboratory (RBL) at the Center for Cancer Research (CCR), National Cancer Institute (NCI), National Institutes of Health (NIH) in Frederick, Maryland is recruiting a Staff Scientist with strong expertise in RNA bioinformatics to join the Intramural Research Program’s mission of high impact, high reward science. The RBL is the equivalent of an

  2. Mathematics and evolutionary biology make bioinformatics education comprehensible.

    Science.gov (United States)

    Jungck, John R; Weisstein, Anton E

    2013-09-01

    The patterns of variation within a molecular sequence data set result from the interplay between population genetic, molecular evolutionary and macroevolutionary processes-the standard purview of evolutionary biologists. Elucidating these patterns, particularly for large data sets, requires an understanding of the structure, assumptions and limitations of the algorithms used by bioinformatics software-the domain of mathematicians and computer scientists. As a result, bioinformatics often suffers a 'two-culture' problem because of the lack of broad overlapping expertise between these two groups. Collaboration among specialists in different fields has greatly mitigated this problem among active bioinformaticians. However, science education researchers report that much of bioinformatics education does little to bridge the cultural divide, the curriculum too focused on solving narrow problems (e.g. interpreting pre-built phylogenetic trees) rather than on exploring broader ones (e.g. exploring alternative phylogenetic strategies for different kinds of data sets). Herein, we present an introduction to the mathematics of tree enumeration, tree construction, split decomposition and sequence alignment. We also introduce off-line downloadable software tools developed by the BioQUEST Curriculum Consortium to help students learn how to interpret and critically evaluate the results of standard bioinformatics analyses.
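
    A worked instance of the tree-enumeration mathematics mentioned above (not code from the BioQUEST tools): the number of distinct unrooted, fully resolved tree topologies for n labelled taxa is (2n-5)!! = 3 x 5 x ... x (2n-5), which the short sketch below computes to show how quickly exhaustive tree search becomes infeasible.

        # Number of distinct unrooted, fully resolved (binary) tree topologies for
        # n labelled taxa: (2n-5)!! = 3 * 5 * ... * (2n-5), valid for n >= 3.
        def unrooted_binary_trees(n: int) -> int:
            count = 1
            for k in range(3, 2 * n - 4, 2):   # odd factors 3, 5, ..., 2n-5
                count *= k
            return count

        for n in (4, 10, 20):
            print(n, "taxa:", unrooted_binary_trees(n), "possible topologies")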

  3. Discourse Analysis in Translator Training

    OpenAIRE

    Gülfidan Ayvaz

    2015-01-01

    Translator training enables students to gain experience in both linguistic parameters and translation practice. Discourse Analysis is one of the strategies that lead to a better translation process and quality in translation. In that regard, this study aims to present DA as a translation strategy for translation practice and a useful tool for translator training. The relationship between DA and Translator Training is not widely studied. Therefore this study aims to define DA and how it can be...

  4. Establishment, maintenance and in vitro and in vivo applications of primary human glioblastoma multiforme (GBM) xenograft models for translational biology studies and drug discovery.

    Science.gov (United States)

    Carlson, Brett L; Pokorny, Jenny L; Schroeder, Mark A; Sarkaria, Jann N

    2011-03-01

    Development of clinically relevant tumor model systems for glioblastoma multiforme (GBM) is important for advancement of basic and translational biology. One model that has gained wide acceptance in the neuro-oncology community is the primary xenograft model. This model entails the engraftment of patient tumor specimens into the flank of nude mice and subsequent serial passage of these tumors in the flank of mice. These tumors are then used to establish short-term explant cultures or intracranial xenografts. This unit describes detailed procedures for establishment, maintenance, and utilization of a primary GBM xenograft panel for the purpose of using them as tumor models for basic or translational studies.

  5. JavaScript DNA translator: DNA-aligned protein translations.

    Science.gov (United States)

    Perry, William L

    2002-12-01

    There are many instances in molecular biology when it is necessary to identify ORFs in a DNA sequence. While programs exist for displaying protein translations in multiple ORFs in alignment with a DNA sequence, they are often expensive, exist as add-ons to software that must be purchased, or are only compatible with a particular operating system. JavaScript DNA Translator is a shareware application written in JavaScript, a scripting language interpreted by the Netscape Communicator and Internet Explorer Web browsers, which makes it compatible with several different operating systems. While the program uses a familiar Web page interface, it requires no connection to the Internet since calculations are performed on the user's own computer. The program analyzes one or multiple DNA sequences and generates translations in up to six reading frames aligned to a DNA sequence, in addition to displaying translations as separate sequences in FASTA format. ORFs within a reading frame can also be displayed as separate sequences. Flexible formatting options are provided, including the ability to hide ORFs below a minimum size specified by the user. The program is available free of charge at the BioTechniques Software Library (www.Biotechniques.com).
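
    The original JavaScript program is not reproduced here; the sketch below only illustrates the underlying idea of translating a DNA sequence in all six reading frames, using Biopython (assumed installed) and an arbitrary example sequence.

        # Six-frame translation of a DNA sequence with Biopython, illustrating the
        # idea behind the tool described above (this is not the original program).
        from Bio.Seq import Seq

        dna = Seq("ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG")   # arbitrary example sequence

        frames = {}
        for strand_name, strand in (("+", dna), ("-", dna.reverse_complement())):
            for offset in range(3):
                sub = strand[offset:]
                sub = sub[: len(sub) - len(sub) % 3]   # trim to a whole number of codons
                frames[f"{strand_name}{offset + 1}"] = str(sub.translate())

        for frame, protein in frames.items():
            print(frame, protein)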

  6. Treatment of rat spinal cord injury with the neurotrophic factor albumin-oleic acid: translational application for paralysis, spasticity and pain.

    Directory of Open Access Journals (Sweden)

    Gerardo Avila-Martin

    Full Text Available Sensorimotor dysfunction following incomplete spinal cord injury (iSCI) is often characterized by the debilitating symptoms of paralysis, spasticity and pain, which require treatment with novel pleiotropic pharmacological agents. Previous in vitro studies suggest that Albumin (Alb) and Oleic Acid (OA) may play a role together as an endogenous neurotrophic factor. Although Alb can promote basic recovery of motor function after iSCI, the therapeutic effect of OA or Alb-OA on a known translational measure of SCI associated with symptoms of spasticity and change in nociception has not been studied. Following T9 spinal contusion injury in Wistar rats, intrathecal treatment with: (i) Saline, (ii) Alb (0.4 nanomoles), (iii) OA (80 nanomoles), (iv) Alb-Elaidic acid (0.4/80 nanomoles), or (v) Alb-OA (0.4/80 nanomoles) were evaluated on basic motor function, temporal summation of noxious reflex activity, and with a new test of descending modulation of spinal activity below the SCI up to one month after injury. Albumin, OA and Alb-OA treatment inhibited nociceptive Tibialis Anterior (TA) reflex activity. Moreover Alb-OA synergistically promoted early recovery of locomotor activity to 50 ± 10% of control and promoted de novo phasic descending inhibition of TA noxious reflex activity to 47 ± 5% following non-invasive electrical conditioning stimulation applied above the iSCI. Spinal L4-L5 immunohistochemistry demonstrated a unique increase in serotonin fibre innervation up to 4.2 ± 1.1 and 2.3 ± 0.3 fold within the dorsal and ventral horn respectively with Alb-OA treatment when compared to uninjured tissue, in addition to a reduction in NR1 NMDA receptor phosphorylation and microglia reactivity. Early recovery of voluntary motor function accompanied with tonic and de novo phasic descending inhibition of nociceptive TA flexor reflex activity following Alb-OA treatment, mediated via known endogenous spinal mechanisms of action, suggests a clinical application of this novel

  7. Struggling with Translations

    DEFF Research Database (Denmark)

    Obed Madsen, Søren

    This paper shows empirically how actors have difficulties with translating strategy texts. The paper uses four cases as different examples of what happens, and what might be difficult, when actors translate organizational texts. In order to explore this, it draws on a translation training method from...... translation theory. The study shows that for those who have produced the text, it is difficult to translate a strategy where they have to change the words so others who don’t understand the language in the text can understand it. It also shows that for those who haven’t been a part of the production, it very...... challenge the notion that actors understand all texts and that managers per se can translate a text.

  8. Translational ecology for hydrogeology.

    Science.gov (United States)

    Schlesinger, William H

    2013-01-01

    Translational ecology--a special discipline aimed at improving the accessibility of science to policy makers--will help hydrogeologists contribute to the solution of pressing environmental problems. Patterned after translational medicine, translational ecology is a partnership to ensure that the right science gets done in a timely fashion, so that it can be communicated to those who need it. © 2013, National Ground Water Association.

  9. Translation and Intertextuality

    Directory of Open Access Journals (Sweden)

    Mohammad Rahimi

    2015-09-01

    Full Text Available This study intends to describe and present a new theory of translation based on "intertextuality", unlike the translation theories presented to date, which are all based on the principle of "equivalence". Our theory is based on examples of Arabic poetry translated into Persian poetry. The major findings of this study show that intertextuality can serve as a link between the original text and the target text; it can also interact with other texts in the resulting translation in the target language, a phenomenon addressed in books of poetic eloquence under the heading of "literary robbery".

  10. Missing "Links" in Bioinformatics Education: Expanding Students' Conceptions of Bioinformatics Using a Biodiversity Database of Living and Fossil Reef Corals

    Science.gov (United States)

    Nehm, Ross H.; Budd, Ann F.

    2006-01-01

    NMITA is a reef coral biodiversity database that we use to introduce students to the expansive realm of bioinformatics beyond genetics. We introduce a series of lessons that have students use this database, thereby accessing real data that can be used to test hypotheses about biodiversity and evolution while targeting the "National Science …

  11. International Meeting on Needs and Challenges in Translational Medicine: filling the gap between basic research and clinical applications. Book of abstract

    Energy Technology Data Exchange (ETDEWEB)

    Moretti, F; Belardelli, F [Department of Cell Biology and Neurosciences, Istituto Superiore di Sanita, Rome (Italy); Romero, M [Alleanza Contro il Cancro, Rome, (Italy)

    2008-07-01

    This multidisciplinary international meeting is organized by the Istituto Superiore di Sanita, in collaboration with Alleanza Contro il Cancro (Alliance Against Cancer, the Network of the Italian Comprehensive Cancer Centres) and EATRIS (European Advanced Translational Research Infrastructure in Medicine). The primary goal of the meeting is to provide a scientific forum to discuss the recent progress in translational research. Moreover, a particular focus will be devoted to the identification of needs, obstacles and new opportunities to promote translational research in biomedicine. The scientific programme will cover a broad range of fields including: cancer; neurosciences; rare diseases; cardiovascular diseases and infectious and autoimmune diseases. Furthermore, special attention will be given to the discussion of how comprehensive initiatives for addressing critical regulatory issues for First-In-Man - Phase I clinical studies can potentially improve the efficiency and quality of biomedical and translational research at an international level. [Italian] This multidisciplinary international conference is organized by the Istituto Superiore di Sanita, in collaboration with Alleanza Contro il Cancro (the Italian network of oncology IRCCS) and EATRIS (European Advanced Translational Research Infrastructure in Medicine). The main objective of the conference is to provide a scientific forum for the exchange of information and opinions on new advances in translational research. Particular interest will also be devoted to identifying the needs, obstacles and new opportunities for promoting translational research in biomedicine. The scientific programme will cover a broad range of research fields, including cancer, neurosciences, rare diseases, cardiovascular diseases, and infectious and autoimmune diseases. Particular attention will then be devoted to the debate on how wide-ranging initiatives concerning the aspects

  12. Granular neural networks, pattern recognition and bioinformatics

    CERN Document Server

    Pal, Sankar K; Ganivada, Avatharam

    2017-01-01

    This book provides a uniform framework describing how fuzzy rough granular neural network technologies can be formulated and used in building efficient pattern recognition and mining models. It also discusses the formation of granules in the notion of both fuzzy and rough sets. Judicious integration in forming fuzzy-rough information granules based on lower approximate regions enables the network to determine the exactness in class shape as well as to handle the uncertainties arising from overlapping regions, resulting in efficient and speedy learning with enhanced performance. Layered network and self-organizing analysis maps, which have a strong potential in big data, are considered as basic modules. The book is structured according to the major phases of a pattern recognition system (e.g., classification, clustering, and feature selection) with a balanced mixture of theory, algorithm, and application. It covers the latest findings as well as directions for future research, particularly highlighting bioinf...

  13. Report on the EMBER Project--A European Multimedia Bioinformatics Educational Resource

    Science.gov (United States)

    Attwood, Terri K.; Selimas, Ioannis; Buis, Rob; Altenburg, Ruud; Herzog, Robert; Ledent, Valerie; Ghita, Viorica; Fernandes, Pedro; Marques, Isabel; Brugman, Marc

    2005-01-01

    EMBER was a European project aiming to develop bioinformatics teaching materials on the Web and CD-ROM to help address the recognised skills shortage in bioinformatics. The project grew out of pilot work on the development of an interactive web-based bioinformatics tutorial and the desire to repackage that resource with the help of a professional…

  14. Applying Instructional Design Theories to Bioinformatics Education in Microarray Analysis and Primer Design Workshops

    Science.gov (United States)

    Shachak, Aviv; Ophir, Ron; Rubin, Eitan

    2005-01-01

    The need to support bioinformatics training has been widely recognized by scientists, industry, and government institutions. However, the discussion of instructional methods for teaching bioinformatics is only beginning. Here we report on a systematic attempt to design two bioinformatics workshops for graduate biology students on the basis of…

  15. Introductory Bioinformatics Exercises Utilizing Hemoglobin and Chymotrypsin to Reinforce the Protein Sequence-Structure-Function Relationship

    Science.gov (United States)

    Inlow, Jennifer K.; Miller, Paige; Pittman, Bethany

    2007-01-01

    We describe two bioinformatics exercises intended for use in a computer laboratory setting in an upper-level undergraduate biochemistry course. To introduce students to bioinformatics, the exercises incorporate several commonly used bioinformatics tools, including BLAST, that are freely available online. The exercises build upon the students'…

  16. Vertical and Horizontal Integration of Bioinformatics Education: A Modular, Interdisciplinary Approach

    Science.gov (United States)

    Furge, Laura Lowe; Stevens-Truss, Regina; Moore, D. Blaine; Langeland, James A.

    2009-01-01

    Bioinformatics education for undergraduates has been approached primarily in two ways: introduction of new courses with largely bioinformatics focus or introduction of bioinformatics experiences into existing courses. For small colleges such as Kalamazoo, creation of new courses within an already resource-stretched setting has not been an option.…

  17. Lost in Translation?

    NARCIS (Netherlands)

    Jonkers, Peter

    2017-01-01

    Translating sacred scriptures is not only a praxis that is crucial for the fruitful, i.e. non-distorted and unbiased dialogue between different religious traditions, but also raises some fundamental theoretical questions when it comes to translating the sacred texts of the religious other or

  18. Translating VDM to Alloy

    DEFF Research Database (Denmark)

    Lausdahl, Kenneth

    2013-01-01

    specifications. However, to take advantage of the automated analysis of Alloy, the model-oriented VDM specifications must be translated into constraint-based Alloy specifications. We describe how a subset of VDM can be translated into Alloy and how assertions can be expressed in VDM and checked by the Alloy...

  19. Students' Differentiated Translation Processes

    Science.gov (United States)

    Bossé, Michael J.; Adu-Gyamfi, Kwaku; Chandler, Kayla

    2014-01-01

    Understanding how students translate between mathematical representations is of both practical and theoretical importance. This study examined students' processes in their generation of symbolic and graphic representations of given polynomial functions. The purpose was to investigate how students perform these translations. The result of the study…

  20. Creativity, Culture and Translation

    Science.gov (United States)

    Babaee, Siamak; Wan Yahya, Wan Roselezam; Babaee, Ruzbeh

    2014-01-01

    Some scholars (Bassnett-McGuire, Catford, Brislin) suggest that a good piece of translation should be a strict reflection of the style of the original text while some others (Gui, Newmark, Wilss) consider the original text untranslatable unless it is reproduced. Opposing views by different critics suggest that translation is still a challenging…

  1. Translation as (Global) Writing

    Science.gov (United States)

    Horner, Bruce; Tetreault, Laura

    2016-01-01

    This article explores translation as a useful point of departure and framework for taking a translingual approach to writing engaging globalization. Globalization and the knowledge economy are putting renewed emphasis on translation as a key site of contest between a dominant language ideology of monolingualism aligned with fast capitalist…

  2. Measuring Translation Literality

    DEFF Research Database (Denmark)

    Carl, Michael; Schaeffer, Moritz

    2017-01-01

    Tirkkonen-Condit (2005: 407–408) argues that “It looks as if literal translation is [the result of] a default rendering procedure”. As a corollary, more literal translations should be easier to process, and less literal ones should be associated with more cognitive effort. In order to assess this...

  3. Text Coherence in Translation

    Science.gov (United States)

    Zheng, Yanping

    2009-01-01

    In the thesis a coherent text is defined as a continuity of senses arising from the combination of concepts and relations into a network of knowledge centered around main topics. The author maintains that in order to obtain the coherence of a target-language text from a source text during the process of translation, a translator can…

  4. Stimulating translational research

    DEFF Research Database (Denmark)

    Bentires-Alj, Mohamed; Rajan, Abinaya; van Harten, Wim

    2015-01-01

    Translational research leaves no-one indifferent and everyone expects a particular benefit. We as EU-LIFE (www.eu-life.eu), an alliance of 13 research institutes in European life sciences, would like to share our experience in an attempt to identify measures to promote translational research … without undermining basic exploratory research and academic freedom.

  5. Translation, Quality and Growth

    DEFF Research Database (Denmark)

    Petersen, Margrethe

    The paper investigates the feasibility and some of the possible consequences of applying quality management to translation. It first gives an introduction to two different schools of translation and to (total) quality management. It then examines whether quality management may, in theory...

  6. Translation, Interpreting and Lexicography

    DEFF Research Database (Denmark)

    Dam, Helle Vrønning; Tarp, Sven

    2018-01-01

    in the sense that their practice fields are typically ‘about something else’. Translators may, for example, be called upon to translate medical texts, and interpreters may be assigned to work on medical speeches. Similarly, practical lexicography may produce medical dictionaries. In this perspective, the three...

  7. Translation between cultures

    Directory of Open Access Journals (Sweden)

    Henrique de Oliveira Lee

    2016-05-01

    Full Text Available This article will question the pertinence of understanding interculturality in terms of translation between cultures. I shall study this hypothesis in two ways : 1 / the cosmopolitan horizon, which the idea of translation may implicate ; 2 / the critique of the premises of unique origin and homogeneity of cultures which this hypothesis makes possible.

  8. Idioms and Back Translation

    Science.gov (United States)

    Griffin, Frank

    2004-01-01

    The challenges of intercultural communication are an integral part of many undergraduate business communication courses. Marketing gaffes clearly illustrate the pitfalls of translation and underscore the importance of a knowledge of the culture with which one is attempting to communicate. A good way to approach the topic of translation pitfalls in…

  9. Bioinformatics algorithm based on a parallel implementation of a machine learning approach using transducers

    International Nuclear Information System (INIS)

    Roche-Lima, Abiel; Thulasiram, Ruppa K

    2012-01-01

    Finite automata in which each transition is augmented with an output label, in addition to the familiar input label, are known as finite-state transducers. Transducers have been used to analyze some fundamental issues in bioinformatics. Weighted finite-state transducers have been applied to pairwise alignment of DNA and protein sequences, as well as to the development of kernels for computational biology. Machine learning algorithms for conditional transducers have been implemented and used for DNA sequence analysis. Transducer learning algorithms are based on the computation of conditional probabilities, which relies on techniques such as pair-database creation, normalization (with maximum-likelihood normalization) and parameter optimization (with Expectation-Maximization, EM). These techniques are intrinsically costly to compute, and even more so when applied to bioinformatics, because the databases are large. In this work, we describe a parallel implementation of an algorithm that learns conditional transducers using these techniques. The algorithm is oriented to bioinformatics applications such as alignments, phylogenetic trees and other genome evolution studies. Several experiments were run with the parallel and sequential algorithms on Westgrid (specifically, on the Breeze cluster). The results show that our parallel algorithm is scalable: execution times are reduced considerably, relative to the sequential version, as the data size parameter is increased. In another experiment the precision parameter was varied; here, too, the parallel algorithm gave shorter execution times. Finally, the number of threads used to execute the parallel algorithm on the Breeze cluster was varied. In this last experiment, speedup increased considerably as more threads were used; however, it converged for thread counts equal to or greater than 16.
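    The costly step singled out above, estimating conditional transition probabilities over a large pair database with maximum-likelihood normalization, parallelizes naturally over chunks of that database. The sketch below illustrates the general idea only; it is not the authors' Westgrid implementation, and the toy pair database, chunking scheme and worker count are assumptions made for illustration.

```python
# Minimal sketch (not the authors' code): parallel counting of (input, output)
# symbol pairs from an aligned pair database, followed by maximum-likelihood
# normalization into conditional probabilities P(output | input).
from collections import Counter
from multiprocessing import Pool

# Hypothetical pair database: each record is an aligned (input, output) sequence pair.
PAIR_DB = [
    ("ACGT", "ACGA"),
    ("ACGG", "ACGT"),
    ("TTGA", "TTGA"),
]

def count_chunk(chunk):
    """Count co-occurring symbol pairs in one chunk of the pair database."""
    counts = Counter()
    for x, y in chunk:
        for a, b in zip(x, y):          # position-wise pairing of aligned symbols
            counts[(a, b)] += 1
    return counts

def chunks(data, n):
    """Split the database into roughly n equal chunks."""
    k = max(1, len(data) // n)
    return [data[i:i + k] for i in range(0, len(data), k)]

def conditional_probabilities(pair_db, workers=4):
    with Pool(workers) as pool:
        partial = pool.map(count_chunk, chunks(pair_db, workers))
    total = Counter()
    for c in partial:
        total.update(c)
    # Maximum-likelihood normalization: divide by the marginal count of each input symbol.
    marginals = Counter()
    for (a, _), n in total.items():
        marginals[a] += n
    return {(a, b): n / marginals[a] for (a, b), n in total.items()}

if __name__ == "__main__":
    probs = conditional_probabilities(PAIR_DB, workers=2)
    for (a, b), p in sorted(probs.items()):
        print(f"P({b} | {a}) = {p:.2f}")
```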

  10. Bioinformatics: Cheap and robust method to explore biomaterial from Indonesia biodiversity

    Science.gov (United States)

    Widodo

    2015-02-01

    Indonesia has a huge amount of biodiversity, which may contain many biomaterials for pharmaceutical application. The potency of these resources should be explored to discover new drugs for human health. However, bioactivity screening using conventional methods is very expensive and time-consuming. Therefore, we developed a bioinformatics-based methodology for screening the potential of natural resources. The method builds on the fact that organisms in the same taxon will have similar genes, metabolism and secondary metabolite products. We employ bioinformatics to explore the potency of biomaterials from Indonesian biodiversity by comparing species with well-known taxa containing active compounds, using published papers and chemical databases. We then analyze the drug-likeness, bioactivity and target proteins of the active compounds based on their molecular structures. The target proteins are examined for their interactions with other proteins in the cell to determine the mechanism of action of the active compounds at the cellular level, as well as to predict side effects and toxicity. Using this method, we succeeded in screening anti-cancer agents, immunomodulators and anti-inflammatory compounds from Indonesian biodiversity. For example, we found an anti-cancer candidate from a marine invertebrate: the compounds isolated from the invertebrate were collected from published articles and databases, their protein targets were identified, and molecular pathway analysis followed. The data suggested that the active compound of the invertebrate is able to kill cancer cells. We then collected and extracted the active compound from the invertebrate and examined its activity on cancer cells (MCF7). The MTT assay showed that the methanol extract of the marine invertebrate was highly potent in killing MCF7 cells. Therefore, we conclude that bioinformatics is a cheap and robust way to explore bioactive compounds from Indonesian biodiversity as a source of drugs and other…
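    The drug-likeness step mentioned in this record can be approximated, for instance, with Lipinski's rule of five computed from a compound's structure. The sketch below does this with RDKit; the caffeine SMILES string is a placeholder, and this is one generic way to implement such a filter, not the authors' actual screening pipeline.

```python
# Illustrative drug-likeness filter (Lipinski's rule of five) using RDKit.
# This is a generic sketch, not the screening pipeline described in the record.
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def lipinski_pass(smiles: str) -> bool:
    """Return True if the molecule violates at most one of Lipinski's four rules."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Could not parse SMILES: {smiles}")
    violations = sum([
        Descriptors.MolWt(mol) > 500,        # molecular weight
        Descriptors.MolLogP(mol) > 5,        # lipophilicity
        Lipinski.NumHDonors(mol) > 5,        # hydrogen-bond donors
        Lipinski.NumHAcceptors(mol) > 10,    # hydrogen-bond acceptors
    ])
    return violations <= 1

# Hypothetical usage: caffeine should easily pass the filter.
print(lipinski_pass("CN1C=NC2=C1C(=O)N(C(=O)N2C)C"))
```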

  11. Accelerating String Set Matching in FPGA Hardware for Bioinformatics Research

    Directory of Open Access Journals (Sweden)

    Burgess Shane C

    2008-04-01

    Full Text Available Abstract Background This paper describes techniques for accelerating the performance of the string set matching problem with particular emphasis on applications in computational proteomics. The process of matching peptide sequences against a genome translated in six reading frames is part of a proteogenomic mapping pipeline that is used as a case-study. The Aho-Corasick algorithm is adapted for execution in field programmable gate array (FPGA) devices in a manner that optimizes space and performance. In this approach, the traditional Aho-Corasick finite state machine (FSM) is split into smaller FSMs, operating in parallel, each of which matches up to 20 peptides in the input translated genome. Each of the smaller FSMs is further divided into five simpler FSMs such that each simple FSM operates on a single bit position in the input (five bits are sufficient for representing all amino acids and special symbols in protein sequences). Results This bit-split organization of the Aho-Corasick implementation enables efficient utilization of the limited random access memory (RAM) resources available in typical FPGAs. The use of on-chip RAM as opposed to FPGA logic resources for FSM implementation also enables rapid reconfiguration of the FPGA without the place and routing delays associated with complex digital designs. Conclusion Experimental results show storage efficiencies of over 80% for several data sets. Furthermore, the FPGA implementation executing at 100 MHz is nearly 20 times faster than an implementation of the traditional Aho-Corasick algorithm executing on a 2.67 GHz workstation.
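    The Aho-Corasick automaton that the paper maps onto FPGA block RAM can be prototyped in software in a few dozen lines. The sketch below is a plain single-automaton version for matching a handful of peptides in a translated sequence; it does not model the bit-split FPGA organization, and the peptide set and input string are invented for illustration.

```python
# Software prototype of an Aho-Corasick matcher for peptide strings.
# The bit-split, RAM-based FPGA organization from the paper is not modelled here.
from collections import deque

def build_automaton(patterns):
    """Build goto/fail/output tables for the Aho-Corasick automaton."""
    goto = [{}]          # state -> {symbol: next state}
    output = [set()]     # state -> patterns ending at this state
    for pat in patterns:                     # build the trie (goto function)
        state = 0
        for ch in pat:
            if ch not in goto[state]:
                goto.append({})
                output.append(set())
                goto[state][ch] = len(goto) - 1
            state = goto[state][ch]
        output[state].add(pat)
    fail = [0] * len(goto)                   # failure links via breadth-first traversal
    queue = deque(goto[0].values())
    while queue:
        state = queue.popleft()
        for ch, nxt in goto[state].items():
            queue.append(nxt)
            f = fail[state]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[nxt] = goto[f].get(ch, 0)
            output[nxt] |= output[fail[nxt]]  # inherit matches reachable via failure
    return goto, fail, output

def search(text, patterns):
    """Return (start_position, pattern) for every occurrence of any pattern."""
    goto, fail, output = build_automaton(patterns)
    state, hits = 0, []
    for i, ch in enumerate(text):
        while state and ch not in goto[state]:
            state = fail[state]
        state = goto[state].get(ch, 0)
        for pat in output[state]:
            hits.append((i - len(pat) + 1, pat))
    return hits

# Invented example: three peptides against a short translated sequence.
print(search("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", ["KQRQ", "SHFSRQ", "GLIE"]))
```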

  12. Translation Ambiguity but Not Word Class Predicts Translation Performance

    Science.gov (United States)

    Prior, Anat; Kroll, Judith F.; Macwhinney, Brian

    2013-01-01

    We investigated the influence of word class and translation ambiguity on cross-linguistic representation and processing. Bilingual speakers of English and Spanish performed translation production and translation recognition tasks on nouns and verbs in both languages. Words either had a single translation or more than one translation. Translation…

  13. Examining English-German Translation Ambiguity Using Primed Translation Recognition

    Science.gov (United States)

    Eddington, Chelsea M.; Tokowicz, Natasha

    2013-01-01

    Many words have more than one translation across languages. Such "translation-ambiguous" words are translated more slowly and less accurately than their unambiguous counterparts. We examine the extent to which word context and translation dominance influence the processing of translation-ambiguous words. We further examine how these factors…

  14. Vanillin inhibits translation and induces messenger ribonucleoprotein (mRNP) granule formation in Saccharomyces cerevisiae: application and validation of high-content, image-based profiling.

    Science.gov (United States)

    Iwaki, Aya; Ohnuki, Shinsuke; Suga, Yohei; Izawa, Shingo; Ohya, Yoshikazu

    2013-01-01

    Vanillin, generated by acid hydrolysis of lignocellulose, acts as a potent inhibitor of the growth of the yeast Saccharomyces cerevisiae. Here, we investigated the cellular processes affected by vanillin using high-content, image-based profiling. Among 4,718 non-essential yeast deletion mutants, the morphology of those defective in the large ribosomal subunit showed significant similarity to that of vanillin-treated cells. The defects in these mutants were clustered in three domains of the ribosome: the mRNA tunnel entrance, exit and backbone required for small subunit attachment. To confirm that vanillin inhibited ribosomal function, we assessed polysome and messenger ribonucleoprotein granule formation after treatment with vanillin. Analysis of polysome profiles showed disassembly of the polysomes in the presence of vanillin. Processing bodies and stress granules, which are composed of non-translating mRNAs and various proteins, were formed after treatment with vanillin. These results suggest that vanillin represses translation in yeast cells.

  15. Applications of Recombinant DNA Technology in Gastrointestinal Medicine and Hepatology: Basic Paradigms of Molecular Cell Biology. Part C: Protein Synthesis and Post-Translational Processing in Eukaryotic Cells

    Directory of Open Access Journals (Sweden)

    Gary E Wild

    2000-01-01

    Full Text Available The translation of mRNA constitutes the first step in the synthesis of a functional protein. The polypeptide chain is subsequently folded into the appropriate three-dimensional configuration and undergoes a variety of processing steps before being converted into its active form. These processing steps are intimately related to the cellular events that occur in the endoplasmic reticulum and Golgi compartments, and determine the sorting and transport of different proteins to their appropriate destinations within the cell. While the regulation of gene expression occurs primarily at the level of transcription, the expression of many genes can also be controlled at the level of translation. Most proteins can be regulated in response to extracellular signals. In addition, intracellular protein levels can be controlled by differential rates of protein degradation. Thus, the regulation of both the amounts and activities of intracellular proteins ultimately determines all aspects of cell behaviour.

  16. Establishment and Application of a High Throughput Screening System Targeting the Interaction between HCV Internal Ribosome Entry Site and Human Eukaryotic Translation Initiation Factor 3

    Directory of Open Access Journals (Sweden)

    Yuying Zhu

    2017-05-01

    Full Text Available Viruses are intracellular obligate parasites and the host cellular machinery is usually recruited for their replication. Human eukaryotic translation initiation factor 3 (eIF3) could be directly recruited by the hepatitis C virus (HCV) internal ribosome entry site (IRES) to promote the translation of viral proteins. In this study, we establish a fluorescence polarization (FP)-based high throughput screening (HTS) system targeting the interaction between HCV IRES and eIF3. By screening a total of 894 compounds with this HTS system, two compounds (Mucl39526 and NP39) are found to disturb the interaction between HCV IRES and eIF3. These two compounds are further demonstrated to inhibit the HCV IRES-dependent translation in vitro. Thus, this HTS system can be used to screen for potential HCV replication inhibitors targeting human eIF3, which is helpful to overcome the problem of viral resistance. Surprisingly, one compound, HP-3, a kind of oxytocin antagonist, is discovered by this HTS system to significantly enhance the interaction between HCV IRES and eIF3. HP-3 is demonstrated to directly interact with HCV IRES and promote the HCV IRES-dependent translation both in vitro and in vivo, which strongly suggests that HP-3 has the potential to promote HCV replication. Therefore, this HTS system is also useful for screening potential HCV replication enhancers, which is meaningful for understanding viral replication and screening novel antiviral drugs. To our knowledge, this is the first HTS system targeting the interaction between eIF3 and HCV IRES, which could be applied to screen both potential HCV replication inhibitors and enhancers.

  17. Establishing cephalometric landmarks for the translational study of Le Fort-based facial transplantation in Swine: enhanced applications using computer-assisted surgery and custom cutting guides.

    Science.gov (United States)

    Santiago, Gabriel F; Susarla, Srinivas M; Al Rakan, Mohammed; Coon, Devin; Rada, Erin M; Sarhane, Karim A; Shores, Jamie T; Bonawitz, Steven C; Cooney, Damon; Sacks, Justin; Murphy, Ryan J; Fishman, Elliot K; Brandacher, Gerald; Lee, W P Andrew; Liacouras, Peter; Grant, Gerald; Armand, Mehran; Gordon, Chad R

    2014-05-01

    Le Fort-based, maxillofacial allotransplantation is a reconstructive alternative gaining clinical acceptance. However, the vast majority of single-jaw transplant recipients demonstrate less-than-ideal skeletal and dental relationships, with suboptimal aesthetic harmony. The purpose of this study was to investigate reproducible cephalometric landmarks in a large-animal model, where refinement of computer-assisted planning, intraoperative navigational guidance, translational bone osteotomies, and comparative surgical techniques could be performed. Cephalometric landmarks that could be translated into the human craniomaxillofacial skeleton, and that would remain reliable following maxillofacial osteotomies with midfacial alloflap inset, were sought on six miniature swine. Le Fort I- and Le Fort III-based alloflaps were harvested in swine with osteotomies, and all alloflaps were either autoreplanted or transplanted. Cephalometric analyses were performed on lateral cephalograms preoperatively and postoperatively. Critical cephalometric data sets were identified with the assistance of surgical planning and virtual prediction software and evaluated for reliability and translational predictability. Several pertinent landmarks and human analogues were identified, including pronasale, zygion, parietale, gonion, gnathion, lower incisor base, and alveolare. Parietale-pronasale-alveolare and parietale-pronasale-lower incisor base were found to be reliable correlates of sellion-nasion-A point angle and sellion-nasion-B point angle measurements in humans, respectively. There is a set of reliable cephalometric landmarks and measurement angles pertinent for use within a translational large-animal model. These craniomaxillofacial landmarks will enable development of novel navigational software technology, improve cutting guide designs, and facilitate exploration of new avenues for investigation and collaboration.

  18. Meeting review: 2002 O'Reilly Bioinformatics Technology Conference.

    Science.gov (United States)

    Counsell, Damian

    2002-01-01

    At the end of January I travelled to the States to speak at and attend the first O'Reilly Bioinformatics Technology Conference. It was a large, well-organized and diverse meeting with an interesting history. Although the meeting was not a typical academic conference, its style will, I am sure, become more typical of meetings in both biological and computational sciences.Speakers at the event included prominent bioinformatics researchers such as Ewan Birney, Terry Gaasterland and Lincoln Stein; authors and leaders in the open source programming community like Damian Conway and Nat Torkington; and representatives from several publishing companies including the Nature Publishing Group, Current Science Group and the President of O'Reilly himself, Tim O'Reilly. There were presentations, tutorials, debates, quizzes and even a 'jam session' for musical bioinformaticists.

  19. Open discovery: An integrated live Linux platform of Bioinformatics tools.

    Science.gov (United States)

    Vetrivel, Umashankar; Pilla, Kalabharath

    2008-01-01

    Historically, live Linux distributions for bioinformatics have paved the way for a portable, platform-independent bioinformatics workbench. However, most of the existing live Linux distributions limit their usage to sequence analysis and basic molecular visualization programs and are devoid of data persistence. Hence, Open Discovery, a live Linux distribution, has been developed with the capability to perform complex tasks like molecular modeling, docking and molecular dynamics in a swift manner. Furthermore, it is also equipped with a complete sequence analysis environment and is capable of running Windows executable programs in a Linux environment. Open Discovery offers an advanced, customizable configuration of Fedora, with data persistence accessible via USB drive or DVD. Open Discovery is distributed free under the Academic Free License (AFL) and can be downloaded from http://www.OpenDiscovery.org.in.

  20. Statistical modelling in biostatistics and bioinformatics selected papers

    CERN Document Server

    Peng, Defen

    2014-01-01

    This book presents selected papers on statistical model development related mainly to the fields of Biostatistics and Bioinformatics. The coverage of the material falls squarely into the following categories: (a) Survival analysis and multivariate survival analysis, (b) Time series and longitudinal data analysis, (c) Statistical model development and (d) Applied statistical modelling. Innovations in statistical modelling are presented throughout each of the four areas, with some intriguing new ideas on hierarchical generalized non-linear models and on frailty models with structural dispersion, just to mention two examples. The contributors include distinguished international statisticians such as Philip Hougaard, John Hinde, Il Do Ha, Roger Payne and Alessandra Durio, among others, as well as promising newcomers. Some of the contributions have come from researchers working in the BIO-SI research programme on Biostatistics and Bioinformatics, centred on the Universities of Limerick and Galway in Ireland and fu...

  1. BIRCH: A user-oriented, locally-customizable, bioinformatics system

    Science.gov (United States)

    Fristensky, Brian

    2007-01-01

    Background Molecular biologists need sophisticated analytical tools which often demand extensive computational resources. While finding, installing, and using these tools can be challenging, pipelining data from one program to the next is particularly awkward, especially when using web-based programs. At the same time, system administrators tasked with maintaining these tools do not always appreciate the needs of research biologists. Results BIRCH (Biological Research Computing Hierarchy) is an organizational framework for delivering bioinformatics resources to a user group, scaling from a single lab to a large institution. The BIRCH core distribution includes many popular bioinformatics programs, unified within the GDE (Genetic Data Environment) graphic interface. Of equal importance, BIRCH provides the system administrator with tools that simplify the job of managing a multiuser bioinformatics system across different platforms and operating systems. These include tools for integrating locally-installed programs and databases into BIRCH, and for customizing the local BIRCH system to meet the needs of the user base. BIRCH can also act as a front end to provide a unified view of already-existing collections of bioinformatics software. Documentation for the BIRCH and locally-added programs is merged in a hierarchical set of web pages. In addition to manual pages for individual programs, BIRCH tutorials employ step by step examples, with screen shots and sample files, to illustrate both the important theoretical and practical considerations behind complex analytical tasks. Conclusion BIRCH provides a versatile organizational framework for managing software and databases, and making these accessible to a user base. Because of its network-centric design, BIRCH makes it possible for any user to do any task from anywhere. PMID:17291351

  2. Bioinformatics meets user-centred design: a perspective.

    Directory of Open Access Journals (Sweden)

    Katrina Pavelin

    Full Text Available Designers have a saying that "the joy of an early release lasts but a short time. The bitterness of an unusable system lasts for years." It is indeed disappointing to discover that your data resources are not being used to their full potential. Not only have you invested your time, effort, and research grant on the project, but you may face costly redesigns if you want to improve the system later. This scenario would be less likely if the product was designed to provide users with exactly what they need, so that it is fit for purpose before its launch. We work at EMBL-European Bioinformatics Institute (EMBL-EBI), and we consult extensively with life science researchers to find out what they need from biological data resources. We have found that although users believe that the bioinformatics community is providing accurate and valuable data, they often find the interfaces to these resources tricky to use and navigate. We believe that if you can find out what your users want even before you create the first mock-up of a system, the final product will provide a better user experience. This would encourage more people to use the resource and they would have greater access to the data, which could ultimately lead to more scientific discoveries. In this paper, we explore the need for a user-centred design (UCD) strategy when designing bioinformatics resources and illustrate this with examples from our work at EMBL-EBI. Our aim is to introduce the reader to how selected UCD techniques may be successfully applied to software design for bioinformatics.

  3. A Quick Guide for Building a Successful Bioinformatics Community

    Science.gov (United States)

    Budd, Aidan; Corpas, Manuel; Brazas, Michelle D.; Fuller, Jonathan C.; Goecks, Jeremy; Mulder, Nicola J.; Michaut, Magali; Ouellette, B. F. Francis; Pawlik, Aleksandra; Blomberg, Niklas

    2015-01-01

    “Scientific community” refers to a group of people collaborating together on scientific-research-related activities who also share common goals, interests, and values. Such communities play a key role in many bioinformatics activities. Communities may be linked to a specific location or institute, or involve people working at many different institutions and locations. Education and training is typically an important component of these communities, providing a valuable context in which to develop skills and expertise, while also strengthening links and relationships within the community. Scientific communities facilitate: (i) the exchange and development of ideas and expertise; (ii) career development; (iii) coordinated funding activities; (iv) interactions and engagement with professionals from other fields; and (v) other activities beneficial to individual participants, communities, and the scientific field as a whole. It is thus beneficial at many different levels to understand the general features of successful, high-impact bioinformatics communities; how individual participants can contribute to the success of these communities; and the role of education and training within these communities. We present here a quick guide to building and maintaining a successful, high-impact bioinformatics community, along with an overview of the general benefits of participating in such communities. This article grew out of contributions made by organizers, presenters, panelists, and other participants of the ISMB/ECCB 2013 workshop “The ‘How To Guide’ for Establishing a Successful Bioinformatics Network” at the 21st Annual International Conference on Intelligent Systems for Molecular Biology (ISMB) and the 12th European Conference on Computational Biology (ECCB). PMID:25654371

  4. BIRCH: A user-oriented, locally-customizable, bioinformatics system

    Directory of Open Access Journals (Sweden)

    Fristensky Brian

    2007-02-01

    Full Text Available Abstract Background Molecular biologists need sophisticated analytical tools which often demand extensive computational resources. While finding, installing, and using these tools can be challenging, pipelining data from one program to the next is particularly awkward, especially when using web-based programs. At the same time, system administrators tasked with maintaining these tools do not always appreciate the needs of research biologists. Results BIRCH (Biological Research Computing Hierarchy) is an organizational framework for delivering bioinformatics resources to a user group, scaling from a single lab to a large institution. The BIRCH core distribution includes many popular bioinformatics programs, unified within the GDE (Genetic Data Environment) graphic interface. Of equal importance, BIRCH provides the system administrator with tools that simplify the job of managing a multiuser bioinformatics system across different platforms and operating systems. These include tools for integrating locally-installed programs and databases into BIRCH, and for customizing the local BIRCH system to meet the needs of the user base. BIRCH can also act as a front end to provide a unified view of already-existing collections of bioinformatics software. Documentation for the BIRCH and locally-added programs is merged in a hierarchical set of web pages. In addition to manual pages for individual programs, BIRCH tutorials employ step by step examples, with screen shots and sample files, to illustrate both the important theoretical and practical considerations behind complex analytical tasks. Conclusion BIRCH provides a versatile organizational framework for managing software and databases, and making these accessible to a user base. Because of its network-centric design, BIRCH makes it possible for any user to do any task from anywhere.

  5. Translating person-centered care into practice

    DEFF Research Database (Denmark)

    Zoffmann, Vibeke; Hörnsten, Åsa; Storbækken, Solveig

    2016-01-01

    OBJECTIVE: Person-centred care [PCC] can engage people in living well with a chronic condition. However, translating PCC into practice is challenging. We aimed to compare the translational potentials of three approaches: motivational interviewing [MI], illness integration support [IIS] and guided self-determination [GSD] … tools. CONCLUSION: Each approach has a primary application: MI, when ambivalence threatens positive change; IIS, when integrating newly diagnosed chronic conditions; and GSD, when problem solving is difficult or deadlocked. PRACTICE IMPLICATIONS: Professionals must critically consider the context...

  6. Human resources management in a translation process

    OpenAIRE

    Rogelj, Jure

    2015-01-01

    The purpose of the web application development is the modernization of the current data acquisition and management model for new and existing translators in the company Iolar d.o.o. Previously, data on translators who signed up to work in the company were entered multiple times, as they came in through several entry points. The acquired data were then manually entered into an MS Excel sheet and the Projetex program. We analyzed the current data acquisition and management model as well ...

  7. Human resources management in a translation process

    OpenAIRE

    Rogelj, Jure

    2014-01-01

    The purpose of the web application development is the modernization of the current data acquisition and management model for new and existing translators in the company Iolar d.o.o. Previously, data on translators who signed up to work in the company were entered multiple times, as they came in through several entry points. The acquired data were then manually entered into an MS Excel sheet and the Projetex program. We analyzed the current data acquisition and management model as well ...

  8. p3d--Python module for structural bioinformatics.

    Science.gov (United States)

    Fufezan, Christian; Specht, Michael

    2009-08-21

    High-throughput bioinformatic analysis tools are needed to mine the large amount of structural data via knowledge based approaches. The development of such tools requires a robust interface to access the structural data in an easy way. For this the Python scripting language is the optimal choice since its philosophy is to write an understandable source code. p3d is an object oriented Python module that adds a simple yet powerful interface to the Python interpreter to process and analyse three dimensional protein structure files (PDB files). p3d's strength arises from the combination of a) very fast spatial access to the structural data due to the implementation of a binary space partitioning (BSP) tree, b) set theory and c) functions that allow a) and b) to be combined and that use human-readable language in the search queries rather than complex computer language. All these factors combined facilitate the rapid development of bioinformatic tools that can perform quick and complex analyses of protein structures. p3d is the perfect tool to quickly develop tools for structural bioinformatics using the Python scripting language.
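    The kind of query p3d is built for (for example, all atoms within some radius of a point) can be illustrated with a little plain Python. The sketch below parses ATOM records from a PDB file and answers a radius query by brute force; it deliberately does not guess at p3d's actual API, and the file name and query coordinates are placeholders.

```python
# Plain-Python illustration of the kind of spatial query p3d provides
# (this does NOT use p3d's API; it parses PDB ATOM records directly).
def parse_atoms(pdb_path):
    """Yield (atom_name, residue_name, chain, (x, y, z)) for each ATOM/HETATM record."""
    with open(pdb_path) as fh:
        for line in fh:
            if line.startswith(("ATOM", "HETATM")):
                yield (
                    line[12:16].strip(),                 # atom name
                    line[17:20].strip(),                 # residue name
                    line[21].strip(),                    # chain identifier
                    (float(line[30:38]), float(line[38:46]), float(line[46:54])),
                )

def atoms_within(atoms, centre, radius):
    """Brute-force radius query; p3d accelerates this step with a BSP tree."""
    r2 = radius * radius
    for name, res, chain, (x, y, z) in atoms:
        dx, dy, dz = x - centre[0], y - centre[1], z - centre[2]
        if dx * dx + dy * dy + dz * dz <= r2:
            yield name, res, chain, (x, y, z)

# Hypothetical usage: all CA atoms within 6 A of an assumed active-site coordinate.
if __name__ == "__main__":
    atoms = list(parse_atoms("structure.pdb"))   # placeholder file name
    site = (12.0, 8.5, -3.2)                     # placeholder coordinate
    for name, res, chain, xyz in atoms_within(atoms, site, 6.0):
        if name == "CA":
            print(chain, res, xyz)
```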

  9. p3d – Python module for structural bioinformatics

    Directory of Open Access Journals (Sweden)

    Fufezan Christian

    2009-08-01

    Full Text Available Abstract Background High-throughput bioinformatic analysis tools are needed to mine the large amount of structural data via knowledge based approaches. The development of such tools requires a robust interface to access the structural data in an easy way. For this the Python scripting language is the optimal choice since its philosophy is to write an understandable source code. Results p3d is an object oriented Python module that adds a simple yet powerful interface to the Python interpreter to process and analyse three dimensional protein structure files (PDB files). p3d's strength arises from the combination of (a) very fast spatial access to the structural data due to the implementation of a binary space partitioning (BSP) tree, (b) set theory and (c) functions that allow (a) and (b) to be combined and that use human-readable language in the search queries rather than complex computer language. All these factors combined facilitate the rapid development of bioinformatic tools that can perform quick and complex analyses of protein structures. Conclusion p3d is the perfect tool to quickly develop tools for structural bioinformatics using the Python scripting language.

  10. GOBLET: The Global Organisation for Bioinformatics Learning, Education and Training

    Science.gov (United States)

    Atwood, Teresa K.; Bongcam-Rudloff, Erik; Brazas, Michelle E.; Corpas, Manuel; Gaudet, Pascale; Lewitter, Fran; Mulder, Nicola; Palagi, Patricia M.; Schneider, Maria Victoria; van Gelder, Celia W. G.

    2015-01-01

    In recent years, high-throughput technologies have brought big data to the life sciences. The march of progress has been rapid, leaving in its wake a demand for courses in data analysis, data stewardship, computing fundamentals, etc., a need that universities have not yet been able to satisfy—paradoxically, many are actually closing “niche” bioinformatics courses at a time of critical need. The impact of this is being felt across continents, as many students and early-stage researchers are being left without appropriate skills to manage, analyse, and interpret their data with confidence. This situation has galvanised a group of scientists to address the problems on an international scale. For the first time, bioinformatics educators and trainers across the globe have come together to address common needs, rising above institutional and international boundaries to cooperate in sharing bioinformatics training expertise, experience, and resources, aiming to put ad hoc training practices on a more professional footing for the benefit of all. PMID:25856076

  11. Bioinformatics analysis and detection of gelatinase encoded gene in Lysinibacillussphaericus

    Science.gov (United States)

    Repin, Rul Aisyah Mat; Mutalib, Sahilah Abdul; Shahimi, Safiyyah; Khalid, Rozida Mohd.; Ayob, Mohd. Khan; Bakar, Mohd. Faizal Abu; Isa, Mohd Noor Mat

    2016-11-01

    In this study, we performed a bioinformatics analysis of the genome sequence of Lysinibacillus sphaericus (L. sphaericus) to determine the gene encoding gelatinase. L. sphaericus was isolated from soil and is a bacterium whose gelatinase is species-specific toward porcine and bovine gelatin. This bacterium therefore offers the possibility of producing enzymes specific to each of these two species of meat. The main focus of this research is to identify the gelatinase-encoding gene within L. sphaericus using bioinformatics analysis of a partially sequenced genome. From this study, three candidate genes were identified: first, gelatinase candidate gene 1 (P1), NODE_71_length_93919_cov_158.931839_21, 1563 base pairs (bp) in size with a 520-amino-acid sequence; second, gelatinase candidate gene 2 (P2), NODE_23_length_52851_cov_190.061386_17, 1776 bp in size with a 591-amino-acid sequence; and third, gelatinase candidate gene 3 (P3), NODE_106_length_32943_cov_169.147919_8, 1701 bp in size with a 566-amino-acid sequence. Three pairs of oligonucleotide primers, named F1, R1, F2, R2, F3 and R3, were designed to target short cDNA sequences by PCR. The amplicons were reliably produced at 1563 bp for candidate gene P1 and 1701 bp for candidate gene P3. Therefore, the bioinformatics analysis of L. sphaericus identified genes encoding gelatinase.
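    Routine checks used when designing primer pairs such as F1/R1 above, namely GC content and a rough melting temperature, are easy to script. The sketch below applies the Wallace rule (2 °C per A/T, 4 °C per G/C), which is only a first approximation for short oligos; the primer sequences shown are invented placeholders, not the primers from this study.

```python
# Quick GC-content and Wallace-rule Tm check for candidate primers.
# The sequences below are placeholders, not the F1/R1 primers from the study.
def gc_content(seq: str) -> float:
    """Percentage of G and C bases in the primer."""
    seq = seq.upper()
    return 100.0 * (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq: str) -> float:
    """Tm ~ 2*(A+T) + 4*(G+C); a rough rule for primers under ~25 nt."""
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2.0 * at + 4.0 * gc

for name, primer in [("F1", "ATGGCAGCTAAAGGTGAAGT"), ("R1", "TTACTTGTCGTCGTCATCCT")]:
    print(f"{name}: GC = {gc_content(primer):.1f}%  Tm = {wallace_tm(primer):.0f} C")
```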

  12. KBWS: an EMBOSS associated package for accessing bioinformatics web services.

    Science.gov (United States)

    Oshita, Kazuki; Arakawa, Kazuharu; Tomita, Masaru

    2011-04-29

    The availability of bioinformatics web-based services is rapidly proliferating, for their interoperability and ease of use. The next challenge is in the integration of these services in the form of workflows, and several projects are already underway, standardizing the syntax, semantics, and user interfaces. In order to deploy the advantages of web services with locally installed tools, here we describe a collection of proxy client tools for 42 major bioinformatics web services in the form of European Molecular Biology Open Software Suite (EMBOSS) UNIX command-line tools. EMBOSS provides sophisticated means for discoverability and interoperability for hundreds of tools, and our package, named the Keio Bioinformatics Web Service (KBWS), adds functionalities of local and multiple alignment of sequences, phylogenetic analyses, and prediction of cellular localization of proteins and RNA secondary structures. This software implemented in C is available under GPL from http://www.g-language.org/kbws/ and GitHub repository http://github.com/cory-ko/KBWS. Users can utilize the SOAP services implemented in Perl directly via WSDL file at http://soap.g-language.org/kbws.wsdl (RPC Encoded) and http://soap.g-language.org/kbws_dl.wsdl (Document/literal).

  13. KBWS: an EMBOSS associated package for accessing bioinformatics web services

    Directory of Open Access Journals (Sweden)

    Tomita Masaru

    2011-04-01

    Full Text Available Abstract The availability of bioinformatics web-based services is rapidly proliferating, for their interoperability and ease of use. The next challenge is in the integration of these services in the form of workflows, and several projects are already underway, standardizing the syntax, semantics, and user interfaces. In order to deploy the advantages of web services with locally installed tools, here we describe a collection of proxy client tools for 42 major bioinformatics web services in the form of European Molecular Biology Open Software Suite (EMBOSS) UNIX command-line tools. EMBOSS provides sophisticated means for discoverability and interoperability for hundreds of tools, and our package, named the Keio Bioinformatics Web Service (KBWS), adds functionalities of local and multiple alignment of sequences, phylogenetic analyses, and prediction of cellular localization of proteins and RNA secondary structures. This software implemented in C is available under GPL from http://www.g-language.org/kbws/ and GitHub repository http://github.com/cory-ko/KBWS. Users can utilize the SOAP services implemented in Perl directly via WSDL file at http://soap.g-language.org/kbws.wsdl (RPC Encoded) and http://soap.g-language.org/kbws_dl.wsdl (Document/literal).

  14. A comparison of common programming languages used in bioinformatics.

    Science.gov (United States)

    Fourment, Mathieu; Gillings, Michael R

    2008-02-05

    The performance of different programming languages has previously been benchmarked using abstract mathematical algorithms, but not using standard bioinformatics algorithms. We compared the memory usage and speed of execution for three standard bioinformatics methods, implemented in programs using one of six different programming languages. Programs for the Sellers algorithm, the Neighbor-Joining tree construction algorithm and an algorithm for parsing BLAST file outputs were implemented in C, C++, C#, Java, Perl and Python. Implementations in C and C++ were fastest and used the least memory. Programs in these languages generally contained more lines of code. Java and C# appeared to be a compromise between the flexibility of Perl and Python and the fast performance of C and C++. The relative performance of the tested languages did not change from Windows to Linux and no clear evidence of a faster operating system was found. Source code and additional information are available from http://www.bioinformatics.org/benchmark/. This benchmark provides a comparison of six commonly used programming languages under two different operating systems. The overall comparison shows that a developer should choose an appropriate language carefully, taking into account the performance expected and the library availability for each language.
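    As an illustration of the kind of task benchmarked here, the dynamic-programming kernel of Sellers-style approximate matching can be written in a few lines and timed with a language's standard tools. The sketch below is a generic Python version with a simple timer; it is not the authors' benchmark code, and the random test text and pattern are invented.

```python
# Generic dynamic-programming kernel in the spirit of the Sellers algorithm
# (approximate matching of a pattern against a text), with a simple timer.
# Illustrative only; not the benchmark code from the paper.
import random
import time

def sellers_best_distance(pattern: str, text: str) -> int:
    """Minimum edit distance between the pattern and any substring of the text."""
    prev = [0] * (len(text) + 1)              # first row is zero: a match may start anywhere
    for i, p in enumerate(pattern, start=1):
        curr = [i] + [0] * len(text)
        for j, t in enumerate(text, start=1):
            cost = 0 if p == t else 1
            curr[j] = min(prev[j] + 1,        # deletion
                          curr[j - 1] + 1,    # insertion
                          prev[j - 1] + cost) # substitution or match
        prev = curr
    return min(prev)

if __name__ == "__main__":
    random.seed(0)
    text = "".join(random.choice("ACGT") for _ in range(20000))   # invented test data
    pattern = "ACGTACGTGGTTACA"
    start = time.perf_counter()
    d = sellers_best_distance(pattern, text)
    print(f"best distance {d} in {time.perf_counter() - start:.3f} s")
```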

  15. Best practices in bioinformatics training for life scientists.

    KAUST Repository

    Via, Allegra

    2013-06-25

    The mountains of data thrusting from the new landscape of modern high-throughput biology are irrevocably changing biomedical research and creating a near-insatiable demand for training in data management and manipulation and data mining and analysis. Among life scientists, from clinicians to environmental researchers, a common theme is the need not just to use, and gain familiarity with, bioinformatics tools and resources but also to understand their underlying fundamental theoretical and practical concepts. Providing bioinformatics training to empower life scientists to handle and analyse their data efficiently, and progress their research, is a challenge across the globe. Delivering good training goes beyond traditional lectures and resource-centric demos, using interactivity, problem-solving exercises and cooperative learning to substantially enhance training quality and learning outcomes. In this context, this article discusses various pragmatic criteria for identifying training needs and learning objectives, for selecting suitable trainees and trainers, for developing and maintaining training skills and evaluating training quality. Adherence to these criteria may help not only to guide course organizers and trainers on the path towards bioinformatics training excellence but, importantly, also to improve the training experience for life scientists.

  16. Theory of Test Translation Error

    Science.gov (United States)

    Solano-Flores, Guillermo; Backhoff, Eduardo; Contreras-Nino, Luis Angel

    2009-01-01

    In this article, we present a theory of test translation whose intent is to provide the conceptual foundation for effective, systematic work in the process of test translation and test translation review. According to the theory, translation error is multidimensional; it is not simply the consequence of defective translation but an inevitable fact…

  17. Translation, Cultural Translation and the Hegemonic English

    Directory of Open Access Journals (Sweden)

    Roman Horak

    2015-01-01

    Full Text Available This brief chapter problematizes the hegemonic position of the English language in Cultural Studies, which, in the author's view, can be understood as a moment that stands against a true internationalisation of the project. Following an argument referring to the necessary 'translation' process (here seen as 're-articulation', 'transcoding' or 'transculturation') Stuart Hall has put forward almost two decades ago, the essay, firstly, turns to the notion of 'linguistic translations', and deals, secondly, with what has been coined 'cultural translation'. Discussing approaches developed by Walter Benjamin, Umberto Eco and Homi Bhabha, the complex relationship between the two terms is being investigated. Finally, in a modest attempt to throw some light on this hegemonic structure, central aspects of the output of three important journals (European Journal of Cultural Studies, International Journal of Cultural Studies, Cultural Studies), i.e. an analysis of the linguistic and institutional backgrounds of the authors of the ten most-read and most-cited essays, are presented. Based on these findings I argue that it is not simply the addition of the discursive field (language) to the academic space (institution) that defines the mechanism of exclusion and inclusion. Rather, it is the articulation of both moments, i.e. that of language and that of the institution, which - in various contexts (but in their own very definite ways) - can help to develop that structure which at present is still hindering a further, more profound internationalisation of the project that is Cultural Studies.

  18. Translation of feminine: Szymborska

    Directory of Open Access Journals (Sweden)

    Olga Donata Guerizoli Kempinska

    2014-07-01

    Full Text Available http://dx.doi.org/10.5007/2175-7968.2014v1n33p35 The paper discusses the problems present in the process of the translation of the feminine, related to the discursive articulations of the gender and to the socio-historical conditions of its construction. The differences between languages make this articulation hard to transpose and such is the case in some of Wisława Szymborska’s poems. An attentive reading of her work and of its translations in different languages reveals that the transposition of its specifically feminine humor is also a challenge for the translator

  19. Bacterial translational regulations: high diversity between all mRNAs and major role in gene expression

    Directory of Open Access Journals (Sweden)

    Picard Flora

    2012-10-01

    Full Text Available Abstract Background In bacteria, the weak correlations at the genome scale between mRNA and protein levels suggest that not all mRNAs are translated with the same efficiency. To experimentally explore mRNA translational level regulation at the systemic level, the detailed translational status (translatome) of all mRNAs was measured in the model bacterium Lactococcus lactis in exponential phase growth. Results Results demonstrated that only part of the entire population of each mRNA species was engaged in translation. For transcripts involved in translation, the polysome size reached a maximum of 18 ribosomes. The fraction of mRNA engaged in translation (ribosome occupancy) and ribosome density were not constant for all genes. This high degree of variability was analyzed by bioinformatics and statistical modeling in order to identify general rules of translational regulation. For most of the genes, the ribosome density was lower than the maximum value revealing major control of translation by initiation. Gene function was a major translational regulatory determinant. Both ribosome occupancy and ribosome density were particularly high for transcriptional regulators, demonstrating the positive role of translational regulation in the coordination of transcriptional networks. mRNA stability was a negative regulatory factor of ribosome occupancy and ribosome density, suggesting antagonistic regulation of translation and mRNA stability. Furthermore, ribosome occupancy was identified as a key component of intracellular protein levels underlining the importance of translational regulation. Conclusions We have determined, for the first time in a bacterium, the detailed translational status for all mRNAs present in the cell. We have demonstrated experimentally the high diversity of translational states allowing individual gene differentiation and the importance of translation-level regulation in the complex process linking gene expression to protein
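    The two quantities at the centre of this analysis are simple to compute once per-fraction signals are available: ribosome occupancy is the share of an mRNA's molecules found in polysomal fractions, and ribosome density is the average number of ribosomes per unit length of the translated pool. The sketch below shows that arithmetic on invented numbers; it is not the authors' processing pipeline, and the fraction labels and ORF length are assumptions.

```python
# Illustrative calculation of ribosome occupancy and ribosome density
# from hypothetical per-fraction mRNA signals (not the authors' pipeline).
# Fractions are labelled by the number of ribosomes they carry (0 = free mRNA).
fractions = {0: 40.0, 1: 20.0, 2: 15.0, 4: 15.0, 8: 10.0}   # arbitrary units
orf_length_nt = 900                                          # hypothetical ORF length

total = sum(fractions.values())
translated = sum(v for k, v in fractions.items() if k >= 1)

occupancy = translated / total                # fraction of mRNA engaged in translation
mean_ribosomes = sum(k * v for k, v in fractions.items() if k >= 1) / translated
density = mean_ribosomes / (orf_length_nt / 100.0)   # ribosomes per 100 nt of ORF

print(f"ribosome occupancy: {occupancy:.2f}")
print(f"ribosome density:   {density:.2f} ribosomes per 100 nt")
```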

  20. Bio-jETI: a service integration, design, and provisioning platform for orchestrated bioinformatics processes.

    Science.gov (United States)

    Margaria, Tiziana; Kubczak, Christian; Steffen, Bernhard

    2008-04-25

    With Bio-jETI, we introduce a service platform for interdisciplinary work on biological application domains and illustrate its use in a concrete application concerning statistical data processing in R and xcms for an LC/MS analysis of FAAH gene knockout. Bio-jETI uses the jABC environment for service-oriented modeling and design as a graphical process modeling tool and the jETI service integration technology for remote tool execution. As a service definition and provisioning platform, Bio-jETI has the potential to become a core technology in interdisciplinary service orchestration and technology transfer. Domain experts, like biologists not trained in computer science, directly define complex service orchestrations as process models and use efficient and complex bioinformatics tools in a simple and intuitive way.

  1. Design and bioinformatics analysis of novel biomimetic peptides as nanocarriers for gene transfer

    Directory of Open Access Journals (Sweden)

    Asia Majidi

    2015-01-01

    Full Text Available Objective(s): The introduction of nucleic acids into cells for therapeutic objectives is significantly hindered by the size and charge of these molecules and therefore requires efficient vectors that assist cellular uptake. For several years great efforts have been devoted to the development of recombinant vectors based on biological domains with potential applications in gene therapy. Such vectors have been synthesized in a genetically engineered approach, resulting in biomacromolecules with new properties that are not present in nature. Materials and Methods: In this study, we have designed new peptides using homology modeling with the purpose of overcoming the cell barriers for successful gene delivery through bioinformatics tools. Three different carriers were designed, and the one with the best score according to bioinformatics tools was cloned and expressed, and its affinity for pDNA was monitored. Results: The results demonstrated that the vector can effectively condense pDNA into nanoparticles with average sizes of about 100 nm. Conclusion: We hope these peptides can overcome the biological barriers associated with gene transfer, and mediate efficient gene delivery.

  2. Galaxy Workflows for Web-based Bioinformatics Analysis of Aptamer High-throughput Sequencing Data

    Directory of Open Access Journals (Sweden)

    William H Thiel

    2016-01-01

    Full Text Available Development of RNA and DNA aptamers for diagnostic and therapeutic applications is a rapidly growing field. Aptamers are identified through iterative rounds of selection in a process termed SELEX (Systematic Evolution of Ligands by EXponential enrichment). High-throughput sequencing (HTS) revolutionized the modern SELEX process by identifying millions of aptamer sequences across multiple rounds of aptamer selection. However, these vast aptamer HTS datasets necessitated bioinformatics techniques. Herein, we describe a semiautomated approach to analyze aptamer HTS datasets using the Galaxy Project, a web-based open source collection of bioinformatics tools that were originally developed to analyze genome, exome, and transcriptome HTS data. Using a series of Workflows created in the Galaxy webserver, we demonstrate efficient processing of aptamer HTS data and compilation of a database of unique aptamer sequences. Additional Workflows were created to characterize the abundance and persistence of aptamer sequences within a selection and to filter sequences based on these parameters. A key advantage of this approach is that the online nature of the Galaxy webserver and its graphical interface allow for the analysis of HTS data without the need to compile code or install multiple programs.
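    The core bookkeeping these Workflows perform, collapsing reads to unique sequences, tallying abundance per selection round and tracking persistence across rounds, can also be expressed compactly outside Galaxy. The sketch below does this for FASTA files of pre-trimmed aptamer reads; the file names, round labels and abundance threshold are assumptions for illustration, not the published Workflows.

```python
# Compact, non-Galaxy illustration of aptamer HTS bookkeeping:
# abundance per selection round and persistence across rounds.
# File names, round labels and thresholds are placeholders.
from collections import Counter, defaultdict

ROUNDS = {"round3": "round3.fasta", "round6": "round6.fasta", "round9": "round9.fasta"}

def read_fasta(path):
    """Yield sequences from a (pre-trimmed) FASTA file."""
    seq = []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                if seq:
                    yield "".join(seq)
                seq = []
            else:
                seq.append(line)
    if seq:
        yield "".join(seq)

# Abundance of each unique sequence in each selection round.
abundance = {name: Counter(read_fasta(path)) for name, path in ROUNDS.items()}

# Persistence: in how many rounds each sequence appears at all.
persistence = defaultdict(int)
for counts in abundance.values():
    for seq in counts:
        persistence[seq] += 1

# Keep sequences seen in every round with at least 10 reads in the final round.
final = abundance["round9"]
candidates = [s for s in final if persistence[s] == len(ROUNDS) and final[s] >= 10]
for seq in sorted(candidates, key=final.get, reverse=True)[:20]:
    print(seq, final[seq], persistence[seq])
```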

  3. Opportunities and challenges provided by cloud repositories for bioinformatics-enabled drug discovery.

    Science.gov (United States)

    Dalpé, Gratien; Joly, Yann

    2014-09-01

    Healthcare-related bioinformatics databases are increasingly offering the possibility to maintain, organize, and distribute DNA sequencing data. Different national and international institutions are currently hosting such databases that offer researchers website platforms where they can obtain sequencing data on which they can perform different types of analysis. Until recently, this process remained mostly one-dimensional, with most analysis concentrated on a limited amount of data. However, newer genome sequencing technology is producing a huge amount of data that current computer facilities are unable to handle. An alternative approach has been to start adopting cloud computing services for combining the information embedded in genomic and model system biology data, patient healthcare records, and clinical trials' data. In this new technological paradigm, researchers use virtual space and computing power from existing commercial or not-for-profit cloud service providers to access, store, and analyze data via different application programming interfaces. Cloud services are an alternative to the need of larger data storage; however, they raise different ethical, legal, and social issues. The purpose of this Commentary is to summarize how cloud computing can contribute to bioinformatics-based drug discovery and to highlight some of the outstanding legal, ethical, and social issues that are inherent in the use of cloud services. © 2014 Wiley Periodicals, Inc.

  4. Perceived radial translation during centrifugation

    NARCIS (Netherlands)

    Bos, J.E.; Correia Grácio, B.J.

    2015-01-01

    BACKGROUND: Linear acceleration generally gives rise to translation perception. Centripetal acceleration during centrifugation, however, has never been reported to give rise to a radial, inward translation perception. OBJECTIVE: To study whether centrifugation can induce a radial translation

  5. Russian translations for Cochrane.

    Science.gov (United States)

    Yudina, E V; Ziganshina, L E

    2015-01-01

    The Cochrane Collaboration has made a huge contribution to the development of evidence-based medicine; Cochrane work is the international gold standard of independent, credible and reliable high-quality information in medicine. Over the past 20 years the Cochrane Collaboration has helped transform decision-making in health and reform it significantly, saving lives and contributing to longevity [1]. Until recently, Cochrane evidence was available only in English, which represented a significant barrier to its wider use in non-English speaking countries. To provide health professionals and the general public of non-English-speaking countries with access to the evidence from Cochrane Reviews, bypassing language barriers, the Cochrane Collaboration in 2014 initiated an international project to translate the Plain Language Summaries of Cochrane Reviews into other languages [2, 3]. Russian translations of Plain Language Summaries were started in May 2014, on a voluntary basis, by the team from Kazan Federal University (Department of Basic and Clinical Pharmacology; in 2014-2015 as an Affiliated Centre in Tatarstan of the Nordic Cochrane Centre, and since August 2015 as Cochrane Russia, a Russian branch of Cochrane Nordic, Head - Liliya Eugenevna Ziganshina). The aim was to assess the quality of the Russian translations of Cochrane Plain Language Summaries (PLS) and their potential impact on the Russian-speaking community through user feedback, with the overarching aim of furthering the translation project. We conducted a continuous online survey via Google Docs. We invited respondents through the electronic Russian-language discussion forum on Essential Medicines (E-lek), links to the survey on the Russian Cochrane.org website, and invitations to Cochrane contributors registered in Archie from potentially Russian-speaking countries. We set up the survey in Russian and English. Respondents were asked to complete the questionnaire regarding the relevance and potential impact of the Cochrane Russian

  6. A phased translation function

    International Nuclear Information System (INIS)

    Read, R.J.; Schierbeek, A.J.

    1988-01-01

    A phased translation function, which takes advantage of prior phase information to determine the position of an oriented molecular replacement model, is examined. The function is the coefficient of correlation between the electron density computed with the prior phases and the electron density of the translated model, evaluated in reciprocal space as a Fourier transform. The correlation coefficient used in this work is closely related to an overlap function devised by Colman, Fehlhammer and Bartels. Tests with two protein structures, one of which was solved with the help of the phased translation function, show that little phase information is required to resolve the translation problem, and that the function is relatively insensitive to misorientation of the model. (orig.)
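    A schematic form of such a correlation-type translation function, written in generic crystallographic notation, is sketched below; this is an illustrative overlap expression under the usual Parseval relation, not the authors' exact normalization.

```latex
% Schematic overlap form of a phased translation function; normalization
% details differ from the published correlation coefficient.
\begin{equation}
  C(\mathbf{t}) \;\propto\;
  \int_{V} \rho_{\mathrm{prior}}(\mathbf{x})\,\rho_{\mathrm{calc}}(\mathbf{x}-\mathbf{t})\,d^{3}x
  \;=\; \frac{1}{V}\sum_{\mathbf{h}}
  \left|F_{\mathrm{obs}}(\mathbf{h})\right| e^{i\varphi_{\mathrm{prior}}(\mathbf{h})}\,
  F_{\mathrm{calc}}^{*}(\mathbf{h})\, e^{-2\pi i\,\mathbf{h}\cdot\mathbf{t}} ,
\end{equation}
```

    where the prior-phased map uses observed amplitudes with the prior phases and the calculated density is that of the oriented model. Normalizing the overlap by the standard deviations of the two maps gives a correlation coefficient, and the sum over reflections can be evaluated for all translations with a single FFT.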

  7. Translation and Creation

    Directory of Open Access Journals (Sweden)

    Paulo Bezerra

    2012-12-01

    Full Text Available The article begins with the differences between scientific and fictional translations, and focuses on the second. Fictional translation works with meanings and opens itself to plurisignification in order to create a similarity of the dissimilarity; in this process, the translator does not translate a language, but what a creative individuality makes with a language. Finally, the article addresses the knowledge and skills necessary for a translator of literature: knowledge of the theories of literature and of translation, the capacity to preserve the national colour of the original text while respecting the target language, and sensibility to the variations of his national language present in the everyday and literary spheres.

  8. Translation of research outcome

    African Journals Online (AJOL)

    unhcc

    2017-01-03

    Jan 3, 2017 ... we must act”1 - Translation of research outcome for health policy, strategy and ... others iron-out existing gaps on Health Policy .... within the broader framework of global call and ... research: defining the terrain; identifying.

  9. Staging Ethnographic Translation

    DEFF Research Database (Denmark)

    Lundberg, Pia

    2009-01-01

    Objectifying the cultural diversity of visual field methods, and the analysis of balancing the culturally known and unknown through anthropological analysis (aided by the analytical concept of translation (Edwin Ardener 1989))...

  10. Translation for language purposes

    DEFF Research Database (Denmark)

    Schjoldager, Anne

    2003-01-01

    The paper describes the background, subjects, assumptions, procedure, and preliminary results of a small-scale experimental study of L2 translation (Danish into English) and picture verbalization in L2 (English)....

  11. The Use of Circular Arc Cams for the Command of a Robotic System. Part II: Application to Knife Edge Translating Follower

    Directory of Open Access Journals (Sweden)

    Stelian Alaci

    2014-12-01

    Full Text Available In the second part of the paper, the results from the first part are applied to cam mechanisms with a knife-edge translating follower. First, a method is identified for characterizing the tangency points that fulfil the actual constraints of the problem. Applying this criterion, a set of equidistant points is generated on the whole geometrical locus of tangency points and the characteristics of the approximating circles are found. Considering all replacement solutions, a kinematical analysis of all cam mechanisms with approximate profiles is made, choosing among them the one whose kinematical behaviour is closest to that of the exact cam mechanism.
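    As a purely illustrative sketch of one way to find an approximating circle for a set of sampled points on such a locus, an algebraic (Kasa) least-squares circle fit can be written in a few lines of Python. This is not the procedure used in the paper, and the sample arc below is hypothetical.

```python
# Illustrative only: algebraic (Kasa) least-squares circle fit to a sampled arc,
# one possible way to obtain an approximating circle for profile points.
import numpy as np

def fit_circle(points):
    """Fit a circle to Nx2 points; return (xc, yc, radius)."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # Solve x^2 + y^2 = 2*a*x + 2*b*y + c in the least-squares sense.
    A = np.column_stack([2.0 * x, 2.0 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a**2 + b**2)

if __name__ == "__main__":
    # Noisy samples on an arc of radius 40 mm centred at (10, -5) mm.
    theta = np.linspace(0.2, 1.4, 25)
    pts = np.column_stack([10 + 40 * np.cos(theta), -5 + 40 * np.sin(theta)])
    pts += np.random.default_rng(0).normal(scale=0.05, size=pts.shape)
    print(fit_circle(pts))  # approximately (10, -5, 40)
```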

  12. Lost in translation?

    DEFF Research Database (Denmark)

    Zethsen, Karen Korning; Askehave, Inger

    2011-01-01

    This article deals with an aspect of patient information that differs somewhat from the traditional scope of this journal; namely the linguistic and translational aspects of Patient Information Leaflets (PILs). During the past decade much work has been dedicated to making the English PILs as informative and lay-friendly as possible. However, much of the good work is ruined when the PIL is translated. Why is this so and what can be done about it?

  13. Jungmann's translation of Paradise Lost

    OpenAIRE

    Janů, Karel

    2014-01-01

    This thesis examines Josef Jungmann's translation of John Milton's Paradise Lost. Josef Jungmann was one of the leading figures of the Czech National Revival and translated Milton's poem between the years 1800 and 1804. The thesis covers Jungmann's theoretical model of translation and presents Jungmann's motives for translating Milton's epic poem. The paper also describes the aims Jungmann had with his translation and whether he achieved them. The reception Jungmann's translation rece...

  14. PCI: A PATRAN-NASTRAN model translator

    Science.gov (United States)

    Sheerer, T. J.

    1990-01-01

    The amount of programming required to develop a PATRAN-NASTRAN translator was surprisingly small. The approach taken produced a highly flexible translator comparable with the PATNAS translator and superior to the PATCOS translator. The coding required varied from around ten lines for a shell element to around thirty for a bar element, and the time required to add a feature to the program is typically less than an hour. The use of a lookup table for element names makes the translator also applicable to other versions of NASTRAN. The saving in time as a result of using PDA's Gateway utilities was considerable. During the writing of the program it became apparent that, with a somewhat more complex structure, it would be possible to extend the element data file to contain all data required to define the translation from PATRAN to NASTRAN by mapping of data between formats. Similar data files on property, material and grid formats would produce a completely universal translator from PATRAN to any FEA program, or indeed any CAE system.
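    The lookup-table idea described above can be illustrated with a toy Python sketch in which PATRAN element types map to NASTRAN bulk-data card templates, so that supporting a new element means adding one table entry. The table entries and free-field card layouts below are simplified illustrations, not a faithful PATRAN or NASTRAN specification.

```python
# Toy sketch of a lookup-table element translator. The mapping and card layouts
# are illustrative; real NASTRAN cards (e.g. CBAR) carry additional fields such
# as orientation vectors that are omitted here.
ELEMENT_TABLE = {
    # PATRAN shape/type -> (NASTRAN card name, number of grid points)
    "BAR/2":  ("CBAR", 2),
    "TRI/3":  ("CTRIA3", 3),
    "QUAD/4": ("CQUAD4", 4),
    "TET/4":  ("CTETRA", 4),
}

def translate_element(eid, patran_type, property_id, grids):
    """Render one element as a free-field NASTRAN bulk-data entry."""
    card, n_nodes = ELEMENT_TABLE[patran_type]
    if len(grids) != n_nodes:
        raise ValueError(f"{patran_type} expects {n_nodes} grids, got {len(grids)}")
    fields = [card, eid, property_id, *grids]
    return ",".join(str(f) for f in fields)

if __name__ == "__main__":
    print(translate_element(101, "QUAD/4", 7, [1, 2, 12, 11]))
    # -> CQUAD4,101,7,1,2,12,11
```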

  15. Systematic reviews and knowledge translation.

    Science.gov (United States)

    Tugwell, Peter; Robinson, Vivian; Grimshaw, Jeremy; Santesso, Nancy

    2006-08-01

    Proven effective interventions exist that would enable all countries to meet the Millennium Development Goals. However, uptake and use of these interventions in the poorest populations is at least 50% less than in the richest populations within each country. Also, we have recently shown that community effectiveness of interventions is lower for the poorest populations due to a "staircase" effect of lower coverage/access, worse diagnostic accuracy, less provider compliance and less consumer adherence. We propose an evidence-based framework for equity-oriented knowledge translation to enhance community effectiveness and health equity. This framework is represented as a cascade of steps to assess and prioritize barriers and thus choose effective knowledge translation interventions that are tailored for relevant audiences (public, patient, practitioner, policy-maker, press and private sector), as well as the evaluation, monitoring and sharing of these strategies. We have used two examples of effective interventions (insecticide-treated bednets to prevent malaria and childhood immunization) to illustrate how this framework can provide a systematic method for decision-makers to ensure the application of evidence-based knowledge in disadvantaged populations. Future work to empirically validate and evaluate the usefulness of this framework is needed. We invite researchers and implementers to use the cascade for equity-oriented knowledge translation as a guide when planning implementation strategies for proven effective interventions. We also encourage policy-makers and health-care managers to use this framework when deciding how effective interventions can be implemented in their own settings.

  16. Revisiting interaction in knowledge translation

    Directory of Open Access Journals (Sweden)

    Zackheim Lisa

    2007-10-01

    Full Text Available Abstract Background Although the study of research utilization is not new, there has been increased emphasis on the topic in the recent past. Science-push models, which are researcher driven and controlled, and demand-pull models, which emphasize the interests of users/decision-makers, have largely been abandoned in favour of more interactive models that emphasize linkages between researchers and decision-makers. However, despite these and other theoretical and empirical advances in the area of research utilization, there remains a fundamental gap between the generation of research findings and the application of those findings in practice. Methods Using a case approach, the current study looks at the impact of one particular interaction approach to research translation used by a Canadian funding agency. Results The results suggest there may be certain conditions under which different levels of decision-maker involvement in research will be more or less effective. Four attributes are illuminated by the current case study: stakeholder diversity, addressability/actionability of results, finality of study design and methodology, and politicization of results. Future research could test whether these or other variables can be used to specify some of the conditions under which different approaches to interaction in knowledge translation are likely to facilitate research utilization. Conclusion This work suggests that the efficacy of interaction approaches to research translation may be more limited than current theory proposes, and underscores the need for more completely specified models of research utilization that can help address the slow pace of change in this area.

  17. On the application of the theory of the translational Brownian movement to the calculation of the differential cross-sections for the incoherent scattering of slow neutrons

    International Nuclear Information System (INIS)

    Coffey, W.T.

    1978-01-01

    It is shown how three models (based on the theory of the Brownian movement) for the translational motion of an atom in a fluid may be used to calculate explicitly the intermediate scattering functions and differential cross-sections for the incoherent scattering of slow neutrons. In the first model the translational motion of the atom is represented by the motion of a particle in space subjected to no forces other than those arising from the thermal motion of its surroundings. The differential scattering cross-section for this model is then obtained as a continued fraction similar to that given by Sack (Proc. Phys. Soc.; B70:402 and 414 (1957)) for the electric polarisability in his investigation of the role of inertial effects in dielectric relaxation. The second model is a corrected version of the itinerant oscillator model of Sears (Proc. Phys. Soc.; 86:953 (1965)). Here the differential cross-section is obtained in the form of a series and a closed-form expression is found for the intermediate scattering function. The last model to be considered is the harmonically bound particle where again a closed form expression is obtained for the intermediate scattering function. In each case the intermediate scattering function has a mathematical form which is similar to the after-effect function describing the decay of electric polarisation for the rotational versions of the models. (author)
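    For orientation, the first model can be compared with the standard Gaussian result for a free particle obeying the Langevin equation with friction coefficient gamma, in which the incoherent cross-section follows from the mean-square displacement. The textbook closed form below is quoted only as a reference point; it is not the continued-fraction expression derived in the paper.

```latex
% Reference result for a free Langevin (Brownian) particle; quoted for
% orientation only -- the paper derives a continued-fraction representation.
\begin{align}
  \langle \Delta x^{2}(t) \rangle
    &= \frac{2 k_{\mathrm{B}} T}{m \gamma^{2}}
       \left( \gamma t - 1 + e^{-\gamma t} \right), \\
  F_{s}(Q,t)
    &= \exp\!\left[ -\tfrac{1}{2}\, Q^{2} \langle \Delta x^{2}(t) \rangle \right], \\
  \frac{d^{2}\sigma}{d\Omega\, d\omega}
    &\;\propto\; \frac{k'}{k}
       \int_{-\infty}^{\infty} F_{s}(Q,t)\, e^{-i\omega t}\, dt .
\end{align}
```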

  18. NMR/MS Translator for the Enhanced Simultaneous Analysis of Metabolomics Mixtures by NMR Spectroscopy and Mass Spectrometry: Application to Human Urine.

    Science.gov (United States)

    Bingol, Kerem; Brüschweiler, Rafael

    2015-06-05

    A novel metabolite identification strategy is presented for the combined NMR/MS analysis of complex metabolite mixtures. The approach first identifies metabolite candidates from 1D or 2D NMR spectra by NMR database query, which is followed by the determination of the masses (m/z) of their possible ions, adducts, fragments, and characteristic isotope distributions. The expected m/z ratios are then compared with the MS(1) spectrum for the direct assignment of those signals of the mass spectrum that contain information about the same metabolites as the NMR spectra. In this way, the mass spectrum can be assigned with very high confidence, and it provides at the same time validation of the NMR-derived metabolites. The method was first demonstrated on a model mixture, and it was then applied to human urine collected from a pool of healthy individuals. A number of metabolites could be detected that had not been reported previously, further extending the list of known urine metabolites. The new analysis approach, which is termed NMR/MS Translator, is fully automated and takes only a few seconds on a computer workstation. NMR/MS Translator synergistically uses the power of NMR and MS, enhancing the accuracy and efficiency of the identification of those metabolites compiled in databases.
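    The matching step described above can be sketched in a few lines of Python: for each NMR-derived candidate, compute expected m/z values for a small set of common positive-mode adducts and search the MS(1) peak list within a ppm tolerance. The adduct list, tolerance, and example peaks are assumptions for illustration; this is not the published NMR/MS Translator implementation.

```python
# Minimal sketch of NMR-to-MS matching: compare expected adduct m/z values of
# NMR-derived candidate metabolites against MS1 peaks within a ppm tolerance.
PROTON = 1.007276   # mass of H+ in Da
SODIUM = 22.989218  # mass of Na+ in Da

ADDUCTS = {"[M+H]+": PROTON, "[M+Na]+": SODIUM}

def match_candidates(candidates, ms1_peaks, tol_ppm=10.0):
    """candidates: {name: neutral monoisotopic mass}; ms1_peaks: list of m/z values."""
    hits = []
    for name, mass in candidates.items():
        for adduct, delta in ADDUCTS.items():
            expected = mass + delta
            for peak in ms1_peaks:
                if abs(peak - expected) / expected * 1e6 <= tol_ppm:
                    hits.append((name, adduct, round(expected, 4), peak))
    return hits

if __name__ == "__main__":
    # Hypothetical NMR-derived candidates (neutral monoisotopic masses, Da).
    nmr_candidates = {"creatinine": 113.058912, "citrate": 192.027003}
    peaks = [114.0662, 136.0485, 193.0343, 215.0161]
    for hit in match_candidates(nmr_candidates, peaks):
        print(hit)
```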

  19. Applications of Clustering

    Indian Academy of Sciences (India)

    Applications of Clustering. Biology – medical imaging, bioinformatics, ecology, phylogeny problems, etc. Market research. Data mining. Social networks. Any problem measuring similarity/correlation (dimensions represent different parameters).

  20. Overview of the Inland California Translational Consortium

    Science.gov (United States)

    Malkas, Linda H.

    2017-05-01

    The mission of the Inland California Translational Consortium (ICTC), an independent research consortium comprising a unique hub of regional institutions (City of Hope [COH], California Institute of Technology [Caltech], Jet Propulsion Laboratory [JPL], University of California Riverside [UCR], and Claremont Colleges Keck Graduate Institute [KGI]), is to institute a new paradigm within the academic culture that accelerates the translation of innovative biomedical discoveries into clinical applications that positively affect human health and life. The ICTC actively supports clinical translational research as well as the implementation and advancement of novel education and training models for translating basic discoveries into workable products and practices that preserve and improve human health, while training and educating the workforce at all levels using innovative, forward-thinking approaches.