WorldWideScience

Sample records for bioinformatics resource center

  1. Improvements to PATRIC, the all-bacterial Bioinformatics Database and Analysis Resource Center

    Science.gov (United States)

    Wattam, Alice R.; Davis, James J.; Assaf, Rida; Boisvert, Sébastien; Brettin, Thomas; Bun, Christopher; Conrad, Neal; Dietrich, Emily M.; Disz, Terry; Gabbard, Joseph L.; Gerdes, Svetlana; Henry, Christopher S.; Kenyon, Ronald W.; Machi, Dustin; Mao, Chunhong; Nordberg, Eric K.; Olsen, Gary J.; Murphy-Olson, Daniel E.; Olson, Robert; Overbeek, Ross; Parrello, Bruce; Pusch, Gordon D.; Shukla, Maulik; Vonstein, Veronika; Warren, Andrew; Xia, Fangfang; Yoo, Hyunseung; Stevens, Rick L.

    2017-01-01

    The Pathosystems Resource Integration Center (PATRIC) is the bacterial Bioinformatics Resource Center (https://www.patricbrc.org). Recent changes to PATRIC include a redesign of the web interface and some new services that provide users with a platform that takes them from raw reads to an integrated analysis experience. The redesigned interface allows researchers direct access to tools and data, and the emphasis has changed to user-created genome-groups, with detailed summaries and views of the data that researchers have selected. Perhaps the biggest change has been the enhanced capability for researchers to analyze their private data and compare it to the available public data. Researchers can assemble their raw sequence reads and annotate the contigs using RASTtk. PATRIC also provides services for RNA-Seq, variation, model reconstruction and differential expression analysis, all delivered through an updated private workspace. Private data can be compared by ‘virtual integration’ to any of PATRIC's public data. The number of genomes available for comparison in PATRIC has expanded to over 80 000, with a special emphasis on genomes with antimicrobial resistance data. PATRIC uses this data to improve both subsystem annotation and k-mer classification, and tags new genomes as having signatures that indicate susceptibility or resistance to specific antibiotics. PMID:27899627

  2. Wrapping and interoperating bioinformatics resources using CORBA.

    Science.gov (United States)

    Stevens, R; Miller, C

    2000-02-01

    Bioinformaticians seeking to provide services to working biologists are faced with the twin problems of distribution and diversity of resources. Bioinformatics databases are distributed around the world and exist in many kinds of storage forms, platforms and access paradigms. To provide adequate services to biologists, these distributed and diverse resources have to interoperate seamlessly within single applications. The Common Object Request Broker Architecture (CORBA) offers one technical solution to these problems. The key component of CORBA is its use of object orientation as an intermediate form to translate between different representations. This paper concentrates on an explanation of object orientation and how it can be used to overcome the problems of distribution and diversity by describing the interfaces between objects.
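
    The paper's mechanism is CORBA IDL, which is not reproduced here; the sketch below only illustrates the underlying idea of a shared object-oriented interface acting as an intermediate form over diverse back ends, written in plain Python with hypothetical class and method names.

        # Illustrative sketch only: the paper uses CORBA IDL interfaces, but the same
        # idea -- one object-oriented interface hiding diverse back ends -- can be
        # shown in plain Python. All class and method names here are hypothetical.
        from abc import ABC, abstractmethod


        class SequenceDatabase(ABC):
            """Common interface that every wrapped resource must implement."""

            @abstractmethod
            def fetch(self, accession: str) -> str:
                """Return the sequence record for an accession number."""


        class LocalFlatFileDatabase(SequenceDatabase):
            def __init__(self, records: dict):
                self.records = records          # e.g. parsed from a flat file

            def fetch(self, accession: str) -> str:
                return self.records[accession]


        class RemoteServiceDatabase(SequenceDatabase):
            def __init__(self, url: str):
                self.url = url                  # a hypothetical remote endpoint

            def fetch(self, accession: str) -> str:
                # A real wrapper would issue a network request here; the client
                # never needs to know which transport or storage form is used.
                raise NotImplementedError("network access omitted in this sketch")


        def report(db: SequenceDatabase, accession: str) -> None:
            # Application code is written against the interface, not the resource.
            print(accession, "->", db.fetch(accession))


        report(LocalFlatFileDatabase({"X56734": "ATGGCG..."}), "X56734")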

  3. Bioinformatics Training Network (BTN): a community resource for bioinformatics trainers

    DEFF Research Database (Denmark)

    Schneider, Maria V.; Walter, Peter; Blatter, Marie-Claude

    2012-01-01

    Funding bodies are increasingly recognizing the need to provide graduates and researchers with access to short intensive courses in a variety of disciplines, in order both to improve the general skills base and to provide solid foundations on which researchers may build their careers. In response...... and clearly tagged in relation to target audiences, learning objectives, etc. Ideally, they would also be peer reviewed, and easily and efficiently accessible for downloading. Here, we present the Bioinformatics Training Network (BTN), a new enterprise that has been initiated to address these needs and review...

  4. Creating a specialist protein resource network: a meeting report for the protein bioinformatics and community resources retreat

    DEFF Research Database (Denmark)

    Babbitt, Patricia C.; Bagos, Pantelis G.; Bairoch, Amos

    2015-01-01

    During 11–12 August 2014, a Protein Bioinformatics and Community Resources Retreat was held at the Wellcome Trust Genome Campus in Hinxton, UK. This meeting brought together the principal investigators of several specialized protein resources (such as CAZy, TCDB and MEROPS) as well as those from...... protein databases from the large Bioinformatics centres (including UniProt and RefSeq). The retreat was divided into five sessions: (1) key challenges, (2) the databases represented, (3) best practices for maintenance and curation, (4) information flow to and from large data centers and (5) communication...

  5. PineappleDB: An online pineapple bioinformatics resource

    Directory of Open Access Journals (Sweden)

    Fairbairn David J

    2005-10-01

    Background: A world-first pineapple EST sequencing program has been undertaken to investigate genes expressed during non-climacteric fruit ripening and the nematode-plant interaction during root infection. Very little is known of how non-climacteric fruit ripening is controlled or of the molecular basis of the nematode-plant interaction. PineappleDB was developed to provide the research community with access to a curated bioinformatics resource housing the fruit, root and nematode infected gall expressed sequences. Description: PineappleDB is an online, curated database providing integrated access to annotated expressed sequence tag (EST) data for cDNA clones isolated from pineapple fruit, root, and nematode infected root gall vascular cylinder tissues. The database currently houses over 5600 EST sequences, 3383 contig consensus sequences, and associated bioinformatic data including splice variants, Arabidopsis homologues, both MIPS based and Gene Ontology functional classifications, and clone distributions. The online resource can be searched by text or by BLAST sequence homology. The data outputs provide comprehensive sequence, bioinformatic and functional classification information. Conclusion: The online pineapple bioinformatic resource provides the research community with access to pineapple fruit and root/gall sequence and bioinformatic data in a user-friendly format. The search tools enable efficient data mining and present a wide spectrum of bioinformatic and functional classification information. PineappleDB will be of broad appeal to researchers investigating pineapple genetics, non-climacteric fruit ripening, root-knot nematode infection, crassulacean acid metabolism and alternative RNA splicing in plants.

  6. Bioinformatics Resources for In Silico Proteome Analysis

    Directory of Open Access Journals (Sweden)

    Pruess Manuela

    2003-01-01

    In the growing field of proteomics, tools for the in silico analysis of proteins and even of whole proteomes are of crucial importance for making the best use of the accumulating data. To utilise these data for healthcare and drug development, the characteristics of the proteomes of entire species, principally the human, must first be understood before differences between individuals can be surveyed. Specialised databases covering nucleic acid sequences, protein sequences, protein tertiary structure, genome analysis, and proteome analysis represent useful resources for the analysis, characterisation, and classification of protein sequences. Unlike most proteomics tools, which focus on similarity searches, structure analysis and prediction, detection of specific regions, alignments, data mining, 2D PAGE analysis, or protein modelling, comprehensive databases such as the proteome analysis database draw on the information stored in different databases and use different protein analysis tools to provide computational analysis of whole proteomes.

  7. Creating a specialist protein resource network: a meeting report for the protein bioinformatics and community resources retreat.

    Science.gov (United States)

    Babbitt, Patricia C; Bagos, Pantelis G; Bairoch, Amos; Bateman, Alex; Chatonnet, Arnaud; Chen, Mark Jinan; Craik, David J; Finn, Robert D; Gloriam, David; Haft, Daniel H; Henrissat, Bernard; Holliday, Gemma L; Isberg, Vignir; Kaas, Quentin; Landsman, David; Lenfant, Nicolas; Manning, Gerard; Nagano, Nozomi; Srinivasan, Narayanaswamy; O'Donovan, Claire; Pruitt, Kim D; Sowdhamini, Ramanathan; Rawlings, Neil D; Saier, Milton H; Sharman, Joanna L; Spedding, Michael; Tsirigos, Konstantinos D; Vastermark, Ake; Vriend, Gerrit

    2015-01-01

    During 11-12 August 2014, a Protein Bioinformatics and Community Resources Retreat was held at the Wellcome Trust Genome Campus in Hinxton, UK. This meeting brought together the principal investigators of several specialized protein resources (such as CAZy, TCDB and MEROPS) as well as those from protein databases from the large Bioinformatics centres (including UniProt and RefSeq). The retreat was divided into five sessions: (1) key challenges, (2) the databases represented, (3) best practices for maintenance and curation, (4) information flow to and from large data centers and (5) communication and funding. An important outcome of this meeting was the creation of a Specialist Protein Resource Network that we believe will improve coordination of the activities of its member resources. We invite further protein database resources to join the network and continue the dialogue.

  8. mockrobiota: a Public Resource for Microbiome Bioinformatics Benchmarking.

    Science.gov (United States)

    Bokulich, Nicholas A; Rideout, Jai Ram; Mercurio, William G; Shiffer, Arron; Wolfe, Benjamin; Maurice, Corinne F; Dutton, Rachel J; Turnbaugh, Peter J; Knight, Rob; Caporaso, J Gregory

    2016-01-01

    Mock communities are an important tool for validating, optimizing, and comparing bioinformatics methods for microbial community analysis. We present mockrobiota, a public resource for sharing, validating, and documenting mock community data resources, available at http://caporaso-lab.github.io/mockrobiota/. The materials contained in mockrobiota include data set and sample metadata, expected composition data (taxonomy or gene annotations or reference sequences for mock community members), and links to raw data (e.g., raw sequence data) for each mock community data set. mockrobiota does not supply physical sample materials directly, but the data set metadata included for each mock community indicate whether physical sample materials are available. At the time of this writing, mockrobiota contains 11 mock community data sets with known species compositions, including bacterial, archaeal, and eukaryotic mock communities, analyzed by high-throughput marker gene sequencing. IMPORTANCE: The availability of standard and public mock community data will facilitate ongoing method optimizations, comparisons across studies that share source data, and greater transparency and access and eliminate redundancy. These are also valuable resources for bioinformatics teaching and training. This dynamic resource is intended to expand and evolve to meet the changing needs of the omics community.

  9. Report on the EMBER Project--A European Multimedia Bioinformatics Educational Resource

    Science.gov (United States)

    Attwood, Terri K.; Selimas, Ioannis; Buis, Rob; Altenburg, Ruud; Herzog, Robert; Ledent, Valerie; Ghita, Viorica; Fernandes, Pedro; Marques, Isabel; Brugman, Marc

    2005-01-01

    EMBER was a European project aiming to develop bioinformatics teaching materials on the Web and CD-ROM to help address the recognised skills shortage in bioinformatics. The project grew out of pilot work on the development of an interactive web-based bioinformatics tutorial and the desire to repackage that resource with the help of a professional…

  10. Bioinformatics

    DEFF Research Database (Denmark)

    Baldi, Pierre; Brunak, Søren

    , and medicine will be particularly affected by the new results and the increased understanding of life at the molecular level. Bioinformatics is the development and application of computer methods for analysis, interpretation, and prediction, as well as for the design of experiments. It has emerged...... as a strategic frontier between biology and computer science. Machine learning approaches (e.g. neural networks, hidden Markov models, and belief networks) are ideally suited for areas in which there is a lot of data but little theory. The goal in machine learning is to extract useful information from a body...... of data by building good probabilistic models. The particular twist behind machine learning, however, is to automate the process as much as possible. In this book, the authors present the key machine learning approaches and apply them to the computational problems encountered in the analysis of biological...

  11. CattleTickBase: An integrated Internet-based bioinformatics resource for Rhipicephalus (Boophilus) microplus

    Science.gov (United States)

    The Rhipicephalus microplus genome is large and complex in structure, making a genome sequence difficult to assemble and the required bioinformatics costly to resource. In light of this, a consortium of international collaborators was formed to pool resources to begin sequencing this genome. We have...

  12. MOWServ: a web client for integration of bioinformatic resources

    Science.gov (United States)

    Ramírez, Sergio; Muñoz-Mérida, Antonio; Karlsson, Johan; García, Maximiliano; Pérez-Pulido, Antonio J.; Claros, M. Gonzalo; Trelles, Oswaldo

    2010-01-01

    The productivity of any scientist is affected by cumbersome, tedious and time-consuming tasks that try to make the heterogeneous web services compatible so that they can be useful in their research. MOWServ, the bioinformatic platform offered by the Spanish National Institute of Bioinformatics, was released to provide integrated access to databases and analytical tools. Since its release, the number of available services has grown dramatically, and it has become one of the main contributors of registered services in the EMBRACE Biocatalogue. The ontology that enables most of the web-service compatibility has been curated, improved and extended. The service discovery has been greatly enhanced by Magallanes software and biodataSF. User data are securely stored on the main server by an authentication protocol that enables the monitoring of current or already-finished user’s tasks, as well as the pipelining of successive data processing services. The BioMoby standard has been greatly extended with the new features included in the MOWServ, such as management of additional information (metadata such as extended descriptions, keywords and datafile examples), a qualified registry, error handling, asynchronous services and service replication. All of them have increased the MOWServ service quality, usability and robustness. MOWServ is available at http://www.inab.org/MOWServ/ and has a mirror at http://www.bitlab-es.com/MOWServ/. PMID:20525794

  13. MOWServ: a web client for integration of bioinformatic resources.

    Science.gov (United States)

    Ramírez, Sergio; Muñoz-Mérida, Antonio; Karlsson, Johan; García, Maximiliano; Pérez-Pulido, Antonio J; Claros, M Gonzalo; Trelles, Oswaldo

    2010-07-01

    The productivity of any scientist is affected by cumbersome, tedious and time-consuming tasks that try to make the heterogeneous web services compatible so that they can be useful in their research. MOWServ, the bioinformatic platform offered by the Spanish National Institute of Bioinformatics, was released to provide integrated access to databases and analytical tools. Since its release, the number of available services has grown dramatically, and it has become one of the main contributors of registered services in the EMBRACE Biocatalogue. The ontology that enables most of the web-service compatibility has been curated, improved and extended. The service discovery has been greatly enhanced by Magallanes software and biodataSF. User data are securely stored on the main server by an authentication protocol that enables the monitoring of current or already-finished user's tasks, as well as the pipelining of successive data processing services. The BioMoby standard has been greatly extended with the new features included in the MOWServ, such as management of additional information (metadata such as extended descriptions, keywords and datafile examples), a qualified registry, error handling, asynchronous services and service replication. All of them have increased the MOWServ service quality, usability and robustness. MOWServ is available at http://www.inab.org/MOWServ/ and has a mirror at http://www.bitlab-es.com/MOWServ/.

  14. G2LC: Resources Autoscaling for Real Time Bioinformatics Applications in IaaS

    Directory of Open Access Journals (Sweden)

    Rongdong Hu

    2015-01-01

    Cloud computing has started to change the way bioinformatics research is carried out. Researchers who have taken advantage of this technology can process larger amounts of data and speed up scientific discovery. The variability in data volume results in variable computing requirements. Therefore, bioinformatics researchers are pursuing more reliable and efficient methods for conducting sequencing analyses. This paper proposes an automated resource provisioning method, G2LC, for bioinformatics applications in IaaS. It enables applications to output results in real time. Its main purpose is to guarantee application performance while improving resource utilization. Real sequence searching data from BLAST are used to evaluate the effectiveness of G2LC. Experimental results show that G2LC guarantees application performance while saving up to 20.14% of resources.

  15. G2LC: Resources Autoscaling for Real Time Bioinformatics Applications in IaaS.

    Science.gov (United States)

    Hu, Rongdong; Liu, Guangming; Jiang, Jingfei; Wang, Lixin

    2015-01-01

    Cloud computing has started to change the way bioinformatics research is carried out. Researchers who have taken advantage of this technology can process larger amounts of data and speed up scientific discovery. The variability in data volume results in variable computing requirements. Therefore, bioinformatics researchers are pursuing more reliable and efficient methods for conducting sequencing analyses. This paper proposes an automated resource provisioning method, G2LC, for bioinformatics applications in IaaS. It enables applications to output results in real time. Its main purpose is to guarantee application performance while improving resource utilization. Real sequence searching data from BLAST are used to evaluate the effectiveness of G2LC. Experimental results show that G2LC guarantees application performance while saving up to 20.14% of resources.
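
    G2LC's actual provisioning algorithm is not given in these abstracts, so the following is only a generic sketch of the kind of threshold-based autoscaling loop such a system might run each monitoring interval; every constant, function name and threshold is a hypothetical placeholder.

        # Generic threshold-based autoscaling loop, for illustration only.
        # G2LC's real decision logic is not reproduced here.
        from dataclasses import dataclass


        @dataclass
        class Cluster:
            vms: int
            min_vms: int = 1
            max_vms: int = 16


        def rescale(cluster: Cluster, queue_length: int, avg_utilisation: float) -> int:
            """Return the new VM count for one monitoring interval."""
            if queue_length > 0 or avg_utilisation > 0.80:
                # Jobs are queueing or nodes are saturated: scale out.
                cluster.vms = min(cluster.vms + 1, cluster.max_vms)
            elif avg_utilisation < 0.30 and cluster.vms > cluster.min_vms:
                # Plenty of idle capacity: scale in to save resources.
                cluster.vms -= 1
            return cluster.vms


        cluster = Cluster(vms=4)
        for queue, util in [(3, 0.90), (0, 0.85), (0, 0.25), (0, 0.20)]:
            print(rescale(cluster, queue, util))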

  16. Creating a specialist protein resource network: a meeting report for the protein bioinformatics and community resources retreat

    NARCIS (Netherlands)

    Babbitt, P.C.; Bagos, P.G.; Bairoch, A.; Bateman, A.; Chatonnet, A.; Chen, M.J.; Craik, D.J.; Finn, R.D.; Gloriam, D.; Haft, D.H.; Henrissat, B.; Holliday, G.L.; Isberg, V.; Kaas, Q.; Landsman, D.; Lenfant, N.; Manning, G.; Nagano, N.; Srinivasan, N.; O'Donovan, C.; Pruitt, K.D.; Sowdhamini, R.; Rawlings, N.D.; Saier, M.H., Jr.; Sharman, J.L.; Spedding, M.; Tsirigos, K.D.; Vastermark, A.; Vriend, G.

    2015-01-01

    During 11-12 August 2014, a Protein Bioinformatics and Community Resources Retreat was held at the Wellcome Trust Genome Campus in Hinxton, UK. This meeting brought together the principal investigators of several specialized protein resources (such as CAZy, TCDB and MEROPS) as well as those from pro

  17. Interoperability of GADU in using heterogeneous grid resources for bioinformatics applications.

    Science.gov (United States)

    Sulakhe, Dinanath; Rodriguez, Alex; Wilde, Michael; Foster, Ian; Maltsev, Natalia

    2008-03-01

    Bioinformatics tools used for efficient and computationally intensive analysis of genetic sequences require large-scale computational resources to accommodate the growing data. Grid computational resources such as the Open Science Grid and TeraGrid have proved useful for scientific discovery. The genome analysis and database update system (GADU) is a high-throughput computational system developed to automate the steps involved in accessing the Grid resources for running bioinformatics applications. This paper describes the requirements for building an automated scalable system such as GADU that can run jobs on different Grids. The paper describes the resource-independent configuration of GADU using the Pegasus-based virtual data system that makes high-throughput computational tools interoperable on heterogeneous Grid resources. The paper also highlights the features implemented to make GADU a gateway to computationally intensive bioinformatics applications on the Grid. The paper will not go into the details of problems involved or the lessons learned in using individual Grid resources as it has already been published in our paper on genome analysis research environment (GNARE) and will focus primarily on the architecture that makes GADU resource independent and interoperable across heterogeneous Grid resources.

  18. Creating a specialist protein resource network: a meeting report for the protein bioinformatics and community resources retreat

    DEFF Research Database (Denmark)

    Babbitt, Patricia C.; Bagos, Pantelis G.; Bairoch, Amos;

    2015-01-01

    protein databases from the large Bioinformatics centres (including UniProt and RefSeq). The retreat was divided into five sessions: (1) key challenges, (2) the databases represented, (3) best practices for maintenance and curation, (4) information flow to and from large data centers and (5) communication...

  19. National Sexual Violence Resource Center (NSVRC)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The National Sexual Violence Resource Center (NSVRC) is a national information and resource hub relating to all aspects of sexual violence. NSVRC staff collect and...

  20. VectorBase: improvements to a bioinformatics resource for invertebrate vector genomics.

    Science.gov (United States)

    Megy, Karine; Emrich, Scott J; Lawson, Daniel; Campbell, David; Dialynas, Emmanuel; Hughes, Daniel S T; Koscielny, Gautier; Louis, Christos; Maccallum, Robert M; Redmond, Seth N; Sheehan, Andrew; Topalis, Pantelis; Wilson, Derek

    2012-01-01

    VectorBase (http://www.vectorbase.org) is a NIAID-supported bioinformatics resource for invertebrate vectors of human pathogens. It hosts data for nine genomes: mosquitoes (three Anopheles gambiae genomes, Aedes aegypti and Culex quinquefasciatus), tick (Ixodes scapularis), body louse (Pediculus humanus), kissing bug (Rhodnius prolixus) and tsetse fly (Glossina morsitans). Hosted data range from genomic features and expression data to population genetics and ontologies. We describe improvements and integration of new data that expand our taxonomic coverage. Releases are bi-monthly and include the delivery of preliminary data for emerging genomes. Frequent updates of the genome browser provide VectorBase users with increasing options for visualizing their own high-throughput data. One major development is a new population biology resource for storing genomic variations, insecticide resistance data and their associated metadata. It takes advantage of improved ontologies and controlled vocabularies. Combined, these new features ensure timely release of multiple types of data in the public domain while helping overcome the bottlenecks of bioinformatics and annotation by engaging with our user community.

  1. Resource Centered Computing delivering high parallel performance

    OpenAIRE

    2014-01-01

    Modern parallel programming requires a combination of different paradigms, expertise and tuning that correspond to the different levels in today's hierarchical architectures. To cope with the inherent difficulty, ORWL (ordered read-write locks) presents a new paradigm and toolbox centered around local or remote resources, such as data, processors or accelerators. ORWL programmers describe their computation in terms of access to these resources during critical sections. Exclu...

  2. BioStar: an online question & answer resource for the bioinformatics community

    Science.gov (United States)

    Although the era of big data has produced many bioinformatics tools and databases, using them effectively often requires specialized knowledge. Many groups lack bioinformatics expertise, and frequently find that software documentation is inadequate and local colleagues may be overburdened or unfamil...

  3. Automation of Bioinformatics Workflows using CloVR, a Cloud Virtual Resource

    Science.gov (United States)

    Vangala, Mahesh

    2013-01-01

    Exponential growth of biological data, mainly due to revolutionary developments in NGS technologies in the past couple of years, has created a multitude of challenges in downstream data analysis using bioinformatics approaches. To handle such a tsunami of data, bioinformatics analysis must be carried out in an automated and parallel fashion. A successful analysis often requires more than a few computational steps, and bootstrapping these individual steps (scripts) into components, and the components into pipelines, certainly makes bioinformatics a reproducible and manageable segment of scientific research. CloVR (http://clovr.org) is one such flexible framework that facilitates the abstraction of bioinformatics workflows into executable pipelines. CloVR comes packaged with various built-in bioinformatics pipelines that can make use of multicore processing power when run on servers and/or the cloud. CloVR is amenable to building custom pipelines based on individual laboratory requirements. CloVR is available as a single executable virtual image file that comes bundled with pre-installed and pre-configured bioinformatics tools and packages, and thus circumvents cumbersome installation difficulties. CloVR is highly portable and can be run on traditional desktop/laptop computers, central servers and cloud compute farms. In conclusion, CloVR provides built-in automated analysis pipelines for microbial genomics, with scope to develop and integrate custom workflows that make use of parallel processing power when run on compute clusters, thereby addressing the bioinformatics challenges of NGS data.

  4. Self-Access Centers: Maximizing Learners’ Access to Center Resources

    Directory of Open Access Journals (Sweden)

    Mark W. Tanner

    2010-09-01

    Originally published in TESL-EJ March 2009, Volume 12, Number 4 (http://tesl-ej.org/ej48/a2.html). Reprinted with permission from the authors. Although some students have discovered how to use self-access centers effectively, the majority appear to be unaware of available resources. A website and database of materials were created to help students locate materials and use the Self-Access Study Center (SASC) at Brigham Young University's English Language Center (ELC) more effectively. Students took two surveys regarding their use of the SASC. The first survey was given before the website and database were made available. A second survey was administered 12 weeks after students had been introduced to the resource. An analysis of the data shows that students tend to use SASC resources more autonomously as a result of having a web-based database. The survey results suggest that SAC managers can encourage more autonomous use of center materials by providing a website and database to help students find appropriate materials to use to learn English.

  5. Introducing Bioinformatics into the Biology Curriculum: Exploring the National Center for Biotechnology Information.

    Science.gov (United States)

    Smith, Thomas M.; Emmeluth, Donald S.

    2002-01-01

    Explains the potential of computer technology in science education and argues for integrating bioinformatics into the biology curriculum. Describes three modules designed to introduce students to technological advancements in biology. Aims to develop a better understanding of molecular biology among students. (YDS)

  6. Illinois trauma centers and community violence resources

    Directory of Open Access Journals (Sweden)

    Bennet Butler

    2014-01-01

    Background: Elder abuse and neglect (EAN), intimate partner violence (IPV), and street-based community violence (SBCV) are significant public health problems, which frequently lead to traumatic injury. Trauma centers can provide an effective setting for intervention and referral, potentially interrupting the cycle of violence. Aims: To assess existing institutional resources for the identification and treatment of violence victims among patients presenting with acute injury to statewide trauma centers. Settings and Design: We used a prospective, web-based survey of trauma medical directors at 62 Illinois trauma centers. Nonresponders were contacted via telephone to complete the survey. Materials and Methods: This survey was based on a survey conducted in 2004 assessing trauma centers and IPV resources. We modified this survey to collect data on IPV, EAN, and SBCV. Statistical Analysis: Univariate and bivariate statistics were performed using STATA statistical software. Results: We found that 100% of trauma centers now screen for IPV, an improvement from 2004 (P = 0.007). Screening for EAN (70%) and SBCV (61%) was less common (P < 0.001), and hospitals thought that resources for SBCV in particular were inadequate (P < 0.001) and fewer resources were available for these patients (P = 0.02). However, there was lack of uniformity of screening, tracking, and referral practices for victims of violence throughout the state. Conclusion: The multiplicity of strategies for tracking and referring victims of violence in Illinois makes it difficult to assess screening and tracking or form generalized policy recommendations. This presents an opportunity to improve care delivered to victims of violence by standardizing care and referral protocols.

  7. Applications and Methods Utilizing the Simple Semantic Web Architecture and Protocol (SSWAP) for Bioinformatics Resource Discovery and Disparate Data and Service Integration

    Science.gov (United States)

    Scientific data integration and computational service discovery are challenges for the bioinformatic community. This process is made more difficult by the separate and independent construction of biological databases, which makes the exchange of scientific data between information resources difficu...

  8. Model-driven user interfaces for bioinformatics data resources: regenerating the wheel as an alternative to reinventing it

    Directory of Open Access Journals (Sweden)

    Swainston Neil

    2006-12-01

    Background: The proliferation of data repositories in bioinformatics has resulted in the development of numerous interfaces that allow scientists to browse, search and analyse the data that they contain. Interfaces typically support repository access by means of web pages, but other means are also used, such as desktop applications and command line tools. Interfaces often duplicate functionality amongst each other, and this implies that associated development activities are repeated in different laboratories. Interfaces developed by public laboratories are often created with limited developer resources. In such environments, reducing the time spent on creating user interfaces allows for a better deployment of resources for specialised tasks, such as data integration or analysis. Laboratories maintaining data resources are challenged to reconcile requirements for software that is reliable, functional and flexible with limitations on software development resources. Results: This paper proposes a model-driven approach for the partial generation of user interfaces for searching and browsing bioinformatics data repositories. Inspired by the Model Driven Architecture (MDA) of the Object Management Group (OMG), we have developed a system that generates interfaces designed for use with bioinformatics resources. This approach helps laboratory domain experts decrease the amount of time they have to spend dealing with the repetitive aspects of user interface development. As a result, the amount of time they can spend on gathering requirements and helping develop specialised features increases. The resulting system is known as Pierre, and has been validated through its application to use cases in the life sciences, including the PEDRoDB proteomics database and the e-Fungi data warehouse. Conclusion: MDAs focus on generating software from models that describe aspects of service capabilities, and can be applied to support rapid development of repository
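
    Pierre's model format is not described in this truncated abstract; the sketch below merely illustrates the general model-driven idea of deriving a search interface from a declarative description of a repository's fields, with invented field names.

        # Minimal sketch of model-driven interface generation. The declarative
        # model below and the generated HTML form are purely illustrative and do
        # not reflect Pierre's actual model language.
        FIELDS = [
            {"name": "protein_name", "label": "Protein name", "type": "text"},
            {"name": "organism", "label": "Organism", "type": "text"},
            {"name": "min_mass", "label": "Minimum mass (Da)", "type": "number"},
        ]


        def generate_search_form(fields) -> str:
            """Derive an HTML search form from the model instead of hand-coding it."""
            rows = [
                f'<label>{f["label"]}: <input name="{f["name"]}" type="{f["type"]}"></label>'
                for f in fields
            ]
            return ("<form action='/search'>\n  "
                    + "\n  ".join(rows)
                    + "\n  <button>Search</button>\n</form>")


        print(generate_search_form(FIELDS))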

  9. The Virtual Xenbase: transitioning an online bioinformatics resource to a private cloud.

    Science.gov (United States)

    Karimi, Kamran; Vize, Peter D

    2014-01-01

    As a model organism database, Xenbase has been providing informatics and genomic data on Xenopus (Silurana) tropicalis and Xenopus laevis frogs for more than a decade. The Xenbase database contains curated, as well as community-contributed and automatically harvested literature, gene and genomic data. A GBrowse genome browser, a BLAST+ server and stock center support are available on the site. When this resource was first built, all software services and components in Xenbase ran on a single physical server, with inherent reliability, scalability and inter-dependence issues. Recent advances in networking and virtualization techniques allowed us to move Xenbase to a virtual environment, and more specifically to a private cloud. To do so we decoupled the different software services and components, such that each would run on a different virtual machine. In the process, we also upgraded many of the components. The resulting system is faster and more reliable. System maintenance is easier, as individual virtual machines can now be updated, backed up and changed independently. We are also experiencing more effective resource allocation and utilization. Database URL: www.xenbase.org.

  10. The NIH-NIAID schistosomiasis resource center.

    Directory of Open Access Journals (Sweden)

    Fred A Lewis

    A bench scientist studying schistosomiasis must make a large commitment to maintain the parasite's life cycle, which necessarily involves a mammalian (definitive host) and the appropriate species of snail (intermediate host). This is often a difficult and expensive commitment to make, especially in the face of ever-tightening funds for tropical disease research. In addition to funding concerns, investigators usually face additional problems in the allocation of sufficient lab space to this effort (especially for snail rearing) and the limited availability of personnel experienced with life cycle upkeep. These problems can be especially daunting for the new investigator entering the field. Over 40 years ago, the National Institutes of Health-National Institute of Allergy and Infectious Diseases (NIH-NIAID) had the foresight to establish a resource from which investigators could obtain various schistosome life stages without having to expend the effort and funds necessary to maintain the entire life cycle on their own. This centralized resource translated into cost savings to both NIH-NIAID and to principal investigators by freeing up personnel costs on grants and allowing investigators to divert more funds to targeted research goals. Many investigators, especially those new to the field of tropical medicine, are only vaguely, if at all, aware of the scope of materials and support provided by this resource. This review is intended to help remedy that situation. Following a short history of the contract, we will give a brief description of the schistosome species provided, provide an estimate of the impact the resource has had on the research community, and describe some new additions and potential benefits the resource center might have for the ever-changing research interests of investigators.

  11. Marshall Space Flight Center Telescience Resource Kit

    Science.gov (United States)

    Wade, Gina

    2016-01-01

    Telescience Resource Kit (TReK) is a suite of software applications that can be used to monitor and control assets in space or on the ground. The Telescience Resource Kit was originally developed for the International Space Station program. Since then it has been used to support a variety of NASA programs and projects including the WB-57 Ascent Vehicle Experiment (WAVE) project, the Fast Affordable Science and Technology Satellite (FASTSAT) project, and the Constellation Program. The Payloads Operations Center (POC), also known as the Payload Operations Integration Center (POIC), provides the capability for payload users to operate their payloads at their home sites. In this environment, TReK provides local ground support system services and an interface to utilize remote services provided by the POC. TReK provides ground system services for local and remote payload user sites including International Partner sites, Telescience Support Centers, and U.S. Investigator sites in over 40 locations worldwide. General Capabilities: Support for various data interfaces such as User Datagram Protocol, Transmission Control Protocol, and Serial interfaces. Data Services - retrieve, process, record, playback, forward, and display data (ground based data or telemetry data). Command - create, modify, send, and track commands. Command Management - Configure one TReK system to serve as a command server/filter for other TReK systems. Database - databases are used to store telemetry and command definition information. Application Programming Interface (API) - ANSI C interface compatible with commercial products such as Visual C++, Visual Basic, LabVIEW, Borland C++, etc. The TReK API provides a bridge for users to develop software to access and extend TReK services. Environments - development, test, simulations, training, and flight. Includes standalone training simulators.

  12. Virtualized cloud data center networks issues in resource management

    CERN Document Server

    Tsai, Linjiun

    2016-01-01

    This book discusses the characteristics of virtualized cloud networking, identifies the requirements of cloud network management, and illustrates the challenges in deploying virtual clusters in multi-tenant cloud data centers. The book also introduces network partitioning techniques to provide contention-free allocation, topology-invariant reallocation, and highly efficient resource utilization, based on the Fat-tree network structure. Managing cloud data center resources without considering resource contentions among different cloud services and dynamic resource demands adversely affects the performance of cloud services and reduces the resource utilization of cloud data centers. These challenges are mainly due to strict cluster topology requirements, resource contentions between uncooperative cloud services, and spatial/temporal data center resource fragmentation. Cloud data center network resource allocation/reallocation which cope well with such challenges will allow cloud services to be provisioned with ...

  13. 75 FR 22438 - Proposed Information Collection (Health Resource Center Medical Center Payment Form) Activity...

    Science.gov (United States)

    2010-04-28

    ... AFFAIRS Proposed Information Collection (Health Resource Center Medical Center Payment Form) Activity... information technology. Title: Health Resource Center Medical Center Payment Form, VA Form 10-0505. OMB... proposed collection of certain information by the agency. Under the Paperwork Reduction Act (PRA) of...

  14. MSeqDR: A Centralized Knowledge Repository and Bioinformatics Web Resource to Facilitate Genomic Investigations in Mitochondrial Disease.

    Science.gov (United States)

    Shen, Lishuang; Diroma, Maria Angela; Gonzalez, Michael; Navarro-Gomez, Daniel; Leipzig, Jeremy; Lott, Marie T; van Oven, Mannis; Wallace, Douglas C; Muraresku, Colleen Clarke; Zolkipli-Cunningham, Zarazuela; Chinnery, Patrick F; Attimonelli, Marcella; Zuchner, Stephan; Falk, Marni J; Gai, Xiaowu

    2016-06-01

    MSeqDR is the Mitochondrial Disease Sequence Data Resource, a centralized and comprehensive genome and phenome bioinformatics resource built by the mitochondrial disease community to facilitate clinical diagnosis and research investigations of individual patient phenotypes, genomes, genes, and variants. A central Web portal (https://mseqdr.org) integrates community knowledge from expert-curated databases with genomic and phenotype data shared by clinicians and researchers. MSeqDR also functions as a centralized application server for Web-based tools to analyze data across both mitochondrial and nuclear DNA, including investigator-driven whole exome or genome dataset analyses through MSeqDR-Genesis. MSeqDR-GBrowse genome browser supports interactive genomic data exploration and visualization with custom tracks relevant to mtDNA variation and mitochondrial disease. MSeqDR-LSDB is a locus-specific database that currently manages 178 mitochondrial diseases, 1,363 genes associated with mitochondrial biology or disease, and 3,711 pathogenic variants in those genes. MSeqDR Disease Portal allows hierarchical tree-style disease exploration to evaluate their unique descriptions, phenotypes, and causative variants. Automated genomic data submission tools are provided that capture ClinVar compliant variant annotations. PhenoTips will be used for phenotypic data submission on deidentified patients using human phenotype ontology terminology. The development of a dynamic informed patient consent process to guide data access is underway to realize the full potential of these resources.

  15. The Firegoose: two-way integration of diverse data from different bioinformatics web resources with desktop applications

    Directory of Open Access Journals (Sweden)

    Schmid Amy K

    2007-11-01

    Background: Information resources on the World Wide Web play an indispensable role in modern biology. But integrating data from multiple sources is often encumbered by the need to reformat data files, convert between naming systems, or perform ongoing maintenance of local copies of public databases. Opportunities for new ways of combining and re-using data are arising as a result of the increasing use of web protocols to transmit structured data. Results: The Firegoose, an extension to the Mozilla Firefox web browser, enables data transfer between web sites and desktop tools. As a component of the Gaggle integration framework, Firegoose can also exchange data with Cytoscape, the R statistical package, Multiexperiment Viewer (MeV), and several other popular desktop software tools. Firegoose adds the capability to easily use local data to query KEGG, EMBL STRING, DAVID, and other widely-used bioinformatics web sites. Query results from these web sites can be transferred to desktop tools for further analysis with a few clicks. Firegoose acquires data from the web by screen scraping, microformats, embedded XML, or web services. We define a microformat, which allows structured information compatible with the Gaggle to be embedded in HTML documents. We demonstrate the capabilities of this software by performing an analysis of the genes activated in the microbe Halobacterium salinarum NRC-1 in response to anaerobic environments. Starting with microarray data, we explore functions of differentially expressed genes by combining data from several public web resources and construct an integrated view of the cellular processes involved. Conclusion: The Firegoose incorporates Mozilla Firefox into the Gaggle environment and enables interactive sharing of data between diverse web resources and desktop software tools without maintaining local copies. Additional web sites can be incorporated easily into the framework using the scripting platform of the
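
    As a rough illustration of the microformat and screen-scraping approach described above, the sketch below extracts items marked with a microformat-style class attribute from an HTML fragment using only the standard library; the class name "gaggle-data" and the sample page are assumptions, not Firegoose's actual format.

        # Pulling structured data out of an HTML page via a microformat-style
        # class attribute. The class name and sample page are invented.
        from html.parser import HTMLParser


        class MicroformatParser(HTMLParser):
            def __init__(self):
                super().__init__()
                self.in_item = False
                self.items = []

            def handle_starttag(self, tag, attrs):
                # Treat any element tagged class="gaggle-data" as a data item.
                if ("class", "gaggle-data") in attrs:
                    self.in_item = True

            def handle_data(self, data):
                if self.in_item and data.strip():
                    self.items.append(data.strip())

            def handle_endtag(self, tag):
                self.in_item = False


        page = '<ul><li class="gaggle-data">VNG1179C</li><li class="gaggle-data">VNG2094G</li></ul>'
        parser = MicroformatParser()
        parser.feed(page)
        print(parser.items)   # ['VNG1179C', 'VNG2094G']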

  16. A Dynamic and Interactive Monitoring System of Data Center Resources

    Directory of Open Access Journals (Sweden)

    Yu Ling-Fei

    2016-01-01

    To maximize the utilization and effectiveness of resources, modern data centers need a well-suited management system. Traditional approaches to resource provisioning and service requests have proven ill suited for virtualization and cloud computing. The manual handoffs between technology teams were also highly inefficient and poorly documented. In this paper, a dynamic and interactive monitoring system for data center resources, ResourceView, is presented. By consolidating all data center management functionality into a single interface, ResourceView shares a common view of the timeline metric status while providing comprehensive, centralized monitoring of data center physical and virtual IT assets, including power, cooling, physical space and VMs, in order to improve availability and efficiency. In addition, servers and VMs can be monitored from several viewpoints, such as clusters, racks and projects, which is very convenient for users.

  17. An object-oriented programming system for the integration of internet-based bioinformatics resources.

    Science.gov (United States)

    Beveridge, Allan

    2006-01-01

    The Internet consists of a vast inhomogeneous reservoir of data. Developing software that can integrate a wide variety of different data sources is a major challenge that must be addressed for the realisation of the full potential of the Internet as a scientific research tool. This article presents a semi-automated object-oriented programming system for integrating web-based resources. We demonstrate that the current Internet standards (HTML, CGI [common gateway interface], Java, etc.) can be exploited to develop a data retrieval system that scans existing web interfaces and then uses a set of rules to generate new Java code that can automatically retrieve data from the Web. The validity of the software has been demonstrated by testing it on several biological databases. We also examine the current limitations of the Internet and discuss the need for the development of universal standards for web-based data.

  18. Social tagging in the life sciences: characterizing a new metadata resource for bioinformatics

    Directory of Open Access Journals (Sweden)

    Tennis Joseph T

    2009-09-01

    Background: Academic social tagging systems, such as Connotea and CiteULike, provide researchers with a means to organize personal collections of online references with keywords (tags) and to share these collections with others. One of the side-effects of the operation of these systems is the generation of large, publicly accessible metadata repositories describing the resources in the collections. In light of the well-known expansion of information in the life sciences and the need for metadata to enhance its value, these repositories present a potentially valuable new resource for application developers. Here we characterize the current contents of two scientifically relevant metadata repositories created through social tagging. This investigation helps to establish how such socially constructed metadata might be used as it stands currently and to suggest ways that new social tagging systems might be designed that would yield better aggregate products. Results: We assessed the metadata that users of CiteULike and Connotea associated with citations in PubMed with the following metrics: coverage of the document space, density of metadata (tags per document), rates of inter-annotator agreement, and rates of agreement with MeSH indexing. CiteULike and Connotea were very similar on all of the measurements. In comparison to PubMed, document coverage and per-document metadata density were much lower for the social tagging systems. Inter-annotator agreement within the social tagging systems and the agreement between the aggregated social tagging metadata and MeSH indexing was low, though the latter could be increased through voting. Conclusion: The most promising uses of metadata from current academic social tagging repositories will be those that find ways to utilize the novel relationships between users, tags, and documents exposed through these systems. For more traditional kinds of indexing-based applications (such as keyword-based search to
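
    To make two of the metrics named above concrete, the toy sketch below computes tags per document and a simple pairwise inter-annotator agreement (Jaccard overlap of tag sets) over invented data; it is not the study's actual methodology.

        # Toy computation of per-document tag density and pairwise agreement.
        # The documents, users and tags are made up for illustration.
        tags = {
            "pmid:1": {"alice": {"microarray", "yeast"}, "bob": {"microarray"}},
            "pmid:2": {"alice": {"proteomics"}, "bob": {"proteomics", "mass-spec"}},
        }

        # Tags per document: total tag assignments divided by number of documents.
        density = sum(len(t) for doc in tags.values() for t in doc.values()) / len(tags)
        print("tags per document:", density)


        def jaccard(a, b):
            """Overlap of two annotators' tag sets for the same document."""
            return len(a & b) / len(a | b)


        agreement = sum(jaccard(*doc.values()) for doc in tags.values()) / len(tags)
        print("mean pairwise agreement:", round(agreement, 2))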

  19. Natural Resources at Kennedy Space Center

    Science.gov (United States)

    Phillips, Lynne

    2015-01-01

    Informative presentation on the purpose and need for an Ecological Program at the Kennedy Space Center. Includes the federal laws mandating the program followed by a description of many of the long term monitoring projects. Projects include wildlife surveying by observation as well as interactive surveys to collect basic animal data for analysis of trends in habitat use and ecosystem health. The program is designed for a broad range in audience from elementary to college level.

  20. 76 FR 53885 - Patent and Trademark Resource Centers Metrics

    Science.gov (United States)

    2011-08-30

    ... United States Patent and Trademark Office Patent and Trademark Resource Centers Metrics ACTION: Proposed collection; comment request. SUMMARY: The United States Patent and Trademark Office (USPTO), as part of its... following methods: E-mail: InformationCollection@uspto.gov . Include ``Patent and Trademark Resource...

  1. Education resources of the National Center for Biotechnology Information

    OpenAIRE

    Cooper, Peter S.; Lipshultz, Dawn; Matten, Wayne T.; McGinnis, Scott D.; Pechous, Steven; Romiti, Monica L.; Tao, Tao; Valjavec-Gratian, Majda; Sayers, Eric W.

    2010-01-01

    The National Center for Biotechnology Information (NCBI) hosts 39 literature and molecular biology databases containing almost half a billion records. As the complexity of these data and associated resources and tools continues to expand, so does the need for educational resources to help investigators, clinicians, information specialists and the general public make use of the wealth of public data available at the NCBI. This review describes the educational resources available at NCBI via th...

  2. 75 FR 39622 - Proposed Information Collection (Health Resource Center Medical Center Payment Form) Activity...

    Science.gov (United States)

    2010-07-09

    ... AFFAIRS Proposed Information Collection (Health Resource Center Medical Center Payment Form) Activity... collection of information abstracted below to the Office of Management and Budget (OMB) for review and comment. The PRA submission describes the nature of the information collection and its expected cost...

  3. Electronic Commerce Resource Centers. An Industry--University Partnership.

    Science.gov (United States)

    Gulledge, Thomas R.; Sommer, Rainer; Tarimcilar, M. Murat

    1999-01-01

    Electronic Commerce Resource Centers focus on transferring emerging technologies to small businesses through university/industry partnerships. Successful implementation hinges on a strategic operating plan, creation of measurable value for customers, investment in customer-targeted training, and measurement of performance outputs. (SK)

  4. Evaluating an Assistive Technology Resource Center in Taiwan

    Science.gov (United States)

    Ho, Hua-Kuo

    2010-01-01

    The purpose of this article is intended to present the procedure and outcomes of an evaluation of the Assistive Technology Resource Center in a city of Taiwan. The evaluation was initiated by Chiayi City Government through inviting three professionals in the field of assistive technology as evaluators. For the purpose of evaluation, the Executive…

  5. Multi-Cultural Resource Center Materials Handbook, Grades K-3.

    Science.gov (United States)

    Gillespie, Mary F.; Barrientos, Anita

    This annotated bibliography cites multicultural materials whose themes correlate with basic concepts taught in the primary grades. The items are in the Multi-Cultural Resource Center of the Toledo, Ohio public schools. The purpose of the bibliography is to help teachers integrate materials into their classroom. Films, filmstrips, books, study…

  6. Building an Information Resource Center for Competitive Intelligence.

    Science.gov (United States)

    Martin, J. Sperling

    1992-01-01

    Outlines considerations in the design of a Competitive Intelligence Information Resource Center (CIIRC), which is needed by business organizations for effective strategic decision making. Discussed are user needs, user participation, information sources, technology and interface design, operational characteristics, and planning for implementation.…

  7. The NIH-NIAID Filariasis Research Reagent Resource Center.

    Directory of Open Access Journals (Sweden)

    Michelle L Michalski

    2011-11-01

    Filarial worms cause a variety of tropical diseases in humans; however, they are difficult to study because they have complex life cycles that require arthropod intermediate hosts and mammalian definitive hosts. Research efforts in industrialized countries are further complicated by the fact that some filarial nematodes that cause disease in humans are restricted in host specificity to humans alone. This potentially makes the commitment to research difficult, expensive, and restrictive. Over 40 years ago, the United States National Institutes of Health-National Institute of Allergy and Infectious Diseases (NIH-NIAID) established a resource from which investigators could obtain various filarial parasite species and life cycle stages without having to expend the effort and funds necessary to maintain the entire life cycles in their own laboratories. This centralized resource (the Filariasis Research Reagent Resource Center, or FR3) translated into cost savings to both NIH-NIAID and to principal investigators by freeing up personnel costs on grants and allowing investigators to divert more funds to targeted research goals. Many investigators, especially those new to the field of tropical medicine, are unaware of the scope of materials and support provided by the FR3. This review is intended to provide a short history of the contract, brief descriptions of the filarial species and molecular resources provided, an estimate of the impact the resource has had on the research community, and a description of some new additions and potential benefits the resource center might have for the ever-changing research interests of investigators.

  8. Amarillo National Resource Center for Plutonium 1999 plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-01-30

    The purpose of the Amarillo National Resource Center for Plutonium is to serve the Texas Panhandle, the State of Texas and the US Department of Energy by: conducting scientific and technical research; advising decision makers; and providing information on nuclear weapons materials and related environment, safety, health, and nonproliferation issues while building academic excellence in science and technology. This paper describes the electronic resource library which provides the national archives of technical, policy, historical, and educational information on plutonium. Research projects related to the following topics are described: Environmental restoration and protection; Safety and health; Waste management; Education; Training; Instrumentation development; Materials science; Plutonium processing and handling; and Storage.

  9. Strategies for developing biostatistics resources in an academic health center.

    Science.gov (United States)

    Welty, Leah J; Carter, Rickey E; Finkelstein, Dianne M; Harrell, Frank E; Lindsell, Christopher J; Macaluso, Maurizio; Mazumdar, Madhu; Nietert, Paul J; Oster, Robert A; Pollock, Brad H; Roberson, Paula K; Ware, James H

    2013-04-01

    Biostatistics, the application of statistics to understanding health and biology, provides powerful tools for developing research questions, designing studies, refining measurements, analyzing data, and interpreting findings. Biostatistics plays an important role in health-related research, yet biostatistics resources are often fragmented, ad hoc, or oversubscribed within academic health centers (AHCs). Given the increasing complexity and quantity of health-related data, the emphasis on accelerating clinical and translational science, and the importance of conducting reproducible research, the need for the thoughtful development of biostatistics resources within AHCs is growing. In this article, the authors identify strategies for developing biostatistics resources in three areas: (1) recruiting and retaining biostatisticians, (2) efficiently using biostatistics resources, and (3) improving biostatistical contributions to science. AHCs should consider these three domains in building strong biostatistics resources, which they can leverage to support a broad spectrum of research. For each of the three domains, the authors describe the advantages and disadvantages of AHCs creating centralized biostatistics units rather than dispersing such resources across clinical departments or other research units. They also address the challenges that biostatisticians face in contributing to research without sacrificing their individual professional growth or the trajectory of their research teams. The authors ultimately recommend that AHCs create centralized biostatistics units because this approach offers distinct advantages both to investigators who collaborate with biostatisticians and to the biostatisticians themselves, and it is better suited to accomplish the research and education missions of AHCs.

  10. Distributed computing in bioinformatics.

    Science.gov (United States)

    Jain, Eric

    2002-01-01

    This paper provides an overview of methods and current applications of distributed computing in bioinformatics. Distributed computing is a strategy of dividing a large workload among multiple computers to reduce processing time, or to make use of resources such as programs and databases that are not available on all computers. Participating computers may be connected either through a local high-speed network or through the Internet.
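
    The workload-division strategy described here can be illustrated with a minimal sketch (not taken from the paper) that splits a list of items across worker processes using Python's standard multiprocessing module; the sequences and the per-item task are hypothetical placeholders.

        import multiprocessing as mp

        def gc_content(seq):
            # Toy per-item task: fraction of G/C bases in one sequence.
            return (seq.count("G") + seq.count("C")) / len(seq)

        if __name__ == "__main__":
            # Hypothetical workload; in practice these might be millions of reads.
            sequences = ["ATGCGC", "TTTTAA", "GGGCCC", "ATATAT"] * 1000
            with mp.Pool(processes=4) as pool:
                # Pool.map divides the list among the worker processes.
                results = pool.map(gc_content, sequences)
            print(sum(results) / len(results))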

  11. Education resources of the National Center for Biotechnology Information.

    Science.gov (United States)

    Cooper, Peter S; Lipshultz, Dawn; Matten, Wayne T; McGinnis, Scott D; Pechous, Steven; Romiti, Monica L; Tao, Tao; Valjavec-Gratian, Majda; Sayers, Eric W

    2010-11-01

    The National Center for Biotechnology Information (NCBI) hosts 39 literature and molecular biology databases containing almost half a billion records. As the complexity of these data and associated resources and tools continues to expand, so does the need for educational resources to help investigators, clinicians, information specialists and the general public make use of the wealth of public data available at the NCBI. This review describes the educational resources available at NCBI via the NCBI Education page (www.ncbi.nlm.nih.gov/Education/). These resources include materials designed for new users, such as About NCBI and the NCBI Guide, as well as documentation, Frequently Asked Questions (FAQs) and writings on the NCBI Bookshelf such as the NCBI Help Manual and the NCBI Handbook. NCBI also provides teaching materials such as tutorials, problem sets and educational tools such as the Amino Acid Explorer, PSSM Viewer and Ebot. NCBI also offers training programs including the Discovery Workshops, webinars and tutorials at conferences. To help users keep up-to-date, NCBI produces the online NCBI News and offers RSS feeds and mailing lists, along with a presence on Facebook, Twitter and YouTube.

  12. Database Resources of the National Center for Biotechnology Information

    Science.gov (United States)

    2017-01-01

    The National Center for Biotechnology Information (NCBI) provides a large suite of online resources for biological information and data, including the GenBank® nucleic acid sequence database and the PubMed database of citations and abstracts for published life science journals. The Entrez system provides search and retrieval operations for most of these data from 37 distinct databases. The E-utilities serve as the programming interface for the Entrez system. Augmenting many of the Web applications are custom implementations of the BLAST program optimized to search specialized data sets. New resources released in the past year include iCn3D, MutaBind, and the Antimicrobial Resistance Gene Reference Database; and resources that were updated in the past year include My Bibliography, SciENcv, the Pathogen Detection Project, Assembly, Genome, the Genome Data Viewer, BLAST and PubChem. All of these resources can be accessed through the NCBI home page at www.ncbi.nlm.nih.gov. PMID:27899561
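
    As a brief illustration of the E-utilities programming interface mentioned above, the following Python sketch searches PubMed and fetches the matching abstracts. The query term and parameter choices are illustrative; the esearch and efetch endpoints belong to the public E-utilities service at eutils.ncbi.nlm.nih.gov.

        import requests

        BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

        # Find PubMed records matching an example query (the term is arbitrary).
        search = requests.get(f"{BASE}/esearch.fcgi",
                              params={"db": "pubmed",
                                      "term": "bioinformatics resource center",
                                      "retmax": 5,
                                      "retmode": "json"}).json()
        ids = search["esearchresult"]["idlist"]

        # Retrieve the corresponding abstracts as plain text.
        abstracts = requests.get(f"{BASE}/efetch.fcgi",
                                 params={"db": "pubmed",
                                         "id": ",".join(ids),
                                         "rettype": "abstract",
                                         "retmode": "text"}).text
        print(abstracts[:500])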

  13. Agrigenomics for microalgal biofuel production: an overview of various bioinformatics resources and recent studies to link OMICS to bioenergy and bioeconomy.

    Science.gov (United States)

    Misra, Namrata; Panda, Prasanna Kumar; Parida, Bikram Kumar

    2013-11-01

    Microalgal biofuels offer great promise in contributing to the growing global demand for alternative sources of renewable energy. However, to make algae-based fuels cost competitive with petroleum, lipid production capabilities of microalgae need to improve substantially. Recent progress in algal genomics, in conjunction with other "omic" approaches, has accelerated the ability to identify metabolic pathways and genes that are potential targets in the development of genetically engineered microalgal strains with optimum lipid content. In this review, we summarize the current bioeconomic status of global biofuel feedstocks with particular reference to the role of "omics" in optimizing sustainable biofuel production. We also provide an overview of the various databases and bioinformatics resources available to gain a more complete understanding of lipid metabolism across algal species, along with the recent contributions of "omic" approaches in the metabolic pathway studies for microalgal biofuel production.

  14. 75 FR 7487 - National Center for Research Resources; Notice of Closed Meetings

    Science.gov (United States)

    2010-02-19

    ....gov . Name of Committee: National Center for Research Resources Special Emphasis Panel; COBRE III...: National Center for Research Resources Special Emphasis Panel; RCMI COBRE. Date: March 17-18, 2010. Time:...

  15. Genomic resources for a commercial flatfish, the Senegalese sole (Solea senegalensis): EST sequencing, oligo microarray design, and development of the Soleamold bioinformatic platform

    Directory of Open Access Journals (Sweden)

    Planas Josep V

    2008-10-01

    Full Text Available Abstract Background The Senegalese sole, Solea senegalensis, is a highly prized flatfish of growing commercial interest for aquaculture in Southern Europe. However, despite the industrial production of Senegalese sole being hampered primarily by lack of information on the physiological mechanisms involved in reproduction, growth and immunity, very limited genomic information is available on this species. Results Sequencing of a S. senegalensis multi-tissue normalized cDNA library, from adult tissues (brain, stomach, intestine, liver, ovary, and testis), larval stages (pre-metamorphosis, metamorphosis), juvenile stages (post-metamorphosis, abnormal fish), and undifferentiated gonads, generated 10,185 expressed sequence tags (ESTs). Clones were sequenced from the 3'-end to identify isoform specific sequences. Assembly of the entire EST collection into contigs gave 5,208 unique sequences of which 1,769 (34%) had matches in GenBank, thus showing a low level of redundancy. The sequence of the 5,208 unigenes was used to design and validate an oligonucleotide microarray representing 5,087 unique Senegalese sole transcripts. Finally, a novel interactive bioinformatic platform, Soleamold, was developed for the Senegalese sole EST collection as well as microarray and ISH data. Conclusion New genomic resources have been developed for S. senegalensis, an economically important fish in aquaculture, which include a collection of expressed genes, an oligonucleotide microarray, and a publicly available bioinformatic platform that can be used to study gene expression in this species. These resources will help elucidate transcriptional regulation in wild and captive Senegalese sole for optimization of its production under intensive culture conditions.

  16. Genomic resources for a commercial flatfish, the Senegalese sole (Solea senegalensis): EST sequencing, oligo microarray design, and development of the Soleamold bioinformatic platform

    Science.gov (United States)

    Cerdà, Joan; Mercadé, Jaume; Lozano, Juan José; Manchado, Manuel; Tingaud-Sequeira, Angèle; Astola, Antonio; Infante, Carlos; Halm, Silke; Viñas, Jordi; Castellana, Barbara; Asensio, Esther; Cañavate, Pedro; Martínez-Rodríguez, Gonzalo; Piferrer, Francesc; Planas, Josep V; Prat, Francesc; Yúfera, Manuel; Durany, Olga; Subirada, Francesc; Rosell, Elisabet; Maes, Tamara

    2008-01-01

    Background The Senegalese sole, Solea senegalensis, is a highly prized flatfish of growing commercial interest for aquaculture in Southern Europe. However, despite the industrial production of Senegalese sole being hampered primarily by lack of information on the physiological mechanisms involved in reproduction, growth and immunity, very limited genomic information is available on this species. Results Sequencing of a S. senegalensis multi-tissue normalized cDNA library, from adult tissues (brain, stomach, intestine, liver, ovary, and testis), larval stages (pre-metamorphosis, metamorphosis), juvenile stages (post-metamorphosis, abnormal fish), and undifferentiated gonads, generated 10,185 expressed sequence tags (ESTs). Clones were sequenced from the 3'-end to identify isoform specific sequences. Assembly of the entire EST collection into contigs gave 5,208 unique sequences of which 1,769 (34%) had matches in GenBank, thus showing a low level of redundancy. The sequence of the 5,208 unigenes was used to design and validate an oligonucleotide microarray representing 5,087 unique Senegalese sole transcripts. Finally, a novel interactive bioinformatic platform, Soleamold, was developed for the Senegalese sole EST collection as well as microarray and ISH data. Conclusion New genomic resources have been developed for S. senegalensis, an economically important fish in aquaculture, which include a collection of expressed genes, an oligonucleotide microarray, and a publicly available bioinformatic platform that can be used to study gene expression in this species. These resources will help elucidate transcriptional regulation in wild and captive Senegalese sole for optimization of its production under intensive culture conditions. PMID:18973667

  17. Resource allocation in academic health centers: creating common metrics.

    Science.gov (United States)

    Joiner, Keith A; Castellanos, Nathan; Wartman, Steven A

    2011-09-01

    Optimizing resource allocation is essential for effective academic health center (AHC) management, yet guidelines and principles for doing so in the research and educational arenas remain limited. To address this issue, the authors analyzed responses to the 2007-2008 Association of Academic Health Centers census using ratio analysis. The concept was to normalize data from an individual institution to that same institution, by creating a ratio of two separate values from the institution (e.g., total faculty FTEs/total FTEs). The ratios were then compared across institutions. Generally, this strategy minimizes the effect of institution size on the responses, size being the predominant limitation of using absolute values for developing meaningful metrics. In so doing, ratio analysis provides a range of responses that can be displayed in graphical form to determine the range and distribution of values. The data can then be readily scrutinized to determine where any given institution falls within the distribution. Staffing ratios and operating ratios from up to 54 institutions are reported. For ratios including faculty numbers in the numerator or denominator, the range of values is wide and minimally discriminatory, reflecting heterogeneity across institutions in faculty definitions. Values for financial ratios, in particular total payroll expense/total operating expense, are more tightly clustered, reflecting in part the use of units with a uniform definition (i.e., dollars), and emphasizing the utility of such ratios in decision guidelines. The authors describe how to apply these insights to develop metrics for resource allocation in the research and educational arenas.
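
    The ratio-analysis idea described above amounts to normalizing each institution's responses against its own totals before comparing across institutions. The short sketch below illustrates the calculation; the institution names and figures are invented for demonstration and are not drawn from the census.

        import pandas as pd

        # Hypothetical census-style responses (absolute values per institution).
        df = pd.DataFrame({
            "institution": ["AHC-A", "AHC-B", "AHC-C"],
            "total_faculty_fte": [1200, 450, 2300],
            "total_fte": [9000, 3100, 15500],
            "total_payroll_expense": [600e6, 210e6, 1.1e9],
            "total_operating_expense": [950e6, 340e6, 1.8e9],
        })

        # Normalize each institution to itself by forming ratios, then compare.
        df["faculty_share_of_fte"] = df["total_faculty_fte"] / df["total_fte"]
        df["payroll_share_of_expense"] = (df["total_payroll_expense"]
                                          / df["total_operating_expense"])

        print(df[["institution", "faculty_share_of_fte", "payroll_share_of_expense"]])
        print(df[["faculty_share_of_fte", "payroll_share_of_expense"]].describe())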

  18. Argonne Laboratory Computing Resource Center - FY2004 Report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R.

    2005-04-14

    In the spring of 2002, Argonne National Laboratory founded the Laboratory Computing Resource Center, and in April 2003 LCRC began full operations with Argonne's first teraflops computing cluster. The LCRC's driving mission is to enable and promote computational science and engineering across the Laboratory, primarily by operating computing facilities and supporting application use and development. This report describes the scientific activities, computing facilities, and usage in the first eighteen months of LCRC operation. In this short time LCRC has had broad impact on programs across the Laboratory. The LCRC computing facility, Jazz, is available to the entire Laboratory community. In addition, the LCRC staff provides training in high-performance computing and guidance on application usage, code porting, and algorithm development. All Argonne personnel and collaborators are encouraged to take advantage of this computing resource and to provide input into the vision and plans for computing and computational analysis at Argonne. Steering for LCRC comes from the Computational Science Advisory Committee, composed of computing experts from many Laboratory divisions. The CSAC Allocations Committee makes decisions on individual project allocations for Jazz.

  19. Eukaryotic Pathogen Database Resources (EuPathDB)

    Data.gov (United States)

    U.S. Department of Health & Human Services — EuPathDB Bioinformatics Resource Center for Biodefense and Emerging/Re-emerging Infectious Diseases is a portal for accessing genomic-scale datasets associated with...

  20. Autophagy Regulatory Network - a systems-level bioinformatics resource for studying the mechanism and regulation of autophagy.

    Science.gov (United States)

    Türei, Dénes; Földvári-Nagy, László; Fazekas, Dávid; Módos, Dezső; Kubisch, János; Kadlecsik, Tamás; Demeter, Amanda; Lenti, Katalin; Csermely, Péter; Vellai, Tibor; Korcsmáros, Tamás

    2015-01-01

    Autophagy is a complex cellular process having multiple roles, depending on tissue, physiological, or pathological conditions. Major post-translational regulators of autophagy are well known; however, they have not yet been collected comprehensively. The precise and context-dependent regulation of autophagy necessitates additional regulators, including transcriptional and post-transcriptional components that are listed in various datasets. Prompted by the lack of systems-level autophagy-related information, we manually collected the literature and integrated external resources to build a high-coverage autophagy database. We developed an online resource, Autophagy Regulatory Network (ARN; http://autophagy-regulation.org), to provide an integrated and systems-level database for autophagy research. ARN contains manually curated, imported, and predicted interactions of autophagy components (1,485 proteins with 4,013 interactions) in humans. We listed 413 transcription factors and 386 miRNAs that could regulate autophagy components or their protein regulators. We also connected the above-mentioned autophagy components and regulators with signaling pathways from the SignaLink 2 resource. The user-friendly website of ARN allows researchers without a computational background to search, browse, and download the database. The database can be downloaded in SQL, CSV, BioPAX, SBML, PSI-MI, and Cytoscape CYS file formats. ARN has the potential to facilitate the experimental validation of novel autophagy components and regulators. In addition, ARN helps the investigation of transcription factors, miRNAs and signaling pathways implicated in the control of the autophagic pathway. The list of such known and predicted regulators could be important in pharmacological attempts against cancer and neurodegenerative diseases.

  1. GALT Protein Database, a Bioinformatics Resource for the Management and Analysis of Structural Features of a Galactosemia-related Protein and Its Mutants

    Institute of Scientific and Technical Information of China (English)

    Antonio d'Acierno; Angelo Facchiano; Anna Marabotti

    2009-01-01

    We describe the GALT-Prot database and its related web-based application, which have been developed to collect information about the structural and functional effects of mutations on the human enzyme galactose-1-phosphate uridyltransferase (GALT), involved in the genetic disease named galactosemia type I. Besides a list of missense mutations at gene and protein sequence levels, GALT-Prot reports the analysis results of mutant GALT structures. In addition to the structural information about the wild-type enzyme, the database also includes structures of over 100 single-point mutants simulated by means of a computational procedure, and each mutant was analyzed with several bioinformatics programs in order to investigate the effects of the mutations. The web-based interface allows querying of the database, and several links are also provided in order to guarantee a high level of integration with other resources already present on the web. Moreover, the architecture of the database and the web application is flexible and can be easily adapted to store data related to other proteins with point mutations. GALT-Prot is freely available at http://bioinformatica.isa.cnr.it/GALT/.

  2. Applications and methods utilizing the Simple Semantic Web Architecture and Protocol (SSWAP) for bioinformatics resource discovery and disparate data and service integration

    Science.gov (United States)

    2010-01-01

    Background Scientific data integration and computational service discovery are challenges for the bioinformatic community. This process is made more difficult by the separate and independent construction of biological databases, which makes the exchange of data between information resources difficult and labor intensive. A recently described semantic web protocol, the Simple Semantic Web Architecture and Protocol (SSWAP; pronounced "swap") offers the ability to describe data and services in a semantically meaningful way. We report how three major information resources (Gramene, SoyBase and the Legume Information System [LIS]) used SSWAP to semantically describe selected data and web services. Methods We selected high-priority Quantitative Trait Locus (QTL), genomic mapping, trait, phenotypic, and sequence data and associated services such as BLAST for publication, data retrieval, and service invocation via semantic web services. Data and services were mapped to concepts and categories as implemented in legacy and de novo community ontologies. We used SSWAP to express these offerings in OWL Web Ontology Language (OWL), Resource Description Framework (RDF) and eXtensible Markup Language (XML) documents, which are appropriate for their semantic discovery and retrieval. We implemented SSWAP services to respond to web queries and return data. These services are registered with the SSWAP Discovery Server and are available for semantic discovery at http://sswap.info. Results A total of ten services delivering QTL information from Gramene were created. From SoyBase, we created six services delivering information about soybean QTLs, and seven services delivering genetic locus information. For LIS we constructed three services, two of which allow the retrieval of DNA and RNA FASTA sequences with the third service providing nucleic acid sequence comparison capability (BLAST). Conclusions The need for semantic integration technologies has preceded available solutions. We

  3. Applications and methods utilizing the Simple Semantic Web Architecture and Protocol (SSWAP) for bioinformatics resource discovery and disparate data and service integration

    Directory of Open Access Journals (Sweden)

    Nelson Rex T

    2010-06-01

    Full Text Available Abstract Background Scientific data integration and computational service discovery are challenges for the bioinformatic community. This process is made more difficult by the separate and independent construction of biological databases, which makes the exchange of data between information resources difficult and labor intensive. A recently described semantic web protocol, the Simple Semantic Web Architecture and Protocol (SSWAP; pronounced "swap") offers the ability to describe data and services in a semantically meaningful way. We report how three major information resources (Gramene, SoyBase and the Legume Information System [LIS]) used SSWAP to semantically describe selected data and web services. Methods We selected high-priority Quantitative Trait Locus (QTL), genomic mapping, trait, phenotypic, and sequence data and associated services such as BLAST for publication, data retrieval, and service invocation via semantic web services. Data and services were mapped to concepts and categories as implemented in legacy and de novo community ontologies. We used SSWAP to express these offerings in OWL Web Ontology Language (OWL), Resource Description Framework (RDF) and eXtensible Markup Language (XML) documents, which are appropriate for their semantic discovery and retrieval. We implemented SSWAP services to respond to web queries and return data. These services are registered with the SSWAP Discovery Server and are available for semantic discovery at http://sswap.info. Results A total of ten services delivering QTL information from Gramene were created. From SoyBase, we created six services delivering information about soybean QTLs, and seven services delivering genetic locus information. For LIS we constructed three services, two of which allow the retrieval of DNA and RNA FASTA sequences with the third service providing nucleic acid sequence comparison capability (BLAST). Conclusions The need for semantic integration technologies has preceded

  4. Community engagement and the resource centers for minority aging research.

    Science.gov (United States)

    Sood, Johanna R; Stahl, Sidney M

    2011-06-01

    The National Institute on Aging created the Resource Centers for Minority Aging Research (RCMARs) to address infrastructure development intended to reduce health disparities among older adults. The overall goals of the RCMARs are to (a) increase the size of the cadre of researchers conducting research on issues related to minority aging; (b) increase the diversity of researchers conducting research on minority aging; (c) create and test reliable measures for use in older diverse populations; and (d) conduct research on recruitment and retention of community-dwelling older adults for research addressing behavioral, social, and medical issues. In support of this latter goal, the RCMARs developed and maintain academic-community partnerships. To accomplish the recruitment and retention goal, the RCMARs established Community Liaison Working Groups using a collaborative approach to scientific inquiry; this special issue will identify research priorities for moving the science of recruitment and retention forward. In addition, sustainable and efficient methods for fostering long-term partnerships between community and academia will be identified, and evidence-based approaches to the recruitment and retention of diverse elders are explored. We expect this supplement to serve as a catalyst for researchers interested in engaging diverse community-dwelling elders in health-related research. In addition, this supplement should serve as a source of the most contemporary evidence-based approaches to the recruitment and retention of diverse older populations for participation in social, behavioral, and clinical research.

  5. Molecular discrimination of Entamoeba histolytica and Entamoeba dispar by bioinformatics resources

    Directory of Open Access Journals (Sweden)

    Eliane Gasparino

    2008-07-01

    Full Text Available Under microscopic examination, the invasive Entamoeba histolytica is indistinguishable from the non-pathogenic species Entamoeba dispar. The present study was therefore carried out to develop a molecular strategy for discriminating between the two species using bioinformatics resources. The analysis was based on the National Center for Biotechnology Information databases; a sequence-similarity search led to the selection of the cysteine synthase gene. A primer pair was designed with the Web Primer program, and the restriction enzyme TaqI was selected with the Web Cutter program. The amplified DNA fragment is 809 bp before digestion, a size consistent with E. dispar; after the enzyme acts, the fragment is divided into two pieces of 255 bp and 554 bp, the pattern characteristic of E. histolytica.
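
    The diagnostic logic above (a TaqI digest that leaves the 809 bp amplicon intact for E. dispar but cuts it into 255 bp and 554 bp fragments for E. histolytica) can be mimicked in silico. The following sketch uses toy sequences rather than the actual cysteine synthase amplicon and assumes the standard TaqI recognition site T^CGA.

        def taqi_digest(seq):
            """Return fragment lengths after cutting at every TaqI site (T^CGA)."""
            site, cut_offset = "TCGA", 1  # TaqI cuts between T and CGA
            cuts, start = [], 0
            while True:
                idx = seq.find(site, start)
                if idx == -1:
                    break
                cuts.append(idx + cut_offset)
                start = idx + 1
            bounds = [0] + cuts + [len(seq)]
            return [bounds[i + 1] - bounds[i] for i in range(len(bounds) - 1)]

        # Toy amplicons standing in for the real 809 bp PCR product.
        histolytica_like = "A" * 254 + "TCGA" + "A" * 551   # one internal TaqI site
        dispar_like = "A" * 809                             # no TaqI site

        print(taqi_digest(histolytica_like))  # -> [255, 554]
        print(taqi_digest(dispar_like))       # -> [809]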

  6. Amarillo National Resource Center for Plutonium. Quarterly technical progress report, May 1, 1997--July 31, 1997

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-09-01

    Progress summaries are provided from the Amarillo National Resource Center for Plutonium. Programs include the plutonium information resource center; environment, public health, and safety; education and training; and nuclear and other material studies.

  7. Earth Resources Observation and Science (EROS) Center's Earth as Art Image Gallery

    Data.gov (United States)

    National Aeronautics and Space Administration — The Earth Resources Observation and Science (EROS) Center manages this collection of Landsat 7 scenes created for aesthetic purposes rather than scientific...

  8. 34 CFR 669.1 - What is the Language Resource Centers Program?

    Science.gov (United States)

    2010-07-01

    ... improving the nation's capacity for teaching and learning foreign languages effectively. (Authority: 20 U.S... 34 Education 3 2010-07-01 2010-07-01 false What is the Language Resource Centers Program? 669.1... POSTSECONDARY EDUCATION, DEPARTMENT OF EDUCATION LANGUAGE RESOURCE CENTERS PROGRAM General § 669.1 What is...

  9. 75 FR 18216 - National Center for Research Resources; Notice of Meeting

    Science.gov (United States)

    2010-04-09

    ... HUMAN SERVICES National Institutes of Health National Center for Research Resources; Notice of Meeting...: Louise E. Ramm, PhD, Deputy Director, National Center for Research Resources, National Institutes of... accommodations, should notify the Contact Person listed below in advance of the meeting. The meeting will...

  10. 75 FR 49498 - National Center for Research Resources; Notice of Meeting

    Science.gov (United States)

    2010-08-13

    ... HUMAN SERVICES National Institutes of Health National Center for Research Resources; Notice of Meeting.... Contact Person: Louise E. Ramm, PhD, Deputy Director, National Center for Research Resources, National... accommodations, should notify the Contact Person listed below in advance of the meeting. The meeting will...

  11. Flow cytometry bioinformatics.

    Directory of Open Access Journals (Sweden)

    Kieran O'Neill

    Full Text Available Flow cytometry bioinformatics is the application of bioinformatics to flow cytometry data, which involves storing, retrieving, organizing, and analyzing flow cytometry data using extensive computational resources and tools. Flow cytometry bioinformatics requires extensive use of and contributes to the development of techniques from computational statistics and machine learning. Flow cytometry and related methods allow the quantification of multiple independent biomarkers on large numbers of single cells. The rapid growth in the multidimensionality and throughput of flow cytometry data, particularly in the 2000s, has led to the creation of a variety of computational analysis methods, data standards, and public databases for the sharing of results. Computational methods exist to assist in the preprocessing of flow cytometry data, identifying cell populations within it, matching those cell populations across samples, and performing diagnosis and discovery using the results of previous steps. For preprocessing, this includes compensating for spectral overlap, transforming data onto scales conducive to visualization and analysis, assessing data for quality, and normalizing data across samples and experiments. For population identification, tools are available to aid traditional manual identification of populations in two-dimensional scatter plots (gating), to use dimensionality reduction to aid gating, and to find populations automatically in higher dimensional space in a variety of ways. It is also possible to characterize data in more comprehensive ways, such as the density-guided binary space partitioning technique known as probability binning, or by combinatorial gating. Finally, diagnosis using flow cytometry data can be aided by supervised learning techniques, and discovery of new cell types of biological importance by high-throughput statistical methods, as part of pipelines incorporating all of the aforementioned methods. Open standards, data
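
    Two of the preprocessing steps mentioned above, transforming data onto a scale conducive to visualization and applying a simple rectangular gate, can be illustrated on synthetic events; in the sketch below the cofactor, thresholds, and channel interpretation are arbitrary assumptions rather than values from any real panel.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic two-channel data: a dim population and a bright population.
        dim = rng.lognormal(mean=3.0, sigma=0.5, size=(5000, 2))
        bright = rng.lognormal(mean=7.0, sigma=0.5, size=(2000, 2))
        events = np.vstack([dim, bright])   # columns stand in for two markers

        # Arcsinh transformation, a common choice for cytometry data (cofactor assumed).
        cofactor = 150.0
        transformed = np.arcsinh(events / cofactor)

        # A naive rectangular gate selecting events bright in both channels.
        gate = (transformed[:, 0] > 1.5) & (transformed[:, 1] > 1.5)
        print(f"{gate.sum()} of {len(events)} events fall inside the gate")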

  12. Student-Centered Teaching in Large Classes with Limited Resources

    Science.gov (United States)

    Renaud, Susan; Tannenbaum, Elizabeth; Stantial, Phillip

    2007-01-01

    The authors share suggestions for instructors who teach large classes (from 50-80 students) with minimal resources. The challenges of managing the classroom, using pair and group work effectively, and working with limited resources are addressed. The authors suggest ways to take attendance quickly, to reduce written work to grade, to start and…

  13. An Introduction to Bioinformatics

    Institute of Scientific and Technical Information of China (English)

    SHENG Qi-zheng; De Moor Bart

    2004-01-01

    As a newborn interdisciplinary field, bioinformatics is receiving increasing attention from biologists, computer scientists, statisticians, mathematicians and engineers. This paper briefly introduces the birth, importance, and extensive applications of bioinformatics in the different fields of biological research. A major challenge in bioinformatics - the unraveling of gene regulation - is discussed in detail.

  14. Natural Resource Program Center FY 2011 Annual Report

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — This annual report describes implementation of the Natural Resource Program Center’s Inventory and Monitoring (I&M) program during FY 2011. The introduction...

  15. National Maternal and Child Oral Health Resource Center

    Science.gov (United States)

    ... of fluoride varnish, including materials and organizations. Promoting Oral Health During Pregnancy: the latest update on programs and policy, ... the release of the national consensus statement on oral health care during pregnancy. Fluoride Varnish Resource Highlights ...

  16. Bioinformatics resource manager v2.3: an integrated software environment for systems biology with microRNA and cross-species analysis tools

    Directory of Open Access Journals (Sweden)

    Tilton Susan C

    2012-11-01

    Full Text Available Abstract Background MicroRNAs (miRNAs) are noncoding RNAs that direct post-transcriptional regulation of protein coding genes. Recent studies have shown miRNAs are important for controlling many biological processes, including nervous system development, and are highly conserved across species. Given their importance, computational tools are necessary for analysis, interpretation and integration of high-throughput (HTP) miRNA data in an increasing number of model species. The Bioinformatics Resource Manager (BRM) v2.3 is a software environment for data management, mining, integration and functional annotation of HTP biological data. In this study, we report recent updates to BRM for miRNA data analysis and cross-species comparisons across datasets. Results BRM v2.3 has the capability to query predicted miRNA targets from multiple databases, retrieve potential regulatory miRNAs for known genes, integrate experimentally derived miRNA and mRNA datasets, perform ortholog mapping across species, and retrieve annotation and cross-reference identifiers for an expanded number of species. Here we use BRM to show that developmental exposure of zebrafish to 30 uM nicotine from 6–48 hours post fertilization (hpf) results in behavioral hyperactivity in larval zebrafish and alteration of putative miRNA gene targets in whole embryos at developmental stages that encompass early neurogenesis. We show typical workflows for using BRM to integrate experimental zebrafish miRNA and mRNA microarray datasets with example retrievals for zebrafish, including pathway annotation and mapping to human ortholog. Functional analysis of differentially regulated (p Conclusions BRM provides the ability to mine complex data for identification of candidate miRNAs or pathways that drive phenotypic outcome and, therefore, is a useful hypothesis generation tool for systems biology. The miRNA workflow in BRM allows for efficient processing of multiple miRNA and mRNA datasets in a single
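
    Outside of BRM, the integration step described above (matching experimentally derived miRNA changes against predicted targets among differentially expressed mRNAs) can be approximated with a simple table join. The sketch below is not BRM itself; the column names and the toy records are hypothetical.

        import pandas as pd

        # Hypothetical differentially expressed miRNAs and mRNAs from an experiment.
        mirnas = pd.DataFrame({
            "mirna": ["dre-miR-153b", "dre-miR-30d"],
            "mirna_log2fc": [1.8, -1.2],
        })
        mrnas = pd.DataFrame({
            "gene": ["nrxn1a", "chrna7", "gap43"],
            "mrna_log2fc": [-0.9, 0.4, -1.5],
        })

        # Hypothetical predicted miRNA-target pairs (e.g., from target databases).
        targets = pd.DataFrame({
            "mirna": ["dre-miR-153b", "dre-miR-153b", "dre-miR-30d"],
            "gene": ["nrxn1a", "gap43", "chrna7"],
        })

        # Join the tables and keep anti-correlated miRNA/target pairs.
        merged = targets.merge(mirnas, on="mirna").merge(mrnas, on="gene")
        anti = merged[merged["mirna_log2fc"] * merged["mrna_log2fc"] < 0]
        print(anti)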

  17. Educational Resources Information Center (ERIC) File Partition Study: Final Report.

    Science.gov (United States)

    Hull, Cynthia C.; Wanger, Judith

    A study to provide the National Center for Educational Communication (NCE) with information that could be useful in making the ERIC data base more relevant to the needs of educators and more efficiently usable by them is discussed. Specific purposes of this project were to use an empirical field-survey study as an armature around which to: (1)…

  18. Translational Bioinformatics and Clinical Research (Biomedical) Informatics.

    Science.gov (United States)

    Sirintrapun, S Joseph; Zehir, Ahmet; Syed, Aijazuddin; Gao, JianJiong; Schultz, Nikolaus; Cheng, Donavan T

    2016-03-01

    Translational bioinformatics and clinical research (biomedical) informatics are the primary domains related to informatics activities that support translational research. Translational bioinformatics focuses on computational techniques in genetics, molecular biology, and systems biology. Clinical research (biomedical) informatics involves the use of informatics in discovery and management of new knowledge relating to health and disease. This article details 3 projects that are hybrid applications of translational bioinformatics and clinical research (biomedical) informatics: The Cancer Genome Atlas, the cBioPortal for Cancer Genomics, and the Memorial Sloan Kettering Cancer Center clinical variants and results database, all designed to facilitate insights into cancer biology and clinical/therapeutic correlations.

  19. Bioinformatics Training: A Review of Challenges, Actions and Support Requirements

    DEFF Research Database (Denmark)

    Schneider, M.V.; Watson, J.; Attwood, T.;

    2010-01-01

    As bioinformatics becomes increasingly central to research in the molecular life sciences, the need to train non-bioinformaticians to make the most of bioinformatics resources is growing. Here, we review the key challenges and pitfalls to providing effective training for users of bioinformatics...... services, and discuss successful training strategies shared by a diverse set of bioinformatics trainers. We also identify steps that trainers in bioinformatics could take together to advance the state of the art in current training practices. The ideas presented in this article derive from the first...

  20. Agile parallel bioinformatics workflow management using Pwrake.

    OpenAIRE

    2011-01-01

    Abstract Background In bioinformatics projects, scientific workflow systems are widely used to manage computational procedures. Full-featured workflow systems have been proposed to fulfil the demand for workflow management. However, such systems tend to be over-weighted for actual bioinformatics practices. We realize that quick deployment of cutting-edge software implementing advanced algorithms and data formats, and continuous adaptation to changes in computational resources and the environm...

  1. Earth Resources Observation and Science (EROS) Center's Earth as Art Image Gallery 3

    Data.gov (United States)

    National Aeronautics and Space Administration — The Earth Resources Observation and Science (EROS) Center manages the Earth as Art Three exhibit, which provides fresh and inspiring glimpses of different parts of...

  2. Earth Resources Observation and Science (EROS) Center's Journey of Lewis and Clark Gallery

    Data.gov (United States)

    National Aeronautics and Space Administration — The Earth Resources Observation and Science (EROS) Center manages this gallery of Landsat-derived images of one of the most remarkable and productive scientific...

  3. Earth Resources Observation and Science (EROS) Center's Earth as Art Image Gallery 2

    Data.gov (United States)

    National Aeronautics and Space Administration — The Earth Resources Observation and Science (EROS) Center manages this collection of forty-five new scenes developed for their aesthetic beauty, rather than for...

  4. Amarillo National Resource Center for Plutonium quarterly technical progress report, August 1--October 31, 1998

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-11-01

    This paper describes activities of the Center under the following topical sections: Electronic resource library; Environmental restoration and protection; Health and safety; Waste management; Communication program; Education program; Training; Analytical development; Materials science; Plutonium processing and handling; and Storage.

  5. Amarillo National Resource Center for Plutonium. Quarterly technical progress report, February 1, 1998--April 30, 1998

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-06-01

    Activities from the Amarillo National Resource Center for Plutonium are described. Areas of work include materials science of nuclear and explosive materials, plutonium processing and handling, robotics, and storage.

  6. 78 FR 26684 - Notice of Funding Availability for the Small Business Transportation Resource Center Program

    Science.gov (United States)

    2013-05-07

    ... disadvantaged businesses (SDB), disadvantaged business enterprises (DBE), women owned small businesses (WOSB... Office of the Secretary Notice of Funding Availability for the Small Business Transportation Resource Center Program AGENCY: Office of Small and Disadvantaged Business Utilization (OSDBU), Office of...

  7. Earth Resources Observation and Science (EROS) Center's Landsat State Mosaics Gallery

    Data.gov (United States)

    National Aeronautics and Space Administration — The Earth Resources Observation and Science (EROS) Center manages this gallery of images of the 50 U.S. states plus Puerto Rico as derived from Landsat data.

  8. Deep Learning in Bioinformatics

    OpenAIRE

    Min, Seonwoo; Lee, Byunghan; Yoon, Sungroh

    2016-01-01

    In the era of big data, transformation of biomedical big data into valuable knowledge has been one of the most important challenges in bioinformatics. Deep learning has advanced rapidly since the early 2000s and now demonstrates state-of-the-art performance in various fields. Accordingly, application of deep learning in bioinformatics to gain insight from data has been emphasized in both academia and industry. Here, we review deep learning in bioinformatics, presenting examples of current res...

  9. Amarillo National Resource Center for Plutonium quarterly technical progress report, August 1, 1997--October 31, 1997

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-12-31

    This report summarizes activities of the Amarillo National Resource Center for Plutonium during the quarter. The report describes the Electronic Resource Library; DOE support activities; current and future environmental health and safety programs; pollution prevention and pollution avoidance; communication, education, training, and community involvement programs; and nuclear and other material studies, including plutonium storage and disposition studies.

  10. Using Language Corpora to Develop a Virtual Resource Center for Business English

    Science.gov (United States)

    Ngo, Thi Phuong Le

    2015-01-01

    A Virtual Resource Center (VRC) has been brought into use since 2008 as an integral part of a task-based language teaching and learning program for Business English courses at Nantes University, France. The objective of the center is to enable students to work autonomously and individually on their language problems so as to improve their language…

  11. 75 FR 80062 - National Center for Research Resources; Notice of Closed Meetings

    Science.gov (United States)

    2010-12-21

    ... Democracy Blvd., Room 1068, Bethesda, MD 20892, 301-435-0965. Name of Committee: National Center for... Health, National Center for Research Resources, Office of Review, Room 1074, 6701 Democracy Blvd., MSC... Technology; 93.389, Research Infrastructure, 93.306, 93.333; 93.702, ARRA Related Construction...

  12. Assessment of water resources for nuclear energy centers

    Energy Technology Data Exchange (ETDEWEB)

    Samuels, G.

    1976-09-01

    Maps of the conterminous United States showing the rivers with sufficient flow to be of interest as potential sites for nuclear energy centers are presented. These maps show the rivers with (1) mean annual flows greater than 3000 cfs, with the flow rates identified for ranges of 3000 to 6000, 6000 to 12,000, 12,000 to 24,000, and greater than 24,000 cfs; (2) monthly, 20-year low flows greater than 1500 cfs, with the flow rates identified for ranges of 1500 to 3000, 3000 to 6000, 6000 to 12,000, and greater than 12,000 cfs; and (3) annual, 20-year low flows greater than 1500 cfs, with the flow rates identified for ranges of 1500 to 3000, 3000 to 6000, 6000 to 12,000, and greater than 12,000 cfs. Criteria relating river flow rates required for various size generating stations both for sites located on reservoirs and for sites without local storage of cooling water are discussed. These criteria are used in conjunction with plant water consumption rates (based on both instantaneous peak and annual average usage rates) to estimate the installed generating capacity that may be located at one site or within a river basin. Projections of future power capacity requirements, future demand for water (both withdrawals and consumption), and regions of expected water shortages are also presented. Regional maps of water availability, based on annual, 20-year low flows, are also shown. The feasibility of locating large energy centers in these regions is discussed.
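
    The report's criteria combine river flow with plant water consumption to bound the generating capacity a site can support. The sketch below shows only the shape of that arithmetic; the consumption rate and the allowable fraction of low flow are placeholder assumptions, not figures taken from the assessment.

        # Hypothetical inputs (placeholders, not values from the report).
        low_flow_cfs = 6000.0            # monthly 20-year low flow at the site
        consumption_cfs_per_gw = 50.0    # assumed evaporative consumption per GW(e)
        allowable_fraction = 0.10        # assumed share of low flow available for cooling

        # Capacity bound implied by these assumptions.
        available_cfs = allowable_fraction * low_flow_cfs
        max_capacity_gw = available_cfs / consumption_cfs_per_gw
        print(f"Approximate supportable capacity: {max_capacity_gw:.1f} GW(e)")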

  13. National Training Center Fort Irwin expansion area aquatic resources survey

    Energy Technology Data Exchange (ETDEWEB)

    Cushing, C.E.; Mueller, R.P.

    1996-02-01

    Biologists from Pacific Northwest National Laboratory (PNNL) were requested by personnel from Fort Irwin to conduct a biological reconnaissance of the Avawatz Mountains northeast of Fort Irwin, an area for proposed expansion of the Fort. Surveys of vegetation, small mammals, birds, reptiles, amphibians, and aquatic resources were conducted during 1995 to characterize the populations and habitats present, with emphasis on determining the presence of any species of special concern. This report presents a description of the sites sampled, a list of the organisms found and identified, and a discussion of relative abundance. Taxonomic identifications were done to the lowest level possible commensurate with determining the status of the taxa relative to their possible listing as threatened, endangered, or candidate species. Consultation with taxonomic experts was undertaken for the Coleoptera and Hemiptera. In addition to listing the macroinvertebrates found, the authors also present a discussion related to the possible presence of any threatened or endangered species or species of concern found in Sheep Creek Springs, Tin Cabin Springs, and the Amargosa River.

  14. Gene: a gene-centered information resource at NCBI.

    Science.gov (United States)

    Brown, Garth R; Hem, Vichet; Katz, Kenneth S; Ovetsky, Michael; Wallin, Craig; Ermolaeva, Olga; Tolstoy, Igor; Tatusova, Tatiana; Pruitt, Kim D; Maglott, Donna R; Murphy, Terence D

    2015-01-01

    The National Center for Biotechnology Information's (NCBI) Gene database (www.ncbi.nlm.nih.gov/gene) integrates gene-specific information from multiple data sources. NCBI Reference Sequence (RefSeq) genomes for viruses, prokaryotes and eukaryotes are the primary foundation for Gene records in that they form the critical association between sequence and a tracked gene upon which additional functional and descriptive content is anchored. Additional content is integrated based on the genomic location and RefSeq transcript and protein sequence data. The content of a Gene record represents the integration of curation and automated processing from RefSeq, collaborating model organism databases, consortia such as Gene Ontology, and other databases within NCBI. Records in Gene are assigned unique, tracked integers as identifiers. The content (citations, nomenclature, genomic location, gene products and their attributes, phenotypes, sequences, interactions, variation details, maps, expression, homologs, protein domains and external databases) is available via interactive browsing through NCBI's Entrez system, via NCBI's Entrez programming utilities (E-Utilities and Entrez Direct) and for bulk transfer by FTP.
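
    Because Gene records are exposed through the Entrez programming utilities noted above, a document summary for a single GeneID can be retrieved programmatically. The sketch below uses the public esummary endpoint; the GeneID is an arbitrary example, and the summary fields printed may vary between records.

        import requests

        BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"
        gene_id = "672"  # example GeneID (illustrative)

        summary = requests.get(f"{BASE}/esummary.fcgi",
                               params={"db": "gene", "id": gene_id,
                                       "retmode": "json"}).json()

        record = summary["result"][gene_id]
        # Print a few commonly present summary fields (availability may vary).
        for field in ("name", "description", "chromosome", "maplocation"):
            print(field, ":", record.get(field))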

  15. The MMS Science Data Center: Operations, Capabilities, and Resource.

    Science.gov (United States)

    Larsen, K. W.; Pankratz, C. K.; Giles, B. L.; Kokkonen, K.; Putnam, B.; Schafer, C.; Baker, D. N.

    2015-12-01

    The Magnetospheric MultiScale (MMS) constellation of satellites completed its six-month commissioning period in August 2015 and began science operations. Science operations for the Solving Magnetospheric Acceleration, Reconnection, and Turbulence (SMART) instrument package occur at the Laboratory for Atmospheric and Space Physics (LASP). The Science Data Center (SDC) at LASP is responsible for the data production, management, distribution, and archiving of the data received. The mission will collect several gigabytes per day of particle and field data. Management of these data requires effective selection, transmission, analysis, and storage of data in the ground segment of the mission, including efficient distribution paths to enable the science community to answer the key questions regarding magnetic reconnection. Due to the constraints on download volume, this includes the Scientist-in-the-Loop program that identifies high-value science data needed to answer the outstanding questions of magnetic reconnection. Of particular interest to the community are the tools and associated website we have developed to provide convenient access to the data, first by the mission science team and, beginning March 1, 2016, by the entire community. This presentation will demonstrate the data and tools available to the community via the SDC and discuss the technologies we chose and lessons learned.

  16. ATLAS Tier-2 at the Compute Resource Center GoeGrid in Goettingen

    CERN Document Server

    Meyer, J; The ATLAS collaboration; Weber, P

    2011-01-01

    GoeGrid is a grid resource center located in Göttingen, Germany. The resources are commonly used, funded, and maintained by communities doing research in the fields of grid development, computer science, biomedicine, high energy physics, theoretical physics, astrophysics, and the humanities. For the high energy physics community, GoeGrid serves as a Tier-2 center for the ATLAS experiment as part of the world-wide LHC computing grid (WLCG). The status and performance of the Tier-2 center are presented with a focus on the interdisciplinary setup and administration of the cluster. Given the various requirements of the different communities on the hardware and software setup, the challenge of the common operation of the cluster is detailed. The benefits are an efficient use of computer and manpower resources.

  17. ATLAS Tier-2 at the Compute Resource Center GoeGrid in Göttingen

    CERN Document Server

    Meyer, J; The ATLAS collaboration; Weber, P

    2010-01-01

    GoeGrid is a grid resource center located in Goettingen, Germany. The resources are commonly used, funded, and maintained by communities doing research in the fields of grid development, computer science, biomedicine, high energy physics, theoretical physics, astrophysics, and the humanities. For the high energy physics community, GoeGrid serves as a Tier-2 center for the ATLAS experiment as part of the world-wide LHC computing grid (WLCG). The status and performance of the Tier-2 center will be presented with a focus on the interdisciplinary setup and administration of the cluster. Given the various requirements of the different communities on the hardware and software setup, the challenge of the common operation of the cluster will be detailed. The benefits are an efficient use of computer and manpower resources. Further interdisciplinary projects include courses, organized in common, for students of all fields to support education in grid computing.

  18. [Significance and utilization of "RECHS" (Resource Center for Health Science) focusing on the importance of human bio-resources].

    Science.gov (United States)

    Matuo, Yushi; Matsunami, Hidetoshi; Takemura, Masao; Saito, Kuniaki

    2011-12-01

    The Resource Center for Health Science (RECHS) has initiated a project based on the development and utilization of Bio-Resources/Database (BR/DB), comprising personal health records (PHR), such as individuals' health and medical records, physically consolidated with bio-resources (e.g., serum and urine) taken from the same individuals. The project is characterized by the annual collection and analysis of BR/DB from healthy individuals, with a target of 100,000 participants, rather than by data drawn only from the unhealthy individuals investigated so far. The purpose is to establish a primary defense for the improvement of QOL by applying BR/DB to epidemiological and clinical chemistry analyses. Furthermore, the project also contributes to the construction of a PHR system planned as a national project. RECHS coordinating activities depend on participation by as many general hospitals as possible, on the basis of regional medical services, and on academic groups capable of analyzing BR/DB.

  19. Deep learning in bioinformatics.

    Science.gov (United States)

    Min, Seonwoo; Lee, Byunghan; Yoon, Sungroh

    2016-07-29

    In the era of big data, transformation of biomedical big data into valuable knowledge has been one of the most important challenges in bioinformatics. Deep learning has advanced rapidly since the early 2000s and now demonstrates state-of-the-art performance in various fields. Accordingly, application of deep learning in bioinformatics to gain insight from data has been emphasized in both academia and industry. Here, we review deep learning in bioinformatics, presenting examples of current research. To provide a useful and comprehensive perspective, we categorize research both by the bioinformatics domain (i.e. omics, biomedical imaging, biomedical signal processing) and deep learning architecture (i.e. deep neural networks, convolutional neural networks, recurrent neural networks, emergent architectures) and present brief descriptions of each study. Additionally, we discuss theoretical and practical issues of deep learning in bioinformatics and suggest future research directions. We believe that this review will provide valuable insights and serve as a starting point for researchers to apply deep learning approaches in their bioinformatics studies.
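
    To make the architecture categories above concrete, the following is a minimal convolutional network of the kind often applied to one-hot-encoded DNA sequence; it is a generic PyTorch sketch rather than a model from the review, and the layer sizes are arbitrary.

        import torch
        import torch.nn as nn

        class TinySeqCNN(nn.Module):
            """A toy 1D CNN over one-hot-encoded DNA (4 channels: A, C, G, T)."""
            def __init__(self, n_filters: int = 16, kernel_size: int = 8):
                super().__init__()
                self.conv = nn.Conv1d(4, n_filters, kernel_size)
                self.pool = nn.AdaptiveMaxPool1d(1)   # global max pooling over positions
                self.fc = nn.Linear(n_filters, 1)     # e.g., a single class logit

            def forward(self, x):                      # x: (batch, 4, sequence_length)
                h = torch.relu(self.conv(x))
                h = self.pool(h).squeeze(-1)           # (batch, n_filters)
                return self.fc(h)                      # (batch, 1)

        # One forward pass on random "sequences" just to show the shapes involved.
        model = TinySeqCNN()
        batch = torch.rand(2, 4, 200)                  # 2 sequences of length 200
        print(model(batch).shape)                      # torch.Size([2, 1])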

  20. 75 FR 48365 - Solicitation for a Cooperative Agreement-NIC Cost Containment Online Resource Center Project

    Science.gov (United States)

    2010-08-10

    ... National Institute of Corrections Solicitation for a Cooperative Agreement--NIC Cost Containment Online...: Solicitation for cooperative agreement. SUMMARY: The National Institute of Corrections (NIC) is soliciting.... The NIC Cost Containment Online Resource Center (CCORC) will be housed on the NIC Web site and...

  1. ERIC--The First 15 Years. A History of the Educational Resources Information Center.

    Science.gov (United States)

    Trester, Delmer J.

    This account of the Educational Resources Information Center (ERIC) provides information on its background and origin, and traces the development of the system from initial planning in 1962 through mid-1979. Although this is essentially an overview of the growth of the system, some of the more complex aspects of the ERIC story are included in the…

  2. Measuring Malaysia School Resource Centers' Standards through iQ-PSS: An Online Management Information System

    Science.gov (United States)

    Zainudin, Fadzliaton; Ismail, Kamarulzaman

    2010-01-01

    The Ministry of Education has come up with an innovative way to monitor the progress of 9,843 School Resource Centers (SRCs) using an online management information system called iQ-PSS (Quality Index of SRC). This paper aims to describe the data collection method and analyze the current state of SRCs in Malaysia and explain how the results can be…

  3. Turning Russian specialized microbial culture collections into resource centers for biotechnology.

    Science.gov (United States)

    Ivshina, Irena B; Kuyukina, Maria S

    2013-11-01

    Specialized nonmedical microbial culture collections contain unique bioresources that could be useful for biotechnology companies. Cooperation between collections and companies has suffered from shortcomings in infrastructure and legislation, hindering access to holdings. These challenges may be overcome by the transformation of collections into national bioresource centers and integration into international microbial resource networks.

  4. Microcomputers in Education: An Annotated Bibliography of Educational Resources Center Materials.

    Science.gov (United States)

    Leavy, Rebecca S.

    This annotated bibliography is a listing of both book and non-book materials in the collection at the Educational Resources Center at Western Kentucky University that relate to using microcomputers in education. These materials are primarily concerned with locating, selecting, and evaluating appropriate software; implementation of a microcomputer…

  5. 76 FR 62814 - National Center For Research Resources; Notice of Closed Meeting

    Science.gov (United States)

    2011-10-11

    ..., Office of Review, National Center for Research Resources, National Institutes of Health, 6701 Democracy Blvd., 1 Democracy Plaza, Rm. 1070, Bethesda, MD 20892, 301-435-0813, matocham@mail.nih.gov... Research; 93.371, Biomedical Technology; 93.389, Research Infrastructure, 93.306, 93.333; 93.702,...

  6. 75 FR 26760 - National Center for Research Resources; Notice of Closed Meeting

    Science.gov (United States)

    2010-05-12

    ... Officer, National Center for Research Resources, or National Institutes of Health, 6701 Democracy Blvd., 1 Democracy Plaza, Room 1074, MSC 4874, Bethesda, MD 20892-4874, 301-435-0824, dunnbo@mail.nih.gov... Research; 93.371, Biomedical Technology; 93.389, Research Infrastructure, 93.306, 93.333; 93.702,...

  7. 75 FR 70934 - National Center For Research Resources; Notice of Closed Meeting

    Science.gov (United States)

    2010-11-19

    ... of Health, NCRR/OR, Democracy I, 6701 Democracy Blvd., 1066, Bethesda, MD 20892. (Telephone... Center for Research Resources, National Institutes of Health, 6705 Democracy Blvd., Dem. 1, Room 1074... Technology; 93.389, Research Infrastructure, 93.306, 93.333; 93.702, ARRA Related Construction...

  8. 76 FR 29254 - National Center for Research Resources; Notice of Closed Meeting

    Science.gov (United States)

    2011-05-20

    ... applications. Place: National Institutes of Health, One Democracy Plaza, 6701 Democracy Boulevard, Bethesda, MD..., National Center For Research Resources, Office of Review, 6701 Democracy Blvd., Room 1082, Bethesda, MD... Infrastructure, 93.306, 93.333, 93.702, ARRA Related Construction Awards, National Institutes of Health,...

  9. Using Electronic Information Resources Centers by Faculty Members at University Education: Competencies, Needs and Challenges

    Science.gov (United States)

    Abouelenein, Yousri

    2017-01-01

    This study aimed at investigating the current status of electronic information resource centers available to faculty members in university education. Competencies that faculty members should possess in this area were determined, and their needs for scientific research and teaching skills were assessed. In addition, problems that hinder their…

  10. 77 FR 42790 - Notice of Funding Availability for the Small Business Transportation Resource Center Program

    Science.gov (United States)

    2012-07-20

    ... business enterprises (DBE), women owned small businesses (WOSB), HubZone, service disabled veteran owned... Office of the Secretary of Transportation Notice of Funding Availability for the Small Business Transportation Resource Center Program AGENCY: Office of Small and Disadvantaged Business Utilization...

  11. Personal Resources and Homelessness in Early Life: Predictors of Depression in Consumers of Homeless Multiservice Centers

    Science.gov (United States)

    DeForge, Bruce R.; Belcher, John R.; O'Rourke, Michael; Lindsey, Michael A.

    2008-01-01

    This study explored the relationship between personal resources and previous adverse life events such as homelessness and depression. Participants were recruited from two church sponsored multisite social service centers in Anne Arundel County, Maryland. The interview included demographics and several standardized scales to assess history of…

  12. Vertical and Horizontal Integration of Bioinformatics Education: A Modular, Interdisciplinary Approach

    Science.gov (United States)

    Furge, Laura Lowe; Stevens-Truss, Regina; Moore, D. Blaine; Langeland, James A.

    2009-01-01

    Bioinformatics education for undergraduates has been approached primarily in two ways: introduction of new courses with largely bioinformatics focus or introduction of bioinformatics experiences into existing courses. For small colleges such as Kalamazoo, creation of new courses within an already resource-stretched setting has not been an option.…

  13. Amarillo National Resource Center for Plutonium. Quarterly technical progress report, November 1, 1997--January 31, 1998

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-03-01

    This report provides information on projects conducted by the Amarillo National Resource Center for Plutonium, a consortium of Texas A&M University, Texas Tech University, and the University of Texas. Progress is reported for four major areas: (1) plutonium information resource; (2) environmental, safety, and health; (3) communication, education, training, and community involvement; and (4) nuclear and other material studies. Environmental, safety, and health projects reported include a number of studies on high explosives. Progress reported for nuclear material studies includes storage and waste disposal investigations.

  14. Using the PubMatrix literature-mining resource to accelerate student-centered learning in a veterinary problem-based learning curriculum.

    Science.gov (United States)

    David, John; Irizarry, Kristopher J L

    2009-01-01

    Problem-based learning (PBL) creates an atmosphere in which veterinary students must take responsibility for their own education. Unlike a traditional curriculum where students receive discipline-specific information by attending formal lectures, PBL is designed to elicit self-directed, student-centered learning such that each student determines (1) what he/she does not know (learning issues), (2) what he/she needs to learn, (3) how he/she will learn it, and (4) what resources he/she will use. One of the biggest challenges facing students in a PBL curriculum is efficient time management while pursuing learning issues. Bioinformatics resources, such as the PubMatrix literature-mining tool, allow access to tremendous amounts of information almost instantaneously. To accelerate student-centered learning it is necessary to include resources that enhance the rate at which students can process biomedical information. Unlike using the PubMed interface directly, the PubMatrix tool enables users to automate queries, allowing up to 1,000 distinct PubMed queries to be executed per single PubMatrix submission. Users may submit multiple PubMatrix queries per session, resulting in the ability to execute tens of thousands of PubMed queries in a single day. The intuitively organized results, which remain accessible from PubMatrix user accounts, enable students to rapidly assimilate and process hundreds of thousands of individual publication records as they relate to the student's specific learning issues and query terms. Subsequently, students can explore substantially more of the biomedical publication landscape per learning issue and spend a greater fraction of their time actively engaged in resolving their learning issues.
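
    As a rough illustration of the kind of batch querying that PubMatrix automates (and not the PubMatrix service itself), the Python sketch below builds a small term-by-modifier matrix of PubMed hit counts using Biopython's Entrez module. The search terms, modifier terms, and e-mail address are placeholders, and a real run should respect NCBI's rate limits.

```python
# Illustrative sketch (not the PubMatrix service itself): build a small
# term-by-modifier matrix of PubMed hit counts, the kind of co-occurrence
# table PubMatrix automates at much larger scale.
# Assumes Biopython is installed and that you supply your own e-mail
# address, as required by NCBI's usage policy.
import time
from Bio import Entrez

Entrez.email = "student@example.edu"  # placeholder address

search_terms = ["canine parvovirus", "feline leukemia virus"]  # hypothetical learning issues
modifiers = ["vaccination", "diagnosis", "treatment"]

def pubmed_count(query):
    """Return the number of PubMed records matching a query string."""
    handle = Entrez.esearch(db="pubmed", term=query, retmax=0)
    record = Entrez.read(handle)
    handle.close()
    return int(record["Count"])

matrix = {}
for term in search_terms:
    for modifier in modifiers:
        matrix[(term, modifier)] = pubmed_count(f"({term}) AND ({modifier})")
        time.sleep(0.4)  # be polite to NCBI: stay under the request-rate limit

for (term, modifier), count in matrix.items():
    print(f"{term!r} x {modifier!r}: {count} PubMed records")
```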

  15. Mental health resources for LGBT collegians: a content analysis of college counseling center Web sites.

    Science.gov (United States)

    Wright, Paul J; McKinley, Christopher J

    2011-01-01

    This study content analyzed a randomly selected stratified national sample of 203 four-year United States colleges' counseling center Web sites to assess the degree to which such sites feature information and reference services for lesbian, gay, bisexual, and transgender (LGBT) collegians. Results revealed that LGBT-targeted communications were infrequent. For instance, fewer than one third of counseling center Web sites described individual counseling opportunities for LGBT students, fewer than 11% mentioned group counseling opportunities, and fewer than 6% offered a university crafted pamphlet with information about LGBT issues and resources. Findings are interpreted within the context of prior LGBT student health research.

  16. Western Mineral and Environmental Resources Science Center--providing comprehensive earth science for complex societal issues

    Science.gov (United States)

    Frank, David G.; Wallace, Alan R.; Schneider, Jill L.

    2010-01-01

    Minerals in the environment and products manufactured from mineral materials are all around us and we use and come into contact with them every day. They impact our way of life and the health of all that lives. Minerals are critical to the Nation's economy, and knowing where future mineral resources will come from is important for sustaining the Nation's economy and national security. The U.S. Geological Survey (USGS) Mineral Resources Program (MRP) provides scientific information for objective resource assessments and unbiased research results on mineral resource potential, production and consumption statistics, as well as environmental consequences of mining. The MRP conducts this research to provide information needed by land planners and decisionmakers about where mineral commodities are known and suspected in the earth's crust and about the environmental consequences of extracting those commodities. As part of the MRP, scientists of the Western Mineral and Environmental Resources Science Center (WMERSC or 'Center' herein) coordinate the development of national, geologic, geochemical, geophysical, and mineral-resource databases and the migration of existing databases to standard models and formats that are available to both internal and external users. The unique expertise developed by Center scientists over many decades in response to mineral-resource-related issues is now in great demand to support applications such as public health research and remediation of environmental hazards that result from mining and mining-related activities. Results of WMERSC research provide timely and unbiased analyses of minerals and inorganic materials to (1) improve stewardship of public lands and resources; (2) support national and international economic and security policies; (3) sustain prosperity and improve our quality of life; and (4) protect and improve public health, safety, and environmental quality. The MRP…

  17. The GMOD Drupal Bioinformatic Server Framework

    Science.gov (United States)

    Papanicolaou, Alexie; Heckel, David G.

    2010-01-01

    Motivation: Next-generation sequencing technologies have led to the widespread use of -omic applications. As a result, there is now a pronounced bioinformatic bottleneck. The general model organism database (GMOD) tool kit (http://gmod.org) has produced a number of resources aimed at addressing this issue. It lacks, however, a robust online solution that can deploy heterogeneous data and software within a Web content management system (CMS). Results: We present a bioinformatic framework for the Drupal CMS. It consists of three modules. First, GMOD-DBSF is an application programming interface module for the Drupal CMS that simplifies the programming of bioinformatic Drupal modules. Second, the Drupal Bioinformatic Software Bench (biosoftware_bench) allows for a rapid and secure deployment of bioinformatic software. An innovative graphical user interface (GUI) guides both use and administration of the software, including the secure provision of pre-publication datasets. Third, we present genes4all_experiment, which exemplifies how our work supports the wider research community. Conclusion: Given the infrastructure presented here, the Drupal CMS may become a powerful new tool set for bioinformaticians. The GMOD-DBSF base module is an expandable community resource that decreases development time of Drupal modules for bioinformatics. The biosoftware_bench module can already enhance biologists' ability to mine their own data. The genes4all_experiment module has already been responsible for archiving of more than 150 studies of RNAi from Lepidoptera, which were previously unpublished. Availability and implementation: Implemented in PHP and Perl. Freely available under the GNU Public License 2 or later from http://gmod-dbsf.googlecode.com Contact: alexie@butterflybase.org PMID:20971988

  18. An older-worker employment model: Japan's Silver Human Resource Centers.

    Science.gov (United States)

    Bass, S A; Oka, M

    1995-10-01

    Over the past 20 years, a unique model of publicly assisted industries has developed in Japan, which contracts for services provided by retirees. Jobs for retirees are part-time and temporary in nature and, for the most part, are designed to assist in expanding community-based services. The program, known as the Silver Human Resource Centers, has expanded nationwide and reflects a novel approach to the productive engagement of retirees in society that may be replicable in other industrialized nations.

  19. A library-based bioinformatics services program.

    Science.gov (United States)

    Yarfitz, S; Ketchell, D S

    2000-01-01

    Support for molecular biology researchers has been limited to traditional library resources and services in most academic health sciences libraries. The University of Washington Health Sciences Libraries have been providing specialized services to this user community since 1995. The library recruited a Ph.D. biologist to assess the molecular biological information needs of researchers and design strategies to enhance library resources and services. A survey of laboratory research groups identified areas of greatest need and led to the development of a three-pronged program: consultation, education, and resource development. Outcomes of this program include bioinformatics consultation services, library-based and graduate level courses, networking of sequence analysis tools, and a biological research Web site. Bioinformatics clients are drawn from diverse departments and include clinical researchers in need of tools that are not readily available outside of basic sciences laboratories. Evaluation and usage statistics indicate that researchers, regardless of departmental affiliation or position, require support to access molecular biology and genetics resources. Centralizing such services in the library is a natural synergy of interests and enhances the provision of traditional library resources. Successful implementation of a library-based bioinformatics program requires both subject-specific and library and information technology expertise.

  20. A library-based bioinformatics services program*

    Science.gov (United States)

    Yarfitz, Stuart; Ketchell, Debra S.

    2000-01-01

    Support for molecular biology researchers has been limited to traditional library resources and services in most academic health sciences libraries. The University of Washington Health Sciences Libraries have been providing specialized services to this user community since 1995. The library recruited a Ph.D. biologist to assess the molecular biological information needs of researchers and design strategies to enhance library resources and services. A survey of laboratory research groups identified areas of greatest need and led to the development of a three-pronged program: consultation, education, and resource development. Outcomes of this program include bioinformatics consultation services, library-based and graduate level courses, networking of sequence analysis tools, and a biological research Web site. Bioinformatics clients are drawn from diverse departments and include clinical researchers in need of tools that are not readily available outside of basic sciences laboratories. Evaluation and usage statistics indicate that researchers, regardless of departmental affiliation or position, require support to access molecular biology and genetics resources. Centralizing such services in the library is a natural synergy of interests and enhances the provision of traditional library resources. Successful implementation of a library-based bioinformatics program requires both subject-specific and library and information technology expertise. PMID:10658962

  1. Planning the loading of data centers' resources based on download statistics

    Directory of Open Access Journals (Sweden)

    L. S. Hloba

    2016-06-01

    Full Text Available Customer service quality depends on how application requests are handled in the data center of the communication provider. The article proposes a control approach for dynamically involving resources to serve the input flow that takes into account the random nature of incoming requests and uses both short-term and long-term load statistics. The approach consists of two methods that manage the number of serving nodes involved. The first verifies that the amount of resources is adequate, evaluating the dynamics of the input load from short-term statistics together with the current state of the technical facilities. The second uses long-term statistics, from which the involvement of additional resources can be scheduled in advance for load peaks. Simulation results for the data center infrastructure of a communication provider demonstrate the effectiveness of the proposed methods.
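
    A minimal sketch of the short-term method described above, assuming simple exponential smoothing of recent request rates and an assumed per-node capacity and target utilization (none of these values come from the article):

```python
# Minimal sketch of the short-term method: smooth the recent request rate and
# adjust the number of active serving nodes so that predicted load stays below
# a target utilization.  ALPHA, NODE_CAPACITY and TARGET_UTILIZATION are
# illustrative assumptions, not values from the article.
import math

ALPHA = 0.3               # smoothing factor for short-term statistics
NODE_CAPACITY = 100.0     # requests/s one serving node can sustain (assumed)
TARGET_UTILIZATION = 0.7  # keep nodes at ~70% load to absorb bursts

def plan_nodes(observed_rates, active_nodes):
    """Exponentially smooth observed request rates and return (needed, delta),
    where delta is how many nodes to add (positive) or release (negative)."""
    smoothed = observed_rates[0]
    for rate in observed_rates[1:]:
        smoothed = ALPHA * rate + (1 - ALPHA) * smoothed
    needed = max(1, math.ceil(smoothed / (NODE_CAPACITY * TARGET_UTILIZATION)))
    return needed, needed - active_nodes

# Example: a rising short-term load suggests adding nodes before queues build up.
rates = [120, 150, 180, 240, 300]            # requests/s over recent intervals
needed, delta = plan_nodes(rates, active_nodes=3)
print(f"provision {needed} nodes (change of {delta:+d})")
```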

  2. Renewable Resources: a national catalog of model projects. Volume 3. Southern Solar Energy Center Region

    Energy Technology Data Exchange (ETDEWEB)

    None

    1980-07-01

    This compilation of diverse conservation and renewable energy projects across the United States was prepared through the enthusiastic participation of solar and alternate energy groups from every state and region. Compiled and edited by the Center for Renewable Resources, these projects reflect many levels of innovation and technical expertise. In many cases, a critical analysis is presented of how projects performed and of the institutional conditions associated with their success or failure. Some 2000 projects are included in this compilation; most have worked, some have not. Information about all is presented to aid learning from these experiences. The four volumes in this set are arranged in state sections by geographic region, coinciding with the four Regional Solar Energy Centers. The table of contents is organized by project category so that maximum cross-referencing may be obtained. This volume includes information on the Southern Solar Energy Center Region. (WHK)

  3. Renewable Resources: a national catalog of model projects. Volume 1. Northeast Solar Energy Center Region

    Energy Technology Data Exchange (ETDEWEB)

    None

    1980-07-01

    This compilation of diverse conservation and renewable energy projects across the United States was prepared through the enthusiastic participation of solar and alternate energy groups from every state and region. Compiled and edited by the Center for Renewable Resources, these projects reflect many levels of innovation and technical expertise. In many cases, a critical analysis is presented of how projects performed and of the institutional conditions associated with their success or failure. Some 2000 projects are included in this compilation; most have worked, some have not. Information about all is presented to aid learning from these experiences. The four volumes in this set are arranged in state sections by geographic region, coinciding with the four Regional Solar Energy Centers. The table of contents is organized by project category so that maximum cross-referencing may be obtained. This volume includes information on the Northeast Solar Energy Center Region. (WHK).

  4. Bioinformatics for Exploration

    Science.gov (United States)

    Johnson, Kathy A.

    2006-01-01

    For the purpose of this paper, bioinformatics is defined as the application of computer technology to the management of biological information. It can be thought of as the science of developing computer databases and algorithms to facilitate and expedite biological research. This is a crosscutting capability that supports nearly all human health areas ranging from computational modeling, to pharmacodynamics research projects, to decision support systems within autonomous medical care. Bioinformatics serves to increase the efficiency and effectiveness of the life sciences research program. It provides data, information, and knowledge capture which further supports management of the bioastronautics research roadmap - identifying gaps that still remain and enabling the determination of which risks have been addressed.

  5. Feature selection in bioinformatics

    Science.gov (United States)

    Wang, Lipo

    2012-06-01

    In bioinformatics, there are often a large number of input features. For example, there are millions of single nucleotide polymorphisms (SNPs) that are genetic variations which determine the difference between any two unrelated individuals. In microarrays, thousands of genes can be profiled in each test. It is important to find out which input features (e.g., SNPs or genes) are useful in classification of a certain group of people or diagnosis of a given disease. In this paper, we investigate some powerful feature selection techniques and apply them to problems in bioinformatics. We are able to identify a very small number of input features sufficient for tasks at hand and we demonstrate this with some real-world data.
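
    The idea can be illustrated with a small, self-contained scikit-learn example on synthetic data; it is not a reimplementation of the paper's specific techniques, and the sample sizes and feature counts are arbitrary:

```python
# Rank thousands of candidate features (genes/SNPs) and keep only a handful
# that carry class information, then compare a simple classifier on the full
# and reduced feature sets.  Synthetic data; illustrative only.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# 200 "samples" with 2000 features, only 10 of which are informative.
X, y = make_classification(n_samples=200, n_features=2000, n_informative=10,
                           n_redundant=0, random_state=0)

selector = SelectKBest(score_func=mutual_info_classif, k=10)
X_small = selector.fit_transform(X, y)
print("selected feature indices:", selector.get_support(indices=True))

# Note: for an unbiased accuracy estimate, selection should be nested inside
# the cross-validation folds (e.g. via a Pipeline); kept flat here for brevity.
clf = LogisticRegression(max_iter=1000)
print("CV accuracy, all features:", cross_val_score(clf, X, y, cv=5).mean())
print("CV accuracy, 10 features :", cross_val_score(clf, X_small, y, cv=5).mean())
```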

  6. Human resources management in fitness centers and their relationship with the organizational performance

    Directory of Open Access Journals (Sweden)

    Jerónimo García Fernández

    2014-12-01

    Full Text Available Purpose: Human capital is essential in organizations providing sports services, yet few studies examine which human resource practices are actually carried out and whether they help sports organizations achieve better results. The aim of this paper is therefore to analyze human resource management practices in private fitness centers and their relationship with organizational performance. Design/methodology/approach: A questionnaire was administered to 101 managers of private fitness centers in Spain, followed by exploratory and confirmatory factor analysis and linear regressions between the variables. Findings: In fitness organizations, training, reward, communication, and selection practices are positively correlated with organizational performance. Research limitations/implications: The use of convenience sampling in a single country limits the extrapolation of the results to the wider market. Originality/value: First, the study addresses the lack of research analyzing human resource management in sport organizations from the point of view of top managers; second, it allows fitness center managers to adopt practices that improve organizational performance.

  7. Phylogenetic trees in bioinformatics

    Energy Technology Data Exchange (ETDEWEB)

    Burr, Tom L [Los Alamos National Laboratory

    2008-01-01

    Genetic data is often used to infer evolutionary relationships among a collection of viruses, bacteria, animal or plant species, or other operational taxonomic units (OTU). A phylogenetic tree depicts such relationships and provides a visual representation of the estimated branching order of the OTUs. Tree estimation is unique for several reasons, including: the types of data used to represent each OTU; the use of probabilistic nucleotide substitution models; the inference goals involving both tree topology and branch length; and the huge number of possible trees for a given sample of a very modest number of OTUs, which implies that finding the best tree(s) to describe the genetic data for each OTU is computationally demanding. Bioinformatics is too large a field to review here. We focus on that aspect of bioinformatics that includes study of similarities in genetic data from multiple OTUs. Although research questions are diverse, a common underlying challenge is to estimate the evolutionary history of the OTUs. Therefore, this paper reviews the role of phylogenetic tree estimation in bioinformatics, available methods and software, and identifies areas for additional research and development.
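
    As a toy illustration of the tree-estimation step, the sketch below builds a neighbor-joining tree from a hypothetical pairwise distance matrix of four OTUs using Biopython; a real analysis would estimate distances from sequence data under a nucleotide substitution model:

```python
# Build a neighbor-joining tree from a (hypothetical) pairwise distance matrix
# of four OTUs with Biopython.  The distances below are made up for
# demonstration only.
from Bio import Phylo
from Bio.Phylo.TreeConstruction import DistanceMatrix, DistanceTreeConstructor

names = ["OTU_A", "OTU_B", "OTU_C", "OTU_D"]
# Lower-triangular matrix (including the zero diagonal), as DistanceMatrix expects.
dm = DistanceMatrix(names, [
    [0.0],
    [0.10, 0.0],
    [0.35, 0.30, 0.0],
    [0.40, 0.38, 0.12, 0.0],
])

tree = DistanceTreeConstructor().nj(dm)  # neighbor-joining topology + branch lengths
Phylo.draw_ascii(tree)                   # quick text rendering of the estimated tree
```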

  8. Science center capabilities to monitor and investigate Michigan’s water resources, 2016

    Science.gov (United States)

    Giesen, Julia A.; Givens, Carrie E.

    2016-09-06

    Michigan faces many challenges related to water resources, including flooding, drought, water-quality degradation and impairment, varying water availability, watershed-management issues, stormwater management, aquatic-ecosystem impairment, and invasive species. Michigan's water resources include approximately 36,000 miles of streams, over 11,000 inland lakes, 3,000 miles of shoreline along the Great Lakes (MDEQ, 2016), and groundwater aquifers throughout the State. The U.S. Geological Survey (USGS) works in cooperation with local, State, and other Federal agencies, as well as tribes and universities, to provide scientific information used to manage the water resources of Michigan. To effectively assess water resources, the USGS uses standardized methods to operate streamgages, water-quality stations, and groundwater stations. The USGS also monitors water quality in lakes and reservoirs, makes periodic measurements along rivers and streams, and maintains all monitoring data in a national, quality-assured hydrologic database. The USGS in Michigan investigates the occurrence, distribution, quantity, movement, and chemical and biological quality of surface water and groundwater statewide. Water-resource monitoring and scientific investigations are conducted statewide by USGS hydrologists, hydrologic technicians, biologists, and microbiologists who have expertise in data collection as well as various scientific specialties. A support staff consisting of computer-operations and administrative personnel provides the USGS the functionality to move science forward. Funding for USGS activities in Michigan comes from local and State agencies, other Federal agencies, direct Federal appropriations, and through the USGS Cooperative Matching Funds, which allows the USGS to partially match funding provided by local and State partners. This fact sheet provides an overview of the USGS's current (2016) capabilities to monitor and study Michigan's vast water resources. More…

  9. Medical Image Resource Center--making electronic teaching files from PACS.

    Science.gov (United States)

    Lim, C C Tchoyoson; Yang, Guo Liang; Nowinski, Wieslaw L; Hui, Francis

    2003-12-01

    A picture archive and communications system (PACS) is a rich source of images and data suitable for creating electronic teaching files (ETF). However, the potential for PACS to support nonclinical applications has not been fully realized: at present there is no mechanism for PACS to identify and store teaching files; neither is there a standardized method for sharing such teaching images. The Medical Image Resource Center (MIRC) is a new central image repository that defines standards for data exchange among different centers. We developed an ETF server that retrieves digital imaging and communication in medicine (DICOM) images from PACS, and enables users to create teaching files that conform to the new MIRC schema. We test-populated our ETF server with illustrative images from the clinical case load of the National Neuroscience Institute, Singapore. Together, PACS and MIRC have the potential to benefit radiology teaching and research.
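
    One step such a server needs is reading DICOM objects exported from PACS and keeping only non-identifying header fields for the teaching-file record. The hedged sketch below does this with the pydicom library; the file path is a placeholder and the selected fields are an assumption rather than the MIRC schema itself:

```python
# Read a DICOM object (e.g. exported from PACS) with pydicom and keep only
# non-identifying header fields that a teaching-file record might use.
# The path is a placeholder; this is not the MIRC schema itself.
import pydicom

def teaching_file_metadata(path):
    ds = pydicom.dcmread(path)
    return {
        "modality": getattr(ds, "Modality", ""),
        "body_part": getattr(ds, "BodyPartExamined", ""),
        "study_description": getattr(ds, "StudyDescription", ""),
        "rows": getattr(ds, "Rows", None),
        "columns": getattr(ds, "Columns", None),
        # Patient name, IDs and dates are deliberately omitted (de-identification).
    }

if __name__ == "__main__":
    print(teaching_file_metadata("example_case/IM0001.dcm"))  # placeholder path
```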

  10. The Resource Configuration Method with Lower Energy Consumption Based on Prediction in Cloud Data Center

    Directory of Open Access Journals (Sweden)

    Quan Liang

    2014-07-01

    Full Text Available Cloud computing data centers contain numerous hosts and receive numerous application requests. Short response times and user QoS are increasingly required, and lowering electricity consumption to build low-carbon, green networks is an irreversible trend. This paper first puts forward a reconfiguration framework based on request prediction using double exponential smoothing; on that basis, an allocation scheme is worked out in advance that improves the resource utilization ratio and lowers energy consumption. The paper also presents the concept of a Utility Ratio Matrix (URM) to represent the allocation of hosts and virtual machines (VMs), together with a reconfiguration algorithm. The algorithm separates the reconfiguration computation from the actual allocation, so that it avoids time delays and reduces energy consumption in the data center. The corresponding analysis and experimental results show the feasibility of the proposed reconfiguration algorithm.
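
    The prediction step the framework relies on can be sketched as plain double exponential smoothing (Holt's linear method); the smoothing parameters and the request series below are illustrative assumptions, and the URM-based reconfiguration itself is not reproduced:

```python
# Minimal double exponential smoothing (Holt's linear method), the prediction
# technique the framework above is built on.  Parameters and the request
# series are illustrative assumptions.
def double_exponential_smoothing(series, alpha=0.5, beta=0.3, horizon=3):
    """Return forecasts for `horizon` future steps from an observed series."""
    level, trend = series[0], series[1] - series[0]
    for value in series[1:]:
        last_level = level
        level = alpha * value + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
    return [level + (k + 1) * trend for k in range(horizon)]

requests_per_interval = [210, 230, 260, 255, 300, 320, 360]  # hypothetical workload
forecast = double_exponential_smoothing(requests_per_interval)
print("predicted requests for the next 3 intervals:",
      [round(f) for f in forecast])
# The predicted demand can then drive the advance VM-to-host allocation scheme.
```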

  11. Inbound Call Centers and Emotional Dissonance in the Job Demands – Resources Model

    Science.gov (United States)

    Molino, Monica; Emanuel, Federica; Zito, Margherita; Ghislieri, Chiara; Colombo, Lara; Cortese, Claudio G.

    2016-01-01

    Background: Emotional labor, defined as the process of regulating feelings and expressions as part of the work role, is a major characteristic in call centers. In particular, interacting with customers, agents are required to show certain emotions that are considered acceptable by the organization, even though these emotions may be different from their true feelings. This kind of experience is defined as emotional dissonance and represents a feature of the job especially for call center inbound activities. Aim: The present study was aimed at investigating whether emotional dissonance mediates the relationship between job demands (workload and customer verbal aggression) and job resources (supervisor support, colleague support, and job autonomy) on the one hand, and, on the other, affective discomfort, using the job demands-resources model as a framework. The study also observed differences between two different types of inbound activities: customer assistance service (CA) and information service. Method: The study involved agents of an Italian Telecommunication Company, 352 of whom worked in the CA and 179 in the information service. The hypothesized model was tested across the two groups through multi-group structural equation modeling. Results: Analyses showed that CA agents experience greater customer verbal aggression and emotional dissonance than information service agents. Results also showed, only for the CA group, a full mediation of emotional dissonance between workload and affective discomfort, and a partial mediation of customer verbal aggression and job autonomy, and affective discomfort. Conclusion: This study’s findings contributed both to the emotional labor literature, investigating the mediational role of emotional dissonance in the job demands-resources model, and to call center literature, considering differences between two specific kinds of inbound activities. Suggestions for organizations and practitioners emerged in order to identify

  12. Inbound Call Centers and Emotional Dissonance in the Job Demands – Resources Model

    Directory of Open Access Journals (Sweden)

    Monica Molino

    2016-07-01

    Full Text Available Background: Emotional labor, defined as the process of regulating feelings and expressions as part of the work role, is a major characteristic in call centers. In particular, interacting with customers, agents are required to show certain emotions that are considered acceptable by the organization, even though these emotions may be different from their true feelings. This kind of experience is defined as emotional dissonance and represents a feature of the job especially for call center inbound activities. Aim: The present study was aimed at investigating whether emotional dissonance mediates the relationship between job demands (workload and customer verbal aggression) and job resources (supervisor support, colleague support and job autonomy) on the one hand, and, on the other, affective discomfort, using the job demands-resources model as a framework. The study also observed differences between two different types of inbound activities: customer assistance service and information service. Method: The study involved agents of an Italian Telecommunication Company, 352 of whom worked in the customer assistance service and 179 in the information service. The hypothesized model was tested across the two groups through multi-group structural equation modeling. Results: Analyses showed that customer assistance service agents experience greater customer verbal aggression and emotional dissonance than information service agents. Results also showed, only for the customer assistance service group, a full mediation of emotional dissonance between workload and affective discomfort, and a partial mediation of customer verbal aggression and job autonomy, and affective discomfort. Conclusion: This study's findings contributed both to the emotional labor literature, investigating the mediational role of emotional dissonance in the job demands-resources model, and to call center literature, considering differences between two specific kinds of inbound activities.

  13. NASA Space Engineering Research Center for utilization of local planetary resources

    Science.gov (United States)

    In 1987, responding to widespread concern about America's competitiveness and future in the development of space technology and the academic preparation of our next generation of space professionals, NASA initiated a program to establish Space Engineering Research Centers (SERC's) at universities with strong doctoral programs in engineering. The goal was to create a national infrastructure for space exploration and development, and sites for the Centers would be selected on the basis of originality of proposed research, the potential for near-term utilization of technologies developed, and the impact these technologies could have on the U.S. space program. The Centers would also be charged with a major academic mission: the recruitment of topnotch students and their training as space professionals. This document describes the goals, accomplishments, and benefits of the research activities of the University of Arizona/NASA SERC. This SERC has become recognized as the premier center in the area known as In-Situ Resource Utilization or Indigenous Space Materials Utilization.

  14. Molecular discrimination of Entamoeba histolytica and Entamoeba dispar by bioinformatics resources - DOI: 10.4025/actascihealthsci.v30i2.2375

    Directory of Open Access Journals (Sweden)

    Débora Sommer

    2008-12-01

    Full Text Available Invasive amoebiasis, caused by Entamoeba histolytica, is microscopically indistinguishable from the non-pathogenic species Entamoeba dispar. The present study was therefore carried out, with the aid of bioinformatics tools, to discriminate Entamoeba histolytica and Entamoeba dispar by molecular techniques. The analysis was based on the National Center for Biotechnology Information databases; after a sequence-similarity search, the cysteine synthase gene was chosen. A primer pair was designed with the Web Primer program and the restriction enzyme TaqI was selected with the Web Cutter program. The amplified fragment is 809 bp in size; after digestion with TaqI it is divided into two fragments, one of 255 bp and one of 554 bp, a pattern characteristic of E. histolytica, whereas in the absence of a cut the fragment remains at 809 bp, which is consistent with E. dispar.
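
    The in-silico digestion step can be sketched in a few lines of Python: locate TaqI recognition sites (TCGA, cut after the T) in an amplicon and report the resulting fragment sizes. The sequence below is a short hypothetical example, not the actual 809 bp cysteine synthase amplicon from the study:

```python
# Find TaqI sites (recognition sequence TCGA, cutting after the T) in an
# amplicon and report the resulting fragment sizes.  The 30 bp sequence is a
# toy example, not the real cysteine synthase amplicon.
def taqi_fragments(amplicon):
    """Return fragment lengths produced by TaqI digestion of `amplicon`."""
    site, cut_offset = "TCGA", 1          # TaqI cuts T^CGA
    cuts, start = [], 0
    while True:
        pos = amplicon.find(site, start)
        if pos == -1:
            break
        cuts.append(pos + cut_offset)
        start = pos + 1
    bounds = [0] + cuts + [len(amplicon)]
    return [bounds[i + 1] - bounds[i] for i in range(len(bounds) - 1)]

amplicon = "ATGGCATTCGATTACCGGAAATTTGGCCAT"   # toy sequence with one TCGA site
print(taqi_fragments(amplicon))               # [8, 22]: a cut pattern, as in E. histolytica
# An amplicon without a TCGA site returns a single full-length fragment,
# mimicking the uncut 809 bp pattern reported for E. dispar.
```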

  15. Optimal scheduling of logistical support for medical resources order and shipment in community health service centers

    Directory of Open Access Journals (Sweden)

    Ming Liu

    2015-11-01

    Full Text Available Purpose: This paper aims to propose an optimal scheduling approach for medical resources order and shipment in community health service centers (CHSCs). Design/methodology/approach: This paper presents two logistical support models for scheduling medical resources in CHSCs. The first model is a deterministic planning model (DM), which systematically considers the demands for various kinds of medical resources, supplier lead times, storage capacity and other constraints, as well as integrated shipment planning in the dimensions of time and space. The problem is a multi-commodity flow problem and is formulated as a mixed 0-1 integer programming model. Considering that the demand for medical resources is always stochastic in practice, the second model is constructed as a stochastic programming model (SM). A solution procedure is developed to solve the two proposed models and a simulation-based evaluation method is proposed to compare their performances. Findings: The main contributions of this paper include the following two aspects: (1) While most research on medical resources optimization studies a static problem that takes no account of time evolution, and especially of the dynamic demand for such resources, the proposed models integrate a time-space network technique, which can find the optimal scheduling of logistical support for medical resources order and shipment in CHSCs effectively. (2) The logistics plans in response to deterministic demand and time-varying demand are constructed as a 0-1 mixed integer programming model and a stochastic integer programming model, respectively. The optimal solutions not only minimize the operation cost of the logistics system, but can also improve the order and shipment operation in practice. Originality/value: Currently, medical resources in CHSCs are purchased by telephone or e-mail. The important parameters in decision making, i.e. order/shipment frequency…
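
    A toy single-commodity version of the deterministic planning idea can be written with the PuLP modelling library: choose in which periods to place orders so that demand is met and storage capacity is respected, while minimizing fixed ordering plus holding costs. The data are assumptions and the model is far simpler than the paper's multi-commodity time-space formulation:

```python
# Toy 0-1/integer ordering model for a single medical resource over a few
# periods, minimizing fixed ordering plus holding costs subject to demand and
# storage capacity.  All numbers are illustrative assumptions.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

demand = [40, 60, 30, 80]         # units needed in each period (assumed)
capacity = 100                    # storage capacity of the CHSC (assumed)
order_cost, holding_cost = 50, 1  # fixed cost per order, cost per unit stored

T = range(len(demand))
q = [LpVariable(f"qty_{t}", lowBound=0) for t in T]                     # units ordered
y = [LpVariable(f"order_{t}", cat=LpBinary) for t in T]                 # 1 if an order is placed
inv = [LpVariable(f"inv_{t}", lowBound=0, upBound=capacity) for t in T] # end-of-period stock

model = LpProblem("chsc_ordering", LpMinimize)
model += lpSum(order_cost * y[t] + holding_cost * inv[t] for t in T)

for t in T:
    previous = inv[t - 1] if t > 0 else 0
    model += inv[t] == previous + q[t] - demand[t]   # inventory balance
    model += q[t] <= capacity * y[t]                 # can only order when y[t] = 1

model.solve()
for t in T:
    print(f"period {t}: order {value(q[t]):.0f}, end inventory {value(inv[t]):.0f}")
print("total cost:", value(model.objective))
```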

  16. Bioinformatics and Computational Core Technology Center

    Data.gov (United States)

    Federal Laboratory Consortium — SERVICES PROVIDED BY THE COMPUTER CORE FACILITY Evaluation, purchase, set up, and maintenance of the computer hardware and network for the 170 users in the research...

  17. Bioinformatics and Computational Core Technology Center

    Data.gov (United States)

    Federal Laboratory Consortium — SERVICES PROVIDED BY THE COMPUTER CORE FACILITY Evaluation, purchase, set up, and maintenance of the computer hardware and network for the 170 users in the research...

  18. The Contribution of Background Variables, Internal and External Resources to Life Satisfaction among Adolescents in Residential Treatment Centers

    Science.gov (United States)

    Lipschitz-Elhawi, Racheli; Itzhaky, Haya; Michal, Hefetz

    2008-01-01

    The article deals with the contribution of background variables (gender, years of residence in a treatment center, and family status), internal resource (self-esteem), and external resources (peer, family and significant other support, sense of belonging to the community) to life satisfaction among adolescents living in residential treatment…

  19. Teachers' Link to Electronic Resources in the Library Media Center: A Local Study of Awareness, Knowledge, and Influence

    Science.gov (United States)

    Williams, Teresa D.; Grimble, Bonnie J.; Irwin, Marilyn

    2004-01-01

    High school students often use online databases and the Internet in the school library media center (SLMC) to complete teachers' assignments. This case study used a survey to assess teachers' awareness of electronic resources, and to determine whether their directions influence student use of these resources in the SLMC. Participants were teachers…

  20. Providing Curriculum Support in the School Library Media Center: Resource Alignment, or How To Eat an Elephant.

    Science.gov (United States)

    Lowe, Karen

    2003-01-01

    Discusses the process of weeding, updating, and building a school library media collection that supports the state curriculum. Explains resource alignment, a process for using the shelf list as a tool to analyze and align media center resources to state curricula, and describes a five-year plan and its usefulness for additional funding. (LRW)

  1. Evaluation of pharmacist utilization of a poison center as a resource for patient care.

    Science.gov (United States)

    Armahizer, Michael J; Johnson, David; Deusenberry, Christina M; Foley, John J; Krenzelok, Edward P; Pummer, Tara L

    2013-06-01

    The objective of this study was to evaluate pharmacist use of a Regional Poison Information Center (RPIC), identify potential barriers to utilization, and provide strategies to overcome these barriers. All calls placed to a RPIC by a pharmacist, physician, or nurse over a 5-year period were retrieved. These data were analyzed to assess the pharmacist utilization of the RPIC and the variation of call types. Additionally, a survey, designed to assess the past and future use of the RPIC by pharmacists, was distributed to pharmacists in the region. Of the 37,799 calls made to the RPIC, 26,367 (69.8%) were from nurses, 8096 (21.4%) were from physicians, and 3336 (8.8%) were from pharmacists. Among calls initiated by pharmacists, the majority involved medication identification (n = 2391, 71.7%). The survey had a 38.9% response rate (n = 715) and revealed a trend toward less RPIC utilization by pharmacists with more formal training but less practice experience. The utilization of the RPIC was lowest among pharmacists as compared to other health care professionals. This may be due to pharmacists' unfamiliarity with the poison center's scope of services and resources. Therefore, it is important that pharmacists are educated on the benefit of utilizing poison centers in clinical situations.

  2. Agile parallel bioinformatics workflow management using Pwrake

    Directory of Open Access Journals (Sweden)

    Tanaka Masahiro

    2011-09-01

    Full Text Available Abstract Background In bioinformatics projects, scientific workflow systems are widely used to manage computational procedures. Full-featured workflow systems have been proposed to fulfil the demand for workflow management. However, such systems tend to be over-weighted for actual bioinformatics practices. We realize that quick deployment of cutting-edge software implementing advanced algorithms and data formats, and continuous adaptation to changes in computational resources and the environment are often prioritized in scientific workflow management. These features have a greater affinity with the agile software development method through iterative development phases after trial and error. Here, we show the application of a scientific workflow system Pwrake to bioinformatics workflows. Pwrake is a parallel workflow extension of Ruby's standard build tool Rake, the flexibility of which has been demonstrated in the astronomy domain. Therefore, we hypothesize that Pwrake also has advantages in actual bioinformatics workflows. Findings We implemented the Pwrake workflows to process next generation sequencing data using the Genomic Analysis Toolkit (GATK and Dindel. GATK and Dindel workflows are typical examples of sequential and parallel workflows, respectively. We found that in practice, actual scientific workflow development iterates over two phases, the workflow definition phase and the parameter adjustment phase. We introduced separate workflow definitions to help focus on each of the two developmental phases, as well as helper methods to simplify the descriptions. This approach increased iterative development efficiency. Moreover, we implemented combined workflows to demonstrate modularity of the GATK and Dindel workflows. Conclusions Pwrake enables agile management of scientific workflows in the bioinformatics domain. The internal domain specific language design built on Ruby gives the flexibility of rakefiles for writing scientific workflows

  3. Pattern recognition in bioinformatics.

    Science.gov (United States)

    de Ridder, Dick; de Ridder, Jeroen; Reinders, Marcel J T

    2013-09-01

    Pattern recognition is concerned with the development of systems that learn to solve a given problem using a set of example instances, each represented by a number of features. These problems include clustering, the grouping of similar instances; classification, the task of assigning a discrete label to a given instance; and dimensionality reduction, combining or selecting features to arrive at a more useful representation. The use of statistical pattern recognition algorithms in bioinformatics is pervasive. Classification and clustering are often applied to high-throughput measurement data arising from microarray, mass spectrometry and next-generation sequencing experiments for selecting markers, predicting phenotype and grouping objects or genes. Less explicitly, classification is at the core of a wide range of tools such as predictors of genes, protein function, functional or genetic interactions, etc., and used extensively in systems biology. A course on pattern recognition (or machine learning) should therefore be at the core of any bioinformatics education program. In this review, we discuss the main elements of a pattern recognition course, based on material developed for courses taught at the BSc, MSc and PhD levels to an audience of bioinformaticians, computer scientists and life scientists. We pay attention to common problems and pitfalls encountered in applications and in interpretation of the results obtained.
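
    The three tasks named above can be illustrated compactly with scikit-learn on synthetic data; this is a teaching sketch under arbitrary assumptions (sample size, feature count, number of components), not a pipeline from the review:

```python
# Dimensionality reduction, clustering and classification on synthetic
# "expression-like" data with scikit-learn.  Illustrative only.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=150, n_features=500, n_informative=15,
                           random_state=1)

# Dimensionality reduction: project 500 features onto 10 principal components.
X_reduced = PCA(n_components=10, random_state=1).fit_transform(X)

# Clustering: group samples without using the labels.
clusters = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(X_reduced)
print("cluster sizes:", [int((clusters == k).sum()) for k in (0, 1)])

# Classification: keep preprocessing inside the pipeline so that
# cross-validation does not leak information between folds.
clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC())
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
```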

  4. Argonne's Laboratory Computing Resource Center 2009 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R. B. (CLS-CI)

    2011-05-13

    Now in its seventh year of operation, the Laboratory Computing Resource Center (LCRC) continues to be an integral component of science and engineering research at Argonne, supporting a diverse portfolio of projects for the U.S. Department of Energy and other sponsors. The LCRC's ongoing mission is to enable and promote computational science and engineering across the Laboratory, primarily by operating computing facilities and supporting high-performance computing application use and development. This report describes scientific activities carried out with LCRC resources in 2009 and the broad impact on programs across the Laboratory. The LCRC computing facility, Jazz, is available to the entire Laboratory community. In addition, the LCRC staff provides training in high-performance computing and guidance on application usage, code porting, and algorithm development. All Argonne personnel and collaborators are encouraged to take advantage of this computing resource and to provide input into the vision and plans for computing and computational analysis at Argonne. The LCRC Allocations Committee makes decisions on individual project allocations for Jazz. Committee members are appointed by the Associate Laboratory Directors and span a range of computational disciplines. The 350-node LCRC cluster, Jazz, began production service in April 2003 and has been a research work horse ever since. Hosting a wealth of software tools and applications and achieving high availability year after year, researchers can count on Jazz to achieve project milestones and enable breakthroughs. Over the years, many projects have achieved results that would have been unobtainable without such a computing resource. In fiscal year 2009, there were 49 active projects representing a wide cross-section of Laboratory research and almost all research divisions.

  5. Bioinformatics analyses for signal transduction networks

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Research in signaling networks contributes to a deeper understanding of the living activities of organisms. With the development of experimental methods in the signal transduction field, more and more mechanisms of signaling pathways have been discovered. This paper introduces popular bioinformatics analysis methods for signaling networks, such as the common mechanisms of signaling pathways and database resources on the Internet, summarizes methods for analyzing the structural properties of networks, including structural motif finding and automated pathway generation, and discusses the modeling and simulation of signaling networks in detail, as well as the research situation and tendencies in this area. The investigation of signal transduction is now developing from small-scale experiments to large-scale network analysis, and dynamic simulation of networks is coming closer to the real system. As the investigation goes deeper than ever, the bioinformatics analysis of signal transduction will have immense space for development and application.

  6. Virtual Bioinformatics Distance Learning Suite

    Science.gov (United States)

    Tolvanen, Martti; Vihinen, Mauno

    2004-01-01

    Distance learning as a computer-aided concept allows students to take courses from anywhere at any time. In bioinformatics, computers are needed to collect, store, process, and analyze massive amounts of biological and biomedical data. We have applied the concept of distance learning in virtual bioinformatics to provide university course material…

  7. Bioinformatics meets parasitology.

    Science.gov (United States)

    Cantacessi, C; Campbell, B E; Jex, A R; Young, N D; Hall, R S; Ranganathan, S; Gasser, R B

    2012-05-01

    The advent and integration of high-throughput '-omics' technologies (e.g. genomics, transcriptomics, proteomics, metabolomics, glycomics and lipidomics) are revolutionizing the way biology is done, allowing the systems biology of organisms to be explored. These technologies are now providing unique opportunities for global, molecular investigations of parasites. For example, studies of a transcriptome (all transcripts in an organism, tissue or cell) have become instrumental in providing insights into aspects of gene expression, regulation and function in a parasite, which is a major step to understanding its biology. The purpose of this article was to review recent applications of next-generation sequencing technologies and bioinformatic tools to large-scale investigations of the transcriptomes of parasitic nematodes of socio-economic significance (particularly key species of the order Strongylida) and to indicate the prospects and implications of these explorations for developing novel methods of parasite intervention.

  8. Emergent Computation Emphasizing Bioinformatics

    CERN Document Server

    Simon, Matthew

    2005-01-01

    Emergent Computation is concerned with recent applications of Mathematical Linguistics or Automata Theory. This subject has a primary focus upon "Bioinformatics" (the Genome and arising interest in the Proteome), but the closing chapter also examines applications in Biology, Medicine, Anthropology, etc. The book is composed of an organized examination of DNA, RNA, and the assembly of amino acids into proteins. Rather than examine these areas from a purely mathematical viewpoint (that excludes much of the biochemical reality), the author uses scientific papers written mostly by biochemists based upon their laboratory observations. Thus while DNA may exist in its double stranded form, triple stranded forms are not excluded. Similarly, while bases exist in Watson-Crick complements, mismatched bases and abasic pairs are not excluded, nor are Hoogsteen bonds. Just as there are four bases naturally found in DNA, the existence of additional bases is not ignored, nor amino acids in addition to the usual complement of...

  9. Engineering BioInformatics

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    With the completion of human genome sequencing, a new era of bioinformatics starts. On one hand, due to the advance of high-throughput DNA microarray technologies, functional genomics information such as gene expression data has increased exponentially and will continue to do so for the foreseeable future. Conventional means of storing, analysing and comparing related data are already overburdened. Moreover, the rich information in genes, their functions and their associated wide biological implications requires new technologies for analysing data that employ sophisticated statistical and machine learning algorithms, powerful computers and intensive integration of different data sources, such as sequence data, gene expression data, proteomics data and metabolic pathway information, to discover complex genomic structures and functional patterns together with other biological processes and so gain a comprehensive understanding of cell physiology.

  10. Bioinformatics and moonlighting proteins

    Directory of Open Access Journals (Sweden)

    Sergio eHernández

    2015-06-01

    Full Text Available Multitasking or moonlighting is the capability of some proteins to execute two or more biochemical functions. Usually, moonlighting proteins are revealed experimentally by serendipity. For this reason, it would be helpful if bioinformatics could predict this multifunctionality, especially given the large amounts of sequences from genome projects. In the present work, we analyse and describe several approaches that use sequences, structures, interactomics and current bioinformatics algorithms and programs to try to overcome this problem. Among these approaches are: (a) remote homology searches using Psi-Blast, (b) detection of functional motifs and domains, (c) analysis of data from protein-protein interaction databases (PPIs), (d) matching of the query protein sequence to 3D databases (i.e., algorithms such as PISITE), and (e) mutation correlation analysis between amino acids by algorithms such as MISTIC. Programs designed to identify functional motifs/domains detect mainly the canonical function but usually fail in the detection of the moonlighting one, Pfam and ProDom being the best methods. Remote homology search by Psi-Blast combined with data from interactomics databases (PPIs) has the best performance. Structural information and mutation correlation analysis can help us to map the functional sites. Mutation correlation analysis can only be used in very specific situations - it requires the existence of multi-aligned family protein sequences - but can suggest how the evolutionary process of second-function acquisition took place. The multitasking protein database MultitaskProtDB (http://wallace.uab.es/multitask/), previously published by our group, has been used as a benchmark for all of the analyses.

  11. Virtual bioinformatics distance learning suite*.

    Science.gov (United States)

    Tolvanen, Martti; Vihinen, Mauno

    2004-05-01

    Distance learning as a computer-aided concept allows students to take courses from anywhere at any time. In bioinformatics, computers are needed to collect, store, process, and analyze massive amounts of biological and biomedical data. We have applied the concept of distance learning in virtual bioinformatics to provide university course material over the Internet. Currently, we provide two fully computer-based courses, "Introduction to Bioinformatics" and "Bioinformatics in Functional Genomics." Here we will discuss the application of distance learning in bioinformatics training and our experiences gained during the 3 years that we have run the courses, with about 400 students from a number of universities. The courses are available at bioinf.uta.fi.

  12. Community Coordinated Modeling Center: A Powerful Resource in Space Science and Space Weather Education

    Science.gov (United States)

    Chulaki, A.; Kuznetsova, M. M.; Rastaetter, L.; MacNeice, P. J.; Shim, J. S.; Pulkkinen, A. A.; Taktakishvili, A.; Mays, M. L.; Mendoza, A. M. M.; Zheng, Y.; Mullinix, R.; Collado-Vega, Y. M.; Maddox, M. M.; Pembroke, A. D.; Wiegand, C.

    2015-12-01

    Community Coordinated Modeling Center (CCMC) is a NASA affiliated interagency partnership with the primary goal of aiding the transition of modern space science models into space weather forecasting while supporting space science research. Additionally, over the past ten years it has established itself as a global space science education resource supporting undergraduate and graduate education and research, and spreading space weather awareness worldwide. A unique combination of assets, capabilities and close ties to the scientific and educational communities enable this small group to serve as a hub for raising generations of young space scientists and engineers. CCMC resources are publicly available online, providing unprecedented global access to the largest collection of modern space science models (developed by the international research community). CCMC has revolutionized the way simulations are utilized in classrooms settings, student projects, and scientific labs and serves hundreds of educators, students and researchers every year. Another major CCMC asset is an expert space weather prototyping team primarily serving NASA's interplanetary space weather needs. Capitalizing on its unrivaled capabilities and experiences, the team provides in-depth space weather training to students and professionals worldwide, and offers an amazing opportunity for undergraduates to engage in real-time space weather monitoring, analysis, forecasting and research. In-house development of state-of-the-art space weather tools and applications provides exciting opportunities to students majoring in computer science and computer engineering fields to intern with the software engineers at the CCMC while also learning about the space weather from the NASA scientists.

  13. Research on Resource Development and Application of Water Resources Data Center

    Institute of Scientific and Technical Information of China (English)

    刘汉刚; 李光

    2014-01-01

    Faced with very complex issues such as heterogeneous environments, business integration and management standardization, as well as the practical needs of water resources data center construction, the water resources department of Shandong province has initially built a water resources data center in accordance with the basic technical requirements of the national water resources data center, taking the actual situation of Shandong province into account and adopting new technologies such as cloud computing, virtualization, big data and wide-area networking. It improves water resources data center standards and establishes a standardized technology platform and unified data center business processes. It builds a data center application services platform and constructs a water resources information sharing environment and service system with the water resources data center at its core, realizing the pooled storage, exchange and sharing of all kinds of water resources data and information.

  14. Argonne's Laboratory Computing Resource Center : 2005 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R. B.; Coghlan, S. C; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Pieper, G. P.

    2007-06-30

    Argonne National Laboratory founded the Laboratory Computing Resource Center in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. The first goal of the LCRC was to deploy a mid-range supercomputing facility to support the unmet computational needs of the Laboratory. To this end, in September 2002, the Laboratory purchased a 350-node computing cluster from Linux NetworX. This cluster, named 'Jazz', achieved over a teraflop of computing power (10^12 floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the fifty fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2005, there were 62 active projects on Jazz involving over 320 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to improve the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure

  15. Argonne's Laboratory computing resource center : 2006 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R. B.; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Drugan, C. D.; Pieper, G. P.

    2007-05-31

    Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10^12 floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2006, there were 76 active projects on Jazz involving over 380 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff

  16. An Overview of the Data Products and Technologies Provided by the Global Hydrology Resource Center

    Science.gov (United States)

    Hardin, D.; Conover, H.; Graves, S.; Goodman, M.

    2008-12-01

    The Global Hydrology Resource Center (GHRC) is one of twelve data centers that make up the Distributed Active Archive Centers (DAAC) Alliance. The GHRC collects and distributes climate research quality data and associated products from satellite, aircraft and in-situ instruments, primarily in the fields of lightning detection, microwave imaging, and convective moisture. In addition, researchers at the GHRC, working with atmospheric scientists, have developed robust advanced information systems applications that enable the use of NASA and other data by scientists and the broader user community. The GHRC's primary data holdings are lightning data. Raw instrument data from the Lightning Imaging Sensor (LIS) and its precursor, the Optical Transient Detector (OTD), along with derived products, validation data, and ancillary in-situ lightning data (such as that from the National Lightning Detection Network), make up the suite of lightning data sets. This is due in part to the fact that the LIS science computing facility is co-located with the GHRC, and the LIS team utilizes GHRC services to acquire, process, and archive new and updated lightning datasets and products for their research. In this role, the GHRC serves the global lightning research community and is responsible for the sole archive of lightning data from NASA's LIS and OTD instruments. The GHRC has contributed to numerous NASA field campaigns in various roles dating back to the mid 1990s. During the series of Convection and Moisture Experiments (CAMEX) beginning with CAMEX-3 in 1998, the GHRC provided mission support data to the science teams during the experiment, then archived and distributed the experiment data post mission. In 2001, during the CAMEX-4 mission, field experiment operations were revolutionized when project and mission scientists used the GHRC-developed on-line collaboration system for mission planning and execution, and to perform post-experiment analysis. Using web-based forms, flight and science reports were

  17. Translational bioinformatics in psychoneuroimmunology: methods and applications.

    Science.gov (United States)

    Yan, Qing

    2012-01-01

    Translational bioinformatics plays an indispensable role in transforming psychoneuroimmunology (PNI) into personalized medicine. It provides a powerful method to bridge the gaps between various knowledge domains in PNI and systems biology. Translational bioinformatics methods at various systems levels can facilitate pattern recognition, and expedite and validate the discovery of systemic biomarkers to allow their incorporation into clinical trials and outcome assessments. Analysis of the correlations between genotypes and phenotypes including the behavioral-based profiles will contribute to the transition from the disease-based medicine to human-centered medicine. Translational bioinformatics would also enable the establishment of predictive models for patient responses to diseases, vaccines, and drugs. In PNI research, the development of systems biology models such as those of the neurons would play a critical role. Methods based on data integration, data mining, and knowledge representation are essential elements in building health information systems such as electronic health records and computerized decision support systems. Data integration of genes, pathophysiology, and behaviors are needed for a broad range of PNI studies. Knowledge discovery approaches such as network-based systems biology methods are valuable in studying the cross-talks among pathways in various brain regions involved in disorders such as Alzheimer's disease.

  18. Earth resources programs at the Langley Research Center. Part 1: Advanced Applications Flight Experiments (AAFE) and microwave remote sensing program

    Science.gov (United States)

    Parker, R. N.

    1972-01-01

    The earth resources activity is comprised of two basic programs as follows: advanced applications flight experiments, and microwave remote sensing. The two programs are in various stages of implementation, extending from experimental investigations within both the AAFE program and the microwave remote sensing program, to multidisciplinary studies and planning. The purpose of this paper is simply to identify the main thrust of the Langley Research Center activity in earth resources.

  19. String Mining in Bioinformatics

    Science.gov (United States)

    Abouelhoda, Mohamed; Ghanem, Moustafa

    Sequence analysis is a major area in bioinformatics encompassing the methods and techniques for studying biological sequences, DNA, RNA, and proteins, on the linear structure level. The focus of this area is generally on the identification of intra- and inter-molecular similarities. Identifying intra-molecular similarities boils down to detecting repeated segments within a given sequence, while identifying inter-molecular similarities amounts to spotting common segments among two or multiple sequences. From a data mining point of view, sequence analysis is nothing but string or pattern mining specific to biological strings. For a long time, however, this point of view has not been explicitly embraced in either the data mining or the sequence analysis textbooks, which may be attributed to the co-evolution of the two apparently independent fields. In other words, although the word "data mining" is almost missing in the sequence analysis literature, its basic concepts have been implicitly applied. Interestingly, recent research in biological sequence analysis has introduced efficient solutions to many problems in data mining, such as querying and analyzing time series [49,53], extracting information from web pages [20], fighting spam emails [50], detecting plagiarism [22], and spotting duplications in software systems [14].
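
    The intra- versus inter-molecular distinction drawn above can be made concrete with a minimal k-mer sketch: repeated k-mers within one sequence versus k-mers shared between two sequences. The function names, the toy sequences and the choice of k below are illustrative assumptions, not material from the chapter itself.

```python
# Minimal sketch of the two basic string-mining tasks described above.

def shared_kmers(seq_a: str, seq_b: str, k: int = 8):
    """Inter-molecular case: length-k substrings common to both sequences."""
    kmers_a = {seq_a[i:i + k] for i in range(len(seq_a) - k + 1)}
    kmers_b = {seq_b[i:i + k] for i in range(len(seq_b) - k + 1)}
    return kmers_a & kmers_b

def repeated_kmers(seq: str, k: int = 8):
    """Intra-molecular case: length-k substrings occurring more than once."""
    seen, repeats = set(), set()
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        (repeats if kmer in seen else seen).add(kmer)
    return repeats

if __name__ == "__main__":
    a = "ATGGCGTACGTTAGCATGGCGTACG"
    b = "TTAGCATGGCGTACGAAAC"
    print("shared:", shared_kmers(a, b))
    print("repeated in a:", repeated_kmers(a))
```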

  20. Engineering bioinformatics: building reliability, performance and productivity into bioinformatics software.

    Science.gov (United States)

    Lawlor, Brendan; Walsh, Paul

    2015-01-01

    There is a lack of software engineering skills in bioinformatic contexts. We discuss the consequences of this lack, examine existing explanations and remedies to the problem, point out their shortcomings, and propose alternatives. Previous analyses of the problem have tended to treat the use of software in scientific contexts as categorically different from the general application of software engineering in commercial settings. In contrast, we describe bioinformatic software engineering as a specialization of general software engineering, and examine how it should be practiced. Specifically, we highlight the difference between programming and software engineering, list elements of the latter and present the results of a survey of bioinformatic practitioners which quantifies the extent to which those elements are employed in bioinformatics. We propose that the ideal way to bring engineering values into research projects is to bring engineers themselves. We identify the role of Bioinformatic Engineer and describe how such a role would work within bioinformatic research teams. We conclude by recommending an educational emphasis on cross-training software engineers into life sciences, and propose research on Domain Specific Languages to facilitate collaboration between engineers and bioinformaticians.

  1. Analysis of metagenomics next generation sequence data for fungal ITS barcoding: Do you need advance bioinformatics experience?

    Directory of Open Access Journals (Sweden)

    Abdalla Osman Abdalla Ahmed

    2016-07-01

    During the last few decades, most microbiology laboratories have become familiar with analyzing Sanger sequence data for ITS barcoding. However, with the availability of next-generation sequencing platforms in many centers, it has become important for medical mycologists to know how to make sense of the massive sequence data generated by these new sequencing technologies. In many reference laboratories, the analysis of such data is straightforward, since suitable IT infrastructure and well-trained bioinformatics scientists are available. In small research laboratories and clinical microbiology laboratories, however, such resources are often lacking. In this report, a simple and user-friendly bioinformatics workflow is suggested for fast and reproducible ITS barcoding of fungi.
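
    As a rough illustration of the kind of simple, reproducible workflow proposed above, the sketch below dereplicates reads, keeps the most abundant unique ITS sequences and identifies them with a local blastn search. The file names, the reference database name ("ITS_refs"), the abundance cut-off and the use of the blastn command line are assumptions for illustration, not the authors' exact pipeline.

```python
# Hedged sketch: dereplicate ITS reads and identify the abundant variants with blastn.
import subprocess
from collections import Counter

def read_fasta(path):
    """Yield (header, sequence) pairs from a FASTA file."""
    header, seq = None, []
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if line.startswith(">"):
                if header is not None:
                    yield header, "".join(seq)
                header, seq = line[1:], []
            else:
                seq.append(line)
        if header is not None:
            yield header, "".join(seq)

def dereplicate(path, top_n=20):
    """Count identical reads and return the top_n most abundant unique sequences."""
    counts = Counter(seq for _, seq in read_fasta(path))
    return counts.most_common(top_n)

def write_and_blast(variants, db="ITS_refs", query_file="top_its.fasta"):
    """Write abundant variants to FASTA and run a tabular blastn search (local blastn assumed)."""
    with open(query_file, "w") as out:
        for i, (seq, n) in enumerate(variants, start=1):
            out.write(f">variant_{i}_abundance_{n}\n{seq}\n")
    subprocess.run(
        ["blastn", "-query", query_file, "-db", db,
         "-outfmt", "6", "-max_target_seqs", "5", "-out", "its_hits.tsv"],
        check=True,
    )

if __name__ == "__main__":
    write_and_blast(dereplicate("its_reads.fasta"))
```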

  2. The International Center for Integrated Water Resources Management (ICIWaRM): The United States' Contribution to UNESCO IHP's Global Network of Water Centers

    Science.gov (United States)

    Logan, W. S.

    2015-12-01

    The concept of a "category 2 center"—i.e., one that is closely affiliated with UNESCO, but not legally part of UNESCO—dates back many decades. However, only in the last decade has the concept been fully developed. Within UNESCO, the International Hydrological Programme (IHP) has led the way in creating a network of regional and global water-related centers. ICIWaRM—the International Center for Integrated Water Resources Management—is one member of this network. Approved by UNESCO's General Conference, the center has been operating since 2009. It was designed to fill a niche in the system for a center that was backed by an institution with on-the-ground water management experience, but that also had strong connections to academia, NGOs and other governmental agencies. Thus, ICIWaRM is hosted by the US Army Corps of Engineers' Institute for Water Resources (IWR), but established with an internal network of partner institutions. Three main factors have contributed to any success that ICIWaRM has achieved in its global work: (1) A focus on practical science and technology which can be readily transferred. This includes the Corps' own methodologies and models for planning and water management, and those of our university and government partners. (2) Collaboration with other UNESCO Centers on joint applied research, capacity-building and training. A network of centers needs to function as a network, and ICIWaRM has worked together with UNESCO-affiliated centers in Chile, Brazil, Paraguay, the Dominican Republic, Japan, China, and elsewhere. (3) Partnering with and supporting existing UNESCO-IHP programs. ICIWaRM serves as the Global Technical Secretariat for IHP's Global Network on Water and Development Information in Arid Lands (G-WADI). In addition to directly supporting IHP, work through G-WADI helps the center to frame, prioritize and integrate its activities. With the recent release of the United Nations' 2030 Agenda for Sustainable Development, it is clear that

  3. Lessons Learned from Creating the Public Earthquake Resource Center at CERI

    Science.gov (United States)

    Patterson, G. L.; Michelle, D.; Johnston, A.

    2004-12-01

    The Center for Earthquake Research and Information (CERI) at the University of Memphis opened the Public Earthquake Resource Center (PERC) in May 2004. The PERC is an interactive display area that was designed to increase awareness of seismology, Earth Science, earthquake hazards, and earthquake engineering among the general public and K-12 teachers and students. Funding for the PERC is provided by the US Geological Survey, The NSF-funded Mid America Earthquake Center, and the University of Memphis, with input from the Incorporated Research Institutions for Seismology. Additional space at the facility houses local offices of the US Geological Survey. PERC exhibits are housed in a remodeled residential structure at CERI that was donated by the University of Memphis and the State of Tennessee. Exhibits were designed and built by CERI and US Geological Survey staff and faculty with the help of experienced museum display subcontractors. The 600 square foot display area interactively introduces the basic concepts of seismology, real-time seismic information, seismic network operations, paleoseismology, building response, and historical earthquakes. Display components include three 22" flat screen monitors, a touch sensitive monitor, 3 helicorder elements, oscilloscope, AS-1 seismometer, life-sized liquefaction trench, liquefaction shake table, and building response shake table. All displays include custom graphics, text, and handouts. The PERC website at www.ceri.memphis.edu/perc also provides useful information such as tour scheduling, ask a geologist, links to other institutions, and will soon include a virtual tour of the facility. Special consideration was given to address State science standards for teaching and learning in the design of the displays and handouts. We feel this consideration is pivotal to the success of any grass roots Earth Science education and outreach program and represents a valuable lesson that has been learned at CERI over the last several

  4. Genome Exploitation and Bioinformatics Tools

    Science.gov (United States)

    de Jong, Anne; van Heel, Auke J.; Kuipers, Oscar P.

    Bioinformatic tools can greatly improve the efficiency of bacteriocin screening efforts by limiting the number of strains that need to be screened. Different classes of bacteriocins can be detected in genomes by looking at different features. Finding small bacteriocins can be especially challenging due to low homology and because small open reading frames (ORFs) are often omitted from annotations. In this chapter, several bioinformatic tools and strategies to identify bacteriocins in genomes are discussed.
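
    The remark above that small ORFs are often omitted from annotations can be illustrated with a naive short-ORF scanner. The length thresholds, the forward-strand-only scan and the toy contig are deliberate simplifications for illustration and are not taken from the chapter.

```python
# Naive forward-strand scanner for short ORFs in the size range of small peptides.

STOPS = {"TAA", "TAG", "TGA"}

def short_orfs(seq: str, min_aa: int = 20, max_aa: int = 80):
    """Yield (start, end, orf) for forward-strand ORFs whose codon count
    (including the start codon, excluding the stop) falls in the small range."""
    seq = seq.upper()
    for frame in range(3):
        i = frame
        while i + 3 <= len(seq):
            if seq[i:i + 3] == "ATG":
                j = i + 3
                while j + 3 <= len(seq) and seq[j:j + 3] not in STOPS:
                    j += 3
                if j + 3 <= len(seq):  # an in-frame stop codon was found
                    codons = (j - i) // 3
                    if min_aa <= codons <= max_aa:
                        yield i, j + 3, seq[i:j + 3]
                    i = j + 3  # continue scanning after this ORF
                    continue
            i += 3

if __name__ == "__main__":
    genome = "CCATG" + "GCT" * 30 + "TAAGG"  # toy contig containing one short ORF
    for start, end, orf in short_orfs(genome):
        print(start, end, len(orf) // 3 - 1, "codons excluding the stop")
```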

  5. Clustering Techniques in Bioinformatics

    Directory of Open Access Journals (Sweden)

    Muhammad Ali Masood

    2015-01-01

    Dealing with data means grouping information into a set of categories, either to learn new artifacts or to understand new domains. For this purpose, researchers have always looked for hidden patterns in data that can be defined and compared with other known notions based on the similarity or dissimilarity of their attributes according to well-defined rules. Data mining, with its tools of data classification and data clustering, is one of the most powerful techniques for handling data in a way that helps researchers identify the required information. To address this challenge, experts have used clustering techniques as a means of exploring hidden structure and patterns in the underlying data. With improved stability, robustness and accuracy of unsupervised data classification in many fields, including pattern recognition, machine learning, information retrieval, image analysis and bioinformatics, clustering has proven itself a reliable tool. To identify clusters in datasets, algorithms are used to partition a data set into several groups based on the similarity within each group. There is no single clustering algorithm; rather, different algorithms are used depending on the domain of the data that constitutes a cluster and the level of efficiency required. Clustering techniques are categorized according to their underlying approach. This paper surveys five of the most common clustering techniques in data mining: K-medoids, K-means, Fuzzy C-means, Density-Based Spatial Clustering of Applications with Noise (DBSCAN), and Self-Organizing Map (SOM) clustering.
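
    As a minimal illustration of one of the surveyed techniques, the sketch below applies K-means to a toy data matrix with three loose groups. The random data, the choice of k = 3 and the use of scikit-learn are illustrative assumptions; the survey itself does not prescribe any particular implementation.

```python
# K-means on a toy "samples x features" matrix with three loose groups.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
data = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(20, 5)),
    rng.normal(loc=3.0, scale=0.5, size=(20, 5)),
    rng.normal(loc=-3.0, scale=0.5, size=(20, 5)),
])

model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(data)
print("cluster sizes:", np.bincount(model.labels_))
```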

  6. The Amarillo National Resource Center for Plutonium. Quarterly progress detailed report, 1 November 1996--31 January 1997

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    Progress for this quarter is given for each of the following Center programs: (1) plutonium information resource; (2) advisory function (DOE and state support); (3) environmental, public health and safety; (4) communication, education, and training; and (5) nuclear and other material studies. Both summaries of the activities and detailed reports are included.

  7. An On-Line Information Management System for Resources for Staff Development for the Professional Development Center Network.

    Science.gov (United States)

    Monroe, Eula Ewing

    The Professional Development Center Network (PDC), a consortium of twenty public school districts, parochial schools, and Western Kentucky University, seeks to identify and secure resources to assist in the design and delivery of activities appropriate to the educational development of individual staff members through the online Information…

  8. 2007 University Exemplary Department Award honors industrial and systems engineering; apparel, housing, and resource management; and University Academic Advising Center

    OpenAIRE

    Owczarski, Mark

    2007-01-01

    The Grado Department of Industrial and Systems Engineering in the College of Engineering; the Department of Apparel, Housing, and Resource Management in the College of Liberal Arts and Human Sciences; and University Academic Advising Center will receive the 2007 University Exemplary Department Awards at ceremonies to be held Tuesday, Nov. 27 at The Inn at Virginia Tech.

  9. Academic Relationships and Teaching Resources. Fogarty International Center Series on the Teaching of Preventive Medicine, Volume 6.

    Science.gov (United States)

    Clark, Duncan W., Ed.

    The monograph is one of the Fogarty International Center Series on the Teaching of Preventive Medicine, undertaken to: (1) review and evaluate the state of the art of prevention and control of human diseases; (2) identify deficiencies in knowledge requiring further research (including analysis of financial resources, preventive techniques, and…

  10. Teaching Bioinformatics and Neuroinformatics by Using Free Web-Based Tools

    Science.gov (United States)

    Grisham, William; Schottler, Natalie A.; Valli-Marill, Joanne; Beck, Lisa; Beatty, Jackson

    2010-01-01

    This completely computer-based module's purpose is to introduce students to bioinformatics resources. We present an easy-to-adopt module that weaves together several important bioinformatic tools so students can grasp how these tools are used in answering research questions. Students integrate information gathered from websites dealing with…

  11. An International Bioinformatics Infrastructure to Underpin the Arabidopsis Community

    Science.gov (United States)

    The future bioinformatics needs of the Arabidopsis community as well as those of other scientific communities that depend on Arabidopsis resources were discussed at a pair of recent meetings held by the Multinational Arabidopsis Steering Committee (MASC) and the North American Arabidopsis Steering C...

  12. A BIOINFORMATIC STRATEGY TO RAPIDLY CHARACTERIZE CDNA LIBRARIES

    Science.gov (United States)

    A Bioinformatic Strategy to Rapidly Characterize cDNA Libraries. G. Charles Ostermeier (1), David J. Dix (2), and Stephen A. Krawetz (1). (1) Departments of Obstetrics and Gynecology, Center for Molecular Medicine and Genetics, & Institute for Scientific Computing, Wayne State Univer...

  13. Area Health Resources Files (AHRF) National Center for Health Workforce Analysis

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Area Health Resource Files (AHRF) website is a set of query tools, interactive maps, and data downloads with extensive demographic, training, and resource...

  14. The NIH-NIAID Schistosomiasis Resource Center at the Biomedical Research Institute: Molecular Redux

    Science.gov (United States)

    Cody, James J.; Ittiprasert, Wannaporn; Miller, André N.; Henein, Lucie; Mentink-Kane, Margaret M.; Hsieh, Michael H.

    2016-01-01

    Schistosomiasis remains a health burden in many parts of the world. The complex life cycle of Schistosoma parasites and the economic and societal conditions present in endemic areas make the prospect of eradication unlikely in the foreseeable future. Continued and vigorous research efforts must therefore be directed at this disease, particularly since only a single World Health Organization (WHO)-approved drug is available for treatment. The National Institutes of Health (NIH)–National Institute of Allergy and Infectious Diseases (NIAID) Schistosomiasis Resource Center (SRC) at the Biomedical Research Institute provides investigators with the critical raw materials needed to carry out this important research. The SRC makes available, free of charge (including international shipping costs), not only infected host organisms but also a wide array of molecular reagents derived from all life stages of each of the three main human schistosome parasites. As the field of schistosomiasis research rapidly advances, it is likely to become increasingly reliant on omics, transgenics, epigenetics, and microbiome-related research approaches. The SRC has and will continue to monitor and contribute to advances in the field in order to support these research efforts with an expanding array of molecular reagents. In addition to providing investigators with source materials, the SRC has expanded its educational mission by offering a molecular techniques training course and has recently organized an international schistosomiasis-focused meeting. This review provides an overview of the materials and services that are available at the SRC for schistosomiasis researchers, with a focus on updates that have occurred since the original overview in 2008. PMID:27764112

  15. The Counseling Center: An Undervalued Resource in Recruitment, Retention, and Risk Management

    Science.gov (United States)

    Bishop, John B.

    2010-01-01

    A primary responsibility for directors of college and university counseling centers is to explain to various audiences the multiple ways such units are of value to their institutions. This article reviews the history of how counseling center directors have been encouraged to develop and describe the work of their centers. Often overlooked are the…

  16. Resources to Support Ethical Practice in Evaluation: An Interview with the Director of the National Center for Research and Professional Ethics

    Science.gov (United States)

    Goodyear, Leslie

    2012-01-01

    Where do evaluators find resources on ethics and ethical practice? This article highlights a relatively new online resource, a centerpiece project of the National Center for Professional and Research Ethics (NCPRE), which brings together information on best practices in ethics in research, academia, and business in an online portal and center. It…

  17. Using registries to integrate bioinformatics tools and services into workbench environments

    DEFF Research Database (Denmark)

    Ménager, Hervé; Kalaš, Matúš; Rapacki, Kristoffer;

    2016-01-01

    The diversity and complexity of bioinformatics resources present significant challenges to their localisation, deployment and use, creating a need for reliable systems that address these issues. Meanwhile, users demand increasingly usable and integrated ways to access and analyse data, especiall...

  18. Training Experimental Biologists in Bioinformatics

    Directory of Open Access Journals (Sweden)

    Pedro Fernandes

    2012-01-01

    Bioinformatics, by its very nature, is devoted to a set of targets that constantly evolve. Training is probably the best response to the constant need for the acquisition of bioinformatics skills. It is interesting to assess the effects of training on the different groups of researchers that make use of it. While training bench experimentalists in the life sciences, we have observed instances of changes in their attitudes toward research that, if well exploited, can have a beneficial impact on the dialogue with professional bioinformaticians and influence the conduct of the research itself.

  19. David Grant Medical Center energy use baseline and integrated resource assessment

    Energy Technology Data Exchange (ETDEWEB)

    Richman, E.E.; Hoshide, R.K.; Dittmer, A.L.

    1993-04-01

    The US Air Mobility Command (AMC) has tasked Pacific Northwest Laboratory (PNL) with supporting the US Department of Energy (DOE) Federal Energy Management Program's (FEMP) mission to identify, evaluate, and assist in acquiring all cost-effective energy resource opportunities (EROs) at the David Grant Medical Center (DGMC). This report describes the methodology used to identify and evaluate the EROs at DGMC, provides a life-cycle cost (LCC) analysis for each ERO, and prioritizes any life-cycle cost-effective EROs based on their net present value (NPV), value index (VI), and savings to investment ratio (SIR or ROI). Analysis results are presented for 17 EROs that involve energy use in the areas of lighting, fan and pump motors, boiler operation, infiltration, electric load peak reduction and cogeneration, electric rate structures, and natural gas supply. Typical current energy consumption is approximately 22,900 MWh of electricity (78,300 MBtu), 87,600 kcf of natural gas (90,300 MBtu), and 8,300 gal of fuel oil (1,200 MBtu). A summary of the savings potential by energy-use category of all independent cost-effective EROs is shown in a table. This table includes the first cost, yearly energy consumption savings, and NPV for each energy-use category. The net dollar savings and NPV values as derived by the life-cycle cost analysis are based on the 1992 federal discount rate of 4.6%. The implementation of all EROs could result in a yearly electricity savings of more than 6,000 MWh or 26% of current yearly electricity consumption. More than 15 MW of billable load (total billed by the utility for a 12-month period) or more than 34% of current billed demand could also be saved. Corresponding natural gas savings would be 1,050 kcf (just over 1% of current consumption). Total yearly net energy cost savings for all options would be greater than $343,340. This value does not include any operations and maintenance (O&M) savings.
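
    The ranking metrics named in this report (NPV and SIR) can be sketched as follows. The 4.6% discount rate is the figure quoted above; the first cost, annual savings and 20-year study period are hypothetical placeholders rather than values for any specific ERO.

```python
# Hedged sketch of life-cycle NPV and savings-to-investment ratio for one ERO.

def present_value(annual_saving: float, rate: float, years: int) -> float:
    """Present value of a level annual saving over the study period."""
    return sum(annual_saving / (1.0 + rate) ** t for t in range(1, years + 1))

def npv(first_cost: float, annual_saving: float, rate: float, years: int) -> float:
    return present_value(annual_saving, rate, years) - first_cost

def sir(first_cost: float, annual_saving: float, rate: float, years: int) -> float:
    """Savings-to-investment ratio; values above 1.0 indicate a cost-effective ERO."""
    return present_value(annual_saving, rate, years) / first_cost

if __name__ == "__main__":
    cost, saving, rate, years = 50_000.0, 9_000.0, 0.046, 20  # placeholder figures
    print(f"NPV = ${npv(cost, saving, rate, years):,.0f}")
    print(f"SIR = {sir(cost, saving, rate, years):.2f}")
```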

  1. ANALYSIS OF THE RESULTS IN IMPLEMENTING THE OPERATIONAL PROGRAM FOR HUMAN RESOURCES DEVELOPMENT 2007-2013 FOR CENTER REGION, ROMANIA

    OpenAIRE

    Ionela Gavrila-Paven; Iulian Bogdan Dobra; Lucian Docea

    2013-01-01

    This study aims to highlight the results achieved through the implementation of projects financed by the European Social Fund through the Operational Program for Human Resources Development 2007-2013 at the regional level. The Center Region was considered for the presentation and analysis of data from the point of view of absorption and, especially, from the point of view of the results obtained by analyzing the outcome indicators reported by the recipients for 2007-2012. Although the degree of absorption i...

  2. Bioinformatics training: selecting an appropriate learning content management system--an example from the European Bioinformatics Institute.

    Science.gov (United States)

    Wright, Victoria Ann; Vaughan, Brendan W; Laurent, Thomas; Lopez, Rodrigo; Brooksbank, Cath; Schneider, Maria Victoria

    2010-11-01

    Today's molecular life scientists are well educated in the emerging experimental tools of their trade, but when it comes to training on the myriad of resources and tools for dealing with biological data, a less ideal situation emerges. Often bioinformatics users receive no formal training on how to make the most of the bioinformatics resources and tools available in the public domain. The European Bioinformatics Institute, which is part of the European Molecular Biology Laboratory (EMBL-EBI), holds the world's most comprehensive collection of molecular data, and training the research community to exploit this information is embedded in the EBI's mission. We have evaluated eLearning, in parallel with face-to-face courses, as a means of training users of our data resources and tools. We anticipate that eLearning will become an increasingly important vehicle for delivering training to our growing user base, so we have undertaken an extensive review of Learning Content Management Systems (LCMSs). Here, we describe the process that we used, which considered the requirements of trainees, trainers and systems administrators, as well as taking into account our organizational values and needs. This review describes the literature survey, user discussions and scripted platform testing that we performed to narrow down our choice of platform from 36 to a single platform. We hope that it will serve as guidance for others who are seeking to incorporate eLearning into their bioinformatics training programmes.

  3. Impact of Information Technology, Clinical Resource Constraints, and Patient-Centered Practice Characteristics on Quality of Care

    Directory of Open Access Journals (Sweden)

    JongDeuk Baek

    2015-02-01

    Objective: Factors in the practice environment, such as health information technology (IT) infrastructure, availability of other clinical resources, and financial incentives, may influence whether practices are able to successfully implement the patient-centered medical home (PCMH) model and realize its benefits. This study investigates the impacts of those PCMH-related elements on primary care physicians' perception of quality of care. Methods: A multiple logistic regression model was estimated using the 2004 to 2005 CTS Physician Survey, a national sample of salaried primary care physicians (n = 1733). Results: The patient-centered practice environment and availability of clinical resources increased physicians' perceived quality of care. Although IT use for clinical information access did enhance physicians' ability to provide high quality of care, a similar positive impact of IT use was not found for e-prescribing or the exchange of clinical patient information. Lack of resources was negatively associated with physician perception of quality of care. Conclusion: Since health IT is an important foundation of PCMH, patient-centered practices are more likely to have health IT in place to support care delivery. However, despite its potential to enhance delivery of primary care, simply making health IT available does not necessarily translate into physicians' perceptions that it enhances the quality of care they provide. It is critical for health-care managers and policy makers to ensure that primary care physicians fully recognize and embrace the use of new technology to improve both the quality of care provided and the patient outcomes.

  4. Energy-Efficient Management of Data Center Resources for Cloud Computing: A Vision, Architectural Elements, and Open Challenges

    CERN Document Server

    Buyya, Rajkumar; Abawajy, Jemal

    2010-01-01

    Cloud computing is offering utility-oriented IT services to users worldwide. Based on a pay-as-you-go model, it enables hosting of pervasive applications from consumer, scientific, and business domains. However, data centers hosting Cloud applications consume huge amounts of energy, contributing to high operational costs and a large carbon footprint. Therefore, we need Green Cloud computing solutions that can not only save energy for the environment but also reduce operational costs. This paper presents the vision, challenges, and architectural elements for energy-efficient management of Cloud computing environments. We focus on the development of dynamic resource provisioning and allocation algorithms that consider the synergy between various data center infrastructures (i.e., the hardware, power units, cooling and software), and holistically work to boost data center energy efficiency and performance. In particular, this paper proposes (a) architectural principles for energy-efficient management of ...

  5. Bioinformatics interoperability: all together now !

    NARCIS (Netherlands)

    Meganck, B.; Mergen, P.; Meirte, D.

    2009-01-01

    The following text presents some personal ideas about the way (bio)informatics is heading, along with some examples of how our institution – the Royal Museum for Central Africa (RMCA) – is gearing up for these new times ahead. It tries to find the important trends amongst the buzzwords, and to demo

  6. The secondary metabolite bioinformatics portal

    DEFF Research Database (Denmark)

    Weber, Tilmann; Kim, Hyun Uk

    2016-01-01

    In this context, this review gives a summary of tools and databases that currently are available to mine, identify and characterize natural product biosynthesis pathways and their producers based on ‘omics data. A web portal called Secondary Metabolite Bioinformatics Portal (SMBP at http...

  7. Bioinformatics and the Undergraduate Curriculum

    Science.gov (United States)

    Maloney, Mark; Parker, Jeffrey; LeBlanc, Mark; Woodard, Craig T.; Glackin, Mary; Hanrahan, Michael

    2010-01-01

    Recent advances involving high-throughput techniques for data generation and analysis have made familiarity with basic bioinformatics concepts and programs a necessity in the biological sciences. Undergraduate students increasingly need training in methods related to finding and retrieving information stored in vast databases. The rapid rise of…

  8. Visualising "Junk" DNA through Bioinformatics

    Science.gov (United States)

    Elwess, Nancy L.; Latourelle, Sandra M.; Cauthorn, Olivia

    2005-01-01

    One of the hottest areas of science today is the field in which biology, information technology, and computer science are merged into a single discipline called bioinformatics. This field enables the discovery and analysis of biological data, including nucleotide and amino acid sequences that are easily accessed through the use of computers. As…

  9. Reproducible Bioinformatics Research for Biologists

    Science.gov (United States)

    This book chapter describes the current Big Data problem in Bioinformatics and the resulting issues with performing reproducible computational research. The core of the chapter provides guidelines and summaries of current tools/techniques that a noncomputational researcher would need to learn to pe...

  10. Adaptive TrimTree: Green Data Center Networks through Resource Consolidation, Selective Connectedness and Energy Proportional Computing

    Directory of Open Access Journals (Sweden)

    Saima Zafar

    2016-10-01

    A data center is a facility with a group of networked servers used by an organization for the storage, management and dissemination of its data. The increase in data center energy consumption over the past several years is staggering; therefore, efforts are being initiated to achieve energy efficiency of the various components of data centers. One of the main reasons data centers are so energy-inefficient is that most organizations run them at full capacity 24/7. This results in a number of servers and switches being underutilized or even unutilized, yet working and consuming electricity around the clock. In this paper, we present Adaptive TrimTree, a mechanism that employs a combination of resource consolidation, selective connectedness and energy proportional computing for optimizing energy consumption in a Data Center Network (DCN). Adaptive TrimTree adopts a simple traffic-and-topology-based heuristic to find a minimum power network subset called the ‘active network subset’ that satisfies the existing network traffic conditions while switching off the residual unused network components. A ‘passive network subset’, consisting of links and switches that may be required in the future, is also identified for redundancy, and this subset is toggled to a sleep state. An energy proportional computing technique is applied to the active network subset, adapting link data rates to the workload and thus maximizing energy optimization. We have compared our proposed mechanism with the fat-tree topology and with ElasticTree, a scheme based on resource consolidation. Our simulation results show that our mechanism saves 50%–70% more energy compared to fat-tree and 19.6% compared to ElasticTree, with minimal impact on packet loss percentage and delay. Additionally, our mechanism copes better with traffic anomalies and surges due to the passive network provision.
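
    A hedged sketch of the kind of traffic-and-topology heuristic described above is given below: heavily used links stay active, lightly used links become sleep candidates, and every switch keeps at least one active link for selective connectedness. The topology, utilization numbers and threshold are invented for illustration; this is not the authors' Adaptive TrimTree implementation.

```python
# Toy heuristic: split links into active and passive (sleep) subsets from utilization.
from collections import defaultdict

def plan_link_states(links, utilization, threshold=0.1):
    """links: list of (switch_a, switch_b) tuples; utilization: {link: fraction of capacity}.
    Returns (active, passive) link sets."""
    active, passive = set(), set()
    attached_links = defaultdict(list)
    for link in links:
        attached_links[link[0]].append(link)
        attached_links[link[1]].append(link)
        (active if utilization.get(link, 0.0) >= threshold else passive).add(link)
    # Selective connectedness: every switch keeps its busiest link awake.
    for switch, attached in attached_links.items():
        if not any(l in active for l in attached):
            keep = max(attached, key=lambda l: utilization.get(l, 0.0))
            passive.discard(keep)
            active.add(keep)
    return active, passive

if __name__ == "__main__":
    links = [("edge1", "agg1"), ("edge1", "agg2"), ("edge2", "agg1"), ("edge2", "agg2")]
    util = {("edge1", "agg1"): 0.45, ("edge1", "agg2"): 0.02,
            ("edge2", "agg1"): 0.01, ("edge2", "agg2"): 0.03}
    active, passive = plan_link_states(links, util)
    print("active:", sorted(active))
    print("passive (sleep):", sorted(passive))
```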

  11. Virginia Bioinformatics Institute awards Transdisciplinary Team Science

    OpenAIRE

    Bland, Susan

    2009-01-01

    The Virginia Bioinformatics Institute at Virginia Tech, in collaboration with Virginia Tech's Ph.D. program in genetics, bioinformatics, and computational biology, has awarded three fellowships in support of graduate work in transdisciplinary team science.

  12. 76 FR 24890 - National Center for Research Resources; Notice of Closed Meeting

    Science.gov (United States)

    2011-05-03

    ... for Research Resources, National Institutes of Health, 6701 Democracy Blvd., Dem. 1, Room 1078, MSC... Technology; 93.389, Research Infrastructure, 93.306, 93.333; 93.702, ARRA Related Construction...

  13. 76 FR 55074 - National Center for Research Resources; Notice of Closed Meeting

    Science.gov (United States)

    2011-09-06

    ... Resources, National Institutes of Health, 6701 Democracy Blvd., Dem. 1, Room 1078, MSC 4874, Bethesda, MD..., Research Infrastructure, 93.306, 93.333; 93.702, ARRA Related Construction Awards, National Institutes...

  14. St. Edward Mercy Medical Center, Fort Smith, AR. Human resource planning identifies institutional need, available personnel.

    Science.gov (United States)

    Keith, J M

    1981-04-01

    Human resource planning, which allows health care facilities to identify future staffing needs and to project staffing availability, will increase as institutions seek to balance quality, costs, and employees' needs.

  15. Polymer Chemistry in Science Centers and Museums: A Survey of Educational Resources.

    Science.gov (United States)

    Collard, David M.; McKee, Scott

    1998-01-01

    A survey of 129 science and technology-related centers and museums revealed a shortage of polymer chemistry exhibits. Describes those displays that do exist and suggests possibilities for future displays and exhibits. Contains 23 references. (WRM)

  16. DAVID Knowledgebase: a gene-centered database integrating heterogeneous gene annotation resources to facilitate high-throughput gene functional analysis

    Directory of Open Access Journals (Sweden)

    Baseler Michael W

    2007-11-01

    Background: Due to the complex and distributed nature of biological research, our current biological knowledge is spread over many redundant annotation databases maintained by many independent groups. Analysts usually need to visit many of these bioinformatics databases in order to integrate comprehensive annotation information for their genes, which becomes a bottleneck, particularly for analytic tasks associated with large gene lists. Thus, a highly centralized and ready-to-use gene-annotation knowledgebase is in demand for high-throughput gene functional analysis. Description: The DAVID Knowledgebase is built around the DAVID Gene Concept, a single-linkage method to agglomerate tens of millions of gene/protein identifiers from a variety of public genomic resources into DAVID gene clusters. The grouping of such identifiers improves cross-reference capability, particularly across the NCBI and UniProt systems, enabling more than 40 publicly available functional annotation sources to be comprehensively integrated and centralized by the DAVID gene clusters. The simple, pair-wise, text-format files which make up the DAVID Knowledgebase are freely downloadable for various data analysis uses. In addition, a well-organized web interface allows users to query different types of heterogeneous annotations in a high-throughput manner. Conclusion: The DAVID Knowledgebase is designed to facilitate high-throughput gene functional analysis. For a given gene list, it not only provides quick access to a wide range of heterogeneous annotation data in a centralized location, but also enriches the level of biological information for an individual gene. Moreover, the entire DAVID Knowledgebase is freely downloadable or searchable at http://david.abcc.ncifcrf.gov/knowledgebase/.
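
    The single-linkage agglomeration idea behind the DAVID Gene Concept can be sketched with a union-find structure in which any two identifiers that co-occur in a cross-reference pair fall into the same gene cluster. The example identifier pairs below are merely illustrative; the real knowledgebase integrates tens of millions of identifiers from many sources.

```python
# Single-linkage grouping of gene/protein identifiers via union-find.

def build_clusters(id_pairs):
    """Group identifiers into clusters by single linkage over cross-reference pairs."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        root_a, root_b = find(a), find(b)
        if root_a != root_b:
            parent[root_a] = root_b

    for a, b in id_pairs:
        union(a, b)

    clusters = {}
    for identifier in parent:
        clusters.setdefault(find(identifier), set()).add(identifier)
    return list(clusters.values())

if __name__ == "__main__":
    pairs = [("EntrezGene:7157", "UniProt:P04637"),
             ("UniProt:P04637", "Ensembl:ENSG00000141510"),
             ("EntrezGene:672", "UniProt:P38398")]
    for cluster in build_clusters(pairs):
        print(sorted(cluster))
```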

  17. Application of bioinformatics in tropical medicine

    Institute of Scientific and Technical Information of China (English)

    Wiwanitkit V

    2008-01-01

    Bioinformatics is the use of information technology to help solve biological problems by designing novel and incisive algorithms and methods of analysis. Bioinformatics has become a vital discipline in the post-genomics era. In this review article, the application of bioinformatics in tropical medicine will be presented and discussed.

  18. A Survey of Bioinformatics Database and Software Usage through Mining the Literature.

    Directory of Open Access Journals (Sweden)

    Geraint Duck

    Computer-based resources are central to much, if not most, biological and medical research. However, while there is an ever-expanding choice of bioinformatics resources to use, described within the biomedical literature, little work to date has evaluated the full range of availability or levels of usage of database and software resources. Here we use text mining to process the PubMed Central full-text corpus, identifying mentions of databases or software within the scientific literature. We provide an audit of the resources contained within the biomedical literature, and a comparison of their relative usage, both over time and between the sub-disciplines of bioinformatics, biology and medicine. We find that trends in resource usage differ between these domains. The bioinformatics literature emphasises novel resource development, while database and software usage within biology and medicine is more stable and conservative. Many resources are only mentioned in the bioinformatics literature, with a relatively small number making it out into general biology, and fewer still into the medical literature. In addition, many resources are seeing a steady decline in their usage (e.g., BLAST, SWISS-PROT), though some are instead seeing rapid growth (e.g., the GO, R). We find a striking imbalance in resource usage, with the top 5% of resource names (133 names) accounting for 47% of total usage, and over 70% of resources extracted being mentioned only once each. While these results highlight the dynamic and creative nature of bioinformatics research, they raise questions about software reuse, choice and the sharing of bioinformatics practice. Is it acceptable that so many resources are apparently never reused? Finally, our work is a step towards automated extraction of scientific method from text. We make the dataset generated by our study available under the CC0 license here: http://dx.doi.org/10.6084/m9.figshare.1281371.
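
    A toy version of the counting exercise described above is sketched below: tally mentions of a fixed dictionary of resource names across a few abstracts and report the share of usage captured by the most-mentioned names. The name list and example texts are invented; the study itself mined the full PubMed Central corpus with far more sophisticated text mining.

```python
# Count mentions of known resource names in a small collection of texts.
import re
from collections import Counter

RESOURCES = ["BLAST", "SWISS-PROT", "GO", "R", "Bioconductor", "ClustalW"]

def count_mentions(texts, names=RESOURCES):
    counts = Counter()
    for text in texts:
        for name in names:
            # Word-boundary match so that "R" does not hit every letter r.
            counts[name] += len(re.findall(rf"\b{re.escape(name)}\b", text))
    return counts

if __name__ == "__main__":
    abstracts = [
        "We aligned reads with BLAST and analysed the output in R.",
        "Annotations were taken from SWISS-PROT and the GO.",
        "BLAST searches were followed by GO enrichment in R / Bioconductor.",
    ]
    counts = count_mentions(abstracts)
    total = sum(counts.values())
    top = counts.most_common(2)
    print(counts)
    print("top-2 share of total usage:", sum(n for _, n in top) / total)
```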

  19. Bioinformatics meets user-centred design: a perspective.

    Directory of Open Access Journals (Sweden)

    Katrina Pavelin

    Designers have a saying that "the joy of an early release lasts but a short time. The bitterness of an unusable system lasts for years." It is indeed disappointing to discover that your data resources are not being used to their full potential. Not only have you invested your time, effort, and research grant on the project, but you may face costly redesigns if you want to improve the system later. This scenario would be less likely if the product was designed to provide users with exactly what they need, so that it is fit for purpose before its launch. We work at EMBL-European Bioinformatics Institute (EMBL-EBI), and we consult extensively with life science researchers to find out what they need from biological data resources. We have found that although users believe that the bioinformatics community is providing accurate and valuable data, they often find the interfaces to these resources tricky to use and navigate. We believe that if you can find out what your users want even before you create the first mock-up of a system, the final product will provide a better user experience. This would encourage more people to use the resource and they would have greater access to the data, which could ultimately lead to more scientific discoveries. In this paper, we explore the need for a user-centred design (UCD) strategy when designing bioinformatics resources and illustrate this with examples from our work at EMBL-EBI. Our aim is to introduce the reader to how selected UCD techniques may be successfully applied to software design for bioinformatics.

  20. Undergraduate Bioinformatics Workshops Provide Perceived Skills

    Directory of Open Access Journals (Sweden)

    Robin Herlands Cresiski

    2014-07-01

    Bioinformatics is becoming an important part of the undergraduate curriculum, but expertise and well-evaluated teaching materials may not be available on every campus. Here, a guest speaker was utilized to introduce bioinformatics, and web-available exercises were adapted for student investigation. Students used web-based nucleotide comparison tools to examine the medical and evolutionary relevance of an unidentified genetic sequence. Based on pre- and post-workshop surveys, there were significant gains in the students' understanding of bioinformatics, as well as in their perceived skills in using bioinformatics tools. The relevance of bioinformatics to a student's career seemed dependent on career aspirations.
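
    An exercise of the kind described above can be reproduced with a short remote BLAST query that returns the top database hits for an "unidentified" sequence. The query sequence below is an arbitrary placeholder, and the sketch assumes Biopython's Bio.Blast module and internet access to the NCBI service.

```python
# Submit a nucleotide sequence to NCBI BLAST and list the top hit titles.
from Bio.Blast import NCBIWWW, NCBIXML

QUERY = "AGCTTTTCATTCTGACTGCAACGGGCAATATGTCTCTGTGTGGATTAAAAAAAGAGTGTCTGATAGCAGC"

def top_blast_hits(sequence, n=5):
    """Run a remote blastn search against the nt database and return hit titles."""
    handle = NCBIWWW.qblast("blastn", "nt", sequence)
    record = NCBIXML.read(handle)
    return [alignment.title for alignment in record.alignments[:n]]

if __name__ == "__main__":
    for title in top_blast_hits(QUERY):
        print(title)
```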

  1. Introducing Nagasaki Coal Mining Technology Training Center owned by the Mitsui Matsushima Resources Co., Ltd.

    Energy Technology Data Exchange (ETDEWEB)

    Kumakawa, K. [Mitsui Matsushima Resources Co., Ltd. (Japan)

    2006-09-15

    The Nagasaki Coal Mine Technology Training Center was established as a facility for 'The Training Project' on coal mine technology following the purchase of part of the mining area owned by the Matsushima Coal Mine, which was closed in November 2001. The Training Center is located offshore on Ikeshima, approximately 7 km west of the Nishisonogi Peninsula's western coast. Training is provided to personnel from Vietnam and Indonesia in subjects ranging from underground mining safety, exploration surveying and rock drivage, to electrical engineering. 3 figs., 4 tabs.

  2. A Bioinformatics Facility for NASA

    Science.gov (United States)

    Schweighofer, Karl; Pohorille, Andrew

    2006-01-01

    Building on an existing prototype, we have fielded a facility with bioinformatics technologies that will help NASA meet its unique requirements for biological research. This facility consists of a cluster of computers capable of performing computationally intensive tasks, software tools, databases and knowledge management systems. Novel computational technologies for analyzing and integrating new biological data and already existing knowledge have been developed. With continued development and support, the facility will fulfill NASA's strategic bioinformatics needs in astrobiology and space exploration. As a demonstration of these capabilities, we will present a detailed analysis of how spaceflight factors impact gene expression in the liver and kidney for mice flown aboard shuttle flight STS-108. We have found that many genes involved in signal transduction, cell cycle, and development respond to changes in microgravity, but that most metabolic pathways appear unchanged.

  3. An introduction to proteome bioinformatics.

    Science.gov (United States)

    Jones, Andrew R; Hubbard, Simon J

    2010-01-01

    This book is part of the Methods in Molecular Biology series, and provides a general overview of computational approaches used in proteome research. In this chapter, we give an overview of the scope of the book in terms of current proteomics experimental techniques and the reasons why computational approaches are needed. We then give a summary of each chapter, which together provide a picture of the state of the art in proteome bioinformatics research.

  4. The DBCLS BioHackathon: standardization and interoperability for bioinformatics web services and workflows. The DBCLS BioHackathon Consortium*

    Directory of Open Access Journals (Sweden)

    Katayama Toshiaki

    2010-08-01

    Web services have become a key technology for bioinformatics, since life science databases are globally decentralized and the exponential increase in the amount of available data demands efficient systems that avoid the need to transfer entire databases for every step of an analysis. However, various incompatibilities among database resources and analysis services make it difficult to connect and integrate these into interoperable workflows. To resolve this situation, we invited domain specialists from web service providers, client software developers, Open Bio* projects, the BioMoby project and researchers of emerging areas where a standard exchange data format is not well established, for an intensive collaboration entitled the BioHackathon 2008. The meeting was hosted by the Database Center for Life Science (DBCLS) and the Computational Biology Research Center (CBRC) and was held in Tokyo from February 11th to 15th, 2008. In this report we highlight the work accomplished and the common issues arising from this event, including the standardization of data exchange formats and services in the emerging fields of glycoinformatics, biological interaction networks, text mining, and phyloinformatics. In addition, common shared object development based on BioSQL, as well as technical challenges in large data management, asynchronous services, and security, are discussed. Consequently, we improved the interoperability of web services in several fields; however, further cooperation among major database centers and continued collaborative efforts between service providers and software developers are still necessary for an effective advance in bioinformatics web service technologies.

  5. Yoga for Stress Management Program as a Complementary Alternative Counseling Resource in a University Counseling Center

    Science.gov (United States)

    Milligan, Colleen K.

    2006-01-01

    A Yoga for Stress Management Program (YSMP) that served as a complementary alternative therapy resource was successfully implemented at a midsize, predominantly undergraduate university. It was offered in addition to traditional treatments for student mental health. Counselors, Residence Life staff, and faculty found that the program was useful…

  6. 76 FR 12123 - National Center for Research Resources; Notice of Closed Meeting

    Science.gov (United States)

    2011-03-04

    ...: National Institutes of Health/NCRR/OR, Democracy I, 6701 Democracy Blvd., 1078, Bethesda, MD (Virtual... for Research Resources, National Institutes of Health, 6701 Democracy Blvd., 10th FL., Bethesda, MD... Infrastructure, ] 93.306, 93.333; 93.702, ARRA Related Construction Awards, National Institutes of Health,...

  7. 76 FR 78014 - National Center for Research Resources; Notice of Closed Meetings

    Science.gov (United States)

    2011-12-15

    ... Research Resources, 6701 Democracy Blvd. Room 1068, Bethesda, MD 20892, (301) 435-0965, slicelw@mail.nih..., National Institutes of Health, 6701 Democracy Blvd., Dem. 1, Room 1064, Msc 4874, Bethesda, MD 20892-4874... Infrastructure, 93.306, 93.333; 93.702, ARRA Related Construction Awards, National Institutes of Health,...

  8. 76 FR 40384 - National Center for Research Resources; Notice of Closed Meetings

    Science.gov (United States)

    2011-07-08

    ...: National Institutes of Health/NCRR/OR, Democracy 1, 6701 Democracy Blvd., 1078, Bethesda, MD 20892. Contact... Resources, 6701 ] Democracy Blvd., Room 1068, Bethesda, MD 20892, 301-435-0965, slicelw@mail.nih.gov... Research; 93.371, Biomedical Technology; 93.389, Research Infrastructure, 93.306, 93.333; 93.702,...

  9. 75 FR 28262 - National Center for Research Resources; Notice of Closed Meeting

    Science.gov (United States)

    2010-05-20

    ... Health, NCRR, OR, One Democracy Plaza, 6701 Democracy Blvd., Rm. 1064, Bethesda, MD 20892. Contact Person... Resources, National Institutes of Health, 6701 Democracy Blvd., 1 Democracy Plaza, Rm. 1064, Bethesda, MD..., Clinical Research; 93.371, Biomedical Technology; 93.389, Research Infrastructure; 93.306, 93.333,...

  10. Amarillo National Resource Center for Plutonium. Quarterly technical progress report, May 1--July 31, 1998

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-09-01

    Progress is reported on research projects related to the following: Electronic resource library; Environment, safety, and health; Communication, education, training, and community involvement; Nuclear and other materials; and Reporting, evaluation, monitoring, and administration. Technical studies investigate remedial action of high explosives-contaminated lands, radioactive waste management, nondestructive assay methods, and plutonium processing, handling, and storage.

  11. 77 FR 36034 - Notice of Funding Availability for the Small Business Transportation Resource Center Program

    Science.gov (United States)

    2012-06-15

    ... business enterprises (DBE), women owned small businesses (WOSB), HubZone, service disabled veteran owned... Office of the Secretary Notice of Funding Availability for the Small Business Transportation Resource...), Office of Small and Disadvantaged Business Utilization (OSDBU). ACTION: Notice of Funding...

  12. 78 FR 4973 - Notice of Funding Availability for the Small Business Transportation Resource Center Program

    Science.gov (United States)

    2013-01-23

    ... business enterprises (DBE), women owned small businesses (WOSB), HubZone, service disabled veteran owned... Office of the Secretary Notice of Funding Availability for the Small Business Transportation Resource...), Office of Small and Disadvantaged Business Utilization (OSDBU). ACTION: Notice of Funding...

  13. NASA Space Engineering Research Center for utilization of local planetary resources

    Science.gov (United States)

    1992-01-01

    Reports covering the period from 1 Nov. 1991 to 31 Oct. 1992 and documenting progress at the NASA Space Engineering Research Center are included. Topics covered include: (1) processing of propellants, volatiles, and metals; (2) production of structural and refractory materials; (3) system optimization discovery and characterization; (4) system automation and optimization; and (5) database development.

  14. 15 CFR 291.4 - National industry-specific pollution prevention and environmental compliance resource centers.

    Science.gov (United States)

    2010-01-01

    ... and develop the services to its customers. The centers should rarely, if ever, perform research, but... proposal must set forth clearly defined, effective mechanisms for delivery of services to target population..., external evaluation for assessing outcomes of the activity, and “customer satisfaction” measures...

  15. Virus variation resources at the National Center for Biotechnology Information: dengue virus

    Directory of Open Access Journals (Sweden)

    Rozanov Michael

    2009-04-01

    Full Text Available Abstract Background There is an increasing number of complete and incomplete virus genome sequences available in public databases. This large body of sequence data harbors information about epidemiology, phylogeny, and virulence. Several specialized databases, such as the NCBI Influenza Virus Resource or the Los Alamos HIV database, offer sophisticated query interfaces along with integrated exploratory data analysis tools for individual virus species to facilitate extracting this information. Thus far, there has not been a comprehensive database for dengue virus, a significant public health threat. Results We have created an integrated web resource for dengue virus. The technology developed for the NCBI Influenza Virus Resource has been extended to process non-segmented dengue virus genomes. In order to allow efficient processing of the dengue genome, which is large in comparison with individual influenza segments, we developed an offline pre-alignment procedure which generates a multiple sequence alignment of all dengue sequences. The pre-calculated alignment is then used to rapidly create alignments of sequence subsets in response to user queries. This improvement in technology will also facilitate the incorporation of additional virus species in the future. The set of virus-specific databases at NCBI, which will be referred to as Virus Variation Resources (VVR), allow users to build complex queries against virus-specific databases and then apply exploratory data analysis tools to the results. The metadata is automatically collected where possible, and extended with data extracted from the literature. Conclusion The NCBI Dengue Virus Resource integrates dengue sequence information with relevant metadata (sample collection time and location, disease severity, serotype, sequenced genome region) and facilitates retrieval and preliminary analysis of dengue sequences using integrated web analysis and visualization tools.
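    The pre-alignment strategy described above (compute one master alignment offline, then derive sub-alignments for the sequences a user selects) can be sketched in a few lines. The toy alignment and the rule of dropping all-gap columns are illustrative assumptions, not the NCBI implementation.

```python
# Sketch of deriving a sub-alignment from a precomputed master alignment:
# keep only the requested sequences, then drop columns that are gaps in
# every kept sequence. Toy data; not the NCBI Virus Variation code.
master_alignment = {
    "DENV-1_example": "ATG-CTA-CG",
    "DENV-2_example": "ATGGCTA-CG",
    "DENV-3_example": "ATG-CTATCG",
}

def sub_alignment(alignment, wanted_ids):
    rows = {sid: alignment[sid] for sid in wanted_ids}
    length = len(next(iter(rows.values())))
    keep = [i for i in range(length)
            if any(seq[i] != "-" for seq in rows.values())]
    return {sid: "".join(seq[i] for i in keep) for sid, seq in rows.items()}

print(sub_alignment(master_alignment, ["DENV-1_example", "DENV-3_example"]))
```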

  16. Center for fetal monkey gene transfer for heart, lung, and blood diseases: an NHLBI resource for the gene therapy community.

    Science.gov (United States)

    Tarantal, Alice F; Skarlatos, Sonia I

    2012-11-01

    The goals of the National Heart, Lung, and Blood Institute (NHLBI) Center for Fetal Monkey Gene Transfer for Heart, Lung, and Blood Diseases are to conduct gene transfer studies in monkeys to evaluate safety and efficiency, and to provide NHLBI-supported investigators with the expertise, resources, and services needed to actively pursue gene transfer approaches in monkeys in their research programs. NHLBI-supported projects span investigators throughout the United States and have addressed novel approaches to gene delivery, provided "proof-of-principle", assessed whether findings in small-animal models could be demonstrated in a primate species, or were conducted to enable new grant or IND submissions. The Center for Fetal Monkey Gene Transfer for Heart, Lung, and Blood Diseases successfully aids the gene therapy community in addressing regulatory barriers, and serves as an effective vehicle for advancing the field.

  17. [Occupational health in a penitentiary center in Chile: a view from human resources policies].

    Science.gov (United States)

    Güilgüiruca R, M; Herrera-Bascur, J

    2015-01-01

    This article examines the influence of human resources policies on occupational health variables, such as engagement and job satisfaction, among Chilean prison employees. Eighty workers at the Women's Prison of Iquique were evaluated; 77% and 88% scored moderate to high in engagement and job satisfaction, respectively. Twenty-four percent of the variation in engagement of the workers studied can be explained by policies aimed at promoting personal interests, while 32% of the variation in job satisfaction can be explained by policies of self-efficacy and personal interests. These data support the assertion that human resources policies play a relevant and necessary role in modifying and improving the occupational health conditions of these public-sector workers.

  18. Geological characteristics and resource potentials of oil shale in Ordos Basin, Central China

    Energy Technology Data Exchange (ETDEWEB)

    Yunlai, Bai; Yingcheng, Zhao; Long, Ma; Wu-jun, Wu; Yu-hu, Ma

    2010-09-15

    It has been shown that the Ordos basin contains not only abundant oil, gas, coal, coal-bed gas, groundwater, and giant uranium deposits, but also abundant oil shale resources. The oil shale is typically 4-36 m thick, with an oil content of 1.5%-13.7% and a calorific value of 1.66-20.98 MJ/kg. The oil shale resource at burial depths of less than 2000 m exceeds 2000×10⁸ t (334), of which the confirmed reserve is about 1×10⁸ t (121). Developing oil shale in the Ordos basin promises both substantial economic benefit and valuable experience.

  19. Nonfuel mineral resources in the United States-Mexico border region; a progress report on information available from the Center for Inter-American Mineral Resource Investigations (CIMRI)

    Science.gov (United States)

    Orris, G.J.; Page, N.J.; Staude, J.G.; Bolm, K.S.; Carbonaro, M.M.; Gray, Floyd; Long, K.R.

    1993-01-01

    The exploitation of minerals has played a significant role in population growth and development of the U.S.-Mexico border region. Recently proposed changes in regulations related to mining in the United States and changes in mining and investment regulations in Mexico have led to increased mineral exploration and development in Mexico, especially in the border region. As a preliminary step in the study of the mineral industry of this area, the Center for Inter-American Mineral Resource Investigations (CIMRI) of the U.S. Geological Survey has compiled mine and occurrence data for nonfuel minerals in the border region. Analysis of this information indicates that a wide variety of metallic and industrial mineral commodities are present which can be used in agriculture, infrastructure, environmental improvement, and other industries. Therefore, mining will continue to play a significant role in the economy of this region.

  20. Pladipus Enables Universal Distributed Computing in Proteomics Bioinformatics.

    Science.gov (United States)

    Verheggen, Kenneth; Maddelein, Davy; Hulstaert, Niels; Martens, Lennart; Barsnes, Harald; Vaudel, Marc

    2016-03-04

    The use of proteomics bioinformatics substantially contributes to an improved understanding of proteomes, but this novel and in-depth knowledge comes at the cost of increased computational complexity. Parallelization across multiple computers, a strategy termed distributed computing, can be used to handle this increased complexity; however, setting up and maintaining a distributed computing infrastructure requires resources and skills that are not readily available to most research groups. Here we propose a free and open-source framework named Pladipus that greatly facilitates the establishment of distributed computing networks for proteomics bioinformatics tools. Pladipus is straightforward to install and operate thanks to its user-friendly graphical interface, allowing complex bioinformatics tasks to be run easily on a network instead of a single computer. As a result, any researcher can benefit from the increased computational efficiency provided by distributed computing, hence empowering them to tackle more complex bioinformatics challenges. Notably, it enables any research group to perform large-scale reprocessing of publicly available proteomics data, thus supporting the scientific community in mining these data for novel discoveries.
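    As an illustration of the task-farming idea that Pladipus automates across machines, the sketch below distributes independent jobs over local worker processes. It is not the Pladipus API; the file names and the placeholder search function are assumptions.

```python
# Generic illustration of distributing independent bioinformatics tasks
# over a pool of workers. This is NOT the Pladipus API; it only sketches
# the distributed-computing idea that the framework extends to networks.
from concurrent.futures import ProcessPoolExecutor

def run_search(spectrum_file: str) -> str:
    # Placeholder for invoking a real search engine on one input file.
    return f"processed {spectrum_file}"

if __name__ == "__main__":
    tasks = [f"run_{i:03d}.mgf" for i in range(8)]  # hypothetical file names
    with ProcessPoolExecutor(max_workers=4) as pool:
        for result in pool.map(run_search, tasks):
            print(result)
```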

  1. Resource Management in Internet-Oriented Data Centers

    Institute of Scientific and Technical Information of China (English)

    张伟; 宋莹; 阮利; 祝明发; 肖利民

    2012-01-01

    Internet data centers are developing in a diversified, intelligent, automated, large-scale, and standardized direction. As their scale and complexity increase, effectively managing resources becomes a major challenge. Resource management has therefore become a pressing problem for Internet data centers, and its importance and urgency can no longer be ignored. This paper analyzes the two major resource-management challenges that Internet data centers face: (1) satisfying the service level agreements (SLAs) of multiple concurrent applications; and (2) improving the energy efficiency of system services. Organized around these challenges, the paper thoroughly summarizes and analyzes the resource-management research of the last decade on guaranteeing SLAs, reducing power consumption, and jointly guaranteeing SLAs while reducing power. Finally, it summarizes this body of research and points out future research directions.
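    The energy-efficiency objective discussed above is often cast as a consolidation problem: pack workloads onto as few active servers as possible without exceeding capacity, so that idle machines can be powered down. The first-fit-decreasing sketch below illustrates the idea; the capacities and demands are assumed values, not figures from the surveyed work.

```python
# First-fit-decreasing consolidation sketch: place workloads on as few
# active servers as possible without exceeding capacity, so idle servers
# can be powered down. Capacities and demands are illustrative assumptions.
def consolidate(demands, capacity):
    servers = []     # remaining capacity of each active server
    placement = {}   # workload name -> server index
    for name, demand in sorted(demands.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(servers):
            if demand <= free:
                servers[i] -= demand
                placement[name] = i
                break
        else:
            servers.append(capacity - demand)
            placement[name] = len(servers) - 1
    return placement, len(servers)

demo = {"web": 0.35, "db": 0.50, "batch": 0.20, "cache": 0.15, "etl": 0.40}
placement, active = consolidate(demo, capacity=1.0)
print(placement, f"active servers: {active}")
```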

  2. Amarillo National Resource Center for plutonium. Work plan progress report, November 1, 1995--January 31, 1996

    Energy Technology Data Exchange (ETDEWEB)

    Cluff, D. [Texas Tech Univ., Lubbock, TX (United States)

    1996-04-01

    The Center operates under a cooperative agreement between DOE and the State of Texas and is directed and administered by an education consortium. Its programs include developing peaceful uses for the materials removed from dismantled weapons, studying effects of nuclear materials on environment and public health, remedying contaminated soils and water, studying storage, disposition, and transport of Pu, HE, and other hazardous materials removed from weapons, providing research and counsel to US in carrying out weapons reductions in cooperation with Russia, and conducting a variety of education and training programs.

  3. A Method for Determination of Resource Potential of Cankırı Historical City Center

    Directory of Open Access Journals (Sweden)

    E. Erdogan

    2008-01-01

    Full Text Available The scope of this research, using Cankırı's urban conservation site as a case study, is to highlight the importance of urban landscape design in the conservation of historical city centers, so as to assure cultural continuity, integrate these centers with modern living conditions, and transfer them to future generations as livable spaces. To this end, the conservation of historical places, urban landscape design, and the spatial and historical development of city centers are discussed. Based on findings from preliminary work and analyses carried out in the study area, proposals are made to maintain the sustainability of the natural and cultural values of Cankırı's urban conservation site. Additionally, existing maps of the site were digitized, and spatial and attribute data were linked together to build a reference database for both the city of Cankırı and similar studies.

  4. Bioinformatic science and devices for computer analysis and visualization of macromolecules

    Directory of Open Access Journals (Sweden)

    Yu.B. Porozov

    2010-06-01

    Full Text Available The goals and objectives of bioinformatic science are presented in the article. The main methods and approaches used in computational biology are highlighted. Areas in which bioinformatics can greatly facilitate and speed up the work of practicing biologists and pharmacologists are identified. The features of both the basic software packages and the tools for complete, thorough analysis of macromolecules and for the development and modeling of ligands and binding centers are described.

  5. Final Report: Phase II Nevada Water Resources Data, Modeling, and Visualization (DMV) Center

    Energy Technology Data Exchange (ETDEWEB)

    Jackman, Thomas [Desert Research Institute; Minor, Timothy [Desert Research Institute; Pohll, Gregory [Desert Research Institute

    2013-07-22

    Water is unquestionably a critical resource throughout the United States. In the semi-arid west -- an area stressed by increases in human population and the sprawl of the built environment -- water is the most important limiting resource. Crucially, science must understand the factors that affect the availability and distribution of water. To sustain growing consumptive demand, science needs to translate understanding into reliable and robust predictions of availability under weather conditions that could be average but might be extreme. These predictions are needed to support current and long-term planning. Similar to the role of weather forecasts and climate prediction, water prediction over short and long temporal scales can contribute to resource strategy, governmental policy, and municipal infrastructure decisions, which are arguably tied to natural variability and unnatural change to climate. Changes in seasonal and annual temperature, precipitation, snowmelt, and runoff affect the distribution of water over large temporal and spatial scales, which in turn affects the risk of flooding and groundwater recharge. Anthropogenic influences and impacts increase the complexity and urgency of the challenge. The goal of this project has been to develop a decision support framework of data acquisition, digital modeling, and 3D visualization. This integrated framework consists of tools for compiling, discovering and projecting our understanding of processes that control the availability and distribution of water. The framework is intended to support the analysis of the complex interactions between processes that affect water supply, from controlled availability to either scarcity or deluge. The developed framework enables DRI to promote excellence in water resource management, particularly within the Lake Tahoe basin. In principle, this framework could be replicated for other watersheds throughout the United States. Phase II of this project builds upon the research conducted during

  6. Fort Collins Science Center Ecosystem Dynamics branch--interdisciplinary research for addressing complex natural resource issues across landscapes and time

    Science.gov (United States)

    Bowen, Zachary H.; Melcher, Cynthia P.; Wilson, Juliette T.

    2013-01-01

    The Ecosystem Dynamics Branch of the Fort Collins Science Center offers an interdisciplinary team of talented and creative scientists with expertise in biology, botany, ecology, geology, biogeochemistry, physical sciences, geographic information systems, and remote-sensing, for tackling complex questions about natural resources. As demand for natural resources increases, the issues facing natural resource managers, planners, policy makers, industry, and private landowners are increasing in spatial and temporal scope, often involving entire regions, multiple jurisdictions, and long timeframes. Needs for addressing these issues include (1) a better understanding of biotic and abiotic ecosystem components and their complex interactions; (2) the ability to easily monitor, assess, and visualize the spatially complex movements of animals, plants, water, and elements across highly variable landscapes; and (3) the techniques for accurately predicting both immediate and long-term responses of system components to natural and human-caused change. The overall objectives of our research are to provide the knowledge, tools, and techniques needed by the U.S. Department of the Interior, state agencies, and other stakeholders in their endeavors to meet the demand for natural resources while conserving biodiversity and ecosystem services. Ecosystem Dynamics scientists use field and laboratory research, data assimilation, and ecological modeling to understand ecosystem patterns, trends, and mechanistic processes. This information is used to predict the outcomes of changes imposed on species, habitats, landscapes, and climate across spatiotemporal scales. The products we develop include conceptual models to illustrate system structure and processes; regional baseline and integrated assessments; predictive spatial and mathematical models; literature syntheses; and frameworks or protocols for improved ecosystem monitoring, adaptive management, and program evaluation. The descriptions

  7. Establishing bioinformatics research in the Asia Pacific

    Directory of Open Access Journals (Sweden)

    Tammi Martti

    2006-12-01

    Full Text Available Abstract In 1998, the Asia Pacific Bioinformatics Network (APBioNet), Asia's oldest bioinformatics organisation, was set up to champion the advancement of bioinformatics in the Asia Pacific. By 2002, APBioNet was able to gain sufficient critical mass to initiate the first International Conference on Bioinformatics (InCoB), bringing together scientists working in the field of bioinformatics in the region. This year, the InCoB2006 Conference was organized as the 5th annual conference of the Asia-Pacific Bioinformatics Network, on Dec. 18–20, 2006 in New Delhi, India, following a series of successful events in Bangkok (Thailand), Penang (Malaysia), Auckland (New Zealand) and Busan (South Korea). This Introduction provides a brief overview of the peer-reviewed manuscripts accepted for publication in this Supplement. It exemplifies a typical snapshot of the growing research excellence in bioinformatics of the region as we embark on a trajectory of establishing a solid bioinformatics research culture in the Asia Pacific that is able to contribute fully to the global bioinformatics community.

  9. RefSeq and LocusLink: NCBI gene-centered resources.

    Science.gov (United States)

    Pruitt, K D; Maglott, D R

    2001-01-01

    Thousands of genes have been painstakingly identified and characterized a few genes at a time. Many thousands more are being predicted by large scale cDNA and genomic sequencing projects, with levels of evidence ranging from supporting mRNA sequence and comparative genomics to computing ab initio models. This, coupled with the burgeoning scientific literature, makes it critical to have a comprehensive directory for genes and reference sequences for key genomes. The NCBI provides two resources, LocusLink and RefSeq, to meet these needs. LocusLink organizes information around genes to generate a central hub for accessing gene-specific information for fruit fly, human, mouse, rat and zebrafish. RefSeq provides reference sequence standards for genomes, transcripts and proteins; human, mouse and rat mRNA RefSeqs, and their corresponding proteins, are discussed here. Together, RefSeq and LocusLink provide a non-redundant view of genes and other loci to support research on genes and gene families, variation, gene expression and genome annotation. Additional information about LocusLink and RefSeq is available at http://www.ncbi.nlm.nih.gov/LocusLink/.
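    RefSeq records can be retrieved programmatically through NCBI's E-utilities; the sketch below uses Biopython's Entrez module, assuming Biopython is installed and a valid contact e-mail is supplied. The accession NM_000546 (a human TP53 mRNA RefSeq) is only a representative example, and note that LocusLink itself was later superseded by Entrez Gene.

```python
# Minimal sketch of fetching a RefSeq mRNA record via NCBI E-utilities
# using Biopython. The e-mail address is a placeholder (NCBI requires a
# real contact address), and the accession is a representative example.
from Bio import Entrez, SeqIO

Entrez.email = "your.name@example.org"  # replace with your own address

handle = Entrez.efetch(db="nucleotide", id="NM_000546",
                       rettype="gb", retmode="text")
record = SeqIO.read(handle, "genbank")
handle.close()

print(record.id, record.description)
print("sequence length:", len(record.seq))
```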

  10. Incorporating a New Bioinformatics Component into Genetics at a Historically Black College: Outcomes and Lessons

    Science.gov (United States)

    Holtzclaw, J. David; Eisen, Arri; Whitney, Erika M.; Penumetcha, Meera; Hoey, J. Joseph; Kimbro, K. Sean

    2006-01-01

    Many students at minority-serving institutions are underexposed to Internet resources such as the human genome project, PubMed, NCBI databases, and other Web-based technologies because of a lack of financial resources. To change this, we designed and implemented a new bioinformatics component to supplement the undergraduate Genetics course at…

  11. Best practices in bioinformatics training for life scientists.

    KAUST Repository

    Via, Allegra

    2013-06-25

    The mountains of data thrusting from the new landscape of modern high-throughput biology are irrevocably changing biomedical research and creating a near-insatiable demand for training in data management and manipulation and data mining and analysis. Among life scientists, from clinicians to environmental researchers, a common theme is the need not just to use, and gain familiarity with, bioinformatics tools and resources but also to understand their underlying fundamental theoretical and practical concepts. Providing bioinformatics training to empower life scientists to handle and analyse their data efficiently, and progress their research, is a challenge across the globe. Delivering good training goes beyond traditional lectures and resource-centric demos, using interactivity, problem-solving exercises and cooperative learning to substantially enhance training quality and learning outcomes. In this context, this article discusses various pragmatic criteria for identifying training needs and learning objectives, for selecting suitable trainees and trainers, for developing and maintaining training skills and evaluating training quality. Adherence to these criteria may help not only to guide course organizers and trainers on the path towards bioinformatics training excellence but, importantly, also to improve the training experience for life scientists.

  12. Best practices in bioinformatics training for life scientists.

    Science.gov (United States)

    Via, Allegra; Blicher, Thomas; Bongcam-Rudloff, Erik; Brazas, Michelle D; Brooksbank, Cath; Budd, Aidan; De Las Rivas, Javier; Dreyer, Jacqueline; Fernandes, Pedro L; van Gelder, Celia; Jacob, Joachim; Jimenez, Rafael C; Loveland, Jane; Moran, Federico; Mulder, Nicola; Nyrönen, Tommi; Rother, Kristian; Schneider, Maria Victoria; Attwood, Teresa K

    2013-09-01

    The mountains of data thrusting from the new landscape of modern high-throughput biology are irrevocably changing biomedical research and creating a near-insatiable demand for training in data management and manipulation and data mining and analysis. Among life scientists, from clinicians to environmental researchers, a common theme is the need not just to use, and gain familiarity with, bioinformatics tools and resources but also to understand their underlying fundamental theoretical and practical concepts. Providing bioinformatics training to empower life scientists to handle and analyse their data efficiently, and progress their research, is a challenge across the globe. Delivering good training goes beyond traditional lectures and resource-centric demos, using interactivity, problem-solving exercises and cooperative learning to substantially enhance training quality and learning outcomes. In this context, this article discusses various pragmatic criteria for identifying training needs and learning objectives, for selecting suitable trainees and trainers, for developing and maintaining training skills and evaluating training quality. Adherence to these criteria may help not only to guide course organizers and trainers on the path towards bioinformatics training excellence but, importantly, also to improve the training experience for life scientists.

  13. Key Steps of Data Resources Utilization for Guangdong Water Resources Data Center

    Institute of Scientific and Technical Information of China (English)

    杨帆; 岳兆新; 艾萍

    2014-01-01

    The development of information technologies such as the Internet of Things (IoT) has greatly enriched the data resources of the water resources domain. How to standardize these data resources, realize resource sharing, and achieve collaboration among water resources business applications is an important problem for the domain to solve. Drawing on the construction and practice of data resources utilization at the Guangdong water resources data center, this paper analyzes the main problems currently facing the construction of data resources utilization and describes the center's practice, in the hope of providing a reference for data resources utilization at water conservancy data centers at all levels.

  14. U.S. Department of Energy Regional Resource Centers Report: State of the Wind Industry in the Regions

    Energy Technology Data Exchange (ETDEWEB)

    Baranowski, Ruth [National Renewable Energy Lab. (NREL), Golden, CO (United St; Oteri, Frank [National Renewable Energy Lab. (NREL), Golden, CO (United St; Baring-Gould, Ian [National Renewable Energy Lab. (NREL), Golden, CO (United St; Tegen, Suzanne [National Renewable Energy Lab. (NREL), Golden, CO (United St

    2016-03-01

    The wind industry and the U.S. Department of Energy (DOE) are addressing technical challenges to increasing wind energy's contribution to the national grid (such as reducing turbine costs and increasing energy production and reliability), and they recognize that public acceptance issues can be challenges for wind energy deployment. Wind project development decisions are best made using unbiased information about the benefits and impacts of wind energy. In 2014, DOE established six wind Regional Resource Centers (RRCs) to provide information about wind energy, focusing on regional qualities. This document summarizes the status and drivers for U.S. wind energy development on regional and state levels. It is intended to be a companion to DOE's 2014 Distributed Wind Market Report, 2014 Wind Technologies Market Report, and 2014 Offshore Wind Market and Economic Analysis that provide assessments of the national wind markets for each of these technologies.

  15. The American Dental Association's Center for Evidence-Based Dentistry: a critical resource for 21st century dental practice.

    Science.gov (United States)

    Frantsve-Hawley, Julie; Jeske, Arthur

    2011-02-01

    Through its website (http://www.ada.org/prof/resources/ebd/index.asp), the American Dental Association's Center for Evidence-Based Dentistry offers dental health professionals access to systematic reviews of oral health-related research findings, as well as Clinical Recommendations, which summarize large bodies of scientific evidence in the form of practice recommendations, e.g., the use of professionally-applied topical fluoride and pit-and-fissure sealants. Another feature of the site of great practical importance to the practicing dentist is the Critical Summary, which is a concise review of an individual systematic review's methodology and findings, as well as the importance and context of the outcomes, and the strengths and weaknesses of the systematic review and its implications for dental practice.

  16. A Summary of NASA Summer Faculty Fellowship Work in the E.O. Office and in the Educator Resources Center

    Science.gov (United States)

    Thompson, H. Wendell, Sr.

    2005-01-01

    The Office of Equal Opportunity supports a number of summer programs which are designed to: 1.) Increase the number of elementary and secondary students and teachers who are involved in NASA-related education opportunities; and 2.) Support higher education research capability and opportunities that attract and prepare increasing numbers of students and faculty for NASA-related careers. Part of my work in the E.O. office involved evaluating several of these programs in order to determine their level of success and to make recommendations for their improvement where necessary. As part of my involvement with one of the programs, the PSTI, I had the opportunity to interact with the students in a number of their sessions, which involved problem-based learning in science, mathematics, and technology. A summary of the evaluation of those programs is included in this report. The second part of my work involved assisting the coordinator of the Educator Resource Center at the Space and Rocket Center. I participated in space science workshops for in-service and pre-service teachers. There, educational resources were made available to the participants, including many hands-on activities that they could take back to their classes. I participated in the three-hour workshops offered on Tuesdays and Thursdays of each week, although there were workshops on other days. On Mondays, Wednesdays, and Fridays, I worked in the E.O. office. As a result of my work in the ERC, I developed a Directed Reading PowerPoint Lesson Plan Guide involving remote sensing, based on a NASA-published children's book entitled Echo the Bat, written by Ginger Butcher. I have included a description of the lesson in this report. A summary of the evaluations of several of the summer programs supported by the Equal Opportunity office is included in this report.

  17. ZBIT Bioinformatics Toolbox: A Web-Platform for Systems Biology and Expression Data Analysis.

    Science.gov (United States)

    Römer, Michael; Eichner, Johannes; Dräger, Andreas; Wrzodek, Clemens; Wrzodek, Finja; Zell, Andreas

    2016-01-01

    Bioinformatics analysis has become an integral part of research in biology. However, installation and use of scientific software can be difficult and often requires technical expert knowledge. Reasons are dependencies on certain operating systems or required third-party libraries, missing graphical user interfaces and documentation, or nonstandard input and output formats. In order to make bioinformatics software easily accessible to researchers, we here present a web-based platform. The Center for Bioinformatics Tuebingen (ZBIT) Bioinformatics Toolbox provides web-based access to a collection of bioinformatics tools developed for systems biology, protein sequence annotation, and expression data analysis. Currently, the collection encompasses software for conversion and processing of community standards SBML and BioPAX, transcription factor analysis, and analysis of microarray data from transcriptomics and proteomics studies. All tools are hosted on a customized Galaxy instance and run on a dedicated computation cluster. Users only need a web browser and an active internet connection in order to benefit from this service. The web platform is designed to facilitate the usage of the bioinformatics tools for researchers without advanced technical background. Users can combine tools for complex analyses or use predefined, customizable workflows. All results are stored persistently and reproducible. For each tool, we provide documentation, tutorials, and example data to maximize usability. The ZBIT Bioinformatics Toolbox is freely available at https://webservices.cs.uni-tuebingen.de/.
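    Galaxy instances such as the one hosting the ZBIT toolbox generally expose a REST API that can be scripted, for example with the BioBlend library. Whether the ZBIT server itself permits API access is not stated in the record, so the URL and API key below are placeholders and the whole sketch is only a generic illustration of working with a Galaxy instance.

```python
# Generic illustration of scripting a Galaxy server with BioBlend.
# The URL and API key are placeholders; whether the ZBIT instance itself
# allows API access is an assumption, not a documented feature.
from bioblend.galaxy import GalaxyInstance

gi = GalaxyInstance(url="https://galaxy.example.org", key="YOUR_API_KEY")

history = gi.histories.create_history(name="zbit-demo")   # workspace for results
for tool in gi.tools.get_tools()[:5]:                      # list a few tools
    print(tool["id"], "-", tool.get("name", ""))
```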

  18. Fish bone foreign body presenting with an acute fulminating retropharyngeal abscess in a resource-challenged center: a case report

    Directory of Open Access Journals (Sweden)

    Oyewole Ezekiel O

    2011-04-01

    Full Text Available Abstract Introduction A retropharyngeal abscess is a potentially life-threatening infection in the deep space of the neck, which can compromise the airway. Its management requires highly specialized care, including surgery and intensive care, to reduce mortality. This is the first case of a gas-forming abscess reported from this region, but not the first such report in the literature. Case presentation We present the case of a 16-month-old Yoruba baby girl with a gas-forming retropharyngeal abscess secondary to a fish bone foreign body, with laryngeal spasm that was managed in the recovery room. We highlight the specific problems encountered in managing this case in a resource-challenged center such as ours. Conclusion We describe an unusual presentation of a gas-forming organism causing a retropharyngeal abscess in a child. The patient's condition was treated despite inadequate resources for its management. We recommend early recognition through adequate evaluation of any oropharyngeal injury or infection, and early referral to a specialist with prompt surgical intervention.

  19. The research subject advocate at minority Clinical Research Centers: an added resource for protection of human subjects.

    Science.gov (United States)

    Easa, David; Norris, Keith; Hammatt, Zoë; Kim, Kari; Hernandez, Esther; Kato, Kambrie; Balaraman, Venkataraman; Ho, Tammy; Shomaker, Samuel

    2005-01-01

    In early 2001, the National Institutes of Health (NIH) created the research subject advocate (RSA) position as an additional resource for human subjects protection at NIH-funded Clinical Research Centers (CRCs) to enhance the protection of human participants in clinical research studies. We describe the RSA position in the context of clinical research, with a particular emphasis on the role of the RSA in two of the five CRCs funded by the NIH Research Centers in Minority Institutions (RCMI) program. Through participation in protocol development, informed consent procedures, study implementation and follow-up with adverse events, the RSA works closely with research investigators and their staff to protect study participants. The RSA also conducts workshops, training and education sessions, and consultation with investigators to foster enhanced communication and adherence to ethical standards and safety regulations. Although we cannot yet provide substantive evidence of positive outcomes, this article illuminates the value of the RSA position in ensuring that safety of research participants is accorded the highest priority at CRCs. On the basis of initial results, we conclude that the RSA is an effective mechanism for achieving the NIH goal of maintaining the utmost scrutiny of protocols involving human subjects.

  20. BIRCH: A user-oriented, locally-customizable, bioinformatics system

    Directory of Open Access Journals (Sweden)

    Fristensky Brian

    2007-02-01

    Full Text Available Abstract Background Molecular biologists need sophisticated analytical tools which often demand extensive computational resources. While finding, installing, and using these tools can be challenging, pipelining data from one program to the next is particularly awkward, especially when using web-based programs. At the same time, system administrators tasked with maintaining these tools do not always appreciate the needs of research biologists. Results BIRCH (Biological Research Computing Hierarchy) is an organizational framework for delivering bioinformatics resources to a user group, scaling from a single lab to a large institution. The BIRCH core distribution includes many popular bioinformatics programs, unified within the GDE (Genetic Data Environment) graphic interface. Of equal importance, BIRCH provides the system administrator with tools that simplify the job of managing a multiuser bioinformatics system across different platforms and operating systems. These include tools for integrating locally-installed programs and databases into BIRCH, and for customizing the local BIRCH system to meet the needs of the user base. BIRCH can also act as a front end to provide a unified view of already-existing collections of bioinformatics software. Documentation for the BIRCH and locally-added programs is merged in a hierarchical set of web pages. In addition to manual pages for individual programs, BIRCH tutorials employ step by step examples, with screen shots and sample files, to illustrate both the important theoretical and practical considerations behind complex analytical tasks. Conclusion BIRCH provides a versatile organizational framework for managing software and databases, and making these accessible to a user base. Because of its network-centric design, BIRCH makes it possible for any user to do any task from anywhere.

  1. A Mathematical Optimization Problem in Bioinformatics

    Science.gov (United States)

    Heyer, Laurie J.

    2008-01-01

    This article describes the sequence alignment problem in bioinformatics. Through examples, we formulate sequence alignment as an optimization problem and show how to compute the optimal alignment with dynamic programming. The examples and sample exercises have been used by the author in a specialized course in bioinformatics, but could be adapted…
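    A minimal worked example of the dynamic-programming formulation mentioned above is a global-alignment (Needleman-Wunsch) scorer. The match, mismatch, and gap scores below are common teaching defaults, not values prescribed by the article.

```python
# Compact Needleman-Wunsch global alignment score via dynamic programming.
# Match/mismatch/gap scores are common illustrative defaults, not values
# taken from the article.
def nw_score(a, b, match=1, mismatch=-1, gap=-2):
    rows, cols = len(a) + 1, len(b) + 1
    dp = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        dp[i][0] = i * gap          # cost of aligning a prefix against gaps
    for j in range(1, cols):
        dp[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = dp[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            dp[i][j] = max(diag, dp[i-1][j] + gap, dp[i][j-1] + gap)
    return dp[-1][-1]

print(nw_score("GATTACA", "GCATGCU"))
```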

  2. Using "Arabidopsis" Genetic Sequences to Teach Bioinformatics

    Science.gov (United States)

    Zhang, Xiaorong

    2009-01-01

    This article describes a new approach to teaching bioinformatics using "Arabidopsis" genetic sequences. Several open-ended and inquiry-based laboratory exercises have been designed to help students grasp key concepts and gain practical skills in bioinformatics, using "Arabidopsis" leucine-rich repeat receptor-like kinase (LRR…

  3. Online Bioinformatics Tutorials | Office of Cancer Genomics

    Science.gov (United States)

    Bioinformatics is a scientific discipline that applies computer science and information technology to help understand biological processes. The NIH provides a list of free online bioinformatics tutorials, either generated by the NIH Library or other institutes, which includes introductory lectures and "how to" videos on using various tools.

  4. The Aspergillus Mine - publishing bioinformatics

    DEFF Research Database (Denmark)

    Vesth, Tammi Camilla; Rasmussen, Jane Lind Nybo; Theobald, Sebastian

    so with no computational specialist. Here we present a setup for analysis and publication of genome data of 70 species of Aspergillus fungi. The platform is based on R, Python and uses the RShiny framework to create interactive web‐applications. It allows all participants to create interactive...... analysis which can be shared with the team and in connection with publications. We present analysis for investigation of genetic diversity, secondary and primary metabolism and general data overview. The platform, the Aspergillus Mine, is a collection of analysis tools based on data from collaboration...... with the Joint Genome Institute. The Aspergillus Mine is not intended as a genomic data sharing service but instead focuses on creating an environment where the results of bioinformatic analysis is made available for inspection. The data and code is public upon request and figures can be obtained directly from...

  5. Bioinformatics clouds for big data manipulation

    KAUST Repository

    Dai, Lin

    2012-11-28

    As advances in life sciences and information technology bring profound influences on bioinformatics due to its interdisciplinary nature, bioinformatics is experiencing a new leap-forward from in-house computing infrastructure into utility-supplied cloud computing delivered over the Internet, in order to handle the vast quantities of biological data generated by high-throughput experimental technologies. Albeit relatively new, cloud computing promises to address big data storage and analysis issues in the bioinformatics field. Here we review extant cloud-based services in bioinformatics, classify them into Data as a Service (DaaS), Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS), and present our perspectives on the adoption of cloud computing in bioinformatics. This article was reviewed by Frank Eisenhaber, Igor Zhulin, and Sandor Pongor. © 2012 Dai et al.; licensee BioMed Central Ltd.

  6. Bioinformatics clouds for big data manipulation

    Directory of Open Access Journals (Sweden)

    Dai Lin

    2012-11-01

    Full Text Available Abstract As advances in life sciences and information technology bring profound influences on bioinformatics due to its interdisciplinary nature, bioinformatics is experiencing a new leap-forward from in-house computing infrastructure into utility-supplied cloud computing delivered over the Internet, in order to handle the vast quantities of biological data generated by high-throughput experimental technologies. Albeit relatively new, cloud computing promises to address big data storage and analysis issues in the bioinformatics field. Here we review extant cloud-based services in bioinformatics, classify them into Data as a Service (DaaS), Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS), and present our perspectives on the adoption of cloud computing in bioinformatics. Reviewers This article was reviewed by Frank Eisenhaber, Igor Zhulin, and Sandor Pongor.

  7. BIOINFORMATICS FOR UNDERGRADUATES OF LIFE SCIENCE COURSES

    Directory of Open Access Journals (Sweden)

    J.F. De Mesquita

    2007-05-01

    Full Text Available In recent years, bioinformatics has emerged as an important research tool. The ability to mine large databases for relevant information has become essential in different life science fields. On the other hand, providing bioinformatics education to undergraduates is challenging from this multidisciplinary perspective. It is therefore important to introduce undergraduate students to the available information and current methodologies in bioinformatics. Here we report the results of a course using a computer-assisted, problem-based learning model. The syllabus comprised theoretical lectures covering different topics within bioinformatics and practical activities. For the latter, we developed a set of step-by-step tutorials based on case studies. The course was given to undergraduate students in biological and biomedical programs. At the end of the course, the students were able to build their own step-by-step tutorial covering a bioinformatics issue.

  8. [Post-translational modification (PTM) bioinformatics in China: progresses and perspectives].

    Science.gov (United States)

    Zexian, Liu; Yudong, Cai; Xuejiang, Guo; Ao, Li; Tingting, Li; Jianding, Qiu; Jian, Ren; Shaoping, Shi; Jiangning, Song; Minghui, Wang; Lu, Xie; Yu, Xue; Ziding, Zhang; Xingming, Zhao

    2015-07-01

    Post-translational modifications (PTMs) are essential for regulating conformational changes, activities and functions of proteins, and are involved in almost all cellular pathways and processes. Identification of protein PTMs is the basis for understanding cellular and molecular mechanisms. In contrast with labor-intensive and time-consuming experiments, PTM prediction using various bioinformatics approaches can provide accurate, convenient, and efficient strategies and generate valuable information for further experimental consideration. In this review, we summarize the progress made by Chinese bioinformaticians in the field of PTM bioinformatics, including the design and improvement of computational algorithms for predicting PTM substrates and sites, the design and maintenance of online and offline tools, the establishment of PTM-related databases and resources, and the bioinformatics analysis of PTM proteomics data. By comparing similar studies in China and other countries, we discuss both the advantages and limitations of current PTM bioinformatics as well as perspectives for future studies in China.

  9. The European Bioinformatics Institute in 2016: Data growth and integration.

    Science.gov (United States)

    Cook, Charles E; Bergman, Mary Todd; Finn, Robert D; Cochrane, Guy; Birney, Ewan; Apweiler, Rolf

    2016-01-04

    New technologies are revolutionising biological research and its applications by making it easier and cheaper to generate ever-greater volumes and types of data. In response, the services and infrastructure of the European Bioinformatics Institute (EMBL-EBI, www.ebi.ac.uk) are continually expanding: total disk capacity increases significantly every year to keep pace with demand (75 petabytes as of December 2015), and interoperability between resources remains a strategic priority. Since 2014 we have launched two new resources: the European Variation Archive for genetic variation data and EMPIAR for two-dimensional electron microscopy data, as well as a Resource Description Framework platform. We also launched the Embassy Cloud service, which allows users to run large analyses in a virtual environment next to EMBL-EBI's vast public data resources.

  10. The EGI-Engage EPOS Competence Center - Interoperating heterogeneous AAI mechanisms and Orchestrating distributed computational resources

    Science.gov (United States)

    Bailo, Daniele; Scardaci, Diego; Spinuso, Alessandro; Sterzel, Mariusz; Schwichtenberg, Horst; Gemuend, Andre

    2016-04-01

    manage the use of the subsurface of the Earth. EPOS started its Implementation Phase in October 2015 and is now actively working to integrate multidisciplinary data into a single e-infrastructure. Multidisciplinary data are organized and governed by the Thematic Core Services (TCS) - Europe-wide organizations and e-infrastructures providing community-specific data and data products - and are driven by various scientific communities encompassing a wide spectrum of Earth science disciplines. TCS data, data products and services will be integrated into the Integrated Core Services (ICS) system, which will ensure their interoperability and their accessibility to the scientific community as well as other users within society. The goal of the EPOS Competence Center (EPOS CC) is to tackle two of the main challenges that the ICS will face in the near future, by taking advantage of the technical solutions provided by EGI. To this end, we present the two pilot use cases the EGI-EPOS CC is developing: 1) the AAI pilot, which deals with providing transparent and homogeneous access to the ICS infrastructure for users holding different kinds of credentials (e.g. eduGAIN, OpenID Connect, X.509 certificates); here the focus is on the mechanisms that allow credential delegation. 2) The computational pilot, which improves the back-end services of an existing application in the field of computational seismology, developed in the context of the EC-funded project VERCE. The application allows the processing and comparison of data resulting from simulations of seismic wave propagation following a real earthquake with real measurements recorded by seismographs. While the simulation data are produced directly by the users and stored in a Data Management System, the observations need to be pre-staged from institutional data services, which are maintained by the community itself. This use case aims at exploiting the EGI FedCloud e-infrastructure for Data

  11. Website for avian flu information and bioinformatics

    Institute of Scientific and Technical Information of China (English)

    GAO; George; Fu

    2009-01-01

    Highly pathogenic influenza A virus H5N1 has spread worldwide and raised public concern. This has increased the output of influenza virus sequence data as well as research publications and other reports. In order to fight H5N1 avian flu in a comprehensive way, we designed and began building the Website for Avian Flu Information (http://www.avian-flu.info) in 2004. Beyond the available influenza virus database, the website aims to integrate diversified information for both researchers and the public. From 2004 to 2009, we collected information covering all aspects, i.e. reports of outbreaks, scientific publications and editorials, policies for prevention, medicines and vaccines, and clinical diagnosis. Except for the publications, all information is in Chinese. By April 15, 2009, the cumulative news entries exceeded 2000 and research papers approached 5000. Using curated data from the Influenza Virus Resource, we have set up an influenza virus sequence database and a bioinformatic platform providing the basic functions for influenza virus sequence analysis. We will focus on the collection of experimental data and results, as well as the integration of data from the geographic information system and avian influenza epidemiology.

  12. Website for avian flu information and bioinformatics

    Institute of Scientific and Technical Information of China (English)

    LIU Di; LIU Quan-He; WU Lin-Huan; LIU Bin; WU Jun; LAO Yi-Mei; LI Xiao-Jing; GAO George Fu; MA Jun-Cai

    2009-01-01

    Highly pathogenic influenza A virus H5N1 has spread worldwide and raised public concern. This has increased the output of influenza virus sequence data as well as research publications and other reports. In order to fight H5N1 avian flu in a comprehensive way, we designed and began building the Website for Avian Flu Information (http://www.avian-flu.info) in 2004. Beyond the available influenza virus database, the website aims to integrate diversified information for both researchers and the public. From 2004 to 2009, we collected information covering all aspects, i.e. reports of outbreaks, scientific publications and editorials, policies for prevention, medicines and vaccines, and clinical diagnosis. Except for the publications, all information is in Chinese. By April 15, 2009, the cumulative news entries exceeded 2000 and research papers approached 5000. Using curated data from the Influenza Virus Resource, we have set up an influenza virus sequence database and a bioinformatic platform providing the basic functions for influenza virus sequence analysis. We will focus on the collection of experimental data and results, as well as the integration of data from the geographic information system and avian influenza epidemiology.

  13. Animation company "Fast Forwards" production with HP Utility Data Center; film built using Adaptive Enterprise framework enabled by shared, virtual resource

    CERN Multimedia

    2003-01-01

    Hewlett Packard have produced a commercial-quality animated film using an experimental rendering service from HP Labs and running on an HP Utility Data Center (UDC). The project demonstrates how computing resources can be managed virtually and illustrates the value of utility computing, in which an end-user taps into a large pool of virtual resources, but pays only for what is used (1 page).

  14. 国家啮齿类实验动物种子中心简介%A Brief Introduction to National Resource Center for Rodent Laboratory Animal

    Institute of Scientific and Technical Information of China (English)

    岳秉飞

    2003-01-01

    The National Resource Center for Rodent Laboratory Animals (NRLARC) was established in 1998 with the approval of the State Commission of Science and Technology and is subordinate to the laboratory animal center of the National Institute for the Control of Pharmaceutical and Biological Products. Its major aims are: importing, collecting and conserving laboratory animal varieties and strains; studying new laboratory animal protection techniques; developing new strains and varieties of laboratory animals; and supplying standard laboratory animal breeding stock to clients both at home and abroad.

  15. Computational biology and bioinformatics in Nigeria.

    Science.gov (United States)

    Fatumo, Segun A; Adoga, Moses P; Ojo, Opeolu O; Oluwagbemi, Olugbenga; Adeoye, Tolulope; Ewejobi, Itunuoluwa; Adebiyi, Marion; Adebiyi, Ezekiel; Bewaji, Clement; Nashiru, Oyekanmi

    2014-04-01

    Over the past few decades, major advances in the field of molecular biology, coupled with advances in genomic technologies, have led to an explosive growth in the biological data generated by the scientific community. The critical need to process and analyze such a deluge of data and turn it into useful knowledge has caused bioinformatics to gain prominence and importance. Bioinformatics is an interdisciplinary research area that applies techniques, methodologies, and tools in computer and information science to solve biological problems. In Nigeria, bioinformatics has recently played a vital role in the advancement of biological sciences. As a developing country, the importance of bioinformatics is rapidly gaining acceptance, and bioinformatics groups comprised of biologists, computer scientists, and computer engineers are being constituted at Nigerian universities and research institutes. In this article, we present an overview of bioinformatics education and research in Nigeria. We also discuss professional societies and academic and research institutions that play central roles in advancing the discipline in Nigeria. Finally, we propose strategies that can bolster bioinformatics education and support from policy makers in Nigeria, with potential positive implications for other developing countries.

  16. Computational biology and bioinformatics in Nigeria.

    Directory of Open Access Journals (Sweden)

    Segun A Fatumo

    2014-04-01

    Full Text Available Over the past few decades, major advances in the field of molecular biology, coupled with advances in genomic technologies, have led to an explosive growth in the biological data generated by the scientific community. The critical need to process and analyze such a deluge of data and turn it into useful knowledge has caused bioinformatics to gain prominence and importance. Bioinformatics is an interdisciplinary research area that applies techniques, methodologies, and tools in computer and information science to solve biological problems. In Nigeria, bioinformatics has recently played a vital role in the advancement of biological sciences. As a developing country, the importance of bioinformatics is rapidly gaining acceptance, and bioinformatics groups comprised of biologists, computer scientists, and computer engineers are being constituted at Nigerian universities and research institutes. In this article, we present an overview of bioinformatics education and research in Nigeria. We also discuss professional societies and academic and research institutions that play central roles in advancing the discipline in Nigeria. Finally, we propose strategies that can bolster bioinformatics education and support from policy makers in Nigeria, with potential positive implications for other developing countries.

  17. When cloud computing meets bioinformatics: a review.

    Science.gov (United States)

    Zhou, Shuigeng; Liao, Ruiqi; Guan, Jihong

    2013-10-01

    In the past decades, with the rapid development of high-throughput technologies, biology research has generated an unprecedented amount of data. In order to store and process such a great amount of data, cloud computing and MapReduce were applied to many fields of bioinformatics. In this paper, we first introduce the basic concepts of cloud computing and MapReduce, and their applications in bioinformatics. We then highlight some problems challenging the applications of cloud computing and MapReduce to bioinformatics. Finally, we give a brief guideline for using cloud computing in biology research.

  18. Bioinformática como recurso pedagógico para o curso de ciências biológicas na Universidade Estadual do Ceará – UECE – Fortaleza, Estado do Ceará = Bioinformatics as a pedagogical resource for the biology course in the State University of Ceara - UECE - Fortaleza, Ceará State

    Directory of Open Access Journals (Sweden)

    Howard Lopes Ribeiro Junior

    2012-01-01

    Full Text Available The objective of this study was to apply and evaluate theoretical and practical Bioinformatics content for students of the full Licentiate course in Biological Sciences enrolled in the General Genetics and Molecular Biology courses at the State University of Ceará in 2010. The theoretical approach consisted of a presentation of historical, basic and specific concepts of the current advances in research in the areas of molecular biology. The practical activity 'Building a Molecular Phylogeny in Silico' was designed to put into practice the concepts presented in the previous activity (RIBEIRO JUNIOR, 2011), using the database of the National Center for Biotechnology Information (NCBI) and its sequence alignment tool, BLASTp (Basic Local Alignment Search Tool Protein-Protein). The positive results obtained with the introductory Bioinformatics lecture and the practical activities were highlighted by the characterization of the molecular phylogenies of the hypothetical sequences proposed for the alignments and by the statements of the students cited above. These activities were considered essential for the students to experience, step by step, a better understanding of an emerging area of the life sciences: Bioinformatics.

  19. Concepts and introduction to RNA bioinformatics

    DEFF Research Database (Denmark)

    Gorodkin, Jan; Hofacker, Ivo L.; Ruzzo, Walter L.

    2014-01-01

    RNA bioinformatics and computational RNA biology have emerged from implementing methods for predicting the secondary structure of single sequences. The field has evolved to exploit multiple sequences to take evolutionary information into account, such as compensating (and structure preserving) base... for interactions between RNA and proteins. Here, we introduce the basic concepts of predicting RNA secondary structure relevant to the further analyses of RNA sequences. We also provide pointers to methods addressing various aspects of RNA bioinformatics and computational RNA biology....

  20. Coronavirus Genomics and Bioinformatics Analysis

    Directory of Open Access Journals (Sweden)

    Kwok-Yung Yuen

    2010-08-01

    Full Text Available The drastic increase in the number of coronaviruses discovered and coronavirus genomes being sequenced has given us an unprecedented opportunity to perform genomics and bioinformatics analysis on this family of viruses. Coronaviruses possess the largest genomes (26.4 to 31.7 kb) among all known RNA viruses, with G + C contents varying from 32% to 43%. Variable numbers of small ORFs are present between the various conserved genes (ORF1ab, spike, envelope, membrane and nucleocapsid) and downstream of the nucleocapsid gene in different coronavirus lineages. Phylogenetically, three genera, Alphacoronavirus, Betacoronavirus and Gammacoronavirus, with Betacoronavirus consisting of subgroups A, B, C and D, exist. A fourth genus, Deltacoronavirus, which includes bulbul coronavirus HKU11, thrush coronavirus HKU12 and munia coronavirus HKU13, is emerging. Molecular clock analysis using various gene loci revealed the time of the most recent common ancestor of human/civet SARS-related coronavirus to be 1999-2002, with an estimated substitution rate of 4×10^-4 to 2×10^-2 substitutions per site per year. Recombination in coronaviruses was most notable between different strains of murine hepatitis virus (MHV), between different strains of infectious bronchitis virus, between MHV and bovine coronavirus, between feline coronavirus (FCoV) type I and canine coronavirus generating FCoV type II, and between the three genotypes of human coronavirus HKU1 (HCoV-HKU1). Codon usage bias in coronaviruses was observed, with HCoV-HKU1 showing the most extreme bias; cytosine deamination and selection of CpG-suppressed clones are the two major independent biological forces that shape such codon usage bias in coronaviruses.
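
    As a small, self-contained illustration of one genome statistic quoted above, the G + C content, the following Python sketch computes the G + C percentage of a nucleotide string. The example sequence is invented and far shorter than a real 26.4 to 31.7 kb coronavirus genome.

      def gc_content(sequence: str) -> float:
          """Return the G + C content of a nucleotide sequence as a percentage."""
          seq = sequence.upper()
          gc = sum(1 for base in seq if base in "GC")
          return 100.0 * gc / len(seq) if seq else 0.0

      # Invented toy fragment; a real coronavirus genome is tens of kilobases long.
      example = "ATGGCGTACGTTAGCGCTAACCGT"
      print(f"G+C content: {gc_content(example):.1f}%")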

  1. Regulatory bioinformatics for food and drug safety.

    Science.gov (United States)

    Healy, Marion J; Tong, Weida; Ostroff, Stephen; Eichler, Hans-Georg; Patak, Alex; Neuspiel, Margaret; Deluyker, Hubert; Slikker, William

    2016-10-01

    "Regulatory Bioinformatics" strives to develop and implement a standardized and transparent bioinformatic framework to support the implementation of existing and emerging technologies in regulatory decision-making. It has great potential to improve public health through the development and use of clinically important medical products and tools to manage the safety of the food supply. However, the application of regulatory bioinformatics also poses new challenges and requires new knowledge and skill sets. In the latest Global Coalition on Regulatory Science Research (GCRSR) governed conference, Global Summit on Regulatory Science (GSRS2015), regulatory bioinformatics principles were presented with respect to global trends, initiatives and case studies. The discussion revealed that datasets, analytical tools, skills and expertise are rapidly developing, in many cases via large international collaborative consortia. It also revealed that significant research is still required to realize the potential applications of regulatory bioinformatics. While there is significant excitement in the possibilities offered by precision medicine to enhance treatments of serious and/or complex diseases, there is a clear need for further development of mechanisms to securely store, curate and share data, integrate databases, and standardized quality control and data analysis procedures. A greater understanding of the biological significance of the data is also required to fully exploit vast datasets that are becoming available. The application of bioinformatics in the microbiological risk analysis paradigm is delivering clear benefits both for the investigation of food borne pathogens and for decision making on clinically important treatments. It is recognized that regulatory bioinformatics will have many beneficial applications by ensuring high quality data, validated tools and standardized processes, which will help inform the regulatory science community of the requirements

  2. Adapting bioinformatics curricula for big data.

    Science.gov (United States)

    Greene, Anna C; Giffin, Kristine A; Greene, Casey S; Moore, Jason H

    2016-01-01

    Modern technologies are capable of generating enormous amounts of data that measure complex biological systems. Computational biologists and bioinformatics scientists are increasingly being asked to use these data to reveal key systems-level properties. We review the extent to which curricula are changing in the era of big data. We identify key competencies that scientists dealing with big data are expected to possess across fields, and we use this information to propose courses to meet these growing needs. While bioinformatics programs have traditionally trained students in data-intensive science, we identify areas of particular biological, computational and statistical emphasis important for this era that can be incorporated into existing curricula. For each area, we propose a course structured around these topics, which can be adapted in whole or in parts into existing curricula. In summary, specific challenges associated with big data provide an important opportunity to update existing curricula, but we do not foresee a wholesale redesign of bioinformatics training programs.

  3. ASSESSMENT OF REQUIREMENT OF THE POPULATION IN THE ORGAN TRANSPLANTATION, THE DONOR RESOURCE AND PLANNING OF THE EFFECTIVE NETWORK OF THE MEDICAL ORGANIZATIONS (THE CENTERS OF TRANSPLANTATION)

    Directory of Open Access Journals (Sweden)

    S. V. Gautier

    2013-01-01

    Full Text Available Aim. To estimate the requirement of the population of the Russian Federation for organ transplantation and the available donor resource, and to propose an approach to planning an effective network of medical organizations (transplantation centers). Materials and methods. Statistical data on the population, the number of patients receiving dialysis, and data on organ transplantation care in Russia and foreign countries were analyzed and compared. Results. On this basis, the requirement of the population of the Russian Federation for organ transplantation and the donor resource were assessed, and an approach to planning an effective network of medical organizations (transplantation centers) and scenarios for the development of organ donation and transplantation in Russia are proposed. Conclusion. To provide the population of the Russian Federation with organ transplantation care in line with the real requirement and the donor resource, deceased organ donation and cadaveric kidney transplantation have to be organized in each region of the Russian Federation, while transplantation of extrarenal organs is better developed in the federal centers of high-tech medical care, with donor organs provided from the territories of adjacent regions.

  4. Bioinformatics for transporter pharmacogenomics and systems biology: data integration and modeling with UML.

    Science.gov (United States)

    Yan, Qing

    2010-01-01

    Bioinformatics is the rational study at an abstract level that can influence the way we understand biomedical facts and the way we apply the biomedical knowledge. Bioinformatics is facing challenges in helping with finding the relationships between genetic structures and functions, analyzing genotype-phenotype associations, and understanding gene-environment interactions at the systems level. One of the most important issues in bioinformatics is data integration. The data integration methods introduced here can be used to organize and integrate both public and in-house data. With the volume of data and the high complexity, computational decision support is essential for integrative transporter studies in pharmacogenomics, nutrigenomics, epigenetics, and systems biology. For the development of such a decision support system, object-oriented (OO) models can be constructed using the Unified Modeling Language (UML). A methodology is developed to build biomedical models at different system levels and construct corresponding UML diagrams, including use case diagrams, class diagrams, and sequence diagrams. By OO modeling using UML, the problems of transporter pharmacogenomics and systems biology can be approached from different angles with a more complete view, which may greatly enhance the efforts in effective drug discovery and development. Bioinformatics resources of membrane transporters and general bioinformatics databases and tools that are frequently used in transporter studies are also collected here. An informatics decision support system based on the models presented here is available at http://www.pharmtao.com/transporter . The methodology developed here can also be used for other biomedical fields.
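
    To make the object-oriented modeling idea above concrete, the following minimal Python sketch shows the kind of class structure that a UML class diagram for transporter pharmacogenomics data integration might translate into. The class and attribute names are hypothetical and are not taken from the cited resource; the ABCB1/rs1045642/digoxin triplet is used only as familiar sample data.

      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class Variant:
          """A genetic variant in a transporter gene (hypothetical attributes)."""
          rs_id: str
          effect: str                      # e.g. "altered expression"

      @dataclass
      class DrugInteraction:
          drug_name: str
          interaction_type: str            # e.g. "substrate", "inhibitor"

      @dataclass
      class Transporter:
          """A membrane transporter linking genotype data to drug interaction data."""
          gene_symbol: str
          variants: List[Variant] = field(default_factory=list)
          interactions: List[DrugInteraction] = field(default_factory=list)

          def drugs_affected_by(self, rs_id: str) -> List[str]:
              """Toy query: if the variant is annotated, list all interacting drugs."""
              if any(v.rs_id == rs_id for v in self.variants):
                  return [i.drug_name for i in self.interactions]
              return []

      # Usage with illustrative sample data
      abcb1 = Transporter("ABCB1",
                          variants=[Variant("rs1045642", "altered expression")],
                          interactions=[DrugInteraction("digoxin", "substrate")])
      print(abcb1.drugs_affected_by("rs1045642"))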

  5. Bioinformatics education in high school: implications for promoting science, technology, engineering, and mathematics careers.

    Science.gov (United States)

    Kovarik, Dina N; Patterson, Davis G; Cohen, Carolyn; Sanders, Elizabeth A; Peterson, Karen A; Porter, Sandra G; Chowning, Jeanne Ting

    2013-01-01

    We investigated the effects of our Bio-ITEST teacher professional development model and bioinformatics curricula on cognitive traits (awareness, engagement, self-efficacy, and relevance) in high school teachers and students that are known to accompany a developing interest in science, technology, engineering, and mathematics (STEM) careers. The program included best practices in adult education and diverse resources to empower teachers to integrate STEM career information into their classrooms. The introductory unit, Using Bioinformatics: Genetic Testing, uses bioinformatics to teach basic concepts in genetics and molecular biology, and the advanced unit, Using Bioinformatics: Genetic Research, utilizes bioinformatics to study evolution and support student research with DNA barcoding. Pre-post surveys demonstrated significant growth (n = 24) among teachers in their preparation to teach the curricula and infuse career awareness into their classes, and these gains were sustained through the end of the academic year. Introductory unit students (n = 289) showed significant gains in awareness, relevance, and self-efficacy. While these students did not show significant gains in engagement, advanced unit students (n = 41) showed gains in all four cognitive areas. Lessons learned during Bio-ITEST are explored in the context of recommendations for other programs that wish to increase student interest in STEM careers.

  6. REDIdb: an upgraded bioinformatics resource for organellar RNA editing sites.

    Science.gov (United States)

    Picardi, Ernesto; Regina, Teresa M R; Verbitskiy, Daniil; Brennicke, Axel; Quagliariello, Carla

    2011-03-01

    RNA editing is a post-transcriptional molecular process whereby the information in a genetic message is modified from that in the corresponding DNA template by means of nucleotide substitutions, insertions and/or deletions. It occurs mostly in organelles by clade-specific diverse and unrelated biochemical mechanisms. RNA editing events have been annotated in primary databases as GenBank and at more sophisticated level in the specialized databases REDIdb, dbRES and EdRNA. At present, REDIdb is the only freely available database that focuses on the organellar RNA editing process and annotates each editing modification in its biological context. Here we present an updated and upgraded release of REDIdb with a web-interface refurbished with graphical and computational facilities that improve RNA editing investigations. Details of the REDIdb features and novelties are illustrated and compared to other RNA editing databases. REDIdb is freely queried at http://biologia.unical.it/py_script/REDIdb/.
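
    RNA editing by substitution, as defined above, can be illustrated by comparing a genomic template with its edited transcript. The sketch below reports positions where the two differ; the sequences are invented for demonstration, and insertional or deletional editing, which would require an alignment step, is ignored.

      def substitution_editing_sites(dna: str, rna: str):
          """Report positions where the transcript differs from its DNA template.

          Handles equal-length sequences only, i.e. substitution-type editing.
          """
          if len(dna) != len(rna):
              raise ValueError("sequences must be the same length (substitutions only)")
          transcript_as_dna = rna.upper().replace("U", "T")
          sites = []
          for pos, (d, r) in enumerate(zip(dna.upper(), transcript_as_dna), start=1):
              if d != r:
                  sites.append((pos, d, r))
          return sites

      # Invented example containing two C-to-U edits (reported as C -> T)
      dna = "ATGCCGATCGA"
      rna = "AUGUCGAUUGA"
      print(substitution_editing_sites(dna, rna))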

  7. A Bioinformatics Resource for TWEAK-Fn14 Signaling Pathway

    Directory of Open Access Journals (Sweden)

    Mitali Bhattacharjee

    2012-01-01

    Full Text Available TNF-related weak inducer of apoptosis (TWEAK) is a new member of the TNF superfamily. It signals through TNFRSF12A, commonly known as Fn14. The TWEAK-Fn14 interaction regulates cellular activities including proliferation, migration, differentiation, apoptosis, angiogenesis, tissue remodeling and inflammation. Although TWEAK has been reported to be associated with autoimmune diseases, cancers, stroke, and kidney-related disorders, the downstream molecular events of TWEAK-Fn14 signaling are not yet available in any signaling pathway repository. In this paper, we manually compiled from the literature, in particular those reported in human systems, the downstream reactions stimulated by TWEAK-Fn14 interactions. Our manual amassment of the TWEAK-Fn14 pathway has resulted in cataloging of 46 proteins involved in various biochemical reactions and TWEAK-Fn14 induced expression of 28 genes. We have enabled the availability of data in various standard exchange formats from NetPath, a repository for signaling pathways. We believe that this composite molecular interaction pathway will enable identification of new signaling components in the TWEAK signaling pathway. This in turn may lead to the identification of potential therapeutic targets in TWEAK-associated disorders.
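
    One minimal way to make a curated pathway such as the one described above computable is to store its reactions as a directed graph and query what lies downstream of a given node. The sketch below does this with plain Python dictionaries; only the TWEAK to TNFRSF12A (Fn14) edge comes from the abstract, and every other node and edge is a placeholder rather than curated pathway data.

      from collections import deque

      # Adjacency list of a signaling pathway; all nodes other than TWEAK and
      # TNFRSF12A are hypothetical placeholders, not NetPath annotations.
      pathway = {
          "TWEAK": ["TNFRSF12A"],
          "TNFRSF12A": ["adapter_X", "kinase_Y"],
          "adapter_X": ["transcription_factor_Z"],
          "kinase_Y": [],
          "transcription_factor_Z": ["induced_gene_1", "induced_gene_2"],
      }

      def downstream_of(graph: dict, start: str) -> set:
          """Breadth-first traversal collecting every node reachable from start."""
          seen, queue = set(), deque([start])
          while queue:
              node = queue.popleft()
              for nxt in graph.get(node, []):
                  if nxt not in seen:
                      seen.add(nxt)
                      queue.append(nxt)
          return seen

      print(sorted(downstream_of(pathway, "TWEAK")))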

  8. Hardware Acceleration of Bioinformatics Sequence Alignment Applications

    NARCIS (Netherlands)

    Hasan, L.

    2011-01-01

    Biological sequence alignment is an important and challenging task in bioinformatics. Alignment may be defined as an arrangement of two or more DNA or protein sequences to highlight the regions of their similarity. Sequence alignment is used to infer the evolutionary relationship between a set of pr

  9. A bioinformatics approach to marker development

    NARCIS (Netherlands)

    Tang, J.

    2008-01-01

    The thesis focuses on two bioinformatics research topics: the development of tools for an efficient and reliable identification of single nucleotides polymorphisms (SNPs) and polymorphic simple sequence repeats (SSRs) from expressed sequence tags (ESTs) (Chapter 2, 3 and 4), and the subsequent imple
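
    Of the two marker types mentioned above, simple sequence repeats (SSRs) are tandem repeats of short motifs and can be located in EST sequences with a straightforward pattern search. The sketch below uses a regular expression to find di- and trinucleotide repeats in a made-up sequence; it is a simplified illustration with arbitrary thresholds, not the pipeline developed in the thesis.

      import re

      def find_ssrs(sequence: str, min_repeats: int = 5):
          """Find di-/trinucleotide tandem repeats with at least min_repeats copies."""
          seq = sequence.upper()
          pattern = re.compile(r"([ACGT]{2,3})\1{" + str(min_repeats - 1) + r",}")
          hits = []
          for m in pattern.finditer(seq):
              unit = m.group(1)
              hits.append((m.start() + 1, unit, len(m.group(0)) // len(unit)))
          return hits

      # Invented EST-like fragment containing an (AG)6 and a (CTT)5 repeat
      est = "GATTACAAGAGAGAGAGAGTTGCACTTCTTCTTCTTCTTGGA"
      print(find_ssrs(est))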

  10. Privacy Preserving PCA on Distributed Bioinformatics Datasets

    Science.gov (United States)

    Li, Xin

    2011-01-01

    In recent years, new bioinformatics technologies, such as gene expression microarray, genome-wide association study, proteomics, and metabolomics, have been widely used to simultaneously identify a huge number of human genomic/genetic biomarkers, generate a tremendously large amount of data, and dramatically increase the knowledge on human…
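
    Principal component analysis is the core primitive in the work described above. The privacy-preserving protocol itself is not detailed in the abstract, so the sketch below shows only standard, non-private PCA on a small randomly generated expression-like matrix using NumPy's singular value decomposition.

      import numpy as np

      def pca_scores(data: np.ndarray, n_components: int = 2) -> np.ndarray:
          """Project samples (rows) onto the top principal components via SVD."""
          centered = data - data.mean(axis=0)              # center each feature
          u, s, vt = np.linalg.svd(centered, full_matrices=False)
          return centered @ vt[:n_components].T            # per-sample scores

      # Toy expression matrix: 6 samples x 100 genes of random values
      rng = np.random.default_rng(0)
      expression = rng.normal(size=(6, 100))
      print(pca_scores(expression).shape)                  # (6, 2)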

  11. Implementing bioinformatic workflows within the bioextract server

    Science.gov (United States)

    Computational workflows in bioinformatics are becoming increasingly important in the achievement of scientific advances. These workflows typically require the integrated use of multiple, distributed data sources and analytic tools. The BioExtract Server (http://bioextract.org) is a distributed servi...

  12. Bioinformatics in Undergraduate Education: Practical Examples

    Science.gov (United States)

    Boyle, John A.

    2004-01-01

    Bioinformatics has emerged as an important research tool in recent years. The ability to mine large databases for relevant information has become increasingly central to many different aspects of biochemistry and molecular biology. It is important that undergraduates be introduced to the available information and methodologies. We present a…

  13. "Extreme Programming" in a Bioinformatics Class

    Science.gov (United States)

    Kelley, Scott; Alger, Christianna; Deutschman, Douglas

    2009-01-01

    The importance of Bioinformatics tools and methodology in modern biological research underscores the need for robust and effective courses at the college level. This paper describes such a course designed on the principles of cooperative learning based on a computer software industry production model called "Extreme Programming" (EP).…

  14. Bioinformatics: A History of Evolution "In Silico"

    Science.gov (United States)

    Ondrej, Vladan; Dvorak, Petr

    2012-01-01

    Bioinformatics, biological databases, and the worldwide use of computers have accelerated biological research in many fields, such as evolutionary biology. Here, we describe a primer of nucleotide sequence management and the construction of a phylogenetic tree with two examples; the two selected are from completely different groups of organisms:…
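
    As a rough illustration of the distance-based tree construction such a primer covers, the sketch below computes pairwise p-distances between short, made-up aligned sequences and clusters them with average linkage (the UPGMA criterion) using SciPy. It is not the worked example from the article.

      import numpy as np
      from scipy.cluster.hierarchy import linkage
      from scipy.spatial.distance import squareform

      # Made-up, equal-length sequences standing in for an aligned dataset
      seqs = {
          "taxonA": "ATGCTACGATCG",
          "taxonB": "ATGCTACGTTCG",
          "taxonC": "ATGTTACGTTGG",
          "taxonD": "CTGTTACCTTGG",
      }

      def p_distance(a: str, b: str) -> float:
          """Proportion of differing sites between two aligned sequences."""
          return sum(x != y for x, y in zip(a, b)) / len(a)

      names = list(seqs)
      n = len(names)
      dist = np.zeros((n, n))
      for i in range(n):
          for j in range(i + 1, n):
              dist[i, j] = dist[j, i] = p_distance(seqs[names[i]], seqs[names[j]])

      # Average-linkage clustering of the condensed distance matrix (UPGMA-like)
      tree = linkage(squareform(dist), method="average")
      print(tree)        # each row records which clusters merge and at what height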

  15. Mass spectrometry and bioinformatics analysis data

    Directory of Open Access Journals (Sweden)

    Mainak Dutta

    2015-03-01

    Full Text Available 2DE and 2D-DIGE based proteomics analysis of serum from women with endometriosis revealed several proteins to be dysregulated. A complete list of these proteins along with their mass spectrometry data and subsequent bioinformatics analysis are presented here. The data is related to “Investigation of serum proteome alterations in human endometriosis” by Dutta et al. [1].

  16. Evolution of web services in bioinformatics

    NARCIS (Netherlands)

    Neerincx, P.B.T.; Leunissen, J.A.M.

    2005-01-01

    Bioinformaticians have developed large collections of tools to make sense of the rapidly growing pool of molecular biological data. Biological systems tend to be complex and in order to understand them, it is often necessary to link many data sets and use more than one tool. Therefore, bioinformatic

  17. Facilitating the use of large-scale biological data and tools in the era of translational bioinformatics

    DEFF Research Database (Denmark)

    Kouskoumvekaki, Irene; Shublaq, Nour; Brunak, Søren

    2014-01-01

    As both the amount of generated biological data and the processing compute power increase, computational experimentation is no longer the exclusivity of bioinformaticians, but it is moving across all biomedical domains. For bioinformatics to realize its translational potential, domain experts need... access to user-friendly solutions to navigate, integrate and extract information out of biological databases, as well as to combine tools and data resources in bioinformatics workflows. In this review, we present services that assist biomedical scientists in incorporating bioinformatics tools... into their research. We review recent applications of Cytoscape, BioGPS and DAVID for data visualization, integration and functional enrichment. Moreover, we illustrate the use of Taverna, Kepler, GenePattern, and Galaxy as open-access workbenches for bioinformatics workflows. Finally, we mention services...

  18. Bioinformatics analysis of metastasis-related proteins in hepatocellular carcinoma

    Institute of Scientific and Technical Information of China (English)

    Pei-Ming Song; Yang Zhang; Yu-Fei He; Hui-Min Bao; Jian-Hua Luo; Yin-Kun Liu; Peng-Yuan Yang; Xian Chen

    2008-01-01

    AIM: To analyze the metastasis-related proteins in hepatocellular carcinoma (HCC) and discover the biomarker candidates for diagnosis and therapeutic intervention of HCC metastasis with bioinformatics tools. METHODS: Metastasis-related proteins were determined by stable isotope labeling and MS analysis and analyzed with bioinformatics resources, including Phobius, the Kyoto Encyclopedia of Genes and Genomes (KEGG), Online Mendelian Inheritance in Man (OMIM) and the Human Protein Reference Database (HPRD). RESULTS: All the metastasis-related proteins were linked to 83 pathways in KEGG, including the MAPK and p53 signaling pathways. The protein-protein interaction network showed that all the metastasis-related proteins were categorized into 19 functional groups, including cell cycle, apoptosis and signal transduction. OMIM analysis linked these proteins to 186 OMIM entries. CONCLUSION: Metastasis-related proteins provide HCC cells with biological advantages in cell proliferation, migration and angiogenesis, and facilitate metastasis of HCC cells. This bird's-eye view reveals global characteristics of metastasis-related proteins, and many differentially expressed proteins can be identified as candidates for diagnosis and treatment of HCC.

  19. MAPI: towards the integrated exploitation of bioinformatics Web Services

    Directory of Open Access Journals (Sweden)

    Karlsson Johan

    2011-10-01

    Full Text Available Abstract Background Bioinformatics is commonly featured as a well assorted list of available web resources. Although diversity of services is positive in general, the proliferation of tools, their dispersion and heterogeneity complicate the integrated exploitation of such data processing capacity. Results To facilitate the construction of software clients and make integrated use of this variety of tools, we present a modular programmatic application interface (MAPI) that provides the necessary functionality for uniform representation of Web Services metadata descriptors, including their management and the invocation protocols of the services which they represent. This document describes the main functionality of the framework and how it can be used to facilitate the deployment of new software under a unified structure of bioinformatics Web Services. A notable feature of MAPI is the modular organization of the functionality into different modules associated with specific tasks. This means that only the modules needed for the client have to be installed, and that the module functionality can be extended without the need for re-writing the software client. Conclusions The potential utility and versatility of the software library have been demonstrated by the implementation of several currently available clients that cover different aspects of integrated data processing, ranging from service discovery to service invocation with advanced features such as workflow composition and asynchronous service calls to multiple types of Web Services, including those registered in repositories (e.g. GRID-based, SOAP, BioMOBY, R-bioconductor, and others).

  20. Web services at the European Bioinformatics Institute-2009.

    Science.gov (United States)

    McWilliam, Hamish; Valentin, Franck; Goujon, Mickael; Li, Weizhong; Narayanasamy, Menaka; Martin, Jenny; Miyar, Teresa; Lopez, Rodrigo

    2009-07-01

    The European Bioinformatics Institute (EMBL-EBI) has been providing access to mainstream databases and tools in bioinformatics since 1997. In addition to the traditional web form based interfaces, APIs exist for core data resources such as EMBL-Bank, Ensembl, UniProt, InterPro, PDB and ArrayExpress. These APIs are based on Web Services (SOAP/REST) interfaces that allow users to systematically access databases and analytical tools. From the user's point of view, these Web Services provide the same functionality as the browser-based forms. However, using the APIs frees the user from web page constraints and is ideal for the analysis of large batches of data, performing text-mining tasks and the casual or systematic evaluation of mathematical models in regulatory networks. Furthermore, these services are widespread and easy to use; they require no prior knowledge of the technology and no more than basic experience in programming. In the following, we describe new and updated services and briefly outline planned developments to be made available during the course of 2009-2010.
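
    A minimal sketch of the kind of programmatic access described above is shown below, retrieving a sequence record over HTTP with the requests library. The endpoint and parameters follow the general pattern of the EBI Dbfetch REST service as recalled here and may have changed since; the current EMBL-EBI documentation should be consulted before relying on them.

      import requests

      # Illustrative only: URL and parameters are assumptions based on the
      # historical Dbfetch interface, not a verified, current specification.
      URL = "https://www.ebi.ac.uk/Tools/dbfetch/dbfetch"
      params = {"db": "uniprotkb", "id": "P05067", "format": "fasta", "style": "raw"}

      response = requests.get(URL, params=params, timeout=30)
      response.raise_for_status()
      print(response.text[:200])     # first lines of the returned FASTA record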

  1. Promoting synergistic research and education in genomics and bioinformatics.

    Science.gov (United States)

    Yang, Jack Y; Yang, Mary Qu; Zhu, Mengxia Michelle; Arabnia, Hamid R; Deng, Youping

    2008-01-01

    Bioinformatics and Genomics are closely related disciplines that hold great promise for the advancement of research and development in complex biomedical systems, as well as public health, drug design, comparative genomics, personalized medicine and so on. Research and development in these two important areas are impacting science and technology. High-throughput sequencing and molecular imaging technologies marked the beginning of a new era for modern translational medicine and personalized healthcare. The impact of having the human sequence and personalized digital images in hand has also created tremendous demands for developing powerful supercomputing, statistical learning and artificial intelligence approaches to handle the massive bioinformatics and personalized healthcare data, which will obviously have a profound effect on how biomedical research will be conducted toward the improvement of human health and the prolonging of human life in the future. The International Society of Intelligent Biological Medicine (http://www.isibm.org) and its official journals, the International Journal of Functional Informatics and Personalized Medicine (http://www.inderscience.com/ijfipm) and the International Journal of Computational Biology and Drug Design (http://www.inderscience.com/ijcbdd), in collaboration with the International Conference on Bioinformatics and Computational Biology (Biocomp), touch tomorrow's bioinformatics and personalized medicine throughout today's efforts in promoting the research, education and awareness of the upcoming integrated inter/multidisciplinary field. The 2007 International Conference on Bioinformatics and Computational Biology (BIOCOMP07) was held in Las Vegas, the United States of America, on June 25-28, 2007. The conference attracted over 400 papers, covering broad research areas in genomics, biomedicine and bioinformatics. Biocomp 2007 provides a common platform for the cross fertilization of ideas, and to help shape knowledge and

  2. Needs assessment of science teachers in secondary schools in Kumasi, Ghana: A basis for in-service education training programs at the Science Resource Centers

    Science.gov (United States)

    Gyamfi, Alexander

    The purpose of this study was twofold. First, it identified the priority needs common to all science teachers in secondary schools in Kumasi, Ghana. Second, it investigated the relationship existing between the identified priority needs and the teacher demographic variables (type of school, teacher qualification, teaching experience, subject discipline, and sex of teacher) to be used as a basis for implementing in-service education training programs at the Science Resource Centers in Kumasi Ghana. An adapted version of the Moore Assessment Profile (MAP) survey instrument and a set of open-ended questions were used to collect data from the science teachers. The researcher handed out one hundred and fifty questionnaire packets, and all one hundred and fifty (100%) were collected within a period of six weeks. The data were analyzed using descriptive statistics, content analysis, and inferential statistics. The descriptive statistics reported the frequency of responses, and it was used to calculate the Need Index (N) of the identified needs of teachers. Sixteen top-priority needs were identified, and the needs were arranged in a hierarchical order according to the magnitude of the Need Index (0.000 ≤ N ≤ 1.000). Content analysis was used to analyze the responses to the open-ended questions. One-way analysis of variance (ANOVA) was used to test the null hypotheses of the study on each of the sixteen identified top-priority needs and the teacher demographic variables. The findings of this study were as follows: (1) The science teachers identified needs related to "more effective use of instructional materials" as a crucial area for in-service training. (2) Host and Satellite schools exhibited significant difference on procuring supplementary science books for students. Subject discipline of teachers exhibited significant differences on utilizing the library and its facilities by students, obtaining information on where to get help on effective science teaching

  3. Component-Based Approach for Educating Students in Bioinformatics

    Science.gov (United States)

    Poe, D.; Venkatraman, N.; Hansen, C.; Singh, G.

    2009-01-01

    There is an increasing need for an effective method of teaching bioinformatics. Increased progress and availability of computer-based tools for educating students have led to the implementation of a computer-based system for teaching bioinformatics as described in this paper. Bioinformatics is a recent, hybrid field of study combining elements of…

  4. Mining Cancer Transcriptomes: Bioinformatic Tools and the Remaining Challenges.

    Science.gov (United States)

    Milan, Thomas; Wilhelm, Brian T

    2017-02-22

    The development of next-generation sequencing technologies has had a profound impact on the field of cancer genomics. With the enormous quantities of data being generated from tumor samples, researchers have had to rapidly adapt tools or develop new ones to analyse the raw data to maximize its value. While much of this effort has been focused on improving specific algorithms to get faster and more precise results, the accessibility of the final data for the research community remains a significant problem. Large amounts of data exist but are not easily available to researchers who lack the resources and experience to download and reanalyze them. In this article, we focus on RNA-seq analysis in the context of cancer genomics and discuss the bioinformatic tools available to explore these data. We also highlight the importance of developing new and more intuitive tools to provide easier access to public data and discuss the related issues of data sharing and patient privacy.
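
    RNA-seq reanalysis of the kind discussed above usually starts from a gene-by-sample read-count matrix, and a common first step is length-aware normalization such as TPM. The sketch below computes TPM from made-up counts and gene lengths; it is a generic illustration, not a tool mentioned in the article.

      import numpy as np

      def counts_to_tpm(counts: np.ndarray, lengths_kb: np.ndarray) -> np.ndarray:
          """Convert a genes x samples count matrix to transcripts per million.

          Counts are first divided by gene length (reads per kilobase), then each
          sample is scaled so that its values sum to one million.
          """
          rpk = counts / lengths_kb[:, None]
          return rpk / rpk.sum(axis=0, keepdims=True) * 1e6

      # Made-up example: 4 genes x 2 tumour samples
      counts = np.array([[500, 300],
                         [100, 800],
                         [ 50,  10],
                         [350, 900]], dtype=float)
      lengths_kb = np.array([2.0, 0.5, 1.0, 3.5])
      tpm = counts_to_tpm(counts, lengths_kb)
      print(tpm.sum(axis=0))         # each column sums to 1e6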

  5. An Adaptive Hybrid Multiprocessor technique for bioinformatics sequence alignment

    KAUST Repository

    Bonny, Talal

    2012-07-28

    Sequence alignment algorithms such as the Smith-Waterman algorithm are among the most important applications in the development of bioinformatics. Sequence alignment algorithms must process large amounts of data which may take a long time. Here, we introduce our Adaptive Hybrid Multiprocessor technique to accelerate the implementation of the Smith-Waterman algorithm. Our technique utilizes both the graphics processing unit (GPU) and the central processing unit (CPU). It adapts to the implementation according to the number of CPUs given as input by efficiently distributing the workload between the processing units. Using existing resources (GPU and CPU) in an efficient way is a novel approach. The peak performance achieved for the platforms GPU + CPU, GPU + 2CPUs, and GPU + 3CPUs is 10.4 GCUPS, 13.7 GCUPS, and 18.6 GCUPS, respectively (with the query length of 511 amino acid). © 2010 IEEE.
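
    For reference, the local alignment computed by the accelerated kernels above is defined by the Smith-Waterman dynamic programming recurrence. The deliberately simple, unoptimized Python version below returns only the best score, uses linear gap penalties and arbitrary scoring values, and is meant to show what is being computed rather than how to compute it fast.

      def smith_waterman(a: str, b: str, match=2, mismatch=-1, gap=-2) -> int:
          """Return the best local alignment score between sequences a and b."""
          rows, cols = len(a) + 1, len(b) + 1
          H = [[0] * cols for _ in range(rows)]
          best = 0
          for i in range(1, rows):
              for j in range(1, cols):
                  diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                  H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
                  best = max(best, H[i][j])
          return best

      # Toy sequences; real workloads score a query against an entire database.
      print(smith_waterman("HEAGAWGHEE", "PAWHEAE"))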

  6. How to operate an Energy Advisory Service. Volume II. New York Institute of Technology Energy Information Center and Referral Service resource material. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Spak, G.T.

    1978-06-01

    The NYIT Energy Information Center is a comprehensive information service covering every aspect of energy conservation and related technology, including conservation programs and practices, alternative energy systems, energy legislation, and public policy development in the United States and abroad. Materials in the Center can be located through a Card Catalog System and a Vertical File System. The Card Catalog System has entries which organize books and other printed materials according to authors/titles and according to the subject headings developed by the Library of Congress. The Vertical System contains pamphlets, newsclips, reprints, studies, announcements, product specifications and other ephemeral literature, and is organized according to subject headings based on the emerging vocabulary of the energy literature. The key to vertical file resources is the Thesaurus of Descriptors which is given. The Thesaurus includes all subject headings found in the Vertical File as well as other cross referenced terms likely to come to mind when seeking information on a specific energy area.

  7. Using registries to integrate bioinformatics tools and services into workbench environments

    DEFF Research Database (Denmark)

    Ménager, Hervé; Kalaš, Matúš; Rapacki, Kristoffer;

    2015-01-01

    The diversity and complexity of bioinformatics resources presents significant challenges to their localisation, deployment and use, creating a need for reliable systems that address these issues. Meanwhile, users demand increasingly usable and integrated ways to access and analyse data, especially...... within convenient, integrated “workbench” environments. Resource descriptions are the core element of registry and workbench systems, which are used to both help the user find and comprehend available software tools, data resources, and Web Services, and to localise, execute and combine them...

  8. Towards bioinformatics assisted infectious disease control

    Directory of Open Access Journals (Sweden)

    Gallego Blanca

    2009-02-01

    Full Text Available Abstract Background This paper proposes a novel framework for bioinformatics-assisted biosurveillance and early warning to address the inefficiencies in traditional surveillance as well as the need for more timely and comprehensive infection monitoring and control. It leverages breakthroughs in rapid, high-throughput molecular profiling of microorganisms and text mining. Results This framework combines the genetic and geographic data of a pathogen to reconstruct its history and to identify the migration routes through which the strains spread regionally and internationally. A pilot study of Salmonella typhimurium genotype clustering and temporospatial outbreak analysis demonstrated better discrimination power than traditional phage typing. Half of the outbreaks were detected in the first half of their duration. Conclusion The microbial profiling and biosurveillance-focused text mining tools can enable integrated infectious disease outbreak detection and response environments based upon bioinformatics knowledge models and measured by outcomes including the accuracy and timeliness of outbreak detection.
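
    Outbreak detection from surveillance counts, as evaluated in the pilot study above, is often framed as flagging time points whose case counts exceed an expected baseline. The sketch below implements a very simple moving-baseline rule (mean plus two standard deviations of the preceding weeks) on invented weekly counts; it is a generic illustration, not the genotype-clustering method of the paper.

      import statistics

      def flag_outbreak_weeks(counts, window=8, z=2.0):
          """Flag week indices whose count exceeds mean + z*sd of the prior window."""
          flagged = []
          for week in range(window, len(counts)):
              baseline = counts[week - window:week]
              mean = statistics.mean(baseline)
              sd = statistics.pstdev(baseline) or 1.0   # avoid a zero threshold
              if counts[week] > mean + z * sd:
                  flagged.append(week)
          return flagged

      # Invented weekly case counts with a bump around weeks 10-12
      weekly_cases = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3, 12, 15, 11, 4, 3]
      print(flag_outbreak_weeks(weekly_cases))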

  9. Bioinformatics Approaches for Human Gut Microbiome Research

    Directory of Open Access Journals (Sweden)

    Zhijun Zheng

    2016-07-01

    Full Text Available The human microbiome has received much attention because many studies have reported that the human gut microbiome is associated with several diseases. The very large datasets that are produced by these kinds of studies mean that bioinformatics approaches are crucial for their analysis. Here, we systematically reviewed bioinformatics tools that are commonly used in microbiome research, including a typical pipeline and software for sequence alignment, abundance profiling, enterotype determination, taxonomic diversity, identifying differentially abundant species/genes, gene cataloging, and functional analyses. We also summarized the algorithms and methods used to define metagenomic species and co-abundance gene groups to expand our understanding of unclassified and poorly understood gut microbes that are undocumented in the current genome databases. Additionally, we examined the methods used to identify metagenomic biomarkers based on the gut microbiome, which might help to expand the knowledge and approaches for disease detection and monitoring.
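
    Taxonomic diversity, one of the analysis steps listed above, is commonly summarized with the Shannon index computed from relative abundances. A minimal sketch on an invented abundance profile follows.

      import math

      def shannon_index(abundances):
          """Shannon diversity H' = -sum(p_i * ln p_i) over taxa with non-zero counts."""
          total = sum(abundances)
          proportions = [a / total for a in abundances if a > 0]
          return -sum(p * math.log(p) for p in proportions)

      # Invented read counts for five gut taxa in one sample
      sample = [1500, 900, 400, 150, 50]
      print(f"Shannon diversity: {shannon_index(sample):.3f}")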

  10. Bioinformatics in New Generation Flavivirus Vaccines

    Directory of Open Access Journals (Sweden)

    Penelope Koraka

    2010-01-01

    Full Text Available Flavivirus infections are the most prevalent arthropod-borne infections worldwide, often causing severe disease especially among children, the elderly, and the immunocompromised. In the absence of effective antiviral treatment, prevention through vaccination would greatly reduce morbidity and mortality associated with flavivirus infections. Despite the success of the empirically developed vaccines against yellow fever virus, Japanese encephalitis virus and tick-borne encephalitis virus, there is an increasing need for a more rational design and development of safe and effective vaccines. Several bioinformatic tools are available to support such rational vaccine design. In doing so, several parameters have to be taken into account, such as safety for the target population, overall immunogenicity of the candidate vaccine, and efficacy and longevity of the immune responses triggered. Examples of how bioinformatics is applied to assist in the rational design and improvement of vaccines, particularly flavivirus vaccines, are presented and discussed.

  11. Bioinformatics for saffron (Crocus sativus L.) improvement

    Directory of Open Access Journals (Sweden)

    Ghulam A. Parray

    2009-02-01

    Full Text Available Saffron (Crocus sativus L.) is a sterile triploid plant and belongs to the Iridaceae (Liliales, Monocots). Its genome is of relatively large size and is poorly characterized. Bioinformatics can play an enormous technical role in the sequence-level structural characterization of saffron genomic DNA. Bioinformatics tools can also help in appreciating the extent of diversity of various geographic or genetic groups of cultivated saffron to infer relationships between groups and accessions. The characterization of the transcriptome of saffron stigmas is most vital for shedding light on the molecular basis of flavor, color biogenesis, genomic organization and the biology of the gynoecium of saffron. The information derived can be utilized for constructing biological pathways involved in the biosynthesis of the principal components of saffron, i.e., crocin, crocetin, safranal, picrocrocin and safchiA

  12. [Applied problems of mathematical biology and bioinformatics].

    Science.gov (United States)

    Lakhno, V D

    2011-01-01

    Mathematical biology and bioinformatics represent a new and rapidly progressing line of investigation which emerged in the course of work on the project "Human genome". The main applied problems of these sciences are drug design, patient-specific medicine and nanobioelectronics. It is shown that progress in the technology of mass sequencing of the human genome has set the stage for starting the national program on patient-specific medicine.

  13. Genome bioinformatics of tomato and potato

    OpenAIRE

    E Datema

    2011-01-01

    In the past two decades genome sequencing has developed from a laborious and costly technology employed by large international consortia to a widely used, automated and affordable tool used worldwide by many individual research groups. Genome sequences of many food animals and crop plants have been deciphered and are being exploited for fundamental research and applied to improve their breeding programs. The developments in sequencing technologies have also impacted the associated bioinformat...

  14. VLSI Microsystem for Rapid Bioinformatic Pattern Recognition

    Science.gov (United States)

    Fang, Wai-Chi; Lue, Jaw-Chyng

    2009-01-01

    A system comprising very-large-scale integrated (VLSI) circuits is being developed as a means of bioinformatics-oriented analysis and recognition of patterns of fluorescence generated in a microarray in an advanced, highly miniaturized, portable genetic-expression-assay instrument. Such an instrument implements an on-chip combination of polymerase chain reactions and electrochemical transduction for amplification and detection of deoxyribonucleic acid (DNA).

  15. Application of bioinformatics in chronobiology research.

    Science.gov (United States)

    Lopes, Robson da Silva; Resende, Nathalia Maria; Honorio-França, Adenilda Cristina; França, Eduardo Luzía

    2013-01-01

    Bioinformatics and other well-established sciences, such as molecular biology, genetics, and biochemistry, provide a scientific approach for the analysis of data generated through "omics" projects that may be used in studies of chronobiology. The results of studies that apply these techniques demonstrate how they significantly aided the understanding of chronobiology. However, bioinformatics tools alone cannot eliminate the need for an understanding of the field of research or the data to be considered, nor can such tools replace analysts and researchers. It is often necessary to conduct an evaluation of the results of a data mining effort to determine the degree of reliability. To this end, familiarity with the field of investigation is necessary. It is evident that the knowledge that has been accumulated through chronobiology and the use of tools derived from bioinformatics has contributed to the recognition and understanding of the patterns and biological rhythms found in living organisms. The current work aims to develop new and important applications in the near future through chronobiology research.

  16. Application of Bioinformatics in Chronobiology Research

    Directory of Open Access Journals (Sweden)

    Robson da Silva Lopes

    2013-01-01

    Full Text Available Bioinformatics and other well-established sciences, such as molecular biology, genetics, and biochemistry, provide a scientific approach for the analysis of data generated through “omics” projects that may be used in studies of chronobiology. The results of studies that apply these techniques demonstrate how they significantly aided the understanding of chronobiology. However, bioinformatics tools alone cannot eliminate the need for an understanding of the field of research or the data to be considered, nor can such tools replace analysts and researchers. It is often necessary to conduct an evaluation of the results of a data mining effort to determine the degree of reliability. To this end, familiarity with the field of investigation is necessary. It is evident that the knowledge that has been accumulated through chronobiology and the use of tools derived from bioinformatics has contributed to the recognition and understanding of the patterns and biological rhythms found in living organisms. The current work aims to develop new and important applications in the near future through chronobiology research.

  17. Chapter 16: text mining for translational bioinformatics.

    Directory of Open Access Journals (Sweden)

    K Bretonnel Cohen

    2013-04-01

    Full Text Available Text mining for translational bioinformatics is a new field with tremendous research potential. It is a subfield of biomedical natural language processing that concerns itself directly with the problem of relating basic biomedical research to clinical practice, and vice versa. Applications of text mining fall both into the category of T1 translational research-translating basic science results into new interventions-and T2 translational research, or translational research for public health. Potential use cases include better phenotyping of research subjects, and pharmacogenomic research. A variety of methods for evaluating text mining applications exist, including corpora, structured test suites, and post hoc judging. Two basic principles of linguistic structure are relevant for building text mining applications. One is that linguistic structure consists of multiple levels. The other is that every level of linguistic structure is characterized by ambiguity. There are two basic approaches to text mining: rule-based, also known as knowledge-based; and machine-learning-based, also known as statistical. Many systems are hybrids of the two approaches. Shared tasks have had a strong effect on the direction of the field. Like all translational bioinformatics software, text mining software for translational bioinformatics can be considered health-critical and should be subject to the strictest standards of quality assurance and software testing.
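
    To make the rule-based approach described above concrete, the sketch below applies a hand-written pattern to pull gene-disease co-mentions out of an example sentence. The gene and disease lists and the sentence are invented for illustration; real systems rely on curated terminologies and much richer linguistic processing.

      import re

      # Tiny illustrative dictionaries; real systems use curated terminologies.
      GENES = {"BRCA1", "TP53", "EGFR"}
      DISEASES = {"breast cancer", "lung cancer"}

      def gene_disease_comentions(sentence: str):
          """Rule-based extraction: report (gene, disease) pairs mentioned together."""
          genes_found = {g for g in GENES if re.search(rf"\b{g}\b", sentence)}
          diseases_found = {d for d in DISEASES if d in sentence.lower()}
          return [(g, d) for g in sorted(genes_found) for d in sorted(diseases_found)]

      text = "Germline BRCA1 mutations confer a high lifetime risk of breast cancer."
      print(gene_disease_comentions(text))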

  18. Bioinformatics tools for analysing viral genomic data.

    Science.gov (United States)

    Orton, R J; Gu, Q; Hughes, J; Maabar, M; Modha, S; Vattipally, S B; Wilkie, G S; Davison, A J

    2016-04-01

    The field of viral genomics and bioinformatics is experiencing a strong resurgence due to high-throughput sequencing (HTS) technology, which enables the rapid and cost-effective sequencing and subsequent assembly of large numbers of viral genomes. In addition, the unprecedented power of HTS technologies has enabled the analysis of intra-host viral diversity and quasispecies dynamics in relation to important biological questions on viral transmission, vaccine resistance and host jumping. HTS also enables the rapid identification of both known and potentially new viruses from field and clinical samples, thus adding new tools to the fields of viral discovery and metagenomics. Bioinformatics has been central to the rise of HTS applications because new algorithms and software tools are continually needed to process and analyse the large, complex datasets generated in this rapidly evolving area. In this paper, the authors give a brief overview of the main bioinformatics tools available for viral genomic research, with a particular emphasis on HTS technologies and their main applications. They summarise the major steps in various HTS analyses, starting with quality control of raw reads and encompassing activities ranging from consensus and de novo genome assembly to variant calling and metagenomics, as well as RNA sequencing.
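
    Quality control of raw reads, the first step mentioned above, typically means trimming or discarding reads whose Phred scores are low. The sketch below filters reads by mean Phred quality decoded from Sanger-encoded (ASCII offset 33) FASTQ quality strings; the reads are invented and the threshold is arbitrary.

      def mean_phred(quality_string: str, offset: int = 33) -> float:
          """Mean Phred score of a FASTQ quality string (Sanger/Illumina 1.8+)."""
          scores = [ord(c) - offset for c in quality_string]
          return sum(scores) / len(scores)

      def filter_reads(reads, min_mean_quality: float = 20.0):
          """Keep (sequence, quality) pairs whose mean Phred score passes the cutoff."""
          return [(seq, qual) for seq, qual in reads if mean_phred(qual) >= min_mean_quality]

      # Two invented reads: the first has high-quality calls, the second mostly low ones.
      reads = [
          ("ACGTACGTAC", "IIIIIIHHGG"),   # mean Phred around 39
          ("ACGTTTGGCA", "###$%&'()*"),   # mean Phred around 5
      ]
      print(filter_reads(reads))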

  19. Bringing Web 2.0 to bioinformatics.

    Science.gov (United States)

    Zhang, Zhang; Cheung, Kei-Hoi; Townsend, Jeffrey P

    2009-01-01

    Enabling deft data integration from numerous, voluminous and heterogeneous data sources is a major bioinformatic challenge. Several approaches have been proposed to address this challenge, including data warehousing and federated databasing. Yet despite the rise of these approaches, integration of data from multiple sources remains problematic and toilsome. These two approaches follow a user-to-computer communication model for data exchange, and do not facilitate a broader concept of data sharing or collaboration among users. In this report, we discuss the potential of Web 2.0 technologies to transcend this model and enhance bioinformatics research. We propose a Web 2.0-based Scientific Social Community (SSC) model for the implementation of these technologies. By establishing a social, collective and collaborative platform for data creation, sharing and integration, we promote a web services-based pipeline featuring web services for computer-to-computer data exchange as users add value. This pipeline aims to simplify data integration and creation, to realize automatic analysis, and to facilitate reuse and sharing of data. SSC can foster collaboration and harness collective intelligence to create and discover new knowledge. In addition to its research potential, we also describe its potential role as an e-learning platform in education. We discuss lessons from information technology, predict the next generation of Web (Web 3.0), and describe its potential impact on the future of bioinformatics studies.

  20. Chapter 16: text mining for translational bioinformatics.

    Science.gov (United States)

    Cohen, K Bretonnel; Hunter, Lawrence E

    2013-04-01

    Text mining for translational bioinformatics is a new field with tremendous research potential. It is a subfield of biomedical natural language processing that concerns itself directly with the problem of relating basic biomedical research to clinical practice, and vice versa. Applications of text mining fall both into the category of T1 translational research-translating basic science results into new interventions-and T2 translational research, or translational research for public health. Potential use cases include better phenotyping of research subjects, and pharmacogenomic research. A variety of methods for evaluating text mining applications exist, including corpora, structured test suites, and post hoc judging. Two basic principles of linguistic structure are relevant for building text mining applications. One is that linguistic structure consists of multiple levels. The other is that every level of linguistic structure is characterized by ambiguity. There are two basic approaches to text mining: rule-based, also known as knowledge-based; and machine-learning-based, also known as statistical. Many systems are hybrids of the two approaches. Shared tasks have had a strong effect on the direction of the field. Like all translational bioinformatics software, text mining software for translational bioinformatics can be considered health-critical and should be subject to the strictest standards of quality assurance and software testing.

  1. A discrete system simulation study in scheduling and resource allocation for the John A. Burns School of Medicine Clinical Skills Center.

    Science.gov (United States)

    Glaspie, Henry W; Oshiro Wong, Celeste M

    2015-03-01

    The Center for Clinical Skills (CCS) at the University of Hawai'i's John A. Burns School of Medicine (JABSOM) trains medical students in a variety of medical practice education experiences aimed at improving patient care skills of history taking, physical examination, communication, and counseling. Increasing class sizes accentuate the need for efficient scheduling of faculty and students for clinical skills examinations. This research reports an application of discrete event simulation, using Arena®, a commercial simulation and optimization software package by Rockwell Automation Inc, to model the flow of students through an objective structured clinical examination (OSCE) using the basic physical examination sequence (BPSE). The goal was to identify the most efficient scheduling of limited volunteer faculty resources to enable all student teams to complete the OSCE within the allocated 4 hours. The simulation models 11 two-person student teams using 10 examination rooms, where physical examination skills are demonstrated on fellow student subjects and assessed by volunteer faculty. Multiple faculty availability models with constrained time parameters and other resources were evaluated. The results of the discrete event simulation suggest that there is no statistical difference between the baseline model and the alternative models with respect to faculty utilization, but statistically significant differences in student wait times. Two models significantly reduced student wait times without compromising faculty utilization.
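
    For readers unfamiliar with discrete event simulation, the sketch below models a simplified version of the queueing problem described (teams waiting for a free room and a free examiner) in plain Python rather than Arena®; the team counts, service times and resource numbers are illustrative assumptions, not the study's calibrated parameters.

        import heapq
        import random

        def simulate_osce(n_teams=11, n_rooms=10, n_faculty=6, exam_minutes=30, seed=1):
            """Toy discrete-event model: each team needs one free room and one free examiner."""
            random.seed(seed)
            free_rooms, free_faculty = n_rooms, n_faculty
            waiting = list(range(n_teams))      # all teams queue at time zero
            finish_events = []                  # min-heap of exam completion times
            clock = 0.0
            start_times = {}

            def start_ready_teams():
                nonlocal free_rooms, free_faculty
                while waiting and free_rooms and free_faculty:
                    team = waiting.pop(0)
                    free_rooms -= 1
                    free_faculty -= 1
                    start_times[team] = clock   # waited from time zero until now
                    duration = random.uniform(0.8, 1.2) * exam_minutes
                    heapq.heappush(finish_events, clock + duration)

            start_ready_teams()
            while finish_events:
                clock = heapq.heappop(finish_events)   # advance to the next completed exam
                free_rooms += 1
                free_faculty += 1
                start_ready_teams()
            return clock, start_times

        makespan, waits = simulate_osce()
        print(f"last exam ends at {makespan:.1f} min; mean wait {sum(waits.values()) / len(waits):.1f} min")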

  2. USDA Stakeholder Workshop on Animal Bioinformatics: Summary and Recommendations

    Directory of Open Access Journals (Sweden)

    David L. Adelson

    2006-04-01

    Full Text Available An electronic workshop was conducted on 4 November–13 December 2002 to discuss current issues and needs in animal bioinformatics. The electronic (e-mail listserver) format was chosen to provide a relatively speedy process that is broad in scope, cost-efficient and easily accessible to all participants. Approximately 40 panelists with diverse species and discipline expertise communicated through the panel e-mail listserver. The panel included scientists from academia, industry and government, in the USA, Australia and the UK. A second ‘stakeholder’ e-mail listserver was used to obtain input from a broad audience with general interests in animal genomics. The objectives of the electronic workshop were: (a) to define priorities for animal genome database development; and (b) to recommend ways in which the USDA could provide leadership in the area of animal genome database development. E-mail messages from panelists and stakeholders are archived at http://genome.cvm.umn.edu/bioinfo/. Priorities defined for animal genome database development included: (a) data repository; (b) tools for genome analysis; (c) annotation; (d) practical application of genomic data; and (e) a biological framework for DNA sequence. A stable source of funding, such as the USDA Agricultural Research Service (ARS), was recommended to support maintenance of data repositories and data curation. Continued support for competitive grants programs within the USDA Cooperative State Research, Education and Extension Service (CSREES) was recommended for tool development and hypothesis-driven research projects in genome analysis. Additional stakeholder input will be required to continuously refine priorities and maximize the use of limited resources for animal bioinformatics within the USDA.

  3. USDA Stakeholder Workshop on Animal Bioinformatics: Summary and Recommendations.

    Science.gov (United States)

    Hamernik, Debora L; Adelson, David L

    2003-01-01

    An electronic workshop was conducted on 4 November-13 December 2002 to discuss current issues and needs in animal bioinformatics. The electronic (e-mail listserver) format was chosen to provide a relatively speedy process that is broad in scope, cost-efficient and easily accessible to all participants. Approximately 40 panelists with diverse species and discipline expertise communicated through the panel e-mail listserver. The panel included scientists from academia, industry and government, in the USA, Australia and the UK. A second 'stakeholder' e-mail listserver was used to obtain input from a broad audience with general interests in animal genomics. The objectives of the electronic workshop were: (a) to define priorities for animal genome database development; and (b) to recommend ways in which the USDA could provide leadership in the area of animal genome database development. E-mail messages from panelists and stakeholders are archived at http://genome.cvm.umn.edu/bioinfo/. Priorities defined for animal genome database development included: (a) data repository; (b) tools for genome analysis; (c) annotation; (d) practical application of genomic data; and (e) a biological framework for DNA sequence. A stable source of funding, such as the USDA Agricultural Research Service (ARS), was recommended to support maintenance of data repositories and data curation. Continued support for competitive grants programs within the USDA Cooperative State Research, Education and Extension Service (CSREES) was recommended for tool development and hypothesis-driven research projects in genome analysis. Additional stakeholder input will be required to continuously refine priorities and maximize the use of limited resources for animal bioinformatics within the USDA.

  4. Bioinformatics for Diagnostics, Forensics, and Virulence Characterization and Detection

    Energy Technology Data Exchange (ETDEWEB)

    Gardner, S; Slezak, T

    2005-04-05

    We summarize four of our group's high-risk/high-payoff research projects funded by the Intelligence Technology Innovation Center (ITIC) in conjunction with our DHS-funded pathogen informatics activities. These are (1) quantitative assessment of genomic sequencing needs to predict high quality DNA and protein signatures for detection, and comparison of draft versus finished sequences for diagnostic signature prediction; (2) development of forensic software to identify SNP and PCR-RFLP variations from a large number of viral pathogen sequences and optimization of the selection of markers for maximum discrimination of those sequences; (3) prediction of signatures for the detection of virulence, antibiotic resistance, and toxin genes and genetic engineering markers in bacteria; (4) bioinformatic characterization of virulence factors to rapidly screen genomic data for potential genes with similar functions and to elucidate potential health threats in novel organisms. The results of (1) are being used by policy makers to set national sequencing priorities. Analyses from (2) are being used in collaborations with the CDC to genotype and characterize many variola strains, and reports from these collaborations have been made to the President. We also determined SNPs for serotype and strain discrimination of 126 foot and mouth disease virus (FMDV) genomes. For (3), currently >1000 probes have been predicted for the specific detection of >4000 virulence, antibiotic resistance, and genetic engineering vector sequences, and we expect to complete the bioinformatic design of a comprehensive "virulence detection chip" by August 2005. Results of (4) will be a system to rapidly predict potential virulence pathways and phenotypes in organisms based on their genomic sequences.
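
    A heavily simplified sketch of the k-mer signature idea behind project (3) follows: find subsequences present in every target sequence and absent from every background sequence. The sequences below are toy strings; real signature pipelines operate on whole genomes with efficient indexing and add thermodynamic screening of candidate probes.

        def kmers(seq, k):
            """All k-mers of a sequence as a set."""
            return {seq[i:i + k] for i in range(len(seq) - k + 1)}

        def candidate_signatures(targets, background, k=8):
            """k-mers shared by every target sequence and absent from every background sequence."""
            shared = set.intersection(*(kmers(s, k) for s in targets))
            excluded = set.union(*(kmers(s, k) for s in background)) if background else set()
            return shared - excluded

        targets = ["ACGTACGTGGTTAACCGT", "TTACGTACGTGGTTAACC"]   # toy "virulence gene" fragments
        background = ["ACGTACGTCCTTAACCGT"]                      # toy near-neighbour sequence
        print(sorted(candidate_signatures(targets, background, k=8)))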

  5. Robust Bioinformatics Recognition with VLSI Biochip Microsystem

    Science.gov (United States)

    Lue, Jaw-Chyng L.; Fang, Wai-Chi

    2006-01-01

    A microsystem architecture for real-time, on-site, robust bioinformatic pattern recognition and analysis has been proposed. This system is compatible with on-chip DNA analysis techniques such as polymerase chain reaction (PCR) amplification. A corresponding novel artificial neural network (ANN) learning algorithm, using a new sigmoid-logarithmic transfer function based on the error backpropagation (EBP) algorithm, has been developed. Our results show that the newly trained ANN recognizes low-fluorescence patterns better than a conventional sigmoidal ANN does. A differential logarithmic imaging chip has been designed for calculating the logarithm of relative intensities of fluorescence signals. A single-rail logarithmic circuit and a prototype ANN chip have been designed, fabricated and characterized.
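
    The following sketch shows one plausible reading of a 'sigmoid-logarithmic' unit, namely a sigmoid neuron fed with log-compressed fluorescence intensities; this interpretation, along with the weights and readings used, is an assumption for illustration and may not match the paper's exact formulation.

        import math

        def sigmoid(x):
            return 1.0 / (1.0 + math.exp(-x))

        def sigmoid_log_unit(intensities, weights, bias):
            """One neuron whose inputs are log-compressed fluorescence intensities
            (an assumed reading of the 'sigmoid-logarithmic transfer function')."""
            log_inputs = [math.log(i + 1.0) for i in intensities]   # compress the dynamic range
            activation = sum(w * x for w, x in zip(weights, log_inputs)) + bias
            return sigmoid(activation)

        # Toy fluorescence readings spanning several orders of magnitude.
        print(sigmoid_log_unit([5.0, 120.0, 20000.0], weights=[0.4, 0.3, 0.1], bias=-2.0))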

  6. Multiobjective optimization in bioinformatics and computational biology.

    Science.gov (United States)

    Handl, Julia; Kell, Douglas B; Knowles, Joshua

    2007-01-01

    This paper reviews the application of multiobjective optimization in the fields of bioinformatics and computational biology. A survey of existing work, organized by application area, forms the main body of the review, following an introduction to the key concepts in multiobjective optimization. An original contribution of the review is the identification of five distinct "contexts," giving rise to multiple objectives: These are used to explain the reasons behind the use of multiobjective optimization in each application area and also to point the way to potential future uses of the technique.
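
    For readers new to the key concept, the sketch below shows Pareto dominance and the extraction of a non-dominated front for a toy two-objective minimization problem; it is illustrative only and not drawn from any of the surveyed applications.

        def dominates(a, b):
            """True if a is at least as good as b in every (minimized) objective and strictly better in one."""
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        def pareto_front(solutions):
            """Return the non-dominated subset of a list of objective vectors."""
            return [s for s in solutions if not any(dominates(o, s) for o in solutions if o != s)]

        # Toy trade-off, e.g. (alignment cost, model complexity)
        candidates = [(1.0, 9.0), (2.0, 4.0), (3.0, 3.0), (2.5, 6.0), (5.0, 2.5)]
        print(pareto_front(candidates))   # (2.5, 6.0) is dominated by (2.0, 4.0)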

  7. Microbial bioinformatics for food safety and production.

    Science.gov (United States)

    Alkema, Wynand; Boekhorst, Jos; Wels, Michiel; van Hijum, Sacha A F T

    2016-03-01

    In the production of fermented foods, microbes play an important role. Optimization of fermentation processes or starter culture production traditionally was a trial-and-error approach inspired by expert knowledge of the fermentation process. Current developments in high-throughput 'omics' technologies allow developing more rational approaches to improve fermentation processes both from the food functionality as well as from the food safety perspective. Here, the authors thematically review typical bioinformatics techniques and approaches to improve various aspects of the microbial production of fermented food products and food safety.

  8. Translational Bioinformatics: Past, Present, and Future

    Institute of Scientific and Technical Information of China (English)

    Jessica D. Tenenbaum

    2016-01-01

    Though a relatively young discipline, translational bioinformatics (TBI) has become a key component of biomedical research in the era of precision medicine. Development of high-throughput technologies and electronic health records has caused a paradigm shift in both healthcare and biomedical research. Novel tools and methods are required to convert increasingly voluminous datasets into information and actionable knowledge. This review provides a definition and contextualization of the term TBI, describes the discipline’s brief history and past accomplishments, as well as current foci, and concludes with predictions of future directions in the field.

  9. Introducing bioinformatics, the biosciences' genomic revolution

    CERN Document Server

    Zanella, Paolo

    1999-01-01

    The general audience for these lectures is mainly physicists, computer scientists, engineers or the general public wanting to know more about what’s going on in the biosciences. What’s bioinformatics and why is all this fuss being made about it? What’s this revolution triggered by the human genome project? Are there any results yet? What are the problems? What new avenues of research have been opened up? What about the technology? These new developments will be compared with what happened at CERN earlier in its evolution, and it is hoped that the similarities and contrasts will stimulate new curiosity and provoke new thoughts.

  10. Diabetes - resources

    Science.gov (United States)

    Resources - diabetes ... The following sites provide further information on diabetes: American Diabetes Association -- www.diabetes.org Juvenile Diabetes Research Foundation International -- www.jdrf.org National Center for Chronic Disease Prevention and Health Promotion -- ...

  11. Arthritis - resources

    Science.gov (United States)

    Resources - arthritis ... The following organizations provide more information on arthritis : American Academy of Orthopaedic Surgeons -- orthoinfo.aaos.org/menus/arthritis.cfm Arthritis Foundation -- www.arthritis.org Centers for Disease Control and Prevention -- www. ...

  12. Hemophilia - resources

    Science.gov (United States)

    Resources - hemophilia ... The following organizations provide further information on hemophilia : Centers for Disease Control and Prevention -- www.cdc.gov/ncbddd/hemophilia/index.html National Heart, Lung, and Blood Institute -- www.nhlbi.nih.gov/ ...

  13. Genomics Virtual Laboratory: A Practical Bioinformatics Workbench for the Cloud.

    Directory of Open Access Journals (Sweden)

    Enis Afgan

    Full Text Available Analyzing high throughput genomics data is a complex and compute intensive task, generally requiring numerous software tools and large reference data sets, tied together in successive stages of data transformation and visualisation. A computational platform enabling best practice genomics analysis ideally meets a number of requirements, including: a wide range of analysis and visualisation tools, closely linked to large user and reference data sets; workflow platform(s) enabling accessible, reproducible, portable analyses, through a flexible set of interfaces; highly available, scalable computational resources; and flexibility and versatility in the use of these resources to meet demands and expertise of a variety of users. Access to an appropriate computational platform can be a significant barrier to researchers, as establishing such a platform requires a large upfront investment in hardware, experience, and expertise. We designed and implemented the Genomics Virtual Laboratory (GVL) as a middleware layer of machine images, cloud management tools, and online services that enable researchers to build arbitrarily sized compute clusters on demand, pre-populated with fully configured bioinformatics tools, reference datasets and workflow and visualisation options. The platform is flexible in that users can conduct analyses through web-based (Galaxy, RStudio, IPython Notebook) or command-line interfaces, and add/remove compute nodes and data resources as required. Best-practice tutorials and protocols provide a path from introductory training to practice. The GVL is available on the OpenStack-based Australian Research Cloud (http://nectar.org.au) and the Amazon Web Services cloud. The principles, implementation and build process are designed to be cloud-agnostic. This paper provides a blueprint for the design and implementation of a cloud-based Genomics Virtual Laboratory. We discuss scope, design considerations and technical and logistical constraints.

  14. A Survey of Scholarly Literature Describing the Field of Bioinformatics Education and Bioinformatics Educational Research

    Science.gov (United States)

    Magana, Alejandra J.; Taleyarkhan, Manaz; Alvarado, Daniela Rivera; Kane, Michael; Springer, John; Clase, Kari

    2014-01-01

    Bioinformatics education can be broadly defined as the teaching and learning of the use of computer and information technology, along with mathematical and statistical analysis for gathering, storing, analyzing, interpreting, and integrating data to solve biological problems. The recent surge of genomics, proteomics, and structural biology in the…

  15. Hands-on, online, and workshop-based K-12 weather and climate education resources from the Center for Multi-scale Modeling of Atmospheric Processes

    Science.gov (United States)

    Foster, S. Q.; Johnson, R. M.; Randall, D. A.; Denning, A.; Burt, M. A.; Gardiner, L.; Genyuk, J.; Hatheway, B.; Jones, B.; La Grave, M. L.; Russell, R. M.

    2009-12-01

    The need for improving the representation of cloud processes in climate models has been one of the most important limitations of the reliability of climate-change simulations. Now in its fourth year, the National Science Foundation-funded Center for Multi-scale Modeling of Atmospheric Processes (CMMAP) at Colorado State University (CSU) is addressing this problem through a revolutionary new approach to representing cloud processes on their native scales, including the cloud-scale interaction processes that are active in cloud systems. CMMAP has set ambitious education and human-resource goals to share basic information about the atmosphere, clouds, weather, climate, and modeling with diverse K-12 and public audiences. This is accomplished through collaborations in resource development and dissemination between CMMAP scientists, CSU’s Little Shop of Physics (LSOP) program, and the Windows to the Universe (W2U) program at the University Corporation for Atmospheric Research (UCAR). Little Shop of Physics develops new hands-on science activities demonstrating basic science concepts fundamental to understanding atmospheric characteristics, weather, and climate. Videos capture demonstrations of children completing these activities, which are broadcast to school districts and public television programs. CMMAP and LSOP educators and scientists partner in teaching summer professional development workshops for teachers at CSU, covering a semester's worth of college-level content on the basic physics of the atmosphere, weather, climate, climate modeling, and climate change, as well as dozens of LSOP inquiry-based activities suitable for use in classrooms. The W2U project complements these efforts by developing and broadly disseminating new CMMAP-related online content pages, animations, interactives, image galleries, scientists’ biographies, and LSOP videos to K-12 and public audiences. Reaching nearly 20 million users annually, W2U is highly valued as a curriculum enhancement

  16. A Dynamic Allocation Method for Virtualized Resources in a Cloud Call Center System

    Institute of Scientific and Technical Information of China (English)

    凌颖; 徐伟

    2013-01-01

    A cloud call center system built on the resource pool of a cloud computing service platform can perform load balancing at the business level, enabling intelligent scheduling and allocation of call center resources: it automatically balances incoming call load and agent sign-in load, and supports unified scheduling of traffic and service resources as well as unified operation and management of the business. Because the cloud call center system is deployed on the cloud data center resource pool and shares infrastructure with other applications, it also requires elastic, dynamic allocation of resources. This paper presents a method for dynamically allocating virtualized resources in a cloud call center system, in which allocation decisions for the resource pool are driven by the runtime state of the applications running on it. The method includes a trigger mechanism by which the cloud call center system initiates dynamic virtual resource allocation requests, and the interfaces between the cloud call center system and the resource management platform for dynamic resource allocation.
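
    As a hedged illustration of the trigger mechanism described above, the sketch below derives a scale-up/scale-down decision from observed call-center load using simple utilization thresholds; the metrics, thresholds and per-VM capacity are assumptions, not the paper's actual interface.

        def scaling_decision(active_calls, logged_in_agents, vm_count,
                             high_util=0.8, low_util=0.3, min_vms=2, max_vms=20):
            """Return +1 (request a VM), -1 (release a VM) or 0 based on per-VM load."""
            load_per_vm = (active_calls + logged_in_agents) / max(vm_count, 1)
            capacity_per_vm = 100                      # assumed sessions one VM can serve
            utilization = load_per_vm / capacity_per_vm
            if utilization > high_util and vm_count < max_vms:
                return +1
            if utilization < low_util and vm_count > min_vms:
                return -1
            return 0

        # The call-center application would pass this decision to the resource
        # management platform through the allocation interface the paper describes.
        print(scaling_decision(active_calls=950, logged_in_agents=300, vm_count=10))   # -> +1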

  17. A lightweight, flow-based toolkit for parallel and distributed bioinformatics pipelines

    Directory of Open Access Journals (Sweden)

    Cieślik Marcin

    2011-02-01

    Full Text Available Abstract Background Bioinformatic analyses typically proceed as chains of data-processing tasks. A pipeline, or 'workflow', is a well-defined protocol, with a specific structure defined by the topology of data-flow interdependencies, and a particular functionality arising from the data transformations applied at each step. In computer science, the dataflow programming (DFP) paradigm defines software systems constructed in this manner, as networks of message-passing components. Thus, bioinformatic workflows can be naturally mapped onto DFP concepts. Results To enable the flexible creation and execution of bioinformatics dataflows, we have written a modular framework for parallel pipelines in Python ('PaPy'). A PaPy workflow is created from re-usable components connected by data-pipes into a directed acyclic graph, which together define nested higher-order map functions. The successive functional transformations of input data are evaluated on flexibly pooled compute resources, either local or remote. Input items are processed in batches of adjustable size, allowing one to tune the trade-off between parallelism and lazy-evaluation (memory consumption). An add-on module ('NuBio') facilitates the creation of bioinformatics workflows by providing domain-specific data-containers (e.g., for biomolecular sequences, alignments, structures) and functionality (e.g., to parse/write standard file formats). Conclusions PaPy offers a modular framework for the creation and deployment of parallel and distributed data-processing workflows. Pipelines derive their functionality from user-written, data-coupled components, so PaPy also can be viewed as a lightweight toolkit for extensible, flow-based bioinformatics data-processing. The simplicity and flexibility of distributed PaPy pipelines may help users bridge the gap between traditional desktop/workstation and grid computing. PaPy is freely distributed as open-source Python code at http://muralab.org/PaPy, and
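
    The sketch below mimics the flow-based idea in plain Python by chaining two data-coupled components through multiprocessing.Pool.imap; it is not PaPy's actual API, only an illustration of evaluating a pipeline of map stages on pooled workers.

        from multiprocessing import Pool

        def reverse_complement(seq):
            """Component 1: reverse-complement a DNA string."""
            comp = {"A": "T", "C": "G", "G": "C", "T": "A"}
            return "".join(comp[b] for b in reversed(seq))

        def gc_content(seq):
            """Component 2: fraction of G/C bases."""
            return round((seq.count("G") + seq.count("C")) / len(seq), 3)

        if __name__ == "__main__":
            reads = ["ACGTACGT", "GGGCCCAT", "ATATATGC"]
            with Pool(processes=2) as pool:
                # Two chained 'map' stages; items stream through in batches (chunksize).
                stage1 = pool.imap(reverse_complement, reads, chunksize=1)
                stage2 = pool.imap(gc_content, list(stage1), chunksize=1)
                print(list(stage2))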

  18. The Revolution in Viral Genomics as Exemplified by the Bioinformatic Analysis of Human Adenoviruses

    Directory of Open Access Journals (Sweden)

    Sarah Torres

    2010-06-01

    Full Text Available Over the past 30 years, genomic and bioinformatic analysis of human adenoviruses has been achieved using a variety of DNA sequencing methods; initially with the use of restriction enzymes and more currently with the use of the GS FLX pyrosequencing technology. Following the conception of DNA sequencing in the 1970s, analysis of adenoviruses has evolved from 100 base pair mRNA fragments to entire genomes. Comparative genomics of adenoviruses made its debut in 1984 when nucleotides and amino acids of coding sequences within the hexon genes of two human adenoviruses (HAdV), HAdV–C2 and HAdV–C5, were compared and analyzed. It was determined that there were three different zones (1-393, 394-1410, 1411-2910) within the hexon gene, of which HAdV–C2 and HAdV–C5 shared zones 1 and 3 with 95% and 89.5% nucleotide identity, respectively. In 1992, HAdV-C5 became the first adenovirus genome to be fully sequenced using the Sanger method. Over the next seven years, whole genome analysis and characterization was completed using bioinformatic tools such as blastn, tblastx, ClustalV and FASTA, in order to determine key proteins in species HAdV-A through HAdV-F. The bioinformatic revolution was initiated with the introduction of a novel species, HAdV-G, that was typed and named by the use of whole genome sequencing and phylogenetics as opposed to traditional serology. HAdV bioinformatics will continue to advance as the latest sequencing technology enables scientists to add to and expand the resource databases. As a result of these advancements, how novel HAdVs are typed has changed. Bioinformatic analysis has become the revolutionary tool that has significantly accelerated the in-depth study of HAdV microevolution through comparative genomics.
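
    To illustrate the kind of zone-by-zone comparison described for the hexon genes, here is a minimal percent-identity sketch over aligned sequences; the sequences and zone coordinates below are toy values, not the published data.

        def percent_identity(a, b):
            """Percent identity of two equal-length, already-aligned sequences."""
            matches = sum(1 for x, y in zip(a, b) if x == y)
            return 100.0 * matches / len(a)

        def zone_identities(seq1, seq2, zones):
            """Percent identity within each (start, end) zone, 1-based inclusive coordinates."""
            return {(s, e): percent_identity(seq1[s - 1:e], seq2[s - 1:e]) for s, e in zones}

        # Toy aligned fragments and zones (the real analysis used zones 1-393, 394-1410, 1411-2910).
        s1 = "ATGGCTACCCCTTCGATGATG"
        s2 = "ATGGCTACCAATTCGATGATG"
        print(zone_identities(s1, s2, [(1, 9), (10, 14), (15, 21)]))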

  19. Bioinformatics for cancer immunology and immunotherapy.

    Science.gov (United States)

    Charoentong, Pornpimol; Angelova, Mihaela; Efremova, Mirjana; Gallasch, Ralf; Hackl, Hubert; Galon, Jerome; Trajanoski, Zlatko

    2012-11-01

    Recent mechanistic insights obtained from preclinical studies and the approval of the first immunotherapies have motivated an increasing number of academic investigators and pharmaceutical/biotech companies to further elucidate the role of immunity in tumor pathogenesis and to reconsider the role of immunotherapy. Additionally, technological advances (e.g., next-generation sequencing) are providing unprecedented opportunities to draw a comprehensive picture of the tumor genomics landscape and ultimately enable individualized treatment. However, the increasing complexity of the generated data and the plethora of bioinformatics methods and tools pose considerable challenges to both tumor immunologists and clinical oncologists. In this review, we describe current concepts and future challenges for the management and analysis of data for cancer immunology and immunotherapy. We first highlight publicly available databases with a specific focus on cancer immunology, including databases for somatic mutations and epitope databases. We then give an overview of the bioinformatics methods for the analysis of next-generation sequencing data (whole-genome and exome sequencing), epitope prediction tools, as well as methods for integrative data analysis and network modeling. Mathematical models are powerful tools that can predict and explain important patterns in the genetic and clinical progression of cancer. Therefore, a survey of mathematical models for tumor evolution and tumor-immune cell interaction is included. Finally, we discuss future challenges for individualized immunotherapy and suggest how combined computational/experimental approaches can lead to new insights into the molecular mechanisms of cancer, improve diagnosis and prognosis of the disease, and pinpoint novel therapeutic targets.

  20. Proceedings of the 2013 MidSouth Computational Biology and Bioinformatics Society (MCBIOS) Conference.

    Science.gov (United States)

    Wren, Jonathan D; Dozmorov, Mikhail G; Burian, Dennis; Kaundal, Rakesh; Perkins, Andy; Perkins, Ed; Kupfer, Doris M; Springer, Gordon K

    2013-01-01

    The tenth annual conference of the MidSouth Computational Biology and Bioinformatics Society (MCBIOS 2013), "The 10th Anniversary in a Decade of Change: Discovery in a Sea of Data", took place at the Stoney Creek Inn & Conference Center in Columbia, Missouri on April 5-6, 2013. This year's Conference Chairs were Gordon Springer and Chi-Ren Shyu from the University of Missouri and Edward Perkins from the US Army Corps of Engineers Engineering Research and Development Center, who is also the current MCBIOS President (2012-3). There were 151 registrants and a total of 111 abstracts (51 oral presentations and 60 poster session abstracts).

  1. Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community

    OpenAIRE

    Krampis Konstantinos; Booth Tim; Chapman Brad; Tiwari Bela; Bicak Mesude; Field Dawn; Nelson Karen E

    2012-01-01

    Abstract Background A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the ...

  2. Virtualization of Data Center Hardware Resources

    Institute of Scientific and Technical Information of China (English)

    李志飞; 宿建波; 郑确

    2015-01-01

    To further advance its level of enterprise informatization, the data center of Yalong River Hydropower Development Company Limited (hereinafter, the Yalong company) carried out the virtualization of its hardware resources and, on this basis, built an enterprise cloud management platform, substantially improving the utilization and security of its hardware resources.

  3. Development of a gene-centered ssr atlas as a resource for papaya (Carica papaya marker-assisted selection and population genetic studies.

    Directory of Open Access Journals (Sweden)

    Newton Medeiros Vidal

    Full Text Available Carica papaya (papaya) is an economically important tropical fruit. Molecular marker-assisted selection is an inexpensive and reliable tool that has been widely used to improve fruit quality traits and resistance against diseases. In the present study we report the development and validation of an atlas of papaya simple sequence repeat (SSR) markers. We integrated gene predictions and functional annotations to provide a gene-centered perspective for marker-assisted selection studies. Our atlas comprises 160,318 SSRs, from which 21,231 were located in genic regions (i.e. inside exons, exon-intron junctions or introns). A total of 116,453 (72.6%) of all identified repeats were successfully mapped to one of the nine papaya linkage groups. Primer pairs were designed for markers from 9,594 genes (34.5% of the papaya gene complement). Using papaya-tomato orthology assessments, we assembled a list of 300 genes (comprising 785 SSRs) potentially involved in fruit ripening. We validated our atlas by screening 73 SSR markers (including 25 fruit ripening genes), achieving 100% amplification rate and uncovering 26% polymorphism rate between the parental genotypes (Sekati and JS12). The SSR atlas presented here is the first comprehensive gene-centered collection of annotated and genome positioned papaya SSRs. These features combined with thousands of high-quality primer pairs make the atlas an important resource for the papaya research community.

  4. Development of a gene-centered ssr atlas as a resource for papaya (Carica papaya) marker-assisted selection and population genetic studies.

    Science.gov (United States)

    Vidal, Newton Medeiros; Grazziotin, Ana Laura; Ramos, Helaine Christine Cancela; Pereira, Messias Gonzaga; Venancio, Thiago Motta

    2014-01-01

    Carica papaya (papaya) is an economically important tropical fruit. Molecular marker-assisted selection is an inexpensive and reliable tool that has been widely used to improve fruit quality traits and resistance against diseases. In the present study we report the development and validation of an atlas of papaya simple sequence repeat (SSR) markers. We integrated gene predictions and functional annotations to provide a gene-centered perspective for marker-assisted selection studies. Our atlas comprises 160,318 SSRs, from which 21,231 were located in genic regions (i.e. inside exons, exon-intron junctions or introns). A total of 116,453 (72.6%) of all identified repeats were successfully mapped to one of the nine papaya linkage groups. Primer pairs were designed for markers from 9,594 genes (34.5% of the papaya gene complement). Using papaya-tomato orthology assessments, we assembled a list of 300 genes (comprising 785 SSRs) potentially involved in fruit ripening. We validated our atlas by screening 73 SSR markers (including 25 fruit ripening genes), achieving 100% amplification rate and uncovering 26% polymorphism rate between the parental genotypes (Sekati and JS12). The SSR atlas presented here is the first comprehensive gene-centered collection of annotated and genome positioned papaya SSRs. These features combined with thousands of high-quality primer pairs make the atlas an important resource for the papaya research community.
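
    A minimal sketch of how simple sequence repeats of the kind catalogued in the atlas can be located with a regular expression follows; the example sequence and the motif-length and copy-number thresholds are illustrative assumptions, not the study's detection criteria.

        import re

        # Motif of 2-6 bp followed by at least 3 further copies (so 4 copies in total; illustrative thresholds).
        SSR_PATTERN = re.compile(r"([ACGT]{2,6}?)\1{3,}")

        def find_ssrs(sequence):
            """Return (motif, copy number, start position) for perfect SSRs in a DNA string."""
            ssrs = []
            for match in SSR_PATTERN.finditer(sequence):
                motif = match.group(1)
                copies = len(match.group(0)) // len(motif)
                ssrs.append((motif, copies, match.start()))
            return ssrs

        print(find_ssrs("TTACACACACACGGATGATGATGATGCCTA"))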

  5. A middleware-based platform for the integration of bioinformatic services

    Directory of Open Access Journals (Sweden)

    Guzmán Llambías

    2015-08-01

    Full Text Available Performing bioinformatics experiments involves intensive access to distributed services and information resources over the Internet. Although existing tools facilitate the implementation of workflow-oriented applications, they lack the capability to integrate services beyond small-scale applications, particularly services with heterogeneous interaction patterns and at a larger scale. This is especially needed to enable large-scale distributed processing of the biological data generated by massive sequencing technologies. On the other hand, such integration mechanisms are provided by middleware products such as Enterprise Service Buses (ESB), which make it possible to integrate distributed systems following a Service Oriented Architecture. This paper proposes an integration platform, based on enterprise middleware, to integrate bioinformatics services. It presents a multi-level reference architecture and focuses on ESB-based mechanisms to provide asynchronous communications, event-based interactions and data transformation capabilities. The paper presents a formal specification of the platform using the Event-B model.
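
    As a hedged sketch of the asynchronous, event-based interaction style mentioned above, the code below uses a plain in-process queue in place of an ESB channel: a producer publishes messages without waiting, and a consumer transforms them independently. It illustrates the pattern only, not the platform's implementation.

        import queue
        import threading

        # A minimal stand-in for an ESB-style asynchronous channel.
        sequence_topic = queue.Queue()

        def publish(record):
            """A producing service posts a message without waiting for consumers."""
            sequence_topic.put(record)

        def transform_and_store(stop_marker=None):
            """Consumer: apply a simple data transformation to each message as it arrives."""
            while True:
                record = sequence_topic.get()
                if record is stop_marker:
                    break
                print({"id": record["id"], "length": len(record["seq"])})

        consumer = threading.Thread(target=transform_and_store)
        consumer.start()
        publish({"id": "read_1", "seq": "ACGTACGT"})
        publish({"id": "read_2", "seq": "TTGGCCAA"})
        sequence_topic.put(None)      # signal the consumer to finish
        consumer.join()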

  6. Role of remote sensing, geographical information system (GIS) and bioinformatics in kala-azar epidemiology.

    Science.gov (United States)

    Bhunia, Gouri Sankar; Dikhit, Manas Ranjan; Kesari, Shreekant; Sahoo, Ganesh Chandra; Das, Pradeep

    2011-11-01

    Visceral leishmaniasis or kala-azar is a potent parasitic infection causing the death of thousands of people each year. Medicinal compounds currently available for the treatment of kala-azar have serious side effects and decreased efficacy owing to the emergence of resistant strains. The type of immune reaction must also be considered in patients infected with Leishmania donovani (L. donovani). For complete eradication of this disease, advanced research is currently being applied both at the molecular level and in the field. Computational approaches such as remote sensing, geographical information systems (GIS) and bioinformatics are key resources for studying the detection and distribution of vectors, transmission patterns, and ecological and environmental factors, as well as for genomic and proteomic analysis. Novel approaches like GIS and bioinformatics have been applied to determine the causes of visceral leishmaniasis and to design strategies for preventing the disease from spreading from one region to another.

  7. Data Analysis and Assessment Center

    Data.gov (United States)

    Federal Laboratory Consortium — The DoD Supercomputing Resource Center (DSRC) Data Analysis and Assessment Center (DAAC) provides classified facilities to enhance customer interactions with the ARL...

  8. Evaluating an Inquiry-Based Bioinformatics Course Using Q Methodology

    Science.gov (United States)

    Ramlo, Susan E.; McConnell, David; Duan, Zhong-Hui; Moore, Francisco B.

    2008-01-01

    Faculty at a Midwestern metropolitan public university recently developed a course on bioinformatics that emphasized collaboration and inquiry. Bioinformatics, essentially the application of computational tools to biological data, is inherently interdisciplinary. Thus part of the challenge of creating this course was serving the needs and…

  9. Assessment of a Bioinformatics across Life Science Curricula Initiative

    Science.gov (United States)

    Howard, David R.; Miskowski, Jennifer A.; Grunwald, Sandra K.; Abler, Michael L.

    2007-01-01

    At the University of Wisconsin-La Crosse, we have undertaken a program to integrate the study of bioinformatics across the undergraduate life science curricula. Our efforts have included incorporating bioinformatics exercises into courses in the biology, microbiology, and chemistry departments, as well as coordinating the efforts of faculty within…

  10. Generative Topic Modeling in Image Data Mining and Bioinformatics Studies

    Science.gov (United States)

    Chen, Xin

    2012-01-01

    Probabilistic topic models have been developed for applications in various domains, such as text mining, information retrieval, computer vision and bioinformatics. In this thesis, we focus on developing novel probabilistic topic models for image mining and bioinformatics studies. Specifically, a probabilistic topic-connection (PTC) model…

  11. The bioinformatics of next generation sequencing: a meeting report

    Institute of Scientific and Technical Information of China (English)

    Ravi Shankar

    2011-01-01

    The Studio of Computational Biology & Bioinformatics (SCBB), IHBT, CSIR, Palampur, India organized one of the very first national workshops, funded by DBT, Govt. of India, on the bioinformatics issues associated with next-generation sequencing approaches. The course structure was designed by SCBB, IHBT. The workshop took place on the IHBT premises on 17 and 18 June 2010.

  12. The 2015 Bioinformatics Open Source Conference (BOSC 2015).

    Directory of Open Access Journals (Sweden)

    Nomi L Harris

    2016-02-01

    Full Text Available The Bioinformatics Open Source Conference (BOSC) is organized by the Open Bioinformatics Foundation (OBF), a nonprofit group dedicated to promoting the practice and philosophy of open source software development and open science within the biological research community. Since its inception in 2000, BOSC has provided bioinformatics developers with a forum for communicating the results of their latest efforts to the wider research community. BOSC offers a focused environment for developers and users to interact and share ideas about standards; software development practices; practical techniques for solving bioinformatics problems; and approaches that promote open science and sharing of data, results, and software. BOSC is run as a two-day special interest group (SIG) before the annual Intelligent Systems in Molecular Biology (ISMB) conference. BOSC 2015 took place in Dublin, Ireland, and was attended by over 125 people, about half of whom were first-time attendees. Session topics included "Data Science;" "Standards and Interoperability;" "Open Science and Reproducibility;" "Translational Bioinformatics;" "Visualization;" and "Bioinformatics Open Source Project Updates". In addition to two keynote talks and dozens of shorter talks chosen from submitted abstracts, BOSC 2015 included a panel, titled "Open Source, Open Door: Increasing Diversity in the Bioinformatics Open Source Community," that provided an opportunity for open discussion about ways to increase the diversity of participants in BOSC in particular, and in open source bioinformatics in general. The complete program of BOSC 2015 is available online at http://www.open-bio.org/wiki/BOSC_2015_Schedule.

  13. Storage, data management, and retrieval in bioinformatics

    Science.gov (United States)

    Wong, Stephen T. C.; Patwardhan, Anil

    2001-12-01

    The evolution of biology into a large-scale quantitative molecular science has been paralleled by concomitant advances in computer storage systems, processing power, and data-analysis algorithms. The application of computer technologies to molecular biology data has given rise to a new system-based approach to biological research. Bioinformatics addresses problems related to the storage, retrieval and analysis of information about biological structure, sequence and function. Its goals include the development of integrated storage systems and analysis tools to interpret molecular biology data in a biologically meaningful manner in normal and disease processes and in efforts for drug discovery. This paper reviews recent developments in data management, storage, and retrieval that are central to the effective use of structural and functional genomics in fulfilling these goals.

  14. Bioinformatics analysis of estrogen-responsive genes

    Science.gov (United States)

    Handel, Adam E.

    2016-01-01

    Estrogen is a steroid hormone that plays critical roles in a myriad of intracellular pathways. The expression of many genes is regulated through the steroid hormone receptors ESR1 and ESR2. These bind to DNA and modulate the expression of target genes. Identification of estrogen target genes is greatly facilitated by the use of transcriptomic methods, such as RNA-seq and expression microarrays, and chromatin immunoprecipitation with massively parallel sequencing (ChIP-seq). Combining transcriptomic and ChIP-seq data enables a distinction to be drawn between direct and indirect estrogen target genes. This chapter will discuss some methods of identifying estrogen target genes that do not require any expertise in programming languages or complex bioinformatics. PMID:26585125
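
    A small sketch of the distinction drawn above: genes that are differentially expressed after estrogen treatment and also carry a nearby ESR1 ChIP-seq peak are treated as putative direct targets, the remainder as indirect. The gene sets below are invented for illustration.

        # Hypothetical inputs: differentially expressed genes after estrogen treatment,
        # and genes with an ESR1 ChIP-seq binding peak near their promoter.
        differentially_expressed = {"GREB1", "PGR", "MYC", "CCND1", "FOXA1"}
        genes_with_esr1_peak = {"GREB1", "PGR", "CCND1", "TFF1"}

        direct_targets = differentially_expressed & genes_with_esr1_peak
        indirect_targets = differentially_expressed - genes_with_esr1_peak

        print("Putative direct targets:", sorted(direct_targets))
        print("Putative indirect targets:", sorted(indirect_targets))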

  15. Academic Training - Bioinformatics: Decoding the Genome

    CERN Multimedia

    Chris Jones

    2006-01-01

    ACADEMIC TRAINING LECTURE SERIES 27, 28 February 1, 2, 3 March 2006 from 11:00 to 12:00 - Auditorium, bldg. 500 Decoding the Genome A special series of 5 lectures on: Recent extraordinary advances in the life sciences arising through new detection technologies and bioinformatics The past five years have seen an extraordinary change in the information and tools available in the life sciences. The sequencing of the human genome, the discovery that we possess far fewer genes than foreseen, the measurement of the tiny changes in the genomes that differentiate us, the sequencing of the genomes of many pathogens that lead to diseases such as malaria are all examples of completely new information that is now available in the quest for improved healthcare. New tools have allowed similar strides in the discovery of the associated protein structures, providing invaluable information for those searching for new drugs. New DNA microarray chips permit simultaneous measurement of the state of expression of tens...

  16. Using Cluster Computers in Bioinformatics Research

    Institute of Scientific and Technical Information of China (English)

    周澄; 郁松年

    2003-01-01

    In the last ten years, high-performance and massively parallel computing technology has entered a phase of rapid development and is used in all fields. Cluster computer systems are also widely used because of their low cost and high performance. In bioinformatics research, solving a problem by computer often takes hours or even days. To speed up research, high-performance cluster computers are considered a good platform. When moving to a new MPP (massively parallel processing) system, the original algorithm must be parallelized in a suitable way. In this paper, a new parallelization of the widely used Smith-Waterman sequence alignment algorithm is designed, based on an existing optimized version of the algorithm. The results are encouraging.
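
    For reference, here is a compact serial implementation of the Smith-Waterman local alignment score, the kernel that such parallelizations target; the scoring parameters are common defaults chosen for illustration, not necessarily those used by the authors.

        def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
            """Best local alignment score of sequences a and b (linear gap penalty)."""
            rows, cols = len(a) + 1, len(b) + 1
            h = [[0] * cols for _ in range(rows)]
            best = 0
            for i in range(1, rows):
                for j in range(1, cols):
                    diag = h[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                    h[i][j] = max(0, diag, h[i - 1][j] + gap, h[i][j - 1] + gap)
                    best = max(best, h[i][j])
            return best

        print(smith_waterman_score("ACACACTA", "AGCACACA"))

    Parallel versions typically compute the matrix in anti-diagonal wavefronts, since cells on the same anti-diagonal do not depend on one another.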

  17. Bioinformatics methods for identifying candidate disease genes

    Directory of Open Access Journals (Sweden)

    van Driel Marc A

    2006-06-01

    Full Text Available Abstract With the explosion in genomic and functional genomics information, methods for disease gene identification are rapidly evolving. Databases are now essential to the process of selecting candidate disease genes. Combining positional information with disease characteristics and functional information is the usual strategy by which candidate disease genes are selected. Enrichment for candidate disease genes, however, depends on the skills of the operating researcher. Over the past few years, a number of bioinformatics methods that enrich for the most likely candidate disease genes have been developed. Such in silico prioritisation methods may further improve by completion of datasets, by development of standardised ontologies across databases and species and, ultimately, by the integration of different strategies.
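
    A toy sketch of the prioritisation strategy described above: combine positional evidence (is the gene inside the linkage interval?) with functional evidence (do its annotations match disease keywords?) into a single ranking. The gene names, interval and keywords are invented.

        def prioritise(candidates, interval, disease_keywords):
            """Rank genes by a simple combined positional + functional score."""
            start, end = interval
            ranked = []
            for gene in candidates:
                positional = 1.0 if start <= gene["position"] <= end else 0.0
                overlap = len(set(gene["annotations"]) & set(disease_keywords))
                functional = overlap / max(len(disease_keywords), 1)
                ranked.append((positional + functional, gene["name"]))
            return sorted(ranked, reverse=True)

        candidates = [
            {"name": "GENE_A", "position": 1_250_000, "annotations": ["apoptosis", "kinase"]},
            {"name": "GENE_B", "position": 2_400_000, "annotations": ["ion transport"]},
            {"name": "GENE_C", "position": 1_900_000, "annotations": ["apoptosis", "DNA repair"]},
        ]
        print(prioritise(candidates, interval=(1_000_000, 2_000_000),
                         disease_keywords=["apoptosis", "DNA repair"]))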

  18. Cotton Databases and Web Resources

    Institute of Scientific and Technical Information of China (English)

    Russell J. KOHEL; John Z. YU; Piyush GUPTA; Rajeev AGRAWAL

    2002-01-01

    There are several web sites that make information available to the cotton research community. Most of these sites relate to resources developed by or available to the research community. Few provide bioinformatic tools, and those that do usually relate to the specific data sets and materials presented in the database. Just as the bioinformatics area is evolving, the available resources reflect this evolution.

  19. Training community resource center and clinic personnel to prompt patients in listing questions for doctors: Follow-up interviews about barriers and facilitators to the implementation of consultation planning

    Directory of Open Access Journals (Sweden)

    Sepucha Karen

    2008-01-01

    Full Text Available Abstract Background Visit preparation interventions help patients prepare to meet with a medical provider. Systematic reviews have found some positive effects, but there are no reports describing implementation experiences. Consultation Planning (CP) is a visit preparation technique in which a trained coach or facilitator elicits and documents patient questions for an upcoming medical appointment. We integrated CP into a university breast cancer clinic beginning in 1998. Representatives of other organizations expressed interest in CP, so we invited them to training workshops in 2000, 2001, and 2002. Objectives In order to learn from experience and generate hypotheses, we asked: 1) How many trainees implemented CP? 2) What facilitated implementation? 3) How have trainees, patients, physicians, and administrative leaders of implementing organizations reacted to CP? 4) What were the barriers to implementation? Methods We attempted to contact 32 trainees and scheduled follow-up, semi-structured, audio-recorded telephone interviews with 18. We analyzed quantitative data by tabulating frequencies and qualitative data by coding transcripts and identifying themes. Results Trainees came from two different types of organizations, clinics (which provide medical care) versus resource centers (which provide patient support services but not medical care). We found that: 1) Fourteen of 21 respondents, from five of eight resource centers, implemented CP. Four of the five implementing resource centers were rural. 2) Implementers identified the championing of CP by an internal staff member as a critical success factor. 3) Implementers reported that modified CP has been productive. 4) Four respondents, from two resource centers and two clinics, did not implement CP, reporting resource limitations or conflicting priorities as the critical barriers. Conclusion CP training workshops have been associated with subsequent CP implementations at resource centers but not clinics. We

  20. Carbon Monoxide Information Center

    Medline Plus


  1. Hydrologic Engineering Center

    Data.gov (United States)

    Federal Laboratory Consortium — The Hydrologic Engineering Center (HEC), an organization within the Institute for Water Resources, is the designated Center of Expertise for the U.S. Army Corps of...

  2. High-throughput bioinformatics with the Cyrille2 pipeline system

    Directory of Open Access Journals (Sweden)

    de Groot Joost CW

    2008-02-01

    Full Text Available Abstract Background Modern omics research involves the application of high-throughput technologies that generate vast volumes of data. These data need to be pre-processed, analyzed and integrated with existing knowledge through the use of diverse sets of software tools, models and databases. The analyses are often interdependent and chained together to form complex workflows or pipelines. Given the volume of the data used and the multitude of computational resources available, specialized pipeline software is required to make high-throughput analysis of large-scale omics datasets feasible. Results We have developed a generic pipeline system called Cyrille2. The system is modular in design and consists of three functionally distinct parts: 1) a web based, graphical user interface (GUI) that enables a pipeline operator to manage the system; 2) the Scheduler, which forms the functional core of the system and which tracks what data enters the system and determines what jobs must be scheduled for execution, and; 3) the Executor, which searches for scheduled jobs and executes these on a compute cluster. Conclusion The Cyrille2 system is an extensible, modular system, implementing the stated requirements. Cyrille2 enables easy creation and execution of high throughput, flexible bioinformatics pipelines.
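
    A minimal sketch of the scheduling idea at the core of such a system follows: given job interdependencies as a directed acyclic graph, repeatedly release for execution the jobs whose prerequisites are complete. This is a generic topological scheduler, not Cyrille2's actual implementation.

        from collections import deque

        def schedule(jobs):
            """jobs: dict mapping job name -> list of prerequisite job names. Returns an execution order."""
            remaining = {name: set(deps) for name, deps in jobs.items()}
            ready = deque(sorted(n for n, d in remaining.items() if not d))
            order = []
            while ready:
                job = ready.popleft()
                order.append(job)                      # in a real system: submit to the compute cluster
                for name, deps in remaining.items():
                    if job in deps:
                        deps.remove(job)
                        if not deps:
                            ready.append(name)
            if len(order) != len(jobs):
                raise ValueError("cycle detected: pipeline is not a DAG")
            return order

        pipeline = {"fetch": [], "preprocess": ["fetch"], "align": ["preprocess"],
                    "annotate": ["align"], "report": ["align", "annotate"]}
        print(schedule(pipeline))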

  3. Investigation of the current status of human resources in hospital-run community health centers in Xiamen

    Institute of Scientific and Technical Information of China (English)

    林民强

    2014-01-01

    Objectives: To understand the current status of and problems with human resources in hospital-run community health centers in Xiamen, and to provide evidence for human resource reform in community health centers. Methods: A self-designed questionnaire was used to survey staffing in 15 hospital-run community health centers in 2012; data were analyzed in Excel. Results: The community health centers in Xiamen have an insufficient and under-qualified workforce, with staff holding lower-level degrees and junior professional titles still forming the majority. Conclusions: To strengthen the community health workforce and improve service capacity, it is necessary to change the mindset of valuing hospitals over community health centers, adopt active human resource policies, reinforce integrated hospital-community management, and continuously improve training policies for community health personnel.

  4. Construction Planning of a National Basic Research Data Resource Center

    Institute of Scientific and Technical Information of China (English)

    袁芳; 闫术卓; 邵正隆; 俞春; 宋树仁

    2013-01-01

    Basic research has become a strategic focus of science and technology development worldwide and is a fundamental guarantee of a country's standing among the major scientific powers. Making full use of modern information technology to share and analyse the management data of basic research is key to keeping basic research at the frontier, internationalized and sustainable. This study proposes the concept and overall framework of a national basic research data resource center: designing an integrated, consolidated repository of basic research data, establishing standardized and efficient data resource management, and planning an open, shared application service platform for these data resources. The resulting center, with a reasonable layout, complete functions, a sound institutional system and efficient sharing, would provide systematic support for the strategic goals of basic research, namely discovering and cultivating scientific and technological talent, improving research and innovation capability in the basic disciplines, and raising the overall level of national science and technology development.

  5. Bioinformatics approaches for identifying new therapeutic bioactive peptides in food

    Directory of Open Access Journals (Sweden)

    Nora Khaldi

    2012-10-01

    Full Text Available ABSTRACT: The traditional methods for mining foods for bioactive peptides are tedious and slow. As in the drug industry, the time needed to identify and deliver a commercial health ingredient that reduces disease symptoms can be anything between 5 and 10 years. Reducing this time and effort is crucial in order to create new commercially viable products with clear and important health benefits. In the past few years, bioinformatics, the science that brings together fast computational biology and efficient genome mining, has emerged as the long-awaited solution to this problem. By quickly mining food genomes for characteristics of certain food therapeutic ingredients, researchers can potentially find new ones in a matter of a few weeks. Yet, surprisingly, very little success has been achieved so far using bioinformatics to mine for food bioactives. The absence of food-specific bioinformatic mining tools, the slow integration of experimental mining and bioinformatics, and the important differences between experimental platforms are some of the reasons for the slow progress of bioinformatics in the field of functional food, and more specifically in bioactive peptide discovery. In this paper I discuss some methods that could be easily translated, using a rational peptide bioinformatics design, to food bioactive peptide mining. I highlight the need for an integrated food peptide database. I also discuss how to better integrate experimental work with bioinformatics in order to improve the mining of food for bioactive peptides, thereby achieving a higher success rate.
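
    As a hedged sketch of the kind of in silico mining discussed, the code below scans a protein sequence for exact matches to a small set of previously reported bioactive peptides; the peptide list is a toy stand-in for the integrated food peptide database the author calls for, and the protein fragment is illustrative only.

        # Toy reference set of known bioactive peptide sequences (placeholder for a curated database).
        KNOWN_BIOACTIVE_PEPTIDES = {"IPP": "ACE-inhibitory", "VPP": "ACE-inhibitory", "YPFP": "opioid-like"}

        def mine_protein(protein_seq):
            """Report every known bioactive peptide found as an exact substring of the protein."""
            hits = []
            for peptide, activity in KNOWN_BIOACTIVE_PEPTIDES.items():
                start = protein_seq.find(peptide)
                while start != -1:
                    hits.append((peptide, activity, start))
                    start = protein_seq.find(peptide, start + 1)
            return hits

        # Fragment of a milk-protein-like sequence (illustrative only).
        print(mine_protein("MKVLILACLVALALAREQEELNVPGEIVESLSSSEESITRINKKIPPVPPFLQ"))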

  6. The Enzyme Portal: a case study in applying user-centred design methods in bioinformatics.

    Science.gov (United States)

    de Matos, Paula; Cham, Jennifer A; Cao, Hong; Alcántara, Rafael; Rowland, Francis; Lopez, Rodrigo; Steinbeck, Christoph

    2013-03-20

    User-centred design (UCD) is a type of user interface design in which the needs and desires of users are taken into account at each stage of the design process for a service or product; often for software applications and websites. Its goal is to facilitate the design of software that is both useful and easy to use. To achieve this, you must characterise users' requirements, design suitable interactions to meet their needs, and test your designs using prototypes and real-life scenarios. For bioinformatics, there is little practical information available regarding how to carry out UCD in practice. To address this we describe a complete, multi-stage UCD process used for creating a new bioinformatics resource for integrating enzyme information, called the Enzyme Portal (http://www.ebi.ac.uk/enzymeportal). This freely-available service mines and displays data about proteins with enzymatic activity from public repositories via a single search, and includes biochemical reactions, biological pathways, small molecule chemistry, disease information, 3D protein structures and relevant scientific literature. We employed several UCD techniques, including: persona development, interviews, 'canvas sort' card sorting, user workflows, usability testing and others. Our hope is that this case study will motivate the reader to apply similar UCD approaches to their own software design for bioinformatics. Indeed, we found the benefits included more effective decision-making for design ideas and technologies; enhanced team-working and communication; cost effectiveness; and ultimately a service that more closely meets the needs of our target audience.

  7. Using Bioinformatics to Develop and Test Hypotheses: E. coli-Specific Virulence Determinants

    Directory of Open Access Journals (Sweden)

    Joanna R. Klein

    2012-09-01

    Full Text Available Bioinformatics, the use of computer resources to understand biological information, is an important tool in research, and can be easily integrated into the curriculum of undergraduate courses. Such an example is provided in this series of four activities that introduces students to the field of bioinformatics as they design PCR based tests for pathogenic E. coli strains. A variety of computer tools are used including BLAST searches at NCBI, bacterial genome searches at the Integrated Microbial Genomes (IMG database, protein analysis at Pfam and literature research at PubMed. In the process, students also learn about virulence factors, enzyme function and horizontal gene transfer. Some or all of the four activities can be incorporated into microbiology or general biology courses taken by students at a variety of levels, ranging from high school through college. The activities build on one another as they teach and reinforce knowledge and skills, promote critical thinking, and provide for student collaboration and presentation. The computer-based activities can be done either in class or outside of class, thus are appropriate for inclusion in online or blended learning formats. Assessment data showed that students learned general microbiology concepts related to pathogenesis and enzyme function, gained skills in using tools of bioinformatics and molecular biology, and successfully developed and tested a scientific hypothesis.
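
    As a companion to the activities, here is a small sketch of the kind of primer sanity check students might perform after choosing a strain-specific target: GC content and a rough Wallace-rule melting temperature. The primer sequences are invented examples, and a real design would also check specificity, for instance with BLAST.

        def gc_content(primer):
            """Fraction of G and C bases in a primer."""
            return (primer.count("G") + primer.count("C")) / len(primer)

        def wallace_tm(primer):
            """Rough melting temperature by the Wallace rule: 2*(A+T) + 4*(G+C) degrees C."""
            at = primer.count("A") + primer.count("T")
            gc = primer.count("G") + primer.count("C")
            return 2 * at + 4 * gc

        for primer in ["ATGACCATGATTACGGATTCA", "GCGGCCGCAAGGCCTTGCAG"]:   # hypothetical candidates
            print(primer, f"GC={gc_content(primer):.2f}", f"Tm~{wallace_tm(primer)}C")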

  8. Evaluating an Inquiry-based Bioinformatics Course Using Q Methodology

    Science.gov (United States)

    Ramlo, Susan E.; McConnell, David; Duan, Zhong-Hui; Moore, Francisco B.

    2008-06-01

    Faculty at a Midwestern metropolitan public university recently developed a course on bioinformatics that emphasized collaboration and inquiry. Bioinformatics, essentially the application of computational tools to biological data, is inherently interdisciplinary. Thus part of the challenge of creating this course was serving the needs and backgrounds of a diverse set of students, predominantly computer science and biology undergraduate and graduate students. Although the researchers desired to investigate student views of the course, they were interested in the potentially different perspectives. Q methodology, a measure of subjectivity, allowed the researchers to determine the various student perspectives in the bioinformatics course.

  9. Survey of MapReduce frame operation in bioinformatics.

    Science.gov (United States)

    Zou, Quan; Li, Xu-Bin; Jiang, Wen-Rui; Lin, Zi-Yu; Li, Gui-Lin; Chen, Ke

    2014-07-01

    Bioinformatics is challenged by the fact that traditional analysis tools have difficulty in processing large-scale data from high-throughput sequencing. The open source Apache Hadoop project, which adopts the MapReduce framework and a distributed file system, has recently given bioinformatics researchers an opportunity to achieve scalable, efficient and reliable computing performance on Linux clusters and on cloud computing services. In this article, we present MapReduce frame-based applications that can be employed in the next-generation sequencing and other biological domains. In addition, we discuss the challenges faced by this field as well as the future works on parallel computing in bioinformatics.
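
    To make the MapReduce model concrete, here is a minimal, hedged sketch of a Hadoop Streaming-style mapper and reducer (one script run in two modes) that counts k-mers, assuming one DNA sequence per input line; the file layout and the k value are illustrative assumptions rather than details of the surveyed tools. With Hadoop Streaming such a script is typically supplied through the streaming jar's -mapper and -reducer options, and the framework sorts the emitted keys between the two phases.

      #!/usr/bin/env python
      # Minimal Hadoop Streaming-style k-mer counter (assumes one DNA sequence per input line).
      import sys

      K = 8  # illustrative k-mer length

      def mapper():
          for line in sys.stdin:
              seq = line.strip().upper()
              for i in range(len(seq) - K + 1):
                  print(f"{seq[i:i+K]}\t1")  # emit key<TAB>count

      def reducer():
          # Hadoop delivers the mapper output grouped and sorted by key.
          current, total = None, 0
          for line in sys.stdin:
              kmer, count = line.rstrip("\n").split("\t")
              if kmer != current:
                  if current is not None:
                      print(f"{current}\t{total}")
                  current, total = kmer, 0
              total += int(count)
          if current is not None:
              print(f"{current}\t{total}")

      if __name__ == "__main__":
          # Run as "kmer_count.py map" for the map phase, "kmer_count.py reduce" for the reduce phase.
          mapper() if sys.argv[1] == "map" else reducer()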

  10. Thriving in multidisciplinary research: advice for new bioinformatics students.

    Science.gov (United States)

    Auerbach, Raymond K

    2012-09-01

    The sciences have seen a large increase in demand for students in bioinformatics and multidisciplinary fields in general. Many new educational programs have been created to satisfy this demand, but navigating these programs requires a non-traditional outlook and emphasizes working in teams of individuals with distinct yet complementary skill sets. Written from the perspective of a current bioinformatics student, this article seeks to offer advice to prospective and current students in bioinformatics regarding what to expect in their educational program, how multidisciplinary fields differ from more traditional paths, and decisions that they will face on the road to becoming successful, productive bioinformaticists.

  11. Evaluating the effectiveness of a practical inquiry-based learning bioinformatics module on undergraduate student engagement and applied skills.

    Science.gov (United States)

    Brown, James A L

    2016-05-01

    A pedagogic intervention, in the form of an inquiry-based peer-assisted learning project (as a practical student-led bioinformatics module), was assessed for its ability to increase students' engagement, practical bioinformatic skills and process-specific knowledge. Elements assessed were process-specific knowledge following module completion, qualitative student-based module evaluation and the novelty, scientific validity and quality of written student reports. Bioinformatics is often the starting point for laboratory-based research projects, therefore high importance was placed on allowing students to individually develop and apply processes and methods of scientific research. Students led a bioinformatic inquiry-based project (within a framework of inquiry), discovering, justifying and exploring individually discovered research targets. Detailed assessable reports were produced, displaying data generated and the resources used. Mimicking research settings, undergraduates were divided into small collaborative groups, with distinctive central themes. The module was evaluated by assessing the quality and originality of the students' targets through reports, reflecting students' use and understanding of concepts and tools required to generate their data. Furthermore, evaluation of the bioinformatic module was assessed semi-quantitatively using pre- and post-module quizzes (a non-assessable activity, not contributing to their grade), which incorporated process- and content-specific questions (indicative of their use of the online tools). Qualitative assessment of the teaching intervention was performed using post-module surveys, exploring student satisfaction and other module specific elements. Overall, a positive experience was found, as was a post module increase in correct process-specific answers. In conclusion, an inquiry-based peer-assisted learning module increased students' engagement, practical bioinformatic skills and process-specific knowledge. © 2016 by

  12. Evolution of web services in bioinformatics.

    Science.gov (United States)

    Neerincx, Pieter B T; Leunissen, Jack A M

    2005-06-01

    Bioinformaticians have developed large collections of tools to make sense of the rapidly growing pool of molecular biological data. Biological systems tend to be complex and in order to understand them, it is often necessary to link many data sets and use more than one tool. Therefore, bioinformaticians have experimented with several strategies to try to integrate data sets and tools. Owing to the lack of standards for data sets and the interfaces of the tools this is not a trivial task. Over the past few years building services with web-based interfaces has become a popular way of sharing the data and tools that have resulted from many bioinformatics projects. This paper discusses the interoperability problem and how web services are being used to try to solve it, resulting in the evolution of tools with web interfaces from HTML/web form-based tools not suited for automatic workflow generation to a dynamic network of XML-based web services that can easily be used to create pipelines.
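
    As a small, hedged example of consuming such a programmatic web-service interface, the sketch below fetches a GenBank record in FASTA format through the NCBI E-utilities REST service using only the Python standard library; the accession number is merely illustrative, and the service is independent of the tools discussed in the paper.

      from urllib.parse import urlencode
      from urllib.request import urlopen

      # NCBI E-utilities efetch endpoint (a public REST-style web service)
      BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"

      params = urlencode({
          "db": "nuccore",        # nucleotide database
          "id": "NM_000546",      # illustrative accession (human TP53 mRNA)
          "rettype": "fasta",
          "retmode": "text",
      })

      with urlopen(f"{BASE}?{params}") as response:
          fasta = response.read().decode()

      print(fasta.splitlines()[0])  # print the FASTA header line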

  13. An interdepartmental Ph.D. program in computational biology and bioinformatics: the Yale perspective.

    Science.gov (United States)

    Gerstein, Mark; Greenbaum, Dov; Cheung, Kei; Miller, Perry L

    2007-02-01

    Computational biology and bioinformatics (CBB), the terms often used interchangeably, represent a rapidly evolving biological discipline. With the clear potential for discovery and innovation, and the need to deal with the deluge of biological data, many academic institutions are committing significant resources to develop CBB research and training programs. Yale formally established an interdepartmental Ph.D. program in CBB in May 2003. This paper describes Yale's program, discussing the scope of the field, the program's goals and curriculum, as well as a number of issues that arose in implementing the program. (Further updated information is available from the program's website, www.cbb.yale.edu.)

  14. Scalable pattern recognition algorithms applications in computational biology and bioinformatics

    CERN Document Server

    Maji, Pradipta

    2014-01-01

    Reviews the development of scalable pattern recognition algorithms for computational biology and bioinformatics. Includes numerous examples and experimental results to support the theoretical concepts described. Concludes each chapter with directions for future research and a comprehensive bibliography.

  15. Bioconductor: open software development for computational biology and bioinformatics

    DEFF Research Database (Denmark)

    Gentleman, R.C.; Carey, V.J.; Bates, D.M.;

    2004-01-01

    into interdisciplinary scientific research, and promoting the achievement of remote reproducibility of research results. We describe details of our aims and methods, identify current challenges, compare Bioconductor to other open bioinformatics projects, and provide working examples....

  16. Research on Resource Allocation for R & D Project Portfolio in Automobile Test Center

    Institute of Scientific and Technical Information of China (English)

    周伶

    2015-01-01

    Guided by project portfolio management theory and informed by the existing R&D processes and characteristics of the automotive industry, this paper analyzes the relationship between test resources and product-development projects and examines the problems that arise during resource allocation, such as resource constraints, resource congestion and conflicts across multiple projects. It reviews the resource allocation situation of the test laboratory and introduces resource allocation methods and models, such as metaheuristic and genetic algorithms, to address these problems. Taking the company's actual situation and its project management process into account, the paper applies a genetic algorithm to simulate resource demand for the long-term test plan, with the aim of achieving timely, accurate and efficient resource allocation that secures project resource needs and keeps projects on schedule. The simulation result is also one of the most important inputs for planning the extension of the test center's resources.

  17. A high-throughput bioinformatics distributed computing platform

    OpenAIRE

    Keane, Thomas M; Page, Andrew J.; McInerney, James O; Naughton, Thomas J.

    2005-01-01

    In the past number of years the demand for high performance computing has greatly increased in the area of bioinformatics. The huge increase in size of many genomic databases has meant that many common tasks in bioinformatics are not possible to complete in a reasonable amount of time on a single processor. Recently distributed computing has emerged as an inexpensive alternative to dedicated parallel computing. We have developed a general-purpose distributed computing platform ...

  18. An innovative approach for testing bioinformatics programs using metamorphic testing

    Directory of Open Access Journals (Sweden)

    Liu Huai

    2009-01-01

    Full Text Available Abstract Background Recent advances in experimental and computational technologies have fueled the development of many sophisticated bioinformatics programs. The correctness of such programs is crucial as incorrectly computed results may lead to wrong biological conclusions or misguide downstream experimentation. Common software testing procedures involve executing the target program with a set of test inputs and then verifying the correctness of the test outputs. However, due to the complexity of many bioinformatics programs, it is often difficult to verify the correctness of the test outputs. Therefore our ability to perform systematic software testing is greatly hindered. Results We propose to use a novel software testing technique, metamorphic testing (MT), to test a range of bioinformatics programs. Instead of requiring a mechanism to verify whether an individual test output is correct, the MT technique verifies whether a pair of test outputs conform to a set of domain-specific properties, called metamorphic relations (MRs), thus greatly increasing the number and variety of test cases that can be applied. To demonstrate how MT is used in practice, we applied MT to test two open-source bioinformatics programs, namely GNLab and SeqMap. In particular we show that MT is simple to implement, and is effective in detecting faults in a real-life program and some artificially fault-seeded programs. Further, we discuss how MT can be applied to test programs from various domains of bioinformatics. Conclusion This paper describes the application of a simple, effective and automated technique to systematically test a range of bioinformatics programs. We show how MT can be implemented in practice through two real-life case studies. Since many bioinformatics programs, particularly those for large-scale simulation and data analysis, are hard to test systematically, their developers may benefit from using MT as part of the testing strategy. Therefore our work
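
    A minimal sketch of the idea, assuming a toy motif-counting function rather than GNLab or SeqMap: the metamorphic relation states that the number of occurrences of a motif in a sequence must equal the number of occurrences of its reverse complement in the reverse-complemented sequence, so two outputs can be checked against each other without knowing the "correct" count.

      def count_motif(seq, motif):
          """Toy program under test: count (possibly overlapping) motif occurrences."""
          return sum(1 for i in range(len(seq) - len(motif) + 1) if seq[i:i+len(motif)] == motif)

      def revcomp(seq):
          return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

      def test_metamorphic_revcomp():
          # Metamorphic relation: counts are invariant under reverse-complementing
          # both the sequence and the motif.
          cases = [("ACGTACGTACGT", "ACGT"), ("AAAAAA", "AA"), ("GATTACA", "TA")]
          for seq, motif in cases:
              source_output = count_motif(seq, motif)
              followup_output = count_motif(revcomp(seq), revcomp(motif))
              assert source_output == followup_output, (seq, motif)

      test_metamorphic_revcomp()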

  19. A library-based bioinformatics services program*

    OpenAIRE

    2000-01-01

    Support for molecular biology researchers has been limited to traditional library resources and services in most academic health sciences libraries. The University of Washington Health Sciences Libraries have been providing specialized services to this user community since 1995. The library recruited a Ph.D. biologist to assess the molecular biological information needs of researchers and design strategies to enhance library resources and services. A survey of laboratory research groups ident...

  20. Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community

    Directory of Open Access Journals (Sweden)

    Krampis Konstantinos

    2012-03-01

    Full Text Available Abstract Background A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the scale of sequence datasets, scientific value cannot be obtained from acquiring a sequencer unless it is accompanied by an equal investment in informatics infrastructure. Results Cloud BioLinux is a publicly accessible Virtual Machine (VM) that enables scientists to quickly provision on-demand infrastructures for high-performance bioinformatics computing using cloud platforms. Users have instant access to a range of pre-configured command line and graphical software applications, including a full-featured desktop interface, documentation and over 135 bioinformatics packages for applications including sequence alignment, clustering, assembly, display, editing, and phylogeny. Each tool's functionality is fully described in the documentation directly accessible from the graphical interface of the VM. Besides the Amazon EC2 cloud, we have started instances of Cloud BioLinux on a private Eucalyptus cloud installed at the J. Craig Venter Institute, and demonstrated access to the bioinformatic tools interface through a remote connection to EC2 instances from a local desktop computer. Documentation for using Cloud BioLinux on EC2 is available from our project website, while a Eucalyptus cloud image and VirtualBox Appliance is also publicly available for download and use by researchers with access to private clouds. Conclusions Cloud BioLinux provides a platform for developing bioinformatics infrastructures on the cloud. An automated and configurable process builds

  1. Biopipe: a flexible framework for protocol-based bioinformatics analysis.

    Science.gov (United States)

    Hoon, Shawn; Ratnapu, Kiran Kumar; Chia, Jer-Ming; Kumarasamy, Balamurugan; Juguang, Xiao; Clamp, Michele; Stabenau, Arne; Potter, Simon; Clarke, Laura; Stupka, Elia

    2003-08-01

    We identify several challenges facing bioinformatics analysis today. Firstly, to fulfill the promise of comparative studies, bioinformatics analysis will need to accommodate different sources of data residing in a federation of databases that, in turn, come in different formats and modes of accessibility. Secondly, the tsunami of data to be handled will require robust systems that enable bioinformatics analysis to be carried out in a parallel fashion. Thirdly, the ever-evolving state of bioinformatics presents new algorithms and paradigms in conducting analysis. This means that any bioinformatics framework must be flexible and generic enough to accommodate such changes. In addition, we identify the need for introducing an explicit protocol-based approach to bioinformatics analysis that will lend rigorousness to the analysis. This makes it easier for experimentation and replication of results by external parties. Biopipe is designed in an effort to meet these goals. It aims to allow researchers to focus on protocol design. At the same time, it is designed to work over a compute farm and thus provides high-throughput performance. A common exchange format that encapsulates the entire protocol in terms of the analysis modules, parameters, and data versions has been developed to provide a powerful way in which to distribute and reproduce results. This will enable researchers to discuss and interpret the data better as the once implicit assumptions are now explicitly defined within the Biopipe framework.
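
    Purely as a hypothetical illustration of what such a protocol-centric exchange format might capture (this is not Biopipe's actual format), the sketch below encodes a small analysis protocol as a Python data structure listing the analysis modules, their parameters and the data versions used, which is enough information to distribute and reproduce a run.

      import json

      # Hypothetical protocol description in the spirit of a protocol-based framework.
      protocol = {
          "name": "repeat_masked_gene_prediction",
          "data_versions": {"genome": "example_assembly_v1.0", "protein_db": "example_db_2003_07"},
          "modules": [
              {"tool": "repeat_masker",  "params": {"species": "human"}},
              {"tool": "blastx",         "params": {"evalue": 1e-5, "db": "protein_db"}},
              {"tool": "gene_predictor", "params": {"model": "default"}},
          ],
      }

      # Serialising the protocol makes it easy to share and to replicate results later.
      print(json.dumps(protocol, indent=2))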

  2. The SOL Genomics Network. A Comparative Resource for Solanaceae Biology and Beyond1

    Science.gov (United States)

    Mueller, Lukas A.; Solow, Teri H.; Taylor, Nicolas; Skwarecki, Beth; Buels, Robert; Binns, John; Lin, Chenwei; Wright, Mark H.; Ahrens, Robert; Wang, Ying; Herbst, Evan V.; Keyder, Emil R.; Menda, Naama; Zamir, Dani; Tanksley, Steven D.

    2005-01-01

    The SOL Genomics Network (SGN; http://sgn.cornell.edu) is a rapidly evolving comparative resource for the plants of the Solanaceae family, which includes important crop and model plants such as potato (Solanum tuberosum), eggplant (Solanum melongena), pepper (Capsicum annuum), and tomato (Solanum lycopersicum). The aim of SGN is to relate these species to one another using a comparative genomics approach and to tie them to the other dicots through the fully sequenced genome of Arabidopsis (Arabidopsis thaliana). SGN currently houses map and marker data for Solanaceae species, a large expressed sequence tag collection with computationally derived unigene sets, an extensive database of phenotypic information for a mutagenized tomato population, and associated tools such as real-time quantitative trait loci. Recently, the International Solanaceae Project (SOL) was formed as an umbrella organization for Solanaceae research in over 30 countries to address important questions in plant biology. The first cornerstone of the SOL project is the sequencing of the entire euchromatic portion of the tomato genome. SGN is collaborating with other bioinformatics centers in building the bioinformatics infrastructure for the tomato sequencing project and implementing the bioinformatics strategy of the larger SOL project. The overarching goal of SGN is to make information available in an intuitive comparative format, thereby facilitating a systems approach to investigations into the basis of adaptation and phenotypic diversity in the Solanaceae family, other species in the Asterid clade such as coffee (Coffea arabica), Rubiaceae, and beyond. PMID:16010005

  3. What can bioinformatics do for Natural History museums?

    Directory of Open Access Journals (Sweden)

    Becerra, José María

    2003-06-01

    Full Text Available We propose the founding of a Natural History bioinformatics framework, which would solve one of the main problems in Natural History: data which is scattered around in many incompatible systems (not only computer systems, but also paper ones). This framework consists of computer resources (hardware and software), methodologies that ease the circulation of data, and staff expert in dealing with computers, who will develop software solutions to the problems encountered by naturalists. This system is organized in three layers: acquisition, data and analysis. Each layer is described, and an account of the elements that constitute it given.

  4. Automatic Discovery and Inferencing of Complex Bioinformatics Web Interfaces

    Energy Technology Data Exchange (ETDEWEB)

    Ngu, A; Rocco, D; Critchlow, T; Buttler, D

    2003-12-22

    The World Wide Web provides a vast resource to genomics researchers in the form of web-based access to distributed data sources--e.g. BLAST sequence homology search interfaces. However, the process for seeking the desired scientific information is still very tedious and frustrating. While there are several known servers on genomic data (e.g., GenBank, EMBL, NCBI) that are shared and accessed frequently, new data sources are created each day in laboratories all over the world. The sharing of these newly discovered genomics results is hindered by the lack of a common interface or data exchange mechanism. Moreover, the number of autonomous genomics sources and their rate of change outpace the speed at which they can be manually identified, meaning that the available data is not being utilized to its full potential. An automated system that can find, classify, describe and wrap new sources without tedious and low-level coding of source-specific wrappers is needed to help scientists access hundreds of dynamically changing bioinformatics web data sources through a single interface. A correct classification of any kind of Web data source must address both the capability of the source and the conversation/interaction semantics which is inherent in the design of the Web data source. In this paper, we propose an automatic approach to classify Web data sources that takes into account both the capability and the conversational semantics of the source. The ability to discover the interaction pattern of a Web source leads to increased accuracy in the classification process. At the same time, it facilitates the extraction of process semantics, which is necessary for the automatic generation of wrappers that can interact correctly with the sources.

  5. Bioinformatics: Cheap and robust method to explore biomaterial from Indonesia biodiversity

    Science.gov (United States)

    Widodo

    2015-02-01

    Indonesia has a huge amount of biodiversity, which may contain many biomaterials for pharmaceutical application. The potency of these resources should be explored to discover new drugs for human welfare. However, bioactive screening using conventional methods is very expensive and time-consuming. Therefore, we developed a bioinformatics-based methodology for screening the potential of natural resources. The method builds on the fact that organisms in the same taxon will have similar genes, metabolism and secondary metabolite products. We employ bioinformatics to explore the potency of biomaterials from Indonesian biodiversity by comparing species with well-known taxa containing active compounds, identified through published papers or chemical databases. We then analyze the drug-likeness, bioactivity and target proteins of the active compounds based on their molecular structure. The interactions of each target protein with other proteins in the cell are examined to determine the action mechanism of the active compounds at the cellular level, as well as to predict side effects and toxicity. Using this method, we succeeded in screening anti-cancer, immunomodulatory and anti-inflammatory agents from Indonesian biodiversity. For example, we found an anti-cancer candidate from a marine invertebrate: it was explored based on the compounds isolated from the invertebrate reported in published articles and databases, followed by identification of the protein target and molecular pathway analysis. The data suggested that the active compound of the invertebrate is able to kill cancer cells. We then collected and extracted the active compound from the invertebrate and examined its activity on a cancer cell line (MCF7). The MTT result showed that the methanol extract of the marine invertebrate was highly potent in killing MCF7 cells. Therefore, we concluded that bioinformatics is a cheap and robust way to explore bioactives from Indonesian biodiversity as a source of drugs and another
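
    One step of the workflow sketched above, drug-likeness analysis from molecular structure, can be illustrated with a hedged example using RDKit (an assumed dependency, not necessarily the software used by the authors) to apply Lipinski's rule of five to a compound given as a SMILES string; the SMILES shown is caffeine and serves only as a placeholder.

      from rdkit import Chem
      from rdkit.Chem import Descriptors, Lipinski

      def lipinski_pass(smiles):
          """Return True if the molecule violates at most one of Lipinski's rules."""
          mol = Chem.MolFromSmiles(smiles)
          violations = sum([
              Descriptors.MolWt(mol) > 500,
              Descriptors.MolLogP(mol) > 5,
              Lipinski.NumHDonors(mol) > 5,
              Lipinski.NumHAcceptors(mol) > 10,
          ])
          return violations <= 1

      # Caffeine as a placeholder compound
      print(lipinski_pass("CN1C=NC2=C1C(=O)N(C(=O)N2C)C"))  # expected: True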

  6. Vermont Natural Resources Atlas

    Data.gov (United States)

    Vermont Center for Geographic Information — The purpose of the Natural Resources Atlas is to provide geographic information about environmental features and sites that the Vermont Agency of Natural Resources...

  7. The Transition Path from Traditional Human Resources Management to Strategic Human Resources Management -- Based on the Human Resources Shared Service Center Model

    Institute of Scientific and Technical Information of China (English)

    解海美; 陈进

    2014-01-01

    As economic globalization and informatization accelerate, more and more enterprises are entering an era of large-scale, cross-regional development; the drawbacks of traditional human resource management have become apparent, and strategic human resource management urgently needs to be advanced. By comparing the traditional human resource management model with the human resources shared service center model, this paper shows that the latter is an effective path for the transition to strategic management, explains the operating mechanism of this model and the current progress of its practice, with a view to promoting the realization of strategic human resource management in enterprises.

  8. Integrating bioinformatics into senior high school: design principles and implications.

    Science.gov (United States)

    Machluf, Yossy; Yarden, Anat

    2013-09-01

    Bioinformatics is an integral part of modern life sciences. It has revolutionized and redefined how research is carried out and has had an enormous impact on biotechnology, medicine, agriculture and related areas. Yet, it is only rarely integrated into high school teaching and learning programs, playing almost no role in preparing the next generation of information-oriented citizens. Here, we describe the design principles of bioinformatics learning environments, including our own, that are aimed at introducing bioinformatics into senior high school curricula through engaging learners in scientifically authentic inquiry activities. We discuss the bioinformatics-related benefits and challenges that high school teachers and students face in the course of the implementation process, in light of previous studies and our own experience. Based on these lessons, we present a new approach for characterizing the questions embedded in bioinformatics teaching and learning units, based on three criteria: the type of domain-specific knowledge required to answer each question (declarative knowledge, procedural knowledge, strategic knowledge, situational knowledge), the scientific approach from which each question stems (biological, bioinformatics, a combination of the two) and the associated cognitive process dimension (remember, understand, apply, analyze, evaluate, create). We demonstrate the feasibility of this approach using a learning environment, which we developed for the high school level, and suggest some of its implications. This review sheds light on unique and critical characteristics related to broader integration of bioinformatics in secondary education, which are also relevant to the undergraduate level, and especially on curriculum design, development of suitable learning environments and teaching and learning processes.

  9. Application of bioinformatics in plant breeding

    NARCIS (Netherlands)

    Vassilev, D.; Leunissen, J.; Atanassov, A.; Nenov, A.; Dimov, G.

    2005-01-01

    The goal of plant genomics is to understand the genetic and molecular basis of all biological processes in plants that are relevant to the species. This understanding is fundamental to allow efficient exploitation of plants as biological resources in the development of new cultivars with improved qua

  10. Integrative genomic analysis by interoperation of bioinformatics tools in GenomeSpace

    Science.gov (United States)

    Thorvaldsdottir, Helga; Liefeld, Ted; Ocana, Marco; Borges-Rivera, Diego; Pochet, Nathalie; Robinson, James T.; Demchak, Barry; Hull, Tim; Ben-Artzi, Gil; Blankenberg, Daniel; Barber, Galt P.; Lee, Brian T.; Kuhn, Robert M.; Nekrutenko, Anton; Segal, Eran; Ideker, Trey; Reich, Michael; Regev, Aviv; Chang, Howard Y.; Mesirov, Jill P.

    2015-01-01

    Integrative analysis of multiple data types to address complex biomedical questions requires the use of multiple software tools in concert and remains an enormous challenge for most of the biomedical research community. Here we introduce GenomeSpace (http://www.genomespace.org), a cloud-based, cooperative community resource. Seeded as a collaboration of six of the most popular genomics analysis tools, GenomeSpace now supports the streamlined interaction of 20 bioinformatics tools and data resources. To facilitate the ability of non-programming users to leverage GenomeSpace in integrative analysis, it offers a growing set of ‘recipes’, short workflows involving a few tools and steps to guide investigators through high utility analysis tasks. PMID:26780094

  11. BioXSD: the common data-exchange format for everyday bioinformatics web services

    DEFF Research Database (Denmark)

    Kalas, M.; Puntervoll, P.; Joseph, A.;

    2010-01-01

    Motivation: The world-wide community of life scientists has access to a large number of public bioinformatics databases and tools, which are developed and deployed using diverse technologies and designs. More and more of the resources offer programmatic web-service interface. However, efficient use...... and defines syntax for biological sequences, sequence annotations, alignments and references to resources. We have adapted a set of web services to use BioXSD as the input and output format, and implemented a test-case workflow. This demonstrates that the approach is feasible and provides smooth...... interoperability. Semantics for BioXSD is provided by annotation with the EDAM ontology. We discuss in a separate section how BioXSD relates to other initiatives and approaches, including existing standards and the Semantic Web....

  12. Application of Bioinformatics and Systems Biology in Medicinal Plant Studies

    Institute of Scientific and Technical Information of China (English)

    DENG You-ping; AI Jun-mei; XIAO Pei-gen

    2010-01-01

    One important purpose of investigating medicinal plants is to understand the genes and enzymes that govern the biological metabolic processes that produce bioactive compounds. Genome-wide high-throughput technologies such as genomics, transcriptomics, proteomics and metabolomics can help reach that goal. Such technologies produce a vast amount of data, which desperately needs bioinformatics and systems biology to process, manage, distribute and understand. By dealing with the "omics" data, bioinformatics and systems biology can also help improve the quality of traditional medicinal materials, develop new approaches for the classification and authentication of medicinal plants, identify new active compounds, and cultivate medicinal plant species that tolerate harsh environmental conditions. In this review, the application of bioinformatics and systems biology in medicinal plants is briefly introduced.

  13. Approaches in integrative bioinformatics towards the virtual cell

    CERN Document Server

    Chen, Ming

    2014-01-01

    Approaches in Integrative Bioinformatics provides a basic introduction to biological information systems, as well as guidance for the computational analysis of systems biology. This book also covers a range of issues and methods that reveal the multitude of omics data integration types and the relevance that integrative bioinformatics has today. Topics include biological data integration and manipulation, modeling and simulation of metabolic networks, transcriptomics and phenomics, and virtual cell approaches, as well as a number of applications of network biology. It helps to illustrat

  14. Naturally selecting solutions: the use of genetic algorithms in bioinformatics.

    Science.gov (United States)

    Manning, Timmy; Sleator, Roy D; Walsh, Paul

    2013-01-01

    For decades, computer scientists have looked to nature for biologically inspired solutions to computational problems, ranging from robotic control to scheduling optimization. Paradoxically, as we move deeper into the post-genomics era, the reverse is occurring, as biologists and bioinformaticians look to computational techniques to solve a variety of biological problems. One of the most common biologically inspired techniques is the genetic algorithm (GA), which takes the Darwinian concept of natural selection as the driving force behind systems for solving real-world problems, including those in the bioinformatics domain. Herein, we provide an overview of genetic algorithms and survey some of the most recent applications of this approach to bioinformatics-based problems.
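
    For readers unfamiliar with the mechanics, the following self-contained sketch (a generic GA, not any specific application from the survey) evolves random DNA strings toward a hypothetical target motif using selection, crossover and mutation.

      import random

      TARGET = "ACGTGACCTTGA"   # hypothetical 'optimal' sequence
      ALPHABET = "ACGT"

      def fitness(ind):
          return sum(a == b for a, b in zip(ind, TARGET))

      def crossover(p1, p2):
          cut = random.randrange(1, len(TARGET))
          return p1[:cut] + p2[cut:]

      def mutate(ind, rate=0.05):
          return "".join(random.choice(ALPHABET) if random.random() < rate else base
                         for base in ind)

      population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(100)]
      for generation in range(200):
          # Selection: keep the fitter half of the population as parents
          parents = sorted(population, key=fitness, reverse=True)[:50]
          if fitness(parents[0]) == len(TARGET):
              break
          # Reproduction: cross over random pairs of parents and mutate the offspring
          population = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                                  for _ in range(50)]

      print(generation, parents[0], fitness(parents[0]))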

  15. Bioinformatic scaling of allosteric interactions in biomedical isozymes

    Science.gov (United States)

    Phillips, J. C.

    2016-09-01

    Allosteric (long-range) interactions can be surprisingly strong in proteins of biomedical interest. Here we use bioinformatic scaling to connect prior results on nonsteroidal anti-inflammatory drugs to promising new drugs that inhibit cancer cell metabolism. Many parallel features are apparent, which explain how even one amino acid mutation, remote from active sites, can alter medical results. The enzyme twins involved are cyclooxygenase (aspirin) and isocitrate dehydrogenase (IDH). The IDH results are accurate to 1% and are overdetermined by adjusting a single bioinformatic scaling parameter. It appears that the final stage in optimizing protein functionality may involve leveling of the hydrophobic limits of the arms of conformational hydrophilic hinges.

  16. High-performance computational solutions in protein bioinformatics

    CERN Document Server

    Mrozek, Dariusz

    2014-01-01

    Recent developments in computer science enable algorithms previously perceived as too time-consuming to now be efficiently used for applications in bioinformatics and life sciences. This work focuses on proteins and their structures, protein structure similarity searching at main representation levels and various techniques that can be used to accelerate similarity searches. Divided into four parts, the first part provides a formal model of 3D protein structures for functional genomics, comparative bioinformatics and molecular modeling. The second part focuses on the use of multithreading for
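
    A hedged and much-simplified sketch of the parallelization idea, using only Python's standard library to spread pairwise comparisons over several processes; the "similarity" here is a toy sequence-identity score standing in for the structural comparisons discussed in the book.

      from concurrent.futures import ProcessPoolExecutor
      from itertools import combinations

      # Toy stand-in for a structure/sequence comparison routine
      def identity(pair):
          (name_a, a), (name_b, b) = pair
          matches = sum(x == y for x, y in zip(a, b))
          return name_a, name_b, matches / max(len(a), len(b))

      proteins = {
          "p1": "MKTAYIAKQR", "p2": "MKTAYLAKQR",
          "p3": "MSTAYIAKQR", "p4": "MKTAYIAQQR",
      }

      if __name__ == "__main__":
          pairs = list(combinations(proteins.items(), 2))
          # Each pairwise comparison is independent, so the work parallelizes trivially
          with ProcessPoolExecutor() as pool:
              for name_a, name_b, score in pool.map(identity, pairs):
                  print(f"{name_a} vs {name_b}: {score:.2f}")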

  17. Incorporating bioinformatics into biological science education in Nigeria: prospects and challenges.

    Science.gov (United States)

    Ojo, O O; Omabe, M

    2011-06-01

    The urgency to process and analyze the deluge of data created by proteomics and genomics studies worldwide has caused bioinformatics to gain prominence and importance. However, its multidisciplinary nature has created a unique demand for specialist trained in both biology and computing. Several countries, in response to this challenge, have developed a number of manpower training programmes. This review presents a description of the meaning, scope, history and development of bioinformatics with focus on prospects and challenges facing bioinformatics education worldwide. The paper also provides an overview of attempts at the introduction of bioinformatics in Nigeria; describes the existing bioinformatics scenario in Nigeria and suggests strategies for effective bioinformatics education in Nigeria.

  18. Incorporating Genomics and Bioinformatics across the Life Sciences Curriculum

    Energy Technology Data Exchange (ETDEWEB)

    Ditty, Jayna L.; Kvaal, Christopher A.; Goodner, Brad; Freyermuth, Sharyn K.; Bailey, Cheryl; Britton, Robert A.; Gordon, Stuart G.; Heinhorst, Sabine; Reed, Kelynne; Xu, Zhaohui; Sanders-Lorenz, Erin R.; Axen, Seth; Kim, Edwin; Johns, Mitrick; Scott, Kathleen; Kerfeld, Cheryl A.

    2011-08-01

    Undergraduate life sciences education needs an overhaul, as clearly described in the National Research Council of the National Academies publication BIO 2010: Transforming Undergraduate Education for Future Research Biologists. Among BIO 2010's top recommendations is the need to involve students in working with real data and tools that reflect the nature of life sciences research in the 21st century. Education research studies support the importance of utilizing primary literature, designing and implementing experiments, and analyzing results in the context of a bona fide scientific question in cultivating the analytical skills necessary to become a scientist. Incorporating these basic scientific methodologies in undergraduate education leads to increased undergraduate and post-graduate retention in the sciences. Toward this end, many undergraduate teaching organizations offer training and suggestions for faculty to update and improve their teaching approaches to help students learn as scientists, through design and discovery (e.g., Council of Undergraduate Research [www.cur.org] and Project Kaleidoscope [www.pkal.org]). With the advent of genome sequencing and bioinformatics, many scientists now formulate biological questions and interpret research results in the context of genomic information. Just as the use of bioinformatic tools and databases changed the way scientists investigate problems, it must change how scientists teach to create new opportunities for students to gain experiences reflecting the influence of genomics, proteomics, and bioinformatics on modern life sciences research. Educators have responded by incorporating bioinformatics into diverse life science curricula. While these published exercises in, and guidelines for, bioinformatics curricula are helpful and inspirational, faculty new to the area of bioinformatics inevitably need training in the theoretical underpinnings of the algorithms. Moreover, effectively integrating bioinformatics

  19. Health Center Controlled Network

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Health Center Controlled Network (HCCN) tool is a locator tool designed to make data and information concerning HCCN resources more easily available to our...

  20. Bioinformatic approaches to interrogating vitamin D receptor signaling.

    Science.gov (United States)

    Campbell, Moray J

    2017-03-10

    Bioinformatics applies unbiased approaches to develop statistically-robust insight into health and disease. At the global, or "20,000 foot" view bioinformatic analyses of vitamin D receptor (NR1I1/VDR) signaling can measure where the VDR gene or protein exerts a genome-wide significant impact on biology; VDR is significantly implicated in bone biology and immune systems, but not in cancer. With a more VDR-centric, or "2000 foot" view, bioinformatic approaches can interrogate events downstream of VDR activity. Integrative approaches can combine VDR ChIP-Seq in cell systems where significant volumes of publically available data are available. For example, VDR ChIP-Seq studies can be combined with genome-wide association studies to reveal significant associations to immune phenotypes. Similarly, VDR ChIP-Seq can be combined with data from Cancer Genome Atlas (TCGA) to infer the impact of VDR target genes in cancer progression. Therefore, bioinformatic approaches can reveal what aspects of VDR downstream networks are significantly related to disease or phenotype.

  1. Robust enzyme design: bioinformatic tools for improved protein stability.

    Science.gov (United States)

    Suplatov, Dmitry; Voevodin, Vladimir; Švedas, Vytas

    2015-03-01

    The ability of proteins and enzymes to maintain a functionally active conformation under adverse environmental conditions is an important feature of biocatalysts, vaccines, and biopharmaceutical proteins. From an evolutionary perspective, robust stability of proteins improves their biological fitness and allows for further optimization. Viewed from an industrial perspective, enzyme stability is crucial for the practical application of enzymes under the required reaction conditions. In this review, we analyze bioinformatic-driven strategies that are used to predict structural changes that can be applied to wild type proteins in order to produce more stable variants. The most commonly employed techniques can be classified into stochastic approaches, empirical or systematic rational design strategies, and design of chimeric proteins. We conclude that bioinformatic analysis can be efficiently used to study large protein superfamilies systematically as well as to predict particular structural changes which increase enzyme stability. Evolution has created a diversity of protein properties that are encoded in genomic sequences and structural data. Bioinformatics has the power to uncover this evolutionary code and provide a reproducible selection of hotspots - key residues to be mutated in order to produce more stable and functionally diverse proteins and enzymes. Further development of systematic bioinformatic procedures is needed to organize and analyze sequences and structures of proteins within large superfamilies and to link them to function, as well as to provide knowledge-based predictions for experimental evaluation.

  2. An evaluation of ontology exchange languages for bioinformatics.

    Science.gov (United States)

    McEntire, R; Karp, P; Abernethy, N; Benton, D; Helt, G; DeJongh, M; Kent, R; Kosky, A; Lewis, S; Hodnett, D; Neumann, E; Olken, F; Pathak, D; Tarczy-Hornoch, P; Toldo, L; Topaloglou, T

    2000-01-01

    Ontologies are specifications of the concepts in a given field, and of the relationships among those concepts. The development of ontologies for molecular-biology information and the sharing of those ontologies within the bioinformatics community are central problems in bioinformatics. If the bioinformatics community is to share ontologies effectively, ontologies must be exchanged in a form that uses standardized syntax and semantics. This paper reports on an effort among the authors to evaluate alternative ontology-exchange languages, and to recommend one or more languages for use within the larger bioinformatics community. The study selected a set of candidate languages, and defined a set of capabilities that the ideal ontology-exchange language should satisfy. The study scored the languages according to the degree to which they satisfied each capability. In addition, the authors performed several ontology-exchange experiments with the two languages that received the highest scores: OML and Ontolingua. The result of those experiments, and the main conclusion of this study, was that the frame-based semantic model of Ontolingua is preferable to the conceptual graph model of OML, but that the XML-based syntax of OML is preferable to the Lisp-based syntax of Ontolingua.

  3. A Tool for Creating and Parallelizing Bioinformatics Pipelines

    Science.gov (United States)

    2007-06-01

    [Extraction fragments; recoverable content only] ... incorporated into InterPro (Mulder et al., 2005) ... PUMA2 (Maltsev et al., 2006) incorporates more than 20 ... "pipeline for protocol-based bioinformatics analysis," Genome Res., 13(8), pp. 1904-1915, 2003 ... Maltsev, N. and E. Glass, et al., "PUMA2--grid-based ..."

  4. Bioinformatics: Tools to accelerate population science and disease control research.

    Science.gov (United States)

    Forman, Michele R; Greene, Sarah M; Avis, Nancy E; Taplin, Stephen H; Courtney, Paul; Schad, Peter A; Hesse, Bradford W; Winn, Deborah M

    2010-06-01

    Population science and disease control researchers can benefit from a more proactive approach to applying bioinformatics tools for clinical and public health research. Bioinformatics utilizes principles of information sciences and technologies to transform vast, diverse, and complex life sciences data into a more coherent format for wider application. Bioinformatics provides the means to collect and process data, enhance data standardization and harmonization for scientific discovery, and merge disparate data sources. Achieving interoperability (i.e. the development of an informatics system that provides access to and use of data from different systems) will facilitate scientific explorations and careers and opportunities for interventions in population health. The National Cancer Institute's (NCI's) interoperable Cancer Biomedical Informatics Grid (caBIG) is one of a number of illustrative tools in this report that are being mined by population scientists. Tools are not all that is needed for progress. Challenges persist, including a lack of common data standards, proprietary barriers to data access, and difficulties pooling data from studies. Population scientists and informaticists are developing promising and innovative solutions to these barriers. The purpose of this paper is to describe how the application of bioinformatics systems can accelerate population health research across the continuum from prevention to detection, diagnosis, treatment, and outcome.

  5. BioRuby: Bioinformatics software for the Ruby programming language

    NARCIS (Netherlands)

    Goto, N.; Prins, J.C.P.; Nakao, M.; Bonnal, R.; Aerts, J.; Katayama, A.

    2010-01-01

    The BioRuby software toolkit contains a comprehensive set of free development tools and libraries for bioinformatics and molecular biology, written in the Ruby programming language. BioRuby has components for sequence analysis, pathway analysis, protein modelling and phylogenetic analysis; it suppor

  6. BioRuby : bioinformatics software for the Ruby programming language

    NARCIS (Netherlands)

    Goto, Naohisa; Prins, Pjotr; Nakao, Mitsuteru; Bonnal, Raoul; Aerts, Jan; Katayama, Toshiaki

    2010-01-01

    The BioRuby software toolkit contains a comprehensive set of free development tools and libraries for bioinformatics and molecular biology, written in the Ruby programming language. BioRuby has components for sequence analysis, pathway analysis, protein modelling and phylogenetic analysis; it suppor

  7. CROSSWORK for Glycans: Glycan Identification Through Mass Spectrometry and Bioinformatics

    DEFF Research Database (Denmark)

    Rasmussen, Morten; Thaysen-Andersen, Morten; Højrup, Peter

    We have developed "GLYCANthrope" - CROSSWORKS for glycans: a bioinformatics tool which assists in identifying N-linked glycosylated peptides as well as their glycan moieties from MS2 data of enzymatically digested glycoproteins. The program runs either as a stand-alone application or as a plug...

  8. Learning Genetics through an Authentic Research Simulation in Bioinformatics

    Science.gov (United States)

    Gelbart, Hadas; Yarden, Anat

    2006-01-01

    Following the rationale that learning is an active process of knowledge construction as well as enculturation into a community of experts, we developed a novel web-based learning environment in bioinformatics for high-school biology majors in Israel. The learning environment enables the learners to actively participate in a guided inquiry process…

  9. Hidden in the Middle: Culture, Value and Reward in Bioinformatics

    Science.gov (United States)

    Lewis, Jamie; Bartlett, Andrew; Atkinson, Paul

    2016-01-01

    Bioinformatics--the so-called shotgun marriage between biology and computer science--is an interdiscipline. Despite interdisciplinarity being seen as a virtue, for having the capacity to solve complex problems and foster innovation, it has the potential to place projects and people in anomalous categories. For example, valorised…

  10. Intrageneric Primer Design: Bringing Bioinformatics Tools to the Class

    Science.gov (United States)

    Lima, Andre O. S.; Garces, Sergio P. S.

    2006-01-01

    Bioinformatics is one of the fastest growing scientific areas over the last decade. It focuses on the use of informatics tools for the organization and analysis of biological data. An example of their importance is the availability nowadays of dozens of software programs for genomic and proteomic studies. Thus, there is a growing field (private…
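
    A small classroom-style sketch in the same spirit (not taken from the article): given a candidate primer sequence, compute its GC content and an approximate melting temperature with the Wallace rule, two of the basic checks used when designing primers; the thresholds are common rules of thumb, not values from the paper.

      def primer_stats(primer):
          primer = primer.upper()
          gc = primer.count("G") + primer.count("C")
          at = primer.count("A") + primer.count("T")
          gc_percent = 100.0 * gc / len(primer)
          tm_wallace = 2 * at + 4 * gc        # Wallace rule, reasonable for short primers
          return gc_percent, tm_wallace

      candidate = "ATGCGTACGTTAGCCTGAGT"       # hypothetical 20-mer primer
      gc_percent, tm = primer_stats(candidate)
      print(f"GC content: {gc_percent:.1f}%  Tm (Wallace): {tm} C")

      # Typical rule-of-thumb acceptance window
      print("acceptable" if 40 <= gc_percent <= 60 and 50 <= tm <= 65 else "revise primer")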

  11. Mathematics and evolutionary biology make bioinformatics education comprehensible.

    Science.gov (United States)

    Jungck, John R; Weisstein, Anton E

    2013-09-01

    The patterns of variation within a molecular sequence data set result from the interplay between population genetic, molecular evolutionary and macroevolutionary processes-the standard purview of evolutionary biologists. Elucidating these patterns, particularly for large data sets, requires an understanding of the structure, assumptions and limitations of the algorithms used by bioinformatics software-the domain of mathematicians and computer scientists. As a result, bioinformatics often suffers a 'two-culture' problem because of the lack of broad overlapping expertise between these two groups. Collaboration among specialists in different fields has greatly mitigated this problem among active bioinformaticians. However, science education researchers report that much of bioinformatics education does little to bridge the cultural divide, the curriculum too focused on solving narrow problems (e.g. interpreting pre-built phylogenetic trees) rather than on exploring broader ones (e.g. exploring alternative phylogenetic strategies for different kinds of data sets). Herein, we present an introduction to the mathematics of tree enumeration, tree construction, split decomposition and sequence alignment. We also introduce off-line downloadable software tools developed by the BioQUEST Curriculum Consortium to help students learn how to interpret and critically evaluate the results of standard bioinformatics analyses.
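
    To give a flavour of the tree-enumeration mathematics mentioned here, the short sketch below computes the standard counts of distinct unrooted and rooted binary (bifurcating) tree topologies for n taxa, (2n-5)!! and (2n-3)!! respectively; the output illustrates why exhaustive tree search quickly becomes infeasible as the number of taxa grows.

      def double_factorial(m):
          result = 1
          while m > 1:
              result *= m
              m -= 2
          return result

      def unrooted_trees(n):   # number of unrooted binary topologies for n >= 3 taxa
          return double_factorial(2 * n - 5)

      def rooted_trees(n):     # number of rooted binary topologies for n >= 2 taxa
          return double_factorial(2 * n - 3)

      for n in (4, 8, 12, 20):
          print(n, unrooted_trees(n), rooted_trees(n))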

  12. Anticipating Viral Species Jumps: Bioinformatics and Data Needs

    Science.gov (United States)

    2011-06-01

    [Extraction fragments; recoverable content only] ... with a function of propelling or steering the evolution of a gene, phenotypic trait or species (Prakash 2008) ... bioinformatics research and development ... platforms like SJOne ... Ebola viruses: there are five known species of Ebola virus: Bundibugyo, Cote d'Ivoire, Reston, Sudan and Zaire. The relative ...

  13. Bioinformatics Assisted Gene Discovery and Annotation of Human Genome

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    As the sequencing stage of the human genome project nears its end, work has begun on discovering novel genes from genome sequences and annotating their biological functions. Here we review the current major bioinformatics tools and technologies available for large-scale gene discovery and annotation from human genome sequences. Some ideas about possible future developments are also provided.

  14. WIWS: a protein structure bioinformatics Web service collection.

    NARCIS (Netherlands)

    Hekkelman, M.L.; Beek, T.A.H. te; Pettifer, S.R.; Thorne, D.; Attwood, T.K.; Vriend, G.

    2010-01-01

    The WHAT IF molecular-modelling and drug design program is widely distributed in the world of protein structure bioinformatics. Although originally designed as an interactive application, its highly modular design and inbuilt control language have recently enabled its deployment as a collection of p

  15. A Bioinformatic Approach to Inter Functional Interactions within Protein Sequences

    Science.gov (United States)

    2009-02-23

    [Extraction fragments; recoverable content only] Investigators: Geoffrey Webb, Prof James Whisstock, Dr Jianging Song, Mr Khalid Mahmood, Mr Cyril Reboul, Ms Wan Ting Kan. Publications: peer-reviewed manuscript by Khalid Mahmood, Jianging Song, Cyril Reboul, Wan Ting Kan, Geoffrey I. Webb and James C. Whisstock, to be submitted to BMC Bioinformatics.

  16. Mathematics and evolutionary biology make bioinformatics education comprehensible

    Science.gov (United States)

    Weisstein, Anton E.

    2013-01-01

    The patterns of variation within a molecular sequence data set result from the interplay between population genetic, molecular evolutionary and macroevolutionary processes—the standard purview of evolutionary biologists. Elucidating these patterns, particularly for large data sets, requires an understanding of the structure, assumptions and limitations of the algorithms used by bioinformatics software—the domain of mathematicians and computer scientists. As a result, bioinformatics often suffers a ‘two-culture’ problem because of the lack of broad overlapping expertise between these two groups. Collaboration among specialists in different fields has greatly mitigated this problem among active bioinformaticians. However, science education researchers report that much of bioinformatics education does little to bridge the cultural divide, the curriculum too focused on solving narrow problems (e.g. interpreting pre-built phylogenetic trees) rather than on exploring broader ones (e.g. exploring alternative phylogenetic strategies for different kinds of data sets). Herein, we present an introduction to the mathematics of tree enumeration, tree construction, split decomposition and sequence alignment. We also introduce off-line downloadable software tools developed by the BioQUEST Curriculum Consortium to help students learn how to interpret and critically evaluate the results of standard bioinformatics analyses. PMID:23821621

  17. An Abstract Description Approach to the Discovery and Classification of Bioinformatics Web Sources

    Energy Technology Data Exchange (ETDEWEB)

    Rocco, D; Critchlow, T J

    2003-05-01

    The World Wide Web provides an incredible resource to genomics researchers in the form of dynamic data sources--e.g. BLAST sequence homology search interfaces. The growth rate of these sources outpaces the speed at which they can be manually classified, meaning that the available data is not being utilized to its full potential. Existing research has not addressed the problems of automatically locating, classifying, and integrating classes of bioinformatics data sources. This paper presents an overview of a system for finding classes of bioinformatics data sources and integrating them behind a unified interface. We examine an approach to classifying these sources automatically that relies on an abstract description format: the service class description. This format allows a domain expert to describe the important features of an entire class of services without tying that description to any particular Web source. We present the features of this description format in the context of BLAST sources to show how the service class description relates to Web sources that are being described. We then show how a service class description can be used to classify an arbitrary Web source to determine if that source is an instance of the described service. To validate the effectiveness of this approach, we have constructed a prototype that can correctly classify approximately two-thirds of the BLAST sources we tested. We then examine these results, consider the factors that affect correct automatic classification, and discuss future work.
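
    The paper's service class description is an abstract format of its own; purely as a hypothetical illustration of the idea (not the authors' actual syntax), the sketch below encodes a few expected features of BLAST-like web forms as a Python dictionary and scores an arbitrary form description against it to decide whether the source looks like an instance of the class.

      # Hypothetical "service class description" for BLAST-like sequence search forms.
      BLAST_CLASS = {
          "required_inputs": {"sequence"},               # a free-text sequence field must exist
          "expected_inputs": {"program", "database", "expect"},
          "keywords": {"blast", "alignment", "homology"},
      }

      def classify(form_fields, page_text, description=BLAST_CLASS, threshold=0.6):
          """Return True if a web form looks like an instance of the service class."""
          fields = {f.lower() for f in form_fields}
          text = page_text.lower()
          if not description["required_inputs"] <= fields:
              return False
          field_score = len(description["expected_inputs"] & fields) / len(description["expected_inputs"])
          keyword_score = sum(k in text for k in description["keywords"]) / len(description["keywords"])
          return (field_score + keyword_score) / 2 >= threshold

      # Example: a form with sequence/program/database fields on a page mentioning BLAST
      print(classify({"sequence", "program", "database"}, "Nucleotide BLAST homology search"))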

  18. Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses.

    Science.gov (United States)

    Liu, Bo; Madduri, Ravi K; Sotomayor, Borja; Chard, Kyle; Lacinski, Lukasz; Dave, Utpal J; Li, Jianqiang; Liu, Chunchen; Foster, Ian T

    2014-06-01

    Due to the coming deluge of genome data, the need to store and process large-scale genome data, provide easy access to biomedical analysis tools, and share and retrieve data efficiently presents significant challenges. The variability in data volume results in variable computing and storage requirements; therefore, biomedical researchers are pursuing more reliable, dynamic and convenient methods for conducting sequencing analyses. This paper proposes a Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses, which enables reliable and highly scalable execution of sequencing analysis workflows in a fully automated manner. Our platform extends the existing Galaxy workflow system by adding data management capabilities for transferring large quantities of data efficiently and reliably (via Globus Transfer), domain-specific analysis tools preconfigured for immediate use by researchers (via user-specific tools integration), automatic deployment on the Cloud for on-demand resource allocation and pay-as-you-go pricing (via Globus Provision), a Cloud provisioning tool for auto-scaling (via the HTCondor scheduler), and support for validating the correctness of workflows (via semantic verification tools). Two bioinformatics workflow use cases as well as a performance evaluation are presented to validate the feasibility of the proposed approach.

  19. Bioinformatics analysis of two-component regulatory systems in Staphylococcus epidermidis

    Institute of Scientific and Technical Information of China (English)

    QIN Zhiqiang; ZHONG Yang; ZHANG Jian; HE Youyu; WU Yang; JIANG Juan; CHEN Jiemin; LUO Xiaomin; QU Di

    2004-01-01

    Sixteen pairs of two-component regulatory systems are identified in the genome of Staphylococcus epidermidis ATCC12228 strain, which is newly sequenced by our laboratory for Medical Molecular Virology and Chinese National Human Genome Center at Shanghai, by using bioinformatics analysis. Comparative analysis of the two-component regulatory systems in S. epidermidis and that of S. aureus and Bacillus subtilis shows that these systems may regulate some important biological functions, e.g. growth, biofilm formation, and expression of virulence factors in S. epidermidis. Two conserved domains, i.e. HATPase_c and REC domains, are found in all 16 pairs of two-component proteins. Homologous modelling analysis indicates that there are 4 similar HATPase_c domain structures of histidine kinases and 13 similar REC domain structures of response regulators, and there is one AMP-PNP binding pocket in the HATPase_c domain and three active aspartate residues in the REC domain. Preliminary experiment reveals that the bioinformatics analysis of the conserved domain structures in the two-component regulatory systems in S. epidermidis may provide useful information for discovery of potential drug target.

  20. Study on the Construction of a Regional Digital Education Resource Center: A Case Study of Zhuhai City

    Institute of Scientific and Technical Information of China (English)

    郭玲

    2014-01-01

    In the process of deepening the construction of Zhuhai as a learning city, relying on the community college to build a lifelong education system and integrating educational resources so as to achieve the integration of academic and non-academic education has become a strategic development goal of Zhuhai's education informatization. Based on an analysis of the current situation of digital educational resource construction in Zhuhai and the difficulties it faces, this paper reflects on a gradually service-oriented management approach for a regional digital education resource center and, in combination with local characteristics, explores application models for such a center.

  1. Contact Center Manager Administration (CCMA)

    Data.gov (United States)

    Social Security Administration — CCMA is the server that provides a browser-based tool for contact center administrators and supervisors. It is used to manage and configure contact center resources...

  2. Virginia Bioinformatics Institute offers fellowships for graduate work in transdisciplinary science

    OpenAIRE

    Bland, Susan

    2008-01-01

    The Virginia Bioinformatics Institute at Virginia Tech, in collaboration with Virginia Tech's Ph.D. program in genetics, bioinformatics, and computational biology, is providing substantial fellowships in support of graduate work in transdisciplinary team science.

  3. BioWarehouse: a bioinformatics database warehouse toolkit

    Directory of Open Access Journals (Sweden)

    Stringer-Calvert David WJ

    2006-03-01

    Background: This article addresses the problem of interoperation of heterogeneous bioinformatics databases. Results: We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL), but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and Java languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. Conclusion: BioWarehouse embodies significant progress on the database integration problem for bioinformatics.
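
    The cross-database SQL query described above (enzyme activities with no associated sequence) is not shown in this record; the fragment below is a minimal sketch of how such a query might be issued from Python against a MySQL warehouse. The table and column names, and the connection details, are assumptions for illustration rather than the published BioWarehouse schema; it assumes the mysql-connector-python driver is installed.

      # Sketch of a cross-dataset query against a BioWarehouse-style MySQL schema.
      # Table and column names below are illustrative assumptions only.
      import mysql.connector

      QUERY = """
      SELECT COUNT(DISTINCT r.ec_number)
      FROM   Reaction r
      LEFT JOIN EnzymaticReaction er ON er.reaction_id = r.id
      LEFT JOIN Protein p            ON p.id = er.protein_id
      LEFT JOIN Sequence s           ON s.protein_id = p.id
      WHERE  r.ec_number IS NOT NULL
        AND  s.id IS NULL   -- EC numbers with no sequence in any loaded database
      """

      def count_orphan_ec_numbers(host="localhost", user="biowh",
                                  password="", db="biowarehouse"):
          conn = mysql.connector.connect(host=host, user=user,
                                         password=password, database=db)
          try:
              cur = conn.cursor()
              cur.execute(QUERY)
              (count,) = cur.fetchone()
              return count
          finally:
              conn.close()

      if __name__ == "__main__":
          print("EC activities without sequences:", count_orphan_ec_numbers())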

  4. Elastic resource adjustment method for cloud computing data center%面向云计算数据中心的弹性资源调整方法

    Institute of Scientific and Technical Information of China (English)

    申京; 吴晨光; 郝洋; 殷波; 蔺艳斐

    2015-01-01

    To make resources purchase plan of service which has a variety of service quality requirements, this paper proposes an application performance oriented-cloud computing elastic resources adjustment method through the platform as a service and infrastructure as a service to sign agreement on allocation of resources based on Service-Level Agreement. Using the automatic scaling algorithm,this method adjusts virtual machine resources of load demand in the vertical level. In order to dynamically adjust allocation of resources to meet the needs of application service level,the cloud computing resources utilization rate is optimized. Simulation results are provided to show the effectiveness of the proposed method.%为了制定多种业务质量要求服务的资源购买方案,通过平台服务商与基础设施服务商之间签订基于服务等级协议的资源分配协议,提出一种面向应用性能的云计算弹性资源调整方法。该方法利用自动伸缩算法,在垂直层次上对负载需求的波动进行虚拟机资源调整,以实现动态调整分配资源量来满足应用的服务级别的需求,优化云计算资源利用率。最后通过仿真验证该算法的有效性。

  5. LEARNING HORMONE ACTION MECHANISMS WITH BIOINFORMATICS

    Directory of Open Access Journals (Sweden)

    João Carlos Sousa

    2007-05-01

    The ability to manage the constantly growing genetics information available on the internet is becoming crucial in biochemical education and medical practice. Therefore, developing students' skills in working with bioinformatics tools is a challenge for undergraduate courses in the molecular life sciences. The regulation of gene transcription by hormones and vitamins is a complex topic that influences all body systems. We describe a student-centered activity used in a multidisciplinary "Functional Organ System" course on the Endocrine System. After receiving, as teams, a nucleotide sequence of a hormone or vitamin-response element, students navigate internet databases to find the gene to which it belongs. Subsequently, students search for how the corresponding hormone/vitamin influences the expression of that particular gene and how a dysfunctional interaction might cause disease. This activity, proposed for 4 consecutive years to cohorts of 50-60 students/year enrolled in the 2nd year of our undergraduate medical degree, revealed that 90% of the students developed a better understanding of the usefulness of bioinformatics and that 98% intend to use these tools in the future. Since hormones and vitamins regulate genes of all body organ systems, this web-based activity successfully integrates whole-body physiology in the medical curriculum and can be of relevance to other courses on the molecular life sciences.

  6. Missing "Links" in Bioinformatics Education: Expanding Students' Conceptions of Bioinformatics Using a Biodiversity Database of Living and Fossil Reef Corals

    Science.gov (United States)

    Nehm, Ross H.; Budd, Ann F.

    2006-01-01

    NMITA is a reef coral biodiversity database that we use to introduce students to the expansive realm of bioinformatics beyond genetics. We introduce a series of lessons that have students use this database, thereby accessing real data that can be used to test hypotheses about biodiversity and evolution while targeting the "National Science …

  7. Introductory Bioinformatics Exercises Utilizing Hemoglobin and Chymotrypsin to Reinforce the Protein Sequence-Structure-Function Relationship

    Science.gov (United States)

    Inlow, Jennifer K.; Miller, Paige; Pittman, Bethany

    2007-01-01

    We describe two bioinformatics exercises intended for use in a computer laboratory setting in an upper-level undergraduate biochemistry course. To introduce students to bioinformatics, the exercises incorporate several commonly used bioinformatics tools, including BLAST, that are freely available online. The exercises build upon the students'…

  8. Instructional Media Initiatives: Focusing on the Educational Resources Center at Thirteen/wnet, New York, New York-- Slavery and the Making of America

    Science.gov (United States)

    Donlevy, Jim

    2005-01-01

    Slavery and the Making of America, a four-part series from PBS, is airing throughout the United States during February 2005. This landmark series examines the history of slavery in the United States and the significant role it played in shaping the development of the Nation. This article describes the series, including online resources, and…

  9. Household Living Arrangements and Economic Resources among Mexican Immigrant Families with Children. University of Kentucky Center for Poverty Research Discussion Paper Series, DP2010-10

    Science.gov (United States)

    Leach, Mark A.

    2010-01-01

    Using data from the 2000 Census, this study examines the relationship between household living arrangements and economic resources among Mexican immigrant families with children. I model separately the relationships between family income and household structure and proportion of total household income contributed and household structure. The…

  10. Swamp Works: A New Approach to Develop Space Mining and Resource Extraction Technologies at the National Aeronautics Space Administration (NASA) Kennedy Space Center (KSC)

    Science.gov (United States)

    Mueller, R. P.; Sibille, L.; Leucht, K.; Smith, J. D.; Townsend, I. I.; Nick, A. J.; Schuler, J. M.

    2015-01-01

    The first steps for In Situ Resource Utilization (ISRU) on target bodies such as the Moon, Mars and Near Earth Asteroids (NEA), and even comets, involve the same sequence of steps as in the terrestrial mining of resources. First, exploration, including prospecting, must occur, and then the resource must be acquired through excavation methods if it is of value. Subsequently a load, haul and dump sequence of events occurs, followed by processing of the resource in an ISRU plant, to produce useful commodities. While these technologies and related supporting operations are mature in terrestrial applications, they will be different in space since the environment and indigenous materials are different from those on Earth. In addition, the equipment must be highly automated, since for the majority of the production cycle time, there will be no humans present to assist or intervene. This space mining equipment must withstand a harsh environment which includes vacuum, radical temperature swing cycles, highly abrasive lofted dust, electrostatic effects, van der Waals forces effects, galactic cosmic radiation, solar particle events, high thermal gradients when spanning sunlight terminators, steep slopes into craters / lava tubes and cryogenic temperatures as low as 40 K in permanently shadowed regions. In addition the equipment must be tele-operated from Earth or a local base where the crew is sheltered. If the tele-operation occurs from Earth then significant communications latency effects mandate the use of autonomous control systems in the mining equipment. While this is an extremely challenging engineering design scenario, it is also an opportunity, since the technologies developed in this endeavor could be used in the next generations of terrestrial mining equipment, in order to mine deeper, more safely, more economically, and with a higher degree of flexibility. New space technologies could precipitate new mining solutions here on Earth. The NASA KSC Swamp Works is an innovation

  11. An implementation case study. Implementation of the Indian Health Service's Resource and Patient Management System Electronic Health Record in the ambulatory care setting at the Phoenix Indian Medical Center.

    Science.gov (United States)

    Dunnigan, Anthony; John, Karen; Scott, Andrea; Von Bibra, Lynda; Walling, Jeffrey

    2010-01-01

    The Phoenix Indian Medical Center (PIMC) has successfully implemented the Resource and Patient Management System Electronic Health Record (RPMS-EHR) in its Ambulatory Care departments. One-hundred and twenty-six providers use the system for essentially all elements of documentation, ordering, and coding. Implementation of one function at a time, in one clinical area at a time, allowed for focused training and support. Strong departmental leadership and the development of 'super-users' were key elements. Detailed assessments of each clinic prior to implementation were vital, resulting in optimal workstation utilization and a greater understanding of each clinic's unique flow. Each phase saw an increasing reluctance to revert to old paper processes. The success of this implementation has placed pressure on the remainder of the hospital to implement the RPMS-EHR, and has given the informatics team an increased awareness of what resources are required to achieve this result.

  12. Experimental Design and Bioinformatics Analysis for the Application of Metagenomics in Environmental Sciences and Biotechnology.

    Science.gov (United States)

    Ju, Feng; Zhang, Tong

    2015-11-01

    Recent advances in DNA sequencing technologies have prompted the widespread application of metagenomics for the investigation of novel bioresources (e.g., industrial enzymes and bioactive molecules) and unknown biohazards (e.g., pathogens and antibiotic resistance genes) in natural and engineered microbial systems across multiple disciplines. This review discusses the rigorous experimental design and sample preparation in the context of applying metagenomics in environmental sciences and biotechnology. Moreover, this review summarizes the principles, methodologies, and state-of-the-art bioinformatics procedures, tools and database resources for metagenomics applications and discusses two popular strategies (analysis of unassembled reads versus assembled contigs/draft genomes) for quantitative or qualitative insights of microbial community structure and functions. Overall, this review aims to facilitate more extensive application of metagenomics in the investigation of uncultured microorganisms, novel enzymes, microbe-environment interactions, and biohazards in biotechnological applications where microbial communities are engineered for bioenergy production, wastewater treatment, and bioremediation.

  13. The World-Wide Web: An Interface between Research and Teaching in Bioinformatics

    Directory of Open Access Journals (Sweden)

    James F. Aiton

    1994-01-01

    The rapid expansion occurring in World-Wide Web activity is beginning to make the concepts of ‘global hypermedia’ and ‘universal document readership’ realistic objectives of the new revolution in information technology. One consequence of this increase in usage is that educators and students are becoming more aware of the diversity of the knowledge base which can be accessed via the Internet. Although computerised databases and information services have long played a key role in bioinformatics, these same resources can also be used to provide core materials for teaching and learning. The large datasets and archives that have been compiled for biomedical research can be enhanced with the addition of a variety of multimedia elements (images, digital videos, animation, etc.). The use of this digitally stored information in structured and self-directed learning environments is likely to increase as activity across the World-Wide Web increases.

  14. Cloning and bioinformatic analysis of lovastatin biosynthesis regulatory gene lovE

    Institute of Scientific and Technical Information of China (English)

    HUANG Xin; LI Hao-ming

    2009-01-01

    Background: Lovastatin is an effective drug for the treatment of hyperlipidemia. This study aimed to clone the lovastatin biosynthesis regulatory gene lovE and analyze the structure and function of its encoded protein. Methods: According to the lovastatin synthase gene sequence from GenBank, primers were designed to amplify and clone the lovastatin biosynthesis regulatory gene lovE from Aspergillus terreus genomic DNA. Bioinformatic analysis of lovE and its encoded amino acid sequence was performed using internet resources and software such as DNAMAN. Results: The target fragment lovE, almost 1500 bp in length, was amplified from Aspergillus terreus genomic DNA, and the secondary and three-dimensional structures of the LovE protein were predicted. Conclusion: In the lovastatin biosynthesis process, lovE is a regulatory gene and the LovE protein is a GAL4-like transcriptional factor.

  15. TogoWS: integrated SOAP and REST APIs for interoperable bioinformatics Web services.

    Science.gov (United States)

    Katayama, Toshiaki; Nakao, Mitsuteru; Takagi, Toshihisa

    2010-07-01

    Web services have become widely used in bioinformatics analysis, but there exist incompatibilities in interfaces and data types, which prevent users from making full use of a combination of these services. Therefore, we have developed the TogoWS service to provide an integrated interface with advanced features. In the TogoWS REST (REpresentative State Transfer) API (application programming interface), we introduce a unified access method for major database resources through intuitive URIs that can be used to search, retrieve, parse and convert the database entries. The TogoWS SOAP API resolves compatibility issues found on the server and client-side SOAP implementations. The TogoWS service is freely available at: http://togows.dbcls.jp/.
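
    As a rough illustration of the URI-based access described above, the snippet below retrieves a database entry, a single field and a format conversion through TogoWS-style REST paths. The specific database name, entry identifier and field used here are assumptions to be checked against the current TogoWS documentation.

      # Minimal sketch of REST access to TogoWS-style URIs (paths assumed, verify
      # against the TogoWS documentation before use).
      import urllib.request

      BASE = "http://togows.dbcls.jp"

      def fetch(path):
          with urllib.request.urlopen(f"{BASE}{path}") as resp:
              return resp.read().decode("utf-8")

      # Retrieve a UniProt entry, one field of it, and a FASTA conversion
      # (P01308 is used here purely as an example accession).
      entry      = fetch("/entry/uniprot/P01308")
      definition = fetch("/entry/uniprot/P01308/definition")
      as_fasta   = fetch("/entry/uniprot/P01308.fasta")

      print(definition.strip())
      print(as_fasta.splitlines()[0])   # FASTA header line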

  16. Dynamic partial reconfiguration implementation of the SVM/KNN multi-classifier on FPGA for bioinformatics application.

    Science.gov (United States)

    Hussain, Hanaa M; Benkrid, Khaled; Seker, Huseyin

    2015-01-01

    Bioinformatics data tend to be highly dimensional in nature and thus impose significant computational demands. To resolve limitations of conventional computing methods, several alternative high-performance computing solutions, such as Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs), have been proposed. The latter have been shown to be efficient and high in performance. In recent years, FPGAs have been benefiting from the dynamic partial reconfiguration (DPR) feature, which adds the flexibility to alter specific regions within the chip. This work proposes combining the use of FPGAs and DPR to build a dynamic multi-classifier architecture that can be used in processing bioinformatics data. In bioinformatics, applying different classification algorithms to the same dataset is desirable in order to obtain comparable, more reliable and consensus decisions, but it can consume a long time when performed on a conventional PC. The DPR implementations of two common classifiers, namely support vector machines (SVMs) and K-nearest neighbor (KNN), are combined to form a multi-classifier FPGA architecture which can utilize a specific region of the FPGA to work as either an SVM or a KNN classifier. This multi-classifier DPR implementation achieved at least an ~8x reduction in reconfiguration time over the single non-DPR classifier implementation, and occupied less space and fewer hardware resources than having both classifiers. The proposed architecture can be extended to work as an ensemble classifier.

  17. FY02 CBNP Annual Report Input: Bioinformatics Support for CBNP Research and Deployments

    Energy Technology Data Exchange (ETDEWEB)

    Slezak, T; Wolinsky, M

    2002-10-31

    The events of FY01 dynamically reprogrammed the objectives of the CBNP bioinformatics support team, to meet rapidly changing Homeland Defense needs and requests from other agencies for assistance: use computational techniques to determine potential unique DNA signature candidates for microbial and viral pathogens of interest to CBNP researchers and to our collaborating partner agencies such as the Centers for Disease Control and Prevention (CDC), U.S. Department of Agriculture (USDA), Department of Defense (DOD), and Food and Drug Administration (FDA); develop effective electronic screening measures for DNA signatures to reduce the cost and time of wet-bench screening; build a comprehensive system for tracking the development and testing of DNA signatures; build a chain-of-custody sample tracking system for field deployment of the DNA signatures as part of the BASIS project; and provide computational tools for use by CBNP Biological Foundations researchers.

  18. Research on the Resource Demand Positioning of R&D Centers Embedded in Science Parks

    Institute of Scientific and Technical Information of China (English)

    杨震宁; 李德辉; 张皓博

    2015-01-01

    Strategic decisions about where high-tech enterprises locate their R&D centers have changed systematically: establishing an R&D center in an international science park has become an important way for Chinese high-tech enterprises to pursue independent innovation, develop new technologies, and chase the market. Based on a literature review, this paper establishes research hypotheses and designs a survey scale grounded in theory and practice. Using a questionnaire administered to R&D center managers of high-tech enterprises and to science park administrators across 15 science parks (457 valid samples), a research model is built. The research finds that science parks should supply four major kinds of innovation resources to high-tech enterprises' R&D centers: R&D technology resource support, cluster system support, R&D innovation culture support, and technical R&D hardware support, with R&D technology resource support being the most important. Science parks should also manage enterprises' R&D centers by category: enterprises of different sizes, industry segments and ownership, embedded in different types of science parks, have different resource demand positions. Based on these results, the paper offers management and policy suggestions for high-tech enterprises and science park managers.

  19. National Science Resources Center Project for Improving Science Teaching in Elementary Schools. Appendix A. School Systems With Exemplary Elementary Science Programs. Appendix B. Elementary Science Network

    Science.gov (United States)

    1988-12-01


  20. Quantum Bio-Informatics II From Quantum Information to Bio-Informatics

    Science.gov (United States)

    Accardi, L.; Freudenberg, Wolfgang; Ohya, Masanori

    2009-02-01

    / H. Kamimura -- Massive collection of full-length complementary DNA clones and microarray analyses: keys to rice transcriptome analysis / S. Kikuchi -- Changes of influenza A(H5) viruses by means of entropic chaos degree / K. Sato and M. Ohya -- Basics of genome sequence analysis in bioinformatics - its fundamental ideas and problems / T. Suzuki and S. Miyazaki -- A basic introduction to gene expression studies using microarray expression data analysis / D. Wanke and J. Kilian -- Integrating biological perspectives: a quantum leap for microarray expression analysis / D. Wanke ... [et al.].

  1. Statistical modelling in biostatistics and bioinformatics selected papers

    CERN Document Server

    Peng, Defen

    2014-01-01

    This book presents selected papers on statistical model development related mainly to the fields of Biostatistics and Bioinformatics. The coverage of the material falls squarely into the following categories: (a) Survival analysis and multivariate survival analysis, (b) Time series and longitudinal data analysis, (c) Statistical model development and (d) Applied statistical modelling. Innovations in statistical modelling are presented throughout each of the four areas, with some intriguing new ideas on hierarchical generalized non-linear models and on frailty models with structural dispersion, just to mention two examples. The contributors include distinguished international statisticians such as Philip Hougaard, John Hinde, Il Do Ha, Roger Payne and Alessandra Durio, among others, as well as promising newcomers. Some of the contributions have come from researchers working in the BIO-SI research programme on Biostatistics and Bioinformatics, centred on the Universities of Limerick and Galway in Ireland and fu...

  2. Bioinformatics Analysis of Zinc Transporter from Baoding Alfalfa

    Institute of Scientific and Technical Information of China (English)

    Haibo WANG; Junyun GUO

    2012-01-01

    [Objective] This study aimed to perform a bioinformatics analysis of a zinc transporter (ZnT) from Baoding alfalfa. [Method] Based on the amino acid sequence, the physical and chemical properties, hydrophilicity/hydrophobicity and secondary structure of ZnT from Baoding alfalfa were predicted by a series of bioinformatics software packages, and the transmembrane domains were predicted using different online tools. [Result] ZnT is a hydrophobic protein containing 408 amino acids with a theoretical pI of 5.94, and it has 7 potential transmembrane hydrophobic regions. In the secondary structure, α-helix (Hh) accounted for 48.04%, extended strand (Ee) for 9.56%, and random coil (Cc) for 42.40%, which accords with the characteristics of a transmembrane protein. [Conclusion] ZnT is a member of the CDF family, responsible for transporting Zn^2+ out of the cell membrane to reduce the concentration and toxicity of Zn^2+.

  3. Bioinformatics Data Distribution and Integration via Web Services and XML

    Institute of Scientific and Technical Information of China (English)

    Xiao Li; Yizheng Zhang

    2003-01-01

    It is widely recognized that the exchange, distribution, and integration of biological data are the keys to improving bioinformatics and genome biology in the post-genomic era. However, the problem of exchanging and integrating biological data has not been solved satisfactorily. The eXtensible Markup Language (XML) is rapidly spreading as an emerging standard for structuring documents to exchange and integrate data on the World Wide Web (WWW). Web services are the next generation of the WWW and are founded upon the open standards of the W3C (World Wide Web Consortium) and IETF (Internet Engineering Task Force). This paper presents XML and Web Services technologies and their use as an appropriate solution to the problem of bioinformatics data exchange and integration.
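
    As a small, generic illustration of the XML-based exchange the paper advocates, the sketch below serializes and parses a simple sequence record with the Python standard library. The element names used here are invented for the example and are not a published bioinformatics schema.

      # Generic sketch: serializing and parsing a small sequence record as XML.
      # The element names (<sequenceRecord>, <organism>, ...) are illustrative only.
      import xml.etree.ElementTree as ET

      def to_xml(record_id, organism, sequence):
          root = ET.Element("sequenceRecord", attrib={"id": record_id})
          ET.SubElement(root, "organism").text = organism
          ET.SubElement(root, "sequence").text = sequence
          return ET.tostring(root, encoding="unicode")

      def from_xml(xml_text):
          root = ET.fromstring(xml_text)
          return {
              "id": root.get("id"),
              "organism": root.findtext("organism"),
              "sequence": root.findtext("sequence"),
          }

      doc = to_xml("AB000001", "Oryza sativa", "ATGGCGTTC")
      print(doc)
      print(from_xml(doc))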

  4. Some statistics in bioinformatics: the fifth Armitage Lecture.

    Science.gov (United States)

    Solomon, Patricia J

    2009-10-15

    The spirit and content of the 2007 Armitage Lecture are presented in this paper. To begin, two areas of Peter Armitage's early work are distinguished: his pioneering research on sequential methods intended for use in medical trials, and the comparison of survival curves. Their influence on much later work is highlighted and motivates the proposal of several statistical 'truths' that are presented in the paper. The illustration of these truths demonstrates biology's new morphology and its dominance over statistics in this century. An overview of a recent proteomics ovarian cancer study is given as a warning of what can happen when bioinformatics meets epidemiology badly, in particular, when the study design is poor. A statistical bioinformatics success story is outlined, in which gene profiling is helping to identify novel genes and networks involved in mouse embryonic stem cell development. Some concluding thoughts are given.

  5. 2nd Colombian Congress on Computational Biology and Bioinformatics

    CERN Document Server

    Cristancho, Marco; Isaza, Gustavo; Pinzón, Andrés; Rodríguez, Juan

    2014-01-01

    This volume compiles accepted contributions for the 2nd Edition of the Colombian Computational Biology and Bioinformatics Congress CCBCOL, after a rigorous review process in which 54 papers were accepted for publication from 119 submitted contributions. Bioinformatics and Computational Biology are areas of knowledge that have emerged due to advances that have taken place in the Biological Sciences and its integration with Information Sciences. The expansion of projects involving the study of genomes has led the way in the production of vast amounts of sequence data which needs to be organized, analyzed and stored to understand phenomena associated with living organisms related to their evolution, behavior in different ecosystems, and the development of applications that can be derived from this analysis.

  6. State of the nation in data integration for bioinformatics.

    Science.gov (United States)

    Goble, Carole; Stevens, Robert

    2008-10-01

    Data integration is a perennial issue in bioinformatics, with many systems being developed and many technologies offered as a panacea for its resolution. The fact that it is still a problem indicates a persistence of underlying issues. Progress has been made, but we should ask "what lessons have been learnt?", and "what still needs to be done?" Semantic Web and Web 2.0 technologies are the latest to find traction within bioinformatics data integration. Now we can ask whether the Semantic Web, mashups, or their combination, have the potential to help. This paper is based on the opening invited talk by Carole Goble given at the Health Care and Life Sciences Data Integration for the Semantic Web Workshop collocated with WWW2007. The paper expands on that talk. We attempt to place some perspective on past efforts, highlight the reasons for success and failure, and indicate some pointers to the future.

  7. Rise and demise of bioinformatics? Promise and progress.

    Directory of Open Access Journals (Sweden)

    Christos A Ouzounis

    The field of bioinformatics and computational biology has gone through a number of transformations during the past 15 years, establishing itself as a key component of new biology. This spectacular growth has been challenged by a number of disruptive changes in science and technology. Despite the apparent fatigue of the linguistic use of the term itself, bioinformatics has grown perhaps to a point beyond recognition. We explore both historical aspects and future trends and argue that as the field expands, key questions remain unanswered and acquire new meaning while at the same time the range of applications is widening to cover an ever increasing number of biological disciplines. These trends appear to be pointing to a redefinition of certain objectives, milestones, and possibly the field itself.

  8. Architecture exploration of FPGA based accelerators for bioinformatics applications

    CERN Document Server

    Varma, B Sharat Chandra; Balakrishnan, M

    2016-01-01

    This book presents an evaluation methodology to design future FPGA fabrics incorporating hard embedded blocks (HEBs) to accelerate applications. This methodology will be useful for selection of blocks to be embedded into the fabric and for evaluating the performance gain that can be achieved by such an embedding. The authors illustrate the use of their methodology by studying the impact of HEBs on two important bioinformatics applications: protein docking and genome assembly. The book also explains how the respective HEBs are designed and how hardware implementation of the application is done using these HEBs. It shows that significant speedups can be achieved over pure software implementations by using such FPGA-based accelerators. The methodology presented in this book may also be used for designing HEBs for accelerating software implementations in other domains besides bioinformatics. This book will prove useful to students, researchers, and practicing engineers alike.

  9. WIWS: a protein structure bioinformatics Web service collection.

    Science.gov (United States)

    Hekkelman, M L; Te Beek, T A H; Pettifer, S R; Thorne, D; Attwood, T K; Vriend, G

    2010-07-01

    The WHAT IF molecular-modelling and drug design program is widely distributed in the world of protein structure bioinformatics. Although originally designed as an interactive application, its highly modular design and inbuilt control language have recently enabled its deployment as a collection of programmatically accessible web services. We report here a collection of WHAT IF-based protein structure bioinformatics web services: these relate to structure quality, the use of symmetry in crystal structures, structure correction and optimization, adding hydrogens and optimizing hydrogen bonds and a series of geometric calculations. The freely accessible web services are based on the industry standard WS-I profile and the EMBRACE technical guidelines, and are available via both REST and SOAP paradigms. The web services run on a dedicated computational cluster; their function and availability is monitored daily.

  10. Bioinformatics for whole-genome shotgun sequencing of microbial communities.

    Directory of Open Access Journals (Sweden)

    Kevin Chen

    2005-07-01

    The application of whole-genome shotgun sequencing to microbial communities represents a major development in metagenomics, the study of uncultured microbes via the tools of modern genomic analysis. In the past year, whole-genome shotgun sequencing projects of prokaryotic communities from an acid mine biofilm, the Sargasso Sea, Minnesota farm soil, three deep-sea whale falls, and deep-sea sediments have been reported, adding to previously published work on viral communities from marine and fecal samples. The interpretation of this new kind of data poses a wide variety of exciting and difficult bioinformatics problems. The aim of this review is to introduce the bioinformatics community to this emerging field by surveying existing techniques and promising new approaches for several of the most interesting of these computational problems.

  11. Bioinformatic prediction and functional characterization of human KIAA0100 gene

    OpenAIRE

    He Cui; Xi Lan; Shemin Lu; Fujun Zhang; Wanggang Zhang

    2017-01-01

    Our previous study demonstrated that the human KIAA0100 gene is a novel acute monocytic leukemia-associated antigen (MLAA) gene, but the functional characterization of human KIAA0100 has remained unknown to date. Here, firstly, bioinformatic prediction of the human KIAA0100 gene was carried out using online software; secondly, human KIAA0100 gene expression was downregulated by the clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated (Cas) 9 system in U937 cells...

  12. The web server of IBM's Bioinformatics and Pattern Discovery group

    OpenAIRE

    Huynh, Tien; Rigoutsos, Isidore; Parida, Laxmi; Platt, Daniel,; Shibuya, Tetsuo

    2003-01-01

    We herein present and discuss the services and content which are available on the web server of IBM's Bioinformatics and Pattern Discovery group. The server is operational around the clock and provides access to a variety of methods that have been published by the group's members and collaborators. The available tools correspond to applications ranging from the discovery of patterns in streams of events and the computation of multiple sequence alignments, to the discovery of genes in nucleic ...

  13. KBWS: an EMBOSS associated package for accessing bioinformatics web services

    Directory of Open Access Journals (Sweden)

    Tomita Masaru

    2011-04-01

    The availability of bioinformatics web-based services is rapidly proliferating, for their interoperability and ease of use. The next challenge is in the integration of these services in the form of workflows, and several projects are already underway, standardizing the syntax, semantics, and user interfaces. In order to deploy the advantages of web services with locally installed tools, here we describe a collection of proxy client tools for 42 major bioinformatics web services in the form of European Molecular Biology Open Software Suite (EMBOSS) UNIX command-line tools. EMBOSS provides sophisticated means for discoverability and interoperability for hundreds of tools, and our package, named the Keio Bioinformatics Web Service (KBWS), adds functionalities of local and multiple alignment of sequences, phylogenetic analyses, and prediction of cellular localization of proteins and RNA secondary structures. This software implemented in C is available under GPL from http://www.g-language.org/kbws/ and GitHub repository http://github.com/cory-ko/KBWS. Users can utilize the SOAP services implemented in Perl directly via WSDL file at http://soap.g-language.org/kbws.wsdl (RPC Encoded) and http://soap.g-language.org/kbws_dl.wsdl (Document/literal).

  14. p3d – Python module for structural bioinformatics

    Directory of Open Access Journals (Sweden)

    Fufezan Christian

    2009-08-01

    Background: High-throughput bioinformatic analysis tools are needed to mine the large amount of structural data via knowledge-based approaches. The development of such tools requires a robust interface to access the structural data in an easy way. For this the Python scripting language is the optimal choice since its philosophy is to write understandable source code. Results: p3d is an object-oriented Python module that adds a simple yet powerful interface to the Python interpreter to process and analyse three-dimensional protein structure files (PDB files). p3d's strength arises from the combination of (a) very fast spatial access to the structural data due to the implementation of a binary space partitioning (BSP) tree, (b) set theory and (c) functions that allow combining (a) and (b) and that use human-readable language in the search queries rather than complex computer language. All these factors combined facilitate the rapid development of bioinformatic tools that can perform quick and complex analyses of protein structures. Conclusion: p3d is the perfect tool to quickly develop tools for structural bioinformatics using the Python scripting language.
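
    The record does not show p3d's actual query syntax; the fragment below is only a sketch of the kind of human-readable, combined spatial/set query the abstract describes. The import path, constructor and query string are assumptions based on the abstract rather than the p3d documentation, and the PDB file name is hypothetical.

      # Sketch of a p3d-style structural query; import path, constructor and query
      # syntax are assumptions, not verified against the p3d documentation.
      from p3d import protein  # hypothetical import path

      pdb = protein.Protein("1crn.pdb")  # load a PDB file into the BSP-tree index

      # Human-readable set/spatial query: histidine atoms within 4 A of any
      # heteroatom (e.g. a ligand) -- the kind of query the module is built for.
      hits = pdb.query("resname HIS and within 4.0 of hetatm")

      for atom in hits:
          print(atom)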

  15. High-throughput protein analysis integrating bioinformatics and experimental assays.

    Science.gov (United States)

    del Val, Coral; Mehrle, Alexander; Falkenhahn, Mechthild; Seiler, Markus; Glatting, Karl-Heinz; Poustka, Annemarie; Suhai, Sandor; Wiemann, Stefan

    2004-01-01

    The wealth of transcript information that has been made publicly available in recent years requires the development of high-throughput functional genomics and proteomics approaches for its analysis. Such approaches need suitable data integration procedures and a high level of automation in order to gain maximum benefit from the results generated. We have designed an automatic pipeline to analyse annotated open reading frames (ORFs) stemming from full-length cDNAs produced mainly by the German cDNA Consortium. The ORFs are cloned into expression vectors for use in large-scale assays such as the determination of subcellular protein localization or kinase reaction specificity. Additionally, all identified ORFs undergo exhaustive bioinformatic analysis such as similarity searches, protein domain architecture determination and prediction of physicochemical characteristics and secondary structure, using a wide variety of bioinformatic methods in combination with the most up-to-date public databases (e.g. PRINTS, BLOCKS, INTERPRO, PROSITE, SWISSPROT). Data from experimental results and from the bioinformatic analysis are integrated and stored in a relational database (MS SQL-Server), which makes it possible for researchers to find answers to biological questions easily, thereby speeding up the selection of targets for further analysis. The designed pipeline constitutes a new automatic approach to obtaining and administrating relevant biological data from high-throughput investigations of cDNAs in order to systematically identify and characterize novel genes, as well as to comprehensively describe the function of the encoded proteins.

  16. Bioinformatics analysis and detection of gelatinase encoded gene in Lysinibacillussphaericus

    Science.gov (United States)

    Repin, Rul Aisyah Mat; Mutalib, Sahilah Abdul; Shahimi, Safiyyah; Khalid, Rozida Mohd.; Ayob, Mohd. Khan; Bakar, Mohd. Faizal Abu; Isa, Mohd Noor Mat

    2016-11-01

    In this study, we performed a bioinformatics analysis of the genome sequence of Lysinibacillus sphaericus (L. sphaericus) to identify genes encoding gelatinase. L. sphaericus was isolated from soil and produces gelatinases that are species-specific toward porcine and bovine gelatin, so this bacterium offers the possibility of producing enzymes specific to each species of meat. The main focus of this research was to identify the gelatinase-encoding genes of L. sphaericus using bioinformatics analysis of its partially sequenced genome. Three candidate genes were identified: gelatinase candidate gene 1 (P1), NODE_71_length_93919_cov_158.931839_21, 1563 base pairs (bp) in size with a 520-amino-acid sequence; gelatinase candidate gene 2 (P2), NODE_23_length_52851_cov_190.061386_17, 1776 bp in size with a 591-amino-acid sequence; and gelatinase candidate gene 3 (P3), NODE_106_length_32943_cov_169.147919_8, 1701 bp in size with a 566-amino-acid sequence. Three pairs of oligonucleotide primers, named F1/R1, F2/R2 and F3/R3, were designed to target short cDNA sequences by PCR. The amplicons reliably yielded products of 1563 bp for candidate gene P1 and 1701 bp for candidate gene P3. Bioinformatics analysis of L. sphaericus thus identified genes encoding gelatinase.

  17. An Integrative Study on Bioinformatics Computing Concepts, Issues and Problems

    Directory of Open Access Journals (Sweden)

    Muhammad Zakarya

    2011-11-01

    Bioinformatics is the combination of biological science and information technology (IT). The discipline covers the computational tools and techniques used to manage, examine and manipulate large sets of biological data. It also helps in the creation of databases to store and supervise biological data, the development of computer algorithms to find relations in these databases, and the use of computer tools for the analysis and understanding of biological information, including DNA, RNA and protein sequences, gene expression profiles, protein structures, and biochemical pathways. This paper takes an integrative approach: a solution to a problem in one discipline may also be a solution to a problem in a different discipline. For example, entropy, borrowed from the physical sciences, is a solution to many problems and issues in computer science. Another example is bioinformatics itself, where computing methods and applications are applied to biological information. This paper is an initial step in that direction and discusses the need to integrate multiple disciplines and sciences. Similarly, green chemistry has given birth to a new kind of computing, i.e. green computing. In later versions of this paper we will study biological fuel cells and discuss the development of a mobile battery that stays charged for its lifetime using the concepts of the biological fuel cell. Another issue that we are going to discuss in our series is brain tumor detection. This paper, to start with, is a review of bioinformatics (BI).

  18. KBWS: an EMBOSS associated package for accessing bioinformatics web services.

    Science.gov (United States)

    Oshita, Kazuki; Arakawa, Kazuharu; Tomita, Masaru

    2011-04-29

    The availability of bioinformatics web-based services is rapidly proliferating, for their interoperability and ease of use. The next challenge is in the integration of these services in the form of workflows, and several projects are already underway, standardizing the syntax, semantics, and user interfaces. In order to deploy the advantages of web services with locally installed tools, here we describe a collection of proxy client tools for 42 major bioinformatics web services in the form of European Molecular Biology Open Software Suite (EMBOSS) UNIX command-line tools. EMBOSS provides sophisticated means for discoverability and interoperability for hundreds of tools, and our package, named the Keio Bioinformatics Web Service (KBWS), adds functionalities of local and multiple alignment of sequences, phylogenetic analyses, and prediction of cellular localization of proteins and RNA secondary structures. This software implemented in C is available under GPL from http://www.g-language.org/kbws/ and GitHub repository http://github.com/cory-ko/KBWS. Users can utilize the SOAP services implemented in Perl directly via WSDL file at http://soap.g-language.org/kbws.wsdl (RPC Encoded) and http://soap.g-language.org/kbws_dl.wsdl (Document/literal).
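
    The individual SOAP operations are not listed in this record; one minimal way to inspect them, assuming the third-party zeep client library (pip install zeep) and that the WSDL cited above is reachable, is to load the WSDL and dump its contents:

      # Sketch: introspect the KBWS SOAP interface from its WSDL with zeep.
      # Operation names are discovered at runtime rather than assumed here.
      from zeep import Client

      WSDL = "http://soap.g-language.org/kbws_dl.wsdl"  # Document/literal WSDL from the record

      client = Client(WSDL)
      client.wsdl.dump()   # prints the services, bindings and operations the WSDL exposes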

  19. Simulation Analysis of Human Resource Requirements under "I" and "V" Routing Strategies in a Multi-skill Call Center

    Institute of Scientific and Technical Information of China (English)

    张星玥; 张艳霞; 霍佳震

    2012-01-01

    Implementing an appropriate skill-based routing strategy is important for call centers because it concerns their most expensive resource: human resources. We consider three skill-based routing strategies based on the "I" design and the "V" design, with two types of incoming calls, evaluated by service level and agent utilization. By developing and running a simulation model, we find that the "V" design with priority requires the fewest agents for the same service level, while achieving the same agent utilization as the other skill-based routing strategies.

  20. Study on the Current Situation of Nursing Human Resources in Community Health Service Centers of Kunming City

    Institute of Scientific and Technical Information of China (English)

    卢蓉; 姜润生; 黄丽; 徐正英; 沈海文; 张杪

    2013-01-01

    Objective: To investigate the current situation of nursing human resources in community health service centers of Kunming City. Methods: A self-designed questionnaire was administered to 109 community nurses from 11 community health service centers in Kunming. Results: The structure of the community nursing workforce was unreasonable, the performance appraisal system for community nurses was not well developed, and scientific and effective incentive measures were lacking. Conclusion: More attention should be paid to community nursing; the workforce structure should be adjusted, a sound management system established, and personnel training accelerated.

  1. EROS resources for the classroom

    Science.gov (United States)

    ,

    2015-01-01

    The U.S. Geological Survey (USGS) Earth Resources Observation and Science (EROS) Center has several educational resources that demonstrate how satellite imagery is used to understand our changing world.

  2. JobCenter: an open source, cross-platform, and distributed job queue management system optimized for scalability and versatility

    Directory of Open Access Journals (Sweden)

    Jaschob Daniel

    2012-07-01

    Background: Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. Results: JobCenter is a client–server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or “in the cloud”) and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. Conclusions: JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/.
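
    JobCenter's own Java API is not shown in this record; the loop below is only a schematic illustration, in Python, of the client-driven pattern the abstract describes (workers poll the server for work, so they can run behind firewalls or in the cloud and balance load themselves). The endpoint paths, server URL and JSON fields are invented for illustration and are not the JobCenter protocol.

      # Schematic client-driven worker: the worker initiates every exchange with
      # the server. Endpoint paths and job fields are hypothetical.
      import time, json, subprocess, urllib.request

      SERVER = "http://jobcenter.example.org/api"   # hypothetical server URL

      def http_json(path, payload=None):
          data = json.dumps(payload).encode() if payload is not None else None
          req = urllib.request.Request(SERVER + path, data=data,
                                       headers={"Content-Type": "application/json"})
          with urllib.request.urlopen(req) as resp:
              return json.load(resp)

      def worker_loop(job_types=("blast", "alignment")):
          while True:
              job = http_json("/claim", {"types": list(job_types)})  # ask for work
              if not job:                                            # nothing queued
                  time.sleep(30)
                  continue
              result = subprocess.run(job["command"], shell=True,
                                      capture_output=True, text=True)
              http_json("/complete", {"id": job["id"],
                                      "status": result.returncode,
                                      "output": result.stdout})

      if __name__ == "__main__":
          worker_loop()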

  3. Resources for International Partners

    Science.gov (United States)

    Learn about NCI's Center for Global Health, which facilitates global collaboration by leveraging research resources with U.S. government agencies, foreign governments, non-government organizations, and pharmaceutical and biotechnology companies.

  4. Entangled microwaves as a resource for entangling spatially separate solid-state qubits: Superconducting qubits, nitrogen-vacancy centers, and magnetic molecules

    Science.gov (United States)

    Gómez, Angela Viviana; Rodríguez, Ferney Javier; Quiroga, Luis; García-Ripoll, Juan José

    2016-06-01

    Quantum correlations present in a broadband two-line squeezed microwave state can induce entanglement in a spatially separated bipartite system consisting of either two single qubits or two-qubit ensembles. By using an appropriate master equation for a bipartite quantum system in contact with two separate but entangled baths, the generating entanglement process in spatially separated quantum systems is thoroughly characterized. Decoherence thermal effects on the entanglement transfer are also discussed. Our results provide evidence that this entanglement transfer by dissipation is feasible, yielding to a steady-state amount of entanglement in the bipartite quantum system which can be optimized for a wide range of realistic physical systems that include state-of-the-art experiments with nitrogen-vacancy centers in diamond, superconducting qubits, or even magnetic molecules embedded in a crystalline matrix.

  5. [Factors affecting the adoption of ICT tools in experiments with bioinformatics in biopharmaceutical organizations: a case study in the Brazilian Cancer Institute].

    Science.gov (United States)

    Pitassi, Claudio; Gonçalves, Antonio Augusto; Moreno Júnior, Valter de Assis

    2014-01-01

    The scope of this article is to identify and analyze the factors that influence the adoption of ICT tools in experiments with bioinformatics at the Brazilian Cancer Institute (INCA). It involves a descriptive and exploratory qualitative field study. Evidence was collected mainly based on in-depth interviews with the management team at the Research Center and the IT Division. The answers were analyzed using the categorical content method. The categories were selected from the scientific literature and consolidated in the Technology-Organization-Environment (TOE) framework created for this study. The model proposed made it possible to demonstrate how the factors selected impacted INCA´s adoption of bioinformatics systems and tools, contributing to the investigation of two critical areas for the development of the health industry in Brazil, namely technological innovation and bioinformatics. Based on the evidence collected, a research question was posed: to what extent can the alignment of the factors related to the adoption of ICT tools in experiments with bioinformatics increase the innovation capacity of a Brazilian biopharmaceutical organization?

  6. Building a Medical Imaging Resource Center Based on PACS to Improve the Teaching of Medical Imaging

    Institute of Scientific and Technical Information of China (English)

    王世威; 姜慧萍; 韩浙; 许茂盛

    2013-01-01

    [Purpose] To build a digital medical imaging teaching resource center based on the picture archiving and communication system (PACS) and improve the quality of medical imaging teaching. [Methods] An electronic teaching file (ETF) generation module was designed and integrated into the PACS report workstation; an ETF server was configured and an ETF database established. Through an interface module interacting with PACS, imaging electronic teaching files were produced, and the massive digital imaging data in the hospital PACS were then used to build the digital medical imaging teaching resource center. [Results] Using the ETF generation module integrated into the PACS report workstation, typical cases and cases of interest could be selected and, after simple processing, quickly and automatically turned into electronic teaching files; the digital medical imaging teaching resource center was built successfully. [Conclusion] A digital medical imaging teaching resource center can be built successfully based on PACS, maximizing the sharing of medical imaging teaching resources, changing the teaching model, and greatly improving the quality of medical imaging teaching.

  7. Computing Resource Virtualization in the Next-Generation Data Center of Shenhua Group

    Institute of Scientific and Technical Information of China (English)

    孟君; 张延生; 狄广义

    2014-01-01

    Against the background of the construction of Shenhua Group's next-generation enterprise-level data center, this article comprehensively reviews, analyzes and substantiates the outstanding problems of the traditional data center with respect to computing resource arrangement. It further inherits and improves the concept of a computing resource pool with virtualization technology at its core, and on this basis plans, designs and constructs the basic operating platform, consisting of a VMware vSphere-based application resource pool and an Oracle Exadata database-machine resource pool, for more than 50 enterprise-level core business applications (ERP, CRM, SRM, etc.) of Shenhua Group. Operational practice shows that, after the introduction of the resource pools, the Shenhua data center has performed well on several measures, such as server consolidation ratio, resource delivery mode, the "five-high" KPIs, and architectural flexibility, as well as IT cost control and energy saving and emission reduction, fully meeting the design expectations. This not only provides robust support for reaching Shenhua's strategic goals of scientific development, re-engineering Shenhua and doubling economic output in five years, but can also serve as an important reference for large-scale enterprise informatization, with promotional value for the industry.

  8. Analysis of the Current Situation of Human Resources in Centers for Disease Control and Prevention Nationwide

    Institute of Scientific and Technical Information of China (English)

    杨洋; 王松旺; 张英杰; 傅罡

    2013-01-01

    Objective: To analyze the current status of human resources in centers for disease control and prevention across the country, so as to provide a basis for making scientific human resource plans. Methods: Data such as the number, age, years of working, education background, technical post title, and profession of personnel in centers for disease control and prevention at the provincial, municipal and county (district) levels were collected from the basic information system of the Chinese Center for Disease Control and Prevention and analyzed with Excel. Results: Personnel aged 35 to 44 accounted for the largest proportion of the workforce. Years of working were mainly between 20 and 29. Education background was mainly junior college and university, followed by technical secondary school. Professional titles were concentrated at the middle and primary levels, and the scope of practice was mainly distributed across the five major public health fields, communicable disease control, planned immunization and health inspection. Conclusion: Governments at all levels should strengthen the personnel construction of centers for disease control and prevention, improve the quality of professional personnel and make appropriate human resource plans for centers for disease control and prevention.

  9. Analysis of Human Resources of the Chinese Centers for Disease Control and Prevention

    Institute of Scientific and Technical Information of China (English)

    杨洋; 王松旺; 张英杰; 傅罡

    2013-01-01

    Objective: To analyze the status of human resources in the centers for disease control and prevention, so as to provide a foundation for making scientific human resource plans. Methods: Data on the number, age, years of working, education background, title of technical post and profession of personnel in the centers for disease control and prevention at the province, city and county levels were collected through the basic information system of the Chinese centers for disease control and prevention and analyzed with Excel. Results: Personnel aged 35 to 44 accounted for the largest proportion. Years of working were mainly from 20 to 29; education background was mostly junior college and university, with technical secondary school next; professional titles were mainly at the middle and primary levels; and the scope of practice was mainly in the five public health fields, communicable disease control, planned immunization and health inspection. Conclusion: Governments at all levels should strengthen the personnel construction of centers for disease control and prevention, improve the quality of professional personnel and make appropriate human resource plans for these centers.

  10. Mortality among HIV-Infected Patients in Resource Limited Settings: A Case Controlled Analysis of Inpatients at a Community Care Center

    Directory of Open Access Journals (Sweden)

    Nirmala Rajagopalan

    2009-01-01

    Full Text Available Problem statement: Despite massive national efforts to scale up antiretroviral therapy (ART) access in India since 2004, the AIDS death rate was 17.2 per 100,000 persons during 2003-2005. In the era of HAART in resource-poor settings, it is imperative to understand and address the causes of AIDS-related mortality. This collaborative study aimed at defining the predictors of mortality among people living with HIV/AIDS (PLHA) admitted during 2003-2005 to the Freedom Foundation (FF) Care and Support facility, Bangalore, India. Approach: Fifty consecutively selected HIV-infected patients who died during the study period and 50 HIV-infected patients matched by age, gender, route of transmission, nutrition status and stage of disease who survived at least 12 months post-ART were included in this study. The impact on mortality of factors such as hemoglobin, CD4+ T lymphocyte counts, weight loss and opportunistic infections (OIs) was studied. Statistical analyses were done by Chi-square, Fisher's exact test, Kaplan-Meier and multivariate logistic regression. Results: Recurrent diarrhea was a significant risk factor for mortality (OR = 12.25, p = 0.004), followed by a diagnosis of pulmonary tuberculosis (TB) at first admission (OR = 4.86), while TB in general also negatively impacted survival (p = 0.002). Though not statistically significant, Pneumocystis carinii pneumonia, cryptococcal meningitis and toxoplasmosis also negatively affected survival. Mortality was high among those not on HAART (81%), while it was significantly reduced (28%) among those on HAART (p …). Conclusion: Interventions that facilitate early OI diagnosis and treatment, especially of diarrhea and TB, may reduce mortality in HIV. HAART alone, without proper OI management and nutrition, did not prevent mortality among PLHA. In resource-poor settings, it becomes imperative to focus on low-cost tools and increased capacity building along with regular clinical follow-up for diagnosis and early treatment of
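
    The case-control comparisons reported here rest on standard 2x2 contingency tests. The sketch below shows how an odds ratio and Fisher's exact p-value of the kind quoted (e.g. for recurrent diarrhea) can be computed with scipy; the counts are made up for illustration and are not the study's data.

        from scipy.stats import fisher_exact

        # Rows: exposed (e.g. recurrent diarrhea) / not exposed.
        # Columns: died / survived. Counts are invented for illustration.
        table = [[20, 5],
                 [30, 45]]

        odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
        print(f"OR = {odds_ratio:.2f}, p = {p_value:.4f}")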

  11. Bioinformatics in microbial biotechnology – a mini review

    Directory of Open Access Journals (Sweden)

    Bansal Arvind K

    2005-06-01

    Full Text Available Abstract The revolutionary growth in computation speed and memory storage capability has fueled a new era in the analysis of biological data. Hundreds of microbial genomes and many eukaryotic genomes, including a cleaner draft of the human genome, have been sequenced, raising the expectation of better control of microorganisms. The goals are as lofty as the development of rational drugs and antimicrobial agents, development of new enhanced bacterial strains for bioremediation and pollution control, development of better and easy-to-administer vaccines, the development of protein biomarkers for various bacterial diseases, and better understanding of host-bacteria interaction to prevent bacterial infections. In the last decade the development of many new bioinformatics techniques and integrated databases has facilitated the realization of these goals. Current research in bioinformatics can be classified into: (i) genomics – sequencing and comparative study of genomes to identify gene and genome functionality, (ii) proteomics – identification and characterization of protein-related properties and reconstruction of metabolic and regulatory pathways, (iii) cell visualization and simulation to study and model cell behavior, and (iv) application to the development of drugs and anti-microbial agents. In this article, we will focus on the techniques and their limitations in genomics and proteomics. Bioinformatics research can be classified under three major approaches: (1) analysis based upon the available experimental wet-lab data, (2) the use of mathematical modeling to derive new information, and (3) an integrated approach that combines search techniques with mathematical modeling. The major impact of bioinformatics research has been to automate genome sequencing, the development of integrated genomics and proteomics databases, genome comparisons to identify genome function, the derivation of metabolic pathways, gene

  12. R&D Management Center for Energy and Resources: Trend of expanding the use of localization in waste-incineration heat, solar power generation, etc

    Energy Technology Data Exchange (ETDEWEB)

    Lee, I.Y. [R and D Management Center for Energy and Resources, Seoul (Korea, Republic of)

    1998-03-01

    Restructuring has recently been carried out throughout all sectors of society to overcome the IMF management system and enhance national development. The situation is more serious in the energy sector: Korea spent USD 27.2 billion on energy imports in 1997, equivalent to 18% of its total imports. While several policies are being carried out to improve this, they can largely be grouped into restructuring toward an energy-saving industrial structure, austerity resulting from energy use rationalization and efficiency improvements, and the development of energy-saving and new and renewable energy technologies. But considering energy problems together with environmental issues, we cannot overemphasize the importance of developing and expanding the use of new and renewable energy technology, which hardly causes any secondary environmental pollution such as the generation of carbon dioxide while using domestic natural resources. The contents and current state of new and renewable energy technology development, as well as development results and cases of utilization, are introduced; Korea established and carried out the law for promoting new and renewable energy technology development at the end of the 1980s under the impact of the first and second oil crises.

  13. The bioinformatics of microarrays to study cancer: Advantages and disadvantages

    Science.gov (United States)

    Rodríguez-Segura, M. A.; Godina-Nava, J. J.; Villa-Treviño, S.

    2012-10-01

    Microarrays are devices designed to analyze the simultaneous expression of thousands of genes. However, the process adds noise to the information at each stage of the study. Analyzing these thousands of data points requires the use of bioinformatics tools. The traditional analysis begins by normalizing the data, but the results obtained depend strongly on how the study is conducted. This shows the need to develop new strategies for analyzing microarrays. Liver tissue taken from an animal model in which cancer is chemically induced is used as an example.
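
    As the abstract notes, downstream results depend strongly on how normalization is carried out. The sketch below shows one common choice, quantile normalization, applied to a toy genes-by-arrays matrix with invented values; it illustrates the step being discussed, not the authors' procedure.

        import numpy as np

        def quantile_normalize(expr: np.ndarray) -> np.ndarray:
            """Quantile-normalize a genes x arrays expression matrix."""
            ranks = np.argsort(np.argsort(expr, axis=0), axis=0)   # per-array ranks
            reference = np.sort(expr, axis=0).mean(axis=1)         # mean of sorted columns
            return reference[ranks]                                # map ranks to reference

        # Toy matrix: 5 genes measured on 3 arrays (values invented).
        expr = np.array([[5.0, 4.0, 3.0],
                         [2.0, 1.0, 4.0],
                         [3.0, 4.0, 6.0],
                         [4.0, 2.0, 8.0],
                         [1.0, 3.0, 1.0]])
        print(quantile_normalize(expr))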

  14. Biophysics and bioinformatics of transcription regulation in bacteria and bacteriophages

    Science.gov (United States)

    Djordjevic, Marko

    2005-11-01

    Due to rapid accumulation of biological data, bioinformatics has become a very important branch of biological research. In this thesis, we develop novel bioinformatic approaches and aid design of biological experiments by using ideas and methods from statistical physics. Identification of transcription factor binding sites within the regulatory segments of genomic DNA is an important step towards understanding of the regulatory circuits that control expression of genes. We propose a novel, biophysics based algorithm, for the supervised detection of transcription factor (TF) binding sites. The method classifies potential binding sites by explicitly estimating the sequence-specific binding energy and the chemical potential of a given TF. In contrast with the widely used information theory based weight matrix method, our approach correctly incorporates saturation in the transcription factor/DNA binding probability. This results in a significant reduction in the number of expected false positives, and in the explicit appearance---and determination---of a binding threshold. The new method was used to identify likely genomic binding sites for the Escherichia coli TFs, and to examine the relationship between TF binding specificity and degree of pleiotropy (number of regulatory targets). We next address how parameters of protein-DNA interactions can be obtained from data on protein binding to random oligos under controlled conditions (SELEX experiment data). We show that 'robust' generation of an appropriate data set is achieved by a suitable modification of the standard SELEX procedure, and propose a novel bioinformatic algorithm for analysis of such data. Finally, we use quantitative data analysis, bioinformatic methods and kinetic modeling to analyze gene expression strategies of bacterial viruses. We study bacteriophage Xp10 that infects rice pathogen Xanthomonas oryzae. Xp10 is an unusual bacteriophage, which has morphology and genome organization that most closely
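
    The saturating binding probability described here can be expressed as a Fermi-like function of a sequence-dependent binding energy and a chemical potential, p = 1/(1 + exp(E(s) - mu)) in units of kT. The sketch below illustrates that idea with an invented position-specific energy matrix and chemical potential; it is not the thesis's trained model.

        import numpy as np

        ALPHABET = "ACGT"
        # Invented position-specific binding energies (rows = positions,
        # columns = A, C, G, T), in units of kT; 0 marks the preferred base.
        energy_matrix = np.array([
            [0.0, 1.2, 1.5, 0.8],
            [1.0, 0.0, 1.8, 1.3],
            [0.9, 1.1, 0.0, 1.4],
            [1.6, 0.7, 1.2, 0.0],
        ])
        mu = 2.0  # chemical potential of the TF, also invented

        def binding_probability(site: str) -> float:
            # Sum the per-position energies, then apply the saturating form.
            energy = sum(energy_matrix[i, ALPHABET.index(base)]
                         for i, base in enumerate(site))
            return 1.0 / (1.0 + np.exp(energy - mu))

        for site in ["ACGT", "TTTT"]:
            print(site, round(binding_probability(site), 3))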

  15. Bioinformatics pipeline for functional identification and characterization of proteins

    Science.gov (United States)

    Skarzyńska, Agnieszka; Pawełkowicz, Magdalena; Krzywkowski, Tomasz; Świerkula, Katarzyna; Pląder, Wojciech; Przybecki, Zbigniew

    2015-09-01

    The new sequencing methods, called next-generation sequencing, give an opportunity to obtain vast amounts of data in a short time. These data require structural and functional annotation. Functional identification and characterization of predicted proteins can be done by in silico approaches, thanks to the numerous computational tools available nowadays. However, there is a need to confirm the results of protein function prediction by using different programs and comparing the results, or to confirm them experimentally. Here we present a bioinformatics pipeline for the structural and functional annotation of proteins.
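
    A typical step in such a pipeline is a homology search of the predicted proteins against a reference database, followed by parsing of the hits. The sketch below illustrates that step with NCBI BLAST+ (blastp) and its tabular output; the file names, database name and e-value cutoff are placeholders, blastp and a formatted database are assumed to be installed, and this is not the authors' pipeline.

        import csv
        import subprocess

        # Run blastp against a reference database (placeholder names).
        subprocess.run(
            ["blastp",
             "-query", "predicted_proteins.faa",
             "-db", "reference_db",
             "-evalue", "1e-5",
             "-outfmt", "6",        # tabular: qseqid sseqid pident ... evalue bitscore
             "-out", "blast_hits.tsv"],
            check=True,
        )

        # Keep the top-scoring hit per query protein.
        best_hit = {}
        with open("blast_hits.tsv") as handle:
            for row in csv.reader(handle, delimiter="\t"):
                query, subject, bitscore = row[0], row[1], float(row[11])
                if query not in best_hit or bitscore > best_hit[query][1]:
                    best_hit[query] = (subject, bitscore)

        for query, (subject, score) in best_hit.items():
            print(f"{query}\t{subject}\t{score}")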

  16. Bioinformatics Tools for the Discovery of New Nonribosomal Peptides

    DEFF Research Database (Denmark)

    Leclère, Valérie; Weber, Tilmann; Jacques, Philippe

    2016-01-01

    This chapter helps in the use of bioinformatics tools relevant to the discovery of new nonribosomal peptides (NRPs) produced by microorganisms. The strategy described can be applied to draft or fully assembled genome sequences. It relies on the identification of the synthetase genes (…). The three-dimensional structure of the peptides can be compared with the structural patterns of all known NRPs. The presented workflow leads to an efficient and rapid screening of genomic data generated by high-throughput technologies. The exploration of such sequenced genomes may lead to the discovery of new drugs (…).

  17. Resources for Teaching Astronomy.

    Science.gov (United States)

    Grafton, Teresa; Suggett, Martin

    1991-01-01

    Resources that are available for teachers presenting astronomy in the National Curriculum are listed. Included are societies and organizations, resource centers and places to visit, planetaria, telescopes and binoculars, planispheres, star charts, night sky diaries, equipment, audiovisual materials, computer software, books, and magazines. (KR)

  18. Human Resource Construction

    Institute of Scientific and Technical Information of China (English)

    2015-01-01

    Centering on the strategic objectives of reform and development, CIAE formulated its objectives for human resource construction during the 13th Five-Year Plan period, and achieved marked new progress in human resource construction in 2015. 1 Implementation of the "LONGMA Project"

  19. Tolerance and toxicity of a neoadjuvant docetaxel, cisplatin and 5-fluorouracil regimen in technically unresectable oral cancer in a resource-limited rural tertiary cancer center

    Directory of Open Access Journals (Sweden)

    V M Patil

    2014-01-01

    Full Text Available Background: Recent studies indicate that neoadjuvant chemotherapy (NACT) can result in R0 resection in a substantial proportion of patients with technically unresectable oral cavity cancers. However, data regarding the efficacy and safety of docetaxel, cisplatin and 5-fluorouracil (TPF) NACT in our setting are lacking. The present audit was proposed to evaluate the toxicities encountered during administration of this regimen. It was hypothesized that TPF NACT would be considered feasible for routine administration if an average relative dose intensity (ARDI) of at least 0.90 was achieved in at least 70% of the patients. Materials and Methods: Patients with technically unresectable oral cancers, Eastern Cooperative Oncology Group PS 0-2 and biopsy-proven squamous cell carcinoma underwent two cycles of NACT with the TPF regimen. Toxicity and response rates were noted following the CTCAE 4.03 and RECIST criteria. Descriptive analysis of completion rates (completing 2 cycles of planned chemotherapy with an ARDI of 0.85 or more), reasons for delay, toxicity and response is presented. Results: The NACT was completed by all patients. The number of subjects who completed all planned cycles of chemotherapy with an ARDI of the delivered chemotherapy of 0.85 or more was 11 (91.67%). Considering all toxicities together, Grade 3-5 toxicity was seen in 11 patients (91.67%). The response rate of chemotherapy was 83.33%. There were three complete responses, seven partial responses and two cases of stable disease post NACT in this study. Conclusion: The docetaxel, cisplatin and 5-fluorouracil regimen can be routinely administered at our center with the supportive care and precautionary measures used in our study.
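
    The feasibility criterion is stated in terms of the average relative dose intensity (ARDI). The sketch below shows how per-drug relative dose intensities and the resulting ARDI can be computed for a single patient; all doses and durations are invented for illustration and are not data from the audit.

        def relative_dose_intensity(delivered_dose, planned_dose,
                                    actual_weeks, planned_weeks):
            # RDI = (delivered dose per week) / (planned dose per week).
            return (delivered_dose / actual_weeks) / (planned_dose / planned_weeks)

        # Per-drug RDIs for a hypothetical TPF course, averaged into the ARDI.
        rdis = [
            relative_dose_intensity(140, 150, 7, 6),    # docetaxel (mg/m2)
            relative_dose_intensity(140, 150, 7, 6),    # cisplatin (mg/m2)
            relative_dose_intensity(6750, 7500, 7, 6),  # 5-fluorouracil (mg/m2)
        ]
        ardi = sum(rdis) / len(rdis)
        print(f"ARDI = {ardi:.2f} -> "
              f"{'meets' if ardi >= 0.85 else 'below'} the 0.85 threshold")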

  20. Technosciences in Academia: Rethinking a Conceptual Framework for Bioinformatics Undergraduate Curricula

    Science.gov (United States)

    Symeonidis, Iphigenia Sofia

    This paper aims to elucidate guiding concepts for the design of powerful undergraduate bioinformatics degrees which will lead to a conceptual framework for the curriculum. "Powerful" here should be understood as having truly bioinformatics objectives rather than enrichment of existing computer science or life science degrees on which bioinformatics degrees are often based. As such, the conceptual framework will be one which aims to demonstrate intellectual honesty in regard to the field of bioinformatics. A synthesis/conceptual analysis approach was followed as elaborated by Hurd (1983). The approach takes into account the following: bioinformatics educational needs and goals as expressed by different authorities, five case studies of undergraduate bioinformatics degrees, educational implications of bioinformatics as a technoscience, and approaches to curriculum design promoting interdisciplinarity and integration. Given these considerations, guiding concepts emerged and a conceptual framework was elaborated. The practice of bioinformatics was given a closer look, which led to defining tool-integration skills and tool-thinking capacity as crucial areas of the bioinformatics activities spectrum. It was argued, finally, that a process-based curriculum as a variation of a concept-based curriculum (where the concepts are processes) might be more conducive to the teaching of bioinformatics given a foundational first year of integrated science education as envisioned by Bialek and Botstein (2004). Furthermore, the curriculum design needs to define new avenues of communication and learning which bypass the traditional disciplinary barriers of academic settings as undertaken by Tador and Tidmor (2005) for graduate studies.

  1. Resource Limitation, Tolerance, and the Future of Ecological Plant Classification

    Directory of Open Access Journals (Sweden)

    Joseph M Craine

    2012-10-01

    Full Text Available Throughout the evolutionary history of plants, drought, shade, and scarcity of nutrients have structured ecosystems and communities globally. Humans have begun to drastically alter the prevalence of these environmental factors with untold consequences for plant communities and ecosystems worldwide. Given limitations in using organ-level traits to predict ecological performance of species, recent advances using tolerances of low resource availability as plant functional traits are revealing the often hidden roles these factors have in structuring communities and are becoming central to classifying plants ecologically. For example, measuring the physiological drought tolerance of plants has increased the predictability of differences among species in their ability to survive drought as well as the distribution of species within and among ecosystems. Quantifying the shade tolerance of species has improved our understanding of local and regional species diversity and how species have sorted within and among regions. As the stresses on ecosystems continue to shift, coordinated studies of whole-plant growth centered on tolerance of low resource availability will be central in predicting future ecosystem functioning and biodiversity. This will require efforts that quantify tolerances for large numbers of species and develop bioinformatic and other techniques for comparing large number of species.

  2. Resource limitation, tolerance, and the future of ecological plant classification.

    Science.gov (United States)

    Craine, Joseph M; Engelbrecht, Bettina M J; Lusk, Christopher H; McDowell, Nate G; Poorter, Hendrik

    2012-01-01

    Throughout the evolutionary history of plants, drought, shade, and scarcity of nutrients have structured ecosystems and communities globally. Humans have begun to drastically alter the prevalence of these environmental factors with untold consequences for plant communities and ecosystems worldwide. Given limitations in using organ-level traits to predict ecological performance of species, recent advances using tolerances of low resource availability as plant functional traits are revealing the often hidden roles these factors have in structuring communities and are becoming central to classifying plants ecologically. For example, measuring the physiological drought tolerance of plants has increased the predictability of differences among species in their ability to survive drought as well as the distribution of species within and among ecosystems. Quantifying the shade tolerance of species has improved our understanding of local and regional species diversity and how species have sorted within and among regions. As the stresses on ecosystems continue to shift, coordinated studies of whole-plant growth centered on tolerance of low resource availability will be central in predicting future ecosystem functioning and biodiversity. This will require efforts that quantify tolerances for large numbers of species and develop bioinformatic and other techniques for comparing large number of species.

  3. Development and maintenance of zebrafish resources, and the China Zebrafish Resource Center

    Institute of Scientific and Technical Information of China (English)

    李阔宇; 潘鲁湲; 孙永华

    2014-01-01

    Zebrafish is a relatively new and booming vertebrate animal model. Over the past three decades, zebrafish has been applied in various areas of life science, as well as health sciences, environmental studies and aquaculture research. To meet the requirements of different research purposes, large numbers of zebrafish resources, including mutant and transgenic lines, have been developed with different techniques. All of these resources need careful collection and maintenance, and therefore several zebrafish resource facilities have been built worldwide. As one of them, the China Zebrafish Resource Center (CZRC, http://zfish.cn) was founded in 2012. This review introduces the development and maintenance of zebrafish scientific resources and the latest progress of the CZRC.

  4. Government information resource center framework design and study of its key technologies

    Institute of Scientific and Technical Information of China (English)

    张家锐

    2011-01-01

    At present, research on and applications of government information sharing are still at the stage of network interconnection, information integration and point-to-point data exchange, far from achieving the objective of interoperability. In view of this, the concept of a government information resource center is put forward to realize unified access, unified management, and standardized and scaled services for government information resources. Drawing on practical working experience, its composition, overall framework and technical architecture are designed, and the key technologies involved in its realization are studied.

  5. Making sense of genomes of parasitic worms: Tackling bioinformatic challenges.

    Science.gov (United States)

    Korhonen, Pasi K; Young, Neil D; Gasser, Robin B

    2016-01-01

    Billions of people and animals are infected with parasitic worms (helminths). Many of these worms cause diseases that have a major socioeconomic impact worldwide, and are challenging to control because existing treatment methods are often inadequate. There is, therefore, a need to work toward developing new intervention methods, built on a sound understanding of parasitic worms at molecular level, the relationships that they have with their animal hosts and/or the diseases that they cause. Decoding the genomes and transcriptomes of these parasites brings us a step closer to this goal. The key focus of this article is to critically review and discuss bioinformatic tools used for the assembly and annotation of these genomes and transcriptomes, as well as various post-genomic analyses of transcription profiles, biological pathways, synteny, phylogeny, biogeography and the prediction and prioritisation of drug target candidates. Bioinformatic pipelines implemented and established recently provide practical and efficient tools for the assembly and annotation of genomes of parasitic worms, and will be applicable to a wide range of other parasites and eukaryotic organisms. Future research will need to assess the utility of long-read sequence data sets for enhanced genomic assemblies, and develop improved algorithms for gene prediction and post-genomic analyses, to enable comprehensive systems biology explorations of parasitic organisms.

  6. Computational Lipidomics and Lipid Bioinformatics: Filling In the Blanks.

    Science.gov (United States)

    Pauling, Josch; Klipp, Edda

    2016-12-22

    Lipids are highly diverse metabolites of pronounced importance in health and disease. While metabolomics is a broad field under the omics umbrella that may also relate to lipids, lipidomics is an emerging field which specializes in the identification, quantification and functional interpretation of complex lipidomes. Today, it is possible to identify and distinguish lipids in a high-resolution, high-throughput manner and simultaneously with a lot of structural detail. However, doing so may produce thousands of mass spectra in a single experiment which has created a high demand for specialized computational support to analyze these spectral libraries. The computational biology and bioinformatics community has so far established methodology in genomics, transcriptomics and proteomics but there are many (combinatorial) challenges when it comes to structural diversity of lipids and their identification, quantification and interpretation. This review gives an overview and outlook on lipidomics research and illustrates ongoing computational and bioinformatics efforts. These efforts are important and necessary steps to advance the lipidomics field alongside analytic, biochemistry, biomedical and biology communities and to close the gap in available computational methodology between lipidomics and other omics sub-branches.

  7. The MPI Bioinformatics Toolkit for protein sequence analysis.

    Science.gov (United States)

    Biegert, Andreas; Mayer, Christian; Remmert, Michael; Söding, Johannes; Lupas, Andrei N

    2006-07-01

    The MPI Bioinformatics Toolkit is an interactive web service which offers access to a great variety of public and in-house bioinformatics tools. They are grouped into different sections that support sequence searches, multiple alignment, secondary and tertiary structure prediction and classification. Several public tools are offered in customized versions that extend their functionality. For example, PSI-BLAST can be run against regularly updated standard databases, customized user databases or selectable sets of genomes. Another tool, Quick2D, integrates the results of various secondary structure, transmembrane and disorder prediction programs into one view. The Toolkit provides a friendly and intuitive user interface with an online help facility. As a key feature, various tools are interconnected so that the results of one tool can be forwarded to other tools. One could run PSI-BLAST, parse out a multiple alignment of selected hits and send the results to a cluster analysis tool. The Toolkit framework and the tools developed in-house will be packaged and freely available under the GNU Lesser General Public Licence (LGPL). The Toolkit can be accessed at http://toolkit.tuebingen.mpg.de.

  8. Bioinformatic prediction and functional characterization of human KIAA0100 gene

    Directory of Open Access Journals (Sweden)

    He Cui

    2017-02-01

    Full Text Available Our previous study demonstrated that the human KIAA0100 gene is a novel acute monocytic leukemia-associated antigen (MLAA) gene. But the functional characterization of the human KIAA0100 gene has remained unknown to date. Here, firstly, bioinformatic prediction of the human KIAA0100 gene was carried out using online software tools; secondly, human KIAA0100 gene expression was downregulated by the clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated (Cas) 9 system in U937 cells. Cell proliferation and apoptosis were next evaluated in KIAA0100-knockdown U937 cells. The bioinformatic prediction showed that the human KIAA0100 gene is located on 17q11.2, and the human KIAA0100 protein is located in the secretory pathway. Besides, the human KIAA0100 protein contains a signal peptide, a transmembrane region, three types of secondary structures (alpha helix, extended strand, and random coil), and four domains from mitochondrial protein 27 (FMP27). The observation of the functional characterization of the human KIAA0100 gene revealed that its downregulation inhibited cell proliferation and promoted cell apoptosis in U937 cells. To summarize, these results suggest that the human KIAA0100 gene possibly comes within the mitochondrial genome; moreover, it is a novel anti-apoptotic factor related to carcinogenesis or progression in acute monocytic leukemia, and may be a potential target for immunotherapy against acute monocytic leukemia.

  9. Protecting innovation in bioinformatics and in-silico biology.

    Science.gov (United States)

    Harrison, Robert

    2003-01-01

    Commercial success or failure of innovation in bioinformatics and in-silico biology requires the appropriate use of legal tools for protecting and exploiting intellectual property. These tools include patents, copyrights, trademarks, design rights, and limiting information in the form of 'trade secrets'. Potentially patentable components of bioinformatics programmes include lines of code, algorithms, data content, data structure and user interfaces. In both the US and the European Union, copyright protection is granted for software as a literary work, and most other major industrial countries have adopted similar rules. Nonetheless, the grant of software patents remains controversial and is being challenged in some countries. Current debate extends to aspects such as whether patents can claim not only the apparatus and methods but also the data signals and/or products, such as a CD-ROM, on which the programme is stored. The patentability of substances discovered using in-silico methods is a separate debate that is unlikely to be resolved in the near future.

  10. Construction and implementation of a financial sharing center for large and medium-sized renewable resources enterprises

    Institute of Scientific and Technical Information of China (English)

    徐多林

    2016-01-01

    Based on the complex business environment of the renewable resources industry and the need to improve the company's financial management, this paper expounds the necessity of constructing a financial shared service center. It then discusses the main practices in the construction and implementation of a financial sharing center from three aspects: formulating the construction plan for the financial shared service center; determining the scope of business and unifying management and working standards; and carrying out technical development based on business processes and an information technology platform. This can help improve the company's financial management and the quality and efficiency of its accounting work.

  11. Partnership Brings Educational Exhibits, Events, and Resources from Seven National Research Laboratories to the Public in a New Retail Center: The Wonders of Science at Twenty Ninth Street Project

    Science.gov (United States)

    Foster, S. Q.; Johnson, R.; Carbone, L.; Vangundy, S.; Adams, L.; Becker, K.; Cobabe-Ammanns, E.; Curtis, L.; Dusenbery, P.; Foy, R.; Himes, C.; Howell, C.; Knight, C.; Morehouse, R.; Koch, L.; O'Brian, T.; Rooney, J.; Schassburger, P.

    2006-12-01

    Federally Funded Research and Development Centers and universities are challenged to disseminate their educational resources to national audiences, let alone to find ways to collaborate with each other while engaging with the schools and public in their local communities. A unique new partnership involving seven world-renowned research laboratories and a commercial land developer in the Denver metropolitan area is celebrating the unveiling of exhibits, web kiosk portals, and public science education events in a shopping mall. The October 2006 opening of the Twenty Ninth Street retail sales center (formerly Crossroad Mall) in Boulder, Colorado, has revitalized 60 acres in the heart of the city. It offers outdoor plazas that accommodate science education installations and lab-sponsored public events. The goal of the partnership is to celebrate the long-standing contributions of research laboratories to the community, increase awareness of each institution's mission, and entice visitors of all ages to learn more about science, mathematics, engineering, technology and related educational opportunities and careers. We describe how the public is responding to the Wonders of Science at Twenty Ninth Street, summarize lessons learned about this ambitious science education collaboration, and describe plans to sustain public and K-12 community interest into the future. Partners in the Wonders of Science at Twenty Ninth Street include JILA at the University of Colorado, the National Center for Atmospheric Research, the National Institute of Standards and Technology, the National Oceanic and Atmospheric Administration, the National Renewable Energy Laboratory, the University of Colorado's Laboratory for Atmospheric and Space Physics, the Space Science Institute, and Westcor, the shopping mall's developer.

  12. VectorBase

    Data.gov (United States)

    U.S. Department of Health & Human Services — VectorBase is a Bioinformatics Resource Center for invertebrate vectors. It is one of four Bioinformatics Resource Centers funded by NIAID to provide web-based...

  13. The Bioinformatics of Integrative Medical Insights: Proposals for an International PsychoSocial and Cultural Bioinformatics Project

    Directory of Open Access Journals (Sweden)

    Ernest Rossi

    2006-01-01

    Full Text Available We propose the formation of an International PsychoSocial and Cultural Bioinformatics Project (IPCBP) to explore the research foundations of Integrative Medical Insights (IMI) on all levels from the molecular-genomic to the psychological, cultural, social, and spiritual. Just as The Human Genome Project identified the molecular foundations of modern medicine with the new technology of sequencing DNA during the past decade, the IPCBP would extend and integrate this neuroscience knowledge base with the technology of gene expression via DNA/proteomic microarray research and brain imaging in development, stress, healing, rehabilitation, and the psychotherapeutic facilitation of existential wellness. We anticipate that the IPCBP will require a unique international collaboration of academic institutions, researchers, and clinical practitioners for the creation of a new neuroscience of mind-body communication, brain plasticity, memory, learning, and creative processing during optimal experiential states of art, beauty, and truth. We illustrate this emerging integration of bioinformatics with medicine with a videotape of the classical 4-stage creative process in a neuroscience approach to psychotherapy.

  14. Bioinformatics analysis of differentially expressed proteins in prostate cancer based on proteomics data

    Directory of Open Access Journals (Sweden)

    Chen C

    2016-03-01

    Full Text Available Chen Chen,1 Li-Guo Zhang,1 Jian Liu,1 Hui Han,1 Ning Chen,1 An-Liang Yao,1 Shao-San Kang,1 Wei-Xing Gao,1 Hong Shen,2 Long-Jun Zhang,1 Ya-Peng Li,1 Feng-Hong Cao,1 Zhi-Guo Li3 1Department of Urology, North China University of Science and Technology Affiliated Hospital, 2Department of Modern Technology and Education Center, 3Department of Medical Research Center, International Science and Technology Cooperation Base of Geriatric Medicine, North China University of Science and Technology, Tangshan, People’s Republic of China Abstract: We mined the literature for proteomics data to examine the occurrence and metastasis of prostate cancer (PCa) through a bioinformatics analysis. We divided the differentially expressed proteins (DEPs) into two groups: the group consisting of PCa and benign tissues (P&b) and the group presenting both high and low PCa metastatic tendencies (H&L). In the P&b group, we found 320 DEPs, 20 of which were reported more than three times, and DES was the most commonly reported. Among these DEPs, the expression levels of FGG, GSN, SERPINC1, TPM1, and TUBB4B have not yet been correlated with PCa. In the H&L group, we identified 353 DEPs, 13 of which were reported more than three times. Among these DEPs, MDH2 and MYH9 have not yet been correlated with PCa metastasis. We further confirmed that DES was differentially expressed between 30 cancer and 30 benign tissues. In addition, DEPs associated with protein transport, regulation of actin cytoskeleton, and the extracellular matrix (ECM)–receptor interaction pathway were prevalent in the H&L group and have not yet been studied in detail in this context. Proteins related to homeostasis, the wound-healing response, focal adhesions, and the complement and coagulation pathways were overrepresented in both groups. Our findings suggest that the repeatedly reported DEPs in the two groups may function as potential biomarkers for detecting PCa and predicting its aggressiveness. Furthermore
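
    Selecting differentially expressed proteins between two groups is commonly done by combining a fold-change cutoff with a significance test. The sketch below illustrates that generic step with scipy on invented, log2-scaled intensities for a few of the proteins named above; it does not reproduce the study's analysis.

        import numpy as np
        from scipy.stats import ttest_ind

        # Invented log2 intensities: group A (e.g. PCa) vs. group B (e.g. benign).
        proteins = {
            "DES":  ([8.1, 7.9, 8.4, 8.0], [6.2, 6.5, 6.1, 6.4]),
            "GSN":  ([5.0, 5.2, 4.9, 5.1], [5.1, 5.0, 5.2, 4.9]),
            "MYH9": ([7.2, 7.4, 7.1, 7.5], [8.6, 8.8, 8.5, 8.9]),
        }

        for name, (group_a, group_b) in proteins.items():
            log2_fc = np.mean(group_a) - np.mean(group_b)   # data assumed log2-scaled
            t_stat, p_value = ttest_ind(group_a, group_b)
            status = "DEP" if abs(log2_fc) >= 1 and p_value < 0.05 else "-"
            print(f"{name}\tlog2FC={log2_fc:+.2f}\tp={p_value:.4f}\t{status}")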

  15. Bioinformatics in the secondary science classroom: A study of state content standards and students' perceptions of, and performance in, bioinformatics lessons

    Science.gov (United States)

    Wefer, Stephen H.

    The proliferation of bioinformatics in modern biology marks a new revolution in science, which promises to influence science education at all levels. This thesis examined state standards for content that articulated bioinformatics, and explored secondary students' affective and cognitive perceptions of, and performance in, a bioinformatics mini-unit. The results are presented as three studies. The first study analyzed secondary science standards of 49 U.S. states (Iowa has no science framework) and the District of Columbia for content related to bioinformatics at the introductory high school biology level. The bioinformatics content of each state's biology standards was categorized into nine areas and the prevalence of each area documented. The nine areas were: The Human Genome Project, Forensics, Evolution, Classification, Nucleotide Variations, Medicine, Computer Use, Agriculture/Food Technology, and Science Technology and Society/Socioscientific Issues (STS/SSI). Findings indicated a generally low representation of bioinformatics-related content, which varied substantially across the different areas. Recommendations are made for reworking existing standards to incorporate bioinformatics and to facilitate the goal of promoting science literacy in this emerging new field among secondary school students. The second study examined thirty-two students' affective responses to, and content mastery of, a two-week bioinformatics mini-unit. The findings indicate that the students generally were positive relative to their interest level, the usefulness of the lessons, the difficulty level of the lessons, likeliness to engage in additional bioinformatics, and were overall successful on the assessments. A discussion of the results and significance is followed by suggestions for future research and implementation for transferability. The third study presents a case study of individual differences among ten secondary school students, whose cognitive and affective percepts were

  16. Atlas – a data warehouse for integrative bioinformatics

    Directory of Open Access Journals (Sweden)

    Yuen Macaire MS

    2005-02-01

    Full Text Available Abstract Background We present a biological data warehouse called Atlas that locally stores and integrates biological sequences, molecular interactions, homology information, functional annotations of genes, and biological ontologies. The goal of the system is to provide data, as well as a software infrastructure for bioinformatics research and development. Description The Atlas system is based on relational data models that we developed for each of the source data types. Data stored within these relational models are managed through Structured Query Language (SQL) calls that are implemented in a set of Application Programming Interfaces (APIs). The APIs include three languages: C++, Java, and Perl. The methods in these API libraries are used to construct a set of loader applications, which parse and load the source datasets into the Atlas database, and a set of toolbox applications which facilitate data retrieval. Atlas stores and integrates local instances of GenBank, RefSeq, UniProt, Human Protein Reference Database (HPRD), Biomolecular Interaction Network Database (BIND), Database of Interacting Proteins (DIP), Molecular Interactions Database (MINT), IntAct, NCBI Taxonomy, Gene Ontology (GO), Online Mendelian Inheritance in Man (OMIM), LocusLink, Entrez Gene and HomoloGene. The retrieval APIs and toolbox applications are critical components that offer end-users flexible, easy, integrated access to this data. We present use cases that use Atlas to integrate these sources for genome annotation, inference of molecular interactions across species, and gene-disease associations. Conclusion The Atlas biological data warehouse serves as data infrastructure for bioinformatics research and development. It forms the backbone of the research activities in our laboratory and facilitates the integration of disparate, heterogeneous biological sources of data enabling new scientific inferences. Atlas achieves integration of diverse data sets at two levels. First
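
    The design described here (relational models populated by loader applications and queried through retrieval APIs) can be illustrated with a toy example. The sketch below uses Python's sqlite3 with an invented two-table schema; it is not Atlas's actual schema, loaders or API.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE gene (gene_id TEXT PRIMARY KEY, symbol TEXT);
            CREATE TABLE interaction (gene_a TEXT, gene_b TEXT, source TEXT);
        """)

        # Loader step: parse a source dataset and populate the warehouse
        # (rows invented for illustration).
        conn.executemany("INSERT INTO gene VALUES (?, ?)",
                         [("G1", "TP53"), ("G2", "MDM2"), ("G3", "BRCA1")])
        conn.executemany("INSERT INTO interaction VALUES (?, ?, ?)",
                         [("G1", "G2", "BIND"), ("G1", "G3", "HPRD")])

        # Toolbox/retrieval step: an integrated query across the loaded sources.
        rows = conn.execute("""
            SELECT ga.symbol, gb.symbol, i.source
            FROM interaction i
            JOIN gene ga ON ga.gene_id = i.gene_a
            JOIN gene gb ON gb.gene_id = i.gene_b
        """).fetchall()
        print(rows)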

  17. Mobile PET Center Project

    Science.gov (United States)

    Ryzhikova, O.; Naumov, N.; Sergienko, V.; Kostylev, V.

    2017-01-01

    Positron emission tomography is the most promising technology to monitor cancer and heart disease treatment. Stationary PET center requires substantial financial resources and time for construction and equipping. The developed mobile solution will allow introducing PET technology quickly without major investments.

  18. Integration of Bioinformatics into an Undergraduate Biology Curriculum and the Impact on Development of Mathematical Skills

    Science.gov (United States)

    Wightman, Bruce; Hark, Amy T.

    2012-01-01

    The development of fields such as bioinformatics and genomics has created new challenges and opportunities for undergraduate biology curricula. Students preparing for careers in science, technology, and medicine need more intensive study of bioinformatics and more sophisticated training in the mathematics on which this field is based. In this…

  19. Exploring Cystic Fibrosis Using Bioinformatics Tools: A Module Designed for the Freshman Biology Course

    Science.gov (United States)

    Zhang, Xiaorong

    2011-01-01

    We incorporated a bioinformatics component into the freshman biology course that allows students to explore cystic fibrosis (CF), a common genetic disorder, using bioinformatics tools and skills. Students learn about CF through searching genetic databases, analyzing genetic sequences, and observing the three-dimensional structures of proteins…

  20. Visualizing and Sharing Results in Bioinformatics Projects: GBrowse and GenBank Exports

    Science.gov (United States)

    Effective tools for presenting and sharing data are necessary for collaborative projects, typical for bioinformatics. In order to facilitate sharing our data with other genomics, molecular biology, and bioinformatics researchers, we have developed software to export our data to GenBank and combined ...

  1. Making Bioinformatics Projects a Meaningful Experience in an Undergraduate Biotechnology or Biomedical Science Programme

    Science.gov (United States)

    Sutcliffe, Iain C.; Cummings, Stephen P.

    2007-01-01

    Bioinformatics has emerged as an important discipline within the biological sciences that allows scientists to decipher and manage the vast quantities of data (such as genome sequences) that are now available. Consequently, there is an obvious need to provide graduates in biosciences with generic, transferable skills in bioinformatics. We present…

  2. Bioinformatics in Middle East Program Curricula--A Focus on the Arabian Gulf

    Science.gov (United States)

    Loucif, Samia

    2014-01-01

    The purpose of this paper is to investigate the inclusion of bioinformatics in program curricula in the Middle East, focusing on educational institutions in the Arabian Gulf. Bioinformatics is a multidisciplinary field which has emerged in response to the need for efficient data storage and retrieval, and accurate and fast computational and…

  3. Bioinformatics in High School Biology Curricula: A Study of State Science Standards

    Science.gov (United States)

    Wefer, Stephen H.; Sheppard, Keith

    2008-01-01

    The proliferation of bioinformatics in modern biology marks a modern revolution in science that promises to influence science education at all levels. This study analyzed secondary school science standards of 49 U.S. states (Iowa has no science framework) and the District of Columbia for content related to bioinformatics. The bioinformatics…

  4. A Portable Bioinformatics Course for Upper-Division Undergraduate Curriculum in Sciences

    Science.gov (United States)

    Floraino, Wely B.

    2008-01-01

    This article discusses the challenges that bioinformatics education is facing and describes a bioinformatics course that is successfully taught at the California State Polytechnic University, Pomona, to the fourth year undergraduate students in biological sciences, chemistry, and computer science. Information on lecture and computer practice…

  5. Incorporating a Collaborative Web-Based Virtual Laboratory in an Undergraduate Bioinformatics Course

    Science.gov (United States)

    Weisman, David

    2010-01-01

    Face-to-face bioinformatics courses commonly include a weekly, in-person computer lab to facilitate active learning, reinforce conceptual material, and teach practical skills. Similarly, fully-online bioinformatics courses employ hands-on exercises to achieve these outcomes, although students typically perform this work offsite. Combining a…

  6. Computer Programming and Biomolecular Structure Studies: A Step beyond Internet Bioinformatics

    Science.gov (United States)

    Likic, Vladimir A.

    2006-01-01

    This article describes the experience of teaching structural bioinformatics to third year undergraduate students in a subject titled "Biomolecular Structure and Bioinformatics." Students were introduced to computer programming and used this knowledge in a practical application as an alternative to the well established Internet bioinformatics…

  7. A Summer Program Designed to Educate College Students for Careers in Bioinformatics

    Science.gov (United States)

    Krilowicz, Beverly; Johnston, Wendie; Sharp, Sandra B.; Warter-Perez, Nancy; Momand, Jamil

    2007-01-01

    A summer program was created for undergraduates and graduate students that teaches bioinformatics concepts, offers skills in professional development, and provides research opportunities in academic and industrial institutions. We estimate that 34 of 38 graduates (89%) are in a career trajectory that will use bioinformatics. Evidence from…

  8. ISEV position paper: extracellular vesicle RNA analysis and bioinformatics

    Directory of Open Access Journals (Sweden)

    Andrew F. Hill

    2013-12-01

    Full Text Available Extracellular vesicles (EVs are the collective term for the various vesicles that are released by cells into the extracellular space. Such vesicles include exosomes and microvesicles, which vary by their size and/or protein and genetic cargo. With the discovery that EVs contain genetic material in the form of RNA (evRNA has come the increased interest in these vesicles for their potential use as sources of disease biomarkers and potential therapeutic agents. Rapid developments in the availability of deep sequencing technologies have enabled the study of EV-related RNA in detail. In October 2012, the International Society for Extracellular Vesicles (ISEV held a workshop on “evRNA analysis and bioinformatics.” Here, we report the conclusions of one of the roundtable discussions where we discussed evRNA analysis technologies and provide some guidelines to researchers in the field to consider when performing such analysis.

  9. Developing sustainable software solutions for bioinformatics by the "Butterfly" paradigm.

    Science.gov (United States)

    Ahmed, Zeeshan; Zeeshan, Saman; Dandekar, Thomas

    2014-01-01

    Software design and sustainable software engineering are essential for the long-term development of bioinformatics software. Typical challenges in an academic environment are short-term contracts, island solutions, pragmatic approaches and loose documentation. Upcoming new challenges are big data, complex data sets, software compatibility and rapid changes in data representation. Our approach to cope with these challenges consists of iterative intertwined cycles of development (the "Butterfly" paradigm) for key steps in scientific software engineering. User feedback is valued, as is software planning in a sustainable and interoperable way. Tool usage should be easy and intuitive. A middleware supports a user-friendly Graphical User Interface (GUI) as well as database/tool development independently. We validated the approach in our own software development and compared the different design paradigms in various software solutions.

  10. Bioinformatic Analysis of BBTV Satellite DNA in Hainan

    Institute of Scientific and Technical Information of China (English)

    Nai-tong Yu; Tuan-cheng Feng; Yu-liang Zhang; Jian-hua Wang; Zhi-xin Liu

    2011-01-01

    Banana bunchy top virus (BBTV), family Nanoviridae, genus Babuvirus, is a single-stranded DNA (ssDNA) virus that causes banana bunchy top disease (BBTD) in banana plants. It is the most common and most destructive of all viruses in these plants and is widespread throughout the Asia-Pacific region. In this study we isolated, cloned and sequenced a BBTV sample from Hainan Island, China. The results from sequencing and bioinformatics analysis indicate that this isolate represents a satellite DNA component with 12 DNA sequence motifs. We also predicted the physical and chemical properties, structure, signal peptide, phosphorylation, secondary structure, tertiary structure and functional domains of its encoded protein, and compared them with the corresponding quantities in the replication initiation protein of BBTV DNA1.
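
    Physico-chemical predictions of the kind described for the encoded protein can be made with standard tools. The sketch below uses Biopython's ProtParam module on a short placeholder peptide; the sequence is invented and is not the protein encoded by the BBTV satellite DNA.

        from Bio.SeqUtils.ProtParam import ProteinAnalysis

        # Placeholder peptide, not the BBTV satellite DNA product.
        seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQ"
        analysis = ProteinAnalysis(seq)

        print("Molecular weight:", round(analysis.molecular_weight(), 1))
        print("Isoelectric point:", round(analysis.isoelectric_point(), 2))
        print("GRAVY (hydropathy):", round(analysis.gravy(), 3))
        helix, turn, sheet = analysis.secondary_structure_fraction()
        print("Secondary structure fractions (helix/turn/sheet):",
              round(helix, 2), round(turn, 2), round(sheet, 2))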

  11. Systems biology and bioinformatics in aging research: a workshop report.

    Science.gov (United States)

    Fuellen, Georg; Dengjel, Jörn; Hoeflich, Andreas; Hoeijemakers, Jan; Kestler, Hans A; Kowald, Axel; Priebe, Steffen; Rebholz-Schuhmann, Dietrich; Schmeck, Bernd; Schmitz, Ulf; Stolzing, Alexandra; Sühnel, Jürgen; Wuttke, Daniel; Vera, Julio

    2012-12-01

    In an "aging society," health span extension is most important. As in 2010, talks in this series of meetings in Rostock-Warnemünde demonstrated that aging is an apparently very complex process, where computational work is most useful for gaining insights and to find interventions that counter aging and prevent or counteract aging-related diseases. The specific topics of this year's meeting entitled, "RoSyBA: Rostock Symposium on Systems Biology and Bioinformatics in Ageing Research," were primarily related to "Cancer and Aging" and also had a focus on work funded by the German Federal Ministry of Education and Research (BMBF). The next meeting in the series, scheduled for September 20-21, 2013, will focus on the use of ontologies for computational research into aging, stem cells, and cancer. Promoting knowledge formalization is also at the core of the set of proposed action items concluding this report.

  12. Meta-learning framework applied in bioinformatics inference system design.

    Science.gov (United States)

    Arredondo, Tomás; Ormazábal, Wladimir

    2015-01-01

    This paper describes a meta-learner inference system development framework which is applied and tested in the implementation of bioinformatic inference systems. These inference systems are used for the systematic classification of the best candidates for inclusion in bacterial metabolic pathway maps. This meta-learner-based approach utilises a workflow where the user provides feedback with final classification decisions which are stored in conjunction with analysed genetic sequences for periodic inference system training. The inference systems were trained and tested with three different data sets related to the bacterial degradation of aromatic compounds. The analysis of the meta-learner-based framework involved contrasting several different optimisation methods with various different parameters. The obtained inference systems were also contrasted with other standard classification methods with accurate prediction capabilities observed.
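
    As a rough illustration of the meta-level idea only (base classifiers combined under a higher-level model), the sketch below uses scikit-learn's StackingClassifier on synthetic data; it omits the paper's user-feedback storage and periodic retraining and is not the described framework.

        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier, StackingClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        # Synthetic features stand in for sequence-derived features.
        X, y = make_classification(n_samples=300, n_features=20, random_state=0)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        # Base classifiers feed their predictions to a meta-level model.
        meta = StackingClassifier(
            estimators=[("rf", RandomForestClassifier(random_state=0)),
                        ("svm", SVC(probability=True, random_state=0))],
            final_estimator=LogisticRegression(),
        )
        meta.fit(X_train, y_train)
        print("held-out accuracy:", round(meta.score(X_test, y_test), 3))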

  13. Why Do Polyphenols Have Promiscuous Actions? An Investigation by Chemical Bioinformatics.

    Science.gov (United States)

    Tang, Guang-Yan

    2016-05-01

    Despite their diverse pharmacological effects, polyphenols are poorly suited for use as drugs, which has traditionally been ascribed to their low bioavailability. However, Baell and co-workers recently proposed that the redox potential of polyphenols also plays an important role, because redox reactions produce promiscuous actions on various protein targets and thus non-specific pharmacological effects. To investigate whether redox reactivity is a critical factor in polyphenol promiscuity, we performed a chemical bioinformatics analysis of the structure-activity relationships of twenty polyphenols. We found that the gene expression profiles of human cell lines induced by polyphenols were not correlated with the presence or absence of redox moieties in the polyphenols, but were significantly correlated with their molecular structures. Therefore, it is concluded that the promiscuous actions of polyphenols are likely to result from their inherent structural features rather than their redox potential.

  14. Integrative content-driven concepts for bioinformatics "beyond the cell"

    Indian Academy of Sciences (India)

    Edgar Wingender; Torsten Crass; Jennifer D Hogan; Alexander E Kel; Olga V Kel-Margoulis; Anatolij P Potapov

    2007-01-01

    Bioinformatics has delivered great contributions to genome and genomics research, without which the world-wide success of this and other global ("omics") approaches would not have been possible. More recently, it has developed further towards the analysis of different kinds of networks, thus laying the foundation for comprehensive description, analysis and manipulation of whole living systems in modern "systems biology". The next step which is necessary for developing a systems biology that deals with systemic phenomena is to expand the existing and develop new methodologies that are appropriate to characterize intercellular processes and interactions without omitting the causal underlying molecular mechanisms. Modelling the processes on the different levels of complexity involved requires a comprehensive integration of information on gene regulatory events, signal transduction pathways, protein interaction and metabolic networks as well as cellular functions in the respective tissues/organs.

  15. The web server of IBM's Bioinformatics and Pattern Discovery group.

    Science.gov (United States)

    Huynh, Tien; Rigoutsos, Isidore; Parida, Laxmi; Platt, Daniel; Shibuya, Tetsuo

    2003-07-01

    We herein present and discuss the services and content which are available on the web server of IBM's Bioinformatics and Pattern Discovery group. The server is operational around the clock and provides access to a variety of methods that have been published by the group's members and collaborators. The available tools correspond to applications ranging from the discovery of patterns in streams of events and the computation of multiple sequence alignments, to the discovery of genes in nucleic acid sequences and the interactive annotation of amino acid sequences. Additionally, annotations for more than 70 archaeal, bacterial, eukaryotic and viral genomes are available on-line and can be searched interactively. The tools and code bundles can be accessed beginning at http://cbcsrv.watson.ibm.com/Tspd.html whereas the genomics annotations are available at http://cbcsrv.watson.ibm.com/Annotations/.

  16. Assessment of Common and Emerging Bioinformatics Pipelines for Targeted Metagenomics

    Science.gov (United States)

    Siegwald, Léa; Touzet, Hélène; Lemoine, Yves; Hot, David

    2017-01-01

    Targeted metagenomics, also known as metagenetics, is a high-throughput sequencing application that focuses on a nucleotide target in a microbiome to describe its taxonomic content. A wide range of bioinformatics pipelines are available to analyze sequencing outputs, and the choice of an appropriate tool is crucial and not trivial. No standard evaluation method exists for estimating the accuracy of a pipeline for targeted metagenomics analyses. This article proposes an evaluation protocol containing real and simulated targeted metagenomics datasets, together with metrics that allow the impact of different variables on the biological interpretation of results to be studied. The protocol was used to compare six bioinformatics pipelines in the basic user context: three common ones (mothur, QIIME and BMP) based on a clustering-first approach and three emerging ones (Kraken, CLARK and One Codex) using an assignment-first approach. The study surprisingly reveals that sequencing errors have a bigger impact on the results than the choice of amplified region. Moreover, increasing sequencing throughput increases richness overestimation, even more so for microbiota of high complexity. Finally, the choice of reference database has a bigger impact on richness estimation for clustering-first pipelines, and on correct taxon identification for assignment-first pipelines. Using emerging assignment-first pipelines is a valid approach for targeted metagenomics analyses, with a quality of results comparable to popular clustering-first pipelines, even with an error-prone sequencing technology like Ion Torrent. However, these pipelines are highly sensitive to the quality of databases and their annotations, which makes clustering-first pipelines still the only reliable approach for studying microbiomes that are not well described. PMID:28052134
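
    To make the evaluation idea concrete, the sketch below computes a few of the kinds of metrics such a protocol might report (richness inflation, taxon-detection precision/recall and Bray-Curtis dissimilarity) for a hypothetical pipeline output compared with a simulated community. The taxa and abundances are invented for illustration and are not taken from the study.

    ```python
    # Minimal sketch: compare a pipeline's taxonomic profile against a simulated truth.
    expected = {"Escherichia": 0.40, "Bacteroides": 0.35, "Lactobacillus": 0.25}
    observed = {"Escherichia": 0.38, "Bacteroides": 0.30, "Lactobacillus": 0.22,
                "Clostridium": 0.06, "Streptococcus": 0.04}  # spurious taxa inflate richness

    # Richness overestimation: taxa reported relative to the simulated community
    richness_ratio = len(observed) / len(expected)

    # Precision / recall of taxon detection
    true_pos = len(set(expected) & set(observed))
    precision = true_pos / len(observed)
    recall = true_pos / len(expected)

    # Bray-Curtis dissimilarity between the two relative-abundance profiles
    taxa = set(expected) | set(observed)
    bray_curtis = (sum(abs(expected.get(t, 0.0) - observed.get(t, 0.0)) for t in taxa)
                   / sum(expected.get(t, 0.0) + observed.get(t, 0.0) for t in taxa))

    print(f"richness ratio {richness_ratio:.2f}, precision {precision:.2f}, "
          f"recall {recall:.2f}, Bray-Curtis {bray_curtis:.3f}")
    ```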

  17. Bioinformatics approaches to single-cell analysis in developmental biology.

    Science.gov (United States)

    Yalcin, Dicle; Hakguder, Zeynep M; Otu, Hasan H

    2016-03-01

    Individual cells within the same population show various degrees of heterogeneity, which may be better handled with single-cell analysis to address biological and clinical questions. Single-cell analysis is especially important in developmental biology, where subtle spatial and temporal differences between cells are strongly associated with cell fate decisions during differentiation and with the description of the particular state of a cell exhibiting an aberrant phenotype. Biotechnological advances, especially in microfluidics, have enabled robust, massively parallel and multi-dimensional capture, sorting and lysis of single cells and amplification of the associated macromolecules, which in turn has enabled the use of imaging and omics techniques on single cells. Computational single-cell image analysis in developmental biology has improved with respect to feature extraction, segmentation, image enhancement and machine learning, handling the limitations of optical resolution to gain new perspectives from raw microscopy images. Omics approaches such as transcriptomics, genomics and epigenomics, targeting gene and small RNA expression, single nucleotide and structural variations, and methylation and histone modifications, rely heavily on high-throughput sequencing technologies. Although there are well-established bioinformatics methods for the analysis of sequence data, few bioinformatics approaches address experimental design, sample size considerations, amplification bias, normalization, differential expression, coverage, clustering and classification specifically at the single-cell level. In this review, we summarize biological and technological advances, discuss the challenges faced in the aforementioned data acquisition and analysis issues, and present future prospects for the application of single-cell analyses to developmental biology.
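
    As a minimal illustration of the normalization and clustering issues mentioned above, the following sketch library-size-normalizes a simulated single-cell count matrix, reduces it with PCA and clusters the cells with k-means. The data, the "counts per 10k" scaling and the cluster number are all assumptions for illustration; real analyses would use dedicated single-cell toolkits and more careful parameter choices.

    ```python
    # Minimal sketch: normalize, embed and cluster a simulated single-cell count matrix.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    counts = rng.poisson(lam=2.0, size=(300, 2000)).astype(float)  # 300 cells x 2000 genes

    # Library-size normalization (counts per 10k) followed by a log transform
    lib_size = counts.sum(axis=1, keepdims=True)
    norm = np.log1p(counts / lib_size * 1e4)

    # Dimensionality reduction, then clustering into a guessed number of cell states
    embedding = PCA(n_components=20, random_state=0).fit_transform(norm)
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(embedding)

    print(np.bincount(labels))  # number of cells assigned to each cluster
    ```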

  18. Hydroxysteroid dehydrogenases (HSDs) in bacteria: a bioinformatic perspective.

    Science.gov (United States)

    Kisiela, Michael; Skarka, Adam; Ebert, Bettina; Maser, Edmund

    2012-03-01

    Steroidal compounds, including cholesterol, bile acids and steroid hormones, play a central role in various physiological processes such as cell signaling, growth, reproduction and energy homeostasis. Hydroxysteroid dehydrogenases (HSDs), which belong to the superfamilies of short-chain dehydrogenases/reductases (SDR) or aldo-keto reductases (AKR), are important enzymes in steroid hormone metabolism. HSDs function as an enzymatic switch that controls the access of receptor-active steroids to nuclear hormone receptors and thereby mediate a fine-tuning of the steroid response. The aim of this study was the identification of classified functional HSDs and the bioinformatic annotation of these proteins in all completely sequenced bacterial genomes, followed by a phylogenetic analysis. For the bioinformatic annotation we constructed specific hidden Markov models in an iterative approach to provide reliable identification of the specific catalytic groups of HSDs. Here, we show a detailed phylogenetic analysis of 3α-, 7α- and 12α-HSDs and two functionally related enzymes (3-ketosteroid-Δ(1)-dehydrogenase and 3-ketosteroid-Δ(4)(5α)-dehydrogenase) from the SDR superfamily. For some bacteria previously reported to possess a specific HSD activity, we could annotate the corresponding HSD protein. The dominant phyla identified as expressing HSDs were Actinobacteria, Proteobacteria and Firmicutes. Moreover, some evolutionarily more ancient microorganisms (e.g., Cyanobacteria and Euryarchaeota) were found as well. A large number of HSD-expressing bacteria constitute the normal human gastro-intestinal flora. Another group of bacteria were originally isolated from natural habitats like seawater, soil, and marine and permafrost sediments. These bacteria include polycyclic aromatic hydrocarbon-degrading species such as Pseudomonas, Burkholderia and Rhodococcus. In conclusion, HSDs are found in a wide variety of microorganisms including
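
    A sketch of how such a profile-HMM annotation step might be wrapped is shown below, using the HMMER tools hmmbuild and hmmsearch via subprocess. The file names and E-value cutoff are placeholders, HMMER is assumed to be installed on the PATH, and the iterative refinement described in the abstract is only indicated in a comment rather than implemented; this is not the authors' actual pipeline.

    ```python
    # Minimal sketch: build a profile HMM from a seed alignment and scan a proteome.
    import subprocess

    def build_profile(alignment_fasta, hmm_out):
        """Build a profile HMM from a seed alignment of known HSD sequences."""
        subprocess.run(["hmmbuild", hmm_out, alignment_fasta], check=True)

    def search_proteome(hmm_file, proteome_fasta, table_out, evalue="1e-10"):
        """Scan a proteome and return (target name, full-sequence E-value) hits."""
        subprocess.run(["hmmsearch", "--tblout", table_out, "-E", evalue,
                        hmm_file, proteome_fasta], check=True)
        hits = []
        with open(table_out) as fh:
            for line in fh:
                if line.startswith("#"):
                    continue
                fields = line.split()
                hits.append((fields[0], float(fields[4])))
        return hits

    # One round of an iterative scheme: confident hits from this search could be added
    # back to the seed alignment and the profile rebuilt in the next iteration.
    build_profile("hsd_3alpha_seed.fasta", "hsd_3alpha.hmm")   # placeholder file names
    print(search_proteome("hsd_3alpha.hmm", "proteome.fasta", "hits.tbl")[:5])
    ```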

  19. Engineering Technical Support Center (ETSC)

    Science.gov (United States)

    ETSC is one of EPA's technical support and resource centers, responsible for providing specialized scientific and engineering support to decision-makers in the Agency's ten regional offices, states, communities, and local businesses.

  20. 9th International Conference on Practical Applications of Computational Biology and Bioinformatics

    CERN Document Server

    Rocha, Miguel; Fdez-Riverola, Florentino; Paz, Juan

    2015-01-01

    This volume presents recent practical applications of Computational Biology and Bioinformatics. It contains the proceedings of the 9th International Conference on Practical Applications of Computational Biology & Bioinformatics, held at the University of Salamanca, Spain, on June 3rd-5th, 2015. The International Conference on Practical Applications of Computational Biology & Bioinformatics (PACBB) is an annual international meeting dedicated to emerging and challenging applied research in Bioinformatics and Computational Biology. Biological and biomedical research are increasingly driven by experimental techniques that challenge our ability to analyse, process and extract meaningful knowledge from the underlying data. The impressive capabilities of next-generation sequencing technologies, together with novel and ever-evolving types of omics data technologies, have posed an increasingly complex set of challenges for the growing fields of Bioinformatics and Computational Biology. The analysis o...