WorldWideScience

Sample records for bioinformatics resource center

  1. Improvements to PATRIC, the all-bacterial Bioinformatics Database and Analysis Resource Center

    Science.gov (United States)

    Wattam, Alice R.; Davis, James J.; Assaf, Rida; Boisvert, Sébastien; Brettin, Thomas; Bun, Christopher; Conrad, Neal; Dietrich, Emily M.; Disz, Terry; Gabbard, Joseph L.; Gerdes, Svetlana; Henry, Christopher S.; Kenyon, Ronald W.; Machi, Dustin; Mao, Chunhong; Nordberg, Eric K.; Olsen, Gary J.; Murphy-Olson, Daniel E.; Olson, Robert; Overbeek, Ross; Parrello, Bruce; Pusch, Gordon D.; Shukla, Maulik; Vonstein, Veronika; Warren, Andrew; Xia, Fangfang; Yoo, Hyunseung; Stevens, Rick L.

    2017-01-01

    The Pathosystems Resource Integration Center (PATRIC) is the bacterial Bioinformatics Resource Center (https://www.patricbrc.org). Recent changes to PATRIC include a redesign of the web interface and some new services that provide users with a platform that takes them from raw reads to an integrated analysis experience. The redesigned interface allows researchers direct access to tools and data, and the emphasis has changed to user-created genome-groups, with detailed summaries and views of the data that researchers have selected. Perhaps the biggest change has been the enhanced capability for researchers to analyze their private data and compare it to the available public data. Researchers can assemble their raw sequence reads and annotate the contigs using RASTtk. PATRIC also provides services for RNA-Seq, variation, model reconstruction and differential expression analysis, all delivered through an updated private workspace. Private data can be compared by ‘virtual integration’ to any of PATRIC's public data. The number of genomes available for comparison in PATRIC has expanded to over 80 000, with a special emphasis on genomes with antimicrobial resistance data. PATRIC uses this data to improve both subsystem annotation and k-mer classification, and tags new genomes as having signatures that indicate susceptibility or resistance to specific antibiotics. PMID:27899627
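    The record above mentions tagging genomes with k-mer signatures that indicate antibiotic susceptibility or resistance. PATRIC's actual classifiers are not described here; the following is only a minimal sketch of the general k-mer-signature idea, with all function names, the signature set, and the threshold rule being hypothetical.

```python
def kmers(seq, k=8):
    """Return the set of all k-length substrings of a DNA sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def classify_amr(genome_seq, resistance_kmers, threshold=0.5):
    """Tag a genome as 'resistant' when at least `threshold` of the
    known resistance-associated k-mers occur in it (toy rule, not
    PATRIC's actual method)."""
    present = kmers(genome_seq) & resistance_kmers
    share = len(present) / len(resistance_kmers)
    return "resistant" if share >= threshold else "susceptible"

# Toy example: signature k-mers drawn from a mock resistance gene fragment
signature = kmers("ATGGCTAAAGGTCT")
print(classify_amr("TTTATGGCTAAAGGTCTAAA", signature))  # resistant
print(classify_amr("CCCCCCCCCCCCCCCCCCCC", signature))  # susceptible
```

    Real pipelines would use much longer k-mers, curated signature sets per antibiotic, and per-gene rather than whole-genome matching.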

  2. Bioinformatics Training Network (BTN): a community resource for bioinformatics trainers

    DEFF Research Database (Denmark)

    Schneider, Maria V.; Walter, Peter; Blatter, Marie-Claude

    2012-01-01

    and clearly tagged in relation to target audiences, learning objectives, etc. Ideally, they would also be peer reviewed, and easily and efficiently accessible for downloading. Here, we present the Bioinformatics Training Network (BTN), a new enterprise that has been initiated to address these needs and review...

  3. Influenza research database: an integrated bioinformatics resource for influenza virus research

    Science.gov (United States)

    The Influenza Research Database (IRD) is a U.S. National Institute of Allergy and Infectious Diseases (NIAID)-sponsored Bioinformatics Resource Center dedicated to providing bioinformatics support for influenza virus research. IRD facilitates the research and development of vaccines, diagnostics, an...

  4. PATRIC, the bacterial bioinformatics database and analysis resource

    Science.gov (United States)

    Wattam, Alice R.; Abraham, David; Dalay, Oral; Disz, Terry L.; Driscoll, Timothy; Gabbard, Joseph L.; Gillespie, Joseph J.; Gough, Roger; Hix, Deborah; Kenyon, Ronald; Machi, Dustin; Mao, Chunhong; Nordberg, Eric K.; Olson, Robert; Overbeek, Ross; Pusch, Gordon D.; Shukla, Maulik; Schulman, Julie; Stevens, Rick L.; Sullivan, Daniel E.; Vonstein, Veronika; Warren, Andrew; Will, Rebecca; Wilson, Meredith J.C.; Yoo, Hyun Seung; Zhang, Chengdong; Zhang, Yan; Sobral, Bruno W.

    2014-01-01

    The Pathosystems Resource Integration Center (PATRIC) is the all-bacterial Bioinformatics Resource Center (BRC) (http://www.patricbrc.org). A joint effort by two of the original National Institute of Allergy and Infectious Diseases-funded BRCs, PATRIC provides researchers with an online resource that stores and integrates a variety of data types [e.g. genomics, transcriptomics, protein–protein interactions (PPIs), three-dimensional protein structures and sequence typing data] and associated metadata. Datatypes are summarized for individual genomes and across taxonomic levels. All genomes in PATRIC, currently more than 10 000, are consistently annotated using RAST, the Rapid Annotations using Subsystems Technology. Summaries of different data types are also provided for individual genes, where comparisons of different annotations are available, and also include available transcriptomic data. PATRIC provides a variety of ways for researchers to find data of interest and a private workspace where they can store both genomic and gene associations, and their own private data. Both private and public data can be analyzed together using a suite of tools to perform comparative genomic or transcriptomic analysis. PATRIC also includes integrated information related to disease and PPIs. All the data and integrated analysis and visualization tools are freely available. This manuscript describes updates to the PATRIC since its initial report in the 2007 NAR Database Issue. PMID:24225323

  5. PATRIC, the bacterial bioinformatics database and analysis resource.

    Science.gov (United States)

    Wattam, Alice R; Abraham, David; Dalay, Oral; Disz, Terry L; Driscoll, Timothy; Gabbard, Joseph L; Gillespie, Joseph J; Gough, Roger; Hix, Deborah; Kenyon, Ronald; Machi, Dustin; Mao, Chunhong; Nordberg, Eric K; Olson, Robert; Overbeek, Ross; Pusch, Gordon D; Shukla, Maulik; Schulman, Julie; Stevens, Rick L; Sullivan, Daniel E; Vonstein, Veronika; Warren, Andrew; Will, Rebecca; Wilson, Meredith J C; Yoo, Hyun Seung; Zhang, Chengdong; Zhang, Yan; Sobral, Bruno W

    2014-01-01

    The Pathosystems Resource Integration Center (PATRIC) is the all-bacterial Bioinformatics Resource Center (BRC) (http://www.patricbrc.org). A joint effort by two of the original National Institute of Allergy and Infectious Diseases-funded BRCs, PATRIC provides researchers with an online resource that stores and integrates a variety of data types [e.g. genomics, transcriptomics, protein-protein interactions (PPIs), three-dimensional protein structures and sequence typing data] and associated metadata. Datatypes are summarized for individual genomes and across taxonomic levels. All genomes in PATRIC, currently more than 10,000, are consistently annotated using RAST, the Rapid Annotations using Subsystems Technology. Summaries of different data types are also provided for individual genes, where comparisons of different annotations are available, and also include available transcriptomic data. PATRIC provides a variety of ways for researchers to find data of interest and a private workspace where they can store both genomic and gene associations, and their own private data. Both private and public data can be analyzed together using a suite of tools to perform comparative genomic or transcriptomic analysis. PATRIC also includes integrated information related to disease and PPIs. All the data and integrated analysis and visualization tools are freely available. This manuscript describes updates to the PATRIC since its initial report in the 2007 NAR Database Issue.

  6. Staff Scientist - RNA Bioinformatics | Center for Cancer Research

    Science.gov (United States)

    The newly established RNA Biology Laboratory (RBL) at the Center for Cancer Research (CCR), National Cancer Institute (NCI), National Institutes of Health (NIH) in Frederick, Maryland is recruiting a Staff Scientist with strong expertise in RNA bioinformatics to join the Intramural Research Program’s mission of high impact, high reward science. The RBL is the equivalent of an...

  7. Genomics and bioinformatics resources for translational science in Rosaceae.

    Science.gov (United States)

    Jung, Sook; Main, Dorrie

    2014-01-01

    Recent technological advances in biology promise unprecedented opportunities for rapid and sustainable advancement of crop quality. Following this trend, the Rosaceae research community continues to generate large amounts of genomic, genetic and breeding data. These include annotated whole genome sequences, transcriptome and expression data, proteomic and metabolomic data, genotypic and phenotypic data, and genetic and physical maps. Analysis, storage, integration and dissemination of these data using bioinformatics tools and databases are essential to provide utility of the data for basic, translational and applied research. This review discusses the currently available genomics and bioinformatics resources for the Rosaceae family.

  8. Creating a specialist protein resource network: a meeting report for the protein bioinformatics and community resources retreat

    DEFF Research Database (Denmark)

    Babbitt, Patricia C.; Bagos, Pantelis G.; Bairoch, Amos

    2015-01-01

    During 11–12 August 2014, a Protein Bioinformatics and Community Resources Retreat was held at the Wellcome Trust Genome Campus in Hinxton, UK. This meeting brought together the principal investigators of several specialized protein resources (such as CAZy, TCDB and MEROPS) as well as those from protein databases from the large Bioinformatics centres (including UniProt and RefSeq). The retreat was divided into five sessions: (1) key challenges, (2) the databases represented, (3) best practices for maintenance and curation, (4) information flow to and from large data centers and (5) communication...

  9. Creating a specialist protein resource network: a meeting report for the protein bioinformatics and community resources retreat.

    Science.gov (United States)

    Babbitt, Patricia C; Bagos, Pantelis G; Bairoch, Amos; Bateman, Alex; Chatonnet, Arnaud; Chen, Mark Jinan; Craik, David J; Finn, Robert D; Gloriam, David; Haft, Daniel H; Henrissat, Bernard; Holliday, Gemma L; Isberg, Vignir; Kaas, Quentin; Landsman, David; Lenfant, Nicolas; Manning, Gerard; Nagano, Nozomi; Srinivasan, Narayanaswamy; O'Donovan, Claire; Pruitt, Kim D; Sowdhamini, Ramanathan; Rawlings, Neil D; Saier, Milton H; Sharman, Joanna L; Spedding, Michael; Tsirigos, Konstantinos D; Vastermark, Ake; Vriend, Gerrit

    2015-01-01

    During 11-12 August 2014, a Protein Bioinformatics and Community Resources Retreat was held at the Wellcome Trust Genome Campus in Hinxton, UK. This meeting brought together the principal investigators of several specialized protein resources (such as CAZy, TCDB and MEROPS) as well as those from protein databases from the large Bioinformatics centres (including UniProt and RefSeq). The retreat was divided into five sessions: (1) key challenges, (2) the databases represented, (3) best practices for maintenance and curation, (4) information flow to and from large data centers and (5) communication and funding. An important outcome of this meeting was the creation of a Specialist Protein Resource Network that we believe will improve coordination of the activities of its member resources. We invite further protein database resources to join the network and continue the dialogue.

  10. Bioinformatics

    DEFF Research Database (Denmark)

    Baldi, Pierre; Brunak, Søren

    , and medicine will be particularly affected by the new results and the increased understanding of life at the molecular level. Bioinformatics is the development and application of computer methods for analysis, interpretation, and prediction, as well as for the design of experiments. It has emerged...

  11. mockrobiota: a Public Resource for Microbiome Bioinformatics Benchmarking.

    Science.gov (United States)

    Bokulich, Nicholas A; Rideout, Jai Ram; Mercurio, William G; Shiffer, Arron; Wolfe, Benjamin; Maurice, Corinne F; Dutton, Rachel J; Turnbaugh, Peter J; Knight, Rob; Caporaso, J Gregory

    2016-01-01

    Mock communities are an important tool for validating, optimizing, and comparing bioinformatics methods for microbial community analysis. We present mockrobiota, a public resource for sharing, validating, and documenting mock community data resources, available at http://caporaso-lab.github.io/mockrobiota/. The materials contained in mockrobiota include data set and sample metadata, expected composition data (taxonomy or gene annotations or reference sequences for mock community members), and links to raw data (e.g., raw sequence data) for each mock community data set. mockrobiota does not supply physical sample materials directly, but the data set metadata included for each mock community indicate whether physical sample materials are available. At the time of this writing, mockrobiota contains 11 mock community data sets with known species compositions, including bacterial, archaeal, and eukaryotic mock communities, analyzed by high-throughput marker gene sequencing. IMPORTANCE The availability of standard and public mock community data will facilitate ongoing method optimizations, comparisons across studies that share source data, and greater transparency and access, and will eliminate redundancy. These are also valuable resources for bioinformatics teaching and training. This dynamic resource is intended to expand and evolve to meet the changing needs of the omics community.
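    Benchmarking against a mock community amounts to comparing the composition a pipeline reports with the expected composition that resources like mockrobiota publish. The scoring metric is the user's choice; one common and simple option is the L1 distance between relative-abundance profiles, sketched below (taxon names and abundances are invented for illustration).

```python
def composition_error(expected, observed):
    """L1 distance between two relative-abundance profiles: the sum of
    absolute differences over the union of taxa. 0.0 means a perfect
    match; 2.0 is the worst case for disjoint profiles."""
    taxa = set(expected) | set(observed)
    return sum(abs(expected.get(t, 0.0) - observed.get(t, 0.0)) for t in taxa)

# Hypothetical expected composition for a two-member mock community,
# versus what a classification pipeline reported.
expected = {"Bacillus": 0.5, "Escherichia": 0.5}
observed = {"Bacillus": 0.4, "Escherichia": 0.45, "Unassigned": 0.15}
print(round(composition_error(expected, observed), 2))  # 0.3
```

    Running the same metric over several pipelines on the same mock data set gives the kind of method comparison the abstract describes.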

  12. Bioinformatics Meets Virology: The European Virus Bioinformatics Center's Second Annual Meeting.

    Science.gov (United States)

    Ibrahim, Bashar; Arkhipova, Ksenia; Andeweg, Arno C; Posada-Céspedes, Susana; Enault, François; Gruber, Arthur; Koonin, Eugene V; Kupczok, Anne; Lemey, Philippe; McHardy, Alice C; McMahon, Dino P; Pickett, Brett E; Robertson, David L; Scheuermann, Richard H; Zhernakova, Alexandra; Zwart, Mark P; Schönhuth, Alexander; Dutilh, Bas E; Marz, Manja

    2018-05-14

    The Second Annual Meeting of the European Virus Bioinformatics Center (EVBC), held in Utrecht, Netherlands, focused on computational approaches in virology, with topics including (but not limited to) virus discovery, diagnostics, (meta-)genomics, modeling, epidemiology, molecular structure, evolution, and viral ecology. The goals of the Second Annual Meeting were threefold: (i) to bring together virologists and bioinformaticians from across the academic, industrial, professional, and training sectors to share best practice; (ii) to provide a meaningful and interactive scientific environment to promote discussion and collaboration between students, postdoctoral fellows, and both new and established investigators; (iii) to inspire and suggest new research directions and questions. Approximately 120 researchers from around the world attended the Second Annual Meeting of the EVBC this year, including 15 renowned international speakers. This report presents an overview of new developments and novel research findings that emerged during the meeting.

  13. Report on the EMBER Project--A European Multimedia Bioinformatics Educational Resource

    Science.gov (United States)

    Attwood, Terri K.; Selimas, Ioannis; Buis, Rob; Altenburg, Ruud; Herzog, Robert; Ledent, Valerie; Ghita, Viorica; Fernandes, Pedro; Marques, Isabel; Brugman, Marc

    2005-01-01

    EMBER was a European project aiming to develop bioinformatics teaching materials on the Web and CD-ROM to help address the recognised skills shortage in bioinformatics. The project grew out of pilot work on the development of an interactive web-based bioinformatics tutorial and the desire to repackage that resource with the help of a professional…

  14. Oklahoma Water Resources Center

    Science.gov (United States)


  15. E-MSD: an integrated data resource for bioinformatics.

    Science.gov (United States)

    Velankar, S; McNeil, P; Mittard-Runte, V; Suarez, A; Barrell, D; Apweiler, R; Henrick, K

    2005-01-01

    The Macromolecular Structure Database (MSD) group (http://www.ebi.ac.uk/msd/) continues to enhance the quality and consistency of macromolecular structure data in the worldwide Protein Data Bank (wwPDB) and to work towards the integration of various bioinformatics data resources. One of the major obstacles to the improved integration of structural databases such as MSD and sequence databases like UniProt is the absence of up to date and well-maintained mapping between corresponding entries. We have worked closely with the UniProt group at the EBI to clean up the taxonomy and sequence cross-reference information in the MSD and UniProt databases. This information is vital for the reliable integration of the sequence family databases such as Pfam and Interpro with the structure-oriented databases of SCOP and CATH. This information has been made available to the eFamily group (http://www.efamily.org.uk/) and now forms the basis of the regular interchange of information between the member databases (MSD, UniProt, Pfam, Interpro, SCOP and CATH). This exchange of annotation information has enriched the structural information in the MSD database with annotation from wider sequence-oriented resources. This work was carried out under the 'Structure Integration with Function, Taxonomy and Sequences (SIFTS)' initiative (http://www.ebi.ac.uk/msd-srv/docs/sifts) in the MSD group.

  16. Water Resources Research Center

    Science.gov (United States)

    Welcome to the University of Hawai'i at Manoa Water Resources Research Center. At WRRC we concentrate on addressing unique water and wastewater management problems and issues by researching water-related issues distinctive to these areas. We are Hawaii's link in a network...

  17. An Overview of Bioinformatics Tools and Resources in Allergy.

    Science.gov (United States)

    Fu, Zhiyan; Lin, Jing

    2017-01-01

    The rapidly increasing number of characterized allergens has created huge demands for advanced information storage, retrieval, and analysis. Bioinformatics and machine learning approaches provide useful tools for the study of allergens and epitopes prediction, which greatly complement traditional laboratory techniques. The specific applications mainly include identification of B- and T-cell epitopes, and assessment of allergenicity and cross-reactivity. In order to facilitate the work of clinical and basic researchers who are not familiar with bioinformatics, we review in this chapter the most important databases, bioinformatic tools, and methods with relevance to the study of allergens.

  18. The SIB Swiss Institute of Bioinformatics' resources: focus on curated databases

    OpenAIRE

    Bultet, Lisandra Aguilar; Aguilar Rodriguez, Jose; Ahrens, Christian H; Ahrne, Erik Lennart; Ai, Ni; Aimo, Lucila; Akalin, Altuna; Aleksiev, Tyanko; Alocci, Davide; Altenhoff, Adrian; Alves, Isabel; Ambrosini, Giovanna; Pedone, Pascale Anderle; Angelina, Paolo; Anisimova, Maria

    2016-01-01

    The SIB Swiss Institute of Bioinformatics (www.isb-sib.ch) provides world-class bioinformatics databases, software tools, services and training to the international life science community in academia and industry. These solutions allow life scientists to turn the exponentially growing amount of data into knowledge. Here, we provide an overview of SIB's resources and competence areas, with a strong focus on curated databases and SIB's most popular and widely used resources. In particular, SIB'...

  19. CLIMB (the Cloud Infrastructure for Microbial Bioinformatics): an online resource for the medical microbiology community.

    Science.gov (United States)

    Connor, Thomas R; Loman, Nicholas J; Thompson, Simon; Smith, Andy; Southgate, Joel; Poplawski, Radoslaw; Bull, Matthew J; Richardson, Emily; Ismail, Matthew; Thompson, Simon Elwood-; Kitchen, Christine; Guest, Martyn; Bakke, Marius; Sheppard, Samuel K; Pallen, Mark J

    2016-09-01

    The increasing availability and decreasing cost of high-throughput sequencing has transformed academic medical microbiology, delivering an explosion in available genomes while also driving advances in bioinformatics. However, many microbiologists are unable to exploit the resulting large genomics datasets because they do not have access to relevant computational resources and to an appropriate bioinformatics infrastructure. Here, we present the Cloud Infrastructure for Microbial Bioinformatics (CLIMB) facility, a shared computing infrastructure that has been designed from the ground up to provide an environment where microbiologists can share and reuse methods and data.

  20. MOWServ: a web client for integration of bioinformatic resources

    Science.gov (United States)

    Ramírez, Sergio; Muñoz-Mérida, Antonio; Karlsson, Johan; García, Maximiliano; Pérez-Pulido, Antonio J.; Claros, M. Gonzalo; Trelles, Oswaldo

    2010-01-01

    The productivity of any scientist is affected by cumbersome, tedious and time-consuming tasks that try to make the heterogeneous web services compatible so that they can be useful in their research. MOWServ, the bioinformatic platform offered by the Spanish National Institute of Bioinformatics, was released to provide integrated access to databases and analytical tools. Since its release, the number of available services has grown dramatically, and it has become one of the main contributors of registered services in the EMBRACE Biocatalogue. The ontology that enables most of the web-service compatibility has been curated, improved and extended. The service discovery has been greatly enhanced by Magallanes software and biodataSF. User data are securely stored on the main server by an authentication protocol that enables the monitoring of current or already-finished user’s tasks, as well as the pipelining of successive data processing services. The BioMoby standard has been greatly extended with the new features included in the MOWServ, such as management of additional information (metadata such as extended descriptions, keywords and datafile examples), a qualified registry, error handling, asynchronous services and service replication. All of them have increased the MOWServ service quality, usability and robustness. MOWServ is available at http://www.inab.org/MOWServ/ and has a mirror at http://www.bitlab-es.com/MOWServ/. PMID:20525794

  1. G2LC: Resources Autoscaling for Real Time Bioinformatics Applications in IaaS

    Directory of Open Access Journals (Sweden)

    Rongdong Hu

    2015-01-01

    Cloud computing has started to change the way bioinformatics research is carried out. Researchers who have taken advantage of this technology can process larger amounts of data and speed up scientific discovery. The variability in data volume results in variable computing requirements. Therefore, bioinformatics researchers are pursuing more reliable and efficient methods for conducting sequencing analyses. This paper proposes an automated resource provisioning method, G2LC, for bioinformatics applications in IaaS. It enables applications to output results in real time. Its main purpose is to guarantee application performance while improving resource utilization. Real BLAST sequence-search data are used to evaluate the effectiveness of G2LC. Experimental results show that G2LC guarantees application performance while saving up to 20.14% of resources.
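    G2LC's actual provisioning algorithm is not given in this record; as a generic illustration of autoscaling for variable workloads, the sketch below implements a simple threshold rule (all parameters, names, and the capacity model are hypothetical).

```python
def autoscale(loads, capacity_per_vm=10.0, high=0.8, low=0.3, min_vms=1):
    """Threshold-based autoscaler sketch (not G2LC's algorithm):
    add a VM when cluster utilization exceeds `high`, release one
    when it falls below `low`. Returns the VM count after each step."""
    vms = min_vms
    history = []
    for load in loads:
        util = load / (vms * capacity_per_vm)
        if util > high:
            vms += 1                      # scale out under pressure
        elif util < low and vms > min_vms:
            vms -= 1                      # scale in when idle
        history.append(vms)
    return history

# A bursty workload ramps up and then subsides.
print(autoscale([5, 9, 18, 25, 12, 4, 2]))  # [1, 2, 3, 4, 4, 3, 2]
```

    Production systems add cooldown periods and predictive models on top of rules like this, to avoid oscillation and to provision ahead of demand.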

  2. ENERGY RESOURCES CENTER

    Energy Technology Data Exchange (ETDEWEB)

    Sternberg, Virginia

    1979-11-01

    First I will give a short history of this Center which has had three names and three moves (and one more in the offing) in three years. Then I will tell you about the accomplishments made in the past year. And last, I will discuss what has been learned and what is planned for the future. The Energy and Environment Information Center (EEIC), as it was first known, was organized in August 1975 in San Francisco as a cooperative venture by the Federal Energy Administration (FEA), Energy Research and Development Administration (ERDA) and the Environmental Protection Agency (EPA). These three agencies planned this effort to assist the public in obtaining information about energy and the environmental aspects of energy. The Public Affairs Offices of FEA, ERDA and EPA initiated the idea of the Center. One member from each agency worked at the Center, with assistance from the Lawrence Berkeley Laboratory Information Research Group (LBL IRG) and with on-site help from the EPA Library. The Center was set up in a corner of the EPA Library. FEA and ERDA each contributed one staff member on a rotating basis to cover the daily operation of the Center and money for books and periodicals. EPA contributed space, staff time for ordering, processing and indexing publications, and additional money for acquisitions. The LBL Information Research Group received funds from ERDA on a 189 FY 1976 research project to assist in the development of the Center as a model for future energy centers.

  3. Creating a specialist protein resource network: a meeting report for the protein bioinformatics and community resources retreat

    NARCIS (Netherlands)

    Babbitt, P.C.; Bagos, P.G.; Bairoch, A.; Bateman, A.; Chatonnet, A.; Chen, M.J.; Craik, D.J.; Finn, R.D.; Gloriam, D.; Haft, D.H.; Henrissat, B.; Holliday, G.L.; Isberg, V.; Kaas, Q.; Landsman, D.; Lenfant, N.; Manning, G.; Nagano, N.; Srinivasan, N.; O'Donovan, C.; Pruitt, K.D.; Sowdhamini, R.; Rawlings, N.D.; Saier, M.H., Jr.; Sharman, J.L.; Spedding, M.; Tsirigos, K.D.; Vastermark, A.; Vriend, G.

    2015-01-01

    During 11-12 August 2014, a Protein Bioinformatics and Community Resources Retreat was held at the Wellcome Trust Genome Campus in Hinxton, UK. This meeting brought together the principal investigators of several specialized protein resources (such as CAZy, TCDB and MEROPS) as well as those from

  4. MSeqDR: A Centralized Knowledge Repository and Bioinformatics Web Resource to Facilitate Genomic Investigations in Mitochondrial Disease

    NARCIS (Netherlands)

    L. Shen (Lishuang); M.A. Diroma (Maria Angela); M. Gonzalez (Michael); D. Navarro-Gomez (Daniel); J. Leipzig (Jeremy); M.T. Lott (Marie T.); M. van Oven (Mannis); D.C. Wallace; C.C. Muraresku (Colleen Clarke); Z. Zolkipli-Cunningham (Zarazuela); P.F. Chinnery (Patrick); M. Attimonelli (Marcella); S. Zuchner (Stephan); M.J. Falk (Marni J.); X. Gai (Xiaowu)

    2016-01-01

    textabstractMSeqDR is the Mitochondrial Disease Sequence Data Resource, a centralized and comprehensive genome and phenome bioinformatics resource built by the mitochondrial disease community to facilitate clinical diagnosis and research investigations of individual patient phenotypes, genomes,

  5. Tools and data services registry: a community effort to document bioinformatics resources

    Science.gov (United States)

    Ison, Jon; Rapacki, Kristoffer; Ménager, Hervé; Kalaš, Matúš; Rydza, Emil; Chmura, Piotr; Anthon, Christian; Beard, Niall; Berka, Karel; Bolser, Dan; Booth, Tim; Bretaudeau, Anthony; Brezovsky, Jan; Casadio, Rita; Cesareni, Gianni; Coppens, Frederik; Cornell, Michael; Cuccuru, Gianmauro; Davidsen, Kristian; Vedova, Gianluca Della; Dogan, Tunca; Doppelt-Azeroual, Olivia; Emery, Laura; Gasteiger, Elisabeth; Gatter, Thomas; Goldberg, Tatyana; Grosjean, Marie; Grüning, Björn; Helmer-Citterich, Manuela; Ienasescu, Hans; Ioannidis, Vassilios; Jespersen, Martin Closter; Jimenez, Rafael; Juty, Nick; Juvan, Peter; Koch, Maximilian; Laibe, Camille; Li, Jing-Woei; Licata, Luana; Mareuil, Fabien; Mičetić, Ivan; Friborg, Rune Møllegaard; Moretti, Sebastien; Morris, Chris; Möller, Steffen; Nenadic, Aleksandra; Peterson, Hedi; Profiti, Giuseppe; Rice, Peter; Romano, Paolo; Roncaglia, Paola; Saidi, Rabie; Schafferhans, Andrea; Schwämmle, Veit; Smith, Callum; Sperotto, Maria Maddalena; Stockinger, Heinz; Vařeková, Radka Svobodová; Tosatto, Silvio C.E.; de la Torre, Victor; Uva, Paolo; Via, Allegra; Yachdav, Guy; Zambelli, Federico; Vriend, Gert; Rost, Burkhard; Parkinson, Helen; Løngreen, Peter; Brunak, Søren

    2016-01-01

    Life sciences are yielding huge data sets that underpin scientific discoveries fundamental to improvement in human health, agriculture and the environment. In support of these discoveries, a plethora of databases and tools are deployed, in technically complex and diverse implementations, across a spectrum of scientific disciplines. The corpus of documentation of these resources is fragmented across the Web, with much redundancy, and has lacked a common standard of information. The outcome is that scientists must often struggle to find, understand, compare and use the best resources for the task at hand. Here we present a community-driven curation effort, supported by ELIXIR—the European infrastructure for biological information—that aspires to a comprehensive and consistent registry of information about bioinformatics resources. The sustainable upkeep of this Tools and Data Services Registry is assured by a curation effort driven by and tailored to local needs, and shared amongst a network of engaged partners. As of November 2015, the registry includes 1785 resources, with depositions from 126 individual registrations including 52 institutional providers and 74 individuals. With community support, the registry can become a standard for dissemination of information about bioinformatics resources: we welcome everyone to join us in this common endeavour. The registry is freely available at https://bio.tools. PMID:26538599

  6. FungiDB: An Integrated Bioinformatic Resource for Fungi and Oomycetes

    Directory of Open Access Journals (Sweden)

    Evelina Y. Basenko

    2018-03-01

    FungiDB (fungidb.org) is a free online resource for data mining and functional genomics analysis for fungal and oomycete species. FungiDB is part of the Eukaryotic Pathogen Genomics Database Resource (EuPathDB, eupathdb.org) platform that integrates genomic, transcriptomic, proteomic, and phenotypic datasets, and other types of data for pathogenic and nonpathogenic, free-living and parasitic organisms. FungiDB is one of the largest EuPathDB databases, containing nearly 100 genomes obtained from GenBank, the Aspergillus Genome Database (AspGD), the Broad Institute, the Joint Genome Institute (JGI), Ensembl, and other sources. FungiDB offers a user-friendly web interface with embedded bioinformatics tools that support custom in silico experiments that leverage FungiDB-integrated data. In addition, a Galaxy-based workspace enables users to generate custom pipelines for large-scale data analysis (e.g., RNA-Seq, variant calling, etc.). This review provides an introduction to the FungiDB resources and focuses on available features, tools, and queries and how they can be used to mine data across a diverse range of integrated FungiDB datasets and records.

  7. National Sexual Violence Resource Center (NSVRC)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The National Sexual Violence Resource Center (NSVRC) is a national information and resource hub relating to all aspects of sexual violence. NSVRC staff collect and...

  8. MSeqDR: A Centralized Knowledge Repository and Bioinformatics Web Resource to Facilitate Genomic Investigations in Mitochondrial Disease

    OpenAIRE

    Shen, Lishuang; Diroma, Maria Angela; Gonzalez, Michael; Navarro-Gomez, Daniel; Leipzig, Jeremy; Lott, Marie T.; Oven, Mannis; Wallace, D.C.; Muraresku, Colleen Clarke; Zolkipli-Cunningham, Zarazuela; Chinnery, Patrick; Attimonelli, Marcella; Zuchner, Stephan; Falk, Marni J.; Gai, Xiaowu

    2016-01-01

    textabstractMSeqDR is the Mitochondrial Disease Sequence Data Resource, a centralized and comprehensive genome and phenome bioinformatics resource built by the mitochondrial disease community to facilitate clinical diagnosis and research investigations of individual patient phenotypes, genomes, genes, and variants. A central Web portal (https://mseqdr.org) integrates community knowledge from expert-curated databases with genomic and phenotype data shared by clinicians and researchers. MSeqDR ...

  9. BioStar: an online question & answer resource for the bioinformatics community

    Science.gov (United States)

    Although the era of big data has produced many bioinformatics tools and databases, using them effectively often requires specialized knowledge. Many groups lack bioinformatics expertise, and frequently find that software documentation is inadequate and local colleagues may be overburdened or unfamil...

  10. Database Resources of the BIG Data Center in 2018.

    Science.gov (United States)

    2018-01-04

    The BIG Data Center at Beijing Institute of Genomics (BIG) of the Chinese Academy of Sciences provides freely open access to a suite of database resources in support of worldwide research activities in both academia and industry. With the vast amounts of omics data generated at ever-greater scales and rates, the BIG Data Center is continually expanding, updating and enriching its core database resources through big-data integration and value-added curation, including BioCode (a repository archiving bioinformatics tool codes), BioProject (a biological project library), BioSample (a biological sample library), Genome Sequence Archive (GSA, a data repository for archiving raw sequence reads), Genome Warehouse (GWH, a centralized resource housing genome-scale data), Genome Variation Map (GVM, a public repository of genome variations), Gene Expression Nebulas (GEN, a database of gene expression profiles based on RNA-Seq data), Methylation Bank (MethBank, an integrated databank of DNA methylomes), and Science Wikis (a series of biological knowledge wikis for community annotations). In addition, three featured web services are provided, viz., BIG Search (search as a service; a scalable inter-domain text search engine), BIG SSO (single sign-on as a service; a user access control system to gain access to multiple independent systems with a single ID and password) and Gsub (submission as a service; a unified submission service for all relevant resources). All of these resources are publicly accessible through the home page of the BIG Data Center at http://bigd.big.ac.cn. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  11. Applications and Methods Utilizing the Simple Semantic Web Architecture and Protocol (SSWAP) for Bioinformatics Resource Discovery and Disparate Data and Service Integration

    Science.gov (United States)

    Scientific data integration and computational service discovery are challenges for the bioinformatic community. This process is made more difficult by the separate and independent construction of biological databases, which makes the exchange of scientific data between information resources difficu...

  12. Model-driven user interfaces for bioinformatics data resources: regenerating the wheel as an alternative to reinventing it

    Directory of Open Access Journals (Sweden)

    Swainston Neil

    2006-12-01

    Full Text Available Abstract Background The proliferation of data repositories in bioinformatics has resulted in the development of numerous interfaces that allow scientists to browse, search and analyse the data that they contain. Interfaces typically support repository access by means of web pages, but other means are also used, such as desktop applications and command line tools. Interfaces often duplicate functionality amongst each other, and this implies that associated development activities are repeated in different laboratories. Interfaces developed by public laboratories are often created with limited developer resources. In such environments, reducing the time spent on creating user interfaces allows for a better deployment of resources for specialised tasks, such as data integration or analysis. Laboratories maintaining data resources are challenged to reconcile requirements for software that is reliable, functional and flexible with limitations on software development resources. Results This paper proposes a model-driven approach for the partial generation of user interfaces for searching and browsing bioinformatics data repositories. Inspired by the Model Driven Architecture (MDA) of the Object Management Group (OMG), we have developed a system that generates interfaces designed for use with bioinformatics resources. This approach helps laboratory domain experts decrease the amount of time they have to spend dealing with the repetitive aspects of user interface development. As a result, the amount of time they can spend on gathering requirements and helping develop specialised features increases. The resulting system is known as Pierre, and has been validated through its application to use cases in the life sciences, including the PEDRoDB proteomics database and the e-Fungi data warehouse.
Conclusion MDAs focus on generating software from models that describe aspects of service capabilities, and can be applied to support rapid development of repository

  13. The Virtual Xenbase: transitioning an online bioinformatics resource to a private cloud.

    Science.gov (United States)

    Karimi, Kamran; Vize, Peter D

    2014-01-01

    As a model organism database, Xenbase has been providing informatics and genomic data on Xenopus (Silurana) tropicalis and Xenopus laevis frogs for more than a decade. The Xenbase database contains curated, as well as community-contributed and automatically harvested literature, gene and genomic data. A GBrowse genome browser, a BLAST+ server and stock center support are available on the site. When this resource was first built, all software services and components in Xenbase ran on a single physical server, with inherent reliability, scalability and inter-dependence issues. Recent advances in networking and virtualization techniques allowed us to move Xenbase to a virtual environment, and more specifically to a private cloud. To do so we decoupled the different software services and components, such that each would run on a different virtual machine. In the process, we also upgraded many of the components. The resulting system is faster and more reliable. System maintenance is easier, as individual virtual machines can now be updated, backed up and changed independently. We are also experiencing more effective resource allocation and utilization. Database URL: www.xenbase.org. © The Author(s) 2014. Published by Oxford University Press.

  14. A Reading Resource Center: Why and How

    Science.gov (United States)

    Minkoff, Henry

    1974-01-01

    Hunter College has set up a Reading Resource Center where students receive individualized help in specific problem areas not covered in their reading classes and where teachers can find materials either for their own edification or for use in the classroom. (Author)

  15. Self-Access Centers: Maximizing Learners’ Access to Center Resources

    Directory of Open Access Journals (Sweden)

    Mark W. Tanner

    2010-09-01

    Full Text Available Originally published in TESL-EJ March 2009, Volume 12, Number 4 (http://tesl-ej.org/ej48/a2.html). Reprinted with permission from the authors. Although some students have discovered how to use self-access centers effectively, the majority appear to be unaware of available resources. A website and database of materials were created to help students locate materials and use the Self-Access Study Center (SASC) at Brigham Young University’s English Language Center (ELC) more effectively. Students took two surveys regarding their use of the SASC. The first survey was given before the website and database were made available. A second survey was administered 12 weeks after students had been introduced to the resource. An analysis of the data shows that students tend to use SASC resources more autonomously as a result of having a web-based database. The survey results suggest that SAC managers can encourage more autonomous use of center materials by providing a website and database to help students find appropriate materials to use to learn English.

  16. Illinois trauma centers and community violence resources

    Directory of Open Access Journals (Sweden)

    Bennet Butler

    2014-01-01

    Full Text Available Background: Elder abuse and neglect (EAN), intimate partner violence (IPV), and street-based community violence (SBCV) are significant public health problems, which frequently lead to traumatic injury. Trauma centers can provide an effective setting for intervention and referral, potentially interrupting the cycle of violence. Aims: To assess existing institutional resources for the identification and treatment of violence victims among patients presenting with acute injury to statewide trauma centers. Settings and Design: We used a prospective, web-based survey of trauma medical directors at 62 Illinois trauma centers. Nonresponders were contacted via telephone to complete the survey. Materials and Methods: This survey was based on a survey conducted in 2004 assessing trauma centers and IPV resources. We modified this survey to collect data on IPV, EAN, and SBCV. Statistical Analysis: Univariate and bivariate statistics were performed using STATA statistical software. Results: We found that 100% of trauma centers now screen for IPV, an improvement from 2004 (P = 0.007). Screening for EAN (70%) and SBCV (61%) was less common (P < 0.001), and hospitals thought that resources for SBCV in particular were inadequate (P < 0.001) and that fewer resources were available for these patients (P = 0.02). However, there was a lack of uniformity in screening, tracking, and referral practices for victims of violence throughout the state. Conclusion: The multiplicity of strategies for tracking and referring victims of violence in Illinois makes it difficult to assess screening and tracking or form generalized policy recommendations. This presents an opportunity to improve care delivered to victims of violence by standardizing care and referral protocols.

  17. Improved genomic resources and new bioinformatic workflow for the carcinogenic parasite Clonorchis sinensis: Biotechnological implications.

    Science.gov (United States)

    Wang, Daxi; Korhonen, Pasi K; Gasser, Robin B; Young, Neil D

    Clonorchis sinensis (family Opisthorchiidae) is an important foodborne parasite that has a major socioeconomic impact on ~35 million people predominantly in China, Vietnam, Korea and the Russian Far East. In humans, infection with C. sinensis causes clonorchiasis, a complex hepatobiliary disease that can induce cholangiocarcinoma (CCA), a malignant cancer of the bile ducts. Central to understanding the epidemiology of this disease is knowledge of genetic variation within and among populations of this parasite. Although most published molecular studies seem to suggest that C. sinensis represents a single species, evidence of karyotypic variation within C. sinensis and cryptic species within a related opisthorchiid fluke (Opisthorchis viverrini) emphasise the importance of studying and comparing the genes and genomes of geographically distinct isolates of C. sinensis. Recently, we sequenced, assembled and characterised a draft nuclear genome of a C. sinensis isolate from Korea and compared it with a published draft genome of a Chinese isolate of this species using a bioinformatic workflow established for comparing draft genome assemblies and their gene annotations. We identified that 50.6% and 51.3% of the Korean and Chinese C. sinensis genomic scaffolds were syntenic, respectively. Within aligned syntenic blocks, the genomes had a high level of nucleotide identity (99.1%) and encoded 15 variable proteins likely to be involved in diverse biological processes. Here, we review current technical challenges of using draft genome assemblies to undertake comparative genomic analyses to quantify genetic variation between isolates of the same species. Using a workflow that overcomes these challenges, we report on a high-quality draft genome for C. sinensis from Korea and comparative genomic analyses, as a basis for future investigations of the genetic structures of C. sinensis populations, and discuss the biotechnological implications of these explorations. 
    Copyright © 2018.

  18. MSeqDR: A Centralized Knowledge Repository and Bioinformatics Web Resource to Facilitate Genomic Investigations in Mitochondrial Disease.

    Science.gov (United States)

    Shen, Lishuang; Diroma, Maria Angela; Gonzalez, Michael; Navarro-Gomez, Daniel; Leipzig, Jeremy; Lott, Marie T; van Oven, Mannis; Wallace, Douglas C; Muraresku, Colleen Clarke; Zolkipli-Cunningham, Zarazuela; Chinnery, Patrick F; Attimonelli, Marcella; Zuchner, Stephan; Falk, Marni J; Gai, Xiaowu

    2016-06-01

    MSeqDR is the Mitochondrial Disease Sequence Data Resource, a centralized and comprehensive genome and phenome bioinformatics resource built by the mitochondrial disease community to facilitate clinical diagnosis and research investigations of individual patient phenotypes, genomes, genes, and variants. A central Web portal (https://mseqdr.org) integrates community knowledge from expert-curated databases with genomic and phenotype data shared by clinicians and researchers. MSeqDR also functions as a centralized application server for Web-based tools to analyze data across both mitochondrial and nuclear DNA, including investigator-driven whole exome or genome dataset analyses through MSeqDR-Genesis. MSeqDR-GBrowse genome browser supports interactive genomic data exploration and visualization with custom tracks relevant to mtDNA variation and mitochondrial disease. MSeqDR-LSDB is a locus-specific database that currently manages 178 mitochondrial diseases, 1,363 genes associated with mitochondrial biology or disease, and 3,711 pathogenic variants in those genes. MSeqDR Disease Portal allows hierarchical tree-style disease exploration to evaluate their unique descriptions, phenotypes, and causative variants. Automated genomic data submission tools are provided that capture ClinVar compliant variant annotations. PhenoTips will be used for phenotypic data submission on deidentified patients using human phenotype ontology terminology. The development of a dynamic informed patient consent process to guide data access is underway to realize the full potential of these resources. © 2016 WILEY PERIODICALS, INC.

  19. A multipurpose computing center with distributed resources

    Science.gov (United States)

    Chudoba, J.; Adam, M.; Adamová, D.; Kouba, T.; Mikula, A.; Říkal, V.; Švec, J.; Uhlířová, J.; Vokáč, P.; Svatoš, M.

    2017-10-01

    The Computing Center of the Institute of Physics (CC IoP) of the Czech Academy of Sciences serves a broad spectrum of users with various computing needs. It runs a WLCG Tier-2 center for the ALICE and ATLAS experiments; the same group of services is used by the astroparticle physics projects the Pierre Auger Observatory (PAO) and the Cherenkov Telescope Array (CTA). An OSG stack is installed for the NOvA experiment. Other groups of users directly use the local batch system. Storage capacity is distributed across several locations. The DPM servers used by ATLAS and the PAO are all in the same server room, but several xrootd servers for the ALICE experiment are operated in the Nuclear Physics Institute in Řež, about 10 km away. The storage capacity for ATLAS and the PAO is extended by resources of CESNET, the Czech National Grid Initiative representative. Those resources are in Plzen and Jihlava, more than 100 km away from the CC IoP. Both distant sites use a hierarchical storage solution based on disks and tapes. They installed one common dCache instance, which is published in the CC IoP BDII. ATLAS users can use these resources with the standard ATLAS tools in the same way as local storage, without noticing the geographical distribution. The computing clusters LUNA and EXMAG, dedicated to users mostly from the Solid State Physics departments, offer resources for parallel computing. They are part of the Czech NGI infrastructure MetaCentrum, with a distributed batch system based on Torque with a custom scheduler. The clusters are installed remotely by the MetaCentrum team, and a local contact helps only when needed. Users from the IoP have exclusive access to only a part of these two clusters and take advantage of higher priorities on the rest (1500 cores in total), which can also be used by any user of MetaCentrum. IoP researchers can also use distant resources located in several towns of the Czech Republic, with a capacity of more than 12000 cores in total.

  20. An object-oriented programming system for the integration of internet-based bioinformatics resources.

    Science.gov (United States)

    Beveridge, Allan

    2006-01-01

    The Internet consists of a vast inhomogeneous reservoir of data. Developing software that can integrate a wide variety of different data sources is a major challenge that must be addressed for the realisation of the full potential of the Internet as a scientific research tool. This article presents a semi-automated object-oriented programming system for integrating web-based resources. We demonstrate that the current Internet standards (HTML, CGI [common gateway interface], Java, etc.) can be exploited to develop a data retrieval system that scans existing web interfaces and then uses a set of rules to generate new Java code that can automatically retrieve data from the Web. The validity of the software has been demonstrated by testing it on several biological databases. We also examine the current limitations of the Internet and discuss the need for the development of universal standards for web-based data.
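The rule-based retrieval idea described in this record can be sketched in a few lines. This is an illustrative Python sketch only: the original system generated Java code from scanned web interfaces, and the rule format, field names and page fragment below are invented for the example.

```python
import re

# Minimal sketch of rule-driven data extraction from a web interface.
# The rule format and example HTML are invented for illustration; the
# original system scanned CGI/HTML interfaces and generated Java code.

def extract_fields(html: str, rules: dict) -> dict:
    """Apply one regex rule per field and return whatever matches."""
    record = {}
    for field, pattern in rules.items():
        match = re.search(pattern, html, re.DOTALL)
        if match:
            record[field] = match.group(1).strip()
    return record

# Hypothetical page fragment from a sequence-database entry.
page = ("<tr><td>Accession</td><td>P12345</td></tr>"
        "<tr><td>Organism</td><td>Homo sapiens</td></tr>")

rules = {
    "accession": r"Accession</td><td>([^<]+)",
    "organism": r"Organism</td><td>([^<]+)",
}

print(extract_fields(page, rules))
# → {'accession': 'P12345', 'organism': 'Homo sapiens'}
```

In the same spirit as the article's approach, the rules could themselves be generated by scanning a page's structure rather than written by hand.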

  1. miRToolsGallery: a tag-based and rankable microRNA bioinformatics resources database portal

    Science.gov (United States)

    Chen, Liang; Heikkinen, Liisa; Wang, ChangLiang; Yang, Yang; Knott, K Emily

    2018-01-01

    Abstract Hundreds of bioinformatics tools have been developed for MicroRNA (miRNA) investigations, including those used for identification, target prediction, structure and expression profile analysis. However, finding the correct tool for a specific application requires the tedious and laborious process of locating, downloading, testing and validating the appropriate tool from a group of nearly a thousand. In order to facilitate this process, we developed a novel database portal named miRToolsGallery. We constructed the portal by manually curating > 950 miRNA analysis tools and resources. In the portal, locating the appropriate tool is expedited by search, filter and ranking functions. The ranking feature is vital to quickly identify and prioritize the more useful from the obscure tools. Tools are ranked via different criteria, including the PageRank algorithm, date of publication, number of citations, average of votes and number of publications. miRToolsGallery provides links and data for the comprehensive collection of currently available miRNA tools, with a ranking function that can be adjusted using different criteria according to specific requirements. Database URL: http://www.mirtoolsgallery.org PMID:29688355
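As a rough illustration of how such adjustable multi-criteria ranking can work, the sketch below scores tools as a weighted sum of min-max-normalised criteria. The field names, weights and example tools are hypothetical, and the portal's PageRank component is not reproduced here.

```python
# Illustrative sketch of multi-criteria tool ranking. The criteria,
# weights and example records are invented; a portal such as
# miRToolsGallery also ranks via PageRank, omitted here.

def rank_tools(tools, weights):
    """Rank tools by a weighted sum of min-max-normalised criteria."""
    def normalise(values):
        lo, hi = min(values), max(values)
        span = hi - lo or 1  # avoid division by zero when all values equal
        return [(v - lo) / span for v in values]

    criteria = list(weights)
    # One normalised column per criterion, aligned with the tools list.
    columns = {c: normalise([t[c] for t in tools]) for c in criteria}
    scored = [
        (sum(weights[c] * columns[c][i] for c in criteria), t["name"])
        for i, t in enumerate(tools)
    ]
    return [name for _, name in sorted(scored, reverse=True)]

tools = [
    {"name": "toolA", "citations": 900, "votes": 4.1, "year": 2012},
    {"name": "toolB", "citations": 120, "votes": 4.8, "year": 2017},
    {"name": "toolC", "citations": 300, "votes": 3.0, "year": 2015},
]
print(rank_tools(tools, {"citations": 0.5, "votes": 0.3, "year": 0.2}))
# → ['toolA', 'toolB', 'toolC']
```

Changing the weight dictionary reorders the results, which mirrors the portal's idea of adjusting the ranking criteria to specific requirements.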

  2. Virtualized cloud data center networks issues in resource management

    CERN Document Server

    Tsai, Linjiun

    2016-01-01

    This book discusses the characteristics of virtualized cloud networking, identifies the requirements of cloud network management, and illustrates the challenges in deploying virtual clusters in multi-tenant cloud data centers. The book also introduces network partitioning techniques to provide contention-free allocation, topology-invariant reallocation, and highly efficient resource utilization, based on the Fat-tree network structure. Managing cloud data center resources without considering resource contentions among different cloud services and dynamic resource demands adversely affects the performance of cloud services and reduces the resource utilization of cloud data centers. These challenges are mainly due to strict cluster topology requirements, resource contentions between uncooperative cloud services, and spatial/temporal data center resource fragmentation. Cloud data center network resource allocation/reallocation which cope well with such challenges will allow cloud services to be provisioned with ...

  3. Animal Resource Program | Center for Cancer Research

    Science.gov (United States)

    CCR Animal Resource Program The CCR Animal Resource Program plans, develops, and coordinates laboratory animal resources for CCR’s research programs. We also provide training, imaging, and technology development in support of moving basic discoveries to the clinic. The ARP Manager:

  4. Animal Resource Program | Center for Cancer Research

    Science.gov (United States)

    CCR Animal Resource Program The CCR Animal Resource Program plans, develops, and coordinates laboratory animal resources for CCR’s research programs. We also provide training, imaging, and technology development in support of moving basic discoveries to the clinic. The ARP Office:

  5. 76 FR 53885 - Patent and Trademark Resource Centers Metrics

    Science.gov (United States)

    2011-08-30

    ... DEPARTMENT OF COMMERCE United States Patent and Trademark Office Patent and Trademark Resource Centers Metrics ACTION: Proposed collection; comment request. SUMMARY: The United States Patent and... ``Patent and Trademark Resource Centers Metrics comment'' in the subject line of the message. Mail: Susan K...

  6. A Dynamic and Interactive Monitoring System of Data Center Resources

    Directory of Open Access Journals (Sweden)

    Yu Ling-Fei

    2016-01-01

    Full Text Available To maximize the utilization and effectiveness of resources, a well-suited management system is essential for modern data centers. Traditional approaches to resource provisioning and service requests have proven ill suited for virtualization and cloud computing. The manual handoffs between technology teams were also highly inefficient and poorly documented. In this paper, a dynamic and interactive monitoring system for data center resources, ResourceView, is presented. By consolidating all data center management functionality into a single interface, ResourceView shares a common view of the timeline metric status, while providing comprehensive, centralized monitoring of data center physical and virtual IT assets, including power, cooling, physical space and VMs, so as to improve availability and efficiency. In addition, servers and VMs can be monitored from several viewpoints, such as clusters, racks and projects, which is very convenient for users.

  7. NASA Center for Computational Sciences: History and Resources

    Science.gov (United States)

    2000-01-01

    The NASA Center for Computational Sciences (NCCS) has been a leading capacity computing facility, providing a production environment and support resources to address the challenges facing the Earth and space sciences research community.

  8. Patient centered integrated clinical resource management.

    Science.gov (United States)

    Hofdijk, Jacob

    2011-01-01

    Funding systems have had an enormous impact on providers' IT systems and have prevented the implementation of designs focused on the health issues of patients. The paradigm shift the Dutch Ministry of Health has taken in funding health care has had a remarkable impact on the orientation of IT systems design. Since 2007 the next step has been taken: the application of the funding concept to chronic diseases, using clinical standards as the norm. The focus on prevention involves the patient as an active partner in the care plan. The new dimension in funding has initiated a process directed at the development of systems to support collaborative working and the active involvement of the patient and his or her informal carers. This national approach will be presented to assess its international potential, as all countries face a long-term care crisis and lack the resources to meet the health needs of their populations.

  9. Survivable resource orchestration for optically interconnected data center networks.

    Science.gov (United States)

    Zhang, Qiong; She, Qingya; Zhu, Yi; Wang, Xi; Palacharla, Paparao; Sekiya, Motoyoshi

    2014-01-13

    We propose resource orchestration schemes in overlay networks enabled by optical network virtualization. Based on the information from underlying optical networks, our proposed schemes provision the fewest data centers to guarantee K-connect survivability, thus maintaining resource availability for cloud applications under any failure.

  10. Navigating the changing learning landscape: perspective from bioinformatics.ca

    OpenAIRE

    Brazas, Michelle D.; Ouellette, B. F. Francis

    2013-01-01

    With the advent of YouTube channels in bioinformatics, open platforms for problem solving in bioinformatics, active web forums in computing analyses and online resources for learning to code or use a bioinformatics tool, the more traditional continuing education bioinformatics training programs have had to adapt. Bioinformatics training programs that solely rely on traditional didactic methods are being superseded by these newer resources. Yet such face-to-face instruction is still invaluable...

  11. Designing and Implementing a Parenting Resource Center for Pregnant Teens

    Science.gov (United States)

    Broussard, Anne B; Broussard, Brenda S

    2009-01-01

    The Resource Center for Young Parents-To-Be is a longstanding and successful grant-funded project that was initiated as a response to an identified community need. Senior-level baccalaureate nursing students and their maternity-nursing instructors are responsible for staffing the resource center's weekly sessions, which take place at a public school site for pregnant adolescents. Childbirth educators interested in working with this population could assist in replicating this exemplary clinical project in order to provide prenatal education to this vulnerable and hard-to-reach group. PMID:20190852

  12. Nursing Reference Center: a point-of-care resource.

    Science.gov (United States)

    Vardell, Emily; Paulaitis, Gediminas Geddy

    2012-01-01

    Nursing Reference Center is a point-of-care resource designed for the practicing nurse, as well as nursing administrators, nursing faculty, and librarians. Users can search across multiple resources, including topical Quick Lessons, evidence-based care sheets, patient education materials, practice guidelines, and more. Additional features include continuing education modules, e-books, and a new iPhone application. A sample search and comparison with similar databases were conducted.

  13. Biosecurity and Health Monitoring at the Zebrafish International Resource Center

    OpenAIRE

    Murray, Katrina N.; Varga, Zoltán M.; Kent, Michael L.

    2016-01-01

    The Zebrafish International Resource Center (ZIRC) is a repository and distribution center for mutant, transgenic, and wild-type zebrafish. In recent years annual imports of new zebrafish lines to ZIRC have increased tremendously. In addition, after 15 years of research, we have identified some of the most virulent pathogens affecting zebrafish that should be avoided in large production facilities, such as ZIRC. Therefore, while importing a high volume of new lines we prioritize safeguarding ...

  14. Electronic Commerce Resource Centers. An Industry--University Partnership.

    Science.gov (United States)

    Gulledge, Thomas R.; Sommer, Rainer; Tarimcilar, M. Murat

    1999-01-01

    Electronic Commerce Resource Centers focus on transferring emerging technologies to small businesses through university/industry partnerships. Successful implementation hinges on a strategic operating plan, creation of measurable value for customers, investment in customer-targeted training, and measurement of performance outputs. (SK)

  15. Building an Information Resource Center for Competitive Intelligence.

    Science.gov (United States)

    Martin, J. Sperling

    1992-01-01

    Outlines considerations in the design of a Competitive Intelligence Information Resource Center (CIIRC), which is needed by business organizations for effective strategic decision making. Discussed are user needs, user participation, information sources, technology and interface design, operational characteristics, and planning for implementation.

  16. The NIH-NIAID Filariasis Research Reagent Resource Center.

    Directory of Open Access Journals (Sweden)

    Michelle L Michalski

    2011-11-01

    Full Text Available Filarial worms cause a variety of tropical diseases in humans; however, they are difficult to study because they have complex life cycles that require arthropod intermediate hosts and mammalian definitive hosts. Research efforts in industrialized countries are further complicated by the fact that some filarial nematodes that cause disease in humans are restricted in host specificity to humans alone. This potentially makes the commitment to research difficult, expensive, and restrictive. Over 40 years ago, the United States National Institutes of Health-National Institute of Allergy and Infectious Diseases (NIH-NIAID) established a resource from which investigators could obtain various filarial parasite species and life cycle stages without having to expend the effort and funds necessary to maintain the entire life cycles in their own laboratories. This centralized resource (The Filariasis Research Reagent Resource Center, or FR3) translated into cost savings to both NIH-NIAID and to principal investigators by freeing up personnel costs on grants and allowing investigators to divert more funds to targeted research goals. Many investigators, especially those new to the field of tropical medicine, are unaware of the scope of materials and support provided by the FR3. This review is intended to provide a short history of the contract, brief descriptions of the filarial species and molecular resources provided, and an estimate of the impact the resource has had on the research community, and describes some new additions and potential benefits the resource center might have for the ever-changing research interests of investigators.

  17. 2014 Mid-Atlantic Telehealth Resource Center Annual Summit

    Directory of Open Access Journals (Sweden)

    Katharine Hsu Wibberly

    2013-12-01

    Full Text Available The Mid-Atlantic Telehealth Resource Center (MATRC; http://www.matrc.org/) advances the adoption and utilization of telehealth within the MATRC region and works collaboratively with the other federally funded Telehealth Resource Centers to accomplish the same nationally. MATRC offers technical assistance and other resources within the following mid-Atlantic states: Delaware, District of Columbia, Kentucky, Maryland, North Carolina, Pennsylvania, Virginia and West Virginia. The 2014 MATRC Summit “Adding Value through Sustainable Telehealth” will be held March 30-April 1, 2014, at the Fredericksburg Expo & Conference Center, Fredericksburg, VA. The Summit will explore how telehealth adds value to patients, practitioners, hospitals, health systems, and other facilities. Participants will experience a highly interactive program built around the case history of “Mr. Doe” as he progresses through the primary care, inpatient hospitalization, and post-discharge environments. The Summit will conclude with a session on financial and business models for providing sustainable telehealth services. For further information and registration, visit: http://matrc.org/component/content/article/2-uncategorised/80-mid-atlantic-telehealth-resource-summit-2014

  18. Amarillo National Resource Center for Plutonium 1999 plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-01-30

    The purpose of the Amarillo National Resource Center for Plutonium is to serve the Texas Panhandle, the State of Texas and the US Department of Energy by: conducting scientific and technical research; advising decision makers; and providing information on nuclear weapons materials and related environment, safety, health, and nonproliferation issues while building academic excellence in science and technology. This paper describes the electronic resource library which provides the national archives of technical, policy, historical, and educational information on plutonium. Research projects related to the following topics are described: Environmental restoration and protection; Safety and health; Waste management; Education; Training; Instrumentation development; Materials science; Plutonium processing and handling; and Storage.

  19. Amarillo National Resource Center for Plutonium 1999 plan

    International Nuclear Information System (INIS)

    1999-01-01

    The purpose of the Amarillo National Resource Center for Plutonium is to serve the Texas Panhandle, the State of Texas and the US Department of Energy by: conducting scientific and technical research; advising decision makers; and providing information on nuclear weapons materials and related environment, safety, health, and nonproliferation issues while building academic excellence in science and technology. This paper describes the electronic resource library which provides the national archives of technical, policy, historical, and educational information on plutonium. Research projects related to the following topics are described: Environmental restoration and protection; Safety and health; Waste management; Education; Training; Instrumentation development; Materials science; Plutonium processing and handling; and Storage.

  20. Crowdsourcing for bioinformatics.

    Science.gov (United States)

    Good, Benjamin M; Su, Andrew I

    2013-08-15

    Bioinformatics is faced with a variety of problems that require human involvement. Tasks like genome annotation, image analysis, knowledge-base population and protein structure determination all benefit from human input. In some cases, people are needed in vast quantities, whereas in others, we need just a few with rare abilities. Crowdsourcing encompasses an emerging collection of approaches for harnessing such distributed human intelligence. Recently, the bioinformatics community has begun to apply crowdsourcing in a variety of contexts, yet few resources are available that describe how these human-powered systems work and how to use them effectively in scientific domains. Here, we provide a framework for understanding and applying several different types of crowdsourcing. The framework considers two broad classes: systems for solving large-volume 'microtasks' and systems for solving high-difficulty 'megatasks'. Within these classes, we discuss system types, including volunteer labor, games with a purpose, microtask markets and open innovation contests. We illustrate each system type with successful examples in bioinformatics and conclude with a guide for matching problems to crowdsourcing solutions that highlights the positives and negatives of different approaches.

  1. Human resource management in patient-centered pharmaceutical care.

    Science.gov (United States)

    White, S J

    1994-04-01

    Under patient-centered care, pharmacists and technicians may report, either directly or in a matrix structure, to managers outside pharmacy administration. Pharmacy administrators will need to be both effective leaders and managers, exercising excellent human resource management skills. Significant creativity and innovation will be needed for the transition from department-based services to patient care team services. Changes in the traditional methods of recruiting, interviewing, hiring, training, developing, inspiring, evaluating, and disciplining are required in this new environment.

  2. Web-based tools from AHRQ's National Resource Center.

    Science.gov (United States)

    Cusack, Caitlin M; Shah, Sapna

    2008-11-06

    The Agency for Healthcare Research and Quality (AHRQ) has made an investment of over $216 million in research around health information technology (health IT). As part of their investment, AHRQ has developed the National Resource Center for Health IT (NRC) which includes a public domain Web site. New content for the web site, such as white papers, toolkits, lessons from the health IT portfolio and web-based tools, is developed as needs are identified. Among the tools developed by the NRC are the Compendium of Surveys and the Clinical Decision Support (CDS) Resources. The Compendium of Surveys is a searchable repository of health IT evaluation surveys made available for public use. The CDS Resources contains content which may be used to develop clinical decision support tools, such as rules, reminders and templates. This live demonstration will show the access, use, and content of both these freely available web-based tools.

  3. Biosecurity and Health Monitoring at the Zebrafish International Resource Center.

    Science.gov (United States)

    Murray, Katrina N; Varga, Zoltán M; Kent, Michael L

    2016-07-01

    The Zebrafish International Resource Center (ZIRC) is a repository and distribution center for mutant, transgenic, and wild-type zebrafish. In recent years annual imports of new zebrafish lines to ZIRC have increased tremendously. In addition, after 15 years of research, we have identified some of the most virulent pathogens affecting zebrafish that should be avoided in large production facilities, such as ZIRC. Therefore, while importing a high volume of new lines we prioritize safeguarding the health of our in-house fish colony. Here, we describe the biosecurity and health-monitoring program implemented at ZIRC. This strategy was designed to prevent introduction of new zebrafish pathogens, minimize pathogens already present in the facility, and ensure a healthy zebrafish colony for in-house uses and shipment to customers.

  4. AbMiner: A bioinformatic resource on available monoclonal antibodies and corresponding gene identifiers for genomic, proteomic, and immunologic studies

    Directory of Open Access Journals (Sweden)

    Shankavaram Uma

    2006-04-01

    Full Text Available Abstract Background Monoclonal antibodies are used extensively throughout the biomedical sciences for detection of antigens, either in vitro or in vivo. We, for example, have used them for quantitation of proteins on "reverse-phase" protein lysate arrays. For those studies, we quality-controlled > 600 available monoclonal antibodies and also needed to develop precise information on the genes that encode their antigens. Translation among the various protein and gene identifier types proved non-trivial because of one-to-many and many-to-one relationships. To organize the antibody, protein, and gene information, we initially developed a relational database in Filemaker for our own use. When it became apparent that the information would be useful to many other researchers faced with the need to choose or characterize antibodies, we developed it further as AbMiner, a fully relational web-based database under MySQL, programmed in Java. Description AbMiner is a user-friendly, web-based relational database of information on > 600 commercially available antibodies that we validated by Western blot for protein microarray studies. It includes many types of information on the antibody, the immunogen, the vendor, the antigen, and the antigen's gene. Multiple gene and protein identifier types provide links to corresponding entries in a variety of other public databases, including resources for phosphorylation-specific antibodies. AbMiner also includes our quality-control data against a pool of 60 diverse cancer cell types (the NCI-60), and also protein expression levels for the NCI-60 cells measured using our high-density "reverse-phase" protein lysate microarrays for a selection of the listed antibodies. Some other available database resources give information on antibody specificity for one or a couple of cell types. In contrast, the data in AbMiner indicate specificity with respect to the antigens in a pool of 60 diverse cell types from nine different tissues of origin.

  5. AbMiner: a bioinformatic resource on available monoclonal antibodies and corresponding gene identifiers for genomic, proteomic, and immunologic studies.

    Science.gov (United States)

    Major, Sylvia M; Nishizuka, Satoshi; Morita, Daisaku; Rowland, Rick; Sunshine, Margot; Shankavaram, Uma; Washburn, Frank; Asin, Daniel; Kouros-Mehr, Hosein; Kane, David; Weinstein, John N

    2006-04-06

    Monoclonal antibodies are used extensively throughout the biomedical sciences for detection of antigens, either in vitro or in vivo. We, for example, have used them for quantitation of proteins on "reverse-phase" protein lysate arrays. For those studies, we quality-controlled > 600 available monoclonal antibodies and also needed to develop precise information on the genes that encode their antigens. Translation among the various protein and gene identifier types proved non-trivial because of one-to-many and many-to-one relationships. To organize the antibody, protein, and gene information, we initially developed a relational database in Filemaker for our own use. When it became apparent that the information would be useful to many other researchers faced with the need to choose or characterize antibodies, we developed it further as AbMiner, a fully relational web-based database under MySQL, programmed in Java. AbMiner is a user-friendly, web-based relational database of information on > 600 commercially available antibodies that we validated by Western blot for protein microarray studies. It includes many types of information on the antibody, the immunogen, the vendor, the antigen, and the antigen's gene. Multiple gene and protein identifier types provide links to corresponding entries in a variety of other public databases, including resources for phosphorylation-specific antibodies. AbMiner also includes our quality-control data against a pool of 60 diverse cancer cell types (the NCI-60) and also protein expression levels for the NCI-60 cells measured using our high-density "reverse-phase" protein lysate microarrays for a selection of the listed antibodies. Some other available database resources give information on antibody specificity for one or a couple of cell types. In contrast, the data in AbMiner indicate specificity with respect to the antigens in a pool of 60 diverse cell types from nine different tissues of origin. 
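The AbMiner abstracts stress that translating among identifier types is non-trivial because one antibody can map to several genes and vice versa. The sketch below shows, under invented data, how a small relational mapping table resolves such one-to-many lookups; the table layout, column names, and antibody entries are hypothetical illustrations, not AbMiner's actual MySQL schema or content.

```python
import sqlite3

# Hypothetical miniature of an antibody-to-gene mapping table. The real
# AbMiner schema is richer; names and values here are placeholders.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE id_map (
    antibody    TEXT,
    uniprot     TEXT,
    entrez_gene TEXT)""")
con.executemany(
    "INSERT INTO id_map VALUES (?, ?, ?)",
    [
        ("anti-p53 (DO-1)", "P04637", "7157"),
        # one pan-specific antibody recognizing a shared epitope -> many genes
        ("anti-AKT (pan)", "P31749", "207"),
        ("anti-AKT (pan)", "P31751", "208"),
    ],
)

def genes_for_antibody(name):
    """Resolve an antibody to every Entrez Gene ID it maps to."""
    rows = con.execute(
        "SELECT entrez_gene FROM id_map WHERE antibody = ?", (name,))
    return sorted(r[0] for r in rows)

print(genes_for_antibody("anti-AKT (pan)"))  # one-to-many: ['207', '208']
```

Because the relation is a plain table rather than a one-column-per-identifier spreadsheet, many-to-one cases (several antibodies against one gene) fall out of the same query pattern with no schema change.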

  6. Education resources of the National Center for Biotechnology Information.

    Science.gov (United States)

    Cooper, Peter S; Lipshultz, Dawn; Matten, Wayne T; McGinnis, Scott D; Pechous, Steven; Romiti, Monica L; Tao, Tao; Valjavec-Gratian, Majda; Sayers, Eric W

    2010-11-01

    The National Center for Biotechnology Information (NCBI) hosts 39 literature and molecular biology databases containing almost half a billion records. As the complexity of these data and associated resources and tools continues to expand, so does the need for educational resources to help investigators, clinicians, information specialists and the general public make use of the wealth of public data available at the NCBI. This review describes the educational resources available at NCBI via the NCBI Education page (www.ncbi.nlm.nih.gov/Education/). These resources include materials designed for new users, such as About NCBI and the NCBI Guide, as well as documentation, Frequently Asked Questions (FAQs) and writings on the NCBI Bookshelf such as the NCBI Help Manual and the NCBI Handbook. NCBI also provides teaching materials such as tutorials, problem sets and educational tools such as the Amino Acid Explorer, PSSM Viewer and Ebot. NCBI also offers training programs including the Discovery Workshops, webinars and tutorials at conferences. To help users keep up-to-date, NCBI produces the online NCBI News and offers RSS feeds and mailing lists, along with a presence on Facebook, Twitter and YouTube.

  7. Applications and methods utilizing the Simple Semantic Web Architecture and Protocol (SSWAP for bioinformatics resource discovery and disparate data and service integration

    Directory of Open Access Journals (Sweden)

    Nelson Rex T

    2010-06-01

    Full Text Available Abstract Background Scientific data integration and computational service discovery are challenges for the bioinformatic community. This process is made more difficult by the separate and independent construction of biological databases, which makes the exchange of data between information resources difficult and labor intensive. A recently described semantic web protocol, the Simple Semantic Web Architecture and Protocol (SSWAP; pronounced "swap"), offers the ability to describe data and services in a semantically meaningful way. We report how three major information resources (Gramene, SoyBase and the Legume Information System [LIS]) used SSWAP to semantically describe selected data and web services. Methods We selected high-priority Quantitative Trait Locus (QTL), genomic mapping, trait, phenotypic, and sequence data and associated services such as BLAST for publication, data retrieval, and service invocation via semantic web services. Data and services were mapped to concepts and categories as implemented in legacy and de novo community ontologies. We used SSWAP to express these offerings in OWL Web Ontology Language (OWL), Resource Description Framework (RDF) and eXtensible Markup Language (XML) documents, which are appropriate for their semantic discovery and retrieval. We implemented SSWAP services to respond to web queries and return data. These services are registered with the SSWAP Discovery Server and are available for semantic discovery at http://sswap.info. Results A total of ten services delivering QTL information from Gramene were created. From SoyBase, we created six services delivering information about soybean QTLs, and seven services delivering genetic locus information. For LIS we constructed three services, two of which allow the retrieval of DNA and RNA FASTA sequences with the third service providing nucleic acid sequence comparison capability (BLAST). Conclusions The need for semantic integration technologies has preceded

  8. Lower Savannah aging, disability & transportation resource center : regional travel management and coordination center (TMCC) model and demonstration project.

    Science.gov (United States)

    2014-10-01

    This report details the deployed technology and implementation experiences of the Lower Savannah Aging, Disability & Transportation Resource Center in Aiken, South Carolina, which served as the regional Travel Management and Coordination Center (TMCC).

  9. Bioinformatics in translational drug discovery.

    Science.gov (United States)

    Wooller, Sarah K; Benstead-Hume, Graeme; Chen, Xiangrong; Ali, Yusuf; Pearl, Frances M G

    2017-08-31

    Bioinformatics approaches are becoming ever more essential in translational drug discovery both in academia and within the pharmaceutical industry. Computational exploitation of the increasing volumes of data generated during all phases of drug discovery is enabling key challenges of the process to be addressed. Here, we highlight some of the areas in which bioinformatics resources and methods are being developed to support the drug discovery pipeline. These include the creation of large data warehouses, bioinformatics algorithms to analyse 'big data' that identify novel drug targets and/or biomarkers, programs to assess the tractability of targets, and prediction of repositioning opportunities that use licensed drugs to treat additional indications. © 2017 The Author(s).

  10. Fluor Hanford ALARA Center is a D and D Resource

    International Nuclear Information System (INIS)

    Waggoner, L.O.

    2008-01-01

    The ALARA Center staff routinely researches and tests new technology, sponsors vendor demonstrations, and redistributes tools, equipment, and temporary shielding from facilities that no longer need them to those that do. The ALARA Center staff learns about new technology in several ways, including past radiological work experience, interaction with vendors, lessons learned, networking with other DOE sites, visits to the Hanford Technical Library, and attendance at off-site conferences and ALARA workshops. Personnel who contact the ALARA Center for assistance report positive results when they implement the tools, equipment, and work practices recommended by the ALARA Center staff. This has translated to reduced exposure for workers and a reduced risk of contamination spread. For example, using a hydraulic shear on one job saved 16 Rem of exposure that would have been received if workers had used reciprocating ("saws-all") tools to cut piping in twenty-nine locations. Currently, the ALARA Center staff is emphasizing D and D techniques for size-reducing materials, decontamination techniques, use of remote tools and video equipment, capture ventilation, fixatives, use of containments, and how to find lessons learned. The ALARA Center staff issues a weekly report that discusses their interaction with the workforce and any new work practices, tools, and equipment being used by the Hanford contractors. This report is distributed to about 130 personnel on site and 90 personnel off site, effectively spreading the word about ALARA throughout the DOE Complex. DOE EM-23, in conjunction with the D and D and Environmental Restoration work group of the Energy Facility Contractors Group (EFCOG), established the Hanford ALARA Center as the D and D hotline for companies who have questions about how D and D work is accomplished.
The ALARA Center has become a resource to the nuclear industry and routinely helps contractors at other DOE Sites, power reactors, DOD sites, and

  11. LegumeDB1 bioinformatics resource: comparative genomic analysis and novel cross-genera marker identification in lupin and pasture legume species.

    Science.gov (United States)

    Moolhuijzen, P; Cakir, M; Hunter, A; Schibeci, D; Macgregor, A; Smith, C; Francki, M; Jones, M G K; Appels, R; Bellgard, M

    2006-06-01

    The identification of markers in legume pasture crops, which can be associated with traits such as protein and lipid production, disease resistance, and reduced pod shattering, is generally accepted as an important strategy for improving the agronomic performance of these crops. It has been demonstrated that many quantitative trait loci (QTLs) identified in one species can be found in other plant species. Detailed legume comparative genomic analyses can characterize the genome organization between model legume species (e.g., Medicago truncatula, Lotus japonicus) and economically important crops such as soybean (Glycine max), pea (Pisum sativum), chickpea (Cicer arietinum), and lupin (Lupinus angustifolius), thereby identifying candidate gene markers that can be used to track QTLs in lupin and pasture legume breeding. LegumeDB is a Web-based bioinformatics resource for legume researchers. LegumeDB analysis of Medicago truncatula expressed sequence tags (ESTs) has identified novel simple sequence repeat (SSR) markers (16 tested), some of which have been putatively linked to symbiosome membrane proteins in root nodules and cell-wall proteins important in plant-pathogen defence mechanisms. Preliminary PCR assays have detected these novel markers in Medicago truncatula and in at least one other legume species: Lotus japonicus, Glycine max, Cicer arietinum, and (or) Lupinus angustifolius (15/16 tested). Ongoing research has validated some of these markers so they can be mapped in a range of legume species and then used to compile composite genetic and physical maps. In this paper, we outline the features and capabilities of LegumeDB as an interactive application that provides legume genetic and physical comparative maps, along with efficient feature identification and annotation of the vast tracts of model legume sequences for convenient data integration and visualization. LegumeDB has been used to identify potential novel cross-genera polymorphic legume

  12. Navigating the changing learning landscape: perspective from bioinformatics.ca.

    Science.gov (United States)

    Brazas, Michelle D; Ouellette, B F Francis

    2013-09-01

    With the advent of YouTube channels in bioinformatics, open platforms for problem solving in bioinformatics, active web forums in computing analyses and online resources for learning to code or use a bioinformatics tool, the more traditional continuing education bioinformatics training programs have had to adapt. Bioinformatics training programs that solely rely on traditional didactic methods are being superseded by these newer resources. Yet such face-to-face instruction is still invaluable in the learning continuum. Bioinformatics.ca, which hosts the Canadian Bioinformatics Workshops, has blended more traditional learning styles with current online and social learning styles. Here we share our growing experiences over the past 12 years and look toward what the future holds for bioinformatics training programs.

  13. Japan's silver human resource centers and participant well-being.

    Science.gov (United States)

    Weiss, Robert S; Bass, Scott A; Heimovitz, Harley K; Oka, Masato

    2005-03-01

    Japan's Silver Human Resource Center (SHRC) program provides part-time, paid employment to retirement-aged men and women. We studied 393 new program participants and examined whether part-time work influenced their well-being or "ikigai." The participants were divided into those who had worked in SHRC-provided jobs in the preceding year, and those who had not. Gender-stratified regression models were fitted to determine whether SHRC employment was associated with increased well-being. For men, actively working at a SHRC job was associated with greater well-being, compared to inactive members. And men with SHRC jobs and previous volunteering experience had the greatest increase in well-being. Women SHRC job holders did not experience increased well-being at the year's end. The study concludes that there is justification for exploring the usefulness of a similar program for American retirees who desire post-retirement part-time work.

  14. Argonne Laboratory Computing Resource Center - FY2004 Report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R.

    2005-04-14

    In the spring of 2002, Argonne National Laboratory founded the Laboratory Computing Resource Center, and in April 2003 LCRC began full operations with Argonne's first teraflops computing cluster. The LCRC's driving mission is to enable and promote computational science and engineering across the Laboratory, primarily by operating computing facilities and supporting application use and development. This report describes the scientific activities, computing facilities, and usage in the first eighteen months of LCRC operation. In this short time LCRC has had broad impact on programs across the Laboratory. The LCRC computing facility, Jazz, is available to the entire Laboratory community. In addition, the LCRC staff provides training in high-performance computing and guidance on application usage, code porting, and algorithm development. All Argonne personnel and collaborators are encouraged to take advantage of this computing resource and to provide input into the vision and plans for computing and computational analysis at Argonne. Steering for LCRC comes from the Computational Science Advisory Committee, composed of computing experts from many Laboratory divisions. The CSAC Allocations Committee makes decisions on individual project allocations for Jazz.

  15. Eukaryotic Pathogen Database Resources (EuPathDB)

    Data.gov (United States)

    U.S. Department of Health & Human Services — EuPathDB Bioinformatics Resource Center for Biodefense and Emerging/Re-emerging Infectious Diseases is a portal for accessing genomic-scale datasets associated with...

  16. Electricity production perspective regarding resource recovery center (RRC) in Malaysia

    International Nuclear Information System (INIS)

    Masoud Aghajani Mir; Noor Ezlin Ahmad Basri; Rawshan Ara Begum; Sanaz Saheri

    2010-01-01

    Waste disposal is a global problem that contributes to ongoing climate change through large emissions of greenhouse gases. By using waste material as a resource instead of landfilling it, greenhouse gas emissions from landfills are reduced. Waste material can also be used for incineration with energy recovery, decreasing greenhouse gas emissions from energy utilization by switching from fossil fuels to a partly renewable fuel. The production of Refuse Derived Fuel (RDF) involves the mechanical processing of household waste using screens, shredders and separators to recover recyclable materials and to produce a combustible product. This study concerns the Resource Recovery Center/Waste to Energy (RRC/WtE) facility in Malaysia, located in Semenyih. The system removes inert and compostable materials, followed by pulverization, to produce a feedstock that can be incinerated in power stations. The purpose of this study is to evaluate and forecast the number of such facilities that Kuala Lumpur will need, given its potential for Municipal Solid Waste (MSW) generation and the Refuse Derived Fuel that can be produced from it in the future. The plant can produce on average 7.5 MWh of electricity per day from 700 tons of MSW or 200 tons of RDF, of which approximately 1.8 MWh per day is used inside the plant, leaving around 5.7 MWh per day for sale. Kuala Lumpur will generate around 7713 tons of MSW per day, from which about 2466 tons of RDF per day can be produced. Given this potential MSW and RDF generation by 2020, Kuala Lumpur will need around 11 plants to treat its MSW, and that number of plants can produce around 62.8 MWh of electricity per day. (author)
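The per-plant and city-wide figures in the abstract are internally consistent, as a quick arithmetic check shows (all constants are taken from the abstract; variable names are ours):

```python
# Capacity check for RRC/WtE plants in Kuala Lumpur, using the abstract's figures.
MSW_PER_PLANT = 700.0   # tons of MSW processed per plant per day
GROSS_MWH = 7.5         # gross electricity per plant per day (MWh)
INTERNAL_MWH = 1.8      # electricity consumed inside the plant (MWh/day)
KL_MSW = 7713.0         # projected Kuala Lumpur MSW by 2020 (tons/day)

saleable = GROSS_MWH - INTERNAL_MWH       # MWh/day each plant can sell
plants_needed = KL_MSW / MSW_PER_PLANT    # plants required to treat all MSW
total = plants_needed * saleable          # city-wide saleable output (MWh/day)

print(round(saleable, 1), round(plants_needed, 1), round(total, 1))
# → 5.7 11.0 62.8
```

This reproduces the abstract's 5.7 MWh/day saleable output, roughly 11 plants, and about 62.8 MWh/day of total electricity.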

  17. Bioinformatics resource manager v2.3: an integrated software environment for systems biology with microRNA and cross-species analysis tools

    Directory of Open Access Journals (Sweden)

    Tilton Susan C

    2012-11-01

    Full Text Available Abstract Background MicroRNAs (miRNAs) are noncoding RNAs that direct post-transcriptional regulation of protein coding genes. Recent studies have shown miRNAs are important for controlling many biological processes, including nervous system development, and are highly conserved across species. Given their importance, computational tools are necessary for analysis, interpretation and integration of high-throughput (HTP) miRNA data in an increasing number of model species. The Bioinformatics Resource Manager (BRM) v2.3 is a software environment for data management, mining, integration and functional annotation of HTP biological data. In this study, we report recent updates to BRM for miRNA data analysis and cross-species comparisons across datasets. Results BRM v2.3 has the capability to query predicted miRNA targets from multiple databases, retrieve potential regulatory miRNAs for known genes, integrate experimentally derived miRNA and mRNA datasets, perform ortholog mapping across species, and retrieve annotation and cross-reference identifiers for an expanded number of species. Here we use BRM to show that developmental exposure of zebrafish to 30 uM nicotine from 6–48 hours post fertilization (hpf) results in behavioral hyperactivity in larval zebrafish and alteration of putative miRNA gene targets in whole embryos at developmental stages that encompass early neurogenesis. We show typical workflows for using BRM to integrate experimental zebrafish miRNA and mRNA microarray datasets with example retrievals for zebrafish, including pathway annotation and mapping to human ortholog. Functional analysis of differentially regulated (p<0.05) gene targets in BRM indicates that nicotine exposure disrupts genes involved in neurogenesis, possibly through misregulation of nicotine-sensitive miRNAs. Conclusions BRM provides the ability to mine complex data for identification of candidate miRNAs or pathways that drive phenotypic outcome and, therefore, is a useful hypothesis generation tool for systems biology. The miRNA workflow in BRM allows for efficient processing of multiple miRNA and mRNA datasets in a single

  18. Bioinformatics resource manager v2.3: an integrated software environment for systems biology with microRNA and cross-species analysis tools

    Science.gov (United States)

    2012-01-01

    Background MicroRNAs (miRNAs) are noncoding RNAs that direct post-transcriptional regulation of protein coding genes. Recent studies have shown miRNAs are important for controlling many biological processes, including nervous system development, and are highly conserved across species. Given their importance, computational tools are necessary for analysis, interpretation and integration of high-throughput (HTP) miRNA data in an increasing number of model species. The Bioinformatics Resource Manager (BRM) v2.3 is a software environment for data management, mining, integration and functional annotation of HTP biological data. In this study, we report recent updates to BRM for miRNA data analysis and cross-species comparisons across datasets. Results BRM v2.3 has the capability to query predicted miRNA targets from multiple databases, retrieve potential regulatory miRNAs for known genes, integrate experimentally derived miRNA and mRNA datasets, perform ortholog mapping across species, and retrieve annotation and cross-reference identifiers for an expanded number of species. Here we use BRM to show that developmental exposure of zebrafish to 30 uM nicotine from 6–48 hours post fertilization (hpf) results in behavioral hyperactivity in larval zebrafish and alteration of putative miRNA gene targets in whole embryos at developmental stages that encompass early neurogenesis. We show typical workflows for using BRM to integrate experimental zebrafish miRNA and mRNA microarray datasets with example retrievals for zebrafish, including pathway annotation and mapping to human ortholog. Functional analysis of differentially regulated (p<0.05) gene targets in BRM indicates that nicotine exposure disrupts genes involved in neurogenesis, possibly through misregulation of nicotine-sensitive miRNAs. Conclusions BRM provides the ability to mine complex data for identification of candidate miRNAs or pathways that drive phenotypic outcome and, therefore, is a useful hypothesis generation tool for systems biology.

  19. Silicon Photonics towards Disaggregation of Resources in Data Centers

    Directory of Open Access Journals (Sweden)

    Miltiadis Moralis-Pegios

    2018-01-01

    Full Text Available In this paper, we demonstrate two subsystems based on Silicon Photonics that address the network requirements imposed by the disaggregation of resources in Data Centers. The first utilizes a 4 × 4 Silicon photonics switching matrix, employing Mach-Zehnder Interferometers (MZIs) with electro-optical phase shifters, directly controlled by a high-speed Field Programmable Gate Array (FPGA) board, for the implementation of a Bloom-Filter (BF)-label forwarding scheme. The FPGA extracts the BF label from the incoming optical packets, carries out the BF-based forwarding function, determines the appropriate switching state, and generates the corresponding control signals to convey incoming packets to the desired output port of the matrix. The BF-label-based packet forwarding scheme allows rapid reconfiguration of the optical switch, while at the same time reducing the memory requirements of the node's lookup table. Successful operation for 10 Gb/s data packets is reported for a 1 × 4 routing layout. The second subsystem utilizes three integrated spiral waveguides, with a record-high 2.6 ns/mm2 delay-versus-footprint efficiency, along with two Semiconductor Optical Amplifier Mach-Zehnder Interferometer (SOA-MZI) wavelength converters, to construct a variable optical buffer and a Time Slot Interchange module. Error-free on-chip variable-delay buffering from 6.5 ns up to 17.2 ns and successful timeslot interchanging for 10 Gb/s optical packets are presented.
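The appeal of Bloom-filter labels in the abstract above is that a forwarding decision becomes a handful of bit tests per port rather than a lookup in a large table. The sketch below illustrates the general idea in software, with each output port holding a filter over the destinations reachable through it; the filter size, hash construction, and port/label names are our own assumptions, not the paper's FPGA implementation.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter over a small bit field; illustrative only."""
    def __init__(self, size_bits=64, num_hashes=3):
        self.size = size_bits
        self.k = num_hashes
        self.bits = 0  # the whole filter fits in one integer

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def might_contain(self, item):
        # No false negatives; false positives are possible.
        return all(self.bits >> p & 1 for p in self._positions(item))

# Each output port of a 1x4 switch advertises a filter over the
# destinations reachable through it (hypothetical topology).
ports = {p: BloomFilter() for p in range(4)}
ports[2].add("node-17")

def forward(dest_label):
    """Return every port whose filter matches the packet's label."""
    return [p for p, bf in ports.items() if bf.might_contain(dest_label)]

print(forward("node-17"))  # only port 2's filter is populated → [2]
```

Because membership is tested against a fixed-width bit field, the per-node state is constant regardless of how many destinations a port serves, which is the memory saving the abstract refers to; the price is occasional false-positive matches that forward a packet to an extra port.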

  20. FY 1998 achievement report on the R and D for accelerating the basement arrangement of the biological resource information. Bioinformatics; 1998 nendo seibutsu shigen joho kiban seibi kasokuka kenkyu kaihatsu seika hokokusho. Bioinformatics

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-04-01

    The paper describes the FY 1998 results of the development of bioinformatics. All proteins of known structure or sequence from all public databases were hierarchically and systematically classified according to similarities among sequences. Individual proteins in each classified group were annotated with explanatory notes on structure and physiological function. Mutually related information on metabolic and signal-transduction systems was also included. The state of each residue position was classified into three states of solvent accessibility and three states of secondary structure. Parameter sets of occurrence frequencies for each state were compiled for the 20 kinds of amino acid. With these, it can be evaluated how well an arbitrary amino acid sequence fits each structure in the database. Data on the structures of low-molecular-weight compounds were also incorporated so that searches for related biomolecular system information become possible. The metabolic/signal-transduction system information was compiled into a database, links were formed between each protein and each low-molecular-weight compound, and the information on biomolecular networks was made searchable. A system to predict and support protein structure and function was developed. (NEDO)
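The scoring scheme described above, with occurrence frequencies of the 20 amino acids for each combination of three solvent-accessibility states and three secondary-structure states, can be sketched as a log-odds profile score. The frequency values below are invented placeholders (buried states simply favour hydrophobic residues), not the report's actual parameter sets, and the state names are our own labels:

```python
import math

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
HYDROPHOBIC = set("AVLIMFWC")

# Occurrence-frequency tables P(residue | state) for 3 accessibility
# states x 3 secondary-structure states. Placeholder numbers: buried
# states favour hydrophobic residues; all other states stay uniform.
freq = {}
for acc in ("buried", "intermediate", "exposed"):
    for ss in ("helix", "strand", "coil"):
        if acc == "buried":
            freq[(acc, ss)] = {aa: 0.08 if aa in HYDROPHOBIC else 0.03
                               for aa in AMINO_ACIDS}
        else:
            freq[(acc, ss)] = {aa: 1 / 20 for aa in AMINO_ACIDS}

def profile_score(sequence, states, background=1 / 20):
    """Log-odds compatibility of a sequence with a structural state string."""
    return sum(math.log(freq[state][aa] / background)
               for aa, state in zip(sequence, states))

buried_helix = [("buried", "helix")] * 3
print(profile_score("VLI", buried_helix) > 0)   # hydrophobic core fits
print(profile_score("DEK", buried_helix) > 0)   # charged residues buried
```

A sequence scores above zero against a candidate structure when its residues occur in the assigned states more often than expected by chance, which is the sense in which the report's parameter sets let an arbitrary sequence be evaluated against every structure in the database.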

  1. Bioinformatics tools and database resources for systems genetics analysis in mice-a short review and an evaluation of future needs

    NARCIS (Netherlands)

    Durrant, Caroline; Swertz, Morris A.; Alberts, Rudi; Arends, Danny; Moeller, Steffen; Mott, Richard; Prins, Pjotr; van der Velde, K. Joeri; Jansen, Ritsert C.; Schughart, Klaus

    During a meeting of the SYSGENET working group 'Bioinformatics', currently available software tools and databases for systems genetics in mice were reviewed and the needs for future developments were discussed. The group evaluated interoperability and performed initial feasibility studies. To aid future...

  2. L-025: EPR-First Responders: Resource Coordinator and National Center for Emergency Operations

    International Nuclear Information System (INIS)

    2011-01-01

    This lecture covers the importance of the resource coordinator and the National Center for Emergency Operations, which provides a stable installation environment and valuable aid in a radiological emergency situation. The resource coordinator maintains the registers of resources and their general locations, while the National Center for Emergency Operations is the ideal site for the public information center. Both roles support and encourage the response efforts of the Incident Command.

  3. National Center for Mathematics and Science - teacher resources

    Science.gov (United States)

    The National Center for Improving Student Learning and Achievement in Mathematics and Science (NCISLA) offers research and professional-development resources that support and improve student understanding of mathematics and science. Among the instructional resources listed is Powerful Practices in Mathematics and Science, a multimedia product (CD) for educators and professional development.

  4. Amarillo National Resource Center for Plutonium. Quarterly technical progress report, May 1, 1997--July 31, 1997

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-09-01

    Progress summaries are provided from the Amarillo National Resource Center for Plutonium. Programs include the plutonium information resource center; environment, public health, and safety; education and training; and nuclear and other material studies.

  5. Bioinformatics Training: A Review of Challenges, Actions and Support Requirements

    DEFF Research Database (Denmark)

    Schneider, M.V.; Watson, J.; Attwood, T.

    2010-01-01

    As bioinformatics becomes increasingly central to research in the molecular life sciences, the need to train non-bioinformaticians to make the most of bioinformatics resources is growing. Here, we review the key challenges and pitfalls to providing effective training for users of bioinformatics...... services, and discuss successful training strategies shared by a diverse set of bioinformatics trainers. We also identify steps that trainers in bioinformatics could take together to advance the state of the art in current training practices. The ideas presented in this article derive from the first...

  6. Biggest challenges in bioinformatics.

    Science.gov (United States)

    Fuller, Jonathan C; Khoueiry, Pierre; Dinkel, Holger; Forslund, Kristoffer; Stamatakis, Alexandros; Barry, Joseph; Budd, Aidan; Soldatos, Theodoros G; Linssen, Katja; Rajput, Abdul Mateen

    2013-04-01

    The third Heidelberg Unseminars in Bioinformatics (HUB) was held on 18th October 2012, at Heidelberg University, Germany. HUB brought together around 40 bioinformaticians from academia and industry to discuss the 'Biggest Challenges in Bioinformatics' in a 'World Café' style event.

  7. Biggest challenges in bioinformatics

    OpenAIRE

    Fuller, Jonathan C; Khoueiry, Pierre; Dinkel, Holger; Forslund, Kristoffer; Stamatakis, Alexandros; Barry, Joseph; Budd, Aidan; Soldatos, Theodoros G; Linssen, Katja; Rajput, Abdul Mateen

    2013-01-01

    The third Heidelberg Unseminars in Bioinformatics (HUB) was held in October at Heidelberg University in Germany. HUB brought together around 40 bioinformaticians from academia and industry to discuss the ‘Biggest Challenges in Bioinformatics' in a ‘World Café' style event.

  8. A bioinformatics potpourri.

    Science.gov (United States)

    Schönbach, Christian; Li, Jinyan; Ma, Lan; Horton, Paul; Sjaugi, Muhammad Farhan; Ranganathan, Shoba

    2018-01-19

    The 16th International Conference on Bioinformatics (InCoB) was held at Tsinghua University, Shenzhen from September 20 to 22, 2017. The annual conference of the Asia-Pacific Bioinformatics Network featured six keynotes, two invited talks, a panel discussion on big data driven bioinformatics and precision medicine, and 66 oral presentations of accepted research articles or posters. Fifty-seven articles comprising a topic assortment of algorithms, biomolecular networks, cancer and disease informatics, drug-target interactions and drug efficacy, gene regulation and expression, imaging, immunoinformatics, metagenomics, next generation sequencing for genomics and transcriptomics, ontologies, post-translational modification, and structural bioinformatics are the subject of this editorial for the InCoB2017 supplement issues in BMC Genomics, BMC Bioinformatics, BMC Systems Biology and BMC Medical Genomics. New Delhi will be the location of InCoB2018, scheduled for September 26-28, 2018.

  9. 34 CFR 669.1 - What is the Language Resource Centers Program?

    Science.gov (United States)

    2010-07-01

    34 Education 3, 2010-07-01. POSTSECONDARY EDUCATION, DEPARTMENT OF EDUCATION, LANGUAGE RESOURCE CENTERS PROGRAM, General, § 669.1 What is the Language Resource Centers Program? ... improving the nation's capacity for teaching and learning foreign languages effectively. (Authority: 20 U.S...)

  10. National Resource Center for Health and Safety in Child Care and Early Education

    Science.gov (United States)

    The National Resource Center for Health and Safety in Child Care and Early Education (NRC) at the University of Colorado College of ... Email: info@NRCKids.org

  11. National Maternal and Child Oral Health Resource Center

    Science.gov (United States)

    The Center for Oral Health Systems Integration and Improvement (COHSII): COHSII is a ... needs of the MCH population. Brush Up on Oral Health is a monthly newsletter that provides Head Start staff with ...

  12. 77 FR 72868 - The Centers for Disease Control (CDC)/Health Resources and Services Administration (HRSA...

    Science.gov (United States)

    2012-12-06

    DEPARTMENT OF HEALTH AND HUMAN SERVICES, Centers for Disease Control and Prevention. The Centers for Disease Control (CDC)/Health Resources and Services Administration (HRSA) Advisory Committee on HIV, Viral ... announcements of meetings and other committee management activities, for both the Centers for Disease Control ...

  13. Bioinformatics and Cancer

    Science.gov (United States)

    Researchers take on challenges and opportunities to mine "Big Data" for answers to complex biological questions. Learn how bioinformatics uses advanced computing, mathematics, and technological platforms to store, manage, analyze, and understand data.

  14. Deep learning in bioinformatics.

    Science.gov (United States)

    Min, Seonwoo; Lee, Byunghan; Yoon, Sungroh

    2017-09-01

    In the era of big data, transformation of biomedical big data into valuable knowledge has been one of the most important challenges in bioinformatics. Deep learning has advanced rapidly since the early 2000s and now demonstrates state-of-the-art performance in various fields. Accordingly, application of deep learning in bioinformatics to gain insight from data has been emphasized in both academia and industry. Here, we review deep learning in bioinformatics, presenting examples of current research. To provide a useful and comprehensive perspective, we categorize research both by the bioinformatics domain (i.e. omics, biomedical imaging, biomedical signal processing) and deep learning architecture (i.e. deep neural networks, convolutional neural networks, recurrent neural networks, emergent architectures) and present brief descriptions of each study. Additionally, we discuss theoretical and practical issues of deep learning in bioinformatics and suggest future research directions. We believe that this review will provide valuable insights and serve as a starting point for researchers to apply deep learning approaches in their bioinformatics studies. © The Author 2016. Published by Oxford University Press. All rights reserved.

  15. Alternative Fuels Data Center: Codes and Standards Resources

    Science.gov (United States)

    The resources linked below help project developers and code officials prepare and review code-compliant projects for alternative fuel vehicles, storage, and infrastructure. The following charts show the standards development organizations (SDOs) responsible for these alternative fuel codes and standards: the Biodiesel Vehicle and Infrastructure Codes and Standards Chart, the Electric Vehicle and ...

  16. Plant Resources Center and the Vietnamese genebank system

    Science.gov (United States)

    The highly diverse floristic composition of Vietnam has been recognized as a center of angiosperm expansion and crop biodiversity. The broad range of climatic environments include habitats from tropical and subtropical, to temperate and alpine flora. The human component of the country includes 54 et...

  17. Moving from "optimal resources" to "optimal care" at trauma centers.

    Science.gov (United States)

    Shafi, Shahid; Rayan, Nadine; Barnes, Sunni; Fleming, Neil; Gentilello, Larry M; Ballard, David

    2012-04-01

    The Trauma Quality Improvement Program has shown that risk-adjusted mortality rates at some centers are nearly 50% higher than at others. This "quality gap" may be due to different clinical practices or processes of care. We have previously shown that adoption of the processes called core measures by the Joint Commission and Centers for Medicare and Medicaid Services does not improve outcomes of trauma patients. We hypothesized that improved compliance with trauma-specific clinical processes of care (POC) is associated with reduced in-hospital mortality. Records of a random sample of 1,000 patients admitted to a Level I trauma center who met Trauma Quality Improvement Program criteria (age ≥ 16 years and Abbreviated Injury Scale score 3) were retrospectively reviewed for compliance with 25 trauma-specific POC (T-POC) that were evidence-based or expert consensus panel recommendations. Multivariate regression was used to determine the relationship between T-POC compliance and in-hospital mortality, adjusted for age, gender, injury type, and severity. Median age was 41 years, 65% were men, 88% sustained a blunt injury, and mortality was 12%. Of these, 77% were eligible for at least one T-POC and 58% were eligible for two or more. There was wide variation in T-POC compliance. Every 10% increase in compliance was associated with a 14% reduction in risk-adjusted in-hospital mortality. Unlike adoption of core measures, compliance with T-POC is associated with reduced mortality in trauma patients. Trauma centers with excess in-hospital mortality may improve patient outcomes by consistently applying T-POC. These processes should be explored for potential use as Core Trauma Center Performance Measures.
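The reported effect size (a 14% reduction in risk-adjusted mortality per 10% gain in T-POC compliance) can be turned into a back-of-the-envelope projection. Assuming the effect compounds multiplicatively per 10-point step (an interpretation, not something the abstract states), a sketch looks like this; the baseline rate is the sample's 12% mortality, and the function name is hypothetical.

```python
# Hedged illustration only: projects mortality under the assumption that each
# 10-point compliance gain multiplies risk-adjusted mortality by (1 - 0.14).
def projected_mortality(baseline_rate, compliance_gain_pct, reduction_per_10=0.14):
    steps = compliance_gain_pct / 10
    return baseline_rate * (1 - reduction_per_10) ** steps

base = 0.12  # 12% in-hospital mortality, as reported for the sample
for gain in (10, 20, 30):
    print(f"+{gain}% compliance -> {projected_mortality(base, gain):.3%} mortality")
```

A linear rather than multiplicative reading of the regression coefficient would give slightly different numbers at large gains; either way the direction of the association is the same.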

  18. Fiscal Year 1988 program report: Rhode Island Water Resources Center

    International Nuclear Information System (INIS)

    Poon, C.P.C.

    1989-07-01

    The State of Rhode Island is active in water resources planning, development, and management activities, which include legislation, upgrading of wastewater treatment facilities, upgrading and implementing pretreatment programs, and protecting watersheds and aquifers throughout the state. Current and anticipated state water problems are contamination and cleanup of aquifers to protect the valuable groundwater resources; protection of watersheds by controlling non-point source pollution; development of pretreatment technologies; and deteriorating groundwater quality from landfill leachate or drainage from septic tank leaching fields. Seven projects were included, covering the following subjects: (1) Radon and its parent nuclides in bedrock; (2) Model for natural flushing of aquifers; (3) Microbial treatment of heavy metals; (4) Vegetative uptake of nitrate; (5) Microbial processes in vegetative buffer strips; (6) Leachate characterization in landfills; and (7) Electrochemical treatment of heavy metals and cyanide.

  19. Criteria and foundations for the implementation of the Learning Resource Centers

    OpenAIRE

    Raquel Zamora Fonseca

    2013-01-01

    Reviews the criteria and rationale for the implementation of research-library and learning resource centers. The analysis focuses on the implementation of CRAIs in university libraries and the organizational models they can adopt.

  20. Criteria and foundations for the implementation of the Learning Resource Centers

    Directory of Open Access Journals (Sweden)

    Raquel Zamora Fonseca

    2013-03-01

    Reviews the criteria and rationale for the implementation of research-library and learning resource centers. The analysis focuses on the implementation of CRAIs in university libraries and the organizational models they can adopt.

  1. Amarillo National Resource Center for Plutonium quarterly technical progress report, August 1--October 31, 1998

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-11-01

    This paper describes activities of the Center under the following topical sections: Electronic resource library; Environmental restoration and protection; Health and safety; Waste management; Communication program; Education program; Training; Analytical development; Materials science; Plutonium processing and handling; and Storage.

  2. Amarillo National Resource Center for Plutonium. Quarterly technical progress report, February 1, 1998--April 30, 1998

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-06-01

    Activities from the Amarillo National Resource Center for Plutonium are described. Areas of work include materials science of nuclear and explosive materials, plutonium processing and handling, robotics, and storage.

  3. Use of IKONOS Data for Mapping Cultural Resources of Stennis Space Center, Mississippi

    Science.gov (United States)

    Spruce, Joseph P.; Giardino, Marco

    2002-01-01

    Cultural resource surveys are important for compliance with Federal and State law. Stennis Space Center (SSC) in Mississippi is researching, developing, and validating remote sensing and Geographic Information System (GIS) methods for aiding cultural resource assessments on the center's own land. The suitability of IKONOS satellite imagery for georeferencing scanned historic maps is examined in this viewgraph presentation. IKONOS data can be used to map historic buildings and farmland in Gainesville, MS, and to plan archaeological surveys.

  4. OpenHelix: bioinformatics education outside of a different box.

    Science.gov (United States)

    Williams, Jennifer M; Mangan, Mary E; Perreault-Micale, Cynthia; Lathe, Scott; Sirohi, Neeraj; Lathe, Warren C

    2010-11-01

    The amount of biological data is increasing rapidly, and will continue to increase as new rapid technologies are developed. Professionals in every area of bioscience will have data management needs that require publicly available bioinformatics resources. Not all scientists desire a formal bioinformatics education but would benefit from more informal educational sources of learning. Effective bioinformatics education formats will address a broad range of scientific needs, will be aimed at a variety of user skill levels, and will be delivered in a number of different formats to address different learning styles. Informal sources of bioinformatics education that are effective are available, and will be explored in this review.

  5. New research resources at the Bloomington Drosophila Stock Center.

    Science.gov (United States)

    Cook, Kevin R; Parks, Annette L; Jacobus, Luke M; Kaufman, Thomas C; Matthews, Kathleen A

    2010-01-01

    The Bloomington Drosophila Stock Center (BDSC) is a primary source of Drosophila stocks for researchers all over the world. It houses over 27,000 unique fly lines and distributed over 160,000 samples of these stocks this past year. This report provides a brief overview of significant recent events at the BDSC with a focus on new stock sets acquired in the past year, including stocks for phiC31 transformation, RNAi knockdown of gene expression, and SNP and quantitative trait loci discovery. We also describe additions to sets of insertions and molecularly defined chromosomal deficiencies, the creation of a new Deficiency Kit, and planned additions of X chromosome duplication sets.

  6. Assessment of water resources for nuclear energy centers

    Energy Technology Data Exchange (ETDEWEB)

    Samuels, G.

    1976-09-01

    Maps of the conterminous United States showing the rivers with sufficient flow to be of interest as potential sites for nuclear energy centers are presented. These maps show the rivers with (1) mean annual flows greater than 3000 cfs, with the flow rates identified for ranges of 3000 to 6000, 6000 to 12,000, 12,000 to 24,000, and greater than 24,000 cfs; (2) monthly, 20-year low flows greater than 1500 cfs, with the flow rates identified for ranges of 1500 to 3000, 3000 to 6000, 6000 to 12,000, and greater than 12,000 cfs; and (3) annual, 20-year low flows greater than 1500 cfs, with the flow rates identified for ranges of 1500 to 3000, 3000 to 6000, 6000 to 12,000, and greater than 12,000 cfs. Criteria relating river flow rates required for various size generating stations both for sites located on reservoirs and for sites without local storage of cooling water are discussed. These criteria are used in conjunction with plant water consumption rates (based on both instantaneous peak and annual average usage rates) to estimate the installed generating capacity that may be located at one site or within a river basin. Projections of future power capacity requirements, future demand for water (both withdrawals and consumption), and regions of expected water shortages are also presented. Regional maps of water availability, based on annual, 20-year low flows, are also shown. The feasibility of locating large energy centers in these regions is discussed.

  7. Assessment of water resources for nuclear energy centers

    International Nuclear Information System (INIS)

    Samuels, G.

    1976-09-01

    Maps of the conterminous United States showing the rivers with sufficient flow to be of interest as potential sites for nuclear energy centers are presented. These maps show the rivers with (1) mean annual flows greater than 3000 cfs, with the flow rates identified for ranges of 3000 to 6000, 6000 to 12,000, 12,000 to 24,000, and greater than 24,000 cfs; (2) monthly, 20-year low flows greater than 1500 cfs, with the flow rates identified for ranges of 1500 to 3000, 3000 to 6000, 6000 to 12,000, and greater than 12,000 cfs; and (3) annual, 20-year low flows greater than 1500 cfs, with the flow rates identified for ranges of 1500 to 3000, 3000 to 6000, 6000 to 12,000, and greater than 12,000 cfs. Criteria relating river flow rates required for various size generating stations both for sites located on reservoirs and for sites without local storage of cooling water are discussed. These criteria are used in conjunction with plant water consumption rates (based on both instantaneous peak and annual average usage rates) to estimate the installed generating capacity that may be located at one site or within a river basin. Projections of future power capacity requirements, future demand for water (both withdrawals and consumption), and regions of expected water shortages are also presented. Regional maps of water availability, based on annual, 20-year low flows, are also shown. The feasibility of locating large energy centers in these regions is discussed
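The flow-rate screening described in these two records amounts to binning rivers by mean annual flow and by 20-year low flow (both in cfs). The thresholds below are taken from the ranges listed in the abstract; the function itself is an illustrative sketch, not part of the original survey.

```python
# Flow-rate ranges (cfs) from the abstract: mean annual flow and
# 20-year low flow categories used to screen candidate NEC rivers.
MEAN_FLOW_BINS = [(3000, 6000), (6000, 12000), (12000, 24000), (24000, float("inf"))]
LOW_FLOW_BINS = [(1500, 3000), (3000, 6000), (6000, 12000), (12000, float("inf"))]

def flow_class(cfs, bins):
    """Return the index of the bin containing cfs, or None if below range."""
    for i, (lo, hi) in enumerate(bins):
        if lo <= cfs < hi:
            return i
    return None

print(flow_class(8000, MEAN_FLOW_BINS))  # 1: the 6000-12,000 cfs range
print(flow_class(2000, LOW_FLOW_BINS))   # 0: the 1500-3000 cfs range
```

Rivers falling below the lowest bin in either measure would be screened out as candidate sites; a real screening would further combine these bins with the plant water-consumption criteria the report discusses.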

  8. Amarillo National Resource Center for Plutonium quarterly technical progress report, August 1, 1997--October 31, 1997

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-12-31

    This report summarizes activities of the Amarillo National Resource Center for Plutonium during the quarter. The report describes the Electronic Resource Library; DOE support activities; current and future environmental health and safety programs; pollution prevention and pollution avoidance; communication, education, training, and community involvement programs; and nuclear and other material studies, including plutonium storage and disposition studies.

  9. 34 CFR 656.1 - What is the National Resource Centers Program?

    Science.gov (United States)

    2010-07-01

    ... STUDIES OR FOREIGN LANGUAGE AND INTERNATIONAL STUDIES, General, § 656.1 What is the National Resource ... Foreign Language and International Studies (National Resource Centers Program), the Secretary awards ... international studies and the international and foreign language aspects of professional and other fields of ...

  10. Strategizing for the Future: Evolving Cultural Resource Centers in Higher Education

    Science.gov (United States)

    Shek, Yen Ling

    2013-01-01

    Cultural resource centers have been an ongoing and integral component of creating a more welcoming campus climate for Students of Color since their establishment in the 1960s. While the racial dynamics may have changed, many of the challenges Students of Color face on predominantly White campuses have not. Interestingly, cultural resource centers…

  11. 78 FR 14303 - Statement of Delegation of Authority; Health Resources and Services Administration and Centers...

    Science.gov (United States)

    2013-03-05

    ... Services Administration and Centers for Disease Control and Prevention. I hereby delegate to the Administrator, Health Resources and Services Administration (HRSA), and the Director, Centers for Disease Control and Prevention (CDC), with authority to redelegate, the authority vested in the Secretary of the ...

  12. Nuclear Energy Center Site Survey, 1975. Part V. Resource availability and site screening

    International Nuclear Information System (INIS)

    1976-01-01

    Resource requirements for nuclear energy centers are discussed and the large land areas which meet these requirements and may contain potential sites for a nuclear energy center (NEC) are identified. Maps of the areas are included that identify seismic zones, river flow rates, and population density

  13. Library/Media Centers in U.S. Public Schools: Growth, Staffing, and Resources. Full Report

    Science.gov (United States)

    Tuck, Kathy D.; Holmes, Dwight R.

    2016-01-01

    At the request of New Business Item: 89 (NBI: 89) adopted at the 2015 NEA Representative Assembly, this study examines the extent to which students have access to public school library/media centers with qualified staff and up-to-date resources. The study explores trends in library/media center openings and closings as well as staffing patterns…

  14. Using Language Corpora to Develop a Virtual Resource Center for Business English

    Science.gov (United States)

    Ngo, Thi Phuong Le

    2015-01-01

    A Virtual Resource Center (VRC) has been brought into use since 2008 as an integral part of a task-based language teaching and learning program for Business English courses at Nantes University, France. The objective of the center is to enable students to work autonomously and individually on their language problems so as to improve their language…

  15. National Training Center Fort Irwin expansion area aquatic resources survey

    Energy Technology Data Exchange (ETDEWEB)

    Cushing, C.E.; Mueller, R.P.

    1996-02-01

    Biologists from Pacific Northwest National Laboratory (PNNL) were requested by personnel from Fort Irwin to conduct a biological reconnaissance of the Avawatz Mountains northeast of Fort Irwin, an area proposed for expansion of the Fort. Surveys of vegetation, small mammals, birds, reptiles, amphibians, and aquatic resources were conducted during 1995 to characterize the populations and habitats present, with emphasis on determining the presence of any species of special concern. This report presents a description of the sites sampled, a list of the organisms found and identified, and a discussion of relative abundance. Taxonomic identifications were done to the lowest level possible commensurate with determining the status of the taxa relative to their possible listing as threatened, endangered, or candidate species. Consultation with taxonomic experts was undertaken for the Coleoptera and Hemiptera. In addition to listing the macroinvertebrates found, the authors also present a discussion of the possible presence of any threatened or endangered species or species of concern in Sheep Creek Springs, Tin Cabin Springs, and the Amargosa River.

  16. The MMS Science Data Center: Operations, Capabilities, and Resources.

    Science.gov (United States)

    Larsen, K. W.; Pankratz, C. K.; Giles, B. L.; Kokkonen, K.; Putnam, B.; Schafer, C.; Baker, D. N.

    2015-12-01

    The Magnetospheric MultiScale (MMS) constellation of satellites completed its six-month commissioning period in August 2015 and began science operations. Science operations for the Solving Magnetospheric Acceleration, Reconnection, and Turbulence (SMART) instrument package occur at the Laboratory for Atmospheric and Space Physics (LASP). The Science Data Center (SDC) at LASP is responsible for the production, management, distribution, and archiving of the data received. The mission will collect several gigabytes per day of particle and field data. Management of these data requires effective selection, transmission, analysis, and storage of data in the ground segment of the mission, including efficient distribution paths that enable the science community to answer the key questions regarding magnetic reconnection. Due to the constraints on download volume, this includes the Scientist-in-the-Loop program, which identifies high-value science data needed to answer the outstanding questions of magnetic reconnection. Of particular interest to the community are the tools and associated website we have developed to provide convenient access to the data, first by the mission science team and, beginning March 1, 2016, by the entire community. This presentation will demonstrate the data and tools available to the community via the SDC and discuss the technologies we chose and lessons learned.

  17. Bioinformatics education dissemination with an evolutionary problem solving perspective.

    Science.gov (United States)

    Jungck, John R; Donovan, Samuel S; Weisstein, Anton E; Khiripet, Noppadon; Everse, Stephen J

    2010-11-01

    Bioinformatics is central to biology education in the 21st century. With the generation of terabytes of data per day, the application of computer-based tools to stored and distributed data is fundamentally changing research and its application to problems in medicine, agriculture, conservation and forensics. In light of this 'information revolution,' undergraduate biology curricula must be redesigned to prepare the next generation of informed citizens as well as those who will pursue careers in the life sciences. The BEDROCK initiative (Bioinformatics Education Dissemination: Reaching Out, Connecting and Knitting together) has fostered an international community of bioinformatics educators. The initiative's goals are to: (i) Identify and support faculty who can take leadership roles in bioinformatics education; (ii) Highlight and distribute innovative approaches to incorporating evolutionary bioinformatics data and techniques throughout undergraduate education; (iii) Establish mechanisms for the broad dissemination of bioinformatics resource materials and teaching models; (iv) Emphasize phylogenetic thinking and problem solving; and (v) Develop and publish new software tools to help students develop and test evolutionary hypotheses. Since 2002, BEDROCK has offered more than 50 faculty workshops around the world, published many resources and supported an environment for developing and sharing bioinformatics education approaches. The BEDROCK initiative builds on the established pedagogical philosophy and academic community of the BioQUEST Curriculum Consortium to assemble the diverse intellectual and human resources required to sustain an international reform effort in undergraduate bioinformatics education.

  18. Introduction to bioinformatics.

    Science.gov (United States)

    Can, Tolga

    2014-01-01

    Bioinformatics is an interdisciplinary field mainly involving molecular biology and genetics, computer science, mathematics, and statistics. Data intensive, large-scale biological problems are addressed from a computational point of view. The most common problems are modeling biological processes at the molecular level and making inferences from collected data. A bioinformatics solution usually involves the following steps: Collect statistics from biological data. Build a computational model. Solve a computational modeling problem. Test and evaluate a computational algorithm. This chapter gives a brief introduction to bioinformatics by first providing an introduction to biological terminology and then discussing some classical bioinformatics problems organized by the types of data sources. Sequence analysis is the analysis of DNA and protein sequences for clues regarding function and includes subproblems such as identification of homologs, multiple sequence alignment, searching sequence patterns, and evolutionary analyses. Protein structures are three-dimensional data and the associated problems are structure prediction (secondary and tertiary), analysis of protein structures for clues regarding function, and structural alignment. Gene expression data is usually represented as matrices and analysis of microarray data mostly involves statistics analysis, classification, and clustering approaches. Biological networks such as gene regulatory networks, metabolic pathways, and protein-protein interaction networks are usually modeled as graphs and graph theoretic approaches are used to solve associated problems such as construction and analysis of large-scale networks.
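Among the sequence-analysis problems this record lists, multiple sequence alignment builds on pairwise alignment, which is classically solved by dynamic programming. A minimal sketch of global (Needleman-Wunsch) alignment scoring follows; the scoring values are arbitrary examples, not a standard substitution matrix.

```python
def global_align_score(a, b, match=1, mismatch=-1, gap=-2):
    """Needleman-Wunsch global alignment score via dynamic programming."""
    n, m = len(a), len(b)
    # dp[i][j] = best score aligning a[:i] with b[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap          # a[:i] aligned against gaps only
    for j in range(1, m + 1):
        dp[0][j] = j * gap          # b[:j] aligned against gaps only
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + sub,   # substitution/match
                           dp[i - 1][j] + gap,       # gap in b
                           dp[i][j - 1] + gap)       # gap in a
    return dp[n][m]

print(global_align_score("GATTACA", "GCATGCU"))
```

Tracing back through the dp table (not shown) recovers the alignment itself; real tools replace the match/mismatch constants with biologically derived substitution matrices such as BLOSUM for proteins.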

  19. Translational Bioinformatics and Clinical Research (Biomedical) Informatics.

    Science.gov (United States)

    Sirintrapun, S Joseph; Zehir, Ahmet; Syed, Aijazuddin; Gao, JianJiong; Schultz, Nikolaus; Cheng, Donavan T

    2015-06-01

    Translational bioinformatics and clinical research (biomedical) informatics are the primary domains related to informatics activities that support translational research. Translational bioinformatics focuses on computational techniques in genetics, molecular biology, and systems biology. Clinical research (biomedical) informatics involves the use of informatics in discovery and management of new knowledge relating to health and disease. This article details 3 projects that are hybrid applications of translational bioinformatics and clinical research (biomedical) informatics: The Cancer Genome Atlas, the cBioPortal for Cancer Genomics, and the Memorial Sloan Kettering Cancer Center clinical variants and results database, all designed to facilitate insights into cancer biology and clinical/therapeutic correlations. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. The GMOD Drupal Bioinformatic Server Framework

    Science.gov (United States)

    Papanicolaou, Alexie; Heckel, David G.

    2010-01-01

    Motivation: Next-generation sequencing technologies have led to the widespread use of -omic applications. As a result, there is now a pronounced bioinformatic bottleneck. The general model organism database (GMOD) tool kit (http://gmod.org) has produced a number of resources aimed at addressing this issue. It lacks, however, a robust online solution that can deploy heterogeneous data and software within a Web content management system (CMS). Results: We present a bioinformatic framework for the Drupal CMS. It consists of three modules. First, GMOD-DBSF is an application programming interface module for the Drupal CMS that simplifies the programming of bioinformatic Drupal modules. Second, the Drupal Bioinformatic Software Bench (biosoftware_bench) allows for a rapid and secure deployment of bioinformatic software. An innovative graphical user interface (GUI) guides both use and administration of the software, including the secure provision of pre-publication datasets. Third, we present genes4all_experiment, which exemplifies how our work supports the wider research community. Conclusion: Given the infrastructure presented here, the Drupal CMS may become a powerful new tool set for bioinformaticians. The GMOD-DBSF base module is an expandable community resource that decreases development time of Drupal modules for bioinformatics. The biosoftware_bench module can already enhance biologists' ability to mine their own data. The genes4all_experiment module has already been responsible for archiving of more than 150 studies of RNAi from Lepidoptera, which were previously unpublished. Availability and implementation: Implemented in PHP and Perl. Freely available under the GNU Public License 2 or later from http://gmod-dbsf.googlecode.com Contact: alexie@butterflybase.org PMID:20971988

  1. The GMOD Drupal bioinformatic server framework.

    Science.gov (United States)

    Papanicolaou, Alexie; Heckel, David G

    2010-12-15

    Next-generation sequencing technologies have led to the widespread use of -omic applications. As a result, there is now a pronounced bioinformatic bottleneck. The general model organism database (GMOD) tool kit (http://gmod.org) has produced a number of resources aimed at addressing this issue. It lacks, however, a robust online solution that can deploy heterogeneous data and software within a Web content management system (CMS). We present a bioinformatic framework for the Drupal CMS. It consists of three modules. First, GMOD-DBSF is an application programming interface module for the Drupal CMS that simplifies the programming of bioinformatic Drupal modules. Second, the Drupal Bioinformatic Software Bench (biosoftware_bench) allows for a rapid and secure deployment of bioinformatic software. An innovative graphical user interface (GUI) guides both use and administration of the software, including the secure provision of pre-publication datasets. Third, we present genes4all_experiment, which exemplifies how our work supports the wider research community. Given the infrastructure presented here, the Drupal CMS may become a powerful new tool set for bioinformaticians. The GMOD-DBSF base module is an expandable community resource that decreases development time of Drupal modules for bioinformatics. The biosoftware_bench module can already enhance biologists' ability to mine their own data. The genes4all_experiment module has already been responsible for archiving of more than 150 studies of RNAi from Lepidoptera, which were previously unpublished. Implemented in PHP and Perl. Freely available under the GNU Public License 2 or later from http://gmod-dbsf.googlecode.com.

  2. Best practices in bioinformatics training for life scientists.

    KAUST Repository

    Via, Allegra; Blicher, Thomas; Bongcam-Rudloff, Erik; Brazas, Michelle D; Brooksbank, Cath; Budd, Aidan; De Las Rivas, Javier; Dreyer, Jacqueline; Fernandes, Pedro L; van Gelder, Celia; Jacob, Joachim; Jimenez, Rafael C; Loveland, Jane; Moran, Federico; Mulder, Nicola; Nyrönen, Tommi; Rother, Kristian; Schneider, Maria Victoria; Attwood, Teresa K

    2013-01-01

    concepts. Providing bioinformatics training to empower life scientists to handle and analyse their data efficiently, and progress their research, is a challenge across the globe. Delivering good training goes beyond traditional lectures and resource

  3. Mobilizing Learning Resources in a Transnational Classroom: Translocal and Digital Resources in a Community Technology Center

    Science.gov (United States)

    Noguerón-Liu, Silvia

    2014-01-01

    Drawing from transnational and activity theory frameworks, this study analyzes the ways translocal flows shape learning in a community technology center serving adult immigrants in the US Southwest. It also explores students' constructions of the transnational nature of the courses they took, where they had access to both online and face-to-face…

  4. Vertical and Horizontal Integration of Bioinformatics Education: A Modular, Interdisciplinary Approach

    Science.gov (United States)

    Furge, Laura Lowe; Stevens-Truss, Regina; Moore, D. Blaine; Langeland, James A.

    2009-01-01

    Bioinformatics education for undergraduates has been approached primarily in two ways: introduction of new courses with largely bioinformatics focus or introduction of bioinformatics experiences into existing courses. For small colleges such as Kalamazoo, creation of new courses within an already resource-stretched setting has not been an option.…

  5. Development of Bioinformatics Infrastructure for Genomics Research.

    Science.gov (United States)

    Mulder, Nicola J; Adebiyi, Ezekiel; Adebiyi, Marion; Adeyemi, Seun; Ahmed, Azza; Ahmed, Rehab; Akanle, Bola; Alibi, Mohamed; Armstrong, Don L; Aron, Shaun; Ashano, Efejiro; Baichoo, Shakuntala; Benkahla, Alia; Brown, David K; Chimusa, Emile R; Fadlelmola, Faisal M; Falola, Dare; Fatumo, Segun; Ghedira, Kais; Ghouila, Amel; Hazelhurst, Scott; Isewon, Itunuoluwa; Jung, Segun; Kassim, Samar Kamal; Kayondo, Jonathan K; Mbiyavanga, Mamana; Meintjes, Ayton; Mohammed, Somia; Mosaku, Abayomi; Moussa, Ahmed; Muhammd, Mustafa; Mungloo-Dilmohamud, Zahra; Nashiru, Oyekanmi; Odia, Trust; Okafor, Adaobi; Oladipo, Olaleye; Osamor, Victor; Oyelade, Jellili; Sadki, Khalid; Salifu, Samson Pandam; Soyemi, Jumoke; Panji, Sumir; Radouani, Fouzia; Souiai, Oussama; Tastan Bishop, Özlem

    2017-06-01

    Although pockets of bioinformatics excellence have developed in Africa, generally, large-scale genomic data analysis has been limited by the availability of expertise and infrastructure. H3ABioNet, a pan-African bioinformatics network, was established to build capacity specifically to enable H3Africa (Human Heredity and Health in Africa) researchers to analyze their data in Africa. Since the inception of the H3Africa initiative, H3ABioNet's role has evolved in response to changing needs from the consortium and the African bioinformatics community. H3ABioNet set out to develop core bioinformatics infrastructure and capacity for genomics research in various aspects of data collection, transfer, storage, and analysis. Various resources have been developed to address genomic data management and analysis needs of H3Africa researchers and other scientific communities on the continent. NetMap was developed and used to build an accurate picture of network performance within Africa and between Africa and the rest of the world, and Globus Online has been rolled out to facilitate data transfer. A participant recruitment database was developed to monitor participant enrollment, and data is being harmonized through the use of ontologies and controlled vocabularies. The standardized metadata will be integrated to provide a search facility for H3Africa data and biospecimens. Because H3Africa projects are generating large-scale genomic data, facilities for analysis and interpretation are critical. H3ABioNet is implementing several data analysis platforms that provide a large range of bioinformatics tools or workflows, such as Galaxy, the Job Management System, and eBiokits. A set of reproducible, portable, and cloud-scalable pipelines to support the multiple H3Africa data types are also being developed and dockerized to enable execution on multiple computing infrastructures. In addition, new tools have been developed for analysis of the uniquely divergent African data and for

  6. Geriatric resources in acute care hospitals and trauma centers: a scarce commodity.

    Science.gov (United States)

    Maxwell, Cathy A; Mion, Lorraine C; Minnick, Ann

    2013-12-01

    The number of older adults admitted to acute care hospitals with traumatic injury is rising. The purpose of this study was to examine the location of five prominent geriatric resource programs in U.S. acute care hospitals and trauma centers (N = 4,865). As of 2010, 5.8% of all U.S. hospitals had at least one of these programs. Only 8.8% of trauma centers were served by at least one program; the majority were in level I trauma centers. Slow adoption of geriatric resource programs in hospitals may be due to a lack of champions who will advocate for these programs, lack of evidence of their impact on outcomes, or lack of a business plan to support adoption. Future studies should focus on the benefits of geriatric resource programs from patients' perspectives, as well as from business case and outcomes perspectives. Copyright 2013, SLACK Incorporated.

  7. ATLAS Tier-2 at the Compute Resource Center GoeGrid in Göttingen

    Science.gov (United States)

    Meyer, Jörg; Quadt, Arnulf; Weber, Pavel; ATLAS Collaboration

    2011-12-01

    GoeGrid is a grid resource center located in Göttingen, Germany. The resources are commonly used, funded, and maintained by communities doing research in the fields of grid development, computer science, biomedicine, high energy physics, theoretical physics, astrophysics, and the humanities. For the high energy physics community, GoeGrid serves as a Tier-2 center for the ATLAS experiment as part of the world-wide LHC computing grid (WLCG). The status and performance of the Tier-2 center is presented with a focus on the interdisciplinary setup and administration of the cluster. Given the various requirements of the different communities on the hardware and software setup, the challenge of operating the cluster jointly is detailed. The benefits are an efficient use of computing and personnel resources.

  8. Bioinformatics for Exploration

    Science.gov (United States)

    Johnson, Kathy A.

    2006-01-01

    For the purpose of this paper, bioinformatics is defined as the application of computer technology to the management of biological information. It can be thought of as the science of developing computer databases and algorithms to facilitate and expedite biological research. This is a crosscutting capability that supports nearly all human health areas ranging from computational modeling, to pharmacodynamics research projects, to decision support systems within autonomous medical care. Bioinformatics serves to increase the efficiency and effectiveness of the life sciences research program. It provides data, information, and knowledge capture which further supports management of the bioastronautics research roadmap - identifying gaps that still remain and enabling the determination of which risks have been addressed.

  9. Advance in structural bioinformatics

    CERN Document Server

    Wei, Dongqing; Zhao, Tangzhen; Dai, Hao

    2014-01-01

    This text examines in detail mathematical and physical modeling, computational methods and systems for obtaining and analyzing biological structures, using pioneering research cases as examples. As such, it emphasizes programming and problem-solving skills. It provides information on structure bioinformatics at various levels, with individual chapters covering introductory to advanced aspects, from fundamental methods and guidelines on acquiring and analyzing genomics and proteomics sequences, the structures of protein, DNA and RNA, to the basics of physical simulations and methods for conform

  10. Phylogenetic trees in bioinformatics

    Energy Technology Data Exchange (ETDEWEB)

    Burr, Tom L [Los Alamos National Laboratory

    2008-01-01

    Genetic data is often used to infer evolutionary relationships among a collection of viruses, bacteria, animal or plant species, or other operational taxonomic units (OTU). A phylogenetic tree depicts such relationships and provides a visual representation of the estimated branching order of the OTUs. Tree estimation is unique for several reasons, including: the types of data used to represent each OTU; the use of probabilistic nucleotide substitution models; the inference goals involving both tree topology and branch length; and the huge number of possible trees for a given sample of a very modest number of OTUs, which implies that finding the best tree(s) to describe the genetic data for each OTU is computationally demanding. Bioinformatics is too large a field to review here. We focus on that aspect of bioinformatics that includes study of similarities in genetic data from multiple OTUs. Although research questions are diverse, a common underlying challenge is to estimate the evolutionary history of the OTUs. Therefore, this paper reviews the role of phylogenetic tree estimation in bioinformatics, available methods and software, and identifies areas for additional research and development.
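    The combinatorial explosion mentioned in the abstract can be made concrete: for n OTUs there are (2n-5)!! possible unrooted binary tree topologies. A short sketch (illustrative, not from the paper):

```python
def unrooted_tree_count(n):
    """Number of unrooted binary tree topologies for n taxa: (2n-5)!!"""
    count = 1
    for k in range(3, 2 * n - 4, 2):  # odd factors 3, 5, ..., 2n-5
        count *= k
    return count

for n in (5, 10, 20):
    print(n, unrooted_tree_count(n))
```

    Already at 10 OTUs there are over two million candidate topologies (2,027,025), which is why heuristic tree search, rather than exhaustive enumeration, is the norm.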

  11. Data mining for bioinformatics applications

    CERN Document Server

    Zengyou, He

    2015-01-01

    Data Mining for Bioinformatics Applications provides valuable information on the data mining methods that have been widely used for solving real bioinformatics problems, including problem definition, data collection, data preprocessing, modeling, and validation. The text uses an example-based method to illustrate how to apply data mining techniques to solve real bioinformatics problems, containing 45 bioinformatics problems that have been investigated in recent research. For each example, the entire data mining process is described, ranging from data preprocessing to modeling and result validation.
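    The preprocessing-modeling-validation cycle the book describes can be sketched end to end. The following toy nearest-centroid classifier uses invented expression-like vectors and labels (all data here are assumptions for illustration, not from the book):

```python
import math

# Toy "expression profiles": (feature vector, class label) -- invented data.
samples = [
    ([1.0, 0.9, 0.1], "tumor"), ([0.9, 1.1, 0.2], "tumor"),
    ([0.1, 0.2, 1.0], "normal"), ([0.2, 0.1, 0.9], "normal"),
]

def centroid(vectors):
    """Column-wise mean of a list of equal-length vectors."""
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def train(data):
    """Modeling step: compute one centroid per class."""
    by_class = {}
    for x, y in data:
        by_class.setdefault(y, []).append(x)
    return {label: centroid(vs) for label, vs in by_class.items()}

def predict(model, x):
    """Assign x to the class with the nearest centroid (Euclidean)."""
    return min(model, key=lambda label: math.dist(model[label], x))

model = train(samples)
# Validation step: accuracy on held-out points (also invented).
tests = [([1.0, 1.0, 0.0], "tumor"), ([0.0, 0.2, 1.1], "normal")]
accuracy = sum(predict(model, x) == y for x, y in tests) / len(tests)
print(accuracy)
```

    Real bioinformatics pipelines add the steps this sketch omits: normalization, feature selection, and cross-validation rather than a single held-out split.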

  12. On-demand provisioning of HEP compute resources on cloud sites and shared HPC centers

    Science.gov (United States)

    Erli, G.; Fischer, F.; Fleig, G.; Giffels, M.; Hauth, T.; Quast, G.; Schnepf, M.; Heese, J.; Leppert, K.; Arnaez de Pedro, J.; Sträter, R.

    2017-10-01

    This contribution reports on solutions, experiences and recent developments with the dynamic, on-demand provisioning of remote computing resources for analysis and simulation workflows. Local resources of a physics institute are extended by private and commercial cloud sites, ranging from the inclusion of desktop clusters over institute clusters to HPC centers. Rather than relying on dedicated HEP computing centers, it is nowadays more reasonable and flexible to utilize remote computing capacity via virtualization techniques or container concepts. We report on recent experience from incorporating a remote HPC center (NEMO Cluster, Freiburg University) and resources dynamically requested from the commercial provider 1&1 Internet SE into our institute's computing infrastructure. The Freiburg HPC resources are requested via the standard batch system, allowing HPC and HEP applications to be executed simultaneously, such that regular batch jobs run side by side with virtual machines managed via OpenStack [1]. For the inclusion of the 1&1 commercial resources, a Python API and SDK as well as the possibility to upload images were available. Large-scale tests prove the capability to serve the scientific use case in the European 1&1 datacenters. The described environment at the Institute of Experimental Nuclear Physics (IEKP) at KIT serves the needs of researchers participating in the CMS and Belle II experiments. In total, resources exceeding half a million CPU hours have been provided by remote sites.

  13. Turnover intentions in a call center: The role of emotional dissonance, job resources, and job satisfaction.

    Directory of Open Access Journals (Sweden)

    Margherita Zito

    Turnover intentions refer to employees' intent to leave the organization; within call centers, they can be influenced by factors such as relational variables or the perception of the quality of working life, which can be affected by emotional dissonance. This specific job demand to express emotions not felt is peculiar to call centers, and can influence job satisfaction and turnover intentions, a crucial problem in these working contexts. This study aims to detect, within the theoretical framework of the Job Demands-Resources Model, the role of emotional dissonance (job demand) and two resources, job autonomy and supervisors' support, in the perception of job satisfaction and turnover intentions in an Italian call center. The study involved 318 call center agents of an Italian Telecommunication Company. Data analysis first produced descriptive statistics through SPSS 22. A path analysis was then performed through LISREL 8.72 and tested both direct and indirect effects. Results suggest the role of resources in fostering job satisfaction and in decreasing turnover intentions. Emotional dissonance reveals a negative relation with job satisfaction and a positive relation with turnover. Moreover, job satisfaction is negatively related with turnover and mediates the relationship between job resources and turnover. This study contributes to extending the knowledge about the variables influencing turnover intentions, a crucial problem among call centers. Moreover, the study identifies theoretical considerations and practical implications to promote well-being among call center employees. To foster job satisfaction and reduce turnover intentions, in fact, it is important to make resources available, but also to offer specific training programs to make employees and supervisors aware of the consequences of emotional dissonance.

  14. Turnover intentions in a call center: The role of emotional dissonance, job resources, and job satisfaction

    Science.gov (United States)

    Zito, Margherita; Molino, Monica; Cortese, Claudio Giovanni; Ghislieri, Chiara; Colombo, Lara

    2018-01-01

    Background: Turnover intentions refer to employees' intent to leave the organization; within call centers, they can be influenced by factors such as relational variables or the perception of the quality of working life, which can be affected by emotional dissonance. This specific job demand to express emotions not felt is peculiar to call centers, and can influence job satisfaction and turnover intentions, a crucial problem in these working contexts. This study aims to detect, within the theoretical framework of the Job Demands-Resources Model, the role of emotional dissonance (job demand) and two resources, job autonomy and supervisors' support, in the perception of job satisfaction and turnover intentions in an Italian call center. Method: The study involved 318 call center agents of an Italian Telecommunication Company. Data analysis first produced descriptive statistics through SPSS 22. A path analysis was then performed through LISREL 8.72 and tested both direct and indirect effects. Results: The findings suggest the role of resources in fostering job satisfaction and in decreasing turnover intentions. Emotional dissonance reveals a negative relation with job satisfaction and a positive relation with turnover. Moreover, job satisfaction is negatively related with turnover and mediates the relationship between job resources and turnover. Conclusion: This study contributes to extending the knowledge about the variables influencing turnover intentions, a crucial problem among call centers. Moreover, the study identifies theoretical considerations and practical implications to promote well-being among call center employees. To foster job satisfaction and reduce turnover intentions, in fact, it is important to make resources available, but also to offer specific training programs to make employees and supervisors aware of the consequences of emotional dissonance. PMID:29401507

  15. Turnover intentions in a call center: The role of emotional dissonance, job resources, and job satisfaction.

    Science.gov (United States)

    Zito, Margherita; Emanuel, Federica; Molino, Monica; Cortese, Claudio Giovanni; Ghislieri, Chiara; Colombo, Lara

    2018-01-01

    Turnover intentions refer to employees' intent to leave the organization; within call centers, they can be influenced by factors such as relational variables or the perception of the quality of working life, which can be affected by emotional dissonance. This specific job demand to express emotions not felt is peculiar to call centers, and can influence job satisfaction and turnover intentions, a crucial problem in these working contexts. This study aims to detect, within the theoretical framework of the Job Demands-Resources Model, the role of emotional dissonance (job demand) and two resources, job autonomy and supervisors' support, in the perception of job satisfaction and turnover intentions in an Italian call center. The study involved 318 call center agents of an Italian Telecommunication Company. Data analysis first produced descriptive statistics through SPSS 22. A path analysis was then performed through LISREL 8.72 and tested both direct and indirect effects. Results suggest the role of resources in fostering job satisfaction and in decreasing turnover intentions. Emotional dissonance reveals a negative relation with job satisfaction and a positive relation with turnover. Moreover, job satisfaction is negatively related with turnover and mediates the relationship between job resources and turnover. This study contributes to extending the knowledge about the variables influencing turnover intentions, a crucial problem among call centers. Moreover, the study identifies theoretical considerations and practical implications to promote well-being among call center employees. To foster job satisfaction and reduce turnover intentions, in fact, it is important to make resources available, but also to offer specific training programs to make employees and supervisors aware of the consequences of emotional dissonance.
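    The mediation finding reported in these studies (job satisfaction mediating the effect of resources on turnover) rests on the product-of-coefficients logic of path analysis. Here is a toy computation on fabricated scores, using simple regression slopes and ignoring the covariate adjustment a full LISREL path model would apply:

```python
def slope(x, y):
    """Simple-regression slope of y on x: cov(x, y) / var(x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

# Fabricated scores: resources -> satisfaction -> turnover intention.
resources    = [1, 2, 3, 4, 5]
satisfaction = [2, 3, 4, 5, 6]      # rises with resources (a path)
turnover     = [9, 7, 5, 3, 1]      # falls as satisfaction rises (b path)

a = slope(resources, satisfaction)  # a = 1.0
b = slope(satisfaction, turnover)   # b = -2.0 (simple slope, no adjustment)
print("indirect effect a*b =", a * b)
```

    A negative product a*b mirrors the reported result: more resources raise satisfaction, which in turn lowers turnover intentions.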

  16. 15 CFR 291.4 - National industry-specific pollution prevention and environmental compliance resource centers.

    Science.gov (United States)

    2010-01-01

    15 CFR 291.4: National industry-specific pollution prevention and environmental compliance resource centers. Section 291.4, Title 15, Commerce and Foreign Trade; Regulations Relating to Commerce and Foreign Trade; National Institute of Standards and Technology, Department of Commerce; NIST Extramural Program…

  17. Automated Library Networking in American Public Community College Learning Resources Centers.

    Science.gov (United States)

    Miah, Adbul J.

    1994-01-01

    Discusses the need for community colleges to assess their participation in automated library networking systems (ALNs). Presents results of questionnaires sent to 253 community college learning resource center directors to determine their use of ALNs. Reviews benefits of automation and ALN activities, planning and communications, institution size,…

  18. Measuring Malaysia School Resource Centers' Standards through iQ-PSS: An Online Management Information System

    Science.gov (United States)

    Zainudin, Fadzliaton; Ismail, Kamarulzaman

    2010-01-01

    The Ministry of Education has come up with an innovative way to monitor the progress of 9,843 School Resource Centers (SRCs) using an online management information system called iQ-PSS (Quality Index of SRC). This paper aims to describe the data collection method and analyze the current state of SRCs in Malaysia and explain how the results can be…

  19. Expanding the Intellectual Property Knowledge Base at University Libraries: Collaborating with Patent and Trademark Resource Centers

    Science.gov (United States)

    Wallace, Martin; Reinman, Suzanne

    2018-01-01

    Patent and Trademark Resource Centers are located in libraries throughout the U.S., with 43 being in academic libraries. With the importance of incorporating a knowledge of intellectual property (IP) and patent research in university curricula nationwide, this study developed and evaluated a partnership program to increase the understanding of IP…

  20. Efficient management of data center resources for massively multiplayer online games

    NARCIS (Netherlands)

    Nae, V.; Iosup, A.; Podlipnig, S.; Prodan, R.; Epema, D.H.J.; Fahringer, T.

    2008-01-01

    Today's massively multiplayer online games (MMOGs) can include millions of concurrent players spread across the world. To keep these highly-interactive virtual environments online, a MMOG operator may need to provision tens of thousands of computing resources from various data centers. Faced with

  1. Zebrafish Health Conditions in the China Zebrafish Resource Center and 20 Major Chinese Zebrafish Laboratories.

    Science.gov (United States)

    Liu, Liyue; Pan, Luyuan; Li, Kuoyu; Zhang, Yun; Zhu, Zuoyan; Sun, Yonghua

    2016-07-01

    In China, the use of zebrafish as an experimental animal has expanded widely over the past 15 years. The China Zebrafish Resource Center (CZRC), which was established in 2012, is becoming one of the major resource centers in the global zebrafish community. Large-scale use and regular exchange of zebrafish resources have raised the requirements for zebrafish health management in China. This article reports the current aquatic infrastructure design, animal husbandry, and health-monitoring programs in the CZRC. Meanwhile, through a survey of 20 Chinese zebrafish laboratories, we also describe the current health status of major zebrafish facilities in China. We conclude that it is of great importance to establish a widely accepted health standard and health-monitoring strategy in the Chinese zebrafish research community.

  2. COMPARISON OF POPULAR BIOINFORMATICS DATABASES

    OpenAIRE

    Abdulganiyu Abdu Yusuf; Zahraddeen Sufyanu; Kabir Yusuf Mamman; Abubakar Umar Suleiman

    2016-01-01

    Bioinformatics is the application of computational tools to capture and interpret biological data. It has wide applications in drug development, crop improvement, agricultural biotechnology and forensic DNA analysis. There are various databases available to researchers in bioinformatics. These databases are customized for specific needs and range in size, scope, and purpose. The main drawbacks of bioinformatics databases include redundant information, constant change, data spread over m...

  3. Bioinformatics-Aided Venomics

    Directory of Open Access Journals (Sweden)

    Quentin Kaas

    2015-06-01

    Venomics is a modern approach that combines transcriptomics and proteomics to explore the toxin content of venoms. This review will give an overview of computational approaches that have been created to classify and consolidate venomics data, as well as algorithms that have aided the discovery and analysis of toxin nucleic acid and protein sequences, toxin three-dimensional structures and toxin functions. Bioinformatics is used to tackle specific challenges associated with the identification and annotation of toxins. Recognizing toxin transcript sequences among second-generation sequencing data cannot rely only on basic sequence similarity because toxins are highly divergent. Mass spectrometry sequencing of mature toxins is challenging because toxins can display a large number of post-translational modifications. Identifying the mature toxin region in toxin precursor sequences requires the prediction of the cleavage sites of proprotein convertases, most of which are unknown or not well characterized. Tracing the evolutionary relationships between toxins should consider specific mechanisms of rapid evolution as well as interactions between predatory animals and prey. Rapidly determining the activity of toxins is the main bottleneck in venomics discovery, but some recent bioinformatics and molecular modeling approaches give hope that accurate predictions of toxin specificity could be made in the near future.
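    Cleavage-site prediction of the kind mentioned above is often bootstrapped with simple motif scans before more sophisticated models are applied. A minimal sketch using the furin-like consensus R-X-[K/R]-R; the precursor sequence here is invented for illustration:

```python
import re

# Furin-like cleavage motif: Arg, any residue, Lys/Arg, Arg (R-X-[KR]-R).
FURIN = re.compile(r"R.[KR]R")

def cleavage_sites(precursor):
    """Return 0-based positions just after each candidate motif,
    i.e. where the mature peptide would start."""
    return [m.end() for m in FURIN.finditer(precursor)]

# Invented toxin-precursor sequence, for illustration only.
seq = "MKTLLVLAVLGRSKRAEDSTIGCCSNPACAVNHPELC"
print(cleavage_sites(seq))
```

    Real convertase-site predictors weigh surrounding residues and known exceptions rather than relying on a single consensus pattern.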

  4. Evaluation of health resource utilization efficiency in community health centers of Jiangsu Province, China.

    Science.gov (United States)

    Xu, Xinglong; Zhou, Lulin; Antwi, Henry Asante; Chen, Xi

    2018-02-20

    While the demand for health services keeps escalating in the grass-roots and rural areas of China, a substantial portion of healthcare resources remains stagnant in the more developed cities, and this has entrenched health inequity in many parts of China. At its conception, China's Deepen Medical Reform, started in 2012, was intended to flush out possible disparities and promote a more equitable and efficient distribution of healthcare resources. Nearly half a decade into this reform, there are uncertainties as to whether the attainment of its objectives is in sight. Using a hybrid of panel data analysis and an augmented data envelopment analysis (DEA), we model human, material, and financial resources to determine their technical and scale efficiency and to comprehensively evaluate the transverse and longitudinal allocation efficiency of community health resources in Jiangsu Province. We observed that the Deepen Medical Reform in China has led to increased concern among health policy makers in the province to ensure efficient allocation of community health resources. This has led to greater efficiency in health resource allocation in Jiangsu in general, but serious regional and municipal disparities still exist. Using the DEA model, we note that the output from the Community Health Centers is not commensurate with the substantial resources (human, material, and financial) invested in them. We further observe that the situation is worst in the less-developed northern parts of Jiangsu Province. The government of Jiangsu Province could improve the efficiency of health resource allocation by improving the community health service system, rationalizing the allocation of health personnel, optimizing the allocation of material resources, and enhancing the allocation of financial resources.
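    The DEA approach used in the study solves a linear program per decision-making unit over multiple inputs and outputs. As a much simpler illustration of the underlying efficiency idea (not the study's model), here is a single-input, single-output ratio score on fabricated center figures:

```python
# Naive efficiency scores: output/input ratio scaled to the best unit.
# This is only a single-input, single-output illustration of the idea
# behind DEA; a real DEA model solves a linear program per unit over
# multiple inputs (staff, materials, finance) and outputs.
centers = {            # fabricated community-health-center figures
    "Center A": {"staff": 40, "visits": 12000},
    "Center B": {"staff": 25, "visits": 10000},
    "Center C": {"staff": 60, "visits": 15000},
}

ratios = {name: d["visits"] / d["staff"] for name, d in centers.items()}
best = max(ratios.values())
scores = {name: r / best for name, r in ratios.items()}
for name, s in sorted(scores.items()):
    print(f"{name}: {s:.2f}")
```

    A unit scoring 1.0 lies on the efficiency frontier; lower scores indicate output that lags the best-performing peer per unit of input.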

  5. Bioinformatics Education in Pathology Training: Current Scope and Future Direction

    Directory of Open Access Journals (Sweden)

    Michael R Clay

    2017-04-01

    Training anatomic and clinical pathology residents in the principles of bioinformatics is a challenging endeavor. Most residents receive little to no formal exposure to bioinformatics during medical education, and most of the pathology training is spent interpreting histopathology slides using light microscopy or focused on laboratory regulation, management, and interpretation of discrete laboratory data. At a minimum, residents should be familiar with data structure, data pipelines, data manipulation, and data regulations within clinical laboratories. Fellowship-level training should incorporate advanced principles unique to each subspecialty. Barriers to bioinformatics education include the clinical apprenticeship training model, ill-defined educational milestones, inadequate faculty expertise, and limited exposure during medical training. Online educational resources, case-based learning, and incorporation into molecular genomics education could serve as effective educational strategies. Overall, pathology bioinformatics training can be incorporated into pathology resident curricula, provided there is motivation to incorporate, institutional support, educational resources, and adequate faculty expertise.

  6. Establishing a health outcomes and economics center in radiology: strategies and resources required

    International Nuclear Information System (INIS)

    Medina, Santiago L.; Altman, Nolan R.

    2002-01-01

    To describe the resources and strategies required to establish a health outcomes and economics center in radiology. Methods: Human and nonhuman resources required to perform sound outcomes and economics studies in radiology are reviewed. Results: Human resources needed include skilled medical and nonmedical staff. Nonhuman resources required are: (1) a communication and information network; (2) education tools and training programs; (3) budgetary strategies; and (4) sources of income. Effective utilization of these resources allows the performance of robust operational and clinical research projects in decision analysis, cost-effectiveness, diagnostic performance (sensitivity, specificity, and ROC curves), and clinical analytical and experimental studies. Conclusion: As new radiologic technology and techniques are introduced in medicine, society is increasingly demanding sound clinical studies that will determine the impact of radiologic studies on patient outcome. Health-care funding is scarce, and therefore third-party payers and hospitals are demanding more efficiency and productivity from radiologic service providers. To meet these challenges, radiology departments could establish health outcomes and economics centers to study the clinical effectiveness of imaging and its impact on patient outcome. (orig.)

  7. Western Mineral and Environmental Resources Science Center--providing comprehensive earth science for complex societal issues

    Science.gov (United States)

    Frank, David G.; Wallace, Alan R.; Schneider, Jill L.

    2010-01-01

    Minerals in the environment and products manufactured from mineral materials are all around us, and we use and come into contact with them every day. They affect our way of life and the health of all that lives. Minerals are critical to the Nation's economy, and knowing where future mineral resources will come from is important for sustaining the Nation's economy and national security. The U.S. Geological Survey (USGS) Mineral Resources Program (MRP) provides scientific information for objective resource assessments and unbiased research results on mineral resource potential, production and consumption statistics, and the environmental consequences of mining. The MRP conducts this research to give land planners and decision makers the information they need about where mineral commodities are known and suspected in the earth's crust and about the environmental consequences of extracting those commodities. As part of the MRP, scientists of the Western Mineral and Environmental Resources Science Center (WMERSC, or 'Center' herein) coordinate the development of national geologic, geochemical, geophysical, and mineral-resource databases and the migration of existing databases to standard models and formats that are available to both internal and external users. The unique expertise developed by Center scientists over many decades in response to mineral-resource-related issues is now in great demand to support applications such as public health research and remediation of environmental hazards that result from mining and mining-related activities. Results of WMERSC research provide timely and unbiased analyses of minerals and inorganic materials to (1) improve stewardship of public lands and resources; (2) support national and international economic and security policies; (3) sustain prosperity and improve our quality of life; and (4) protect and improve public health, safety, and environmental quality.

  8. Emergent Computation Emphasizing Bioinformatics

    CERN Document Server

    Simon, Matthew

    2005-01-01

    Emergent Computation is concerned with recent applications of Mathematical Linguistics or Automata Theory. This subject has a primary focus upon "Bioinformatics" (the Genome and arising interest in the Proteome), but the closing chapter also examines applications in Biology, Medicine, Anthropology, etc. The book is composed of an organized examination of DNA, RNA, and the assembly of amino acids into proteins. Rather than examine these areas from a purely mathematical viewpoint (that excludes much of the biochemical reality), the author uses scientific papers written mostly by biochemists based upon their laboratory observations. Thus while DNA may exist in its double stranded form, triple stranded forms are not excluded. Similarly, while bases exist in Watson-Crick complements, mismatched bases and abasic pairs are not excluded, nor are Hoogsteen bonds. Just as there are four bases naturally found in DNA, the existence of additional bases is not ignored, nor amino acids in addition to the usual complement of...

  9. Bioinformatics and moonlighting proteins

    Directory of Open Access Journals (Sweden)

    Sergio eHernández

    2015-06-01

    Full Text Available Multitasking or moonlighting is the capability of some proteins to execute two or more biochemical functions. Usually, moonlighting proteins are revealed experimentally by serendipity. For this reason, it would be helpful if bioinformatics could predict this multifunctionality, especially given the large amounts of sequence data from genome projects. In the present work, we analyse and describe several approaches that use sequences, structures, interactomics and current bioinformatics algorithms and programs to try to overcome this problem. Among these approaches are: (a) remote homology searches using Psi-Blast; (b) detection of functional motifs and domains; (c) analysis of data from protein-protein interaction (PPI) databases; (d) matching the query protein sequence to 3D databases (i.e., algorithms such as PISITE); (e) mutation correlation analysis between amino acids by algorithms such as MISTIC. Programs designed to identify functional motifs/domains detect mainly the canonical function but usually fail to detect the moonlighting one, Pfam and ProDom being the best methods. Remote homology search by Psi-Blast combined with data from interactomics (PPI) databases has the best performance. Structural information and mutation correlation analysis can help us to map the functional sites. Mutation correlation analysis can only be used in very specific situations (it requires the existence of multiply aligned family protein sequences) but can suggest how the evolutionary process of second-function acquisition took place. The multitasking protein database MultitaskProtDB (http://wallace.uab.es/multitask/), previously published by our group, has been used as a benchmark for all of the analyses.
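Approach (b), motif detection, can be sketched with a toy scan. The pattern below is the classic PROSITE-style N-glycosylation site N-{P}-[ST]-{P}; the sequence is hypothetical, and real scans use curated pattern libraries rather than a hand-written regex:

```python
import re

# PROSITE-style N-glycosylation pattern N-{P}-[ST]-{P}:
# Asn, anything but Pro, Ser or Thr, anything but Pro.
# The lookahead lets overlapping sites be reported as well.
MOTIF = re.compile(r"(?=(N[^P][ST][^P]))")

def find_motifs(seq):
    """Return (1-based position, matched site) for each motif hit."""
    return [(m.start() + 1, m.group(1)) for m in MOTIF.finditer(seq)]

toy_protein = "MKNATLLNGSVPNPSA"  # hypothetical sequence
print(find_motifs(toy_protein))  # [(3, 'NATL'), (8, 'NGSV')]
```

Note that the Asn at position 13 is rejected because a Pro follows it, which is exactly the kind of context rule that makes canonical motifs detectable while moonlighting functions remain invisible to pattern scans.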

  10. Establishing Network Interaction between Resource Training Centers for People with Disabilities and Partner Universities

    Directory of Open Access Journals (Sweden)

    Panyukova S.V.,

    2018-05-01

    Full Text Available The paper focuses on the problem of accessibility and quality of higher education for students with disabilities. We describe our experience in organising network interaction between the MSUPE Resource and Training Center for Disabled People, established in 2016-2017, and partner universities in 'fixed territories'. The need for cooperation and network interaction arises from the high demand for combining the efforts of leading experts, researchers, methodologists and instructors in order to improve the quality and accessibility of higher education for persons with disabilities. The Resource and Training Center offers counseling for the partner universities, arranges advanced training for those responsible for teaching students with disabilities, and offers specialized equipment for temporary use. In this article, we emphasize the importance of organizing network interactions with universities and social partners in order to ensure the accessibility of higher education for students with disabilities.

  11. FORMATION OF ICT COMPETENCE OF TEACHER-TUTORS AT A DISTANCE EDUCATION RESOURCE CENTER

    Directory of Open Access Journals (Sweden)

    Olga E. Konevchshynska

    2014-09-01

    Full Text Available The paper analyzes the main approaches to defining the ICT competence of professionals who provide training and methodological support for distance learning. It highlights the level of scientific development of the problem, identifies and substantiates the essence of a teacher's ICT competence, and reviews international and domestic experience of teacher training in the sphere of information technologies. It is indicated that one of the main tasks of resource centers for distance education is to ensure an appropriate level of qualification of the teacher-tutors working in a network of resource centers. The paper also identifies the levels of ICT competence necessary for the successful professional activity of network teachers.

  12. Uso da bioinformática na diferenciação molecular da Entamoeba histolytica e Entamoeba dispar - DOI: 10.4025/actascihealthsci.v30i2.2375 Molecular discrimination of Entamoeba histolytica and Entamoeba dispar by bioinformatics resources - DOI: 10.4025/actascihealthsci.v30i2.2375

    Directory of Open Access Journals (Sweden)

    Débora Sommer

    2008-12-01

    Full Text Available Invasive amebiasis, caused by Entamoeba histolytica, is microscopically indistinguishable from infection with the non-pathogenic species Entamoeba dispar. This study therefore set out to discriminate the two species by molecular techniques with the aid of bioinformatics tools. The analysis drew on the National Center for Biotechnology Information databank; a sequence-similarity search led to selection of the cysteine synthase gene. A primer pair was designed (Web Primer program) and the restriction enzyme TaqI was selected (Web Cutter program). After digestion, the amplified fragment was cleaved into two fragments, one of 255 bp and another of 554 bp, the pattern characteristic of E. histolytica. In the absence of cleavage, the fragment retained its full size of 809 bp, indicating E. dispar.
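The restriction step described above can be simulated in a few lines. The amplicon below is a synthetic placeholder built only to reproduce the reported fragment sizes; TaqI recognizes TCGA and cuts after the T:

```python
# Simulate a single-enzyme restriction digest and report fragment sizes.
def digest(seq, site="TCGA", cut_offset=1):
    cuts = []
    i = seq.find(site)
    while i != -1:
        cuts.append(i + cut_offset)  # cut position inside the site
        i = seq.find(site, i + 1)
    fragments, prev = [], 0
    for c in cuts:
        fragments.append(c - prev)
        prev = c
    fragments.append(len(seq) - prev)
    return fragments

# Synthetic 809 bp amplicon with one TaqI site placed so the digest
# reproduces the E. histolytica pattern; a site-free amplicon stays
# intact at 809 bp, the E. dispar pattern.
amplicon = "A" * 254 + "TCGA" + "A" * 551
print(digest(amplicon))   # [255, 554]
print(digest("A" * 809))  # [809]
```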

  13. Bioinformatics and Computational Core Technology Center

    Data.gov (United States)

    Federal Laboratory Consortium — SERVICES PROVIDED BY THE COMPUTER CORE FACILITYEvaluation, purchase, set up, and maintenance of the computer hardware and network for the 170 users in the research...

  14. Interdisciplinary Introductory Course in Bioinformatics

    Science.gov (United States)

    Kortsarts, Yana; Morris, Robert W.; Utell, Janine M.

    2010-01-01

    Bioinformatics is a relatively new interdisciplinary field that integrates computer science, mathematics, biology, and information technology to manage, analyze, and understand biological, biochemical and biophysical information. We present our experience in teaching an interdisciplinary course, Introduction to Bioinformatics, which was developed…

  15. Virtual Bioinformatics Distance Learning Suite

    Science.gov (United States)

    Tolvanen, Martti; Vihinen, Mauno

    2004-01-01

    Distance learning as a computer-aided concept allows students to take courses from anywhere at any time. In bioinformatics, computers are needed to collect, store, process, and analyze massive amounts of biological and biomedical data. We have applied the concept of distance learning in virtual bioinformatics to provide university course material…

  16. Activities and experience of the Federal Resource Center for Organizing Comprehensive Support for Children with ASD

    Directory of Open Access Journals (Sweden)

    Khaustov A.V.

    2016-12-01

    Full Text Available This article presents the basic activities and experience of the Federal Resource Center for Organizing Comprehensive Support for Children with ASD of the Moscow State University of Psychology & Education, amassed during 22 years of practice. Some statistical data on the center's activity are presented. Emphasis is placed on the multidirectional work and on developing ways of interdepartmental and networking interaction for the sake of founding a system of comprehensive support for autistic children in the Russian Federation.

  17. Evaluation of a fungal collection as certified reference material producer and as a biological resource center

    Directory of Open Access Journals (Sweden)

    Tatiana Forti

    2016-06-01

    Full Text Available Abstract Considering the absence of standards for culture collections, and more specifically for biological resource centers, worldwide, in addition to the absence of certified biological material in Brazil, this study aimed to evaluate a Fungal Collection from Fiocruz as a producer of certified reference material and as a Biological Resource Center (BRC). For this evaluation, a checklist based on the requirements of ABNT ISO GUIA 34:2012, correlated with ABNT NBR ISO/IEC 17025:2005, was designed and applied. Complementing the application of the checklist, an internal audit was performed. An evaluation of this Collection as a BRC was also conducted following the requirements of NIT-DICLA-061, the Brazilian internal standard from Inmetro, based on ABNT NBR ISO/IEC 17025:2005, ABNT ISO GUIA 34:2012 and the OECD Best Practice Guidelines for BRCs. This was the first time that NIT-DICLA-061 was applied to a culture collection during an internal audit. The assessments enabled a proposal for the adequacy of this Collection to support implementation of the management system needed for its future accreditation by Inmetro as a certified reference material producer and as a Biological Resource Center according to NIT-DICLA-061.

  18. Evaluation of a fungal collection as certified reference material producer and as a biological resource center.

    Science.gov (United States)

    Forti, Tatiana; Souto, Aline da S S; do Nascimento, Carlos Roberto S; Nishikawa, Marilia M; Hubner, Marise T W; Sabagh, Fernanda P; Temporal, Rosane Maria; Rodrigues, Janaína M; da Silva, Manuela

    2016-01-01

    Considering the absence of standards for culture collections, and more specifically for biological resource centers, worldwide, in addition to the absence of certified biological material in Brazil, this study aimed to evaluate a Fungal Collection from Fiocruz as a producer of certified reference material and as a Biological Resource Center (BRC). For this evaluation, a checklist based on the requirements of ABNT ISO GUIA 34:2012, correlated with ABNT NBR ISO/IEC 17025:2005, was designed and applied. Complementing the application of the checklist, an internal audit was performed. An evaluation of this Collection as a BRC was also conducted following the requirements of NIT-DICLA-061, the Brazilian internal standard from Inmetro, based on ABNT NBR ISO/IEC 17025:2005, ABNT ISO GUIA 34:2012 and the OECD Best Practice Guidelines for BRCs. This was the first time that NIT-DICLA-061 was applied to a culture collection during an internal audit. The assessments enabled a proposal for the adequacy of this Collection to support implementation of the management system needed for its future accreditation by Inmetro as a certified reference material producer and as a Biological Resource Center according to NIT-DICLA-061. Copyright © 2016 Sociedade Brasileira de Microbiologia. Published by Elsevier Editora Ltda. All rights reserved.

  19. Development of a center for biosystematics resources. Progress report, November 1, 1978-October 31, 1979

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, S.R.

    1979-11-01

    The objective in the development of a Center for Biosystematics Resources is to provide a centralized source of information regarding the biological expertise available in the academic/museum community, and the federal and state regulations concerning the acquisition, transport, and possession of biological specimens. Such a Center would serve to facilitate access to this widely dispersed information. The heart of the Center is a series of computer-assisted databases which contain information on biologists and their areas of expertise, biological collections, annotated federal regulations, and federal and state controlled-species lists. The purpose of this three-year contract with the Department of Energy is to continue the updating and revision of these databases; to make the information they contain readily available to the Department of Energy, other government agencies, the private sector, and the academic community; and to achieve financial independence by the end of the three-year period.

  20. Renewable Resources: a national catalog of model projects. Volume 1. Northeast Solar Energy Center Region

    Energy Technology Data Exchange (ETDEWEB)

    None

    1980-07-01

    This compilation of diverse conservation and renewable energy projects across the United States was prepared through the enthusiastic participation of solar and alternate energy groups from every state and region. Compiled and edited by the Center for Renewable Resources, these projects reflect many levels of innovation and technical expertise. In many cases, a critical analysis is presented of how projects performed and of the institutional conditions associated with their success or failure. Some 2000 projects are included in this compilation; most have worked, some have not. Information about all is presented to aid learning from these experiences. The four volumes in this set are arranged in state sections by geographic region, coinciding with the four Regional Solar Energy Centers. The table of contents is organized by project category so that maximum cross-referencing may be obtained. This volume includes information on the Northeast Solar Energy Center Region. (WHK).

  1. Renewable Resources: a national catalog of model projects. Volume 3. Southern Solar Energy Center Region

    Energy Technology Data Exchange (ETDEWEB)

    None

    1980-07-01

    This compilation of diverse conservation and renewable energy projects across the United States was prepared through the enthusiastic participation of solar and alternate energy groups from every state and region. Compiled and edited by the Center for Renewable Resources, these projects reflect many levels of innovation and technical expertise. In many cases, a critical analysis is presented of how projects performed and of the institutional conditions associated with their success or failure. Some 2000 projects are included in this compilation; most have worked, some have not. Information about all is presented to aid learning from these experiences. The four volumes in this set are arranged in state sections by geographic region, coinciding with the four Regional Solar Energy Centers. The table of contents is organized by project category so that maximum cross-referencing may be obtained. This volume includes information on the Southern Solar Energy Center Region. (WHK)

  2. 75 FR 78997 - Centers for Disease Control and Prevention/Health Resources and Services Administration (CDC/HRSA...

    Science.gov (United States)

    2010-12-17

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Disease Control and Prevention Centers for Disease Control and Prevention/Health Resources and Services Administration (CDC/HRSA) Advisory Committee... and other committee management activities, for both the Centers for Disease Control and Prevention and...

  3. A Guide to the Data Resources of the Henry A. Murray Research Center of Radcliffe College: A Center for the Study of Lives [and] Index to [the] Guide.

    Science.gov (United States)

    Radcliffe Coll., Cambridge, MA. Henry A. Murray Research Center.

    The first of two volumes provides information about data resources available at the Henry A. Murray Research Center of Radcliffe College, a multidisciplinary research center that is a national repository for social and behavioral science data on human development and social change; topics of special concern to women are collection priorities. The…

  4. Development of a cloud-based Bioinformatics Training Platform.

    Science.gov (United States)

    Revote, Jerico; Watson-Haigh, Nathan S; Quenette, Steve; Bethwaite, Blair; McGrath, Annette; Shang, Catherine A

    2017-05-01

    The Bioinformatics Training Platform (BTP) has been developed to provide access to the computational infrastructure required to deliver sophisticated hands-on bioinformatics training courses. The BTP is a cloud-based solution that is in active use for delivering next-generation sequencing training to Australian researchers at geographically dispersed locations. The BTP was built to provide an easy, accessible, consistent and cost-effective approach to delivering workshops at host universities and organizations with a high demand for bioinformatics training but lacking the dedicated bioinformatics training suites required. To support broad uptake of the BTP, the platform has been made compatible with multiple cloud infrastructures. The BTP is an open-source and open-access resource. To date, 20 training workshops have been delivered to over 700 trainees at over 10 venues across Australia using the BTP. © The Author 2016. Published by Oxford University Press.

  5. Microbial bioinformatics 2020.

    Science.gov (United States)

    Pallen, Mark J

    2016-09-01

    Microbial bioinformatics in 2020 will remain a vibrant, creative discipline, adding value to the ever-growing flood of new sequence data, while embracing novel technologies and fresh approaches. Databases and search strategies will struggle to cope and manual curation will not be sustainable during the scale-up to the million-microbial-genome era. Microbial taxonomy will have to adapt to a situation in which most microorganisms are discovered and characterised through the analysis of sequences. Genome sequencing will become a routine approach in clinical and research laboratories, with fresh demands for interpretable user-friendly outputs. The "internet of things" will penetrate healthcare systems, so that even a piece of hospital plumbing might have its own IP address that can be integrated with pathogen genome sequences. Microbiome mania will continue, but the tide will turn from molecular barcoding towards metagenomics. Crowd-sourced analyses will collide with cloud computing, but eternal vigilance will be the price of preventing the misinterpretation and overselling of microbial sequence data. Output from hand-held sequencers will be analysed on mobile devices. Open-source training materials will address the need for the development of a skilled labour force. As we boldly go into the third decade of the twenty-first century, microbial sequence space will remain the final frontier! © 2016 The Author. Microbial Biotechnology published by John Wiley & Sons Ltd and Society for Applied Microbiology.

  6. Human resources management in fitness centers and their relationship with the organizational performance

    Directory of Open Access Journals (Sweden)

    Jerónimo García Fernández

    2014-12-01

    Full Text Available Purpose: Human capital is essential in organizations providing sports services. However, few studies examine which human-resource practices are carried out and whether they help sports organizations achieve better results. The aim of this paper is therefore to analyze human resource management practices in private fitness centers and their relationship with organizational performance. Design/methodology/approach: A questionnaire was administered to 101 managers of private fitness centers in Spain; exploratory and confirmatory factor analyses were performed, along with linear regressions between the variables. Findings: In fitness organizations, training, reward, communication and selection practices are positively correlated with organizational performance. Research limitations/implications: The use of convenience sampling in a single country limits the extrapolation of the results to the wider market. Originality/value: First, this is among the first studies to analyze the management of human resources in sport organizations from the point of view of top managers. Second, it allows fitness center managers to adopt practices that improve organizational performance.

  7. Engineering bioinformatics: building reliability, performance and productivity into bioinformatics software.

    Science.gov (United States)

    Lawlor, Brendan; Walsh, Paul

    2015-01-01

    There is a lack of software engineering skills in bioinformatic contexts. We discuss the consequences of this lack, examine existing explanations and remedies to the problem, point out their shortcomings, and propose alternatives. Previous analyses of the problem have tended to treat the use of software in scientific contexts as categorically different from the general application of software engineering in commercial settings. In contrast, we describe bioinformatic software engineering as a specialization of general software engineering, and examine how it should be practiced. Specifically, we highlight the difference between programming and software engineering, list elements of the latter and present the results of a survey of bioinformatic practitioners which quantifies the extent to which those elements are employed in bioinformatics. We propose that the ideal way to bring engineering values into research projects is to bring engineers themselves. We identify the role of Bioinformatic Engineer and describe how such a role would work within bioinformatic research teams. We conclude by recommending an educational emphasis on cross-training software engineers into life sciences, and propose research on Domain Specific Languages to facilitate collaboration between engineers and bioinformaticians.

  8. Engineering bioinformatics: building reliability, performance and productivity into bioinformatics software

    Science.gov (United States)

    Lawlor, Brendan; Walsh, Paul

    2015-01-01

    There is a lack of software engineering skills in bioinformatic contexts. We discuss the consequences of this lack, examine existing explanations and remedies to the problem, point out their shortcomings, and propose alternatives. Previous analyses of the problem have tended to treat the use of software in scientific contexts as categorically different from the general application of software engineering in commercial settings. In contrast, we describe bioinformatic software engineering as a specialization of general software engineering, and examine how it should be practiced. Specifically, we highlight the difference between programming and software engineering, list elements of the latter and present the results of a survey of bioinformatic practitioners which quantifies the extent to which those elements are employed in bioinformatics. We propose that the ideal way to bring engineering values into research projects is to bring engineers themselves. We identify the role of Bioinformatic Engineer and describe how such a role would work within bioinformatic research teams. We conclude by recommending an educational emphasis on cross-training software engineers into life sciences, and propose research on Domain Specific Languages to facilitate collaboration between engineers and bioinformaticians. PMID:25996054

  9. Bioinformatics on the Cloud Computing Platform Azure

    Science.gov (United States)

    Shanahan, Hugh P.; Owen, Anne M.; Harrison, Andrew P.

    2014-01-01

    We discuss the applicability of the Microsoft cloud computing platform, Azure, for bioinformatics. We focus on the usability of the resource rather than its performance. We provide an example of how R can be used on Azure to analyse a large amount of microarray expression data deposited at the public database ArrayExpress. We provide a walk through to demonstrate explicitly how Azure can be used to perform these analyses in Appendix S1 and we offer a comparison with a local computation. We note that the use of the Platform as a Service (PaaS) offering of Azure can represent a steep learning curve for bioinformatics developers who will usually have a Linux and scripting language background. On the other hand, the presence of an additional set of libraries makes it easier to deploy software in a parallel (scalable) fashion and explicitly manage such a production run with only a few hundred lines of code, most of which can be incorporated from a template. We propose that this environment is best suited for running stable bioinformatics software by users not involved with its development. PMID:25050811

  10. Designing XML schemas for bioinformatics.

    Science.gov (United States)

    Bruhn, Russel Elton; Burton, Philip John

    2003-06-01

    Data interchange between bioinformatics databases will, in the future, most likely take place using extensible markup language (XML). The document structure will be described by an XML Schema rather than a document type definition (DTD). To ensure flexibility, the XML Schema must incorporate aspects of Object-Oriented Modeling. This impinges on the choice of the data model, which, in turn, is based on how biologists organize bioinformatics data. Thus, there is a need for the general bioinformatics community to be aware of the design issues relating to XML Schema. This paper, which is aimed at a general bioinformatics audience, uses examples to describe the differences between a DTD and an XML Schema and indicates how Unified Modeling Language diagrams may be used to incorporate Object-Oriented Modeling in the design of a schema.
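One consequence of the points above is that, unlike a DTD, an XML Schema is itself an XML document, so it can be inspected with ordinary XML tooling. A minimal sketch (the schema fragment is a hypothetical sequence-record schema, not one from the paper):

```python
import xml.etree.ElementTree as ET

# A hypothetical XML Schema fragment for a sequence record.
XSD = """<?xml version="1.0"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="sequence">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="id" type="xs:string"/>
        <xs:element name="organism" type="xs:string"/>
        <xs:element name="residues" type="xs:string"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>"""

XS = "{http://www.w3.org/2001/XMLSchema}"

def declared_elements(xsd_text):
    """List the element names a schema declares, in document order."""
    root = ET.fromstring(xsd_text)
    return [e.get("name") for e in root.iter(XS + "element")]

print(declared_elements(XSD))  # ['sequence', 'id', 'organism', 'residues']
```

A DTD, by contrast, uses its own non-XML syntax, so this kind of introspection requires a dedicated parser; full validation of an instance document against a schema would need a library such as lxml or xmlschema.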

  11. When process mining meets bioinformatics

    NARCIS (Netherlands)

    Jagadeesh Chandra Bose, R.P.; Aalst, van der W.M.P.; Nurcan, S.

    2011-01-01

    Process mining techniques can be used to extract non-trivial process related knowledge and thus generate interesting insights from event logs. Similarly, bioinformatics aims at increasing the understanding of biological processes through the analysis of information associated with biological

  12. Science center capabilities to monitor and investigate Michigan’s water resources, 2016

    Science.gov (United States)

    Giesen, Julia A.; Givens, Carrie E.

    2016-09-06

    Michigan faces many challenges related to water resources, including flooding, drought, water-quality degradation and impairment, varying water availability, watershed-management issues, stormwater management, aquatic-ecosystem impairment, and invasive species. Michigan’s water resources include approximately 36,000 miles of streams, over 11,000 inland lakes, 3,000 miles of shoreline along the Great Lakes (MDEQ, 2016), and groundwater aquifers throughout the State.The U.S. Geological Survey (USGS) works in cooperation with local, State, and other Federal agencies, as well as tribes and universities, to provide scientific information used to manage the water resources of Michigan. To effectively assess water resources, the USGS uses standardized methods to operate streamgages, water-quality stations, and groundwater stations. The USGS also monitors water quality in lakes and reservoirs, makes periodic measurements along rivers and streams, and maintains all monitoring data in a national, quality-assured, hydrologic database.The USGS in Michigan investigates the occurrence, distribution, quantity, movement, and chemical and biological quality of surface water and groundwater statewide. Water-resource monitoring and scientific investigations are conducted statewide by USGS hydrologists, hydrologic technicians, biologists, and microbiologists who have expertise in data collection as well as various scientific specialties. A support staff consisting of computer-operations and administrative personnel provides the USGS the functionality to move science forward. Funding for USGS activities in Michigan comes from local and State agencies, other Federal agencies, direct Federal appropriations, and through the USGS Cooperative Matching Funds, which allows the USGS to partially match funding provided by local and State partners.This fact sheet provides an overview of the USGS current (2016) capabilities to monitor and study Michigan’s vast water resources. More

  13. Best practices in bioinformatics training for life scientists

    DEFF Research Database (Denmark)

    Via, Allegra; Blicher, Thomas; Bongcam-Rudloff, Erik

    2013-01-01

    their data efficiently, and progress their research, is a challenge across the globe. Delivering good training goes beyond traditional lectures and resource-centric demos, using interactivity, problem-solving exercises and cooperative learning to substantially enhance training quality and learning outcomes...... to environmental researchers, a common theme is the need not just to use, and gain familiarity with, bioinformatics tools and resources but also to understand their underlying fundamental theoretical and practical concepts. Providing bioinformatics training to empower life scientists to handle and analyse...

  14. Inbound Call Centers and Emotional Dissonance in the Job Demands - Resources Model.

    Science.gov (United States)

    Molino, Monica; Emanuel, Federica; Zito, Margherita; Ghislieri, Chiara; Colombo, Lara; Cortese, Claudio G

    2016-01-01

    Emotional labor, defined as the process of regulating feelings and expressions as part of the work role, is a major characteristic in call centers. In particular, interacting with customers, agents are required to show certain emotions that are considered acceptable by the organization, even though these emotions may be different from their true feelings. This kind of experience is defined as emotional dissonance and represents a feature of the job especially for call center inbound activities. The present study was aimed at investigating whether emotional dissonance mediates the relationship between job demands (workload and customer verbal aggression) and job resources (supervisor support, colleague support, and job autonomy) on the one hand, and, on the other, affective discomfort, using the job demands-resources model as a framework. The study also observed differences between two different types of inbound activities: customer assistance service (CA) and information service. The study involved agents of an Italian Telecommunication Company, 352 of whom worked in the CA and 179 in the information service. The hypothesized model was tested across the two groups through multi-group structural equation modeling. Analyses showed that CA agents experience greater customer verbal aggression and emotional dissonance than information service agents. Results also showed, only for the CA group, a full mediation of emotional dissonance between workload and affective discomfort, and a partial mediation of customer verbal aggression and job autonomy, and affective discomfort. This study's findings contributed both to the emotional labor literature, investigating the mediational role of emotional dissonance in the job demands-resources model, and to call center literature, considering differences between two specific kinds of inbound activities. Suggestions for organizations and practitioners emerged in order to identify practical implications useful both to support

  15. Inbound Call Centers and Emotional Dissonance in the Job Demands – Resources Model

    Science.gov (United States)

    Molino, Monica; Emanuel, Federica; Zito, Margherita; Ghislieri, Chiara; Colombo, Lara; Cortese, Claudio G.

    2016-01-01

    Background: Emotional labor, defined as the process of regulating feelings and expressions as part of the work role, is a major characteristic in call centers. In particular, interacting with customers, agents are required to show certain emotions that are considered acceptable by the organization, even though these emotions may be different from their true feelings. This kind of experience is defined as emotional dissonance and represents a feature of the job especially for call center inbound activities. Aim: The present study was aimed at investigating whether emotional dissonance mediates the relationship between job demands (workload and customer verbal aggression) and job resources (supervisor support, colleague support, and job autonomy) on the one hand, and, on the other, affective discomfort, using the job demands-resources model as a framework. The study also observed differences between two different types of inbound activities: customer assistance service (CA) and information service. Method: The study involved agents of an Italian Telecommunication Company, 352 of whom worked in the CA and 179 in the information service. The hypothesized model was tested across the two groups through multi-group structural equation modeling. Results: Analyses showed that CA agents experience greater customer verbal aggression and emotional dissonance than information service agents. Results also showed, only for the CA group, a full mediation of emotional dissonance between workload and affective discomfort, and a partial mediation of customer verbal aggression and job autonomy, and affective discomfort. Conclusion: This study’s findings contributed both to the emotional labor literature, investigating the mediational role of emotional dissonance in the job demands-resources model, and to call center literature, considering differences between two specific kinds of inbound activities. Suggestions for organizations and practitioners emerged in order to identify

  16. Criticality Safety Information Resource Center Web portal: www.csirc.net

    International Nuclear Information System (INIS)

    Harmon, C.D. II; Jones, T.

    2000-01-01

    The Nuclear Criticality Safety Group (ESH-6) at Los Alamos National Laboratory (LANL) is in the process of collecting and archiving historical and technical information related to nuclear criticality safety from LANL and other facilities. In an ongoing effort, this information is being made available via the Criticality Safety Information Resource Center (CSIRC) web site, which is hosted and maintained by ESH-6 staff. Recently, the CSIRC Web site was recreated as a Web portal that provides the criticality safety community with much more than just archived data

  17. Inbound Call Centers and Emotional Dissonance in the Job Demands – Resources Model

    Directory of Open Access Journals (Sweden)

    Monica Molino

    2016-07-01

    Full Text Available Background: Emotional labor, defined as the process of regulating feelings and expressions as part of the work role, is a major characteristic in call centers. In particular, interacting with customers, agents are required to show certain emotions that are considered acceptable by the organization, even though these emotions may be different from their true feelings. This kind of experience is defined as emotional dissonance and represents a feature of the job especially for call center inbound activities. Aim: The present study was aimed at investigating whether emotional dissonance mediates the relationship between job demands (workload and customer verbal aggression) and job resources (supervisor support, colleague support and job autonomy) on the one hand, and, on the other, affective discomfort, using the job demands-resources model as a framework. The study also observed differences between two different types of inbound activities: customer assistance service and information service. Method: The study involved agents of an Italian Telecommunication Company, 352 of whom worked in the customer assistance service and 179 in the information service. The hypothesized model was tested across the two groups through multi-group structural equation modeling. Results: Analyses showed that customer assistance service agents experience greater customer verbal aggression and emotional dissonance than information service agents. Results also showed, only for the customer assistance service group, a full mediation of emotional dissonance between workload and affective discomfort, and a partial mediation of customer verbal aggression and job autonomy, and affective discomfort. Conclusion: This study’s findings contributed both to the emotional labor literature, investigating the mediational role of emotional dissonance in the job demands-resources model, and to call center literature, considering differences between two specific kinds of inbound activities
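For readers unfamiliar with mediation, the indirect effect tested in the studies above can be sketched with ordinary least squares on synthetic data. This is an illustrative simplification only: the papers use multi-group structural equation modeling, and the variable relationships and coefficients below are assumptions, not the studies' data.

```python
import random

random.seed(1)

# Synthetic data under an assumed mediation structure:
# workload (X) -> emotional dissonance (M) -> affective discomfort (Y).
n = 500
X = [random.gauss(0, 1) for _ in range(n)]
M = [0.6 * x + random.gauss(0, 1) for x in X]                        # a-path ~ 0.6
Y = [0.5 * m + 0.1 * x + random.gauss(0, 1) for x, m in zip(X, M)]   # b ~ 0.5, c' ~ 0.1

def mean(v):
    return sum(v) / len(v)

def cov(u, v):
    mu, mv = mean(u), mean(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)

# a-path: simple regression of M on X.
a = cov(X, M) / cov(X, X)

# b and c' paths: regression of Y on X and M (solving the 2x2 normal equations).
sxx, smm, sxm = cov(X, X), cov(M, M), cov(X, M)
sxy, smy = cov(X, Y), cov(M, Y)
det = sxx * smm - sxm ** 2
c_prime = (smm * sxy - sxm * smy) / det  # direct effect of X on Y
b = (sxx * smy - sxm * sxy) / det        # effect of M on Y, controlling for X

indirect = a * b  # mediated effect of X on Y through M
print(f"a={a:.2f}, b={b:.2f}, c'={c_prime:.2f}, indirect={indirect:.2f}")
```

A nonzero indirect effect alongside a small direct effect is the pattern the papers call full mediation; a nonzero direct effect remaining is partial mediation.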

  18. KEY ISSUES OF CONCEPTS' FORMATION OF THE NETWORK OF RESOURCE CENTER OF DISTANCE EDUCATION OF GENERAL EDUCATION INSTITUTIONS

    Directory of Open Access Journals (Sweden)

    Yuriy M. Bogachkov

    2013-06-01

    Full Text Available In the article, the problem of constructing a network of resource centers for distance education to meet the needs of general secondary schools is presented. Modern educational trends in the use of Internet services in education are reviewed. The main contradictions whose solution helps to create a network of resource centers are identified. Definitions of key terms related to this range of issues are given. The basic categories of participants at whom the implementation of e-learning and networking is aimed are described. The basic tasks of distance education resource centers' functioning are considered, along with types of support: personnel, regulatory, informational, methodological, technical, etc. Possible models of implementing students' distance education are reviewed. Three options for business models of resource centers, depending on funding sources, are offered.

  19. Performance evaluation of data center service localization based on virtual resource migration in software defined elastic optical network.

    Science.gov (United States)

    Yang, Hui; Zhang, Jie; Ji, Yuefeng; Tan, Yuanlong; Lin, Yi; Han, Jianrui; Lee, Young

    2015-09-07

    Data center interconnection with elastic optical networks is a promising scenario for meeting the high-burstiness and high-bandwidth requirements of data center services. In our previous work, we implemented cross stratum optimization of optical network and application stratum resources, which allows data center services to be accommodated. In view of this, this study extends the data center resources to the user side to enhance the end-to-end quality of service. We propose a novel data center service localization (DCSL) architecture based on virtual resource migration in a software defined elastic data center optical network. A migration evaluation scheme (MES) is introduced for DCSL based on the proposed architecture. The DCSL can enhance the responsiveness to dynamic end-to-end data center demands, and effectively reduce the blocking probability to globally optimize optical network and application resources. The overall feasibility and efficiency of the proposed architecture are experimentally verified on the control plane of our OpenFlow-based enhanced SDN testbed. The performance of the MES scheme under a heavy traffic load scenario is also quantitatively evaluated based on the DCSL architecture in terms of path blocking probability, provisioning latency and resource utilization, compared with other provisioning schemes.
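The paper evaluates path blocking probability experimentally on a testbed. As a generic point of reference only (not the authors' evaluation method), the classical Erlang B formula gives the blocking probability for a link with a fixed number of channels under a given offered load, computed here with the standard numerically stable recurrence:

```python
def erlang_b(offered_load, channels):
    """Erlang B blocking probability via the stable recurrence
    B(0) = 1;  B(m) = E*B(m-1) / (m + E*B(m-1))."""
    b = 1.0
    for m in range(1, channels + 1):
        b = offered_load * b / (m + offered_load * b)
    return b

# Blocking rises with offered load for a fixed number of channels.
for load in (5.0, 10.0, 20.0):
    print(load, round(erlang_b(load, 10), 4))
```

For 10 channels at an offered load of 10 Erlangs, the formula yields roughly 21.5% blocking, a standard textbook value.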

  20. Development of a center for biosystematics resources. Summary report, November 1, 1979-October 31, 1980

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, S.R.

    1980-11-01

    The objective in the development of a Center for Biosystematics Resources is to provide a centralized source of information regarding the biological expertise available in the academic/museum community, and the federal and state regulations concerning the acquisition, transport, and possession of biological specimens. Such a Center would serve to facilitate access to this widely dispersed information. The heart of the Center is a series of computer-assisted databases which contain information on biologists and their areas of expertise, biological collections, annotated federal regulations, and federal and state controlled-species lists. In the last year these databases have been updated and expanded. Additional databases have been constructed and are being maintained. The purpose of this three-year contract with the Department of Energy is to continue the updating and revision of the original databases; to make the information they contain readily available to the Department of Energy, other government agencies, the private sector, and the academic community; and to achieve financial independence by the end of the three-year period.

  1. Plutonium research and related activities at the Amarillo National Resource Center for Plutonium

    International Nuclear Information System (INIS)

    Hartley, R.S.; Beard, C.A.; Barnes, D.L.

    1998-01-01

    With the end of the Cold War, the US and Russia are reducing their nuclear weapons stockpiles. What to do with the materials from thousands of excess nuclear weapons is an important international challenge. How to handle the remaining US stockpile to ensure safe storage and reliability, in light of the aging support infrastructure, is an important national challenge. To help address these challenges and related issues, the Amarillo National Resource Center for Plutonium is working on behalf of the State of Texas with the US Department of Energy (DOE). The center directs three major programs that address the key aspects of the plutonium management issue: (1) the Communications, Education, Training and Community Involvement Program, which focuses on informing the public about plutonium and providing technical education at all levels; (2) the Environmental, Safety, and Health (ES and H) Program, which investigates the key ES and H impacts of activities related to the DOE weapons complex in Texas; and (3) the Nuclear and Other Materials Program, which is aimed at minimizing safety and proliferation risks by helping to develop and advocate safe stewardship, storage, and disposition of nuclear weapons materials. This paper provides an overview of the center's nuclear activities described in four broad categories of international activities, materials safety, plutonium storage, and plutonium disposition

  2. A decade of Web Server updates at the Bioinformatics Links Directory: 2003-2012.

    Science.gov (United States)

    Brazas, Michelle D; Yim, David; Yeung, Winston; Ouellette, B F Francis

    2012-07-01

    The 2012 Bioinformatics Links Directory update marks the 10th special Web Server issue from Nucleic Acids Research. Beginning with content from their 2003 publication, the Bioinformatics Links Directory in collaboration with Nucleic Acids Research has compiled and published a comprehensive list of freely accessible, online tools, databases and resource materials for the bioinformatics and life science research communities. The past decade has exhibited significant growth and change in the types of tools, databases and resources being put forth, reflecting both technology changes and the nature of research over that time. With the addition of 90 web server tools and 12 updates from the July 2012 Web Server issue of Nucleic Acids Research, the Bioinformatics Links Directory at http://bioinformatics.ca/links_directory/ now contains an impressive 134 resources, 455 databases and 1205 web server tools, mirroring the continued activity and efforts of our field.

  3. Taking Bioinformatics to Systems Medicine.

    Science.gov (United States)

    van Kampen, Antoine H C; Moerland, Perry D

    2016-01-01

    Systems medicine promotes a range of approaches and strategies to study human health and disease at a systems level with the aim of improving the overall well-being of (healthy) individuals, and preventing, diagnosing, or curing disease. In this chapter we discuss how bioinformatics critically contributes to systems medicine. First, we explain the role of bioinformatics in the management and analysis of data. In particular we show the importance of publicly available biological and clinical repositories to support systems medicine studies. Second, we discuss how the integration and analysis of multiple types of omics data through integrative bioinformatics may facilitate the determination of more predictive and robust disease signatures, lead to a better understanding of (patho)physiological molecular mechanisms, and facilitate personalized medicine. Third, we focus on network analysis and discuss how gene networks can be constructed from omics data and how these networks can be decomposed into smaller modules. We discuss how the resulting modules can be used to generate experimentally testable hypotheses, provide insight into disease mechanisms, and lead to predictive models. Throughout, we provide several examples demonstrating how bioinformatics contributes to systems medicine and discuss future challenges in bioinformatics that need to be addressed to enable the advancement of systems medicine.

  4. Generalized Centroid Estimators in Bioinformatics

    Science.gov (United States)

    Hamada, Michiaki; Kiryu, Hisanori; Iwasaki, Wataru; Asai, Kiyoshi

    2011-01-01

    In a number of estimation problems in bioinformatics, accuracy measures of the target problem are usually given, and it is important to design estimators that are suitable to those accuracy measures. However, there is often a discrepancy between an employed estimator and a given accuracy measure of the problem. In this study, we introduce a general class of efficient estimators for estimation problems on high-dimensional binary spaces, which represent many fundamental problems in bioinformatics. Theoretical analysis reveals that the proposed estimators generally fit commonly used accuracy measures (e.g. sensitivity, PPV, MCC and F-score), can be computed efficiently in many cases, and cover a wide range of problems in bioinformatics from the viewpoint of the principle of maximum expected accuracy (MEA). It is also shown that some important algorithms in bioinformatics can be interpreted in a unified manner. Not only does the concept presented in this paper give a useful framework for designing MEA-based estimators, but it is also highly extendable and sheds new light on many problems in bioinformatics. PMID:21365017
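The accuracy measures named in the abstract have simple closed forms over confusion-matrix counts. As a quick reference sketch (standard definitions, not taken from the paper's estimator framework), with hypothetical counts:

```python
import math

def confusion_metrics(tp, fp, tn, fn):
    """Common accuracy measures from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                    # recall / true-positive rate
    ppv = tp / (tp + fp)                            # positive predictive value
    f_score = 2 * ppv * sensitivity / (ppv + sensitivity)
    # Matthews correlation coefficient; 0 by convention when the denominator is 0.
    mcc_den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / mcc_den if mcc_den else 0.0
    return {"sensitivity": sensitivity, "PPV": ppv, "F": f_score, "MCC": mcc}

# Hypothetical counts for illustration.
print(confusion_metrics(tp=40, fp=10, tn=45, fn=5))
```

An MEA-based estimator would choose the prediction maximizing the expected value of one of these measures under the model's posterior, rather than computing them after the fact.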

  5. Argonne's Laboratory Computing Resource Center 2009 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R. B. (CLS-CI)

    2011-05-13

    Now in its seventh year of operation, the Laboratory Computing Resource Center (LCRC) continues to be an integral component of science and engineering research at Argonne, supporting a diverse portfolio of projects for the U.S. Department of Energy and other sponsors. The LCRC's ongoing mission is to enable and promote computational science and engineering across the Laboratory, primarily by operating computing facilities and supporting high-performance computing application use and development. This report describes scientific activities carried out with LCRC resources in 2009 and the broad impact on programs across the Laboratory. The LCRC computing facility, Jazz, is available to the entire Laboratory community. In addition, the LCRC staff provides training in high-performance computing and guidance on application usage, code porting, and algorithm development. All Argonne personnel and collaborators are encouraged to take advantage of this computing resource and to provide input into the vision and plans for computing and computational analysis at Argonne. The LCRC Allocations Committee makes decisions on individual project allocations for Jazz. Committee members are appointed by the Associate Laboratory Directors and span a range of computational disciplines. The 350-node LCRC cluster, Jazz, began production service in April 2003 and has been a research workhorse ever since. Hosting a wealth of software tools and applications and achieving high availability year after year, Jazz is a resource researchers can count on to achieve project milestones and enable breakthroughs. Over the years, many projects have achieved results that would have been unobtainable without such a computing resource. In fiscal year 2009, there were 49 active projects representing a wide cross-section of Laboratory research and almost all research divisions.

  6. PyPedia: using the wiki paradigm as crowd sourcing environment for bioinformatics protocols.

    Science.gov (United States)

    Kanterakis, Alexandros; Kuiper, Joël; Potamias, George; Swertz, Morris A

    2015-01-01

    Today researchers can choose from many bioinformatics protocols for all types of life sciences research, computational environments and coding languages. Although the majority of these are open source, few of them possess all virtues to maximize reuse and promote reproducible science. Wikipedia has proven a great tool to disseminate information and enhance collaboration between users with varying expertise and background to author qualitative content via crowdsourcing. However, it remains an open question whether the wiki paradigm can be applied to bioinformatics protocols. We piloted PyPedia, a wiki where each article is both implementation and documentation of a bioinformatics computational protocol in the python language. Hyperlinks within the wiki can be used to compose complex workflows and induce reuse. A RESTful API enables code execution outside the wiki. Initial content of PyPedia contains articles for population statistics, bioinformatics format conversions and genotype imputation. Use of the easy to learn wiki syntax effectively lowers the barriers to bring expert programmers and less computer savvy researchers on the same page. PyPedia demonstrates how wiki can provide a collaborative development, sharing and even execution environment for biologists and bioinformaticians that complement existing resources, useful for local and multi-center research teams. PyPedia is available online at: http://www.pypedia.com. The source code and installation instructions are available at: https://github.com/kantale/PyPedia_server. The PyPedia python library is available at: https://github.com/kantale/pypedia. PyPedia is open-source, available under the BSD 2-Clause License.

  7. Teaching Bioinformatics and Neuroinformatics by Using Free Web-Based Tools

    Science.gov (United States)

    Grisham, William; Schottler, Natalie A.; Valli-Marill, Joanne; Beck, Lisa; Beatty, Jackson

    2010-01-01

    This completely computer-based module's purpose is to introduce students to bioinformatics resources. We present an easy-to-adopt module that weaves together several important bioinformatic tools so students can grasp how these tools are used in answering research questions. Students integrate information gathered from websites dealing with…

  8. Performance evaluation of multi-stratum resources integrated resilience for software defined inter-data center interconnect.

    Science.gov (United States)

    Yang, Hui; Zhang, Jie; Zhao, Yongli; Ji, Yuefeng; Wu, Jialin; Lin, Yi; Han, Jianrui; Lee, Young

    2015-05-18

    Inter-data center interconnect with IP over elastic optical network (EON) is a promising scenario for meeting the high-burstiness and high-bandwidth requirements of data center services. In our previous work, we implemented multi-stratum resources integration among IP network, optical network and application stratum resources, which allows data center services to be accommodated. In view of this, this study extends that work to consider service resilience in case of edge optical node failure. We propose a novel multi-stratum resources integrated resilience (MSRIR) architecture for the services in software defined inter-data center interconnect based on IP over EON. A global resources integrated resilience (GRIR) algorithm is introduced based on the proposed architecture. The MSRIR can enable cross stratum optimization, provide resilience using the multiple stratums' resources, and enhance the responsiveness of data center service resilience to dynamic end-to-end service demands. The overall feasibility and efficiency of the proposed architecture are experimentally verified on the control plane of our OpenFlow-based enhanced SDN (eSDN) testbed. The performance of the GRIR algorithm under a heavy traffic load scenario is also quantitatively evaluated based on the MSRIR architecture in terms of path blocking probability, resilience latency and resource utilization, compared with other resilience algorithms.

  9. A BIOINFORMATIC STRATEGY TO RAPIDLY CHARACTERIZE CDNA LIBRARIES

    Science.gov (United States)

    A Bioinformatic Strategy to Rapidly Characterize cDNA Libraries. G. Charles Ostermeier (1), David J. Dix (2) and Stephen A. Krawetz (1). (1) Departments of Obstetrics and Gynecology, Center for Molecular Medicine and Genetics, & Institute for Scientific Computing, Wayne State Univer...

  10. The Criticality Safety Information Resource Center (CSIRC) at Los Alamos National Laboratory

    International Nuclear Information System (INIS)

    Henderson, B.D.; Meade, R.A.; Pruvost, N.L.

    1999-01-01

    The Criticality Safety Information Resource Center (CSIRC) at Los Alamos National Laboratory (LANL) is a program jointly funded by the U.S. Department of Energy (DOE) and the U.S. Nuclear Regulatory Commission (NRC) in conjunction with the Defense Nuclear Facilities Safety Board (DNFSB) Recommendation 97-2. The goal of CSIRC is to preserve primary criticality safety documentation from U.S. critical experimental sites and to make this information available for the benefit of the technical community. Progress in archiving criticality safety primary documents at the LANL archives as well as efforts to make this information available to researchers are discussed. The CSIRC project has a natural linkage to the International Criticality Safety Benchmark Evaluation Project (ICSBEP). This paper raises the possibility that the CSIRC project will evolve in a fashion similar to the ICSBEP. Exploring the implications of linking the CSIRC to the international criticality safety community is the motivation for this paper

  11. Mars Atmospheric In Situ Resource Utilization Projects at the Kennedy Space Center

    Science.gov (United States)

    Muscatello, A. C.; Hintze, P. E.; Caraccio, A. J.; Bayliss, J. A.; Karr, L. J.; Paley, M. S.; Marone, M. J.; Gibson, T. L.; Surma, J. M.; Mansell, J. M.; hide

    2016-01-01

    The atmosphere of Mars, which is approximately 95% carbon dioxide (CO2), is a rich resource for the human exploration of the red planet, primarily by the production of rocket propellants and oxygen for life support. Three recent projects led by NASA's Kennedy Space Center have been investigating the processing of CO2. The first project successfully demonstrated the Mars Atmospheric Processing Module (APM), which freezes CO2 with cryocoolers and combines sublimated CO2 with hydrogen to make methane and water. The second project absorbs CO2 with Ionic Liquids and electrolyzes it with water to make methane and oxygen, but with limited success so far. A third project plans to recover up to 100% of the oxygen in spacecraft respiratory CO2. A combination of the Reverse Water Gas Shift reaction and the Boudouard reaction eventually fills the reactor up with carbon, stopping the process. A system to continuously remove and collect carbon is under construction.
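The methane-production step the APM demonstrates, combining CO2 with hydrogen to make methane and water, is the Sabatier reaction, CO2 + 4 H2 -> CH4 + 2 H2O. A minimal mass-balance sketch, assuming ideal 100% conversion (an illustrative assumption, not the APM's reported performance):

```python
# Molar masses in g/mol, rounded.
M_CO2, M_H2, M_CH4, M_H2O = 44.01, 2.016, 16.04, 18.02

def sabatier_masses(co2_kg):
    """Ideal (100% conversion) mass balance for CO2 + 4 H2 -> CH4 + 2 H2O."""
    mol_co2 = co2_kg * 1000 / M_CO2
    return {
        "H2_kg": 4 * mol_co2 * M_H2 / 1000,    # hydrogen consumed
        "CH4_kg": mol_co2 * M_CH4 / 1000,      # methane produced
        "H2O_kg": 2 * mol_co2 * M_H2O / 1000,  # water produced
    }

out = sabatier_masses(1.0)  # per kilogram of Martian CO2
print({k: round(v, 3) for k, v in out.items()})
```

Each kilogram of atmospheric CO2 thus yields roughly 0.36 kg of methane and 0.82 kg of water while consuming about 0.18 kg of hydrogen, which is why the water (a hydrogen source via electrolysis) matters as much as the fuel.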

  12. X-ray microscopy resource center at the Advanced Light Source

    International Nuclear Information System (INIS)

    Meyer-Ilse, W.; Koike, M.; Beguiristain, R.; Maser, J.; Attwood, D.

    1992-07-01

    An x-ray microscopy resource center for biological x-ray imaging will be built at the Advanced Light Source (ALS) in Berkeley. The unique high brightness of the ALS allows short exposure times and high image quality. Two microscopes, an x-ray microscope (XM) and a scanning x-ray microscope (SXM), are planned. These microscopes serve complementary needs. The XM gives images in parallel at comparably short exposure times, and the SXM is optimized for low radiation doses applied to the sample. The microscopes extend visible light microscopy towards significantly higher resolution and permit images of objects in an aqueous medium. High resolution is accomplished by the use of Fresnel zone plates. Design considerations to serve the needs of biological x-ray microscopy are given. Also the preliminary design of the microscopes is presented. Multiple wavelength and multiple view images will provide elemental contrast and some degree of 3D information

  13. Mars Atmospheric In Situ Resource Utilization Projects at the Kennedy Space Center

    Science.gov (United States)

    Muscatello, Anthony; Hintze, Paul; Meier, Anne; Bayliss, Jon; Karr, Laurel; Paley, Steve; Marone, Matt; Gibson, Tracy; Surma, Jan; Mansell, Matt; hide

    2016-01-01

    The atmosphere of Mars, which is 96 percent carbon dioxide (CO2), is a rich resource for the human exploration of the red planet, primarily by the production of rocket propellants and oxygen for life support. Three recent projects led by NASA's Kennedy Space Center have been investigating the processing of CO2. The first project successfully demonstrated the Mars Atmospheric Processing Module (APM), which freezes CO2 with cryocoolers and combines sublimated CO2 with hydrogen to make methane and water. The second project absorbs CO2 with Ionic Liquids and electrolyzes it with water to make methane and oxygen, but with limited success so far. A third project plans to recover up to 100% of the oxygen in spacecraft respiratory CO2. A combination of the Reverse Water Gas Shift reaction and the Boudouard reaction eventually fills the reactor up with carbon, stopping the process. A system to continuously remove and collect carbon has been tested with encouraging results.

  14. Clinical support role for a pharmacy technician within a primary care resource center.

    Science.gov (United States)

    Fera, Toni; Kanel, Keith T; Bolinger, Meghan L; Fink, Amber E; Iheasirim, Serah

    2018-02-01

    The creation of a clinical support role for a pharmacy technician within a primary care resource center is described. In the Primary Care Resource Center (PCRC) Project, hospital-based care transition coordination hubs staffed by nurses and pharmacist teams were created in 6 independent community hospitals. At the largest site, patient volume for targeted diseases challenged the ability of the PCRC pharmacist to provide expected elements of care to targeted patients. Creation of a new pharmacy technician clinical support role was implemented as a cost-effective option to increase the pharmacist's efficiency. The pharmacist's work processes were reviewed and technical functions identified that could be assigned to a specially trained pharmacy technician under the direction of the PCRC pharmacist. Daily tasks performed by the pharmacy technician included maintenance of the patient roster and pending discharges, retrieval and documentation of pertinent laboratory and diagnostic test information from the patient's medical record, assembly of patient medication education materials, and identification of discrepancies between disparate systems' medication records. In the 6 months after establishing the PCRC pharmacy technician role, the pharmacist's completion of comprehensive medication reviews (CMRs) for target patients increased by 40.5% ( p = 0.0223), driven largely by a 42.4% ( p technician to augment pharmacist care in a PCRC team extended the reach of the pharmacist and allowed more time for the pharmacist to engage patients. Technician support enabled the pharmacist to complete more CMRs and reduced the time required for chart reviews. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  15. Community Coordinated Modeling Center: A Powerful Resource in Space Science and Space Weather Education

    Science.gov (United States)

    Chulaki, A.; Kuznetsova, M. M.; Rastaetter, L.; MacNeice, P. J.; Shim, J. S.; Pulkkinen, A. A.; Taktakishvili, A.; Mays, M. L.; Mendoza, A. M. M.; Zheng, Y.; Mullinix, R.; Collado-Vega, Y. M.; Maddox, M. M.; Pembroke, A. D.; Wiegand, C.

    2015-12-01

    The Community Coordinated Modeling Center (CCMC) is a NASA-affiliated interagency partnership with the primary goal of aiding the transition of modern space science models into space weather forecasting while supporting space science research. Additionally, over the past ten years it has established itself as a global space science education resource, supporting undergraduate and graduate education and research and spreading space weather awareness worldwide. A unique combination of assets, capabilities and close ties to the scientific and educational communities enables this small group to serve as a hub for raising generations of young space scientists and engineers. CCMC resources are publicly available online, providing unprecedented global access to the largest collection of modern space science models (developed by the international research community). CCMC has revolutionized the way simulations are used in classroom settings, student projects and scientific labs, and it serves hundreds of educators, students and researchers every year. Another major CCMC asset is an expert space weather prototyping team primarily serving NASA's interplanetary space weather needs. Capitalizing on its unrivaled capabilities and experience, the team provides in-depth space weather training to students and professionals worldwide and offers undergraduates an opportunity to engage in real-time space weather monitoring, analysis, forecasting and research. In-house development of state-of-the-art space weather tools and applications provides exciting opportunities for students majoring in computer science and computer engineering to intern with the software engineers at the CCMC while also learning about space weather from NASA scientists.

  16. Peer Mentoring for Bioinformatics presentation

    OpenAIRE

    Budd, Aidan

    2014-01-01

    A handout used in a HUB (Heidelberg Unseminars in Bioinformatics) meeting focused on career development for bioinformaticians. It describes an activity used to introduce the idea of peer mentoring, potentially serving as an opportunity to create peer-mentoring groups.

  17. Reproducible Bioinformatics Research for Biologists

    Science.gov (United States)

    This book chapter describes the current Big Data problem in Bioinformatics and the resulting issues with performing reproducible computational research. The core of the chapter provides guidelines and summaries of current tools/techniques that a noncomputational researcher would need to learn to pe...

  18. Taking Bioinformatics to Systems Medicine

    NARCIS (Netherlands)

    van Kampen, Antoine H. C.; Moerland, Perry D.

    2016-01-01

    Systems medicine promotes a range of approaches and strategies to study human health and disease at a systems level with the aim of improving the overall well-being of (healthy) individuals, and preventing, diagnosing, or curing disease. In this chapter we discuss how bioinformatics critically

  19. Bioinformatics and the Undergraduate Curriculum

    Science.gov (United States)

    Maloney, Mark; Parker, Jeffrey; LeBlanc, Mark; Woodard, Craig T.; Glackin, Mary; Hanrahan, Michael

    2010-01-01

    Recent advances involving high-throughput techniques for data generation and analysis have made familiarity with basic bioinformatics concepts and programs a necessity in the biological sciences. Undergraduate students increasingly need training in methods related to finding and retrieving information stored in vast databases. The rapid rise of…

  20. Bioinformatics of genomic association mapping

    NARCIS (Netherlands)

    Vaez Barzani, Ahmad

    2015-01-01

    In this thesis we present an overview of bioinformatics-based approaches for genomic association mapping, with emphasis on human quantitative traits and their contribution to complex diseases. We aim to provide a comprehensive walk-through of the classic steps of genomic association mapping

  1. Bioinformatics programs are 31-fold over-represented among the highest impact scientific papers of the past two decades.

    Science.gov (United States)

    Wren, Jonathan D

    2016-09-01

    To analyze the relative proportion of bioinformatics papers and their non-bioinformatics counterparts among the top 20 most cited papers annually for the past two decades. When defining bioinformatics papers as encompassing both those that provide software for data analysis and those that describe the methods underlying data analysis software, we find that over the past two decades, more than a third (34%) of the most cited papers in science were bioinformatics papers, approximately a 31-fold enrichment relative to the total number of bioinformatics papers published. More than half of the most cited papers during this span were bioinformatics papers. Yet the average 5-year JIF of the top 20 bioinformatics papers was 7.7, whereas the average JIF for the top 20 non-bioinformatics papers was 25.8, significantly higher (P […]). […] bioinformatics journals tended to have higher Gini coefficients, suggesting that development of novel bioinformatics resources may be somewhat 'hit or miss'. That is, relative to other fields, bioinformatics produces some programs that are extremely widely adopted and cited, yet fewer of intermediate success. jdwren@gmail.com. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
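    The Gini coefficient invoked in this abstract summarizes how unevenly citations are distributed across a journal's papers (0 means perfectly even, values approaching 1 mean a few papers attract nearly all citations). A minimal sketch of the standard computation, on made-up citation counts rather than data from the study:

    ```python
    def gini(values):
        """Gini coefficient of non-negative counts via the sorted-rank identity."""
        xs = sorted(values)
        n = len(xs)
        total = sum(xs)
        if n == 0 or total == 0:
            return 0.0
        # G = (2 * sum(rank_i * x_i)) / (n * sum(x)) - (n + 1) / n, ranks ascending
        cum = sum((i + 1) * x for i, x in enumerate(xs))
        return (2 * cum) / (n * total) - (n + 1) / n

    # Hypothetical journal: a few blockbuster tools among many little-cited ones
    print(round(gini([1, 1, 2, 2, 3, 500]), 3))  # high inequality, close to 1
    print(round(gini([50, 50, 50, 50]), 3))      # perfectly even, 0.0
    ```

    A 'hit or miss' journal in the sense of the abstract would show the first pattern: a heavy-tailed citation distribution and hence a large Gini value.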

  2. Bioinformatics in the Netherlands: the value of a nationwide community.

    Science.gov (United States)

    van Gelder, Celia W G; Hooft, Rob W W; van Rijswijk, Merlijn N; van den Berg, Linda; Kok, Ruben G; Reinders, Marcel; Mons, Barend; Heringa, Jaap

    2017-09-15

    This review provides a historical overview of the inception and development of bioinformatics research in the Netherlands. Rooted in theoretical biology by foundational figures such as Paulien Hogeweg (at Utrecht University since the 1970s), the developments leading to organizational structures supporting a relatively large Dutch bioinformatics community will be reviewed. We will show that the most valuable resource that we have built over these years is the close-knit national expert community that is well engaged in basic and translational life science research programmes. The Dutch bioinformatics community is accustomed to facing the ever-changing landscape of data challenges and working towards solutions together. In addition, this community is the stable factor on the road towards sustainability, especially in times where existing funding models are challenged and change rapidly. © The Author 2017. Published by Oxford University Press.

  3. GOBLET: the Global Organisation for Bioinformatics Learning, Education and Training.

    Science.gov (United States)

    Attwood, Teresa K; Bongcam-Rudloff, Erik; Brazas, Michelle E; Corpas, Manuel; Gaudet, Pascale; Lewitter, Fran; Mulder, Nicola; Palagi, Patricia M; Schneider, Maria Victoria; van Gelder, Celia W G

    2015-04-01

    In recent years, high-throughput technologies have brought big data to the life sciences. The march of progress has been rapid, leaving in its wake a demand for courses in data analysis, data stewardship, computing fundamentals, etc., a need that universities have not yet been able to satisfy--paradoxically, many are actually closing "niche" bioinformatics courses at a time of critical need. The impact of this is being felt across continents, as many students and early-stage researchers are being left without appropriate skills to manage, analyse, and interpret their data with confidence. This situation has galvanised a group of scientists to address the problems on an international scale. For the first time, bioinformatics educators and trainers across the globe have come together to address common needs, rising above institutional and international boundaries to cooperate in sharing bioinformatics training expertise, experience, and resources, aiming to put ad hoc training practices on a more professional footing for the benefit of all.

  4. Argonne's Laboratory Computing Resource Center : 2005 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R. B.; Coghlan, S. C; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Pieper, G. P.

    2007-06-30

    Argonne National Laboratory founded the Laboratory Computing Resource Center in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. The first goal of the LCRC was to deploy a mid-range supercomputing facility to support the unmet computational needs of the Laboratory. To this end, in September 2002, the Laboratory purchased a 350-node computing cluster from Linux NetworX. This cluster, named 'Jazz', achieved over a teraflop of computing power (10{sup 12} floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the fifty fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2005, there were 62 active projects on Jazz involving over 320 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to improve the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure

  5. Argonne's Laboratory computing resource center : 2006 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R. B.; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Drugan, C. D.; Pieper, G. P.

    2007-05-31

    Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10{sup 12} floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2006, there were 76 active projects on Jazz involving over 380 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff

  6. Bioinformatics for cancer immunotherapy target discovery

    DEFF Research Database (Denmark)

    Olsen, Lars Rønn; Campos, Benito; Barnkob, Mike Stein

    2014-01-01

    […] therapy target discovery in a bioinformatics analysis pipeline. We describe specialized bioinformatics tools and databases for three main bottlenecks in immunotherapy target discovery: the cataloging of potentially antigenic proteins, the identification of potential HLA binders, and the selection of epitopes…

  7. EURASIP journal on bioinformatics & systems biology

    National Research Council Canada - National Science Library

    2006-01-01

    "The overall aim of "EURASIP Journal on Bioinformatics and Systems Biology" is to publish research results related to signal processing and bioinformatics theories and techniques relevant to a wide...

  8. The USC Epigenome Center.

    Science.gov (United States)

    Laird, Peter W

    2009-10-01

    The University of Southern California (USC, CA, USA) has a long tradition of excellence in epigenetics. With the recent explosive growth and technological maturation of the field of epigenetics, it became clear that a dedicated high-throughput epigenomic data production facility would be needed to remain at the forefront of epigenetic research. To address this need, USC launched the USC Epigenome Center as the first large-scale center in academics dedicated to epigenomic research. The Center is providing high-throughput data production for large-scale genomic and epigenomic studies, and developing novel analysis tools for epigenomic research. This unique facility promises to be a valuable resource for multidisciplinary research, education and training in genomics, epigenomics, bioinformatics, and translational medicine.

  9. Annual report of Nuclear Human Resource Development Center. April 1, 2015 - March 31, 2016

    International Nuclear Information System (INIS)

    2017-07-01

    This annual report summarizes the activities of the Nuclear Human Resource Development Center (NuHRDeC) of the Japan Atomic Energy Agency (JAEA) in fiscal year (FY) 2015. In FY 2015, in addition to the regular training programs at NuHRDeC, we were actively engaged in organizing special training courses in response to external training needs, cooperating with universities, and offering international training courses for Asian countries. In accordance with the annual plan for national training, we conducted training courses for radioisotope and radiation engineers and for nuclear energy engineers, courses for national qualification examinations, and, as outreach activities to meet the training needs of external organizations, courses for officials of the Nuclear Regulatory Authority and for prefectural and municipal officials in Fukushima. We continued to enhance cooperative activities with universities, such as the acceptance of postdoctoral researchers and cooperation under the cooperative graduate school system, including the acceptance of students from the Nuclear Professional School of the University of Tokyo. Furthermore, using the remote education system, a joint course was successfully held with seven universities, and an intensive summer course and a practical exercise at the Nuclear Fuel Cycle Engineering Laboratories were also conducted as part of the collaboration network with universities. The Instructor Training Program (ITP) was continually offered to the ITP participating countries (Bangladesh, China, Indonesia, Kazakhstan, Malaysia, Mongolia, the Philippines, Saudi Arabia, Sri Lanka, Thailand, Turkey and Viet Nam) in FY 2015 under contract with the Ministry of Education, Culture, Sports, Science and Technology. As part of the ITP, the Instructor Training Course and the Nuclear Technology Seminar, such as the “Reactor Engineering Course” and the “Basic Radiation Knowledge for School Education Seminar”, were organized at NuHRDeC. Eight and eleven countries

  10. The Earth Resources Observation Systems data center's training technical assistance, and applications research activities

    Science.gov (United States)

    Sturdevant, J.A.

    1981-01-01

    The Earth Resources Observation Systems (EROS) Data Center (EDC), administered by the U.S. Geological Survey, U.S. Department of the Interior, provides remotely sensed data to the user community and offers a variety of professional services to further the understanding and use of remote sensing technology. EDC reproduces and sells photographic and electronic copies of satellite images of areas throughout the world. Other products include aerial photographs collected by 16 organizations, including the U.S. Geological Survey and the National Aeronautics and Space Administration. Primary users of the remotely sensed data are Federal, State, and municipal government agencies, universities, foreign nations, and private industries. The professional services available at EDC are primarily directed at integrating satellite and aircraft remote sensing technology into the programs of the Department of the Interior and its cooperators. This is accomplished through formal training workshops, user assistance, cooperative demonstration projects, and access to equipment and capabilities in an advanced data analysis laboratory. In addition, other Federal agencies, State and local governments, universities, and the general public can get assistance from the EDC staff. Since 1973, EDC has contributed to the accelerating growth in development and operational use of remotely sensed data for land resource problems through its role as educator and by conducting basic and applied remote sensing applications research. As remote sensing technology continues to evolve, EDC will continue to respond to the increasing demand for timely information on remote sensing applications. Questions most often asked about EDC's research and training programs include: Who may attend an EDC remote sensing training course? Specifically, what is taught? Who may cooperate with EDC on remote sensing projects? Are interpretation services provided on a service basis? This report attempts to define the goals and

  11. Heuristic evaluation of online COPD respiratory therapy and education video resource center.

    Science.gov (United States)

    Stellefson, Michael; Chaney, Beth; Chaney, Don

    2014-10-01

    Purpose: Because of limited accessibility to pulmonary rehabilitation programs, patients with chronic obstructive pulmonary disease (COPD) are infrequently provided with patient education resources. To help educate patients with COPD on how to live a better life with diminished breathing capacity, we developed a novel social media resource center containing COPD respiratory therapy and education videos called "COPDFlix." A heuristic evaluation of COPDFlix was conducted as part of a larger study to determine whether the prototype was successful in adhering to formal Web site usability guidelines for older adults. A purposive sample of three experts, with expertise in Web design and health communications technology, was recruited (a) to identify usability violations and (b) to propose solutions to improve the functionality of the COPDFlix prototype. Each expert evaluated 18 heuristics in four categories of task-based criteria (i.e., interaction and navigation, information architecture, presentation design, and information design). Seventy-six subcriteria across these four categories were assessed. Quantitative ratings and qualitative comments from each expert were compiled into a single master list, noting the violated heuristic and type/location of problem(s). Sixty-one usability violations were identified across the 18 heuristics. Evaluators rated the majority of heuristic subcriteria as either a "minor hindrance" (n=32) or "no problem" (n=132). Moreover, only 2 of the 18 heuristic categories were noted as "major" violations, with mean severity scores of ≥3. Mixed-methods data analysis helped the multidisciplinary research team to categorize and prioritize usability problems and solutions, leading to 26 discrete design modifications within the COPDFlix prototype.
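    The scoring scheme described above (per-expert severity ratings averaged per heuristic, with means ≥3 flagged as major violations) can be sketched as follows; the heuristic names, ratings, and 0-4 severity scale here are illustrative, not taken from the study:

    ```python
    # Hypothetical ratings: one severity score (0 = no problem .. 4 = catastrophe)
    # per expert, for each heuristic category.
    ratings = {
        "interaction & navigation": [3, 4, 3],
        "information architecture": [1, 0, 1],
        "presentation design":      [3, 3, 2],
        "information design":       [0, 1, 0],
    }

    def prioritize(ratings, major_threshold=3.0):
        """Mean severity per heuristic, sorted worst-first, with a 'major' flag."""
        means = {h: sum(scores) / len(scores) for h, scores in ratings.items()}
        ranked = sorted(means.items(), key=lambda kv: -kv[1])
        return [(h, m, m >= major_threshold) for h, m in ranked]

    for heuristic, mean, major in prioritize(ratings):
        print(f"{heuristic}: {mean:.2f}" + (" (MAJOR)" if major else ""))
    ```

    Sorting by mean severity gives the team a worst-first triage list, which is essentially what prioritizing usability problems for design modifications amounts to.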

  12. The Effects of Yoga, Massage, and Reiki on Patient Well-Being at a Cancer Resource Center.

    Science.gov (United States)

    Rosenbaum, Mark S; Velde, Jane

    2016-06-01

    Cancer resource centers offer patients a variety of therapeutic services. However, patients with cancer and cancer healthcare practitioners may not fully understand the specific objectives and benefits of each service. This research offers guidance to cancer healthcare practitioners on how they can best direct patients to partake in specific integrative therapies, depending on their expressed needs. This article investigates the effects of yoga, massage, and Reiki services administered in a cancer resource center on patients' sense of personal well-being. The results show how program directors at a cancer resource center can customize therapies to meet the needs of patients' well-being. The experimental design measured whether engaging in yoga, massage, or Reiki services affects the self-perceived well-being of 150 patients at a cancer resource center at two points in time. All three services helped decrease stress and anxiety, improve mood, and enhance cancer center patrons' perceived overall health and quality of life in a similar manner. Reiki reduced the pain of patients with cancer to a greater extent than either massage or yoga.

  13. Preface to Introduction to Structural Bioinformatics

    NARCIS (Netherlands)

    Feenstra, K. Anton; Abeln, Sanne

    2018-01-01

    While many good textbooks are available on Protein Structure, Molecular Simulations, Thermodynamics and Bioinformatics methods in general, there is no good introductory level book for the field of Structural Bioinformatics. This book aims to give an introduction into Structural Bioinformatics, which

  14. Annual report of Nuclear Human Resource Development Center. April 1, 2014 - March 31, 2015

    International Nuclear Information System (INIS)

    2017-06-01

    This annual report summarizes the activities of the Nuclear Human Resource Development Center (NuHRDeC) of the Japan Atomic Energy Agency (JAEA) in fiscal year (FY) 2014. In FY 2014, while organizing the annually scheduled regular training programs, we flexibly designed special training courses corresponding to outside training needs. We also actively addressed challenging issues in human resource development, such as enhancing collaboration with academia and organizing international training for Asian countries. Besides the regular courses, we organized special training courses based on outside needs, e.g. for the Nuclear Regulatory Authority or for the people of Naraha town in Fukushima Prefecture. JAEA continued its cooperative activities with universities. With respect to cooperation with the Graduate School of the University of Tokyo, we accepted nuclear-major students and cooperatively conducted lectures and practical exercises for one year. In terms of the collaboration network with universities, a joint course was successfully held with six universities using the remote education system. In addition, an intensive summer course and a practical exercise at the Nuclear Fuel Cycle Engineering Laboratories were also conducted. Furthermore, JAEA re-signed the agreement “Japan Nuclear Education Network” with seven universities in February 2015 to accommodate the new participation of Nagoya University from FY 2015. Concerning international training, we continued to implement the Instructor Training Program (ITP) with annual sponsorship from the Ministry of Education, Culture, Sports, Science and Technology. In FY 2014, eight countries (Bangladesh, Indonesia, Kazakhstan, Malaysia, Mongolia, the Philippines, Thailand and Vietnam) joined ITP courses such as the “Reactor Engineering Course”. Furthermore, we organized nuclear technology seminar courses, e.g. “Basic Radiation Knowledge for School Education”. In respect of

  15. Lessons Learned from Creating the Public Earthquake Resource Center at CERI

    Science.gov (United States)

    Patterson, G. L.; Michelle, D.; Johnston, A.

    2004-12-01

    The Center for Earthquake Research and Information (CERI) at the University of Memphis opened the Public Earthquake Resource Center (PERC) in May 2004. The PERC is an interactive display area that was designed to increase awareness of seismology, Earth science, earthquake hazards, and earthquake engineering among the general public and K-12 teachers and students. Funding for the PERC is provided by the US Geological Survey, the NSF-funded Mid-America Earthquake Center, and the University of Memphis, with input from the Incorporated Research Institutions for Seismology. Additional space at the facility houses local offices of the US Geological Survey. PERC exhibits are housed in a remodeled residential structure at CERI that was donated by the University of Memphis and the State of Tennessee. Exhibits were designed and built by CERI and US Geological Survey staff and faculty with the help of experienced museum display subcontractors. The 600-square-foot display area interactively introduces the basic concepts of seismology, real-time seismic information, seismic network operations, paleoseismology, building response, and historical earthquakes. Display components include three 22" flat screen monitors, a touch-sensitive monitor, 3 helicorder elements, an oscilloscope, an AS-1 seismometer, a life-sized liquefaction trench, a liquefaction shake table, and a building response shake table. All displays include custom graphics, text, and handouts. The PERC website at www.ceri.memphis.edu/perc also provides useful information such as tour scheduling, an "ask a geologist" feature, and links to other institutions, and will soon include a virtual tour of the facility. Special consideration was given to addressing State science standards for teaching and learning in the design of the displays and handouts. We feel this consideration is pivotal to the success of any grass-roots Earth science education and outreach program and represents a valuable lesson that has been learned at CERI over the last several

  16. Using registries to integrate bioinformatics tools and services into workbench environments

    DEFF Research Database (Denmark)

    Ménager, Hervé; Kalaš, Matúš; Rapacki, Kristoffer

    2016-01-01

    The diversity and complexity of bioinformatics resources present significant challenges to their localisation, deployment and use, creating a need for reliable systems that address these issues. Meanwhile, users demand increasingly usable and integrated ways to access and analyse data […]. We present a software component that will ease the integration of bioinformatics resources in a workbench environment, using their descriptions as provided by the existing ELIXIR Tools and Data Services Registry.

  17. Implementation an human resources shared services center: Multinational company strategy in fusion context

    Directory of Open Access Journals (Sweden)

    João Paulo Bittencourt

    2016-09-01

    The aim of this research was to analyze the process of implementing and managing a Shared Services Center (CSC) for Human Resources in a multinational company in the context of mergers and acquisitions. The company analyzed, here called Alpha, is one of the largest food companies in the country and was born of a merger between Beta and Delta in 2008. The CSC can constitute a tool for the strategic management of HR, allowing the area's role to be repositioned so that it is more strategic at the corporate level and more profitable at the operating level. The research was based on a descriptive and exploratory study with a qualitative approach. Among the results is the fact that shared services were strategic in supporting, standardizing and ensuring the expansion of the company. The challenges found were associated with developing a culture of service, managing the relationship with users, and defining the scope of HR activities. Remaining management issues include addressing wage differences between employees, the limitations of the career path, the need to attract and retain talent, and international expansion.

  18. The International Center for Integrated Water Resources Management (ICIWaRM): The United States' Contribution to UNESCO IHP's Global Network of Water Centers

    Science.gov (United States)

    Logan, W. S.

    2015-12-01

    The concept of a "category 2 center"—i.e., one that is closely affiliated with UNESCO, but not legally part of UNESCO—dates back many decades. However, only in the last decade has the concept been fully developed. Within UNESCO, the International Hydrological Programme (IHP) has led the way in creating a network of regional and global water-related centers. ICIWaRM—the International Center for Integrated Water Resources Management—is one member of this network. Approved by UNESCO's General Conference, the center has been operating since 2009. It was designed to fill a niche in the system for a center that was backed by an institution with on-the-ground water management experience, but that also had strong connections to academia, NGOs and other governmental agencies. Thus, ICIWaRM is hosted by the US Army Corps of Engineers' Institute for Water Resources (IWR) but was established with an internal network of partner institutions. Three main factors have contributed to any success that ICIWaRM has achieved in its global work: A focus on practical science and technology that can be readily transferred. This includes the Corps' own methodologies and models for planning and water management, and those of our university and government partners. Collaboration with other UNESCO centers on joint applied research, capacity-building and training. A network of centers needs to function as a network, and ICIWaRM has worked together with UNESCO-affiliated centers in Chile, Brazil, Paraguay, the Dominican Republic, Japan, China, and elsewhere. Partnering with and supporting existing UNESCO-IHP programs. ICIWaRM serves as the Global Technical Secretariat for IHP's Global Network on Water and Development Information in Arid Lands (G-WADI). In addition to directly supporting IHP, work through G-WADI helps the center to frame, prioritize and integrate its activities. With the recent release of the United Nations' 2030 Agenda for Sustainable Development, it is clear that

  19. A Bioinformatics Facility for NASA

    Science.gov (United States)

    Schweighofer, Karl; Pohorille, Andrew

    2006-01-01

    Building on an existing prototype, we have fielded a facility with bioinformatics technologies that will help NASA meet its unique requirements for biological research. This facility consists of a cluster of computers capable of performing computationally intensive tasks, software tools, databases and knowledge management systems. Novel computational technologies for analyzing and integrating new biological data and already existing knowledge have been developed. With continued development and support, the facility will fulfill NASA's strategic bioinformatics needs in astrobiology and space exploration. As a demonstration of these capabilities, we present a detailed analysis of how spaceflight factors impact gene expression in the liver and kidney of mice flown aboard shuttle flight STS-108. We have found that many genes involved in signal transduction, cell cycle, and development respond to changes in microgravity, but that most metabolic pathways appear unchanged.

  20. The Bone Marrow Transplantation Center of the National Cancer Institute - its resources to assist patients with bone marrow failure

    International Nuclear Information System (INIS)

    Tabak, Daniel

    1997-01-01

    This paper describes the bone marrow transplantation center of the Brazilian National Cancer Institute, which is responsible for cancer control in Brazil. The document also describes the resources available in the Institute for assisting patients presenting bone marrow failures. The Center provides allogeneic and autologous bone marrow transplants, peripheral stem cell transplants, umbilical cord collections and transplants, and has a small experience with unrelated bone marrow transplants. The Center receives patients from all over the country and provides very sophisticated medical care at no direct cost to the patients.

  1. Establishing bioinformatics research in the Asia Pacific

    OpenAIRE

    Ranganathan, Shoba; Tammi, Martti; Gribskov, Michael; Tan, Tin Wee

    2006-01-01

    Abstract In 1998, the Asia Pacific Bioinformatics Network (APBioNet), Asia's oldest bioinformatics organisation, was set up to champion the advancement of bioinformatics in the Asia Pacific. By 2002, APBioNet had gained sufficient critical mass to initiate the first International Conference on Bioinformatics (InCoB), bringing together scientists working in the field of bioinformatics in the region. This year, the InCoB2006 Conference was organized as the 5th annual conference of the Asia-...

  2. The Amarillo National Resource Center for Plutonium. Quarterly progress detailed report, 1 November 1996--31 January 1997

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    Progress for this quarter is given for each of the following Center programs: (1) plutonium information resource; (2) advisory function (DOE and state support); (3) environmental, public health and safety; (4) communication, education, and training; and (5) nuclear and other material studies. Both summaries of the activities and detailed reports are included.

  3. Annual report of Nuclear Human Resource Development Center. April 1, 2011 - March 31, 2012

    International Nuclear Information System (INIS)

    2013-11-01

    This annual report summarizes the activities of the Nuclear Human Resource Development Center (NuHRDeC) of the Japan Atomic Energy Agency (JAEA) in fiscal year 2011. In this fiscal year, we flexibly designed and conducted training courses in response to outside needs while running the annually scheduled training programs, and actively addressed human resource development challenges, such as enhancing collaboration with academia and organizing international training for Asian countries. The number of trainees who completed the domestic training courses in 2011 increased to 387, 14 percent more than in the previous year. In addition, in response to the Tokyo Electric Power Company (TEPCO) Fukushima No.1 nuclear power plant accident, we newly designed and organized special training courses on radiation survey for the subcontracting companies working with TEPCO, and training courses on decontamination work for the construction companies in Fukushima prefecture. The total number of attendees in these special courses was 3,800. JAEA continued its cooperative activities with universities. In cooperation with the graduate school of the University of Tokyo, we accepted 17 students and jointly conducted practical exercises for nuclear majors. We also continued cooperation on practical exercises for students of the universities participating in the Nuclear HRD Program. Within the collaboration network with universities, a joint course was held with six universities using the remote education system; an intensive course at Okayama University and a practical exercise at JAEA's Nuclear Fuel Cycle Engineering Laboratories were also conducted. In respect of international training, NuHRDeC continued to implement the Instructor Training Program (ITP) with annual sponsorship from MEXT. In fiscal year 2011, seven countries (i.e. Bangladesh

  4. Annual report of Nuclear Human Resource Development Center. April 1, 2013 - March 31, 2014

    International Nuclear Information System (INIS)

    2015-07-01

    This annual report summarizes the activities of the Nuclear Human Resource Development Center (NuHRDeC) of the Japan Atomic Energy Agency (JAEA) in FY2013. In FY2013, we flexibly designed special training courses in response to outside training needs while organizing the annually scheduled regular training programs. We also actively addressed challenging human resource development issues, such as enhancing collaboration with academia and organizing international training for Asian countries. More than 300 trainees participated in the domestic regular training courses in 2013. Besides these regular courses, we organized special training courses based on outside needs, e.g. training courses on radiation survey and decontamination work in Fukushima prefecture for the subcontracting companies of the Tokyo Electric Power Company (TEPCO) working in response to the TEPCO Fukushima Daiichi nuclear power station accident. JAEA continued its cooperative activities with universities. In cooperation with the graduate school of the University of Tokyo, we accepted nuclear-major students and jointly conducted lectures and practical exercises for one year. Within the collaboration network with universities, a joint course was successfully held with six universities using the remote education system. Intensive courses at Okayama University and the University of Fukui, and a practical exercise at JAEA's Nuclear Fuel Cycle Engineering Laboratories, were also conducted. In respect of international training, we continued to implement the Instructor Training Program (ITP) with annual sponsorship from the Ministry of Education, Culture, Sports, Science and Technology. In fiscal year 2013, eight countries (i.e. Bangladesh, Indonesia, Kazakhstan, Malaysia, Mongolia, Philippines, Thailand, Vietnam) joined these instructor training courses. Furthermore, we organized nuclear

  5. Annual report of Nuclear Human Resource Development Center. April 1, 2010 - March 31, 2011

    International Nuclear Information System (INIS)

    2012-03-01

    This annual report summarizes the activities of the Nuclear Human Resource Development Center (NuHRDeC) of the Japan Atomic Energy Agency (JAEA) in fiscal year 2010. In this fiscal year, NuHRDeC flexibly designed and conducted training courses upon request while running the annually scheduled training programs, and actively addressed human resource development challenges, such as enhancing collaboration with academia and expanding the number of countries participating in international training. The number of trainees who completed the domestic training courses in 2010 increased slightly to 340, 6 percent more than in the previous year. The number of those who completed the staff technical training courses was 879 in 2010, 12 percent more than in the previous year. As a result, the total number of trainees during this period was about 10 percent higher than in the previous year. To meet needs from outside JAEA, four temporary courses were held at the request of the Nuclear and Industrial Safety Agency (NISA), Ministry of Economy, Trade and Industry (METI). JAEA continued its cooperative activities with universities: cooperation with the graduate school of the University of Tokyo continued, and the cooperative graduate school program was enlarged to cover a total of 19 graduate schools, one faculty of an undergraduate school, and one technical college, including one graduate school that newly joined in 2010. JAEA also continued cooperative activities with the Nuclear HRD Program initiated by MEXT and METI in 2007. The joint course continued networking with six universities through the remote education system, the Japan Nuclear Education Network (JNEN); special lectures and summer and winter practice were also conducted. In respect of international training, NuHRDeC continued to implement the Instructor Training Program (ITP) with annual sponsorship from MEXT. In fiscal year 2010, four countries (Bangladesh

  6. Annual report of Nuclear Human Resource Development Center. April 1, 2012 - March 31, 2013

    International Nuclear Information System (INIS)

    2014-03-01

    This annual report summarizes the activities of the Nuclear Human Resource Development Center (NuHRDeC) of the Japan Atomic Energy Agency (JAEA) in fiscal year 2012. In this fiscal year, we flexibly designed training courses in response to outside needs while organizing the annually scheduled training programs, and actively addressed challenging human resource development issues, such as enhancing collaboration with academia and organizing international training for Asian countries. The number of trainees who completed the domestic training courses in 2012 increased to 525, 30 percent more than in the previous year. In addition, in response to the Tokyo Electric Power Company (TEPCO) Fukushima No.1 nuclear power plant accident, we organized special training courses on radiation survey for the subcontracting companies working with TEPCO, and training courses on decontamination work for the construction companies in Fukushima prefecture. The total number of attendees in these special courses was more than 4,000. JAEA continued its cooperative activities with universities. In cooperation with the graduate school of the University of Tokyo, we accepted 14 students and jointly conducted practical exercises for nuclear majors. We also continued cooperation on practical exercises for students of the universities participating in the Nuclear HRD Program. Within the collaboration network with universities, a joint course was held with six universities using the remote education system. Intensive courses at Okayama University and the University of Fukui, and a practical exercise at JAEA's Nuclear Fuel Cycle Engineering Laboratories, were also conducted. In respect of international training, NuHRDeC continued to implement the Instructor Training Program (ITP) with annual sponsorship from MEXT. In fiscal year 2012, eight countries (i

  7. The retention of health human resources in primary healthcare centers in Lebanon: a national survey.

    Science.gov (United States)

    Alameddine, Mohamad; Saleh, Shadi; El-Jardali, Fadi; Dimassi, Hani; Mourad, Yara

    2012-11-22

    Critical shortages of health human resources (HHR), associated with high turnover rates, have been a concern in many countries around the globe. Of particular interest is the effect of such a trend on the primary healthcare (PHC) sector, considered a cornerstone in any effective healthcare system. This study is a rare attempt to investigate PHC HHR work characteristics, level of burnout and likelihood to quit as well as the factors significantly associated with staff retention at PHC centers in Lebanon. A cross-sectional design was utilized to survey all health providers at 81 PHC centers dispersed in all districts of Lebanon. The questionnaire consisted of four sections: socio-demographic/professional background, organizational/institutional characteristics, likelihood to quit and level of professional burnout (using the Maslach Burnout Inventory). A total of 755 providers completed the questionnaire (60.5% response rate). Bivariate analyses and multinomial logistic regression were used to determine factors associated with likelihood to quit. Two out of five respondents indicated likelihood to quit their jobs within the next 1-3 years and an additional 13.4% were not sure about quitting. The top three reasons behind likelihood to quit were poor salary (54.4%), better job opportunities outside the country (35.1%) and lack of professional development (33.7%). A U-shaped relationship was observed between age and likelihood to quit. Regression analysis revealed that high levels of burnout, lower level of education and low tenure were all associated with increased likelihood to quit. The study findings reflect an unstable workforce and are not conducive to supporting an expanded role for PHC in the Lebanese healthcare system. While strategies aimed at improving staff retention would be important to develop and implement for all PHC HHR, targeted retention initiatives should focus on the young new recruits and allied health professionals. 
Particular attention should

  8. The retention of health human resources in primary healthcare centers in Lebanon: a national survey

    Directory of Open Access Journals (Sweden)

    Alameddine Mohamad

    2012-11-01

    Full Text Available Abstract Background Critical shortages of health human resources (HHR), associated with high turnover rates, have been a concern in many countries around the globe. Of particular interest is the effect of such a trend on the primary healthcare (PHC) sector, considered a cornerstone in any effective healthcare system. This study is a rare attempt to investigate PHC HHR work characteristics, level of burnout and likelihood to quit as well as the factors significantly associated with staff retention at PHC centers in Lebanon. Methods A cross-sectional design was utilized to survey all health providers at 81 PHC centers dispersed in all districts of Lebanon. The questionnaire consisted of four sections: socio-demographic/professional background, organizational/institutional characteristics, likelihood to quit and level of professional burnout (using the Maslach Burnout Inventory). A total of 755 providers completed the questionnaire (60.5% response rate). Bivariate analyses and multinomial logistic regression were used to determine factors associated with likelihood to quit. Results Two out of five respondents indicated likelihood to quit their jobs within the next 1–3 years and an additional 13.4% were not sure about quitting. The top three reasons behind likelihood to quit were poor salary (54.4%), better job opportunities outside the country (35.1%) and lack of professional development (33.7%). A U-shaped relationship was observed between age and likelihood to quit. Regression analysis revealed that high levels of burnout, lower level of education and low tenure were all associated with increased likelihood to quit. Conclusions The study findings reflect an unstable workforce and are not conducive to supporting an expanded role for PHC in the Lebanese healthcare system. While strategies aimed at improving staff retention would be important to develop and implement for all PHC HHR, targeted retention initiatives should focus on the young new recruits

  9. Bioinformatics meets user-centred design: a perspective.

    Directory of Open Access Journals (Sweden)

    Katrina Pavelin

    Full Text Available Designers have a saying that "the joy of an early release lasts but a short time. The bitterness of an unusable system lasts for years." It is indeed disappointing to discover that your data resources are not being used to their full potential. Not only have you invested your time, effort, and research grant on the project, but you may face costly redesigns if you want to improve the system later. This scenario would be less likely if the product was designed to provide users with exactly what they need, so that it is fit for purpose before its launch. We work at the EMBL-European Bioinformatics Institute (EMBL-EBI), and we consult extensively with life science researchers to find out what they need from biological data resources. We have found that although users believe that the bioinformatics community is providing accurate and valuable data, they often find the interfaces to these resources tricky to use and navigate. We believe that if you can find out what your users want even before you create the first mock-up of a system, the final product will provide a better user experience. This would encourage more people to use the resource and they would have greater access to the data, which could ultimately lead to more scientific discoveries. In this paper, we explore the need for a user-centred design (UCD) strategy when designing bioinformatics resources and illustrate this with examples from our work at EMBL-EBI. Our aim is to introduce the reader to how selected UCD techniques may be successfully applied to software design for bioinformatics.

  10. SPA AND CLIMATIC RESORTS (CENTERS AS RESOURCES OF PROGRAM OF SPORT RECREATION IMPLEMENTATION

    Directory of Open Access Journals (Sweden)

    Ivica Nikolić

    2006-06-01

    Full Text Available The aspiration of the civilized man is the improvement of work, the aim of which is to achieve as large an effect of productivity as possible with as small a share of labour as possible. The unavoidable result of this process is a kind of fatigue that has hypokinesiological characteristics with regard to the demands of the modern work process. The most effective way to fight fatigue is an active holiday that is meaningfully programmed, led and carried out through the movement of tourists, with the addition of natural factors, among which climate and healing waters are particularly important. These very resources characterize the tourist potential of Serbia and Montenegro, with many available facilities at 1000 m above sea level and spa centers with springs, a complete offer of physio-prophylactic procedures and accompanying facilities for sport recreation. The implementation of programmed active holidays into the corpus of the tourist offer of Serbia and Montenegro represents a prospect for the development of tourism and the tourist economy, with effects important both for participants and for the level of tourist consumption. That will certainly influence the lengthening of the tourist season, the primary goal of every catering establishment. Surveys show that the preferences and viewpoints of potential tourists are especially directed towards sports games and activities on and in the water, as part of the elementary tourist offer in spas and climatic resorts and their available facilities. Recommendations and postulates of the program of sport recreation, presented through four charts, are the basis of a marketing strategy for appearing on the tourist market, with permanent education of management personnel and further research into potential market expansion. 
The publication and distribution of advertising materials are especially important, both at the market in our country and at the foreign market, where the abundance

  11. The NIH-NIAID Schistosomiasis Resource Center at the Biomedical Research Institute: Molecular Redux.

    Directory of Open Access Journals (Sweden)

    James J Cody

    2016-10-01

    Full Text Available Schistosomiasis remains a health burden in many parts of the world. The complex life cycle of Schistosoma parasites and the economic and societal conditions present in endemic areas make the prospect of eradication unlikely in the foreseeable future. Continued and vigorous research efforts must therefore be directed at this disease, particularly since only a single World Health Organization (WHO)-approved drug is available for treatment. The National Institutes of Health (NIH)-National Institute of Allergy and Infectious Diseases (NIAID) Schistosomiasis Resource Center (SRC) at the Biomedical Research Institute provides investigators with the critical raw materials needed to carry out this important research. The SRC makes available, free of charge (including international shipping costs), not only infected host organisms but also a wide array of molecular reagents derived from all life stages of each of the three main human schistosome parasites. As the field of schistosomiasis research rapidly advances, it is likely to become increasingly reliant on omics, transgenics, epigenetics, and microbiome-related research approaches. The SRC has and will continue to monitor and contribute to advances in the field in order to support these research efforts with an expanding array of molecular reagents. In addition to providing investigators with source materials, the SRC has expanded its educational mission by offering a molecular techniques training course and has recently organized an international schistosomiasis-focused meeting. This review provides an overview of the materials and services that are available at the SRC for schistosomiasis researchers, with a focus on updates that have occurred since the original overview in 2008.

  12. The Counseling Center: An Undervalued Resource in Recruitment, Retention, and Risk Management

    Science.gov (United States)

    Bishop, John B.

    2010-01-01

    A primary responsibility for directors of college and university counseling centers is to explain to various audiences the multiple ways such units are of value to their institutions. This article reviews the history of how counseling center directors have been encouraged to develop and describe the work of their centers. Often overlooked are the…

  13. A Survey of Bioinformatics Database and Software Usage through Mining the Literature.

    Directory of Open Access Journals (Sweden)

    Geraint Duck

    Full Text Available Computer-based resources are central to much, if not most, biological and medical research. However, while there is an ever expanding choice of bioinformatics resources to use, described within the biomedical literature, little work to date has provided an evaluation of the full range of availability or levels of usage of database and software resources. Here we use text mining to process the PubMed Central full-text corpus, identifying mentions of databases or software within the scientific literature. We provide an audit of the resources contained within the biomedical literature, and a comparison of their relative usage, both over time and between the sub-disciplines of bioinformatics, biology and medicine. We find that trends in resource usage differ between these domains. The bioinformatics literature emphasises novel resource development, while database and software usage within biology and medicine is more stable and conservative. Many resources are only mentioned in the bioinformatics literature, with a relatively small number making it out into general biology, and fewer still into the medical literature. In addition, many resources are seeing a steady decline in their usage (e.g., BLAST, SWISS-PROT), though some are instead seeing rapid growth (e.g., the GO, R). We find a striking imbalance in resource usage, with the top 5% of resource names (133 names) accounting for 47% of total usage, and over 70% of resources extracted being mentioned only once each. While these results highlight the dynamic and creative nature of bioinformatics research, they raise questions about software reuse, choice and the sharing of bioinformatics practice. Is it acceptable that so many resources are apparently never reused? Finally, our work is a step towards automated extraction of scientific method from text. We make the dataset generated by our study available under the CC0 license here: http://dx.doi.org/10.6084/m9.figshare.1281371.
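    The concentration statistic quoted above (the top 5% of resource names accounting for 47% of all mentions, with over 70% of resources mentioned only once) is straightforward to compute from a table of per-resource mention counts. The sketch below uses made-up counts purely for illustration, not the study's extracted data:

    ```python
    from collections import Counter

    # Hypothetical mention counts per resource name; the study derived such
    # counts by text-mining the PubMed Central full-text corpus.
    mentions = Counter({"BLAST": 900, "R": 700, "GO": 650, "SWISS-PROT": 300,
                        "toolA": 2, "toolB": 1, "toolC": 1, "toolD": 1})

    total = sum(mentions.values())
    ranked = sorted(mentions.values(), reverse=True)

    # Share of all mentions captured by the top 5% of resource names
    top_k = max(1, round(0.05 * len(ranked)))
    top_share = sum(ranked[:top_k]) / total

    # Fraction of resources mentioned exactly once
    singletons = sum(1 for c in ranked if c == 1) / len(ranked)

    print(f"top-{top_k} share: {top_share:.0%}, singletons: {singletons:.0%}")
    ```

    With real extracted counts, the same two numbers reproduce the paper's headline imbalance figures.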

  14. Learning Resources Centers and Their Effectiveness on Students’ Learning Outcomes: A Case-Study of an Omani Higher Education Institute

    Directory of Open Access Journals (Sweden)

    Peyman Nouraey

    2017-06-01

    Full Text Available The study aimed at investigating the use and effectiveness of a learning resources center, which is generally known as a library. In doing so, eight elements were investigated through an author-designed questionnaire. Each of these elements delved into a certain aspect of the afore-mentioned center: (a) students' visit frequency, (b) availability of books related to modules, (c) center facilities, (d) use of discussion rooms, (e) use of online resources, (f) staff cooperation, (g) impact on knowledge enhancement, and (h) recommendation to peers. Eighty undergraduate students participated in the study. Participants were asked to read the statements carefully and choose one of the five responses provided, ranging from strongly agree to strongly disagree. Data were analyzed based on a 5-point Likert scale. Findings of the study revealed that participants were mostly in agreement with all eight statements provided in the questionnaire, which was interpreted as positive feedback from the students. The frequencies of responses by the participants were then reported. Finally, the results were compared and contrasted, and related discussions on the effectiveness of libraries and learning resources centers on students' learning performances and outcomes were made.
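    The 5-point Likert analysis described above reduces, for each questionnaire element, to summary figures such as a mean score and an agreement rate (share of "agree" or "strongly agree" responses). A minimal sketch with hypothetical responses (the actual study had 80 participants per element):

    ```python
    # 5-point Likert coding: 1 = strongly disagree ... 5 = strongly agree.
    # Made-up responses for one questionnaire element, for illustration only.
    responses = [5, 4, 4, 3, 5, 4, 2, 4, 5, 4]

    mean_score = sum(responses) / len(responses)          # average rating
    agreement = sum(1 for r in responses if r >= 4) / len(responses)  # 4s and 5s

    print(f"mean = {mean_score:.2f}, agreement = {agreement:.0%}")
    ```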

  15. David Grant Medical Center energy use baseline and integrated resource assessment

    Energy Technology Data Exchange (ETDEWEB)

    Richman, E.E.; Hoshide, R.K.; Dittmer, A.L.

    1993-04-01

    The US Air Mobility Command (AMC) has tasked Pacific Northwest Laboratory (PNL) with supporting the US Department of Energy (DOE) Federal Energy Management Program's (FEMP) mission to identify, evaluate, and assist in acquiring all cost-effective energy resource opportunities (EROs) at the David Grant Medical Center (DGMC). This report describes the methodology used to identify and evaluate the EROs at DGMC, provides a life-cycle cost (LCC) analysis for each ERO, and prioritizes any life-cycle cost-effective EROs based on their net present value (NPV), value index (VI), and savings to investment ratio (SIR or ROI). Analysis results are presented for 17 EROs that involve energy use in the areas of lighting, fan and pump motors, boiler operation, infiltration, electric load peak reduction and cogeneration, electric rate structures, and natural gas supply. Typical current energy consumption is approximately 22,900 MWh of electricity (78,300 MBtu), 87,600 kcf of natural gas (90,300 MBtu), and 8,300 gal of fuel oil (1,200 MBtu). A summary of the savings potential by energy-use category of all independent cost-effective EROs is shown in a table. This table includes the first cost, yearly energy consumption savings, and NPV for each energy-use category. The net dollar savings and NPV values as derived by the life-cycle cost analysis are based on the 1992 federal discount rate of 4.6%. The implementation of all EROs could result in a yearly electricity savings of more than 6,000 MWh or 26% of current yearly electricity consumption. More than 15 MW of billable load (total billed by the utility for a 12-month period) or more than 34% of current billed demand could also be saved. Corresponding natural gas savings would be 1,050 kcf (just over 1% of current consumption). Total yearly net energy cost savings for all options would be greater than $343,340. This value does not include any operations and maintenance (O&M) savings.
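    The percentage figures in this abstract can be cross-checked directly from the reported consumption numbers; a minimal sketch using the electricity figures quoted above:

    ```python
    # Cross-checking the reported savings share for DGMC
    # (both figures taken from the abstract above).
    electricity_use_mwh = 22_900   # current yearly electricity consumption
    electricity_saved_mwh = 6_000  # projected yearly savings from all EROs

    share = electricity_saved_mwh / electricity_use_mwh
    print(f"{share:.0%} of yearly electricity consumption")
    ```

    The ratio comes out at roughly 26%, matching the percentage quoted in the report.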

  17. The DBCLS BioHackathon: standardization and interoperability for bioinformatics web services and workflows. The DBCLS BioHackathon Consortium*

    Directory of Open Access Journals (Sweden)

    Katayama Toshiaki

    2010-08-01

    Full Text Available Abstract Web services have become a key technology for bioinformatics, since life science databases are globally decentralized and the exponential increase in the amount of available data demands efficient systems that avoid transferring entire databases for every step of an analysis. However, various incompatibilities among database resources and analysis services make it difficult to connect and integrate these into interoperable workflows. To resolve this situation, we invited domain specialists from web service providers, client software developers, Open Bio* projects, the BioMoby project and researchers of emerging areas where a standard exchange data format is not well established, for an intensive collaboration entitled the BioHackathon 2008. The meeting was hosted by the Database Center for Life Science (DBCLS) and the Computational Biology Research Center (CBRC) and was held in Tokyo from February 11th to 15th, 2008. In this report we highlight the work accomplished and the common issues arising from this event, including the standardization of data exchange formats and services in the emerging fields of glycoinformatics, biological interaction networks, text mining, and phyloinformatics. In addition, common shared object development based on BioSQL, as well as technical challenges in large data management, asynchronous services, and security, are discussed. Consequently, we improved the interoperability of web services in several fields; however, further cooperation among major database centers and continued collaborative efforts between service providers and software developers are still necessary for effective advances in bioinformatics web service technologies.

  18. Establishing bioinformatics research in the Asia Pacific

    Directory of Open Access Journals (Sweden)

    Tammi Martti

    2006-12-01

    Full Text Available Abstract In 1998, the Asia Pacific Bioinformatics Network (APBioNet), Asia's oldest bioinformatics organisation, was set up to champion the advancement of bioinformatics in the Asia Pacific. By 2002, APBioNet had gained sufficient critical mass to initiate the first International Conference on Bioinformatics (InCoB), bringing together scientists working in the field of bioinformatics in the region. This year, the InCoB2006 Conference was organized as the 5th annual conference of the Asia-Pacific Bioinformatics Network, on Dec. 18–20, 2006 in New Delhi, India, following a series of successful events in Bangkok (Thailand), Penang (Malaysia), Auckland (New Zealand) and Busan (South Korea). This Introduction provides a brief overview of the peer-reviewed manuscripts accepted for publication in this Supplement. It exemplifies a typical snapshot of the growing research excellence in bioinformatics of the region as we embark on a trajectory of establishing a solid bioinformatics research culture in the Asia Pacific that is able to contribute fully to the global bioinformatics community.

  19. Emerging strengths in Asia Pacific bioinformatics.

    Science.gov (United States)

    Ranganathan, Shoba; Hsu, Wen-Lian; Yang, Ueng-Cheng; Tan, Tin Wee

    2008-12-12

    The 2008 annual conference of the Asia Pacific Bioinformatics Network (APBioNet), Asia's oldest bioinformatics organisation set up in 1998, was organized as the 7th International Conference on Bioinformatics (InCoB), jointly with the Bioinformatics and Systems Biology in Taiwan (BIT 2008) Conference, Oct. 20-23, 2008 at Taipei, Taiwan. Besides bringing together scientists from the field of bioinformatics in this region, InCoB is actively involving researchers from the area of systems biology, to facilitate greater synergy between these two groups. Marking the 10th Anniversary of APBioNet, this InCoB 2008 meeting followed on from a series of successful annual events in Bangkok (Thailand), Penang (Malaysia), Auckland (New Zealand), Busan (South Korea), New Delhi (India) and Hong Kong. Additionally, tutorials and the Workshop on Education in Bioinformatics and Computational Biology (WEBCB) immediately prior to the 20th Federation of Asian and Oceanian Biochemists and Molecular Biologists (FAOBMB) Taipei Conference provided ample opportunity for inducting mainstream biochemists and molecular biologists from the region into a greater level of awareness of the importance of bioinformatics in their craft. In this editorial, we provide a brief overview of the peer-reviewed manuscripts accepted for publication herein, grouped into thematic areas. As the regional research expertise in bioinformatics matures, the papers fall into thematic areas, illustrating the specific contributions made by APBioNet to global bioinformatics efforts.

  20. The Hardwood Tree Improvement and Regeneration Center: its strategic plans for sustaining the hardwood resource

    Science.gov (United States)

    Charles H. Michler; Michael J. Bosela; Paula M. Pijut; Keith E. Woeste

    2003-01-01

    A regional center for hardwood tree improvement, genomics, and regeneration research, development and technology transfer will focus on black walnut, black cherry, northern red oak and, in the future, on other fine hardwoods as the effort is expanded. The Hardwood Tree Improvement and Regeneration Center (HTIRC) will use molecular genetics and genomics along with...

  1. Establishing a distributed national research infrastructure providing bioinformatics support to life science researchers in Australia.

    Science.gov (United States)

    Schneider, Maria Victoria; Griffin, Philippa C; Tyagi, Sonika; Flannery, Madison; Dayalan, Saravanan; Gladman, Simon; Watson-Haigh, Nathan; Bayer, Philipp E; Charleston, Michael; Cooke, Ira; Cook, Rob; Edwards, Richard J; Edwards, David; Gorse, Dominique; McConville, Malcolm; Powell, David; Wilkins, Marc R; Lonie, Andrew

    2017-06-30

    EMBL Australia Bioinformatics Resource (EMBL-ABR) is a developing national research infrastructure, providing bioinformatics resources and support to life science and biomedical researchers in Australia. EMBL-ABR comprises 10 geographically distributed national nodes with one coordinating hub, with current funding provided through Bioplatforms Australia and the University of Melbourne for its initial 2-year development phase. The EMBL-ABR mission is to: (1) increase Australia's capacity in bioinformatics and data sciences; (2) contribute to the development of training in bioinformatics skills; (3) showcase Australian data sets at an international level and (4) enable engagement in international programs. The activities of EMBL-ABR are focussed in six key areas, aligning with comparable international initiatives such as ELIXIR, CyVerse and NIH Commons. These key areas-Tools, Data, Standards, Platforms, Compute and Training-are described in this article. © The Author 2017. Published by Oxford University Press.

  2. Resources to Support Ethical Practice in Evaluation: An Interview with the Director of the National Center for Research and Professional Ethics

    Science.gov (United States)

    Goodyear, Leslie

    2012-01-01

    Where do evaluators find resources on ethics and ethical practice? This article highlights a relatively new online resource, a centerpiece project of the National Center for Professional and Research Ethics (NCPRE), which brings together information on best practices in ethics in research, academia, and business in an online portal and center. It…

  3. Development of user-centered interfaces to search the knowledge resources of the Virginia Henderson International Nursing Library.

    Science.gov (United States)

    Jones, Josette; Harris, Marcelline; Bagley-Thompson, Cheryl; Root, Jane

    2003-01-01

    This poster describes the development of user-centered interfaces in order to extend the functionality of the Virginia Henderson International Nursing Library (VHINL) from library to web based portal to nursing knowledge resources. The existing knowledge structure and computational models are revised and made complementary. Nurses' search behavior is captured and analyzed, and the resulting search models are mapped to the revised knowledge structure and computational model.

  4. Impact of Information Technology, Clinical Resource Constraints, and Patient-Centered Practice Characteristics on Quality of Care

    Directory of Open Access Journals (Sweden)

    JongDeuk Baek

    2015-02-01

    Full Text Available Objective: Factors in the practice environment, such as health information technology (IT) infrastructure, availability of other clinical resources, and financial incentives, may influence whether practices are able to successfully implement the patient-centered medical home (PCMH) model and realize its benefits. This study investigates the impacts of those PCMH-related elements on primary care physicians’ perception of quality of care. Methods: A multiple logistic regression model was estimated using the 2004 to 2005 CTS Physician Survey, a national sample of salaried primary care physicians (n = 1733). Results: The patient-centered practice environment and availability of clinical resources increased physicians’ perceived quality of care. Although IT use for clinical information access did enhance physicians’ ability to provide high quality of care, a similar positive impact of IT use was not found for e-prescribing or the exchange of clinical patient information. Lack of resources was negatively associated with physician perception of quality of care. Conclusion: Since health IT is an important foundation of PCMH, patient-centered practices are more likely to have health IT in place to support care delivery. However, despite its potential to enhance delivery of primary care, simply making health IT available does not necessarily translate into physicians’ perceptions that it enhances the quality of care they provide. It is critical for health-care managers and policy makers to ensure that primary care physicians fully recognize and embrace the use of new technology to improve both the quality of care provided and the patient outcomes.

  5. DAVID Knowledgebase: a gene-centered database integrating heterogeneous gene annotation resources to facilitate high-throughput gene functional analysis

    Directory of Open Access Journals (Sweden)

    Baseler Michael W

    2007-11-01

    Full Text Available Abstract Background Due to the complex and distributed nature of biological research, our current biological knowledge is spread over many redundant annotation databases maintained by many independent groups. Analysts usually need to visit many of these bioinformatics databases in order to integrate comprehensive annotation information for their genes, which becomes one of the bottlenecks, particularly for the analytic task associated with a large gene list. Thus, a highly centralized and ready-to-use gene-annotation knowledgebase is in demand for high throughput gene functional analysis. Description The DAVID Knowledgebase is built around the DAVID Gene Concept, a single-linkage method to agglomerate tens of millions of gene/protein identifiers from a variety of public genomic resources into DAVID gene clusters. The grouping of such identifiers improves the cross-reference capability, particularly across NCBI and UniProt systems, enabling more than 40 publicly available functional annotation sources to be comprehensively integrated and centralized by the DAVID gene clusters. The simple, pair-wise, text format files which make up the DAVID Knowledgebase are freely downloadable for various data analysis uses. In addition, a well organized web interface allows users to query different types of heterogeneous annotations in a high-throughput manner. Conclusion The DAVID Knowledgebase is designed to facilitate high throughput gene functional analysis. For a given gene list, it not only provides the quick accessibility to a wide range of heterogeneous annotation data in a centralized location, but also enriches the level of biological information for an individual gene. Moreover, the entire DAVID Knowledgebase is freely downloadable or searchable at http://david.abcc.ncifcrf.gov/knowledgebase/.
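    The single-linkage agglomeration behind the DAVID Gene Concept described above can be illustrated with a small union-find sketch. This is only a minimal model of the idea, not DAVID's actual implementation; the function name and the identifier pairs below are hypothetical.

    ```python
    # Single-linkage agglomeration of gene/protein identifiers via union-find:
    # any cross-reference pair links two identifiers into the same cluster,
    # so chains of shared references agglomerate into one group.

    def cluster_identifiers(xref_pairs):
        """Group identifiers into single-linkage clusters from cross-reference pairs."""
        parent = {}

        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving keeps trees shallow
                x = parent[x]
            return x

        def union(a, b):
            parent[find(a)] = find(b)

        for a, b in xref_pairs:
            union(a, b)

        clusters = {}
        for ident in parent:
            clusters.setdefault(find(ident), set()).add(ident)
        return list(clusters.values())

    # Hypothetical cross-references: the first two pairs chain three identifiers
    # for one gene into a single cluster; the third pair forms a second cluster.
    pairs = [("EntrezGene:7157", "UniProt:P04637"),
             ("UniProt:P04637", "Ensembl:ENSG00000141510"),
             ("EntrezGene:1956", "UniProt:P00533")]
    print(cluster_identifiers(pairs))
    ```

    The single-linkage property means one shared identifier is enough to merge two groups, which is what gives the approach its cross-reference power across NCBI and UniProt namespaces.
    
    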

  6. Bioclipse: an open source workbench for chemo- and bioinformatics

    Directory of Open Access Journals (Sweden)

    Wagener Johannes

    2007-02-01

    Full Text Available Abstract Background There is a need for software applications that provide users with a complete and extensible toolkit for chemo- and bioinformatics accessible from a single workbench. Commercial packages are expensive and closed source, hence they do not allow end users to modify algorithms and add custom functionality. Existing open source projects are more focused on providing a framework for integrating existing, separately installed bioinformatics packages, rather than providing user-friendly interfaces. No open source chemoinformatics workbench has previously been published, and no successful attempts have been made to integrate chemo- and bioinformatics into a single framework. Results Bioclipse is an advanced workbench for resources in chemo- and bioinformatics, such as molecules, proteins, sequences, spectra, and scripts. It provides 2D-editing, 3D-visualization, file format conversion, calculation of chemical properties, and much more; all fully integrated into a user-friendly desktop application. Editing supports standard functions such as cut and paste, drag and drop, and undo/redo. Bioclipse is written in Java and based on the Eclipse Rich Client Platform with a state-of-the-art plugin architecture. This gives Bioclipse an advantage over other systems as it can easily be extended with functionality in any desired direction. Conclusion Bioclipse is a powerful workbench for bio- and chemoinformatics as well as an advanced integration platform. The rich functionality, intuitive user interface, and powerful plugin architecture make Bioclipse the most advanced and user-friendly open source workbench for chemo- and bioinformatics. Bioclipse is released under the Eclipse Public License (EPL), an open source license which sets no constraints on external plugin licensing; it is totally open for both open source plugins as well as commercial ones. Bioclipse is freely available at http://www.bioclipse.net.

  7. Report compiled by Research Center for Carbonaceous Resources, Institute for Chemical Reaction Science, Tohoku University; Tohoku Daigaku Hanno Kagaku Kenkyusho tanso shigen hanno kenkyu center hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-04-01

    The Research Center for Carbonaceous Resources was established in April 1991 for the purpose of developing a comprehensive process for converting carbonaceous resources into clean fuels or into materials equipped with advanced functions. In this report, the track record and activities of the center are introduced. Under study in the conversion process research department is the organization of a comprehensive coal conversion process which will be a combination of solvent extraction, catalytic decomposition, and catalytic gasification, whose goal is to convert coal in a clean way at high efficiency. Under study in the conversion catalyst research department are the development of a coal denitrogenation method, development of a low-temperature gasification method by use of inexpensive catalysts, synthesis of C2 hydrocarbons in a methane/carbon dioxide reaction, etc. Other endeavors under way involve the designing and development of new organic materials such as new carbon materials and a study of the foundation on which such efforts stand, that is, the study of the control of reactions between solids. Furthermore, in the study of interfacial reaction control, the contact gasification of coal, the ion exchange capacity and surface conditions of brown coal, the carbonization of cation-exchanged brown coal, etc., are being investigated. (NEDO)

  8. The secondary metabolite bioinformatics portal

    DEFF Research Database (Denmark)

    Weber, Tilmann; Kim, Hyun Uk

    2016-01-01

    Natural products are among the most important sources of lead molecules for drug discovery. With the development of affordable whole-genome sequencing technologies and other ‘omics tools, the field of natural products research is currently undergoing a shift in paradigms. While, for decades, mainly analytical and chemical methods gave access to this group of compounds, nowadays genomics-based methods offer complementary approaches to find, identify and characterize such molecules. This paradigm shift also resulted in a high demand for computational tools to assist researchers in their daily work. In this context, this review gives a summary of tools and databases that currently are available to mine, identify and characterize natural product biosynthesis pathways and their producers based on ‘omics data. A web portal called Secondary Metabolite Bioinformatics Portal (SMBP at http…

  9. Best practices in bioinformatics training for life scientists.

    KAUST Repository

    Via, Allegra

    2013-06-25

    The mountains of data thrusting from the new landscape of modern high-throughput biology are irrevocably changing biomedical research and creating a near-insatiable demand for training in data management and manipulation and data mining and analysis. Among life scientists, from clinicians to environmental researchers, a common theme is the need not just to use, and gain familiarity with, bioinformatics tools and resources but also to understand their underlying fundamental theoretical and practical concepts. Providing bioinformatics training to empower life scientists to handle and analyse their data efficiently, and progress their research, is a challenge across the globe. Delivering good training goes beyond traditional lectures and resource-centric demos, using interactivity, problem-solving exercises and cooperative learning to substantially enhance training quality and learning outcomes. In this context, this article discusses various pragmatic criteria for identifying training needs and learning objectives, for selecting suitable trainees and trainers, for developing and maintaining training skills and evaluating training quality. Adherence to these criteria may help not only to guide course organizers and trainers on the path towards bioinformatics training excellence but, importantly, also to improve the training experience for life scientists.

  10. Biology in 'silico': The Bioinformatics Revolution.

    Science.gov (United States)

    Bloom, Mark

    2001-01-01

    Explains the Human Genome Project (HGP) and efforts to sequence the human genome. Describes the role of bioinformatics in the project and considers it the genetics Swiss Army Knife, which has many different uses, for use in forensic science, medicine, agriculture, and environmental sciences. Discusses the use of bioinformatics in the high school…

  11. Using "Arabidopsis" Genetic Sequences to Teach Bioinformatics

    Science.gov (United States)

    Zhang, Xiaorong

    2009-01-01

    This article describes a new approach to teaching bioinformatics using "Arabidopsis" genetic sequences. Several open-ended and inquiry-based laboratory exercises have been designed to help students grasp key concepts and gain practical skills in bioinformatics, using "Arabidopsis" leucine-rich repeat receptor-like kinase (LRR…

  12. A Mathematical Optimization Problem in Bioinformatics

    Science.gov (United States)

    Heyer, Laurie J.

    2008-01-01

    This article describes the sequence alignment problem in bioinformatics. Through examples, we formulate sequence alignment as an optimization problem and show how to compute the optimal alignment with dynamic programming. The examples and sample exercises have been used by the author in a specialized course in bioinformatics, but could be adapted…
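    The dynamic-programming formulation the article describes can be sketched in a few lines. This is a minimal global-alignment (Needleman-Wunsch-style) score computation; the scoring values (match +1, mismatch −1, gap −2) are illustrative choices, not taken from the article.

    ```python
    # Global sequence alignment score via dynamic programming.
    # dp[i][j] holds the best score for aligning a[:i] against b[:j];
    # each cell is the max over a diagonal (match/mismatch) step or a gap
    # in either sequence.

    def align_score(a, b, match=1, mismatch=-1, gap=-2):
        """Return the optimal global alignment score of strings a and b."""
        m, n = len(a), len(b)
        dp = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            dp[i][0] = i * gap          # align a[:i] against an empty prefix
        for j in range(1, n + 1):
            dp[0][j] = j * gap
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
        return dp[m][n]

    print(align_score("GATTACA", "GATTACA"))  # 7 (seven matches, no gaps)
    print(align_score("GAT", "GT"))           # 0 (two matches, one gap)
    ```

    A full alignment (not just the score) is recovered by tracing back through the table from dp[m][n], recording which of the three moves produced each cell.
    
    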

  13. Online Bioinformatics Tutorials | Office of Cancer Genomics

    Science.gov (United States)

    Bioinformatics is a scientific discipline that applies computer science and information technology to help understand biological processes. The NIH provides a list of free online bioinformatics tutorials, either generated by the NIH Library or other institutes, which includes introductory lectures and "how to" videos on using various tools.

  14. Fuzzy Logic in Medicine and Bioinformatics

    Directory of Open Access Journals (Sweden)

    Angela Torres

    2006-01-01

    Full Text Available The purpose of this paper is to present a general view of the current applications of fuzzy logic in medicine and bioinformatics. We particularly review the medical literature using fuzzy logic. We then recall the geometrical interpretation of fuzzy sets as points in a fuzzy hypercube and present two concrete illustrations in medicine (drug addictions) and in bioinformatics (comparison of genomes).
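    The geometrical interpretation mentioned above treats a fuzzy set over n elements as a point in the unit hypercube [0,1]^n, so fuzzy sets can be compared with ordinary vector distances. The sketch below uses a normalized Hamming similarity as one simple such measure; the membership values are hypothetical, and this is not necessarily the measure used in the paper.

    ```python
    # A fuzzy set over n elements is a membership vector in [0,1]^n,
    # i.e. a point in the unit hypercube. Two fuzzy sets can then be
    # compared by the normalized Hamming distance between the two points.

    def hamming_similarity(a, b):
        """Return 1 minus the normalized Hamming distance of membership vectors."""
        assert len(a) == len(b)
        return 1 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)

    # Hypothetical membership degrees of four features in two genomes.
    genome_a = [0.9, 0.4, 0.0, 1.0]
    genome_b = [0.7, 0.4, 0.2, 1.0]
    print(hamming_similarity(genome_a, genome_b))  # 0.9
    ```

    Crisp (classical) sets sit at the hypercube's corners, where every membership degree is 0 or 1; fuzzy sets occupy the interior, which is what the hypercube picture makes vivid.
    
    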

  15. Rising Strengths Hong Kong SAR in Bioinformatics.

    Science.gov (United States)

    Chakraborty, Chiranjib; George Priya Doss, C; Zhu, Hailong; Agoramoorthy, Govindasamy

    2017-06-01

    Hong Kong's bioinformatics sector is attaining new heights in combination with its economic boom and the predominance of the working-age group in its population. Factors such as a knowledge-based and free-market economy have contributed towards a prominent position on the world map of bioinformatics. In this review, we have considered the educational measures, landmark research activities, the achievements of bioinformatics companies and the role of the Hong Kong government in the establishment of bioinformatics as a strength. However, several hurdles remain. New government policies will assist computational biologists to overcome these hurdles and further raise the profile of the field. There is a high expectation that bioinformatics in Hong Kong will be a promising area for the next generation.

  16. Bioinformatics clouds for big data manipulation

    Directory of Open Access Journals (Sweden)

    Dai Lin

    2012-11-01

    Full Text Available Abstract As advances in life sciences and information technology bring profound influences on bioinformatics due to its interdisciplinary nature, bioinformatics is experiencing a new leap-forward from in-house computing infrastructure into utility-supplied cloud computing delivered over the Internet, in order to handle the vast quantities of biological data generated by high-throughput experimental technologies. Albeit relatively new, cloud computing promises to address big data storage and analysis issues in the bioinformatics field. Here we review extant cloud-based services in bioinformatics, classify them into Data as a Service (DaaS), Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS), and present our perspectives on the adoption of cloud computing in bioinformatics. Reviewers: This article was reviewed by Frank Eisenhaber, Igor Zhulin, and Sandor Pongor.

  17. The 2016 Bioinformatics Open Source Conference (BOSC).

    Science.gov (United States)

    Harris, Nomi L; Cock, Peter J A; Chapman, Brad; Fields, Christopher J; Hokamp, Karsten; Lapp, Hilmar; Muñoz-Torres, Monica; Wiencko, Heather

    2016-01-01

    Message from the ISCB: The Bioinformatics Open Source Conference (BOSC) is a yearly meeting organized by the Open Bioinformatics Foundation (OBF), a non-profit group dedicated to promoting the practice and philosophy of Open Source software development and Open Science within the biological research community. BOSC has been run since 2000 as a two-day Special Interest Group (SIG) before the annual ISMB conference. The 17th annual BOSC ( http://www.open-bio.org/wiki/BOSC_2016) took place in Orlando, Florida in July 2016. As in previous years, the conference was preceded by a two-day collaborative coding event open to the bioinformatics community. The conference brought together nearly 100 bioinformatics researchers, developers and users of open source software to interact and share ideas about standards, bioinformatics software development, and open and reproducible science.

  18. Bioinformatics clouds for big data manipulation

    KAUST Repository

    Dai, Lin

    2012-11-28

    As advances in life sciences and information technology bring profound influences on bioinformatics due to its interdisciplinary nature, bioinformatics is experiencing a new leap-forward from in-house computing infrastructure into utility-supplied cloud computing delivered over the Internet, in order to handle the vast quantities of biological data generated by high-throughput experimental technologies. Albeit relatively new, cloud computing promises to address big data storage and analysis issues in the bioinformatics field. Here we review extant cloud-based services in bioinformatics, classify them into Data as a Service (DaaS), Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS), and present our perspectives on the adoption of cloud computing in bioinformatics. This article was reviewed by Frank Eisenhaber, Igor Zhulin, and Sandor Pongor. 2012 Dai et al.; licensee BioMed Central Ltd.

  19. Bioinformatics clouds for big data manipulation.

    Science.gov (United States)

    Dai, Lin; Gao, Xin; Guo, Yan; Xiao, Jingfa; Zhang, Zhang

    2012-11-28

    As advances in life sciences and information technology bring profound influences on bioinformatics due to its interdisciplinary nature, bioinformatics is experiencing a new leap-forward from in-house computing infrastructure into utility-supplied cloud computing delivered over the Internet, in order to handle the vast quantities of biological data generated by high-throughput experimental technologies. Albeit relatively new, cloud computing promises to address big data storage and analysis issues in the bioinformatics field. Here we review extant cloud-based services in bioinformatics, classify them into Data as a Service (DaaS), Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS), and present our perspectives on the adoption of cloud computing in bioinformatics. This article was reviewed by Frank Eisenhaber, Igor Zhulin, and Sandor Pongor.

  20. Bioinformatics and Microarray Data Analysis on the Cloud.

    Science.gov (United States)

    Calabrese, Barbara; Cannataro, Mario

    2016-01-01

    High-throughput platforms such as microarray, mass spectrometry, and next-generation sequencing are producing an increasing volume of omics data that needs large data storage and computing power. Cloud computing offers massive scalable computing and storage, data sharing, on-demand anytime and anywhere access to resources and applications, and thus, it may represent the key technology for facing those issues. In fact, in recent years it has been adopted for the deployment of different bioinformatics solutions and services both in academia and in industry. Despite this, cloud computing presents several issues regarding the security and privacy of data, which are particularly important when analyzing patients' data, such as in personalized medicine. This chapter reviews the main academic and industrial cloud-based bioinformatics solutions, with a special focus on microarray data analysis solutions, and underlines the main issues and problems related to the use of such platforms for the storage and analysis of patients' data.

  1. Fox Chase Cancer Center's Genitourinary Division: a national resource for research, innovation and patient care.

    Science.gov (United States)

    Uzzo, Robert G; Horwitz, Eric M; Plimack, Elizabeth R

    2016-04-01

    Founded in 1904, Fox Chase Cancer Center remains committed to its mission. It is one of 41 centers in the country designated as a Comprehensive Cancer Center by the National Cancer Institute, is a founding member of the National Comprehensive Cancer Network, holds the magnet designation for nursing excellence, is one of the first to establish a family cancer risk assessment program, and has achieved national distinction because of the scientific discoveries made there that have advanced clinical care. Two of its researchers have won Nobel prizes. The Genitourinary Division is nationally recognized and viewed as one of the top driving forces behind the growth of Fox Chase due to its commitment to initiating and participating in clinical trials, its prolific contributions to peer-reviewed publications and presentations at scientific meetings, its innovations in therapies and treatment strategies, and its commitment to bringing cutting-edge therapies to patients.

  2. Assessment of Outreach by a Regional Burn Center: Could Referral Criteria Revision Help with Utilization of Resources?

    Science.gov (United States)

    Carter, Nicholas H; Leonard, Clint; Rae, Lisa

    2018-02-20

    The objectives of this study were to identify trends in preburn center care, assess needs for outreach and education efforts, and evaluate resource utilization with regard to referral criteria. We hypothesized that many transferred patients were discharged home after brief hospitalizations and without need for operation. Retrospective chart review was performed for all adult and pediatric transfers to our regional burn center from July 2012 to July 2014. Details of initial management including TBSA estimation, fluid resuscitation, and intubation status were recorded. Mode of transport, burn center length of stay, need for operation, and in-hospital mortality were analyzed. In two years, our burn center received 1004 referrals from other hospitals including 713 inpatient transfers. Within this group, 621 were included in the study. Among transferred patients, 476 (77%) had burns less than 10% TBSA, 69 (11%) had burns between 10-20% TBSA, and 76 (12%) had burns greater than 20% TBSA. Referring providers did not document TBSA for 261 (42%) of patients. Among patients with less than 10% TBSA burns, 196 (41%) received fluid boluses. Among patients with TBSA < 10%, 196 (41%) were sent home from the emergency department or discharged within 24 hours, and an additional 144 (30%) were discharged within 48 hours. Overall, 187 (30%) patients required an operation. In-hospital mortality rates were 1.5% for patients who arrived by ground transport, 14.9% for rotor wing transport, and 18.2% for fixed wing transport. Future education efforts should emphasize the importance of calculating TBSA to guide need for fluid resuscitation and restricting fluid boluses to patients that are hypotensive. Clarifying the American Burn Association burn center referral criteria to distinguish between immediate transfer vs outpatient referral may improve patient care and resource utilization.

  3. ZBIT Bioinformatics Toolbox: A Web-Platform for Systems Biology and Expression Data Analysis.

    Science.gov (United States)

    Römer, Michael; Eichner, Johannes; Dräger, Andreas; Wrzodek, Clemens; Wrzodek, Finja; Zell, Andreas

    2016-01-01

    Bioinformatics analysis has become an integral part of research in biology. However, installation and use of scientific software can be difficult and often requires technical expert knowledge. Reasons are dependencies on certain operating systems or required third-party libraries, missing graphical user interfaces and documentation, or nonstandard input and output formats. In order to make bioinformatics software easily accessible to researchers, we here present a web-based platform. The Center for Bioinformatics Tuebingen (ZBIT) Bioinformatics Toolbox provides web-based access to a collection of bioinformatics tools developed for systems biology, protein sequence annotation, and expression data analysis. Currently, the collection encompasses software for conversion and processing of the community standards SBML and BioPAX, transcription factor analysis, and analysis of microarray data from transcriptomics and proteomics studies. All tools are hosted on a customized Galaxy instance and run on a dedicated computation cluster. Users only need a web browser and an active internet connection in order to benefit from this service. The web platform is designed to facilitate the usage of the bioinformatics tools for researchers without an advanced technical background. Users can combine tools for complex analyses or use predefined, customizable workflows. All results are stored persistently and are reproducible. For each tool, we provide documentation, tutorials, and example data to maximize usability. The ZBIT Bioinformatics Toolbox is freely available at https://webservices.cs.uni-tuebingen.de/.

  4. BIRCH: A user-oriented, locally-customizable, bioinformatics system

    Science.gov (United States)

    Fristensky, Brian

    2007-01-01

    Background Molecular biologists need sophisticated analytical tools which often demand extensive computational resources. While finding, installing, and using these tools can be challenging, pipelining data from one program to the next is particularly awkward, especially when using web-based programs. At the same time, system administrators tasked with maintaining these tools do not always appreciate the needs of research biologists. Results BIRCH (Biological Research Computing Hierarchy) is an organizational framework for delivering bioinformatics resources to a user group, scaling from a single lab to a large institution. The BIRCH core distribution includes many popular bioinformatics programs, unified within the GDE (Genetic Data Environment) graphic interface. Of equal importance, BIRCH provides the system administrator with tools that simplify the job of managing a multiuser bioinformatics system across different platforms and operating systems. These include tools for integrating locally-installed programs and databases into BIRCH, and for customizing the local BIRCH system to meet the needs of the user base. BIRCH can also act as a front end to provide a unified view of already-existing collections of bioinformatics software. Documentation for the BIRCH and locally-added programs is merged in a hierarchical set of web pages. In addition to manual pages for individual programs, BIRCH tutorials employ step by step examples, with screen shots and sample files, to illustrate both the important theoretical and practical considerations behind complex analytical tasks. Conclusion BIRCH provides a versatile organizational framework for managing software and databases, and making these accessible to a user base. Because of its network-centric design, BIRCH makes it possible for any user to do any task from anywhere. PMID:17291351

  5. BIRCH: A user-oriented, locally-customizable, bioinformatics system

    Directory of Open Access Journals (Sweden)

    Fristensky Brian

    2007-02-01

    Full Text Available Abstract Background Molecular biologists need sophisticated analytical tools which often demand extensive computational resources. While finding, installing, and using these tools can be challenging, pipelining data from one program to the next is particularly awkward, especially when using web-based programs. At the same time, system administrators tasked with maintaining these tools do not always appreciate the needs of research biologists. Results BIRCH (Biological Research Computing Hierarchy) is an organizational framework for delivering bioinformatics resources to a user group, scaling from a single lab to a large institution. The BIRCH core distribution includes many popular bioinformatics programs, unified within the GDE (Genetic Data Environment) graphic interface. Of equal importance, BIRCH provides the system administrator with tools that simplify the job of managing a multiuser bioinformatics system across different platforms and operating systems. These include tools for integrating locally-installed programs and databases into BIRCH, and for customizing the local BIRCH system to meet the needs of the user base. BIRCH can also act as a front end to provide a unified view of already-existing collections of bioinformatics software. Documentation for the BIRCH and locally-added programs is merged in a hierarchical set of web pages. In addition to manual pages for individual programs, BIRCH tutorials employ step by step examples, with screen shots and sample files, to illustrate both the important theoretical and practical considerations behind complex analytical tasks. Conclusion BIRCH provides a versatile organizational framework for managing software and databases, and making these accessible to a user base. Because of its network-centric design, BIRCH makes it possible for any user to do any task from anywhere.

  6. 76 FR 6627 - National Center for Research Resources; Notice of Closed Meeting

    Science.gov (United States)

    2011-02-07

    ... U.S.C., as amended. The contract proposals and the discussions could disclose confidential trade... concerning individuals associated with the contract proposals, the disclosure of which would constitute a... Resources Special Emphasis Panel; SBIR Contract Review. Date: March 16, 2011. Time: 8 a.m. to 5 p.m. Agenda...

  7. Amarillo National Resource Center for Plutonium. Quarterly technical progress report, May 1--July 31, 1998

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-09-01

    Progress is reported on research projects related to the following: Electronic resource library; Environment, safety, and health; Communication, education, training, and community involvement; Nuclear and other materials; and Reporting, evaluation, monitoring, and administration. Technical studies investigate remedial action of high explosives-contaminated lands, radioactive waste management, nondestructive assay methods, and plutonium processing, handling, and storage.

  8. The culture collection and herbarium of the Center for Forest Mycology Research: A national resource

    Science.gov (United States)

    J.A. Glaeser; K.K. Nakasone; D.J. Lodge; B. Ortiz-Santana; D.L. Lindner

    2013-01-01

    The Center for Forest Mycology Research (CFMR), U.S. Forest Service, Northern Research Station, Madison, WI, is home to the world's largest collection of wood-inhabiting fungi. These collections constitute a library of the fungal kingdom that is used by researchers throughout the world. The CFMR collections have many practical uses that have improved the lives of...

  9. Department of Music honors women's month with benefit concert for the Women's Resource Center

    OpenAIRE

    Adams, Louise

    2008-01-01

    The Virginia Tech Department of Music presents guest pianist Lise Keiter-Brotzman in "A Tribute to Women Composers" on Wednesday, March 19 at 8 p.m., in the Squires Recital Salon located in the Squires Student Center on College Avenue adjacent to downtown Blacksburg.

  10. Family Literacy Project. Learning Centers for Parents and Children. A Resource Guide.

    Science.gov (United States)

    Crocker, M. Judith, Ed.; And Others

    This guide is intended to help adult education programs establish family literacy programs and create Family Learning Centers in Cleveland Public Schools. The information should assist program coordinators in developing educational components that offer activities to raise the self-esteem of the parents and provide them with the knowledge and…

  11. Computational biology and bioinformatics in Nigeria.

    Science.gov (United States)

    Fatumo, Segun A; Adoga, Moses P; Ojo, Opeolu O; Oluwagbemi, Olugbenga; Adeoye, Tolulope; Ewejobi, Itunuoluwa; Adebiyi, Marion; Adebiyi, Ezekiel; Bewaji, Clement; Nashiru, Oyekanmi

    2014-04-01

    Over the past few decades, major advances in the field of molecular biology, coupled with advances in genomic technologies, have led to an explosive growth in the biological data generated by the scientific community. The critical need to process and analyze such a deluge of data and turn it into useful knowledge has caused bioinformatics to gain prominence and importance. Bioinformatics is an interdisciplinary research area that applies techniques, methodologies, and tools in computer and information science to solve biological problems. In Nigeria, bioinformatics has recently played a vital role in the advancement of biological sciences. In this developing country, the importance of bioinformatics is rapidly gaining acceptance, and bioinformatics groups of biologists, computer scientists, and computer engineers are being constituted at Nigerian universities and research institutes. In this article, we present an overview of bioinformatics education and research in Nigeria. We also discuss professional societies and academic and research institutions that play central roles in advancing the discipline in Nigeria. Finally, we propose strategies that can bolster bioinformatics education and support from policy makers in Nigeria, with potential positive implications for other developing countries.

  12. Computational biology and bioinformatics in Nigeria.

    Directory of Open Access Journals (Sweden)

    Segun A Fatumo

    2014-04-01

    Full Text Available. Over the past few decades, major advances in the field of molecular biology, coupled with advances in genomic technologies, have led to an explosive growth in the biological data generated by the scientific community. The critical need to process and analyze such a deluge of data and turn it into useful knowledge has caused bioinformatics to gain prominence and importance. Bioinformatics is an interdisciplinary research area that applies techniques, methodologies, and tools in computer and information science to solve biological problems. In Nigeria, bioinformatics has recently played a vital role in the advancement of biological sciences. In this developing country, the importance of bioinformatics is rapidly gaining acceptance, and bioinformatics groups of biologists, computer scientists, and computer engineers are being constituted at Nigerian universities and research institutes. In this article, we present an overview of bioinformatics education and research in Nigeria. We also discuss professional societies and academic and research institutions that play central roles in advancing the discipline in Nigeria. Finally, we propose strategies that can bolster bioinformatics education and support from policy makers in Nigeria, with potential positive implications for other developing countries.

  13. Geological characteristics and resource potentials of oil shale in Ordos Basin, Central China

    Energy Technology Data Exchange (ETDEWEB)

    Yunlai, Bai; Yingcheng, Zhao; Long, Ma; Wu-jun, Wu; Yu-hu, Ma

    2010-09-15

    It has been shown that the Ordos basin contains not only abundant oil, gas, coal, coal-bed gas, groundwater, and giant uranium deposits, but also abundant oil shale resources. The oil shale is typically 4-36 m thick, with an oil content of 1.5%-13.7% and a calorific value of 1.66-20.98 MJ/kg. The oil shale resource with a burial depth of less than 2000 m exceeds 2000×10^8 t (334); within this, the confirmed reserve is about 1×10^8 t (121). Developing the oil shale of the Ordos basin may yield not only huge economic benefit but also precious operating experience.
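
Tonnage figures like those above follow from a standard volumetric in-place estimate: area × thickness × bulk density gives the shale tonnage, and multiplying by the oil yield gives the in-place shale oil. A minimal sketch with entirely hypothetical inputs (not the basin's actual parameters):

```python
# Volumetric in-place oil-shale resource estimate (hypothetical inputs).
# shale tonnage = area * thickness * bulk density;
# in-place shale oil = shale tonnage * oil yield fraction.

def oil_shale_resource(area_km2, thickness_m, density_t_per_m3, oil_yield):
    area_m2 = area_km2 * 1e6                       # km^2 -> m^2
    shale_tonnes = area_m2 * thickness_m * density_t_per_m3
    oil_tonnes = shale_tonnes * oil_yield
    return shale_tonnes, oil_tonnes

# illustrative values: 50,000 km^2, 20 m seam, 2.2 t/m^3, 6% oil yield
shale, oil = oil_shale_resource(area_km2=50_000, thickness_m=20,
                                density_t_per_m3=2.2, oil_yield=0.06)
print(f"shale: {shale:.3e} t, in-place oil: {oil:.3e} t")
```

The same arithmetic scales linearly in each factor, which is why thickness and oil-content ranges dominate the uncertainty of such estimates.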

  14. Center for Fetal Monkey Gene Transfer for Heart, Lung, and Blood Diseases: An NHLBI Resource for the Gene Therapy Community

    Science.gov (United States)

    Skarlatos, Sonia I.

    2012-01-01

    Abstract: The goals of the National Heart, Lung, and Blood Institute (NHLBI) Center for Fetal Monkey Gene Transfer for Heart, Lung, and Blood Diseases are to conduct gene transfer studies in monkeys to evaluate safety and efficiency, and to provide NHLBI-supported investigators with expertise, resources, and services to actively pursue gene transfer approaches in monkeys in their research programs. NHLBI-supported projects span investigators throughout the United States and have addressed novel approaches to gene delivery, established "proof-of-principle," assessed whether findings in small-animal models could be demonstrated in a primate species, or were conducted to enable new grant or IND submissions. The Center for Fetal Monkey Gene Transfer for Heart, Lung, and Blood Diseases successfully aids the gene therapy community in addressing regulatory barriers, and serves as an effective vehicle for advancing the field. PMID:22974119

  15. CoryneCenter – An online resource for the integrated analysis of corynebacterial genome and transcriptome data

    Directory of Open Access Journals (Sweden)

    Hüser Andrea T

    2007-11-01

    Full Text Available. Background: The introduction of high-throughput genome sequencing and post-genome analysis technologies, e.g. DNA microarray approaches, has created the potential to unravel and scrutinize complex gene-regulatory networks on a large scale. The discovery of transcriptional regulatory interactions has become a major topic in modern functional genomics. Results: To facilitate the analysis of gene-regulatory networks, we have developed CoryneCenter, a web-based resource for the systematic integration and analysis of genome, transcriptome, and gene regulatory information for prokaryotes, especially corynebacteria. For this purpose, we extended and combined the following systems into a common platform: (1) GenDB, an open source genome annotation system; (2) EMMA, a MAGE-compliant application for high-throughput transcriptome data storage and analysis; and (3) CoryneRegNet, an ontology-based data warehouse designed to facilitate the reconstruction and analysis of gene regulatory interactions. We demonstrate the potential of CoryneCenter by means of an application example. Using microarray hybridization data, we compare the gene expression of Corynebacterium glutamicum under acetate and glucose feeding conditions: known regulatory networks are confirmed, but CoryneCenter moreover points out additional regulatory interactions. Conclusion: CoryneCenter provides more than the sum of its parts. Its novel analysis and visualization features significantly simplify the process of obtaining new biological insights into complex regulatory systems. Although the platform currently focusses on corynebacteria, the integrated tools are by no means restricted to these species, and the presented approach offers a general strategy for the analysis and verification of gene regulatory networks. CoryneCenter provides freely accessible projects with the underlying genome annotation, gene expression, and gene regulation data. The system is publicly available at http://www.CoryneCenter.de.

  16. The Sharjah Center for Astronomy and Space Sciences (SCASS 2015): Concept and Resources

    Science.gov (United States)

    Naimiy, Hamid M. K. Al

    2015-08-01

    The Sharjah Center for Astronomy and Space Sciences (SCASS) was launched in 2015 at the University of Sharjah in the UAE. The center will serve to enrich research in the fields of astronomy and space sciences, promote these fields at all educational levels, and encourage community involvement in these sciences. SCASS consists of: The Planetarium: contains a semicircular display screen (18 meters in diameter) installed at an angle of 10°, which displays high-definition images using an advanced digital display system consisting of seven high-performance light-display channels. The Planetarium Theatre offers a 200-seat capacity, with seats placed at carefully calculated angles. The Planetarium also contains an enormous star projector (Star Ball, 10 million stars) located in the heart of the celestial dome theatre. The Sharjah Astronomy Observatory: a small optical observatory consisting of a reflector telescope 45 centimeters in diameter for observing galaxies, stars, and planets, connected to a refractor telescope 20 centimeters in diameter for observing the Sun and Moon, equipped with highly developed astronomical devices, including a digital camera (CCD) and a high-resolution Echelle spectrograph with auto-guiding and remote calibration ports. Astronomy, space, and physics educational displays for various age groups include: an advanced space display that allows viewing the universe during four different periods of observation, as seen by (1) the naked eye; (2) Galileo; (3) spectrographic technology; and (4) the space technology of today; and a space technology display that covers space discoveries from the launch of the first satellite in the 1950s until now. The design concept for the center (450,000 sq. meters) was originated by HH Sheikh Sultan bin Mohammed Al Qasimi, Ruler of Sharjah, and depicts the dome as representing the sun in the middle of the center, surrounded by planetary bodies in orbit to form the solar system as seen in the sky.

  17. The Radiation Safety Information Computational Center (RSICC): A Resource for Nuclear Science Applications

    Energy Technology Data Exchange (ETDEWEB)

    Kirk, Bernadette Lugue [ORNL

    2009-01-01

    The Radiation Safety Information Computational Center (RSICC) has been in existence since 1963. RSICC collects, organizes, evaluates and disseminates technical information (software and nuclear data) involving the transport of neutral and charged particle radiation, and shielding and protection from the radiation associated with: nuclear weapons and materials, fission and fusion reactors, outer space, accelerators, medical facilities, and nuclear waste management. RSICC serves over 12,000 scientists and engineers from about 100 countries.

  18. When cloud computing meets bioinformatics: a review.

    Science.gov (United States)

    Zhou, Shuigeng; Liao, Ruiqi; Guan, Jihong

    2013-10-01

    In the past decades, with the rapid development of high-throughput technologies, biology research has generated an unprecedented amount of data. In order to store and process such a great amount of data, cloud computing and MapReduce were applied to many fields of bioinformatics. In this paper, we first introduce the basic concepts of cloud computing and MapReduce, and their applications in bioinformatics. We then highlight some problems challenging the applications of cloud computing and MapReduce to bioinformatics. Finally, we give a brief guideline for using cloud computing in biology research.
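
The MapReduce model the review surveys can be illustrated in miniature. The sketch below counts k-mers in short reads with explicit map, shuffle, and reduce phases, written in plain Python only to show the programming model; a production job would run on a framework such as Hadoop or Spark, and the reads here are invented:

```python
# MapReduce-style k-mer counting, sketched in plain Python.
from itertools import groupby

def map_phase(seq, k=3):
    # mapper: emit one (k-mer, 1) pair per position in the read
    return [(seq[i:i + k], 1) for i in range(len(seq) - k + 1)]

def reduce_phase(pairs):
    # shuffle: group pairs by key; reduce: sum the counts per key
    pairs.sort(key=lambda kv: kv[0])
    return {key: sum(v for _, v in group)
            for key, group in groupby(pairs, key=lambda kv: kv[0])}

reads = ["GATTACA", "TACAGAT"]                     # hypothetical reads
pairs = [kv for read in reads for kv in map_phase(read)]
counts = reduce_phase(pairs)
print(counts["TAC"])  # "TAC" occurs once in each read, so this prints 2
```

Because each mapper touches only its own read and each reducer only its own key, both phases parallelize across machines, which is exactly what makes the model attractive for sequencing-scale data.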

  19. Application of machine learning methods in bioinformatics

    Science.gov (United States)

    Yang, Haoyu; An, Zheng; Zhou, Haotian; Hou, Yawen

    2018-05-01

    With the rapid development of high-throughput genomic technologies, biology has entered the era of big data [1]. Bioinformatics is an interdisciplinary field encompassing the acquisition, management, analysis, interpretation, and application of biological information; it derives from the Human Genome Project. The field of machine learning, which aims to develop computer algorithms that improve with experience, holds promise to enable computers to assist humans in the analysis of large, complex data sets [2]. This paper analyzes and compares various machine learning algorithms and their applications in bioinformatics.
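
As a concrete instance of the kind of supervised learning such surveys compare, here is a toy nearest-neighbour classifier over invented gene-expression profiles. It is a generic illustration, not an algorithm or dataset from the paper itself:

```python
# 1-nearest-neighbour classification of hypothetical expression profiles.
import math

def euclidean(a, b):
    # distance between two expression vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict_1nn(train, query):
    # train: list of (expression_vector, label); pick the closest example
    _, label = min(train, key=lambda item: euclidean(item[0], query))
    return label

train = [([2.1, 0.3, 5.0], "tumour"),
         ([1.9, 0.4, 4.8], "tumour"),
         ([0.2, 3.1, 0.9], "normal"),
         ([0.1, 2.8, 1.1], "normal")]
print(predict_1nn(train, [2.0, 0.5, 4.9]))  # nearest profiles are "tumour"
```

Real bioinformatics applications use the same train/predict pattern with far higher-dimensional data and stronger models (SVMs, random forests, neural networks), plus cross-validation to estimate accuracy.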

  20. Performance evaluation of multi-stratum resources integration based on network function virtualization in software defined elastic data center optical interconnect.

    Science.gov (United States)

    Yang, Hui; Zhang, Jie; Ji, Yuefeng; Tian, Rui; Han, Jianrui; Lee, Young

    2015-11-30

    Data center interconnect with elastic optical networks is a promising scenario for meeting the high burstiness and high-bandwidth requirements of data center services. In our previous work, we implemented multi-stratum resilience between IP and elastic optical networks to accommodate data center services. This study extends that work to consider resource integration that breaks the limits of individual network devices and thereby enhances resource utilization. We propose a novel multi-stratum resources integration (MSRI) architecture based on network function virtualization in a software defined elastic data center optical interconnect, and introduce a resource integrated mapping (RIM) scheme for MSRI within the proposed architecture. MSRI can accommodate data center services through resource integration when a single function or resource is too scarce to provision the services, and enables globally integrated optimization of optical network and application resources. The overall feasibility and efficiency of the proposed architecture are experimentally verified on the control plane of an OpenFlow-based enhanced software defined networking (eSDN) testbed. The performance of the RIM scheme under a heavy traffic load is also quantitatively evaluated on the MSRI architecture in terms of path blocking probability, provisioning latency, and resource utilization, and compared with other provisioning schemes.
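
Path blocking probability, one of the metrics evaluated above, is classically approximated for a single link by the Erlang-B formula. The paper measures blocking experimentally on a testbed; the sketch below is only an illustrative model with hypothetical offered load and channel counts, showing why blocking falls as spectrum resources grow:

```python
# Erlang-B blocking probability via its standard recurrence:
# B(0, a) = 1;  B(n, a) = a*B(n-1, a) / (n + a*B(n-1, a)),
# where a is the offered load in Erlangs and n the number of channels.

def erlang_b(channels, offered_load):
    b = 1.0
    for n in range(1, channels + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

# fixed load of 8 Erlangs: blocking drops as channel resources increase
for c in (5, 10, 20):
    print(c, round(erlang_b(c, offered_load=8.0), 4))
```

The monotone decrease with channel count is the single-link analogue of the utilization gains the MSRI architecture seeks by pooling resources across strata.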

  1. Amarillo National Resource Center for plutonium. Work plan progress report, November 1, 1995--January 31, 1996

    Energy Technology Data Exchange (ETDEWEB)

    Cluff, D. [Texas Tech Univ., Lubbock, TX (United States)

    1996-04-01

    The Center operates under a cooperative agreement between DOE and the State of Texas and is directed and administered by an education consortium. Its programs include developing peaceful uses for the materials removed from dismantled weapons, studying effects of nuclear materials on environment and public health, remedying contaminated soils and water, studying storage, disposition, and transport of Pu, HE, and other hazardous materials removed from weapons, providing research and counsel to US in carrying out weapons reductions in cooperation with Russia, and conducting a variety of education and training programs.

  2. GOBLET: The Global Organisation for Bioinformatics Learning, Education and Training

    Science.gov (United States)

    Atwood, Teresa K.; Bongcam-Rudloff, Erik; Brazas, Michelle E.; Corpas, Manuel; Gaudet, Pascale; Lewitter, Fran; Mulder, Nicola; Palagi, Patricia M.; Schneider, Maria Victoria; van Gelder, Celia W. G.

    2015-01-01

    In recent years, high-throughput technologies have brought big data to the life sciences. The march of progress has been rapid, leaving in its wake a demand for courses in data analysis, data stewardship, computing fundamentals, etc., a need that universities have not yet been able to satisfy—paradoxically, many are actually closing “niche” bioinformatics courses at a time of critical need. The impact of this is being felt across continents, as many students and early-stage researchers are being left without appropriate skills to manage, analyse, and interpret their data with confidence. This situation has galvanised a group of scientists to address the problems on an international scale. For the first time, bioinformatics educators and trainers across the globe have come together to address common needs, rising above institutional and international boundaries to cooperate in sharing bioinformatics training expertise, experience, and resources, aiming to put ad hoc training practices on a more professional footing for the benefit of all. PMID:25856076

  3. Combining medical informatics and bioinformatics toward tools for personalized medicine.

    Science.gov (United States)

    Sarachan, B D; Simmons, M K; Subramanian, P; Temkin, J M

    2003-01-01

    Key bioinformatics and medical informatics research areas need to be identified to advance knowledge and understanding of disease risk factors and molecular disease pathology in the 21st century toward new diagnoses, prognoses, and treatments. Three high-impact informatics areas are identified: predictive medicine (to identify significant correlations within clinical data using statistical and artificial intelligence methods), along with pathway informatics and cellular simulations (that combine biological knowledge with advanced informatics to elucidate molecular disease pathology). Initial predictive models have been developed for a pilot study in Huntington's disease. An initial bioinformatics platform has been developed for the reconstruction and analysis of pathways, and work has begun on pathway simulation. A bioinformatics research program has been established at GE Global Research Center as an important technology toward next-generation medical diagnostics. We anticipate that 21st-century medical research will be a combination of informatics tools with traditional biology wet lab research, and that this will translate to increased use of informatics techniques in the clinic.

  4. Establishment of the South-Eastern Norway Regional Health Authority Resource Center for Children with Prenatal Alcohol/Drug Exposure

    Directory of Open Access Journals (Sweden)

    Gro C. C. Løhaugen

    2015-01-01

    Full Text Available. This paper presents a new initiative in the South-Eastern Health Region of Norway to establish a regional resource center focusing on services for children and adolescents aged 2-18 years with prenatal exposure to alcohol or other drugs. In Norway, the prevalence of fetal alcohol syndrome (FAS) is not known but has been estimated at between 1 and 2 children per 1000 births, while the prevalence of prenatal exposure to illicit drugs is unknown. The resource center is the first of its kind in Scandinavia and will have three main objectives: (1) provide hospital staff, community health and child welfare personnel, and special educators with information, educational courses, and seminars focused on the identification, diagnosis, and treatment of children with a history of prenatal alcohol/drug exposure; (2) provide specialized health services, such as diagnostic services and intervention planning, for children referred from hospitals in the South-Eastern Health Region of Norway; and (3) initiate multicenter studies focusing on the diagnostic process and the evaluation of interventions.

  5. U.S. Geological Survey (USGS) Earth Resources Observation and Science (EROS) Center-fiscal year 2010 annual report

    Science.gov (United States)

    Nelson, Janice S.

    2011-01-01

    The Earth Resources Observation and Science (EROS) Center is a U.S. Geological Survey (USGS) facility focused on providing science and imagery to better understand our Earth. The work of the Center is shaped by the earth sciences and the missions of our stakeholders, and is implemented through strong program and project management and the application of state-of-the-art information technologies. Fundamentally, EROS contributes to the understanding of a changing Earth through 'research to operations' activities that include developing, implementing, and operating remote-sensing-based terrestrial monitoring capabilities needed to address interdisciplinary science and applications objectives at all levels, both nationally and internationally. The Center's programs and projects continually strive to meet, and where possible exceed, the changing needs of the USGS, the Department of the Interior, our Nation, and international constituents. The Center's multidisciplinary staff uses their unique expertise in remote sensing science and technologies to conduct basic and applied research, data acquisition, systems engineering, information access and management, and archive preservation to address the Nation's most critical needs. Of particular note is the role of EROS as the primary provider of Landsat data, the longest comprehensive global land Earth observation record ever collected. This report is intended to provide an overview of the scientific and engineering achievements and illustrate the range and scope of the activities and accomplishments at EROS throughout fiscal year (FY) 2010. Additional information concerning the scientific, engineering, and operational achievements can be obtained from the scientific papers and other documents published by EROS staff or by visiting our web site at http://eros.usgs.gov. We welcome comments and follow-up questions on any aspect of this Annual Report and invite any of our customers or partners to contact us at their convenience.

  6. Bioinformatic tools for PCR Primer design

    African Journals Online (AJOL)

    ES

    Bioinformatics is an emerging scientific discipline that uses information ... complex biological questions. ... and computer programs for various purposes of primer ..... polymerase chain reaction: Human Immunodeficiency Virus 1 model studies.
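
One calculation nearly every primer-design program performs is a melting-temperature (Tm) estimate. A minimal sketch using the Wallace rule, Tm = 2(A+T) + 4(G+C), a rough guide for short (~14-20 nt) primers; the primer sequence below is hypothetical, and real tools refine this with nearest-neighbour thermodynamics:

```python
# Wallace-rule melting temperature and GC content for a short primer.

def wallace_tm(primer):
    # Tm (deg C) ~= 2*(A+T) + 4*(G+C), valid only for short oligos
    primer = primer.upper()
    at = primer.count("A") + primer.count("T")
    gc = primer.count("G") + primer.count("C")
    return 2 * at + 4 * gc

def gc_content(primer):
    # fraction of G/C bases; ~0.4-0.6 is a common design target
    primer = primer.upper()
    return (primer.count("G") + primer.count("C")) / len(primer)

p = "AGCGGATAACAATTTCAC"   # hypothetical 18-mer
print(wallace_tm(p), round(gc_content(p), 2))
```

Primer-design programs combine such per-primer metrics with pairwise checks (matched Tm between forward and reverse primers, self-complementarity, product size) to rank candidate pairs.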

  7. Challenge: A Multidisciplinary Degree Program in Bioinformatics

    Directory of Open Access Journals (Sweden)

    Mudasser Fraz Wyne

    2006-06-01

    Full Text Available. Bioinformatics is a new field that is poorly served by any of the traditional science programs in biology, computer science, or biochemistry. A rapidly evolving discipline, bioinformatics has emerged from experimental molecular biology and biochemistry as well as from the artificial intelligence, database, pattern recognition, and algorithms disciplines of computer science. While institutions are responding to this increased demand by establishing graduate programs in bioinformatics, entrance barriers for these programs are high, largely due to the significant prerequisite knowledge required in both biochemistry and computer science. Although many schools currently have or are proposing graduate programs in bioinformatics, few are actually developing new undergraduate programs. In this paper I explore the blend of a multidisciplinary approach, discuss the response of academia, and highlight challenges faced by this emerging field.

  8. Deciphering psoriasis. A bioinformatic approach.

    Science.gov (United States)

    Melero, Juan L; Andrades, Sergi; Arola, Lluís; Romeu, Antoni

    2018-02-01

    Psoriasis is an immune-mediated, inflammatory and hyperproliferative disease of the skin and joints. The cause of psoriasis is still unknown. The fundamental feature of the disease is the hyperproliferation of keratinocytes and the recruitment of cells from the immune system in the region of the affected skin, which leads to deregulation of many well-known gene expressions. Based on data mining and bioinformatic scripting, here we show a new dimension of the effect of psoriasis at the genomic level. Using our own pipeline of scripts in Perl and MySQL and based on the freely available NCBI Gene Expression Omnibus (GEO) database: DataSet Record GDS4602 (Series GSE13355), we explore the extent of the effect of psoriasis on gene expression in the affected tissue. We give greater insight into the effects of psoriasis on the up-regulation of some genes in the cell cycle (CCNB1, CCNA2, CCNE2, CDK1) or the dynamin system (GBPs, MXs, MFN1), as well as the down-regulation of typical antioxidant genes (catalase, CAT; superoxide dismutases, SOD1-3; and glutathione reductase, GSR). We also provide a complete list of the human genes and how they respond in a state of psoriasis. Our results show that psoriasis affects all chromosomes and many biological functions. If we further consider the stable and mitotically inheritable character of the psoriasis phenotype, and the influence of environmental factors, then it seems that psoriasis has an epigenetic origin. This fits well with the strong hereditary character of the disease as well as its complex genetic background. Copyright © 2017 Japanese Society for Investigative Dermatology. Published by Elsevier B.V. All rights reserved.
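
Up- and down-regulation calls like those described above ultimately rest on comparisons such as the mean log2 fold change of a gene between lesional and normal samples. A minimal sketch with invented expression values (not data from GDS4602), where a positive value marks up-regulation:

```python
# Mean log2 fold change between two sample groups (invented values).
import math

def log2_fold_change(case, control):
    mean = lambda xs: sum(xs) / len(xs)
    return math.log2(mean(case) / mean(control))

# hypothetical normalized intensities for a cell-cycle gene
lesional = [820.0, 910.0, 770.0]
normal = [210.0, 190.0, 230.0]
fc = log2_fold_change(lesional, normal)
print(round(fc, 2))  # positive => up-regulated in lesional skin
```

Full differential-expression pipelines pair this effect size with a significance test and multiple-testing correction before declaring a gene deregulated.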

  9. Concepts and introduction to RNA bioinformatics

    DEFF Research Database (Denmark)

    Gorodkin, Jan; Hofacker, Ivo L.; Ruzzo, Walter L.

    2014-01-01

    RNA bioinformatics and computational RNA biology have emerged from implementing methods for predicting the secondary structure of single sequences. The field has evolved to exploit multiple sequences to take evolutionary information into account, such as compensating (and structure-preserving) base changes ... for interactions between RNA and proteins. Here, we introduce the basic concepts of predicting RNA secondary structure relevant to the further analyses of RNA sequences. We also provide pointers to methods addressing various aspects of RNA bioinformatics and computational RNA biology.
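
The single-sequence structure prediction the field grew out of can be illustrated with the Nussinov maximum base-pairing algorithm, the textbook precursor of the energy-model methods used in practice (e.g. in the Vienna RNA package). The sequence below is a toy hairpin, not an example from the chapter:

```python
# Nussinov dynamic program: maximum number of nested base pairs,
# with a minimum hairpin loop of `min_loop` unpaired bases.

PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
         ("G", "U"), ("U", "G")}          # Watson-Crick plus wobble

def nussinov_pairs(seq, min_loop=3):
    n = len(seq)
    dp = [[0] * n for _ in range(n)]      # dp[i][j]: best pairs in seq[i..j]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                   # case 1: j unpaired
            for k in range(i, j - min_loop):      # case 2: j pairs with k
                if (seq[k], seq[j]) in PAIRS:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov_pairs("GGGAAAUCC"))  # 3: two G-C pairs and one G-U pair
```

Energy-based folding replaces the "+1 per pair" score with experimentally derived stacking and loop energies, but keeps the same O(n^3) recursion structure.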

  10. Empowering patients of a mental rehabilitation center in a low-resource context: a Moroccan experience as a case study.

    Science.gov (United States)

    Khabbache, Hicham; Jebbar, Abdelhak; Rania, Nadia; Doucet, Marie-Chantal; Watfa, Ali Assad; Candau, Joël; Martini, Mariano; Siri, Anna; Brigo, Francesco; Bragazzi, Nicola Luigi

    2017-01-01

    Mental, neurological and substance use (MNS) disorders represent a major source of disability and premature mortality worldwide. However, in developing countries patients with MNS disorders are often poorly managed and treated, particularly in marginalized, impoverished areas where the mental health gap and the treatment gap can reach 90%. Efforts should be made to promote help by making mental health care more accessible. In this article, we address the challenges that psychological and psychiatric services have to face in a low-resource context, taking our experience at a Moroccan rehabilitation center as a case study. A sample of 60 patients was interviewed using a semi-structured questionnaire during the period 2014-2015. The questionnaire investigated the patients' reactions and feelings toward the rehabilitation program, and their perceived psychological status and mental improvement, if any. Interviews were then transcribed and processed using ATLAS.ti V.7.0 qualitative analysis software. Frequency and co-occurrence analyses were carried out. Despite approximately 30 million inhabitants within the working age group, Morocco suffers from a shortage of specialized health workers. Our ethnographic observations show that psychiatric treatment can be ensured, notwithstanding these hurdles, if a public health perspective is assumed. In resource-limited settings, working in the field of mental health means putting oneself on the line, exposing oneself to new experiences, and reorganizing one's own skills and expertise. In the present article, we have used our clinical experience at a rehabilitation center in Fes as a case study and have shown how to use peer therapy to overcome the drawbacks encountered daily in a setting of limited resources.

  11. Engaging Community Stakeholders to Evaluate the Design, Usability, and Acceptability of a Chronic Obstructive Pulmonary Disease Social Media Resource Center

    Science.gov (United States)

    Chaney, Beth; Chaney, Don; Paige, Samantha; Payne-Purvis, Caroline; Tennant, Bethany; Walsh-Childers, Kim; Sriram, PS; Alber, Julia

    2015-01-01

    Background: Patients with chronic obstructive pulmonary disease (COPD) often report inadequate access to comprehensive patient education resources. Objective: The purpose of this study was to incorporate community-engagement principles within a mixed-method research design to evaluate the usability and acceptability of a self-tailored social media resource center for medically underserved patients with COPD. Methods: A multiphase sequential design (qual → QUANT → quant + QUAL) was incorporated into the current study, whereby a small-scale qualitative (qual) study informed the design of a social media website prototype that was tested with patients during a computer-based usability study (QUANT). To identify usability violations and determine whether or not patients found the website prototype acceptable for use, each patient was asked to complete an 18-item website usability and acceptability questionnaire, as well as a retrospective, in-depth, semistructured interview (quant + QUAL). Results: The majority of medically underserved patients with COPD (n=8, mean 56 years, SD 7) found the social media website prototype to be easy to navigate and relevant to their self-management information needs. Mean responses on the 18-item website usability and acceptability questionnaire were very high on a scale of 1 (strongly disagree) to 5 (strongly agree) (mean 4.72, SD 0.33). However, the majority of patients identified several usability violations related to the prototype's information design, interactive capabilities, and navigational structure. Specifically, 6 out of 8 (75%) patients struggled to create a log-in account to access the prototype, and 7 out of 8 patients (88%) experienced difficulty posting and replying to comments on an interactive discussion forum. Conclusions: Patient perceptions of most social media website prototype features (eg, clickable picture-based screenshots of videos, comment tools) were largely positive. Mixed-method stakeholder feedback was

  12. The Radiation Safety Information Computational Center (RSICC): A Resource for Nuclear Science Applications

    International Nuclear Information System (INIS)

    Kirk, Bernadette Lugue

    2009-01-01

    The Radiation Safety Information Computational Center (RSICC) has been in existence since 1963. RSICC collects, organizes, evaluates and disseminates technical information (software and nuclear data) involving the transport of neutral and charged particle radiation, and shielding and protection from the radiation associated with: nuclear weapons and materials, fission and fusion reactors, outer space, accelerators, medical facilities, and nuclear waste management. RSICC serves over 12,000 scientists and engineers from about 100 countries. An important activity of RSICC is its participation in international efforts on computational and experimental benchmarks. An example is the Shielding Integral Benchmarks Archival Database (SINBAD), which includes shielding benchmarks for fission, fusion and accelerators. RSICC is funded by the United States Department of Energy, Department of Homeland Security and Nuclear Regulatory Commission.

  13. Design of SCADA water resource management control center by a bi-objective redundancy allocation problem and particle swarm optimization

    International Nuclear Information System (INIS)

    Dolatshahi-Zand, Ali; Khalili-Damghani, Kaveh

    2015-01-01

    SCADA is an essential system for controlling critical facilities in large cities. SCADA systems are used in several sectors, including water resource management, power plants, electricity distribution, traffic control, and gas distribution. A failure of SCADA can lead to crisis, so designing a SCADA system that achieves high reliability under a limited budget and other constraints is essential. In this paper, a bi-objective redundancy allocation problem (RAP) is proposed to design Tehran's SCADA water resource management control center. Reliability maximization and cost minimization are considered concurrently. Because the proposed RAP is a non-linear multi-objective mathematical program, exact methods cannot handle it efficiently, so a multi-objective particle swarm optimization (MOPSO) algorithm is designed to solve it. Several features, such as dynamic parameter tuning, efficient constraint handling, and Pareto gridding, are built into the proposed MOPSO. Its results are compared with those of an efficient ε-constraint method: several non-dominated designs of the SCADA system are generated using both methods, and comparison metrics based on the accuracy and diversity of the Pareto front are calculated for each. The proposed MOPSO algorithm shows better performance. Finally, to choose a practical design, the TOPSIS algorithm is used to prune the Pareto front. - Highlights: • A multi-objective redundancy allocation problem (MORAP) is proposed to design the SCADA system. • Multi-objective particle swarm optimization (MOPSO) is proposed to solve the MORAP. • An efficient ε-constraint method is adapted to solve the MORAP. • Non-dominated solutions on the Pareto front of the MORAP are generated by both methods. • Several multi-objective metrics are calculated to compare the performance of the methods.
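The final pruning step mentioned in this record, choosing one practical design from a Pareto front with TOPSIS, can be sketched in a few lines. The following is an illustrative implementation only, not the authors' code; the (reliability, cost) designs and the equal weights are invented for the example.

```python
import math

def topsis(designs, weights, benefit):
    """Rank alternatives by closeness to the ideal point.

    designs: list of criterion vectors; benefit[j] is True when
    criterion j should be maximized (e.g. reliability), False when
    it should be minimized (e.g. cost).
    """
    n = len(designs[0])
    # vector-normalize each criterion column, then apply weights
    norms = [math.sqrt(sum(d[j] ** 2 for d in designs)) for j in range(n)]
    v = [[weights[j] * d[j] / norms[j] for j in range(n)] for d in designs]
    ideal = [(max if benefit[j] else min)(r[j] for r in v) for j in range(n)]
    worst = [(min if benefit[j] else max)(r[j] for r in v) for j in range(n)]
    # closeness coefficient: 1 = ideal point, 0 = anti-ideal point
    return [math.dist(r, worst) / (math.dist(r, ideal) + math.dist(r, worst))
            for r in v]

# Invented non-dominated (reliability, cost) designs on a Pareto front
front = [(0.90, 120.0), (0.95, 150.0), (0.99, 300.0)]
scores = topsis(front, weights=[0.5, 0.5], benefit=[True, False])
best = max(range(len(front)), key=scores.__getitem__)
```

With equal weights, the cheap moderately reliable design wins here; shifting weight toward reliability would move the choice along the front.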

  14. Postoperative Central Nervous System Infection After Neurosurgery in a Modernized, Resource-Limited Tertiary Neurosurgical Center in South Asia.

    Science.gov (United States)

    Chidambaram, Swathi; Nair, M Nathan; Krishnan, Shyam Sundar; Cai, Ling; Gu, Weiling; Vasudevan, Madabushi Chakravarthy

    2015-12-01

    Postoperative central nervous system infections (PCNSIs) are rare but serious complications after neurosurgery. The purpose of this study was to examine the prevalence and causative pathogens of PCNSIs at a modernized, resource-limited neurosurgical center in South Asia. A retrospective analysis was conducted of the medical records of all 363 neurosurgical cases performed between June 1, 2012, and June 30, 2013, at a neurosurgical center in South Asia. Data from all operative neurosurgical cases during the 13-month period were included. Cerebrospinal fluid (CSF) analysis indicated that 71 of the 363 surgical cases had low CSF glucose or CSF leukocytosis. These 71 cases were categorized as PCNSIs. The PCNSIs with positive CSF cultures (9.86%) all had gram-negative bacteria with Pseudomonas aeruginosa (n = 5), Escherichia coli (n = 1), or Klebsiella pneumoniae (n = 1). The data suggest a higher rate of death (P = 0.031), a higher rate of CSF leak (P < 0.001), and a higher rate of cranial procedures (P < 0.001) among the infected patients and a higher rate of CSF leak among the patients with culture-positive infections (P = 0.038). This study summarizes the prevalence, causative organism of PCNSI, and antibiotic usage for all of the neurosurgical cases over a 13-month period in a modernized yet resource-limited neurosurgical center located in South Asia. The results from this study highlight the PCNSI landscape in an area of the world that is often underreported in the neurosurgical literature because of the paucity of clinical neurosurgical research undertaken there. This study shows an increasing prevalence of gram-negative organisms in CSF cultures from PCNSIs, which supports a trend in the recent literature of increasing gram-negative bacillary meningitis. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. Readability of Online Patient Educational Resources Found on NCI-Designated Cancer Center Web Sites.

    Science.gov (United States)

    Rosenberg, Stephen A; Francis, David; Hullett, Craig R; Morris, Zachary S; Fisher, Michael M; Brower, Jeffrey V; Bradley, Kristin A; Anderson, Bethany M; Bassetti, Michael F; Kimple, Randall J

    2016-06-01

    The NIH and the Department of Health & Human Services recommend that online patient information (OPI) be written at a sixth-grade level. We used a panel of readability analyses to assess OPI from NCI-Designated Cancer Center (NCIDCC) Web sites. Cancer.gov was used to identify 68 NCIDCC Web sites from which we collected both general OPI and OPI specific to breast, prostate, lung, and colon cancers. This text was analyzed by 10 commonly used readability tests: the New Dale-Chall Readability Formula, Flesch Reading Ease scale, Flesch-Kincaid Grade Level, FORCAST scale, Fry Readability Graph, Simple Measure of Gobbledygook test, Gunning Frequency of Gobbledygook index, New Fog Count, Raygor Readability Estimate Graph, and Coleman-Liau Index. We tested the hypothesis that the readability of NCIDCC OPI was at the sixth-grade level. Secondary analyses compared the readability of OPI between comprehensive and noncomprehensive centers, by region, and against OPI produced by the American Cancer Society (ACS). A mean of 30,507 words from 40 comprehensive and 18 noncomprehensive NCIDCCs was analyzed (7 nonclinical centers and 3 without appropriate OPI were excluded). Using a composite grade-level score, the mean readability score of 12.46 (ie, college level; 95% CI, 12.13-12.79) was significantly greater than the target grade level of 6 (middle school) across the readability metrics (P<.05). ACS OPI uses easier language, at the seventh- to ninth-grade level, across all tests (P<.01). OPI from NCIDCC Web sites is more complex than recommended for the average patient. Copyright © 2016 by the National Comprehensive Cancer Network.
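One of the ten tests cited in this record, the Flesch-Kincaid Grade Level, illustrates how such readability formulas work: they combine average sentence length with average syllables per word. The sketch below uses a crude vowel-group syllable heuristic, so its scores only approximate those of dedicated readability software.

```python
import re

def syllables(word):
    """Crude syllable estimate: count vowel groups, drop a silent trailing 'e'."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def fk_grade(text):
    """Flesch-Kincaid Grade Level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syl = sum(syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syl / len(words) - 15.59

# short, simple sentences score far below the sixth-grade target
print(fk_grade("The cat sat on the mat. It was warm."))
```

Long sentences full of polysyllabic words, the pattern the study found on cancer center sites, drive the grade level into the college range.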

  16. Bioinformatics as a pedagogical resource for the biological sciences course at the State University of Ceará (UECE), Fortaleza, Ceará State

    Directory of Open Access Journals (Sweden)

    Howard Lopes Ribeiro Junior

    2012-01-01

    The objective of this study was to apply and evaluate theoretical and practical Bioinformatics content with students of the Licentiate degree in Biological Sciences enrolled in the General Genetics and Molecular Biology courses at the State University of Ceará in 2010. The theoretical component, tested previously (RIBEIRO JUNIOR, 2011), consisted of a presentation of historical, basic, and specific concepts underlying current advances in molecular biology research. The practical exercise, 'Building a Molecular Phylogeny in Silico', was designed to put those concepts to work using the National Center for Biotechnology Information (NCBI) database and its sequence-alignment tool, BLASTp (Basic Local Alignment Search Tool Protein-Protein). The positive results obtained from the introductory Bioinformatics lecture and the practical activities were evidenced by the characterization of molecular phylogenies for the hypothetical sequences proposed for alignment, and by the students' own accounts. These activities were considered essential for students to experience, step by step, an emerging area of the life sciences: Bioinformatics.
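BLASTp itself is a heuristic, seeded database search, but the local-alignment scoring it approximates can be shown with a textbook Smith-Waterman sketch like the one below. The match/mismatch/gap values are illustrative, not a real substitution matrix such as BLOSUM62, and the peptides are made up.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Return the best local alignment score of sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]  # scoring matrix, zero-clamped
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

# two identical 4-residue peptides score 4 * match = 8
assert smith_waterman("MKVL", "MKVL") == 8
# completely dissimilar sequences clamp to zero
assert smith_waterman("MKVL", "XXXX") == 0
```

Pairwise scores like these are the raw material from which distance matrices, and ultimately phylogenies, are built.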

  17. Final Report: Phase II Nevada Water Resources Data, Modeling, and Visualization (DMV) Center

    Energy Technology Data Exchange (ETDEWEB)

    Jackman, Thomas [Desert Research Institute; Minor, Timothy [Desert Research Institute; Pohll, Gregory [Desert Research Institute

    2013-07-22

    Water is unquestionably a critical resource throughout the United States. In the semi-arid West -- an area stressed by a growing human population and the sprawl of the built environment -- water is the most important limiting resource, so science must understand the factors that affect its availability and distribution. To sustain growing consumptive demand, science needs to translate that understanding into reliable, robust predictions of availability under weather conditions that may be average or extreme. These predictions are needed to support current and long-term planning. Much as weather forecasts and climate predictions do, water predictions over short and long temporal scales can inform resource strategy, governmental policy, and municipal infrastructure decisions, which are closely tied to natural climate variability and anthropogenic climate change. Changes in seasonal and annual temperature, precipitation, snowmelt, and runoff affect the distribution of water over large temporal and spatial scales, which in turn affects flood risk and groundwater recharge. Anthropogenic influences and impacts increase the complexity and urgency of the challenge. The goal of this project has been to develop a decision-support framework for data acquisition, digital modeling, and 3D visualization. This integrated framework consists of tools for compiling, discovering, and projecting our understanding of the processes that control the availability and distribution of water. It is intended to support analysis of the complex interactions among the processes that affect water supply, from controlled availability to scarcity or deluge. The framework enables DRI to promote excellence in water resource management, particularly within the Lake Tahoe basin, and could in principle be replicated for other watersheds throughout the United States. Phase II of this project builds upon the research conducted during

  18. Detailed prospective stages of inventory of U resources in Mentawa and Seruyan, Center of Kalimantan

    International Nuclear Information System (INIS)

    Ramadanus; Sudjiman; Agus, S.

    1996-01-01

    An indication of U mineralization was found in the Mentawa River in biotite granite (1,500 cps) and in metasiltstone boulders (500-15,000 cps). This detailed examination was carried out to gather geological and U-mineralization information and to assess potential U resources. The methods used were geological mapping, radiometric measurement, and peeling of selected outcrops; samples were taken from outcrops, anomalous boulders, and stream sediment (mud and heavy minerals). The fieldwork was supported by laboratory analyses in the form of petrography, mineragraphy, autoradiography, and total and mobile U content. The resulting stratigraphy of the Mentawa and Seruyan area consists of schist and tonalite. Outcropping U mineralization was found in schist, generally as uraninite filling SSE-NNW, subvertical-to-vertical-dipping structures. U mineralization in boulders occurs as uraninite, gummite, and autunite associated with tourmaline. These U mineralizations were found in the Mentawa Satu River and the upper reaches of the Rengka River, with a distribution area of about 7 km2

  19. Fort Collins Science Center Ecosystem Dynamics branch--interdisciplinary research for addressing complex natural resource issues across landscapes and time

    Science.gov (United States)

    Bowen, Zachary H.; Melcher, Cynthia P.; Wilson, Juliette T.

    2013-01-01

    The Ecosystem Dynamics Branch of the Fort Collins Science Center offers an interdisciplinary team of talented and creative scientists with expertise in biology, botany, ecology, geology, biogeochemistry, physical sciences, geographic information systems, and remote-sensing, for tackling complex questions about natural resources. As demand for natural resources increases, the issues facing natural resource managers, planners, policy makers, industry, and private landowners are increasing in spatial and temporal scope, often involving entire regions, multiple jurisdictions, and long timeframes. Needs for addressing these issues include (1) a better understanding of biotic and abiotic ecosystem components and their complex interactions; (2) the ability to easily monitor, assess, and visualize the spatially complex movements of animals, plants, water, and elements across highly variable landscapes; and (3) the techniques for accurately predicting both immediate and long-term responses of system components to natural and human-caused change. The overall objectives of our research are to provide the knowledge, tools, and techniques needed by the U.S. Department of the Interior, state agencies, and other stakeholders in their endeavors to meet the demand for natural resources while conserving biodiversity and ecosystem services. Ecosystem Dynamics scientists use field and laboratory research, data assimilation, and ecological modeling to understand ecosystem patterns, trends, and mechanistic processes. This information is used to predict the outcomes of changes imposed on species, habitats, landscapes, and climate across spatiotemporal scales. The products we develop include conceptual models to illustrate system structure and processes; regional baseline and integrated assessments; predictive spatial and mathematical models; literature syntheses; and frameworks or protocols for improved ecosystem monitoring, adaptive management, and program evaluation. 

  20. Nuclear structure and radioactive decay resources at the US National Nuclear Data Center

    International Nuclear Information System (INIS)

    Sonzogni, A.A.; Burrows, T.W.; Pritychenko, B.; Tuli, J.K.; Winchell, D.F.

    2008-01-01

    The National Nuclear Data Center has a long tradition of evaluating nuclear structure and decay data and of offering tools to assist nuclear science research and applications. With these tools, users can obtain recommended values for nuclear structure and radioactive decay observables, along with links to the relevant articles. The main databases and tools are ENSDF, NSR, NuDat, and the new ENDF decay data sub-library. The Evaluated Nuclear Structure Data File (ENSDF) stores recommended nuclear structure and decay data for all nuclei, covering properties such as nuclear level energies, spins and parities, half-lives and decay modes; the energies and intensities of the various types of nuclear radiation; and decay modes and their probabilities. The Nuclear Science References (NSR) is a bibliographic database containing nearly 200,000 nuclear science articles indexed by content; about 4,000 are added each year, covering 80 journals as well as conference proceedings and laboratory reports. NuDat is a software product with two main goals: to present nuclear structure and decay information from ENSDF in a user-friendly way, and to let users execute complex searches over the wealth of data contained in ENSDF. The recently released ENDF/B-VII.0 contains a decay data sub-library that has been derived from ENSDF. The way all these databases and tools are offered to the public has improved dramatically thanks to advances in information technology

  1. Iowa State University's undergraduate minor, online graduate certificate and resource center in NDE

    Science.gov (United States)

    Bowler, Nicola; Larson, Brian F.; Gray, Joseph N.

    2014-02-01

    Nondestructive evaluation is a 'niche' subject that is not yet offered as an undergraduate or graduate major in the United States. The undergraduate minor in NDE offered within the College of Engineering at Iowa State University (ISU) provides a unique opportunity for aspiring undergraduate engineers to obtain a qualification in the multi-disciplinary subject of NDE. The minor requires 16 credits of course work, within which a core course and laboratory in NDE are compulsory. The industrial sponsors of Iowa State's Center for Nondestructive Evaluation, among others, strongly support the NDE minor and actively recruit students from this pool. Since 2007 the program has graduated 10 students per year, and enrollment is rising. In 2011, ISU's College of Engineering established an online graduate certificate in NDE, accessible not only to campus-based students but also to practicing engineers via the web. The certificate teaches the fundamentals of three major NDE techniques: eddy-current, ultrasonic, and X-ray methods. This paper describes the structure of these programs and plans to develop an online, coursework-only Master of Engineering in NDE and a thesis-based Master of Science degree in NDE.

  2. RESOURCE TRAINING AND METHODOLOGICAL CENTER FOR THE TRAINING OF PEOPLE WITH DISABILITIES: EXPERIENCE AND DIRECTION OF DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    A. A. Fedorov

    2018-01-01

    Introduction: This article addresses a new and highly relevant direction in higher education: the development of inclusive education. It presents the experience of creating a resource training and methodological center (RTMC) at Minin University in 2017, describes the center's activities that year and their results, and outlines the role of the RTMC in developing an inclusive culture. Materials and methods: The article draws on an analysis of the literature by Russian and foreign authors, and on monitoring data on the state of inclusive higher education collected under State Contract No. 05.020.11 007 of 07.06.2016 for the project 'Monitoring Information and Analytical Support of the Activities of Regional Resource Centers for Higher Education for Disabled People'. Results: In analyzing the RTMC's work, the authors identify the problems that arose during the project and suggest ways to solve them. They see the further development of the RTMC in new forms and mechanisms of interdepartmental, interregional, and inter-institutional cooperation, aimed at coherent and effective action by all participants supporting inclusion in higher education, taking into account the educational needs of applicants and the labor market across the assigned territory. As a special mission of the RTMC, the authors see the management of the development of an inclusive culture at the university. The system of higher education is considered an instrument for fulfilling the social mandate to form a generation that tolerantly and naturally accepts inclusion in all spheres of life. Discussion and conclusion: The role of the resource training and methodological center in the development of inclusive higher education is determined by the identification

  3. Screening Genetic Resources of Capsicum Peppers in Their Primary Center of Diversity in Bolivia and Peru.

    Science.gov (United States)

    van Zonneveld, Maarten; Ramirez, Marleni; Williams, David E; Petz, Michael; Meckelmann, Sven; Avila, Teresa; Bejarano, Carlos; Ríos, Llermé; Peña, Karla; Jäger, Matthias; Libreros, Dimary; Amaya, Karen; Scheldeman, Xavier

    2015-01-01

    For most crops, like Capsicum, their diversity remains under-researched for traits of interest for food, nutrition and other purposes. A small investment in screening this diversity for a wide range of traits is likely to reveal many traditional varieties with distinguished values. One objective of this study was to demonstrate, with Capsicum as model crop, the application of indicators of phenotypic and geographic diversity as effective criteria for selecting promising genebank accessions for multiple uses from crop centers of diversity. A second objective was to evaluate the expression of biochemical and agromorphological properties of the selected Capsicum accessions in different conditions. Four steps were involved: 1) Develop the necessary diversity by expanding genebank collections in Bolivia and Peru; 2) Establish representative subsets of ~100 accessions for biochemical screening of Capsicum fruits; 3) Select promising accessions for different uses after screening; and 4) Examine how these promising accessions express biochemical and agromorphological properties when grown in different environmental conditions. The Peruvian Capsicum collection now contains 712 accessions encompassing all five domesticated species (C. annuum, C. chinense, C. frutescens, C. baccatum, and C. pubescens). The collection in Bolivia now contains 487 accessions, representing all five domesticates plus four wild taxa (C. baccatum var. baccatum, C. caballeroi, C. cardenasii, and C. eximium). Following the biochemical screening, 44 Bolivian and 39 Peruvian accessions were selected as promising, representing wide variation in levels of antioxidant capacity, capsaicinoids, fat, flavonoids, polyphenols, quercetins, tocopherols, and color. In Peru, 23 promising accessions performed well in different environments, while each of the promising Bolivian accessions only performed well in a certain environment. Differences in Capsicum diversity and local contexts led to distinct outcomes in

  4. [Establish and manage a National Resource Center for plutonium, Quarterly report, April 1, 1995--June 30, 1995

    Energy Technology Data Exchange (ETDEWEB)

    Mulder, R.

    1995-06-27

    The initial phase of the Plutonium Information Resource is well under way. Board members developed linkages with Russian scientists and engineers and obtained names of technical team members. Nuclear proposals were reviewed by the Nuclear Review Group, and the proposals were modified to incorporate the review group's comments. Portions of the proposals were approved by the Governing Board. Proposals for education and outreach were reviewed by the Education Proposal Review Group, considered by the Governing Board and approved. The Senior Technical Review Group met to consider the R&D programs associated with fissile materials disposal. A newsletter was published. Progress continued on the high explosives demonstration project, on site-specific environmental work, and the multiattribute utility analysis. Center offices in Amarillo were furnished, equipment was purchased, and the lease was modified.

  5. U.S. Department of Energy Regional Resource Centers Report: State of the Wind Industry in the Regions

    Energy Technology Data Exchange (ETDEWEB)

    Baranowski, Ruth [National Renewable Energy Lab. (NREL), Golden, CO (United States)]; Oteri, Frank [National Renewable Energy Lab. (NREL), Golden, CO (United States)]; Baring-Gould, Ian [National Renewable Energy Lab. (NREL), Golden, CO (United States)]; Tegen, Suzanne [National Renewable Energy Lab. (NREL), Golden, CO (United States)]

    2016-03-01

    The wind industry and the U.S. Department of Energy (DOE) are addressing technical challenges to increasing wind energy's contribution to the national grid (such as reducing turbine costs and increasing energy production and reliability), and they recognize that public acceptance issues can be challenges for wind energy deployment. Wind project development decisions are best made using unbiased information about the benefits and impacts of wind energy. In 2014, DOE established six wind Regional Resource Centers (RRCs) to provide information about wind energy, focusing on regional qualities. This document summarizes the status and drivers for U.S. wind energy development on regional and state levels. It is intended to be a companion to DOE's 2014 Distributed Wind Market Report, 2014 Wind Technologies Market Report, and 2014 Offshore Wind Market and Economic Analysis that provide assessments of the national wind markets for each of these technologies.

  6. [Establish and manage a National Resource Center for plutonium, Quarterly report, April 1, 1995--June 30, 1995

    International Nuclear Information System (INIS)

    Mulder, R.

    1995-01-01

    The initial phase of the Plutonium Information Resource is well under way. Board members developed linkages with Russian scientists and engineers and obtained names of technical team members. Nuclear proposals were reviewed by the Nuclear Review Group, and the proposals were modified to incorporate the review group's comments. Portions of the proposals were approved by the Governing Board. Proposals for education and outreach were reviewed by the Education Proposal Review Group, considered by the Governing Board and approved. The Senior Technical Review Group met to consider the R&D programs associated with fissile materials disposal. A newsletter was published. Progress continued on the high explosives demonstration project, on site-specific environmental work, and the multiattribute utility analysis. Center offices in Amarillo were furnished, equipment was purchased, and the lease was modified.

  7. U.S. Geological Survey (USGS) Earth Resources Observation and Science (EROS) Center-Fiscal Year 2009 Annual Report

    Science.gov (United States)

    Nelson, Janice S.

    2010-01-01

    The Earth Resources Observation and Science (EROS) Center is a U.S. Geological Survey (USGS) facility focused on providing science and imagery to better understand our Earth. As part of the USGS Geography Discipline, EROS contributes to the Land Remote Sensing (LRS) Program, the Geographic Analysis and Monitoring (GAM) Program, and the National Geospatial Program (NGP), as well as to our Federal partners and cooperators. The work of the Center is shaped by the Earth sciences and the missions of our stakeholders, and is implemented through strong program and project management and the application of state-of-the-art information technologies. Fundamentally, EROS contributes to the understanding of a changing Earth through 'research to operations' activities that include developing, implementing, and operating remote sensing based terrestrial monitoring capabilities needed to address interdisciplinary science and applications objectives at all levels, both nationally and internationally. The Center's programs and projects continually strive to meet or exceed the changing needs of the USGS, the Department of the Interior, our Nation, and international constituents. The Center's multidisciplinary staff uses their unique expertise in remote sensing science and technologies to conduct basic and applied research, data acquisition, systems engineering, information access and management, and archive preservation to address the Nation's most critical needs. Of particular note is the role of EROS as the primary provider of Landsat data, the longest comprehensive global land Earth observation record ever collected. This report is intended to provide an overview of the scientific and engineering achievements and to illustrate the range and scope of the activities and accomplishments at EROS throughout fiscal year (FY) 2009. Additional information concerning the scientific, engineering, and operational achievements can be obtained from the scientific papers and other documents published by

  8. Fish bone foreign body presenting with an acute fulminating retropharyngeal abscess in a resource-challenged center: a case report

    Directory of Open Access Journals (Sweden)

    Oyewole Ezekiel O

    2011-04-01

    Abstract. Introduction: A retropharyngeal abscess is a potentially life-threatening infection in the deep space of the neck, which can compromise the airway. Its management requires highly specialized care, including surgery and intensive care, to reduce mortality. This is the first case of a gas-forming abscess reported from this region, but not the first such report in the literature. Case presentation: We present the case of a 16-month-old Yoruba baby girl with a gas-forming retropharyngeal abscess secondary to a fish bone foreign body, with laryngeal spasm that was managed in the recovery room. We highlight specific problems encountered in the management of this case in a resource-challenged center such as ours. Conclusion: We describe an unusual presentation of a gas-forming organism causing a retropharyngeal abscess in a child. The patient's condition was treated despite inadequate resources. We recommend early recognition through adequate evaluation of any oropharyngeal injury or infection, and early referral to a specialist with prompt surgical intervention.

  9. Water-resources and land-surface deformation evaluation studies at Fort Irwin National Training Center, Mojave Desert, California

    Science.gov (United States)

    Densmore-Judy, Jill; Dishart, Justine E.; Miller, David; Buesch, David C.; Ball, Lyndsay B.; Bedrosian, Paul A.; Woolfenden, Linda R.; Cromwell, Geoffrey; Burgess, Matthew K.; Nawikas, Joseph; O'Leary, David; Kjos, Adam; Sneed, Michelle; Brandt, Justin

    2017-01-01

    The U.S. Army Fort Irwin National Training Center (NTC), in the Mojave Desert, obtains all of its potable water supply from three groundwater basins (Irwin, Langford, and Bicycle) within the NTC boundaries (fig. 1; California Department of Water Resources, 2003). Because of increasing water demands at the NTC, the U.S. Geological Survey (USGS), in cooperation with the U.S. Army, completed several studies to evaluate water resources in the developed and undeveloped groundwater basins underlying the NTC. In all of the developed basins, groundwater withdrawals exceed natural recharge, resulting in water-level declines. However, artificial recharge of treated wastewater has had some success in offsetting water-level declines in Irwin Basin. Additionally, localized water-quality changes have occurred in some parts of Irwin Basin as a result of human activities (i.e., wastewater disposal practices, landscape irrigation, and/or leaking pipes). As part of the multi-faceted NTC-wide studies, traditional data-collection methods were used, including lithological and geophysical logging at newly drilled boreholes and hydrologic data collection (water levels, water quality, aquifer tests, and wellbore flow). Because these data cover a small portion of the 1,177-square-mile (mi²) NTC, regional mapping, including geologic, gravity, aeromagnetic, and InSAR, was also carried out. In addition, ground and airborne electromagnetic surveys were completed and analyzed to provide more detailed subsurface information on a regional, base-wide scale. The traditional and regional ground and airborne data are being analyzed and will be used to help develop preliminary hydrogeologic framework and groundwater-flow models in all basins. This report is intended to provide an overview of recent water-resources and land-surface deformation studies at the NTC.

  10. A semantic web approach applied to integrative bioinformatics experimentation: a biological use case with genomics data.

    NARCIS (Netherlands)

    Post, L.J.G.; Roos, M.; Marshall, M.S.; van Driel, R.; Breit, T.M.

    2007-01-01

    The numerous public data resources make integrative bioinformatics experimentation increasingly important in life sciences research. However, it is severely hampered by the way the data and information are made available. The semantic web approach enhances data exchange and integration by providing

  11. Planning bioinformatics workflows using an expert system

    Science.gov (United States)

    Chen, Xiaoling; Chang, Jeffrey T.

    2017-01-01

    Motivation: Bioinformatic analyses are becoming increasingly complex due to the growing number of steps required to process the data, as well as the proliferation of methods that can be used in each step. To alleviate this difficulty, pipelines are commonly employed. However, pipelines are typically implemented to automate a specific analysis, and thus are difficult to use for exploratory analyses requiring systematic changes to the software or parameters used. Results: To automate the development of pipelines, we have investigated expert systems. We created the Bioinformatics ExperT SYstem (BETSY), which includes a knowledge base where the capabilities of bioinformatics software are explicitly and formally encoded. BETSY is a backwards-chaining rule-based expert system comprising a data model that can capture the richness of biological data, and an inference engine that reasons on the knowledge base to produce workflows. Currently, the knowledge base is populated with rules to analyze microarray and next generation sequencing data. We evaluated BETSY and found that it could generate workflows that reproduce and go beyond previously published bioinformatics results. Finally, a meta-investigation of the workflows generated from the knowledge base produced a quantitative measure of the technical burden imposed by each step of bioinformatics analyses, revealing the large number of steps devoted to the pre-processing of data. In sum, an expert system approach can facilitate exploratory bioinformatic analysis by automating the development of workflows, a task that requires significant domain expertise. Availability and Implementation: https://github.com/jefftc/changlab Contact: jeffrey.t.chang@uth.tmc.edu PMID:28052928
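
    The backward-chaining idea described in this abstract can be illustrated with a toy planner. This is a minimal sketch with a hypothetical three-rule knowledge base (the tool names and rule format are illustrative, not BETSY's actual data model or rule syntax):

```python
# Toy backward-chaining workflow planner (hypothetical rules, not BETSY's).
# Each rule says: to produce <output>, run <tool> on <inputs>.
RULES = {
    "aligned_reads": ("bwa_mem", ["raw_reads", "reference_genome"]),
    "sorted_bam": ("samtools_sort", ["aligned_reads"]),
    "variant_calls": ("gatk_haplotypecaller", ["sorted_bam", "reference_genome"]),
}

def plan(goal, available, steps=None):
    """Chain backwards from the goal until only available data remain."""
    if steps is None:
        steps = []
    if goal in available:
        return steps
    tool, inputs = RULES[goal]  # raises KeyError if the goal is unreachable
    for inp in inputs:
        plan(inp, available, steps)
    if tool not in steps:
        steps.append(tool)
    return steps

workflow = plan("variant_calls", {"raw_reads", "reference_genome"})
print(workflow)  # ['bwa_mem', 'samtools_sort', 'gatk_haplotypecaller']
```

    Starting from the desired output ("variant_calls"), the planner recursively resolves each missing input, emitting tools in dependency order.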

  12. Are the resources adoptive for conducting team-based diabetes management clinics? An explorative study at primary health care centers in Muscat, Oman.

    Science.gov (United States)

    Al-Alawi, Kamila; Johansson, Helene; Al Mandhari, Ahmed; Norberg, Margareta

    2018-05-08

    Aim: The aim of this study is to explore the perceptions among primary health center staff concerning competencies, values, skills and resources related to team-based diabetes management and to describe the availability of needed resources for team-based approaches. The diabetes epidemic challenges services available at primary health care centers in the Middle East. Therefore, there is a demand for evaluation of the available resources and team-based diabetes management in relation to the National Diabetes Management Guidelines. A cross-sectional study was conducted with 26 public primary health care centers in Muscat, the capital of Oman. Data were collected from manual and electronic resources as well as a questionnaire that was distributed to the physician-in-charge and diabetes management team members. Findings: The study revealed significant differences between professional groups regarding how they perceived their own competencies, values and skills as well as available resources related to team-based diabetes management. The perceived competencies were high among all professions. The perceived team-related values and skills were also generally high but with overall lower recordings among the nurses. This pattern, along with the fact that very few nurses have specialized qualifications, is a barrier to providing team-based diabetes management. Participants indicated that there were sufficient laboratory resources; however, they reported that pharmacological, technical and human resources were lacking. Further work should be done at public primary diabetes management clinics in order to fully implement team-based diabetes management.

  13. Empowering patients of a mental rehabilitation center in a low-resource context: a Moroccan experience as a case study

    Directory of Open Access Journals (Sweden)

    Khabbache H

    2017-04-01

    Hicham Khabbache,1 Abdelhak Jebbar,2,* Nadia Rania,3,* Marie-Chantal Doucet,4 Ali Assad Watfa,5 Joël Candau,6 Mariano Martini,7 Anna Siri,8,* Francesco Brigo,9,10,* Nicola Luigi Bragazzi1,2,4–8,11,* 1Faculty of Literature and Humanistic Studies, Sais, Sidi Mohamed Ben Abdellah University, Fez, 2Faculty of Art and Humanities, Sultan Moulay Slimane University, Beni-Mellal, Morocco; 3School of Social Sciences, Department of Education Sciences, University of Genoa, Genova, Italy; 4Faculty of Human Sciences, School of Social Work, University of Québec-Montréal, Montreal, QC, Canada; 5Faculty of Education, Kuwait University, Kuwait City, Kuwait; 6Laboratory of Anthropology and Cognitive and Social Psychology, University of Nice Sophia Antipolis, Nice, France; 7Department of Health Sciences (DISSAL), Section of Bioethics, University of Genoa, 8UNESCO Chair “Health Anthropology, Biosphere and Healing Systems”, Genova, 9Department of Neurology, Franz Tappeiner Hospital, Merano, 10Department of Neurological, Biomedical, and Movement Sciences, University of Verona, Verona, 11School of Public Health, Department of Health Sciences (DISSAL), University of Genoa, Genova, Italy *These authors contributed equally to this work Abstract: Mental, neurological and substance use (MNS) disorders represent a major source of disability and premature mortality worldwide. However, in developing countries patients with MNS disorders are often poorly managed and treated, particularly in marginalized, impoverished areas where the mental health gap and the treatment gap can reach 90%. Efforts should be made to make mental health care more accessible. In this article, we address the challenges that psychological and psychiatric services have to face in a low-resource context, taking our experience at a Moroccan rehabilitation center as a case study. A sample of 60 patients were interviewed using a semi-structured questionnaire during the period of

  14. Education, Outreach, and Diversity Partnerships and Science Education Resources From the Center for Multi-scale Modeling of Atmospheric Processes

    Science.gov (United States)

    Foster, S. Q.; Randall, D.; Denning, S.; Jones, B.; Russell, R.; Gardiner, L.; Hatheway, B.; Johnson, R. M.; Drossman, H.; Pandya, R.; Swartz, D.; Lanting, J.; Pitot, L.

    2007-12-01

    The need for improving the representation of cloud processes in climate models has been one of the most important limitations of the reliability of climate-change simulations. The new National Science Foundation-funded Center for Multi-scale Modeling of Atmospheric Processes (CMMAP) at Colorado State University (CSU) is a major research program addressing this problem over the next five years through a revolutionary new approach to representing cloud processes on their native scales, including the cloud-scale interactions among the many physical and chemical processes that are active in cloud systems. At the end of its first year, CMMAP has established effective partnerships between scientists, students, and teachers to meet its goals to: (1) provide first-rate graduate education in atmospheric science; (2) recruit diverse undergraduates into graduate education and careers in climate science; and (3) develop, evaluate, and disseminate educational resources designed to inform K-12 students, teachers, and the general public about the nature of the climate system, global climate change, and career opportunities in climate science. This presentation will describe the partners, our challenges and successes, and measures of achievement involved in the integrated suite of programs launched in the first year.
They include: (1) a new high school Colorado Climate Conference drawing prestigious climate scientists to speak to students, (2) a summer Weather and Climate Workshop at CSU and the National Center for Atmospheric Research introducing K-12 teachers to Earth system science and a rich toolkit of teaching materials, (3) a program from CSU's Little Shop of Physics reaching 50 schools and 20,000 K-12 students through the new "It's Up In the Air" program, (4) expanded content, imagery, and interactives on clouds, weather, climate, and modeling for students, teachers, and the public on The Windows to the Universe web site at University Corporation for Atmospheric Research

  15. Sexual and Reproductive Health Services and Related Health Information on Pregnancy Resource Center Websites: A Statewide Content Analysis.

    Science.gov (United States)

    Swartzendruber, Andrea; Newton-Levinson, Anna; Feuchs, Ashley E; Phillips, Ashley L; Hickey, Jennifer; Steiner, Riley J

    Pregnancy resource centers (PRCs) are nonprofit organizations with a primary mission of promoting childbirth among pregnant women. Given a new state grant program to publicly fund PRCs, we analyzed Georgia PRC websites to describe advertised services and related health information. We systematically identified all accessible Georgia PRC websites available from April to June 2016. Entire websites were obtained and coded using defined protocols. Of 64 reviewed websites, pregnancy tests and testing (98%) and options counseling (84%) were most frequently advertised. However, 58% of sites did not provide notice that PRCs do not provide or refer for abortion, and 53% included false or misleading statements regarding the need to make a decision about abortion or links between abortion and mental health problems or breast cancer. Advertised contraceptive services were limited to counseling about natural family planning (3%) and emergency contraception (14%). Most sites (89%) did not provide notice that PRCs do not provide or refer for contraceptives. Two sites (3%) advertised unproven "abortion reversal" services. Approximately 63% advertised ultrasound examinations, 22% sexually transmitted infection testing, and 5% sexually transmitted infection treatment. None promoted consistent and correct condom use; 78% with content about condoms included statements that seemed to be designed to undermine confidence in condom effectiveness. Approximately 84% advertised educational programs, and 61% material resources. Georgia PRC websites contain high levels of false and misleading health information; the advertised services do not seem to align with prevailing medical guidelines. Public funding for PRCs, an increasing national trend, should be rigorously examined. Increased regulation may be warranted to ensure quality health information and services. Copyright © 2017 Jacobs Institute of Women's Health. Published by Elsevier Inc. All rights reserved.

  16. Animation company "Fast Forwards" production with HP Utility Data Center; film built using Adaptive Enterprise framework enabled by shared, virtual resource

    CERN Multimedia

    2003-01-01

    Hewlett Packard have produced a commercial-quality animated film using an experimental rendering service from HP Labs and running on an HP Utility Data Center (UDC). The project demonstrates how computing resources can be managed virtually and illustrates the value of utility computing, in which an end-user taps into a large pool of virtual resources, but pays only for what is used (1 page).

  17. Bioinformatic tools for PCR Primer design

    African Journals Online (AJOL)

    ES

    reaction (PCR), oligo hybridization and DNA sequencing. Proper primer design is actually one of the most important factors/steps in successful DNA sequencing. Various bioinformatics programs are available for selection of primer pairs from a template sequence. The plethora of programs for PCR primer design reflects the
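
    Two of the basic checks such primer-design programs perform can be sketched directly. The Wallace rule, Tm = 2(A+T) + 4(G+C) in degrees C, is a common rough melting-temperature estimate for short oligos; the primer sequence below is a made-up example:

```python
# Basic primer checks: GC content and Wallace-rule Tm (rough estimate
# valid only for short oligos; real tools use nearest-neighbor models).

def gc_content(seq):
    """Percent G+C in the sequence."""
    seq = seq.upper()
    return 100.0 * (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq):
    """Wallace rule: Tm = 2*(A+T) + 4*(G+C), in degrees Celsius."""
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

primer = "ATGCGTACGTTAGCCTAGGC"  # hypothetical 20-mer
print(round(gc_content(primer), 1), wallace_tm(primer))  # 55.0 62
```

    Design tools combine such per-primer metrics with pairwise checks (matched Tm of forward and reverse primers, hairpins, primer-dimers) when ranking candidate pairs.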

  18. "Extreme Programming" in a Bioinformatics Class

    Science.gov (United States)

    Kelley, Scott; Alger, Christianna; Deutschman, Douglas

    2009-01-01

    The importance of Bioinformatics tools and methodology in modern biological research underscores the need for robust and effective courses at the college level. This paper describes such a course designed on the principles of cooperative learning based on a computer software industry production model called "Extreme Programming" (EP).…

  19. Bioinformatics: A History of Evolution "In Silico"

    Science.gov (United States)

    Ondrej, Vladan; Dvorak, Petr

    2012-01-01

    Bioinformatics, biological databases, and the worldwide use of computers have accelerated biological research in many fields, such as evolutionary biology. Here, we describe a primer of nucleotide sequence management and the construction of a phylogenetic tree with two examples; the two selected are from completely different groups of organisms:…

  20. Protein raftophilicity. How bioinformatics can help membranologists

    DEFF Research Database (Denmark)

    Nielsen, Henrik; Sperotto, Maria Maddalena

    )-based bioinformatics approach. The ANN was trained to recognize feature-based patterns in proteins that are considered to be associated with lipid rafts. The trained ANN was then used to predict protein raftophilicity. We found that, in the case of α-helical membrane proteins, their hydrophobic length does not affect...

  1. Bioinformatics in Undergraduate Education: Practical Examples

    Science.gov (United States)

    Boyle, John A.

    2004-01-01

    Bioinformatics has emerged as an important research tool in recent years. The ability to mine large databases for relevant information has become increasingly central to many different aspects of biochemistry and molecular biology. It is important that undergraduates be introduced to the available information and methodologies. We present a…

  2. Implementing bioinformatic workflows within the bioextract server

    Science.gov (United States)

    Computational workflows in bioinformatics are becoming increasingly important in the achievement of scientific advances. These workflows typically require the integrated use of multiple, distributed data sources and analytic tools. The BioExtract Server (http://bioextract.org) is a distributed servi...

  3. Privacy Preserving PCA on Distributed Bioinformatics Datasets

    Science.gov (United States)

    Li, Xin

    2011-01-01

    In recent years, new bioinformatics technologies, such as gene expression microarray, genome-wide association study, proteomics, and metabolomics, have been widely used to simultaneously identify a huge number of human genomic/genetic biomarkers, generate a tremendously large amount of data, and dramatically increase the knowledge on human…

  4. Bioboxes: standardised containers for interchangeable bioinformatics software.

    Science.gov (United States)

    Belmann, Peter; Dröge, Johannes; Bremges, Andreas; McHardy, Alice C; Sczyrba, Alexander; Barton, Michael D

    2015-01-01

    Software is now both central and essential to modern biology, yet lack of availability, difficult installations, and complex user interfaces make software hard to obtain and use. Containerisation, as exemplified by the Docker platform, has the potential to solve the problems associated with sharing software. We propose bioboxes: containers with standardised interfaces to make bioinformatics software interchangeable.

  5. Development and implementation of a bioinformatics online ...

    African Journals Online (AJOL)

    Thus, there is a need for appropriate strategies for introducing the basic components of this emerging scientific field to part of the African populace through the development of an online distance education learning tool. This study involved the design of a bioinformatics online distance education tool and an implementation of ...

  6. SPECIES DATABASES AND THE BIOINFORMATICS REVOLUTION.

    Science.gov (United States)

    Biological databases are having a growth spurt. Much of this results from research in genetics and biodiversity, coupled with fast-paced developments in information technology. The revolution in bioinformatics, defined by Sugden and Pennisi (2000) as the "tools and techniques for...

  7. REDIdb: an upgraded bioinformatics resource for organellar RNA editing sites.

    Science.gov (United States)

    Picardi, Ernesto; Regina, Teresa M R; Verbitskiy, Daniil; Brennicke, Axel; Quagliariello, Carla

    2011-03-01

    RNA editing is a post-transcriptional molecular process whereby the information in a genetic message is modified from that in the corresponding DNA template by means of nucleotide substitutions, insertions and/or deletions. It occurs mostly in organelles by clade-specific, diverse and unrelated biochemical mechanisms. RNA editing events have been annotated in primary databases such as GenBank and, at a more sophisticated level, in the specialized databases REDIdb, dbRES and EdRNA. At present, REDIdb is the only freely available database that focuses on the organellar RNA editing process and annotates each editing modification in its biological context. Here we present an updated and upgraded release of REDIdb with a web interface refurbished with graphical and computational facilities that improve RNA editing investigations. Details of the REDIdb features and novelties are illustrated and compared to other RNA editing databases. REDIdb can be freely queried at http://biologia.unical.it/py_script/REDIdb/. Copyright © 2010 Elsevier B.V. and Mitochondria Research Society. All rights reserved.

  8. Web-based bioinformatic resources for protein and nucleic acids ...

    African Journals Online (AJOL)

    Admin

    DNA sequencing is the deciphering of hereditary information. It is an indispensable prerequisite for many biotechnical applications and technologies and the continual acquisition of genomic information is very important. This opens the door not only for further research and better understanding of the architectural plan of life ...

  9. Teaching Sustainable Water Resources and Low Impact Development: A Project Centered Course for First-Year Undergraduates

    Science.gov (United States)

    Cianfrani, C. M.

    2009-12-01

    Sustainable water resources and low impact development principles are taught to first-year undergraduate students using an applied design project sited on campus. All students at Hampshire College are required to take at least one natural science course during their first year as part of their liberal arts education. This requirement is often met with resistance from non-science students; however, 'sustainability' has proven to be a popular topic on campus, and 'Sustainable Water Resources' typically attracts ~25 students (a large class size for Hampshire College). Five second- or third-year students are accepted into the class as advanced students and serve as project leaders. The first-year students often enter the class with only a basic high school science background. The class begins with an introduction to global water resources issues to provide a broad perspective. The students then analyze water budgets, both on a watershed basis and on a personal daily-use basis. The students form groups of four to complete their semester project. Lectures on low impact design principles are combined with group work sessions for the second half of the semester. Students tour the physical site located across the street from campus and begin their project with a site analysis including soils, land cover, and topography. They then develop a building plan and identify preventative and mitigative measures for dealing with stormwater. Each group completes TR-55 stormwater calculations for their design (pre- and post-development) to show that the state regulations for quantity will be met with their design. Finally, they present their projects to the class and prepare a formal written report. The students have produced a wide variety of creative
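
    The TR-55 calculations the students perform rest on the SCS curve-number runoff equation. A minimal sketch, with illustrative curve numbers rather than the students' actual site values:

```python
# SCS curve-number runoff equation, as used in TR-55:
#   S = 1000/CN - 10   (potential maximum retention, inches)
#   Q = (P - 0.2S)^2 / (P + 0.8S)   for P > 0.2S, else Q = 0

def runoff_depth(p_in, cn):
    """Runoff depth Q (inches) for rainfall P (inches) and curve number CN."""
    s = 1000.0 / cn - 10.0  # potential maximum retention
    ia = 0.2 * s            # initial abstraction
    if p_in <= ia:
        return 0.0
    return (p_in - ia) ** 2 / (p_in + 0.8 * s)

storm = 3.0  # hypothetical 3-inch design storm
pre = runoff_depth(storm, 61)   # e.g., woods/grass, hydrologic soil group B
post = runoff_depth(storm, 85)  # e.g., developed, largely impervious parcel
print(round(pre, 2), round(post, 2))  # 0.37 1.59
```

    Comparing pre- and post-development runoff depths for the same design storm is exactly the check the student groups make against the state quantity regulations.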

  10. Relax with CouchDB--into the non-relational DBMS era of bioinformatics.

    Science.gov (United States)

    Manyam, Ganiraju; Payton, Michelle A; Roth, Jack A; Abruzzo, Lynne V; Coombes, Kevin R

    2012-07-01

    With the proliferation of high-throughput technologies, genome-level data analysis has become common in molecular biology. Bioinformaticians are developing extensive resources to annotate and mine biological features from high-throughput data. The underlying database management systems for most bioinformatics software are based on a relational model. Modern non-relational databases offer an alternative that has flexibility, scalability, and a non-rigid design schema. Moreover, with an accelerated development pace, non-relational databases like CouchDB can be ideal tools to construct bioinformatics utilities. We describe CouchDB by presenting three new bioinformatics resources: (a) geneSmash, which collates data from bioinformatics resources and provides automated gene-centric annotations, (b) drugBase, a database of drug-target interactions with a web interface powered by geneSmash, and (c) HapMap-CN, which provides a web interface to query copy number variations from three SNP-chip HapMap datasets. In addition to the web sites, all three systems can be accessed programmatically via web services. Copyright © 2012 Elsevier Inc. All rights reserved.
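
    The schema-less, gene-centric document model described above can be illustrated with plain JSON-style documents. This is a hedged sketch: the fields are hypothetical, and the map function is an in-memory stand-in for a CouchDB map/reduce view rather than the real HTTP API:

```python
# Sketch of a schema-less, gene-centric document as a document store
# like CouchDB might hold it (hypothetical fields, not geneSmash's schema).
doc = {
    "_id": "TP53",  # gene symbol as the document key
    "aliases": ["P53", "LFS1"],
    "annotations": {
        "chromosome": "17",
        "pathways": ["p53 signaling"],
    },
}

# A CouchDB "view" is a map function applied to every document;
# emulated here with a simple in-memory index from alias to gene.
def map_by_alias(documents):
    index = {}
    for d in documents:
        for alias in d.get("aliases", []):
            index[alias] = d["_id"]
    return index

print(map_by_alias([doc])["P53"])  # TP53
```

    Because documents need no fixed schema, new annotation sources can add fields to individual gene documents without a relational migration, which is the flexibility argument the abstract makes.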

  12. Personalized cloud-based bioinformatics services for research and education: use cases and the elasticHPC package.

    Science.gov (United States)

    El-Kalioby, Mohamed; Abouelhoda, Mohamed; Krüger, Jan; Giegerich, Robert; Sczyrba, Alexander; Wall, Dennis P; Tonellato, Peter

    2012-01-01

    Bioinformatics services have been traditionally provided in the form of a web-server that is hosted at institutional infrastructure and serves multiple users. This model, however, is not flexible enough to cope with the increasing number of users, increasing data size, and new requirements in terms of speed and availability of service. The advent of cloud computing suggests a new service model that provides an efficient solution to these problems, based on the concepts of "resources-on-demand" and "pay-as-you-go". However, cloud computing has not yet been introduced within bioinformatics servers due to the lack of usage scenarios and software layers that address the requirements of the bioinformatics domain. In this paper, we provide different use case scenarios for providing cloud computing based services, considering both the technical and financial aspects of the cloud computing service model. These scenarios are for individual users seeking computational power as well as bioinformatics service providers aiming to provide personalized bioinformatics services to their users. We also present elasticHPC, a software package and a library that facilitates the use of high performance cloud computing resources in general and the implementation of the suggested bioinformatics scenarios in particular. Concrete examples that demonstrate the suggested use case scenarios with whole bioinformatics servers and major sequence analysis tools like BLAST are presented. Experimental results with large datasets are also included to show the advantages of the cloud model. Our use case scenarios and the elasticHPC package are steps towards the provision of cloud based bioinformatics services, helping to overcome the data challenge of recent biological research. All resources related to elasticHPC and its web-interface are available at http://www.elasticHPC.org.

  13. The Montpellier Leishmania Collection, from a Laboratory Collection to a Biological Resource Center: A 39-Year-Long Story.

    Science.gov (United States)

    Pratlong, Francine; Balard, Yves; Lami, Patrick; Talignani, Loïc; Ravel, Christophe; Dereure, Jacques; Lefebvre, Michèle; Serres, Ghislaine; Bastien, Patrick; Dedet, Jean-Pierre

    2016-12-01

    We report the development of a laboratory collection of Leishmania that was initiated in 1975 and, after 39 years, has become an international Biological Resource Center (BRC-Leish, Montpellier, France, BioBank No. BB-0033-00052), which includes 6353 strains belonging to 36 Leishmania taxa. This is a retrospective analysis of the technical and organizational changes that have been adopted over time to take into account the technological advances and related modifications in the collection management and quality system. The technical improvements concerned the culture and cryopreservation techniques, strain identification by isoenzymatic and molecular techniques, data computerization and quality management to meet the changes in international standards, and in the cryogenic and microbiological safety procedures. The BRC is working toward obtaining the NF-S 96-900 certification in the coming years. Our long-term expertise in Leishmania storage and typing and collection maintenance should encourage field epidemiologists and clinical practitioners in endemic countries to secure their own strain collection with the help of the French BRC-Leish.

  14. Health status, resource consumption, and costs of dysthymia. A multi-center two-year longitudinal study.

    Science.gov (United States)

    Barbui, Corrado; Motterlini, Nicola; Garattini, Livio

    2006-02-01

    In this study we estimated the health status, resource consumption and costs of a large cohort of patients with early- and late-onset dysthymia. The DYSCO (DYSthymia COsts) project is a multi-center observational study which prospectively followed for two years a randomly chosen sample of patients with dysthymia in the Italian primary health care system. A total of 501 patients were followed for two years; 81% had early-onset dysthymic disorder. During the study, improvement was seen in most domains of the 36-Item Short Form Health Survey (SF-36) questionnaire. Comparison of the SF-36 scores for the two groups showed that only the physical health index significantly differed during the two years. The use of outpatient consultations, laboratory tests and diagnostic procedures was similar in the two groups, but patients with early-onset dysthymia were admitted to hospital significantly more often than late-onset cases. Hospital admissions were almost entirely responsible for the higher total cost per patient per year of early-onset dysthymia. A first limitation of this study is that general practitioners were selected on the basis of their willingness to participate, not at random; secondly, no information was collected on concomitant psychiatric comorbidities. The present study provides the first prospective, long-term data on service use and costs in patients with dysthymia. In contrast to patients with early-onset dysthymia, those with late-onset dysthymia were admitted less often and cost less.

  15. Bioinformatics education in high school: implications for promoting science, technology, engineering, and mathematics careers.

    Science.gov (United States)

    Kovarik, Dina N; Patterson, Davis G; Cohen, Carolyn; Sanders, Elizabeth A; Peterson, Karen A; Porter, Sandra G; Chowning, Jeanne Ting

    2013-01-01

    We investigated the effects of our Bio-ITEST teacher professional development model and bioinformatics curricula on cognitive traits (awareness, engagement, self-efficacy, and relevance) in high school teachers and students that are known to accompany a developing interest in science, technology, engineering, and mathematics (STEM) careers. The program included best practices in adult education and diverse resources to empower teachers to integrate STEM career information into their classrooms. The introductory unit, Using Bioinformatics: Genetic Testing, uses bioinformatics to teach basic concepts in genetics and molecular biology, and the advanced unit, Using Bioinformatics: Genetic Research, utilizes bioinformatics to study evolution and support student research with DNA barcoding. Pre-post surveys demonstrated significant growth (n = 24) among teachers in their preparation to teach the curricula and infuse career awareness into their classes, and these gains were sustained through the end of the academic year. Introductory unit students (n = 289) showed significant gains in awareness, relevance, and self-efficacy. While these students did not show significant gains in engagement, advanced unit students (n = 41) showed gains in all four cognitive areas. Lessons learned during Bio-ITEST are explored in the context of recommendations for other programs that wish to increase student interest in STEM careers.

  16. Cultural resource survey report for construction of office building, driveway, and parking lot at the Stanford Linear Accelerator Center. Part 1

    International Nuclear Information System (INIS)

    Perry, M.E.

    1995-01-01

    An Environmental Assessment and associated documentation is reported for the construction of an office building and parking lot in support of environmental management personnel activities. As part of the documentation process, the DOE determined that the proposed project constituted an undertaking as defined in Section 106 of the National Historic Preservation Act. In accordance with the regulations implementing Section 106 of the National Historic Preservation Act, a records and literature search and a historic resource identification effort were carried out on behalf of the Stanford Linear Accelerator Center (SLAC). This report summarizes the cultural resource literature and record searches and the historic resource identification effort.

  17. jORCA: easily integrating bioinformatics Web Services.

    Science.gov (United States)

    Martín-Requena, Victoria; Ríos, Javier; García, Maximiliano; Ramírez, Sergio; Trelles, Oswaldo

    2010-02-15

    Web services technology is becoming the option of choice for deploying bioinformatics tools that are universally available. One of the major strengths of this approach is that it supports machine-to-machine interoperability over a network. However, a weakness of this approach is that various Web Services differ in their definition and invocation protocols, as well as their communication and data formats, and this presents a barrier to service interoperability. jORCA is a desktop client aimed at facilitating seamless integration of Web Services. It does so by making a uniform representation of the different web resources, supporting scalable service discovery, and automatic composition of workflows. Usability is at the top of the jORCA agenda; thus it is a highly customizable and extensible application that accommodates a broad range of user skills, featuring double-click invocation of services in conjunction with advanced execution control, on-the-fly data standardization, extensibility of viewer plug-ins, drag-and-drop editing capabilities, plus a file-based browsing style and organization of favourite tools. jORCA thus makes the integration of bioinformatics Web Services easier for a wider range of users.

  18. MAPI: towards the integrated exploitation of bioinformatics Web Services.

    Science.gov (United States)

    Ramirez, Sergio; Karlsson, Johan; Trelles, Oswaldo

    2011-10-27

    Bioinformatics is commonly presented as a well-assorted list of available web resources. Although diversity of services is positive in general, the proliferation of tools, their dispersion and their heterogeneity complicate the integrated exploitation of this data processing capacity. To facilitate the construction of software clients and make integrated use of this variety of tools, we present a modular programmatic application interface (MAPI) that provides the necessary functionality for a uniform representation of Web Services metadata descriptors, including the management and invocation protocols of the services they represent. This document describes the main functionality of the framework and how it can be used to facilitate the deployment of new software under a unified structure of bioinformatics Web Services. A notable feature of MAPI is the modular organization of the functionality into different modules associated with specific tasks. This means that only the modules needed by the client have to be installed, and that the module functionality can be extended without re-writing the software client. The potential utility and versatility of the software library have been demonstrated by the implementation of several currently available clients that cover different aspects of integrated data processing, ranging from service discovery to service invocation, with advanced features such as workflow composition and asynchronous service calls to multiple types of Web Services, including those registered in repositories (e.g. GRID-based, SOAP, BioMOBY, R-bioconductor, and others).
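    The uniform-representation idea at the heart of MAPI can be illustrated with a small, protocol-agnostic service registry: every tool, whatever its invocation protocol, is described by the same metadata shape and invoked through the same call. This is an illustrative sketch, not MAPI's actual API; the names (`ServiceDescriptor`, `Registry`, `discover`) and the sample `gc_content` service are invented for the example.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ServiceDescriptor:
    """Protocol-agnostic metadata for one tool (names are illustrative)."""
    name: str
    inputs: Dict[str, str]   # parameter name -> data type
    output: str              # produced data type
    invoke: Callable         # hides SOAP/REST/local-call details

class Registry:
    """Uniform discovery and invocation over heterogeneous services."""
    def __init__(self):
        self._services: Dict[str, ServiceDescriptor] = {}

    def register(self, desc: ServiceDescriptor) -> None:
        self._services[desc.name] = desc

    def discover(self, output_type: str) -> List[ServiceDescriptor]:
        """Find services by the data type they produce."""
        return [d for d in self._services.values() if d.output == output_type]

    def call(self, name: str, **kwargs):
        return self._services[name].invoke(**kwargs)

registry = Registry()
registry.register(ServiceDescriptor(
    name="gc_content",
    inputs={"seq": "DNA"},
    output="float",
    invoke=lambda seq: (seq.count("G") + seq.count("C")) / len(seq),
))
print(registry.call("gc_content", seq="GATTACA"))  # 2/7 ≈ 0.2857
```

    A client written against `Registry` never sees the transport behind `invoke`, which is the point of a uniform representation: swapping a local lambda for a SOAP or REST call changes the descriptor, not the client.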

  19. Biowep: a workflow enactment portal for bioinformatics applications.

    Science.gov (United States)

    Romano, Paolo; Bartocci, Ezio; Bertolini, Guglielmo; De Paoli, Flavio; Marra, Domenico; Mauri, Giancarlo; Merelli, Emanuela; Milanesi, Luciano

    2007-03-08

    The huge amount of biological information, its distribution over the Internet and the heterogeneity of available software tools make the adoption of new data integration and analysis network tools a necessity in bioinformatics. ICT standards and tools, like Web Services and Workflow Management Systems (WMS), can support the creation and deployment of such systems. Many Web Services are already available and some WMS have been proposed. They assume that researchers know which bioinformatics resources can be reached through a programmatic interface and that they are skilled in programming and building workflows. Therefore, they are not viable for the majority of less-skilled researchers. A portal enabling these researchers to benefit from the new technologies is still missing. We designed biowep, a web-based client application that allows for the selection and execution of a set of predefined workflows. The system is available on-line. The biowep architecture includes a Workflow Manager, a User Interface and a Workflow Executor. The task of the Workflow Manager is the creation and annotation of workflows. These can be created using either the Taverna Workbench or BioWMS. Enactment of workflows is carried out by FreeFluo for Taverna workflows and by BioAgent/Hermes, a mobile agent-based middleware, for BioWMS ones. The main processing steps of workflows are annotated on the basis of their input and output, elaboration type and application domain, using a classification of bioinformatics data and tasks. The interface supports user authentication and profiling. Workflows can be selected on the basis of users' profiles and can be searched through their annotations. Results can be saved. We developed a web system that supports the selection and execution of predefined workflows, thus simplifying access for all researchers. The implementation of Web Services allowing specialized software to interact with an exhaustive set of biomedical databases and analysis software and the creation of

  20. Biowep: a workflow enactment portal for bioinformatics applications

    Directory of Open Access Journals (Sweden)

    Romano Paolo

    2007-03-01

    Full Text Available Abstract Background The huge amount of biological information, its distribution over the Internet and the heterogeneity of available software tools make the adoption of new data integration and analysis network tools a necessity in bioinformatics. ICT standards and tools, like Web Services and Workflow Management Systems (WMS), can support the creation and deployment of such systems. Many Web Services are already available and some WMS have been proposed. They assume that researchers know which bioinformatics resources can be reached through a programmatic interface and that they are skilled in programming and building workflows. Therefore, they are not viable for the majority of less-skilled researchers. A portal enabling these researchers to benefit from the new technologies is still missing. Results We designed biowep, a web-based client application that allows for the selection and execution of a set of predefined workflows. The system is available on-line. The biowep architecture includes a Workflow Manager, a User Interface and a Workflow Executor. The task of the Workflow Manager is the creation and annotation of workflows. These can be created using either the Taverna Workbench or BioWMS. Enactment of workflows is carried out by FreeFluo for Taverna workflows and by BioAgent/Hermes, a mobile agent-based middleware, for BioWMS ones. The main processing steps of workflows are annotated on the basis of their input and output, elaboration type and application domain, using a classification of bioinformatics data and tasks. The interface supports user authentication and profiling. Workflows can be selected on the basis of users' profiles and can be searched through their annotations. Results can be saved. Conclusion We developed a web system that supports the selection and execution of predefined workflows, thus simplifying access for all researchers. 
The implementation of Web Services allowing specialized software to interact with an exhaustive set of biomedical

  1. Promoting synergistic research and education in genomics and bioinformatics.

    Science.gov (United States)

    Yang, Jack Y; Yang, Mary Qu; Zhu, Mengxia Michelle; Arabnia, Hamid R; Deng, Youping

    2008-01-01

    Bioinformatics and Genomics are closely related disciplines that hold great promise for the advancement of research and development in complex biomedical systems, as well as public health, drug design, comparative genomics, personalized medicine and so on. Research and development in these two important areas are impacting science and technology. High-throughput sequencing and molecular imaging technologies marked the beginning of a new era for modern translational medicine and personalized healthcare. The impact of having the human sequence and personalized digital images in hand has also created tremendous demand for powerful supercomputing, statistical learning and artificial intelligence approaches to handle the massive bioinformatics and personalized healthcare data, which will obviously have a profound effect on how biomedical research will be conducted toward the improvement of human health and the prolonging of human life in the future. The International Society of Intelligent Biological Medicine (http://www.isibm.org) and its official journals, the International Journal of Functional Informatics and Personalized Medicine (http://www.inderscience.com/ijfipm) and the International Journal of Computational Biology and Drug Design (http://www.inderscience.com/ijcbdd), in collaboration with the International Conference on Bioinformatics and Computational Biology (Biocomp), promote tomorrow's bioinformatics and personalized medicine through today's efforts in advancing the research, education and awareness of this emerging integrated inter/multidisciplinary field. The 2007 International Conference on Bioinformatics and Computational Biology (BIOCOMP07) was held in Las Vegas, in the United States of America, on June 25-28, 2007. The conference attracted over 400 papers, covering broad research areas in genomics, biomedicine and bioinformatics. 
BIOCOMP07 provided a common platform for the cross-fertilization of ideas and helped to shape knowledge and

  2. Component-Based Approach for Educating Students in Bioinformatics

    Science.gov (United States)

    Poe, D.; Venkatraman, N.; Hansen, C.; Singh, G.

    2009-01-01

    There is an increasing need for an effective method of teaching bioinformatics. Increased progress and availability of computer-based tools for educating students have led to the implementation of a computer-based system for teaching bioinformatics as described in this paper. Bioinformatics is a recent, hybrid field of study combining elements of…

  3. Facilitating the use of large-scale biological data and tools in the era of translational bioinformatics

    DEFF Research Database (Denmark)

    Kouskoumvekaki, Irene; Shublaq, Nour; Brunak, Søren

    2014-01-01

    As both the amount of generated biological data and the processing compute power increase, computational experimentation is no longer the exclusivity of bioinformaticians, but it is moving across all biomedical domains. For bioinformatics to realize its translational potential, domain experts need...... access to user-friendly solutions to navigate, integrate and extract information out of biological databases, as well as to combine tools and data resources in bioinformatics workflows. In this review, we present services that assist biomedical scientists in incorporating bioinformatics tools...... into their research.We review recent applications of Cytoscape, BioGPS and DAVID for data visualization, integration and functional enrichment. Moreover, we illustrate the use of Taverna, Kepler, GenePattern, and Galaxy as open-access workbenches for bioinformatics workflows. Finally, we mention services...

  4. Bioinformatics and systems biology research update from the 15th International Conference on Bioinformatics (InCoB2016).

    Science.gov (United States)

    Schönbach, Christian; Verma, Chandra; Bond, Peter J; Ranganathan, Shoba

    2016-12-22

    The International Conference on Bioinformatics (InCoB) has been publishing peer-reviewed conference papers in BMC Bioinformatics since 2006. Of the 44 articles accepted for publication in supplement issues of BMC Bioinformatics, BMC Genomics, BMC Medical Genomics and BMC Systems Biology, 24 articles with a bioinformatics or systems biology focus are reviewed in this editorial. InCoB2017 is scheduled to be held in Shenzhen, China, September 20-22, 2017.

  5. Model of Activities of the Resource Training Center of the Russian State Social University in Terms of Professional Orientation and Employment of Persons with Disabilities

    Directory of Open Access Journals (Sweden)

    Bikbulatova A.A.,

    2017-08-01

    Full Text Available The paper focuses on the importance of professional and vocational guidance for persons with disabilities. It describes the main approaches to providing such guidance to students with disabilities and reveals the technologies used to motivate people with disabilities to seek education and to make informed choices of profession. The research was aimed at developing a model of the career guidance offered at resource and training centers established by the Ministry of Education and Science of the Russian Federation on the basis of higher education institutions. The paper presents the developed model of professional and vocational guidance for persons with disabilities and explains the algorithm of its implementation in the resource and training centers. Also, the paper gives recommendations on how to change the technology of communication between universities, regional job centers and offices of medical and social assessment.

  6. A Study of Mars Dust Environment Simulation at NASA Johnson Space Center Energy Systems Test Area Resource Conversion Test Facility

    Science.gov (United States)

    Chen, Yuan-Liang Albert

    1999-01-01

    The dust environment on Mars will be simulated in a 20-foot thermal-vacuum chamber at the Johnson Space Center Energy Systems Test Area Resource Conversion Test Facility in Houston, Texas. This vacuum chamber will be used to perform tests and study the interactions between dust in the Martian air and ISPP hardware. This project researches, theorizes, quantifies, and documents the Mars dust/wind environment needed for the 20-foot simulation chamber. The simulation work supports the safety, endurance, and cost reduction of hardware for future missions. The Martian dust environment conditions are discussed. Two issues concerning Martian dust are of interest: (1) dust-contamination-related hazards, and (2) electrical hazards caused by dust charging. Different methods of dust particle measurement are given. Design trade-offs and feasibility were studied. A glass bell-jar system was used to evaluate various concepts for the Mars dust/wind environment simulation. It was observed that external dust-source injection is the best method of introducing dust into the simulation system. A dust concentration of 30 mg/m3 should be employed to prepare for the worst possible Martian atmospheric conditions. Two thermal-panel shroud approaches for hardware conditioning are discussed. It is suggested that the wind-tunnel approach be used to study dust-charging characteristics and then be applied to the closed-system cyclone approach. To reduce operating costs, dehumidified ambient air could be used in place of the expensive CO2 mixture for some tests.

  7. Bioinformatics in New Generation Flavivirus Vaccines

    Directory of Open Access Journals (Sweden)

    Penelope Koraka

    2010-01-01

    Full Text Available Flavivirus infections are the most prevalent arthropod-borne infections worldwide, often causing severe disease, especially among children, the elderly, and the immunocompromised. In the absence of effective antiviral treatment, prevention through vaccination would greatly reduce the morbidity and mortality associated with flavivirus infections. Despite the success of the empirically developed vaccines against yellow fever virus, Japanese encephalitis virus and tick-borne encephalitis virus, there is an increasing need for more rational design and development of safe and effective vaccines. Several bioinformatic tools are available to support such rational vaccine design. In doing so, several parameters have to be taken into account, such as safety for the target population, overall immunogenicity of the candidate vaccine, and efficacy and longevity of the immune responses triggered. Examples of how bioinformatics is applied to assist in the rational design and improvement of vaccines, particularly flavivirus vaccines, are presented and discussed.

  8. An Adaptive Hybrid Multiprocessor technique for bioinformatics sequence alignment

    KAUST Repository

    Bonny, Talal

    2012-07-28

    Sequence alignment algorithms such as the Smith-Waterman algorithm are among the most important applications in the development of bioinformatics. Sequence alignment algorithms must process large amounts of data, which may take a long time. Here, we introduce our Adaptive Hybrid Multiprocessor technique to accelerate the implementation of the Smith-Waterman algorithm. Our technique utilizes both the graphics processing unit (GPU) and the central processing unit (CPU). It adapts the implementation according to the number of CPUs given as input by efficiently distributing the workload between the processing units. Using existing resources (GPU and CPU) in an efficient way is a novel approach. The peak performance achieved for the platforms GPU + CPU, GPU + 2CPUs, and GPU + 3CPUs is 10.4 GCUPS, 13.7 GCUPS, and 18.6 GCUPS, respectively (with a query length of 511 amino acids). © 2010 IEEE.
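    The dynamic program being accelerated here is the classic Smith-Waterman recurrence. Below is a minimal serial reference version in Python for orientation; the paper's actual contribution, the GPU/CPU workload distribution, is not shown, and the linear gap penalty and scoring values are illustrative defaults rather than the authors' parameters.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Optimal local-alignment score of sequences a and b
    (score matrix only; traceback omitted for brevity)."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment: the cell score never drops below zero.
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best
```

    The O(len(a) x len(b)) cell updates along each anti-diagonal are independent, which is exactly the parallelism a GPU implementation exploits.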

  9. Achievements and challenges in structural bioinformatics and computational biophysics.

    Science.gov (United States)

    Samish, Ilan; Bourne, Philip E; Najmanovich, Rafael J

    2015-01-01

    The field of structural bioinformatics and computational biophysics has undergone a revolution in the last 10 years, developments that are captured annually through the 3DSIG meeting, upon which this article reflects. An increase in accessible data, computational resources and methodology has resulted in an increase in the size and resolution of studied systems and in the complexity of the questions amenable to research. Concomitantly, the parameterization and efficiency of the methods have markedly improved, along with their cross-validation with other computational and experimental results. The field exhibits an ever-increasing integration with biochemistry, biophysics and other disciplines. In this article, we discuss recent achievements along with current challenges within the field. © The Author 2014. Published by Oxford University Press.

  10. The growing need for microservices in bioinformatics

    Directory of Open Access Journals (Sweden)

    Christopher L Williams

    2016-01-01

    Full Text Available Objective: Within the information technology (IT) industry, best practices and standards are constantly evolving and being refined. In contrast, computer technology utilized within the healthcare industry often evolves at a glacial pace, with reduced opportunities for justified innovation. Although the use of timely technology refreshes within an enterprise's overall technology stack can be costly, thoughtful adoption of select technologies with a demonstrated return on investment can be very effective in increasing productivity and, at the same time, reducing the burden of maintenance often associated with older and legacy systems. In this brief technical communication, we introduce the concept of microservices as applied to the ecosystem of data analysis pipelines. Microservice architecture is a framework for dividing complex systems into easily managed parts. Each individual service is limited in functional scope, thereby conferring a higher measure of functional isolation and reliability to the collective solution. Moreover, maintenance challenges are greatly simplified by virtue of the reduced architectural complexity of each constitutive module. This fact notwithstanding, overall solutions rendered using a microservices-based approach provide equal or greater levels of functionality as compared to conventional programming approaches. Bioinformatics, with its ever-increasing demand for performance and new testing algorithms, is the perfect use case for such a solution. Moreover, if promulgated within the greater development community as an open-source solution, such an approach holds potential to be transformative to current bioinformatics software development. Context: Bioinformatics relies on a nimble IT framework which can adapt to changing requirements. Aims: To present a well-established software design and deployment strategy as a solution for current challenges within bioinformatics. Conclusions: Use of the microservices framework

  11. The growing need for microservices in bioinformatics.

    Science.gov (United States)

    Williams, Christopher L; Sica, Jeffrey C; Killen, Robert T; Balis, Ulysses G J

    2016-01-01

    Within the information technology (IT) industry, best practices and standards are constantly evolving and being refined. In contrast, computer technology utilized within the healthcare industry often evolves at a glacial pace, with reduced opportunities for justified innovation. Although the use of timely technology refreshes within an enterprise's overall technology stack can be costly, thoughtful adoption of select technologies with a demonstrated return on investment can be very effective in increasing productivity and, at the same time, reducing the burden of maintenance often associated with older and legacy systems. In this brief technical communication, we introduce the concept of microservices as applied to the ecosystem of data analysis pipelines. Microservice architecture is a framework for dividing complex systems into easily managed parts. Each individual service is limited in functional scope, thereby conferring a higher measure of functional isolation and reliability to the collective solution. Moreover, maintenance challenges are greatly simplified by virtue of the reduced architectural complexity of each constitutive module. This fact notwithstanding, overall solutions rendered using a microservices-based approach provide equal or greater levels of functionality as compared to conventional programming approaches. Bioinformatics, with its ever-increasing demand for performance and new testing algorithms, is the perfect use case for such a solution. Moreover, if promulgated within the greater development community as an open-source solution, such an approach holds potential to be transformative to current bioinformatics software development. Bioinformatics relies on a nimble IT framework which can adapt to changing requirements. We present a well-established software design and deployment strategy as a solution for current challenges within bioinformatics. Use of the microservices framework is an effective methodology for the fabrication and

  12. The growing need for microservices in bioinformatics

    Science.gov (United States)

    Williams, Christopher L.; Sica, Jeffrey C.; Killen, Robert T.; Balis, Ulysses G. J.

    2016-01-01

    Objective: Within the information technology (IT) industry, best practices and standards are constantly evolving and being refined. In contrast, computer technology utilized within the healthcare industry often evolves at a glacial pace, with reduced opportunities for justified innovation. Although the use of timely technology refreshes within an enterprise's overall technology stack can be costly, thoughtful adoption of select technologies with a demonstrated return on investment can be very effective in increasing productivity and, at the same time, reducing the burden of maintenance often associated with older and legacy systems. In this brief technical communication, we introduce the concept of microservices as applied to the ecosystem of data analysis pipelines. Microservice architecture is a framework for dividing complex systems into easily managed parts. Each individual service is limited in functional scope, thereby conferring a higher measure of functional isolation and reliability to the collective solution. Moreover, maintenance challenges are greatly simplified by virtue of the reduced architectural complexity of each constitutive module. This fact notwithstanding, overall solutions rendered using a microservices-based approach provide equal or greater levels of functionality as compared to conventional programming approaches. Bioinformatics, with its ever-increasing demand for performance and new testing algorithms, is the perfect use case for such a solution. Moreover, if promulgated within the greater development community as an open-source solution, such an approach holds potential to be transformative to current bioinformatics software development. Context: Bioinformatics relies on a nimble IT framework which can adapt to changing requirements. Aims: To present a well-established software design and deployment strategy as a solution for current challenges within bioinformatics. Conclusions: Use of the microservices framework is an effective
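    The defining property these abstracts describe, a service strictly limited in functional scope, can be sketched with Python's standard library alone. The hypothetical service below does exactly one thing (reverse-complementing a DNA sequence over HTTP); the endpoint name and payload format are invented for the example, and a production microservice would add discovery, health checks and containerized deployment.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.parse import parse_qs, urlparse

COMPLEMENT = str.maketrans("ACGT", "TGCA")

class RevCompHandler(BaseHTTPRequestHandler):
    """A deliberately tiny service: its entire functional scope is
    reverse-complementing a DNA sequence."""

    def do_GET(self):
        parsed = urlparse(self.path)
        if parsed.path != "/revcomp":
            self.send_error(404)
            return
        seq = parse_qs(parsed.query).get("seq", [""])[0].upper()
        body = json.dumps({"revcomp": seq.translate(COMPLEMENT)[::-1]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Bind to an ephemeral port and serve from a background thread.
server = ThreadingHTTPServer(("127.0.0.1", 0), RevCompHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

with urllib.request.urlopen(f"http://127.0.0.1:{port}/revcomp?seq=GATTACA") as resp:
    result = json.loads(resp.read())
server.shutdown()
print(result)  # {'revcomp': 'TGTAATC'}
```

    Because the service's contract is just this one HTTP endpoint, it can be replaced, scaled or redeployed without touching the rest of a pipeline, which is the maintenance benefit the article argues for.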

  13. Bioinformatics of cardiovascular miRNA biology.

    Science.gov (United States)

    Kunz, Meik; Xiao, Ke; Liang, Chunguang; Viereck, Janika; Pachel, Christina; Frantz, Stefan; Thum, Thomas; Dandekar, Thomas

    2015-12-01

    MicroRNAs (miRNAs) are small, ~22-nucleotide non-coding RNAs and are highly conserved among species. Moreover, miRNAs regulate the gene expression of a large number of genes associated with important biological functions and signaling pathways. Recently, several miRNAs have been found to be associated with cardiovascular diseases. Thus, investigating the complex regulatory effect of miRNAs may lead to a better understanding of their functional role in the heart. To achieve this, bioinformatics approaches have to be coupled with validation and screening experiments to understand the complex interactions of miRNAs with the genome. This will boost the subsequent development of diagnostic markers and our understanding of the physiological and therapeutic role of miRNAs in cardiac remodeling. In this review, we focus on and explain different bioinformatics strategies and algorithms for the identification and analysis of miRNAs and their regulatory elements to better understand cardiac miRNA biology. Starting with the biogenesis of miRNAs, we present approaches such as LocARNA and miRBase for combining sequence and structure analysis, including phylogenetic comparisons, as well as detailed analysis of RNA folding patterns, functional target prediction, signaling pathway analysis and functional analysis. We also show how far bioinformatics helps to tackle the unprecedented level of complexity and the systemic effects of miRNAs, underlining the strong therapeutic potential of miRNAs and miRNA target structures in cardiovascular disease. In addition, we discuss drawbacks and limitations of bioinformatics algorithms and the necessity of experimental approaches for miRNA target identification. This article is part of a Special Issue entitled 'Non-coding RNAs'. Copyright © 2014 Elsevier Ltd. All rights reserved.
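    The functional target prediction mentioned above typically starts from seed matching: scanning a candidate 3'UTR for the reverse complement of the miRNA seed region (nucleotides 2-8). The sketch below shows only that first, simplified step; real tools add site-type classification, conservation and context scoring, and the function name is invented for the example.

```python
# RNA base-pair complements.
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_sites(mirna, utr):
    """Return 0-based positions in `utr` that match the reverse complement
    of the miRNA seed (nucleotides 2-8, a canonical 7-mer site)."""
    seed = mirna[1:8]                        # seed region, positions 2-8
    site = "".join(COMPLEMENT[b] for b in reversed(seed))
    return [i for i in range(len(utr) - len(site) + 1)
            if utr[i:i + len(site)] == site]
```

    For example, for the let-7a sequence UGAGGUAGUAGGUUGUAUAGUU the seed is GAGGUAG, so the function looks for the site CUACCUC in the UTR.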

  14. Comprehensive decision tree models in bioinformatics.

    Directory of Open Access Journals (Sweden)

    Gregor Stiglic

    Full Text Available PURPOSE: Classification is an important and widely used machine learning technique in bioinformatics. Researchers and other end-users of machine learning software often prefer to work with comprehensible models where knowledge extraction and explanation of the reasoning behind the classification model are possible. METHODS: This paper presents an extension to an existing machine learning environment and a study on visual tuning of decision tree classifiers. The motivation for this research comes from the need to build effective and easily interpretable decision tree models by a so-called one-button data mining approach where no parameter tuning is needed. To avoid bias in classification, no classification performance measure is used during the tuning of the model, which is constrained exclusively by the dimensions of the produced decision tree. RESULTS: The proposed visual tuning of decision trees was evaluated on 40 datasets containing classical machine learning problems and 31 datasets from the field of bioinformatics. Although we did not expect significant differences in classification performance, the results demonstrate a significant increase of accuracy in less complex, visually tuned decision trees. In contrast to classical machine learning benchmarking datasets, we observe higher accuracy gains in bioinformatics datasets. Additionally, a user study was carried out to confirm the assumption that tree tuning times are significantly lower for the proposed method in comparison to manual tuning of the decision tree. CONCLUSIONS: The empirical results demonstrate that by building simple models constrained by predefined visual boundaries, one not only achieves good comprehensibility, but also very good classification performance that does not differ from the usually more complex models built using default settings of the classical decision tree algorithm. 
In addition, our study demonstrates the suitability of visually tuned decision trees for datasets

  15. Comprehensive decision tree models in bioinformatics.

    Science.gov (United States)

    Stiglic, Gregor; Kocbek, Simon; Pernek, Igor; Kokol, Peter

    2012-01-01

    Classification is an important and widely used machine learning technique in bioinformatics. Researchers and other end-users of machine learning software often prefer to work with comprehensible models where knowledge extraction and explanation of the reasoning behind the classification model are possible. This paper presents an extension to an existing machine learning environment and a study on visual tuning of decision tree classifiers. The motivation for this research comes from the need to build effective and easily interpretable decision tree models by a so-called one-button data mining approach where no parameter tuning is needed. To avoid bias in classification, no classification performance measure is used during the tuning of the model, which is constrained exclusively by the dimensions of the produced decision tree. The proposed visual tuning of decision trees was evaluated on 40 datasets containing classical machine learning problems and 31 datasets from the field of bioinformatics. Although we did not expect significant differences in classification performance, the results demonstrate a significant increase of accuracy in less complex, visually tuned decision trees. In contrast to classical machine learning benchmarking datasets, we observe higher accuracy gains in bioinformatics datasets. Additionally, a user study was carried out to confirm the assumption that tree tuning times are significantly lower for the proposed method in comparison to manual tuning of the decision tree. The empirical results demonstrate that by building simple models constrained by predefined visual boundaries, one not only achieves good comprehensibility, but also very good classification performance that does not differ from the usually more complex models built using default settings of the classical decision tree algorithm. 
In addition, our study demonstrates the suitability of visually tuned decision trees for datasets with binary class attributes and a high number of possibly
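    The size-constrained tree building these abstracts describe can be illustrated with a greedy information-gain splitter whose only stopping rules are label purity and a depth bound, the depth bound standing in for the papers' visual boundary constraint. This is an illustrative simplification in plain Python, not the authors' implementation.

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def build_tree(rows, labels, max_depth):
    """Greedy information-gain splitter; stops on label purity or when
    the max_depth size constraint is reached."""
    majority = Counter(labels).most_common(1)[0][0]
    if max_depth == 0 or len(set(labels)) == 1:
        return majority                       # leaf: predict majority class
    best_gain, best_f = 0.0, None
    for f in range(len(rows[0])):
        split = {}
        for row, t in zip(rows, labels):
            split.setdefault(row[f], []).append(t)
        remainder = sum(len(ts) / len(labels) * entropy(ts)
                        for ts in split.values())
        gain = entropy(labels) - remainder
        if gain > best_gain:
            best_gain, best_f = gain, f
    if best_f is None:                        # no informative split left
        return majority
    branches = {}
    for v in {row[best_f] for row in rows}:
        sub = [(r, t) for r, t in zip(rows, labels) if r[best_f] == v]
        branches[v] = build_tree([r for r, _ in sub], [t for _, t in sub],
                                 max_depth - 1)
    return (best_f, branches, majority)

def predict(tree, row):
    while isinstance(tree, tuple):
        f, branches, majority = tree
        tree = branches.get(row[f], majority)
    return tree

# Toy data: the class label simply follows feature 0.
X = [(1, 0), (1, 1), (0, 0), (0, 1)]
y = [1, 1, 0, 0]
tree = build_tree(X, y, max_depth=1)          # a depth-1 tree suffices here
```

    Capping `max_depth` is the programmatic analogue of keeping the drawn tree inside a fixed visual boundary: the model stays small enough to read, and on simple structure like this toy set it loses no accuracy.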

  16. Penalized feature selection and classification in bioinformatics

    OpenAIRE

    Ma, Shuangge; Huang, Jian

    2008-01-01

    In bioinformatics studies, supervised classification with high-dimensional input variables is frequently encountered. Examples routinely arise in genomic, epigenetic and proteomic studies. Feature selection can be employed along with classifier construction to avoid over-fitting, to generate more reliable classifiers and to provide more insight into the underlying causal relationships. In this article, we provide a review of several recently developed penalized feature selection and classific...
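    A canonical example of the penalized approach reviewed here is the lasso, where an L1 penalty drives the coefficients of uninformative features exactly to zero, so feature selection and model fitting happen jointly. Below is a small coordinate-descent sketch in plain Python; it is illustrative, not taken from the cited article, and the toy data are invented.

```python
def soft_threshold(z, g):
    """The L1 proximal operator: shrink z toward zero by g."""
    if z > g:
        return z - g
    if z < -g:
        return z + g
    return 0.0

def lasso_cd(X, y, lam, iters=200):
    """Coordinate descent for min_b 0.5*||y - Xb||^2 + lam*||b||_1."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # Residual with feature j's contribution removed.
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            norm = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft_threshold(rho, lam) / norm if norm else 0.0
    return beta

# Toy data: y = 2 * x0; the second feature is pure noise.
X = [[1.0, 0.1], [2.0, -0.2], [3.0, 0.1], [4.0, -0.1]]
y = [2.0, 4.0, 6.0, 8.0]
beta = lasso_cd(X, y, lam=1.0)
# The penalty drives the noise coefficient exactly to zero.
```

    The exact zeros are what make the penalty a selection device: in a genomic study with thousands of features, the surviving nonzero coefficients name the features the classifier actually uses.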

  17. Adapting bioinformatics curricula for big data.

    Science.gov (United States)

    Greene, Anna C; Giffin, Kristine A; Greene, Casey S; Moore, Jason H

    2016-01-01

    Modern technologies are capable of generating enormous amounts of data that measure complex biological systems. Computational biologists and bioinformatics scientists are increasingly being asked to use these data to reveal key systems-level properties. We review the extent to which curricula are changing in the era of big data. We identify key competencies that scientists dealing with big data are expected to possess across fields, and we use this information to propose courses to meet these growing needs. While bioinformatics programs have traditionally trained students in data-intensive science, we identify areas of particular biological, computational and statistical emphasis important for this era that can be incorporated into existing curricula. For each area, we propose a course structured around these topics, which can be adapted in whole or in parts into existing curricula. In summary, specific challenges associated with big data provide an important opportunity to update existing curricula, but we do not foresee a wholesale redesign of bioinformatics training programs. © The Author 2015. Published by Oxford University Press.

  18. Application of Bioinformatics in Chronobiology Research

    Directory of Open Access Journals (Sweden)

    Robson da Silva Lopes

    2013-01-01

    Full Text Available Bioinformatics and other well-established sciences, such as molecular biology, genetics, and biochemistry, provide a scientific approach for the analysis of data generated through “omics” projects that may be used in studies of chronobiology. The results of studies that apply these techniques demonstrate how they significantly aided the understanding of chronobiology. However, bioinformatics tools alone cannot eliminate the need for an understanding of the field of research or the data to be considered, nor can such tools replace analysts and researchers. It is often necessary to conduct an evaluation of the results of a data mining effort to determine the degree of reliability. To this end, familiarity with the field of investigation is necessary. It is evident that the knowledge that has been accumulated through chronobiology and the use of tools derived from bioinformatics has contributed to the recognition and understanding of the patterns and biological rhythms found in living organisms. The current work aims to develop new and important applications in the near future through chronobiology research.

  19. Chapter 16: text mining for translational bioinformatics.

    Science.gov (United States)

    Cohen, K Bretonnel; Hunter, Lawrence E

    2013-04-01

    Text mining for translational bioinformatics is a new field with tremendous research potential. It is a subfield of biomedical natural language processing that concerns itself directly with the problem of relating basic biomedical research to clinical practice, and vice versa. Applications of text mining fall both into the category of T1 translational research-translating basic science results into new interventions-and T2 translational research, or translational research for public health. Potential use cases include better phenotyping of research subjects, and pharmacogenomic research. A variety of methods for evaluating text mining applications exist, including corpora, structured test suites, and post hoc judging. Two basic principles of linguistic structure are relevant for building text mining applications. One is that linguistic structure consists of multiple levels. The other is that every level of linguistic structure is characterized by ambiguity. There are two basic approaches to text mining: rule-based, also known as knowledge-based; and machine-learning-based, also known as statistical. Many systems are hybrids of the two approaches. Shared tasks have had a strong effect on the direction of the field. Like all translational bioinformatics software, text mining software for translational bioinformatics can be considered health-critical and should be subject to the strictest standards of quality assurance and software testing.
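    The rule-based (knowledge-based) approach contrasted above can be as simple as dictionary lookup over tokens. A hypothetical sketch, with both the lexicon and the sentence invented:

```python
import re

# Toy rule-based gene-mention tagger: a token is tagged as a gene if it
# appears in a small curated lexicon (a stand-in for real biomedical NER).
LEXICON = {"BRCA1", "TP53", "EGFR"}
TOKEN = re.compile(r"[A-Za-z0-9]+")

def tag_genes(sentence):
    """Return (token, is_gene) pairs for every token in the sentence."""
    return [(t, t.upper() in LEXICON) for t in TOKEN.findall(sentence)]

tags = tag_genes("Mutations in BRCA1 and TP53 alter risk.")
genes = [t for t, is_gene in tags if is_gene]
```

Real systems must also handle the ambiguity the chapter stresses: many gene symbols double as ordinary English words, which pure lookup cannot resolve.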

  20. Bringing Web 2.0 to bioinformatics.

    Science.gov (United States)

    Zhang, Zhang; Cheung, Kei-Hoi; Townsend, Jeffrey P

    2009-01-01

    Enabling deft data integration from numerous, voluminous and heterogeneous data sources is a major bioinformatic challenge. Several approaches have been proposed to address this challenge, including data warehousing and federated databasing. Yet despite the rise of these approaches, integration of data from multiple sources remains problematic and toilsome. These two approaches follow a user-to-computer communication model for data exchange, and do not facilitate a broader concept of data sharing or collaboration among users. In this report, we discuss the potential of Web 2.0 technologies to transcend this model and enhance bioinformatics research. We propose a Web 2.0-based Scientific Social Community (SSC) model for the implementation of these technologies. By establishing a social, collective and collaborative platform for data creation, sharing and integration, we promote a web services-based pipeline featuring web services for computer-to-computer data exchange as users add value. This pipeline aims to simplify data integration and creation, to realize automatic analysis, and to facilitate reuse and sharing of data. SSC can foster collaboration and harness collective intelligence to create and discover new knowledge. In addition to its research potential, we also describe its potential role as an e-learning platform in education. We discuss lessons from information technology, predict the next generation of Web (Web 3.0), and describe its potential impact on the future of bioinformatics studies.

  1. Adapting bioinformatics curricula for big data

    Science.gov (United States)

    Greene, Anna C.; Giffin, Kristine A.; Greene, Casey S.

    2016-01-01

    Modern technologies are capable of generating enormous amounts of data that measure complex biological systems. Computational biologists and bioinformatics scientists are increasingly being asked to use these data to reveal key systems-level properties. We review the extent to which curricula are changing in the era of big data. We identify key competencies that scientists dealing with big data are expected to possess across fields, and we use this information to propose courses to meet these growing needs. While bioinformatics programs have traditionally trained students in data-intensive science, we identify areas of particular biological, computational and statistical emphasis important for this era that can be incorporated into existing curricula. For each area, we propose a course structured around these topics, which can be adapted in whole or in parts into existing curricula. In summary, specific challenges associated with big data provide an important opportunity to update existing curricula, but we do not foresee a wholesale redesign of bioinformatics training programs. PMID:25829469

  2. Stakeholders' Perceptions of Quality and Potential Improvements in the Learning Resources Centers at Omani Basic Education Schools

    Science.gov (United States)

    Al Musawi, Ali; Amer, Talal

    2017-01-01

    This study attempts to investigate the stakeholders' perceptions of quality and prospective improvements in the learning resources centres (LRC) at Omani basic education schools. It focuses on different aspects of the LRCs: organisation, human resources, technological, and educational aspects along with the difficulties faced by these LRCs and…

  3. Modern bioinformatics meets traditional Chinese medicine.

    Science.gov (United States)

    Gu, Peiqin; Chen, Huajun

    2014-11-01

    Traditional Chinese medicine (TCM) is gaining increasing attention with the emergence of integrative medicine and personalized medicine, characterized by pattern differentiation on individual variance and treatments based on natural herbal synergism. Investigating the effectiveness and safety of the potential mechanisms of TCM and the combination principles of drug therapies will bridge the cultural gap with Western medicine and improve the development of integrative medicine. Dealing with rapidly growing amounts of biomedical data and their heterogeneous nature are two important tasks among modern biomedical communities. Bioinformatics, as an emerging interdisciplinary field of computer science and biology, has become a useful tool for easing the data deluge pressure by automating the computation processes with informatics methods. Using these methods to retrieve, store and analyze the biomedical data can effectively reveal the associated knowledge hidden in the data, and thus promote the discovery of integrated information. Recently, these techniques of bioinformatics have been used for facilitating the interactional effects of both Western medicine and TCM. The analysis of TCM data using computational technologies provides biological evidence for the basic understanding of TCM mechanisms, safety and efficacy of TCM treatments. At the same time, the carrier and targets associated with TCM remedies can inspire the rethinking of modern drug development. This review summarizes the significant achievements of applying bioinformatics techniques to many aspects of the research in TCM, such as analysis of TCM-related '-omics' data and techniques for analyzing biological processes and pharmaceutical mechanisms of TCM, which have shown certain potential of bringing new thoughts to both sides. © The Author 2013. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  4. CRISIS UNDER THE RADAR: ILLICIT AMPHETAMINE USE IS REACHING EPIDEMIC PROPORTIONS AND CONTRIBUTING TO RESOURCE OVER-UTILIZATION AT A LEVEL 1 TRAUMA CENTER.

    Science.gov (United States)

    Gemma, Vincent A; Chapple, Kristina A; Goslar, Pamela W; Israr, Sharjeel; Petersen, Scott R; Weinberg, Jordan A

    2018-05-21

    Trauma centers reported illicit amphetamine use in approximately 10% of trauma admissions in the previous decade. From experience at a trauma center located in a southwestern metropolis, our perception is that illicit amphetamine use is on the rise, and that these patients utilize in-hospital resources beyond what would be expected for their injuries. The purpose of this study was to document the incidence of illicit amphetamine use among our trauma patients and to evaluate its impact on resource utilization. We conducted a retrospective cohort study using 7 consecutive years of data (starting July 2010) from our institution's trauma registry. Toxicology screenings were used to categorize patients into one of three groups: illicit amphetamine, other drugs, or drug free. Adjusted linear and logistic regression models were used to predict hospital cost, length of stay, ICU admission and ventilation between drug groups. Models were conducted with combined injury severity (ISS) and then repeated for ISS <9, ISS 9-15 and ISS 16 and above. 8,589 patients were categorized into the following three toxicology groups: 1255 (14.6%) illicit amphetamine, 2214 (25.8%) other drugs, and 5120 (59.6%) drug free. Illicit amphetamine use increased threefold over the course of the study (from 7.85% to 25.0% of annual trauma admissions). Adjusted linear models demonstrated that illicit amphetamine among patients with ISS<9 was associated with 4.6% increase in hospital cost (P=.019) and 7.4% increase in LOS (P=.043). Logistic models revealed significantly increased odds of ventilation across all ISS groups and increased odds of ICU admission when all ISS groups were combined (P=.001) and within the ISS<9 group (P=.002). Hospital resource utilization of amphetamine patients with minor injuries is significant. Trauma centers with similar epidemic growth in proportion of amphetamine patients face a potentially significant resource strain relative to other centers. Prognostic and

  5. Multiobjective optimization in bioinformatics and computational biology.

    Science.gov (United States)

    Handl, Julia; Kell, Douglas B; Knowles, Joshua

    2007-01-01

    This paper reviews the application of multiobjective optimization in the fields of bioinformatics and computational biology. A survey of existing work, organized by application area, forms the main body of the review, following an introduction to the key concepts in multiobjective optimization. An original contribution of the review is the identification of five distinct "contexts," giving rise to multiple objectives: These are used to explain the reasons behind the use of multiobjective optimization in each application area and also to point the way to potential future uses of the technique.
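    The central object in such work is the Pareto front: the set of solutions not dominated in all objectives simultaneously. A minimal sketch over invented (error, model-size) trade-offs for candidate classifiers:

```python
def pareto_front(points):
    """Return the non-dominated subset, assuming both objectives are minimized."""
    front = []
    for p in points:
        dominated = any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in points)
        if not dominated:
            front.append(p)
    return front

# Hypothetical (classification error, model size) pairs.
candidates = [(0.10, 50), (0.08, 80), (0.15, 20), (0.12, 60), (0.08, 40)]
front = pareto_front(candidates)
```

No single point on the front is "best"; each represents a different compromise, which is why these problems resist collapsing into one objective.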

  6. Robust Bioinformatics Recognition with VLSI Biochip Microsystem

    Science.gov (United States)

    Lue, Jaw-Chyng L.; Fang, Wai-Chi

    2006-01-01

    A microsystem architecture for real-time, on-site, robust bioinformatic pattern recognition and analysis has been proposed. This system is compatible with on-chip DNA analysis methods such as polymerase chain reaction (PCR) amplification. A corresponding novel artificial neural network (ANN) learning algorithm, using a new sigmoid-logarithmic transfer function based on the error backpropagation (EBP) algorithm, is introduced. Our results show that the trained new ANN can recognize low-fluorescence patterns better than the conventional sigmoidal ANN does. A differential logarithmic imaging chip is designed for calculating the logarithm of the relative intensities of fluorescence signals. The single-rail logarithmic circuit and a prototype ANN chip were designed, fabricated and characterized.
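    The article's exact sigmoid-logarithmic transfer function is not reproduced here; the sketch below shows one plausible form (a sigmoid applied to log-compressed input, function name and constants our assumptions) and why it separates weak fluorescence levels better than a plain sigmoid:

```python
import numpy as np

def log_sigmoid_transfer(x, eps=1e-3):
    """Hypothetical sigmoid-logarithmic transfer function.

    Log compression spreads out weak (low-intensity) signals before the
    sigmoid, which is the property the abstract credits for recognizing
    low-fluorescence patterns.
    """
    z = np.log(np.maximum(x, 0.0) + eps)
    return 1.0 / (1.0 + np.exp(-z))

def plain_sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

weak = np.array([0.01, 0.02])                 # two faint fluorescence levels
sep_log = np.diff(log_sigmoid_transfer(weak))[0]
sep_plain = np.diff(plain_sigmoid(weak))[0]   # nearly indistinguishable outputs
```

The log-compressed variant produces a larger output gap between the two faint inputs, giving the downstream EBP-trained network more gradient to learn from.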

  7. Introducing bioinformatics, the biosciences' genomic revolution

    CERN Document Server

    Zanella, Paolo

    1999-01-01

    The general audience for these lectures is mainly physicists, computer scientists, engineers or the general public wanting to know more about what's going on in the biosciences. What is bioinformatics, and why is all this fuss being made about it? What is this revolution triggered by the human genome project? Are there any results yet? What are the problems? What new avenues of research have been opened up? What about the technology? These new developments will be compared with what happened at CERN earlier in its evolution, and it is hoped that the similarities and contrasts will stimulate new curiosity and provoke new thoughts.

  8. The Community Health Applied Research Network (CHARN) Data Warehouse: a Resource for Patient-Centered Outcomes Research and Quality Improvement in Underserved, Safety Net Populations.

    Science.gov (United States)

    Laws, Reesa; Gillespie, Suzanne; Puro, Jon; Van Rompaey, Stephan; Quach, Thu; Carroll, Joseph; Weir, Rosy Chang; Crawford, Phil; Grasso, Chris; Kaleba, Erin; McBurnie, Mary Ann

    2014-01-01

    The Community Health Applied Research Network, funded by the Health Resources and Services Administration, is a research network comprising 18 Community Health Centers organized into four Research Nodes (each including an academic partner) and a data coordinating center. The network represents more than 500,000 diverse safety net patients across 11 states. The primary objective of this paper is to describe the development and implementation process of the CHARN data warehouse. The methods involved regulatory and governance development and approval, development of content and structure of the warehouse and processes for extracting the data locally, performing validation, and finally submitting data to the data coordinating center. Version 1 of the warehouse has been developed. Tables have been added, the population and the years of electronic health records (EHR) have been expanded for Version 2. It is feasible to create a national, centralized data warehouse with multiple Community Health Center partners using different EHR systems. It is essential to allow sufficient time: (1) to develop collaborative, trusting relationships among new partners with varied technology, backgrounds, expertise, and interests; (2) to complete institutional, business, and regulatory review processes; (3) to identify and address technical challenges associated with diverse data environments, practices, and resources; and (4) to provide continuing data quality assessments to ensure data accuracy.

  9. A Survey of Scholarly Literature Describing the Field of Bioinformatics Education and Bioinformatics Educational Research

    Science.gov (United States)

    Magana, Alejandra J.; Taleyarkhan, Manaz; Alvarado, Daniela Rivera; Kane, Michael; Springer, John; Clase, Kari

    2014-01-01

    Bioinformatics education can be broadly defined as the teaching and learning of the use of computer and information technology, along with mathematical and statistical analysis for gathering, storing, analyzing, interpreting, and integrating data to solve biological problems. The recent surge of genomics, proteomics, and structural biology in the…

  10. The Effects of International Trade on Resource Misallocation : Trade Partner Matters (Replaced by CentER DP 2012-046)

    NARCIS (Netherlands)

    Curuk, M.

    2011-01-01

    This paper suggests that contingent on the productivity level of the trade partner; international trade may create resource misallocation in less productive countries. It theoretically shows how productivity spillovers induced by trade with more productive countries and heterogeneity in pro-

  11. Resources, environment and solid waste management; Shigen·kankyo mondai to haikibutsu shori no tenkai

    Energy Technology Data Exchange (ETDEWEB)

    Takeda, Nobuo [Kyoto University, Kyoto (Japan). Department of Environmental Engineering

    1999-09-20

    Solid waste management should be considered in close relation to conservation of energy and resources. The history and situation of solid waste management in Japan is outlined and the new concept of waste management is discussed for sustainable development. (author)

  12. clubber: removing the bioinformatics bottleneck in big data analyses

    Science.gov (United States)

    Miller, Maximilian; Zhu, Chengsheng; Bromberg, Yana

    2018-01-01

    With the advent of modern day high-throughput technologies, the bottleneck in biological discovery has shifted from the cost of doing experiments to that of analyzing results. clubber is our automated cluster-load balancing system developed for optimizing these “big data” analyses. Its plug-and-play framework encourages re-use of existing solutions for bioinformatics problems. clubber’s goals are to reduce computation times and to facilitate use of cluster computing. The first goal is achieved by automating the balance of parallel submissions across available high performance computing (HPC) resources. Notably, the latter can be added on demand, including cloud-based resources, and/or featuring heterogeneous environments. The second goal of making HPCs user-friendly is facilitated by an interactive web interface and a RESTful API, allowing for job monitoring and result retrieval. We used clubber to speed up our pipeline for annotating molecular functionality of metagenomes. Here, we analyzed the Deepwater Horizon oil-spill study data to quantitatively show that the beach sands have not yet entirely recovered. Further, our analysis of the CAMI-challenge data revealed that microbiome taxonomic shifts do not necessarily correlate with functional shifts. These examples (21 metagenomes processed in 172 min) clearly illustrate the importance of clubber in the everyday computational biology environment. PMID:28609295

  13. clubber: removing the bioinformatics bottleneck in big data analyses.

    Science.gov (United States)

    Miller, Maximilian; Zhu, Chengsheng; Bromberg, Yana

    2017-06-13

    With the advent of modern day high-throughput technologies, the bottleneck in biological discovery has shifted from the cost of doing experiments to that of analyzing results. clubber is our automated cluster-load balancing system developed for optimizing these "big data" analyses. Its plug-and-play framework encourages re-use of existing solutions for bioinformatics problems. clubber's goals are to reduce computation times and to facilitate use of cluster computing. The first goal is achieved by automating the balance of parallel submissions across available high performance computing (HPC) resources. Notably, the latter can be added on demand, including cloud-based resources, and/or featuring heterogeneous environments. The second goal of making HPCs user-friendly is facilitated by an interactive web interface and a RESTful API, allowing for job monitoring and result retrieval. We used clubber to speed up our pipeline for annotating molecular functionality of metagenomes. Here, we analyzed the Deepwater Horizon oil-spill study data to quantitatively show that the beach sands have not yet entirely recovered. Further, our analysis of the CAMI-challenge data revealed that microbiome taxonomic shifts do not necessarily correlate with functional shifts. These examples (21 metagenomes processed in 172 min) clearly illustrate the importance of clubber in the everyday computational biology environment.
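    clubber's scheduler itself is not shown in the abstract; a greedy least-loaded heuristic conveys the core idea of balancing parallel submissions across available HPC resources (cluster names and job costs below are invented):

```python
import heapq

def balance_jobs(jobs, clusters):
    """Assign each job to the currently least-loaded cluster.

    A toy stand-in for clubber's load balancing: largest jobs are placed
    first, each onto whichever cluster has the smallest accumulated load.
    """
    heap = [(0.0, name) for name in clusters]          # (accumulated load, cluster)
    heapq.heapify(heap)
    assignment = {name: [] for name in clusters}
    for job, cost in sorted(jobs, key=lambda jc: -jc[1]):
        load, name = heapq.heappop(heap)
        assignment[name].append(job)
        heapq.heappush(heap, (load + cost, name))
    return assignment

# Hypothetical metagenome-annotation jobs with rough CPU-hour costs.
jobs = [("mg1", 30), ("mg2", 10), ("mg3", 20), ("mg4", 25)]
plan = balance_jobs(jobs, ["local_hpc", "cloud"])
```

Because clusters are just heap entries, adding an on-demand cloud resource mid-run amounts to pushing another `(load, name)` pair, echoing the abstract's point about heterogeneous, expandable back-ends.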

  14. clubber: removing the bioinformatics bottleneck in big data analyses

    Directory of Open Access Journals (Sweden)

    Miller Maximilian

    2017-06-01

    Full Text Available With the advent of modern day high-throughput technologies, the bottleneck in biological discovery has shifted from the cost of doing experiments to that of analyzing results. clubber is our automated cluster-load balancing system developed for optimizing these “big data” analyses. Its plug-and-play framework encourages re-use of existing solutions for bioinformatics problems. clubber’s goals are to reduce computation times and to facilitate use of cluster computing. The first goal is achieved by automating the balance of parallel submissions across available high performance computing (HPC) resources. Notably, the latter can be added on demand, including cloud-based resources, and/or featuring heterogeneous environments. The second goal of making HPCs user-friendly is facilitated by an interactive web interface and a RESTful API, allowing for job monitoring and result retrieval. We used clubber to speed up our pipeline for annotating molecular functionality of metagenomes. Here, we analyzed the Deepwater Horizon oil-spill study data to quantitatively show that the beach sands have not yet entirely recovered. Further, our analysis of the CAMI-challenge data revealed that microbiome taxonomic shifts do not necessarily correlate with functional shifts. These examples (21 metagenomes processed in 172 min) clearly illustrate the importance of clubber in the everyday computational biology environment.

  15. Genomics Virtual Laboratory: A Practical Bioinformatics Workbench for the Cloud.

    Directory of Open Access Journals (Sweden)

    Enis Afgan

    Full Text Available Analyzing high throughput genomics data is a complex and compute intensive task, generally requiring numerous software tools and large reference data sets, tied together in successive stages of data transformation and visualisation. A computational platform enabling best practice genomics analysis ideally meets a number of requirements, including: a wide range of analysis and visualisation tools, closely linked to large user and reference data sets; workflow platform(s) enabling accessible, reproducible, portable analyses, through a flexible set of interfaces; highly available, scalable computational resources; and flexibility and versatility in the use of these resources to meet demands and expertise of a variety of users. Access to an appropriate computational platform can be a significant barrier to researchers, as establishing such a platform requires a large upfront investment in hardware, experience, and expertise. We designed and implemented the Genomics Virtual Laboratory (GVL) as a middleware layer of machine images, cloud management tools, and online services that enable researchers to build arbitrarily sized compute clusters on demand, pre-populated with fully configured bioinformatics tools, reference datasets and workflow and visualisation options. The platform is flexible in that users can conduct analyses through web-based (Galaxy, RStudio, IPython Notebook) or command-line interfaces, and add/remove compute nodes and data resources as required. Best-practice tutorials and protocols provide a path from introductory training to practice. The GVL is available on the OpenStack-based Australian Research Cloud (http://nectar.org.au) and the Amazon Web Services cloud. The principles, implementation and build process are designed to be cloud-agnostic. This paper provides a blueprint for the design and implementation of a cloud-based Genomics Virtual Laboratory. We discuss scope, design considerations and technical and logistical constraints

  16. Genomics Virtual Laboratory: A Practical Bioinformatics Workbench for the Cloud.

    Science.gov (United States)

    Afgan, Enis; Sloggett, Clare; Goonasekera, Nuwan; Makunin, Igor; Benson, Derek; Crowe, Mark; Gladman, Simon; Kowsar, Yousef; Pheasant, Michael; Horst, Ron; Lonie, Andrew

    2015-01-01

    Analyzing high throughput genomics data is a complex and compute intensive task, generally requiring numerous software tools and large reference data sets, tied together in successive stages of data transformation and visualisation. A computational platform enabling best practice genomics analysis ideally meets a number of requirements, including: a wide range of analysis and visualisation tools, closely linked to large user and reference data sets; workflow platform(s) enabling accessible, reproducible, portable analyses, through a flexible set of interfaces; highly available, scalable computational resources; and flexibility and versatility in the use of these resources to meet demands and expertise of a variety of users. Access to an appropriate computational platform can be a significant barrier to researchers, as establishing such a platform requires a large upfront investment in hardware, experience, and expertise. We designed and implemented the Genomics Virtual Laboratory (GVL) as a middleware layer of machine images, cloud management tools, and online services that enable researchers to build arbitrarily sized compute clusters on demand, pre-populated with fully configured bioinformatics tools, reference datasets and workflow and visualisation options. The platform is flexible in that users can conduct analyses through web-based (Galaxy, RStudio, IPython Notebook) or command-line interfaces, and add/remove compute nodes and data resources as required. Best-practice tutorials and protocols provide a path from introductory training to practice. The GVL is available on the OpenStack-based Australian Research Cloud (http://nectar.org.au) and the Amazon Web Services cloud. The principles, implementation and build process are designed to be cloud-agnostic. This paper provides a blueprint for the design and implementation of a cloud-based Genomics Virtual Laboratory. We discuss scope, design considerations and technical and logistical constraints, and explore the

  17. The Revolution in Viral Genomics as Exemplified by the Bioinformatic Analysis of Human Adenoviruses

    Directory of Open Access Journals (Sweden)

    Sarah Torres

    2010-06-01

    Full Text Available Over the past 30 years, genomic and bioinformatic analysis of human adenoviruses has been achieved using a variety of DNA sequencing methods; initially with the use of restriction enzymes and more recently with the use of the GS FLX pyrosequencing technology. Following the conception of DNA sequencing in the 1970s, analysis of adenoviruses has evolved from 100 base pair mRNA fragments to entire genomes. Comparative genomics of adenoviruses made its debut in 1984 when nucleotides and amino acids of coding sequences within the hexon genes of two human adenoviruses (HAdV), HAdV–C2 and HAdV–C5, were compared and analyzed. It was determined that there were three different zones (1-393, 394-1410, 1411-2910) within the hexon gene, of which HAdV–C2 and HAdV–C5 shared zones 1 and 3 with 95% and 89.5% nucleotide identity, respectively. In 1992, HAdV-C5 became the first adenovirus genome to be fully sequenced using the Sanger method. Over the next seven years, whole genome analysis and characterization was completed using bioinformatic tools such as blastn, tblastx, ClustalV and FASTA, in order to determine key proteins in species HAdV-A through HAdV-F. The bioinformatic revolution was initiated with the introduction of a novel species, HAdV-G, that was typed and named by the use of whole genome sequencing and phylogenetics as opposed to traditional serology. HAdV bioinformatics will continue to advance as the latest sequencing technology enables scientists to add to and expand the resource databases. As a result of these advancements, how novel HAdVs are typed has changed. Bioinformatic analysis has become the revolutionary tool that has significantly accelerated the in-depth study of HAdV microevolution through comparative genomics.
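    Zone-wise identity figures of the kind quoted above come from comparing aligned sequences region by region, which can be sketched as follows (toy 12-nt fragments and zones; the real hexon zones are the 1-393, 394-1410 and 1411-2910 coordinates cited in the abstract):

```python
def zone_identity(seq_a, seq_b, zones):
    """Percent nucleotide identity per zone (1-based, inclusive coordinates),
    assuming the two sequences are already aligned position-by-position."""
    out = {}
    for start, end in zones:
        a, b = seq_a[start - 1:end], seq_b[start - 1:end]
        matches = sum(x == y for x, y in zip(a, b))
        out[(start, end)] = 100.0 * matches / len(a)
    return out

# Invented 12-nt fragments split into two zones for illustration.
ident = zone_identity("ATGCGATACGTT", "ATGCGATCCGAT", [(1, 6), (7, 12)])
```

On these toy inputs the first zone is fully conserved while the second diverges, mirroring how HAdV–C2 and HAdV–C5 share some hexon zones but not others.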

  18. A lightweight, flow-based toolkit for parallel and distributed bioinformatics pipelines

    Directory of Open Access Journals (Sweden)

    Cieślik Marcin

    2011-02-01

    Full Text Available Abstract Background Bioinformatic analyses typically proceed as chains of data-processing tasks. A pipeline, or 'workflow', is a well-defined protocol, with a specific structure defined by the topology of data-flow interdependencies, and a particular functionality arising from the data transformations applied at each step. In computer science, the dataflow programming (DFP) paradigm defines software systems constructed in this manner, as networks of message-passing components. Thus, bioinformatic workflows can be naturally mapped onto DFP concepts. Results To enable the flexible creation and execution of bioinformatics dataflows, we have written a modular framework for parallel pipelines in Python ('PaPy'). A PaPy workflow is created from re-usable components connected by data-pipes into a directed acyclic graph, which together define nested higher-order map functions. The successive functional transformations of input data are evaluated on flexibly pooled compute resources, either local or remote. Input items are processed in batches of adjustable size, allowing one to tune the trade-off between parallelism and lazy-evaluation (memory consumption). An add-on module ('NuBio') facilitates the creation of bioinformatics workflows by providing domain-specific data-containers (e.g., for biomolecular sequences, alignments, structures) and functionality (e.g., to parse/write standard file formats). Conclusions PaPy offers a modular framework for the creation and deployment of parallel and distributed data-processing workflows. Pipelines derive their functionality from user-written, data-coupled components, so PaPy also can be viewed as a lightweight toolkit for extensible, flow-based bioinformatics data-processing. The simplicity and flexibility of distributed PaPy pipelines may help users bridge the gap between traditional desktop/workstation and grid computing. PaPy is freely distributed as open-source Python code at http://muralab.org/PaPy, and
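    The dataflow idea, successive map transformations over a stream of items, can be sketched in a few lines of plain Python. This illustrates the paradigm only; it is not PaPy's actual API:

```python
from functools import reduce

def pipeline(*stages):
    """Compose data-processing stages, each a function mapped over a stream.

    Because Python's map is lazy, items flow one at a time through every
    stage, echoing the lazy-evaluation / memory trade-off the paper tunes
    via batch size.
    """
    def run(items):
        return reduce(lambda stream, f: map(f, stream), stages, items)
    return run

# Three invented toy stages: clean a record, normalize it, summarize it.
parse = str.strip
upper = str.upper
length = len

workflow = pipeline(parse, upper, length)
result = list(workflow(["  atgc ", "gga\n"]))
```

Each stage is a reusable, data-coupled component, the same design principle the abstract attributes to PaPy's directed-acyclic-graph workflows.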

  19. Needs assessment of science teachers in secondary schools in Kumasi, Ghana: A basis for in-service education training programs at the Science Resource Centers

    Science.gov (United States)

    Gyamfi, Alexander

    The purpose of this study was twofold. First, it identified the priority needs common to all science teachers in secondary schools in Kumasi, Ghana. Second, it investigated the relationship existing between the identified priority needs and the teacher demographic variables (type of school, teacher qualification, teaching experience, subject discipline, and sex of teacher) to be used as a basis for implementing in-service education training programs at the Science Resource Centers in Kumasi Ghana. An adapted version of the Moore Assessment Profile (MAP) survey instrument and a set of open-ended questions were used to collect data from the science teachers. The researcher handed out one hundred and fifty questionnaire packets, and all one hundred and fifty (100%) were collected within a period of six weeks. The data were analyzed using descriptive statistics, content analysis, and inferential statistics. The descriptive statistics reported the frequency of responses, and it was used to calculate the Need Index (N) of the identified needs of teachers. Sixteen top-priority needs were identified, and the needs were arranged in a hierarchical order according to the magnitude of the Need Index (0.000 ≤ N ≤ 1.000). Content analysis was used to analyze the responses to the open-ended questions. One-way analysis of variance (ANOVA) was used to test the null hypotheses of the study on each of the sixteen identified top-priority needs and the teacher demographic variables. The findings of this study were as follows: (1) The science teachers identified needs related to "more effective use of instructional materials" as a crucial area for in-service training. (2) Host and Satellite schools exhibited significant difference on procuring supplementary science books for students. Subject discipline of teachers exhibited significant differences on utilizing the library and its facilities by students, obtaining information on where to get help on effective science teaching

  20. Use of Evidence-Based Practice Resources and Empirically Supported Treatments for Posttraumatic Stress Disorder among University Counseling Center Psychologists

    Science.gov (United States)

    Juel, Morgen Joray

    2012-01-01

    In the present study, an attempt was made to determine the degree to which psychologists at college and university counseling centers (UCCs) utilized empirically supported treatments with their posttraumatic stress disorder (PTSD) clients. In addition, an attempt was made to determine how frequently UCC psychologists utilized a number of…

  1. Bioinformatic Analysis of Strawberry GSTF12 Gene

    Science.gov (United States)

    Wang, Xiran; Jiang, Leiyu; Tang, Haoru

    2018-01-01

    GSTF12 has long been known as a key factor in proanthocyanidin accumulation in the plant testa. Bioinformatic analysis of the nucleotide and encoded protein sequences of GSTF12 supports the study of genes related to the anthocyanin biosynthesis and accumulation pathway. We therefore selected the GSTF12 genes of 11 species, downloaded their nucleotide and protein sequences from NCBI as the research material, identified the strawberry GSTF12 gene through bioinformatic analysis, and constructed a phylogenetic tree. We also analyzed the physicochemical properties, protein structure, and other features of the strawberry GSTF12 gene. The phylogenetic tree showed that strawberry and petunia were the closest relatives. Protein prediction indicated that the protein possesses one signal peptide and no obvious transmembrane regions.

  2. Bioinformatics for Next Generation Sequencing Data

    Directory of Open Access Journals (Sweden)

    Alberto Magi

    2010-09-01

    Full Text Available The emergence of next-generation sequencing (NGS) platforms imposes increasing demands on statistical methods and bioinformatic tools for the analysis and management of the huge amounts of data generated by these technologies. Even at this early stage of their commercial availability, a large number of software tools already exist for analyzing NGS data. These tools fall into several general categories, including alignment of sequence reads to a reference, base-calling and/or polymorphism detection, de novo assembly from paired or unpaired reads, structural variant detection, and genome browsing. This manuscript aims to guide readers in choosing among the available computational tools for the several steps of the data analysis workflow.

  3. Combining multiple decisions: applications to bioinformatics

    International Nuclear Information System (INIS)

    Yukinawa, N; Ishii, S; Takenouchi, T; Oba, S

    2008-01-01

    Multi-class classification is one of the fundamental tasks in bioinformatics and typically arises in cancer diagnosis studies based on gene expression profiling. This article reviews two recent approaches to multi-class classification that combine multiple binary classifiers, both formulated within a unified framework of error-correcting output coding (ECOC). The first approach constructs a multi-class classifier in which each aggregated binary classifier has a weight value that is optimally tuned from the observed data. In the second approach, misclassification by each binary classifier is formulated as a bit-inversion error within a probabilistic model, by analogy with information transmission theory. Experimental studies using various real-world datasets, including cancer classification problems, reveal that both new methods are superior or comparable to other multi-class classification methods.
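    The generic ECOC scheme this abstract builds on can be sketched with scikit-learn's OutputCodeClassifier; this is a plain ECOC implementation, not the weighted or probabilistic-decoding variants the article proposes, and the synthetic dataset, class count, and parameters below are illustrative assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OutputCodeClassifier

# Synthetic stand-in for a gene-expression profiling dataset (3 classes).
X, y = make_classification(n_samples=300, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# ECOC: each class is assigned a binary codeword; one binary classifier is
# trained per codeword bit, and prediction picks the class whose codeword
# is closest to the vector of binary decisions.
ecoc = OutputCodeClassifier(LogisticRegression(max_iter=1000),
                            code_size=2.0, random_state=0)
ecoc.fit(X_train, y_train)
pred = ecoc.predict(X_test)
print("accuracy:", ecoc.score(X_test, y_test))
```

    The `code_size` parameter controls how many binary classifiers are trained per class; larger codes add redundancy, which is what lets decoding "correct" individual binary misclassifications.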

  4. Data mining in bioinformatics using Weka.

    Science.gov (United States)

    Frank, Eibe; Hall, Mark; Trigg, Len; Holmes, Geoffrey; Witten, Ian H

    2004-10-12

    The Weka machine learning workbench provides a general-purpose environment for automatic classification, regression, clustering, and feature selection: common data mining problems in bioinformatics research. It contains an extensive collection of machine learning algorithms and data pre-processing methods, complemented by graphical user interfaces for data exploration and for the experimental comparison of different machine learning techniques on the same problem. Weka can process data given in the form of a single relational table. Its main objectives are to (a) assist users in extracting useful information from data and (b) enable them to easily identify a suitable algorithm for generating an accurate predictive model from it. http://www.cs.waikato.ac.nz/ml/weka.
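    As a minimal illustration of the single-relational-table input Weka expects (conventionally the ARFF format), the sketch below defines a tiny ARFF table and reads it back with SciPy's ARFF parser. Weka itself is a Java application, so SciPy merely stands in to show the format; the attribute names and values are invented:

```python
from io import StringIO
from scipy.io import arff

# A minimal ARFF relational table: two numeric attributes and one
# nominal class attribute (all names and values are made up).
ARFF_TEXT = """\
@relation expression
@attribute gene1 numeric
@attribute gene2 numeric
@attribute class {tumor,normal}
@data
0.8,1.2,tumor
0.1,0.4,normal
"""

# loadarff returns the rows as a NumPy record array plus metadata
# describing each attribute's name and type.
data, meta = arff.loadarff(StringIO(ARFF_TEXT))
print(meta.names())  # attribute names in declaration order
print(len(data))     # number of data rows
```

    The same file, saved with an `.arff` extension, can be opened directly in Weka's Explorer interface for the classification and clustering tasks the abstract describes.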

  5. Bioinformatic and Biometric Methods in Plant Morphology

    Directory of Open Access Journals (Sweden)

    Surangi W. Punyasena

    2014-08-01

    Full Text Available Recent advances in microscopy, imaging, and data analyses have permitted both the greater application of quantitative methods and the collection of large data sets that can be used to investigate plant morphology. This special issue, the first for Applications in Plant Sciences, presents a collection of papers highlighting recent methods in the quantitative study of plant form. These emerging biometric and bioinformatic approaches to plant sciences are critical for better understanding how morphology relates to ecology, physiology, genotype, and evolutionary and phylogenetic history. From microscopic pollen grains and charcoal particles, to macroscopic leaves and whole root systems, the methods presented include automated classification and identification, geometric morphometrics, and skeleton networks, as well as tests of the limits of human assessment. All demonstrate a clear need for these computational and morphometric approaches in order to increase the consistency, objectivity, and throughput of plant morphological studies.

  6. Academic Training - Bioinformatics: Decoding the Genome

    CERN Multimedia

    Chris Jones

    2006-01-01

    ACADEMIC TRAINING LECTURE SERIES 27, 28 February and 1, 2, 3 March 2006, from 11:00 to 12:00 - Auditorium, bldg. 500 Decoding the Genome A special series of 5 lectures on: Recent extraordinary advances in the life sciences arising through new detection technologies and bioinformatics. The past five years have seen an extraordinary change in the information and tools available in the life sciences. The sequencing of the human genome, the discovery that we possess far fewer genes than foreseen, the measurement of the tiny changes in the genomes that differentiate us, and the sequencing of the genomes of many pathogens that cause diseases such as malaria are all examples of completely new information now available in the quest for improved healthcare. New tools have allowed similar strides in the discovery of the associated protein structures, providing invaluable information for those searching for new drugs. New DNA microarray chips permit simultaneous measurement of the state of expression of tens...

  7. Proceedings of the 2013 MidSouth Computational Biology and Bioinformatics Society (MCBIOS) Conference.

    Science.gov (United States)

    Wren, Jonathan D; Dozmorov, Mikhail G; Burian, Dennis; Kaundal, Rakesh; Perkins, Andy; Perkins, Ed; Kupfer, Doris M; Springer, Gordon K

    2013-01-01

    The tenth annual conference of the MidSouth Computational Biology and Bioinformatics Society (MCBIOS 2013), "The 10th Anniversary in a Decade of Change: Discovery in a Sea of Data", took place at the Stoney Creek Inn & Conference Center in Columbia, Missouri on April 5-6, 2013. This year's Conference Chairs were Gordon Springer and Chi-Ren Shyu from the University of Missouri and Edward Perkins from the US Army Corps of Engineers Engineering Research and Development Center, who is also the current MCBIOS President (2012-2013). There were 151 registrants and a total of 111 abstracts (51 oral presentations and 60 poster session abstracts).

  8. A Review of Recent Advances in Translational Bioinformatics: Bridges from Biology to Medicine.

    Science.gov (United States)

    Vamathevan, J; Birney, E

    2017-08-01

    Objectives: To highlight and provide insights into key developments in translational bioinformatics between 2014 and 2016. Methods: This review describes some of the most influential bioinformatics papers and resources that have been published between 2014 and 2016 as well as the national genome sequencing initiatives that utilize these resources to routinely embed genomic medicine into healthcare. Also discussed are some applications of the secondary use of patient data followed by a comprehensive view of the open challenges and emergent technologies. Results: Although data generation can be performed routinely, analyses and data integration methods still require active research and standardization to improve streamlining of clinical interpretation. The secondary use of patient data has resulted in the development of novel algorithms and has enabled a refined understanding of cellular and phenotypic mechanisms. New data storage and data sharing approaches are required to enable diverse biomedical communities to contribute to genomic discovery. Conclusion: The translation of genomics data into actionable knowledge for use in healthcare is transforming the clinical landscape in an unprecedented way. Exciting and innovative models that bridge the gap between clinical and academic research are set to open up the field of translational bioinformatics for rapid growth in a digital era. Georg Thieme Verlag KG Stuttgart.

  9. Evaluating an Inquiry-Based Bioinformatics Course Using Q Methodology

    Science.gov (United States)

    Ramlo, Susan E.; McConnell, David; Duan, Zhong-Hui; Moore, Francisco B.

    2008-01-01

    Faculty at a Midwestern metropolitan public university recently developed a course on bioinformatics that emphasized collaboration and inquiry. Bioinformatics, essentially the application of computational tools to biological data, is inherently interdisciplinary. Thus part of the challenge of creating this course was serving the needs and…

  10. Bioinformatics and its application in animal health: a review | Soetan ...

    African Journals Online (AJOL)

    Bioinformatics is an interdisciplinary subject, which uses computer application, statistics, mathematics and engineering for the analysis and management of biological information. It has become an important tool for basic and applied research in veterinary sciences. Bioinformatics has brought about advancements into ...

  11. Recent developments in life sciences research: Role of bioinformatics

    African Journals Online (AJOL)

    Life sciences research and development has opened up new challenges and opportunities for bioinformatics. The contribution of bioinformatics advances made possible the mapping of the entire human genome and genomes of many other organisms in just over a decade. These discoveries, along with current efforts to ...

  12. Generative Topic Modeling in Image Data Mining and Bioinformatics Studies

    Science.gov (United States)

    Chen, Xin

    2012-01-01

    Probabilistic topic models have been developed for applications in various domains, such as text mining, information retrieval, computer vision, and bioinformatics. In this thesis, we focus on developing novel probabilistic topic models for image mining and bioinformatics studies. Specifically, a probabilistic topic-connection (PTC) model…

  13. Assessment of a Bioinformatics across Life Science Curricula Initiative

    Science.gov (United States)

    Howard, David R.; Miskowski, Jennifer A.; Grunwald, Sandra K.; Abler, Michael L.

    2007-01-01

    At the University of Wisconsin-La Crosse, we have undertaken a program to integrate the study of bioinformatics across the undergraduate life science curricula. Our efforts have included incorporating bioinformatics exercises into courses in the biology, microbiology, and chemistry departments, as well as coordinating the efforts of faculty within…

  14. Concepts Of Bioinformatics And Its Application In Veterinary ...

    African Journals Online (AJOL)

    Bioinformatics has advanced the course of research and future veterinary vaccine development because it has provided new tools for identifying vaccine targets from the sequenced biological data of organisms. In Nigeria, there is a lack of bioinformatics training in the universities, except for short training courses in which ...

  15. Current status and future perspectives of bioinformatics in Tanzania ...

    African Journals Online (AJOL)

    The main bottleneck in advancing genomics in present times is the lack of expertise in using bioinformatics tools and approaches for data mining in raw DNA sequences generated by modern high throughput technologies such as next generation sequencing. Although bioinformatics has been making major progress and ...

  16. The 2015 Bioinformatics Open Source Conference (BOSC 2015).

    Science.gov (United States)

    Harris, Nomi L; Cock, Peter J A; Lapp, Hilmar; Chapman, Brad; Davey, Rob; Fields, Christopher; Hokamp, Karsten; Munoz-Torres, Monica

    2016-02-01

    The Bioinformatics Open Source Conference (BOSC) is organized by the Open Bioinformatics Foundation (OBF), a nonprofit group dedicated to promoting the practice and philosophy of open source software development and open science within the biological research community. Since its inception in 2000, BOSC has provided bioinformatics developers with a forum for communicating the results of their latest efforts to the wider research community. BOSC offers a focused environment for developers and users to interact and share ideas about standards; software development practices; practical techniques for solving bioinformatics problems; and approaches that promote open science and sharing of data, results, and software. BOSC is run as a two-day special interest group (SIG) before the annual Intelligent Systems in Molecular Biology (ISMB) conference. BOSC 2015 took place in Dublin, Ireland, and was attended by over 125 people, about half of whom were first-time attendees. Session topics included "Data Science;" "Standards and Interoperability;" "Open Science and Reproducibility;" "Translational Bioinformatics;" "Visualization;" and "Bioinformatics Open Source Project Updates". In addition to two keynote talks and dozens of shorter talks chosen from submitted abstracts, BOSC 2015 included a panel, titled "Open Source, Open Door: Increasing Diversity in the Bioinformatics Open Source Community," that provided an opportunity for open discussion about ways to increase the diversity of participants in BOSC in particular, and in open source bioinformatics in general. The complete program of BOSC 2015 is available online at http://www.open-bio.org/wiki/BOSC_2015_Schedule.

  17. Is there room for ethics within bioinformatics education?

    Science.gov (United States)

    Taneri, Bahar

    2011-07-01

    When bioinformatics education is considered, several issues are addressed. At the undergraduate level, the main issue revolves around conveying information from two main and different fields: biology and computer science. At the graduate level, the main issue is bridging the gap between biology students and computer science students. However, there is an educational component that is rarely addressed within the context of bioinformatics education: the ethics component. Here, a different perspective is provided on bioinformatics education, and the current status of ethics is analyzed within the existing bioinformatics programs. Analysis of the existing undergraduate and graduate programs, in both Europe and the United States, reveals the minimal attention given to ethics within bioinformatics education. Given that bioinformaticians speedily and effectively shape the biomedical sciences and hence their implications for society, here redesigning of the bioinformatics curricula is suggested in order to integrate the necessary ethics education. Unique ethical problems awaiting bioinformaticians and bioinformatics ethics as a separate field of study are discussed. In addition, a template for an "Ethics in Bioinformatics" course is provided.

  18. Using Bioinformatics to Treat Hospitalized Smokers: Successes and Challenges of a Tobacco Treatment Service.

    Science.gov (United States)

    Ylioja, Thomas; Reddy, Vivek; Ambrosino, Richard; Davis, Esa M; Douaihy, Antoine; Slovenkay, Kristin; Kogut, Valerie; Frenak, Beth; Palombo, Kathy; Schulze, Anna; Cochran, Gerald; Tindle, Hilary A

    2017-12-01

    Hospitals face increasing regulations to provide and document inpatient tobacco treatment, yet few published blueprints exist for implementing a tobacco treatment service (TTS). A hospitalwide, opt-out TTS with three full-time certified counselors was developed in a large tertiary care hospital to proactively treat smokers according to Chronic Care Model principles and national treatment guidelines. A bioinformatics platform facilitated integration into the electronic health record to meet evolving Centers for Medicare & Medicaid Services meaningful use and Joint Commission standards. TTS counselors visited smokers at the bedside and offered counseling, recommended smoking cessation medication to be ordered by the primary clinical service, and arranged postdischarge resources. During a 3.5-year span, 21,229 smokers (31,778 admissions) were identified; TTS specialists reached 37.4% (7,943), and 33.3% (5,888) of daily smokers received a smoking cessation medication order. Adjusted odds ratios (AORs) of receiving a chart order for smoking cessation medication during the hospital stay and at discharge were higher among patients the TTS counseled > 3 minutes and recommended medication: inpatient AOR = 7.15 (95% confidence interval [CI] = 6.59-7.75); discharge AOR = 5.3 (95% CI = 4.71-5.97). As implementation progressed, TTS counseling reach and medication orders increased. To assess smoking status ≤ 1 month postdischarge, three methods were piloted, all of which were limited by low follow-up rates (4.5%-28.6%). The TTS counseled approximately 3,000 patients annually, with increases over time in reach and implementation. Remaining challenges include developing strategies to engage inpatient care teams to follow TTS recommendations, and to engage patients postdischarge in order to optimize postdischarge smoking cessation. Copyright © 2017. Published by Elsevier Inc.

  19. 4273π: bioinformatics education on low cost ARM hardware.

    Science.gov (United States)

    Barker, Daniel; Ferrier, David Ek; Holland, Peter Wh; Mitchell, John Bo; Plaisier, Heleen; Ritchie, Michael G; Smart, Steven D

    2013-08-12

    Teaching bioinformatics at universities is complicated by typical computer classroom settings. As well as running software locally and online, students should gain experience of systems administration. For a future career in biology or bioinformatics, the installation of software is a useful skill. We propose that this may be taught by running the course on GNU/Linux running on inexpensive Raspberry Pi computer hardware, for which students may be granted full administrator access. We release 4273π, an operating system image for Raspberry Pi based on Raspbian Linux. This includes minor customisations for classroom use and includes our Open Access bioinformatics course, 4273π Bioinformatics for Biologists. This is based on the final-year undergraduate module BL4273, run on Raspberry Pi computers at the University of St Andrews, Semester 1, academic year 2012-2013. 4273π is a means to teach bioinformatics, including systems administration tasks, to undergraduates at low cost.

  20. LXtoo: an integrated live Linux distribution for the bioinformatics community.

    Science.gov (United States)

    Yu, Guangchuang; Wang, Li-Gen; Meng, Xiao-Hua; He, Qing-Yu

    2012-07-19

    Recent advances in high-throughput technologies have dramatically increased biological data generation. However, many research groups lack computing facilities and specialists, an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, that provides a flexible computing platform for bioinformatics analysis. Unlike most existing live Linux distributions for bioinformatics, which limit their use to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics data, protein-protein interaction analysis, and computationally complex tasks such as molecular dynamics. Moreover, most of the programs have been configured and optimized for high-performance computing. LXtoo aims to provide a well-supported computing environment tailored for bioinformatics research, reducing duplication of effort in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo.

  1. The development and application of bioinformatics core competencies to improve bioinformatics training and education.

    Science.gov (United States)

    Mulder, Nicola; Schwartz, Russell; Brazas, Michelle D; Brooksbank, Cath; Gaeta, Bruno; Morgan, Sarah L; Pauley, Mark A; Rosenwald, Anne; Rustici, Gabriella; Sierk, Michael; Warnow, Tandy; Welch, Lonnie

    2018-02-01

    Bioinformatics is recognized as part of the essential knowledge base of numerous career paths in biomedical research and healthcare. However, there is little agreement in the field over what that knowledge entails or how best to provide it. These disagreements are compounded by the wide range of populations in need of bioinformatics training, with divergent prior backgrounds and intended application areas. The Curriculum Task Force of the International Society of Computational Biology (ISCB) Education Committee has sought to provide a framework for training needs and curricula in terms of a set of bioinformatics core competencies that cut across many user personas and training programs. The initial competencies developed based on surveys of employers and training programs have since been refined through a multiyear process of community engagement. This report describes the current status of the competencies and presents a series of use cases illustrating how they are being applied in diverse training contexts. These use cases are intended to demonstrate how others can make use of the competencies and engage in the process of their continuing refinement and application. The report concludes with a consideration of remaining challenges and future plans.

  2. The development and application of bioinformatics core competencies to improve bioinformatics training and education

    Science.gov (United States)

    Brooksbank, Cath; Morgan, Sarah L.; Rosenwald, Anne; Warnow, Tandy; Welch, Lonnie

    2018-01-01

    Bioinformatics is recognized as part of the essential knowledge base of numerous career paths in biomedical research and healthcare. However, there is little agreement in the field over what that knowledge entails or how best to provide it. These disagreements are compounded by the wide range of populations in need of bioinformatics training, with divergent prior backgrounds and intended application areas. The Curriculum Task Force of the International Society of Computational Biology (ISCB) Education Committee has sought to provide a framework for training needs and curricula in terms of a set of bioinformatics core competencies that cut across many user personas and training programs. The initial competencies developed based on surveys of employers and training programs have since been refined through a multiyear process of community engagement. This report describes the current status of the competencies and presents a series of use cases illustrating how they are being applied in diverse training contexts. These use cases are intended to demonstrate how others can make use of the competencies and engage in the process of their continuing refinement and application. The report concludes with a consideration of remaining challenges and future plans. PMID:29390004

  3. GeneDig: a web application for accessing genomic and bioinformatics knowledge.

    Science.gov (United States)

    Suciu, Radu M; Aydin, Emir; Chen, Brian E

    2015-02-28

    With the exponential increase and widespread availability of genomic, transcriptomic, and proteomic data, accessing these '-omics' data is becoming increasingly difficult. Current resources for accessing and analyzing these data were created to perform highly specific functions intended for specialists, and thus typically emphasize functionality over user experience. We have developed a web-based application, GeneDig.org, that gives any general user access to genomic information with ease and efficiency. GeneDig allows searching and browsing of genes and genomes, while a dynamic navigator displays genomic, RNA, and protein information simultaneously for co-navigation. We demonstrate that our application provides more than five times faster access to genomic information than any currently available method. We have developed GeneDig as a platform for bioinformatics integration with usability as its central design focus. This platform will introduce genomic navigation to broader audiences while aiding the bioinformatics analyses performed in everyday biology research.

  4. A middleware-based platform for the integration of bioinformatic services

    Directory of Open Access Journals (Sweden)

    Guzmán Llambías

    2015-08-01

    Full Text Available Performing bioinformatics experiments involves intensive access to distributed services and information resources through the Internet. Although existing tools facilitate the implementation of workflow-oriented applications, they lack the capabilities to integrate services beyond small-scale applications, particularly services with heterogeneous interaction patterns and at larger scale. Such integration is required to enable large-scale distributed processing of the biological data generated by massive sequencing technologies. Integration mechanisms of this kind are provided by middleware products such as Enterprise Service Buses (ESBs), which integrate distributed systems following a Service-Oriented Architecture. This paper proposes an integration platform, based on enterprise middleware, for integrating bioinformatics services. It presents a multi-level reference architecture and focuses on ESB-based mechanisms that provide asynchronous communication, event-based interaction, and data transformation capabilities. The paper also presents a formal specification of the platform using the Event-B model.

  5. Role of remote sensing, geographical information system (GIS) and bioinformatics in kala-azar epidemiology.

    Science.gov (United States)

    Bhunia, Gouri Sankar; Dikhit, Manas Ranjan; Kesari, Shreekant; Sahoo, Ganesh Chandra; Das, Pradeep

    2011-11-01

    Visceral leishmaniasis, or kala-azar, is a potent parasitic infection causing the deaths of thousands of people each year. Medicinal compounds currently available for the treatment of kala-azar have serious side effects and decreased efficacy owing to the emergence of resistant strains. The type of immune reaction must also be considered in patients infected with Leishmania donovani (L. donovani). For complete eradication of this disease, high-level modern research is currently being applied both at the molecular level and in the field. Computational approaches such as remote sensing, geographical information systems (GIS), and bioinformatics are key resources for detecting the distribution of vectors and patterns, analyzing ecological and environmental factors, and performing genomic and proteomic analysis. Novel approaches such as GIS and bioinformatics have been used to determine the causes of visceral leishmaniasis and to design strategies for preventing the disease from spreading from one region to another.

  6. Can the availability of unrestricted financial support improve the quality of care of thalassemics in a center with limited resources? A single center study from India

    Directory of Open Access Journals (Sweden)

    Prantar Chakrabarti

    2012-12-01

    Full Text Available Comprehensive management of thalassemia demands a multidisciplinary approach, sufficient financial resources, carefully developed expertise of the caregivers, and significant compliance on the patients' part. Studies exploring the utility of unrestricted financing within the existing infrastructure for the management of thalassemia, particularly in the context of a developing country, are scarce. This study aimed to assess the impact of sponsored comprehensive care compared to the routine care of thalassemics provided at the Institute of Haematology and Transfusion Medicine, Kolkata, India. Two hundred and twenty patients were selected for the study and distributed between two arms. Regular monthly follow-up was done, including a Health-Related Quality of Life (HRQoL) assessment with SF-36 v2 (validated Bengali version). Patients receiving sponsored comprehensive care showed a significant improvement in mean hemoglobin levels and a decrease in mean ferritin. HRQoL assessment revealed a better score in the physical domain, though the mental health domain score was not significantly better at nine months. Unrestricted financial support in the form of comprehensive care has a positive impact on thalassemia patients in a developing country, not only in terms of clinical parameters but also in health-related quality of life.

  7. Energy-Efficient Management of Data Center Resources for Cloud Computing: A Vision, Architectural Elements, and Open Challenges

    OpenAIRE

    Buyya, Rajkumar; Beloglazov, Anton; Abawajy, Jemal

    2010-01-01

    Cloud computing is offering utility-oriented IT services to users worldwide. Based on a pay-as-you-go model, it enables the hosting of pervasive applications from consumer, scientific, and business domains. However, data centers hosting Cloud applications consume huge amounts of energy, contributing to high operational costs and a large carbon footprint. Therefore, we need Green Cloud computing solutions that can not only save energy for the environment but also reduce operational cos...

  8. Designing a course model for distance-based online bioinformatics training in Africa: The H3ABioNet experience

    Science.gov (United States)

    Panji, Sumir; Fernandes, Pedro L.; Judge, David P.; Ghouila, Amel; Salifu, Samson P.; Ahmed, Rehab; Kayondo, Jonathan; Ssemwanga, Deogratius

    2017-01-01

    Africa is not unique in its need for basic bioinformatics training for individuals from a diverse range of academic backgrounds. However, particular logistical challenges in Africa, most notably access to bioinformatics expertise and internet stability, must be addressed in order to meet this need on the continent. H3ABioNet (www.h3abionet.org), the Pan African Bioinformatics Network for H3Africa, has therefore developed an innovative, free-of-charge “Introduction to Bioinformatics” course, taking these challenges into account as part of its educational efforts to provide on-site training and develop local expertise inside its network. A multiple-delivery–mode learning model was selected for this 3-month course in order to increase access to (mostly) African, expert bioinformatics trainers. The content of the course was developed to include a range of fundamental bioinformatics topics at the introductory level. For the first iteration of the course (2016), classrooms with a total of 364 enrolled participants were hosted at 20 institutions across 10 African countries. To ensure that classroom success did not depend on stable internet, trainers pre-recorded their lectures, and classrooms downloaded and watched these locally during biweekly contact sessions. The trainers were available via video conferencing to take questions during contact sessions, as well as via online “question and discussion” forums outside of contact session time. This learning model, developed for a resource-limited setting, could easily be adapted to other settings. PMID:28981516

  9. Postoperative Neurosurgical Infection Rates After Shared-Resource Intraoperative Magnetic Resonance Imaging: A Single-Center Experience with 195 Cases.

    Science.gov (United States)

    Dinevski, Nikolaj; Sarnthein, Johannes; Vasella, Flavio; Fierstra, Jorn; Pangalu, Athina; Holzmann, David; Regli, Luca; Bozinov, Oliver

    2017-07-01

To determine the rate of surgical-site infections (SSI) in neurosurgical procedures involving a shared-resource intraoperative magnetic resonance imaging (ioMRI) scanner at a single institution derived from a prospective clinical quality management database. All consecutive neurosurgical procedures that were performed with a high-field, 2-room ioMRI between April 2013 and June 2016 were included (N = 195; 109 craniotomies and 86 endoscopic transsphenoidal procedures). The incidence of SSIs within 3 months after surgery was assessed for both operative groups (craniotomies vs. transsphenoidal approach). Of the 109 craniotomies, 6 patients developed an SSI (5.5%, 95% confidence interval [CI] 1.2-9.8%), including 1 superficial SSI, 2 cases of bone flap osteitis, 1 intracranial abscess, and 2 cases of meningitis/ventriculitis. Wound revision surgery due to infection was necessary in 4 patients (4%). Of the 86 transsphenoidal skull base surgeries, 6 patients (7.0%, 95% CI 1.5-12.4%) developed an infection, including 2 non-central nervous system intranasal SSIs (3%) and 4 cases of meningitis (5%). Logistic regression analysis revealed that the likelihood of infection significantly decreased with the number of operations in the new operational setting (odds ratio 0.982, 95% CI 0.969-0.995, P = 0.008). The use of a shared-resource ioMRI in neurosurgery did not demonstrate increased rates of infection compared with the currently available literature. The likelihood of infection decreased with the accumulating number of operations, underlining the importance of surgical staff training after the introduction of a shared-resource ioMRI. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. High-throughput bioinformatics with the Cyrille2 pipeline system

    Directory of Open Access Journals (Sweden)

    de Groot Joost CW

    2008-02-01

    Full Text Available Abstract Background Modern omics research involves the application of high-throughput technologies that generate vast volumes of data. These data need to be pre-processed, analyzed and integrated with existing knowledge through the use of diverse sets of software tools, models and databases. The analyses are often interdependent and chained together to form complex workflows or pipelines. Given the volume of the data used and the multitude of computational resources available, specialized pipeline software is required to make high-throughput analysis of large-scale omics datasets feasible. Results We have developed a generic pipeline system called Cyrille2. The system is modular in design and consists of three functionally distinct parts: 1 a web based, graphical user interface (GUI that enables a pipeline operator to manage the system; 2 the Scheduler, which forms the functional core of the system and which tracks what data enters the system and determines what jobs must be scheduled for execution, and; 3 the Executor, which searches for scheduled jobs and executes these on a compute cluster. Conclusion The Cyrille2 system is an extensible, modular system, implementing the stated requirements. Cyrille2 enables easy creation and execution of high throughput, flexible bioinformatics pipelines.
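The scheduler/executor split described in the abstract can be sketched in a few lines of Python. This is an illustrative toy, not Cyrille2 code; the class names and the "analysis tools" below are invented for the example:

```python
from collections import deque

class Scheduler:
    """Tracks what data enters the system and queues the jobs to run on it."""
    def __init__(self):
        self.queue = deque()

    def submit(self, tool, data):
        # Each new data item becomes a pending job: (tool, input).
        self.queue.append((tool, data))

class Executor:
    """Looks for scheduled jobs and executes them (serially, for brevity)."""
    def __init__(self, scheduler):
        self.scheduler = scheduler

    def run_all(self):
        results = []
        while self.scheduler.queue:
            tool, data = self.scheduler.queue.popleft()
            results.append(tool(data))
        return results

# Chain two toy "analysis tools" over the same input.
sched = Scheduler()
sched.submit(str.upper, "acgt")
sched.submit(len, "acgt")
results = Executor(sched).run_all()
print(results)  # -> ['ACGT', 4]
```

In the real system the Executor dispatches jobs to a compute cluster rather than running them in-process, but the division of responsibilities is the same.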

  11. The European Bioinformatics Institute in 2017: data coordination and integration

    Science.gov (United States)

    Cochrane, Guy; Apweiler, Rolf; Birney, Ewan

    2018-01-01

Abstract The European Bioinformatics Institute (EMBL-EBI) supports life-science research throughout the world by providing open data, open-source software and analytical tools, and technical infrastructure (https://www.ebi.ac.uk). We accommodate an increasingly diverse range of data types and integrate them, so that biologists in all disciplines can explore life in ever-increasing detail. We maintain over 40 data resources, many of which are run collaboratively with partners in 16 countries (https://www.ebi.ac.uk/services). Submissions continue to increase exponentially: our data storage has doubled in less than two years to 120 petabytes. Recent advances in cellular imaging and single-cell sequencing techniques are generating a vast amount of high-dimensional data, bringing to light new cell types and new perspectives on anatomy. Accordingly, one of our main focus areas is integrating high-quality information from bioimaging, biobanking and other types of molecular data. This is reflected in our deep involvement in Open Targets, stewarding of plant phenotyping standards (MIAPPE) and partnership in the Human Cell Atlas data coordination platform, as well as the 2017 launch of the Omics Discovery Index. This update gives a bird's-eye view of EMBL-EBI's approach to data integration and service development as genomics begins to enter the clinic. PMID:29186510

  12. The WHO/PEPFAR collaboration to prepare an operations manual for HIV prevention, care, and treatment at primary health centers in high-prevalence, resource-constrained settings: defining laboratory services.

    Science.gov (United States)

    Spira, Thomas; Lindegren, Mary Lou; Ferris, Robert; Habiyambere, Vincent; Ellerbrock, Tedd

    2009-06-01

    The expansion of HIV/AIDS care and treatment in resource-constrained countries, especially in sub-Saharan Africa, has generally developed in a top-down manner. Further expansion will involve primary health centers where human and other resources are limited. This article describes the World Health Organization/President's Emergency Plan for AIDS Relief collaboration formed to help scale up HIV services in primary health centers in high-prevalence, resource-constrained settings. It reviews the contents of the Operations Manual developed, with emphasis on the Laboratory Services chapter, which discusses essential laboratory services, both at the center and the district hospital level, laboratory safety, laboratory testing, specimen transport, how to set up a laboratory, human resources, equipment maintenance, training materials, and references. The chapter provides specific information on essential tests and generic job aids for them. It also includes annexes containing a list of laboratory supplies for the health center and sample forms.

  13. Bioinformatics research in the Asia Pacific: a 2007 update.

    Science.gov (United States)

    Ranganathan, Shoba; Gribskov, Michael; Tan, Tin Wee

    2008-01-01

We provide a 2007 update on bioinformatics research in the Asia-Pacific from the Asia Pacific Bioinformatics Network (APBioNet), Asia's oldest bioinformatics organisation, set up in 1998. From 2002, APBioNet has organized the International Conference on Bioinformatics (InCoB), bringing together scientists working in the field of bioinformatics in the region. This year, the InCoB2007 Conference was organized as the 6th annual conference of the Asia-Pacific Bioinformatics Network, on Aug. 27-30, 2007 in Hong Kong, following a series of successful events in Bangkok (Thailand), Penang (Malaysia), Auckland (New Zealand), Busan (South Korea) and New Delhi (India). Besides the scientific meeting in Hong Kong, satellite events included a pre-conference training workshop in Hanoi, Vietnam and a post-conference workshop in Nansha, China. This Introduction provides a brief overview of the peer-reviewed manuscripts accepted for publication in this Supplement. We have organized the papers into thematic areas, highlighting the growing contribution of research excellence from this region to global bioinformatics endeavours.

  14. Continuing Education Workshops in Bioinformatics Positively Impact Research and Careers.

    Science.gov (United States)

    Brazas, Michelle D; Ouellette, B F Francis

    2016-06-01

    Bioinformatics.ca has been hosting continuing education programs in introductory and advanced bioinformatics topics in Canada since 1999 and has trained more than 2,000 participants to date. These workshops have been adapted over the years to keep pace with advances in both science and technology as well as the changing landscape in available learning modalities and the bioinformatics training needs of our audience. Post-workshop surveys have been a mandatory component of each workshop and are used to ensure appropriate adjustments are made to workshops to maximize learning. However, neither bioinformatics.ca nor others offering similar training programs have explored the long-term impact of bioinformatics continuing education training. Bioinformatics.ca recently initiated a look back on the impact its workshops have had on the career trajectories, research outcomes, publications, and collaborations of its participants. Using an anonymous online survey, bioinformatics.ca analyzed responses from those surveyed and discovered its workshops have had a positive impact on collaborations, research, publications, and career progression.

  15. Bioinformatics approaches for identifying new therapeutic bioactive peptides in food

    Directory of Open Access Journals (Sweden)

    Nora Khaldi

    2012-10-01

Full Text Available ABSTRACT: The traditional methods for mining foods for bioactive peptides are tedious and long. As in the drug industry, the time needed to identify and deliver a commercial health ingredient that reduces disease symptoms can be anywhere between 5 and 10 years. Reducing this time and effort is crucial in order to create new commercially viable products with clear and important health benefits. In the past few years, bioinformatics, the science that brings together fast computational biology and efficient genome mining, has appeared as the long-awaited solution to this problem. By quickly mining food genomes for characteristics of certain food therapeutic ingredients, researchers can potentially find new ones in a matter of a few weeks. Yet, surprisingly, very little success has been achieved so far using bioinformatics in mining for food bioactives. The absence of food-specific bioinformatic mining tools, the slow integration of experimental mining and bioinformatics, and the important differences between experimental platforms are some of the reasons for the slow progress of bioinformatics in the field of functional food and, more specifically, in bioactive peptide discovery. In this paper I discuss some methods that could be easily translated, using a rational peptide bioinformatics design, to food bioactive peptide mining. I highlight the need for an integrated food peptide database. I also discuss how to better integrate experimental work with bioinformatics in order to improve the mining of food for bioactive peptides and thereby achieve a higher success rate.

  16. What Is the Return on Investment for Implementation of a Crew Resource Management Program at an Academic Medical Center?

    Science.gov (United States)

    Moffatt-Bruce, Susan D; Hefner, Jennifer L; Mekhjian, Hagop; McAlearney, John S; Latimer, Tina; Ellison, Chris; McAlearney, Ann Scheck

    Crew Resource Management (CRM) training has been used successfully within hospital units to improve quality and safety. This article presents a description of a health system-wide implementation of CRM focusing on the return on investment (ROI). The costs included training, programmatic fixed costs, time away from work, and leadership time. Cost savings were calculated based on the reduction in avoidable adverse events and cost estimates from the literature. Between July 2010 and July 2013, roughly 3000 health system employees across 12 areas were trained, costing $3.6 million. The total number of adverse events avoided was 735-a 25.7% reduction in observed relative to expected events. Savings ranged from a conservative estimate of $12.6 million to as much as $28.0 million. Therefore, the overall ROI for CRM training was in the range of $9.1 to $24.4 million. CRM presents a financially viable way to systematically organize for quality improvement.

  17. Bioinformatics in cancer therapy and drug design

    International Nuclear Information System (INIS)

    Horbach, D.Y.; Usanov, S.A.

    2005-01-01

One of the mechanisms of external signal transduction (ionizing radiation, toxicants, stress) to the target cell is the existence of membrane and intracellular proteins with intrinsic tyrosine kinase activity. It is therefore no surprise that the etiology of malignant growth is linked to abnormalities in signal transduction through tyrosine kinases. The epidermal growth factor receptor (EGFR) tyrosine kinases play fundamental roles in the development, proliferation and differentiation of tissues of epithelial, mesenchymal and neuronal origin. There are four types of EGFR: EGF receptor (ErbB1/HER1), ErbB2/Neu/HER2, ErbB3/HER3 and ErbB4/HER4. Abnormal expression of EGFR, and the appearance of receptor mutants with altered ability for protein-protein interactions or increased tyrosine kinase activity, have been implicated in the malignancy of different types of human tumors. Bioinformatics is currently used in investigations on the design and selection of drugs that can alter receptor structure or competitively bind to receptors and thereby display antagonistic characteristics. (authors)

  18. Bioinformatics in cancer therapy and drug design

    Energy Technology Data Exchange (ETDEWEB)

    Horbach, D Y [International A. Sakharov environmental univ., Minsk (Belarus); Usanov, S A [Inst. of bioorganic chemistry, National academy of sciences of Belarus, Minsk (Belarus)

    2005-05-15

One of the mechanisms of external signal transduction (ionizing radiation, toxicants, stress) to the target cell is the existence of membrane and intracellular proteins with intrinsic tyrosine kinase activity. It is therefore no surprise that the etiology of malignant growth is linked to abnormalities in signal transduction through tyrosine kinases. The epidermal growth factor receptor (EGFR) tyrosine kinases play fundamental roles in the development, proliferation and differentiation of tissues of epithelial, mesenchymal and neuronal origin. There are four types of EGFR: EGF receptor (ErbB1/HER1), ErbB2/Neu/HER2, ErbB3/HER3 and ErbB4/HER4. Abnormal expression of EGFR, and the appearance of receptor mutants with altered ability for protein-protein interactions or increased tyrosine kinase activity, have been implicated in the malignancy of different types of human tumors. Bioinformatics is currently used in investigations on the design and selection of drugs that can alter receptor structure or competitively bind to receptors and thereby display antagonistic characteristics. (authors)

  19. Bioinformatics study of the mangrove actin genes

    Science.gov (United States)

    Basyuni, M.; Wasilah, M.; Sumardi

    2017-01-01

This study describes the bioinformatics methods used to analyze eight actin genes from mangrove plants deposited in DDBJ/EMBL/GenBank and to predict their structure, composition, subcellular localization, similarity, and phylogeny. The physical and chemical properties of the eight mangrove genes varied. The percentage of secondary structure of the eight mangrove actin genes followed the order α-helix > random coil > extended chain structure for BgActl, KcActl, RsActl, and A. corniculatum Act; for the remaining actin genes the order was random coil > extended chain structure > α-helix. Prediction of secondary structure thus provides necessary structural information. The predicted values for chloroplast transit peptides, mitochondrial target peptides, and secretory-pathway signal peptides were too small, indicating that the mangrove actin genes contain no such targeting sequences. These results suggest the importance of understanding the diversity and functional properties of the different amino acids in mangrove actin genes. To clarify the relationships among the mangrove actin genes, a phylogenetic tree was constructed. Three groups of mangrove actin genes were formed: the first group contains B. gymnorrhiza BgAct and R. stylosa RsActl; the second cluster, the largest, consists of 5 actin genes; and the last branch consists of one gene, B. sexangula Act. The present study therefore supports previous results showing that plant actin genes form distinct clusters in the tree.

  20. Parallel evolutionary computation in bioinformatics applications.

    Science.gov (United States)

    Pinho, Jorge; Sobral, João Luis; Rocha, Miguel

    2013-05-01

    A large number of optimization problems within the field of Bioinformatics require methods able to handle its inherent complexity (e.g. NP-hard problems) and also demand increased computational efforts. In this context, the use of parallel architectures is a necessity. In this work, we propose ParJECoLi, a Java based library that offers a large set of metaheuristic methods (such as Evolutionary Algorithms) and also addresses the issue of its efficient execution on a wide range of parallel architectures. The proposed approach focuses on the easiness of use, making the adaptation to distinct parallel environments (multicore, cluster, grid) transparent to the user. Indeed, this work shows how the development of the optimization library can proceed independently of its adaptation for several architectures, making use of Aspect-Oriented Programming. The pluggable nature of parallelism related modules allows the user to easily configure its environment, adding parallelism modules to the base source code when needed. The performance of the platform is validated with two case studies within biological model optimization. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
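The pattern the abstract describes (an evolutionary algorithm whose costly fitness evaluations are farmed out to parallel workers) can be illustrated in a few lines. This sketch is not ParJECoLi, which is a Java library; all names, the toy OneMax objective, and the thread-based parallelism are invented for illustration:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def fitness(bits):
    # Toy objective (OneMax): count of 1-bits; stands in for a costly evaluation.
    return sum(bits)

def mutate(bits, rate=0.1):
    # Flip each bit independently with the given probability.
    return [b ^ (random.random() < rate) for b in bits]

def evolve(pop_size=20, length=30, generations=50, workers=4):
    random.seed(0)
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(generations):
            # Fitness evaluations are independent, so they run in parallel.
            scores = list(pool.map(fitness, pop))
            ranked = [p for _, p in sorted(zip(scores, pop), key=lambda t: -t[0])]
            parents = ranked[: pop_size // 2]
            # Keep the best half unchanged; refill with mutated copies.
            pop = parents + [mutate(random.choice(parents))
                             for _ in range(pop_size - len(parents))]
    return max(map(fitness, pop))

print(evolve())  # best OneMax score found (at most 30)
```

Swapping the thread pool for a process pool or a cluster backend changes only the executor, which is essentially the transparency the library aims to provide.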

  1. Evaluation of a fever-management algorithm in a pediatric cancer center in a low-resource setting.

    Science.gov (United States)

    Mukkada, Sheena; Smith, Cristel Kate; Aguilar, Delta; Sykes, April; Tang, Li; Dolendo, Mae; Caniza, Miguela A

    2018-02-01

In low- and middle-income countries (LMICs), inconsistent or delayed management of fever contributes to poor outcomes among pediatric patients with cancer. We hypothesized that standardizing practice with a clinical algorithm adapted to local resources would improve outcomes. Therefore, we developed a resource-specific algorithm for fever management in Davao City, Philippines. The primary objective of this study was to evaluate adherence to the algorithm. This was a prospective cohort study of algorithm adherence to assess the types of deviation, reasons for deviation, and pathogens isolated. All pediatric oncology patients who were admitted with fever (defined as an axillary temperature >37.7°C on one occasion or ≥37.4°C on two occasions 1 hr apart) or who developed fever within 48 hr of admission were included. Univariate and multiple linear regression analyses were used to determine the relation between clinical predictors and length of hospitalization. During the study, 93 patients had 141 qualifying febrile episodes. Even though the algorithm was designed locally, deviations occurred in 70 (50%) of 141 febrile episodes on day 0, reflecting implementation barriers at the patient, provider, and institutional levels. There were 259 deviations during the first 7 days of admission in 92 (65%) of 141 patient episodes. Failure to identify high-risk patients, missed antimicrobial doses, and pathogen isolation were associated with prolonged hospitalization. Monitoring algorithm adherence helps in assessing the quality of pediatric oncology care in LMICs and identifying opportunities for improvement. Measures that decrease high-frequency/high-impact algorithm deviations may shorten hospitalizations and improve healthcare use in LMICs. © 2017 Wiley Periodicals, Inc.
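The study's inclusion criterion for fever is mechanical enough to express directly in code. The function below is an illustrative sketch of that criterion only, not the study's management algorithm:

```python
def qualifies_as_fever(readings):
    """readings: list of (hour, axillary_temp_celsius) tuples.

    Inclusion criterion from the abstract: one reading >37.7 C,
    or two readings >=37.4 C taken at least 1 hour apart.
    """
    if any(t > 37.7 for _, t in readings):
        return True
    warm_hours = [h for h, t in readings if t >= 37.4]
    return any(abs(a - b) >= 1
               for i, a in enumerate(warm_hours)
               for b in warm_hours[i + 1:])

print(qualifies_as_fever([(0, 37.8)]))               # True: single reading >37.7
print(qualifies_as_fever([(0, 37.5), (2, 37.4)]))    # True: two >=37.4, 2 hr apart
print(qualifies_as_fever([(0, 37.5), (0.5, 37.6)]))  # False: readings <1 hr apart
```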

  2. Bioconductor: open software development for computational biology and bioinformatics

    DEFF Research Database (Denmark)

    Gentleman, R.C.; Carey, V.J.; Bates, D.M.

    2004-01-01

The Bioconductor project is an initiative for the collaborative creation of extensible software for computational biology and bioinformatics. The goals of the project include: fostering collaborative development and widespread use of innovative software, reducing barriers to entry into interdisciplinary scientific research, and promoting the achievement of remote reproducibility of research results. We describe details of our aims and methods, identify current challenges, compare Bioconductor to other open bioinformatics projects, and provide working examples.

  3. Arthritis - resources

    Science.gov (United States)

    Resources - arthritis ... The following organizations provide more information on arthritis : American Academy of Orthopaedic Surgeons -- orthoinfo.aaos.org/menus/arthritis.cfm Arthritis Foundation -- www.arthritis.org Centers for Disease Control and Prevention -- www. ...

  4. Hemophilia - resources

    Science.gov (United States)

    Resources - hemophilia ... The following organizations provide further information on hemophilia : Centers for Disease Control and Prevention -- www.cdc.gov/ncbddd/hemophilia/index.html National Heart, Lung, and Blood Institute -- www.nhlbi.nih.gov/ ...

  5. Diabetes - resources

    Science.gov (United States)

    Resources - diabetes ... The following sites provide further information on diabetes: American Diabetes Association -- www.diabetes.org Juvenile Diabetes Research Foundation International -- www.jdrf.org National Center for Chronic Disease Prevention and Health Promotion -- ...

  6. Heart Information Center

    Science.gov (United States)

    ... Rounds Seminar Series & Daily Conferences Fellowships and Residencies School of Perfusion Technology Education Resources Library & Learning Resource Center CME Resources THI Journal THI Cardiac Society Register for the Cardiac Society ...

  7. EDAM: an ontology of bioinformatics operations, types of data and identifiers, topics and formats

    Science.gov (United States)

    Ison, Jon; Kalaš, Matúš; Jonassen, Inge; Bolser, Dan; Uludag, Mahmut; McWilliam, Hamish; Malone, James; Lopez, Rodrigo; Pettifer, Steve; Rice, Peter

    2013-01-01

    Motivation: Advancing the search, publication and integration of bioinformatics tools and resources demands consistent machine-understandable descriptions. A comprehensive ontology allowing such descriptions is therefore required. Results: EDAM is an ontology of bioinformatics operations (tool or workflow functions), types of data and identifiers, application domains and data formats. EDAM supports semantic annotation of diverse entities such as Web services, databases, programmatic libraries, standalone tools, interactive applications, data schemas, datasets and publications within bioinformatics. EDAM applies to organizing and finding suitable tools and data and to automating their integration into complex applications or workflows. It includes over 2200 defined concepts and has successfully been used for annotations and implementations. Availability: The latest stable version of EDAM is available in OWL format from http://edamontology.org/EDAM.owl and in OBO format from http://edamontology.org/EDAM.obo. It can be viewed online at the NCBO BioPortal and the EBI Ontology Lookup Service. For documentation and license please refer to http://edamontology.org. This article describes version 1.2 available at http://edamontology.org/EDAM_1.2.owl. Contact: jison@ebi.ac.uk PMID:23479348
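Since EDAM is distributed in OBO format as well as OWL, a term stanza can be read with nothing more than string handling. The minimal parser below is a simplified illustration of the stanza/tag-value layout, and the term IDs in the sample are invented placeholders, not real EDAM identifiers:

```python
def parse_obo_terms(text):
    """Parse [Term] stanzas from a minimal OBO-format string into dicts.

    Each tag may repeat, so values are collected into lists.
    """
    terms, current = [], None
    for line in text.splitlines():
        line = line.strip()
        if line == "[Term]":
            current = {}
            terms.append(current)
        elif current is not None and ":" in line:
            key, _, value = line.partition(":")
            current.setdefault(key.strip(), []).append(value.strip())
    return terms

sample = """\
[Term]
id: EDAM_operation:0001
name: Sequence analysis

[Term]
id: EDAM_data:0002
name: Sequence
"""
terms = parse_obo_terms(sample)
print([t["name"][0] for t in terms])  # -> ['Sequence analysis', 'Sequence']
```

A production parser would also handle headers, synonyms, and escape rules defined by the OBO specification; dedicated ontology libraries are a better fit for real use.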

  8. Using Bioinformatics to Develop and Test Hypotheses: E. coli-Specific Virulence Determinants

    Directory of Open Access Journals (Sweden)

    Joanna R. Klein

    2012-09-01

    Full Text Available Bioinformatics, the use of computer resources to understand biological information, is an important tool in research, and can be easily integrated into the curriculum of undergraduate courses. Such an example is provided in this series of four activities that introduces students to the field of bioinformatics as they design PCR based tests for pathogenic E. coli strains. A variety of computer tools are used including BLAST searches at NCBI, bacterial genome searches at the Integrated Microbial Genomes (IMG database, protein analysis at Pfam and literature research at PubMed. In the process, students also learn about virulence factors, enzyme function and horizontal gene transfer. Some or all of the four activities can be incorporated into microbiology or general biology courses taken by students at a variety of levels, ranging from high school through college. The activities build on one another as they teach and reinforce knowledge and skills, promote critical thinking, and provide for student collaboration and presentation. The computer-based activities can be done either in class or outside of class, thus are appropriate for inclusion in online or blended learning formats. Assessment data showed that students learned general microbiology concepts related to pathogenesis and enzyme function, gained skills in using tools of bioinformatics and molecular biology, and successfully developed and tested a scientific hypothesis.
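A first computational step in the kind of PCR-based test design these activities describe is checking that both primer binding sites occur in a target sequence and predicting the amplicon size. The function and sequences below are invented for illustration and ignore real design constraints such as melting temperature and strand orientation:

```python
def amplicon_length(template, fwd, rev_site):
    """Predicted amplicon length if both primer sites are found.

    fwd: forward primer (matches the given strand as-is).
    rev_site: the reverse primer's binding site on the same strand
    (i.e. the reverse-complement of the reverse primer).
    Returns None if either site is absent.
    """
    start = template.find(fwd)
    end = template.find(rev_site, start + 1) if start != -1 else -1
    if start == -1 or end == -1:
        return None
    return end + len(rev_site) - start

template = "ATGCGTACGTTAGCCGATCGATTACGGCTA"
print(amplicon_length(template, "ATGCGT", "CGGCTA"))  # -> 30
```

A strain-specific test follows the same idea: the primer pair should yield an amplicon from the pathogenic genome and return None for related non-pathogenic strains.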

  9. Tissue Banking, Bioinformatics, and Electronic Medical Records: The Front-End Requirements for Personalized Medicine

    Science.gov (United States)

    Suh, K. Stephen; Sarojini, Sreeja; Youssif, Maher; Nalley, Kip; Milinovikj, Natasha; Elloumi, Fathi; Russell, Steven; Pecora, Andrew; Schecter, Elyssa; Goy, Andre

    2013-01-01

    Personalized medicine promises patient-tailored treatments that enhance patient care and decrease overall treatment costs by focusing on genetics and “-omics” data obtained from patient biospecimens and records to guide therapy choices that generate good clinical outcomes. The approach relies on diagnostic and prognostic use of novel biomarkers discovered through combinations of tissue banking, bioinformatics, and electronic medical records (EMRs). The analytical power of bioinformatic platforms combined with patient clinical data from EMRs can reveal potential biomarkers and clinical phenotypes that allow researchers to develop experimental strategies using selected patient biospecimens stored in tissue banks. For cancer, high-quality biospecimens collected at diagnosis, first relapse, and various treatment stages provide crucial resources for study designs. To enlarge biospecimen collections, patient education regarding the value of specimen donation is vital. One approach for increasing consent is to offer publically available illustrations and game-like engagements demonstrating how wider sample availability facilitates development of novel therapies. The critical value of tissue bank samples, bioinformatics, and EMR in the early stages of the biomarker discovery process for personalized medicine is often overlooked. The data obtained also require cross-disciplinary collaborations to translate experimental results into clinical practice and diagnostic and prognostic use in personalized medicine. PMID:23818899

  10. Workplace rehabilitation centers for people with mental illness in Madrid: A resource for employment in crisis times (2008-2012)

    Directory of Open Access Journals (Sweden)

    Segundo Valmorisco Pizarro

    2015-04-01

Full Text Available This article seeks to identify the variables that explain the labour-insertion rate (close to 50%) of people with severe and enduring mental illness who attend work rehabilitation centres (CRLs) in the Community of Madrid. To this end it uses, first, a documentary methodology based on the activity reports of the CRLs in the Community of Madrid for 2008-2012, and second, a qualitative methodology comprising in-depth interviews with professionals of different profiles from various CRLs and with the technical coordinator of the Community of Madrid's public network of social care for people with severe and enduring mental illness, together with focus groups organized by professional category and including service users and family members. The results show that the public network of care for people with severe and enduring mental illness offers more than 5,900 places across different resources (psychosocial rehabilitation centres, day centres, social support, vocational rehabilitation centres, nursing homes and supervised apartments). Specifically, the CRLs serve a total of 1,313 people, of whom 47.4% (622 people with severe and enduring mental illness) find employment.

  11. Use of brackish ground water resources for regional energy center development, Tularosa Basin, New Mexico: preliminary evaluation. Executive summary

    International Nuclear Information System (INIS)

    1977-03-01

The objective of this study was to develop an impact and suitability profile for the use of the Tularosa Basin in south-central New Mexico as the potential location of an energy center. Underlying the Tularosa Basin is an aquifer system containing perhaps 40 million acre-feet of fresh and slightly saline (1-3 g/l) water that is theoretically recoverable and could be used for cooling and other energy-related or industrial purposes, particularly if energy development projects in other areas of the state and region are delayed, impeded, or cancelled because of uncertain availability or accessibility of water. This preliminary investigation of the Tularosa Basin reveals no outstanding features that would discourage further detailed analysis and planning for an energy complex. A major program of exploratory drilling, well logging, and testing is needed to determine aquifer characteristics and factors affecting well design. Since industrial development in the basin will necessarily involve Federal, state, and private lands, any serious plan will require collaboration of Federal, state, and local authorities.

  12. Virginia Bioinformatics Institute to expand cyberinfrastructure education and outreach project

    OpenAIRE

    Whyte, Barry James

    2008-01-01

    The National Science Foundation has awarded the Virginia Bioinformatics Institute at Virginia Tech $918,000 to expand its education and outreach program in Cyberinfrastructure - Training, Education, Advancement and Mentoring, commonly known as the CI-TEAM.

  13. An Adaptive Hybrid Multiprocessor technique for bioinformatics sequence alignment

    KAUST Repository

    Bonny, Talal; Salama, Khaled N.; Zidan, Mohammed A.

    2012-01-01

    Sequence alignment algorithms such as the Smith-Waterman algorithm are among the most important applications in the development of bioinformatics. Sequence alignment algorithms must process large amounts of data which may take a long time. Here, we
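The Smith-Waterman algorithm named in this (truncated) abstract computes the best local-alignment score by dynamic programming. Below is a minimal scoring-only sketch (no traceback); the scoring parameters are illustrative choices, not values from the paper:

```python
def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-1):
    """Best local-alignment score between sequences a and b.

    Cells are clamped at 0, so an alignment may start and end anywhere,
    which is what distinguishes local from global alignment.
    """
    rows = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = rows[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            rows[i][j] = max(0, diag, rows[i-1][j] + gap, rows[i][j-1] + gap)
            best = max(best, rows[i][j])
    return best

print(smith_waterman_score("ACACACTA", "AGCACACA"))  # -> 12
```

The quadratic table is what makes the algorithm expensive on large datasets, and is why hardware and multiprocessor accelerations such as the one this record describes are of interest.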

  14. Metagenomics and Bioinformatics in Microbial Ecology: Current Status and Beyond.

    Science.gov (United States)

    Hiraoka, Satoshi; Yang, Ching-Chia; Iwasaki, Wataru

    2016-09-29

    Metagenomic approaches are now commonly used in microbial ecology to study microbial communities in more detail, including many strains that cannot be cultivated in the laboratory. Bioinformatic analyses make it possible to mine huge metagenomic datasets and discover general patterns that govern microbial ecosystems. However, the findings of typical metagenomic and bioinformatic analyses still do not completely describe the ecology and evolution of microbes in their environments. Most analyses still depend on straightforward sequence similarity searches against reference databases. We herein review the current state of metagenomics and bioinformatics in microbial ecology and discuss future directions for the field. New techniques will allow us to go beyond routine analyses and broaden our knowledge of microbial ecosystems. We need to enrich reference databases, promote platforms that enable meta- or comprehensive analyses of diverse metagenomic datasets, devise methods that utilize long-read sequence information, and develop more powerful bioinformatic methods to analyze data from diverse perspectives.
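The routine similarity searches the review mentions often reduce, at their simplest, to comparing k-mer content between a read and a reference. The toy Jaccard-similarity sketch below illustrates the idea; the sequences are invented:

```python
def kmer_set(seq, k=4):
    """All distinct substrings of length k in seq."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def jaccard(a, b, k=4):
    """Crude similarity between two sequences via shared k-mers."""
    ka, kb = kmer_set(a, k), kmer_set(b, k)
    return len(ka & kb) / len(ka | kb)

ref = "ATGGCGTACGTTAGC"   # toy reference
read = "GCGTACGTTA"       # toy read contained in the reference
print(round(jaccard(ref, read), 2))  # -> 0.58
```

Real metagenomic classifiers build large indexed k-mer databases over many reference genomes, but the underlying comparison is of this kind.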

  15. In silico cloning and bioinformatic analysis of PEPCK gene in ...

    African Journals Online (AJOL)

    Phosphoenolpyruvate carboxykinase (PEPCK), a critical gluconeogenic enzyme, catalyzes the first committed step in the diversion of tricarboxylic acid cycle intermediates toward gluconeogenesis. According to the relative conservation of homologous gene, a bioinformatics strategy was applied to clone Fusarium ...

  16. Microsoft Biology Initiative: .NET Bioinformatics Platform and Tools

    Science.gov (United States)

    Diaz Acosta, B.

    2011-01-01

    The Microsoft Biology Initiative (MBI) is an effort in Microsoft Research to bring new technology and tools to the area of bioinformatics and biology. This initiative is comprised of two primary components, the Microsoft Biology Foundation (MBF) and the Microsoft Biology Tools (MBT). MBF is a language-neutral bioinformatics toolkit built as an extension to the Microsoft .NET Framework—initially aimed at the area of Genomics research. Currently, it implements a range of parsers for common bioinformatics file formats; a range of algorithms for manipulating DNA, RNA, and protein sequences; and a set of connectors to biological web services such as NCBI BLAST. MBF is available under an open source license, and executables, source code, demo applications, documentation and training materials are freely downloadable from http://research.microsoft.com/bio. MBT is a collection of tools that enable biology and bioinformatics researchers to be more productive in making scientific discoveries.

  17. Bioinformatics tools for development of fast and cost effective simple ...

    African Journals Online (AJOL)

    Bioinformatics tools for development of fast and cost effective simple sequence repeat ... comparative mapping and exploration of functional genetic diversity in the ... Already, a number of computer programs have been implemented that aim at ...

  18. Skate Genome Project: Cyber-Enabled Bioinformatics Collaboration

    Science.gov (United States)

    Vincent, J.

    2011-01-01

    The Skate Genome Project, a pilot project of the North East Cyberinfrastructure Consortium (NECC), aims to produce a draft genome sequence of Leucoraja erinacea, the Little Skate. The pilot project was also designed to develop expertise in large-scale collaborations across the NECC region. An overview of the bioinformatics and infrastructure challenges faced during the first year of the project will be presented. Results to date and lessons learned from the perspective of a bioinformatics core will be highlighted.

  19. PubData: search engine for bioinformatics databases worldwide

    OpenAIRE

    Vand, Kasra; Wahlestedt, Thor; Khomtchouk, Kelly; Sayed, Mohammed; Wahlestedt, Claes; Khomtchouk, Bohdan

    2016-01-01

    We propose a search engine and file retrieval system for all bioinformatics databases worldwide. PubData searches biomedical data in a user-friendly fashion similar to how PubMed searches biomedical literature. PubData is built on novel network programming, natural language processing, and artificial intelligence algorithms that can patch into the file transfer protocol servers of any user-specified bioinformatics database, query its contents, retrieve files for download, and adapt to the use...

  20. An innovative approach for testing bioinformatics programs using metamorphic testing

    Directory of Open Access Journals (Sweden)

    Liu Huai

    2009-01-01

    Full Text Available Abstract Background Recent advances in experimental and computational technologies have fueled the development of many sophisticated bioinformatics programs. The correctness of such programs is crucial, as incorrectly computed results may lead to wrong biological conclusions or misguide downstream experimentation. Common software testing procedures involve executing the target program with a set of test inputs and then verifying the correctness of the test outputs. However, due to the complexity of many bioinformatics programs, it is often difficult to verify the correctness of the test outputs, and our ability to perform systematic software testing is therefore greatly hindered. Results We propose to use a novel software testing technique, metamorphic testing (MT), to test a range of bioinformatics programs. Instead of requiring a mechanism to verify whether an individual test output is correct, the MT technique verifies whether a pair of test outputs conforms to a set of domain-specific properties, called metamorphic relations (MRs), thus greatly increasing the number and variety of test cases that can be applied. To demonstrate how MT is used in practice, we applied MT to test two open-source bioinformatics programs, namely GNLab and SeqMap. In particular we show that MT is simple to implement, and is effective in detecting faults in a real-life program and some artificially fault-seeded programs. Further, we discuss how MT can be applied to test programs from various domains of bioinformatics. Conclusion This paper describes the application of a simple, effective and automated technique to systematically test a range of bioinformatics programs. We show how MT can be implemented in practice through two real-life case studies. Since many bioinformatics programs, particularly those for large-scale simulation and data analysis, are hard to test systematically, their developers may benefit from using MT as part of the testing strategy.
Therefore our work
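
    The core idea of a metamorphic relation can be illustrated with a toy program under test (GC content; the relations and data here are illustrative, not those used for GNLab or SeqMap):

```python
import random

def gc_content(seq):
    """Toy 'program under test': fraction of G/C bases."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def mr_duplication(seq):
    """Metamorphic relation: GC content is invariant under duplication."""
    return abs(gc_content(seq) - gc_content(seq * 2)) < 1e-9

def mr_reversal(seq):
    """Metamorphic relation: GC content is invariant under reversal."""
    return abs(gc_content(seq) - gc_content(seq[::-1])) < 1e-9

# MT runs the program on source and follow-up inputs and checks the
# relation between the outputs, instead of comparing any single output
# against a known-correct oracle.
random.seed(0)
for _ in range(100):
    seq = "".join(random.choice("ACGT") for _ in range(50))
    assert mr_duplication(seq) and mr_reversal(seq)
print("all metamorphic relations held")
```

    A violated relation signals a fault even though no individual "correct answer" was ever specified, which is exactly what makes MT attractive for hard-to-oracle bioinformatics programs.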

  1. BOWS (bioinformatics open web services) to centralize bioinformatics tools in web services.

    Science.gov (United States)

    Velloso, Henrique; Vialle, Ricardo A; Ortega, J Miguel

    2015-06-02

    Bioinformaticians face a range of difficulties getting locally installed tools running and producing results; they would greatly benefit from a system that could centralize most of the tools behind an easy interface for input and output. Web services, due to their universal nature and widely known interface, are a very good option for achieving this goal. Bioinformatics open web services (BOWS) is a system based on generic web services produced to allow programmatic access to applications running on high-performance computing (HPC) clusters. BOWS mediates access to registered tools by providing front-end and back-end web services. Programmers can install applications on HPC clusters in any programming language and use the back-end service to check for new jobs and their parameters, and then to send the results to BOWS. Programs running on ordinary computers consume the BOWS front-end service to submit new jobs and read results. BOWS compiles Java clients, which encapsulate the front-end web service requests, and automatically creates a web page that lists the registered applications and clients. Applications registered with BOWS can be accessed from virtually any programming language through web services, or using the standard Java clients. The back-end can run on HPC clusters, allowing bioinformaticians to remotely run processing-intensive applications directly from their machines.
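
    The front-end/back-end division of labor described above can be sketched with an in-memory queue standing in for the BOWS web services (all names here are illustrative, not the actual BOWS API):

```python
import queue
import threading
import uuid

# Hypothetical in-memory stand-ins for the BOWS job registry; the real
# system exposes these roles as front-end and back-end *web services*.
jobs, results = queue.Queue(), {}

def submit(tool, params):
    """Front-end role: register a new job and return its id."""
    job_id = str(uuid.uuid4())
    jobs.put((job_id, tool, params))
    return job_id

def backend_worker():
    """Back-end role: poll for jobs, run the tool, post the result."""
    while True:
        job_id, tool, params = jobs.get()
        if tool is None:  # sentinel used here to stop the worker
            break
        results[job_id] = tool(**params)

t = threading.Thread(target=backend_worker)
t.start()
# Reverse-complement as a stand-in for an HPC tool:
jid = submit(lambda seq: seq[::-1].translate(str.maketrans("ACGT", "TGCA")),
             {"seq": "ACGT"})
jobs.put((None, None, None))  # shut the worker down
t.join()
print(results[jid])
```

    In BOWS itself the "queue" lives behind web services, so the tool can run on an HPC cluster while the submitting program runs anywhere.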

  2. BioXSD: the common data-exchange format for everyday bioinformatics web services

    Science.gov (United States)

    Kalaš, Matúš; Puntervoll, Pål; Joseph, Alexandre; Bartaševičiūtė, Edita; Töpfer, Armin; Venkataraman, Prabakar; Pettifer, Steve; Bryne, Jan Christian; Ison, Jon; Blanchet, Christophe; Rapacki, Kristoffer; Jonassen, Inge

    2010-01-01

    Motivation: The world-wide community of life scientists has access to a large number of public bioinformatics databases and tools, which are developed and deployed using diverse technologies and designs. More and more of the resources offer a programmatic web-service interface. However, efficient use of the resources is hampered by the lack of widely used, standard data-exchange formats for the basic, everyday bioinformatics data types. Results: BioXSD has been developed as a candidate for a standard, canonical exchange format for basic bioinformatics data. BioXSD is represented by a dedicated XML Schema and defines syntax for biological sequences, sequence annotations, alignments and references to resources. We have adapted a set of web services to use BioXSD as the input and output format, and implemented a test-case workflow. This demonstrates that the approach is feasible and provides smooth interoperability. Semantics for BioXSD is provided by annotation with the EDAM ontology. We discuss in a separate section how BioXSD relates to other initiatives and approaches, including existing standards and the Semantic Web. Availability: The BioXSD 1.0 XML Schema is freely available at http://www.bioxsd.org/BioXSD-1.0.xsd under the Creative Commons BY-ND 3.0 license. The http://bioxsd.org web page offers documentation, examples of data in BioXSD format, example workflows with source codes in common programming languages, an updated list of compatible web services and tools and a repository of feature requests from the community. Contact: matus.kalas@bccs.uib.no; developers@bioxsd.org; support@bioxsd.org PMID:20823319
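
    The flavor of exchanging a sequence record as XML can be sketched with Python's standard library (the element names below are illustrative only; the authoritative type and element definitions are in the BioXSD 1.0 XML Schema itself):

```python
import xml.etree.ElementTree as ET

# Illustrative record in the *style* of an XML sequence exchange format;
# consult the BioXSD 1.0 XML Schema for the actual element names.
doc = """<sequenceRecord>
  <name>example protein</name>
  <sequence type="protein">MKTAYIAKQR</sequence>
  <reference db="UniProt" accession="P00000"/>
</sequenceRecord>"""

root = ET.fromstring(doc)
seq = root.findtext("sequence")
print(root.find("sequence").get("type"), len(seq))
```

    A shared schema like BioXSD means every service parses and emits the same structure, instead of each tool inventing its own ad hoc format.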

  3. BioXSD: the common data-exchange format for everyday bioinformatics web services.

    Science.gov (United States)

    Kalaš, Matúš; Puntervoll, Pål; Joseph, Alexandre; Bartaševičiūtė, Edita; Töpfer, Armin; Venkataraman, Prabakar; Pettifer, Steve; Bryne, Jan Christian; Ison, Jon; Blanchet, Christophe; Rapacki, Kristoffer; Jonassen, Inge

    2010-09-15

    The world-wide community of life scientists has access to a large number of public bioinformatics databases and tools, which are developed and deployed using diverse technologies and designs. More and more of the resources offer a programmatic web-service interface. However, efficient use of the resources is hampered by the lack of widely used, standard data-exchange formats for the basic, everyday bioinformatics data types. BioXSD has been developed as a candidate for a standard, canonical exchange format for basic bioinformatics data. BioXSD is represented by a dedicated XML Schema and defines syntax for biological sequences, sequence annotations, alignments and references to resources. We have adapted a set of web services to use BioXSD as the input and output format, and implemented a test-case workflow. This demonstrates that the approach is feasible and provides smooth interoperability. Semantics for BioXSD is provided by annotation with the EDAM ontology. We discuss in a separate section how BioXSD relates to other initiatives and approaches, including existing standards and the Semantic Web. The BioXSD 1.0 XML Schema is freely available at http://www.bioxsd.org/BioXSD-1.0.xsd under the Creative Commons BY-ND 3.0 license. The http://bioxsd.org web page offers documentation, examples of data in BioXSD format, example workflows with source codes in common programming languages, an updated list of compatible web services and tools and a repository of feature requests from the community.

  4. Assessment of Data Reliability of Wireless Sensor Network for Bioinformatics

    Directory of Open Access Journals (Sweden)

    Ting Dong

    2017-09-01

    Full Text Available As a focal point of biotechnology, bioinformatics integrates knowledge from biology, mathematics, physics, chemistry, computer science and information science. It generally deals with genome informatics, protein structure and drug design. However, the data or information acquired from the main areas of bioinformatics may not be effective. Some researchers have combined bioinformatics with wireless sensor networks (WSNs) into biosensors and other tools, and applied them to areas such as fermentation, environmental monitoring, food engineering, clinical medicine and the military. In this combination, the WSN is used to collect data and information, and the reliability of the WSN is a prerequisite for effective utilization of that information. Reliability is greatly influenced by factors such as quality, benefits, service, timeliness and stability, some of which are qualitative and some quantitative. Hence, it is necessary to develop a method that can handle both qualitative and quantitative assessment of information. A viable option is the fuzzy linguistic method, especially the 2-tuple linguistic model, which has been extensively used to cope with such issues. This paper therefore introduces 2-tuple linguistic representation to assist experts in giving their opinions on different WSNs in bioinformatics that involve multiple factors. Moreover, the author proposes a novel way to determine attribute weights and uses the method to weigh the relative importance of the different influencing factors, which can be considered as attributes in the assessment of a WSN in bioinformatics. Finally, an illustrative example is given to provide a reasonable solution for the assessment.
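
    The 2-tuple linguistic model mentioned here represents an assessment as a label plus a symbolic translation alpha in [-0.5, 0.5); a minimal sketch of the Delta operator and weighted aggregation (the label set, ratings and weights are invented for illustration):

```python
LABELS = ["very low", "low", "medium", "high", "very high"]  # s_0 .. s_4

def to_two_tuple(beta):
    """Delta operator: beta in [0, g] -> (closest label, symbolic translation)."""
    i = int(round(beta))
    return LABELS[i], round(beta - i, 3)  # alpha in [-0.5, 0.5)

def aggregate(tuples, weights):
    """Weighted average of 2-tuples via the inverse Delta operator."""
    beta = sum((LABELS.index(lab) + a) * w for (lab, a), w in zip(tuples, weights))
    return to_two_tuple(beta / sum(weights))

# Three experts rate the 'stability' of one WSN (illustrative data):
ratings = [("high", 0.0), ("medium", 0.2), ("very high", -0.4)]
print(aggregate(ratings, [0.5, 0.3, 0.2]))
```

    Because the alpha term is carried through the computation, aggregation loses no information, which is the main advantage of the 2-tuple model over rounding to the nearest label at each step.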

  5. Containment of Ebola and Polio in Low-Resource Settings Using Principles and Practices of Emergency Operations Centers in Public Health.

    Science.gov (United States)

    Shuaib, Faisal M; Musa, Philip F; Muhammad, Ado; Musa, Emmanuel; Nyanti, Sara; Mkanda, Pascal; Mahoney, Frank; Corkum, Melissa; Durojaiye, Modupeoluwa; Nganda, Gatei Wa; Sani, Samuel Usman; Dieng, Boubacar; Banda, Richard; Ali Pate, Muhammad

    Emergency Operations Centers (EOCs) have been credited with driving the recent successes achieved in the Nigeria polio eradication program. The EOC concept was also applied to the Ebola virus disease outbreak and is applicable to a range of other public health emergencies. This article outlines the structure and functionality of a typical EOC in addressing public health emergencies in low-resource settings. It ascribes the successful polio and Ebola responses in Nigeria to several factors, including political commitment, population willingness to engage, accountability, and operational and strategic changes made possible by the effective use of an EOC and Incident Management System. In countries such as Nigeria, where the central or federal government does not directly hold states accountable, the EOC provides a means to improve performance and use data to hold health workers accountable by using innovative technologies such as geographic positioning systems, dashboards, and scorecards.

  6. An interdepartmental Ph.D. program in computational biology and bioinformatics: the Yale perspective.

    Science.gov (United States)

    Gerstein, Mark; Greenbaum, Dov; Cheung, Kei; Miller, Perry L

    2007-02-01

    Computational biology and bioinformatics (CBB), the terms often used interchangeably, represent a rapidly evolving biological discipline. With the clear potential for discovery and innovation, and the need to deal with the deluge of biological data, many academic institutions are committing significant resources to develop CBB research and training programs. Yale formally established an interdepartmental Ph.D. program in CBB in May 2003. This paper describes Yale's program, discussing the scope of the field, the program's goals and curriculum, as well as a number of issues that arose in implementing the program. (Further updated information is available from the program's website, www.cbb.yale.edu.)

  7. Open source tools and toolkits for bioinformatics: significance, and where are we?

    Science.gov (United States)

    Stajich, Jason E; Lapp, Hilmar

    2006-09-01

    This review summarizes important work in open-source bioinformatics software that has occurred over the past couple of years. The survey is intended to illustrate how programs and toolkits whose source code has been developed or released under an Open Source license have changed informatics-heavy areas of life science research. Rather than creating a comprehensive list of all tools developed over the last 2-3 years, we use a few selected projects encompassing toolkit libraries, analysis tools, data analysis environments and interoperability standards to show how freely available and modifiable open-source software can serve as the foundation for building important applications, analysis workflows and resources.

  8. BioQueue: a novel pipeline framework to accelerate bioinformatics analysis.

    Science.gov (United States)

    Yao, Li; Wang, Heming; Song, Yuanyuan; Sui, Guangchao

    2017-10-15

    With the rapid development of next-generation sequencing, a large amount of data is now available for bioinformatics research, and many pipeline frameworks make it possible to analyse these data. However, these tools concentrate mainly on their syntax and design paradigms, and dispatch jobs based on users' assumptions about the resources needed to execute each step in a protocol. As a result, it is difficult for these tools to maximize the potential of computing resources and to avoid errors caused by overload, such as memory overflow. Here, we have developed BioQueue, a web-based framework that contains a checkpoint before each step to automatically estimate the system resources (CPU, memory and disk) needed by the step and then dispatch jobs accordingly. BioQueue possesses a shell-command-like syntax instead of implementing a new script language, which means that most biologists without a computer programming background can access the efficient queue system with ease. BioQueue is freely available at https://github.com/liyao001/BioQueue. Extensive documentation can be found at http://bioqueue.readthedocs.io. Contact: li_yao@outlook.com or gcsui@nefu.edu.cn. Supplementary data are available at Bioinformatics online.
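
    BioQueue's checkpoint idea, estimating a step's resource needs before dispatching it, can be sketched as follows (the history data, safety margin and function names are illustrative, not BioQueue's actual implementation):

```python
# Hypothetical per-step memory history from previous runs, in GiB:
history = {"bwa mem": [1.9, 2.1, 2.0], "samtools sort": [0.7, 0.8]}

def estimate_memory(step, margin=1.25):
    """Predict peak memory for a step from past runs, with a safety margin."""
    runs = history.get(step)
    return max(runs) * margin if runs else None  # None -> no estimate yet

def can_dispatch(step, free_memory_gib):
    """Checkpoint: dispatch the step only if the estimate fits in free memory."""
    needed = estimate_memory(step)
    return needed is None or needed <= free_memory_gib

print(can_dispatch("bwa mem", 4.0), can_dispatch("bwa mem", 2.0))
```

    Holding a step until the host can accommodate its predicted peak is what lets a scheduler avoid overload errors such as memory overflow without asking users to guess resource limits themselves.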

  9. Impact of Operating Room Environment on Postoperative Central Nervous System Infection in a Resource-Limited Neurosurgical Center in South Asia.

    Science.gov (United States)

    Chidambaram, Swathi; Vasudevan, Madabushi Chakravarthy; Nair, Mani Nathan; Joyce, Cara; Germanwala, Anand V

    2018-02-01

    Postoperative central nervous system infections (PCNSIs) are serious complications following neurosurgical intervention. We previously investigated the incidence and causative pathogens of PCNSIs at a resource-limited neurosurgical center in south Asia. This follow-up study was conducted to analyze differences in PCNSIs at the same institution following only one apparent change: the operating room air filtration system. This was a retrospective study of all neurosurgical cases performed between December 1, 2013, and March 31, 2016 at our center. Providers, patient demographic data, case types, perioperative care, rate of PCNSI, and rates of other complications were reviewed. These results were then compared with the findings of our previous study of neurosurgical cases between June 1, 2012, and June 30, 2013. All 623 neurosurgical operative cases over the study period were reviewed. Four patients (0.6%) had a PCNSI, and no patients had a positive cerebrospinal fluid (CSF) culture. In the previous study, among 363 cases, 71 patients (19.6%) had a PCNSI and 7 (1.9%) had a positive CSF culture (all Gram-negative organisms). The differences in both parameters are statistically significant. The only apparent change between the two study periods was the installation of a new air filtration system inside the neurosurgical operating rooms; this environmental change occurred during the 5 months between the 2 studies. This study demonstrates the impact of environmental factors in reducing infections.

  10. The Beamline X28C of the Center for Synchrotron Biosciences: a National Resource for Biomolecular Structure and Dynamics Experiments Using Synchrotron Footprinting

    International Nuclear Information System (INIS)

    Gupta, S.; Sullivan, M.; Toomey, J.; Kiselar, J.; Chance, M.

    2007-01-01

    Structural mapping of proteins and nucleic acids with high resolution in solution is of critical importance for understanding their biological function. A wide range of footprinting technologies have been developed over the last ten years to address this need. Beamline X28C, a white-beam X-ray source at the National Synchrotron Light Source of Brookhaven National Laboratory, functions as a platform for synchrotron footprinting research and further technology development in this growing field. An expanding set of user groups utilize this national resource funded by the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health. The facility is operated by the Center for Synchrotron Biosciences and the Center for Proteomics of Case Western Reserve University. The facility includes instrumentation suitable for conducting both steady-state and millisecond time-resolved footprinting experiments based on the production of hydroxyl radicals by X-rays. Footprinting studies of nucleic acids are routinely conducted with X-ray exposures of tens of milliseconds, which include studies of nucleic acid folding and their interactions with proteins. This technology can also be used to study protein structure and dynamics in solution as well as protein-protein interactions in large macromolecular complexes. This article provides an overview of the X28C beamline technology and defines protocols for its adoption at other synchrotron facilities. Lastly, several examples of published results provide illustrations of the kinds of experiments likely to be successful using these approaches.

  11. What can bioinformatics do for Natural History museums?

    Directory of Open Access Journals (Sweden)

    Becerra, José María

    2003-06-01

    Full Text Available We propose the founding of a Natural History bioinformatics framework, which would solve one of the main problems in Natural History: data scattered across many incompatible systems (not only computer systems, but also paper ones). This framework consists of computer resources (hardware and software), methodologies that ease the circulation of data, and staff expert in computing, who will develop software solutions to the problems encountered by naturalists. The system is organized in three layers: acquisition, data and analysis. Each layer is described, along with the elements that constitute it.

  12. Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community

    Science.gov (United States)

    2012-01-01

    Background A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the scale of sequence datasets, scientific value cannot be obtained from acquiring a sequencer unless it is accompanied by an equal investment in informatics infrastructure. Results Cloud BioLinux is a publicly accessible Virtual Machine (VM) that enables scientists to quickly provision on-demand infrastructures for high-performance bioinformatics computing using cloud platforms. Users have instant access to a range of pre-configured command line and graphical software applications, including a full-featured desktop interface, documentation and over 135 bioinformatics packages for applications including sequence alignment, clustering, assembly, display, editing, and phylogeny. Each tool's functionality is fully described in the documentation directly accessible from the graphical interface of the VM. Besides the Amazon EC2 cloud, we have started instances of Cloud BioLinux on a private Eucalyptus cloud installed at the J. Craig Venter Institute, and demonstrated access to the bioinformatic tools interface through a remote connection to EC2 instances from a local desktop computer. Documentation for using Cloud BioLinux on EC2 is available from our project website, while a Eucalyptus cloud image and VirtualBox Appliance is also publicly available for download and use by researchers with access to private clouds. Conclusions Cloud BioLinux provides a platform for developing bioinformatics infrastructures on the cloud. 
An automated and configurable process builds Virtual Machines, allowing the

  13. Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community.

    Science.gov (United States)

    Krampis, Konstantinos; Booth, Tim; Chapman, Brad; Tiwari, Bela; Bicak, Mesude; Field, Dawn; Nelson, Karen E

    2012-03-19

    A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the scale of sequence datasets, scientific value cannot be obtained from acquiring a sequencer unless it is accompanied by an equal investment in informatics infrastructure. Cloud BioLinux is a publicly accessible Virtual Machine (VM) that enables scientists to quickly provision on-demand infrastructures for high-performance bioinformatics computing using cloud platforms. Users have instant access to a range of pre-configured command line and graphical software applications, including a full-featured desktop interface, documentation and over 135 bioinformatics packages for applications including sequence alignment, clustering, assembly, display, editing, and phylogeny. Each tool's functionality is fully described in the documentation directly accessible from the graphical interface of the VM. Besides the Amazon EC2 cloud, we have started instances of Cloud BioLinux on a private Eucalyptus cloud installed at the J. Craig Venter Institute, and demonstrated access to the bioinformatic tools interface through a remote connection to EC2 instances from a local desktop computer. Documentation for using Cloud BioLinux on EC2 is available from our project website, while a Eucalyptus cloud image and VirtualBox Appliance is also publicly available for download and use by researchers with access to private clouds. Cloud BioLinux provides a platform for developing bioinformatics infrastructures on the cloud. An automated and configurable process builds Virtual Machines, allowing the development of highly

  14. Bioinformatics: Cheap and robust method to explore biomaterial from Indonesia biodiversity

    Science.gov (United States)

    Widodo

    2015-02-01

    Indonesia has a huge amount of biodiversity, which may contain many biomaterials for pharmaceutical application. The potency of these resources should be explored to discover new drugs for human welfare. However, bioactivity screening using conventional methods is very expensive and time-consuming. Therefore, we developed a bioinformatics-based methodology for screening the potential of natural resources. The method builds on the fact that organisms in the same taxon tend to have similar genes, metabolism and secondary metabolites. We employ bioinformatics to explore the potency of biomaterials from Indonesian biodiversity by comparing species with well-known taxa that contain active compounds, using published papers and chemical databases. We then analyze the drug-likeness, bioactivity and target proteins of the active compounds based on their molecular structures. The target protein is examined for its interactions with other proteins in the cell to determine the action mechanism of the active compound at the cellular level, as well as to predict its side effects and toxicity. Using this method, we succeeded in screening anti-cancer, immunomodulatory and anti-inflammatory agents from Indonesian biodiversity. For example, we found an anticancer candidate from a marine invertebrate. The candidate was identified from the isolated compounds of the marine invertebrate reported in published articles and databases; we then identified the protein targets and performed molecular pathway analysis. The data suggested that the active compound of the invertebrate is able to kill cancer cells. We then collected and extracted the active compound from the invertebrate and examined its activity on a cancer cell line (MCF7). The MTT result showed that the methanol extract of the marine invertebrate was highly potent in killing MCF7 cells.
Therefore, we concluded that bioinformatics is a cheap and robust way to explore bioactives from Indonesian biodiversity as a source of drugs and another

  15. Human Performance Resource Center (HPRC)

    Data.gov (United States)

    Federal Laboratory Consortium — HPRC is aligned under Force Health Protection and Readiness and is the educational arm of the Consortium for Health and Military Performance (CHAMP) at the Uniformed...

  16. Naturally selecting solutions: the use of genetic algorithms in bioinformatics.

    Science.gov (United States)

    Manning, Timmy; Sleator, Roy D; Walsh, Paul

    2013-01-01

    For decades, computer scientists have looked to nature for biologically inspired solutions to computational problems, ranging from robotic control to scheduling optimization. Paradoxically, as we move deeper into the post-genomics era, the reverse is occurring, as biologists and bioinformaticians look to computational techniques to solve a variety of biological problems. One of the most common biologically inspired techniques is the genetic algorithm (GA), which takes the Darwinian concept of natural selection as the driving force behind systems for solving real-world problems, including those in the bioinformatics domain. Herein, we provide an overview of genetic algorithms and survey some of the most recent applications of this approach to bioinformatics problems.
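
    A minimal GA of the kind surveyed here, with selection, crossover and mutation driving a population toward a toy target (recovering a short DNA motif; the fitness function and parameters are illustrative):

```python
import random

random.seed(1)
TARGET = "ACGTACGT"  # toy 'optimal' DNA motif to recover

def fitness(ind):
    """Number of positions matching the target."""
    return sum(a == b for a, b in zip(ind, TARGET))

def mutate(ind, rate=0.1):
    """Randomly resample each base with the given probability."""
    return "".join(random.choice("ACGT") if random.random() < rate else c
                   for c in ind)

def crossover(a, b):
    """Single-point crossover of two parent strings."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

pop = ["".join(random.choice("ACGT") for _ in TARGET) for _ in range(30)]
for gen in range(100):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == len(TARGET):
        break
    parents = pop[:10]  # selection: keep the fittest (elitism)
    pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                     for _ in range(20)]
print(fitness(max(pop, key=fitness)), "of", len(TARGET))
```

    Real bioinformatics applications swap the toy fitness function for a domain objective, such as alignment quality or model fit, while keeping the same selection-crossover-mutation loop.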

  17. BioXSD: the common data-exchange format for everyday bioinformatics web services

    DEFF Research Database (Denmark)

    Kalas, M.; Puntervoll, P.; Joseph, A.

    2010-01-01

    Motivation: The world-wide community of life scientists has access to a large number of public bioinformatics databases and tools, which are developed and deployed using diverse technologies and designs. More and more of the resources offer programmatic web-service interface. However, efficient use...... and defines syntax for biological sequences, sequence annotations, alignments and references to resources. We have adapted a set of web services to use BioXSD as the input and output format, and implemented a test-case workflow. This demonstrates that the approach is feasible and provides smooth...... interoperability. Semantics for BioXSD is provided by annotation with the EDAM ontology. We discuss in a separate section how BioXSD relates to other initiatives and approaches, including existing standards and the Semantic Web....

  18. Incorporating Genomics and Bioinformatics across the Life Sciences Curriculum

    Energy Technology Data Exchange (ETDEWEB)

    Ditty, Jayna L.; Kvaal, Christopher A.; Goodner, Brad; Freyermuth, Sharyn K.; Bailey, Cheryl; Britton, Robert A.; Gordon, Stuart G.; Heinhorst, Sabine; Reed, Kelynne; Xu, Zhaohui; Sanders-Lorenz, Erin R.; Axen, Seth; Kim, Edwin; Johns, Mitrick; Scott, Kathleen; Kerfeld, Cheryl A.

    2011-08-01

    Undergraduate life sciences education needs an overhaul, as clearly described in the National Research Council of the National Academies publication BIO 2010: Transforming Undergraduate Education for Future Research Biologists. Among BIO 2010's top recommendations is the need to involve students in working with real data and tools that reflect the nature of life sciences research in the 21st century. Education research studies support the importance of utilizing primary literature, designing and implementing experiments, and analyzing results in the context of a bona fide scientific question in cultivating the analytical skills necessary to become a scientist. Incorporating these basic scientific methodologies in undergraduate education leads to increased undergraduate and post-graduate retention in the sciences. Toward this end, many undergraduate teaching organizations offer training and suggestions for faculty to update and improve their teaching approaches to help students learn as scientists, through design and discovery (e.g., Council of Undergraduate Research [www.cur.org] and Project Kaleidoscope [www.pkal.org]). With the advent of genome sequencing and bioinformatics, many scientists now formulate biological questions and interpret research results in the context of genomic information. Just as the use of bioinformatic tools and databases changed the way scientists investigate problems, it must change how scientists teach to create new opportunities for students to gain experiences reflecting the influence of genomics, proteomics, and bioinformatics on modern life sciences research. Educators have responded by incorporating bioinformatics into diverse life science curricula. While these published exercises in, and guidelines for, bioinformatics curricula are helpful and inspirational, faculty new to the area of bioinformatics inevitably need training in the theoretical underpinnings of the algorithms. Moreover, effectively integrating bioinformatics

  19. 5th HUPO BPP Bioinformatics Meeting at the European Bioinformatics Institute in Hinxton, UK--Setting the analysis frame.

    Science.gov (United States)

    Stephan, Christian; Hamacher, Michael; Blüggel, Martin; Körting, Gerhard; Chamrad, Daniel; Scheer, Christian; Marcus, Katrin; Reidegeld, Kai A; Lohaus, Christiane; Schäfer, Heike; Martens, Lennart; Jones, Philip; Müller, Michael; Auyeung, Kevin; Taylor, Chris; Binz, Pierre-Alain; Thiele, Herbert; Parkinson, David; Meyer, Helmut E; Apweiler, Rolf

    2005-09-01

    The Bioinformatics Committee of the HUPO Brain Proteome Project (HUPO BPP) meets regularly to execute the post-lab analyses of the data produced in the HUPO BPP pilot studies. On July 7, 2005 the members came together for the 5th time at the European Bioinformatics Institute (EBI) in Hinxton, UK, hosted by Rolf Apweiler. As a main result, the parameter set of the semi-automated data re-analysis of MS/MS spectra has been elaborated and the subsequent work steps have been defined.

  20. Carbon Monoxide Information Center

    Medline Plus


  1. G-DOC Plus - an integrative bioinformatics platform for precision medicine.

    Science.gov (United States)

    Bhuvaneshwar, Krithika; Belouali, Anas; Singh, Varun; Johnson, Robert M; Song, Lei; Alaoui, Adil; Harris, Michael A; Clarke, Robert; Weiner, Louis M; Gusev, Yuriy; Madhavan, Subha

    2016-04-30

    G-DOC Plus is a data integration and bioinformatics platform that uses cloud computing and other advanced computational tools to handle a variety of biomedical big data, including gene expression arrays, NGS data and medical images, so that they can be analyzed in the full context of other omics and clinical information. G-DOC Plus currently holds data from over 10,000 patients selected from private and public resources including Gene Expression Omnibus (GEO), The Cancer Genome Atlas (TCGA) and the recently added datasets from the REpository for Molecular BRAin Neoplasia DaTa (REMBRANDT), caArray studies of lung and colon cancer, ImmPort and the 1000 Genomes data sets. The system allows researchers to explore clinical-omic data one sample at a time, as a cohort of samples, or at the population level, providing the user with a comprehensive view of the data. G-DOC Plus tools have been leveraged in cancer and non-cancer studies for hypothesis generation and validation, biomarker discovery and multi-omics analysis; to explore somatic mutations and cancer MRI images; and for training and graduate education in bioinformatics, data and computational sciences. Several of these use cases are described in this paper to demonstrate its multifaceted usability. G-DOC Plus can be used to support a variety of user groups in multiple domains to enable hypothesis generation for precision medicine research. The long-term vision of G-DOC Plus is to extend this translational bioinformatics platform to stay current with emerging omics technologies and analysis methods to continue supporting novel hypothesis generation, analysis and validation for integrative biomedical research. By integrating several aspects of the disease and exposing various data elements, such as outpatient lab workup, pathology, radiology, current treatments, molecular signatures and expected outcomes over a web interface, G-DOC Plus will continue to strengthen precision medicine research. G-DOC Plus is available

  2. Hydrologic Engineering Center

    Data.gov (United States)

    Federal Laboratory Consortium — The Hydrologic Engineering Center (HEC), an organization within the Institute for Water Resources, is the designated Center of Expertise for the U.S. Army Corps of...

  3. Carbon Monoxide Information Center

    Medline Plus


  4. Intrageneric Primer Design: Bringing Bioinformatics Tools to the Class

    Science.gov (United States)

    Lima, Andre O. S.; Garces, Sergio P. S.

    2006-01-01

    Bioinformatics is one of the fastest growing scientific areas over the last decade. It focuses on the use of informatics tools for the organization and analysis of biological data. An example of their importance is the availability nowadays of dozens of software programs for genomic and proteomic studies. Thus, there is a growing field (private…

  5. Bioinformatics in the Netherlands : The value of a nationwide community

    NARCIS (Netherlands)

    van Gelder, Celia W.G.; Hooft, Rob; van Rijswijk, Merlijn; van den Berg, Linda; Kok, Ruben; Reinders, M.J.T.; Mons, Barend; Heringa, Jaap

    2017-01-01

    This review provides a historical overview of the inception and development of bioinformatics research in the Netherlands. Rooted in theoretical biology by foundational figures such as Paulien Hogeweg (at Utrecht University since the 1970s), the developments leading to organizational structures

  6. Bioinformatic tools and guideline for PCR primer design | Abd ...

    African Journals Online (AJOL)

    Bioinformatics has become an essential tool not only for basic research but also for applied research in biotechnology and biomedical sciences. Optimal primer sequence and appropriate primer concentration are essential for maximal specificity and efficiency of PCR. A poorly designed primer can result in little or no ...
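The abstract above is truncated before any concrete guidelines. One classic rule of thumb for short oligos, not taken from this article, is the Wallace rule for estimating melting temperature, sketched here alongside a GC-content check:

```python
def wallace_tm(primer):
    """Wallace rule of thumb for short oligos (< ~14 nt):
    Tm (deg C) ~ 2*(A+T) + 4*(G+C)."""
    p = primer.upper()
    at = p.count("A") + p.count("T")
    gc = p.count("G") + p.count("C")
    return 2 * at + 4 * gc

def gc_content_percent(primer):
    """GC content as a percentage; 40-60% is a commonly cited target range."""
    p = primer.upper()
    return 100.0 * (p.count("G") + p.count("C")) / len(p)
```

These are first-pass screens only; production primer design tools also check self-complementarity, hairpins and specificity against the template.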

  7. CROSSWORK for Glycans: Glycan Identification Through Mass Spectrometry and Bioinformatics

    DEFF Research Database (Denmark)

    Rasmussen, Morten; Thaysen-Andersen, Morten; Højrup, Peter

    We have developed "GLYCANthrope" - CROSSWORKS for glycans: a bioinformatics tool that assists in identifying N-linked glycosylated peptides as well as their glycan moieties from MS2 data of enzymatically digested glycoproteins. The program runs either as a stand-alone application or as a plug...

  8. Learning Genetics through an Authentic Research Simulation in Bioinformatics

    Science.gov (United States)

    Gelbart, Hadas; Yarden, Anat

    2006-01-01

    Following the rationale that learning is an active process of knowledge construction as well as enculturation into a community of experts, we developed a novel web-based learning environment in bioinformatics for high-school biology majors in Israel. The learning environment enables the learners to actively participate in a guided inquiry process…

  9. Hidden in the Middle: Culture, Value and Reward in Bioinformatics

    Science.gov (United States)

    Lewis, Jamie; Bartlett, Andrew; Atkinson, Paul

    2016-01-01

    Bioinformatics--the so-called shotgun marriage between biology and computer science--is an interdiscipline. Despite interdisciplinarity being seen as a virtue, for having the capacity to solve complex problems and foster innovation, it has the potential to place projects and people in anomalous categories. For example, valorised…

  10. Bioinformatics for Undergraduates: Steps toward a Quantitative Bioscience Curriculum

    Science.gov (United States)

    Chapman, Barbara S.; Christmann, James L.; Thatcher, Eileen F.

    2006-01-01

    We describe an innovative bioinformatics course developed under grants from the National Science Foundation and the California State University Program in Research and Education in Biotechnology for undergraduate biology students. The project has been part of a continuing effort to offer students classroom experiences focused on principles and…

  11. Mathematics and evolutionary biology make bioinformatics education comprehensible

    Science.gov (United States)

    Weisstein, Anton E.

    2013-01-01

    The patterns of variation within a molecular sequence data set result from the interplay between population genetic, molecular evolutionary and macroevolutionary processes—the standard purview of evolutionary biologists. Elucidating these patterns, particularly for large data sets, requires an understanding of the structure, assumptions and limitations of the algorithms used by bioinformatics software—the domain of mathematicians and computer scientists. As a result, bioinformatics often suffers a ‘two-culture’ problem because of the lack of broad overlapping expertise between these two groups. Collaboration among specialists in different fields has greatly mitigated this problem among active bioinformaticians. However, science education researchers report that much of bioinformatics education does little to bridge the cultural divide, with curricula too focused on solving narrow problems (e.g. interpreting pre-built phylogenetic trees) rather than on exploring broader ones (e.g. exploring alternative phylogenetic strategies for different kinds of data sets). Herein, we present an introduction to the mathematics of tree enumeration, tree construction, split decomposition and sequence alignment. We also introduce off-line downloadable software tools developed by the BioQUEST Curriculum Consortium to help students learn how to interpret and critically evaluate the results of standard bioinformatics analyses. PMID:23821621

  12. The structural bioinformatics library: modeling in biomolecular science and beyond.

    Science.gov (United States)

    Cazals, Frédéric; Dreyfus, Tom

    2017-04-01

    Software in structural bioinformatics has mainly been application driven. To serve practitioners seeking off-the-shelf applications, but also developers seeking advanced building blocks to develop novel applications, we undertook the design of the Structural Bioinformatics Library (SBL, http://sbl.inria.fr), a generic C++/Python cross-platform software library targeting complex problems in structural bioinformatics. Its tenet is a modular design offering a rich and versatile framework allowing the development of novel applications requiring well-specified complex operations, without compromising robustness and performance. The SBL involves four software components (1-4 thereafter). For end-users, the SBL provides ready-to-use, state-of-the-art (1) applications to handle molecular models defined by unions of balls, to deal with molecular flexibility, and to model macro-molecular assemblies. These applications can also be combined to tackle integrated analysis problems. For developers, the SBL provides a broad C++ toolbox with modular design, involving core (2) algorithms, (3) biophysical models and (4) modules, the latter being especially suited to develop novel applications. The SBL comes with thorough documentation consisting of user and reference manuals, and a bugzilla platform to handle community feedback. The SBL is available from http://sbl.inria.fr. Frederic.Cazals@inria.fr. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  13. Rapid cloning and bioinformatic analysis of spinach Y chromosome ...

    Indian Academy of Sciences (India)

    Rapid cloning and bioinformatic analysis of spinach Y chromosome-specific EST sequences. Chuan-Liang Deng, Wei-Li Zhang, Ying Cao, Shao-Jing Wang, ... [The record includes a table of EST BLAST hits, e.g. Arabidopsis thaliana mRNA for mitochondrial half-ABC transporter (STA1 gene) and Betula pendula histidine kinase 3 (HK3) mRNA, omitted here.]

  14. Mathematics and evolutionary biology make bioinformatics education comprehensible.

    Science.gov (United States)

    Jungck, John R; Weisstein, Anton E

    2013-09-01

    The patterns of variation within a molecular sequence data set result from the interplay between population genetic, molecular evolutionary and macroevolutionary processes-the standard purview of evolutionary biologists. Elucidating these patterns, particularly for large data sets, requires an understanding of the structure, assumptions and limitations of the algorithms used by bioinformatics software-the domain of mathematicians and computer scientists. As a result, bioinformatics often suffers a 'two-culture' problem because of the lack of broad overlapping expertise between these two groups. Collaboration among specialists in different fields has greatly mitigated this problem among active bioinformaticians. However, science education researchers report that much of bioinformatics education does little to bridge the cultural divide, with curricula too focused on solving narrow problems (e.g. interpreting pre-built phylogenetic trees) rather than on exploring broader ones (e.g. exploring alternative phylogenetic strategies for different kinds of data sets). Herein, we present an introduction to the mathematics of tree enumeration, tree construction, split decomposition and sequence alignment. We also introduce off-line downloadable software tools developed by the BioQUEST Curriculum Consortium to help students learn how to interpret and critically evaluate the results of standard bioinformatics analyses.
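The combinatorics of tree enumeration mentioned above can be made concrete: the number of distinct unrooted binary trees on n taxa is the double factorial (2n-5)!!, which grows explosively and is why exhaustive tree search is infeasible for large data sets. A small sketch (standard result, not code from the paper):

```python
def unrooted_tree_count(n_taxa):
    """Number of distinct unrooted binary (fully bifurcating) trees
    on n_taxa leaves: (2n-5)!! = 1*3*5*...*(2n-5), for n_taxa >= 3."""
    count = 1
    for k in range(3, 2 * n_taxa - 4, 2):
        count *= k
    return count
```

Already at 10 taxa there are over two million candidate topologies, motivating the heuristic search strategies that phylogenetics software relies on.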

  15. Challenges of Treating Childhood Medulloblastoma in a Country With Limited Resources: 20 Years of Experience at a Single Tertiary Center in Malaysia.

    Science.gov (United States)

    Rajagopal, Revathi; Abd-Ghafar, Sayyidatul; Ganesan, Dharmendra; Bustam Mainudin, Anita Zarina; Wong, Kum Thong; Ramli, Norlisah; Jawin, Vida; Lum, Su Han; Yap, Tsiao Yi; Bouffet, Eric; Qaddoumi, Ibrahim; Krishnan, Shekhar; Ariffin, Hany; Abdullah, Wan Ariffin

    2017-04-01

    Pediatric medulloblastoma (MB) treatment has evolved over the past few decades; however, treating children in countries with limited resources remains challenging. Until now, the literature regarding childhood MB in Malaysia has been nonexistent. Our objectives were to review the demographics and outcome of pediatric MB treated at the University Malaya Medical Center between January 1994 and December 2013 and describe the challenges encountered. Fifty-one patients with childhood MB were seen at University Malaya Medical Center. Data from 43 patients were analyzed; eight patients were excluded because their families refused treatment after surgery. Headache and vomiting were the most common presenting symptoms, and the mean interval between symptom onset and diagnosis was 4 weeks. Fourteen patients presented with metastatic disease. Five-year progression-free survival (± SE) for patients ≥ 3 years old was 41.7% ± 14.2% (95% CI, 21.3% to 81.4%) in the high-risk group and 68.6% ± 18.6% (95% CI, 40.3% to 100%) in the average-risk group, and 5-year overall survival (± SE) in these two groups was 41.7% ± 14.2% (95% CI, 21.3% to 81.4%) and 58.3% ± 18.6% (95% CI, 31.3% to 100%), respectively. Children younger than 3 years old had 5-year progression-free and overall survival rates (± SE) of 47.6% ± 12.1% (95% CI, 28.9% to 78.4%) and 45.6% ± 11.7% (95% CI, 27.6% to 75.5%), respectively. Time to relapse ranged from 4 to 132 months. Most patients who experienced relapse died within 1 year. Febrile neutropenia, hearing loss, and endocrinopathy were the most common treatment-related complications. The survival rate of childhood MB in Malaysia is inferior to that usually reported in the literature. We postulate that the following factors contribute to this difference: lack of a multidisciplinary neuro-oncology team, limited health care facilities, inconsistent risk assessment, insufficient data in the National Cancer Registry and pathology reports, inadequate long

  16. Challenges of Treating Childhood Medulloblastoma in a Country With Limited Resources: 20 Years of Experience at a Single Tertiary Center in Malaysia

    Directory of Open Access Journals (Sweden)

    Revathi Rajagopal

    2017-04-01

    Full Text Available Purpose: Pediatric medulloblastoma (MB) treatment has evolved over the past few decades; however, treating children in countries with limited resources remains challenging. Until now, the literature regarding childhood MB in Malaysia has been nonexistent. Our objectives were to review the demographics and outcome of pediatric MB treated at the University Malaya Medical Center between January 1994 and December 2013 and describe the challenges encountered. Methods: Fifty-one patients with childhood MB were seen at University Malaya Medical Center. Data from 43 patients were analyzed; eight patients were excluded because their families refused treatment after surgery. Results: Headache and vomiting were the most common presenting symptoms, and the mean interval between symptom onset and diagnosis was 4 weeks. Fourteen patients presented with metastatic disease. Five-year progression-free survival (± SE) for patients ≥ 3 years old was 41.7% ± 14.2% (95% CI, 21.3% to 81.4%) in the high-risk group and 68.6% ± 18.6% (95% CI, 40.3% to 100%) in the average-risk group, and 5-year overall survival (± SE) in these two groups was 41.7% ± 14.2% (95% CI, 21.3% to 81.4%) and 58.3% ± 18.6% (95% CI, 31.3% to 100%), respectively. Children younger than 3 years old had 5-year progression-free and overall survival rates (± SE) of 47.6% ± 12.1% (95% CI, 28.9% to 78.4%) and 45.6% ± 11.7% (95% CI, 27.6% to 75.5%), respectively. Time to relapse ranged from 4 to 132 months. Most patients who experienced relapse died within 1 year. Febrile neutropenia, hearing loss, and endocrinopathy were the most common treatment-related complications. Conclusion: The survival rate of childhood MB in Malaysia is inferior to that usually reported in the literature. We postulate that the following factors contribute to this difference: lack of a multidisciplinary neuro-oncology team, limited health care facilities, inconsistent risk assessment, insufficient data in the National Cancer

  17. BioWarehouse: a bioinformatics database warehouse toolkit

    Directory of Open Access Journals (Sweden)

    Stringer-Calvert David WJ

    2006-03-01

    Full Text Available Abstract Background This article addresses the problem of interoperation of heterogeneous bioinformatics databases. Results We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. Conclusion BioWarehouse embodies significant progress on the

  18. BioWarehouse: a bioinformatics database warehouse toolkit.

    Science.gov (United States)

    Lee, Thomas J; Pouliot, Yannick; Wagner, Valerie; Gupta, Priyanka; Stringer-Calvert, David W J; Tenenbaum, Jessica D; Karp, Peter D

    2006-03-23

    This article addresses the problem of interoperation of heterogeneous bioinformatics databases. We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. BioWarehouse embodies significant progress on the database integration problem for bioinformatics.
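The multi-database SQL queries described above can be illustrated in miniature. The sketch below uses SQLite with invented table and column names (not BioWarehouse's actual schema) to mirror, at toy scale, the paper's query for enzyme activities lacking any sequence record:

```python
import sqlite3

# Hypothetical mini-warehouse: two source datasets loaded into one schema,
# illustrating the kind of cross-source SQL query a warehouse enables.
# Table and column names are invented for this sketch.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE enzyme_activity (ec_number TEXT PRIMARY KEY, name TEXT);
CREATE TABLE protein_seq (ec_number TEXT, accession TEXT);
""")
con.executemany("INSERT INTO enzyme_activity VALUES (?, ?)",
                [("1.1.1.1", "alcohol dehydrogenase"),
                 ("2.7.1.1", "hexokinase"),
                 ("4.2.1.20", "tryptophan synthase")])
con.executemany("INSERT INTO protein_seq VALUES (?, ?)",
                [("1.1.1.1", "P00330"), ("2.7.1.1", "P19367")])

# Which EC-numbered activities have no sequence in the sequence table?
rows = con.execute("""
SELECT a.ec_number, a.name
FROM enzyme_activity a
LEFT JOIN protein_seq s ON s.ec_number = a.ec_number
WHERE s.accession IS NULL
""").fetchall()
```

Once heterogeneous sources share one relational schema, such gap analyses reduce to a single LEFT JOIN, which is the warehousing argument the paper makes at full scale.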

  19. Missing "Links" in Bioinformatics Education: Expanding Students' Conceptions of Bioinformatics Using a Biodiversity Database of Living and Fossil Reef Corals

    Science.gov (United States)

    Nehm, Ross H.; Budd, Ann F.

    2006-01-01

    NMITA is a reef coral biodiversity database that we use to introduce students to the expansive realm of bioinformatics beyond genetics. We introduce a series of lessons that have students use this database, thereby accessing real data that can be used to test hypotheses about biodiversity and evolution while targeting the "National Science …

  20. OralCard: a bioinformatic tool for the study of oral proteome.

    Science.gov (United States)

    Arrais, Joel P; Rosa, Nuno; Melo, José; Coelho, Edgar D; Amaral, Diana; Correia, Maria José; Barros, Marlene; Oliveira, José Luís

    2013-07-01

    The molecular complexity of the human oral cavity can only be clarified through identification of the components that participate within it. However, current proteomic techniques produce high volumes of information that are dispersed over several online databases. Collecting all of this data and using an integrative approach capable of identifying unknown associations is still an unsolved problem. This is the main motivation for this work. We present the online bioinformatic tool OralCard, which comprises results from 55 manually curated articles reflecting the oral molecular ecosystem (OralPhysiOme). It comprises experimental information available from the oral proteome both of human (OralOme) and microbial origin (MicroOralOme), structured by protein, disease and organism. This tool is a key resource for researchers to understand the molecular foundations implicated in the biology and disease mechanisms of the oral cavity. The usefulness of this tool is illustrated with the analysis of the oral proteome associated with diabetes mellitus type 2. OralCard is available at http://bioinformatics.ua.pt/oralcard. Copyright © 2013 Elsevier Ltd. All rights reserved.

  1. The secondary metabolite bioinformatics portal: Computational tools to facilitate synthetic biology of secondary metabolite production

    Directory of Open Access Journals (Sweden)

    Tilmann Weber

    2016-06-01

    Full Text Available Natural products are among the most important sources of lead molecules for drug discovery. With the development of affordable whole-genome sequencing technologies and other ‘omics tools, the field of natural products research is currently undergoing a shift in paradigms. While, for decades, mainly analytical and chemical methods gave access to this group of compounds, nowadays genomics-based methods offer complementary approaches to find, identify and characterize such molecules. This paradigm shift also resulted in a high demand for computational tools to assist researchers in their daily work. In this context, this review gives a summary of tools and databases that currently are available to mine, identify and characterize natural product biosynthesis pathways and their producers based on ‘omics data. A web portal called the Secondary Metabolite Bioinformatics Portal (SMBP) at http://www.secondarymetabolites.org is introduced to provide a one-stop catalog and links to these bioinformatics resources. In addition, an outlook is presented how the existing tools and those to be developed will influence synthetic biology approaches in the natural products field.

  2. Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses.

    Science.gov (United States)

    Liu, Bo; Madduri, Ravi K; Sotomayor, Borja; Chard, Kyle; Lacinski, Lukasz; Dave, Utpal J; Li, Jianqiang; Liu, Chunchen; Foster, Ian T

    2014-06-01

    With the coming deluge of genome data, the need to store and process large-scale genome data, provide easy access to biomedical analysis tools, and enable efficient data sharing and retrieval presents significant challenges. The variability in data volume results in variable computing and storage requirements; therefore, biomedical researchers are pursuing more reliable, dynamic and convenient methods for conducting sequencing analyses. This paper proposes a Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses, which enables reliable and highly scalable execution of sequencing analysis workflows in a fully automated manner. Our platform extends the existing Galaxy workflow system by adding data management capabilities for transferring large quantities of data efficiently and reliably (via Globus Transfer), domain-specific analysis tools preconfigured for immediate use by researchers (via user-specific tools integration), automatic deployment on Cloud for on-demand resource allocation and pay-as-you-go pricing (via Globus Provision), a Cloud provisioning tool for auto-scaling (via HTCondor scheduler), and support for validating the correctness of workflows (via semantic verification tools). Two bioinformatics workflow use cases as well as a performance evaluation are presented to validate the feasibility of the proposed approach. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Identification of differentially expressed genes and signaling pathways in ovarian cancer by integrated bioinformatics analysis

    Directory of Open Access Journals (Sweden)

    Yang X

    2018-03-01

    Full Text Available Xiao Yang,1 Shaoming Zhu,2 Li Li,3 Li Zhang,1 Shu Xian,1 Yanqing Wang,1 Yanxiang Cheng1 1Department of Obstetrics and Gynecology, 2Department of Urology, Renmin Hospital of Wuhan University, 3Department of Pharmacology, Wuhan University Health Science Center, Wuhan, Hubei, People’s Republic of China Background: The mortality rate associated with ovarian cancer ranks the highest among gynecological malignancies. However, the cause and underlying molecular events of ovarian cancer are not clear. Here, we applied integrated bioinformatics to identify key pathogenic genes involved in ovarian cancer and reveal potential molecular mechanisms. Results: The expression profiles of GDS3592, GSE54388, and GSE66957 were downloaded from the Gene Expression Omnibus (GEO) database, which contained 115 samples, including 85 cases of ovarian cancer samples and 30 cases of normal ovarian samples. The three microarray datasets were integrated to obtain differentially expressed genes (DEGs) and were deeply analyzed by bioinformatics methods. The gene ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichments of DEGs were performed by DAVID and KOBAS online analyses, respectively. The protein–protein interaction (PPI) networks of the DEGs were constructed from the STRING database. A total of 190 DEGs were identified in the three GEO datasets, of which 99 genes were upregulated and 91 genes were downregulated. GO analysis showed that the biological functions of DEGs focused primarily on regulating cell proliferation, adhesion, and differentiation and intracellular signal cascades. The main cellular components include cell membranes, exosomes, the cytoskeleton, and the extracellular matrix. The molecular functions include growth factor activity, protein kinase regulation, DNA binding, and oxygen transport activity. KEGG pathway analysis showed that these DEGs were mainly involved in the Wnt signaling pathway, amino acid metabolism, and the
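The DEG-selection step described above can be sketched with a toy fold-change filter. This is illustrative only, with made-up expression values; the study's actual pipeline used GEO microarray datasets and statistical testing:

```python
import math
import statistics

# Toy expression values (log-scale-free, invented for the sketch):
# three replicates per group for two genes.
tumor  = {"GENE_A": [8.1, 7.9, 8.3], "GENE_B": [2.0, 2.2, 1.9]}
normal = {"GENE_A": [2.1, 1.9, 2.0], "GENE_B": [2.1, 2.0, 2.2]}

def log2_fc(gene):
    """log2 fold change of mean expression, tumor vs. normal."""
    return math.log2(statistics.mean(tumor[gene]) / statistics.mean(normal[gene]))

# Flag genes with at least a 2-fold difference (|log2 FC| >= 1).
degs = [g for g in tumor if abs(log2_fc(g)) >= 1.0]
```

A real analysis would add a significance test with multiple-testing correction (e.g. an adjusted p-value cutoff) alongside the fold-change threshold before calling a gene differentially expressed.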

  4. MEDICAL DIAGNOSTICS BY MICROSTRUCTURAL ANALYSIS OF BIOLOGICAL LIQUID DRIED PATTERNS AS A PROBLEM OF BIOINFORMATICS

    Directory of Open Access Journals (Sweden)

    Petr Vladimirovich Lebedev-Stepanov, Dr.

    2018-02-01

    Full Text Available Motivation: It is important to develop high-precision computerized methods for rapid medical diagnostics that generalize the unique clinical experience obtained over the past decade, both as specialized solutions to diagnostic problems for specific diseases and, potentially, for broad health monitoring of the apparently healthy population, in order to identify reserves of human health and to take action to prevent their depletion. In this work we present one of the new directions in bioinformatics: medical diagnostics by an automated expert system based on morphological analysis of digital images of dried patterns of biological liquids. Results: The proposed method combines bioinformatics and biochemistry approaches to obtain diagnostic information from a morphological analysis of standardized dried patterns of sessile drops of biological liquid. We carried out our own research in collaboration with medical diagnostic centers and built an electronic database for recognizing the following types of diseases: candidiasis; neoplasms; diabetes mellitus; diseases of the circulatory system; cerebrovascular disease; diseases of the digestive system; diseases of the genitourinary system; infectious diseases; factors related to work; factors associated with environmental pollution; and factors related to lifestyle. A laboratory setup for diagnosing pathological states of the human body has been developed, and the diagnostic results are discussed. Availability: Access for testing the software can be obtained on request to the contact email below.

  5. Vermont Natural Resources Atlas

    Data.gov (United States)

    Vermont Center for Geographic Information — The purpose of the Natural Resources Atlas is to provide geographic information about environmental features and sites that the Vermont Agency of Natural Resources...

  6. Applying Instructional Design Theories to Bioinformatics Education in Microarray Analysis and Primer Design Workshops

    Science.gov (United States)

    Shachak, Aviv; Ophir, Ron; Rubin, Eitan

    2005-01-01

    The need to support bioinformatics training has been widely recognized by scientists, industry, and government institutions. However, the discussion of instructional methods for teaching bioinformatics is only beginning. Here we report on a systematic attempt to design two bioinformatics workshops for graduate biology students on the basis of…

  7. Introductory Bioinformatics Exercises Utilizing Hemoglobin and Chymotrypsin to Reinforce the Protein Sequence-Structure-Function Relationship

    Science.gov (United States)

    Inlow, Jennifer K.; Miller, Paige; Pittman, Bethany

    2007-01-01

    We describe two bioinformatics exercises intended for use in a computer laboratory setting in an upper-level undergraduate biochemistry course. To introduce students to bioinformatics, the exercises incorporate several commonly used bioinformatics tools, including BLAST, that are freely available online. The exercises build upon the students'…

  8. PTSD: National Center for PTSD

    Medline Plus


  9. Quantum Bio-Informatics II From Quantum Information to Bio-Informatics

    Science.gov (United States)

    Accardi, L.; Freudenberg, Wolfgang; Ohya, Masanori

    2009-02-01

    / H. Kamimura -- Massive collection of full-length complementary DNA clones and microarray analyses: keys to rice transcriptome analysis / S. Kikuchi -- Changes of influenza A(H5) viruses by means of entropic chaos degree / K. Sato and M. Ohya -- Basics of genome sequence analysis in bioinformatics - its fundamental ideas and problems / T. Suzuki and S. Miyazaki -- A basic introduction to gene expression studies using microarray expression data analysis / D. Wanke and J. Kilian -- Integrating biological perspectives: a quantum leap for microarray expression analysis / D. Wanke ... [et al.].

  10. Meeting review: 2002 O'Reilly Bioinformatics Technology Conference.

    Science.gov (United States)

    Counsell, Damian

    2002-01-01

    At the end of January I travelled to the States to speak at and attend the first O'Reilly Bioinformatics Technology Conference. It was a large, well-organized and diverse meeting with an interesting history. Although the meeting was not a typical academic conference, its style will, I am sure, become more typical of meetings in both biological and computational sciences. Speakers at the event included prominent bioinformatics researchers such as Ewan Birney, Terry Gaasterland and Lincoln Stein; authors and leaders in the open source programming community like Damian Conway and Nat Torkington; and representatives from several publishing companies including the Nature Publishing Group, Current Science Group and the President of O'Reilly himself, Tim O'Reilly. There were presentations, tutorials, debates, quizzes and even a 'jam session' for musical bioinformaticists.

  11. Open discovery: An integrated live Linux platform of Bioinformatics tools.

    Science.gov (United States)

    Vetrivel, Umashankar; Pilla, Kalabharath

    2008-01-01

    Historically, live Linux distributions for bioinformatics have paved the way for a portable, platform-independent bioinformatics workbench. However, most existing live Linux distributions limit their usage to sequence analysis and basic molecular visualization programs and lack data persistence. Hence, Open Discovery, a live Linux distribution, has been developed with the capability to perform complex tasks such as molecular modeling, docking, and molecular dynamics quickly. Furthermore, it is equipped with a complete sequence-analysis environment and is capable of running Windows executable programs in a Linux environment. Open Discovery builds on an advanced, customizable configuration of Fedora, with data persistence accessible via a USB drive or DVD. Open Discovery is distributed free under the Academic Free License (AFL) and can be downloaded from http://www.OpenDiscovery.org.in.

  12. Rise and demise of bioinformatics? Promise and progress.

    Directory of Open Access Journals (Sweden)

    Christos A Ouzounis

    Full Text Available The field of bioinformatics and computational biology has gone through a number of transformations during the past 15 years, establishing itself as a key component of new biology. This spectacular growth has been challenged by a number of disruptive changes in science and technology. Despite the apparent fatigue of the linguistic use of the term itself, bioinformatics has grown perhaps to a point beyond recognition. We explore both historical aspects and future trends and argue that as the field expands, key questions remain unanswered and acquire new meaning while at the same time the range of applications is widening to cover an ever increasing number of biological disciplines. These trends appear to be pointing to a redefinition of certain objectives, milestones, and possibly the field itself.

  13. A Survey on Evolutionary Algorithm Based Hybrid Intelligence in Bioinformatics

    Directory of Open Access Journals (Sweden)

    Shan Li

    2014-01-01

    Full Text Available With the rapid advance in genomics, proteomics, metabolomics, and other omics technologies during the past decades, a tremendous amount of data related to molecular biology has been produced. Analyzing and interpreting these data with conventional intelligent techniques, for example support vector machines, is becoming a big challenge for bioinformaticians. Recently, hybrid intelligent methods, which integrate several standard intelligent approaches, have become more and more popular due to their robustness and efficiency. In particular, hybrid intelligent approaches based on evolutionary algorithms (EAs) are widely used in various fields owing to the efficiency and robustness of EAs. In this review, we introduce the applications of hybrid intelligent methods, especially those based on evolutionary algorithms, in bioinformatics. We focus on their applications to three common problems that arise in bioinformatics: feature selection, parameter estimation, and reconstruction of biological networks.
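    As an illustration of EA-based feature selection of the kind surveyed above, the sketch below runs a tiny genetic algorithm over binary feature masks. The toy fitness function, which rewards a known set of "informative" feature indices, is a stand-in for what would in practice be a classifier's cross-validation score on the selected subset; all parameter values are invented.

    ```python
    import random

    INFORMATIVE = {0, 1, 2}  # toy ground truth: indices of informative features
    N_FEATURES = 8

    def fitness(mask):
        """Reward informative features, penalize noise features (toy objective)."""
        score = 0.0
        for i, bit in enumerate(mask):
            if bit:
                score += 1.0 if i in INFORMATIVE else -0.5
        return score

    def ga_feature_select(pop_size=20, generations=40, mut_rate=0.1, seed=42):
        """Genetic algorithm: truncation selection, one-point crossover, bit-flip mutation."""
        rng = random.Random(seed)
        pop = [[rng.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            elite = pop[: pop_size // 2]              # keep the better half
            children = []
            while len(children) < pop_size - len(elite):
                a, b = rng.sample(elite, 2)
                cut = rng.randrange(1, N_FEATURES)    # one-point crossover
                child = a[:cut] + b[cut:]
                child = [1 - bit if rng.random() < mut_rate else bit for bit in child]
                children.append(child)
            pop = elite + children
        best = max(pop, key=fitness)
        return best, fitness(best)

    best, best_fit = ga_feature_select()
    print(best, best_fit)
    ```

    Because the elite survives unchanged each generation, the best fitness is non-decreasing over time; swapping in a real wrapper objective (e.g., SVM accuracy) turns this into the feature-selection scheme the review describes.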

  14. Architecture exploration of FPGA based accelerators for bioinformatics applications

    CERN Document Server

    Varma, B Sharat Chandra; Balakrishnan, M

    2016-01-01

    This book presents an evaluation methodology to design future FPGA fabrics incorporating hard embedded blocks (HEBs) to accelerate applications. This methodology will be useful for selection of blocks to be embedded into the fabric and for evaluating the performance gain that can be achieved by such an embedding. The authors illustrate the use of their methodology by studying the impact of HEBs on two important bioinformatics applications: protein docking and genome assembly. The book also explains how the respective HEBs are designed and how hardware implementation of the application is done using these HEBs. It shows that significant speedups can be achieved over pure software implementations by using such FPGA-based accelerators. The methodology presented in this book may also be used for designing HEBs for accelerating software implementations in other domains besides bioinformatics. This book will prove useful to students, researchers, and practicing engineers alike.

  15. 2nd Colombian Congress on Computational Biology and Bioinformatics

    CERN Document Server

    Cristancho, Marco; Isaza, Gustavo; Pinzón, Andrés; Rodríguez, Juan

    2014-01-01

    This volume compiles accepted contributions for the 2nd Edition of the Colombian Computational Biology and Bioinformatics Congress CCBCOL, after a rigorous review process in which 54 papers were accepted for publication from 119 submitted contributions. Bioinformatics and Computational Biology are areas of knowledge that have emerged due to advances that have taken place in the Biological Sciences and its integration with Information Sciences. The expansion of projects involving the study of genomes has led the way in the production of vast amounts of sequence data which needs to be organized, analyzed and stored to understand phenomena associated with living organisms related to their evolution, behavior in different ecosystems, and the development of applications that can be derived from this analysis.

  16. Bioinformatics for whole-genome shotgun sequencing of microbial communities.

    Directory of Open Access Journals (Sweden)

    Kevin Chen

    2005-07-01

    Full Text Available The application of whole-genome shotgun sequencing to microbial communities represents a major development in metagenomics, the study of uncultured microbes via the tools of modern genomic analysis. In the past year, whole-genome shotgun sequencing projects of prokaryotic communities from an acid mine biofilm, the Sargasso Sea, Minnesota farm soil, three deep-sea whale falls, and deep-sea sediments have been reported, adding to previously published work on viral communities from marine and fecal samples. The interpretation of this new kind of data poses a wide variety of exciting and difficult bioinformatics problems. The aim of this review is to introduce the bioinformatics community to this emerging field by surveying existing techniques and promising new approaches for several of the most interesting of these computational problems.

  17. Statistical modelling in biostatistics and bioinformatics selected papers

    CERN Document Server

    Peng, Defen

    2014-01-01

    This book presents selected papers on statistical model development related mainly to the fields of Biostatistics and Bioinformatics. The coverage of the material falls squarely into the following categories: (a) Survival analysis and multivariate survival analysis, (b) Time series and longitudinal data analysis, (c) Statistical model development and (d) Applied statistical modelling. Innovations in statistical modelling are presented throughout each of the four areas, with some intriguing new ideas on hierarchical generalized non-linear models and on frailty models with structural dispersion, just to mention two examples. The contributors include distinguished international statisticians such as Philip Hougaard, John Hinde, Il Do Ha, Roger Payne and Alessandra Durio, among others, as well as promising newcomers. Some of the contributions have come from researchers working in the BIO-SI research programme on Biostatistics and Bioinformatics, centred on the Universities of Limerick and Galway in Ireland and fu...

  18. Case Study: Organizational Realignment at Tripler Army Medical Center to Reflect "Best Business Practice." Facilitate Coordinated Care, and Maximize the Use of Resources

    National Research Council Canada - National Science Library

    Gawlik, John

    2000-01-01

    ...) was established to evaluate Tripler's Utilization Management, Resource Management, Managed Care, Patient Administration, Information Management, and Clinical Support divisions to maximize billing...

  19. The World-Wide Web: An Interface between Research and Teaching in Bioinformatics

    Directory of Open Access Journals (Sweden)

    James F. Aiton

    1994-01-01

    Full Text Available The rapid expansion occurring in World-Wide Web activity is beginning to make the concepts of ‘global hypermedia’ and ‘universal document readership’ realistic objectives of the new revolution in information technology. One consequence of this increase in usage is that educators and students are becoming more aware of the diversity of the knowledge base which can be accessed via the Internet. Although computerised databases and information services have long played a key role in bioinformatics, these same resources can also be used to provide core materials for teaching and learning. The large datasets and archives that have been compiled for biomedical research can be enhanced with the addition of a variety of multimedia elements (images, digital videos, animation, etc.). The use of this digitally stored information in structured and self-directed learning environments is likely to increase as activity across the World-Wide Web increases.

  20. Cloning and bioinformatic analysis of lovastatin biosynthesis regulatory gene lovE.

    Science.gov (United States)

    Huang, Xin; Li, Hao-ming

    2009-08-05

    Lovastatin is an effective drug for the treatment of hyperlipidemia. This study aimed to clone the lovastatin biosynthesis regulatory gene lovE and to analyze the structure and function of its encoded protein. Based on the lovastatin synthase gene sequence from GenBank, primers were designed to amplify and clone the lovastatin biosynthesis regulatory gene lovE from Aspergillus terreus genomic DNA. Bioinformatic analysis of lovE and its encoded amino acid sequence was performed using internet resources and software such as DNAMAN. The target fragment lovE, almost 1500 bp in length, was amplified from Aspergillus terreus genomic DNA, and the secondary and three-dimensional structures of the LovE protein were predicted. In the lovastatin biosynthesis process, lovE is a regulatory gene and the LovE protein is a GAL4-like transcription factor.
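    Primer design of the kind described rests on quick sequence metrics such as GC content and melting temperature. A minimal sketch follows; the 12-mer sequence is invented for illustration (it is not the authors' actual lovE primer), and the Wallace rule used here is only a rough Tm estimate suitable for short oligos.

    ```python
    def gc_content(seq):
        """Fraction of G/C bases in a primer sequence."""
        seq = seq.upper()
        return (seq.count("G") + seq.count("C")) / len(seq)

    def wallace_tm(seq):
        """Wallace-rule melting temperature: 2 degC per A/T, 4 degC per G/C."""
        seq = seq.upper()
        at = seq.count("A") + seq.count("T")
        gc = seq.count("G") + seq.count("C")
        return 2 * at + 4 * gc

    primer = "ATGCGTACGTAT"  # hypothetical 12-mer, not an actual lovE primer
    print(round(gc_content(primer), 3), wallace_tm(primer))  # 0.417 34
    ```

    In practice, primer pairs are also checked for matched Tm, 3' stability, and self-complementarity before PCR amplification of a target such as lovE.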