WorldWideScience

Sample records for resource discovery tool

  1. Resource Discovery and Universal Access: Understanding Enablers and Barriers from the User Perspective

    OpenAIRE

    Beyene, Wondwossen

    2016-01-01

    Resource discovery tools are key to exploring, finding, and retrieving resources from the multitude of collections hosted by library and information systems. Modern resource discovery tools provide facet-rich interfaces that offer multiple alternatives to expose resources to their potential users and help them navigate to the resources they need. This paper examines one of those tools from the perspective of universal access, ...

  2. Usability Test Results for a Discovery Tool in an Academic Library

    Directory of Open Access Journals (Sweden)

    Jody Condit Fagan

    2008-03-01

    Full Text Available Discovery tools are emerging in libraries. These tools offer library patrons the ability to concurrently search the library catalog and journal articles. While vendors rush to provide feature-rich interfaces and access to as much content as possible, librarians wonder about the usefulness of these tools to library patrons. In order to learn about both the utility and usability of EBSCO Discovery Service, James Madison University conducted a usability test with eight students and two faculty members. The test consisted of nine tasks focused on common patron requests or related to the utility of specific discovery tool features. Software recorded participants’ actions and time on task, human observers judged the success of each task, and a post-survey questionnaire gathered qualitative feedback and comments from the participants.  Overall, participants were successful at most tasks, but specific usability problems suggested some interface changes for both EBSCO Discovery Service and JMU’s customizations of the tool.  The study also raised several questions for libraries above and beyond any specific discovery tool interface, including the scope and purpose of a discovery tool versus other library systems, working with the large result sets made possible by discovery tools, and navigation between the tool and other library services and resources.  This article will be of interest to those who are investigating discovery tools, selecting products, integrating discovery tools into a library web presence, or performing evaluations of similar systems.

  3. Freely Accessible Chemical Database Resources of Compounds for in Silico Drug Discovery.

    Science.gov (United States)

    Yang, JingFang; Wang, Di; Jia, Chenyang; Wang, Mengyao; Hao, GeFei; Yang, GuangFu

    2018-05-07

    In silico drug discovery is a well-established component of early drug discovery. However, the task is hampered by the limited quantity and quality of the compound databases available for screening. To overcome these obstacles, freely accessible database resources of compounds have proliferated in recent years. Nevertheless, choosing appropriate tools to work with these freely accessible databases is crucial. To the best of our knowledge, this is the first systematic review of this issue. The advantages and drawbacks of chemical databases are analyzed and summarized in this review, based on six categories of freely accessible chemical databases collected from the literature. Suggestions are provided on how, and under which conditions, the use of these databases is reasonable. Tools and procedures for building 3D-structure chemical libraries are also introduced. In this review, we describe the freely accessible chemical database resources for in silico drug discovery. In particular, the chemical information available for building chemical databases is an attractive resource for drug design, helping to alleviate experimental pressure. Copyright © Bentham Science Publishers.

  4. Role of Open Source Tools and Resources in Virtual Screening for Drug Discovery.

    Science.gov (United States)

    Karthikeyan, Muthukumarasamy; Vyas, Renu

    2015-01-01

    Advancement in chemoinformatics research, in parallel with the availability of high-performance computing platforms, has made handling large-scale, multi-dimensional scientific data for high-throughput drug discovery easier. In this study we have explored publicly available molecular databases with the help of open-source-based, integrated, in-house molecular informatics tools for virtual screening. The virtual screening literature of the past decade has been extensively investigated and thoroughly analyzed to reveal interesting patterns with respect to the drug, target, scaffold, and disease space. The review also focuses on integrated chemoinformatics tools capable of harvesting chemical data from textual literature and transforming it into truly computable chemical structures, identifying unique fragments and scaffolds from a class of compounds, automatically generating focused virtual libraries, computing molecular descriptors for structure-activity relationship studies, and applying the conventional filters used in lead discovery along with in-house developed exhaustive PTC (Pharmacophore, Toxicophore, and Chemophore) filters and machine learning tools for the design of potential disease-specific inhibitors. A case study on kinase inhibitors is provided as an example.

  5. Resource-estimation models and predicted discovery

    International Nuclear Information System (INIS)

    Hill, G.W.

    1982-01-01

    Resources have been estimated by predictive extrapolation from past discovery experience, by analogy with better-explored regions, or by inference from evidence of depletion of exploration targets. Changes in technology and new insights into geological mechanisms have occurred sufficiently often in the long run to form part of the pattern of mature discovery experience. The criterion that a meaningful resource estimate needs an objective measure of its precision or degree of uncertainty excludes 'estimates' based solely on expert opinion. This is illustrated by the development of error measures for several persuasive models of the discovery and production of oil and gas in the USA, both annually and in terms of increasing exploration effort. Appropriate generalizations of the models resolve many points of controversy. This is illustrated using two USA data sets describing the discovery of oil and of U3O8; the latter set highlights an inadequacy of the available official data. A review of the oil-discovery data set provides a warrant for adjusting the time-series prediction to a higher resource figure for USA petroleum. (author)
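
The extrapolation approach described in this record can be illustrated with a deliberately simple sketch: assume, purely for illustration, that the discovery rate per unit of exploratory effort decays exponentially, so the yet-to-find resource is the tail integral of that curve. All constants below are hypothetical.

```python
import math

def remaining_resource(r0, k, effort_to_date):
    """Toy discovery model: the discovery rate per unit of exploratory
    effort is r(e) = r0 * exp(-k * e). The yet-to-find resource is the
    integral of r(e) from the current cumulative effort to infinity."""
    return (r0 / k) * math.exp(-k * effort_to_date)

# Hypothetical figures: 50 units found per unit of effort initially,
# decline constant 0.1 per unit of effort, 20 units already expended.
yet_to_find = remaining_resource(50.0, 0.1, 20.0)
```

A fitted model of this kind also supports the record's criterion of an objective error measure, since the residuals of past discovery data around the fitted curve quantify the uncertainty of the extrapolation.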

  6. The Biomedical Resource Ontology (BRO) to enable resource discovery in clinical and translational research.

    Science.gov (United States)

    Tenenbaum, Jessica D; Whetzel, Patricia L; Anderson, Kent; Borromeo, Charles D; Dinov, Ivo D; Gabriel, Davera; Kirschner, Beth; Mirel, Barbara; Morris, Tim; Noy, Natasha; Nyulas, Csongor; Rubenson, David; Saxman, Paul R; Singh, Harpreet; Whelan, Nancy; Wright, Zach; Athey, Brian D; Becich, Michael J; Ginsburg, Geoffrey S; Musen, Mark A; Smith, Kevin A; Tarantal, Alice F; Rubin, Daniel L; Lyster, Peter

    2011-02-01

    The biomedical research community relies on a diverse set of resources, both within their own institutions and at other research centers. In addition, an increasing number of shared electronic resources have been developed. Without effective means to locate and query these resources, it is challenging, if not impossible, for investigators to be aware of the myriad resources available, or to effectively perform resource discovery when the need arises. In this paper, we describe the development and use of the Biomedical Resource Ontology (BRO) to enable semantic annotation and discovery of biomedical resources. We also describe the Resource Discovery System (RDS) which is a federated, inter-institutional pilot project that uses the BRO to facilitate resource discovery on the Internet. Through the RDS framework and its associated Biositemaps infrastructure, the BRO facilitates semantic search and discovery of biomedical resources, breaking down barriers and streamlining scientific research that will improve human health. Copyright © 2010 Elsevier Inc. All rights reserved.

  7. Semantic distributed resource discovery for multiple resource providers

    NARCIS (Netherlands)

    Pittaras, C.; Ghijsen, M.; Wibisono, A.; Grosso, P.; van der Ham, J.; de Laat, C.

    2012-01-01

    An emerging modus operandi among providers of cloud infrastructures is one where they share and combine their heterogeneous resources to offer end-user services tailored to specific scientific and business needs. A challenge to overcome is the discovery of suitable resources among these multiple ...

  8. GeoSearch: A lightweight broking middleware for geospatial resources discovery

    Science.gov (United States)

    Gui, Z.; Yang, C.; Liu, K.; Xia, J.

    2012-12-01

    With petabytes of geodata and thousands of geospatial web services available over the Internet, it is critical to support geoscience research and applications by finding the best-fit geospatial resources from these massive and heterogeneous holdings. The past decades saw the deployment of many service components to facilitate geospatial resource management and discovery. However, efficient and accurate geospatial resource discovery is still a big challenge, for the following reasons. 1) Entry barriers (also called "learning curves") hinder the usability of discovery services for end users. Different portals and catalogues adopt various access protocols, metadata formats, and GUI styles to organize, present, and publish metadata, and it is hard for end users to learn all these technical details and differences. 2) The cost of federating heterogeneous services is high. To provide sufficient resources and facilitate data discovery, many registries adopt periodic harvesting mechanisms to retrieve metadata from federated catalogues. These time-consuming processes impose network and storage burdens and data redundancy, along with the overhead of maintaining data consistency. 3) Semantics in data discovery are heterogeneous. Since keyword matching is still the primary search method in many operational discovery services, search accuracy (precision and recall) is hard to guarantee. Semantic technologies (such as semantic reasoning and similarity evaluation) offer a solution, but integrating them with existing services is challenging due to limits on the expandability of service frameworks and metadata templates. 4) The capabilities that help users make a final selection are inadequate. Most existing search portals lack intuitive and diverse information visualization methods and functions (sort, filter) to present, explore, and analyze search results. Furthermore, the presentation of the value ...
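
The precision/recall problem with plain keyword matching that this abstract points to can be seen in a minimal sketch. The synonym table and metadata record below are hypothetical stand-ins for a real semantic similarity service.

```python
# Hypothetical synonym table standing in for a semantic service.
SYNONYMS = {
    "precipitation": {"precipitation", "rainfall"},
}

def keyword_match(query, metadata):
    """Plain substring keyword matching, as in many catalogues."""
    return query.lower() in metadata.lower()

def expanded_match(query, metadata):
    """Recall improves once the query is expanded with synonyms."""
    terms = SYNONYMS.get(query.lower(), {query.lower()})
    return any(t in metadata.lower() for t in terms)

record = "Monthly rainfall grids for the continental United States"
```

Here a query for "precipitation" misses the record under plain matching but finds it after expansion; semantic reasoning and similarity evaluation generalize this idea beyond fixed synonym lists.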

  9. Resource Discovery in Activity-Based Sensor Networks

    DEFF Research Database (Denmark)

    Bucur, Doina; Bardram, Jakob

    This paper proposes a service discovery protocol for sensor networks that is specifically tailored for use in human-centered pervasive environments. It uses the high-level concept of computational activities (as logical bundles of data and resources) to give sensors in Activity-Based Sensor Networks (ABSNs) knowledge about their usage even at the network layer. ABSN redesigns classical network-level service discovery protocols to include and use this logical structuring of the network for a more practically applicable service discovery scheme. It enhances the generic Extended Zone Routing Protocol with logical sensor grouping and greatly lowers network overhead during the process of discovery, while keeping discovery latency close to optimal. Noting that in practical settings activity-based sensor ...

  10. Laboratory informatics tools integration strategies for drug discovery: integration of LIMS, ELN, CDS, and SDMS.

    Science.gov (United States)

    Machina, Hari K; Wild, David J

    2013-04-01

    There are technologies on the horizon that could dramatically change how informatics organizations design, develop, deliver, and support applications and data infrastructures to deliver maximum value to drug discovery organizations. Effective integration of data and laboratory informatics tools promises the ability of organizations to make better informed decisions about resource allocation during the drug discovery and development process and for more informed decisions to be made with respect to the market opportunity for compounds. We propose in this article a new integration model called ELN-centric laboratory informatics tools integration.

  11. Tools and data services registry: a community effort to document bioinformatics resources

    Science.gov (United States)

    Ison, Jon; Rapacki, Kristoffer; Ménager, Hervé; Kalaš, Matúš; Rydza, Emil; Chmura, Piotr; Anthon, Christian; Beard, Niall; Berka, Karel; Bolser, Dan; Booth, Tim; Bretaudeau, Anthony; Brezovsky, Jan; Casadio, Rita; Cesareni, Gianni; Coppens, Frederik; Cornell, Michael; Cuccuru, Gianmauro; Davidsen, Kristian; Vedova, Gianluca Della; Dogan, Tunca; Doppelt-Azeroual, Olivia; Emery, Laura; Gasteiger, Elisabeth; Gatter, Thomas; Goldberg, Tatyana; Grosjean, Marie; Grüning, Björn; Helmer-Citterich, Manuela; Ienasescu, Hans; Ioannidis, Vassilios; Jespersen, Martin Closter; Jimenez, Rafael; Juty, Nick; Juvan, Peter; Koch, Maximilian; Laibe, Camille; Li, Jing-Woei; Licata, Luana; Mareuil, Fabien; Mičetić, Ivan; Friborg, Rune Møllegaard; Moretti, Sebastien; Morris, Chris; Möller, Steffen; Nenadic, Aleksandra; Peterson, Hedi; Profiti, Giuseppe; Rice, Peter; Romano, Paolo; Roncaglia, Paola; Saidi, Rabie; Schafferhans, Andrea; Schwämmle, Veit; Smith, Callum; Sperotto, Maria Maddalena; Stockinger, Heinz; Vařeková, Radka Svobodová; Tosatto, Silvio C.E.; de la Torre, Victor; Uva, Paolo; Via, Allegra; Yachdav, Guy; Zambelli, Federico; Vriend, Gert; Rost, Burkhard; Parkinson, Helen; Løngreen, Peter; Brunak, Søren

    2016-01-01

    Life sciences are yielding huge data sets that underpin scientific discoveries fundamental to improvement in human health, agriculture and the environment. In support of these discoveries, a plethora of databases and tools are deployed, in technically complex and diverse implementations, across a spectrum of scientific disciplines. The corpus of documentation of these resources is fragmented across the Web, with much redundancy, and has lacked a common standard of information. The outcome is that scientists must often struggle to find, understand, compare and use the best resources for the task at hand. Here we present a community-driven curation effort, supported by ELIXIR—the European infrastructure for biological information—that aspires to a comprehensive and consistent registry of information about bioinformatics resources. The sustainable upkeep of this Tools and Data Services Registry is assured by a curation effort driven by and tailored to local needs, and shared amongst a network of engaged partners. As of November 2015, the registry includes 1785 resources, with depositions from 126 individual registrations including 52 institutional providers and 74 individuals. With community support, the registry can become a standard for dissemination of information about bioinformatics resources: we welcome everyone to join us in this common endeavour. The registry is freely available at https://bio.tools. PMID:26538599

  12. Discovery and Use of Online Learning Resources: Case Study Findings

    Directory of Open Access Journals (Sweden)

    Laurie Miller Nelson

    2004-04-01

    Full Text Available Much recent research and funding have focused on building Internet-based repositories that contain collections of high-quality learning resources, often called 'learning objects.' Yet little is known about how non-specialist users, in particular teachers, find, access, and use digital learning resources. To address this gap, this article describes a case study of mathematics and science teachers' practices and desires surrounding the discovery, selection, and use of digital library resources for instructional purposes. Findings suggest that the teacher participants used a broad range of search strategies to find resources that they deemed age-appropriate, current, and accurate. They intended to incorporate these resources, with little modification, into planned instructional activities. The article concludes with a discussion of the implications of the findings for improving the design of educational digital library systems, including tools supporting resource reuse.

  13. Wide-Area Publish/Subscribe Mobile Resource Discovery Based on IPv6 GeoNetworking

    OpenAIRE

    Noguchi, Satoru; Matsuura, Satoshi; Inomata, Atsuo; Fujikawa, Kazutoshi; Sunahara, Hideki

    2013-01-01

    Resource discovery is an essential function for distributed mobile applications integrated in vehicular communication systems. Key requirements of the mobile resource discovery are wide-area geographic-based discovery and scalable resource discovery not only inside a vehicular ad-hoc network but also through the Internet. While a number of resource discovery solutions have been proposed, most of them have focused on specific scale of network. Furthermore, managing a large number of mobile res...

  14. The Role of School District Science Coordinators in the District-Wide Appropriation of an Online Resource Discovery and Sharing Tool for Teachers

    Science.gov (United States)

    Lee, Victor R.; Leary, Heather M.; Sellers, Linda; Recker, Mimi

    2014-06-01

    When introducing and implementing a new technology for science teachers within a school district, we must consider not only the end users but also the roles and influence district personnel have on the eventual appropriation of that technology. School districts are, by their nature, complex systems with multiple individuals at different levels in the organization who are involved in supporting and providing instruction. Varying levels of support for new technologies between district coordinators and teachers can sometimes lead to counterintuitive outcomes. In this article, we examine the role of the district science coordinator in five school districts that participated in the implementation of an online resource discovery and sharing tool for Earth science teachers. Using a qualitative approach, we conducted and coded interviews with district coordinators and teachers to examine the varied responsibilities associated with the district coordinator and to infer the relationships that were developed and perceived by teachers. We then examine and discuss two cases that illustrate how those relationships could have influenced how the tool was adopted and used to differing degrees in the two districts. Specifically, the district that had high support for online resource use from its coordinator appeared to have the lowest level of tool use, and the district with much less visible support from its coordinator had the highest level of tool use. We explain this difference in terms of how the coordinator's promotion of teacher autonomy took distinctly different forms at those two districts.

  16. Node Discovery and Interpretation in Unstructured Resource-Constrained Environments

    DEFF Research Database (Denmark)

    Gechev, Miroslav; Kasabova, Slavyana; Mihovska, Albena D.

    2014-01-01

    ... for the discovery, linking, and interpretation of nodes in unstructured and resource-constrained network environments and their interrelated and collective use for the delivery of smart services. The model is based on a basic mathematical approach, which describes and predicts the success of human interactions ... in the context of long-term relationships and identifies several key variables in the context of communications in resource-constrained environments. The general theoretical model is described and several algorithms are proposed as part of the node discovery, identification, and linking processes in relation ...

  17. Retrieval of Legal Information Through Discovery Layers: A Case Study Related to Indian Law Libraries

    Directory of Open Access Journals (Sweden)

    Kushwah, Shivpal Singh

    2016-09-01

    Full Text Available Purpose. The purpose of this paper is to analyze and evaluate discovery layer search tools for the retrieval of legal information in Indian law libraries. The paper covers current practices in legal information retrieval, with special reference to Indian academic law libraries, and analyses their importance in the domain of law. Design/Methodology/Approach. A web survey and an observational study were used to collect the data. Data related to the discovery tools were collected by email and in further discussions with the discovery layer/tool/product developers and their representatives. Findings. Results show that most Indian law libraries subscribe to bundles of legal information resources such as HeinOnline, JSTOR, LexisNexis Academic, Manupatra, Westlaw India, SCC web, AIR Online (CD-ROM), and so on. International legal and academic resources are compatible with discovery tools because they support various standards related to online publishing and dissemination, such as OAI-PMH, OpenURL, MARC21, and Z39.50, but Indian legal resources such as Manupatra, AIR, and SCC are not compatible with the discovery layers. The central index is one of the most important components of a discovery search interface, and discovery layer services/tools could be useful for Indian law libraries as well if they included multiple legal and academic resources in their central index. But present practices and observations reveal that discovery layers do not yet cover legal information resources. Therefore, in their present form, discovery tools are an incomplete solution for Indian libraries, because the Indian legal resources available in law libraries are not covered. Originality/Value. Very limited research or published literature is available on discovery layers and their compatibility with legal information resources.

  18. Facilitating NCAR Data Discovery by Connecting Related Resources

    Science.gov (United States)

    Rosati, A.

    2012-12-01

    Linking datasets, creators, and users by employing the proper standards helps to increase the impact of funded research. In order for users to find a dataset, it must first be named. Data citations play the important role of giving datasets a persistent presence by assigning a formal "name" and location. This project focuses on the next step of the "name-find-use" sequence: enhancing the discoverability of NCAR data by connecting related resources on the web. By examining metadata schemas that document datasets, I investigated how Semantic Web approaches can help to reach the widest possible range of data users; the focus was to move from search engine optimization (SEO) to information connectivity. Two main markup types are prominent in the Semantic Web and applicable to scientific dataset discovery: the Open Archives Initiative Object Reuse and Exchange (OAI-ORE, www.openarchives.org) and Microdata (HTML5 and www.schema.org). The project creates pilot aggregations of related resources using both markup types for three case studies: the North American Regional Climate Change Assessment Program (NARCCAP) dataset and related publications, the Palmer Drought Severity Index (PDSI) animation and image files from NCAR's Visualization Lab (VisLab), and the multidisciplinary data types and formats from the Advanced Cooperative Arctic Data and Information Service (ACADIS). The project documents the differences between these markups and how each creates connectedness on the web. My recommendations point toward the most efficient and effective markup schema for aggregating resources within the three case studies, based on the following assessment criteria: ease of use, current state of support and adoption of the technology, integration with typical web tools, available vocabularies and geoinformatic standards, interoperability with current repositories and access portals (e.g. ESG, Java), and relation to data citation tools and methods.
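
For the schema.org side of the comparison, a dataset entry of the kind aggregated in these case studies might look like the sketch below, serialized as JSON-LD rather than inline HTML Microdata for brevity. The dataset name and citation URL are hypothetical placeholders.

```python
import json

# Hypothetical schema.org Dataset entry linking a dataset to a
# related publication; the name and URL are placeholders only.
dataset = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Example regional climate simulation output",
    "description": "Illustrative aggregation entry connecting a "
                   "dataset to its related resources.",
    "citation": ["https://doi.org/10.0000/example"],
}

markup = json.dumps(dataset, indent=2)
```

Search engines and aggregators that understand schema.org can then treat the dataset and its cited publications as one connected resource group.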

  19. Choosing Discovery: A Literature Review on the Selection and Evaluation of Discovery Layers

    Science.gov (United States)

    Moore, Kate B.; Greene, Courtney

    2012-01-01

    Within the next few years, traditional online public access catalogs will be replaced by more robust and interconnected discovery layers that can serve as primary public interfaces to simultaneously search many separate collections of resources. Librarians have envisioned this type of discovery tool since the 1980s, and research shows that…

  20. mySearch changed my life – a resource discovery journey

    OpenAIRE

    Crowley, Emma J.

    2013-01-01

    mySearch: the federated years
    mySearch: choosing a new platform
    mySearch: EBSCO Discovery Service (EDS)
    Implementing a new system
    Technical challenges
    Has resource discovery enhanced experiences at BU?
    Ongoing challenges
    Implications for library management systems
    Implications for information literacy
    Questions

  1. Citation Discovery Tools for Conducting Adaptive Meta-analyses to Update Systematic Reviews.

    Science.gov (United States)

    Bae, Jong-Myon; Kim, Eun Hee

    2016-03-01

    The systematic review (SR) is a research methodology that aims to synthesize related evidence. Updating previously conducted SRs is necessary when new evidence has been produced, but no consensus has yet emerged on the appropriate update methodology. The authors have developed a new SR update method called 'adaptive meta-analysis' (AMA) using the 'cited by', 'similar articles', and 'related articles' citation discovery tools in the PubMed and Scopus databases. This study evaluates the usefulness of these citation discovery tools for updating SRs. Lists were constructed by applying the citation discovery tools in the two databases to the articles analyzed by a published SR. The degree of overlap between the lists and distribution of excluded results were evaluated. The articles ultimately selected for the SR update meta-analysis were found in the lists obtained from the 'cited by' and 'similar' tools in PubMed. Most of the selected articles appeared in both the 'cited by' lists in Scopus and PubMed. The Scopus 'related' tool did not identify the appropriate articles. The AMA, which involves using both citation discovery tools in PubMed, and optionally, the 'related' tool in Scopus, was found to be useful for updating an SR.
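
As a rough sketch of how the PubMed-side tools can be queried programmatically, NCBI's E-utilities ELink service exposes link types corresponding to 'cited by' and 'similar articles'. The link names below follow the E-utilities documentation; the candidate-set logic mirrors the AMA idea of screening articles surfaced by both tools, and the PubMed IDs are hypothetical.

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi"

def elink_url(pmid, linkname):
    """Build an ELink request for one PubMed ID. The linkname
    'pubmed_pubmed_citedin' corresponds to the 'cited by' tool and
    'pubmed_pubmed' to 'similar articles'."""
    return EUTILS + "?" + urlencode(
        {"dbfrom": "pubmed", "id": pmid, "linkname": linkname})

def screening_candidates(cited_by, similar):
    """IDs surfaced by both tools form the SR-update screening set."""
    return sorted(set(cited_by) & set(similar))

url = elink_url("12345678", "pubmed_pubmed_citedin")
candidates = screening_candidates([101, 102, 103], [102, 103, 104])
```

In an actual update, the two ID lists would be fetched from the URLs built above for every article in the original review, then deduplicated before screening.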

  2. Discovery of resources using MADM approaches for parallel and distributed computing

    Directory of Open Access Journals (Sweden)

    Mandeep Kaur

    2017-06-01

    Full Text Available Grid computing, a form of parallel and distributed computing, allows the sharing of data and computational resources among users at various geographical locations. Grid resources are diverse in terms of their underlying attributes. The majority of state-of-the-art resource discovery techniques rely on static resource attributes during resource selection. However, resources matched on static attributes may not be the most appropriate for executing user applications, because they may have heavy job loads, less storage space, or less working memory (RAM). Hence, there is a need to consider the current state of the resources in order to find the most suitable ones. In this paper, we propose a two-phase multi-attribute decision making (MADM) approach for the discovery of grid resources using a P2P formalism. The proposed approach considers multiple resource attributes during resource selection and provides the best-suited resource(s) to grid users. The first phase describes a mechanism to discover all matching resources and applies the SAW method to shortlist the top-ranked resources, which are communicated to the requesting super-peer. The second phase applies an integrated MADM approach (AHP-enriched PROMETHEE-II) to the list of selected resources received from different super-peers. A pairwise comparison of the resources with respect to their attributes is made and the rank of each resource is determined. The top-ranked resource is then communicated to the grid user by the grid scheduler. The proposed methodology enables the grid scheduler to allocate the most suitable resource to the user application and also reduces search complexity by filtering out less suitable resources during resource discovery.
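
The first-phase ranking step can be sketched with a minimal SAW (Simple Additive Weighting) implementation. The node attributes and weights below are hypothetical; in the approach described, such values would be drawn from the current state of grid resources.

```python
def saw_rank(resources, weights):
    """Simple Additive Weighting: normalize each benefit attribute by
    its column maximum, score each resource by the weighted sum of the
    normalized values, and rank in descending order of score."""
    attrs = list(weights)
    maxima = {a: max(r[a] for r in resources.values()) for a in attrs}
    scores = {
        name: sum(weights[a] * vals[a] / maxima[a] for a in attrs)
        for name, vals in resources.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical node attributes: CPU cores, free RAM (GB), storage (GB).
resources = {
    "nodeA": {"cpu": 8, "ram": 16, "storage": 200},
    "nodeB": {"cpu": 16, "ram": 8, "storage": 150},
}
weights = {"cpu": 0.5, "ram": 0.3, "storage": 0.2}
ranking = saw_rank(resources, weights)
```

With these weights, nodeB's CPU advantage outweighs nodeA's RAM and storage lead, which is exactly the kind of trade-off a weighted-sum MADM method is meant to resolve.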

  3. Resource Discovery within the Networked "Hybrid" Library.

    Science.gov (United States)

    Leigh, Sally-Anne

    This paper focuses on the development, adoption, and integration of resource discovery, knowledge management, and/or knowledge sharing interfaces such as interactive portals, and the use of the library's World Wide Web presence to increase the availability and usability of information services. The introduction addresses changes in library…

  4. Estimating long-term uranium resource availability and discovery requirements. A Canadian case study

    International Nuclear Information System (INIS)

    Martin, H.L.; Azis, A.; Williams, R.M.

    1979-01-01

    Well-founded estimates of the rate at which a country's resources might be made available are a prime requisite for energy planners and policy makers at the national level. To meet this need, a method is discussed that can aid in the analysis of future supply patterns of uranium and other metals. Known sources are first appraised, on a mine-by-mine basis, in relation to projected domestic needs and expected export levels. The gap between (a) production from current and anticipated mines, and (b) the production levels needed to meet both domestic needs and export opportunities, would have to be met by new sources. Using the resources and production capabilities of typical uranium deposits as measuring sticks, a measure can be obtained of the required timing and magnitude of discovery needs. The new discoveries, when developed into mines, would need to be sufficient to meet not only any shortfalls in production capability, but also any special reserve requirements as stipulated, for example, under Canada's uranium export guidelines. Since the method can be followed simply and quickly, it can serve as a valuable tool for long-term supply assessments of any mineral commodity from a nation's mines. (author)
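
The gap analysis described here reduces to straightforward arithmetic. The sketch below uses hypothetical demand and supply figures and a "typical deposit" measuring stick to count the new mines that discoveries would have to support.

```python
import math

def new_mines_needed(demand, committed_supply, typical_mine_output):
    """For each year, compute the shortfall between projected
    requirements and output from current and anticipated mines, and
    express it as the number of 'typical' new mines required
    (rounded up, since a fractional mine is still a whole discovery)."""
    needs = []
    for d, s in zip(demand, committed_supply):
        gap = max(0.0, d - s)
        needs.append(math.ceil(gap / typical_mine_output))
    return needs

# Hypothetical annual figures (tonnes U): projected demand vs supply
# from existing mines, with a typical new mine producing 250 t/year.
shortfalls = new_mines_needed([1000, 1200, 1500], [1100, 1000, 900], 250)
```

The timing of the first nonzero entry indicates when new discoveries must be in production, which, working backward through development lead times, gives the required timing of discovery.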

  5. SNPServer: a real-time SNP discovery tool.

    Science.gov (United States)

    Savage, David; Batley, Jacqueline; Erwin, Tim; Logan, Erica; Love, Christopher G; Lim, Geraldine A C; Mongin, Emmanuel; Barker, Gary; Spangenberg, German C; Edwards, David

    2005-07-01

    SNPServer is a real-time, flexible tool for the discovery of SNPs (single nucleotide polymorphisms) within DNA sequence data. The program uses BLAST to identify related sequences and CAP3 to cluster and align these sequences. The alignments are parsed to the SNP discovery software autoSNP, a program that detects SNPs and insertion/deletion polymorphisms (indels). Alternatively, lists of related sequences or pre-assembled sequences may be entered for SNP discovery. SNPServer and autoSNP use redundancy to differentiate between candidate SNPs and sequence errors. For each candidate SNP, two measures of confidence are calculated: the redundancy of the polymorphism at the SNP locus and the co-segregation of the candidate SNP with other SNPs in the alignment. SNPServer is available at http://hornbill.cspp.latrobe.edu.au/snpdiscovery.html.
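    The two confidence measures named above can be illustrated with a toy alignment. The alignment, function names and scoring details here are invented for illustration; autoSNP's actual implementation differs.

    ```python
    # Toy illustration of redundancy (minimum allele count at a polymorphic
    # column, so sequencing errors with a single supporting read score low)
    # and co-segregation (whether two columns partition the sequences into
    # the same groups). Not autoSNP's real scoring.
    from collections import Counter

    alignment = [
        "ACGTA",
        "ACGTA",
        "ATGCA",
        "ATGCA",
    ]

    def column(aln, i):
        return [seq[i] for seq in aln]

    def redundancy(aln, i):
        """Minimum count over alleles at column i; 0 if the column is monomorphic."""
        counts = Counter(column(aln, i))
        return min(counts.values()) if len(counts) > 1 else 0

    def cosegregates(aln, i, j):
        """True if columns i and j split the sequences into identical groups."""
        groups_i, groups_j = {}, {}
        for s, (a, b) in enumerate(zip(column(aln, i), column(aln, j))):
            groups_i.setdefault(a, set()).add(s)
            groups_j.setdefault(b, set()).add(s)
        return set(map(frozenset, groups_i.values())) == set(map(frozenset, groups_j.values()))
    ```

    In this toy data, the C/T polymorphism at column 1 and the T/C polymorphism at column 3 each have a redundancy of 2 and co-segregate, which raises confidence that both are genuine SNPs rather than sequencing errors.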

  6. Uranium Exploration (2004-2014): New Discoveries, New Resources

    International Nuclear Information System (INIS)

    Polak, Christian

    2014-01-01

    Conclusion: 10 years of discovery? • Large effort of exploration; • Large amount of compliant resources discovered or confirmed; • New process development for low cost and for low grade; • New production from this effort still limited, < 10%; • Feasibility studies must confirm the viability of economic exploitation and therefore resource quality; • Consolidation to set up critical-mass deposits. ► To be ready for the coming decades 2020+

  7. Recent development in software and automation tools for high-throughput discovery bioanalysis.

    Science.gov (United States)

    Shou, Wilson Z; Zhang, Jun

    2012-05-01

    Bioanalysis with LC-MS/MS has been established as the method of choice for quantitative determination of drug candidates in biological matrices in drug discovery and development. LC-MS/MS bioanalytical support for drug discovery, especially early discovery, often requires high-throughput (HT) analysis of large numbers of samples (hundreds to thousands per day) generated from many structurally diverse compounds (tens to hundreds per day) with a very quick turnaround time, in order to provide important activity and liability data to move discovery projects forward. Another important consideration for discovery bioanalysis is its fit-for-purpose quality requirement, which depends on the particular experiments being conducted at this stage and is usually not as stringent as that required in bioanalysis supporting drug development. These attributes make HT discovery bioanalysis an ideal candidate for software and automation tools that eliminate manual steps, remove bottlenecks, improve efficiency and reduce turnaround time while maintaining adequate quality. In this article we review various recent developments that facilitate automation of individual bioanalytical procedures, such as sample preparation, MS/MS method development, sample analysis and data review, as well as fully integrated software tools that manage the entire bioanalytical workflow in HT discovery bioanalysis. In addition, software tools supporting the emerging high-resolution accurate-mass MS bioanalytical approach are also discussed.

  8. Estimation of uranium resources by life-cycle or discovery-rate models: a critique

    International Nuclear Information System (INIS)

    Harris, D.P.

    1976-10-01

    This report was motivated primarily by M. A. Lieberman's ''United States Uranium Resources: An Analysis of Historical Data'' (Science, April 30). His conclusion that only 87,000 tons of U3O8 resources recoverable at a forward cost of $8/lb remain to be discovered is criticized. It is shown that there is no theoretical basis for selecting the exponential or any other function for the discovery rate. Some of the economic (productivity, inflation) and data issues involved in the analysis of undiscovered, recoverable U3O8 resources based on discovery rates of $8 reserves are discussed. The problem of the ratio of undiscovered $30 resources to undiscovered $8 resources is considered. It is concluded that all methods for the estimation of unknown resources must employ a model of some form of the endowment-exploration-production complex, but every model is a simplification of the real world, and every estimate is intrinsically uncertain. The life-cycle model is useless for the appraisal of undiscovered, recoverable U3O8, and the discovery rate model underestimates these resources.

  9. Evaluating Music Discovery Tools on Spotify: The Role of User Preference Characteristics

    Directory of Open Access Journals (Sweden)

    Muh-Chyun Tang

    2017-06-01

    Full Text Available An experimental study was conducted to assess the effectiveness of the four music discovery tools available on Spotify, a popular music streaming service, namely radio recommendation, regional charts, genres and moods, and following Facebook friends. Both subjective judgments of user experience and objective measures of search effectiveness were used as performance criteria. Besides comparing the four tools, we also examined how consistent these performance measures are with one another. The results show that the user experience criteria did not necessarily correspond to search effectiveness. Furthermore, three user preference characteristics, preference diversity, preference insight, and openness to novelty, were introduced as mediating variables, with the aim of investigating how these attributes might interact with the four music discovery tools on performance. The results suggest that users' preference characteristics did have an impact on the performance of these music discovery tools.

  10. Open science resources for the discovery and analysis of Tara Oceans data.

    Science.gov (United States)

    Pesant, Stéphane; Not, Fabrice; Picheral, Marc; Kandels-Lewis, Stefanie; Le Bescot, Noan; Gorsky, Gabriel; Iudicone, Daniele; Karsenti, Eric; Speich, Sabrina; Troublé, Romain; Dimier, Céline; Searson, Sarah

    2015-01-01

    The Tara Oceans expedition (2009-2013) sampled contrasting ecosystems of the world oceans, collecting environmental data and plankton, from viruses to metazoans, for later analysis using modern sequencing and state-of-the-art imaging technologies. It surveyed 210 ecosystems in 20 biogeographic provinces, collecting over 35,000 samples of seawater and plankton. The interpretation of such an extensive collection of samples in their ecological context requires means to explore, assess and access raw and validated data sets. To address this challenge, the Tara Oceans Consortium offers open science resources, including the use of open access archives for nucleotides (ENA) and for environmental, biogeochemical, taxonomic and morphological data (PANGAEA), and the development of online discovery tools and collaborative annotation tools for sequences and images. Here, we present an overview of Tara Oceans Data, and we provide detailed registries (data sets) of all campaigns (from port-to-port), stations and sampling events.

  11. A Metadata Schema for Geospatial Resource Discovery Use Cases

    Directory of Open Access Journals (Sweden)

    Darren Hardy

    2014-07-01

    Full Text Available We introduce a metadata schema that focuses on GIS discovery use cases for patrons in a research library setting. Text search, faceted refinement, and spatial search and relevancy are among GeoBlacklight's primary use cases for federated geospatial holdings. The schema supports a variety of GIS data types and enables contextual, collection-oriented discovery applications as well as traditional portal applications. One key limitation of GIS resource discovery is the general lack of normative metadata practices, which has led to a proliferation of metadata schemas and duplicate records. The ISO 19115/19139 and FGDC standards specify metadata formats, but are intricate, lengthy, and not focused on discovery. Moreover, they require sophisticated authoring environments and cataloging expertise. Geographic metadata standards target preservation and quality measure use cases, but they do not provide for simple inter-institutional sharing of metadata for discovery use cases. To this end, our schema reuses elements from Dublin Core and GeoRSS to leverage their normative semantics, community best practices, open-source software implementations, and extensive examples already deployed in discovery contexts such as web search and mapping. Finally, we discuss a Solr implementation of the schema using a "geo" extension to MODS.
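    A hypothetical record of the kind such a schema describes might look as follows. The field names are illustrative, loosely modelled on the Dublin Core (dc_*) and GeoRSS elements mentioned above; they are not the exact GeoBlacklight schema.

    ```python
    # Hypothetical discovery record reusing Dublin Core-style (dc_*) and
    # GeoRSS-style elements; field names are illustrative only.
    record = {
        "dc_title": "Hydrography, San Francisco Bay Area, 2010",
        "dc_creator": ["Example University Library"],
        "dc_format": "Shapefile",
        "dc_rights": "Public",
        "dct_spatial": ["San Francisco Bay Area, California"],
        "georss_box": "37.2 -123.0 38.3 -121.7",  # south west north east
    }

    # A faceted discovery index would treat dc_format and dc_rights as facets,
    # and parse georss_box for spatial search and relevancy ranking.
    south, west, north, east = map(float, record["georss_box"].split())
    ```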

  12. The Discovery Dome: A Tool for Increasing Student Engagement

    Science.gov (United States)

    Brevik, Corinne

    2015-04-01

    The Discovery Dome is a portable full-dome theater that plays professionally-created science films. Developed by the Houston Museum of Natural Science and Rice University, this inflatable planetarium offers a state-of-the-art visual learning experience that can address many different fields of science for any grade level. It surrounds students with roaring dinosaurs, fascinating planets, and explosive storms - all immersive, engaging, and realistic. Dickinson State University has chosen to utilize its Discovery Dome to address Earth Science education at two levels. University courses across the science disciplines can use the Discovery Dome as part of their curriculum. The digital shows immerse the students in various topics ranging from astronomy to geology to weather and climate. The dome has proven to be a valuable tool for introducing new material to students as well as for reinforcing concepts previously covered in lectures or laboratory settings. The Discovery Dome also serves as an amazing science public-outreach tool. University students are trained to run the dome, and they travel with it to schools and libraries around the region. During the 2013-14 school year, our Discovery Dome visited over 30 locations. Many of the schools visited are in rural settings which offer students few opportunities to experience state-of-the-art science technology. The school kids are extremely excited when the Discovery Dome visits their community, and they will talk about the experience for many weeks. Traveling with the dome is also very valuable for the university students who get involved in the program. They become very familiar with the science content, and they gain experience working with teachers as well as the general public. They get to share their love of science, and they get to help inspire a new generation of scientists.

  13. Discovery and Use of Online Learning Resources: Case Study Findings

    OpenAIRE

    Laurie Miller Nelson; James Dorward; Mimi M. Recker

    2004-01-01

    Much recent research and funding have focused on building Internet-based repositories that contain collections of high-quality learning resources, often called learning objects. Yet little is known about how non-specialist users, in particular teachers, find, access, and use digital learning resources. To address this gap, this article describes a case study of mathematics and science teachers' practices and desires surrounding the discovery, selection, and use of digital library resources for...

  14. Study of Tools for Network Discovery and Network Mapping

    Science.gov (United States)

    2003-11-01

    connected to the switch. iv. Accessibility of historical data and event data: in general, network discovery tools keep a history of the collected... has the following software dependencies: Java Virtual Machine, Perl modules, RRD Tool, Tomcat, PostgreSQL. Strengths and... systems: provide a simple view of the current network status; generate alarms on status change; generate a history of status changes. Visual map

  15. Recent development of computational resources for new antibiotics discovery

    DEFF Research Database (Denmark)

    Kim, Hyun Uk; Blin, Kai; Lee, Sang Yup

    2017-01-01

    Understanding a complex working mechanism of biosynthetic gene clusters (BGCs) encoding secondary metabolites is a key to discovery of new antibiotics. Computational resources continue to be developed in order to better process increasing volumes of genome and chemistry data, and thereby better...

  16. Bringing your tools to CyVerse Discovery Environment using Docker.

    Science.gov (United States)

    Devisetty, Upendra Kumar; Kennedy, Kathleen; Sarando, Paul; Merchant, Nirav; Lyons, Eric

    2016-01-01

    Docker has become a very popular container-based virtualization platform for software distribution that has revolutionized the way in which scientific software and software dependencies (software stacks) can be packaged, distributed, and deployed. Docker makes the complex and time-consuming installation procedures needed for scientific software a one-time process. Because it enables platform-independent installation, versioning of software environments, and easy redeployment and reproducibility, Docker is an ideal candidate for the deployment of identical software stacks on different compute environments such as XSEDE and Amazon AWS. CyVerse's Discovery Environment also uses Docker for integrating its powerful, community-recommended software tools into CyVerse's production environment for public use. This paper will help users bring their tools into the CyVerse Discovery Environment (DE), which not only allows users to integrate their tools with relative ease compared with the earlier method of tool deployment in the DE, but also helps users share their apps with collaborators and release them for public use.

  17. User Driven Development of Software Tools for Open Data Discovery and Exploration

    Science.gov (United States)

    Schlobinski, Sascha; Keppel, Frank; Dihe, Pascal; Boot, Gerben; Falkenroth, Esa

    2016-04-01

    The use of open data in research faces challenges not restricted to inherent properties such as the quality and resolution of open data sets. Open data is often catalogued insufficiently or in a fragmented way. Software tools that support effective discovery, including assessment of the data's appropriateness for research, have shortcomings such as the lack of essential functionality like support for data provenance. We believe that one of the reasons is the neglect of real end-user requirements in the development process of the aforementioned software tools. In the context of the FP7 Switch-On project we have pro-actively engaged the relevant user community to collaboratively develop a means to publish, find and bind open data relevant for hydrologic research. Implementing key concepts of data discovery and exploration, we have used state-of-the-art web technologies to provide an interactive software tool that is easy to use yet powerful enough to satisfy the data discovery and access requirements of the hydrological research community.

  18. Discovery of natural resources

    Science.gov (United States)

    Guild, P.W.

    1976-01-01

    Mankind will continue to need ores of more or less the types and grades used today to supply its needs for new mineral raw materials, at least until fusion or some other relatively cheap, inexhaustible energy source is developed. Most deposits being mined today were exposed at the surface or found by relatively simple geophysical or other prospecting techniques, but many of these will be depleted in the foreseeable future. The discovery of deeper or less obvious deposits to replace them will require the conjunction of science and technology to deduce the laws that governed the concentration of elements into ores and to detect and evaluate the evidence of their whereabouts. Great theoretical advances are being made to explain the origins of ore deposits and understand the general reasons for their localization. These advances have unquestionable value for exploration. Even a large deposit is, however, very small, and, with few exceptions, it was formed under conditions that have long since ceased to exist. The explorationist must suppress a great deal of "noise" to read and interpret correctly the "signals" that can define targets and guide the drilling required to find it. Is enough being done to ensure the long-term availability of mineral raw materials? The answer is probably no, in view of the expanding consumption and the difficulty of finding new deposits, but ingenuity, persistence, and continued development of new methods and tools to add to those already at hand should put off the day of "doing without" for many years. The possibility of resource exhaustion, especially in view of the long and increasing lead time needed to carry out basic field and laboratory studies in geology, geophysics, and geochemistry and to synthesize and analyze the information gained from them counsels against any letting down of our guard, however (17). 
Research and exploration by government, academia, and industry must be supported and encouraged; we cannot wait until an eleventh

  19. RDA: Resource Description and Access: The new standard for metadata and resource discovery in the digital age

    Directory of Open Access Journals (Sweden)

    Carlo Bianchini

    2015-01-01

    Full Text Available RDA (Resource Description and Access) is going to promote a great change. In fact, the guidelines – rather than rules – are addressed to anyone who wishes to describe and make accessible a cultural heritage collection, or tout court a collection: librarians, archivists, curators and professionals in any other branch of knowledge. The work is organized in two parts: the former contains the theoretical foundations of cataloguing (FRBR, ICP, semantic web and linked data), the latter a critical presentation of the RDA guidelines. RDA aims to make possible the creation of well-structured metadata for any kind of resource, reusable in any context and technological environment. RDA offers a “set of guidelines and instructions to create data for discovery of resources”. The guidelines stress four actions – to identify, to relate (from the FRBR/FRAD user tasks and ICP), to represent and to discover – and a noun: resource. To identify the entities of Group 1 and Group 2 of FRBR (Work, Expression, Manifestation, Item, Person, Family, Corporate Body); to relate the entities of Group 1 and Group 2 of FRBR by means of relationships; to enable users to represent and discover the entities of Group 1 and Group 2 by means of their attributes and relationships. These last two actions are the reason for users' searches, and users are the focal point of the process. RDA enables the discovery of recorded knowledge, that is, any resource conveying information, any resource transmitting intellectual or artistic content by means of any kind of carrier and media. RDA is a content standard, not a display standard nor an encoding standard: it gives instructions for identifying data and does not prescribe how to display or encode the data produced by the guidelines. RDA requires an original approach, a metanoia, a deep change in the way we think about cataloguing. The innovations in RDA are many: it promotes interoperability between catalogues and other search tools, it adopts the terminology and concepts of the Semantic Web, it

  20. Discovery Mondays: Surveyors' Tools

    CERN Multimedia

    2003-01-01

    Surveyors of all ages, have your rulers and compasses at the ready! This sixth edition of Discovery Monday is your chance to learn about the surveyor's tools - the state of the art in measuring instruments - and see for yourself how they work. With their usual daunting precision, the members of CERN's Surveying Group have prepared some demonstrations and exercises for you to try. Find out the techniques for ensuring accelerator alignment and learn about high-tech metrology systems such as deviation indicators, tracking lasers and total stations. The surveyors will show you how they precisely measure magnet positioning, with accuracy of a few thousandths of a millimetre. You can try your hand at precision measurement using different types of sensor and a modern-day version of the Romans' bubble level, accurate to within a thousandth of a millimetre. You will learn that photogrammetry techniques can transform even a simple digital camera into a remarkable measuring instrument. Finally, you will have a chance t...

  1. Maximizing Academic Library Collections: Measuring Changes in Use Patterns Owing to EBSCO Discovery Service

    Science.gov (United States)

    Calvert, Kristin

    2015-01-01

    Despite the prevalence of academic libraries adopting web-scale discovery tools, few studies have quantified their effect on the use of library collections. This study measures the impact that EBSCO Discovery Service has had on use of library resources through circulation statistics, use of electronic resources, and interlibrary loan requests.…

  2. Radio Resource Management for V2V Discovery

    DEFF Research Database (Denmark)

    Alvarez, Beatriz Soret; Gatnau, Marta; Kovács, Istvan

    2016-01-01

    Big expectations are put into vehicular communications (V2X) for safer and more intelligent driving. With human lives at risk, the system cannot afford to fail, which translates into very stringent reliability and latency requirements for the radio network. One of the challenges is to find efficient radio resource management (RRM) strategies for direct vehicle-to-vehicle (V2V) communication that can fulfil the requirements even with high traffic density. In cellular networks, device-to-device (D2D) communication is usually split into two phases: the discovery process, for nodes to become aware of each other; and the communication phase itself, where data exchange takes place. In the case of V2V, the discovery phase can utilize the status information that cars broadcast periodically as beacons to detect the presence of neighbouring cars. For the delivery of specific messages (e…

  3. Bigger Data, Collaborative Tools and the Future of Predictive Drug Discovery

    Science.gov (United States)

    Clark, Alex M.; Swamidass, S. Joshua; Litterman, Nadia; Williams, Antony J.

    2014-01-01

    Over the past decade we have seen a growth in the provision of chemistry data and cheminformatics tools as either free websites or software-as-a-service (SaaS) commercial offerings. These have transformed how we find molecule-related data and use such tools in our research. There have also been efforts to improve collaboration between researchers, either openly or through secure transactions using commercial tools. A major challenge in the future will be how such databases and software approaches handle larger amounts of data as they accumulate from high-throughput screening, and how they enable the user to draw insights, make predictions and move projects forward. We now discuss how information from some drug discovery datasets can be made more accessible, and how privacy of data should not overwhelm the desire to share it at an appropriate time with collaborators. We also discuss additional software tools that could be made available and provide our thoughts on the future of predictive drug discovery in this age of big data. We use some examples from our own research on neglected diseases, collaborations, mobile apps and algorithm development to illustrate these ideas. PMID:24943138

  4. Updates on resources, software tools, and databases for plant proteomics in 2016-2017.

    Science.gov (United States)

    Misra, Biswapriya B

    2018-02-08

    Proteomics data processing, annotation, and analysis can often lead to major hurdles in large-scale, high-throughput bottom-up proteomics experiments. Given the recent rise in protein-based big datasets being generated, efforts in in silico tool development have increased at an unprecedented rate; so much so that it has become increasingly difficult to keep track of all the advances in a particular academic year. However, these tools benefit the plant proteomics community in circumventing critical issues in data analysis and visualization, and these continually developing open-source and community-developed tools hold potential for future research efforts. This review introduces and summarizes more than 50 software tools, databases, and resources developed and published during 2016-2017 under the following categories: tools for data pre-processing and analysis, statistical analysis tools, peptide identification tools, databases and spectral libraries, and data visualization and interpretation tools. Finally, efforts in data archiving and validation datasets for the community are discussed, and the author outlines the current and most commonly used proteomics tools in order to introduce novice readers to this -omics discovery platform.

  5. NETL's Energy Data Exchange (EDX) - a coordination, collaboration, and data resource discovery platform for energy science

    Science.gov (United States)

    Rose, K.; Rowan, C.; Rager, D.; Dehlin, M.; Baker, D. V.; McIntyre, D.

    2015-12-01

    Multi-organizational research teams working jointly on projects often encounter problems with discovery, access to relevant existing resources, and data sharing due to large file sizes, inappropriate file formats, or other inefficient options that make collaboration difficult. The Energy Data eXchange (EDX) from the Department of Energy's (DOE) National Energy Technology Laboratory (NETL) is an evolving online research environment designed to overcome these challenges in support of DOE's fossil energy goals, while offering improved access to data-driven products of fossil energy R&D such as datasets, tools, and web applications. Development of EDX was initiated in 2011; it offers i) a means of better preserving NETL's research and development products for future access and re-use, ii) efficient, discoverable access to authoritative, relevant, external resources, and iii) an improved approach and tools to support secure, private collaboration and coordination between multi-organizational teams to meet DOE mission and goals. EDX presently supports fossil energy and SubTER Crosscut research activities, with an ever-growing user base. EDX is built on a heavily customized instance of the open-source platform Comprehensive Knowledge Archive Network (CKAN). EDX connects users to externally relevant data and tools by connecting to external data repositories built on different platforms, including other CKAN platforms (e.g. Data.gov). EDX does not download and repost data or tools that already have an online presence, as this leads to redundancy and even error; if a relevant resource is already hosted by another online entity, EDX points users to that external host using web services, inventoried URLs and other methods. EDX offers users the ability to leverage private, secure capabilities custom-built into the system. The team is presently working on version 3 of EDX, which will incorporate big data analytical

  6. Uranium exploration (2004-2014): New discoveries, new resources

    International Nuclear Information System (INIS)

    Polack, C.

    2014-01-01

    The last decade has demonstrated the dynamism of the mining industry in responding to the market's need to explore and discover new deposits. For the first time in the uranium industry, the effort was conducted not only by the majors but by numerous junior mining companies; more than 800 companies were involved. Junior miners introduced new methodologies, innovations and a fresh approach. Working mainly on former prospects of the 1970s and 1980s, they discovered new deposits, transformed historical resources into compliant resources and reserves, and developed new large resources in Africa, North America and Australia. In Australia, the Four Mile, Mt Gee, Samphire (SA), Mount Isa (Qld), Mulga Rock, Wiluna-Lake Maitland and Carley Bore-Yanrey-Manyingee (WA) projects were all advanced to compliant resources or reserves by junior mining companies. In Canada, activity was mainly focused on the Athabasca Basin, Newfoundland and Québec, and the results are quite remarkable. In the Athabasca, two new deposits were identified, Roughrider and Patterson South Lake, whilst the Matoush project in Québec and the Michelin project in Newfoundland are showing good potential. In Namibia, alaskite and surficial deposits extended the model of the Damaran central belt with the extension of the rich alaskites of Z20, Husab and Omahola and the large deposits of Etango and Norasa. A new mine, Langer Heinrich, commenced production, and two are well advanced on the way to production: Trekkopje and Husab. The ISL model continues its success in Central Asia with large discoveries in Mongolia and China. Europe has been revisited by some juniors, with an increase of resources in Spain (Salamanca) and Slovakia (Kuriskova). Some countries entered the uranium club with maiden resources, namely Mali (Falea), Mauritania and Peru (Macusani caldera). The Karoo formation revitalised interest in exploration within Paraguay, South Africa (Rieskuil), Botswana (Lethlakane) and Zambia (Mutanga, Chirundu), and the exploitation

  7. Bioinformatics Tools for the Discovery of New Nonribosomal Peptides

    DEFF Research Database (Denmark)

    Leclère, Valérie; Weber, Tilmann; Jacques, Philippe

    2016-01-01

    This chapter helps in the use of bioinformatics tools relevant to the discovery of new nonribosomal peptides (NRPs) produced by microorganisms. The strategy described can be applied to draft or fully assembled genome sequences. It relies on the identification of the synthetase genes and the deciphering of the domain architecture of the nonribosomal peptide synthetases (NRPSs). In the next step, candidate peptides synthesized by these NRPSs are predicted in silico, considering the specificity of the incorporated monomers together with their isomery. To assess their novelty, the two-dimensional structure of the peptides can be compared with the structural patterns of all known NRPs. The presented workflow leads to an efficient and rapid screening of genomic data generated by high-throughput technologies. The exploration of such sequenced genomes may lead to the discovery of new drugs (i…

  8. Developing a distributed HTML5-based search engine for geospatial resource discovery

    Science.gov (United States)

    ZHOU, N.; XIA, J.; Nebert, D.; Yang, C.; Gui, Z.; Liu, K.

    2013-12-01

    With the explosive growth of data, Geospatial Cyberinfrastructure (GCI) components have been developed to manage geospatial resources, such as data discovery and data publishing. However, the efficiency of geospatial resource discovery remains challenging in that: (1) existing GCIs are usually developed for users of specific domains, so users may have to visit a number of GCIs to find appropriate resources; (2) the complexity of the decentralized network environment usually results in slow responses and poor user experience; (3) users with different browsers and devices may have very different experiences because of the diversity of front-end platforms (e.g. Silverlight, Flash or HTML). To address these issues, we developed a distributed, HTML5-based search engine. Specifically, (1) the search engine adopts a brokering approach to retrieve geospatial metadata from various distributed GCIs; (2) an asynchronous record retrieval mode enhances search performance and user interactivity; (3) being based on HTML5, the search engine is able to provide unified access capabilities for users with different devices (e.g. tablet and smartphone).
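    The brokering and asynchronous retrieval ideas described above can be sketched as follows. The catalog endpoints here are simulated stand-ins, not real GCI APIs: the broker fans a query out to several distributed catalogs and yields records as each source responds, instead of waiting for the slowest one.

    ```python
    # Minimal sketch of a brokered, asynchronous metadata search.
    # catalog_a / catalog_b simulate remote catalogs with different latencies.
    import time
    from concurrent.futures import ThreadPoolExecutor, as_completed

    def catalog_a(query):
        time.sleep(0.05)  # simulated network latency
        return [{"source": "catalog-a", "title": f"{query} land cover"}]

    def catalog_b(query):
        time.sleep(0.01)
        return [{"source": "catalog-b", "title": f"{query} elevation model"}]

    def brokered_search(query, catalogs):
        """Query all catalogs concurrently; yield each result batch as it completes."""
        with ThreadPoolExecutor(max_workers=len(catalogs)) as pool:
            futures = [pool.submit(catalog, query) for catalog in catalogs]
            for done in as_completed(futures):
                yield done.result()

    # Batches arrive in completion order, so a UI can render the fast
    # catalog's records while the slow catalog is still responding.
    results = [rec for batch in brokered_search("landsat", [catalog_a, catalog_b])
               for rec in batch]
    ```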

  9. Paths of Discovery: Comparing the Search Effectiveness of EBSCO Discovery Service, Summon, Google Scholar, and Conventional Library Resources

    Science.gov (United States)

    Asher, Andrew D.; Duke, Lynda M.; Wilson, Suzanne

    2013-01-01

    In 2011, researchers at Bucknell University and Illinois Wesleyan University compared the search efficacy of Serial Solutions Summon, EBSCO Discovery Service, Google Scholar, and conventional library databases. Using a mixed-methods approach, qualitative and quantitative data were gathered on students' usage of these tools. Regardless of the…

  10. In silico tools used for compound selection during target-based drug discovery and development.

    Science.gov (United States)

    Caldwell, Gary W

    2015-01-01

    The target-based drug discovery process, including target selection, screening, hit-to-lead (H2L) and lead optimization stage gates, is the most common approach used in drug development. The full integration of in vitro and/or in vivo data with in silico tools across the entire process would benefit R&D productivity by supporting effective selection criteria and drug-design optimization strategies. This review focuses on understanding the impact and extent of in silico tools on the various stage gates of the target-based drug discovery approach over the past 5 years. There are a large number of in silico tools available for establishing selection criteria and drug-design optimization strategies in the target-based approach. However, the inconsistent use of in vitro and/or in vivo data integrated with predictive in silico multiparameter models throughout the process is contributing to R&D productivity issues. In particular, the lack of reliable in silico tools at the H2L stage gate is contributing to the suboptimal selection of viable lead compounds. It is suggested that further developing in silico multiparameter models, and organizing biologists, medicinal and computational chemists into one team with a single accountable objective to expand the utilization of in silico tools in all phases of drug discovery, would improve R&D productivity.

  11. The use of web ontology languages and other semantic web tools in drug discovery.

    Science.gov (United States)

    Chen, Huajun; Xie, Guotong

    2010-05-01

    To optimize drug development processes, pharmaceutical companies require principled approaches to integrate disparate data on a unified infrastructure, such as the web. The semantic web, built on web technology, provides a common, open framework capable of harmonizing diversified resources to enable networked and collaborative drug discovery. We survey the state of the art in utilizing web ontologies and other semantic web technologies to interlink both data and people to support integrated drug discovery across domains and multiple disciplines. In particular, the survey covers three major application categories: i) semantic integration and open data linking; ii) semantic web services and scientific collaboration; and iii) semantic data mining and integrative network analysis. The reader will gain: i) basic knowledge of semantic web technologies; ii) an overview of the web ontology landscape for drug discovery; and iii) a basic understanding of the values and benefits of utilizing web ontologies in drug discovery. i) The semantic web enables a network effect for linking open data for integrated drug discovery; ii) semantic web service technology can support instant ad hoc collaboration to improve pipeline productivity; and iii) the semantic web encourages publishing data in a semantic way, such as resource description framework attributes, and thus helps move away from a reliance on pure textual content analysis toward more efficient semantic data mining.

  12. Tools and data services registry: a community effort to document bioinformatics resources

    NARCIS (Netherlands)

    Ison, J.; Rapacki, K.; Menager, H.; Kalas, M.; Rydza, E.; Chmura, P.; Anthon, C.; Beard, N.; Berka, K.; Bolser, D.; Booth, T.; Bretaudeau, A.; Brezovsky, J.; Casadio, R.; Cesareni, G.; Coppens, F.; Cornell, M.; Cuccuru, G.; Davidsen, K.; Vedova, G.D.; Dogan, T.; Doppelt-Azeroual, O.; Emery, L.; Gasteiger, E.; Gatter, T.; Goldberg, T.; Grosjean, M.; Gruning, B.; Helmer-Citterich, M.; Ienasescu, H.; Ioannidis, V.; Jespersen, M.C.; Jimenez, R.; Juty, N.; Juvan, P.; Koch, M.; Laibe, C.; Li, J.W.; Licata, L.; Mareuil, F.; Micetic, I.; Friborg, R.M.; Moretti, S.; Morris, C.; Moller, S.; Nenadic, A.; Peterson, H.; Profiti, G.; Rice, P.; Romano, P.; Roncaglia, P.; Saidi, R.; Schafferhans, A.; Schwammle, V.; Smith, C.; Sperotto, M.M.; Stockinger, H.; Varekova, R.S.; Tosatto, S.C.; Torre, V.; Uva, P.; Via, A.; Yachdav, G.; Zambelli, F.; Vriend, G.; Rost, B.; Parkinson, H.; Longreen, P.; Brunak, S.

    2016-01-01

    Life sciences are yielding huge data sets that underpin scientific discoveries fundamental to improvement in human health, agriculture and the environment. In support of these discoveries, a plethora of databases and tools are deployed, in technically complex and diverse implementations, across a…

  13. CUAHSI Data Services: Tools and Cyberinfrastructure for Water Data Discovery, Research and Collaboration

    Science.gov (United States)

    Seul, M.; Brazil, L.; Castronova, A. M.

    2017-12-01

    Enabling research surrounding interdisciplinary topics often requires a combination of finding, managing, and analyzing large data sets and models from multiple sources. This challenge has led the National Science Foundation to make strategic investments in developing community data tools and cyberinfrastructure that focus on water data, a central need for many of these research topics. CUAHSI (The Consortium of Universities for the Advancement of Hydrologic Science, Inc.) is a non-profit organization funded by the National Science Foundation to aid students, researchers, and educators in using and managing data and models to support research and education in the water sciences. This presentation will focus on open-source CUAHSI-supported tools that enable enhanced online data discovery using advanced search capabilities, and computational analysis run in virtual environments pre-designed for educators and scientists so they can focus their efforts on data analysis rather than IT set-up.

  14. Tools and data services registry

    DEFF Research Database (Denmark)

    Ison, Jon; Rapacki, Kristoffer; Ménager, Hervé

    2016-01-01

    Life sciences are yielding huge data sets that underpin scientific discoveries fundamental to improvement in human health, agriculture and the environment. In support of these discoveries, a plethora of databases and tools are deployed, in technically complex and diverse implementations, across a spectrum of scientific disciplines. The corpus of documentation of these resources is fragmented across the Web, with much redundancy, and has lacked a common standard of information. The outcome is that scientists must often struggle to find, understand, compare and use the best resources for the task…

  15. Augmented Reality-Based Simulators as Discovery Learning Tools: An Empirical Study

    Science.gov (United States)

    Ibáñez, María-Blanca; Di-Serio, Ángela; Villarán-Molina, Diego; Delgado-Kloos, Carlos

    2015-01-01

    This paper reports empirical evidence on having students use AR-SaBEr, a simulation tool based on augmented reality (AR), to discover the basic principles of electricity through a series of experiments. AR-SaBEr was enhanced with knowledge-based support and inquiry-based scaffolding mechanisms, which proved useful for discovery learning in…

  16. Paths of discovery: Comparing the search effectiveness of EBSCO Discovery Service, Summon, Google Scholar, and conventional library resources.

    Directory of Open Access Journals (Sweden)

    Müge Akbulut

    2015-09-01

    It is becoming hard for users to select significant sources among many others as the number of scientific publications increases (Henning and Gunn, 2012). Search engines that use cloud computing methods, such as Google, can successfully list related documents that answer user requirements (Johnson, Levine and Smith, 2009). To meet users' increasing demands, libraries have started to use systems that enable users to access printed and electronic sources through a single interface. This study uses quantitative and qualitative methods to compare search effectiveness between the Serial Solutions Summon and EBSCO Discovery Service (EDS) web discovery tools, Google Scholar (GS) and conventional library databases among users from Bucknell University and Illinois Wesleyan University.

  17. Space technology in the discovery and development of mineral and energy resources

    Science.gov (United States)

    Lowman, P. D.

    1977-01-01

    Space technology, applied to the discovery and extraction of mineral and energy resources, is summarized. Orbital remote sensing for geological purposes has been widely applied through the use of LANDSAT satellites. These techniques also have been of value for protection against environmental hazards and for a better understanding of crustal structure.

  18. Web-based tools from AHRQ's National Resource Center.

    Science.gov (United States)

    Cusack, Caitlin M; Shah, Sapna

    2008-11-06

    The Agency for Healthcare Research and Quality (AHRQ) has made an investment of over $216 million in research around health information technology (health IT). As part of their investment, AHRQ has developed the National Resource Center for Health IT (NRC) which includes a public domain Web site. New content for the web site, such as white papers, toolkits, lessons from the health IT portfolio and web-based tools, is developed as needs are identified. Among the tools developed by the NRC are the Compendium of Surveys and the Clinical Decision Support (CDS) Resources. The Compendium of Surveys is a searchable repository of health IT evaluation surveys made available for public use. The CDS Resources contains content which may be used to develop clinical decision support tools, such as rules, reminders and templates. This live demonstration will show the access, use, and content of both these freely available web-based tools.

  19. Mouse Models for Drug Discovery. Can New Tools and Technology Improve Translational Power?

    Science.gov (United States)

    Zuberi, Aamir; Lutz, Cathleen

    2016-01-01

    The use of mouse models in biomedical research and preclinical drug evaluation is on the rise. The advent of new molecular genome-altering technologies such as CRISPR/Cas9 allows for genetic mutations to be introduced into the germ line of a mouse faster and less expensively than previous methods. In addition, the rapid progress in the development and use of somatic transgenesis using viral vectors, as well as manipulations of gene expression with siRNAs and antisense oligonucleotides, allow for even greater exploration into genomics and systems biology. These technological advances come at a time when cost reductions in genome sequencing have led to the identification of pathogenic mutations in patient populations, providing unprecedented opportunities in the use of mice to model human disease. The ease of genetic engineering in mice also offers a potential paradigm shift in resource sharing and the speed by which models are made available in the public domain. Predictively, the knowledge alone that a model can be quickly remade will provide relief to resources encumbered by licensing and Material Transfer Agreements. For decades, mouse strains have provided an exquisite experimental tool to study the pathophysiology of the disease and assess therapeutic options in a genetically defined system. However, a major limitation of the mouse has been the limited genetic diversity associated with common laboratory mice. This has been overcome with the recent development of the Collaborative Cross and Diversity Outbred mice. These strains provide new tools capable of replicating genetic diversity to that approaching the diversity found in human populations. The Collaborative Cross and Diversity Outbred strains thus provide a means to observe and characterize toxicity or efficacy of new therapeutic drugs for a given population. The combination of traditional and contemporary mouse genome editing tools, along with the addition of genetic diversity in new modeling…

  20. Mouse Models for Drug Discovery. Can New Tools and Technology Improve Translational Power?

    Science.gov (United States)

    Zuberi, Aamir; Lutz, Cathleen

    2016-12-01

    The use of mouse models in biomedical research and preclinical drug evaluation is on the rise. The advent of new molecular genome-altering technologies such as CRISPR/Cas9 allows for genetic mutations to be introduced into the germ line of a mouse faster and less expensively than previous methods. In addition, the rapid progress in the development and use of somatic transgenesis using viral vectors, as well as manipulations of gene expression with siRNAs and antisense oligonucleotides, allow for even greater exploration into genomics and systems biology. These technological advances come at a time when cost reductions in genome sequencing have led to the identification of pathogenic mutations in patient populations, providing unprecedented opportunities in the use of mice to model human disease. The ease of genetic engineering in mice also offers a potential paradigm shift in resource sharing and the speed by which models are made available in the public domain. Predictively, the knowledge alone that a model can be quickly remade will provide relief to resources encumbered by licensing and Material Transfer Agreements. For decades, mouse strains have provided an exquisite experimental tool to study the pathophysiology of the disease and assess therapeutic options in a genetically defined system. However, a major limitation of the mouse has been the limited genetic diversity associated with common laboratory mice. This has been overcome with the recent development of the Collaborative Cross and Diversity Outbred mice. These strains provide new tools capable of replicating genetic diversity to that approaching the diversity found in human populations. The Collaborative Cross and Diversity Outbred strains thus provide a means to observe and characterize toxicity or efficacy of new therapeutic drugs for a given population. The combination of traditional and contemporary mouse genome editing tools, along with the addition of genetic diversity in new modeling systems…

  1. A Linked Data Approach for the Discovery of Educational ICT Tools in the Web of Data

    Science.gov (United States)

    Ruiz-Calleja, Adolfo; Vega-Gorgojo, Guillermo; Asensio-Perez, Juan I.; Bote-Lorenzo, Miguel L.; Gomez-Sanchez, Eduardo; Alario-Hoyos, Carlos

    2012-01-01

    The use of Information and Communication Technologies (ICT) tools to support learning activities is now widespread. Several educational registries provide information about ICT tools in order to help educators discover and select them. These registries are typically isolated and require much effort to keep tool information up to…

  2. Global resource sharing

    CERN Document Server

    Frederiksen, Linda; Nance, Heidi

    2011-01-01

    Written from a global perspective, this book reviews sharing of library resources on a global scale. With expanded discovery tools and massive digitization projects, the rich and extensive holdings of the world's libraries are more visible now than at any time in the past. Advanced communication and transmission technologies, along with improved international standards, present a means for the sharing of library resources around the globe. Despite these significant improvements, a number of challenges remain. Global Resource Sharing provides librarians and library managers with a comprehensive…

  3. Privacy and User Experience in 21st Century Library Discovery

    Directory of Open Access Journals (Sweden)

    Shayna Pekala

    2017-06-01

    Over the last decade, libraries have taken advantage of emerging technologies to provide new discovery tools to help users find information and resources more efficiently. In the wake of this technological shift in discovery, privacy has become an increasingly prominent and complex issue for libraries. The nature of the web, over which users interact with discovery tools, has substantially diminished the library’s ability to control patron privacy. The emergence of a data economy has led to a new wave of online tracking and surveillance, in which multiple third parties collect and share user data during the discovery process, making it much more difficult, if not impossible, for libraries to protect patron privacy. In addition, users are increasingly starting their searches with web search engines, diminishing the library’s control over privacy even further. While libraries have a legal and ethical responsibility to protect patron privacy, they are simultaneously challenged to meet evolving user needs for discovery. In a world where “search” is synonymous with Google, users increasingly expect their library discovery experience to mimic their experience using web search engines. However, web search engines rely on a drastically different set of privacy standards, as they strive to create tailored, personalized search results based on user data. Libraries are seemingly forced to make a choice between delivering the discovery experience users expect and protecting user privacy. This paper explores the competing interests of privacy and user experience, and proposes possible strategies to address them in the future design of library discovery tools.

  4. Using the Tools and Resources of the RCSB Protein Data Bank.

    Science.gov (United States)

    Costanzo, Luigi Di; Ghosh, Sutapa; Zardecki, Christine; Burley, Stephen K

    2016-09-07

    The Protein Data Bank (PDB) archive is the worldwide repository of experimentally determined three-dimensional structures of large biological molecules found in all three kingdoms of life. Atomic-level structures of these proteins, nucleic acids, and complex assemblies thereof are central to research and education in molecular, cellular, and organismal biology, biochemistry, biophysics, materials science, bioengineering, ecology, and medicine. Several types of information are associated with each PDB archival entry, including atomic coordinates, primary experimental data, polymer sequence(s), and summary metadata. The Research Collaboratory for Structural Bioinformatics Protein Data Bank (RCSB PDB) serves as the U.S. data center for the PDB, distributing archival data and supporting both simple and complex queries of these data. These data can be freely downloaded, analyzed, and visualized using RCSB PDB tools and resources to gain a deeper understanding of fundamental biological processes, molecular evolution, human health and disease, and drug discovery. © 2016 by John Wiley & Sons, Inc.
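    The atomic coordinates mentioned in this record are distributed in, among other formats, the legacy fixed-column PDB text format, which can be parsed without any special libraries. The sketch below extracts coordinates from ATOM/HETATM records; the column layout follows the published PDB format, while the sample lines are illustrative and not taken from any specific archival entry.

    ```python
    # Minimal sketch of parsing atomic coordinates from PDB-format ATOM records,
    # the fixed-column text format distributed by the RCSB PDB. The sample lines
    # below are illustrative, not drawn from a real entry.
    SAMPLE = """\
    ATOM      1  N   MET A   1      38.428  13.104   6.364
    ATOM      2  CA  MET A   1      37.212  12.510   6.951
    HETATM  901  O   HOH A 201      30.001  10.250   5.125
    """

    def parse_atoms(pdb_text, include_het=False):
        """Return (atom_name, residue_name, (x, y, z)) for each ATOM record.

        PDB columns are fixed: atom name in columns 13-16, residue name in
        18-20, and x/y/z in 31-38, 39-46, 47-54 (1-indexed)."""
        atoms = []
        for line in pdb_text.splitlines():
            line = line.strip("\n")
            if line.lstrip() != line:      # tolerate indented sample text
                line = line[4:]
            if line.startswith("ATOM") or (include_het and line.startswith("HETATM")):
                name = line[12:16].strip()
                resname = line[17:20].strip()
                xyz = tuple(float(line[c:c + 8]) for c in (30, 38, 46))
                atoms.append((name, resname, xyz))
        return atoms

    if __name__ == "__main__":
        for atom in parse_atoms(SAMPLE):
            print(atom)
    ```

    Real coordinate files can be downloaded from the RCSB PDB (e.g. `https://files.rcsb.org/download/4HHB.pdb`) and fed to the same parser.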

  5. iTools: a framework for classification, categorization and integration of computational biology resources.

    Directory of Open Access Journals (Sweden)

    Ivo D Dinov

    2008-05-01

    The advancement of the computational biology field hinges on progress in three fundamental directions--the development of new computational algorithms, the availability of informatics resource management infrastructures and the capability of tools to interoperate and synergize. There is an explosion in algorithms and tools for computational biology, which makes it difficult for biologists to find, compare and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources--data, software tools and web-services. The iTools design, implementation and resource meta-data content reflect the broad research, computational, applied and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for classification, categorization and integration of different computational biology resources across space-and-time scales, biomedical problems, computational infrastructures and mathematical foundations. A large number of resources are already iTools-accessible to the community and this infrastructure is rapidly growing. iTools includes human and machine interfaces to its resource meta-data repository. Investigators or computer programs may utilize these interfaces to search, compare, expand, revise and mine meta-data descriptions of existent computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources. The first one is based on an ontology of computational biology resources, and the second one is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open source project both in terms of the source code development as well as its meta-data content. iTools employs a decentralized, portable, scalable and lightweight framework for long…

  6. Bringing your tools to CyVerse Discovery Environment using Docker [version 1; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Upendra Kumar Devisetty

    2016-06-01

    Docker has become a very popular container-based virtualization platform for software distribution that has revolutionized the way in which scientific software and software dependencies (software stacks) can be packaged, distributed, and deployed. Docker makes the complex and time-consuming installation procedures needed for scientific software a one-time process. Because it enables platform-independent installation, versioning of software environments, and easy redeployment and reproducibility, Docker is an ideal candidate for the deployment of identical software stacks on different compute environments such as XSEDE and Amazon AWS. CyVerse’s Discovery Environment also uses Docker for integrating its powerful, community-recommended software tools into CyVerse’s production environment for public use. This paper helps users bring their tools into the CyVerse Discovery Environment (DE), which not only allows them to integrate their tools with relative ease compared with the earlier method of tool deployment in DE, but also helps them share their apps with collaborators and release them for public use.

  7. Pharmacological screening technologies for venom peptide discovery.

    Science.gov (United States)

    Prashanth, Jutty Rajan; Hasaballah, Nojod; Vetter, Irina

    2017-12-01

    Venomous animals occupy one of the most successful evolutionary niches and occur on nearly every continent. They deliver venoms via biting and stinging apparatuses with the aim to rapidly incapacitate prey and deter predators. This has led to the evolution of venom components that act at a number of biological targets - including ion channels, G-protein coupled receptors, transporters and enzymes - with exquisite selectivity and potency, making venom-derived components attractive pharmacological tool compounds and drug leads. In recent years, plate-based pharmacological screening approaches have been introduced to accelerate venom-derived drug discovery. A range of assays are amenable to this purpose, including high-throughput electrophysiology, fluorescence-based functional and binding assays. However, despite these technological advances, the traditional activity-guided fractionation approach is time-consuming and resource-intensive. The combination of screening techniques suitable for miniaturization with sequence-based discovery approaches - supported by advanced proteomics, mass spectrometry, chromatography as well as synthesis and expression techniques - promises to further improve venom peptide discovery. Here, we discuss practical aspects of establishing a pipeline for venom peptide drug discovery with a particular emphasis on pharmacology and pharmacological screening approaches. This article is part of the Special Issue entitled 'Venom-derived Peptides as Pharmacological Tools.' Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. The Evolution of Discovery Systems in Academic Libraries: A Case Study at the University of Houston Libraries

    Science.gov (United States)

    Guajardo, Richard; Brett, Kelsey; Young, Frederick

    2017-01-01

    For the past several years academic libraries have been adopting discovery systems to provide a search experience that reflects user expectations and improves access to electronic resources. University of Houston Libraries has kept pace with this evolving trend by pursuing various discovery options; these include an open-source tool, a federated…

  9. Get Involved in Planetary Discoveries through New Worlds, New Discoveries

    Science.gov (United States)

    Shupla, Christine; Shipp, S. S.; Halligan, E.; Dalton, H.; Boonstra, D.; Buxner, S.; SMD Planetary Forum, NASA

    2013-01-01

    "New Worlds, New Discoveries" is a synthesis of NASA’s 50-year exploration history which provides an integrated picture of our new understanding of our solar system. As NASA spacecraft head to and arrive at key locations in our solar system, "New Worlds, New Discoveries" provides an integrated picture of our new understanding of the solar system to educators and the general public! The site combines the amazing discoveries of past NASA planetary missions with the most recent findings of ongoing missions, and connects them to the related planetary science topics. "New Worlds, New Discoveries," which includes the "Year of the Solar System" and the ongoing celebration of the "50 Years of Exploration," includes 20 topics that share thematic solar system educational resources and activities, tied to the national science standards. This online site and ongoing event offers numerous opportunities for the science community - including researchers and education and public outreach professionals - to raise awareness, build excitement, and make connections with educators, students, and the public about planetary science. Visitors to the site will find valuable hands-on science activities, resources and educational materials, as well as the latest news, to engage audiences in planetary science topics and their related mission discoveries. The topics are tied to the big questions of planetary science: how did the Sun’s family of planets and bodies originate and how have they evolved? How did life begin and evolve on Earth, and has it evolved elsewhere in our solar system? Scientists and educators are encouraged to get involved either directly or by sharing "New Worlds, New Discoveries" and its resources with educators, by conducting presentations and events, sharing their resources and events to add to the site, and adding their own public events to the site’s event calendar! Visit to find quality resources and ideas. Connect with educators, students and the public to

  10. ATLAAS-P2P: a two layer network solution for easing the resource discovery process in unstructured networks

    OpenAIRE

    Baraglia, Ranieri; Dazzi, Patrizio; Mordacchini, Matteo; Ricci, Laura

    2013-01-01

    ATLAAS-P2P is a two-layered P2P architecture for developing systems that provide resource aggregation and approximated discovery in P2P networks. Such systems allow users to search for the resources they need by specifying their requirements in a flexible and easy way. From the resource providers' point of view, the system offers an effective means of ensuring that providers can be reached by resource requests.
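    The idea of "approximated discovery" described in this record can be sketched as a requirement-matching problem: rank available resources by how closely they satisfy a request, so near-matches are returned even when no resource fits exactly. This is an illustrative sketch only, not the ATLAAS-P2P algorithm; the attribute names and scoring rule are hypothetical.

    ```python
    # Illustrative sketch of approximated resource discovery: score each
    # resource by the fraction of the user's requirements it satisfies, and
    # return near-matches ranked best-first. Attribute names are hypothetical.
    def match_score(resource, requirements):
        """Fraction of requirements the resource satisfies (1.0 = exact match)."""
        satisfied = sum(
            1 for key, wanted in requirements.items()
            if resource.get(key, 0) >= wanted
        )
        return satisfied / len(requirements)

    def approximate_discovery(resources, requirements, threshold=0.5):
        """Return resources scoring at or above threshold, best first."""
        scored = [(match_score(r, requirements), r) for r in resources]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [r for score, r in scored if score >= threshold]

    if __name__ == "__main__":
        pool = [
            {"id": "n1", "cpus": 8, "ram_gb": 32, "disk_gb": 100},
            {"id": "n2", "cpus": 4, "ram_gb": 64, "disk_gb": 500},
            {"id": "n3", "cpus": 2, "ram_gb": 4, "disk_gb": 50},
        ]
        print(approximate_discovery(pool, {"cpus": 4, "ram_gb": 16, "disk_gb": 200}))
    ```

    In a two-layer design such as the one described, an aggregation layer would hold summaries of provider resources and run this kind of scoring, forwarding requests only to the best-matching providers.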

  11. Using the iPlant collaborative discovery environment.

    Science.gov (United States)

    Oliver, Shannon L; Lenards, Andrew J; Barthelson, Roger A; Merchant, Nirav; McKay, Sheldon J

    2013-06-01

    The iPlant Collaborative is an academic consortium whose mission is to develop an informatics and social infrastructure to address the "grand challenges" in plant biology. Its cyberinfrastructure supports the computational needs of the research community and facilitates solving major challenges in plant science. The Discovery Environment provides a powerful and rich graphical interface to the iPlant Collaborative cyberinfrastructure by creating an accessible virtual workbench that enables all levels of expertise, ranging from students to traditional biology researchers and computational experts, to explore, analyze, and share their data. By providing access to iPlant's robust data-management system and high-performance computing resources, the Discovery Environment also creates a unified space in which researchers can access scalable tools. Researchers can use available Applications (Apps) to execute analyses on their data, as well as customize or integrate their own tools to better meet the specific needs of their research. These Apps can also be used in workflows that automate more complicated analyses. This module describes how to use the main features of the Discovery Environment, using bioinformatics workflows for high-throughput sequence data as examples. © 2013 by John Wiley & Sons, Inc.

  12. Systems pharmacology-based drug discovery for marine resources: an example using sea cucumber (Holothurians).

    Science.gov (United States)

    Guo, Yingying; Ding, Yan; Xu, Feifei; Liu, Baoyue; Kou, Zinong; Xiao, Wei; Zhu, Jingbo

    2015-05-13

    Sea cucumbers, a kind of marine animal, have long been utilized as tonics and traditional remedies in the Middle East and Asia because of their effectiveness against hypertension, asthma, rheumatism, cuts and burns, impotence, and constipation. In this study, an overall investigation of sea cucumber was used as an example of drug discovery from marine resources using a systems pharmacology model. The value of marine natural resources has been extensively considered because these resources can potentially be used to treat and prevent human diseases. However, the discovery of drugs from the oceans is difficult because of the complexity of marine environments in terms of composition and active mechanisms. Thus, a comprehensive systems approach that could discover active constituents and their targets from marine resources, and understand the biological basis for their pharmacological properties, is necessary. In this study, a feasible pharmacological model based on systems pharmacology was established to investigate marine medicine by incorporating active compound screening, target identification, and network and pathway analysis. As a result, 106 candidate components of sea cucumber and 26 potential targets were identified. Furthermore, the functions of sea cucumber in health improvement and disease treatment were elucidated in a holistic way based on the established compound-target and target-disease networks and the incorporated pathways. This study established a novel strategy that could be used to explore specific active mechanisms and discover new drugs from marine sources. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
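    The compound-target and target-disease networks central to the systems pharmacology approach in this record amount to two bipartite maps that are composed to infer compound-disease links. The toy sketch below shows that composition step; all compound, target, and disease names are illustrative placeholders, not data from the study.

    ```python
    # Toy sketch of the compound-target / target-disease network linkage used
    # in systems pharmacology: given which targets each active compound hits
    # and which diseases each target is implicated in, infer compound-disease
    # links. All names below are illustrative, not the study's actual data.
    from collections import defaultdict

    compound_targets = {
        "holothurin_A": ["COX2", "TNF"],
        "frondoside_A": ["EGFR"],
    }
    target_diseases = {
        "COX2": ["inflammation"],
        "TNF": ["inflammation", "rheumatism"],
        "EGFR": ["cancer"],
    }

    def compound_disease_links(ct, td):
        """Compose the two bipartite maps into compound -> sorted disease lists."""
        links = defaultdict(set)
        for compound, targets in ct.items():
            for target in targets:
                links[compound].update(td.get(target, []))
        return {c: sorted(d) for c, d in links.items()}

    if __name__ == "__main__":
        print(compound_disease_links(compound_targets, target_diseases))
    ```

    Real analyses work at much larger scale (here, 106 candidate components and 26 targets) and layer pathway enrichment on top, but the network composition follows the same pattern.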

  13. Helping Students Understand Gene Regulation with Online Tools: A Review of MEME and Melina II, Motif Discovery Tools for Active Learning in Biology

    Directory of Open Access Journals (Sweden)

    David Treves

    2012-08-01

    Review of MEME and Melina II, two free and easy-to-use online motif discovery tools that can be employed to actively engage students in learning about gene regulatory elements.

  14. Updates in metabolomics tools and resources: 2014-2015.

    Science.gov (United States)

    Misra, Biswapriya B; van der Hooft, Justin J J

    2016-01-01

    Data processing and interpretation represent the most challenging and time-consuming steps in high-throughput metabolomic experiments, regardless of the analytical platform (MS or NMR spectroscopy based) used for data acquisition. Improved machinery in metabolomics generates increasingly complex datasets that create the need for more and better processing and analysis software and in silico approaches to understand the resulting data. However, a comprehensive source of information describing the utility of the most recently developed and released metabolomics resources--in the form of tools, software, and databases--is currently lacking. Thus, here we provide an overview of freely available, open-source tools, algorithms, and frameworks to make both upcoming and established metabolomics researchers aware of recent developments, in an attempt to advance and facilitate data processing workflows in their metabolomics research. The major topics include tools and resources for data processing, data annotation, and data visualization in MS- and NMR-based metabolomics. Most tools described in this review are dedicated to untargeted metabolomics workflows; however, some more specialist tools are described as well. All tools and resources described, including their analytical and computational platform dependencies, are summarized in an overview table. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. 30 CFR 44.24 - Discovery.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Discovery. 44.24 Section 44.24 Mineral... Discovery. Parties shall be governed in their conduct of discovery by appropriate provisions of the Federal... discovery. Alternative periods of time for discovery may be prescribed by the presiding administrative law...

  16. Comparison of three web-scale discovery services for health sciences research.

    Science.gov (United States)

    Hanneke, Rosie; O'Brien, Kelly K

    2016-04-01

    The purpose of this study was to investigate the relative effectiveness of three web-scale discovery (WSD) tools in answering health sciences search queries. Simple keyword searches, based on topics from six health sciences disciplines, were run at multiple real-world implementations of EBSCO Discovery Service (EDS), Ex Libris's Primo, and ProQuest's Summon. Each WSD tool was evaluated in its ability to retrieve relevant results and in its coverage of MEDLINE content. All WSD tools returned between 50%-60% relevant results. Primo returned a higher number of duplicate results than the other 2 WSD products. Summon results were more relevant when search terms were automatically mapped to controlled vocabulary. EDS indexed the largest number of MEDLINE citations, followed closely by Summon. Additionally, keyword searches in all 3 WSD tools retrieved relevant material that was not found with precision (Medical Subject Headings) searches in MEDLINE. None of the 3 WSD products studied was overwhelmingly more effective in returning relevant results. While difficult to place the figure of 50%-60% relevance in context, it implies a strong likelihood that the average user would be able to find satisfactory sources on the first page of search results using a rudimentary keyword search. The discovery of additional relevant material beyond that retrieved from MEDLINE indicates WSD tools' value as a supplement to traditional resources for health sciences researchers.
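The study's conclusion that 50%-60% first-page relevance likely satisfies the average user can be illustrated with a back-of-the-envelope calculation. The sketch below is purely illustrative and assumes a 10-result first page and independent per-result relevance, neither of which is claimed in the study itself:

```python
# Illustrative only: if each of the 10 first-page results is relevant with
# probability 0.5 and results are treated as independent, the chance that
# at least one first-page result is relevant is very high.
p_relevant = 0.5
page_size = 10
p_at_least_one = 1 - (1 - p_relevant) ** page_size
print(round(p_at_least_one, 4))  # → 0.999
```

Even under the pessimistic 50% figure, a user scanning one page of results would almost always encounter at least one relevant source, which is consistent with the authors' interpretation.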

  17. Data Mining and Knowledge Discovery tools for exploiting big Earth-Observation data

    Science.gov (United States)

    Espinoza Molina, D.; Datcu, M.

    2015-04-01

    The continuous increase in the size of the archives and in the variety and complexity of Earth-Observation (EO) sensors require new methodologies and tools that allow the end-user to access a large image repository, to extract and to infer knowledge about the patterns hidden in the images, to retrieve dynamically a collection of relevant images, and to support the creation of emerging applications (e.g.: change detection, global monitoring, disaster and risk management, image time series, etc.). In this context, we are concerned with providing a platform for data mining and knowledge discovery from EO archives. The platform's goal is to implement a communication channel between Payload Ground Segments and the end-user, who receives the content of the data coded in an understandable format associated with semantics that is ready for immediate exploitation. It will provide the user with automated tools to explore and understand the content of highly complex image archives. The challenge lies in the extraction of meaningful information and understanding observations of large extended areas, over long periods of time, with a broad variety of EO imaging sensors in synergy with other related measurements and data. The platform is composed of several components such as 1.) ingestion of EO images and related data providing basic features for image analysis, 2.) a query engine based on metadata, semantics and image content, 3.) data mining and knowledge discovery tools for supporting the interpretation and understanding of image content, 4.) semantic definition of the image content via machine learning methods. All these components are integrated and supported by a relational database management system, ensuring the integrity and consistency of Terabytes of Earth Observation data.

  18. Prototype Development of a Tradespace Analysis Tool for Spaceflight Medical Resources.

    Science.gov (United States)

    Antonsen, Erik L; Mulcahy, Robert A; Rubin, David; Blue, Rebecca S; Canga, Michael A; Shah, Ronak

    2018-02-01

    The provision of medical care in exploration-class spaceflight is limited by mass, volume, and power constraints, as well as limitations of available skillsets of crewmembers. A quantitative means of exploring the risks and benefits of inclusion or exclusion of onboard medical capabilities may help to inform the development of an appropriate medical system. A pilot project was designed to demonstrate the utility of an early tradespace analysis tool for identifying high-priority resources geared toward properly equipping an exploration mission medical system. Physician subject matter experts identified resources, tools, and skillsets required, as well as associated criticality scores of the same, to meet terrestrial, U.S.-specific ideal medical solutions for conditions concerning for exploration-class spaceflight. A database of diagnostic and treatment actions and resources was created based on this input and weighed against the probabilities of mission-specific medical events to help identify common and critical elements needed in a future exploration medical capability. Analysis of repository data demonstrates the utility of a quantitative method of comparing various medical resources and skillsets for future missions. Directed database queries can provide detailed comparative estimates concerning likelihood of resource utilization within a given mission and the weighted utility of tangible and intangible resources. This prototype tool demonstrates one quantitative approach to the complex needs and limitations of an exploration medical system. While this early version identified areas for refinement in future version development, more robust analysis tools may help to inform the development of a comprehensive medical system for future exploration missions. Antonsen EL, Mulcahy RA, Rubin D, Blue RS, Canga MA, Shah R. Prototype development of a tradespace analysis tool for spaceflight medical resources. Aerosp Med Hum Perform. 2018; 89(2):108-114.
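The kind of directed query the abstract describes, weighing resource criticality against the probability of mission-specific medical events, can be sketched as an expected-utility ranking. This is not the authors' tool; every condition name, probability, and criticality score below is a made-up placeholder:

```python
# Hypothetical tradespace sketch: rank candidate medical resources by the
# sum over conditions of (event probability) x (resource criticality).
# All data below is illustrative, not from the study.
conditions = {
    "kidney_stone":    {"probability": 0.02, "resources": {"ultrasound": 9, "analgesics": 7}},
    "skin_laceration": {"probability": 0.30, "resources": {"suture_kit": 8, "analgesics": 4}},
}

scores = {}
for data in conditions.values():
    for resource, criticality in data["resources"].items():
        scores[resource] = scores.get(resource, 0.0) + data["probability"] * criticality

# Resources that address common events outrank those for rare ones.
ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)
```

A real tradespace tool would add mass, volume, and power costs as further dimensions, but the weighting principle is the same.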

  19. Discovery of Sound in the Sea: Resources for Educators, Students, the Public, and Policymakers.

    Science.gov (United States)

    Vigness-Raposa, Kathleen J; Scowcroft, Gail; Miller, James H; Ketten, Darlene R; Popper, Arthur N

    2016-01-01

    There is increasing concern about the effects of underwater sound on marine life. However, the science of sound is challenging. The Discovery of Sound in the Sea (DOSITS) Web site ( http://www.dosits.org ) was designed to provide comprehensive scientific information on underwater sound for the public and educational and media professionals. It covers the physical science of underwater sound and its use by people and marine animals for a range of tasks. Celebrating 10 years of online resources, DOSITS continues to develop new material and improvements, providing the best resource for the most up-to-date information on underwater sound and its potential effects.

  20. Tools, techniques, organisation and culture of the CADD group at Sygnature Discovery.

    Science.gov (United States)

    St-Gallay, Steve A; Sambrook-Smith, Colin P

    2017-03-01

    Computer-aided drug design encompasses a wide variety of tools and techniques, and can be implemented with a range of organisational structures and focus in different organisations. Here we outline the computational chemistry skills within Sygnature Discovery, along with the software and hardware at our disposal, and briefly discuss the methods that are not employed and why. The goal of the group is to provide support for design and analysis in order to improve the quality of compounds synthesised and reduce the timelines of drug discovery projects, and we reveal how this is achieved at Sygnature. Impact on medicinal chemistry is vital to demonstrating the value of computational chemistry, and we discuss the approaches taken to influence the list of compounds for synthesis, and how we recognise success. Finally we touch on some of the areas being developed within the team in order to provide further value to the projects and clients.

  1. Evaluation Tool for the Application of Discovery Teaching Method in the Greek Environmental School Projects

    Science.gov (United States)

    Kalathaki, Maria

    2015-01-01

    Greek school community emphasizes on the discovery direction of teaching methodology in the school Environmental Education (EE) in order to promote Education for the Sustainable Development (ESD). In ESD school projects the used methodology is experiential teamwork for inquiry based learning. The proposed tool checks whether and how a school…

  2. Computational methods in drug discovery

    Directory of Open Access Journals (Sweden)

    Sumudu P. Leelananda

    2016-12-01

    Full Text Available The process for drug discovery and development is challenging, time consuming and expensive. Computer-aided drug discovery (CADD) tools can act as a virtual shortcut, assisting in the expedition of this long process and potentially reducing the cost of research and development. Today CADD has become an effective and indispensable tool in therapeutic development. The human genome project has made available a substantial amount of sequence data that can be used in various drug discovery projects. Additionally, increasing knowledge of biological structures, as well as increasing computer power have made it possible to use computational methods effectively in various phases of the drug discovery and development pipeline. The importance of in silico tools is greater than ever before and has advanced pharmaceutical research. Here we present an overview of computational methods used in different facets of drug discovery and highlight some of the recent successes. In this review, both structure-based and ligand-based drug discovery methods are discussed. Advances in virtual high-throughput screening, protein structure prediction methods, protein–ligand docking, pharmacophore modeling and QSAR techniques are reviewed.

  3. New tools and resources in metabolomics: 2016-2017.

    Science.gov (United States)

    Misra, Biswapriya B

    2018-04-01

    Rapid advances in mass spectrometry (MS) and nuclear magnetic resonance (NMR)-based platforms for metabolomics have led to an upsurge of data every single year. Newer high-throughput platforms, hyphenated technologies, miniaturization, and tool kits in data acquisition efforts in metabolomics have led to additional challenges in metabolomics data pre-processing, analysis, interpretation, and integration. Thanks to the informatics, statistics, and computational community, new resources continue to develop for metabolomics researchers. The purpose of this review is to provide a summary of the metabolomics tools, software, and databases that were developed or improved during 2016-2017, thus, enabling readers, developers, and researchers access to a succinct but thorough list of resources for further improvisation, implementation, and application in due course of time. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. An online conserved SSR discovery through cross-species comparison

    Directory of Open Access Journals (Sweden)

    Tun-Wen Pai

    2009-02-01

    Full Text Available Tun-Wen Pai1, Chien-Ming Chen1, Meng-Chang Hsiao1, Ronshan Cheng2, Wen-Shyong Tzou3, Chin-Hua Hu3. 1Department of Computer Science and Engineering, 2Department of Aquaculture, 3Institute of Bioscience and Biotechnology, National Taiwan Ocean University, Keelung, Taiwan, Republic of China. Abstract: Simple sequence repeats (SSRs) play important roles in gene regulation and genome evolution. Although there exist several online resources for SSR mining, most of them only extract general SSR patterns without providing functional information. Here, an online search tool, CG-SSR (Comparative Genomics SSR discovery), has been developed for discovering potential functional SSRs from vertebrate genomes through cross-species comparison. In addition to revealing SSR candidates in conserved regions among various species, it also combines accurate coordinate and functional genomics information. CG-SSR is the first comprehensive and efficient online tool for conserved SSR discovery. Keywords: microsatellites, genome, comparative genomics, functional SSR, gene ontology, conserved region
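The core of SSR mining, finding a short motif repeated in tandem, can be illustrated with a regular-expression backreference. This is a minimal sketch of the general technique, not CG-SSR's implementation; the function name and thresholds are made up for illustration:

```python
import re

def find_ssrs(seq, min_unit=2, max_unit=6, min_repeats=3):
    """Find simple sequence repeats (microsatellites): a motif of
    min_unit..max_unit bases repeated >= min_repeats times in tandem.
    Returns (start_position, motif, copy_count) tuples."""
    ssrs = []
    for unit in range(min_unit, max_unit + 1):
        # Capture a candidate motif of `unit` bases, then require at least
        # (min_repeats - 1) further tandem copies via the backreference \1.
        pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (unit, min_repeats - 1))
        for m in pattern.finditer(seq):
            ssrs.append((m.start(), m.group(1), len(m.group(0)) // unit))
    return ssrs

print(find_ssrs("TTACACACACACGG"))  # → [(2, 'AC', 5)]
```

A cross-species tool like the one described would then intersect such hits with alignments of conserved regions and gene annotations, which is the part that adds the functional information.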

  5. Computational methods in drug discovery

    OpenAIRE

    Sumudu P. Leelananda; Steffen Lindert

    2016-01-01

    The process for drug discovery and development is challenging, time consuming and expensive. Computer-aided drug discovery (CADD) tools can act as a virtual shortcut, assisting in the expedition of this long process and potentially reducing the cost of research and development. Today CADD has become an effective and indispensable tool in therapeutic development. The human genome project has made available a substantial amount of sequence data that can be used in various drug discovery project...

  6. Comparison of three web-scale discovery services for health sciences research*

    Science.gov (United States)

    Hanneke, Rosie; O'Brien, Kelly K.

    2016-01-01

    Objective The purpose of this study was to investigate the relative effectiveness of three web-scale discovery (WSD) tools in answering health sciences search queries. Methods Simple keyword searches, based on topics from six health sciences disciplines, were run at multiple real-world implementations of EBSCO Discovery Service (EDS), Ex Libris's Primo, and ProQuest's Summon. Each WSD tool was evaluated in its ability to retrieve relevant results and in its coverage of MEDLINE content. Results All WSD tools returned between 50%–60% relevant results. Primo returned a higher number of duplicate results than the other 2 WSD products. Summon results were more relevant when search terms were automatically mapped to controlled vocabulary. EDS indexed the largest number of MEDLINE citations, followed closely by Summon. Additionally, keyword searches in all 3 WSD tools retrieved relevant material that was not found with precision (Medical Subject Headings) searches in MEDLINE. Conclusions None of the 3 WSD products studied was overwhelmingly more effective in returning relevant results. While difficult to place the figure of 50%–60% relevance in context, it implies a strong likelihood that the average user would be able to find satisfactory sources on the first page of search results using a rudimentary keyword search. The discovery of additional relevant material beyond that retrieved from MEDLINE indicates WSD tools' value as a supplement to traditional resources for health sciences researchers. PMID:27076797

  7. Footprints: A Visual Search Tool that Supports Discovery and Coverage Tracking.

    Science.gov (United States)

    Isaacs, Ellen; Domico, Kelly; Ahern, Shane; Bart, Eugene; Singhal, Mudita

    2014-12-01

    Searching a large document collection to learn about a broad subject involves the iterative process of figuring out what to ask, filtering the results, identifying useful documents, and deciding when one has covered enough material to stop searching. We are calling this activity "discoverage," discovery of relevant material and tracking coverage of that material. We built a visual analytic tool called Footprints that uses multiple coordinated visualizations to help users navigate through the discoverage process. To support discovery, Footprints displays topics extracted from documents that provide an overview of the search space and are used to construct searches visuospatially. Footprints allows users to triage their search results by assigning a status to each document (To Read, Read, Useful), and those status markings are shown on interactive histograms depicting the user's coverage through the documents across dates, sources, and topics. Coverage histograms help users notice biases in their search and fill any gaps in their analytic process. To create Footprints, we used a highly iterative, user-centered approach in which we conducted many evaluations during both the design and implementation stages and continually modified the design in response to feedback.
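The "discoverage" bookkeeping behind Footprints' coverage histograms can be pictured as a per-document triage status rolled up by facet (source, date, or topic). The sketch below is a hypothetical minimal model, not Footprints' code; field names and documents are invented:

```python
# Hypothetical sketch: triage each document (To Read / Read / Useful) and
# summarize coverage per source as (documents triaged, documents total).
from collections import Counter

docs = [
    {"title": "A", "source": "arXiv",  "status": "Read"},
    {"title": "B", "source": "arXiv",  "status": "To Read"},
    {"title": "C", "source": "PubMed", "status": "Useful"},
]

def coverage_by(docs, field):
    """Count, per value of `field`, how many documents have moved past 'To Read'."""
    covered = Counter(d[field] for d in docs if d["status"] != "To Read")
    total = Counter(d[field] for d in docs)
    return {k: (covered.get(k, 0), total[k]) for k in total}

print(coverage_by(docs, "source"))  # → {'arXiv': (1, 2), 'PubMed': (1, 1)}
```

Rendering such (covered, total) pairs as stacked bars across dates, sources, and topics is what lets a user spot gaps or biases in their search, as the abstract describes.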

  8. A Delphi study assessing the utility of quality improvement tools and resources in Australian primary care.

    Science.gov (United States)

    Upham, Susan J; Janamian, Tina; Crossland, Lisa; Jackson, Claire L

    2016-04-18

    To determine the relevance and utility of online tools and resources to support organisational performance development in primary care and to complement the Primary Care Practice Improvement Tool (PC-PIT). A purposively recruited Expert Advisory Panel of 12 end users used a modified Delphi technique to evaluate 53 tools and resources identified through a previously conducted systematic review. The panel comprised six practice managers and six general practitioners who had participated in the PC-PIT pilot study in 2013-2014. Tools and resources were reviewed in three rounds using a standard pre-tested assessment form. Recommendations, scores and reasons for recommending or rejecting each tool or resource were analysed to determine the final suite of tools and resources. The evaluation was conducted from November 2014 to August 2015. Recommended tools and resources scored highly (mean score, 16/20) in Rounds 1 and 2 of review (n = 25). These tools and resources were perceived to be easily used, useful to the practice and supportive of the PC-PIT. Rejected resources scored considerably lower (mean score, 5/20) and were noted to have limitations such as having no value to the practice and poor utility (n = 6). A final review (Round 3) of 28 resources resulted in a suite of 21 to support the elements of the PC-PIT. This suite of tools and resources offers one approach to supporting the quality improvement initiatives currently in development in primary care reform.

  9. European Institutional and Organisational Tools for Maritime Human Resources Development

    OpenAIRE

    Dragomir Cristina

    2012-01-01

    Seafarers need to continuously develop their careers at all stages of their professional life. This paper presents some tools for institutional and organisational career development. At the institutional level, vocational education and training tools provided by European Union institutions are presented, while at the organisational level, some tools used by private crewing companies for maritime human resources assessment and development are exemplified.

  10. A hybrid human and machine resource curation pipeline for the Neuroscience Information Framework.

    Science.gov (United States)

    Bandrowski, A E; Cachat, J; Li, Y; Müller, H M; Sternberg, P W; Ciccarese, P; Clark, T; Marenco, L; Wang, R; Astakhov, V; Grethe, J S; Martone, M E

    2012-01-01

    The breadth of information resources available to researchers on the Internet continues to expand, particularly in light of recently implemented data-sharing policies required by funding agencies. However, the nature of dense, multifaceted neuroscience data and the design of contemporary search engine systems makes efficient, reliable and relevant discovery of such information a significant challenge. This challenge is specifically pertinent for online databases, whose dynamic content is 'hidden' from search engines. The Neuroscience Information Framework (NIF; http://www.neuinfo.org) was funded by the NIH Blueprint for Neuroscience Research to address the problem of finding and utilizing neuroscience-relevant resources such as software tools, data sets, experimental animals and antibodies across the Internet. From the outset, NIF sought to provide an accounting of available resources, while developing technical solutions to finding, accessing and utilizing them. The curators, therefore, are tasked with identifying and registering resources, examining data, writing configuration files to index and display data and keeping the contents current. In the initial phases of the project, all aspects of the registration and curation processes were manual. However, as the number of resources grew, manual curation became impractical. This report describes our experiences and successes with developing automated resource discovery and semiautomated type characterization with text-mining scripts that facilitate curation team efforts to discover, integrate and display new content. We also describe the DISCO framework, a suite of automated web services that significantly reduce manual curation efforts to periodically check for resource updates. Lastly, we discuss DOMEO, a semi-automated annotation tool that improves the discovery and curation of resources that are not necessarily website-based (i.e. reagents, software tools). Although the ultimate goal of automation was to

  11. search.bioPreprint: a discovery tool for cutting edge, preprint biomedical research articles [version 2; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Carrie L. Iwema

    2016-07-01

    Full Text Available The time it takes for a completed manuscript to be published traditionally can be extremely lengthy. Article publication delay, which occurs in part due to constraints associated with peer review, can prevent the timely dissemination of critical and actionable data associated with new information on rare diseases or developing health concerns such as Zika virus. Preprint servers are open access online repositories housing preprint research articles that enable authors (1) to make their research immediately and freely available and (2) to receive commentary and peer review prior to journal submission. There is a growing movement of preprint advocates aiming to change the current journal publication and peer review system, proposing that preprints catalyze biomedical discovery, support career advancement, and improve scientific communication. While the number of articles submitted to and hosted by preprint servers are gradually increasing, there has been no simple way to identify biomedical research published in a preprint format, as they are not typically indexed and are only discoverable by directly searching the specific preprint server websites. To address this issue, we created a search engine that quickly compiles preprints from disparate host repositories and provides a one-stop search solution. Additionally, we developed a web application that bolsters the discovery of preprints by enabling each and every word or phrase appearing on any web site to be integrated with articles from preprint servers. This tool, search.bioPreprint, is publicly available at http://www.hsls.pitt.edu/resources/preprint.

  12. A Water Resources Planning Tool for the Jordan River Basin

    Directory of Open Access Journals (Sweden)

    Christopher Bonzi

    2011-06-01

    Full Text Available The Jordan River basin is subject to extreme and increasing water scarcity. Management of transboundary water resources in the basin is closely intertwined with political conflicts in the region. We have jointly developed, with stakeholders and experts from the riparian countries, a new dynamic consensus database and—supported by hydro-climatological model simulations and participatory scenario exercises in the GLOWA (Global Change and the Hydrological Cycle) Jordan River project—a basin-wide Water Evaluation and Planning (WEAP) tool, which will allow testing of various unilateral and multilateral adaptation options under climate and socio-economic change. We present its validation and initial (climate and socio-economic) scenario analyses with this budget and allocation tool, and invite further adaptation and application of the tool for specific Integrated Water Resources Management (IWRM) problems.

  13. Tools for Analyzing Computing Resource Management Strategies and Algorithms for SDR Clouds

    Science.gov (United States)

    Marojevic, Vuk; Gomez-Miguelez, Ismael; Gelonch, Antoni

    2012-09-01

    Software defined radio (SDR) clouds centralize the computing resources of base stations. The computing resource pool is shared between radio operators and dynamically loads and unloads digital signal processing chains for providing wireless communications services on demand. In particular, each new user session request requires the allocation of computing resources for executing the corresponding SDR transceivers. The huge amount of computing resources in SDR cloud data centers and the numerous session requests at certain hours of the day require efficient computing resource management. We propose a hierarchical approach, where the data center is divided into clusters that are managed in a distributed way. This paper presents a set of tools for analyzing computing resource management strategies and algorithms for SDR clouds. We use the tools to evaluate different strategies and algorithms. The results show that more sophisticated algorithms can achieve higher resource occupations and that a tradeoff exists between cluster size and algorithm complexity.
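The per-session allocation problem described above is a bin-packing-style decision: each incoming session carries a computing load that must fit into some cluster's remaining capacity. The sketch below illustrates the simplest such strategy, first-fit; it is an invented toy, not the authors' tools, and the capacities and loads are arbitrary:

```python
# Toy first-fit allocator: place each session's computing load into the
# first cluster with enough spare capacity; reject if none fits.
def first_fit(cluster_capacities, session_loads):
    free = list(cluster_capacities)      # remaining capacity per cluster
    placement = []                       # cluster index per session, or None
    for load in session_loads:
        for i, capacity in enumerate(free):
            if capacity >= load:
                free[i] -= load
                placement.append(i)
                break
        else:
            placement.append(None)       # session rejected: no cluster fits
    return placement, free

placement, free = first_fit([10, 10], [6, 5, 4, 7])
print(placement, free)  # → [0, 1, 0, None] [0, 5]
```

Note the last session is rejected even though 5 units remain free in total, which is exactly the fragmentation effect that makes smarter algorithms (at higher complexity) achieve better resource occupation, matching the tradeoff the paper reports.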

  14. Forest resource projection tools at the European level

    NARCIS (Netherlands)

    Schelhaas, M.; Nabuurs, G.J.; Verkerk, P.J.; Hengeveld, G.M.; Packalen, Tuula; Sallnäs, O.; Pilli, Roberto; Grassi, J.; Forsell, Nicklas; Frank, S.; Gusti, Mykola; Havlik, Petr

    2017-01-01

    Many countries have developed their own systems for projecting forest resources and wood availability. Although studies using these tools are helpful for developing national policies, they do not provide a consistent assessment for larger regions such as the European Union or Europe as a whole.

  15. Comparison of three web-scale discovery services for health sciences research*

    Directory of Open Access Journals (Sweden)

    Rosie Hanneke, MLS

    2016-11-01

    Full Text Available Objective: The purpose of this study was to investigate the relative effectiveness of three web-scale discovery (WSD) tools in answering health sciences search queries. Methods: Simple keyword searches, based on topics from six health sciences disciplines, were run at multiple real-world implementations of EBSCO Discovery Service (EDS), Ex Libris’s Primo, and ProQuest’s Summon. Each WSD tool was evaluated in its ability to retrieve relevant results and in its coverage of MEDLINE content. Results: All WSD tools returned between 50%–60% relevant results. Primo returned a higher number of duplicate results than the other 2 WSD products. Summon results were more relevant when search terms were automatically mapped to controlled vocabulary. EDS indexed the largest number of MEDLINE citations, followed closely by Summon. Additionally, keyword searches in all 3 WSD tools retrieved relevant material that was not found with precision (Medical Subject Headings) searches in MEDLINE. Conclusions: None of the 3 WSD products studied was overwhelmingly more effective in returning relevant results. While difficult to place the figure of 50%–60% relevance in context, it implies a strong likelihood that the average user would be able to find satisfactory sources on the first page of search results using a rudimentary keyword search. The discovery of additional relevant material beyond that retrieved from MEDLINE indicates WSD tools’ value as a supplement to traditional resources for health sciences researchers.

  16. Antibody informatics for drug discovery

    DEFF Research Database (Denmark)

    Shirai, Hiroki; Prades, Catherine; Vita, Randi

    2014-01-01

    to the antibody science in every project in antibody drug discovery. Recent experimental technologies allow for the rapid generation of large-scale data on antibody sequences, affinity, potency, structures, and biological functions; this should accelerate drug discovery research. Therefore, a robust bioinformatic infrastructure for these large data sets has become necessary. In this article, we first identify and discuss the typical obstacles faced during the antibody drug discovery process. We then summarize the current status of three sub-fields of antibody informatics as follows: (i) recent progress in technologies for antibody rational design using computational approaches to affinity and stability improvement, as well as ab-initio and homology-based antibody modeling; (ii) resources for antibody sequences, structures, and immune epitopes and open drug discovery resources for development of antibody drugs; and (iii...

  17. Special issue of International journal of human resource management: Conceptual and empirical discoveries in successful HRM implementation

    OpenAIRE

    Mireia Valverde; Tanya Bondarouk; Jordi Trullen

    2016-01-01

    Special issue of International journal of human resource management: Conceptual and empirical discoveries in successful HRM implementation. DOI: 10.1080/09585192.2016.1154378 URL: http://www.tandfonline.com/doi/full/10.1080/09585192.2016.1154378 [No abstract available]

  18. Big Biomedical data as the key resource for discovery science

    Energy Technology Data Exchange (ETDEWEB)

    Toga, Arthur W.; Foster, Ian; Kesselman, Carl; Madduri, Ravi; Chard, Kyle; Deutsch, Eric W.; Price, Nathan D.; Glusman, Gustavo; Heavner, Benjamin D.; Dinov, Ivo D.; Ames, Joseph; Van Horn, John; Kramer, Roger; Hood, Leroy

    2015-07-21

    Modern biomedical data collection is generating exponentially more data in a multitude of formats. This flood of complex data poses significant opportunities to discover and understand the critical interplay among such diverse domains as genomics, proteomics, metabolomics, and phenomics, including imaging, biometrics, and clinical data. The Big Data for Discovery Science Center is taking an “-ome to home” approach to discover linkages between these disparate data sources by mining existing databases of proteomic and genomic data, brain images, and clinical assessments. In support of this work, the authors developed new technological capabilities that make it easy for researchers to manage, aggregate, manipulate, integrate, and model large amounts of distributed data. Guided by biological domain expertise, the Center’s computational resources and software will reveal relationships and patterns, aiding researchers in identifying biomarkers for the most confounding conditions and diseases, such as Parkinson’s and Alzheimer’s.

  19. Scientific workflows as productivity tools for drug discovery.

    Science.gov (United States)

    Shon, John; Ohkawa, Hitomi; Hammer, Juergen

    2008-05-01

    Large pharmaceutical companies annually invest tens to hundreds of millions of US dollars in research informatics to support their early drug discovery processes. Traditionally, most of these investments are designed to increase the efficiency of drug discovery. The introduction of do-it-yourself scientific workflow platforms has enabled research informatics organizations to shift their efforts toward scientific innovation, ultimately resulting in a possible increase in return on their investments. Unlike the handling of most scientific data and application integration approaches, researchers apply scientific workflows to in silico experimentation and exploration, leading to scientific discoveries that lie beyond automation and integration. This review highlights some key requirements for scientific workflow environments in the pharmaceutical industry that are necessary for increasing research productivity. Examples of the application of scientific workflows in research and a summary of recent platform advances are also provided.

  20. Bringing your tools to CyVerse Discovery Environment using Docker [version 2; referees: 3 approved]

    Directory of Open Access Journals (Sweden)

    Upendra Kumar Devisetty

    2016-11-01

    Full Text Available Docker has become a very popular container-based virtualization platform for software distribution that has revolutionized the way in which scientific software and software dependencies (software stacks) can be packaged, distributed, and deployed. Docker makes the complex and time-consuming installation procedures needed for scientific software a one-time process. Because it enables platform-independent installation, versioning of software environments, and easy redeployment and reproducibility, Docker is an ideal candidate for the deployment of identical software stacks on different compute environments such as XSEDE and Amazon AWS. CyVerse's Discovery Environment also uses Docker for integrating its powerful, community-recommended software tools into CyVerse's production environment for public use. This paper will help users bring their tools into the CyVerse DE, which not only allows users to integrate their tools with relative ease compared with the earlier method of tool deployment in the DE, but also helps them share their apps with collaborators and release them for public use.

  1. Bringing your tools to CyVerse Discovery Environment using Docker [version 3; referees: 3 approved]

    Directory of Open Access Journals (Sweden)

    Upendra Kumar Devisetty

    2016-12-01

    Full Text Available Docker has become a very popular container-based virtualization platform for software distribution that has revolutionized the way in which scientific software and software dependencies (software stacks) can be packaged, distributed, and deployed. Docker makes the complex and time-consuming installation procedures needed for scientific software a one-time process. Because it enables platform-independent installation, versioning of software environments, and easy redeployment and reproducibility, Docker is an ideal candidate for the deployment of identical software stacks on different compute environments such as XSEDE and Amazon AWS. CyVerse's Discovery Environment also uses Docker for integrating its powerful, community-recommended software tools into CyVerse's production environment for public use. This paper will help users bring their tools into the CyVerse DE, which not only allows users to integrate their tools with relative ease compared with the earlier method of tool deployment in the DE, but also helps them share their apps with collaborators and release them for public use.

  2. Databases and web tools for cancer genomics study.

    Science.gov (United States)

    Yang, Yadong; Dong, Xunong; Xie, Bingbing; Ding, Nan; Chen, Juan; Li, Yongjun; Zhang, Qian; Qu, Hongzhu; Fang, Xiangdong

    2015-02-01

    Publicly-accessible resources have promoted the advance of scientific discovery. The era of genomics and big data has brought the need for collaboration and data sharing in order to make effective use of this new knowledge. Here, we describe the web resources for cancer genomics research and rate them on the basis of the diversity of cancer types, sample size, omics data comprehensiveness, and user experience. The resources reviewed include data repositories and analysis tools, and we hope this introduction will promote awareness and facilitate the usage of these resources in the cancer research community. Copyright © 2015 The Authors. Production and hosting by Elsevier Ltd. All rights reserved.

  3. Technical summaries of Scotian Shelf - significant and commercial discoveries

    International Nuclear Information System (INIS)

    Dickey, J.E.; Bigelow, S.F.; Edens, J.A.; Brown, D.E.; Smith, B.; Makrides, C.; Mader, R.

    1997-03-01

    An independent assessment of the recoverable hydrocarbon resource currently held under 'Significant and Commercial Discovery' status offshore Nova Scotia was presented. A generalized description of the regulatory issues regarding the discovered resources within the Scotian Basin was included. Twenty discoveries have been declared significant and two have been declared commercial, pursuant to the Canada-Nova Scotia Offshore Petroleum Resources Accord Implementation Acts. Salient facts about each discovery were documented. The information included the wells drilled within the structure, significant flow tests, geological and geophysical attributes, structural cross-section and areal extent, petrophysical parameters, hydrocarbons in place and anticipated hydrocarbon recoverable resource. tabs., figs

  4. Nora: A Vocabulary Discovery Tool for Concept Extraction.

    Science.gov (United States)

    Divita, Guy; Carter, Marjorie E; Durgahee, B S Begum; Pettey, Warren E; Redd, Andrew; Samore, Matthew H; Gundlapalli, Adi V

    2015-01-01

    Coverage of terms in domain-specific terminologies and ontologies is often limited in controlled medical vocabularies. Creating and augmenting such terminologies is resource intensive. We developed Nora as an interactive tool to discover terminology from text corpora; the output can then be employed to refine and enhance natural language processing-based concept extraction tasks. Nora provides a visualization of chains of words foraged from word frequency indexes from a text corpus. Domain experts direct and curate chains that contain relevant terms, which are further curated to identify lexical variants. A test of Nora demonstrated a 38% increase in a domain lexicon of homelessness and related psychosocial factors, yielding an additional 10% of extracted concepts.
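
The chain-foraging idea described in this record can be sketched as a greedy walk over a word-frequency successor index. The corpus and seed term below are invented for illustration; Nora's actual indexing and curation workflow is more elaborate.

```python
from collections import Counter, defaultdict

def build_index(corpus):
    """Index, for each word, the frequency of the words that follow it."""
    successors = defaultdict(Counter)
    words = corpus.lower().split()
    for cur, nxt in zip(words, words[1:]):
        successors[cur][nxt] += 1
    return successors

def forage_chain(successors, seed, length=3):
    """Greedily extend a chain from a seed word via the most frequent successor."""
    chain = [seed]
    for _ in range(length - 1):
        options = successors.get(chain[-1])
        if not options:
            break
        chain.append(options.most_common(1)[0][0])
    return chain

# Tiny invented corpus of homelessness-related phrases.
corpus = ("homeless shelter intake homeless shelter referral "
          "homeless shelter outreach homeless veteran housing")
index = build_index(corpus)
print(forage_chain(index, "homeless"))  # → ['homeless', 'shelter', 'intake']
```

A domain expert would then inspect such candidate chains and keep only those that name relevant concepts or lexical variants.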

  5. MobilomeFINDER: web-based tools for in silico and experimental discovery of bacterial genomic islands

    OpenAIRE

    Ou, Hong-Yu; He, Xinyi; Harrison, Ewan M.; Kulasekara, Bridget R.; Thani, Ali Bin; Kadioglu, Aras; Lory, Stephen; Hinton, Jay C. D.; Barer, Michael R.; Deng, Zixin; Rajakumar, Kumar

    2007-01-01

    MobilomeFINDER (http://mml.sjtu.edu.cn/MobilomeFINDER) is an interactive online tool that facilitates bacterial genomic island or ‘mobile genome’ (mobilome) discovery; it integrates the ArrayOme and tRNAcc software packages. ArrayOme utilizes a microarray-derived comparative genomic hybridization input data set to generate ‘inferred contigs’ produced by merging adjacent genes classified as ‘present’. Collectively these ‘fragments’ represent a hypothetical ‘microarray-visualized genome (MVG)’....

  6. Host-Brucella interactions and the Brucella genome as tools for subunit antigen discovery and immunization against brucellosis

    Science.gov (United States)

    Gomez, Gabriel; Adams, Leslie G.; Rice-Ficht, Allison; Ficht, Thomas A.

    2013-01-01

    Vaccination is the most important approach to counteract infectious diseases. Thus, the development of new and improved vaccines for existing, emerging, and re-emerging diseases is an area of great interest to the scientific community and general public. Traditional approaches to subunit antigen discovery and vaccine development lack consideration for the critical aspects of public safety and activation of relevant protective host immunity. The availability of genomic sequences for pathogenic Brucella spp. and their hosts has led to the development of systems-wide analytical tools that have provided a better understanding of host and pathogen physiology while also beginning to unravel the intricacies at the host-pathogen interface. Advances in pathogen biology, host immunology, and host-agent interactions have the potential to serve as a platform for the design and implementation of better-targeted antigen discovery approaches. With emphasis on Brucella spp., we probe the biological aspects of host and pathogen that merit consideration in the targeted design of subunit antigen discovery and vaccine development. PMID:23720712

  7. Disability Rights, Gender, and Development: A Resource Tool for Action. Full Report

    Science.gov (United States)

    de Silva de Alwis, Rangita

    2008-01-01

    This resource tool builds a normative framework to examine the intersections of disability rights and gender in the human rights based approach to development. Through case studies, good practices and analyses the research tool makes recommendations and illustrates effective tools for the implementation of gender and disability sensitive laws,…

  8. Digital Resources in Instruction and Research: Assessing Faculty Discovery, Use and Needs--Final Summary Report

    Science.gov (United States)

    Tobias, Vicki

    2009-01-01

    In 2008, the Digital Initiatives Coordinating Committee (DICC) requested a comprehensive assessment of the UW Digital Collections (UWDC). The goal of this assessment was to better understand faculty awareness of and expectations for digital library resources, services and tools; obtain faculty feedback on digital resource and service needs that…

  9. The Energy Industry Profile of ISO/DIS 19115-1: Facilitating Discovery and Evaluation of, and Access to Distributed Information Resources

    Science.gov (United States)

    Hills, S. J.; Richard, S. M.; Doniger, A.; Danko, D. M.; Derenthal, L.; Energistics Metadata Work Group

    2011-12-01

    A diverse group of organizations representative of the international community involved in disciplines relevant to the upstream petroleum industry (energy companies; suppliers and publishers of information to the energy industry; vendors of software applications used by the industry; and partner government and academic organizations) has engaged in the Energy Industry Metadata Standards Initiative. This Initiative envisions the use of standard metadata within the community to enable significant improvements in the efficiency with which users discover, evaluate, and access distributed information resources. The metadata standard needed to realize this vision is the initiative's primary deliverable. In addition to developing the metadata standard, the initiative is promoting its adoption to accelerate realization of the vision, and publishing metadata exemplars conformant with the standard. Implementation of the standard by community members, in the form of published metadata which document the information resources each organization manages, will allow use of tools requiring consistent metadata for efficient discovery and evaluation of, and access to, information resources. While metadata are expected to be widely accessible, access to associated information resources may be more constrained. The initiative is being conducted by Energistics' Metadata Work Group, in collaboration with the USGIN Project. Energistics is a global standards group in the oil and natural gas industry. The Work Group determined early in the initiative, based on input solicited from 40+ organizations and on an assessment of existing metadata standards, to develop the target metadata standard as a profile of a revised version of ISO 19115, formally the "Energy Industry Profile of ISO/DIS 19115-1 v1.0" (EIP). The Work Group is participating on the ISO/TC 211 project team responsible for the revision of ISO 19115, now ready for "Draft International Standard" (DIS) status.
With ISO 19115 an

  10. OpenSearch technology for geospatial resources discovery

    Science.gov (United States)

    Papeschi, Fabrizio; Enrico, Boldrini; Mazzetti, Paolo

    2010-05-01

    set of services for discovery, access, and processing of geospatial resources in a SOA framework. GI-cat is a distributed CSW framework implementation developed by the ESSI Lab of the Italian National Research Council (CNR-IMAA) and the University of Florence. It provides brokering and mediation functionalities towards heterogeneous resources and inventories, exposing several standard interfaces for query distribution. This work focuses on a new GI-cat interface which allows the catalog to be queried according to the OpenSearch syntax specification, thus filling the gap between the SOA architectural design of the CSW and the Web 2.0. At the moment, there is no OGC standard specification about this topic, but an official change request has been proposed in order to enable the OGC catalogues to support OpenSearch queries. In this change request, an OpenSearch extension is proposed, providing a standard mechanism to query a resource based on temporal and geographic extents. Two new catalog operations are also proposed, in order to publish a suitable OpenSearch interface. This extended interface is implemented by the modular GI-cat architecture adding a new profiling module called "OpenSearch profiler". Since GI-cat also acts as a clearinghouse catalog, another component called "OpenSearch accessor" is added in order to access OpenSearch compliant services. An important role in the GI-cat extension is played by the adopted mapping strategy. Two different kinds of mapping are required: query mapping and response-element mapping. Query mapping is provided in order to fit the simple OpenSearch query syntax to the complex CSW query expressed by the OGC Filter syntax. The GI-cat internal data model is based on the ISO 19115 profile, which is more complex than the simple XML syndication formats, such as RSS 2.0 and Atom 1.0, suggested by OpenSearch.
Once response elements are available, in order to be presented, they need to be translated from the GI-cat internal data model, to the above
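
The query-mapping step described above, from flat OpenSearch parameters to a structured filter, can be sketched as follows. Parameter names follow the OpenSearch Geo and Time extensions; the element names in the output are a simplified, illustrative stand-in for a real OGC Filter document, not GI-cat's actual mapping.

```python
import xml.etree.ElementTree as ET

def opensearch_to_filter(params):
    """Map OpenSearch-style parameters (searchTerms, geo:box, time:start/end)
    onto a simplified OGC-Filter-like XML fragment (illustrative only)."""
    root = ET.Element("Filter")
    if "searchTerms" in params:
        like = ET.SubElement(root, "PropertyIsLike")
        ET.SubElement(like, "Literal").text = params["searchTerms"]
    if "geo:box" in params:  # "west,south,east,north"
        bbox = ET.SubElement(root, "BBOX")
        ET.SubElement(bbox, "Envelope").text = params["geo:box"]
    if "time:start" in params or "time:end" in params:
        period = ET.SubElement(root, "TimePeriod")
        ET.SubElement(period, "begin").text = params.get("time:start", "")
        ET.SubElement(period, "end").text = params.get("time:end", "")
    return ET.tostring(root, encoding="unicode")

query = {"searchTerms": "sea surface temperature",
         "geo:box": "-10,35,20,60",
         "time:start": "2009-01-01", "time:end": "2009-12-31"}
print(opensearch_to_filter(query))
```

Response-element mapping would go in the opposite direction, flattening ISO 19115-style records into RSS 2.0 or Atom 1.0 entries.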

  11. Using ChEMBL web services for building applications and data processing workflows relevant to drug discovery.

    Science.gov (United States)

    Nowotka, Michał M; Gaulton, Anna; Mendez, David; Bento, A Patricia; Hersey, Anne; Leach, Andrew

    2017-08-01

    ChEMBL is a manually curated database of bioactivity data on small drug-like molecules, used by drug discovery scientists. Among many access methods, a REST API provides programmatic access, allowing the remote retrieval of ChEMBL data and its integration into other applications. This approach allows scientists to move from a world where they go to the ChEMBL web site to search for relevant data, to one where ChEMBL data can be simply integrated into their everyday tools and work environment. Areas covered: This review highlights some of the audiences who may benefit from using the ChEMBL API, and the goals they can address, through the description of several use cases. The examples cover a team communication tool (Slack), a data analytics platform (KNIME), batch job management software (Luigi) and Rich Internet Applications. Expert opinion: The advent of web technologies, cloud computing and microservice-oriented architectures has made REST APIs an essential ingredient of modern software development models. The widespread availability of tools consuming RESTful resources has made them useful for many groups of users. The ChEMBL API is a valuable resource of drug discovery bioactivity data for professional chemists, chemistry students, data scientists, scientific and web developers.
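
A minimal sketch of the kind of programmatic access the record describes: building ChEMBL REST URLs and parsing a JSON record. The base URL and the Django-style `pref_name__iexact` filter follow the published ChEMBL web services conventions; the "response" here is a trimmed, canned example so the snippet runs offline rather than hitting the live service.

```python
import json
from urllib.parse import urlencode

# Base URL of the ChEMBL REST API, as documented by the ChEMBL web services.
BASE = "https://www.ebi.ac.uk/chembl/api/data"

def molecule_url(chembl_id, fmt="json"):
    """Build the REST URL for a single molecule record."""
    return f"{BASE}/molecule/{chembl_id}.{fmt}"

def search_url(pref_name):
    """Build a filtered molecule query using Django-style filter syntax."""
    return f"{BASE}/molecule.json?{urlencode({'pref_name__iexact': pref_name})}"

# Offline demonstration: parse a trimmed-down payload of the kind the API returns.
canned = json.loads('{"molecule_chembl_id": "CHEMBL25", "pref_name": "ASPIRIN"}')
print(molecule_url("CHEMBL25"))
print(canned["pref_name"])  # ASPIRIN
```

In practice the same URLs would be fetched with any HTTP client and the JSON fed into tools such as KNIME or Luigi tasks, as the review discusses.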

  12. Spec Tool; an online education and research resource

    Science.gov (United States)

    Maman, S.; Shenfeld, A.; Isaacson, S.; Blumberg, D. G.

    2016-06-01

    Education and public outreach (EPO) activities related to remote sensing, space, planetary and geo-physics sciences have been developed widely in the Earth and Planetary Image Facility (EPIF) at Ben-Gurion University of the Negev, Israel. These programs aim to motivate the learning of geo-scientific and technological disciplines. For over a decade, the facility has hosted research and outreach activities for researchers, the local community, school pupils, students and educators. As suitable software and data are neither freely available nor affordable, the EPIF Spec tool was created as a web-based resource to assist researchers and students with initial spectral analysis. The tool is used both in academic courses and in outreach education programs, and enables a better 'hands-on' understanding of the theory behind spectroscopy and imaging spectroscopy. The tool is available online and provides spectra visualization tools and basic analysis algorithms, including spectral plotting, spectral angle mapping and linear unmixing. It enables visualization of spectral signatures from the USGS spectral library as well as additional spectra collected in the EPIF, such as those of dunes in southern Israel and in Turkmenistan. For researchers and educators, the tool also allows loading locally collected samples for further analysis.
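
Of the algorithms listed, spectral angle mapping is the simplest to state: two spectra are treated as vectors and compared by the angle between them, which makes the match largely insensitive to overall brightness. A minimal pure-Python version (the sample spectra are invented):

```python
import math

def spectral_angle(a, b):
    """Angle (radians) between two spectra treated as vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    # Clamp guards against floating-point values slightly outside [-1, 1].
    return math.acos(max(-1.0, min(1.0, dot / (norm_a * norm_b))))

reference = [0.10, 0.30, 0.55, 0.60]   # e.g. a library spectrum
scaled    = [0.20, 0.60, 1.10, 1.20]   # same shape, twice as bright: angle ~0
other     = [0.60, 0.55, 0.30, 0.10]   # a different spectral shape

print(spectral_angle(reference, scaled))       # ~0.0: illumination-invariant match
print(spectral_angle(reference, other) > 0.5)  # True: clearly different material
```

A pixel is then assigned to whichever library spectrum yields the smallest angle.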

  13. Open discovery: An integrated live Linux platform of Bioinformatics tools.

    Science.gov (United States)

    Vetrivel, Umashankar; Pilla, Kalabharath

    2008-01-01

    Historically, live Linux distributions for Bioinformatics have paved the way for portability of the Bioinformatics workbench in a platform-independent manner. Moreover, most existing live Linux distributions limit their usage to sequence analysis and basic molecular visualization programs and are devoid of data persistence. Hence, Open Discovery, a live Linux distribution, has been developed with the capability to perform complex tasks like molecular modeling, docking and molecular dynamics in a swift manner. Furthermore, it is also equipped with a complete sequence analysis environment and is capable of running Windows executable programs in a Linux environment. Open Discovery portrays the advanced customizable configuration of Fedora, with data persistency accessible via USB drive or DVD. Open Discovery is distributed free under the Academic Free License (AFL) and can be downloaded from http://www.OpenDiscovery.org.in.

  14. Applications and methods utilizing the Simple Semantic Web Architecture and Protocol (SSWAP) for bioinformatics resource discovery and disparate data and service integration

    Directory of Open Access Journals (Sweden)

    Nelson Rex T

    2010-06-01

    Full Text Available Abstract Background Scientific data integration and computational service discovery are challenges for the bioinformatic community. This process is made more difficult by the separate and independent construction of biological databases, which makes the exchange of data between information resources difficult and labor intensive. A recently described semantic web protocol, the Simple Semantic Web Architecture and Protocol (SSWAP; pronounced "swap"), offers the ability to describe data and services in a semantically meaningful way. We report how three major information resources (Gramene, SoyBase and the Legume Information System [LIS]) used SSWAP to semantically describe selected data and web services. Methods We selected high-priority Quantitative Trait Locus (QTL), genomic mapping, trait, phenotypic, and sequence data and associated services such as BLAST for publication, data retrieval, and service invocation via semantic web services. Data and services were mapped to concepts and categories as implemented in legacy and de novo community ontologies. We used SSWAP to express these offerings in OWL Web Ontology Language (OWL), Resource Description Framework (RDF) and eXtensible Markup Language (XML) documents, which are appropriate for their semantic discovery and retrieval. We implemented SSWAP services to respond to web queries and return data. These services are registered with the SSWAP Discovery Server and are available for semantic discovery at http://sswap.info. Results A total of ten services delivering QTL information from Gramene were created. From SoyBase, we created six services delivering information about soybean QTLs, and seven services delivering genetic locus information. For LIS we constructed three services, two of which allow the retrieval of DNA and RNA FASTA sequences with the third service providing nucleic acid sequence comparison capability (BLAST). Conclusions The need for semantic integration technologies has preceded

  15. The Roles of Water in the Protein Matrix: A Largely Untapped Resource for Drug Discovery.

    Science.gov (United States)

    Spyrakis, Francesca; Ahmed, Mostafa H; Bayden, Alexander S; Cozzini, Pietro; Mozzarelli, Andrea; Kellogg, Glen E

    2017-08-24

    The value of thoroughly understanding the thermodynamics specific to a drug discovery/design study is well known. Over the past decade, the crucial roles of water molecules in protein structure, function, and dynamics have also become increasingly appreciated. This Perspective explores water in the biological environment by adopting its point of view in such phenomena. The prevailing thermodynamic models of the past, where water was seen largely in terms of an entropic gain after its displacement by a ligand, are now known to be much too simplistic. We adopt a set of terminology that describes water molecules as being "hot" and "cold", which we have defined as being easy and difficult to displace, respectively. The basis of these designations, which involve both enthalpic and entropic water contributions, are explored in several classes of biomolecules and structural motifs. The hallmarks for characterizing water molecules are examined, and computational tools for evaluating water-centric thermodynamics are reviewed. This Perspective's summary features guidelines for exploiting water molecules in drug discovery.

  16. MEAT: An Authoring Tool for Generating Adaptable Learning Resources

    Science.gov (United States)

    Kuo, Yen-Hung; Huang, Yueh-Min

    2009-01-01

    Mobile learning (m-learning) is a new trend in the e-learning field. The learning services in m-learning environments are supported by fundamental functions, especially the content and assessment services, which need an authoring tool to rapidly generate adaptable learning resources. To fulfill the imperious demand, this study proposes an…

  17. Big biomedical data as the key resource for discovery science.

    Science.gov (United States)

    Toga, Arthur W; Foster, Ian; Kesselman, Carl; Madduri, Ravi; Chard, Kyle; Deutsch, Eric W; Price, Nathan D; Glusman, Gustavo; Heavner, Benjamin D; Dinov, Ivo D; Ames, Joseph; Van Horn, John; Kramer, Roger; Hood, Leroy

    2015-11-01

    Modern biomedical data collection is generating exponentially more data in a multitude of formats. This flood of complex data poses significant opportunities to discover and understand the critical interplay among such diverse domains as genomics, proteomics, metabolomics, and phenomics, including imaging, biometrics, and clinical data. The Big Data for Discovery Science Center is taking an "-ome to home" approach to discover linkages between these disparate data sources by mining existing databases of proteomic and genomic data, brain images, and clinical assessments. In support of this work, the authors developed new technological capabilities that make it easy for researchers to manage, aggregate, manipulate, integrate, and model large amounts of distributed data. Guided by biological domain expertise, the Center's computational resources and software will reveal relationships and patterns, aiding researchers in identifying biomarkers for the most confounding conditions and diseases, such as Parkinson's and Alzheimer's. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  18. Semantic Service Discovery Techniques for the composable web

    OpenAIRE

    Fernández Villamor, José Ignacio

    2013-01-01

    This PhD thesis contributes to the problem of resource and service discovery in the context of the composable web. In the current web, mashup technologies allow developers reusing services and contents to build new web applications. However, developers face a problem of information flood when searching for appropriate services or resources for their combination. To contribute to overcoming this problem, a framework is defined for the discovery of services and resources. In this framework, thr...

  19. Processes, Performance Drivers and ICT Tools in Human Resources Management

    OpenAIRE

    Oškrdal Václav; Pavlíček Antonín; Jelínková Petra

    2011-01-01

    This article presents an insight into processes, performance drivers and ICT tools in the human resources (HR) management area. On the basis of a modern approach to HR management, a set of business processes that are handled by today’s HR managers is defined. Consequently, the concept of ICT-supported performance drivers and their relevance in the area of HR management, as well as the relationship between HR business processes, performance drivers and ICT tools, are defined. The theoretical outcomes ...

  20. Modern ICT Tools: Online Electronic Resources Sharing Using Web ...

    African Journals Online (AJOL)

    Modern ICT Tools: Online Electronic Resources Sharing Using Web 2.0 and Its Implications For Library And Information Practice In Nigeria. ... The PDF file you selected should load here if your Web browser has a PDF reader plug-in installed (for example, a recent version of Adobe Acrobat Reader). If you would like more ...

  1. Bioinformatics for cancer immunotherapy target discovery

    DEFF Research Database (Denmark)

    Olsen, Lars Rønn; Campos, Benito; Barnkob, Mike Stein

    2014-01-01

    therapy target discovery in a bioinformatics analysis pipeline. We describe specialized bioinformatics tools and databases for three main bottlenecks in immunotherapy target discovery: the cataloging of potentially antigenic proteins, the identification of potential HLA binders, and the selection of epitopes...

  2. The AIDS and Cancer Specimen Resource: Role in HIV/AIDS scientific discovery

    Directory of Open Access Journals (Sweden)

    McGrath Michael S

    2007-03-01

    Full Text Available Abstract The AIDS and Cancer Specimen Resource (ACSR) supports scientific discovery in the area of HIV/AIDS-associated malignancies. The ACSR was established as a cooperative agreement between the NCI (Office of the Director, Division of Cancer Treatment and Diagnosis) and regional consortia, University of California, San Francisco (West Coast), George Washington University (East Coast) and Ohio State University (Mid-Region), to collect, preserve and disperse HIV-related tissues and biologic fluids and controls along with clinical data to qualified investigators. The available biological samples with clinical data and the application process are described on the ACSR web site. The ACSR tissue bank has more than 100,000 human HIV positive specimens that represent different processing (43), specimen (15), and anatomical site (50) types. The ACSR provides special biospecimen collections and prepares speciality items, e.g., tissue microarrays (TMA), DNA libraries. Requests have been greatest for Kaposi's sarcoma (32%) and non-Hodgkin's lymphoma (26%). Dispersed requests include 83% tissue (frozen and paraffin embedded), 18% plasma/serum and 9% other. ACSR also provides tissue microarrays of, e.g., Kaposi's sarcoma and non-Hodgkin's lymphoma, for biomarker assays and has developed collaborations with other groups that provide access to additional AIDS-related malignancy specimens. ACSR members and associates have completed 63 podium and poster presentations. Investigators have submitted 125 letters of intent requests. Discoveries using ACSR have been reported in 61 scientific publications in notable journals with an average impact factor of 7. The ACSR promotes the scientific exploration of the relationship between HIV/AIDS and malignancy by participation at national and international scientific meetings, contact with investigators who have productive research in this area and identifying, collecting, preserving, enhancing, and dispersing HIV

  3. An Open-Source Web-Based Tool for Resource-Agnostic Interactive Translation Prediction

    Directory of Open Access Journals (Sweden)

    Daniel Torregrosa

    2014-09-01

    Full Text Available We present a web-based open-source tool for interactive translation prediction (ITP) and describe its underlying architecture. ITP systems assist human translators by making context-based computer-generated suggestions as they type. Most of the ITP systems in the literature are strongly coupled with a statistical machine translation system that is conveniently adapted to provide the suggestions. Our system, however, follows a resource-agnostic approach and suggestions are obtained from any unmodified black-box bilingual resource. This paper reviews our ITP method and describes the architecture of Forecat, a web tool, partly based on the recent technology of web components, that eases the use of our ITP approach in any web application requiring this kind of translation assistance. We also evaluate the performance of our method when using an unmodified Moses-based statistical machine translation system as the bilingual resource.

  4. Applications and Methods Utilizing the Simple Semantic Web Architecture and Protocol (SSWAP) for Bioinformatics Resource Discovery and Disparate Data and Service Integration

    Science.gov (United States)

    Scientific data integration and computational service discovery are challenges for the bioinformatic community. This process is made more difficult by the separate and independent construction of biological databases, which makes the exchange of scientific data between information resources difficu...

  5. A study of the discovery process in 802.11 networks

    OpenAIRE

    Castignani , German; Arcia Moret , Andres Emilio; Montavont , Nicolas

    2011-01-01

    International audience; Today, wireless communications are synonymous with mobility and resource sharing. These characteristics, common to both infrastructure and ad-hoc networks, rely heavily on a general resource discovery process. The discovery process, being an unavoidable procedure, has to be fast and reliable to mitigate the effect of network disruptions. In this article, by means of simulations and a real testbed, our contribution is twofold. First we assess the discovery process focusin...

  6. On Resource Description Capabilities of On-Board Tools for Resource Management in Cloud Networking and NFV Infrastructures

    OpenAIRE

    Tutschku, Kurt; Ahmadi Mehri, Vida; Carlsson, Anders; Chivukula, Krishna Varaynya; Johan, Christenson

    2016-01-01

    The rapid adoption of networks that are based on "cloudification" and Network Function Virtualisation (NFV) comes from the anticipated high cost savings of up to 70% in their build and operation. The high savings are founded on the use of general standard servers, instead of single-purpose hardware, and on efficient resource sharing through virtualisation concepts. In this paper, we discuss the capabilities of resource description of "on-board" tools, i.e. using standard Linux commands, to e...

  7. MLViS: A Web Tool for Machine Learning-Based Virtual Screening in Early-Phase of Drug Discovery and Development.

    Science.gov (United States)

    Korkmaz, Selcuk; Zararsiz, Gokmen; Goksuluk, Dincer

    2015-01-01

    Virtual screening is an important step in the early phase of the drug discovery process. Since there are thousands of compounds, this step should be both fast and effective in order to distinguish drug-like from nondrug-like molecules. Statistical machine learning methods are widely used in drug discovery studies for classification purposes. Here, we aim to develop a new tool, which can classify molecules as drug-like or nondrug-like based on various machine learning methods, including discriminant, tree-based, kernel-based, ensemble and other algorithms. To construct this tool, the performances of twenty-three different machine learning algorithms were first compared by ten different measures; then, the ten best-performing algorithms were selected based on principal component and hierarchical cluster analysis results. Besides classification, this application also has the ability to create heat maps and dendrograms for visual inspection of the molecules through hierarchical cluster analysis. Moreover, users can connect to the PubChem database to download molecular information and to create two-dimensional structures of compounds. This application is freely available through www.biosoft.hacettepe.edu.tr/MLViS/.
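    As a hedged illustration of the underlying classification task (not MLViS itself, and far simpler than the twenty-three algorithms it compares), the sketch below classifies made-up two-descriptor vectors with a nearest-centroid rule; the descriptor names and all numbers are hypothetical:

```python
def nearest_centroid(train, labels, query):
    """Classify a descriptor vector by the closest class centroid
    (squared Euclidean distance)."""
    centroids = {}
    for lab in set(labels):
        rows = [x for x, l in zip(train, labels) if l == lab]
        # Column-wise mean of the training vectors in this class
        centroids[lab] = [sum(col) / len(rows) for col in zip(*rows)]

    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    return min(centroids, key=lambda lab: dist2(centroids[lab], query))

# Made-up, rescaled descriptor pairs (e.g. Log P and molecular weight)
train = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
labels = ["nondrug", "nondrug", "drug", "drug"]
print(nearest_centroid(train, labels, [0.85, 0.75]))  # → drug
```

    Real virtual-screening pipelines would of course use far richer descriptors and cross-validated model selection, as the abstract describes.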

  8. Electronic Safety Resource Tools -- Supporting Hydrogen and Fuel Cell Commercialization

    Energy Technology Data Exchange (ETDEWEB)

    Barilo, Nick F.

    2014-09-29

    The Pacific Northwest National Laboratory (PNNL) Hydrogen Safety Program conducted a planning session in Los Angeles, CA on April 1, 2014 to consider what electronic safety tools would benefit the next phase of hydrogen and fuel cell commercialization. A diverse, 20-person team led by an experienced facilitator considered the question as it applied to the eight most relevant user groups. The results and subsequent evaluation activities revealed several possible resource tools that could greatly benefit users. The tool identified as having the greatest potential for impact is a hydrogen safety portal, which can be the central location for integrating and disseminating safety information (including most of the tools identified in this report). Such a tool can provide credible and reliable information from a trustworthy source. Other impactful tools identified include a codes and standards wizard to guide users through a series of questions relating to application and specific features of the requirements; a scenario-based virtual reality training for first responders; peer networking tools to bring users from focused groups together to discuss and collaborate on hydrogen safety issues; and a focused tool for training inspectors. Table ES.1 provides results of the planning session, including proposed new tools and changes to existing tools.

  9. OAI-PMH for resource harvesting, tutorial 2

    CERN Multimedia

    CERN. Geneva; Nelson, Michael

    2005-01-01

    A variety of examples have arisen in which the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) has been used for applications beyond bibliographic metadata interchange. One of these examples is the use of OAI-PMH to harvest resources and not just metadata. Advanced resource discovery and preservation capabilities are possible by combining complex object formats such as MPEG-21 DIDL, METS and SCORM with the OAI-PMH. In this tutorial, we review community conventions and practices that have provided the impetus for resource harvesting. We show how the introduction of complex object formats for the representation of resources leads to a robust, OAI-PMH-based framework for resource harvesting. We detail how complex object formats fit in the OAI-PMH data model, and how (compound) digital objects can be represented using a complex object format for exposure by an OAI-PMH repository. We also cover tools that are available for the implementation of an OAI-PMH-based resource harvesting solution. Fu...
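    OAI-PMH requests are plain HTTP GETs whose query parameters carry the protocol verb; a harvester asks for complex-object records by naming the corresponding metadataPrefix. A minimal sketch, in which the repository URL and the "didl" prefix are hypothetical examples rather than a real endpoint:

```python
from urllib.parse import urlencode

def build_oai_request(base_url, verb, **params):
    """Build an OAI-PMH request URL (protocol v2.0 query-parameter style)."""
    query = {"verb": verb, **params}
    return base_url + "?" + urlencode(query)

# Hypothetical repository endpoint; "didl" stands in for an MPEG-21 DIDL
# metadataPrefix under which compound objects (metadata plus resource)
# would be exposed for harvesting.
url = build_oai_request(
    "http://repository.example.org/oai",
    "ListRecords",
    metadataPrefix="didl",
)
print(url)
```

    Resumption tokens and the parsing of the returned XML are omitted; the point is only that resource harvesting reuses the ordinary OAI-PMH request machinery with a richer metadata format.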

  10. Research resources: curating the new eagle-i discovery system

    Science.gov (United States)

    Vasilevsky, Nicole; Johnson, Tenille; Corday, Karen; Torniai, Carlo; Brush, Matthew; Segerdell, Erik; Wilson, Melanie; Shaffer, Chris; Robinson, David; Haendel, Melissa

    2012-01-01

    Development of biocuration processes and guidelines for new data types or projects is a challenging task. Each project finds its way toward defining annotation standards and ensuring data consistency with varying degrees of planning and different tools to support and/or report on consistency. Further, this process may be data type specific even within the context of a single project. This article describes our experiences with eagle-i, a 2-year pilot project to develop a federated network of data repositories in which unpublished, unshared or otherwise ‘invisible’ scientific resources could be inventoried and made accessible to the scientific community. During the course of eagle-i development, the main challenges we experienced related to the difficulty of collecting and curating data while the system and the data model were simultaneously being built, and to the deficient and diverse data management strategies in the laboratories from which the source data were obtained. We discuss our approach to biocuration and the importance to the research process of improving information management strategies, specifically with regard to the inventorying and usage of research resources. Finally, we highlight the commonalities and differences between eagle-i and similar efforts with the hope that our lessons learned will assist other biocuration endeavors. Database URL: www.eagle-i.net PMID:22434835

  11. mHealth Visual Discovery Dashboard.

    Science.gov (United States)

    Fang, Dezhi; Hohman, Fred; Polack, Peter; Sarker, Hillol; Kahng, Minsuk; Sharmin, Moushumi; al'Absi, Mustafa; Chau, Duen Horng

    2017-09-01

    We present Discovery Dashboard, a visual analytics system for exploring large volumes of time series data from mobile medical field studies. Discovery Dashboard offers interactive exploration tools and a data mining motif discovery algorithm to help researchers formulate hypotheses, discover trends and patterns, and ultimately gain a deeper understanding of their data. Discovery Dashboard emphasizes user freedom and flexibility during the data exploration process and enables researchers to do things that were previously challenging or impossible: working in the web browser and in real time. We demonstrate our system visualizing data from a mobile sensor study conducted at the University of Minnesota that included 52 participants who were trying to quit smoking.

  12. Validating the WHO maternal near miss tool: comparing high- and low-resource settings.

    Science.gov (United States)

    Witteveen, Tom; Bezstarosti, Hans; de Koning, Ilona; Nelissen, Ellen; Bloemenkamp, Kitty W; van Roosmalen, Jos; van den Akker, Thomas

    2017-06-19

    The WHO proposed the Maternal Near Miss (MNM) tool, classifying women according to several (potentially) life-threatening conditions, to monitor and improve quality of obstetric care. The objective of this study is to analyse merged data of one high- and two low-resource settings where this tool was applied and test whether the tool may be suitable for comparing severe maternal outcome (SMO) between these settings. Using three cohort studies that included SMO cases during two-year time frames in the Netherlands, Tanzania and Malawi, we reassessed all SMO cases (as defined by the original studies) with the WHO MNM tool (five disease-, four intervention- and seven organ dysfunction-based criteria). Main outcome measures were prevalence of MNM criteria and case fatality rates (CFR). A total of 3172 women were studied; 2538 (80.0%) from the Netherlands, 248 (7.8%) from Tanzania and 386 (12.2%) from Malawi. Total SMO detection was 2767 (87.2%) for disease-based criteria, 2504 (78.9%) for intervention-based criteria and 1211 (38.2%) for organ dysfunction-based criteria. Including every woman who received ≥1 unit of blood in low-resource settings as life-threatening, as defined by organ dysfunction criteria, led to more equally distributed populations. In one third of all Dutch and Malawian maternal death cases, organ dysfunction criteria could not be identified from medical records. Applying solely organ dysfunction-based criteria may lead to underreporting of SMO. Therefore, a tool based on defining MNM only upon establishing organ failure is of limited use for comparing settings with varying resources. In low-resource settings, lowering the threshold of transfused units of blood leads to a higher detection rate of MNM. We recommend refined disease-based criteria, accompanied by a limited set of intervention- and organ dysfunction-based criteria, to set a measure of severity.

  13. Using Multiple Tools to Analyze Resource Exchange in China

    Directory of Open Access Journals (Sweden)

    Nan Li

    2015-09-01

    Full Text Available With the rapid development of globalization, the function of international physical resource exchange is becoming increasingly important in economic growth through resource optimization. However, most existing ecological economy studies use physical trade balance (PTB directly or use physical imports and exports individually to analyze national material metabolization. Neither the individual analysis of physical imports and exports nor the direct analysis of PTB is capable of portraying the comprehensive contributions of a certain product to total physical trade. This study introduced an indicator, i.e., the physical contribution to the trade balance (PCB, which evolved from the traditional index of contribution to the trade balance (CB. In addition, trade balance (TB, PTB, CB, and PCB were systematically related and combined. An analysis was conducted using the four tools to obtain overall trade trends in China. This study discovered that both physical trade value and quantity exhibited different characteristics when China joined the World Trade Organization in 2002 and experienced the global economic crisis in 2009. Finally, the advantages of supporting policy decisions by applying multiple analytical tools to physical trade were discussed.
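    The CB index mentioned above is conventionally computed in the Lafay tradition, comparing each product's net trade to the share of total trade it represents; assuming that formulation (the paper's exact variant may differ), a sketch with made-up product-level trade figures:

```python
def contribution_to_trade_balance(exports, imports):
    """Lafay-style contribution of each product to the overall trade balance:
    CB_i = (X_i - M_i) - (X - M) * (X_i + M_i) / (X + M).
    A positive CB_i means product i contributes more to the surplus than its
    share of total trade would suggest; contributions sum to zero by construction."""
    X, M = sum(exports.values()), sum(imports.values())
    return {
        i: (exports[i] - imports[i]) - (X - M) * (exports[i] + imports[i]) / (X + M)
        for i in exports
    }

# Illustrative (made-up) trade figures for three product groups
cb = contribution_to_trade_balance(
    exports={"ores": 120.0, "machinery": 300.0, "textiles": 80.0},
    imports={"ores": 200.0, "machinery": 150.0, "textiles": 50.0},
)
print(cb)
```

    The PCB indicator in the paper applies the same decomposition to physical quantities rather than monetary values.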

  14. A procedure to improve the information flow in the assessment of discoveries of oil and gas resources in the Brazilian context

    Energy Technology Data Exchange (ETDEWEB)

    Rosa, Henrique; Suslick, Saul B.; Sousa, Sergio H.G. de [Universidade Estadual de Campinas, SP (Brazil). Inst. of Geosciences; Castro, Jonas Q. [ANP - Brazilian National Petroleum Agency, Rio de Janeiro, RJ (Brazil)

    2004-07-01

    This paper is focused on the elaboration of a standardization model for the existing flow of information between the Petroleum National Agency (ANP) and the concessionaire companies in the event of the discovery of any potentially commercial hydrocarbon resources inside their concession areas. The method proposed by Rosa (2003) included the analysis of a small sample of Oil and Gas Discovery Assessment Plans (PADs), elaborated by companies that operate in exploratory blocks in Brazil, under the regulatory context introduced by the Petroleum Law (Law 9478 of August 6th, 1997). The analysis of these documents made it possible to identify and target the problems originating from the lack of standardization. The results obtained facilitated the development of a model that helps the creation process of Oil and Gas Discovery Assessment Plans. It turns out that the standardization procedures suggested provide considerable advantages while speeding up several technical and regulatory steps. Software called 'ePADs' was developed to consolidate the automation of the several steps in the model for the standardization of the Oil and Gas Discovery Assessment Plans. A preliminary version has been tested with several different types of discoveries, indicating a good performance by complying with all regulatory aspects and operational requirements. (author)

  15. A smartphone-based ASR data collection tool for under-resourced languages

    CSIR Research Space (South Africa)

    De Vries, NJ

    2014-01-01

    Full Text Available collection strategies, highlighting some of the salient issues pertaining to collecting ASR data for under-resourced languages. We then describe the development of a smartphone-based data collection tool, Woefzela, which is designed to function in a...

  16. Biomarkers as drug development tools: discovery, validation, qualification and use.

    Science.gov (United States)

    Kraus, Virginia B

    2018-06-01

    The 21st Century Cures Act, approved in the USA in December 2016, has encouraged the establishment of the national Precision Medicine Initiative and the augmentation of efforts to address disease prevention, diagnosis and treatment on the basis of a molecular understanding of disease. The Act adopts into law the formal process, developed by the FDA, of qualification of drug development tools, including biomarkers and clinical outcome assessments, to increase the efficiency of clinical trials and encourage an era of molecular medicine. The FDA and European Medicines Agency (EMA) have developed similar processes for the qualification of biomarkers intended for use as companion diagnostics or for development and regulatory approval of a drug or therapeutic. Biomarkers that are used exclusively for the diagnosis, monitoring or stratification of patients in clinical trials are not subject to regulatory approval, although their qualification can facilitate the conduct of a trial. In this Review, the salient features of biomarker discovery, analytical validation, clinical qualification and utilization are described in order to provide an understanding of the process of biomarker development and, through this understanding, convey an appreciation of their potential advantages and limitations.

  17. The Petroleum resources on the Norwegian Continental Shelf. 2011

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2011-07-01

    This resource report provides a survey of petroleum resources on the NCS. Content: Resource account; Unconventional oil and gas resources; Future oil and gas production; Challenges for producing fields; Discoveries; Undiscovered resources; Curbing greenhouse gas emissions; Technology and talent; Exploration and new areas; How undiscovered resources are calculated; The NPD's project database; Play analysis; Changes to and reductions in estimated undiscovered resources; Unconventional petroleum resources; Many wells, Increased exploration, Every little helps; Varied discovery success; Sub-basalt in the Norwegian Sea; High exploration costs; Profitable exploration; Unopened areas - mostly in the far north; Resource base; Small discoveries; Location; Development solutions, Profitability of discoveries; Things may take time; Area perspective; Development of production; Remaining reserves and resources in fields; Target for reserve growth; Existing technology; Water and gas injection; Drilling and wells; Infrastructure challenges; New methods and technology; Challenges for pilot projects; Long-term thinking and creativity. (eb)

  18. IMG-ABC: An Atlas of Biosynthetic Gene Clusters to Fuel the Discovery of Novel Secondary Metabolites

    Energy Technology Data Exchange (ETDEWEB)

    Chen, I-Min; Chu, Ken; Ratner, Anna; Palaniappan, Krishna; Huang, Jinghua; Reddy, T. B.K.; Cimermancic, Peter; Fischbach, Michael; Ivanova, Natalia; Markowitz, Victor; Kyrpides, Nikos; Pati, Amrita

    2014-10-28

    In the discovery of secondary metabolites (SMs), large-scale analysis of sequence data is a promising exploration path that remains largely underutilized due to the lack of relevant computational resources. We present IMG-ABC (https://img.jgi.doe.gov/abc/) -- An Atlas of Biosynthetic gene Clusters within the Integrated Microbial Genomes (IMG) system. IMG-ABC is a rich repository of both validated and predicted biosynthetic clusters (BCs) in cultured isolates, single-cells and metagenomes linked with the SM chemicals they produce and enhanced with focused analysis tools within IMG. The underlying scalable framework enables traversal of phylogenetic dark matter and chemical structure space -- serving as a doorway to a new era in the discovery of novel molecules.

  19. Novel approaches to develop community-built biological network models for potential drug discovery.

    Science.gov (United States)

    Talikka, Marja; Bukharov, Natalia; Hayes, William S; Hofmann-Apitius, Martin; Alexopoulos, Leonidas; Peitsch, Manuel C; Hoeng, Julia

    2017-08-01

    Hundreds of thousands of data points are now routinely generated in clinical trials by molecular profiling and NGS technologies. A true translation of this data into knowledge is not possible without analysis and interpretation in a well-defined biology context. Currently, there are many public and commercial pathway tools and network models that can facilitate such analysis. At the same time, the insights and knowledge that can be gained are highly dependent on the underlying biological content of these resources. Crowdsourcing can be employed to guarantee the accuracy and transparency of the biological content underlying the tools used to interpret rich molecular data. Areas covered: In this review, the authors describe crowdsourcing in drug discovery. The focal point is the efforts that have successfully used the crowdsourcing approach to verify and augment pathway tools and biological network models. Technologies that enable the building of biological networks with the community are also described. Expert opinion: A crowd of experts can be leveraged for the entire development process of biological network models, from ontologies to the evaluation of their mechanistic completeness. The ultimate goal is to facilitate biomarker discovery and personalized medicine by mechanistically explaining patients' differences with respect to disease prevention, diagnosis, and therapy outcome.

  20. Configurable User Interface Framework for Data Discovery in Cross-Disciplinary and Citizen Science

    Science.gov (United States)

    Rozell, E.; Wang, H.; West, P.; Zednik, S.; Fox, P.

    2012-04-01

    Use cases for data discovery and analysis vary widely when looking across disciplines and levels of expertise. Domain experts across disciplines may have a thorough understanding of self-describing data formats, such as netCDF, and the software packages that are compatible. However, they may be unfamiliar with specific vocabulary terms used to describe the data parameters or instrument packages in someone else's collection, which are often useful in data discovery. Citizen scientists may struggle with both expert vocabularies and knowledge of existing tools for analyzing and visualizing data. There are some solutions for each problem individually. For expert vocabularies, semantic technologies like the Resource Description Framework (RDF) have been used to map terms from an expert vocabulary to layperson terminology. For data analysis and visualization, tools can be mapped to data products using semantic technologies as well. This presentation discusses a solution to both problems based on the S2S Framework, a configurable user interface (UI) framework for Web services. S2S unifies the two solutions previously described using a data service abstraction ("search services") and a UI abstraction ("widgets"). Using the OWL Web Ontology Language, S2S defines a vocabulary for describing search services and their outputs, and the compatibility of those outputs with UI widgets. By linking search service outputs to widgets, S2S can automatically compose UIs for search and analysis of data, making it easier for citizen scientists to manipulate data. We have also created Linked Data widgets for S2S, which can leverage distributed RDF resources to present alternative views of expert vocabularies. This presentation covers some examples where we have applied these solutions to improve data discovery for both cross-disciplinary and non-expert users.

  1. Radiotracer properties determined by high performance liquid chromatography: a potential tool for brain radiotracer discovery

    International Nuclear Information System (INIS)

    Tavares, Adriana Alexandre S.; Lewsey, James; Dewar, Deborah; Pimlott, Sally L.

    2012-01-01

    Introduction: Previously, development of novel brain radiotracers has largely relied on simple screening tools. Improved selection methods at the early stages of radiotracer discovery and an increased understanding of the relationships between in vitro physicochemical and in vivo radiotracer properties are needed. We investigated if high performance liquid chromatography (HPLC) methodologies could provide criteria for lead candidate selection by comparing HPLC measurements with radiotracer properties in humans. Methods: Ten molecules, previously used as radiotracers in humans, were analysed to obtain the following measures: partition coefficient (Log P); permeability (Pm); percentage of plasma protein binding (%PPB); and membrane partition coefficient (Km). Relationships between brain entry measurements (Log P, Pm and %PPB) and in vivo brain percentage injected dose (%ID), and between Km and specific binding in vivo (BPND), were investigated. Log P values obtained using in silico packages and flask methods were compared with Log P values obtained using HPLC. Results: The modelled associations with %ID were stronger for %PPB (r^2 = 0.65) and Pm (r^2 = 0.77) than for Log P (r^2 = 0.47), while 86% of BPND variance was explained by Km. Log P values varied depending on the methodology used. Conclusions: Log P should not be relied upon as a predictor of blood-brain barrier penetration during brain radiotracer discovery. HPLC measurements of permeability, %PPB and membrane interactions may be potentially useful in predicting in vivo performance and hence allow evaluation and ranking of compound libraries for the selection of lead radiotracer candidates at early stages of radiotracer discovery.
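    The r^2 values reported above are coefficients of determination from simple linear associations between an in vitro measure and an in vivo outcome. A minimal pure-Python sketch of that calculation, with made-up permeability and %ID data (none of the numbers below come from the study):

```python
def r_squared(x, y):
    """Coefficient of determination (r^2) of the ordinary
    least-squares line fitting y against x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    return sxy * sxy / (sxx * syy)

# Made-up example: in vitro permeability measurements vs. in vivo %ID
pm = [1.0, 2.0, 3.0, 4.0, 5.0]
pid = [0.9, 2.1, 2.9, 4.2, 5.0]
print(r_squared(pm, pid))
```

    An r^2 near 1 indicates that the in vitro measure explains almost all the variance in the in vivo outcome, which is the sense in which the abstract ranks Pm above Log P.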

  2. Toxins and drug discovery.

    Science.gov (United States)

    Harvey, Alan L

    2014-12-15

    Components from venoms have stimulated many drug discovery projects, with some notable successes. These are briefly reviewed, from captopril to ziconotide. However, there have been many more disappointments on the road from toxin discovery to approval of a new medicine. Drug discovery and development is an inherently risky business, and the main causes of failure during development programmes are outlined in order to highlight steps that might be taken to increase the chances of success with toxin-based drug discovery. These include having a clear focus on unmet therapeutic needs, concentrating on targets that are well-validated in terms of their relevance to the disease in question, making use of phenotypic screening rather than molecular-based assays, and working with development partners with the resources required for the long and expensive development process. Copyright © 2014 The Author. Published by Elsevier Ltd. All rights reserved.

  3. A collaborative filtering-based approach to biomedical knowledge discovery.

    Science.gov (United States)

    Lever, Jake; Gakkhar, Sitanshu; Gottlieb, Michael; Rashnavadi, Tahereh; Lin, Santina; Siu, Celia; Smith, Maia; Jones, Martin R; Krzywinski, Martin; Jones, Steven J M; Wren, Jonathan

    2018-02-15

    The increase in publication rates makes it challenging for an individual researcher to stay abreast of all relevant research in order to find novel research hypotheses. Literature-based discovery methods make use of knowledge graphs built using text mining and can infer future associations between biomedical concepts that will likely occur in new publications. These predictions are a valuable resource for researchers to explore a research topic. Current methods for prediction are based on the local structure of the knowledge graph. A method that uses global knowledge from across the knowledge graph needs to be developed in order to make knowledge discovery a tool frequently used by researchers. We propose an approach based on the singular value decomposition (SVD) that is able to combine data from across the knowledge graph through a reduced representation. Using cooccurrence data extracted from published literature, we show that SVD performs better than the leading methods for scoring discoveries. We also show the diminishing predictive power of knowledge discovery as we compare our predictions with real associations that appear further into the future. Finally, we examine the strengths and weaknesses of the SVD approach against another well-performing system using several predicted associations. All code and results files for this analysis can be accessed at https://github.com/jakelever/knowledgediscovery. sjones@bcgsc.ca. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
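    The abstract above scores candidate associations via a reduced-rank SVD of a concept cooccurrence matrix. As a minimal, hypothetical sketch of that idea (not the authors' released code, which lives at the GitHub link above), with a toy matrix and rank chosen purely for illustration:

```python
import numpy as np

def svd_scores(cooc, rank):
    """Reconstruct the cooccurrence matrix from its top singular vectors.
    High reconstructed values at zero entries suggest plausible but
    not-yet-published concept associations."""
    u, s, vt = np.linalg.svd(cooc, full_matrices=False)
    return u[:, :rank] @ np.diag(s[:rank]) @ vt[:rank, :]

# Toy symmetric cooccurrence counts between five biomedical concepts;
# concepts 0 and 4 never cooccur directly but share neighbours.
cooc = np.array([
    [0, 4, 3, 0, 0],
    [4, 0, 5, 2, 1],
    [3, 5, 0, 3, 2],
    [0, 2, 3, 0, 4],
    [0, 1, 2, 4, 0],
], dtype=float)

scores = svd_scores(cooc, rank=2)
print(scores[0, 4])  # predicted strength of the unobserved (0, 4) pair
```

    The low-rank constraint is what lets information flow "globally": the score for an unseen pair is informed by every other entry of the matrix, unlike purely local link-prediction heuristics.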

  4. SAGES: a suite of freely-available software tools for electronic disease surveillance in resource-limited settings.

    Directory of Open Access Journals (Sweden)

    Sheri L Lewis

    Full Text Available Public health surveillance is undergoing a revolution driven by advances in the field of information technology. Many countries have experienced vast improvements in the collection, ingestion, analysis, visualization, and dissemination of public health data. Resource-limited countries have lagged behind due to challenges in information technology infrastructure, public health resources, and the costs of proprietary software. The Suite for Automated Global Electronic bioSurveillance (SAGES) is a collection of modular, flexible, freely-available software tools for electronic disease surveillance in resource-limited settings. One or more SAGES tools may be used in concert with existing surveillance applications or the SAGES tools may be used en masse for an end-to-end biosurveillance capability. This flexibility allows for the development of an inexpensive, customized, and sustainable disease surveillance system. The ability to rapidly assess anomalous disease activity may lead to more efficient use of limited resources and better compliance with World Health Organization International Health Regulations.

  5. Quality tools and resources to support organisational improvement integral to high-quality primary care: a systematic review of published and grey literature.

    Science.gov (United States)

    Janamian, Tina; Upham, Susan J; Crossland, Lisa; Jackson, Claire L

    2016-04-18

    To conduct a systematic review of the literature to identify existing online primary care quality improvement tools and resources to support organisational improvement related to the seven elements in the Primary Care Practice Improvement Tool (PC-PIT), with the identified tools and resources to progress to a Delphi study for further assessment of relevance and utility. Systematic review of the international published and grey literature. CINAHL, Embase and PubMed databases were searched in March 2014 for articles published between January 2004 and December 2013. GreyNet International and other relevant websites and repositories were also searched in March-April 2014 for documents dated between 1992 and 2012. All citations were imported into a bibliographic database. Published and unpublished tools and resources were included in the review if they were in English, related to primary care quality improvement and addressed any of the seven PC-PIT elements of a high-performing practice. Tools and resources that met the eligibility criteria were then evaluated for their accessibility, relevance, utility and comprehensiveness using a four-criteria appraisal framework. We used a data extraction template to systematically extract information from eligible tools and resources. A content analysis approach was used to explore the tools and resources and collate relevant information: name of the tool or resource, year and country of development, author, name of the organisation that provided access and its URL, accessibility information or problems, overview of each tool or resource and the quality improvement element(s) it addresses. If available, a copy of the tool or resource was downloaded into the bibliographic database, along with supporting evidence (published or unpublished) on its use in primary care. 
This systematic review identified 53 tools and resources that can potentially be provided as part of a suite of tools and resources to support primary care practices in

  6. Perspectives of biomolecular NMR in drug discovery: the blessing and curse of versatility

    International Nuclear Information System (INIS)

    Jahnke, Wolfgang

    2007-01-01

    The versatility of NMR and its broad applicability to several stages in the drug discovery process is well known and generally considered one of the major strengths of NMR (Pellecchia et al., Nature Rev Drug Discov 1:211-219, 2002; Stockman and Dalvit, Prog Nucl Magn Reson Spectrosc 41:187-231, 2002; Lepre et al., Comb Chem High throughput screen 5:583-590, 2002; Wyss et al., Curr Opin Drug Discov Devel 5:630-647, 2002; Jahnke and Widmer, Cell Mol Life Sci 61:580-599, 2004; Huth et al., Methods Enzymol 394:549-571, 2005b; Klages et al., Mol Biosyst 2:318-332, 2006; Takeuchi and Wagner, Curr Opin Struct Biol 16:109-117, 2006; Zartler and Shapiro, Curr Pharm Des 12:3963-3972, 2006). Indeed, NMR is the only biophysical technique which can detect and quantify molecular interactions, and at the same time provide detailed structural information with atomic level resolution. NMR should therefore be ideally suited and widely requested as a tool for drug discovery research, and numerous examples of drug discovery projects which have substantially benefited from NMR contributions or were even driven by NMR have been described in the literature. However, not all pharmaceutical companies have rigorously implemented NMR as an integral tool of their research processes. Some companies invest only limited resources, and others do not use biomolecular NMR at all. This discrepancy in assessing the value of a technology is striking, and calls for clarification: under which circumstances can NMR provide added value to the drug discovery process? What kind of contributions can NMR make, and how is it implemented and integrated for maximum impact? This perspectives article suggests key areas of impact for NMR, and a model of integrating NMR with other technologies to realize synergies and maximize their value for drug discovery

  7. A new approach to the rationale discovery of polymeric biomaterials

    Science.gov (United States)

    Kohn, Joachim; Welsh, William J.; Knight, Doyle

    2007-01-01

    This paper attempts to illustrate both the need for new approaches to biomaterials discovery and the significant promise inherent in the use of combinatorial and computational design strategies. The key observation of this Leading Opinion Paper is that the biomaterials community has been slow to embrace advanced biomaterials discovery tools such as combinatorial methods, high throughput experimentation, and computational modeling in spite of the significant promise shown by these discovery tools in materials science, medicinal chemistry and the pharmaceutical industry. It seems that the complexity of living cells and their interactions with biomaterials has been a conceptual as well as a practical barrier to the use of advanced discovery tools in biomaterials science. However, with the continued increase in computer power, the goal of predicting the biological response of cells in contact with biomaterials surfaces is within reach. Once combinatorial synthesis, high throughput experimentation, and computational modeling are integrated into the biomaterials discovery process, a significant acceleration is possible in the pace of development of improved medical implants, tissue regeneration scaffolds, and gene/drug delivery systems. PMID:17644176

  8. Text mining resources for the life sciences.

    Science.gov (United States)

    Przybyła, Piotr; Shardlow, Matthew; Aubin, Sophie; Bossy, Robert; Eckart de Castilho, Richard; Piperidis, Stelios; McNaught, John; Ananiadou, Sophia

    2016-01-01

    Text mining is a powerful technology for quickly distilling key information from vast quantities of biomedical literature. However, to harness this power the researcher must be well versed in the availability, suitability, adaptability, interoperability and comparative accuracy of current text mining resources. In this survey, we give an overview of the text mining resources that exist in the life sciences to help researchers, especially those employed in biocuration, to engage with text mining in their own work. We categorize the various resources under three sections: Content Discovery looks at where and how to find biomedical publications for text mining; Knowledge Encoding describes the formats used to represent the different levels of information associated with content that enable text mining, including those formats used to carry such information between processes; Tools and Services gives an overview of workflow management systems that can be used to rapidly configure and compare domain- and task-specific processes, via access to a wide range of pre-built tools. We also provide links to relevant repositories in each section to enable the reader to find resources relevant to their own area of interest. Throughout this work we give a special focus to resources that are interoperable: those that have the crucial ability to share information, enabling smooth integration and reusability. © The Author(s) 2016. Published by Oxford University Press.

  9. Text mining resources for the life sciences

    Science.gov (United States)

    Shardlow, Matthew; Aubin, Sophie; Bossy, Robert; Eckart de Castilho, Richard; Piperidis, Stelios; McNaught, John; Ananiadou, Sophia

    2016-01-01

    Text mining is a powerful technology for quickly distilling key information from vast quantities of biomedical literature. However, to harness this power the researcher must be well versed in the availability, suitability, adaptability, interoperability and comparative accuracy of current text mining resources. In this survey, we give an overview of the text mining resources that exist in the life sciences to help researchers, especially those employed in biocuration, to engage with text mining in their own work. We categorize the various resources under three sections: Content Discovery looks at where and how to find biomedical publications for text mining; Knowledge Encoding describes the formats used to represent the different levels of information associated with content that enable text mining, including those formats used to carry such information between processes; Tools and Services gives an overview of workflow management systems that can be used to rapidly configure and compare domain- and task-specific processes, via access to a wide range of pre-built tools. We also provide links to relevant repositories in each section to enable the reader to find resources relevant to their own area of interest. Throughout this work we give a special focus to resources that are interoperable—those that have the crucial ability to share information, enabling smooth integration and reusability. PMID:27888231

  10. A Critical Study of Effect of Web-Based Software Tools in Finding and Sharing Digital Resources--A Literature Review

    Science.gov (United States)

    Baig, Muntajeeb Ali

    2010-01-01

    The purpose of this paper is to review the effect of web-based software tools for finding and sharing digital resources. A positive correlation between learning and studying through online tools has been found in recent research. In the traditional classroom, searching for resources is limited to the library and sharing of resources is limited to the…

  11. Pattern Discovery in Time-Ordered Data; TOPICAL

    International Nuclear Information System (INIS)

    CONRAD, GREGORY N.; BRITANIK, JOHN M.; DELAND, SHARON M.; JENKIN, CHRISTINA L.

    2002-01-01

    This report describes the results of a Laboratory-Directed Research and Development project on techniques for pattern discovery in discrete event time series data. In this project, we explored two different aspects of the pattern matching/discovery problem. The first aspect studied was the use of Dynamic Time Warping (DTW) for pattern matching in continuous data. In essence, DTW is a technique for aligning time series along the time axis to optimize the similarity measure. The second aspect studied was techniques for discovering patterns in discrete event data. We developed a pattern discovery tool based on adaptations of the A-priori and GSP (Generalized Sequential Pattern mining) algorithms. We then used the tool in three different application areas: unattended monitoring system data from a storage magazine, computer network intrusion detection, and analysis of robot training data.
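The report's abstract names the classic DTW recurrence but includes no source code. As a minimal illustrative sketch (function and variable names are ours, not the report's), the standard dynamic-programming formulation can be written as:

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two numeric sequences.

    Fills an (n+1) x (m+1) cost table where D[i][j] is the cheapest
    alignment of a[:i] with b[:j]; warping steps are insert, delete,
    or match (the three predecessors in the min below).
    """
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # skip a point of a
                                 D[i][j - 1],      # skip a point of b
                                 D[i - 1][j - 1])  # align the two points
    return D[n][m]
```

Note the O(n·m) time and memory cost; practical systems typically constrain the warping path (e.g. with a Sakoe-Chiba band) to keep large-scale matching tractable.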

  12. Education resources of the National Center for Biotechnology Information.

    Science.gov (United States)

    Cooper, Peter S; Lipshultz, Dawn; Matten, Wayne T; McGinnis, Scott D; Pechous, Steven; Romiti, Monica L; Tao, Tao; Valjavec-Gratian, Majda; Sayers, Eric W

    2010-11-01

    The National Center for Biotechnology Information (NCBI) hosts 39 literature and molecular biology databases containing almost half a billion records. As the complexity of these data and associated resources and tools continues to expand, so does the need for educational resources to help investigators, clinicians, information specialists and the general public make use of the wealth of public data available at the NCBI. This review describes the educational resources available at NCBI via the NCBI Education page (www.ncbi.nlm.nih.gov/Education/). These resources include materials designed for new users, such as About NCBI and the NCBI Guide, as well as documentation, Frequently Asked Questions (FAQs) and writings on the NCBI Bookshelf such as the NCBI Help Manual and the NCBI Handbook. NCBI also provides teaching materials such as tutorials, problem sets and educational tools such as the Amino Acid Explorer, PSSM Viewer and Ebot. NCBI also offers training programs including the Discovery Workshops, webinars and tutorials at conferences. To help users keep up-to-date, NCBI produces the online NCBI News and offers RSS feeds and mailing lists, along with a presence on Facebook, Twitter and YouTube.

  13. Processes, Performance Drivers and ICT Tools in Human Resources Management

    Directory of Open Access Journals (Sweden)

    Oškrdal Václav

    2011-06-01

    This article presents an insight into processes, performance drivers and ICT tools in the human resources (HR) management area. On the basis of a modern approach to HR management, a set of business processes that are handled by today’s HR managers is defined. Consequently, the concept of ICT-supported performance drivers and their relevance in the area of HR management, as well as the relationship between HR business processes, performance drivers and ICT tools, are defined. The theoretical outcomes are further enhanced with results obtained from a survey among Czech companies. This article was written with the financial support of VŠE IGA grant „IGA – 32/2010“.

  14. Bioinformatics in translational drug discovery.

    Science.gov (United States)

    Wooller, Sarah K; Benstead-Hume, Graeme; Chen, Xiangrong; Ali, Yusuf; Pearl, Frances M G

    2017-08-31

    Bioinformatics approaches are becoming ever more essential in translational drug discovery both in academia and within the pharmaceutical industry. Computational exploitation of the increasing volumes of data generated during all phases of drug discovery is enabling key challenges of the process to be addressed. Here, we highlight some of the areas in which bioinformatics resources and methods are being developed to support the drug discovery pipeline. These include the creation of large data warehouses, bioinformatics algorithms to analyse 'big data' that identify novel drug targets and/or biomarkers, programs to assess the tractability of targets, and prediction of repositioning opportunities that use licensed drugs to treat additional indications. © 2017 The Author(s).

  15. UCLA's Molecular Screening Shared Resource: enhancing small molecule discovery with functional genomics and new technology.

    Science.gov (United States)

    Damoiseaux, Robert

    2014-05-01

    The Molecular Screening Shared Resource (MSSR) offers a comprehensive range of leading-edge high throughput screening (HTS) services including drug discovery, chemical and functional genomics, and novel methods for nano and environmental toxicology. The MSSR is an open access environment with investigators from UCLA as well as from the entire globe. Industrial clients are equally welcome as are non-profit entities. The MSSR is a fee-for-service entity and does not retain intellectual property. In conjunction with the Center for Environmental Implications of Nanotechnology, the MSSR is unique in its dedicated and ongoing efforts towards high throughput toxicity testing of nanomaterials. In addition, the MSSR engages in technology development eliminating bottlenecks from the HTS workflow and enabling novel assays and readouts currently not available.

  16. Using Data in the Classroom: Resources for Undergraduate Faculty

    Science.gov (United States)

    Manduca, C. A.

    2003-12-01

    .carleton.edu/introgeo/); Earth Exploration Toolbook, providing step-by-step instructions for using Earth science datasets and software tools in educational settings (serc.carleton.edu/eet/); and NSDL projects developing data access and tools, including THREDDS (www.unidata.ucar.edu/projects/THREDDS/); Data Discovery Toolkit and Foundry (www.newmediastudio.org/DataDiscovery/index.html); Collection and Distribution of Geoscience (Solid Earth) Data Sets (atlas.geo.cornell.edu/nsdl/nsdl.html); and the Atmospheric Visualization Collection (www.nsdl.arm.gov/index.shtml). These resources will be available for exploration at our poster.

  17. Harvest: an open platform for developing web-based biomedical data discovery and reporting applications.

    Science.gov (United States)

    Pennington, Jeffrey W; Ruth, Byron; Italia, Michael J; Miller, Jeffrey; Wrazien, Stacey; Loutrel, Jennifer G; Crenshaw, E Bryan; White, Peter S

    2014-01-01

    Biomedical researchers share a common challenge of making complex data understandable and accessible as they seek inherent relationships between attributes in disparate data types. Data discovery in this context is limited by a lack of query systems that efficiently show relationships between individual variables, but without the need to navigate underlying data models. We have addressed this need by developing Harvest, an open-source framework of modular components, and using it for the rapid development and deployment of custom data discovery software applications. Harvest incorporates visualizations of highly dimensional data in a web-based interface that promotes rapid exploration and export of any type of biomedical information, without exposing researchers to underlying data models. We evaluated Harvest with two cases: clinical data from pediatric cardiology and demonstration data from the OpenMRS project. Harvest's architecture and public open-source code offer a set of rapid application development tools to build data discovery applications for domain-specific biomedical data repositories. All resources, including the OpenMRS demonstration, can be found at http://harvest.research.chop.edu.

  18. Aligning Web-Based Tools to the Research Process Cycle: A Resource for Collaborative Research Projects

    Science.gov (United States)

    Price, Geoffrey P.; Wright, Vivian H.

    2012-01-01

    Using John Creswell's Research Process Cycle as a framework, this article describes various web-based collaborative technologies useful for enhancing the organization and efficiency of educational research. Visualization tools (Cacoo) assist researchers in identifying a research problem. Resource storage tools (Delicious, Mendeley, EasyBib)…

  19. The in silico drug discovery toolbox: applications in lead discovery and optimization.

    Science.gov (United States)

    Bruno, Agostino; Costantino, Gabriele; Sartori, Luca; Radi, Marco

    2017-11-06

    Discovery and development of a new drug is a long-lasting and expensive journey that takes around 15 years from the starting idea to approval and marketing of a new medication. Although R&D expenditures have constantly increased in the last few years, the number of new drugs introduced into the market has steadily declined. This is mainly due to preclinical and clinical safety issues, which still account for about 40% of drug discontinuations. From this point of view, it is clear that if we want to increase the drug-discovery success rate and reduce the costs associated with developing a new drug, a comprehensive evaluation/prediction of potential safety issues should be conducted as early as possible in the drug discovery phase. In the present review, we analyse the early steps of the drug-discovery pipeline, describing the sequence of steps from disease selection to lead optimization and focusing on the most common in silico tools used to assess attrition risks and build a mitigation plan. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  20. Climate Discovery: Integrating Research With Exhibit, Public Tours, K-12, and Web-based EPO Resources

    Science.gov (United States)

    Foster, S. Q.; Carbone, L.; Gardiner, L.; Johnson, R.; Russell, R.; Advisory Committee, S.; Ammann, C.; Lu, G.; Richmond, A.; Maute, A.; Haller, D.; Conery, C.; Bintner, G.

    2005-12-01

    The Climate Discovery Exhibit at the National Center for Atmospheric Research (NCAR) Mesa Lab provides an exciting conceptual outline for the integration of several EPO activities with other well-established NCAR educational resources and programs. The exhibit is organized into four topic areas intended to build understanding among NCAR's 80,000 annual visitors, including 10,000 school children, about Earth system processes and scientific methods contributing to a growing body of knowledge about climate and global change. These topics include: 'Sun-Earth Connections,' 'Climate Now,' 'Climate Past,' and 'Climate Future.' Exhibit text, graphics, film and electronic media, and interactives are developed and updated through collaborations between NCAR's climate research scientists and staff in the Office of Education and Outreach (EO) at the University Corporation for Atmospheric Research (UCAR). With funding from NCAR, paleoclimatologists have contributed data and ideas for a new exhibit Teachers' Guide unit about 'Climate Past.' This collection of middle-school level, standards-aligned lessons are intended to help students gain understanding about how scientists use proxy data and direct observations to describe past climates. Two NASA EPO's have funded the development of 'Sun-Earth Connection' lessons, visual media, and tips for scientists and teachers. Integrated with related content and activities from the NASA-funded Windows to the Universe web site, these products have been adapted to form a second unit in the Climate Discovery Teachers' Guide about the Sun's influence on Earth's climate. Other lesson plans, previously developed by on-going efforts of EO staff and NSF's previously-funded Project Learn program are providing content for a third Teachers' Guide unit on 'Climate Now' - the dynamic atmospheric and geological processes that regulate Earth's climate. 
EO has plans to collaborate with NCAR climatologists and computer modelers in the next year to develop

  1. Intra-annual wave resource characterization for energy exploitation: A new decision-aid tool

    International Nuclear Information System (INIS)

    Carballo, R.; Sánchez, M.; Ramos, V.; Fraguela, J.A.; Iglesias, G.

    2015-01-01

    Highlights: • A decision-aid tool is developed for computing the monthly performance of WECs. • It allows the generation of high-resolution monthly characterization matrices. • The decision-aid tool is applied to the Death Coast (N Spain). • The monthly matrices can be obtained at any coastal location within the Death Coast. • The tool is applied to a coastal location of a proposed wave farm. - Abstract: The wave energy resource is usually characterized by significant variability throughout the year. In estimating the power performance of a Wave Energy Converter (WEC) it is fundamental to take this variability into account; indeed, an estimate based on mean annual values may well result in wrong decision making. In this work, a novel decision-aid tool, iWEDGE (intra-annual Wave Energy Diagram GEnerator), is developed and applied to a coastal region of interest, the Death Coast (Spain), one of the regions in Europe with the largest wave resource. Following a comprehensive procedure, and based on deep-water wave data and high-resolution numerical modelling, this tool provides the monthly high-resolution characterization matrices (or energy diagrams) for any location of interest. In other words, the information required for the accurate computation of the intra-annual performance of any WEC at any location within the region covered is made available. Finally, an application of iWEDGE to the site of a proposed wave farm is presented. The results obtained highlight the importance of the decision-aid tool herein provided for wave energy exploitation.
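Monthly characterization matrices of the kind iWEDGE generates feed a standard WEC yield computation: bin-by-bin multiplication of sea-state occurrence (hours per significant-wave-height/energy-period bin) by the converter's power matrix (kW per bin). A minimal sketch under those assumptions; the function name, units, and example values are illustrative, not taken from the paper:

```python
def monthly_output_mwh(occurrence_hours, power_matrix_kw):
    """Estimate monthly WEC energy output in MWh.

    occurrence_hours: rows of hours spent in each (Hs, Te) bin this month.
    power_matrix_kw:  the converter's output (kW) for the same bins.
    Both are lists of lists with identical shape.
    """
    total_kwh = 0.0
    for occ_row, pow_row in zip(occurrence_hours, power_matrix_kw):
        for hours, kw in zip(occ_row, pow_row):
            total_kwh += hours * kw  # kW x h = kWh per bin
    return total_kwh / 1000.0        # convert kWh to MWh


# Hypothetical 2x2 example: 50 h in a bin where the WEC produces 50 kW, etc.
occurrence = [[100, 50],
              [20, 10]]
power = [[0, 50],
         [100, 200]]
energy = monthly_output_mwh(occurrence, power)  # 6.5 MWh
```

Computing this per month rather than from annual means is precisely the intra-annual distinction the abstract emphasizes: two months with the same mean wave power can place their hours in very different bins.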

  2. Enhancing Undergraduate Education with NASA Resources

    Science.gov (United States)

    Manning, James G.; Meinke, Bonnie; Schultz, Gregory; Smith, Denise Anne; Lawton, Brandon L.; Gurton, Suzanne; Astrophysics Community, NASA

    2015-08-01

    The NASA Astrophysics Science Education and Public Outreach Forum (SEPOF) coordinates the work of NASA Science Mission Directorate (SMD) Astrophysics EPO projects and their teams to bring cutting-edge discoveries of NASA missions to the introductory astronomy college classroom. Uniquely poised to foster collaboration between scientists with content expertise and educators with pedagogical expertise, the Forum has coordinated the development of several resources that provide new opportunities for college and university instructors to bring the latest NASA discoveries in astrophysics into their classrooms. To address the needs of the higher education community, the Astrophysics Forum collaborated with the astrophysics E/PO community, researchers, and introductory astronomy instructors to place individual science discoveries and learning resources into context for higher education audiences. The resulting products include two “Resource Guides” on cosmology and exoplanets, each including a variety of accessible resources. The Astrophysics Forum also coordinates the development of the “Astro 101” slide set series. The sets are five- to seven-slide presentations on new discoveries from NASA astrophysics missions relevant to topics in introductory astronomy courses. These sets enable Astronomy 101 instructors to include new discoveries not yet in their textbooks in their courses, and may be found at: https://www.astrosociety.org/education/resources-for-the-higher-education-audience/. The Astrophysics Forum also coordinated the development of 12 monthly “Universe Discovery Guides,” each featuring a theme and a representative object well-placed for viewing, with an accompanying interpretive story, strategies for conveying the topics, and supporting NASA-approved education activities and background information from a spectrum of NASA missions and programs. These resources are adaptable for use by instructors and may be found at: http

  3. Enhancing knowledge discovery from cancer genomics data with Galaxy.

    Science.gov (United States)

    Albuquerque, Marco A; Grande, Bruno M; Ritch, Elie J; Pararajalingam, Prasath; Jessa, Selin; Krzywinski, Martin; Grewal, Jasleen K; Shah, Sohrab P; Boutros, Paul C; Morin, Ryan D

    2017-05-01

    The field of cancer genomics has demonstrated the power of massively parallel sequencing techniques to inform on the genes and specific alterations that drive tumor onset and progression. Although large comprehensive sequence data sets continue to be made increasingly available, data analysis remains an ongoing challenge, particularly for laboratories lacking dedicated resources and bioinformatics expertise. To address this, we have produced a collection of Galaxy tools that represent many popular algorithms for detecting somatic genetic alterations from cancer genome and exome data. We developed new methods for parallelization of these tools within Galaxy to accelerate runtime and have demonstrated their usability and summarized their runtimes on multiple cloud service providers. Some tools represent extensions or refinement of existing toolkits to yield visualizations suited to cohort-wide cancer genomic analysis. For example, we present Oncocircos and Oncoprintplus, which generate data-rich summaries of exome-derived somatic mutation. Workflows that integrate these to achieve data integration and visualizations are demonstrated on a cohort of 96 diffuse large B-cell lymphomas and enabled the discovery of multiple candidate lymphoma-related genes. Our toolkit is available from our GitHub repository as Galaxy tool and dependency definitions and has been deployed using virtualization on multiple platforms including Docker. © The Author 2017. Published by Oxford University Press.

  4. Open Drug Discovery Toolkit (ODDT): a new open-source player in the drug discovery field.

    Science.gov (United States)

    Wójcikowski, Maciej; Zielenkiewicz, Piotr; Siedlecki, Pawel

    2015-01-01

    There has been huge progress in the open cheminformatics field in both methods and software development. Unfortunately, there has been little effort to unite those methods and software into one package. We here describe the Open Drug Discovery Toolkit (ODDT), which aims to fulfill the need for comprehensive and open source drug discovery software. The Open Drug Discovery Toolkit was developed as a free and open source tool for both computer aided drug discovery (CADD) developers and researchers. ODDT reimplements many state-of-the-art methods, such as machine learning scoring functions (RF-Score and NNScore) and wraps other external software to ease the process of developing CADD pipelines. ODDT is an out-of-the-box solution designed to be easily customizable and extensible. Therefore, users are strongly encouraged to extend it and develop new methods. We here present three use cases for ODDT in common tasks in computer-aided drug discovery. Open Drug Discovery Toolkit is released on a permissive 3-clause BSD license for both academic and industrial use. ODDT's source code, additional examples and documentation are available on GitHub (https://github.com/oddt/oddt).

  5. Proxy support for service discovery using mDNS/DNS-SD in low power networks

    NARCIS (Netherlands)

    Stolikj, M.; Verhoeven, R.; Cuijpers, P.J.L.; Lukkien, J.J.

    2014-01-01

    We present a solution for service discovery of resource constrained devices based on mDNS/DNS-SD. We extend the mDNS/DNS-SD service discovery protocol with support for proxy servers. Proxy servers temporarily store information about services offered on resource constrained devices and respond on
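The proxy idea in this abstract, temporarily storing service records announced by constrained devices and answering queries on their behalf, reduces at its core to a TTL-bounded cache. A minimal sketch of that caching behavior (class, method names, and the example service name are illustrative; this is not the paper's implementation nor any real mDNS library API):

```python
import time


class ServiceProxyCache:
    """Sketch of a DNS-SD proxy cache: holds service records for sleeping
    low-power devices and answers resolution requests until the record's
    time-to-live (TTL) expires, after which the device must re-announce."""

    def __init__(self):
        self._records = {}  # service name -> (info dict, expiry timestamp)

    def register(self, name, info, ttl_seconds, now=None):
        now = time.time() if now is None else now
        self._records[name] = (info, now + ttl_seconds)

    def resolve(self, name, now=None):
        now = time.time() if now is None else now
        entry = self._records.get(name)
        if entry is None:
            return None
        info, expiry = entry
        if now > expiry:
            del self._records[name]  # stale: stop answering for this device
            return None
        return info
```

The point of the proxy is visible in `resolve`: while the record is fresh, the constrained device can stay asleep because multicast queries never need to reach it.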

  6. Teaching resources in speleology and karst: a valuable educational tool

    Directory of Open Access Journals (Sweden)

    De Waele Jo

    2010-01-01

    There is a growing need in the speleological community for tools that make the teaching of speleology and karst easier. Despite the existence of a wide range of major academic textbooks, the caver community often has difficulty accessing such material. Therefore, to fill this gap, the Italian Speleological Society, under the umbrella of the Union Internationale de Spéléologie, has prepared a set of lectures, in presentation format, on several topics including geology, physics, chemistry, hydrogeology, mineralogy, palaeontology, biology, microbiology, history, archaeology, artificial caves, documentation, etc. These lectures constitute the “Teaching Resources in Speleology and Karst”, available online. This educational tool, thanks to its easily manageable format, can be constantly updated and enriched with new contents and topics.

  7. Applying genetics in inflammatory disease drug discovery

    DEFF Research Database (Denmark)

    Folkersen, Lasse; Biswas, Shameek; Frederiksen, Klaus Stensgaard

    2015-01-01

    , with several notable exceptions, the journey from a small-effect genetic variant to a functional drug has proven arduous, and few examples of actual contributions to drug discovery exist. Here, we discuss novel approaches of overcoming this hurdle by using instead public genetics resources as a pragmatic guide...... alongside existing drug discovery methods. Our aim is to evaluate human genetic confidence as a rationale for drug target selection....

  8. atBioNet– an integrated network analysis tool for genomics and biomarker discovery

    Directory of Open Access Journals (Sweden)

    Ding Yijun

    2012-07-01

    Background: Large amounts of mammalian protein-protein interaction (PPI) data have been generated and are available for public use. From a systems biology perspective, protein/gene interactions encode the key mechanisms distinguishing disease and health, and such mechanisms can be uncovered through network analysis. An effective network analysis tool should integrate different content-specific PPI databases into a comprehensive network format with a user-friendly platform to identify key functional modules/pathways and the underlying mechanisms of disease and toxicity. Results: atBioNet integrates seven publicly available PPI databases into a network-specific knowledge base. Knowledge expansion is achieved by expanding a user-supplied protein/gene list with interactions from its integrated PPI network. The statistically significant functional modules are determined by applying a fast network-clustering algorithm (SCAN: a Structural Clustering Algorithm for Networks). The functional modules can be visualized either separately or together in the context of the whole network. Integration of pathway information enables enrichment analysis and assessment of the biological function of modules. Three case studies are presented using publicly available disease gene signatures as a basis to discover new biomarkers for acute leukemia, systemic lupus erythematosus, and breast cancer. The results demonstrated that atBioNet can not only identify functional modules and pathways related to the studied diseases, but this information can also be used to hypothesize novel biomarkers for future analysis. Conclusion: atBioNet is a free web-based network analysis tool that provides a systematic insight into protein/gene interactions through examining significant functional modules. The identified functional modules are useful for determining underlying mechanisms of disease and biomarker discovery.
It can be accessed at: http://www.fda.gov/ScienceResearch/BioinformaticsTools

  9. atBioNet--an integrated network analysis tool for genomics and biomarker discovery.

    Science.gov (United States)

    Ding, Yijun; Chen, Minjun; Liu, Zhichao; Ding, Don; Ye, Yanbin; Zhang, Min; Kelly, Reagan; Guo, Li; Su, Zhenqiang; Harris, Stephen C; Qian, Feng; Ge, Weigong; Fang, Hong; Xu, Xiaowei; Tong, Weida

    2012-07-20

    Large amounts of mammalian protein-protein interaction (PPI) data have been generated and are available for public use. From a systems biology perspective, protein/gene interactions encode the key mechanisms distinguishing disease and health, and such mechanisms can be uncovered through network analysis. An effective network analysis tool should integrate different content-specific PPI databases into a comprehensive network format with a user-friendly platform to identify key functional modules/pathways and the underlying mechanisms of disease and toxicity. atBioNet integrates seven publicly available PPI databases into a network-specific knowledge base. Knowledge expansion is achieved by expanding a user-supplied protein/gene list with interactions from its integrated PPI network. The statistically significant functional modules are determined by applying a fast network-clustering algorithm (SCAN: a Structural Clustering Algorithm for Networks). The functional modules can be visualized either separately or together in the context of the whole network. Integration of pathway information enables enrichment analysis and assessment of the biological function of modules. Three case studies are presented using publicly available disease gene signatures as a basis to discover new biomarkers for acute leukemia, systemic lupus erythematosus, and breast cancer. The results demonstrated that atBioNet can not only identify functional modules and pathways related to the studied diseases, but this information can also be used to hypothesize novel biomarkers for future analysis. atBioNet is a free web-based network analysis tool that provides a systematic insight into protein/gene interactions through examining significant functional modules. The identified functional modules are useful for determining underlying mechanisms of disease and biomarker discovery. It can be accessed at: http://www.fda.gov/ScienceResearch/BioinformaticsTools/ucm285284.htm.
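The SCAN algorithm referenced in this record clusters a network by the structural similarity of adjacent nodes: the normalized overlap of their closed neighborhoods (each node's neighbor set plus the node itself). A small illustrative sketch of that core measure (not the atBioNet implementation; the graph encoding as a dict of neighbor sets is our assumption):

```python
import math


def structural_similarity(adj, u, v):
    """SCAN-style structural similarity between nodes u and v.

    adj maps each node to the set of its neighbors. The measure is
    |N[u] ∩ N[v]| / sqrt(|N[u]| * |N[v]|), where N[x] is the closed
    neighborhood of x (neighbors of x plus x itself). It is 1.0 when
    the closed neighborhoods coincide and approaches 0 for nodes
    embedded in disjoint parts of the network.
    """
    nu = adj[u] | {u}
    nv = adj[v] | {v}
    return len(nu & nv) / math.sqrt(len(nu) * len(nv))
```

SCAN then thresholds this similarity to decide which edges connect "structurally close" nodes, growing clusters from dense cores; nodes similar to no core become hubs or outliers.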

  10. An integrative data analysis platform for gene set analysis and knowledge discovery in a data warehouse framework.

    Science.gov (United States)

    Chen, Yi-An; Tripathi, Lokesh P; Mizuguchi, Kenji

    2016-01-01

    Data analysis is one of the most critical and challenging steps in drug discovery and disease biology. A user-friendly resource to visualize and analyse high-throughput data provides a powerful medium for both experimental and computational biologists to understand vastly different biological data types and obtain a concise, simplified and meaningful output for better knowledge discovery. We have previously developed TargetMine, an integrated data warehouse optimized for target prioritization. Here we describe how upgraded and newly modelled data types in TargetMine can now survey the wider biological and chemical data space, relevant to drug discovery and development. To enhance the scope of TargetMine from target prioritization to broad-based knowledge discovery, we have also developed a new auxiliary toolkit to assist with data analysis and visualization in TargetMine. This toolkit features interactive data analysis tools to query and analyse the biological data compiled within the TargetMine data warehouse. The enhanced system enables users to discover new hypotheses interactively by performing complicated searches with no programming and obtaining the results in an easy to comprehend output format. Database URL: http://targetmine.mizuguchilab.org. © The Author(s) 2016. Published by Oxford University Press.

  11. Promise Fulfilled? An EBSCO Discovery Service Usability Study

    Science.gov (United States)

    Williams, Sarah C.; Foster, Anita K.

    2011-01-01

    Discovery tools are the next phase of library search systems. Illinois State University's Milner Library implemented EBSCO Discovery Service in August 2010. The authors conducted usability studies on the system in the fall of 2010. The aims of the study were twofold: first, to determine how Milner users set about using the system in order to…

  12. MobilomeFINDER: web-based tools for in silico and experimental discovery of bacterial genomic islands

    Science.gov (United States)

    Ou, Hong-Yu; He, Xinyi; Harrison, Ewan M.; Kulasekara, Bridget R.; Thani, Ali Bin; Kadioglu, Aras; Lory, Stephen; Hinton, Jay C. D.; Barer, Michael R.; Rajakumar, Kumar

    2007-01-01

    MobilomeFINDER (http://mml.sjtu.edu.cn/MobilomeFINDER) is an interactive online tool that facilitates bacterial genomic island or ‘mobile genome’ (mobilome) discovery; it integrates the ArrayOme and tRNAcc software packages. ArrayOme utilizes a microarray-derived comparative genomic hybridization input data set to generate ‘inferred contigs’ produced by merging adjacent genes classified as ‘present’. Collectively these ‘fragments’ represent a hypothetical ‘microarray-visualized genome (MVG)’. ArrayOme permits recognition of discordances between physical genome and MVG sizes, thereby enabling identification of strains rich in microarray-elusive novel genes. Individual tRNAcc tools facilitate automated identification of genomic islands by comparative analysis of the contents and contexts of tRNA sites and other integration hotspots in closely related sequenced genomes. Accessory tools facilitate design of hotspot-flanking primers for in silico and/or wet-science-based interrogation of cognate loci in unsequenced strains and analysis of islands for features suggestive of foreign origins; island-specific and genome-contextual features are tabulated and represented in schematic and graphical forms. To date we have used MobilomeFINDER to analyse several Enterobacteriaceae, Pseudomonas aeruginosa and Streptococcus suis genomes. MobilomeFINDER enables high-throughput island identification and characterization through increased exploitation of emerging sequence data and PCR-based profiling of unsequenced test strains; subsequent targeted yeast recombination-based capture permits full-length sequencing and detailed functional studies of novel genomic islands. PMID:17537813

  13. Improved discovery of NEON data and samples through vocabularies, workflows, and web tools

    Science.gov (United States)

    Laney, C. M.; Elmendorf, S.; Flagg, C.; Harris, T.; Lunch, C. K.; Gulbransen, T.

    2017-12-01

    The National Ecological Observatory Network (NEON) is a continental-scale ecological observation facility sponsored by the National Science Foundation and operated by Battelle. NEON supports research on the impacts of invasive species, land use change, and environmental change on natural resources and ecosystems by gathering and disseminating a full suite of observational, instrumented, and airborne datasets from field sites across the U.S. NEON also collects thousands of samples from soil, water, and organisms every year, and partners with numerous institutions to analyze and archive samples. We have developed numerous new technologies to support processing and discovery of this highly diverse collection of data. These technologies include applications for data collection and sample management, processing pipelines specific to each collection system (field observations, installed sensors, and airborne instruments), and publication pipelines. NEON data and metadata are discoverable and downloadable via both a public API and data portal. We solicit continued engagement and advice from the informatics and environmental research communities, particularly in the areas of data versioning, usability, and visualization.

  14. Developing a planning tool for South African prosecution resources: challenges and approach

    Directory of Open Access Journals (Sweden)

    R Koen

    2012-12-01

    In every country the prosecution of criminal cases is governed by different laws, policies and processes. In South Africa, the National Prosecuting Authority (NPA) has the responsibility of planning and managing all prosecution functions. The NPA has certain unique characteristics that distinguish it from similar organisations internationally. The development of a planning tool that the NPA could use to plan its future resource requirements over the short to medium term required extensive modelling, and its final form included features which, to the best knowledge of the development team, make it unique both locally and internationally. Model design was largely influenced by the challenges emanating from the special requirements and context of the problem. Resources were not forecast directly, but were derived with the help of simulation models that traced docket flows through various resource-driven processes. Docket flows were derived as a proportion of reported crimes, and these were forecast using a multivariate statistical model which could take into account explanatory variables as well as the correlations between the patterns observed within different crime categories. The simulation consisted of a number of smaller models which could be run independently, rather than one overarching model. This approach was found to make the best use of available data, and compensated for the fact that certain parameters, linking different courts and court types, were not available. In addition, it simplified scenario testing and sensitivity analysis. The various components of the planning tool, including inputs and outputs of the simulation models and the linkages between the forecasts and the simulation models, were implemented in a set of spreadsheets. By using spreadsheets as a common user interface, the planning tool could be used by prosecutors and managers who may not have extensive mathematical or modelling experience.
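    The core idea of deriving resource needs from forecast crime volumes can be shown with a toy calculation. All figures and category names below are hypothetical, and the actual tool uses multivariate forecasts and docket-flow simulation rather than fixed proportions:

```python
# Hypothetical inputs: forecast reported crimes per category, the fraction of
# reports that become prosecutable dockets, and prosecutor throughput per year.
forecast_crimes = {"assault": 180_000, "theft": 320_000, "fraud": 45_000}
docket_rate = {"assault": 0.42, "theft": 0.35, "fraud": 0.61}
dockets_per_prosecutor_year = 220

# Docket flows derived as a proportion of reported crimes
dockets = {c: n * docket_rate[c] for c, n in forecast_crimes.items()}

# Resource requirement derived from docket volume and throughput
prosecutors_needed = sum(dockets.values()) / dockets_per_prosecutor_year
```

    Replacing the fixed rates with simulated docket flows per court type is what the paper's suite of smaller simulation models does.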

  15. BEAM web server: a tool for structural RNA motif discovery.

    Science.gov (United States)

    Pietrosanto, Marco; Adinolfi, Marta; Casula, Riccardo; Ausiello, Gabriele; Ferrè, Fabrizio; Helmer-Citterich, Manuela

    2018-03-15

    RNA structural motif finding is a relevant problem that becomes computationally hard when working on high-throughput data (e.g. eCLIP, PAR-CLIP), often represented by thousands of RNA molecules. Currently, the BEAM server is the only web tool capable of handling tens of thousands of RNAs in input with a motif discovery procedure that is limited only by current secondary structure prediction accuracies. The recently developed method BEAM (BEAr Motifs finder) can analyze tens of thousands of RNA molecules and identify RNA secondary structure motifs associated with a measure of their statistical significance. BEAM is extremely fast thanks to the BEAR encoding that transforms each RNA secondary structure into a string of characters. BEAM also exploits the evolutionary knowledge contained in a substitution matrix of secondary structure elements, extracted from the RFAM database of families of homologous RNAs. The BEAM web server has been designed to streamline data pre-processing by automatically handling the folding and encoding of RNA sequences, giving users a choice of the preferred folding program. The server provides an intuitive and informative results page with the list of secondary structure motifs identified and, for each motif, its logo, significance, graphic representation and information about its position in the RNA molecules sharing it. The web server is freely available at http://beam.uniroma2.it/ and is implemented in NodeJS and Python, with all major browsers supported. marco.pietrosanto@uniroma2.it. Supplementary data are available at Bioinformatics online.

  16. Too New for Textbooks: The Biotechnology Discoveries & Applications Guidebook

    Science.gov (United States)

    Loftin, Madelene; Lamb, Neil E.

    2013-01-01

    The "Biotechnology Discoveries and Applications" guidebook aims to provide teachers with an overview of the recent advances in genetics and biotechnology, allowing them to share these findings with their students. The annual guidebook introduces a wealth of modern genomic discoveries and provides teachers with tools to integrate exciting…

  17. Entrepreneurship, transaction costs, and resource attributes

    DEFF Research Database (Denmark)

    Foss, Kirsten; Foss, Nicolai Juul

    2006-01-01

    This paper responds to Kim and Mahoney's "How Property Rights Economics Furthers the Resource-Based View: Resources, Transaction Costs and Entrepreneurial Discovery" (a comment on Foss and Foss, 2005). While we agree with many of their arguments, we argue that they fail to recognise how exactly transaction costs shape the process of entrepreneurial discovery. We provide a sketch of the mechanisms that link entrepreneurship, property rights, and transaction costs in a resource-based setting, contributing further to the attempt to take the RBV in a more dynamic direction.

  19. Introduction to fragment-based drug discovery.

    Science.gov (United States)

    Erlanson, Daniel A

    2012-01-01

    Fragment-based drug discovery (FBDD) has emerged in the past decade as a powerful tool for discovering drug leads. The approach first identifies starting points: very small molecules (fragments) that are about half the size of typical drugs. These fragments are then expanded or linked together to generate drug leads. Although the origins of the technique date back some 30 years, it was only in the mid-1990s that experimental techniques became sufficiently sensitive and rapid for the concept to become practical. Since that time, the field has exploded: FBDD has played a role in discovery of at least 18 drugs that have entered the clinic, and practitioners of FBDD can be found throughout the world in both academia and industry. Literally dozens of reviews have been published on various aspects of FBDD or on the field as a whole, as have three books (Jahnke and Erlanson, Fragment-based approaches in drug discovery, 2006; Zartler and Shapiro, Fragment-based drug discovery: a practical approach, 2008; Kuo, Fragment based drug design: tools, practical approaches, and examples, 2011). However, this chapter will assume that the reader is approaching the field with little prior knowledge. It will introduce some of the key concepts, set the stage for the chapters to follow, and demonstrate how X-ray crystallography plays a central role in fragment identification and advancement.

  20. Drive Cost Reduction, Increase Innovation and Mitigate Risk with Advanced Knowledge Discovery Tools Designed to Unlock and Leverage Prior Knowledge

    International Nuclear Information System (INIS)

    Mitchell, I.

    2016-01-01

    Full text: The nuclear industry is knowledge-intensive and includes a diverse number of stakeholders. Much of this knowledge is at risk as engineers, technicians and project professionals retire, leaving a widening skills and information gap. This knowledge is critical in an increasingly complex environment, with information from past projects often buried in decades-old, non-integrated enterprise systems. Engineers can spend 40% or more of their time searching for answers across the enterprise instead of solving problems. The inability to access trusted industry knowledge results in increased risk and expense. Advanced knowledge discovery technologies slash research times by as much as 75% and accelerate innovation and problem solving by giving technical professionals access to the information they need, in the context of the problems they are trying to solve. Unlike traditional knowledge management approaches, knowledge discovery tools powered by semantic search technologies are adept at uncovering answers in unstructured data and require no tagging, organization or moving of data, meaning a smaller IT footprint and faster time-to-knowledge. This session will highlight best-in-class knowledge discovery technologies, content, and strategies to give nuclear industry organizations the ability to leverage the corpus of enterprise knowledge into the future. (author)

  1. ATO Resource Tool -

    Data.gov (United States)

    Department of Transportation — Cru-X/ART is a shift management tool designed for use by operational employees in Air Traffic Facilities. Cru-X/ART is used for shift scheduling, shift sign in/out,...

  2. Beyond information retrieval: information discovery and multimedia information retrieval

    OpenAIRE

    Roberto Raieli

    2017-01-01

    The paper compares the current methodologies for the search and discovery of information and information resources: terminological search and term-based language, characteristic of information retrieval (IR); semantic search and information discovery, being developed mainly through the language of linked data; and semiotic search and content-based language, explored in multimedia information retrieval (MIR). The semiotic methodology of MIR is then described in detail.

  3. Bioenergy Knowledge Discovery Framework Fact Sheet

    Energy Technology Data Exchange (ETDEWEB)

    None

    2017-07-01

    The Bioenergy Knowledge Discovery Framework (KDF) supports the development of a sustainable bioenergy industry by providing access to a variety of data sets, publications, and collaboration and mapping tools that support bioenergy research, analysis, and decision making. In the KDF, users can search for information, contribute data, and use the tools and map interface to synthesize, analyze, and visualize information in a spatially integrated manner.

  4. Silicon Detectors-Tools for Discovery in Particle Physics

    International Nuclear Information System (INIS)

    Krammer, Manfred

    2009-01-01

    Since the first application of Silicon strip detectors in high energy physics in the early 1980s these detectors have enabled experiments to perform new challenging measurements. With these devices it became possible to determine the decay lengths of heavy quarks, for example in the fixed target experiment NA11 at CERN. In this experiment Silicon tracking detectors were used for the identification of particles containing a c-quark. Later on, the experiments at the Large Electron Positron collider at CERN used larger and more sophisticated assemblies of Silicon detectors to identify and study particles containing the b-quark. A very important contribution to the discovery of the last of the six quarks, the top quark, was made by even larger Silicon vertex detectors inside the experiments CDF and D0 at Fermilab. Now a mature detector technology, the use of Silicon detectors is no longer restricted to the vertex regions of collider experiments. The two multipurpose experiments ATLAS and CMS at the Large Hadron Collider at CERN contain large tracking detectors made of Silicon. The largest is the CMS Inner Tracker, consisting of 200 m² of Silicon sensor area. These detectors will be very important for a possible discovery of the Higgs boson or of supersymmetric particles. This paper explains the first applications of Silicon sensors in particle physics and describes the continuous development of this technology up to the construction of the state-of-the-art Silicon detector of CMS.

  5. New generation pharmacogenomic tools: a SNP linkage disequilibrium Map, validated SNP assay resource, and high-throughput instrumentation system for large-scale genetic studies.

    Science.gov (United States)

    De La Vega, Francisco M; Dailey, David; Ziegle, Janet; Williams, Julie; Madden, Dawn; Gilbert, Dennis A

    2002-06-01

    Since public and private efforts announced the first draft of the human genome last year, researchers have reported great numbers of single nucleotide polymorphisms (SNPs). We believe that the availability of well-mapped, quality SNP markers constitutes the gateway to a revolution in genetics and personalized medicine that will lead to better diagnosis and treatment of common complex disorders. A new generation of tools and public SNP resources for pharmacogenomic and genetic studies--specifically for candidate-gene, candidate-region, and whole-genome association studies--will form part of the new scientific landscape. This will only be possible through the greater accessibility of SNP resources and superior high-throughput instrumentation-assay systems that enable affordable, highly productive large-scale genetic studies. We are contributing to this effort by developing a high-quality linkage disequilibrium SNP marker map and an accompanying set of ready-to-use, validated SNP assays across every gene in the human genome. This effort incorporates both the public sequence and SNP data sources, and Celera Genomics' human genome assembly and enormous resource of physically mapped SNPs (approximately 4,000,000 unique records). This article discusses our approach and methodology for designing the map, choosing quality SNPs, designing and validating these assays, and obtaining population frequency of the polymorphisms. We also discuss an advanced, high-performance SNP assay chemistry--a new generation of the TaqMan probe-based, 5' nuclease assay--and a high-throughput instrumentation-software system for large-scale genotyping. We provide the new SNP map and validation information, validated SNP assays and reagents, and instrumentation systems as a novel resource for genetic discoveries.
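    The pairwise linkage disequilibrium underlying such a map is commonly summarised by the r² statistic between two SNPs. A small illustrative sketch (not the authors' pipeline; the haplotype vectors are made up):

```python
import numpy as np

def ld_r2(hap_a: np.ndarray, hap_b: np.ndarray) -> float:
    """r^2 linkage disequilibrium between two biallelic SNPs, given 0/1
    allele vectors observed over the same set of phased haplotypes."""
    p_a, p_b = hap_a.mean(), hap_b.mean()
    p_ab = np.mean(hap_a * hap_b)      # joint frequency of the 1/1 haplotype
    d = p_ab - p_a * p_b               # disequilibrium coefficient D
    return d**2 / (p_a * (1 - p_a) * p_b * (1 - p_b))

# Eight phased haplotypes at two nearby SNPs
a = np.array([1, 1, 0, 0, 1, 0, 1, 0])
b = np.array([1, 1, 0, 0, 1, 0, 0, 1])
r2 = ld_r2(a, b)   # partial LD; identical vectors would give r2 = 1
```

    Markers in high r² are redundant for association studies, which is why an LD map lets a study retain fewer, well-chosen tag SNPs.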

  6. Evaluation of tools for highly variable gene discovery from single-cell RNA-seq data.

    Science.gov (United States)

    Yip, Shun H; Sham, Pak Chung; Wang, Junwen

    2018-02-21

    Traditional RNA sequencing (RNA-seq) allows the detection of gene expression variations between two or more cell populations through differentially expressed gene (DEG) analysis. However, genes that contribute to cell-to-cell differences are not discoverable with RNA-seq because RNA-seq samples are obtained from a mixture of cells. Single-cell RNA-seq (scRNA-seq) allows the detection of gene expression in each cell. With scRNA-seq, highly variable gene (HVG) discovery allows the detection of genes that contribute strongly to cell-to-cell variation within a homogeneous cell population, such as a population of embryonic stem cells. This analysis is implemented in many software packages. In this study, we compare seven HVG methods from six software packages, including BASiCS, Brennecke, scLVM, scran, scVEGs and Seurat. Our results demonstrate that reproducibility in HVG analysis requires a larger sample size than DEG analysis. Discrepancies between methods and potential issues in these tools are discussed and recommendations are made.
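    The HVG idea these packages elaborate can be illustrated with the simple squared-coefficient-of-variation ranking that several of them build upon. This is a bare-bones sketch, not any one package's method (real tools additionally model the mean-variance trend and technical noise):

```python
import numpy as np

def rank_hvg(counts: np.ndarray, top: int) -> np.ndarray:
    """Rank genes by squared coefficient of variation (CV^2 = var / mean^2).
    counts: genes x cells expression matrix. Returns indices of top genes."""
    mean = counts.mean(axis=1)
    var = counts.var(axis=1)
    cv2 = np.where(mean > 0, var / np.maximum(mean, 1e-12) ** 2, 0.0)
    return np.argsort(cv2)[::-1][:top]

# Toy matrix: gene 1 varies strongly across cells, genes 0 and 2 barely do
X = np.array([[5, 5, 5, 5],
              [1, 9, 1, 9],
              [4, 6, 4, 6]], dtype=float)
hvg_order = rank_hvg(X, 3)
```

    The cell-to-cell variation that DEG analysis averages away is exactly what this per-gene dispersion measure captures.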

  7. Development of a Suite of Analytical Tools for Energy and Water Infrastructure Knowledge Discovery

    Science.gov (United States)

    Morton, A.; Piburn, J.; Stewart, R.; Chandola, V.

    2017-12-01

    Energy and water generation and delivery systems are inherently interconnected. With demand for energy growing, the energy sector is experiencing increasing competition for water. With increasing population and changing environmental, socioeconomic, and demographic scenarios, new technology and investment decisions must be made for optimized and sustainable energy-water resource management. This also requires novel scientific insights into the complex interdependencies of energy-water infrastructures across multiple space and time scales. To address this need, we've developed a suite of analytical tools to support an integrated data driven modeling, analysis, and visualization capability for understanding, designing, and developing efficient local and regional practices related to the energy-water nexus. This work reviews the analytical capabilities available along with a series of case studies designed to demonstrate the potential of these tools for illuminating energy-water nexus solutions and supporting strategic (federal) policy decisions.

  8. NASA Reverb: Standards-Driven Earth Science Data and Service Discovery

    Science.gov (United States)

    Cechini, M. F.; Mitchell, A.; Pilone, D.

    2011-12-01

    NASA's Earth Observing System Data and Information System (EOSDIS) is a core capability in NASA's Earth Science Data Systems Program. NASA's EOS ClearingHOuse (ECHO) is a metadata catalog for the EOSDIS, providing a centralized catalog of data products and registry of related data services. Working closely with the EOSDIS community, the ECHO team identified a need to develop the next-generation EOS data and service discovery tool. This development effort relied on the following principles: (1) metadata-driven user interface: users should be presented with data and service discovery capabilities based on dynamic processing of metadata describing the targeted data; (2) integrated data and service discovery: users should be able to discover data and associated data services that facilitate their research objectives; (3) leverage common standards: users should be able to discover and invoke services that utilize common interface standards. Metadata plays a vital role in facilitating data discovery and access. As data providers enhance their metadata, more advanced search capabilities become available, enriching a user's search experience. Maturing metadata formats such as ISO 19115 provide the necessary depth of metadata that facilitates advanced data discovery capabilities. Data discovery and access is not limited to simply the retrieval of data granules, but is growing into the more complex discovery of data services. These services include, but are not limited to, services facilitating additional data discovery, subsetting, reformatting, and re-projecting. The discovery and invocation of these data services is made significantly simpler through the use of consistent and interoperable standards. By utilizing an adopted standard, standard-specific adapters can be developed to communicate with multiple services implementing a specific protocol. The emergence of metadata standards such as ISO 19119 plays a similarly important role in discovery as the 19115 standard

  9. Maximum Entropy in Drug Discovery

    Directory of Open Access Journals (Sweden)

    Chih-Yuan Tseng

    2014-07-01

    Drug discovery applies multidisciplinary approaches, either experimentally, computationally or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, it has been used not only as a physical law but also as a reasoning tool that allows us to process the information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
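    The principle itself is easy to demonstrate with Jaynes' classic dice example: among all distributions over faces 1-6 with a fixed mean, the entropy-maximising one has the exponential (Gibbs) form p_i ∝ exp(λi). A minimal numerical sketch, unrelated to any specific drug discovery pipeline:

```python
import numpy as np
from scipy.optimize import brentq

def maxent_die(target_mean: float) -> np.ndarray:
    """Maximum-entropy distribution over die faces 1..6 with a given mean.
    Solves for the Lagrange multiplier lam in p_i proportional to exp(lam*i)."""
    faces = np.arange(1, 7)

    def mean_gap(lam):
        w = np.exp(lam * faces)
        return (w * faces).sum() / w.sum() - target_mean

    lam = brentq(mean_gap, -10, 10)   # root-find the multiplier
    p = np.exp(lam * faces)
    return p / p.sum()

p = maxent_die(4.5)   # a "loaded die" whose average roll is 4.5
```

    With the mean constrained above 3.5 the solution tilts toward high faces, while adding no structure beyond what the constraint demands, which is the "least bias" property the abstract refers to.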

  10. miRDis: a Web tool for endogenous and exogenous microRNA discovery based on deep-sequencing data analysis.

    Science.gov (United States)

    Zhang, Hanyuan; Vieira Resende E Silva, Bruno; Cui, Juan

    2018-05-01

    Small RNA sequencing is the most widely used tool for microRNA (miRNA) discovery, and shows great potential for the efficient study of miRNA cross-species transport, i.e., by detecting the presence of exogenous miRNA sequences in the host species. Because of the increased appreciation of dietary miRNAs and their far-reaching implication in human health, research interests are currently growing with regard to exogenous miRNAs bioavailability, mechanisms of cross-species transport and miRNA function in cellular biological processes. In this article, we present microRNA Discovery (miRDis), a new small RNA sequencing data analysis pipeline for both endogenous and exogenous miRNA detection. Specifically, we developed and deployed a Web service that supports the annotation and expression profiling data of known host miRNAs and the detection of novel miRNAs, other noncoding RNAs, and the exogenous miRNAs from dietary species. As a proof-of-concept, we analyzed a set of human plasma sequencing data from a milk-feeding study where 225 human miRNAs were detected in the plasma samples and 44 show elevated expression after milk intake. By examining the bovine-specific sequences, data indicate that three bovine miRNAs (bta-miR-378, -181* and -150) are present in human plasma possibly because of the dietary uptake. Further evaluation based on different sets of public data demonstrates that miRDis outperforms other state-of-the-art tools in both detection and quantification of miRNA from either animal or plant sources. The miRDis Web server is available at: http://sbbi.unl.edu/miRDis/index.php.

  11. Solution NMR Spectroscopy in Target-Based Drug Discovery.

    Science.gov (United States)

    Li, Yan; Kang, Congbao

    2017-08-23

    Solution NMR spectroscopy is a powerful tool to study protein structures and dynamics under physiological conditions. This technique is particularly useful in target-based drug discovery projects as it provides protein-ligand binding information in solution. Accumulated studies have shown that NMR will play more and more important roles in multiple steps of the drug discovery process. In a fragment-based drug discovery process, ligand-observed and protein-observed NMR spectroscopy can be applied to screen fragments with low binding affinities. The screened fragments can be further optimized into drug-like molecules. In combination with other biophysical techniques, NMR will guide structure-based drug discovery. In this review, we describe the possible roles of NMR spectroscopy in drug discovery. We also illustrate the challenges encountered in the drug discovery process. We include several examples demonstrating the roles of NMR in target-based drug discoveries such as hit identification, ranking ligand binding affinities, and mapping the ligand binding site. We also speculate the possible roles of NMR in target engagement based on recent processes in in-cell NMR spectroscopy.

  12. Tools and data services registry

    DEFF Research Database (Denmark)

    Ison, Jon; Rapacki, Kristoffer; Ménager, Hervé

    2016-01-01

    Life sciences are yielding huge data sets that underpin scientific discoveries fundamental to improvement in human health, agriculture and the environment. In support of these discoveries, a plethora of databases and tools are deployed, in technically complex and diverse implementations, across a...

  13. Discovery and publishing of primary biodiversity data associated with multimedia resources: The Audubon Core strategies and approaches

    Directory of Open Access Journals (Sweden)

    Robert A Morris

    2013-07-01

    The Audubon Core Multimedia Resource Metadata Schema is a representation-free vocabulary for the description of biodiversity multimedia resources and collections, now in the final stages as a proposed Biodiversity Information Standards (TDWG) standard. By defining only six terms as mandatory, it seeks to lighten the burden of providing or using multimedia useful for biodiversity science. At the same time it offers rich optional metadata terms that can help curators of multimedia collections provide authoritative media that document species occurrence, ecosystems, identification tools, ontologies, and many other kinds of biodiversity documents or data. About half of the vocabulary is re-used from other relevant controlled vocabularies that are often already in use for multimedia metadata, thereby reducing the mapping burden on existing repositories. A central design goal is to allow consuming applications a high likelihood of discovering suitable resources, reducing the human examination effort that might be required to decide if the resource is fit for the purpose of the application.

  14. Rediscovering Don Swanson: The Past, Present and Future of Literature-based Discovery

    Directory of Open Access Journals (Sweden)

    Neil R. Smalheiser

    2017-12-01

    Purpose: The late Don R. Swanson was well appreciated during his lifetime as Dean of the Graduate Library School at University of Chicago, as winner of the American Society for Information Science Award of Merit for 2000, and as author of many seminal articles. In this informal essay, I will give my personal perspective on Don's contributions to science, and outline some current and future directions in literature-based discovery that are rooted in concepts that he developed. Design/methodology/approach: Personal recollections and literature review. Findings: The Swanson A-B-C model of literature-based discovery has been successfully used by laboratory investigators analyzing their findings and hypotheses. It continues to be a fertile area of research in a wide range of application areas, including text mining, drug repurposing, studies of scientific innovation, knowledge discovery in databases, and bioinformatics. Recently, additional modes of discovery that do not follow the A-B-C model have also been proposed and explored (e.g. so-called storytelling, gaps, analogies, link prediction, negative consensus, outliers, and revival of neglected or discarded research questions). Research limitations: This paper reflects the opinions of the author and is not a comprehensive nor technically based review of literature-based discovery. Practical implications: The general scientific public is still not aware of the availability of tools for literature-based discovery. Our Arrowsmith project site maintains a suite of discovery tools that are free and open to the public (http://arrowsmith.psych.uic.edu), as does BITOLA, which is maintained by Dimitar Hristovski (http://ibmi.mf.uni-lj.si/bitola), and Epiphanet, which is maintained by Trevor Cohen (http://epiphanet.uth.tmc.edu/). Bringing user-friendly tools to the public should be a high priority, since even more than advancing basic research in informatics, it is vital that we ensure that scientists

  15. Assessing the role of learning devices and geovisualisation tools for collective action in natural resource management: Experiences from Vietnam.

    Science.gov (United States)

    Castella, Jean-Christophe

    2009-02-01

    In northern Vietnam uplands the successive policy reforms that accompanied agricultural decollectivisation triggered very rapid changes in land use in the 1990s. From a centralized system of natural resource management, a multitude of individual strategies emerged which contributed to new production interactions among farming households, changes in landscape structures, and conflicting strategies among local stakeholders. Within this context of agrarian transition, learning devices can help local communities to collectively design their own course of action towards sustainable natural resource management. This paper presents a collaborative approach combining a number of participatory methods and geovisualisation tools (e.g., spatially explicit multi-agent models and role-playing games) with the shared goal to analyse and represent the interactions between: (i) decision-making processes by individual farmers based on the resource profiles of their farms; (ii) the institutions which regulate resource access and usage; and (iii) the biophysical and socioeconomic environment. This methodological pathway is illustrated by a case study in Bac Kan Province where it successfully led to a communication platform on natural resource management. In a context of rapid socioeconomic changes, learning devices and geovisualisation tools helped embed the participatory approach within a process of community development. The combination of different tools, each with its own advantages and constraints, proved highly relevant for supporting collective natural resource management.

  16. Tools & Resources | Efficient Windows Collaborative

    Science.gov (United States)


  17. A survey of the neuroscience resource landscape: perspectives from the neuroscience information framework.

    Science.gov (United States)

    Cachat, Jonathan; Bandrowski, Anita; Grethe, Jeffery S; Gupta, Amarnath; Astakhov, Vadim; Imam, Fahim; Larson, Stephen D; Martone, Maryann E

    2012-01-01

    The number of neuroscience resources (databases, tools, materials, and networks) available via the Web continues to expand, particularly in light of newly implemented data sharing policies required by funding agencies and journals. However, the nature of dense, multifaceted neuroscience data and the design of classic search engine systems make efficient, reliable, and relevant discovery of such resources a significant challenge. This challenge is especially pertinent for online databases, whose dynamic content is largely opaque to contemporary search engines. The Neuroscience Information Framework (NIF) was initiated to address this problem of finding and utilizing neuroscience-relevant resources. Since its first production release in 2008, NIF has been surveying the resource landscape for the neurosciences, identifying relevant resources and working to make them easily discoverable by the neuroscience community. In this chapter, we provide a survey of the resource landscape for neuroscience: what types of resources are available, how many there are, what they contain, and most importantly, ways in which these resources can be utilized by the research community to advance neuroscience research. Copyright © 2012 Elsevier Inc. All rights reserved.

  18. Bioinformatics for discovery of microbiome variation

    DEFF Research Database (Denmark)

    Brejnrod, Asker Daniel

    Sequencing-based tools have revolutionized microbiology in recent years. High-throughput DNA sequencing has allowed high-resolution studies of microbial life in many different environments and at unprecedented low cost. These culture-independent methods have helped the discovery of novel bacteria […] of various molecular methods to build hypotheses about the impact of a copper-contaminated soil. The introduction is a broad introduction to the field of microbiome research, with a focus on the technologies that enable these discoveries and how some of the broader issues relate to this thesis. […] 1, “Large-scale benchmarking reveals false discoveries and count transformation sensitivity in 16S rRNA gene amplicon data analysis methods used in microbiome studies”, benchmarked the performance of a variety of popular statistical methods for discovering differentially abundant bacteria between […]

  19. Models, methods and software tools to evaluate the quality of informational and educational resources

    International Nuclear Information System (INIS)

    Gavrilov, S.I.

    2011-01-01

    The paper studies modern methods and tools for evaluating the quality of data systems, which make it possible to determine the specific features of informational and educational resources (IER). The author has developed a model of IER quality management covering all stages of the life cycle, and an integrated multi-level hierarchical system of IER quality assessment that takes into account both information properties and targeted resource assignment. The author presents a mathematical and algorithmic justification for solving the problem of IER quality management, and offers a data system to assess IER quality.

  20. MCM generator: a Java-based tool for generating medical metadata.

    Science.gov (United States)

    Munoz, F; Hersh, W

    1998-01-01

    In a previous paper we introduced the need for a mechanism to facilitate the discovery of relevant Web medical documents. We maintained that the use of META tags, specifically ones that define the medical subject and resource type of a document, helps towards this goal. We have now developed a tool to facilitate the generation of these tags for the authors of medical documents. Written entirely in Java, this tool makes use of the SAPHIRE server and helps the author identify the Medical Subject Heading terms that most appropriately describe the subject of the document. Furthermore, it allows the author to generate metadata tags for the 15 elements that the Dublin Core considers core elements in the description of a document. This paper describes the use of this tool in the cataloguing of Web and non-Web medical documents, such as image, movie, and sound files.
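The kind of output such a metadata generator produces can be sketched as follows (a minimal illustration of Dublin Core META tags, not the MCM generator's actual code; the helper name and field values are invented):

```python
# Render Dublin Core elements as HTML <meta> tags, in the style of
# "DC.<element>" naming used for embedded Dublin Core metadata.
def dublin_core_meta_tags(fields):
    return "\n".join(
        f'<meta name="DC.{element}" content="{value}">'
        for element, value in fields.items()
    )

tags = dublin_core_meta_tags({
    "title": "Chest Radiograph Atlas",
    "subject": "Radiography, Thoracic",  # e.g. a MeSH heading
    "type": "Image",
})
print(tags)
```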

  1. Beyond the International Year of Astronomy: The Universe Discovery Guides

    Science.gov (United States)

    Lawton, B.; Berendsen, M.; Gurton, S.; Smith, D.; NASA SMD Astrophysics EPO Community

    2014-07-01

    Developed for informal educators and their audiences, the 12 Universe Discovery Guides (UDGs, one per month) are adapted from the Discovery Guides that were developed for the International Year of Astronomy in 2009. The UDGs showcase education and public outreach resources from across more than 30 NASA astrophysics missions and programs. Via collaboration through scientist and educator partnerships, the UDGs aim to increase the impact of individual missions and programs, put their efforts into context, and extend their reach to new audiences. Each of the UDGs has a science topic, an interpretive story, a sky object to view with finding charts, hands-on activities, and connections to recent NASA science discoveries. The UDGs are modular; informal educators can take resources from the guides that they find most useful for their audiences. Attention is being given to audience needs, and field-testing is ongoing. The UDGs are available via downloadable PDFs.

  2. A Case Study Optimizing Human Resources in Rwanda's First Dental School: Three Innovative Management Tools.

    Science.gov (United States)

    Hackley, Donna M; Mumena, Chrispinus H; Gatarayiha, Agnes; Cancedda, Corrado; Barrow, Jane R

    2018-06-01

    Harvard School of Dental Medicine, University of Maryland School of Dentistry, and the University of Rwanda (UR) are collaborating to create Rwanda's first School of Dentistry as part of the Human Resources for Health (HRH) Rwanda initiative that aims to strengthen the health care system of Rwanda. The HRH oral health team developed three management tools to measure progress in systems-strengthening efforts: 1) the road map is an operations plan for the entire dental school and facilitates delivery of the curriculum and management of human and material resources; 2) each HRH U.S. faculty member develops a work plan with targeted deliverables for his or her rotation, which is facilitated with biweekly flash reports that measure progress and keep the faculty member focused on his or her specific deliverables; and 3) the redesigned HRH twinning model, changed from twinning of an HRH faculty member with a single Rwandan faculty member to twinning with multiple Rwandan faculty members based on shared academic interests and goals, has improved efficiency, heightened engagement of the UR dental faculty, and increased the impact of HRH U.S. faculty members. These new tools enable the team to measure its progress toward the collaborative's goals and understand the successes and challenges in moving toward the planned targets. The tools have been valuable instruments in fostering discussion around priorities and deployment of resources as well as in developing strong relationships, enabling two-way exchange of knowledge, and promoting sustainability.

  3. Applied metabolomics in drug discovery.

    Science.gov (United States)

    Cuperlovic-Culf, M; Culf, A S

    2016-08-01

    The metabolic profile is a direct signature of phenotype and biochemical activity following any perturbation. Metabolites are small molecules present in a biological system, including natural products as well as drugs and their metabolism by-products, depending on the biological system studied. Metabolomics can provide activity information about possible novel drugs and drug scaffolds, indicate interesting targets for drug development, and suggest binding partners of compounds. Furthermore, metabolomics can be used for the discovery of novel natural products and in drug development. Metabolomics can enhance the discovery and testing of new drugs and provide insight into the on- and off-target effects of drugs. This review focuses primarily on the application of metabolomics in the discovery of active drugs from natural products, the analysis of chemical libraries, and the computational analysis of metabolic networks. Metabolomics methodology, both experimental and analytical, is developing fast. At the same time, databases of compounds are ever growing with the inclusion of more molecular and spectral information, and an increasing number of systems are being represented by very detailed metabolic network models. Combining these experimental and computational tools with high-throughput drug testing and drug discovery techniques can provide promising new compounds and leads.

  4. Data access and decision tools for coastal water resources ...

    Science.gov (United States)

    US EPA has supported the development of numerous models and tools to support implementation of environmental regulations. However, transfer of knowledge and methods from detailed technical models to support practical problem solving by local communities and watershed or coastal management organizations remains a challenge. We have developed the Estuary Data Mapper (EDM) to facilitate data discovery, visualization and access to support environmental problem solving for coastal watersheds and estuaries. EDM is a stand-alone application based on open-source software which requires only internet access for operation. Initially, development of EDM focused on delivery of raw data streams from distributed web services, ranging from atmospheric deposition to hydrologic, tidal, and water quality time series, estuarine habitat characteristics, and remote sensing products. We have transitioned to include access to value-added products which provide end-users with results of future scenario analysis, facilitate extension of models across geographic regions, and/or promote model interoperability. Here we present three examples: 1) the delivery of input data for the development of seagrass models across estuaries, 2) scenarios illustrating the implications of riparian buffer management (loss or restoration) for stream thermal regimes and fish communities, and 3) access to hydrology model outputs to foster connections across models at different scales, ultimately feeding

  5. Using insects for STEM outreach: Development and evaluation of the UA Insect Discovery Program

    Science.gov (United States)

    Beal, Benjamin D.

    Science and technology impact most aspects of modern daily life. It is therefore important to create a scientifically literate society. Since the majority of Americans do not take college-level science courses, strong K-12 science education is essential. At the K-5 level, however, many teachers lack the time, resources, and background for effective science teaching. Elementary teachers and students may benefit from scientist-led outreach programs created by Cooperative Extension or other institutions. One example is the University of Arizona Insect Discovery Program, which provides short-duration programming that uses insects to support science content learning, teach critical thinking, and spark interest in science. We conducted evaluations of the Insect Discovery programming to determine whether the activities offered were accomplishing program goals. Pre-post tests, post-program questionnaires for teachers, and novel assessments of children's drawings were used as assessment tools. Assessments were complicated by the short duration of the program interactions with the children as well as their limited literacy. In spite of these difficulties, results of the pre-post tests indicated a significant impact on content knowledge and critical thinking skills. Based on post-program teacher questionnaires, positive impacts on interest in science learning were noted as much as a month after the children participated in the program. New programming and resources developed to widen the potential for impact are also described.

  6. The web server of IBM's Bioinformatics and Pattern Discovery group.

    Science.gov (United States)

    Huynh, Tien; Rigoutsos, Isidore; Parida, Laxmi; Platt, Daniel; Shibuya, Tetsuo

    2003-07-01

    We herein present and discuss the services and content which are available on the web server of IBM's Bioinformatics and Pattern Discovery group. The server is operational around the clock and provides access to a variety of methods that have been published by the group's members and collaborators. The available tools correspond to applications ranging from the discovery of patterns in streams of events and the computation of multiple sequence alignments, to the discovery of genes in nucleic acid sequences and the interactive annotation of amino acid sequences. Additionally, annotations for more than 70 archaeal, bacterial, eukaryotic and viral genomes are available on-line and can be searched interactively. The tools and code bundles can be accessed beginning at http://cbcsrv.watson.ibm.com/Tspd.html whereas the genomics annotations are available at http://cbcsrv.watson.ibm.com/Annotations/.

  7. American Recovery and Reinvestment Act-comparative effectiveness research infrastructure investments: emerging data resources, tools and publications.

    Science.gov (United States)

    Segal, Courtney; Holve, Erin

    2014-11-01

    The Recovery Act provided a substantial, one-time investment in data infrastructure for comparative effectiveness research (CER). A review of the publications, data, and tools developed as a result of this support has informed understanding of the level of effort undertaken by these projects. Structured search queries, as well as outreach efforts, were conducted to identify and review resources from American Recovery and Reinvestment Act of 2009 CER projects building electronic clinical data infrastructure. The findings from this study reveal a spectrum of productivity across a range of topics and settings. A total of 451 manuscripts published in 192 journals and 141 data resources and tools were identified; together they address gaps in evidence on priority populations and conditions, and in the infrastructure needed to support CER.

  8. Neopeptide Analyser: A software tool for neopeptide discovery in proteomics data [version 1; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Mandy Peffers

    2017-04-01

    Full Text Available Experiments involving mass spectrometry (MS)-based proteomics are widely used for analyses of connective tissues. Common examples include the use of relative quantification to identify differentially expressed peptides and proteins in cartilage and tendon. We are working on characterising so-called ‘neopeptides’, i.e. peptides formed by native cleavage of proteins, for example under pathological conditions. Unlike the peptides typically quantified in MS workflows through the in vitro use of an enzyme such as trypsin, a neopeptide has at least one terminus that was not produced by the trypsin used in the workflow. The identification of neopeptides within these datasets is important for understanding disease pathology, for developing antibodies that could be utilised as diagnostic biomarkers for diseases such as osteoarthritis, and as targets for novel treatments. Our previously described neopeptide data analysis workflow was laborious and not amenable to robust statistical analysis, which reduced confidence in the neopeptides identified. To overcome this, we developed ‘Neopeptide Analyser’, a user-friendly neopeptide analysis tool used in conjunction with the label-free MS quantification tool Progenesis QIP for proteomics. Neopeptide Analyser filters data sourced from Progenesis QIP output to identify neopeptide sequences, and also gives the residues that are adjacent to the peptide in its corresponding protein sequence. It also produces normalised values for the neopeptide quantification values and uses these to perform statistical tests, which are included in the output. Neopeptide Analyser is available as a Java application for Mac, Windows and Linux. The analysis features and ease of use encourage data exploration, which could aid the discovery of novel pathways in extracellular matrix degradation, the identification of potential biomarkers and the investigation of matrix turnover. Neopeptide Analyser is available from
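The core filtering idea (a peptide with at least one non-tryptic terminus is a candidate neopeptide) can be sketched in a few lines. This is an illustrative simplification, not Neopeptide Analyser's code; it assumes the standard trypsin rule of cleaving C-terminal to K or R except before P, and the sequences are invented:

```python
# Flag peptides with at least one non-tryptic terminus, given the parent
# protein sequence. Trypsin cleaves after K/R, but not when P follows.
def is_neopeptide(protein: str, peptide: str) -> bool:
    start = protein.find(peptide)
    if start == -1:
        raise ValueError("peptide not found in protein")
    end = start + len(peptide)
    # N-terminus is tryptic if at the protein start, or preceded by K/R
    # with the peptide not starting in P.
    n_tryptic = start == 0 or (protein[start - 1] in "KR" and peptide[0] != "P")
    # C-terminus is tryptic if at the protein end, or ending in K/R
    # with the next residue not P.
    c_tryptic = end == len(protein) or (peptide[-1] in "KR" and protein[end] != "P")
    return not (n_tryptic and c_tryptic)

protein = "MKAGRTLSPKVDE"
print(is_neopeptide(protein, "TLSPK"))  # False: both termini are tryptic
print(is_neopeptide(protein, "LSPK"))   # True: non-tryptic N-terminus
```

The real tool additionally maps adjacent residues, normalises quantification values, and runs statistical tests across samples.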

  9. Animal Resource Program | Center for Cancer Research

    Science.gov (United States)

    CCR Animal Resource Program The CCR Animal Resource Program plans, develops, and coordinates laboratory animal resources for CCR’s research programs. We also provide training, imaging, and technology development in support of moving basic discoveries to the clinic. The ARP Manager:

  10. Animal Resource Program | Center for Cancer Research

    Science.gov (United States)

    CCR Animal Resource Program The CCR Animal Resource Program plans, develops, and coordinates laboratory animal resources for CCR’s research programs. We also provide training, imaging, and technology development in support of moving basic discoveries to the clinic. The ARP Office:

  11. The Spiral Discovery Network as an Automated General-Purpose Optimization Tool

    Directory of Open Access Journals (Sweden)

    Adam B. Csapo

    2018-01-01

    Full Text Available The Spiral Discovery Method (SDM) was originally proposed as a cognitive artifact for dealing with black-box models that are dependent on multiple inputs with nonlinear and/or multiplicative interaction effects. Besides directly helping to identify functional patterns in such systems, SDM also simplifies their control through its characteristic spiral structure. In this paper, a neural network-based formulation of SDM is proposed, together with a set of automatic update rules that makes it suitable for both semiautomated and automated forms of optimization. The behavior of the generalized SDM model, referred to as the Spiral Discovery Network (SDN), and its applicability to nondifferentiable nonconvex optimization problems are elucidated through simulation. Based on the simulation, the case is made that its applicability would be worth investigating in all areas where the default approach of gradient-based backpropagation is used today.

  12. Orphan diseases: state of the drug discovery art.

    Science.gov (United States)

    Volmar, Claude-Henry; Wahlestedt, Claes; Brothers, Shaun P

    2017-06-01

    Since 1983, more than 300 drugs have been developed and approved for orphan diseases. However, given the development of novel diagnostic tools, the number of rare diseases vastly outpaces therapeutic discovery. Academic centers and nonprofit institutes are now at the forefront of rare disease R&D, partnering with pharmaceutical companies when academic researchers discover novel drugs or targets for specific diseases, thus reducing the failure risk and cost for pharmaceutical companies. Considerable progress has occurred in the art of orphan drug discovery, and a symbiotic relationship now exists between pharmaceutical industry, academia, and philanthropists that provides a useful framework for orphan disease therapeutic discovery. Here, the current state-of-the-art of drug discovery for orphan diseases is reviewed. Current technological approaches and challenges for drug discovery are considered, some of which present somewhat unique challenges and opportunities in orphan diseases, including the potential for personalized medicine, gene therapy, and phenotypic screening.

  13. High-Performance Integrated Virtual Environment (HIVE) Tools and Applications for Big Data Analysis.

    Science.gov (United States)

    Simonyan, Vahan; Mazumder, Raja

    2014-09-30

    The High-performance Integrated Virtual Environment (HIVE) is a high-throughput cloud-based infrastructure developed for the storage and analysis of genomic and associated biological data. HIVE consists of a web-accessible interface for authorized users to deposit, retrieve, share, annotate, compute and visualize Next-generation Sequencing (NGS) data in a scalable and highly efficient fashion. The platform contains a distributed storage library and a distributed computational powerhouse linked seamlessly. Resources available through the interface include algorithms, tools and applications developed exclusively for the HIVE platform, as well as commonly used external tools adapted to operate within the parallel architecture of the system. HIVE is composed of a flexible infrastructure, which allows for simple implementation of new algorithms and tools. Currently, available HIVE tools include sequence alignment and nucleotide variation profiling tools, metagenomic analyzers, phylogenetic tree-building tools using NGS data, clone discovery algorithms, and recombination analysis algorithms. In addition to tools, HIVE also provides knowledgebases that can be used in conjunction with the tools for NGS sequence and metadata analysis.

  14. High-Performance Integrated Virtual Environment (HIVE Tools and Applications for Big Data Analysis

    Directory of Open Access Journals (Sweden)

    Vahan Simonyan

    2014-09-01

    Full Text Available The High-performance Integrated Virtual Environment (HIVE) is a high-throughput cloud-based infrastructure developed for the storage and analysis of genomic and associated biological data. HIVE consists of a web-accessible interface for authorized users to deposit, retrieve, share, annotate, compute and visualize Next-generation Sequencing (NGS) data in a scalable and highly efficient fashion. The platform contains a distributed storage library and a distributed computational powerhouse linked seamlessly. Resources available through the interface include algorithms, tools and applications developed exclusively for the HIVE platform, as well as commonly used external tools adapted to operate within the parallel architecture of the system. HIVE is composed of a flexible infrastructure, which allows for simple implementation of new algorithms and tools. Currently, available HIVE tools include sequence alignment and nucleotide variation profiling tools, metagenomic analyzers, phylogenetic tree-building tools using NGS data, clone discovery algorithms, and recombination analysis algorithms. In addition to tools, HIVE also provides knowledgebases that can be used in conjunction with the tools for NGS sequence and metadata analysis.

  15. Remote access tools for optimization of hardware and workmanship resources

    International Nuclear Information System (INIS)

    Bartnig, Roberto; Diniz, Luciano; Ribeiro, Joao Luiz; Salcedo, Fernando

    2000-01-01

    During its 23 years, the Campos basin has gone through several generations of industrial automation, always seeking operational improvement and risk reduction, as well as greater reliability and precision in process control. The ECOS (Central Operational and Supervision Station) is a human-machine interface developed by PETROBRAS using graphic stations for the supervision and control of 16 offshore production units, which are nowadays responsible for about 77% of the oil production of the Campos basin (730,000 barrels/day). Through the use of software tools developed by the industrial automation support personnel, it was possible to optimize specialized labor resources and to take advantage of the periods of lower network usage. These tools monitor free disk space and fragmentation, onshore/offshore communication links, local network traffic and errors, and backups of process historical data and applications, in order to guarantee the operation of the network, the process data history, and the integrity of the production applications, improving safety and operational continuity. (author)

  16. Sea Level Rise Data Discovery

    Science.gov (United States)

    Quach, N.; Huang, T.; Boening, C.; Gill, K. M.

    2016-12-01

    Research related to sea level rise crosses multiple disciplines from sea ice to land hydrology. The NASA Sea Level Change Portal (SLCP) is a one-stop source for current sea level change information and data, including interactive tools for accessing and viewing regional data, a virtual dashboard of sea level indicators, and ongoing updates through a suite of editorial products that include content articles, graphics, videos, and animations. The architecture behind the SLCP makes it possible to integrate web content and data relevant to sea level change that are archived across various data centers as well as new data generated by sea level change principal investigators. The Extensible Data Gateway Environment (EDGE) is incorporated into the SLCP architecture to provide a unified platform for web content and science data discovery. EDGE is a data integration platform designed to facilitate high-performance geospatial data discovery and access with the ability to support multi-metadata standard specifications. EDGE has the capability to retrieve data from one or more sources and package the resulting sets into a single response to the requestor. With this unified endpoint, the Data Analysis Tool that is available on the SLCP can retrieve dataset and granule level metadata as well as perform geospatial search on the data. This talk focuses on the architecture that makes it possible to seamlessly integrate and enable discovery of disparate data relevant to sea level rise.
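The unified-endpoint pattern described for EDGE (one gateway that queries several sources and packages the result sets into a single response) can be sketched generically. This is only an illustration of the pattern; the source names, record fields, and function are invented and are not EDGE's actual API:

```python
# Query several metadata sources and merge their results into one response,
# tagging each record with the source it came from.
def unified_search(sources, query):
    response = {"query": query, "results": []}
    for name, search in sources.items():
        for record in search(query):
            response["results"].append({**record, "source": name})
    return response

# Stand-in "sources": in a real gateway these would be web-service clients.
sources = {
    "datacenter_a": lambda q: [{"dataset": "sea_surface_height"}],
    "datacenter_b": lambda q: [{"dataset": "tide_gauge_timeseries"}],
}
merged = unified_search(sources, "sea level")
print(merged)
```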

  17. Institutional profile: the national Swedish academic drug discovery & development platform at SciLifeLab.

    Science.gov (United States)

    Arvidsson, Per I; Sandberg, Kristian; Sakariassen, Kjell S

    2017-06-01

    The Science for Life Laboratory Drug Discovery and Development Platform (SciLifeLab DDD) was established in Stockholm and Uppsala, Sweden, in 2014. It is one of ten platforms of the Swedish national SciLifeLab which support projects run by Swedish academic researchers with large-scale technologies for molecular biosciences with a focus on health and environment. SciLifeLab was created by the coordinated effort of four universities in Stockholm and Uppsala: Stockholm University, Karolinska Institutet, KTH Royal Institute of Technology and Uppsala University, and has recently expanded to other Swedish university locations. The primary goal of the SciLifeLab DDD is to support selected academic discovery and development research projects with tools and resources to discover novel lead therapeutics, either molecules or human antibodies. Intellectual property developed with the help of SciLifeLab DDD is wholly owned by the academic research group. The bulk of SciLifeLab DDD's research and service activities are funded from the Swedish state, with only consumables paid by the academic research group through individual grants.

  18. National forecast for geothermal resource exploration and development with techniques for policy analysis and resource assessment

    Energy Technology Data Exchange (ETDEWEB)

    Cassel, T.A.V.; Shimamoto, G.T.; Amundsen, C.B.; Blair, P.D.; Finan, W.F.; Smith, M.R.; Edeistein, R.H.

    1982-03-31

    The background, structure and use of modern forecasting methods for estimating the future development of geothermal energy in the United States are documented. The forecasting instrument may be divided into two sequential submodels. The first predicts the timing and quality of future geothermal resource discoveries from an underlying resource base; this resource base represents an expansion of the widely publicized USGS Circular 790. The second submodel forecasts the rate and extent of utilization of geothermal resource discoveries. It is based on the joint investment behavior of resource developers and potential users, as statistically determined from extensive industry interviews. It is concluded that geothermal resource development, especially for electric power, will play an increasingly significant role in meeting US energy demands over the next two decades. Depending on the extent of R&D achievements in related areas of geoscience and technology, expected geothermal power development will reach between 7700 and 17,300 MWe by the year 2000. This represents between 8 and 18% of the expected electric energy demand (GWh) in western and northwestern states.

  19. A Tool and Process that Facilitate Community Capacity Building and Social Learning for Natural Resource Management

    Directory of Open Access Journals (Sweden)

    Christopher M. Raymond

    2013-03-01

    Full Text Available This study presents a self-assessment tool and process that facilitate community capacity building and social learning for natural resource management. The tool and process provide opportunities for rural landholders and project teams both to self-assess their capacity to plan and deliver natural resource management (NRM programs and to reflect on their capacities relative to other organizations and institutions that operate in their region. We first outline the tool and process and then present a critical review of the pilot in the South Australian Arid Lands NRM region, South Australia. Results indicate that participants representing local, organizational, and institutional tiers of government were able to arrive at a group consensus position on the strength, importance, and confidence of a variety of capacities for NRM categorized broadly as human, social, physical, and financial. During the process, participants learned a lot about their current capacity as well as capacity needs. Broad conclusions are discussed with reference to the iterative process for assessing and reflecting on community capacity.

  20. Lessons learned developing a diagnostic tool for HIV-associated dementia feasible to implement in resource-limited settings: pilot testing in Kenya.

    Directory of Open Access Journals (Sweden)

    Judith Kwasa

    Full Text Available To conduct a preliminary evaluation of the utility and reliability of a diagnostic tool for HIV-associated dementia (HAD) for use by primary health care workers (HCWs) that would be feasible to implement in resource-limited settings. In resource-limited settings, HAD is an indication for anti-retroviral therapy regardless of CD4 T-cell count, and anti-retroviral therapy, the treatment for HAD, is now increasingly available in these settings. Nonetheless, HAD remains under-diagnosed, likely because of limited clinical expertise and availability of diagnostic tests. Thus, a simple diagnostic tool that is practical to implement in resource-limited settings is an urgent need. A convenience sample of 30 HIV-infected outpatients was enrolled in Western Kenya. We assessed the sensitivity and specificity of a diagnostic tool for HAD as administered by a primary HCW. This was compared to an expert clinical assessment, which included examination by a physician, neuropsychological testing, and, in selected cases, brain imaging. Agreement between the HCW and an expert examiner on certain tool components was measured using the Kappa statistic. The sample was 57% male, mean age was 38.6 years, mean CD4 T-cell count was 323 cells/µL, and 54% had less than a secondary school education. Six (20%) of the subjects were diagnosed with HAD by expert clinical assessment. The diagnostic tool was 63% sensitive and 67% specific for HAD. Agreement between the HCW and expert examiners was poor for many individual items of the diagnostic tool (K = .03-.65). This diagnostic tool had moderate sensitivity and specificity for HAD. However, reliability was poor, suggesting that substantial training and formal evaluations of training adequacy will be critical to enable HCWs to reliably administer a brief diagnostic tool for HAD.
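The sensitivity and specificity figures reported above come from a standard confusion-matrix calculation, sketched below. The counts used here are illustrative only, not the study's raw data:

```python
# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
def sens_spec(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)  # fraction of true cases detected
    specificity = tn / (tn + fp)  # fraction of non-cases correctly ruled out
    return sensitivity, specificity

sens, spec = sens_spec(tp=5, fn=3, tn=14, fp=7)
print(round(sens, 2), round(spec, 2))  # 0.62 0.67
```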

  1. What’s in a word?: Rethinking facet headings in a discovery service

    Directory of Open Access Journals (Sweden)

    David Nelson

    2015-06-01

Full Text Available The emergence of discovery systems has been well received by libraries, which have long been concerned with offering a smorgasbord of databases that require either individual searching of databases or the problematic use of federated searching.  The ability to search across a wide array of subscribed and open-access information resources via a centralized index has opened up access for users to a library’s wealth of information resources.  This capability has been particularly praised for its ‘Google-like’ search interface, thereby conforming to user expectations for information searching.  Yet all discovery services also include facets as a search capability and thus provide faceted navigation, a search feature that Google is not particularly well suited for.  Discovery services thus provide a hybrid search interface.  An examination of e-commerce sites clearly shows that faceted navigation is an integral part of their discovery systems.  Many library OPACs are also now being developed with faceted navigation capabilities.  However, the faceted structures of discovery services suffer from a number of problems which inhibit their usefulness and their potential.  This article examines a number of these issues and offers suggestions for improving the discovery search interface.  It also argues that vendors and libraries need to work together to more closely analyze the user experience of the discovery system.

  2. Gas reserves, discoveries and production

    International Nuclear Information System (INIS)

    Saniere, A.

    2006-01-01

Between 2000 and 2004, new discoveries, located mostly in the Asia/Pacific region, permitted a 71% produced reserve replacement rate. The Middle East and the offshore sector represent a growing proportion of world gas production. Non-conventional gas resources are substantial but are not exploited to any significant extent, except in the United States, where they account for 30% of U.S. gas production. (author)

  3. INTEGRATING CORPUS-BASED RESOURCES AND NATURAL LANGUAGE PROCESSING TOOLS INTO CALL

    Directory of Open Access Journals (Sweden)

    Pascual Cantos Gomez

    2002-06-01

Full Text Available This paper aims at presenting a survey of computational linguistic tools presently available but whose potential has been neither fully considered nor exploited to the full in modern CALL. It starts with a discussion on the rationale of DDL to language learning, presenting typical DDL activities, DDL software and potential extensions of non-typical DDL software (electronic dictionaries and electronic dictionary facilities) to DDL. An extended section is devoted to describing NLP technology and how it can be integrated into CALL, within already existing software or as stand-alone resources. A range of NLP tools is presented (MT programs, taggers, lemmatizers, parsers and speech technologies) with special emphasis on tagged concordancing. The paper finishes with a number of reflections and ideas on how language technologies can be used efficiently within the language learning context and how extensive exploration and integration of these technologies might change and extend both modern CALL and the present language learning paradigm.

  4. Tools and measures for stimulation the efficient energy consumption. Integrated resource planning in Romania

    International Nuclear Information System (INIS)

    Scripcariu, Daniela; Scripcariu, Mircea; Leca, Aureliu

    1996-01-01

The integrated resource planning is based on analyses of energy generation and energy consumption as a whole. Thus, increasing energy efficiency appears to be the cheapest, most available and most cost-effective energy resource. In order to stimulate the increase of efficiency of energy consumption, besides economic efficiency criteria for selecting technical solutions, additional tools and measures are necessary. The paper presents the main tools and measures needed to foster efficient energy consumption. Actions meant to stimulate DSM (Demand-Side Management) implementation in Romania are proposed. The paper contains 5 sections. In the introduction, the main aspects of DSM are considered, namely, where the programs are implemented, who is responsible, which are the objectives and, finally, how the DSM programs are implemented. The following tools in the management of energy use are examined: energy prices, regulation in the field of energy efficiency, standards and norms, energy labelling of products and energy education. Among the measures for managing energy use, the paper takes into consideration the institutions responsible for DSM, for instance, the Romanian Agency for Energy Conservation (ARCE), decentralization of decision making, the program approaches and the financing of actions aiming at improving energy efficiency. Finally, the paper analyses the criteria for choosing adequate solutions for improving energy efficiency.

  5. Technical tools and of information for the quality administration of the water resource

    International Nuclear Information System (INIS)

    Escobar Martinez, John Fernando; Sierra Ramirez, Carlos; Molina Perez, Francisco

    2000-01-01

Given the complexity of an aquatic ecosystem and the impossibility of making experiments on a real scale, the water quality engineer represents the different reactions and interactions that happen in these ecosystems using mathematical models. This constitutes a powerful tool that allows prospective studies and helps in decision-making. On the other hand, the huge volumes of information produced by geographical space analysis and the large number of variables involved make Geographical Information Systems (GIS) a powerful tool for analysis, modeling and simulation tasks on a defined area and the processes related to it. This article presents an integration proposal that associates the spatial analysis and visual representation capabilities of a GIS application with the water quality results obtained from a mathematical model, in such a way that user interaction with the information is increased and the development of new tools supports decision-making and administrative processes in the management of water resources.

  6. Discovery Mondays - The Web of the future: a calculator for the planet

    CERN Multimedia

    2003-01-01

    Physics is hungry for bytes. The LHC experiments will produce 10 petabytes (a 1 followed by 16 zeros) each year, enough to fill 16 million CD-ROMs. CERN is introducing some futuristic computing tools to process, manage and store this phenomenal flow of data. The most spectacular among them is without doubt the Grid, a development of the Web, which will make it possible to pool the computing resources of thousands of computers distributed around the world. The next Discovery Monday will offer you a glimpse into how this super computer works. Come and watch demonstrations of the Grid in action for such projects as UNOSAT, which gathers geographic data by satellite. Become one of the first users of the Grid by sending a job. In the course of the evening another cutting-edge tool will be unveiled. EDMS is a system which enables some 6000 scientists from around the world to communicate and track in real time all the technical documentation on the million-odd components for the LHC and its experiments. You will r...

  7. Optogenetic Approaches to Drug Discovery in Neuroscience and Beyond.

    Science.gov (United States)

    Zhang, Hongkang; Cohen, Adam E

    2017-07-01

    Recent advances in optogenetics have opened new routes to drug discovery, particularly in neuroscience. Physiological cellular assays probe functional phenotypes that connect genomic data to patient health. Optogenetic tools, in particular tools for all-optical electrophysiology, now provide a means to probe cellular disease models with unprecedented throughput and information content. These techniques promise to identify functional phenotypes associated with disease states and to identify compounds that improve cellular function regardless of whether the compound acts directly on a target or through a bypass mechanism. This review discusses opportunities and unresolved challenges in applying optogenetic techniques throughout the discovery pipeline - from target identification and validation, to target-based and phenotypic screens, to clinical trials. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Novel Tools for Conservation Genomics: Comparing Two High-Throughput Approaches for SNP Discovery in the Transcriptome of the European Hake

    DEFF Research Database (Denmark)

    Milano, Ilaria; Babbucci, Massimiliano; Panitz, Frank

    2011-01-01

    The growing accessibility to genomic resources using next-generation sequencing (NGS) technologies has revolutionized the application of molecular genetic tools to ecology and evolutionary studies in non-model organisms. Here we present the case study of the European hake (Merluccius merluccius),...

  9. Application of genomic tools in plant breeding.

    Science.gov (United States)

    Pérez-de-Castro, A M; Vilanova, S; Cañizares, J; Pascual, L; Blanca, J M; Díez, M J; Prohens, J; Picó, B

    2012-05-01

Plant breeding has been very successful in developing improved varieties using conventional tools and methodologies. Nowadays, the availability of genomic tools and resources is leading to a new revolution of plant breeding, as they facilitate the study of the genotype and its relationship with the phenotype, in particular for complex traits. Next Generation Sequencing (NGS) technologies are allowing the mass sequencing of genomes and transcriptomes, which is producing a vast array of genomic information. The analysis of NGS data by means of bioinformatics developments allows the discovery of new genes and regulatory sequences and their positions, and makes available large collections of molecular markers. Genome-wide expression studies provide breeders with an understanding of the molecular basis of complex traits. Genomic approaches include TILLING and EcoTILLING, which make it possible to screen mutant and germplasm collections for allelic variants in target genes. Re-sequencing of genomes is very useful for the genome-wide discovery of markers amenable to high-throughput genotyping platforms, like SSRs and SNPs, or the construction of high-density genetic maps. All these tools and resources facilitate studying the genetic diversity, which is important for germplasm management, enhancement and use. Also, they allow the identification of markers linked to genes and QTLs, using a diversity of techniques like bulked segregant analysis (BSA), fine genetic mapping, or association mapping. These new markers are used for marker-assisted selection, including marker-assisted backcross selection, 'breeding by design', or new strategies, like genomic selection. In conclusion, advances in genomics are providing breeders with new tools and methodologies that allow a great leap forward in plant breeding, including the 'superdomestication' of crops and the genetic dissection and breeding for complex traits.

  10. Pharmacogenetics in type 2 diabetes: precision medicine or discovery tool?

    Science.gov (United States)

    Florez, Jose C

    2017-05-01

    In recent years, technological and analytical advances have led to an explosion in the discovery of genetic loci associated with type 2 diabetes. However, their ability to improve prediction of disease outcomes beyond standard clinical risk factors has been limited. On the other hand, genetic effects on drug response may be stronger than those commonly seen for disease incidence. Pharmacogenetic findings may aid in identifying new drug targets, elucidate pathophysiology, unravel disease heterogeneity, help prioritise specific genes in regions of genetic association, and contribute to personalised or precision treatment. In diabetes, precedent for the successful application of pharmacogenetic concepts exists in its monogenic subtypes, such as MODY or neonatal diabetes. Whether similar insights will emerge for the much more common entity of type 2 diabetes remains to be seen. As genetic approaches advance, the progressive deployment of candidate gene, large-scale genotyping and genome-wide association studies has begun to produce suggestive results that may transform clinical practice. However, many barriers to the translation of diabetes pharmacogenetic discoveries to the clinic still remain. This perspective offers a contemporary overview of the field with a focus on sulfonylureas and metformin, identifies the major uses of pharmacogenetics, and highlights potential limitations and future directions.

  11. Rough Sets as a Knowledge Discovery and Classification Tool for the Diagnosis of Students with Learning Disabilities

    Directory of Open Access Journals (Sweden)

    Yu-Chi Lin

    2011-02-01

Full Text Available Due to the implicit characteristics of learning disabilities (LDs), the diagnosis of students with learning disabilities has long been a difficult issue. Artificial intelligence techniques like artificial neural networks (ANN) and support vector machines (SVM) have been applied to the LD diagnosis problem with satisfactory outcomes. However, special education teachers or professionals tend to be skeptical of these kinds of black-box predictors. In this study, we adopt the rough set theory (RST), which can not only perform as a classifier, but may also produce meaningful explanations or rules, for the LD diagnosis application. Our experiments indicate that the RST approach is competitive as a tool for feature selection, and it performs better in terms of prediction accuracy than other rule-based algorithms such as decision tree and RIPPER algorithms. We also propose to mix samples collected from sources with different LD diagnosis procedures and criteria. By pre-processing these mixed samples with simple and readily available clustering algorithms, we are able to improve the quality and support of rules generated by the RST. Overall, our study shows that the rough set approach, as a classification and knowledge discovery tool, may have great potential in playing an essential role in LD diagnosis.
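The core construction behind the rough-set classifier described above is the pair of lower and upper approximations of a concept under the indiscernibility relation induced by condition attributes. A minimal sketch with invented attribute data (not the study's LD dataset):

```python
from collections import defaultdict

def approximations(objects, attrs, concept):
    """Lower/upper approximation of `concept` under the indiscernibility
    relation induced by the condition attributes `attrs`."""
    # Partition objects into equivalence classes: same values on attrs.
    classes = defaultdict(set)
    for name, features in objects.items():
        classes[tuple(features[a] for a in attrs)].add(name)
    lower, upper = set(), set()
    for block in classes.values():
        if block <= concept:   # the whole block is certainly in the concept
            lower |= block
        if block & concept:    # the block possibly overlaps the concept
            upper |= block
    return lower, upper
```

Objects in the upper but not the lower approximation form the boundary region; in a diagnosis setting these are exactly the cases the attribute set cannot decide, which is what makes the approach interpretable.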

  12. Computational neuropharmacology: dynamical approaches in drug discovery.

    Science.gov (United States)

    Aradi, Ildiko; Erdi, Péter

    2006-05-01

    Computational approaches that adopt dynamical models are widely accepted in basic and clinical neuroscience research as indispensable tools with which to understand normal and pathological neuronal mechanisms. Although computer-aided techniques have been used in pharmaceutical research (e.g. in structure- and ligand-based drug design), the power of dynamical models has not yet been exploited in drug discovery. We suggest that dynamical system theory and computational neuroscience--integrated with well-established, conventional molecular and electrophysiological methods--offer a broad perspective in drug discovery and in the search for novel targets and strategies for the treatment of neurological and psychiatric diseases.

  13. Developing integrated crop knowledge networks to advance candidate gene discovery.

    Science.gov (United States)

    Hassani-Pak, Keywan; Castellote, Martin; Esch, Maria; Hindle, Matthew; Lysenko, Artem; Taubert, Jan; Rawlings, Christopher

    2016-12-01

    The chances of raising crop productivity to enhance global food security would be greatly improved if we had a complete understanding of all the biological mechanisms that underpinned traits such as crop yield, disease resistance or nutrient and water use efficiency. With more crop genomes emerging all the time, we are nearer having the basic information, at the gene-level, to begin assembling crop gene catalogues and using data from other plant species to understand how the genes function and how their interactions govern crop development and physiology. Unfortunately, the task of creating such a complete knowledge base of gene functions, interaction networks and trait biology is technically challenging because the relevant data are dispersed in myriad databases in a variety of data formats with variable quality and coverage. In this paper we present a general approach for building genome-scale knowledge networks that provide a unified representation of heterogeneous but interconnected datasets to enable effective knowledge mining and gene discovery. We describe the datasets and outline the methods, workflows and tools that we have developed for creating and visualising these networks for the major crop species, wheat and barley. We present the global characteristics of such knowledge networks and with an example linking a seed size phenotype to a barley WRKY transcription factor orthologous to TTG2 from Arabidopsis, we illustrate the value of integrated data in biological knowledge discovery. The software we have developed (www.ondex.org) and the knowledge resources (http://knetminer.rothamsted.ac.uk) we have created are all open-source and provide a first step towards systematic and evidence-based gene discovery in order to facilitate crop improvement.

  14. KML (Keyhole Markup Language) : a key tool in the education of geo-resources.

    Science.gov (United States)

    Veltz, Isabelle

    2015-04-01

    Although going on the ground with pupils remains the best way to understand the geologic structure of a deposit, it is very difficult to bring them in a mining extraction site and it is impossible to explore whole regions in search of these resources. For those reasons the KML (with the Google earth interface) is a very complete tool for teaching geosciences. Simple and intuitive, its handling is quickly mastered by the pupils, it also allows the teachers to validate skills for IT certificates. It allows the use of KML files stemming from online banks, from personal productions of the teacher or from pupils' works. These tools offer a global approach in 3D as well as a geolocation-based access to any type of geological data. The resource on which I built this KML is taught in the curriculum of the 3 years of French high school, it is methane hydrate. This non conventional hydrocarbon molecule enters in this vague border between mineral an organic matter (as phosphate deposits). It has become for over ten year the subject of the race for the exploitation of the gas hydrates fields in order to try to supply to the world demand. The methane hydrate fields are very useful and interesting to study the 3 majors themes of geological resource: the exploration, the exploitation and the risks especially for environments and populations. The KML which I propose allows the pupils to put itself in the skin of a geologist in search of deposits or on the technician who is going to extract the resource. It also allows them to evaluate the risks connected to the effect of tectonics activity or climatic changes on the natural or catastrophic releasing of methane and its role in the increase of the greenhouse effect. This KML associated to plenty of pedagogic activities is directly downloadable for teachers at http://eduterre.ens-lyon.fr/eduterre-usages/actualites/methane/.

  15. Extended statistical entropy analysis as a quantitative management tool for water resource systems

    Science.gov (United States)

    Sobantka, Alicja; Rechberger, Helmut

    2010-05-01

The use of entropy in hydrology and water resources has been applied to various applications. As water resource systems are inherently spatial and complex, a stochastic description of these systems is needed, and entropy theory enables the development of such a description by providing determination of the least-biased probability distributions with limited knowledge and data. Entropy can also serve as a basis for risk and reliability analysis. The relative entropy has been variously interpreted as a measure of freedom of choice, uncertainty and disorder, information content, missing information, or information gain or loss. In the analysis of empirical data, entropy is another measure of dispersion, an alternative to the variance. Also, as an evaluation tool, statistical entropy analysis (SEA) has been developed by previous workers to quantify the power of a process to concentrate chemical elements. Within this research programme the SEA is to be extended for application to chemical compounds and tested for its deficits and potentials in systems where water resources play an important role. The extended SEA (eSEA) will be developed first for the nitrogen balance in waste water treatment plants (WWTPs). Later applications to the emission of substances to water bodies such as groundwater (e.g. leachate from landfills) will also be possible. By applying eSEA to the nitrogen balance in a WWTP, all possible nitrogen compounds which may occur during the water treatment process are taken into account and quantified in their impact on the environment and human health. It has been shown that entropy-reducing processes are part of modern waste management. Generally, materials management should be performed in a way that avoids a significant rise in entropy. The entropy metric might also be used to perform benchmarking on WWTPs. The result of this management tool would be the determination of the efficiency of WWTPs. By improving and optimizing the efficiency
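The statistical entropy analysis sketched above rests on Shannon's entropy of a distribution of mass fractions; normalizing by the maximum possible entropy allows comparison across systems with different numbers of compounds. A minimal sketch, with invented species fractions (not the eSEA formulation itself):

```python
import math

def shannon_entropy(fractions):
    """Shannon entropy (bits) of a discrete distribution; zero terms are skipped."""
    return -sum(f * math.log2(f) for f in fractions if f > 0)

def relative_entropy(fractions):
    """Entropy normalized by its maximum, log2(k) for k categories, so 0..1."""
    k = len(fractions)
    return shannon_entropy(fractions) / math.log2(k) if k > 1 else 0.0
```

A stream fully concentrated in a single compound has entropy 0, while an even split across compounds maximizes it; a process that lowers the entropy of a substance balance is, in this sense, concentrating rather than dispersing it.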

  16. The Computer Revolution in Science: Steps towards the realization of computer-supported discovery environments

    NARCIS (Netherlands)

    de Jong, Hidde; Rip, Arie

    1997-01-01

    The tools that scientists use in their search processes together form so-called discovery environments. The promise of artificial intelligence and other branches of computer science is to radically transform conventional discovery environments by equipping scientists with a range of powerful

  17. Engaging Scientists in Meaningful E/PO: The Universe Discovery Guides

    Science.gov (United States)

    Meinke, B. K.; Lawton, B.; Gurton, S.; Smith, D. A.; Manning, J. G.

    2014-12-01

    For the 2009 International Year of Astronomy, the then-existing NASA Origins Forum collaborated with the Astronomical Society of the Pacific (ASP) to create a series of monthly "Discovery Guides" for informal educator and amateur astronomer use in educating the public about featured sky objects and associated NASA science themes. Today's NASA Astrophysics Science Education and Public Outreach Forum (SEPOF), one of a new generation of forums coordinating the work of NASA Science Mission Directorate (SMD) EPO efforts—in collaboration with the ASP and NASA SMD missions and programs--has adapted the Discovery Guides into "evergreen" educational resources suitable for a variety of audiences. The Guides focus on "deep sky" objects and astrophysics themes (stars and stellar evolution, galaxies and the universe, and exoplanets), showcasing EPO resources from more than 30 NASA astrophysics missions and programs in a coordinated and cohesive "big picture" approach across the electromagnetic spectrum, grounded in best practices to best serve the needs of the target audiences. Each monthly guide features a theme and a representative object well-placed for viewing, with an accompanying interpretive story, finding charts, strategies for conveying the topics, and complementary supporting NASA-approved education activities and background information from a spectrum of NASA missions and programs. The Universe Discovery Guides are downloadable from the NASA Night Sky Network web site at nightsky.jpl.nasa.gov. We will share the Forum-led Collaborative's experience in developing the guides, how they place individual science discoveries and learning resources into context for audiences, and how the Guides can be readily used in scientist public outreach efforts, in college and university introductory astronomy classes, and in other engagements between scientists, students and the public.

  18. SNP-PHAGE – High throughput SNP discovery pipeline

    Directory of Open Access Journals (Sweden)

    Cregan Perry B

    2006-10-01

Full Text Available Abstract Background Single nucleotide polymorphisms (SNPs) as defined here are single-base sequence changes or short insertions/deletions between or within individuals of a given species. As a result of their abundance and the availability of high-throughput analysis technologies, SNP markers have begun to replace other traditional markers such as restriction fragment length polymorphisms (RFLPs), amplified fragment length polymorphisms (AFLPs) and simple sequence repeats (SSRs, or microsatellite markers) for fine mapping and association studies in several species. For SNP discovery from chromatogram data, several bioinformatics programs have to be combined to generate an analysis pipeline. Results have to be stored in a relational database to facilitate interrogation through queries or to generate data for further analyses such as determination of linkage disequilibrium and identification of common haplotypes. Although these tasks are routinely performed by several groups, an integrated open-source SNP discovery pipeline that can be easily adapted by new groups interested in SNP marker development is currently unavailable. Results We developed SNP-PHAGE (SNP discovery Pipeline with additional features for identification of common haplotypes within a sequence-tagged site (Haplotype Analysis) and GenBank/dbSNP submissions). This tool was applied for analyzing sequence traces from diverse soybean genotypes to discover over 10,000 SNPs. This package was developed on a UNIX/Linux platform, written in Perl, and uses a MySQL database. Scripts to generate a user-friendly web interface are also provided, with common queries for preliminary data analysis. A machine learning tool developed by this group for increasing the efficiency of SNP discovery is integrated as part of this package as an optional feature. The SNP-PHAGE package is being made available open source at http://bfgl.anri.barc.usda.gov/ML/snp-phage/. Conclusion SNP-PHAGE provides a bioinformatics
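As a toy illustration of the kind of comparison such a pipeline automates (SNP-PHAGE itself is a Perl/MySQL package working from chromatogram traces; this unrelated sketch works from already-aligned sequences), a naive caller can flag alignment columns where sequences disagree:

```python
def find_snps(aligned):
    """Naive SNP caller for toy data: report 0-based alignment columns where
    at least two distinct bases occur (gaps '-' and ambiguous 'N' ignored)."""
    snps = []
    for i, column in enumerate(zip(*aligned)):
        bases = {b for b in column if b not in "-N"}
        if len(bases) > 1:
            snps.append((i, sorted(bases)))
    return snps
```

A real pipeline additionally weighs base-quality scores from the chromatograms and stores calls in a database for downstream haplotype and linkage-disequilibrium analysis, which is the part SNP-PHAGE integrates.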

  19. Is the GAIN Act a turning point in new antibiotic discovery?

    Science.gov (United States)

    Brown, Eric D

    2013-03-01

    The United States GAIN (Generating Antibiotic Incentives Now) Act is a call to action for new antibiotic discovery and development that arises from a ground swell of concern over declining activity in this therapeutic area in the pharmaceutical sector. The GAIN Act aims to provide economic incentives for antibiotic drug discovery in the form of market exclusivity and accelerated drug approval processes. The legislation comes on the heels of nearly two decades of failure using the tools of modern drug discovery to find new antibiotic drugs. The lessons of failure are examined herein as are the prospects for a renewed effort in antibiotic drug discovery and development stimulated by new investments in both the public and private sector.

  20. Trellis Tone Modulation Multiple-Access for Peer Discovery in D2D Networks

    Directory of Open Access Journals (Sweden)

    Chiwoo Lim

    2018-04-01

Full Text Available In this paper, a new non-orthogonal multiple-access scheme, trellis tone modulation multiple-access (TTMMA), is proposed for peer discovery in distributed device-to-device (D2D) communication. The range and capacity of discovery are important performance metrics in peer discovery. The proposed trellis tone modulation uses single-tone transmission and achieves a long discovery range due to its low Peak-to-Average Power Ratio (PAPR). The TTMMA also exploits non-orthogonal resource assignment to increase the discovery capacity. For the multi-user detection of superposed multiple-access signals, a message-passing algorithm with supplementary schemes is proposed. With TTMMA and its message-passing demodulation, approximately 1.5 times the number of devices are discovered compared to conventional frequency division multiple-access (FDMA)-based discovery.
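The PAPR argument above is easy to check numerically: a single real tone has a peak-to-average power ratio of 2 (about 3 dB), while superposing tones raises the peak relative to the average power. A minimal sketch (illustrative tone frequencies, not the TTMMA waveform):

```python
import math

def papr(samples):
    """Peak-to-average power ratio of a real-valued sample sequence."""
    powers = [s * s for s in samples]
    return max(powers) / (sum(powers) / len(powers))

N = 4096
# One tone: 8 full cycles over N samples.
single = [math.sin(2 * math.pi * 8 * i / N) for i in range(N)]
# Four superposed tones, as in multi-carrier transmission.
multi = [sum(math.sin(2 * math.pi * f * i / N) for f in (5, 11, 17, 23))
         for i in range(N)]
```

A lower PAPR lets the power amplifier run closer to saturation without clipping, which is why single-tone transmission extends the discovery range.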

  1. Understanding Cancer Prognosis

    Medline Plus

  2. Reducing traffic in DHT-based discovery protocols for dynamic resources

    Science.gov (United States)

    Carlini, Emanuele; Coppola, Massimo; Laforenza, Domenico; Ricci, Laura

Existing peer-to-peer approaches for resource location based on distributed hash tables focus mainly on optimizing lookup query resolution. The underlying assumption is that the arrival ratio of lookup queries is higher than the ratio of resource publication operations. We propose a set of optimization strategies to reduce the network traffic generated by the data publication and update process when resources have dynamic-valued attributes. We aim at reducing the publication overhead of supporting multi-attribute range queries. We develop a model predicting the bandwidth reduction, and we assign proper values to the model variables on the basis of real data measurements. We further validate these results by a set of simulations. Our experiments are designed to reproduce the typical behaviour of the resulting scheme within a large distributed resource location system, such as the resource location service of the XtreemOS Grid-enabled Operating System.
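One common way to cut publication traffic for dynamic-valued attributes (a generic illustration; the paper's actual strategies and bandwidth model are more elaborate) is to re-publish only when a value drifts outside a tolerance band around the last published value:

```python
class DampedPublisher:
    """Re-publish a dynamic attribute (e.g. free CPU %) only when it moves
    more than `tolerance` away from the last value pushed to the DHT."""

    def __init__(self, tolerance):
        self.tolerance = tolerance
        self.last = None   # last value actually published
        self.sent = 0      # number of publish operations issued

    def update(self, value):
        if self.last is None or abs(value - self.last) > self.tolerance:
            self.last = value
            self.sent += 1
            return True    # would trigger a DHT put here
        return False       # suppressed: still within the tolerance band
```

The trade-off is staleness: range queries may match nodes whose true value has drifted by up to the tolerance, so the band width bounds both the saved traffic and the query error.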

  3. IMG-ABC: A Knowledge Base To Fuel Discovery of Biosynthetic Gene Clusters and Novel Secondary Metabolites.

    Science.gov (United States)

    Hadjithomas, Michalis; Chen, I-Min Amy; Chu, Ken; Ratner, Anna; Palaniappan, Krishna; Szeto, Ernest; Huang, Jinghua; Reddy, T B K; Cimermančič, Peter; Fischbach, Michael A; Ivanova, Natalia N; Markowitz, Victor M; Kyrpides, Nikos C; Pati, Amrita

    2015-07-14

    In the discovery of secondary metabolites, analysis of sequence data is a promising exploration path that remains largely underutilized due to the lack of computational platforms that enable such a systematic approach on a large scale. In this work, we present IMG-ABC (https://img.jgi.doe.gov/abc), an atlas of biosynthetic gene clusters within the Integrated Microbial Genomes (IMG) system, which is aimed at harnessing the power of "big" genomic data for discovering small molecules. IMG-ABC relies on IMG's comprehensive integrated structural and functional genomic data for the analysis of biosynthetic gene clusters (BCs) and associated secondary metabolites (SMs). SMs and BCs serve as the two main classes of objects in IMG-ABC, each with a rich collection of attributes. A unique feature of IMG-ABC is the incorporation of both experimentally validated and computationally predicted BCs in genomes as well as metagenomes, thus identifying BCs in uncultured populations and rare taxa. We demonstrate the strength of IMG-ABC's focused integrated analysis tools in enabling the exploration of microbial secondary metabolism on a global scale, through the discovery of phenazine-producing clusters for the first time in Alphaproteobacteria. IMG-ABC strives to fill the long-existent void of resources for computational exploration of the secondary metabolism universe; its underlying scalable framework enables traversal of uncovered phylogenetic and chemical structure space, serving as a doorway to a new era in the discovery of novel molecules. IMG-ABC is the largest publicly available database of predicted and experimental biosynthetic gene clusters and the secondary metabolites they produce. The system also includes powerful search and analysis tools that are integrated with IMG's extensive genomic/metagenomic data and analysis tool kits. As new research on biosynthetic gene clusters and secondary metabolites is published and more genomes are sequenced, IMG-ABC will continue to

  4. Big Data and Comparative Effectiveness Research in Radiation Oncology: Synergy and Accelerated Discovery

    Science.gov (United States)

    Trifiletti, Daniel M.; Showalter, Timothy N.

    2015-01-01

    Several advances in large data set collection and processing have the potential to provide a wave of new insights and improvements in the use of radiation therapy for cancer treatment. The era of electronic health records, genomics, and improving information technology resources creates the opportunity to leverage these developments to create a learning healthcare system that can rapidly deliver informative clinical evidence. By merging concepts from comparative effectiveness research with the tools and analytic approaches of “big data,” it is hoped that this union will accelerate discovery, improve evidence for decision making, and increase the availability of highly relevant, personalized information. This combination offers the potential to provide data and analysis that can be leveraged for ultra-personalized medicine and high-quality, cutting-edge radiation therapy. PMID:26697409

  5. Gene Overexpression Resources in Cereals for Functional Genomics and Discovery of Useful Genes

    Directory of Open Access Journals (Sweden)

    Kiyomi Abe

    2016-09-01

    Full Text Available Identification and elucidation of the functions of plant genes are valuable for both basic and applied research. In addition to natural variation in model plants, numerous loss-of-function resources have been produced by mutagenesis with chemicals, irradiation, or insertions of transposable elements or T-DNA. However, we may be unable to observe loss-of-function phenotypes for genes with functionally redundant homologs, and for those essential for growth and development. To offset such disadvantages, gain-of-function transgenic resources have been exploited. Activation-tagged lines have been generated using obligatory overexpression of endogenous genes by random insertion of an enhancer. Recent progress in DNA sequencing technology and bioinformatics has enabled the preparation of genomewide collections of full-length cDNAs (fl-cDNAs) in some model species. Using the fl-cDNA clones, a novel gain-of-function strategy, the Fl-cDNA OvereXpressor gene (FOX) hunting system, has been developed. A mutant phenotype in a FOX line can be directly attributed to the overexpressed fl-cDNA. Investigating a large population of FOX lines could reveal important genes conferring favorable phenotypes for crop breeding. Alternatively, a unique loss-of-function approach, Chimeric REpressor gene Silencing Technology (CRES-T), has been developed. In CRES-T, overexpression of a chimeric repressor, composed of the coding sequence of a transcription factor (TF) and a short peptide designated as the repression domain, could interfere with the action of the endogenous TF in plants. Although plant TFs usually consist of gene families, CRES-T is effective, in principle, even for TFs with functional redundancy. In this review, we focus on the current status of the gene-overexpression strategies and resources for identifying and elucidating novel functions of cereal genes. We discuss the potential of these research tools for identifying useful genes and phenotypes for application in crop

  6. A risk assessment tool applied to the study of shale gas resources

    Energy Technology Data Exchange (ETDEWEB)

    Veiguela, Miguel [Mining, Energy and Materials Engineering School, University of Oviedo (Spain); Hurtado, Antonio; Eguilior, Sonsoles; Recreo, Fernando [Environment Department, CIEMAT, Madrid (Spain); Roqueñi, Nieves [Mining, Energy and Materials Engineering School, University of Oviedo (Spain); Loredo, Jorge, E-mail: jloredo@uniovi.es [Mining, Energy and Materials Engineering School, University of Oviedo (Spain)

    2016-11-15

    The implementation of a risk assessment tool capable of evaluating the risks to health, safety and the environment (HSE) from the extraction of non-conventional fossil fuel resources by the hydraulic fracturing (fracking) technique can help to boost development and progress of the technology and to win public trust and acceptance of it. At the early project stages, the lack of data related to the selection of non-conventional gas deposits makes it difficult to use existing approaches to the risk assessment of fluids injected into geologic formations. The qualitative risk assessment tool developed in this work is based on the premise that shale gas exploitation risk depends on both the geologic site and the technological aspects. It follows Oldenburg's Screening and Ranking Framework (SRF), developed to evaluate potential geologic carbon dioxide (CO{sub 2}) storage sites. Two global characteristics, (1) characteristics centered on the natural aspects of the site and (2) characteristics centered on the technological aspects of the project, are evaluated through user input of Property values, which define Attributes, which in turn define the Characteristics. To allow an individual evaluation of each characteristic and each element of the model, the tool has been implemented in a spreadsheet. The proposed model has been applied to a site with potential for the exploitation of shale gas in Asturias (northwestern Spain), with three different technological options, to test the approach. - Highlights: • The proposed methodology is a useful risk assessment tool for shale gas projects. • The tool is addressed to the early stages of decision-making processes. • The risk assessment of a site is made through a qualitative estimation. • Different weights are assigned to each specific natural and technological property. • The uncertainty associated with the current knowledge is considered.
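    The Property → Attribute → Characteristic roll-up described in the abstract can be sketched as a hierarchy of weighted averages. All names, values, and weights below are invented for illustration; they are not taken from the published tool.

```python
# Hypothetical sketch of the hierarchical weighted scoring described
# in the abstract: user-entered Property values roll up into Attributes,
# which roll up into the two global Characteristics.

def weighted_score(items):
    """Combine (value, weight) pairs into a single weighted average."""
    total_weight = sum(w for _, w in items)
    return sum(v * w for v, w in items) / total_weight

# Property values entered by the user (0 = low risk, 1 = high risk).
natural_site = {
    "fault_density": (0.7, 3.0),      # (value, weight)
    "seal_integrity": (0.4, 2.0),
    "aquifer_proximity": (0.6, 2.5),
}
technological = {
    "well_casing_quality": (0.3, 3.0),
    "fluid_handling": (0.5, 1.5),
}

# Characteristics aggregate the natural and technological sides separately,
# then combine into an overall qualitative risk estimate.
site_attr = weighted_score(list(natural_site.values()))
tech_attr = weighted_score(list(technological.values()))
overall = weighted_score([(site_attr, 1.0), (tech_attr, 1.0)])
print(f"site={site_attr:.2f} tech={tech_attr:.2f} overall={overall:.2f}")
```

This mirrors what a spreadsheet implementation does cell by cell; assigning different weights to natural versus technological properties is exactly the mechanism the Highlights point to.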

  7. A risk assessment tool applied to the study of shale gas resources

    International Nuclear Information System (INIS)

    Veiguela, Miguel; Hurtado, Antonio; Eguilior, Sonsoles; Recreo, Fernando; Roqueñi, Nieves; Loredo, Jorge

    2016-01-01

    The implementation of a risk assessment tool capable of evaluating the risks to health, safety and the environment (HSE) from the extraction of non-conventional fossil fuel resources by the hydraulic fracturing (fracking) technique can help to boost development and progress of the technology and to win public trust and acceptance of it. At the early project stages, the lack of data related to the selection of non-conventional gas deposits makes it difficult to use existing approaches to the risk assessment of fluids injected into geologic formations. The qualitative risk assessment tool developed in this work is based on the premise that shale gas exploitation risk depends on both the geologic site and the technological aspects. It follows Oldenburg's Screening and Ranking Framework (SRF), developed to evaluate potential geologic carbon dioxide (CO_2) storage sites. Two global characteristics, (1) characteristics centered on the natural aspects of the site and (2) characteristics centered on the technological aspects of the project, are evaluated through user input of Property values, which define Attributes, which in turn define the Characteristics. To allow an individual evaluation of each characteristic and each element of the model, the tool has been implemented in a spreadsheet. The proposed model has been applied to a site with potential for the exploitation of shale gas in Asturias (northwestern Spain), with three different technological options, to test the approach. - Highlights: • The proposed methodology is a useful risk assessment tool for shale gas projects. • The tool is addressed to the early stages of decision-making processes. • The risk assessment of a site is made through a qualitative estimation. • Different weights are assigned to each specific natural and technological property. • The uncertainty associated with the current knowledge is considered.

  8. Low cost, low tech SNP genotyping tools for resource-limited areas: Plague in Madagascar as a model.

    Directory of Open Access Journals (Sweden)

    Cedar L Mitchell

    2017-12-01

    Full Text Available Genetic analysis of pathogenic organisms is a useful tool for linking human cases together and/or to potential environmental sources. The resulting data can also provide information on evolutionary patterns within a targeted species and phenotypic traits. However, the instruments often used to generate genotyping data, such as single nucleotide polymorphisms (SNPs), can be expensive and sometimes require advanced technologies to implement. This places many genotyping tools out of reach for laboratories that do not specialize in genetic studies and/or lack the requisite financial and technological resources. To address this issue, we developed a low cost and low tech genotyping system, termed agarose-MAMA, which combines traditional PCR and agarose gel electrophoresis to target phylogenetically informative SNPs. To demonstrate the utility of this approach for generating genotype data in a resource-constrained area (Madagascar), we designed an agarose-MAMA system targeting previously characterized SNPs within Yersinia pestis, the causative agent of plague. We then used this system to genetically type pathogenic strains of Y. pestis in a Malagasy laboratory not specialized in genetic studies, the Institut Pasteur de Madagascar (IPM). We conducted rigorous assay performance validations to assess potential variation introduced by differing research facilities, reagents, and personnel and found no difference in SNP genotyping results. These agarose-MAMA PCR assays are currently employed as an investigative tool at IPM, providing Malagasy researchers a means to improve the value of their plague epidemiological investigations by linking outbreaks to potential sources through genetic characterization of isolates and to improve understanding of disease ecology that may contribute to a long-term control effort. The success of our study demonstrates that the SNP-based genotyping capacity of laboratories in developing countries can be expanded with manageable

  9. Low cost, low tech SNP genotyping tools for resource-limited areas: Plague in Madagascar as a model.

    Science.gov (United States)

    Mitchell, Cedar L; Andrianaivoarimanana, Voahangy; Colman, Rebecca E; Busch, Joseph; Hornstra-O'Neill, Heidie; Keim, Paul S; Wagner, David M; Rajerison, Minoarisoa; Birdsell, Dawn N

    2017-12-01

    Genetic analysis of pathogenic organisms is a useful tool for linking human cases together and/or to potential environmental sources. The resulting data can also provide information on evolutionary patterns within a targeted species and phenotypic traits. However, the instruments often used to generate genotyping data, such as single nucleotide polymorphisms (SNPs), can be expensive and sometimes require advanced technologies to implement. This places many genotyping tools out of reach for laboratories that do not specialize in genetic studies and/or lack the requisite financial and technological resources. To address this issue, we developed a low cost and low tech genotyping system, termed agarose-MAMA, which combines traditional PCR and agarose gel electrophoresis to target phylogenetically informative SNPs. To demonstrate the utility of this approach for generating genotype data in a resource-constrained area (Madagascar), we designed an agarose-MAMA system targeting previously characterized SNPs within Yersinia pestis, the causative agent of plague. We then used this system to genetically type pathogenic strains of Y. pestis in a Malagasy laboratory not specialized in genetic studies, the Institut Pasteur de Madagascar (IPM). We conducted rigorous assay performance validations to assess potential variation introduced by differing research facilities, reagents, and personnel and found no difference in SNP genotyping results. These agarose-MAMA PCR assays are currently employed as an investigative tool at IPM, providing Malagasy researchers a means to improve the value of their plague epidemiological investigations by linking outbreaks to potential sources through genetic characterization of isolates and to improve understanding of disease ecology that may contribute to a long-term control effort. 
The success of our study demonstrates that the SNP-based genotyping capacity of laboratories in developing countries can be expanded with manageable financial cost for

  10. Insects: an underrepresented resource for the discovery of biologically active natural products

    Directory of Open Access Journals (Sweden)

    Lauren Seabrooks

    2017-07-01

    Full Text Available Nature has been the source of life-changing and -saving medications for centuries. Aspirin, penicillin and morphine are prime examples of Nature's gifts to medicine. These discoveries catalyzed the field of natural product drug discovery, which has mostly focused on plants. However, insects comprise more than twice as many species as plants, and entomotherapy has been practiced for as long as, and often in conjunction with, herbal medicine; it remains an important alternative to modern medicine in many parts of the world. Herein, an overview of current traditional medicinal applications of insects and characterization of isolated biologically active molecules starting from approximately 2010 is presented. Insect natural products reviewed were isolated from ants, bees, wasps, beetles, cockroaches, termites, flies, true bugs, moths and more. Biological activities of these natural products from insects include antimicrobial, antifungal, antiviral, anticancer, antioxidant, anti-inflammatory and immunomodulatory effects.

  11. Scientific and practical tools for dealing with water resource estimations for the future

    Directory of Open Access Journals (Sweden)

    D. A. Hughes

    2015-06-01

    Full Text Available Future flow regimes will differ from today's, and imperfect knowledge of present and future climate variations, rainfall–runoff processes and anthropogenic impacts makes them highly uncertain. Future water resources decisions will rely on practical and appropriate simulation tools that are sensitive to changes, can assimilate different types of change information, and are flexible enough to accommodate improvements in the understanding of change. They need to include representations of uncertainty and generate information appropriate for uncertain decision-making. This paper presents some examples of the tools that have been developed to address these issues in the southern Africa region. The examples include uncertainty in present-day simulations due to lack of understanding and data, the use of climate change projection data from multiple climate models, and future catchment responses due to both climate and development effects. The conclusions are that the tools and models are largely available, and what we need is more reliable forcing and model evaluation information as well as methods of making decisions with such inevitably uncertain information.

  12. Discovery and History of Amino Acid Fermentation.

    Science.gov (United States)

    Hashimoto, Shin-Ichi

    There has been a strong demand in Japan and East Asia for L-glutamic acid as a seasoning since monosodium glutamate was found to present umami taste in 1907. The discovery of glutamate fermentation by Corynebacterium glutamicum in 1956 enabled abundant and low-cost production of the amino acid, creating a large market. The discovery also prompted researchers to develop fermentative production processes for other L-amino acids, such as lysine. Currently, the amino acid fermentation industry is so huge that more than 5 million metric tons of amino acids are manufactured annually all over the world, and this number continues to grow. Research on amino acid fermentation fostered the notion and skills of metabolic engineering which has been applied for the production of other compounds from renewable resources. The discovery of glutamate fermentation has had revolutionary impacts on both the industry and science. In this chapter, the history and development of glutamate fermentation, including the very early stage of fermentation of other amino acids, are reviewed.

  13. Resource Disambiguator for the Web: Extracting Biomedical Resources and Their Citations from the Scientific Literature.

    Directory of Open Access Journals (Sweden)

    Ibrahim Burak Ozyurt

    Full Text Available The NIF Registry, developed and maintained by the Neuroscience Information Framework, is a cooperative project aimed at cataloging research resources, e.g., software tools, databases and tissue banks, funded largely by governments and available as tools to research scientists. Although originally conceived for neuroscience, the NIF Registry has over the years broadened in scope to include research resources of general relevance to biomedical research. The Registry currently lists over 13K research resources. The broadening in scope to biomedical science led us to re-christen the NIF Registry platform as SciCrunch. The NIF/SciCrunch Registry has been cataloging the resource landscape since 2006; as such, it serves as a valuable dataset for tracking the breadth, fate and utilization of these resources. Our experience shows research resources like databases are dynamic objects that can change location and scope over time. Although each record is entered manually and human-curated, the current size of the registry requires tools that can aid in curation efforts to keep content up to date, including when and where such resources are used. To address this challenge, we have developed an open source tool suite, collectively termed RDW (Resource Disambiguator for the Web). RDW is designed to help in the upkeep and curation of the registry as well as in enhancing the content of the registry by automated extraction of resource candidates from the literature. The RDW toolkit includes a URL extractor for papers, a resource candidate screener, a resource URL change tracker, and a resource content change tracker. Curators access these tools via a web-based user interface. Several strategies are used to optimize these tools, including supervised and unsupervised learning algorithms as well as statistical text analysis. The complete tool suite is used to enhance and maintain the resource registry as well as track the usage of individual
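    The first RDW step the abstract names, extracting candidate resource URLs from article text, can be sketched in a few lines. The regex and normalization rules below are illustrative assumptions, not the actual RDW implementation.

```python
# Minimal sketch of URL candidate extraction from free text: find
# http(s) URLs, strip trailing sentence punctuation, and de-duplicate
# while preserving first-seen order.
import re

URL_RE = re.compile(r"https?://[^\s)\]\"',;]+", re.IGNORECASE)

def extract_candidates(text):
    """Return normalized, de-duplicated URLs found in free text."""
    seen, candidates = set(), []
    for raw in URL_RE.findall(text):
        url = raw.rstrip(".").lower()   # drop sentence-final periods
        if url not in seen:
            seen.add(url)
            candidates.append(url)
    return candidates

paragraph = ("Data were deposited in the NIF Registry "
             "(https://scicrunch.org/resources). See also "
             "https://scicrunch.org/resources, accessed June 2015.")
print(extract_candidates(paragraph))   # one entry after de-duplication
```

A production pipeline would feed these candidates into the screening and change-tracking stages the abstract describes; this sketch covers only the extraction step.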

  14. Discovery of Sound in the Sea 2013 Annual Report

    Science.gov (United States)

    2013-09-30

    develop and maintain resources that address the long-term goal. The resources include a website (Figure 1), a tri-fold educational pamphlet (available in...on whale watches during the winter months. The DOSITS tri-fold brochure was translated to French for distribution at the 21st International...University of Rhode Island. (tri-fold pamphlet ) Vigness-Raposa, K.J., Scowcroft, G., Miller, J.H., and Ketten, D.R. 2012. Discovery of Sound in

  15. Automated discovery systems and the inductivist controversy

    Science.gov (United States)

    Giza, Piotr

    2017-09-01

    The paper explores possible influences that developments in two branches of AI, called automated discovery and machine learning systems, might have upon some aspects of the old debate between Francis Bacon's inductivism and Karl Popper's falsificationism. Donald Gillies facetiously calls this controversy 'the duel of two English knights', and claims, after some analysis of historical cases of discovery, that Baconian induction had been used in science very rarely, or not at all, although he argues that the situation has changed with the advent of machine learning systems. (Some clarification of the terms 'machine learning' and 'automated discovery' is required here. The key idea of machine learning is that, given data with associated outcomes, software can be trained to make those associations in future cases, which typically amounts to inducing some rules from individual cases classified by the experts. Automated discovery (also called machine discovery) deals with uncovering new knowledge that is valuable for human beings, and its key idea is that discovery is like other intellectual tasks and that the general idea of heuristic search in problem spaces applies also to discovery tasks. However, since machine learning systems discover (very low-level) regularities in data, throughout this paper I use the generic term automated discovery for both kinds of systems. I will elaborate on this later on). Gillies's line of argument can be generalised: thanks to automated discovery systems, philosophers of science have at their disposal a new tool for empirically testing their philosophical hypotheses.
Accordingly, in the paper, I will address the question, which of the two philosophical conceptions of scientific method is better vindicated in view of the successes and failures of systems developed within three major research programmes in the field: machine learning systems in the Turing tradition, normative theory of scientific discovery formulated by Herbert Simon

  16. Culture-independent discovery of natural products from soil metagenomes.

    Science.gov (United States)

    Katz, Micah; Hover, Bradley M; Brady, Sean F

    2016-03-01

    Bacterial natural products have proven to be invaluable starting points in the development of many currently used therapeutic agents. Unfortunately, traditional culture-based methods for natural product discovery have been deemphasized by pharmaceutical companies due in large part to high rediscovery rates. Culture-independent, or "metagenomic," methods, which rely on the heterologous expression of DNA extracted directly from environmental samples (eDNA), have the potential to provide access to metabolites encoded by a large fraction of the earth's microbial biosynthetic diversity. As soil is both ubiquitous and rich in bacterial diversity, it is an appealing starting point for culture-independent natural product discovery efforts. This review provides an overview of the history of soil metagenome-driven natural product discovery studies and elaborates on the recent development of new tools for sequence-based, high-throughput profiling of environmental samples used in discovering novel natural product biosynthetic gene clusters. We conclude with several examples of these new tools being employed to facilitate the recovery of novel secondary metabolite encoding gene clusters from soil metagenomes and the subsequent heterologous expression of these clusters to produce bioactive small molecules.

  17. The petroleum resources on the Norwegian Continental Shelf. 2009

    Energy Technology Data Exchange (ETDEWEB)

    2009-07-01

    Exploration activity has reached record-breaking levels in the last couple of years, which has led to many, albeit small, discoveries. The NPD believes that large discoveries can still be made in areas of the shelf that have not been extensively explored. Content: Challenges on the Norwegian continental shelf; Value creation in fields; 40 years of oil and gas production; Resource management; Still many possibilities; Energy consumption and the environment; Exploration; Access to acreage; Awards of new licenses; Exploration in frontier areas; Exploration history and statistics; Resources and forecasts; Undiscovered resources; Proven recoverable resources; Forecasts; Short-term petroleum production forecast (2009-2013); Investment and operating cost forecasts; Long-term forecast for the petroleum production; Emissions from the petroleum activity. (AG)

  18. Cyber-Enabled Scientific Discovery

    International Nuclear Information System (INIS)

    Chan, Tony; Jameson, Leland

    2007-01-01

    It is often said that numerical simulation is third in the group of three ways to explore modern science: theory, experiment and simulation. Carefully executed modern numerical simulations can, however, be considered at least as relevant as experiment and theory. In comparison to physical experimentation, with numerical simulation one has the numerically simulated values of every field variable at every grid point in space and time. In comparison to theory, with numerical simulation one can explore sets of very complex non-linear equations such as the Einstein equations that are very difficult to investigate theoretically. Cyber-enabled scientific discovery is not just about numerical simulation but about every possible issue related to scientific discovery by utilizing cyberinfrastructure such as the analysis and storage of large data sets, the creation of tools that can be used by broad classes of researchers and, above all, the education and training of a cyber-literate workforce.

  19. Concept relation discovery and innovation enabling technology (CORDIET)

    NARCIS (Netherlands)

    Poelmans, J.; Elzinga, P.; Neznanov, A.; Viaene, S.; Kuznetsov, S.O.; Ignatov, D.; Dedene, G.

    2011-01-01

    Concept Relation Discovery and Innovation Enabling Technology (CORDIET), is a toolbox for gaining new knowledge from unstructured text data. At the core of CORDIET is the C-K theory which captures the essential elements of innovation. The tool uses Formal Concept Analysis (FCA), Emergent Self

  20. The Registry of Knowledge Translation Methods and Tools: a resource to support evidence-informed public health.

    Science.gov (United States)

    Peirson, Leslea; Catallo, Cristina; Chera, Sunita

    2013-08-01

    This paper examines the development of a globally accessible online Registry of Knowledge Translation Methods and Tools to support evidence-informed public health. A search strategy, screening and data extraction tools, and writing template were developed to find, assess, and summarize relevant methods and tools. An interactive website and searchable database were designed to house the registry. Formative evaluation was undertaken to inform refinements. Over 43,000 citations were screened; almost 700 were full-text reviewed, 140 of which were included. By November 2012, 133 summaries were available. Between January 1 and November 30, 2012 over 32,945 visitors from more than 190 countries accessed the registry. Results from 286 surveys and 19 interviews indicated the registry is valued and useful, but would benefit from a more intuitive indexing system and refinements to the summaries. User stories and promotional activities help expand the reach and uptake of knowledge translation methods and tools in public health contexts. The National Collaborating Centre for Methods and Tools' Registry of Methods and Tools is a unique and practical resource for public health decision makers worldwide.

  1. GI-conf: A configuration tool for the GI-cat distributed catalog

    Science.gov (United States)

    Papeschi, F.; Boldrini, E.; Bigagli, L.; Mazzetti, P.

    2009-04-01

    In this work we present a configuration tool for GI-cat. In a Service-Oriented Architecture (SOA) framework, GI-cat implements a distributed catalog service providing advanced capabilities such as caching, brokering, and mediation. GI-cat applies a distributed approach, distributing queries to the remote service providers of interest in an asynchronous style and notifying the caller of query status through an incremental feedback mechanism. Today, GI-cat functionalities are made available through two standard catalog interfaces: the OGC CSW ISO and CSW Core Application Profiles. Two other interfaces are under testing: the CIM and the EO Extension Packages of the CSW ebRIM Application Profile. GI-cat is able to interface with a multiplicity of discovery and access services serving heterogeneous Earth and Space Sciences resources. These include international standards such as the OGC Web Services (OGC CSW, WCS, WFS and WMS) as well as interoperability arrangements (i.e. community standards) such as UNIDATA THREDDS/OPeNDAP, SeaDataNet CDI (Common Data Index), GBIF (Global Biodiversity Information Facility) services, and SibESS-C infrastructure services. GI-conf is a user-friendly GUI configuration tool for GI-cat that employs a simple visual approach to dynamically configure both the GI-cat publishing and distribution capabilities. The tool allows the user to define one or more GI-cat configurations. Each configuration consists of: a) the standard catalog interfaces published by GI-cat; b) the resources (i.e. services/servers) to be accessed and mediated, i.e. federated. Simple icons are used for interfaces and resources, implementing a user-friendly visual approach.
The main GI-conf functionalities are: • Interfaces and federated resources management: the user can set which interfaces must be published; besides, she/he can add a new resource, update or remove an already federated
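    The asynchronous distribution pattern the abstract describes, fanning a query out to several federated endpoints and reporting each answer as it arrives, can be sketched as follows. The endpoint list and the query function are stand-ins, not GI-cat code.

```python
# Toy sketch of a broker distributing one query to many endpoints and
# surfacing results incrementally as they complete.
from concurrent.futures import ThreadPoolExecutor, as_completed

ENDPOINTS = ["csw-a.example.org", "thredds-b.example.org", "wcs-c.example.org"]

def query_endpoint(endpoint, query):
    # A real broker would translate `query` into the endpoint's own
    # protocol (CSW, OPeNDAP, WCS, ...) and issue an HTTP request.
    return f"{endpoint}: 0 hits for {query!r}"

def distributed_search(query):
    results = []
    with ThreadPoolExecutor(max_workers=len(ENDPOINTS)) as pool:
        futures = {pool.submit(query_endpoint, ep, query): ep for ep in ENDPOINTS}
        for future in as_completed(futures):   # incremental feedback:
            results.append(future.result())    # each answer as it lands
    return results

for line in distributed_search("sea surface temperature"):
    print(line)
```

The point of `as_completed` is that the caller never waits for the slowest endpoint before seeing the first answers, which is the incremental feedback mechanism the abstract highlights.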

  2. NATURAL RESOURCES ASSESSMENT

    International Nuclear Information System (INIS)

    D.F. Fenster

    2000-01-01

    The purpose of this report is to summarize the scientific work that was performed to evaluate and assess the occurrence and economic potential of natural resources within the geologic setting of the Yucca Mountain area. The extent of the regional areas of investigation for each commodity differs and those areas are described in more detail in the major subsections of this report. Natural resource assessments have focused on an area defined as the ''conceptual controlled area'' because of the requirements contained in the U.S. Nuclear Regulatory Commission Regulation, 10 CFR Part 60, to define long-term boundaries for potential radionuclide releases. New requirements (proposed 10 CFR Part 63 [Dyer 1999]) have obviated the need for defining such an area. However, for the purposes of this report, the area being discussed, in most cases, is the previously defined ''conceptual controlled area'', now renamed the ''natural resources site study area'' for this report (shown on Figure 1). Resource potential can be difficult to assess because it is dependent upon many factors, including economics (demand, supply, cost), the potential discovery of new uses for resources, or the potential discovery of synthetics to replace natural resource use. The evaluations summarized are based on present-day use and economic potential of the resources. The objective of this report is to summarize the existing reports and information for the Yucca Mountain area on: (1) Metallic mineral and mined energy resources (such as gold, silver, etc., including uranium); (2) Industrial rocks and minerals (such as sand, gravel, building stone, etc.); (3) Hydrocarbons (including oil, natural gas, tar sands, oil shales, and coal); and (4) Geothermal resources. Groundwater is present at the Yucca Mountain site at depths ranging from 500 to 750 m (about 1,600 to 2,500 ft) below the ground surface. Groundwater resources are not discussed in this report, but are planned to be included in the hydrology

  3. Availability of EPA Tools and Resources to Increase Awareness of the Cardiovascular Health Effects of Air Pollution

    Science.gov (United States)

    On November 14, 2017, Dr. Wayne Cascio, Acting Director, will present a webinar titled “Availability of EPA Tools and Resources to Increase Awareness of the Cardiovascular Health Effects of Air Pollution” to HHS’ Million Hearts Federal Partner’s Monthly Cal...

  4. Metagenomics as a Tool for Enzyme Discovery: Hydrolytic Enzymes from Marine-Related Metagenomes.

    Science.gov (United States)

    Popovic, Ana; Tchigvintsev, Anatoly; Tran, Hai; Chernikova, Tatyana N; Golyshina, Olga V; Yakimov, Michail M; Golyshin, Peter N; Yakunin, Alexander F

    2015-01-01

    This chapter discusses metagenomics and its application for enzyme discovery, with a focus on hydrolytic enzymes from marine metagenomic libraries. With less than one percent of environmental microorganisms being culturable, metagenomics, or the collective study of community genetics, has opened up a rich pool of uncharacterized metabolic pathways, enzymes, and adaptations. This great untapped pool of genes provides the particularly exciting potential to mine for new biochemical activities or novel enzymes with activities tailored to peculiar sets of environmental conditions. Metagenomes also represent a huge reservoir of novel enzymes for applications in biocatalysis, biofuels, and bioremediation. Here we present the results of enzyme discovery for four enzyme activities of particular industrial or environmental interest, including esterase/lipase, glycosyl hydrolase, protease and dehalogenase.

  5. Lambda-Display: A Powerful Tool for Antigen Discovery

    Directory of Open Access Journals (Sweden)

    Nicola Gargano

    2011-04-01

    Since its introduction in 1985, phage display technology has been successfully used in projects aimed at deciphering biological processes and isolating molecules of practical value in several applications. Bacteriophage lambda, representing a classical molecular cloning and expression system, has also been exploited for generating large combinatorial libraries of small peptides and protein domains exposed on its capsid. More recently, lambda display has been consistently and successfully employed for domain mapping, antigen discovery, and protein interaction studies or, more generally, in functional genomics. We show here the results obtained by the use of large libraries of cDNA and genomic DNA for the molecular dissection of the human B-cell response against complex pathogens, including protozoan parasites, bacteria, and viruses. Moreover, by reviewing the experimental work performed in recent investigations, we illustrate the potential of lambda display in the diagnostics field and for identifying antigens useful as targets for vaccine development.

  6. The MY NASA DATA Project: Tools and a Collaboration Space for Knowledge Discovery

    Science.gov (United States)

    Chambers, L. H.; Alston, E. J.; Diones, D. D.; Moore, S. W.; Oots, P. C.; Phelps, C. S.

    2006-05-01

    The Atmospheric Science Data Center (ASDC) at NASA Langley Research Center is charged with serving a wide user community that is interested in its large data holdings in the areas of Aerosols, Clouds, Radiation Budget, and Tropospheric Chemistry. Most of the data holdings, however, are in large files with specialized data formats. The MY NASA DATA (mynasadata.larc.nasa.gov) project began in 2004, as part of the NASA Research, Education, and Applications Solutions Network (REASoN), in order to open this important resource to a broader community including K-12 education and citizen scientists. MY NASA DATA (short for Mentoring and inquirY using NASA Data on Atmospheric and earth science for Teachers and Amateurs) consists of a web space that collects tools, lesson plans, and specially developed documentation to help the target audience more easily use the vast collection of NASA data about the Earth System. The core piece of the MY NASA DATA project is the creation of microsets (both static and custom) that make data easily accessible. The installation of a Live Access Server (LAS) greatly enhanced the ability for teachers, students, and citizen scientists to create and explore custom microsets of Earth System Science data. The LAS, which is an open source software tool using emerging data standards, also allows the MY NASA DATA team to make available data on other aspects of the Earth System from collaborating data centers. We are currently working with the Physical Oceanography DAAC at the Jet Propulsion Laboratory to bring in several parameters describing the ocean. In addition, MY NASA DATA serves as a central space for the K-12 community to share resources. The site already includes a dozen User-contributed lesson plans. This year we will be focusing on the Citizen Science portion of the site, and will be welcoming user-contributed project ideas, as well as reports of completed projects. 
An e-mentor network has also been created to involve a wider community in

  7. Methods and tools to simulate the effect of economic instruments in complex water resources systems. Application to the Jucar river basin.

    Science.gov (United States)

    Lopez-Nicolas, Antonio; Pulido-Velazquez, Manuel

    2014-05-01

    The main challenge of the BLUEPRINT to safeguard Europe's water resources (EC, 2012) is to guarantee that enough good quality water is available for people's needs, the economy and the environment. In this sense, economic policy instruments such as water pricing policies and water markets can be applied to enhance efficient use of water. This paper presents a method based on hydro-economic tools to assess the effect of economic instruments on water resource systems. Hydro-economic models allow integrated analysis of water supply, demand and infrastructure operation at the river basin scale, by simultaneously combining engineering, hydrologic and economic aspects of water resources management. The method makes use of the simulation and optimization hydro-economic tools SIMGAMS and OPTIGAMS. The simulation tool SIMGAMS allocates water resources among the users according to priorities and operating rules, and evaluates the economic scarcity costs of the system by using economic demand functions. The model's objective function is designed so that the system aims to meet the operational targets (ranked according to priorities) in each month while following the system operating rules. The optimization tool OPTIGAMS allocates water resources based on an economic efficiency criterion: maximizing net benefits or, alternatively, minimizing the total water scarcity and operating costs of water use. SIMGAMS allows simulation of incentive water pricing policies based on marginal resource opportunity costs (MROC; Pulido-Velazquez et al., 2013). Storage-dependent step pricing functions are derived from the time series of MROC values at a certain reservoir in the system. These water pricing policies are defined based on water availability in the system (scarcity pricing), so that when water storage is high the MROC is low, while low storage (drought periods) is associated with high MROC and therefore high prices.
We also illustrate the use of OPTIGAMS to simulate the effect of ideal water
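    The storage-dependent step pricing described in this record can be sketched as a simple threshold lookup. The storage thresholds and prices below are invented for illustration and are not values from the Jucar case study:

```python
# Sketch of a storage-dependent step pricing function in the spirit of the
# MROC-based scarcity pricing described above. Thresholds and prices are
# hypothetical, not case-study values.

def scarcity_price(storage_fraction):
    """Return a water price (EUR/m3) given reservoir storage as a fraction
    of capacity. Low storage -> high opportunity cost (MROC) -> high price."""
    steps = [  # (minimum storage fraction, price)
        (0.75, 0.01),   # near-full reservoir: negligible opportunity cost
        (0.50, 0.05),
        (0.25, 0.15),
        (0.00, 0.40),   # drought conditions: high marginal opportunity cost
    ]
    for threshold, price in steps:
        if storage_fraction >= threshold:
            return price
    raise ValueError("storage_fraction must be in [0, 1]")

print(scarcity_price(0.8))   # wet period: low price
print(scarcity_price(0.1))   # drought period: high price
```

    In a real hydro-economic model the step levels would be derived from the simulated MROC time series at each reservoir, rather than fixed by hand.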

  8. Current perspectives in fragment-based lead discovery (FBLD)

    Science.gov (United States)

    Lamoree, Bas; Hubbard, Roderick E.

    2017-01-01

    It is over 20 years since the first fragment-based discovery projects were disclosed. The methods are now mature for most ‘conventional’ targets in drug discovery such as enzymes (kinases and proteases) but there has also been growing success on more challenging targets, such as disruption of protein–protein interactions. The main application is to identify tractable chemical startpoints that non-covalently modulate the activity of a biological molecule. In this essay, we overview current practice in the methods and discuss how they have had an impact in lead discovery – generating a large number of fragment-derived compounds that are in clinical trials and two medicines treating patients. In addition, we discuss some of the more recent applications of the methods in chemical biology – providing chemical tools to investigate biological molecules, mechanisms and systems. PMID:29118093

  9. A vulnerability tool for adapting water and aquatic resources to climate change and extremes on the Shoshone National Forest, Wyoming

    Science.gov (United States)

    Rice, J.; Joyce, L. A.; Armel, B.; Bevenger, G.; Zubic, R.

    2011-12-01

    Climate change introduces a significant challenge for land managers and decision makers managing the natural resources that provide many benefits from forests. These benefits include water for urban and agricultural uses, wildlife habitat, erosion and climate control, aquifer recharge, stream flow regulation, water temperature regulation, and cultural services such as outdoor recreation and aesthetic enjoyment. The Forest Service has responded to this challenge by developing a national strategy for responding to climate change (the National Roadmap for Responding to Climate Change, July 2010). In concert with this national strategy, the Forest Service's Westwide Climate Initiative has conducted four case studies on individual Forests in the western U.S. to develop climate adaptation tools. Western National Forests are particularly vulnerable to climate change as they have high-mountain topography, diversity in climate and vegetation, large areas of water-limited ecosystems, and increasing urbanization. Information about the vulnerability and capacity of resources to adapt to climate change and extremes is lacking. There is an urgent need to provide customized tools and synthesized local-scale information about the impacts to resources from future climate change and extremes, as well as to develop science-based adaptation options and strategies in National Forest management and planning. The case study on the Shoshone National Forest has aligned its objectives with management needs by developing a climate extreme vulnerability tool that guides adaptation options development. The vulnerability tool determines the likely degree to which native Yellowstone cutthroat trout and water availability are susceptible to, or unable to cope with, the adverse effects of climate change extremes. We spatially categorize vulnerability for water and native trout resources using exposure, sensitivity, and adaptive capacity indicators that use minimum and maximum climate and GIS data. Results
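    The exposure/sensitivity/adaptive-capacity framing used by vulnerability tools of this kind can be illustrated with a toy index. The combination rule and indicator values here are assumptions for illustration, not the Shoshone study's actual method:

```python
# Toy vulnerability index combining three normalized indicators, following the
# generic exposure/sensitivity/adaptive-capacity framing described above.
# The multiplicative formula is an illustrative assumption.

def vulnerability(exposure, sensitivity, adaptive_capacity):
    """All indicators normalized to [0, 1]; higher result = more vulnerable.
    High adaptive capacity discounts the combined exposure/sensitivity."""
    return (exposure + sensitivity) / 2 * (1 - adaptive_capacity)

# Example: a high-exposure, sensitive watershed with little adaptive capacity
print(round(vulnerability(0.9, 0.7, 0.2), 2))  # → 0.64
```

    Spatial studies typically compute such a score per grid cell or watershed from climate and GIS layers, then classify the results into vulnerability categories.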

  10. California's Central Valley Groundwater Study: A Powerful New Tool to Assess Water Resources in California's Central Valley

    Science.gov (United States)

    Faunt, Claudia C.; Hanson, Randall T.; Belitz, Kenneth; Rogers, Laurel

    2009-01-01

    Competition for water resources is growing throughout California, particularly in the Central Valley. Since 1980, the Central Valley's population has nearly doubled to 3.8 million people. It is expected to increase to 6 million by 2020. Statewide population growth, anticipated reductions in Colorado River water deliveries, drought, and the ecological crisis in the Sacramento-San Joaquin Delta have created an intense demand for water. Tools and information can be used to help manage the Central Valley aquifer system, an important State and national resource.

  11. Drug target ontology to classify and integrate drug discovery data.

    Science.gov (United States)

    Lin, Yu; Mehta, Saurabh; Küçük-McGinty, Hande; Turner, John Paul; Vidovic, Dusica; Forlin, Michele; Koleti, Amar; Nguyen, Dac-Trung; Jensen, Lars Juhl; Guha, Rajarshi; Mathias, Stephen L; Ursu, Oleg; Stathias, Vasileios; Duan, Jianbin; Nabizadeh, Nooshin; Chung, Caty; Mader, Christopher; Visser, Ubbo; Yang, Jeremy J; Bologa, Cristian G; Oprea, Tudor I; Schürer, Stephan C

    2017-11-09

    One of the most successful approaches to develop new small molecule therapeutics has been to start from a validated druggable protein target. However, only a small subset of potentially druggable targets has attracted significant research and development resources. The Illuminating the Druggable Genome (IDG) project develops resources to catalyze the development of likely targetable, yet currently understudied prospective drug targets. A central component of the IDG program is a comprehensive knowledge resource of the druggable genome. As part of that effort, we have developed a framework to integrate, navigate, and analyze drug discovery data based on formalized and standardized classifications and annotations of druggable protein targets, the Drug Target Ontology (DTO). DTO was constructed by extensive curation and consolidation of various resources. DTO classifies the four major drug target protein families, GPCRs, kinases, ion channels and nuclear receptors, based on phylogenicity, function, target development level, disease association, tissue expression, chemical ligand and substrate characteristics, and target-family specific characteristics. The formal ontology was built using a new software tool to auto-generate most axioms from a database while supporting manual knowledge acquisition. A modular, hierarchical implementation facilitates ontology development and maintenance and makes use of various external ontologies, thus integrating the DTO into the ecosystem of biomedical ontologies. As a formal OWL-DL ontology, DTO contains asserted and inferred axioms. Modeling data from the Library of Integrated Network-based Cellular Signatures (LINCS) program illustrates the potential of DTO for contextual data integration and nuanced definition of important drug target characteristics. DTO has been implemented in the IDG user interface Portal, Pharos, and the TIN-X explorer of protein target disease relationships. 
DTO was built based on the need for a formal semantic
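    The kind of hierarchical target classification DTO formalizes in OWL can be illustrated with a toy is-a hierarchy. The class names below are invented for illustration; the real ontology is far richer and carries many more annotation axes:

```python
# Toy is-a hierarchy in the spirit of DTO's target-family classification.
# Class names are illustrative, not taken from the actual ontology.
HIERARCHY = {
    "DrugTarget": ["GPCR", "Kinase", "IonChannel", "NuclearReceptor"],
    "Kinase": ["TyrosineKinase", "SerineThreonineKinase"],
}

def ancestors(cls):
    """Walk the is-a hierarchy upward from a class name to the root."""
    result = []
    for parent, children in HIERARCHY.items():
        if cls in children:
            result = [parent] + ancestors(parent)
    return result

print(ancestors("TyrosineKinase"))  # → ['Kinase', 'DrugTarget']
```

    In the actual OWL-DL ontology such ancestor chains are inferred by a reasoner from asserted subclass axioms rather than computed by hand.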

  12. Open source GIS based tools to improve hydrochemical water resources management in EU H2020 FREEWAT platform

    Science.gov (United States)

    Criollo, Rotman; Velasco, Violeta; Vázquez-Suñé, Enric; Nardi, Albert; Marazuela, Miguel A.; Rossetto, Rudy; Borsi, Iacopo; Foglia, Laura; Cannata, Massimiliano; De Filippis, Giovanna

    2017-04-01

    Due to the general increase of water scarcity (Steduto et al., 2012), water quantity and quality must be well known to ensure proper access to water resources in compliance with local and regional directives. This circumstance can be supported by tools which facilitate the process of data management and analysis. Such analyses have to provide researchers, professionals, policy makers and users with the ability to improve the management of water resources under standard regulatory guidelines. Compliance with the established standard regulatory guidelines (with a special focus on requirements deriving from the GWD) requires effective monitoring, evaluation, and interpretation of a large number of physical and chemical parameters. These datasets have to be assessed and interpreted: (i) integrating data from different sources, gathered with different data access techniques and formats; (ii) managing data with varying temporal and spatial extent; and (iii) integrating groundwater quality information with other relevant information such as further hydrogeological data (Velasco et al., 2014), generally as pre-processing for the realization of groundwater models. In this context, the Hydrochemical Analysis Tools (akvaGIS Tools) have been implemented within the H2020 FREEWAT project, which aims to support water resource management through modelling in an open-source GIS platform (QGIS desktop). The main goal of akvaGIS Tools is to improve water quality analysis through different capabilities that improve the case study conceptual model, managing all related data in a geospatial database (implemented in SpatiaLite) together with a set of tools for improving the harmonization, integration, standardization, visualization and interpretation of hydrochemical data. 
To achieve that, different commands cover a wide range of methodologies for querying, interpreting, and comparing groundwater quality data and facilitate the pre-processing analysis for

  13. Bioinformatics and biomarker discovery "Omic" data analysis for personalized medicine

    CERN Document Server

    Azuaje, Francisco

    2010-01-01

    This book is designed to introduce biologists, clinicians and computational researchers to fundamental data analysis principles, techniques and tools for supporting the discovery of biomarkers and the implementation of diagnostic/prognostic systems. The focus of the book is on how fundamental statistical and data mining approaches can support biomarker discovery and evaluation, emphasising applications based on different types of "omic" data. The book also discusses design factors, requirements and techniques for disease screening, diagnostic and prognostic applications. Readers are provided w

  14. The gene trap resource: a treasure trove for hemopoiesis research.

    Science.gov (United States)

    Forrai, Ariel; Robb, Lorraine

    2005-08-01

    The laboratory mouse is an invaluable tool for functional gene discovery because of its genetic malleability and a biological similarity to human systems that facilitates identification of human models of disease. A number of mutagenic technologies are being used to elucidate gene function in the mouse. Gene trapping is an insertional mutagenesis strategy that is being undertaken by multiple research groups, both academic and private, in an effort to introduce mutations across the mouse genome. Large-scale, publicly funded gene trap programs have been initiated in several countries with the International Gene Trap Consortium coordinating certain efforts and resources. We outline the methodology of mammalian gene trapping and how it can be used to identify genes expressed in both primitive and definitive blood cells and to discover hemopoietic regulator genes. Mouse mutants with hematopoietic phenotypes derived using gene trapping are described. The efforts of the large-scale gene trapping consortia have now led to the availability of libraries of mutagenized ES cell clones. The identity of the trapped locus in each of these clones can be identified by sequence-based searching via the world wide web. This resource provides an extraordinary tool for all researchers wishing to use mouse genetics to understand gene function.

  15. “Time for Some Traffic Problems": Enhancing E-Discovery and Big Data Processing Tools with Linguistic Methods for Deception Detection

    Directory of Open Access Journals (Sweden)

    Erin Smith Crabb

    2014-09-01

    Linguistic deception theory provides methods to discover potentially deceptive texts to make them accessible to clerical review. This paper proposes the integration of these linguistic methods with traditional e-discovery techniques to identify deceptive texts within a given author’s larger body of written work, such as their sent email box. First, a set of linguistic features associated with deception are identified and a prototype classifier is constructed to analyze texts and describe the features’ distributions, while avoiding topic-specific features to improve recall of relevant documents. The tool is then applied to a portion of the Enron Email Dataset to illustrate how these strategies identify records, providing an example of its advantages and capability to stratify the large data set at hand.
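    Topic-independent linguistic cue extraction of the kind this record describes can be sketched with a small feature counter. The cue lists and normalization below are illustrative assumptions, not the paper's actual feature set:

```python
# Minimal sketch of topic-independent linguistic cue counting for
# deception screening. The cue classes and patterns are illustrative.
import re

CUES = {
    "first_person": r"\b(i|me|my|mine)\b",   # self-reference rate
    "negations": r"\b(not|no|never)\b",
    "hedges": r"\b(maybe|perhaps|possibly|might)\b",
}

def cue_counts(text):
    """Count occurrences of each cue class, normalized per 100 tokens."""
    lowered = text.lower()
    tokens = re.findall(r"\w+'?\w*", lowered)
    n = max(len(tokens), 1)
    return {name: 100.0 * len(re.findall(pattern, lowered)) / n
            for name, pattern in CUES.items()}

print(cue_counts("Maybe I never sent that email, I'm not sure."))
```

    A prototype classifier would then score documents on such normalized cue rates, avoiding topic words so that ranking generalizes across subject matter.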

  16. (Self-) Discovery Service: Helping Students Help Themselves

    Science.gov (United States)

    Debonis, Rocco; O'Donnell, Edward; Thomes, Cynthia

    2012-01-01

    EBSCO Discovery Service (EDS) has been heavily used by UMUC students since its implementation in fall 2011, but experience has shown that it is not always the most appropriate source for satisfying students' information needs and that they often need assistance in understanding how the tool works and how to use it effectively. UMUC librarians have…

  17. The web server of IBM's Bioinformatics and Pattern Discovery group: 2004 update.

    Science.gov (United States)

    Huynh, Tien; Rigoutsos, Isidore

    2004-07-01

    In this report, we provide an update on the services and content which are available on the web server of IBM's Bioinformatics and Pattern Discovery group. The server, which is operational around the clock, provides access to a large number of methods that have been developed and published by the group's members. There is an increasing number of problems that these tools can help tackle; these problems range from the discovery of patterns in streams of events and the computation of multiple sequence alignments, to the discovery of genes in nucleic acid sequences, the identification--directly from sequence--of structural deviations from alpha-helicity and the annotation of amino acid sequences for antimicrobial activity. Additionally, annotations for more than 130 archaeal, bacterial, eukaryotic and viral genomes are now available on-line and can be searched interactively. The tools and code bundles continue to be accessible from http://cbcsrv.watson.ibm.com/Tspd.html whereas the genomics annotations are available at http://cbcsrv.watson.ibm.com/Annotations/.

  18. Mass spectrometry-driven drug discovery for development of herbal medicine.

    Science.gov (United States)

    Zhang, Aihua; Sun, Hui; Wang, Xijun

    2018-05-01

    Herbal medicine (HM) has made a major contribution to the drug discovery process with regard to identifying natural product compounds. Currently, more attention has been focused on drug discovery from natural compounds of HM. Despite the rapid advancement of modern analytical techniques, drug discovery is still a difficult and lengthy process. Fortunately, mass spectrometry (MS) can provide useful structural information for drug discovery and has been recognized as a sensitive, rapid, and high-throughput technology for advancing drug discovery from HM in the post-genomic era. It is essential to develop an efficient, high-quality, high-throughput screening method integrated with an MS platform for early screening of candidate drug molecules from natural products. We have developed a new chinmedomics strategy reliant on MS that is capable of capturing candidate molecules and facilitating the identification of novel chemical structures in the early phase; chinmedomics-guided natural product discovery based on MS may provide an effective tool that addresses challenges in early screening of effective constituents of herbs against disease. This critical review covers the use of MS with related techniques and methodologies for natural product discovery, biomarker identification, and determination of mechanisms of action. It also highlights high-throughput chinmedomics screening methods suitable for lead compound discovery, illustrated by recent successes. © 2016 Wiley Periodicals, Inc.

  19. Long term adequacy of uranium resources

    International Nuclear Information System (INIS)

    Steyn, J.

    1990-01-01

    This paper examines the adequacy of world economic uranium resources to meet requirements in the very long term, that is until at least 2025 and beyond. It does so by analysing current requirements forecasts, existing and potential production centre supply capability schedules and national resource estimates. It takes into account lead times from resource discovery to production and production rate limitations. The institutional and political issues surrounding the question of adequacy are reviewed. (author)

  20. Impact of a Discovery System on Interlibrary Loan

    Science.gov (United States)

    Musser, Linda R.; Coopey, Barbara M.

    2016-01-01

    Web-scale discovery services such as Summon (Serial Solutions), WorldCat Local (OCLC), EDS (EBSCO), and Primo (Ex Libris) are often touted as a single search solution to connect users to library-owned and -licensed content, improving discoverability and retrieval of resources. Assessing how well these systems achieve this goal can be challenging,…

  1. Frugal innovation in medicine for low resource settings.

    Science.gov (United States)

    Tran, Viet-Thi; Ravaud, Philippe

    2016-07-07

    Whilst it is clear that technology is crucial to advancing healthcare, innovation in medicine is not just about high-tech tools, new procedures or genome discoveries. In constrained environments, healthcare providers often create unexpected solutions to provide adequate healthcare to patients. These inexpensive but effective frugal innovations may be imperfect, but they have the power to ensure that health is within reach of everyone. Frugal innovations are not limited to low-resource settings: ingenious ideas can be adapted to offer simpler and disruptive alternatives to usual care all around the world, representing the concept of "reverse innovation". In this article, we discuss the different types of frugal innovations, illustrated with examples from the literature, and argue for the need to give voice to this neglected type of innovation in medicine.

  2. Material flow cost accounting as a tool for improved resource efficiency in the hotel sector: A case of emerging market

    Directory of Open Access Journals (Sweden)

    Celani John Nyide

    2016-12-01

    Material Flow Cost Accounting (MFCA) is one of the Environmental Management Accounting (EMA) tools that have been developed to enable environmentally and economically efficient material usage and thus improve resource efficiency. However, the use of this tool to improve resource efficiency in the South African hotel sector remains unknown. An exploratory study, qualitative in nature, was conducted using a single case study with an embedded units approach. A Hotel Management Group that met the selection criteria formed part of this study. In-depth interviews were conducted with 10 participants and additional documents were analysed. The investigated hotels have developed technologies that provide an environmental account in both physical and monetary units, which constitutes the use of MFCA to improve resource efficiency. However, the study established a number of factors that affect the implementation of MFCA by the hotel sector in a South African context.

  3. Beacons of discovery the worldwide science of particle physics

    CERN Document Server

    International Committee for Future Accelerators (ICFA)

    2011-01-01

    To discover what our world is made of and how it works at the most fundamental level is the challenge of particle physics. The tools of particle physics—experiments at particle accelerators and underground laboratories, together with observations of space—bring opportunities for discovery never before within reach. Thousands of scientists from universities and laboratories around the world collaborate to design, build and use unique detectors and accelerators to explore the fundamental physics of matter, energy, space and time. Together, in a common world-wide program of discovery, they provide a deep understanding of the world around us and countless benefits to society. Beacons of Discovery presents a vision of the global science of particle physics at the dawn of a new light on the mystery and beauty of the universe.

  4. Discovery and Use of Online Learning Resources: Case Study Findings

    Science.gov (United States)

    Recker, Mimi M.; Dorward, James; Nelson, Laurie Miller

    2004-01-01

    Much recent research and funding have focused on building Internet-based repositories that contain collections of high-quality learning resources, often called "learning objects." Yet little is known about how non-specialist users, in particular teachers, find, access, and use digital learning resources. To address this gap, this article…

  5. The Second Victim Experience and Support Tool: Validation of an Organizational Resource for Assessing Second Victim Effects and the Quality of Support Resources.

    Science.gov (United States)

    Burlison, Jonathan D; Scott, Susan D; Browne, Emily K; Thompson, Sierra G; Hoffman, James M

    2017-06-01

    Medical errors and unanticipated negative patient outcomes can damage the well-being of health care providers. These affected individuals, referred to as "second victims," can experience various psychological and physical symptoms. Support resources provided by health care organizations to prevent and reduce second victim-related harm are often inadequate. In this study, we present the development and psychometric evaluation of the Second Victim Experience and Support Tool (SVEST), a survey instrument that can assist health care organizations to implement and track the performance of second victim support resources. The SVEST (29 items representing 7 dimensions and 2 outcome variables) was completed by 303 health care providers involved in direct patient care. The survey collected responses on second victim-related psychological and physical symptoms and the quality of support resources. Desirability of possible support resources was also measured. The SVEST was assessed for content validity, internal consistency, and construct validity with confirmatory factor analysis. Confirmatory factor analysis results suggested good model fit for the survey. Cronbach α reliability scores for the survey dimensions ranged from 0.61 to 0.89. The most desired second victim support option was "A respected peer to discuss the details of what happened." The SVEST can be used by health care organizations to evaluate second victim experiences of their staff and the quality of existing support resources. It can also provide health care organization leaders with information on second victim-related support resources most preferred by their staff. The SVEST can be administered before and after implementing new second victim resources to measure perceptions of effectiveness.
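    The Cronbach α reliability scores reported for the SVEST dimensions follow the standard formula α = k/(k−1) · (1 − Σ item variances / variance of summed scores). A minimal sketch with made-up survey responses:

```python
# Cronbach's alpha, the internal-consistency statistic reported for the
# SVEST dimensions above. The response matrix below is invented for
# illustration, not data from the study.

def cronbach_alpha(items):
    """items: one inner list per survey item, each listing all respondents' scores."""
    k = len(items)
    n = len(items[0])
    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Three survey items answered by four respondents (1-5 Likert scale)
items = [[4, 5, 3, 4], [4, 4, 3, 5], [5, 5, 2, 4]]
print(round(cronbach_alpha(items), 2))  # → 0.82
```

    Values in the 0.61 to 0.89 range reported for the SVEST dimensions indicate moderate to good internal consistency by common rules of thumb.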

  6. Scientific Prediction and Prophetic Patenting in Drug Discovery.

    Science.gov (United States)

    Curry, Stephen H; Schneiderman, Anne M

    2015-01-01

    Pharmaceutical patenting involves writing claims based both on discoveries already made and on prophecy of future developments in an ongoing project. This is necessitated by the very different timelines involved in the drug discovery and product development process on the one hand, and successful patenting on the other. If patents are sought too early, there is a risk that patent examiners will disallow claims because of lack of enablement. If patenting is delayed, claims are at risk of being denied on the basis of prior art, because the body of relevant known science will have developed significantly while the project was being pursued. This review examines the role of prophetic patenting in relation to the essential predictability of many aspects of drug discovery science, promoting the concepts of discipline-related and project-related prediction. It is especially directed towards patenting activities supporting commercialization of academia-based discoveries, where long project timelines occur and where experience, and resources to pay for patenting, are limited. The need for improved collaborative understanding among project scientists, technology transfer professionals in, for example, universities, patent attorneys, and patent examiners is emphasized.

  7. The Semantic Automated Discovery and Integration (SADI) Web service Design-Pattern, API and Reference Implementation

    Directory of Open Access Journals (Sweden)

    Wilkinson Mark D

    2011-10-01

    Background The complexity and inter-related nature of biological data poses a difficult challenge for data and tool integration. There has been a proliferation of interoperability standards and projects over the past decade, none of which has been widely adopted by the bioinformatics community. Recent attempts have focused on the use of semantics to assist integration, and Semantic Web technologies are being welcomed by this community. Description SADI - Semantic Automated Discovery and Integration - is a lightweight set of fully standards-compliant Semantic Web service design patterns that simplify the publication of services of the type commonly found in bioinformatics and other scientific domains. Using Semantic Web technologies at every level of the Web services "stack", SADI services consume and produce instances of OWL Classes following a small number of very straightforward best-practices. In addition, we provide codebases that support these best-practices, and plug-in tools to popular developer and client software that dramatically simplify deployment of services by providers, and the discovery and utilization of those services by their consumers. Conclusions SADI Services are fully compliant with, and utilize only foundational Web standards; are simple to create and maintain for service providers; and can be discovered and utilized in a very intuitive way by biologist end-users. In addition, the SADI design patterns significantly improve the ability of software to automatically discover appropriate services based on user-needs, and automatically chain these into complex analytical workflows. We show that, when resources are exposed through SADI, data compliant with a given ontological model can be automatically gathered, or generated, from these distributed, non-coordinating resources - a behaviour we have not observed in any other Semantic system. 
Finally, we show that, using SADI, data dynamically generated from Web services

  8. The Semantic Automated Discovery and Integration (SADI) Web service Design-Pattern, API and Reference Implementation

    Science.gov (United States)

    2011-01-01

    Background The complexity and inter-related nature of biological data poses a difficult challenge for data and tool integration. There has been a proliferation of interoperability standards and projects over the past decade, none of which has been widely adopted by the bioinformatics community. Recent attempts have focused on the use of semantics to assist integration, and Semantic Web technologies are being welcomed by this community. Description SADI - Semantic Automated Discovery and Integration - is a lightweight set of fully standards-compliant Semantic Web service design patterns that simplify the publication of services of the type commonly found in bioinformatics and other scientific domains. Using Semantic Web technologies at every level of the Web services "stack", SADI services consume and produce instances of OWL Classes following a small number of very straightforward best-practices. In addition, we provide codebases that support these best-practices, and plug-in tools to popular developer and client software that dramatically simplify deployment of services by providers, and the discovery and utilization of those services by their consumers. Conclusions SADI Services are fully compliant with, and utilize only foundational Web standards; are simple to create and maintain for service providers; and can be discovered and utilized in a very intuitive way by biologist end-users. In addition, the SADI design patterns significantly improve the ability of software to automatically discover appropriate services based on user-needs, and automatically chain these into complex analytical workflows. We show that, when resources are exposed through SADI, data compliant with a given ontological model can be automatically gathered, or generated, from these distributed, non-coordinating resources - a behaviour we have not observed in any other Semantic system. Finally, we show that, using SADI, data dynamically generated from Web services can be explored in a manner

  9. The Semantic Automated Discovery and Integration (SADI) Web service Design-Pattern, API and Reference Implementation.

    Science.gov (United States)

    Wilkinson, Mark D; Vandervalk, Benjamin; McCarthy, Luke

    2011-10-24

    The complexity and inter-related nature of biological data poses a difficult challenge for data and tool integration. There has been a proliferation of interoperability standards and projects over the past decade, none of which has been widely adopted by the bioinformatics community. Recent attempts have focused on the use of semantics to assist integration, and Semantic Web technologies are being welcomed by this community. SADI - Semantic Automated Discovery and Integration - is a lightweight set of fully standards-compliant Semantic Web service design patterns that simplify the publication of services of the type commonly found in bioinformatics and other scientific domains. Using Semantic Web technologies at every level of the Web services "stack", SADI services consume and produce instances of OWL Classes following a small number of very straightforward best-practices. In addition, we provide codebases that support these best-practices, and plug-in tools to popular developer and client software that dramatically simplify deployment of services by providers, and the discovery and utilization of those services by their consumers. SADI Services are fully compliant with, and utilize only foundational Web standards; are simple to create and maintain for service providers; and can be discovered and utilized in a very intuitive way by biologist end-users. In addition, the SADI design patterns significantly improve the ability of software to automatically discover appropriate services based on user-needs, and automatically chain these into complex analytical workflows. We show that, when resources are exposed through SADI, data compliant with a given ontological model can be automatically gathered, or generated, from these distributed, non-coordinating resources - a behaviour we have not observed in any other Semantic system. Finally, we show that, using SADI, data dynamically generated from Web services can be explored in a manner very similar to data housed in
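The consume/produce pattern the abstract describes can be illustrated with a toy sketch. This is not the SADI codebase: the class and property names below are invented, and RDF triples are modeled as plain Python tuples rather than a real graph library.

```python
# Illustrative sketch of the SADI service pattern: a service consumes an
# individual typed as an input OWL class and returns the SAME individual,
# annotated with new properties, typed as the output class.
# Triples are (subject, predicate, object) tuples; all names are invented.

def gene_info_service(input_triples):
    """Consume triples typing an individual as ex:GeneRecord and attach a
    hypothetical ex:sequenceLength property plus the output class type."""
    output = list(input_triples)
    for s, p, o in input_triples:
        if p == "rdf:type" and o == "ex:GeneRecord":
            # In a real SADI service this value would be computed or looked up.
            output.append((s, "ex:sequenceLength", 1042))
            output.append((s, "rdf:type", "ex:AnnotatedGeneRecord"))
    return output

request = [("ex:BRCA2", "rdf:type", "ex:GeneRecord")]
response = gene_info_service(request)
for triple in response:
    print(triple)
```

Because the output decorates the input individual rather than returning an opaque result, a client can chain such services by matching one service's output class to the next service's input class, which is the automatic-workflow behaviour the abstract claims.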

  10. New Catalog of Resources Enables Paleogeosciences Research

    Science.gov (United States)

    Lingo, R. C.; Horlick, K. A.; Anderson, D. M.

    2014-12-01

    The 21st century promises a new era for scientists of all disciplines, the age in which cyberinfrastructure enables research and education and fuels discovery. EarthCube is a working community of over 2,500 scientists and students of many Earth Science disciplines who are looking to build bridges between disciplines. The EarthCube initiative will create a digital infrastructure that connects databases, software, and repositories. A catalog of resources (databases, software, repositories) has been produced by the Research Coordination Network for Paleogeosciences to improve the discoverability of resources. The Catalog is currently made available within the larger-scope CINERGI geosciences portal (http://hydro10.sdsc.edu/geoportal/catalog/main/home.page). Other distribution points and web services are planned, using linked data, content services for the web, and XML descriptions that can be harvested using metadata protocols. The databases provide searchable interfaces to find data sets that would otherwise remain dark data, hidden in drawers and on personal computers. The software will be described in catalog entries so just one click will lead users to methods and analytical tools that many geoscientists were unaware of. The repositories listed in the Paleogeosciences Catalog contain physical samples found all across the globe, from natural history museums to the basements of university buildings. EarthCube has over 250 databases, 300 software systems, and 200 repositories, which will grow in the coming year. When completed, geoscientists across the world will be connected into a productive workflow for managing, sharing, and exploring geoscience data and information that expedites collaboration and innovation within the paleogeosciences, potentially bringing about new interdisciplinary discoveries.
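The harvestable XML descriptions mentioned above might be consumed, in spirit, by a sketch like the following. The element layout, resource titles, and the `harvest` helper are all invented for illustration; a real harvester would follow an actual metadata protocol rather than parse an inline string.

```python
# Hypothetical sketch: group catalog entries by resource type from an XML
# description, mirroring the database/software/repository counts in the text.
import xml.etree.ElementTree as ET

CATALOG_XML = """
<catalog>
  <resource type="database"><title>Example pollen database</title></resource>
  <resource type="software"><title>Example age-model package</title></resource>
  <resource type="repository"><title>Example core repository</title></resource>
</catalog>
"""

def harvest(xml_text):
    """Parse a catalog document and index resource titles by their type."""
    root = ET.fromstring(xml_text)
    by_type = {}
    for res in root.findall("resource"):
        by_type.setdefault(res.get("type"), []).append(res.findtext("title"))
    return by_type

resources = harvest(CATALOG_XML)
print(resources["database"])
```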

  11. Web-scale discovery in an academic health sciences library: development and implementation of the EBSCO Discovery Service.

    Science.gov (United States)

    Thompson, Jolinda L; Obrig, Kathe S; Abate, Laura E

    2013-01-01

    Funds made available at the close of the 2010-11 fiscal year allowed purchase of the EBSCO Discovery Service (EDS) for a year-long trial. The appeal of this web-scale discovery product that offers a Google-like interface to library resources was counter-balanced by concerns about quality of search results in an academic health science setting and the challenge of configuring an interface that serves the needs of a diverse group of library users. After initial configuration, usability testing with library users revealed the need for further work before general release. Of greatest concern were continuing issues with the relevance of items retrieved, appropriateness of system-supplied facet terms, and user difficulties with navigating the interface. EBSCO has worked with the library to better understand and identify problems and solutions. External roll-out to users occurred in June 2012.

  12. An Ontology for Description of Drug Discovery Investigations

    Directory of Open Access Journals (Sweden)

    Qi Da

    2010-12-01

    Full Text Available The paper presents an ontology for the description of Drug Discovery Investigation (DDI). This has been developed through the use of a Robot Scientist “Eve”, and in consultation with industry. DDI aims to define the principal entities and the relations in the research and development phase of the drug discovery pipeline. DDI is highly transferable and extendable due to its adherence to accepted standards, and compliance with existing ontology resources. This enables DDI to be integrated with such related ontologies as the Vaccine Ontology, the Advancing Clinico-Genomic Trials on Cancer Master Ontology, etc. DDI is available at http://purl.org/ddi/wikipedia or http://purl.org/ddi/home

  13. Emerging tools for continuous nutrient monitoring networks: Sensors advancing science and water resources protection

    Science.gov (United States)

    Pellerin, Brian; Stauffer, Beth A; Young, Dwane A; Sullivan, Daniel J.; Bricker, Suzanne B.; Walbridge, Mark R; Clyde, Gerard A; Shaw, Denice M

    2016-01-01

    Sensors and enabling technologies are becoming increasingly important tools for water quality monitoring and associated water resource management decisions. In particular, nutrient sensors are of interest because of the well-known adverse effects of nutrient enrichment on coastal hypoxia, harmful algal blooms, and impacts to human health. Accurate and timely information on nutrient concentrations and loads is integral to strategies designed to minimize risk to humans and manage the underlying drivers of water quality impairment. Using nitrate sensors as an example, we highlight the types of applications in freshwater and coastal environments that are likely to benefit from continuous, real-time nutrient data. The concurrent emergence of new tools to integrate, manage and share large data sets is critical to the successful use of nutrient sensors and has made it possible for the field of continuous nutrient monitoring to rapidly move forward. We highlight several near-term opportunities for Federal agencies, as well as the broader scientific and management community, that will help accelerate sensor development, build and leverage sites within a national network, and develop open data standards and data management protocols that are key to realizing the benefits of a large-scale, integrated monitoring network. Investing in these opportunities will provide new information to guide management and policies designed to protect and restore our nation’s water resources.
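Continuous concentration data are typically paired with discharge to estimate the nutrient loads the abstract refers to. A minimal sketch of that calculation follows; the function name, readings, and interval are invented for illustration.

```python
# Minimal sketch: estimate a nitrate load from paired sensor readings.
# Instantaneous load = concentration (mg/L) * discharge (m^3/s);
# since 1 m^3 = 1000 L, mg/L * m^3/s * 1000 gives mg/s.

def nitrate_load_kg(concs_mg_l, flows_m3_s, interval_s):
    """Sum instantaneous loads over equal time steps; returns kilograms."""
    total_mg = 0.0
    for conc, flow in zip(concs_mg_l, flows_m3_s):
        total_mg += conc * flow * 1000.0 * interval_s  # mg in this interval
    return total_mg / 1e6  # mg -> kg

# Four 15-minute (900 s) readings:
concs = [1.2, 1.4, 1.3, 1.1]     # mg/L nitrate as N
flows = [10.0, 12.0, 11.0, 9.0]  # m^3/s
print(nitrate_load_kg(concs, flows, 900))
```

The point of high-frequency sensors is precisely that this sum is taken over many short intervals, capturing storm pulses that sparse grab samples would miss.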

  14. Master Middle Ware: A Tool to Integrate Water Resources and Fish Population Dynamics Models

    Science.gov (United States)

    Yi, S.; Sandoval Solis, S.; Thompson, L. C.; Kilduff, D. P.

    2017-12-01

    Linking models that investigate separate components of ecosystem processes has the potential to unify messages regarding management decisions by evaluating potential trade-offs in a cohesive framework. This project aimed to improve the ability of riparian resource managers to forecast future water availability conditions and resultant fish habitat suitability, in order to better inform their management decisions. To accomplish this goal, we developed a middleware tool that is capable of linking and overseeing the operations of two existing models, a water resource planning tool Water Evaluation and Planning (WEAP) model and a habitat-based fish population dynamics model (WEAPhish). First, we designed the Master Middle Ware (MMW) software in Visual Basic for Applications® in one Excel® file that provided a familiar framework for both data input and output. Second, MMW was used to link and jointly operate WEAP and WEAPhish, using Visual Basic for Applications (VBA) macros to implement system level calls to run the models. To demonstrate the utility of this approach, hydrological, biological, and middleware model components were developed for the Butte Creek basin. This tributary of the Sacramento River, California is managed for both hydropower and the persistence of a threatened population of spring-run Chinook salmon (Oncorhynchus tshawytscha). While we have demonstrated the use of MMW for a particular watershed and fish population, MMW can be customized for use with different rivers and fish populations, assuming basic data requirements are met. This model integration improves on ad hoc linkages for managing data transfer between software programs by providing a consistent, user-friendly, and familiar interface across different model implementations. Furthermore, the data-viewing capabilities of MMW facilitate the rapid interpretation of model results by hydrologists, fisheries biologists, and resource managers, in order to accelerate learning and management decision
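The middleware's role of chaining a hydrology model into a habitat model can be caricatured in a few lines. These are stand-in functions, not the actual WEAP/WEAPhish/VBA code, and every number is invented.

```python
# Sketch of the middleware idea: run a water-availability model, pass its
# output to a fish-habitat model, and collect both results in one place.

def run_weap(inflow_cms):
    """Stand-in for the WEAP water allocation model: returns monthly flows."""
    return [round(inflow_cms * f, 2) for f in (0.8, 1.0, 1.3, 0.9)]

def run_weaphish(flows_cms):
    """Stand-in for WEAPhish: map flows to a crude habitat suitability index
    on [0, 1], saturating at an arbitrary 10 m^3/s."""
    return [min(1.0, flow / 10.0) for flow in flows_cms]

def middleware(inflow_cms):
    """Chain the two models, mirroring MMW's role as the linking layer."""
    flows = run_weap(inflow_cms)
    habitat = run_weaphish(flows)
    return {"flows": flows, "habitat": habitat}

result = middleware(8.0)
print(result["habitat"])
```

The design point the abstract makes is exactly this separation: each model keeps its own interface, and only the thin linking layer knows how to sequence them and hand data across.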

  15. Health worker motivation in Africa: the role of non-financial incentives and human resource management tools

    Directory of Open Access Journals (Sweden)

    Imhoff Ingo

    2006-08-01

    Full Text Available Abstract Background There is a serious human resource crisis in the health sector in developing countries, particularly in Africa. One of the challenges is the low motivation of health workers. Experience and the evidence suggest that any comprehensive strategy to maximize health worker motivation in a developing country context has to involve a mix of financial and non-financial incentives. This study assesses the role of non-financial incentives for motivation in two cases, in Benin and Kenya. Methods The study design entailed semi-structured qualitative interviews with doctors and nurses from public, private and NGO facilities in rural areas. The selection of health professionals was the result of a layered sampling process. In Benin 62 interviews with health professionals were carried out; in Kenya 37 were obtained. Results from individual interviews were backed up with information from focus group discussions. For further contextual information, interviews with civil servants in the Ministry of Health and at the district level were carried out. The interview material was coded and quantitative data was analysed with SPSS software. Results and discussion The study shows that health workers overall are strongly guided by their professional conscience and similar aspects related to professional ethos. In fact, many health workers are demotivated and frustrated precisely because they are unable to satisfy their professional conscience and impeded in pursuing their vocation due to lack of means and supplies and due to inadequate or inappropriately applied human resources management (HRM) tools. The paper also indicates that even some HRM tools that are applied may adversely affect the motivation of health workers. Conclusion The findings confirm the starting hypothesis that non-financial incentives and HRM tools play an important role with respect to increasing motivation of health professionals. Adequate HRM tools can uphold and strengthen the

  16. Health worker motivation in Africa: the role of non-financial incentives and human resource management tools.

    Science.gov (United States)

    Mathauer, Inke; Imhoff, Ingo

    2006-08-29

    There is a serious human resource crisis in the health sector in developing countries, particularly in Africa. One of the challenges is the low motivation of health workers. Experience and the evidence suggest that any comprehensive strategy to maximize health worker motivation in a developing country context has to involve a mix of financial and non-financial incentives. This study assesses the role of non-financial incentives for motivation in two cases, in Benin and Kenya. The study design entailed semi-structured qualitative interviews with doctors and nurses from public, private and NGO facilities in rural areas. The selection of health professionals was the result of a layered sampling process. In Benin 62 interviews with health professionals were carried out; in Kenya 37 were obtained. Results from individual interviews were backed up with information from focus group discussions. For further contextual information, interviews with civil servants in the Ministry of Health and at the district level were carried out. The interview material was coded and quantitative data was analysed with SPSS software. The study shows that health workers overall are strongly guided by their professional conscience and similar aspects related to professional ethos. In fact, many health workers are demotivated and frustrated precisely because they are unable to satisfy their professional conscience and impeded in pursuing their vocation due to lack of means and supplies and due to inadequate or inappropriately applied human resources management (HRM) tools. The paper also indicates that even some HRM tools that are applied may adversely affect the motivation of health workers. The findings confirm the starting hypothesis that non-financial incentives and HRM tools play an important role with respect to increasing motivation of health professionals. Adequate HRM tools can uphold and strengthen the professional ethos of doctors and nurses. This entails acknowledging their

  17. Evaluating Web-Scale Discovery Services: A Step-by-Step Guide

    Directory of Open Access Journals (Sweden)

    Joseph Deodato

    2015-06-01

    Full Text Available Selecting a web-scale discovery service is a large and important undertaking that involves a significant investment of time, staff, and resources. Finding the right match begins with a thorough and carefully planned evaluation process. In order to be successful, this process should be inclusive, goal-oriented, data-driven, user-centered, and transparent. The following article offers a step-by-step guide for developing a web-scale discovery evaluation plan rooted in these five key principles based on best practices synthesized from the literature as well as the author’s own experiences coordinating the evaluation process at Rutgers University. The goal is to offer academic libraries that are considering acquiring a web-scale discovery service a blueprint for planning a structured and comprehensive evaluation process.

  18. NATURAL RESOURCES ASSESSMENT

    Energy Technology Data Exchange (ETDEWEB)

    D.F. Fenster

    2000-12-11

    The purpose of this report is to summarize the scientific work that was performed to evaluate and assess the occurrence and economic potential of natural resources within the geologic setting of the Yucca Mountain area. The extent of the regional areas of investigation for each commodity differs and those areas are described in more detail in the major subsections of this report. Natural resource assessments have focused on an area defined as the "conceptual controlled area" because of the requirements contained in the U.S. Nuclear Regulatory Commission Regulation, 10 CFR Part 60, to define long-term boundaries for potential radionuclide releases. New requirements (proposed 10 CFR Part 63 [Dyer 1999]) have obviated the need for defining such an area. However, for the purposes of this report, the area being discussed, in most cases, is the previously defined "conceptual controlled area", now renamed the "natural resources site study area" for this report (shown on Figure 1). Resource potential can be difficult to assess because it is dependent upon many factors, including economics (demand, supply, cost), the potential discovery of new uses for resources, or the potential discovery of synthetics to replace natural resource use. The evaluations summarized are based on present-day use and economic potential of the resources. The objective of this report is to summarize the existing reports and information for the Yucca Mountain area on: (1) Metallic mineral and mined energy resources (such as gold, silver, etc., including uranium); (2) Industrial rocks and minerals (such as sand, gravel, building stone, etc.); (3) Hydrocarbons (including oil, natural gas, tar sands, oil shales, and coal); and (4) Geothermal resources. Groundwater is present at the Yucca Mountain site at depths ranging from 500 to 750 m (about 1,600 to 2,500 ft) below the ground surface. Groundwater resources are not discussed in this

  19. Formalizing an integrative, multidisciplinary cancer therapy discovery workflow

    Science.gov (United States)

    McGuire, Mary F.; Enderling, Heiko; Wallace, Dorothy I.; Batra, Jaspreet; Jordan, Marie; Kumar, Sushil; Panetta, John C.; Pasquier, Eddy

    2014-01-01

    Although many clinicians and researchers work to understand cancer, there has been limited success in combining forces and collaborating across constraints of time, distance, data, and budget. Here we present a workflow template for multidisciplinary cancer therapy that was developed during the 2nd Annual Workshop on Cancer Systems Biology sponsored by Tufts University, Boston, MA in July 2012. The template was applied to the development of a metronomic therapy backbone for neuroblastoma. Three primary groups were identified: clinicians, biologists, and scientists (mathematicians, computer scientists, physicists and engineers). The workflow described their integrative interactions; parallel or sequential processes; data sources and computational tools at different stages as well as the iterative nature of therapeutic development from clinical observations to in vitro, in vivo, and clinical trials. We found that theoreticians in dialog with experimentalists could develop calibrated and parameterized predictive models that inform and formalize sets of testable hypotheses, thus speeding up discovery and validation while reducing laboratory resources and costs. The developed template outlines an interdisciplinary collaboration workflow designed to systematically investigate the mechanistic underpinnings of a new therapy and validate that therapy to advance development and clinical acceptance. PMID:23955390

  20. Geospatial Analysis and Remote Sensing from Airplanes and Satellites for Cultural Resources Management

    Science.gov (United States)

    Giardino, Marco J.; Haley, Bryan S.

    2005-01-01

    Cultural resource management consists of research to identify, evaluate, document and assess cultural resources, planning to assist in decision-making, and stewardship to implement the preservation, protection and interpretation of these decisions and plans. One technique that may be useful in cultural resource management archaeology is remote sensing. It is the acquisition of data and derivative information about objects or materials (targets) located on the Earth's surface or in its atmosphere by using sensors mounted on platforms located at a distance from the targets to make measurements on interactions between the targets and electromagnetic radiation. Included in this definition are systems that acquire imagery by photographic methods and digital multispectral sensors. Data collected by digital multispectral sensors on aircraft and satellite platforms play a prominent role in many earth science applications, including land cover mapping, geology, soil science, agriculture, forestry, water resource management, urban and regional planning, and environmental assessments. Inherent in the analysis of remotely sensed data is the use of computer-based image processing techniques. Geographical information systems (GIS), designed for collecting, managing, and analyzing spatial information, are also useful in the analysis of remotely sensed data. A GIS can be used to integrate diverse types of spatially referenced digital data, including remotely sensed and map data. In archaeology, these tools have been used in various ways to aid in cultural resource projects. For example, they have been used to predict the presence of archaeological resources using modern environmental indicators. Remote sensing techniques have also been used to directly detect the presence of unknown sites based on the impact of past occupation on the Earth's surface. Additionally, remote sensing has been used as a mapping tool aimed at delineating the boundaries of a site or mapping previously

  1. Volatility Discovery

    DEFF Research Database (Denmark)

    Dias, Gustavo Fruet; Scherrer, Cristina; Papailias, Fotis

    The price discovery literature investigates how homogenous securities traded on different markets incorporate information into prices. We take this literature one step further and investigate how these markets contribute to stochastic volatility (volatility discovery). We formally show that the realized measures from homogenous securities share a fractional stochastic trend, which is a combination of the price and volatility discovery measures. Furthermore, we show that volatility discovery is associated with the way that market participants process information arrival (market sensitivity). Finally, we compute volatility discovery for 30 actively traded stocks in the U.S. and report that NYSE and Arca dominate Nasdaq.

  2. Uranium resources: the Canadian status

    International Nuclear Information System (INIS)

    Runnalls, O.J.C.

    1976-01-01

    The history of the uranium industry in Canada is reviewed beginning with the first discoveries and progressing through the booming years of the 1950's, the doldrums of the 1960's, to the present buoyant seller's market and the promising prospects for new discoveries. The upsurge in demand has led to the establishment of a uranium export policy which is described in detail. Recent estimates of resources, production capacity, and domestic demand are also outlined. Finally, a brief description of the utilization of natural uranium in CANDU power reactors is presented

  3. Metabolomics for Biomarker Discovery: Moving to the Clinic

    Science.gov (United States)

    Zhang, Aihua; Sun, Hui; Yan, Guangli; Wang, Ping; Wang, Xijun

    2015-01-01

    To improve the clinical course of diseases, more accurate diagnostic and assessment methods are required as early as possible. In order to achieve this, metabolomics offers new opportunities for biomarker discovery in complex diseases and may provide pathological understanding of diseases beyond traditional technologies. It is the systematic analysis of low-molecular-weight metabolites in biological samples and has become an important tool in clinical research and the diagnosis of human disease, and it has been applied to the discovery and identification of perturbed pathways. It provides a powerful approach to discover biomarkers in biological systems and offers a holistic approach with the promise to clinically enhance diagnostics. When carried out properly, it could provide insight into the understanding of the underlying mechanisms of diseases, help to identify patients at risk of disease, and predict the response to specific treatments. Metabolomics has thus become a hot topic in clinical research. This review will highlight the importance and benefit of metabolomics for accurately screening potential biomarkers of disease. PMID:26090402

  4. Engines of discovery a century of particle accelerators

    CERN Document Server

    Sessler, Andrew

    2014-01-01

    Particle accelerators exploit the cutting edge of every aspect of today's technology and have themselves contributed to many of these technologies. The largest accelerators have been constructed as research tools for nuclear and high energy physics and there is no doubt that it is this field that has sustained their development, culminating in the Large Hadron Collider. An earlier book by the same authors, Engines of Discovery: A Century of Particle Accelerators, chronicled the development of these large accelerators and colliders, emphasizing the critical discoveries in applied physics and engineering that drove the field. Particular attention was given to the key individuals who contributed, the methods they used to arrive at their particular discoveries and inventions, often recalling how their human strengths and attitudes may have contributed to their achievements. Much of this historical picture is also to be found, little changed, in Part A of this sequel. Since the first book was written it has become ...

  5. Biomarker discovery in mass spectrometry-based urinary proteomics.

    Science.gov (United States)

    Thomas, Samuel; Hao, Ling; Ricke, William A; Li, Lingjun

    2016-04-01

    Urinary proteomics has become one of the most attractive topics in disease biomarker discovery. MS-based proteomic analysis has advanced continuously and emerged as a prominent tool in the field of clinical bioanalysis. However, only a few protein biomarkers have made their way to validation and clinical practice. Biomarker discovery is challenged by many clinical and analytical factors including, but not limited to, the complexity of urine and the wide dynamic range of endogenous proteins in the sample. This article highlights promising technologies and strategies in the MS-based biomarker discovery process, including study design, sample preparation, protein quantification, instrumental platforms, and bioinformatics. Different proteomics approaches are discussed, and progress in maximizing urinary proteome coverage and standardization is emphasized in this review. MS-based urinary proteomics has great potential in the development of noninvasive diagnostic assays in the future, which will require collaborative efforts between analytical scientists, systems biologists, and clinicians. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Economic impacts of natural resources on a regional economy: the case of the pre-salt oil discoveries in Espirito Santo, Brazil

    Directory of Open Access Journals (Sweden)

    Eduardo Amaral Haddad

    2014-03-01

    Full Text Available The Brazilian government has recently confirmed the discovery of a huge oil and natural gas field in the pre-salt layer of the country’s southeastern coast. It has been said that the oil fields can boost Brazil’s oil production and turn the country into one of the largest oil producers in the world. The fields are spatially concentrated in the coastal areas of a few Brazilian states that may directly benefit from oil production. This paper uses an interregional computable general equilibrium model to assess the impacts of pre-salt on the economy of the State of Espírito Santo, a region already characterized by an economic base that is heavily reliant on natural resources. We focus our analysis on the structural economic impacts on the local economy.

  7. Tools for Observation: Art and the Scientific Process

    Science.gov (United States)

    Pettit, E. C.; Coryell-Martin, M.; Maisch, K.

    2015-12-01

    Art can support the scientific process during different phases of a scientific discovery. Art can help explain and extend the scientific concepts for the general public; in this way art is a powerful tool for communication. Art can aid the scientist in processing and interpreting the data towards an understanding of the concepts and processes; in this way art is a powerful, if often subconscious, tool to inform the process of discovery. Less often acknowledged, art can help engage students and inspire scientists during the initial development of ideas, observations, and questions; in this way art is a powerful tool to develop scientific questions and hypotheses. When we use art as a tool for communication of scientific discoveries, it helps break down barriers and makes science concepts less intimidating and more accessible and understandable for the learner. Scientists themselves use artistic concepts and processes - directly or indirectly - to help deepen their understanding. Teachers are following suit by using art more to stimulate students' creative thinking and problem solving. We show the value of teaching students to use the artistic "way of seeing" to develop their skills in observation, questioning, and critical thinking. In this way, art can be a powerful tool to engage students (from elementary to graduate) in the beginning phase of a scientific discovery, which is catalyzed by inquiry and curiosity. Through qualitative assessment of the Girls on Ice program, we show that many of the specific techniques taught by art teachers are valuable for science students to develop their observation skills. In particular, the concepts of contour drawing, squinting, gesture drawing, inverted drawing, and others can provide valuable training for student scientists. These art techniques encourage students to let go of preconceptions and "see" the world (the "data") in new ways; they help students focus on both large-scale patterns and small-scale details.

  8. Zebrafish models in neuropsychopharmacology and CNS drug discovery.

    Science.gov (United States)

    Khan, Kanza M; Collier, Adam D; Meshalkina, Darya A; Kysil, Elana V; Khatsko, Sergey L; Kolesnikova, Tatyana; Morzherin, Yury Yu; Warnick, Jason E; Kalueff, Allan V; Echevarria, David J

    2017-07-01

    Despite the high prevalence of neuropsychiatric disorders, their aetiology and molecular mechanisms remain poorly understood. The zebrafish (Danio rerio) is increasingly utilized as a powerful animal model in neuropharmacology research and in vivo drug screening. Collectively, this makes zebrafish a useful tool for drug discovery and the identification of disordered molecular pathways. Here, we discuss zebrafish models of selected human neuropsychiatric disorders and drug-induced phenotypes. As well as covering a broad range of brain disorders (from anxiety and psychoses to neurodegeneration), we also summarize recent developments in zebrafish genetics and small molecule screening, which markedly enhance the disease modelling and the discovery of novel drug targets. © 2017 The British Pharmacological Society.

  9. Uranium resources and requirements

    International Nuclear Information System (INIS)

    Silver, J.M.; Wright, W.J.

    1975-08-01

    Australia has about 19% of the reasonably assured resources of uranium in the Western World recoverable at costs of less than $A20 per kilogram, or about 9% of the resources (reasonably assured and estimated additional) recoverable at costs of less than $A30 per kilogram. Australia's potential for further discoveries of uranium is good. Nevertheless, if Australia did not export any of these resources it would probably have only a marginal effect on the development of nuclear power; other resources would be exploited earlier and prices would rise, but not sufficiently to make the costs of nuclear power unattractive. On the other hand, this policy could deny to Australia real benefits in foreign currency earnings, employment and national development. (author)

  10. Data Science and Optimal Learning for Material Discovery and Design

    Science.gov (United States)

    Inference and optimization methods that can constrain predictions using insights and results from theory; directions in the application of information-theoretic tools to materials problems related to learning from

  11. Uranium resources evaluation model as an exploration tool

    International Nuclear Information System (INIS)

    Ruzicka, V.

    1976-01-01

    Evaluation of uranium resources, as conducted by the Uranium Resources Evaluation Section of the Geological Survey of Canada, comprises operations analogous with those performed during the preparatory stages of uranium exploration. The uranium resources evaluation model, simulating the estimation process, can be divided into four steps. The first step includes definition of major areas and ''unit subdivisions'' for which geological data are gathered, coded, computerized and retrieved. Selection of these areas and ''unit subdivisions'' is based on a preliminary appraisal of their favourability for uranium mineralization. The second step includes analyses of the data, definition of factors controlling uranium mineralization, classification of uranium occurrences into genetic types, and final delineation of favourable areas; this step corresponds to the selection of targets for uranium exploration. The third step includes geological field work; it is equivalent to geological reconnaissance in exploration. The fourth step comprises computation of resources; the preliminary evaluation techniques in exploration are, as a rule, analogous with the simplest methods employed in resource evaluation. The uranium resources evaluation model can be conceptually applied for decision-making during exploration or for formulation of exploration strategy using the quantified data as weighting factors. (author)

  12. The first set of EST resource for gene discovery and marker development in pigeonpea (Cajanus cajan L.)

    Directory of Open Access Journals (Sweden)

    Byregowda Munishamappa

    2010-03-01

    Full Text Available Abstract Background Pigeonpea (Cajanus cajan (L.) Millsp.) is one of the major grain legume crops of the tropics and subtropics, but biotic stresses [Fusarium wilt (FW), sterility mosaic disease (SMD), etc.] are serious challenges for sustainable crop production. Modern genomic tools such as molecular markers and candidate genes associated with resistance to these stresses offer the possibility of facilitating pigeonpea breeding for improving biotic stress resistance. Availability of limited genomic resources, however, is a serious bottleneck to undertake molecular breeding in pigeonpea to develop superior genotypes with enhanced resistance to the above-mentioned biotic stresses. With an objective of enhancing genomic resources in pigeonpea, this study reports generation and analysis of a comprehensive resource of FW- and SMD-responsive expressed sequence tags (ESTs). Results A total of 16 cDNA libraries were constructed from four pigeonpea genotypes that are resistant and susceptible to FW ('ICPL 20102' and 'ICP 2376') and SMD ('ICP 7035' and 'TTB 7'), and a total of 9,888 (9,468 high quality) ESTs were generated and deposited in dbEST of GenBank under accession numbers GR463974 to GR473857 and GR958228 to GR958231. Clustering and assembly analyses of these ESTs resulted in 4,557 unique sequences (unigenes), including 697 contigs and 3,860 singletons. BLASTN analysis of 4,557 unigenes showed a significant identity with ESTs of different legumes (23.2-60.3%), rice (28.3%), Arabidopsis (33.7%) and poplar (35.4%). As expected, pigeonpea ESTs are more closely related to soybean (60.3%) and cowpea ESTs (43.6%) than other plant ESTs. Similarly, BLASTX similarity results showed that only 1,603 (35.1%) out of 4,557 total unigenes correspond to known proteins in the UniProt database (≤ 1E-08). Functional categorization of the annotated unigene sequences showed that 153 (3.3%) genes were assigned to the cellular component category, 132 (2.8%) to biological process, and 132 (2

  13. The power tool

    International Nuclear Information System (INIS)

    Hayfield, J.P.

    1999-01-01

    POWER Tool--Planning, Optimization, Waste Estimating and Resourcing tool--is a hand-held field estimating unit and relational database software tool for optimizing the disassembly and final waste form of contaminated systems and equipment.

  14. The secondary metabolite bioinformatics portal: Computational tools to facilitate synthetic biology of secondary metabolite production

    Directory of Open Access Journals (Sweden)

    Tilmann Weber

    2016-06-01

    Full Text Available Natural products are among the most important sources of lead molecules for drug discovery. With the development of affordable whole-genome sequencing technologies and other ‘omics tools, the field of natural products research is currently undergoing a shift in paradigms. While, for decades, mainly analytical and chemical methods gave access to this group of compounds, nowadays genomics-based methods offer complementary approaches to find, identify and characterize such molecules. This paradigm shift also resulted in a high demand for computational tools to assist researchers in their daily work. In this context, this review gives a summary of tools and databases that currently are available to mine, identify and characterize natural product biosynthesis pathways and their producers based on ‘omics data. A web portal called Secondary Metabolite Bioinformatics Portal (SMBP at http://www.secondarymetabolites.org is introduced to provide a one-stop catalog and links to these bioinformatics resources. In addition, an outlook is presented how the existing tools and those to be developed will influence synthetic biology approaches in the natural products field.

  15. The Universe Discovery Guides: A Collaborative Approach to Educating with NASA Science

    Science.gov (United States)

    Manning, James G.; Lawton, Brandon L.; Gurton, Suzanne; Smith, Denise Anne; Schultz, Gregory; Astrophysics Community, NASA

    2015-08-01

    For the 2009 International Year of Astronomy, the then-existing NASA Origins Forum collaborated with the Astronomical Society of the Pacific (ASP) to create a series of monthly “Discovery Guides” for informal educator and amateur astronomer use in educating the public about featured sky objects and associated NASA science themes. Today’s NASA Astrophysics Science Education and Public Outreach Forum (SEPOF), one of the current generation of forums coordinating the work of NASA Science Mission Directorate (SMD) EPO efforts--in collaboration with the ASP and NASA SMD missions and programs--has adapted the Discovery Guides into “evergreen” educational resources suitable for a variety of audiences. The Guides focus on “deep sky” objects and astrophysics themes (stars and stellar evolution, galaxies and the universe, and exoplanets), showcasing EPO resources from more than 30 NASA astrophysics missions and programs in a coordinated and cohesive “big picture” approach across the electromagnetic spectrum, grounded in best practices to best serve the needs of the target audiences. Each monthly guide features a theme and a representative object well-placed for viewing, with an accompanying interpretive story, finding charts, strategies for conveying the topics, and complementary supporting NASA-approved education activities and background information from a spectrum of NASA missions and programs. The Universe Discovery Guides are downloadable from the NASA Night Sky Network web site at nightsky.jpl.nasa.gov and specifically from http://nightsky.jpl.nasa.gov/news-display.cfm?News_ID=611. The presentation will describe the collaborative’s experience in developing the guides, how they place individual science discoveries and learning resources into context for audiences, and how the Guides can be readily used in scientist public outreach efforts, in college and university introductory astronomy classes, and in other engagements between scientists, instructors

  16. RASOnD - A comprehensive resource and search tool for RAS superfamily oncogenes from various species

    Directory of Open Access Journals (Sweden)

    Singh Tej P

    2011-07-01

    Full Text Available Abstract Background The Ras superfamily plays an important role in the control of cell signalling and division. Mutations in the Ras genes convert them into active oncogenes. The Ras oncogenes form a major thrust of global cancer research as they are involved in the development and progression of tumors. This has resulted in the exponential growth of data on the Ras superfamily across different public databases and in literature. However, no dedicated public resource is currently available for data mining and analysis on this family. The present database was developed to facilitate straightforward accession, retrieval and analysis of information available on Ras oncogenes from one particular site. Description We have developed the RAS Oncogene Database (RASOnD) as a comprehensive knowledgebase that provides integrated and curated information on a single platform for oncogenes of the Ras superfamily. RASOnD encompasses exhaustive genomics and proteomics data existing across diverse publicly accessible databases. This resource presently includes overall 199,046 entries from 101 different species. It provides a search tool to generate information about their nucleotide and amino acid sequences, single nucleotide polymorphisms, chromosome positions, orthologies, motifs, structures, related pathways and associated diseases. We have implemented a number of user-friendly search interfaces and sequence analysis tools. At present the user can (i) browse the data, (ii) search any field through a simple or advanced search interface, and (iii) perform a BLAST search and subsequently a CLUSTALW multiple sequence alignment by selecting sequences of Ras oncogenes. The generic genome browser GBrowse, JMOL for structural visualization and TREEVIEW for phylograms have been integrated for clear perception of retrieved data. External links to related databases have been included in RASOnD. Conclusions This database is a resource and search tool dedicated to Ras oncogenes. It has

  17. Cross-Layer Service Discovery Mechanism for OLSRv2 Mobile Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    M. Isabel Vara

    2015-07-01

    Full Text Available Service discovery plays an important role in mobile ad hoc networks (MANETs). The lack of central infrastructure, limited resources and high mobility make service discovery a challenging issue for this kind of network. This article proposes a new service discovery mechanism for discovering and advertising services integrated into the Optimized Link State Routing Protocol Version 2 (OLSRv2). In previous studies, we demonstrated the validity of a similar service discovery mechanism integrated into the previous version of OLSR (OLSRv1). In order to advertise services, we have added a new type-length-value structure (TLV) to the OLSRv2 protocol, called service discovery message (SDM), according to the Generalized MANET Packet/Message Format defined in Request For Comments (RFC) 5444. Each node in the ad hoc network only advertises its own services. The advertisement frequency is a user-configurable parameter, so that it can be modified depending on the user requirements. Each node maintains two service tables, one to store information about its own services and another one to store information about the services it discovers in the network. We present simulation results that compare our service discovery integrated into OLSRv2 with the one defined for OLSRv1 and with the integration of service discovery in the Ad hoc On-demand Distance Vector (AODV) protocol, in terms of service discovery ratio, service latency and network overhead.

  18. Cross-Layer Service Discovery Mechanism for OLSRv2 Mobile Ad Hoc Networks.

    Science.gov (United States)

    Vara, M Isabel; Campo, Celeste

    2015-07-20

    Service discovery plays an important role in mobile ad hoc networks (MANETs). The lack of central infrastructure, limited resources and high mobility make service discovery a challenging issue for this kind of network. This article proposes a new service discovery mechanism for discovering and advertising services integrated into the Optimized Link State Routing Protocol Version 2 (OLSRv2). In previous studies, we demonstrated the validity of a similar service discovery mechanism integrated into the previous version of OLSR (OLSRv1). In order to advertise services, we have added a new type-length-value structure (TLV) to the OLSRv2 protocol, called service discovery message (SDM), according to the Generalized MANET Packet/Message Format defined in Request For Comments (RFC) 5444. Each node in the ad hoc network only advertises its own services. The advertisement frequency is a user-configurable parameter, so that it can be modified depending on the user requirements. Each node maintains two service tables, one to store information about its own services and another one to store information about the services it discovers in the network. We present simulation results that compare our service discovery integrated into OLSRv2 with the one defined for OLSRv1 and with the integration of service discovery in the Ad hoc On-demand Distance Vector (AODV) protocol, in terms of service discovery ratio, service latency and network overhead.
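The SDM described in the two records above travels as a type-length-value block inside OLSRv2 control traffic. As a rough illustration of the idea only (this is a generic TLV encoding with an invented type number and payload format, not the actual RFC 5444 wire format, which also carries flags and extension fields), a service advertisement might be packed and unpacked like this:

```python
import struct

SDM_TLV_TYPE = 224  # hypothetical type number, for illustration only

def encode_tlv(tlv_type: int, value: bytes) -> bytes:
    """Pack a generic type-length-value triple: 1-byte type, 2-byte length, payload."""
    return struct.pack("!BH", tlv_type, len(value)) + value

def decode_tlv(data: bytes):
    """Unpack one TLV and return (type, value, remaining bytes)."""
    tlv_type, length = struct.unpack("!BH", data[:3])
    return tlv_type, data[3:3 + length], data[3 + length:]

# A node advertises only its own service, e.g. a printer on port 631.
payload = b"service:printer;port=631"
packet = encode_tlv(SDM_TLV_TYPE, payload)
t, v, rest = decode_tlv(packet)
assert (t, v, rest) == (SDM_TLV_TYPE, payload, b"")
```

In the scheme the records describe, each node would emit such a TLV at its user-configured advertisement frequency and fill the second of its two service tables from the TLVs it receives.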

  19. The heat is on: thermodynamic analysis in fragment-based drug discovery

    NARCIS (Netherlands)

    Edink, E.S.; Jansen, C.J.W.; Leurs, R.; De Esch, I.J.

    2010-01-01

    Thermodynamic analysis provides access to the determinants of binding affinity, enthalpy and entropy. In fragment-based drug discovery (FBDD), thermodynamic analysis provides a powerful tool to discriminate fragments based on their potential for successful optimization. The thermodynamic data
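The determinants named in this record relate through the standard Gibbs equation, ΔG = ΔH - TΔS, with binding affinity entering via ΔG = RT ln K_d. A minimal sketch of the arithmetic (illustrative numbers, not data from the paper):

```python
import math

R = 8.314    # gas constant, J/(mol*K)
T = 298.15   # temperature, K

def delta_g_from_kd(kd_molar: float) -> float:
    """Binding free energy (J/mol) from a dissociation constant: dG = RT ln Kd."""
    return R * T * math.log(kd_molar)

def entropy_term(delta_g: float, delta_h: float) -> float:
    """-T*dS (J/mol) from the Gibbs relation dG = dH - T*dS."""
    return delta_g - delta_h

# A hypothetical 1 uM fragment: dG is about -34 kJ/mol. If calorimetry gave
# dH = -20 kJ/mol, the remaining ~-14 kJ/mol is the entropic term (-T*dS).
dG = delta_g_from_kd(1e-6)
minus_TdS = entropy_term(dG, -20e3)
```

Comparing how ΔG splits into ΔH and -TΔS across fragments is exactly the discrimination this record refers to: an enthalpy-dominated fragment is often considered the better starting point for optimization.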

  20. Integration of Proteomics, Bioinformatics, and Systems Biology in Traumatic Brain Injury Biomarker Discovery

    Science.gov (United States)

    Guingab-Cagmat, J.D.; Cagmat, E.B.; Hayes, R.L.; Anagli, J.

    2013-01-01

    Traumatic brain injury (TBI) is a major medical crisis without any FDA-approved pharmacological therapies that have been demonstrated to improve functional outcomes. It has been argued that discovery of disease-relevant biomarkers might help to guide successful clinical trials for TBI. Major advances in mass spectrometry (MS) have revolutionized the field of proteomic biomarker discovery and facilitated the identification of several candidate markers that are being further evaluated for their efficacy as TBI biomarkers. However, several hurdles have to be overcome even during the discovery phase, which is only the first step in the long process of biomarker development. The high-throughput nature of MS-based proteomic experiments generates a massive amount of mass spectral data, presenting great challenges in downstream interpretation. Currently, different bioinformatics platforms are available for functional analysis and data mining of MS-generated proteomic data. These tools provide a way to convert data sets to biologically interpretable results and functional outcomes. A strategy that has promise in advancing biomarker development involves the triad of proteomics, bioinformatics, and systems biology. In this review, a brief overview of how bioinformatics and systems biology tools analyze, transform, and interpret complex MS datasets into biologically relevant results is discussed. In addition, challenges and limitations of proteomics, bioinformatics, and systems biology in TBI biomarker discovery are presented. A brief survey of studies that utilized these three overlapping disciplines in TBI biomarker discovery is also presented. Finally, examples of TBI biomarkers and their applications are discussed. PMID:23750150

  1. Computational modeling as a tool for water resources management: an alternative approach to problems of multiple uses

    Directory of Open Access Journals (Sweden)

    Haydda Manolla Chaves da Hora

    2012-04-01

    Full Text Available Today in Brazil there are many cases of incompatibility regarding use of water and its availability. Due to the increase in required variety and volume, the concept of multiple uses was created, as stated by Pinheiro et al. (2007). The use of the same resource to satisfy different needs with several restrictions (qualitative and quantitative) creates conflicts. Aiming to minimize these conflicts, this work was applied to the particular cases of Hydrographic Regions VI and VIII of Rio de Janeiro State, using computational modeling techniques (based on MOHID software - Water Modeling System) as a tool for water resources management.

  2. Weight Estimation Tool for Children Aged 6 to 59 Months in Limited-Resource Settings.

    Science.gov (United States)

    Ralston, Mark E; Myatt, Mark A

    2016-01-01

    A simple, reliable anthropometric tool for rapid estimation of weight in children would be useful in limited-resource settings where current weight estimation tools are not uniformly reliable, nearly all global under-five mortality occurs, severe acute malnutrition is a significant contributor in approximately one-third of under-five mortality, and a weight scale may not be immediately available in emergencies to first-response providers. To determine the accuracy and precision of mid-upper arm circumference (MUAC) and height as weight estimation tools in children under five years of age in low- to middle-income countries. This was a retrospective observational study. Data were collected in 560 nutritional surveys during 1992-2006 using a modified Expanded Program of Immunization two-stage cluster sample design. Locations with high prevalence of acute and chronic malnutrition. A total of 453,990 children met inclusion criteria (age 6-59 months; weight ≤ 25 kg; MUAC 80-200 mm) and exclusion criteria (bilateral pitting edema; biologically implausible weight-for-height z-score (WHZ), weight-for-age z-score (WAZ), and height-for-age z-score (HAZ) values). Weight was estimated using the Broselow Tape, the Hong Kong formula, and database MUAC alone, height alone, and height and MUAC combined. Mean percentage difference between true and estimated weight, proportion of estimates accurate to within ± 25% and ± 10% of true weight, weighted Kappa statistic, and Bland-Altman bias were reported as measures of tool accuracy. Standard deviation of mean percentage difference and Bland-Altman 95% limits of agreement were reported as measures of tool precision. Database height was a more accurate and precise predictor of weight compared to Broselow Tape 2007 [B], Broselow Tape 2011 [A], and MUAC. Mean percentage difference between true and estimated weight was +0.49% (SD = 10.33%); proportion of estimates accurate to within ± 25% of true weight was 97.36% (95% CI 97.40%, 97.46%); and
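The accuracy measures this record reports (mean percentage difference and the proportion of estimates within ± 10% and ± 25% of true weight) are simple to compute. A sketch with made-up weights, not the survey data:

```python
def mean_pct_difference(true_w, est_w):
    """Mean percentage difference between estimated and true weights (signed bias)."""
    diffs = [100.0 * (e - t) / t for t, e in zip(true_w, est_w)]
    return sum(diffs) / len(diffs)

def proportion_within(true_w, est_w, pct):
    """Fraction of estimates within +/- pct% of the true weight."""
    hits = sum(1 for t, e in zip(true_w, est_w) if abs(e - t) <= pct / 100.0 * t)
    return hits / len(true_w)

# Invented example: four children's true and estimated weights in kg.
true_kg = [10.0, 12.5, 8.0, 15.0]
est_kg = [10.5, 12.0, 9.5, 14.8]
bias = mean_pct_difference(true_kg, est_kg)    # signed bias in percent
acc25 = proportion_within(true_kg, est_kg, 25)
acc10 = proportion_within(true_kg, est_kg, 10)
```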

  3. Drug Discovery Gets a Boost from Data Science.

    Science.gov (United States)

    Amaro, Rommie E

    2016-08-02

    In this issue of Structure, Schiebel et al. (2016) describe a workflow-driven approach to high-throughput X-ray crystallographic fragment screening and refinement. In doing so, they extend the applicability of X-ray crystallography as a primary fragment-screening tool and show how data science techniques can favorably impact drug discovery efforts. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Disk Rock Cutting Tool for the Implementation of Resource-Saving Technologies of Mining of Solid Minerals

    Science.gov (United States)

    Manietyev, Leonid; Khoreshok, Aleksey; Tsekhin, Alexander; Borisov, Andrey

    2017-11-01

    Directions for resource and energy saving in the design of boom-type effectors of selective-action roadheaders with disc rock-cutting tools mounted on multi-faceted prisms for breaking seams of minerals and rocks are presented. Reversing modes of the crowns and booms are justified as a means of improving the efficiency of mining operations. Parameters of the destruction of coal and rock faces by a disc tool of biconical design with unified fastening assemblies on multi-faceted prisms on the effectors of extraction mining machines are determined. Stress parameters of the mating elements of the tool fastening assemblies under static interaction with the destroyed rock face are established. Technical solutions are proposed whose structural and kinematic linkages realize counter-rotating and reversing modes for two radial crowns with disc tools on trihedral prisms and for boom cases with disc tools on tetrahedral prisms in the inner space between two axial crowns with a cutter. Reserves for extending the loading front beyond the feeder table of the selective-action roadheader are identified, including side zones in which loading corridors are created by the blades of trihedral prisms in the inner space between two radial crowns.

  5. Enlisting User Community Perspectives to Inform Development of a Semantic Web Application for Discovery of Cross-Institutional Research Information and Data

    Science.gov (United States)

    Johns, E. M.; Mayernik, M. S.; Boler, F. M.; Corson-Rikert, J.; Daniels, M. D.; Gross, M. B.; Khan, H.; Maull, K. E.; Rowan, L. R.; Stott, D.; Williams, S.; Krafft, D. B.

    2015-12-01

    Researchers seek information and data through a variety of avenues: published literature, search engines, repositories, colleagues, etc. In order to build a web application that leverages linked open data to enable multiple paths for information discovery, the EarthCollab project has surveyed two geoscience user communities to consider how researchers find and share scholarly output. EarthCollab, a cross-institutional, EarthCube-funded project partnering UCAR, Cornell University, and UNAVCO, is employing the open-source semantic web software, VIVO, as the underlying technology to connect the people and resources of virtual research communities. This study will present an analysis of survey responses from members of the two case study communities: (1) the Bering Sea Project, an interdisciplinary field program whose data archive is hosted by NCAR's Earth Observing Laboratory (EOL), and (2) UNAVCO, a geodetic facility and consortium that supports diverse research projects informed by geodesy. The survey results illustrate the types of research products that respondents indicate should be discoverable within a digital platform and the current methods used to find publications, data, personnel, tools, and instrumentation. The responses showed that scientists rely heavily on general-purpose search engines, such as Google, to find information, but that data center websites and the published literature were also critical sources for finding collaborators, data, and research tools. The survey participants also identify additional features of interest for an information platform such as search engine indexing, connection to institutional web pages, generation of bibliographies and CVs, and outward linking to social media. Through the survey, the user communities prioritized the type of information that is most important to display and describe their work within a research profile. The analysis of this survey will inform our further development of a platform that will

  6. Application of PBPK modelling in drug discovery and development at Pfizer.

    Science.gov (United States)

    Jones, Hannah M; Dickins, Maurice; Youdim, Kuresh; Gosset, James R; Attkins, Neil J; Hay, Tanya L; Gurrell, Ian K; Logan, Y Raj; Bungay, Peter J; Jones, Barry C; Gardner, Iain B

    2012-01-01

    Early prediction of human pharmacokinetics (PK) and drug-drug interactions (DDI) in drug discovery and development allows for more informed decision making. Physiologically based pharmacokinetic (PBPK) modelling can be used to answer a number of questions throughout the process of drug discovery and development and is thus becoming a very popular tool. PBPK models provide the opportunity to integrate key input parameters from different sources to not only estimate PK parameters and plasma concentration-time profiles, but also to gain mechanistic insight into compound properties. Using examples from the literature and our own company, we have shown how PBPK techniques can be utilized through the stages of drug discovery and development to increase efficiency, reduce the need for animal studies, replace clinical trials and to increase PK understanding. Given the mechanistic nature of these models, the future use of PBPK modelling in drug discovery and development is promising, however, some limitations need to be addressed to realize its application and utility more broadly.

  7. Photometry, Astrometry, and Discoveries of Ultracool Dwarfs in the Pan-STARRS 3π Survey

    Science.gov (United States)

    Best, William M. J.; Magnier, Eugene A.; Liu, Michael C.; Deacon, Niall; Aller, Kimberly; Zhang, Zhoujian; Pan-STARRS1 Builders

    2018-01-01

    The Pan-STARRS1 3π Survey (PS1)'s far-red optical sensitivity makes it an exceptional new resource for discovering and characterizing ultracool dwarfs. We present a PS1-based catalog of photometry and proper motions of nearly 10,000 M, L, and T dwarfs, along with our analysis of the kinematics of nearby M6-T9 dwarfs, building a comprehensive picture of the local ultracool population. We highlight some especially interesting ultracool discoveries made with PS1, including brown dwarfs with spectral types in the enigmatic L/T transition, wide companions to main-sequence stars that serve as age and metallicity benchmarks for substellar models, and free-floating members of the nearby young moving groups and star-forming regions with masses down to ≈5 MJup. With its public release, PS1 will continue to be a vital tool for studying the ultracool population.

  8. Computer-Aided Drug Discovery in Plant Pathology.

    Science.gov (United States)

    Shanmugam, Gnanendra; Jeon, Junhyun

    2017-12-01

    Control of plant diseases is largely dependent on the use of agrochemicals. However, there are widening gaps between our knowledge on plant diseases gained from genetic/mechanistic studies and rapid translation of that knowledge into target-oriented development of effective agrochemicals. Here we propose that the time is ripe for computer-aided drug discovery/design (CADD) in molecular plant pathology. CADD has played a pivotal role in the development of medically important molecules over the last three decades. Now, the explosive increase in information on genome sequences and three-dimensional structures of biological molecules, in combination with advances in computational and informational technologies, opens up exciting possibilities for the application of CADD in the discovery and development of agrochemicals. In this review, we outline two categories of drug discovery strategies: structure- and ligand-based CADD, and relevant computational approaches that are being employed in modern drug discovery. In order to help readers dive into CADD, we explain the concepts of homology modelling, molecular docking, virtual screening, and de novo ligand design in structure-based CADD, and pharmacophore modelling, ligand-based virtual screening, quantitative structure-activity relationship modelling and de novo ligand design for ligand-based CADD. We also provide the important resources available to carry out CADD. Finally, we present a case study showing how the CADD approach can be implemented in reality for the identification of potent chemical compounds against the important plant pathogens Pseudomonas syringae and Colletotrichum gloeosporioides.

  9. Scheme Program Documentation Tools

    DEFF Research Database (Denmark)

    Nørmark, Kurt

    2004-01-01

    are separate and intended for different documentation purposes, they are related to each other in several ways. Both tools are based on XML languages for tool setup and for documentation authoring. In addition, both tools rely on the LAML framework which---in a systematic way---makes an XML language available...... as named functions in Scheme. Finally, the Scheme Elucidator is able to integrate SchemeDoc resources as part of an internal documentation resource.

  10. Challenges in the development of an M4 PAM in vivo tool compound: The discovery of VU0467154 and unexpected DMPK profiles of close analogs.

    Science.gov (United States)

    Wood, Michael R; Noetzel, Meredith J; Poslusney, Michael S; Melancon, Bruce J; Tarr, James C; Lamsal, Atin; Chang, Sichen; Luscombe, Vincent B; Weiner, Rebecca L; Cho, Hyekyung P; Bubser, Michael; Jones, Carrie K; Niswender, Colleen M; Wood, Michael W; Engers, Darren W; Brandon, Nicholas J; Duggan, Mark E; Conn, P Jeffrey; Bridges, Thomas M; Lindsley, Craig W

    2017-01-15

    This letter describes the chemical optimization of a novel series of M4 positive allosteric modulators (PAMs) based on a 5-amino-thieno[2,3-c]pyridazine core, developed via iterative parallel synthesis, and culminating in the highly utilized rodent in vivo tool compound, VU0467154 (5). This is the first report of the optimization campaign (SAR and DMPK profiling) that led to the discovery of VU0467154, and details all of the challenges faced in allosteric modulator programs (steep SAR, species differences in PAM pharmacology and subtle structural changes affecting CNS penetration). Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Sharing Service Resource Information for Application Integration in a Virtual Enterprise - Modeling the Communication Protocol for Exchanging Service Resource Information

    Science.gov (United States)

    Yamada, Hiroshi; Kawaguchi, Akira

    Grid computing and web service technologies enable us to use networked resources in a coordinated manner. An integrated service is made of individual services running on coordinated resources. In order to achieve such coordinated services autonomously, the initiator of a coordinated service needs to know detailed service resource information. This information ranges from static attributes like the IP address of the application server to highly dynamic ones like the CPU load. The most famous wide-area service discovery mechanism based on names is DNS. Its hierarchical tree organization and caching methods take advantage of the static information managed. However, in order to integrate business applications in a virtual enterprise, we need a discovery mechanism to search for the optimal resources based on a given set of criteria (search keys). In this paper, we propose a communication protocol for exchanging service resource information among wide-area systems. We introduce the concept of the service domain, which consists of service providers managed under the same management policy. This concept of the service domain is similar to that of autonomous systems (ASs). In each service domain, the service resource information provider manages the service resource information of the service providers that exist in this service domain. The service resource information provider exchanges this information with other service resource information providers that belong to different service domains. We also verified the protocol's behavior and effectiveness using a simulation model developed for the proposed protocol.
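
    The criteria-based selection the abstract contrasts with DNS can be sketched as follows. This is not the paper's actual protocol, only an illustration of the idea: each domain's information provider holds records for its local service providers, a query is answered across domains, and the initiator picks the best match on a dynamic attribute. Domain names, IPs, and loads are invented.

```python
# Hedged sketch of criteria-based service resource discovery across domains:
# gather matching records from every service domain, then rank by CPU load.

domains = {
    "domain-A": [
        {"service": "billing", "ip": "10.0.0.5", "cpu_load": 0.7},
        {"service": "billing", "ip": "10.0.0.6", "cpu_load": 0.2},
    ],
    "domain-B": [
        {"service": "billing", "ip": "10.1.0.9", "cpu_load": 0.4},
        {"service": "catalog", "ip": "10.1.0.3", "cpu_load": 0.1},
    ],
}

def discover(service_name):
    """Return matching records from all domains, least-loaded first."""
    matches = [rec for recs in domains.values() for rec in recs
               if rec["service"] == service_name]
    return sorted(matches, key=lambda r: r["cpu_load"])

best = discover("billing")[0]
print(best["ip"])  # 10.0.0.6, the least-loaded billing server
```

    A name-based lookup like DNS would return any billing server; ranking on a dynamic attribute such as load is what the proposed exchange of service resource information enables.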

  12. A Cross-Layer Route Discovery Framework for Mobile Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    Wu Jieyi

    2005-01-01

    Full Text Available Most reactive routing protocols in MANETs employ a random delay between rebroadcasting route requests (RREQ in order to avoid "broadcast storms." However this can lead to problems such as "next hop racing" and "rebroadcast redundancy." In addition to this, existing routing protocols for MANETs usually take a single routing strategy for all flows. This may lead to inefficient use of resources. In this paper we propose a cross-layer route discovery framework (CRDF to address these problems by exploiting the cross-layer information. CRDF solves the above problems efficiently and enables a new technique: routing strategy automation (RoSAuto. RoSAuto refers to the technique that each source node automatically decides the routing strategy based on the application requirements and each intermediate node further adapts the routing strategy so that the network resource usage can be optimized. To demonstrate the effectiveness and the efficiency of CRDF, we design and evaluate a macrobian route discovery strategy under CRDF.

  13. Identifying Key Features, Cutting Edge Cloud Resources, and Artificial Intelligence Tools to Achieve User-Friendly Water Science in the Cloud

    Science.gov (United States)

    Pierce, S. A.

    2017-12-01

    Decision making for groundwater systems is becoming increasingly important, as shifting water demands increasingly impact aquifers. As buffer systems, aquifers provide room for resilient responses and augment the actual timeframe for hydrological response. Yet the pace of impacts, climate shifts, and degradation of water resources is accelerating. To meet these new drivers, groundwater science is transitioning toward the emerging field of Integrated Water Resources Management, or IWRM. IWRM incorporates a broad array of dimensions, methods, and tools to address problems that tend to be complex. Computational tools and accessible cyberinfrastructure (CI) are needed to cross the chasm between science and society. Fortunately, cloud computing environments, such as the new Jetstream system, are evolving rapidly. While still targeting scientific user groups, systems such as Jetstream offer configurable cyberinfrastructure to enable interactive computing and data analysis resources on demand. The web-based interfaces allow researchers to rapidly customize virtual machines, modify computing architecture, and increase the usability of and access to advanced compute environments for broader audiences. The result enables dexterous configurations, opening up opportunities for IWRM modelers to expand the reach of analyses, the number of case studies, and the quality of engagement with stakeholders and decision makers. The acute need to identify improved IWRM solutions, paired with advanced computational resources, refocuses the attention of IWRM researchers on applications, workflows, and intelligent systems that are capable of accelerating progress. IWRM must address key drivers of community concern, implement transdisciplinary methodologies, and adapt and apply decision support tools in order to effectively support decisions about groundwater resource management.
This presentation will provide an overview of advanced computing services in the cloud using integrated groundwater management case

  14. Motivating Communities To Go Beyond the Discovery Plateau

    Science.gov (United States)

    Habermann, T.; Kozimor, J.

    2014-12-01

    Years of emphasizing discovery and minimal metadata requirements have resulted in a culture that accepts that metadata are for discovery and complete metadata are too complex or difficult for researchers to understand and create. Evolving the culture past this "data-discovery plateau" requires a multi-faceted approach that addresses the rational and emotional sides of the problem. On the rational side, scientists know that data and results must be well documented in order to be reproducible, re-usable, and trustworthy. We need tools that script critical moves towards well-described destinations and help identify members of the community that are already leading the way towards those destinations. We need mechanisms that help those leaders share their experiences and examples. On the emotional side, we need to emphasize that high-quality metadata makes data trustworthy, divide the improvement process into digestible pieces and create mechanisms for clearly identifying and rewarding progress. We also need to provide clear opportunities for community members to increase their expertise and to share their skills.

  15. The management of scarce water resources using GNSS, InSAR and in-situ micro gravity measurements as monitoring tools

    CSIR Research Space (South Africa)

    Wonnacott, R

    2015-08-01

    Full Text Available ... shown to provide a useful tool for the measurement and monitoring of ground subsidence resulting from numerous natural and anthropogenic causes, including the abstraction of groundwater and gas. Zerbini et al (2007) processed and combined data from a...

  16. Picking the Best from the All-Resources Menu: Advanced Tools for Resource Planning

    Energy Technology Data Exchange (ETDEWEB)

    Palmintier, Bryan S [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2018-01-31

    Introduces the wide range of electric power systems modeling types and the associated questions they can help answer. The presentation focuses on modeling needs for high levels of Distributed Energy Resources (DERs), renewables, and inverter-based technologies as alternatives to traditional centralized power systems. Covers Dynamics, Production Cost/QSTS, Metric Assessment, Resource Planning, and Integrated Simulations, with examples drawn from NREL's past and ongoing projects. Presented at the McKnight Foundation workshop on 'An All-Resources Approach to Planning for a More Dynamic, Low-Carbon Grid', exploring grid modernization options to replace retiring coal plants in Minnesota.

  17. Medicinal chemistry inspired fragment-based drug discovery.

    Science.gov (United States)

    Lanter, James; Zhang, Xuqing; Sui, Zhihua

    2011-01-01

    Lead generation can be a very challenging phase of the drug discovery process. The two principal methods for this stage of research are blind screening and rational design. Among the rational or semirational design approaches, fragment-based drug discovery (FBDD) has emerged as a useful tool for the generation of lead structures. It is particularly powerful as a complement to high-throughput screening approaches when the latter fail to yield viable hits for further development. Engagement of medicinal chemists early in the process can accelerate the progression of FBDD efforts by incorporating drug-friendly properties in the earliest stages of the design process. Medium-chain acyl-CoA synthetase 2b and ketohexokinase are chosen as examples to illustrate the importance of close collaboration between medicinal chemistry, crystallography, and modeling. Copyright © 2011 Elsevier Inc. All rights reserved.

  18. Knowledge discovery from models of soil properties developed through data mining

    NARCIS (Netherlands)

    Bui, E.N.; Henderson, B.L.; Viergever, K.

    2006-01-01

    We modelled the distribution of soil properties across the agricultural zone on the Australian continent using data mining and knowledge discovery from databases (DM&KDD) tools. Piecewise linear tree models were built choosing from 19 climate variables, digital elevation model (DEM) and derived

  19. Service Demand Discovery Mechanism for Mobile Social Networks.

    Science.gov (United States)

    Wu, Dapeng; Yan, Junjie; Wang, Honggang; Wang, Ruyan

    2016-11-23

    In the last few years, the service demand for wireless data over mobile networks has been soaring at a rapid pace. In Mobile Social Networks (MSNs), users can discover adjacent users to establish temporary local connections and share already-downloaded content with each other, thus offloading the service demand. Due to the partitioned topology, intermittent connections and social features of such networks, service demand discovery is challenging. In particular, service demand discovery is exploited to identify the best relay user through service registration, service selection and service activation. In order to maximize the utilization of limited network resources, a hybrid service demand discovery architecture based on a Virtual Dictionary User (VDU) is proposed in this paper. Based on historical movement data, users can discover their relationships with others. Subsequently, according to user activity, a VDU is selected to facilitate the service registration procedure. Further, service information outside of a home community can be obtained through a Global Active User (GAU) to support service selection. To provide Quality of Service (QoS), a Service Providing User (SPU) is chosen among multiple candidates. Numerical results show that, when compared with other classical service algorithms, the proposed scheme can improve the successful service demand discovery ratio by 25% under reduced overheads.

  20. Management Tools

    Science.gov (United States)

    1987-01-01

    Manugistics, Inc. (formerly AVYX, Inc.) has introduced a new programming language for IBM and IBM compatible computers called TREES-pls. It is a resource management tool originating from the space shuttle that can be used in such applications as scheduling, resource allocation, project control, information management, and artificial intelligence. Manugistics, Inc. was looking for a flexible tool that can be applied to many problems with minimal adaptation. Among the non-government markets are aerospace, other manufacturing, transportation, health care, food and beverage and professional services.

  1. Utilization and perceived problems of online medical resources and search tools among different groups of European physicians.

    Science.gov (United States)

    Kritz, Marlene; Gschwandtner, Manfred; Stefanov, Veronika; Hanbury, Allan; Samwald, Matthias

    2013-06-26

    There is a large body of research suggesting that medical professionals have unmet information needs during their daily routines. The objectives were to investigate which online resources and tools different groups of European physicians use to gather medical information and to identify barriers that prevent the successful retrieval of medical information from the Internet. A detailed Web-based questionnaire was sent out to approximately 15,000 physicians across Europe and disseminated through partner websites; 500 European physicians of different levels of academic qualification and medical specialization were included in the analysis. Self-reported frequency of use of different types of online resources, perceived importance of search tools, and perceived search barriers were measured. Comparisons were made across different levels of qualification (qualified physicians vs physicians in training, medical specialists without professorships vs medical professors) and specialization (general practitioners vs specialists). Most participants were Internet-savvy, came from Austria (43%, 190/440) and Switzerland (31%, 137/440), were above 50 years old (56%, 239/430), stated high levels of medical work experience, had regular patient contact and were employed in nonacademic health care settings (41%, 177/432). All groups reported frequent use of general search engines and cited "restricted accessibility to good quality information" as a dominant barrier to finding medical information on the Internet. Physicians in training reported the most frequent use of Wikipedia (56%, 31/55). Specialists were more likely than general practitioners to use medical research databases (68%, 185/274 vs 27%, 24/88; χ²₂=44.905, P<.001). The restricted accessibility to good quality resources on the Internet and frequent reliance on general search engines and social media among physicians require further attention. Possible solutions may be increased governmental support for the development and popularization of user-tailored medical search tools and open

  2. Developing a Data Discovery Tool for Interdisciplinary Science: Leveraging a Web-based Mapping Application and Geosemantic Searching

    Science.gov (United States)

    Albeke, S. E.; Perkins, D. G.; Ewers, S. L.; Ewers, B. E.; Holbrook, W. S.; Miller, S. N.

    2015-12-01

    The sharing of data and results is paramount for advancing scientific research. The Wyoming Center for Environmental Hydrology and Geophysics (WyCEHG) is a multidisciplinary group that is driving scientific breakthroughs to help manage water resources in the Western United States. WyCEHG is mandated by the National Science Foundation (NSF) to share its data. However, the infrastructure from which to share such diverse, complex and massive amounts of data did not exist within the University of Wyoming. We developed an innovative framework to meet the data organization, sharing, and discovery requirements of WyCEHG by integrating both open and closed source software, embedded metadata tags, semantic web technologies, and a web-mapping application. The infrastructure uses a Relational Database Management System as the foundation, providing a versatile platform to store, organize, and query myriad datasets, taking advantage of both structured and unstructured formats. Detailed metadata are fundamental to the utility of datasets. We tag data with Uniform Resource Identifiers (URIs) to specify concepts with formal descriptions (i.e. semantic ontologies), thus allowing users the ability to search metadata based on the intended context rather than conventional keyword searches. Additionally, WyCEHG data are geographically referenced. Using the ArcGIS API for JavaScript, we developed a web mapping application leveraging database-linked spatial data services, providing a means to visualize and spatially query available data in an intuitive map environment. Using server-side scripting (PHP), the mapping application, in conjunction with semantic search modules, dynamically communicates with the database and file system, providing access to available datasets. Our approach provides a flexible, comprehensive infrastructure from which to store and serve WyCEHG's highly diverse research-based data. This framework has not only allowed WyCEHG to meet its data stewardship
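
    The concept-based search described above, where URI tags let a query match on meaning rather than on keywords, can be sketched roughly as follows. The ontology, URIs, and dataset names here are invented for illustration; a production system would query an actual triple store rather than in-memory dictionaries.

```python
# Illustrative sketch of semantic (concept-based) metadata search: datasets
# are tagged with ontology URIs, and a query on a broad concept also matches
# datasets tagged with its narrower concepts.

narrower = {  # maps a concept URI to its narrower concepts (hypothetical)
    "http://example.org/concepts/water": {
        "http://example.org/concepts/groundwater",
        "http://example.org/concepts/streamflow",
    },
}

datasets = [
    {"name": "well-logs-2014", "tags": {"http://example.org/concepts/groundwater"}},
    {"name": "gauge-daily", "tags": {"http://example.org/concepts/streamflow"}},
    {"name": "soil-survey", "tags": {"http://example.org/concepts/soil"}},
]

def semantic_search(concept_uri):
    """Expand the query to the concept plus its narrower concepts, then match."""
    expanded = {concept_uri} | narrower.get(concept_uri, set())
    return [d["name"] for d in datasets if d["tags"] & expanded]

print(semantic_search("http://example.org/concepts/water"))
# ['well-logs-2014', 'gauge-daily']: a keyword search for "water" would miss both
```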

  3. Optimizing Neighbor Discovery for Ad hoc Networks based on the Bluetooth PAN Profile

    DEFF Research Database (Denmark)

    Kuijpers, Gerben; Nielsen, Thomas Toftegaard; Prasad, Ramjee

    2002-01-01

    IP layer neighbor discovery mechanisms rely highly on broadcast/multicast capabilities of the underlying link layer. The Bluetooth personal area network (PAN) profile has no native link layer broadcast/multicast capabilities and can only emulate this by repeatedly unicasting link layer frames....... This paper introduces a neighbor discovery mechanism that utilizes the resources in the Bluetooth PAN profile more efficiently. The performance of the new mechanism is investigated using an IPv6 network simulator and compared with emulated broadcasting. It is shown that the signaling overhead can...

  4. Data Discovery of Big and Diverse Climate Change Datasets - Options, Practices and Challenges

    Science.gov (United States)

    Palanisamy, G.; Boden, T.; McCord, R. A.; Frame, M. T.

    2013-12-01

    Developing data search tools is a very common, but often confusing, task for most data intensive scientific projects. These search interfaces need to be continually improved to handle the ever-increasing diversity and volume of data collections. There are many aspects which determine the type of search tool a project needs to provide to its user community. These include: number of datasets, amount and consistency of discovery metadata, ancillary information such as availability of quality information and provenance, and availability of similar datasets from other distributed sources. The Environmental Data Science and Systems (EDSS) group within the Environmental Science Division at the Oak Ridge National Laboratory has a long history of successfully managing diverse and big observational datasets for various scientific programs via various data centers such as DOE's Atmospheric Radiation Measurement Program (ARM), DOE's Carbon Dioxide Information and Analysis Center (CDIAC), USGS's Core Science Analytics and Synthesis (CSAS) metadata Clearinghouse and NASA's Distributed Active Archive Center (ORNL DAAC). This talk will showcase some of the recent developments for improving data discovery within these centers. The DOE ARM program recently developed a data discovery tool which allows users to search and discover over 4000 observational datasets. These datasets are key to the research efforts related to global climate change. The ARM discovery tool features many new functions such as filtered and faceted search logic, multi-pass data selection, filtering data based on data quality, graphical views of data quality and availability, direct access to data quality reports, and data plots. The ARM Archive also provides discovery metadata to other broader metadata clearinghouses such as ESGF, IASOA, and GOS. In addition to the new interface, ARM is also currently working on providing DOI metadata records to publishers such as Thomson Reuters and Elsevier.
The ARM
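
    The filtered and faceted search logic mentioned in this record can be sketched in a few lines: filters narrow the record set, and facet counts are recomputed over the narrowed set so the interface can show how many results remain under each choice. The records, facet names, and values below are hypothetical.

```python
# Sketch of faceted search over dataset records: apply filters, then count the
# remaining values of each facet to drive the next round of user refinement.

from collections import Counter

records = [
    {"instrument": "radar", "site": "SGP", "quality": "good"},
    {"instrument": "radar", "site": "NSA", "quality": "suspect"},
    {"instrument": "lidar", "site": "SGP", "quality": "good"},
    {"instrument": "lidar", "site": "SGP", "quality": "good"},
]

def apply_filters(records, **filters):
    """Keep only records matching every selected facet value."""
    return [r for r in records if all(r[k] == v for k, v in filters.items())]

def facet_counts(records, facet):
    """Count remaining values of a facet within the filtered set."""
    return Counter(r[facet] for r in records)

subset = apply_filters(records, site="SGP", quality="good")
print(len(subset))                         # 3 matching datasets
print(facet_counts(subset, "instrument"))  # remaining choices per facet
```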

  5. Equation Discovery for Financial Forcasting in Context of Islamic Banking

    Institute of Scientific and Technical Information of China (English)

    Amer Alzaidi; Dimitar Kazakov

    2010-01-01

    This paper describes an equation discovery approach based on machine learning, using LAGRAMGE as an equation discovery tool, with two sources of input: a dataset and a model presented as a context-free grammar. The approach searches a large range of potential equations defined by the model, and the parameters of each equation are fitted to find the best equations. The experiments are illustrated with commodity prices from the London Metal Exchange for the period of January-October 2009. The output of the experiments is a large number of equations; some of the equations show that the predicted values follow the market trends in perfect patterns.
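
    The core loop of grammar-based equation discovery, enumerate candidate equation forms, fit each form's parameters to the data, and keep the best-fitting form, can be illustrated with a toy example. This is not LAGRAMGE itself; the candidate forms and the synthetic data (not LME prices) are invented for the sketch.

```python
# Toy equation discovery: fit y = a*basis(x) + b for each candidate basis by
# closed-form least squares, and select the form with the lowest squared error.

def fit_linear(xs, ys, basis):
    """Fit y = a*basis(x) + b in closed form; return (a, b, sse)."""
    n = len(xs)
    bx = [basis(x) for x in xs]
    mean_b = sum(bx) / n
    mean_y = sum(ys) / n
    var = sum((u - mean_b) ** 2 for u in bx)
    cov = sum((u - mean_b) * (y - mean_y) for u, y in zip(bx, ys))
    a = cov / var
    b = mean_y - a * mean_b
    sse = sum((a * u + b - y) ** 2 for u, y in zip(bx, ys))
    return a, b, sse

candidates = {"y = a*x + b": lambda x: x, "y = a*x^2 + b": lambda x: x * x}
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 9.0, 19.0, 33.0]  # synthetic data generated from y = 2x^2 + 1

best = min(candidates, key=lambda name: fit_linear(xs, ys, candidates[name])[2])
print(best)  # y = a*x^2 + b
```

    A grammar-driven system generalizes this by generating the candidate forms from production rules rather than from a fixed list.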

  6. Screening the Medicines for Malaria Venture Pathogen Box across Multiple Pathogens Reclassifies Starting Points for Open-Source Drug Discovery.

    Science.gov (United States)

    Duffy, Sandra; Sykes, Melissa L; Jones, Amy J; Shelper, Todd B; Simpson, Moana; Lang, Rebecca; Poulsen, Sally-Ann; Sleebs, Brad E; Avery, Vicky M

    2017-09-01

    Open-access drug discovery provides a substantial resource for diseases primarily affecting the poor and disadvantaged. The open-access Pathogen Box collection comprises compounds with demonstrated biological activity against specific pathogenic organisms. The supply of this resource by the Medicines for Malaria Venture has the potential to provide new chemical starting points for a number of tropical and neglected diseases, through repurposing of these compounds for use in drug discovery campaigns for these additional pathogens. We tested the Pathogen Box against kinetoplastid parasites and malaria life cycle stages in vitro. Consequently, chemical starting points for malaria, human African trypanosomiasis, Chagas disease, and leishmaniasis drug discovery efforts have been identified. Inclusive of this in vitro biological evaluation, outcomes from extensive literature reviews and database searches are provided. This information encompasses commercial availability, literature reference citations, other aliases and ChEMBL number with associated biological activity, where available. The release of this new data for the Pathogen Box collection into the public domain will aid the open-source model of drug discovery. Importantly, this will provide novel chemical starting points for drug discovery and target identification in tropical disease research. Copyright © 2017 Duffy et al.

  7. Power of Doubling: Population Growth and Resource Consumption

    OpenAIRE

    Sarika Bahadure

    2017-01-01

    Sustainability starts with conserving resources for future generations. Humans have been consuming natural resources since they first existed on this earth. The pace of resource consumption in the past was very slow, but industrialization in the 18th century brought a change in the human lifestyle. New inventions and discoveries shifted work from the human workforce to machines. The mass manufacture of goods provided easy access to products. In the last few decades, globalization and change in technologies br...

  8. AFRA-NEST: A Tool for Human Resource Development

    International Nuclear Information System (INIS)

    Amanor, Edison; Akaho, E.H.K.; Serfor-Armah, Y.

    2014-01-01

    Conclusion: • Regional Networks could serve as a common platform to meet the needs for human resource development. • With AFRA-NEST, International cooperation would be strengthened. • Systematic integration and sharing of available nuclear training resources. • Cost of training future nuclear experts could drastically be reduced

  9. Ecological and resource economics as ecosystem management tools

    Science.gov (United States)

    Stephen Farber; Dennis. Bradley

    1999-01-01

    Economic pressures on ecosystems will only intensify in the future. Increased population levels, settlement patterns, and increased incomes will raise the demands for ecosystem resources and their services. The pressure to transform ecosystem natural assets into marketable commodities, whether by harvesting and mining resources or altering landscapes through...

  10. Engineering Application Way of Faults Knowledge Discovery Based on Rough Set Theory

    International Nuclear Information System (INIS)

    Zhao Rongzhen; Deng Linfeng; Li Chao

    2011-01-01

    To address the knowledge-acquisition puzzle of intelligent decision-making technology in the mechanical industry, the use of Rough Set Theory (RST) as a tool to solve the puzzle was researched, and the way to realize knowledge discovery in engineering applications is explored. A case study extracting knowledge rules from a concise data table brings out some important information: knowledge discovery for mechanical fault diagnosis is a complicated systems-engineering project. The first of all important tasks is to preserve the fault knowledge in a table in data mode, and the data must be derived from the plant site and should be as concise as possible. Only on the basis of fault-knowledge data obtained in this way can the data be processed and the knowledge rules be extracted from them by means of RST. The conclusion is that fault-knowledge discovery by this route is a bottom-up process, but developing advanced fault-diagnosis technology in this way is a large-scale, long-term knowledge-engineering project. Every step should be designed carefully according to the tool's demands; this is the basic guarantee that the knowledge rules obtained will have engineering-application value and that the studies will have scientific significance. Accordingly, a general framework is designed for engineering application along the route of developing fault-knowledge discovery technology.
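
    The core RST operation on such a fault-knowledge data table can be sketched briefly: objects that are indiscernible on the chosen condition attributes form equivalence classes, and a target set of faulty objects is bounded by its lower approximation (certainly faulty) and upper approximation (possibly faulty). The table, attributes, and fault labels below are invented for illustration.

```python
# Minimal Rough Set Theory sketch: lower/upper approximation of a fault set.
# Condition attributes per object: (vibration, temperature) -- hypothetical.

table = {
    1: ("high", "hot"),
    2: ("high", "hot"),
    3: ("low", "hot"),
    4: ("low", "cold"),
}
faulty = {1, 3}  # objects diagnosed with the fault

# Group objects that are indiscernible on the condition attributes.
classes = {}
for obj, values in table.items():
    classes.setdefault(values, set()).add(obj)

# Lower approximation: classes entirely contained in the fault set.
lower = set().union(*[c for c in classes.values() if c <= faulty])
# Upper approximation: classes that intersect the fault set at all.
upper = set().union(*[c for c in classes.values() if c & faulty])

print(sorted(lower), sorted(upper))  # [3] [1, 2, 3]
```

    The gap between the two approximations (objects 1 and 2 here) is the boundary region: the attributes cannot distinguish a faulty from a healthy object, signaling that the data table needs more or better attributes before rules are extracted.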

  11. Web-Based Geospatial Tools to Address Hazard Mitigation, Natural Resource Management, and Other Societal Issues

    Science.gov (United States)

    Hearn, Paul P.

    2009-01-01

    Federal, State, and local government agencies in the United States face a broad range of issues on a daily basis. Among these are natural hazard mitigation, homeland security, emergency response, economic and community development, water supply, and health and safety services. The U.S. Geological Survey (USGS) helps decision makers address these issues by providing natural hazard assessments, information on energy, mineral, water and biological resources, maps, and other geospatial information. Increasingly, decision makers at all levels are challenged not by the lack of information, but by the absence of effective tools to synthesize the large volume of data available, and to utilize the data to frame policy options in a straightforward and understandable manner. While geographic information system (GIS) technology has been widely applied to this end, systems with the necessary analytical power have been usable only by trained operators. The USGS is addressing the need for more accessible, manageable data tools by developing a suite of Web-based geospatial applications that will incorporate USGS and cooperating partner data into the decision making process for a variety of critical issues. Examples of Web-based geospatial tools being used to address societal issues follow.

  12. Entrepreneurship, Transaction Costs, and Resource Attributes

    DEFF Research Database (Denmark)

    Foss, Kirsten; Foss, Nicolai Juul

    transaction costs and property rights shape the process of entrepreneurial discovery. We provide a sketch of the mechanisms that link entrepreneurship, property rights, and transaction costs in a resource-based setting, contributing further to the attempt to take the RBV in a more dynamic direction....

  13. A Community Assessment Tool for Education Resources

    Science.gov (United States)

    Hou, C. Y.; Soyka, H.; Hutchison, V.; Budden, A. E.

    2016-12-01

    In order to facilitate and enhance better understanding of how to conserve life on earth and the environment that sustains it, Data Observation Network for Earth (DataONE) develops, implements, and shares educational activities and materials as part of its commitment to the education of its community, including scientific researchers, educators, and the public. Creating and maintaining educational materials that remain responsive to community needs is reliant on careful evaluations in order to enhance current and future resources. DataONE's extensive collaboration with individuals and organizations has informed the development of its educational resources and through these interactions, the need for a comprehensive, customizable education evaluation instrument became apparent. In this presentation, the authors will briefly describe the design requirements and research behind a prototype instrument that is intended to be used by the community for evaluation of its educational activities and resources. We will then demonstrate the functionality of a web based platform that enables users to identify the type of educational activity across multiple axes. This results in a set of structured evaluation questions that can be included in a survey instrument. Users can also access supporting documentation describing the types of question included in the output or simply download a full editable instrument. Our aim is that by providing the community with access to a structured evaluation instrument, Earth/Geoscience educators will be able to gather feedback easily and efficiently in order to help maintain the quality, currency/relevancy, and value of their resources, and ultimately, support a more data literate community.

  14. Combinatorial thin film materials science: From alloy discovery and optimization to alloy design

    Energy Technology Data Exchange (ETDEWEB)

    Gebhardt, Thomas, E-mail: gebhardt@mch.rwth-aachen.de; Music, Denis; Takahashi, Tetsuya; Schneider, Jochen M.

    2012-06-30

    This paper provides an overview of modern alloy development, from discovery and optimization towards alloy design, based on combinatorial thin film materials science. The combinatorial approach, combining combinatorial materials synthesis of thin film composition-spreads with high-throughput property characterization has proven to be a powerful tool to delineate composition-structure-property relationships, and hence to efficiently identify composition windows with enhanced properties. Furthermore, and most importantly for alloy design, theoretical models and hypotheses can be critically appraised. Examples for alloy discovery, optimization, and alloy design of functional as well as structural materials are presented. Using Fe-Mn based alloys as an example, we show that the combination of modern electronic-structure calculations with the highly efficient combinatorial thin film composition-spread method constitutes an effective tool for knowledge-based alloy design.

  15. Combinatorial thin film materials science: From alloy discovery and optimization to alloy design

    International Nuclear Information System (INIS)

    Gebhardt, Thomas; Music, Denis; Takahashi, Tetsuya; Schneider, Jochen M.

    2012-01-01

    This paper provides an overview of modern alloy development, from discovery and optimization towards alloy design, based on combinatorial thin film materials science. The combinatorial approach, combining combinatorial materials synthesis of thin film composition-spreads with high-throughput property characterization has proven to be a powerful tool to delineate composition–structure–property relationships, and hence to efficiently identify composition windows with enhanced properties. Furthermore, and most importantly for alloy design, theoretical models and hypotheses can be critically appraised. Examples for alloy discovery, optimization, and alloy design of functional as well as structural materials are presented. Using Fe-Mn based alloys as an example, we show that the combination of modern electronic-structure calculations with the highly efficient combinatorial thin film composition-spread method constitutes an effective tool for knowledge-based alloy design.

  16. Computational tools for high-throughput discovery in biology

    OpenAIRE

    Jones, Neil Christopher

    2007-01-01

    High throughput data acquisition technology has inarguably transformed the landscape of the life sciences, in part by making possible---and necessary---the computational disciplines of bioinformatics and biomedical informatics. These fields focus primarily on developing tools for analyzing data and generating hypotheses about objects in nature, and it is in this context that we address three pressing problems in the fields of the computational life sciences which each require computing capaci...

  17. IsoNose - Isotopic Tools as Novel Sensors of Earth Surfaces Resources - A new Marie Curie Initial Training Network

    Science.gov (United States)

    von Blanckenburg, Friedhelm; Bouchez, Julien; Bouman, Claudia; Kamber, Balz; Gaillardet, Jérôme; Gorbushina, Anna; James, Rachael; Oelkers, Eric; Tesmer, Maja; Ashton, John

    2015-04-01

    The Marie Curie Initial Training Network »Isotopic Tools as Novel Sensors of Earth Surfaces Resources - IsoNose« is an alliance of eight international partners and five associated partners from science and industry. The project is coordinated at the Helmholtz Centre Potsdam GFZ German Research Centre for Geosciences and will run until February 2018. In the last 15 years, advances in novel mass-spectrometric methods have opened opportunities to identify "isotopic fingerprints" of virtually all metals and to make use of the complete information contained in these fingerprints. The understanding developed with these new tools will ultimately guide the exploitation of Earth surface environments. However, progress in bringing these methods to end-users depends on a transfer of knowledge between (1) isotope geochemistry, (2) microbiology, environmental sciences and economic geology, and (3) instrument developers and users, supporting the development of new, user-friendly mass-spectrometric methods. IsoNose will focus on three major Earth surface resources: soil, water and metals. These resources are currently being exploited to an unprecedented extent, and their efficient management is essential for future sustainable development. Novel stable isotope techniques will disclose the processes generating (e.g. weathering, mineral ore formation) and destroying (e.g. erosion, pollution) these resources. Within this field the following questions will be addressed and answered: - How do novel stable isotope signatures characterize weathering processes? - How do novel stable isotope signatures trace water transport? - How can novel stable isotopes be used as environmental tracers? - How can novel stable isotopes be used for detecting and exploring metal ores? - How can analytical capabilities be improved and robust routine applications for novel stable isotopes be developed? Starting from the central questions mentioned above, the IsoNose activities are organized in five scientific work packages: 1

  18. Terminology for Neuroscience Data Discovery: Multi-tree Syntax and Investigator-Derived Semantics

    Science.gov (United States)

    Goldberg, David H.; Grafstein, Bernice; Robert, Adrian; Gardner, Esther P.

    2009-01-01

    The Neuroscience Information Framework (NIF), developed for the NIH Blueprint for Neuroscience Research and available at http://nif.nih.gov and http://neurogateway.org, is built upon a set of coordinated terminology components enabling data and web-resource description and selection. Core NIF terminologies use a straightforward syntax designed for ease of use and for navigation by familiar web interfaces, and readily exportable to aid development of relational-model databases for neuroscience data sharing. Datasets, data analysis tools, web resources, and other entities are characterized by multiple descriptors, each addressing core concepts, including data type, acquisition technique, neuroanatomy, and cell class. Terms for each concept are organized in a tree structure, providing is-a and has-a relations. Broad general terms near each root span the category or concept and spawn more detailed entries for specificity. Related but distinct concepts (e.g., brain area and depth) are specified by separate trees, for easier navigation than would be required by graph representation. Semantics enabling NIF data discovery were selected at one or more workshops by investigators expert in particular systems (vision, olfaction, behavioral neuroscience, neurodevelopment), brain areas (cerebellum, thalamus, hippocampus), preparations (molluscs, fly), diseases (neurodegenerative disease), or techniques (microscopy, computation and modeling, neurogenetics). Workshop-derived integrated term lists are available Open Source at http://brainml.org; a complete list of participants is at http://brainml.org/workshops. PMID:18958630
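The multi-tree organization described above (separate trees per concept, with is-a links from detailed terms up to broad roots) can be sketched in a few lines. This is an illustrative model only; the term names below are invented, not actual NIF vocabulary.

```python
# Hypothetical sketch of the multi-tree terminology idea: each concept
# (e.g. neuroanatomy, data type) is a separate tree of terms, with
# child -> parent links encoding is-a relations. Term names are
# illustrative, not actual NIF vocabulary.
NEUROANATOMY = {
    "hippocampus": "forebrain",
    "thalamus": "forebrain",
    "forebrain": "brain",
    "cerebellum": "brain",
    "brain": None,  # root of this concept tree
}

DATA_TYPE = {
    "spike train": "time series",
    "time series": "dataset",
    "image stack": "dataset",
    "dataset": None,
}

def ancestors(tree, term):
    """Walk is-a links from a term up to the root of its concept tree."""
    chain = []
    parent = tree.get(term)
    while parent is not None:
        chain.append(parent)
        parent = tree.get(parent)
    return chain

def matches(tree, term, query):
    """A record tagged with `term` satisfies a broader `query` term."""
    return term == query or query in ancestors(tree, term)

print(ancestors(NEUROANATOMY, "hippocampus"))  # ['forebrain', 'brain']
print(matches(DATA_TYPE, "spike train", "dataset"))  # True
```

Keeping related but distinct concepts (e.g. brain area vs. depth) in separate trees, as the abstract notes, makes each tree small enough to navigate with ordinary web widgets, which a general graph representation would not.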

  19. Thoughtflow: Standards and Tools for Provenance Capture and Workflow Definition to Support Model-Informed Drug Discovery and Development.

    Science.gov (United States)

    Wilkins, J J; Chan, Pls; Chard, J; Smith, G; Smith, M K; Beer, M; Dunn, A; Flandorfer, C; Franklin, C; Gomeni, R; Harnisch, L; Kaye, R; Moodie, S; Sardu, M L; Wang, E; Watson, E; Wolstencroft, K; Cheung, Sya

    2017-05-01

    Pharmacometric analyses are complex and multifactorial. It is essential to check, track, and document the vast amounts of data and metadata that are generated during these analyses (and the relationships between them) in order to comply with regulations, support quality control, auditing, and reporting. It is, however, challenging, tedious, error-prone, and time-consuming, and diverts pharmacometricians from the more useful business of doing science. Automating this process would save time, reduce transcriptional errors, support the retention and transfer of knowledge, encourage good practice, and help ensure that pharmacometric analyses appropriately impact decisions. The ability to document, communicate, and reconstruct a complete pharmacometric analysis using an open standard would have considerable benefits. In this article, the Innovative Medicines Initiative (IMI) Drug Disease Model Resources (DDMoRe) consortium proposes a set of standards to facilitate the capture, storage, and reporting of knowledge (including assumptions and decisions) in the context of model-informed drug discovery and development (MID3), as well as to support reproducibility: "Thoughtflow." A prototype software implementation is provided. © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.

  20. Computational medicinal chemistry in fragment-based drug discovery: what, how and when.

    Science.gov (United States)

    Rabal, Obdulia; Urbano-Cuadrado, Manuel; Oyarzabal, Julen

    2011-01-01

    The use of fragment-based drug discovery (FBDD) has increased in the last decade due to the encouraging results obtained to date. In this scenario, computational approaches, together with experimental information, play an important role to guide and speed up the process. By default, FBDD is generally considered as a constructive approach. However, such additive behavior is not always present, therefore, simple fragment maturation will not always deliver the expected results. In this review, computational approaches utilized in FBDD are reported together with real case studies, where applicability domains are exemplified, in order to analyze them, and then, maximize their performance and reliability. Thus, a proper use of these computational tools can minimize misleading conclusions, keeping the credit on FBDD strategy, as well as achieve higher impact in the drug-discovery process. FBDD goes one step beyond a simple constructive approach. A broad set of computational tools: docking, R group quantitative structure-activity relationship, fragmentation tools, fragments management tools, patents analysis and fragment-hopping, for example, can be utilized in FBDD, providing a clear positive impact if they are utilized in the proper scenario - what, how and when. An initial assessment of additive/non-additive behavior is a critical point to define the most convenient approach for fragments elaboration.

  1. SPME as a promising tool in translational medicine and drug discovery: From bench to bedside.

    Science.gov (United States)

    Goryński, Krzysztof; Goryńska, Paulina; Górska, Agnieszka; Harężlak, Tomasz; Jaroch, Alina; Jaroch, Karol; Lendor, Sofia; Skobowiat, Cezary; Bojko, Barbara

    2016-10-25

    Solid-phase microextraction (SPME) is a technology in which a small amount of an extracting phase dispersed on a solid support is exposed to the sample for a well-defined period of time. The open-bed geometry and the biocompatibility of the materials used for manufacturing the devices make it a very convenient tool for direct extraction from complex biological matrices. The flexibility of the formats permits tailoring the method according to the needs of the particular application. A number of studies concerning monitoring of drugs and their metabolites, analysis of the metabolome of volatile as well as non-volatile compounds, and determination of ligand-protein binding, permeability and compound toxicity have already been reported. All these applications were performed in different matrices including biological fluids and tissues, cell cultures, and living animals. The low invasiveness of in vivo SPME, the ability to use very small sample volumes, and the analysis of cell cultures permit addressing the 3Rs rule, which is a currently acknowledged ethical standard in R&D labs. In the current review, a systematic evaluation of the applicability of SPME to studies required to be conducted at different stages of drug discovery and development and translational medicine is presented. The advantages and challenges are discussed based on examples directly reflecting a given experimental design or on studies which could be translated to the models routinely used in the drug development process. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. The arctic water resource vulnerability index: An integrated assessment tool for community resilience and vulnerability with respect to freshwater

    Science.gov (United States)

    Alessa, L.; Kliskey, A.; Lammers, R.; Arp, C.; White, D.; Hinzman, L.; Busey, R.

    2008-01-01

    People in the Arctic face uncertainty in their daily lives as they contend with environmental changes at a range of scales from local to global. Freshwater is a critical resource to people, and although water resource indicators have been developed that operate from regional to global scales and for midlatitude to equatorial environments, no appropriate index exists for assessing the vulnerability of Arctic communities to changing water resources at the local scale. The Arctic Water Resource Vulnerability Index (AWRVI) is proposed as a tool that Arctic communities can use to assess their relative vulnerability-resilience to changes in their water resources from a variety of biophysical and socioeconomic processes. The AWRVI is based on a social-ecological systems perspective that includes physical and social indicators of change and is demonstrated in three case study communities/watersheds in Alaska. These results highlight the value of communities engaging in the process of using the AWRVI and the diagnostic capability of examining the suite of constituent physical and social scores rather than the total AWRVI score alone. ?? 2008 Springer Science+Business Media, LLC.
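The idea of a composite vulnerability-resilience index built from physical and social indicator scores, where the constituent scores are as diagnostic as the total, can be sketched as follows. This is a minimal illustration with invented indicator names and equal weighting, not the published AWRVI formula.

```python
# Illustrative sketch (not the published AWRVI formula): a composite
# vulnerability-resilience index built from physical and social
# indicator scores, each normalized to [0, 1]. Indicator names and
# the equal-weight aggregation are assumptions for illustration.
def subindex(scores):
    """Mean of a set of normalized indicator scores."""
    return sum(scores.values()) / len(scores)

physical = {"permafrost_change": 0.4, "river_discharge": 0.7, "ice_cover": 0.5}
social = {"water_infrastructure": 0.8, "local_monitoring": 0.6}

phys = subindex(physical)
soc = subindex(social)
awrvi_like = (phys + soc) / 2.0

print(round(phys, 3), round(soc, 3), round(awrvi_like, 3))
```

As the abstract emphasizes, a community would inspect `phys` and `soc` (and the raw indicators) alongside the composite score, since the total alone hides which subsystem drives vulnerability.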

  3. The Proteomics Big Challenge for Biomarkers and New Drug-Targets Discovery

    Science.gov (United States)

    Savino, Rocco; Paduano, Sergio; Preianò, Mariaimmacolata; Terracciano, Rosa

    2012-01-01

    In the modern process of drug discovery, clinical, functional and chemical proteomics can converge and integrate synergies. Functional proteomics explores and elucidates the components of pathways and their interactions which, when deregulated, lead to a disease condition. This knowledge allows the design of strategies to target multiple pathways with combinations of pathway-specific drugs, which might increase chances of success and reduce the occurrence of drug resistance. Chemical proteomics, by analyzing the drug interactome, strongly contributes to accelerate the process of new druggable targets discovery. In the research area of clinical proteomics, proteome and peptidome mass spectrometry-profiling of human bodily fluid (plasma, serum, urine and so on), as well as of tissue and of cells, represents a promising tool for novel biomarker and eventually new druggable targets discovery. In the present review we provide a survey of current strategies of functional, chemical and clinical proteomics. Major issues will be presented for proteomic technologies used for the discovery of biomarkers for early disease diagnosis and identification of new drug targets. PMID:23203042

  4. Topology Discovery Using Cisco Discovery Protocol

    OpenAIRE

    Rodriguez, Sergio R.

    2009-01-01

    In this paper we address the problem of discovering network topology in proprietary networks. Namely, we investigate topology discovery in Cisco-based networks. Cisco devices run Cisco Discovery Protocol (CDP) which holds information about these devices. We first compare properties of topologies that can be obtained from networks deploying CDP versus Spanning Tree Protocol (STP) and Management Information Base (MIB) Forwarding Database (FDB). Then we describe a method of discovering topology ...

  5. Perspectives on bioanalytical mass spectrometry and automation in drug discovery.

    Science.gov (United States)

    Janiszewski, John S; Liston, Theodore E; Cole, Mark J

    2008-11-01

    The use of high speed synthesis technologies has resulted in a steady increase in the number of new chemical entities active in the drug discovery research stream. Large organizations can have thousands of chemical entities in various stages of testing and evaluation across numerous projects on a weekly basis. Qualitative and quantitative measurements made using LC/MS are integrated throughout this process from early stage lead generation through candidate nomination. Nearly all analytical processes and procedures in modern research organizations are automated to some degree. This includes both hardware and software automation. In this review we discuss bioanalytical mass spectrometry and automation as components of the analytical chemistry infrastructure in pharma. Analytical chemists are presented as members of distinct groups with similar skillsets that build automated systems, manage test compounds, assays and reagents, and deliver data to project teams. The ADME-screening process in drug discovery is used as a model to highlight the relationships between analytical tasks in drug discovery. Emerging software and process automation tools are described that can potentially address gaps and link analytical chemistry related tasks. The role of analytical chemists and groups in modern 'industrialized' drug discovery is also discussed.

  6. An online knowledge resource and questionnaires as a continuing pharmacy education tool to document reflective learning.

    Science.gov (United States)

    Budzinski, Jason W; Farrell, Barbara; Pluye, Pierre; Grad, Roland M; Repchinsky, Carol; Jovaisas, Barbara; Johnson-Lafleur, Janique

    2012-06-18

    To assess the use of an electronic knowledge resource to document continuing education activities and reveal educational needs of practicing pharmacists. Over a 38-week period, 67 e-mails were sent to 6,500 Canadian Pharmacists Association (CPhA) members. Each e-mail contained a link to an e-Therapeutics+ Highlight, a factual excerpt of selected content from an online drug and therapeutic knowledge resource. Participants were then prompted to complete a pop-up questionnaire. Members completed 4,140 questionnaires. Participants attributed the information they learned in the Highlights to practice improvements (50.4%), learning (57.0%), and motivation to learn more (57.4%). Reading Highlight excerpts and completing Web-based questionnaires is an effective method of continuing education that could be easily documented and tracked, making it an effective tool for use with e-portfolios.

  7. Fragment approaches in structure-based drug discovery

    International Nuclear Information System (INIS)

    Hubbard, Roderick E.

    2008-01-01

    Fragment-based methods are successfully generating novel and selective drug-like inhibitors of protein targets, with a number of groups reporting compounds entering clinical trials. This paper summarizes the key features of the approach as one of the tools in structure-guided drug discovery. There has been considerable interest recently in what is known as 'fragment-based lead discovery'. The novel feature of the approach is to begin with small low-affinity compounds. The main advantage is that a larger potential chemical diversity can be sampled with fewer compounds, which is particularly important for new target classes. The approach relies on careful design of the fragment library, a method that can detect binding of the fragment to the protein target, determination of the structure of the fragment bound to the target, and the conventional use of structural information to guide compound optimization. In this article the methods are reviewed, and experiences in fragment-based discovery of lead series of compounds against kinases such as PDK1 and ATPases such as Hsp90 are discussed. The examples illustrate some of the key benefits and issues of the approach and also provide anecdotal examples of the patterns seen in selectivity and the binding mode of fragments across different protein targets

  8. Venomics-Accelerated Cone Snail Venom Peptide Discovery

    Science.gov (United States)

    Himaya, S. W. A.

    2018-01-01

    Cone snail venoms are considered a treasure trove of bioactive peptides. Despite over 800 species of cone snails being known, each producing over 1000 venom peptides, only about 150 unique venom peptides are structurally and functionally characterized. To overcome the limitations of the traditional low-throughput bio-discovery approaches, multi-omics systems approaches have been introduced to accelerate venom peptide discovery and characterisation. This “venomic” approach is starting to unravel the full complexity of cone snail venoms and to provide new insights into their biology and evolution. The main challenge for venomics is the effective integration of transcriptomics, proteomics, and pharmacological data and the efficient analysis of big datasets. Novel database search tools and visualisation techniques are now being introduced that facilitate data exploration, with ongoing advances in related omics fields being expected to further enhance venomics studies. Despite these challenges and future opportunities, cone snail venomics has already exponentially expanded the number of novel venom peptide sequences identified from the species investigated, although most novel conotoxins remain to be pharmacologically characterised. Therefore, efficient high-throughput peptide production systems and/or banks of miniaturized discovery assays are required to overcome this bottleneck and thus enhance cone snail venom bioprospecting and accelerate the identification of novel drug leads. PMID:29522462

  9. Venomics-Accelerated Cone Snail Venom Peptide Discovery

    Directory of Open Access Journals (Sweden)

    S. W. A. Himaya

    2018-03-01

    Cone snail venoms are considered a treasure trove of bioactive peptides. Despite over 800 species of cone snails being known, each producing over 1000 venom peptides, only about 150 unique venom peptides are structurally and functionally characterized. To overcome the limitations of the traditional low-throughput bio-discovery approaches, multi-omics systems approaches have been introduced to accelerate venom peptide discovery and characterisation. This “venomic” approach is starting to unravel the full complexity of cone snail venoms and to provide new insights into their biology and evolution. The main challenge for venomics is the effective integration of transcriptomics, proteomics, and pharmacological data and the efficient analysis of big datasets. Novel database search tools and visualisation techniques are now being introduced that facilitate data exploration, with ongoing advances in related omics fields being expected to further enhance venomics studies. Despite these challenges and future opportunities, cone snail venomics has already exponentially expanded the number of novel venom peptide sequences identified from the species investigated, although most novel conotoxins remain to be pharmacologically characterised. Therefore, efficient high-throughput peptide production systems and/or banks of miniaturized discovery assays are required to overcome this bottleneck and thus enhance cone snail venom bioprospecting and accelerate the identification of novel drug leads.

  10. Sustaining an Online, Shared Community Resource for Models, Robust Open source Software Tools and Data for Volcanology - the Vhub Experience

    Science.gov (United States)

    Patra, A. K.; Valentine, G. A.; Bursik, M. I.; Connor, C.; Connor, L.; Jones, M.; Simakov, N.; Aghakhani, H.; Jones-Ivey, R.; Kosar, T.; Zhang, B.

    2015-12-01

    Over the last 5 years we have created a community collaboratory, Vhub.org [Palma et al., J. App. Volc. 3:2 doi:10.1186/2191-5040-3-2], as a place to find volcanology-related resources, a venue for users to disseminate tools, teaching resources, and data, and an online platform to support collaborative efforts. As the community (current active users > 6000, from an estimated community of comparable size) embeds the tools in the collaboratory into educational and research workflows, it became imperative to: a) redesign tools into robust, open-source, reusable software for online and offline usage/enhancement; b) share large datasets with remote collaborators and other users seamlessly and securely; c) support complex workflows for uncertainty analysis, validation and verification, and data assimilation with large data. The focus on tool development/redevelopment has been twofold: firstly, to use best practices in software engineering and new hardware such as multi-core and graphics processing units; secondly, to enhance capabilities to support inverse modeling, uncertainty quantification using large ensembles and design of experiments, calibration, and validation. Among the software engineering practices we follow are open sourcing to facilitate community contributions, modularity, and reusability. Our initial targets are four popular tools on Vhub - TITAN2D, TEPHRA2, PUFF and LAVA. Use of tools like these requires many observation-driven datasets, e.g. digital elevation models of topography, satellite imagery, and field observations on deposits. These data are often maintained in private repositories and shared by "sneaker-net". As a partial solution to this we tested mechanisms using iRODS software for online sharing of private data with public metadata and access limits. Finally, we adapted workflow engines (e.g. Pegasus) to support the complex data and computing workflows needed for usage like uncertainty quantification for hazard analysis using physical

  11. Gene2Function: An Integrated Online Resource for Gene Function Discovery

    Directory of Open Access Journals (Sweden)

    Yanhui Hu

    2017-08-01

    One of the most powerful ways to develop hypotheses regarding the biological functions of conserved genes in a given species, such as humans, is to first look at what is known about their function in another species. Model organism databases and other resources are rich with functional information but difficult to mine. Gene2Function addresses a broad need by integrating information about conserved genes in a single online resource.

  12. Human Ageing Genomic Resources: Integrated databases and tools for the biology and genetics of ageing

    Science.gov (United States)

    Tacutu, Robi; Craig, Thomas; Budovsky, Arie; Wuttke, Daniel; Lehmann, Gilad; Taranukha, Dmitri; Costa, Joana; Fraifeld, Vadim E.; de Magalhães, João Pedro

    2013-01-01

    The Human Ageing Genomic Resources (HAGR, http://genomics.senescence.info) is a freely available online collection of research databases and tools for the biology and genetics of ageing. HAGR now features several databases with high-quality manually curated data: (i) GenAge, a database of genes associated with ageing in humans and model organisms; (ii) AnAge, an extensive collection of longevity records and complementary traits for >4000 vertebrate species; and (iii) GenDR, a newly incorporated database, containing both gene mutations that interfere with dietary restriction-mediated lifespan extension and consistent gene expression changes induced by dietary restriction. Since its creation about 10 years ago, major efforts have been undertaken to maintain the quality of data in HAGR, while further continuing to develop, improve and extend it. This article briefly describes the content of HAGR and details the major updates since its previous publications, in terms of both structure and content. The completely redesigned interface, more intuitive and more integrative of HAGR resources, is also presented. Altogether, we hope that through its improvements, the current version of HAGR will continue to provide users with the most comprehensive and accessible resources available today in the field of biogerontology. PMID:23193293

  13. SV-AUTOPILOT: optimized, automated construction of structural variation discovery and benchmarking pipelines.

    Science.gov (United States)

    Leung, Wai Yi; Marschall, Tobias; Paudel, Yogesh; Falquet, Laurent; Mei, Hailiang; Schönhuth, Alexander; Maoz Moss, Tiffanie Yael

    2015-03-25

    Many tools exist to predict structural variants (SVs), utilizing a variety of algorithms. However, they have largely been developed and tested on human germline or somatic (e.g. cancer) variation. It seems appropriate to exploit this wealth of technology available for humans also for other species. Objectives of this work included: a) Creating an automated, standardized pipeline for SV prediction. b) Identifying the best tool(s) for SV prediction through benchmarking. c) Providing a statistically sound method for merging SV calls. The SV-AUTOPILOT meta-tool platform is an automated pipeline for standardization of SV prediction and SV tool development in paired-end next-generation sequencing (NGS) analysis. SV-AUTOPILOT comes in the form of a virtual machine, which includes all datasets, tools and algorithms presented here. The virtual machine easily allows one to add, replace and update genomes, SV callers and post-processing routines and therefore provides an easy, out-of-the-box environment for complex SV discovery tasks. SV-AUTOPILOT was used to make a direct comparison between 7 popular SV tools on the Arabidopsis thaliana genome using the Landsberg (Ler) ecotype as a standardized dataset. Recall and precision measurements suggest that Pindel and Clever were the most adaptable to this dataset across all size ranges while Delly performed well for SVs larger than 250 nucleotides. A novel, statistically-sound merging process, which can control the false discovery rate, reduced the false positive rate on the Arabidopsis benchmark dataset used here by >60%. SV-AUTOPILOT provides a meta-tool platform for future SV tool development and the benchmarking of tools on other genomes using a standardized pipeline. It optimizes detection of SVs in non-human genomes using statistically robust merging. The benchmarking in this study has demonstrated the power of 7 different SV tools for analyzing different size classes and types of structural variants. The optional merge
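Merging SV calls from several tools can be sketched as a consensus vote over clusters of reciprocally overlapping intervals. This is a much-simplified illustration, not the statistically sound, FDR-controlling procedure the paper describes; tool names and thresholds below are assumptions.

```python
# Simplified consensus merge of SV calls (chrom, start, end) from
# several tools: calls are clustered by reciprocal overlap, and a
# cluster is kept only if a minimum number of distinct tools support
# it. NOT the paper's FDR-controlling merge; for illustration only.
def reciprocal_overlap(a, b):
    if a[0] != b[0]:
        return 0.0
    ov = min(a[2], b[2]) - max(a[1], b[1])
    if ov <= 0:
        return 0.0
    return ov / max(a[2] - a[1], b[2] - b[1])

def consensus(callsets, min_support=2, min_ro=0.5):
    clusters = []  # each cluster: list of (tool, call)
    for tool, calls in callsets.items():
        for call in calls:
            for cl in clusters:
                if any(reciprocal_overlap(call, c) >= min_ro for _, c in cl):
                    cl.append((tool, call))
                    break
            else:
                clusters.append([(tool, call)])
    return [cl for cl in clusters if len({t for t, _ in cl}) >= min_support]

calls = {
    "pindel": [("chr1", 100, 600)],
    "clever": [("chr1", 120, 580)],
    "delly":  [("chr2", 5000, 5300)],
}
kept = consensus(calls)
print(len(kept))  # 1: only the chr1 deletion has two supporting tools
```

Raising `min_support` trades sensitivity for a lower false positive rate, which is the intuition behind the >60% false-positive reduction reported on the Arabidopsis benchmark.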

  14. International Uranium Resources Evaluation Project (IUREP) national favourability studies: Australia

    International Nuclear Information System (INIS)

    1977-08-01

    In Australia most exploration for uranium has been conducted by companies and individuals. The geological mapping and airborne radiometric surveying conducted by the BMR is made available to interested persons. Exploration for uranium in Australia can be divided into two periods: 1947 to 1961 and 1966 to 1977. During the first period the Commonwealth Government introduced measures to encourage uranium exploration, including a system of rewards for the discovery of uranium ore. This reward system resulted in extensive activity by prospectors, particularly in the known mineral fields. Equipped with a Geiger counter or scintillometer, individuals with little or no experience in prospecting could compete with experienced prospectors and geologists. During this period several relatively small uranium deposits were discovered, generally by prospectors who found outcropping mineralisation. The second phase of uranium exploration in Australia began in 1966, at which time reserves amounted to only 6,200 tonnes of uranium; by 1977 reserves had been increased to 289,000 tonnes. Most of the exploration was done by companies with substantial exploration budgets utilising more advanced geological and geophysical techniques. In the field of airborne radiometry, the development of multi-channel gamma-ray spectrometers with large-volume crystal detectors increased the sensitivity of the tool as a uranium detector and resulted in several major discoveries. Expenditure on exploration for uranium increased from 1966 to 1971 but has declined in recent years. After listing the major geological elements of Australia, its uranium production and resources are discussed. During the period 1954-71 the total production of uranium concentrate in Australia amounted to 7,780 tonnes of uranium, and was derived from deposits at Rum Jungle (2,990 tonnes U) and the South Alligator River (610 tonnes U) in the Northern Territory, Mary Kathleen (3,460 tonnes U) in Queensland and Radium Hill (720 tonnes U

  15. A methodology for handling exploration risk and constructing supply curves for oil and gas plays when resources are stacked

    International Nuclear Information System (INIS)

    Dallaire, S.M.

    1994-01-01

    The use of project economics to estimate full-cycle supply prices for undiscovered oil and gas resources is a straightforward exercise for those regions where oil and gas plays are not vertically superimposed on one another, i.e. are not stacked. Exploration risk is incorporated into such an analysis by using a simple two-outcome decision tree model to include the costs of dry and abandoned wells. The decision tree model can be expanded to include multiple targets or discoveries, but this expansion requires additional drilling statistics and resource assessment data. A methodology is suggested to include exploration risk in the preparation of supply curves when stacked resources are expected and little or no information on uphole resources is available. In this method, all exploration costs for wells drilled to targets in the play being evaluated are assigned to that play, rather than prorated among the multiple targets or discoveries. Undiscovered pools are assumed to either bear all exploration costs (full-cycle discoveries) or no exploration costs (half-cycle discoveries). The weighted full- and half-cycle supply price is shown to be a more realistic estimate of the supply price of undiscovered pools in a play when stacked resources exist. The statistics required for this methodology are minimal, and resource estimates for prospects in other zones are not required. The equation relating the average pool finding cost to the discovery record is applicable to different scenarios regarding the presence of shallower and deeper resources. The equation derived for the two-outcome decision tree model is shown to be a special case of the general expression. 5 refs., 7 figs
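The two-outcome reasoning above can be sketched numerically: a well either finds a pool or is dry, so the expected exploration cost per discovery is the well cost grossed up by the success ratio, and a play's supply price blends full-cycle and half-cycle prices. All numbers and the blending weight below are invented for illustration; the paper's actual equations are more general.

```python
# Hedged sketch of the two-outcome decision tree logic, with invented
# numbers. A discovery is assumed to bear either all exploration costs
# (full-cycle) or none (half-cycle); the weighted price blends the two.
def expected_exploration_cost(well_cost, success_ratio):
    """Exploration cost attributed to one discovery (two-outcome tree)."""
    return well_cost / success_ratio

def weighted_supply_price(half_cycle_price, expl_cost_per_unit,
                          full_cycle_fraction):
    """Blend full-cycle and half-cycle supply prices for a play."""
    full_cycle_price = half_cycle_price + expl_cost_per_unit
    return (full_cycle_fraction * full_cycle_price
            + (1.0 - full_cycle_fraction) * half_cycle_price)

# Invented example: $5M wells, 1-in-10 success ratio, exploration cost
# spread over 2.5 million barrels, 60% of pools bearing exploration costs.
expl = expected_exploration_cost(5e6, 0.10) / 2.5e6   # $/bbl
price = weighted_supply_price(12.0, expl, 0.6)
print(round(expl, 2), round(price, 2))  # 20.0 24.0
```

Setting `full_cycle_fraction` to 1.0 recovers the ordinary full-cycle analysis, consistent with the abstract's remark that the two-outcome model is a special case of the general expression.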

  16. DFAST and DAGA: web-based integrated genome annotation tools and resources.

    Science.gov (United States)

    Tanizawa, Yasuhiro; Fujisawa, Takatomo; Kaminuma, Eli; Nakamura, Yasukazu; Arita, Masanori

    2016-01-01

    Quality assurance and correct taxonomic affiliation of data submitted to public sequence databases have been an everlasting problem. The DDBJ Fast Annotation and Submission Tool (DFAST) is a newly developed genome annotation pipeline with quality and taxonomy assessment tools. To enable annotation of ready-to-submit quality, we also constructed curated reference protein databases tailored for lactic acid bacteria. DFAST was developed so that all the procedures required for DDBJ submission could be done seamlessly online. The online workspace would be especially useful for users unfamiliar with bioinformatics. In addition, we have developed a genome repository, DFAST Archive of Genome Annotation (DAGA), which currently includes 1,421 genomes covering 179 species and 18 subspecies of two genera, Lactobacillus and Pediococcus, obtained from both DDBJ/ENA/GenBank and the Sequence Read Archive (SRA). All the genomes deposited in DAGA were annotated consistently and assessed using DFAST. To assess the taxonomic position based on genomic sequence information, we used the average nucleotide identity (ANI), which showed high discriminative power in determining whether two given genomes belong to the same species. We corrected mislabeled or misidentified genomes in the public database and deposited the curated information in DAGA. The repository will improve the accessibility and reusability of genome resources for lactic acid bacteria. By exploiting the data deposited in DAGA, we found intraspecific subgroups in Lactobacillus gasseri and Lactobacillus jensenii, whose variation between subgroups is larger than the well-accepted ANI threshold of 95% to differentiate species. DFAST and DAGA are freely accessible at https://dfast.nig.ac.jp.
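The ANI-based species check mentioned in this record can be sketched simply. This is not DFAST's implementation: ANI pipelines typically fragment one genome, align the fragments against the other genome, and average the percent identity of qualifying alignments. The sketch below assumes that alignment step has already produced hypothetical (identity, coverage) pairs and only shows the averaging and the ~95% species boundary.

```python
# Simplified, illustrative ANI calculation (not DFAST's code). Inputs are
# per-fragment (identity, coverage) pairs as fractions in [0, 1], assumed
# to come from a prior alignment step.

def average_nucleotide_identity(fragment_hits, min_coverage=0.7):
    """Average identity over fragments whose alignment coverage passes
    the cutoff."""
    kept = [ident for ident, cov in fragment_hits if cov >= min_coverage]
    if not kept:
        raise ValueError("no fragment alignment passed the coverage cutoff")
    return sum(kept) / len(kept)

def same_species(ani, threshold=0.95):
    """Apply the widely used ~95% ANI species boundary cited above."""
    return ani >= threshold

hits = [(0.97, 0.9), (0.96, 0.85), (0.60, 0.2), (0.98, 0.95)]
ani = average_nucleotide_identity(hits)
print(round(ani, 3), same_species(ani))  # 0.97 True
```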

  17. Preference vs. Authority: A Comparison of Student Searching in a Subject-Specific Indexing and Abstracting Database and a Customized Discovery Layer

    Science.gov (United States)

    Dahlen, Sarah P. C.; Hanson, Kathlene

    2017-01-01

    Discovery layers provide a simplified interface for searching library resources. Libraries with limited finances make decisions about retaining indexing and abstracting databases when similar information is available in discovery layers. These decisions should be informed by student success at finding quality information as well as satisfaction…

  18. A 2-layer and P2P-based architecture on resource location in future grid environment

    International Nuclear Information System (INIS)

    Pei Erming; Sun Gongxin; Zhang Weiyi; Pang Yangguang; Gu Ming; Ma Nan

    2004-01-01

    Grid and Peer-to-Peer computing are two distributed resource sharing environments that have developed rapidly in recent years. The final objective of Grid, as well as that of P2P technology, is to pool large sets of resources effectively so they can be used in a more convenient, fast and transparent way. We can speculate that, though many differences exist, Grid and P2P environments will converge into a large scale resource sharing environment that combines the characteristics of the two: large diversity, high heterogeneity (of resources), dynamism, and lack of central control. Resource discovery in this future Grid environment is a basic yet important problem. In this article, we propose a two-layer and P2P-based architecture for resource discovery and design a detailed algorithm for resource request propagation in the computing environment discussed above. (authors)
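The request-propagation idea in such P2P-based discovery schemes can be sketched as a bounded flood: with no central index, a peer forwards a resource request to its neighbours with a time-to-live (TTL) so lookups stay bounded. The paper's own two-layer algorithm is not reproduced here; the overlay, peer names, and resource labels below are illustrative assumptions only.

```python
# Hypothetical sketch of TTL-limited request flooding in an unstructured
# P2P overlay (a generic technique, not the paper's specific algorithm).
from collections import deque

def discover(overlay, start, resource, ttl):
    """Breadth-first flood from `start`; returns peers holding `resource`
    reachable within `ttl` hops. `overlay` maps peer -> (neighbours, resources)."""
    found, seen = [], {start}
    queue = deque([(start, ttl)])
    while queue:
        peer, hops = queue.popleft()
        if resource in overlay[peer][1]:
            found.append(peer)
        if hops == 0:
            continue
        for nbr in overlay[peer][0]:
            if nbr not in seen:       # suppress duplicate requests
                seen.add(nbr)
                queue.append((nbr, hops - 1))
    return found

overlay = {
    "A": (["B", "C"], set()),
    "B": (["A", "D"], {"cpu"}),
    "C": (["A"], set()),
    "D": (["B"], {"cpu"}),
}
print(discover(overlay, "A", "cpu", ttl=1))  # ['B'] -- D is two hops away
print(discover(overlay, "A", "cpu", ttl=2))  # ['B', 'D']
```

Raising the TTL widens the search horizon at the cost of more messages, which is the basic trade-off any flooding-style discovery layer must manage.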

  19. Simulation with quantum mechanics/molecular mechanics for drug discovery.

    Science.gov (United States)

    Barbault, Florent; Maurel, François

    2015-10-01

    Biological macromolecules, such as proteins or nucleic acids, are (still) molecules and thus they follow the same chemical rules that any simple molecule follows, even if their size generally renders fully accurate studies impractical. However, in the context of drug discovery, a detailed analysis of ligand association is required for understanding or predicting their interactions, and hybrid quantum mechanics/molecular mechanics (QM/MM) computations are relevant tools to help elucidate this process. In this review, the authors explore the use of QM/MM for drug discovery. After a brief description of the molecular mechanics (MM) technique, the authors describe the subtractive and additive techniques for QM/MM computations. The authors then present several application cases in topics involved in drug discovery. QM/MM methods have been widely employed during the last decades to study chemical processes such as enzyme-inhibitor interactions. However, despite the enthusiasm around this area, plain MM simulations may be more meaningful than QM/MM. To obtain reliable results, the authors suggest fixing several keystone parameters according to the underlying chemistry of each studied system.
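The subtractive and additive QM/MM schemes mentioned in this abstract are conventionally written as the following energy expressions, where S is the full system and I the inner (QM-treated) region; this is the textbook form of the two schemes, not notation taken from the reviewed article.

```latex
% Subtractive scheme (ONIOM-like): the MM description of the inner
% region I is replaced by a QM one within the full system S.
E^{\mathrm{sub}}_{\mathrm{QM/MM}}
  = E_{\mathrm{MM}}(S) + E_{\mathrm{QM}}(I) - E_{\mathrm{MM}}(I)

% Additive scheme: QM inner region, MM outer region, plus an explicit
% QM-MM coupling term (electrostatics, van der Waals, boundary bonds).
E^{\mathrm{add}}_{\mathrm{QM/MM}}
  = E_{\mathrm{QM}}(I) + E_{\mathrm{MM}}(S \setminus I)
    + E_{\mathrm{QM\text{-}MM}}(I,\, S \setminus I)
```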

  20. Endophytic Fungi as Novel Resources of natural Therapeutics

    Directory of Open Access Journals (Sweden)

    Maheshwari Rajamanikyam

    2017-08-01

    Full Text Available ABSTRACT Fungal endophytes constitute a major part of the unexplored fungal diversity. Endophytic fungi (EF) are an important source of novel, potent and active metabolites. Studies of plant-endophyte and endophyte-endophyte interactions provide insights into mutualism and metabolite production by fungi. The main function of the bioactive compounds produced by endophytes is to help the host plants resist external biotic and abiotic stress, which benefits host survival in return. These organisms mainly consist of members of the Ascomycota, Basidiomycota, Zygomycota and Oomycota. Recently, genome sequencing technology has emerged as one of the most efficient tools, providing the whole information of a genome in a short period of time. Endophytes are fertile ground for drug discovery. EF are considered the hidden members of the microbial world and represent an underutilized resource for new therapeutics and compounds. Endophytes are a rich source of natural products displaying a broad spectrum of biological activities such as anticancer, antibacterial, antiviral, immunomodulatory, antidiabetic, antioxidant, anti-arthritis and anti-inflammatory.

  1. Managing research and surveillance projects in real-time with a novel open-source eManagement tool designed for under-resourced countries.

    Science.gov (United States)

    Steiner, Andreas; Hella, Jerry; Grüninger, Servan; Mhalu, Grace; Mhimbira, Francis; Cercamondi, Colin I; Doulla, Basra; Maire, Nicolas; Fenner, Lukas

    2016-09-01

    A software tool is developed to facilitate data entry and to monitor research projects in under-resourced countries in real-time. The eManagement tool "odk_planner" is written in the scripting languages PHP and Python. The odk_planner is lightweight and uses minimal internet resources. It was designed to be used with the open source software Open Data Kit (ODK). The users can easily configure odk_planner to meet their needs, and the online interface displays data collected from ODK forms in a graphically informative way. The odk_planner also allows users to upload pictures and laboratory results and sends text messages automatically. User-defined access rights protect data and privacy. We present examples from four field applications in Tanzania successfully using the eManagement tool: 1) clinical trial; 2) longitudinal Tuberculosis (TB) Cohort Study with a complex visit schedule, where it was used to graphically display missing case report forms, upload digitalized X-rays, and send text message reminders to patients; 3) intervention study to improve TB case detection, carried out at pharmacies: a tablet-based electronic referral system monitored referred patients, and sent automated messages to remind pharmacy clients to visit a TB Clinic; and 4) TB retreatment case monitoring designed to improve drug resistance surveillance: clinicians at four public TB clinics and lab technicians at the TB reference laboratory used a smartphone-based application that tracked sputum samples, and collected clinical and laboratory data. The user friendly, open source odk_planner is a simple, but multi-functional, Web-based eManagement tool with add-ons that helps researchers conduct studies in under-resourced countries. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  2. Open Science Meets Stem Cells: A New Drug Discovery Approach for Neurodegenerative Disorders.

    Science.gov (United States)

    Han, Chanshuai; Chaineau, Mathilde; Chen, Carol X-Q; Beitel, Lenore K; Durcan, Thomas M

    2018-01-01

    Neurodegenerative diseases are a challenge for drug discovery, as the biological mechanisms are complex and poorly understood, with a paucity of models that faithfully recapitulate these disorders. Recent advances in stem cell technology have provided a paradigm shift, providing researchers with tools to generate human induced pluripotent stem cells (iPSCs) from patient cells. With the potential to generate any human cell type, we can now generate human neurons and develop "first-of-their-kind" disease-relevant assays for small molecule screening. Now that the tools are in place, it is imperative that we accelerate discoveries from the bench to the clinic. Using traditional closed-door research systems raises barriers to discovery, by restricting access to cells, data and other research findings. Thus, a new strategy is required, and the Montreal Neurological Institute (MNI) and its partners are piloting an "Open Science" model. One signature initiative will be that the MNI biorepository will curate and disseminate patient samples in a more accessible manner through open transfer agreements. This feeds into the MNI open drug discovery platform, focused on developing industry-standard assays with iPSC-derived neurons. All cell lines, reagents and assay findings developed in this open fashion will be made available to academia and industry. By removing the obstacles many universities and companies face in distributing patient samples and assay results, our goal is to accelerate translational medical research and the development of new therapies for devastating neurodegenerative disorders.

  3. Open Science Meets Stem Cells: A New Drug Discovery Approach for Neurodegenerative Disorders

    Directory of Open Access Journals (Sweden)

    Chanshuai Han

    2018-02-01

    Full Text Available Neurodegenerative diseases are a challenge for drug discovery, as the biological mechanisms are complex and poorly understood, with a paucity of models that faithfully recapitulate these disorders. Recent advances in stem cell technology have provided a paradigm shift, providing researchers with tools to generate human induced pluripotent stem cells (iPSCs) from patient cells. With the potential to generate any human cell type, we can now generate human neurons and develop “first-of-their-kind” disease-relevant assays for small molecule screening. Now that the tools are in place, it is imperative that we accelerate discoveries from the bench to the clinic. Using traditional closed-door research systems raises barriers to discovery, by restricting access to cells, data and other research findings. Thus, a new strategy is required, and the Montreal Neurological Institute (MNI) and its partners are piloting an “Open Science” model. One signature initiative will be that the MNI biorepository will curate and disseminate patient samples in a more accessible manner through open transfer agreements. This feeds into the MNI open drug discovery platform, focused on developing industry-standard assays with iPSC-derived neurons. All cell lines, reagents and assay findings developed in this open fashion will be made available to academia and industry. By removing the obstacles many universities and companies face in distributing patient samples and assay results, our goal is to accelerate translational medical research and the development of new therapies for devastating neurodegenerative disorders.

  4. Scholarly information discovery in the networked academic learning environment

    CERN Document Server

    Li, LiLi

    2014-01-01

    In the dynamic and interactive academic learning environment, students are required to have qualified information literacy competencies while critically reviewing print and electronic information. However, many undergraduates encounter difficulties in searching peer-reviewed information resources. Scholarly Information Discovery in the Networked Academic Learning Environment is a practical guide for students determined to improve their academic performance and career development in the digital age. Also written with academic instructors and librarians in mind who need to show their students how to access and search academic information resources and services, the book serves as a reference to promote information literacy instructions. This title consists of four parts, with chapters on the search for online and printed information via current academic information resources and services: part one examines understanding information and information literacy; part two looks at academic information delivery in the...

  5. Activities, Animations, and Online Tools to Enable Undergraduate Student Learning of Geohazards, Climate Change, and Water Resources

    Science.gov (United States)

    Pratt-Sitaula, B. A.; Walker, B.; Douglas, B. J.; Cronin, V. S.; Funning, G.; Stearns, L. A.; Charlevoix, D.; Miller, M. M.

    2017-12-01

    The NSF-funded GEodesy Tools for Societal Issues (GETSI) project is developing teaching resources for use in introductory and majors-level courses, emphasizing a broad range of geodetic methods and data applied to societally important issues. The modules include a variety of hands-on activities, demonstrations, animations, and interactive online tools in order to facilitate student learning and engagement. A selection of these activities will be showcased at the AGU session. These activities and data analysis exercises are embedded in 4-6 units per module. Modules can take 2-3 weeks of course time total, or individual units and activities can be selected and used over just 1-2 class periods. Existing modules are available online via serc.carleton.edu/getsi/ and include "Ice mass and sea level changes", "Imaging active tectonics with LiDAR and InSAR", "Measuring water resources with GPS, gravity, and traditional methods", "Surface process hazards", and "GPS, strain, and earthquakes". Modules and their activities and demonstrations were designed by teams of faculty and content experts and underwent rigorous classroom testing and review using the process developed by the Science Education Resource Center's InTeGrate Project (serc.carleton.edu/integrate). All modules are aligned to Earth Science and Climate literacy principles. GETSI collaborating institutions are UNAVCO (which runs NSF's Geodetic Facility), Indiana University, and Mt San Antonio College. Initial funding came from NSF's TUES (Transforming Undergraduate Education in STEM) program. A second phase of funding from NSF IUSE (Improving Undergraduate STEM Education) is just starting and will fund another six modules (including their demonstrations, animations, and hands-on activities) as well as considerably more instructor professional development to facilitate implementation and use.

  6. The Southern HII Region Discovery Survey

    Science.gov (United States)

    Wenger, Trey; Miller Dickey, John; Jordan, Christopher; Bania, Thomas M.; Balser, Dana S.; Dawson, Joanne; Anderson, Loren D.; Armentrout, William P.; McClure-Griffiths, Naomi

    2016-01-01

    HII regions are zones of ionized gas surrounding recently formed high-mass (OB-type) stars. They are among the brightest objects in the sky at radio wavelengths. HII regions provide a useful tool in constraining the Galactic morphological structure, chemical structure, and star formation rate. We describe the Southern HII Region Discovery Survey (SHRDS), an Australia Telescope Compact Array (ATCA) survey that discovered ~80 new HII regions (so far) in the Galactic longitude range 230 degrees to 360 degrees. This project is an extension of the Green Bank Telescope HII Region Discovery Survey (GBT HRDS), Arecibo HRDS, and GBT Widefield Infrared Survey Explorer (WISE) HRDS, which together discovered ~800 new HII regions in the Galactic longitude range -20 degrees to 270 degrees. Similar to those surveys, candidate HII regions were chosen from 20 micron emission (from WISE) coincident with 10 micron (WISE) and 20 cm (SGPS) emission. By using the ATCA to detect radio continuum and radio recombination line emission from a subset of these candidates, we have added to the population of known Galactic HII regions.

  7. Business Model Discovery by Technology Entrepreneurs

    Directory of Open Access Journals (Sweden)

    Steven Muegge

    2012-04-01

    Full Text Available Value creation and value capture are central to technology entrepreneurship. The ways in which a particular firm creates and captures value are the foundation of that firm's business model, which is an explanation of how the business delivers value to a set of customers at attractive profits. Despite the deep conceptual link between business models and technology entrepreneurship, little is known about the processes by which technology entrepreneurs produce successful business models. This article makes three contributions to partially address this knowledge gap. First, it argues that business model discovery by technology entrepreneurs can be, and often should be, disciplined by both intention and structure. Second, it provides a tool for disciplined business model discovery that includes an actionable process and a worksheet for describing a business model in a form that is both concise and explicit. Third, it shares preliminary results and lessons learned from six technology entrepreneurs applying a disciplined process to strengthen or reinvent the business models of their own nascent technology businesses.

  8. Work Stress Interventions in Hospital Care: Effectiveness of the DISCovery Method

    Directory of Open Access Journals (Sweden)

    Irene Niks

    2018-02-01

    Full Text Available Effective interventions to prevent work stress and to improve health, well-being, and performance of employees are of the utmost importance. This quasi-experimental intervention study presents a specific method for diagnosis of psychosocial risk factors at work and subsequent development and implementation of tailored work stress interventions, the so-called DISCovery method. This method aims at improving employee health, well-being, and performance by optimizing the balance between job demands, job resources, and recovery from work. The aim of the study is to quantitatively assess the effectiveness of the DISCovery method in hospital care. Specifically, we used a three-wave longitudinal, quasi-experimental multiple-case study approach with intervention and comparison groups in health care work. Positive changes were found for members of the intervention groups, relative to members of the corresponding comparison groups, with respect to targeted work-related characteristics and targeted health, well-being, and performance outcomes. Overall, results lend support for the effectiveness of the DISCovery method in hospital care.

  9. Work Stress Interventions in Hospital Care: Effectiveness of the DISCovery Method.

    Science.gov (United States)

    Niks, Irene; de Jonge, Jan; Gevers, Josette; Houtman, Irene

    2018-02-13

    Effective interventions to prevent work stress and to improve health, well-being, and performance of employees are of the utmost importance. This quasi-experimental intervention study presents a specific method for diagnosis of psychosocial risk factors at work and subsequent development and implementation of tailored work stress interventions, the so-called DISCovery method. This method aims at improving employee health, well-being, and performance by optimizing the balance between job demands, job resources, and recovery from work. The aim of the study is to quantitatively assess the effectiveness of the DISCovery method in hospital care. Specifically, we used a three-wave longitudinal, quasi-experimental multiple-case study approach with intervention and comparison groups in health care work. Positive changes were found for members of the intervention groups, relative to members of the corresponding comparison groups, with respect to targeted work-related characteristics and targeted health, well-being, and performance outcomes. Overall, results lend support for the effectiveness of the DISCovery method in hospital care.

  10. Work Stress Interventions in Hospital Care: Effectiveness of the DISCovery Method

    Science.gov (United States)

    Niks, Irene; Gevers, Josette

    2018-01-01

    Effective interventions to prevent work stress and to improve health, well-being, and performance of employees are of the utmost importance. This quasi-experimental intervention study presents a specific method for diagnosis of psychosocial risk factors at work and subsequent development and implementation of tailored work stress interventions, the so-called DISCovery method. This method aims at improving employee health, well-being, and performance by optimizing the balance between job demands, job resources, and recovery from work. The aim of the study is to quantitatively assess the effectiveness of the DISCovery method in hospital care. Specifically, we used a three-wave longitudinal, quasi-experimental multiple-case study approach with intervention and comparison groups in health care work. Positive changes were found for members of the intervention groups, relative to members of the corresponding comparison groups, with respect to targeted work-related characteristics and targeted health, well-being, and performance outcomes. Overall, results lend support for the effectiveness of the DISCovery method in hospital care. PMID:29438350

  11. 14 CFR 406.143 - Discovery.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Discovery. 406.143 Section 406.143... Transportation Adjudications § 406.143 Discovery. (a) Initiation of discovery. Any party may initiate discovery... after a complaint has been filed. (b) Methods of discovery. The following methods of discovery are...

  12. GIS Technology: Resource and Habitability Assessment Tool

    Data.gov (United States)

    National Aeronautics and Space Administration — We are applying Geographic Information Systems (GIS) to new orbital data sets for lunar resource assessment and the identification of past habitable environments on...

  13. Higgs Discovery

    DEFF Research Database (Denmark)

    Sannino, Francesco

    2013-01-01

    I discuss the impact of the discovery of a Higgs-like state on composite dynamics, starting by critically examining the reasons in favour of either an elementary or composite nature of this state. Accepting the standard model interpretation, I re-address the standard model vacuum stability within... has been challenged by the discovery of a not-so-heavy Higgs-like state. I will therefore review the recent discovery \cite{Foadi:2012bb} that the standard model top-induced radiative corrections naturally reduce the intrinsic non-perturbative mass of the composite Higgs state towards the desired... via first principle lattice simulations with encouraging results. The new findings show that the recent naive claims made about new strong dynamics at the electroweak scale being disfavoured by the discovery of a not-so-heavy composite Higgs are unwarranted. I will then introduce the more speculative...

  14. Open source drug discovery--a new paradigm of collaborative research in tuberculosis drug development.

    Science.gov (United States)

    Bhardwaj, Anshu; Scaria, Vinod; Raghava, Gajendra Pal Singh; Lynn, Andrew Michael; Chandra, Nagasuma; Banerjee, Sulagna; Raghunandanan, Muthukurussi V; Pandey, Vikas; Taneja, Bhupesh; Yadav, Jyoti; Dash, Debasis; Bhattacharya, Jaijit; Misra, Amit; Kumar, Anil; Ramachandran, Srinivasan; Thomas, Zakir; Brahmachari, Samir K

    2011-09-01

    It is being realized that the traditional closed-door and market-driven approaches for drug discovery may not be the best suited model for the diseases of the developing world such as tuberculosis and malaria, because most patients suffering from these diseases have poor paying capacity. To ensure that new drugs are created for patients suffering from these diseases, it is necessary to formulate an alternate paradigm of drug discovery process. The current model, constrained by limitations for collaboration and for sharing of resources with confidentiality, hampers the opportunities for bringing expertise from diverse fields. These limitations hinder the possibilities of lowering the cost of drug discovery. The Open Source Drug Discovery project initiated by the Council of Scientific and Industrial Research, India has adopted an open source model to power wide participation across geographical borders. Open Source Drug Discovery emphasizes integrative science through collaboration, open sharing, taking up multi-faceted approaches and accruing benefits from advances on different fronts of new drug discovery. Because the open source model is based on community participation, it has the potential to self-sustain continuous development by generating a storehouse of alternatives towards continued pursuit for new drug discovery. Since the inventions are community generated, the new chemical entities developed by Open Source Drug Discovery will be taken up for clinical trials in a non-exclusive manner by participation of multiple companies, with majority funding from Open Source Drug Discovery. This will ensure availability of drugs through a lower-cost, community-driven drug discovery process for diseases afflicting people with poor paying capacity. Hopefully, what LINUX and the World Wide Web have done for information technology, Open Source Drug Discovery will do for drug discovery. Copyright © 2011 Elsevier Ltd. All rights reserved.

  15. Snpdat: Easy and rapid annotation of results from de novo snp discovery projects for model and non-model organisms

    Directory of Open Access Journals (Sweden)

    Doran Anthony G

    2013-02-01

    Full Text Available Abstract Background Single nucleotide polymorphisms (SNPs) are the most abundant genetic variant found in vertebrates and invertebrates. SNP discovery has become a highly automated, robust and relatively inexpensive process, allowing the identification of many thousands of mutations for model and non-model organisms. Annotating large numbers of SNPs can be a difficult and complex process. Many available tools are optimised for use with organisms densely sampled for SNPs, such as humans. There are currently few tools available that are species non-specific or support non-model organism data. Results Here we present SNPdat, a high throughput analysis tool that can provide a comprehensive annotation of both novel and known SNPs for any organism with a draft sequence and annotation. Using a dataset of 4,566 SNPs identified in cattle using high-throughput DNA sequencing we demonstrate the annotations performed and the statistics that can be generated by SNPdat. Conclusions SNPdat provides users with a simple tool for annotation of genomes that are either not supported by other tools or have a small number of annotated SNPs available. SNPdat can also be used to analyse datasets from organisms which are densely sampled for SNPs. As a command line tool it can easily be incorporated into existing SNP discovery pipelines and fills a niche for analyses involving non-model organisms that are not supported by many available SNP annotation tools. SNPdat will be of great interest to scientists involved in SNP discovery and analysis projects, particularly those with limited bioinformatics experience.
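The core annotation step such a tool performs, locating each SNP against gene coordinates from an annotation file to label it genic or intergenic, can be sketched as follows. This is an illustrative sketch, not SNPdat itself; the gene tuples and function names are hypothetical.

```python
# Illustrative SNP annotation by coordinate lookup (not SNPdat's code).
# Genes are hypothetical (chromosome, start, end, name) tuples with
# 1-based inclusive coordinates, as in GTF-style annotation.

def annotate_snp(chrom, pos, genes):
    """Return names of genes overlapping the SNP position, or 'intergenic'.
    A production tool would use an interval tree; a linear scan is enough
    to show the idea."""
    hits = [name for g_chrom, start, end, name in genes
            if g_chrom == chrom and start <= pos <= end]
    return hits if hits else ["intergenic"]

genes = [
    ("chr1", 100, 500, "GENE_A"),
    ("chr1", 450, 900, "GENE_B"),  # overlaps GENE_A
    ("chr2", 200, 300, "GENE_C"),
]
print(annotate_snp("chr1", 470, genes))  # ['GENE_A', 'GENE_B']
print(annotate_snp("chr2", 150, genes))  # ['intergenic']
```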

  16. Resource Discovery: Comparative Results on Two Catalog Interfaces

    Directory of Open Access Journals (Sweden)

    Heather Hessel

    2012-06-01

    Full Text Available Like many libraries, the University of Minnesota Libraries-Twin Cities now offers a next-generation catalog alongside a traditional online public access catalog (OPAC. One year after the launch of its new platform as the default catalog, usage data for the OPAC remained relatively high, and anecdotal comments raised questions. In response, the Libraries conducted surveys that covered topics such as perceptions of success, known-item searching, preferred search environments, and desirable resource types. Results show distinct differences in the behavior of faculty, graduate student, and undergraduate survey respondents, and between library staff and non-library staff respondents. Both quantitative and qualitative data inform the analysis and conclusions.

  17. Fungal genome resources at NCBI

    Science.gov (United States)

    Robbertse, B.; Tatusova, T.

    2011-01-01

    The National Center for Biotechnology Information (NCBI) is well known for the nucleotide sequence archive, GenBank, and the sequence analysis tool BLAST. However, NCBI integrates many types of biomolecular data from a variety of sources and makes it available to the scientific community as interactive web resources as well as organized releases of bulk data. These tools are available to explore and compare fungal genomes. Searching all databases with Fungi [organism] at http://www.ncbi.nlm.nih.gov/ is the quickest way to find resources of interest with fungal entries. Some tools, though, are resource-specific and can be indirectly accessed from a particular database in the Entrez system. These include graphical viewers and comparative analysis tools such as TaxPlot, TaxMap and UniGene DDD (found via the UniGene Homepage). Gene and BioProject pages also serve as portals to external data such as community annotation websites, BioGrid and UniProt. There are many different ways of accessing genomic data at NCBI. Depending on the focus and goal of research projects or the level of interest, a user would select a particular route for accessing genomic databases and resources. This review article describes methods of accessing fungal genome data and provides examples that illustrate the use of analysis tools. PMID:22737589

  18. Natural Products in the Discovery of Agrochemicals.

    Science.gov (United States)

    Loiseleur, Olivier

    2017-12-01

    Natural products have a long history of being used as, or serving as inspiration for, novel crop protection agents. Many of the discoveries in agrochemical research in the last decades have their origin in a wide range of natural products from a variety of sources. In light of an ever-changing array of fungal, weed and insect pests, new agricultural practices and evolving regulatory requirements, the need for new agrochemical tools remains as critical as ever. In that respect, nature continues to be an important source of novel chemical structures and biological mechanisms to be applied to the development of pest control agents. Here we review several of the natural products and their derivatives which contributed to shaping crop protection research in past and present.

  19. Cloud computing approaches to accelerate drug discovery value chain.

    Science.gov (United States)

    Garg, Vibhav; Arora, Suchir; Gupta, Chitra

    2011-12-01

    Continued advancements in technology have helped high throughput screening (HTS) evolve from a linear to a parallel approach by performing system-level screening. Advanced experimental methods used for HTS at various steps of drug discovery (i.e. target identification, target validation, lead identification and lead validation) can generate data of the order of terabytes. As a consequence, there is a pressing need to store, manage, mine and analyze this data to identify informational tags. This need in turn challenges computer scientists to offer matching hardware and software infrastructure while managing varying degrees of desired computational power. Therefore, the potential of "On-Demand Hardware" and "Software as a Service (SaaS)" delivery mechanisms cannot be denied. This on-demand computing, largely referred to as Cloud Computing, is now transforming drug discovery research. The integration of cloud computing with parallel computing is also expanding its footprint in the life sciences community. Speed, efficiency and cost effectiveness have made cloud computing a 'good to have' tool for researchers, providing them significant flexibility and allowing them to focus on the 'what' of science and not the 'how'. Once it reaches maturity, the Discovery-Cloud would be well suited to managing drug discovery and clinical development data generated using advanced HTS techniques, hence supporting the vision of personalized medicine.

  20. SEMANTIC WEB SERVICES – DISCOVERY, SELECTION AND COMPOSITION TECHNIQUES

    OpenAIRE

    Sowmya Kamath S; Ananthanarayana V.S

    2013-01-01

    Web services are already one of the most important resources on the Internet. As an integrated solution for realizing the vision of the Next Generation Web, semantic web services combine semantic web technology with web service technology, envisioning automated life cycle management of web services. This paper discusses the significance of service discovery and selection to business logic, and surveys current research in the various phases of the semantic web...

  1. Planetary Sciences Literature - Access and Discovery

    Science.gov (United States)

    Henneken, Edwin A.; ADS Team

    2017-10-01

    The NASA Astrophysics Data System (ADS) has been around for over two decades, helping professional astronomers and planetary scientists navigate, without charge, through the increasingly complex environment of scholarly publications. As boundaries between disciplines dissolve and expand, the ADS provides powerful tools to help researchers discover useful information efficiently. In its new form, code-named ADS Bumblebee (https://ui.adsabs.harvard.edu), it may very well answer questions you didn't know you had! While the classic ADS (http://ads.harvard.edu) focuses mostly on searching basic metadata (author, title and abstract), today's ADS is best described as an "aggregator" of scholarly resources relevant to the needs of researchers in astronomy and planetary sciences, providing a discovery environment on top of this. In addition to indexing content from a variety of publishers, data and software archives, the ADS enriches its records by text-mining and indexing the full-text articles (about 4.7 million in total, with 130,000 from planetary science journals) and by extracting citations and acknowledgments. Recent technology developments include a new Application Programming Interface (API), a new user interface featuring a variety of visualizations and bibliometric analysis, and integration with ORCID services to support paper claiming. The new ADS provides powerful tools to help you find review papers on a given subject, prolific authors working on a subject and who they are collaborating with (within and outside their group), and papers most read by people who read recent papers on the topic of your interest. These are just a few examples of the capabilities of the new ADS. We currently index most journals covering the planetary sciences and we are striving to include those journals most frequently cited by planetary science publications. The ADS is operated by the Smithsonian Astrophysical Observatory under NASA
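The API mentioned above accepts authenticated search queries. A minimal sketch of constructing such a request with Python's standard library; the query string and field list are illustrative, `<ADS_TOKEN>` is a placeholder, and real use requires a personal API token from the ADS site plus network access:

```python
from urllib.parse import urlencode

# The ADS search endpoint exposed by the new API.
ADS_SEARCH = "https://api.adsabs.harvard.edu/v1/search/query"

def ads_query_url(query, fields=("bibcode", "title", "citation_count"), rows=10):
    """Return an ADS search URL for `query`, requesting selected fields."""
    params = {"q": query, "fl": ",".join(fields), "rows": rows}
    return ADS_SEARCH + "?" + urlencode(params)

# Illustrative query for refereed papers on lunar volatiles.
url = ads_query_url('abs:"lunar volatiles" property:refereed')
# Authentication is a bearer token in the request header.
headers = {"Authorization": "Bearer <ADS_TOKEN>"}  # placeholder token
print(url)
```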

  2. Talking to children about their HIV status: a review of available resources, tools, and models for improving and promoting pediatric disclosure.

    Science.gov (United States)

    Wright, S; Amzel, A; Ikoro, N; Srivastava, M; Leclerc-Madlala, S; Bowsky, S; Miller, H; Phelps, B R

    2017-08-01

    As children living with HIV (CLHIV) grow into adolescence and adulthood, caregivers and healthcare providers are faced with the sensitive challenge of when to disclose to a CLHIV his or her HIV status. Despite WHO recommendations for CLHIV to know their status, in countries most affected by HIV, effective resources are often limited, and national guidance on disclosure is often lacking. To address the need for effective resources, the gray and scientific literature was searched to identify existing tools and resources that can aid in the disclosure process. From the peer-reviewed literature, seven disclosure models from six different countries were identified. From the gray literature, 23 resources were identified, including children's books (15), job aids to assist healthcare providers (5), and videos (3). While these existing resources can be tailored to reflect local norms and used to aid in the disclosure process, careful consideration must be taken in order to avoid damaging disclosure practices.

  3. The NCAR Digital Asset Services Hub (DASH): Implementing Unified Data Discovery and Access

    Science.gov (United States)

    Stott, D.; Worley, S. J.; Hou, C. Y.; Nienhouse, E.

    2017-12-01

    The National Center for Atmospheric Research (NCAR) Directorate created the Data Stewardship Engineering Team (DSET) to plan and implement an integrated single entry point for uniform digital asset discovery and access across the organization, in order to improve the efficiency of access, reduce costs, and establish the foundation for interoperability with other federated systems. This effort supports new policies included in federal funding mandates, NSF data management requirements, and journal citation recommendations. An inventory during the early planning stage identified diverse asset types across the organization, including publications, datasets, metadata, models, images, and software tools and code. The NCAR Digital Asset Services Hub (DASH) is being developed and phased in this year to improve the quality of users' experiences in finding and using these assets. DASH provides engagement, training, search, and support through the following four nodes (see figure). DASH Metadata: DASH provides resources for creating and cataloging metadata in the NCAR Dialect, a subset of ISO 19115. NMDEdit, an editor based on a European open source application, has been configured for manual entry of NCAR metadata. CKAN, an open source data portal platform, harvests these XML records (along with records output directly from databases) from a Web Accessible Folder (WAF) on GitHub for validation. DASH Search: The NCAR Dialect metadata drives cross-organization search and discovery through CKAN, which provides the display interface for search results. DASH Search will establish interoperability by facilitating metadata sharing with other federated systems. DASH Consulting: The DASH Data Curation & Stewardship Coordinator assists with Data Management (DM) Plan preparation and advises on Digital Object Identifiers. The coordinator arranges training sessions on the DASH metadata tools and DM planning, and provides one-on-one assistance as requested. DASH Repository

  4. From machine learning to deep learning: progress in machine intelligence for rational drug discovery.

    Science.gov (United States)

    Zhang, Lu; Tan, Jianjun; Han, Dan; Zhu, Hao

    2017-11-01

    Machine intelligence, which is normally presented as artificial intelligence, refers to the intelligence exhibited by computers. In the history of rational drug discovery, various machine intelligence approaches have been applied to guide traditional experiments, which are expensive and time-consuming. Over the past several decades, machine-learning tools, such as quantitative structure-activity relationship (QSAR) modeling, were developed that can identify potentially biologically active molecules from millions of candidate compounds quickly and cheaply. However, when drug discovery moved into the era of 'big' data, machine learning approaches evolved into deep learning approaches, which are a more powerful and efficient way to deal with the massive amounts of data generated from modern drug discovery approaches. Here, we summarize the history of machine learning and provide insight into recently developed deep learning approaches and their applications in rational drug discovery. We suggest that this evolution of machine intelligence now provides a guide for early-stage drug design and discovery in the current big data era. Copyright © 2017 Elsevier Ltd. All rights reserved.
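The QSAR idea the review traces, relating a computable molecular descriptor to biological activity, can be illustrated with a toy least-squares fit. The descriptor values and activities below are invented, and real QSAR models use many descriptors and far more elaborate learners:

```python
# Minimal illustration of the QSAR idea: fit a linear relationship
# between a molecular descriptor and an activity value, then predict
# the activity of a new compound. Data are invented toy numbers.

def fit_line(xs, ys):
    """Closed-form least-squares fit: returns (slope, intercept)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical descriptor (say, logP) vs. measured activity (pIC50).
logp = [1.0, 2.0, 3.0, 4.0]
pic50 = [5.1, 6.0, 7.1, 8.0]
slope, intercept = fit_line(logp, pic50)
predicted = slope * 5.0 + intercept  # predicted activity of a new compound
```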

  5. Performance Evaluation of a Cluster-Based Service Discovery Protocol for Heterogeneous Wireless Sensor Networks

    NARCIS (Netherlands)

    Marin Perianu, Raluca; Scholten, Johan; Havinga, Paul J.M.; Hartel, Pieter H.

    2006-01-01

    This paper evaluates the performance in terms of resource consumption of a service discovery protocol proposed for heterogeneous Wireless Sensor Networks (WSNs). The protocol is based on a clustering structure, which facilitates the construction of a distributed directory. Nodes with higher

  6. The discovery of the periodic table as a case of simultaneous discovery.

    Science.gov (United States)

    Scerri, Eric

    2015-03-13

    The article examines the question of priority and simultaneous discovery in the context of the discovery of the periodic system. It is argued that rather than being anomalous, simultaneous discovery is the rule. Moreover, I argue that the discovery of the periodic system by at least six authors over a period of 7 years represents one of the best examples of a multiple discovery. This notion is supported by a new view of the evolutionary development of science through a mechanism that is dubbed Sci-Gaia by analogy with Lovelock's Gaia hypothesis. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  7. Beyond Discovery

    DEFF Research Database (Denmark)

    Korsgaard, Steffen; Sassmannshausen, Sean Patrick

    2017-01-01

    In this chapter we explore four alternatives to the dominant discovery view of entrepreneurship: the development view, the construction view, the evolutionary view, and the Neo-Austrian view. We outline the main critique points of the discovery view presented in these four alternatives, as well...

  8. Resourcing Future Generations - Challenges for geoscience: a new IUGS initiative

    Science.gov (United States)

    Oberhänsli, Roland; Lambert, Ian

    2014-05-01

    In a world with a rapidly increasing population and accelerating technological development, new space-based remote sensing tools have enabled new discoveries and production of water, energy and mineral resources, including minerals, soils and construction materials. This has an impact on politics and socio-economic development, and thus calls for strong involvement of the geosciences, because one of humanity's biggest challenges will be to raise living standards, particularly in less developed countries. Any growth will increase the demand for natural resources, yet the supply of readily available mineral resources in particular appears to be limited. Demand for so-called high-tech commodities, such as platinum-group and rare earth elements, has often grown faster than new discoveries have been made, while the area available for exploration has decreased as the need for urban and agricultural land has increased. Despite strong efforts to increase the efficiency of recycling, shortages of some commodities must be expected. A major concern is that resources are not distributed evenly across our planet, so supplies depend on political stability, socio-economic standards and pricing. In light of these statements, IUGS is scoping a new initiative, Resourcing Future Generations (RFG), which is predicated on the fact that mining will continue to be an essential activity to meet the needs of future generations. RFG is aimed at identifying and addressing key challenges involved in securing natural resources to meet global needs post-2030. We consider that mineral resources should be the initial focus, but energy, soils, water resources and land use should also be covered. Addressing the multi-generational need for mineral and other natural resources requires data, research and actions under four general themes: 1. Comprehensive evaluation and quantification of 21st century supply and demand. 2. Enhanced understanding of the subsurface as it relates to mineral (energy and groundwater

  9. "Eureka, Eureka!" Discoveries in Science

    Science.gov (United States)

    Agarwal, Pankaj

    2011-01-01

    Accidental discoveries have been of significant value in the progress of science. Although accidental discoveries are more common in pharmacology and chemistry, other branches of science have also benefited from such discoveries. While most discoveries are the result of persistent research, famous accidental discoveries provide a fascinating…

  10. Constructing a Cross-Domain Resource Inventory: Key Components and Results of the EarthCube CINERGI Project.

    Science.gov (United States)

    Zaslavsky, I.; Richard, S. M.; Malik, T.; Hsu, L.; Gupta, A.; Grethe, J. S.; Valentine, D. W., Jr.; Lehnert, K. A.; Bermudez, L. E.; Ozyurt, I. B.; Whitenack, T.; Schachne, A.; Giliarini, A.

    2015-12-01

    While many geoscience-related repositories and data discovery portals exist, finding information about available resources remains a pervasive problem, especially when searching across multiple domains and catalogs. Inconsistent and incomplete metadata descriptions, disparate access protocols and semantic differences across domains, and troves of unstructured or poorly structured information that are hard to discover and use are major hindrances to discovery, while metadata compilation and curation remain manual and time-consuming. We report on the methodology, main results and lessons learned from an ongoing effort to develop a geoscience-wide catalog of information resources, with consistent metadata descriptions, traceable provenance, and automated metadata enhancement. Developing such a catalog is the central goal of CINERGI (Community Inventory of EarthCube Resources for Geoscience Interoperability), an EarthCube building block project (earthcube.org/group/cinergi). The key novel technical contributions of the project include: a) development of a metadata enhancement pipeline and a set of document enhancers to automatically improve various aspects of metadata descriptions, including keyword assignment and definition of spatial extents; b) Community Resource Viewers: online applications for crowdsourcing community resource registry development, curation and search, and channeling metadata to the unified CINERGI inventory; c) metadata provenance, validation and annotation services; d) user interfaces for advanced resource discovery; and e) a geoscience-wide ontology and machine learning to support automated semantic tagging and faceted search across domains. We demonstrate these CINERGI components in three types of user scenarios: (1) improving existing metadata descriptions maintained by government and academic data facilities, (2) supporting the work of several EarthCube Research Coordination Network projects in assembling information resources for their domains

  11. Protein Data Bank Japan (PDBj): updated user interfaces, resource description framework, analysis tools for large structures.

    Science.gov (United States)

    Kinjo, Akira R; Bekker, Gert-Jan; Suzuki, Hirofumi; Tsuchiya, Yuko; Kawabata, Takeshi; Ikegawa, Yasuyo; Nakamura, Haruki

    2017-01-04

    The Protein Data Bank Japan (PDBj, http://pdbj.org), a member of the worldwide Protein Data Bank (wwPDB), accepts and processes the deposited data of experimentally determined macromolecular structures. While maintaining the archive in collaboration with other wwPDB partners, PDBj also provides a wide range of services and tools for analyzing the structures and functions of proteins. We herein outline the updated web user interfaces together with the RESTful web services and backend relational database that support them. To enhance the interoperability of the PDB data, we have previously developed PDB/RDF, PDB data in the Resource Description Framework (RDF) format, which is now a wwPDB standard called wwPDB/RDF. We have enhanced the connectivity of the wwPDB/RDF data by incorporating various external data resources. Services for searching, comparing and analyzing the ever-increasing number of large structures determined by hybrid methods are also described. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  12. FAF-Drugs2: free ADME/tox filtering tool to assist drug discovery and chemical biology projects.

    Science.gov (United States)

    Lagorce, David; Sperandio, Olivier; Galons, Hervé; Miteva, Maria A; Villoutreix, Bruno O

    2008-09-24

    Drug discovery and chemical biology are exceedingly complex and demanding enterprises. In recent years there has been increasing awareness of the importance of predicting/optimizing the absorption, distribution, metabolism, excretion and toxicity (ADMET) properties of small chemical compounds throughout the search process rather than at the final stages. Fast methods for evaluating ADMET properties of small molecules often involve applying a set of simple empirical rules (educated guesses), and as such, property profiling of compound collections can be performed in silico. Clearly, these rules cannot assess the full complexity of the human body but can provide valuable information and assist decision-making. This paper presents FAF-Drugs2, a free adaptable tool for ADMET filtering of electronic compound collections. FAF-Drugs2 is a command-line utility program written in Python, based on the open source chemistry toolkit OpenBabel, which performs various physicochemical calculations and identifies key functional groups as well as some toxic and unstable molecules/functional groups. In addition to filtered collections, FAF-Drugs2 can provide, via Gnuplot, several distribution diagrams of major physicochemical properties of the screened compound libraries. We have developed FAF-Drugs2 to facilitate compound collection preparation, prior to (or after) experimental screening or virtual screening computations. Users can select to apply various filtering thresholds and add rules as needed for a given project. As it stands, FAF-Drugs2 implements numerous filtering rules (23 physicochemical rules and 204 substructure searching rules) that can be easily tuned.
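Rule-based physicochemical filtering of the kind FAF-Drugs2 performs can be sketched with one classic rule set, Lipinski's rule of five. This is not FAF-Drugs2's actual code, and the property values are toy numbers assumed to be precomputed by a chemistry toolkit:

```python
# A minimal sketch of rule-based ADMET-style filtering: apply Lipinski's
# rule of five to precomputed physicochemical properties of a compound.

RULES = {
    "mw": lambda p: p["mw"] <= 500,    # molecular weight (Da)
    "logp": lambda p: p["logp"] <= 5,  # lipophilicity
    "hbd": lambda p: p["hbd"] <= 5,    # hydrogen-bond donors
    "hba": lambda p: p["hba"] <= 10,   # hydrogen-bond acceptors
}

def violations(props):
    """Return the names of the rules a compound violates."""
    return [name for name, ok in RULES.items() if not ok(props)]

# Toy property values for two hypothetical compounds.
drug_like = {"mw": 320.4, "logp": 2.1, "hbd": 2, "hba": 5}
too_big = {"mw": 812.0, "logp": 6.3, "hbd": 7, "hba": 14}
```

In the same spirit as FAF-Drugs2's tunable thresholds, a project could tighten or relax each lambda, or add substructure-based rules alongside the physicochemical ones.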

  13. 19 CFR 356.20 - Discovery.

    Science.gov (United States)

    2010-04-01

    ... 19 Customs Duties 3 2010-04-01 2010-04-01 false Discovery. 356.20 Section 356.20 Customs Duties... § 356.20 Discovery. (a) Voluntary discovery. All parties are encouraged to engage in voluntary discovery... sanctions proceeding. (b) Limitations on discovery. The administrative law judge shall place such limits...

  14. Chemical Discovery

    Science.gov (United States)

    Brown, Herbert C.

    1974-01-01

    The role of discovery in the advance of the science of chemistry and the factors that are currently operating to handicap that function are considered. Examples are drawn from the author's work with boranes. The thesis that exploratory research and discovery should be encouraged is stressed. (DT)

  15. Resource-Aware Planning for Shadowed and Uncertain Domains, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Discovery of frozen volatiles at the lunar poles is transformative to space exploration. In-situ resources will provide fuel to support far-reaching exploration and...

  16. Integrated groundwater resource management in Indus Basin using satellite gravimetry and physical modeling tools.

    Science.gov (United States)

    Iqbal, Naveed; Hossain, Faisal; Lee, Hyongki; Akhter, Gulraiz

    2017-03-01

    Reliable and frequent information on groundwater behavior and dynamics is very important for effective groundwater resource management at appropriate spatial scales. This information is rarely available in developing countries and thus poses a challenge for groundwater managers. In situ data and groundwater modeling tools are limited in their ability to cover large domains. Remote sensing technology can now be used to continuously collect information on the hydrological cycle in a cost-effective way. This study evaluates the effectiveness of a remote sensing integrated physical modeling approach for groundwater management in the Indus Basin. Gravity anomalies from the Gravity Recovery and Climate Experiment (GRACE) satellite from 2003 to 2010 were processed to generate monthly groundwater storage changes using the Variable Infiltration Capacity (VIC) hydrologic model. Groundwater storage is the key parameter of interest for groundwater resource management. The spatial and temporal patterns in groundwater storage (GWS) are useful for devising appropriate groundwater management strategies. GRACE-estimated GWS information with large-scale coverage is valuable for basin-scale monitoring and decision making. This frequently available information is found useful for the identification of groundwater recharge areas, groundwater storage depletion, and pinpointing of areas where groundwater sustainability is at risk. The GWS anomalies were found to agree favorably with groundwater model simulations from Visual MODFLOW and in situ data. Mostly, moderate to severe GWS depletion is observed, putting the sustainability of this groundwater resource at risk. For sustainable groundwater management, the region needs to implement groundwater policies and adopt water conservation techniques.
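The disaggregation underlying GRACE-based groundwater estimates subtracts model-simulated surface stores from the total water storage anomaly. A minimal sketch with invented monthly anomalies in cm of equivalent water height, assuming soil moisture and snow come from a land model such as VIC:

```python
# Groundwater storage (GWS) anomaly = total water storage (TWS) anomaly
# from GRACE minus the surface stores simulated by a land model.
# All numbers below are invented, in cm of equivalent water height.

def gws_anomaly(tws, soil_moisture, snow, canopy=0.0):
    """GWS anomaly from TWS minus modeled surface-water stores."""
    return tws - (soil_moisture + snow + canopy)

tws = [-2.0, -3.5, -5.1]   # GRACE monthly TWS anomalies
sm = [-0.5, -0.8, -1.0]    # modeled soil-moisture anomalies
swe = [0.1, 0.0, -0.1]     # modeled snow-water-equivalent anomalies
gws = [gws_anomaly(t, s, w) for t, s, w in zip(tws, sm, swe)]
```

A declining trend in the resulting `gws` series is what such studies interpret as groundwater depletion.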

  17. 24 CFR 180.500 - Discovery.

    Science.gov (United States)

    2010-04-01

    ... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false Discovery. 180.500 Section 180.500... OPPORTUNITY CONSOLIDATED HUD HEARING PROCEDURES FOR CIVIL RIGHTS MATTERS Discovery § 180.500 Discovery. (a) In general. This subpart governs discovery in aid of administrative proceedings under this part. Discovery in...

  18. 22 CFR 224.21 - Discovery.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Discovery. 224.21 Section 224.21 Foreign....21 Discovery. (a) The following types of discovery are authorized: (1) Requests for production of... parties, discovery is available only as ordered by the ALJ. The ALJ shall regulate the timing of discovery...

  19. Public-Private Partnerships in Lead Discovery: Overview and Case Studies.

    Science.gov (United States)

    Gottwald, Matthias; Becker, Andreas; Bahr, Inke; Mueller-Fahrnow, Anke

    2016-09-01

    The pharmaceutical industry is faced with significant challenges in its efforts to discover new drugs that address unmet medical needs. Safety concerns and lack of efficacy are the two main technical reasons for attrition. Improved early research tools including predictive in silico, in vitro, and in vivo models, as well as a deeper understanding of the disease biology, therefore have the potential to improve success rates. The combination of internal activities with external collaborations in line with the interests and needs of all partners is a successful approach to foster innovation and to meet the challenges. Collaboration can take place in different ways, depending on the requirements of the participants. In this review, the value of public-private partnership approaches will be discussed, using examples from the Innovative Medicines Initiative (IMI). These examples describe consortia approaches to develop tools and processes for improving target identification and validation, as well as lead identification and optimization. The project "Kinetics for Drug Discovery" (K4DD), focusing on the adoption of drug-target binding kinetics analysis in the drug discovery decision-making process, is described in more detail. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
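The binding-kinetics quantities at the center of K4DD, the equilibrium dissociation constant and the target residence time, follow directly from the association and dissociation rate constants. A minimal sketch with illustrative rate values:

```python
# Standard binding-kinetics relations: Kd = koff / kon and
# residence time tau = 1 / koff. Rate values are illustrative only.

def kd(kon, koff):
    """Equilibrium dissociation constant (M) from rate constants."""
    return koff / kon

def residence_time(koff):
    """Mean target residence time (s), the reciprocal of koff."""
    return 1.0 / koff

kon = 1.0e6    # association rate constant, 1/(M*s)
koff = 1.0e-3  # dissociation rate constant, 1/s
print(kd(kon, koff), residence_time(koff))
```

With these example rates the drug binds with nanomolar affinity and stays on its target for roughly a thousand seconds; kinetics-driven decision-making weighs that residence time alongside affinity alone.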

  20. Genomic Enzymology: Web Tools for Leveraging Protein Family Sequence-Function Space and Genome Context to Discover Novel Functions.

    Science.gov (United States)

    Gerlt, John A

    2017-08-22

    The exponentially increasing number of protein and nucleic acid sequences provides opportunities to discover novel enzymes, metabolic pathways, and metabolites/natural products, thereby adding to our knowledge of biochemistry and biology. The challenge has evolved from generating sequence information to mining the databases to integrating and leveraging the available information, i.e., the availability of "genomic enzymology" web tools. Web tools that allow identification of biosynthetic gene clusters are widely used by the natural products/synthetic biology community, thereby facilitating the discovery of novel natural products and the enzymes responsible for their biosynthesis. However, many novel enzymes with interesting mechanisms participate in uncharacterized small-molecule metabolic pathways; their discovery and functional characterization also can be accomplished by leveraging information in protein and nucleic acid databases. This Perspective focuses on two genomic enzymology web tools that assist the discovery of novel metabolic pathways: (1) the Enzyme Function Initiative-Enzyme Similarity Tool (EFI-EST) for generating sequence similarity networks to visualize and analyze sequence-function space in protein families and (2) the Enzyme Function Initiative-Genome Neighborhood Tool (EFI-GNT) for generating genome neighborhood networks to visualize and analyze the genome context in microbial and fungal genomes. Both tools have been adapted to other applications to facilitate target selection for enzyme discovery and functional characterization. As the natural products community has demonstrated, the enzymology community needs to embrace the essential role of web tools that allow the protein and genome sequence databases to be leveraged for novel insights into enzymological problems.
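A sequence similarity network of the kind EFI-EST generates connects sequences whose pairwise similarity passes a threshold. The naive percent-identity score below is a stand-in for the BLAST-based scores EFI-EST actually uses, and the sequences are toy examples:

```python
# Sketch of building a sequence similarity network: sequences are nodes,
# and an edge joins any pair whose similarity meets a chosen threshold.
from itertools import combinations

def identity(a, b):
    """Fraction of matching positions in two equal-length sequences."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def similarity_network(seqs, threshold):
    """Return the edges (name pairs) whose identity meets the threshold."""
    return [(n1, n2)
            for (n1, s1), (n2, s2) in combinations(seqs.items(), 2)
            if identity(s1, s2) >= threshold]

# Toy equal-length sequences; A and B differ at one position.
seqs = {"A": "MKTAYIAK", "B": "MKTAYIGK", "C": "MQSWFPLR"}
edges = similarity_network(seqs, threshold=0.75)
```

Raising or lowering the threshold splits or merges clusters in the network, which is how sequence-function space is explored in practice.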

  1. 19 CFR 207.109 - Discovery.

    Science.gov (United States)

    2010-04-01

    ... 19 Customs Duties 3 2010-04-01 2010-04-01 false Discovery. 207.109 Section 207.109 Customs Duties... and Committee Proceedings § 207.109 Discovery. (a) Discovery methods. All parties may obtain discovery under such terms and limitations as the administrative law judge may order. Discovery may be by one or...

  2. 15 CFR 25.21 - Discovery.

    Science.gov (United States)

    2010-01-01

    ... 15 Commerce and Foreign Trade 1 2010-01-01 2010-01-01 false Discovery. 25.21 Section 25.21... Discovery. (a) The following types of discovery are authorized: (1) Requests for production of documents for..., discovery is available only as ordered by the ALJ. The ALJ shall regulate the timing of discovery. (d...

  3. Online Resources to Support Professional Development for Managing and Preserving Geospatial Data

    Science.gov (United States)

    Downs, R. R.; Chen, R. S.

    2013-12-01

    Improved capabilities of information and communication technologies (ICT) enable the development of new systems and applications for collecting, managing, disseminating, and using scientific data. New knowledge, skills, and techniques are also being developed to leverage these new ICT capabilities and improve scientific data management practices throughout the entire data lifecycle. In light of these developments and in response to increasing recognition of the wider value of scientific data for society, government agencies are requiring plans for the management, stewardship, and public dissemination of data and research products that are created by government-funded studies. Recognizing that data management and dissemination have not been part of traditional science education programs, new educational programs and learning resources are being developed to prepare new and practicing scientists, data scientists, data managers, and other data professionals with skills in data science and data management. Professional development and training programs also are being developed to address the need for scientists and professionals to improve their expertise in using the tools and techniques for managing and preserving scientific data. The Geospatial Data Preservation Resource Center offers an online catalog of various open access publications, open source tools, and freely available information for the management and stewardship of geospatial data and related resources, such as maps, GIS, and remote sensing data. Containing over 500 resources that can be found by type, topic, or search query, the geopreservation.org website enables discovery of various types of resources to improve capabilities for managing and preserving geospatial data. Applications and software tools can be found for use online or for download. Online journal articles, presentations, reports, blogs, and forums are also available through the website. 
Available education and training materials include

  4. 30 CFR 56.12033 - Hand-held electric tools.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Hand-held electric tools. 56.12033 Section 56.12033 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL....12033 Hand-held electric tools. Hand-held electric tools shall not be operated at high potential...

  5. Engineering a mobile health tool for resource-poor settings to assess and manage cardiovascular disease risk: SMARThealth study.

    Science.gov (United States)

    Raghu, Arvind; Praveen, Devarsetty; Peiris, David; Tarassenko, Lionel; Clifford, Gari

    2015-04-29

    The incidence of chronic diseases in low- and middle-income countries is rapidly increasing in both urban and rural regions. A major challenge for health systems globally is to develop innovative solutions for the prevention and control of these diseases. This paper discusses the development and pilot testing of SMARTHealth, a mobile-based, point-of-care Clinical Decision Support (CDS) tool to assess and manage cardiovascular disease (CVD) risk in resource-constrained settings. Through pilot testing, the preliminary acceptability, utility, and efficiency of the CDS tool were assessed. The CDS tool was part of an mHealth system comprising a mobile application that consisted of an evidence-based risk prediction and management algorithm, and a server-side electronic medical record system. Through an agile development process and a user-centred design approach, key features of the mobile application that fitted the requirements of the end users and environment were identified. A comprehensive analytics framework facilitated a data-driven approach to investigating four areas, namely system efficiency, end-user variability, manual data entry errors, and usefulness of point-of-care management recommendations to the healthcare worker. A four-point Likert scale was used at the end of every risk assessment to gauge ease of use of the system. The system was field-tested with eleven village healthcare workers and three Primary Health Centre doctors, who screened a total of 292 adults aged 40 years and above. 34% of participants screened by health workers were identified by the CDS tool to be at high CVD risk and referred to a doctor. In-depth analysis of user interactions found the CDS tool feasible for use and easily integrable into the workflow of healthcare workers. Following completion of the pilot, further technical enhancements were implemented to improve uptake of the mHealth platform. It will then be evaluated for effectiveness and cost-effectiveness in a cluster randomized

  6. Handbook of Research on E-Transformation and Human Resources Management Technologies: Organizational Outcomes and Challenges

    NARCIS (Netherlands)

    Bondarouk, Tatiana; Ruel, Hubertus Johannes Maria; Guiderdoni-Jourdain, Karine; Oiry, Ewan

    2009-01-01

    Digital advancements and discoveries are now challenging traditional human resource management services within businesses. The Handbook of Research on E-Transformation and Human Resources Management Technologies: Organizational Outcomes and Challenges provides practical, situated, and unique

  7. Coupling Visualization and Data Analysis for Knowledge Discovery from Multi-dimensional Scientific Data

    International Nuclear Information System (INIS)

    Rubel, Oliver; Ahern, Sean; Bethel, E. Wes; Biggin, Mark D.; Childs, Hank; Cormier-Michel, Estelle; DePace, Angela; Eisen, Michael B.; Fowlkes, Charless C.; Geddes, Cameron G.R.; Hagen, Hans; Hamann, Bernd; Huang, Min-Yu; Keranen, Soile V.E.; Knowles, David W.; Hendriks, Chris L. Luengo; Malik, Jitendra; Meredith, Jeremy; Messmer, Peter; Prabhat; Ushizima, Daniela; Weber, Gunther H.; Wu, Kesheng

    2010-01-01

    Knowledge discovery from large and complex scientific data is a challenging task. With the ability to measure and simulate more processes at increasingly finer spatial and temporal scales, the growing number of data dimensions and data objects presents tremendous challenges for effective data analysis and data exploration methods and tools. The combination and close integration of methods from scientific visualization, information visualization, automated data analysis, and other enabling technologies, such as efficient data management, support knowledge discovery from multi-dimensional scientific data. This paper surveys two distinct applications, in developmental biology and accelerator physics, illustrating the effectiveness of the described approach.

  8. Discovery machines accelerators for science, technology, health and innovation

    CERN Document Server

    Australian Academy of Sciences

    2016-01-01

    Discovery machines: Accelerators for science, technology, health and innovation explores the science of particle accelerators, the machines that supercharge our ability to discover the secrets of nature and have opened up new tools in medicine, energy, manufacturing, and the environment as well as in pure research. Particle accelerators are now an essential ingredient in discovery science because they offer new ways to analyse the world, such as by probing objects with high-energy x-rays or colliding them with beams of electrons. They also have a huge—but often unnoticed—impact on all our lives; medical imaging, cancer treatment, new materials and even the chips that power our phones and computers have all been transformed by accelerators of various types. Research accelerators also provide fundamental infrastructure that encourages better collaboration between international and domestic scientists, organisations and governments.

  9. 39 CFR 963.14 - Discovery.

    Science.gov (United States)

    2010-07-01

    ... 39 Postal Service 1 2010-07-01 2010-07-01 false Discovery. 963.14 Section 963.14 Postal Service... PANDERING ADVERTISEMENTS STATUTE, 39 U.S.C. 3008 § 963.14 Discovery. Discovery is to be conducted on a... such discovery as he or she deems reasonable and necessary. Discovery may include one or more of the...

  10. Multi-criteria evaluation of natural gas resources

    International Nuclear Information System (INIS)

    Afgan, Naim H.; Pilavachi, Petros A.; Carvalho, Maria G.

    2007-01-01

    Geologically estimated natural gas resources are about 500 Tcm, and estimates are expected to grow as geological science advances. Proved natural gas reserves in 2000 were around 165 Tcm. Reserves are subject to two constraints, namely the capital invested in exploration and the drilling technologies used to discover new reserves. The natural gas scarcity factor, i.e. the ratio between available reserves and natural gas consumption, has remained around 300 years over the last 50 years. New discoveries of natural gas reserves have given rise to a new energy strategy based on natural gas, and natural gas utilization has been constantly increasing over the last 50 years. With new technologies for deep drilling, it has become clear that enormous gas resources are available at relatively low price. These new discoveries, together with high demand for environmental protection, have introduced a new energy strategy on the world scale. This paper presents an evaluation of potential natural gas utilization in the energy sector, using resource, economic, environmental, social and technological indicators as criteria. The following options for gas utilization are considered: gas turbine power plant, combined cycle plant, CHP power plant, steam turbine gas-fired power plant, and fuel cell power plant. A multi-criteria method was used to assess the options, with priority given to the resource, economic and social indicators. The results are presented graphically as a priority list of options under specific constraints within the natural gas utilization strategy for the energy sector
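    The multi-criteria assessment described above can be illustrated with a minimal weighted-sum scoring sketch. The option names, indicator scores, and weights below are invented for illustration and are not the paper's data or its actual multi-criteria method.

```python
# Minimal weighted-sum multi-criteria sketch: each option gets a score per
# indicator in [0, 1]; the overall score is the weighted sum, and options
# are ranked by it. All numbers here are illustrative assumptions.

def rank_options(scores: dict, weights: dict) -> list:
    """Return (option, weighted score) pairs sorted best-first."""
    ranked = [
        (option, sum(weights[c] * s for c, s in indicators.items()))
        for option, indicators in scores.items()
    ]
    return sorted(ranked, key=lambda t: t[1], reverse=True)

# Hypothetical weights favouring resource and economic indicators.
weights = {"resource": 0.3, "economic": 0.3, "environmental": 0.2, "social": 0.2}
scores = {
    "combined cycle": {"resource": 0.8, "economic": 0.9, "environmental": 0.7, "social": 0.6},
    "gas turbine":    {"resource": 0.7, "economic": 0.8, "environmental": 0.5, "social": 0.5},
}
```

    Real multi-criteria methods add normalization and sensitivity analysis over the weights, but the ranking step reduces to this kind of weighted aggregation.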

  11. Remote sensing change detection tools for natural resource managers: Understanding concepts and tradeoffs in the design of landscape monitoring projects

    Science.gov (United States)

    Robert E. Kennedy; Philip A. Townsend; John E. Gross; Warren B. Cohen; Paul Bolstad; Wang Y. Q.; Phyllis Adams

    2009-01-01

    Remote sensing provides a broad view of landscapes and can be consistent through time, making it an important tool for monitoring and managing protected areas. An impediment to broader use of remote sensing science for monitoring has been the need for resource managers to understand the specialized capabilities of an ever-expanding array of image sources and analysis...

  12. The development of high-content screening (HCS) technology and its importance to drug discovery.

    Science.gov (United States)

    Fraietta, Ivan; Gasparri, Fabio

    2016-01-01

    High-content screening (HCS) was introduced about twenty years ago as a promising analytical approach to facilitate some critical aspects of drug discovery. Its application has spread progressively within the pharmaceutical industry and academia, to the point that it today represents a fundamental tool in supporting drug discovery and development. Here, the authors review some of the significant progress in the HCS field in terms of biological models and assay readouts. They highlight the importance of high-content screening in drug discovery, as evidenced by its numerous applications in a variety of therapeutic areas: oncology, infectious diseases, cardiovascular and neurodegenerative diseases. They also dissect the role of HCS technology in different phases of the drug discovery pipeline: target identification, primary compound screening, secondary assays, mechanism of action studies and in vitro toxicology. Recent advances in cellular assay technologies, such as the introduction of three-dimensional (3D) cultures, induced pluripotent stem cells (iPSCs) and genome editing technologies (e.g., CRISPR/Cas9), have tremendously expanded the potential of high-content assays to contribute to the drug discovery process. Increasingly predictive cellular models and readouts, together with the development of more sophisticated and affordable HCS readers, will further consolidate the role of HCS technology in drug discovery.

  13. 30 CFR 57.12033 - Hand-held electric tools.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Hand-held electric tools. 57.12033 Section 57.12033 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL... Surface and Underground § 57.12033 Hand-held electric tools. Hand-held electric tools shall not be...

  14. Web-based services for drug design and discovery.

    Science.gov (United States)

    Frey, Jeremy G; Bird, Colin L

    2011-09-01

    Reviews of the development of drug discovery through the 20th century recognised the importance of chemistry and increasingly bioinformatics, but had relatively little to say about the importance of computing and networked computing in particular. However, the design and discovery of new drugs is arguably the most significant single application of bioinformatics and cheminformatics to have benefitted from the increases in the range and power of the computational techniques since the emergence of the World Wide Web, commonly now referred to as simply 'the Web'. Web services have enabled researchers to access shared resources and to deploy standardized calculations in their search for new drugs. This article first considers the fundamental principles of Web services and workflows, and then explores the facilities and resources that have evolved to meet the specific needs of chem- and bio-informatics. This strategy leads to a more detailed examination of the basic components that characterise molecules and the essential predictive techniques, followed by a discussion of the emerging networked services that transcend the basic provisions, and the growing trend towards embracing modern techniques, in particular the Semantic Web. In the opinion of the authors, the issues that require community action are: increasing the amount of chemical data available for open access; validating the data as provided; and developing more efficient links between the worlds of cheminformatics and bioinformatics. The goal is to create ever better drug design services.

  15. eagle-i: An Ontology-Driven Framework For Biomedical Resource Curation And Discovery

    OpenAIRE

    Erik Segerdell; Melanie L. Wilson; Ted Bashor; Daniela Bourges-Waldegg; Karen Corday; H. Robert Frost; Tenille Johnson; Christopher J. Shaffer; Larry Stone; Carlo Torniai; Melissa A. Haendel

    2010-01-01

    The eagle-i Consortium (http://www.eagle-i.org/home) comprises nine geographically and ethnically diverse universities across America working to build a federated network of research resources. Biomedical research generates many resources that are rarely shared or published, including: reagents, protocols, instruments, expertise, organisms, training opportunities, software, human studies, and biological specimens. The goal of eagle-i is to improve biomedical r...

  16. The ChIP-Seq tools and web server: a resource for analyzing ChIP-seq and other types of genomic data.

    Science.gov (United States)

    Ambrosini, Giovanna; Dreos, René; Kumar, Sunil; Bucher, Philipp

    2016-11-18

    ChIP-seq and related high-throughput chromatin profiling assays generate ever-increasing volumes of highly valuable biological data. To make sense of it, biologists need versatile, efficient and user-friendly tools for access, visualization and integrative analysis of such data. Here we present the ChIP-Seq command line tools and web server, implementing basic algorithms for ChIP-seq data analysis starting with a read alignment file. The tools are optimized for memory-efficiency and speed, thus allowing for processing of large data volumes on inexpensive hardware. The web interface provides access to a large database of public data. The ChIP-Seq tools have a modular and interoperable design in that the output from one application can serve as input to another one. Complex and innovative tasks can thus be achieved by running several tools in a cascade. The various ChIP-Seq command line tools and web services either complement or compare favorably to related bioinformatics resources in terms of computational efficiency, ease of access to public data and interoperability with other web-based tools. The ChIP-Seq server is accessible at http://ccg.vital-it.ch/chipseq/.

  17. Teach Astronomy: An Online Textbook for Introductory Astronomy Courses and Resources for Informal Learners

    Science.gov (United States)

    Hardegree-Ullman, Kevin; Impey, C. D.; Patikkal, A.

    2012-05-01

    This year we implemented Teach Astronomy (www.teachastronomy.com) as a free online resource to be used as a teaching tool for non-science major astronomy courses and for a general audience interested in the subject. The comprehensive content includes: an introductory astronomy textbook by Chris Impey, astronomy articles on Wikipedia, images from the Astronomy Picture of the Day, two- to three-minute topical video clips by Chris Impey, podcasts from 365 Days of Astronomy, and astronomy news from Science Daily. Teach Astronomy utilizes a novel technology, called a Wikimap, to cluster, display, and navigate search results. Steep increases in textbook prices and the unique capabilities of emerging web technology motivated the development of this free online resource. Recent additions to Teach Astronomy include: images and diagrams for the textbook articles, mobile device implementation, and suggested homework assignments for instructors that utilize recent discoveries in astronomy. We present an overview of how Teach Astronomy has been implemented for use in the classroom and informal settings, and suggestions for utilizing the rich content and features of the web site.

  18. Knowledge Discovery in Biological Databases for Revealing Candidate Genes Linked to Complex Phenotypes.

    Science.gov (United States)

    Hassani-Pak, Keywan; Rawlings, Christopher

    2017-06-13

    Genetics and "omics" studies designed to uncover genotype to phenotype relationships often identify large numbers of potential candidate genes, among which the causal genes are hidden. Scientists generally lack the time and technical expertise to review all relevant information available from the literature, from key model species and from a potentially wide range of related biological databases in a variety of data formats with variable quality and coverage. Computational tools are needed for the integration and evaluation of heterogeneous information in order to prioritise candidate genes and components of interaction networks that, if perturbed through potential interventions, have a positive impact on the biological outcome in the whole organism without producing negative side effects. Here we review several bioinformatics tools and databases that play an important role in biological knowledge discovery and candidate gene prioritization. We conclude with several key challenges that need to be addressed in order to facilitate biological knowledge discovery in the future.

  19. Knowledge Discovery Process: Case Study of RNAV Adherence of Radar Track Data

    Science.gov (United States)

    Matthews, Bryan

    2018-01-01

    This talk is an introduction to the knowledge discovery process, beginning with: identifying the problem, choosing data sources, matching the appropriate machine learning tools, and reviewing the results. The overview will be given in the context of an ongoing study that is assessing RNAV adherence of commercial aircraft in the national airspace.

  20. Green Power Partner Resources

    Science.gov (United States)

    EPA Green Power Partners can access tools and resources to help promote their green power commitments. Partners use these tools to communicate the benefits of their green power use to their customers, stakeholders, and the general public.

  1. A Scalable Infrastructure for Lidar Topography Data Distribution, Processing, and Discovery

    Science.gov (United States)

    Crosby, C. J.; Nandigam, V.; Krishnan, S.; Phan, M.; Cowart, C. A.; Arrowsmith, R.; Baru, C.

    2010-12-01

    High-resolution topography data acquired with lidar (light detection and ranging) technology have emerged as a fundamental tool in the Earth sciences, and are also being widely utilized for ecological, planning, engineering, and environmental applications. Collected from airborne, terrestrial, and space-based platforms, these data are revolutionary because they permit analysis of geologic and biologic processes at resolutions essential for their appropriate representation. Public domain lidar data collected by federal, state, and local agencies are a valuable resource to the scientific community; however, the data pose significant distribution challenges because of the volume and complexity of data that must be stored, managed, and processed. Lidar data acquisition may generate terabytes of data in the form of point clouds, digital elevation models (DEMs), and derivative products. This massive volume of data is often challenging to host for resource-limited agencies. Furthermore, these data can be technically challenging for users who lack appropriate software, computing resources, and expertise. The National Science Foundation-funded OpenTopography Facility (www.opentopography.org) has developed a cyberinfrastructure-based solution to enable online access to Earth science-oriented high-resolution lidar topography data, online processing tools, and derivative products. OpenTopography provides access to terabytes of point cloud data, standard DEMs, and Google Earth image data, all co-located with computational resources for on-demand data processing. The OpenTopography portal is built upon a cyberinfrastructure platform that utilizes a Services Oriented Architecture (SOA) to provide a modular system that is highly scalable and flexible enough to support the growing needs of the Earth science lidar community. OpenTopography strives to host and provide access to datasets as soon as they become available, and also to expose greater application level functionalities to
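    As a toy illustration of the point-cloud-to-DEM processing that such facilities run at scale, the sketch below bins lidar returns into a regular grid and averages the elevation per cell. The function name, gridding rule, and data are assumptions for illustration, not OpenTopography's implementation.

```python
# Toy point-cloud-to-DEM gridding: assign each (x, y, z) return to a grid
# cell of side `cell` and report the mean elevation per cell. Real DEM
# pipelines use more sophisticated interpolation (e.g. TIN or splines).

def grid_dem(points, cell=1.0):
    """points: iterable of (x, y, z) tuples; returns {(col, row): mean z}."""
    sums, counts = {}, {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        sums[key] = sums.get(key, 0.0) + z
        counts[key] = counts.get(key, 0) + 1
    return {k: sums[k] / counts[k] for k in sums}
```

    Even this naive binning shows why hosting is hard: a billion-return survey produces gigabytes of input before any derivative product is computed.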

  2. SpirPep: an in silico digestion-based platform to assist bioactive peptides discovery from a genome-wide database.

    Science.gov (United States)

    Anekthanakul, Krittima; Hongsthong, Apiradee; Senachak, Jittisak; Ruengjitchatchawalya, Marasri

    2018-04-20

    Bioactive peptides, protein fragments derived from biological sources and exhibiting various biological activities, influence the functions or conditions of organisms, in particular humans and animals. Conventional methods of identifying bioactive peptides are time-consuming and costly. To speed up the process, several bioinformatics tools have recently been used to screen potential peptides prior to their activity assessment in vitro and/or in vivo. In this study, we developed an efficient computational method, SpirPep, which offers many advantages over currently available tools. The SpirPep web application is a one-stop analysis and visualization facility to assist bioactive peptide discovery. The tool is equipped with 15 customized enzymes and 1-3 miscleavage options, allowing in silico digestion of protein sequences encoded by protein-coding genes at single-protein, multi-protein, or genome-wide scale, and then directly classifies the peptides by bioactivity using an in-house database that contains bioactive peptides collected from 13 public databases. With this tool, the resulting peptides are categorized by each selected enzyme and shown in a tabular format in which the peptide sequences can be tracked back to their original proteins. The developed tool and webpages are coded in PHP and HTML with CSS/JavaScript. Moreover, the tool supports protein-peptide alignment visualization via the Generic Genome Browser (GBrowse), displaying the region and details of the proteins and peptides within each parameter, while considering digestion design for the desired bioactivity. SpirPep is efficient; it takes less than 20 min to digest 3000 proteins (751,860 amino acids) with 15 enzymes and three miscleavages for each enzyme, and only a few seconds for single-enzyme digestion. The tool identified more bioactive peptides than the benchmarked tool; an example of a validated pentapeptide (FLPIL) from LC-MS/MS was demonstrated. The
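    The in silico digestion step that SpirPep automates can be sketched with the classic trypsin rule (cleave after K or R unless followed by P) plus a missed-cleavage window. The function, the rule choice, and the example sequence below are a simplified illustration of the general technique, not SpirPep's code; the input sequence is invented and merely happens to contain the pentapeptide FLPIL.

```python
import re

# In silico tryptic digestion sketch: split a protein after K or R (but not
# before P), then join up to `max_miscleavages` adjacent fragments to model
# incomplete digestion. A tool like SpirPep applies 15 such enzyme rules.

def tryptic_digest(protein: str, max_miscleavages: int = 2) -> list:
    """Return all peptides with 0..max_miscleavages missed cleavage sites."""
    # Zero-width split at every tryptic cleavage site.
    fragments = re.split(r"(?<=[KR])(?!P)", protein)
    peptides = []
    for i in range(len(fragments)):
        for j in range(i, min(i + max_miscleavages + 1, len(fragments))):
            peptides.append("".join(fragments[i:j + 1]))
    return peptides
```

    For example, `tryptic_digest("MKWVRFLPIL", max_miscleavages=1)` yields the fully cleaved fragments MK, WVR and FLPIL plus the miscleaved peptides MKWVR and WVRFLPIL; a database lookup by bioactivity would then filter this candidate list.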

  3. USMC Logistics Resource Allocation Optimization Tool

    Science.gov (United States)

    2015-12-01

    Pryor 2006). The background information included assumptions of the weight carried by pack animals vs. war horses and the amount of feed consumed...by different animals while being transported at sea versus open grazing on land. A book review of this workshop’s proceedings provides the essence...this study’s ability to model all aspects of mission, equipment types, failure modes, repair times, carcass cycles, etc., into a flexible tool

  4. A real-time Java tool chain for resource constrained platforms

    DEFF Research Database (Denmark)

    Korsholm, Stephan Erbs; Søndergaard, Hans; Ravn, Anders P.

    2013-01-01

    The Java programming language was originally developed for embedded systems, but the resource requirements of previous and current Java implementations - especially memory consumption - tend to exclude them from being used on a significant class of resource constrained embedded platforms. The con... by integrating: (1) a lean virtual machine (HVM) without any external dependencies on POSIX-like libraries or other OS functionalities, (2) a hardware abstraction layer, implemented almost entirely in Java through the use of hardware objects, first level interrupt handlers, and native variables, and (3) ... An evaluation of the presented solution shows that the miniCDj benchmark gets reduced to a size where it can run on resource constrained platforms.

  5. Accessing external innovation in drug discovery and development.

    Science.gov (United States)

    Tufféry, Pierre

    2015-06-01

    A decline in the productivity of the pharmaceutical industry research and development (R&D) pipeline has highlighted the need to reconsider the classical strategies of drug discovery and development, which are based on internal resources, and to identify new means to improve the drug discovery process. Accepting that the combination of internal and external ideas can improve innovation, ways to access external innovation, that is, opening projects to external contributions, have recently been sought. In this review, the authors look at a number of external innovation opportunities. These include increased interactions with academia via academic centers of excellence/innovation centers, better communication on projects using crowdsourcing or social media, and new models centered on external providers such as built-to-buy startups or virtual pharmaceutical companies. The push to access external innovation reflects the pharmaceutical industry's major challenge to improve R&D productivity, a conjuncture favorable to increased interactions with academia, and new business models supporting access to external innovation. So far, access to external innovation has mostly been considered during early stages of drug development, and there is room for enhancement. First outcomes suggest that external innovation should become part of drug development in the long term. However, the balance between internal and external developments in drug discovery can vary largely depending on the company strategies.

  6. United States uranium resources: an analysis of historical data

    International Nuclear Information System (INIS)

    Lieberman, M.A.

    1976-01-01

    Using historical data, a study of U.S. uranium resources was performed with emphasis on discovery and drilling rates for the time interval from 1948 until the present. The ultimate recoverable resource, up to a forward cost category of $30 or less per pound, is estimated to be 1,134,000 short tons, about one third of the estimate offered by ERDA. A serious shortfall in uranium supply is predicted for the late 1980s if nuclear power proceeds as planned, and courses of action are recommended for uranium resource management.

  7. Computer-aided drug discovery [v1; ref status: indexed, http://f1000r.es/5ij]

    Directory of Open Access Journals (Sweden)

    Jürgen Bajorath

    2015-08-01

    Computational approaches are an integral part of interdisciplinary drug discovery research. Understanding the science behind computational tools, their opportunities, and limitations is essential to make a true impact on drug discovery at different levels. If applied in a scientifically meaningful way, computational methods improve the ability to identify and evaluate potential drug molecules, but there remain weaknesses in the methods that preclude naïve applications. Herein, current trends in computer-aided drug discovery are reviewed, and selected computational areas are discussed. Approaches are highlighted that aid in the identification and optimization of new drug candidates. Emphasis is put on the presentation and discussion of computational concepts and methods, rather than case studies or application examples. As such, this contribution aims to provide an overview of the current methodological spectrum of computational drug discovery for a broad audience.

  8. Usability of Discovery Portals

    OpenAIRE

    Bulens, J.D.; Vullings, L.A.E.; Houtkamp, J.M.; Vanmeulebrouk, B.

    2013-01-01

    As INSPIRE progresses to be implemented in the EU, many new discovery portals are built to facilitate finding spatial data. Currently the structure of the discovery portals is determined by the way spatial data experts like to work. However, we argue that the main target group for discovery portals is not spatial data experts but professionals with limited spatial knowledge and a focus outside the spatial domain. An exploratory usability experiment was carried out in which three discovery p...

  9. Drawbacks and benefits associated with inter-organizational collaboration along the discovery-development-delivery continuum: a cancer research network case study.

    Science.gov (United States)

    Harris, Jenine K; Provan, Keith G; Johnson, Kimberly J; Leischow, Scott J

    2012-07-25

    The scientific process around cancer research begins with scientific discovery, followed by development of interventions, and finally delivery of needed interventions to people with cancer. Numerous studies have identified substantial gaps between discovery and delivery in health research. Team science has been identified as a possible solution for closing the discovery-to-delivery gap; however, little is known about effective ways of collaborating within teams and across organizations. The purpose of this study was to determine benefits and drawbacks associated with organizational collaboration across the discovery-development-delivery research continuum. Representatives of organizations working on cancer research across a state answered a survey about how they collaborated with other cancer research organizations in the state and what benefits and drawbacks they experienced while collaborating. We used exponential random graph modeling to determine the association between these benefits and drawbacks and the presence of a collaboration tie between any two network members. Different drawbacks and benefits were associated with discovery, development, and delivery collaborations. The only consistent association across all three was with the drawback of difficulty due to geographic differences, which was negatively associated with collaboration, indicating that organizations that had collaborated were less likely to perceive a barrier related to geography. The benefit, enhanced access to other knowledge, was positive and significant in the development and delivery networks, indicating that collaborating organizations viewed improved knowledge exchange as a benefit of collaboration. 'Acquisition of additional funding or other resources' and 'development of new tools and methods' were significantly negatively related to collaboration in these networks. So, although improved knowledge access was an outcome of collaboration, more tangible outcomes were not being

  10. OSIRIS, an entirely in-house developed drug discovery informatics system.

    Science.gov (United States)

    Sander, Thomas; Freyss, Joel; von Korff, Modest; Reich, Jacqueline Renée; Rufener, Christian

    2009-02-01

    We present OSIRIS, an entirely in-house developed drug discovery informatics system. Its components cover all information handling aspects from compound synthesis via biological testing to preclinical development. Its design principles are platform and vendor independence, a consistent look and feel, and complete coverage of the drug discovery process by custom tailored applications. These include electronic laboratory notebook applications for biology and chemistry, tools for high-throughput and secondary screening evaluation, chemistry-aware data visualization, physicochemical property prediction, 3D-pharmacophore comparisons, interactive modeling, computing grid based ligand-protein docking, and more. Most applications are developed in Java and are built on top of a Java library layer that provides reusable cheminformatics functionality and GUI components such as chemical editors, structure canonicalization, substructure search, combinatorial enumeration, enhanced stereo perception, force field minimization, and conformation generation.

  11. Tools for Engaging Scientists in Education and Public Outreach: Resources from NASA's Science Mission Directorate Forums

    Science.gov (United States)

    Buxner, S.; Grier, J.; Meinke, B. K.; Gross, N. A.; Woroner, M.

    2014-12-01

    The NASA Science Education and Public Outreach (E/PO) Forums support the NASA Science Mission Directorate (SMD) and its E/PO community by enhancing the coherency and efficiency of SMD-funded E/PO programs. The Forums foster collaboration and partnerships between scientists with content expertise and educators with pedagogy expertise. We will present tools and resources that support scientists' engagement in E/PO efforts. Scientists can connect with educators and find support materials and links to resources for their E/PO work through the online SMD E/PO community workspace (http://smdepo.org). The site includes resources for scientists interested in E/PO, including one-page guides about "How to Get Involved" and "How to Increase Your Impact," as well as the NASA SMD Scientist Speaker's Bureau to connect scientists to audiences across the country. Additionally, there is a set of online clearinghouses that provide ready-made lessons and activities for use by scientists and educators: NASA Wavelength (http://nasawavelength.org/) and EarthSpace (http://www.lpi.usra.edu/earthspace/). The NASA Forums create and partner with organizations to provide resources specifically for undergraduate science instructors, including slide sets for Earth and Space Science classes on current topics in astronomy and planetary science. The Forums also provide professional development opportunities at professional science conferences each year, including AGU, LPSC, AAS, and DPS, to support higher education faculty who are teaching undergraduate courses. These offerings include best practices in instruction, resources for teaching planetary science and astronomy topics, and other special topics such as working with diverse students and the use of social media in the classroom. We are continually soliciting ways that we can better support scientists' efforts in effectively engaging in E/PO.
Please contact Sanlyn Buxner (buxner@psi.edu) or Jennifer Grier (jgrier@psi.edu) to

  12. Tools and resources for neuroanatomy education: a systematic review.

    Science.gov (United States)

    Arantes, M; Arantes, J; Ferreira, M A

    2018-05-03

    The aim of this review was to identify studies exploring neuroanatomy teaching tools and their impact on learning, as a basis towards the implementation of a neuroanatomy program in the context of a curricular reform in medical education. Computer-assisted searches were conducted through March 2017 in the PubMed, Web of Science, Medline, Current Contents Connect, KCI and SciELO Citation Index databases. Four sets of keywords were used, combining "neuroanatomy" with "education", "teaching", "learning" and "student*". Studies were reviewed independently by two readers, and the data collected were confirmed by a third reader. Of the 214 studies identified, 29 reported data on the impact of using specific neuroanatomy teaching tools. Most of them (83%) were published in the last 8 years, and most were conducted in the United States of America (65.52%). Regarding the participants, medical students were the most studied sample (37.93%), and the majority of the studies (65.52%) had fewer than 100 participants. Approximately half of the studies included in this review used digital teaching tools (e.g., 3D computer neuroanatomy models), whereas the remainder used non-digital learning tools (e.g., 3D physical models). Our work highlights the growing interest in neuroanatomy teaching tools over recent years, as evidenced by the number of publications, and highlights the need to consider new tools that keep pace with technological development in medical education.

  13. Pulsed laser deposition—invention or discovery?

    International Nuclear Information System (INIS)

    Venkatesan, T

    2014-01-01

    The evolution of pulsed laser deposition has been an exciting process of invention and discovery, with the development of high-Tc superconducting films as the main driver. It has become the method of choice in research and development for rapid prototyping of multicomponent inorganic materials, used for preparing a variety of thin films, heterostructures and atomically sharp interfaces, and it has become an indispensable tool for advancing oxide electronics. In this paper I give a personal account of the invention and development of this process at Bellcore/Rutgers: the opportunities, the challenges, and above all the extraordinary excitement that was generated, typical of any disruptive technology. (paper)

  14. Impact of Drought on Groundwater and Soil Moisture - A Geospatial Tool for Water Resource Management

    Science.gov (United States)

    Ziolkowska, J. R.; Reyes, R.

    2016-12-01

    For many decades, recurring droughts in different regions of the US have negatively impacted ecosystems and economic sectors. Oklahoma and Texas suffered from exceptional and extreme droughts in 2011-2014, with almost 95% of the state areas being affected (Drought Monitor, 2015). In 2011 alone, around $1.6 billion was lost to drought in Oklahoma's agricultural sector (Stotts 2011), and $7.6 billion in Texas agriculture (Fannin 2012). While surface water is among the instant indicators of drought conditions, it does not translate directly to groundwater resources, which are the main source of irrigation water. Both surface water and groundwater are susceptible to drought, but groundwater depletion is a long-term process whose effects may not show immediately. Understanding groundwater availability is therefore crucial for designing water management strategies and sustainable water use in agriculture and other economic sectors. This paper presents an interactive, geospatially weighted evaluation model, and at the same time a tool, for analyzing groundwater resources that can be used for decision support in water management. The tool combines groundwater and soil moisture changes in Oklahoma and Texas in 2003-2014, thus capturing the most important indicators of agricultural and hydrological drought. The model allows for analyzing long-term drought temporally and geospatially at the county level, and it can be expanded to other regions in the US and the world. The model has been validated against the Palmer Drought Severity Index to account for other indicators of meteorological drought. It can serve as a basis for an upcoming socio-economic and environmental analysis of short- and long-term drought events in different geographic regions.
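    The core of such a geospatially weighted indicator is combining standardized groundwater and soil-moisture anomalies into one county-level drought signal. A minimal sketch, under assumed equal weights (the published tool's actual weighting scheme is not specified here), with hypothetical time series:

    ```python
    from statistics import mean, stdev

    def standardized_anomalies(series):
        """Convert a time series to z-scores (standardized departures from the mean)."""
        mu, sigma = mean(series), stdev(series)
        return [(x - mu) / sigma for x in series]

    def drought_indicator(groundwater, soil_moisture, w_gw=0.5, w_sm=0.5):
        """Weighted combination of groundwater and soil-moisture anomalies
        for one county; negative values indicate drier-than-normal conditions."""
        gw_z = standardized_anomalies(groundwater)
        sm_z = standardized_anomalies(soil_moisture)
        return [w_gw * g + w_sm * s for g, s in zip(gw_z, sm_z)]
    ```

    With declining groundwater levels and soil moisture, the indicator trends negative over time, flagging emerging drought.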

  15. Building A NGS Genomic Resource: Towards Molecular Breeding In L. Perenne

    DEFF Research Database (Denmark)

    Ruttink, Tom; Roldán-Ruiz, Isabel; Asp, Torben

    To advance the application of molecular breeding in Lolium perenne, we have generated a sequence resource to facilitate gene discovery and SNP marker development. Illumina GAII transcriptome sequencing was performed on meristem-enriched samples of 14 Lolium genotypes. De novo assemblies ... of SNP markers in selected candidate genes. In parallel, a germplasm collection of 602 Lolium genotypes was established and is being phenotyped for plant architecture, reproductive characteristics, flowering time, and forage quality traits. We will test through association genetics whether phenotypic ...

  16. The clinical impact of recent advances in LC-MS for cancer biomarker discovery and verification

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Hui; Shi, Tujin; Qian, Wei-Jun; Liu, Tao; Kagan, Jacob; Srivastava, Sudhir; Smith, Richard D.; Rodland, Karin D.; Camp, David G.

    2015-12-04

    Mass spectrometry-based proteomics has become an indispensable tool in biomedical research, with broad applications spanning fundamental biology, systems biology, and biomarker discovery. Recent advances in LC-MS have made it a major technology in clinical applications, especially in cancer biomarker discovery and verification. To overcome the challenges associated with the analysis of clinical samples, such as the extremely wide dynamic range of protein concentrations in biofluids and the need for high-throughput, accurate quantification, significant efforts have been devoted to improving the overall performance of LC-MS-based clinical proteomics. In this review, we summarize recent advances in LC-MS for cancer biomarker discovery and quantification, and discuss its potential, limitations, and future perspectives.

  17. A real-time Java tool chain for resource constrained platforms

    DEFF Research Database (Denmark)

    Korsholm, Stephan E.; Søndergaard, Hans; Ravn, Anders Peter

    2014-01-01

    The Java programming language was originally developed for embedded systems, but the resource requirements of previous and current Java implementations – especially memory consumption – tend to exclude them from being used on a significant class of resource constrained embedded platforms. The tool chain presented here addresses this by integrating the following: (1) a lean virtual machine without any external dependencies on POSIX-like libraries or other OS functionalities; (2) a hardware abstraction layer, implemented almost entirely in Java through the use of hardware objects, first level interrupt handlers, and native variables; and (3) ... An evaluation of the presented solution shows that the miniCDj benchmark gets reduced to a size where it can run on resource constrained platforms.

  18. The NCAR Research Data Archive's Hybrid Approach for Data Discovery and Access

    Science.gov (United States)

    Schuster, D.; Worley, S. J.

    2013-12-01

    The NCAR Research Data Archive (RDA, http://rda.ucar.edu) maintains a variety of data discovery and access capabilities for its 600+ dataset collections to support the varying needs of a diverse user community. In-house developed and standards-based community tools offer services to more than 10,000 users annually. By number of users, the largest group is external and accesses the RDA through web-based protocols; the internal NCAR HPC users are fewer in number, but typically access more data volume. This paper details the data discovery and access services maintained by the RDA to support both user groups, and shows metrics that illustrate how the community is using the services. The distributed search capability enabled by standards-based community tools, such as Geoportal and an OAI-PMH access point that serves multiple metadata standards, provides pathways for external users to initially discover RDA holdings. From there, in-house developed web interfaces leverage primary discovery-level metadata databases that support keyword and faceted searches. Internal NCAR HPC users, or those familiar with the RDA, may go directly to the dataset collection of interest and refine their search based on rich file-collection metadata. Multiple levels of metadata have proven invaluable for discovery within terabyte-sized archives composed of many atmospheric or oceanic levels, hundreds of parameters, and often numerous grid and time resolutions. Once users find the data they want, their access needs may vary as well. A THREDDS data server running on targeted dataset collections enables remote file access through OPeNDAP and other web-based protocols, primarily for external users. In-house developed tools give all users the capability to submit data subset extraction and format conversion requests through scalable, HPC-based delayed-mode batch processing. Users can monitor their RDA-based data processing progress and receive instructions on how to access the data when it is ready.
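    The keyword and faceted search described above can be illustrated with a toy metadata index (dataset records, facet names, and values here are hypothetical, not the RDA's actual schema):

    ```python
    def facet_counts(datasets, facet):
        """Count how many datasets carry each value of a facet (e.g. parameter, grid)."""
        counts = {}
        for d in datasets:
            for value in d.get(facet, []):
                counts[value] = counts.get(value, 0) + 1
        return counts

    def search(datasets, keyword=None, **facets):
        """Keyword search over titles, narrowed by exact facet matches."""
        hits = []
        for d in datasets:
            if keyword and keyword.lower() not in d["title"].lower():
                continue
            if all(value in d.get(facet, []) for facet, value in facets.items()):
                hits.append(d)
        return hits
    ```

    A user might first browse `facet_counts(datasets, "parameter")` to see what is available, then narrow with `search(datasets, keyword="reanalysis", grid="0.75deg")`.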

  19. Contributions of computational chemistry and biophysical techniques to fragment-based drug discovery.

    Science.gov (United States)

    Gozalbes, Rafael; Carbajo, Rodrigo J; Pineda-Lucena, Antonio

    2010-01-01

    In the last decade, fragment-based drug discovery (FBDD) has evolved from a novel approach in the search for new hits to a valuable alternative to the high-throughput screening (HTS) campaigns of many pharmaceutical companies. The increasing relevance of FBDD in the drug discovery universe has been concomitant with an implementation of the biophysical techniques used for the detection of weak inhibitors, e.g. NMR, X-ray crystallography or surface plasmon resonance (SPR). At the same time, computational approaches have been progressively incorporated into the FBDD process, and nowadays several computational tools are available. These stretch from the filtering of huge chemical databases to build fragment-focused libraries comprising compounds with adequate physicochemical properties, to more evolved models based on different in silico methods such as docking, pharmacophore modelling, QSAR and virtual screening. In this paper we review the parallel evolution and complementarities of biophysical techniques and computational methods, providing representative examples of drug discovery success stories achieved using FBDD.
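    The fragment-focused library filtering mentioned above typically applies simple physicochemical cut-offs, such as the commonly cited Astex "rule of three". A minimal sketch, assuming property values have already been computed by a cheminformatics toolkit (the molecule records are illustrative):

    ```python
    def passes_rule_of_three(mol):
        """'Rule of three' for fragment libraries:
        MW <= 300 Da, cLogP <= 3, H-bond donors <= 3, H-bond acceptors <= 3."""
        return (mol["mw"] <= 300 and mol["clogp"] <= 3
                and mol["hbd"] <= 3 and mol["hba"] <= 3)

    library = [
        {"name": "frag-1", "mw": 212.3, "clogp": 1.8, "hbd": 1, "hba": 3},
        {"name": "lead-7", "mw": 478.6, "clogp": 4.9, "hbd": 2, "hba": 7},
    ]
    fragments = [m for m in library if passes_rule_of_three(m)]
    ```

    Extensions often add cut-offs on rotatable bonds and polar surface area; the point is that fragment filters are deliberately stricter than lead-like filters because weak binders must remain small and ligand-efficient.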

  20. 19 CFR 354.10 - Discovery.

    Science.gov (United States)

    2010-04-01

    ... 19 Customs Duties 3 2010-04-01 2010-04-01 false Discovery. 354.10 Section 354.10 Customs Duties... ANTIDUMPING OR COUNTERVAILING DUTY ADMINISTRATIVE PROTECTIVE ORDER § 354.10 Discovery. (a) Voluntary discovery. All parties are encouraged to engage in voluntary discovery procedures regarding any matter, not...

  1. 36 CFR 1150.63 - Discovery.

    Science.gov (United States)

    2010-07-01

    ... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false Discovery. 1150.63 Section... PRACTICE AND PROCEDURES FOR COMPLIANCE HEARINGS Prehearing Conferences and Discovery § 1150.63 Discovery. (a) Parties are encouraged to engage in voluntary discovery procedures. For good cause shown under...

  2. 37 CFR 11.52 - Discovery.

    Science.gov (United States)

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Discovery. 11.52 Section 11... Disciplinary Proceedings; Jurisdiction, Sanctions, Investigations, and Proceedings § 11.52 Discovery. Discovery... establishes that discovery is reasonable and relevant, the hearing officer, under such conditions as he or she...

  3. The University of New Mexico Center for Molecular Discovery

    Science.gov (United States)

    Edwards, Bruce S.; Gouveia, Kristine; Oprea, Tudor I.; Sklar, Larry A.

    2015-01-01

    The University of New Mexico Center for Molecular Discovery (UNMCMD) is an academic research center that specializes in discovery using high throughput flow cytometry (HTFC) integrated with virtual screening, as well as knowledge mining and drug informatics. With a primary focus on identifying small molecules that can be used as chemical probes and as leads for drug discovery, it is a central core resource for research and translational activities at UNM that supports implementation and management of funded screening projects as well as “up-front” services such as consulting for project design and implementation, assistance in assay development and generation of preliminary data for pilot projects in support of competitive grant applications. The HTFC platform in current use represents advanced, proprietary technology developed at UNM that is now routinely capable of processing bioassays arrayed in 96-, 384- and 1536-well formats at throughputs of 60,000 or more wells per day. Key programs at UNMCMD include screening of research targets submitted by the international community through NIH’s Molecular Libraries Program; a multi-year effort involving translational partnerships at UNM directed towards drug repurposing - identifying new uses for clinically approved drugs; and a recently established personalized medicine initiative for advancing cancer therapy by the application of “smart” oncology drugs in selected patients based on response patterns of their cancer cells in vitro. UNMCMD discoveries, innovation, and translation have contributed to a wealth of inventions, patents, licenses and publications, as well as startup companies, clinical trials and a multiplicity of domestic and international collaborative partnerships to further the research enterprise. PMID:24409953

  4. Usability of Discovery Portals

    NARCIS (Netherlands)

    Bulens, J.D.; Vullings, L.A.E.; Houtkamp, J.M.; Vanmeulebrouk, B.

    2013-01-01

    As INSPIRE progresses to be implemented in the EU, many new discovery portals are built to facilitate finding spatial data. Currently the structure of the discovery portals is determined by the way spatial data experts like to work. However, we argue that the main target group for discovery portals

  5. FAF-Drugs2: Free ADME/tox filtering tool to assist drug discovery and chemical biology projects

    Directory of Open Access Journals (Sweden)

    Miteva Maria A

    2008-09-01

    Full Text Available Abstract Background Drug discovery and chemical biology are exceedingly complex and demanding enterprises. In recent years there has been increasing awareness of the importance of predicting/optimizing the absorption, distribution, metabolism, excretion and toxicity (ADMET) properties of small chemical compounds throughout the search process rather than at the final stages. Fast methods for evaluating ADMET properties of small molecules often involve applying a set of simple empirical rules (educated guesses), and as such, compound collections' property profiling can be performed in silico. Clearly, these rules cannot capture the full complexity of the human body, but they can provide valuable information and assist decision-making. Results This paper presents FAF-Drugs2, a free adaptable tool for ADMET filtering of electronic compound collections. FAF-Drugs2 is a command line utility program (written in Python) based on the open source chemistry toolkit OpenBabel, which performs various physicochemical calculations, identifies key functional groups, and flags certain toxic and unstable molecules/functional groups. In addition to filtered collections, FAF-Drugs2 can provide, via Gnuplot, several distribution diagrams of major physicochemical properties of the screened compound libraries. Conclusion We have developed FAF-Drugs2 to facilitate compound collection preparation, prior to (or after) experimental screening or virtual screening computations. Users can select to apply various filtering thresholds and add rules as needed for a given project. As it stands, FAF-Drugs2 implements numerous filtering rules (23 physicochemical rules and 204 substructure searching rules) that can be easily tuned.
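    The tunable threshold filtering described in the Conclusion can be sketched generically (the thresholds below are Lipinski-like illustrations, not FAF-Drugs2's actual defaults or rule set):

    ```python
    # Illustrative, Lipinski-like physicochemical limits; a real project
    # would tune these per target, as FAF-Drugs2 allows.
    DEFAULT_RULES = {"mw": 500, "logp": 5, "hbd": 5, "hba": 10}

    def filter_collection(molecules, rules=DEFAULT_RULES, max_violations=1):
        """Keep molecules violating at most max_violations of the threshold rules."""
        kept = []
        for m in molecules:
            violations = sum(m[prop] > limit for prop, limit in rules.items())
            if violations <= max_violations:
                kept.append(m)
        return kept
    ```

    Allowing a small number of violations rather than demanding a perfect pass is a common design choice: empirical ADMET rules are guidelines, and hard cut-offs discard borderline but useful chemistry.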

  6. Using the Semantic Web for Rapid Integration of WikiPathways with Other Biological Online Data Resources.

    Science.gov (United States)

    Waagmeester, Andra; Kutmon, Martina; Riutta, Anders; Miller, Ryan; Willighagen, Egon L; Evelo, Chris T; Pico, Alexander R

    2016-06-01

    The diversity of online resources storing biological data in different formats provides a challenge for bioinformaticians to integrate and analyse their biological data. The semantic web provides a standard to facilitate knowledge integration using statements built as triples describing a relation between two objects. WikiPathways, an online collaborative pathway resource, is now available in the semantic web through a SPARQL endpoint at http://sparql.wikipathways.org. Having biological pathways in the semantic web allows rapid integration with data from other resources that contain information about elements present in pathways using SPARQL queries. In order to convert WikiPathways content into meaningful triples we developed two new vocabularies that capture the graphical representation and the pathway logic, respectively. Each gene, protein, and metabolite in a given pathway is defined with a standard set of identifiers to support linking to several other biological resources in the semantic web. WikiPathways triples were loaded into the Open PHACTS discovery platform and are available through its Web API (https://dev.openphacts.org/docs) to be used in various tools for drug development. We combined various semantic web resources with the newly converted WikiPathways content using a variety of SPARQL query types and third-party resources, such as the Open PHACTS API. The ability to use pathway information to form new links across diverse biological data highlights the utility of integrating WikiPathways in the semantic web.
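    The triple model that underlies this integration can be illustrated with a toy in-memory store and a SPARQL-like pattern matcher (the identifiers are hypothetical; the real endpoint at http://sparql.wikipathways.org answers full SPARQL queries):

    ```python
    # A tiny set of (subject, predicate, object) statements, the semantic web's
    # basic unit; identifiers are made up for illustration.
    triples = {
        ("gene:TP53", "partOf", "pathway:WP254"),
        ("metabolite:ATP", "partOf", "pathway:WP534"),
        ("gene:TP53", "encodes", "protein:P04637"),
    }

    def match(pattern, store):
        """Return triples matching an (s, p, o) pattern; None acts as a variable,
        like ?x in a SPARQL basic graph pattern."""
        s, p, o = pattern
        return [t for t in store
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]
    ```

    Asking "what belongs to any pathway?" is `match((None, "partOf", None), triples)`; linking out to other resources works because the same standard identifiers appear as subjects or objects in their triples too.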

  7. Using the Semantic Web for Rapid Integration of WikiPathways with Other Biological Online Data Resources.

    Directory of Open Access Journals (Sweden)

    Andra Waagmeester

    2016-06-01

    Full Text Available The diversity of online resources storing biological data in different formats provides a challenge for bioinformaticians to integrate and analyse their biological data. The semantic web provides a standard to facilitate knowledge integration using statements built as triples describing a relation between two objects. WikiPathways, an online collaborative pathway resource, is now available in the semantic web through a SPARQL endpoint at http://sparql.wikipathways.org. Having biological pathways in the semantic web allows rapid integration with data from other resources that contain information about elements present in pathways using SPARQL queries. In order to convert WikiPathways content into meaningful triples we developed two new vocabularies that capture the graphical representation and the pathway logic, respectively. Each gene, protein, and metabolite in a given pathway is defined with a standard set of identifiers to support linking to several other biological resources in the semantic web. WikiPathways triples were loaded into the Open PHACTS discovery platform and are available through its Web API (https://dev.openphacts.org/docs) to be used in various tools for drug development. We combined various semantic web resources with the newly converted WikiPathways content using a variety of SPARQL query types and third-party resources, such as the Open PHACTS API. The ability to use pathway information to form new links across diverse biological data highlights the utility of integrating WikiPathways in the semantic web.

  8. Using the Semantic Web for Rapid Integration of WikiPathways with Other Biological Online Data Resources

    Science.gov (United States)

    Waagmeester, Andra; Pico, Alexander R.

    2016-01-01

    The diversity of online resources storing biological data in different formats provides a challenge for bioinformaticians to integrate and analyse their biological data. The semantic web provides a standard to facilitate knowledge integration using statements built as triples describing a relation between two objects. WikiPathways, an online collaborative pathway resource, is now available in the semantic web through a SPARQL endpoint at http://sparql.wikipathways.org. Having biological pathways in the semantic web allows rapid integration with data from other resources that contain information about elements present in pathways using SPARQL queries. In order to convert WikiPathways content into meaningful triples we developed two new vocabularies that capture the graphical representation and the pathway logic, respectively. Each gene, protein, and metabolite in a given pathway is defined with a standard set of identifiers to support linking to several other biological resources in the semantic web. WikiPathways triples were loaded into the Open PHACTS discovery platform and are available through its Web API (https://dev.openphacts.org/docs) to be used in various tools for drug development. We combined various semantic web resources with the newly converted WikiPathways content using a variety of SPARQL query types and third-party resources, such as the Open PHACTS API. The ability to use pathway information to form new links across diverse biological data highlights the utility of integrating WikiPathways in the semantic web. PMID:27336457

  9. 14 CFR 16.213 - Discovery.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Discovery. 16.213 Section 16.213... PRACTICE FOR FEDERALLY-ASSISTED AIRPORT ENFORCEMENT PROCEEDINGS Hearings § 16.213 Discovery. (a) Discovery... discovery permitted by this section if a party shows that— (1) The information requested is cumulative or...

  10. 28 CFR 76.21 - Discovery.

    Science.gov (United States)

    2010-07-01

    ... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Discovery. 76.21 Section 76.21 Judicial... POSSESSION OF CERTAIN CONTROLLED SUBSTANCES § 76.21 Discovery. (a) Scope. Discovery under this part covers... as a general guide for discovery practices in proceedings before the Judge. However, unless otherwise...

  11. mizuRoute version 1: A river network routing tool for continental-domain water resources applications

    Science.gov (United States)

    Mizukami, Naoki; Clark, Martyn P.; Sampson, Kevin; Nijssen, Bart; Mao, Yixin; McMillan, Hilary; Viger, Roland; Markstrom, Steven; Hay, Lauren E.; Woods, Ross; Arnold, Jeffrey R.; Brekke, Levi D.

    2016-01-01

    This paper describes the first version of a stand-alone runoff routing tool, mizuRoute. The mizuRoute tool post-processes runoff outputs from any distributed hydrologic model or land surface model to produce spatially distributed streamflow at various spatial scales, from headwater basins to continent-wide river systems. The tool can utilize both traditional grid-based and vector-based river network data. Both types of river network include river segment lines and the associated drainage basin polygons, but the vector-based river network can represent finer-scale river lines than the grid-based network. Streamflow estimates at any desired location in the river network can be easily extracted from the output of mizuRoute. The routing process is simulated as two separate steps. First, hillslope routing is performed with a gamma-distribution-based unit hydrograph to transport runoff from a hillslope to a catchment outlet. The second step is river channel routing, which is performed with one of two routing scheme options: (1) a kinematic wave tracking (KWT) routing procedure; and (2) an impulse response function – unit hydrograph (IRF-UH) routing procedure. The mizuRoute tool also includes scripts (Python, NetCDF operators) to pre-process spatial river network data. This paper demonstrates mizuRoute's capabilities to produce spatially distributed streamflow simulations based on river networks from the United States Geological Survey (USGS) Geospatial Fabric (GF) data set, in which over 54 000 river segments and their contributing areas are mapped across the contiguous United States (CONUS). A brief analysis of model parameter sensitivity is also provided. The mizuRoute tool can assist model-based water resources assessments, including studies of the impacts of climate change on streamflow.
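    The hillslope routing step, a gamma-distribution unit hydrograph convolved with model runoff, can be sketched as follows (the shape and timescale parameters are illustrative, not mizuRoute's defaults):

    ```python
    import math

    def gamma_uh(shape, timescale, n_steps):
        """Ordinates of a gamma-distribution unit hydrograph, evaluated at
        time-step midpoints and normalized so the ordinates sum to 1."""
        times = [i + 0.5 for i in range(n_steps)]
        pdf = [t ** (shape - 1) * math.exp(-t / timescale)
               / (math.gamma(shape) * timescale ** shape) for t in times]
        total = sum(pdf)
        return [p / total for p in pdf]

    def route_hillslope(runoff, uh):
        """Convolve a runoff series with the unit hydrograph: each time step's
        runoff is spread over subsequent steps according to the UH ordinates."""
        out = [0.0] * (len(runoff) + len(uh) - 1)
        for i, r in enumerate(runoff):
            for j, u in enumerate(uh):
                out[i + j] += r * u
        return out
    ```

    Because the unit hydrograph is normalized, the convolution conserves mass: the routed series sums to the same total as the input runoff, just delayed and attenuated.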

  12. Automated tool for virtual screening and pharmacology-based pathway prediction and analysis

    Directory of Open Access Journals (Sweden)

    Sugandh Kumar

    2017-10-01

    Full Text Available Virtual screening is an effective tool for lead identification in drug discovery. However, the limited number of crystal structures available compared to the number of biological sequences makes structure-based drug discovery (SBDD) a difficult choice. The current tool is an attempt to automate protein structure modelling and virtual screening, followed by pharmacology-based prediction and analysis. Starting from sequence(s), this tool automates protein structure modelling, binding site identification, docking, ligand preparation, post-docking analysis and identification of hits in the biological pathways that can be modulated by a group of ligands. This automation helps in the characterization of ligand selectivity and of ligand action on a complex biological molecular network as well as on individual receptors. The judicious combination of ligands binding different receptors can be used to inhibit selective biological pathways in a disease. This tool also allows the user to systemically investigate network-dependent effects of a drug or drug candidate.
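    The end-to-end automation described here, sequence in, pathway-level analysis out, amounts to chaining stages where each consumes the previous stage's output. A schematic sketch with hypothetical stand-in stage functions (the real tool's stage names and data formats are not specified in the abstract):

    ```python
    def run_pipeline(seed, stages):
        """Run named stages in order, feeding each stage the previous result."""
        result = seed
        for name, stage in stages:
            result = stage(result)
            print(f"completed: {name}")
        return result

    # Hypothetical stand-ins for modelling, site finding, docking, and pathway mapping.
    stages = [
        ("model structure",   lambda seq: {"sequence": seq, "model": "model.pdb"}),
        ("find binding site", lambda m: {**m, "site": (12.1, 4.3, -7.8)}),
        ("dock ligands",      lambda m: {**m, "hits": ["ZINC0001", "ZINC0042"]}),
        ("map to pathways",   lambda m: {**m, "pathways": {"apoptosis"}}),
    ]
    ```

    Structuring the workflow this way is what makes the automation possible: any stage can be swapped (e.g. a different docking engine) without touching the rest.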

  13. 40 CFR 27.21 - Discovery.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Discovery. 27.21 Section 27.21... Discovery. (a) The following types of discovery are authorized: (1) Requests for production of documents for..., discovery is available only as ordered by the presiding officer. The presiding officer shall regulate the...

  14. 37 CFR 41.150 - Discovery.

    Science.gov (United States)

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Discovery. 41.150 Section 41... COMMERCE PRACTICE BEFORE THE BOARD OF PATENT APPEALS AND INTERFERENCES Contested Cases § 41.150 Discovery. (a) Limited discovery. A party is not entitled to discovery except as authorized in this subpart. The...

  15. 14 CFR 13.220 - Discovery.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Discovery. 13.220 Section 13.220... INVESTIGATIVE AND ENFORCEMENT PROCEDURES Rules of Practice in FAA Civil Penalty Actions § 13.220 Discovery. (a) Initiation of discovery. Any party may initiate discovery described in this section, without the consent or...

  16. 49 CFR 604.38 - Discovery.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 7 2010-10-01 2010-10-01 false Discovery. 604.38 Section 604.38 Transportation... TRANSPORTATION CHARTER SERVICE Hearings. § 604.38 Discovery. (a) Permissible forms of discovery shall be within the discretion of the PO. (b) The PO shall limit the frequency and extent of discovery permitted by...

  17. 15 CFR 719.10 - Discovery.

    Science.gov (United States)

    2010-01-01

    ... 15 Commerce and Foreign Trade 2 2010-01-01 2010-01-01 false Discovery. 719.10 Section 719.10... Discovery. (a) General. The parties are encouraged to engage in voluntary discovery regarding any matter... the Federal Rules of Civil Procedure relating to discovery apply to the extent consistent with this...

  18. 24 CFR 26.18 - Discovery.

    Science.gov (United States)

    2010-04-01

    ... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false Discovery. 26.18 Section 26.18... PROCEDURES Hearings Before Hearing Officers Discovery § 26.18 Discovery. (a) General. The parties are encouraged to engage in voluntary discovery procedures, which may commence at any time after an answer has...

  19. 42 CFR 426.532 - Discovery.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Discovery. 426.532 Section 426.532 Public Health... § 426.532 Discovery. (a) General rule. If the Board orders discovery, the Board must establish a reasonable timeframe for discovery. (b) Protective order—(1) Request for a protective order. Any party...

  20. 49 CFR 1503.633 - Discovery.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 9 2010-10-01 2010-10-01 false Discovery. 1503.633 Section 1503.633... Rules of Practice in TSA Civil Penalty Actions § 1503.633 Discovery. (a) Initiation of discovery. Any party may initiate discovery described in this section, without the consent or approval of the ALJ, at...

  1. 14 CFR 1264.120 - Discovery.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Discovery. 1264.120 Section 1264.120... PENALTIES ACT OF 1986 § 1264.120 Discovery. (a) The following types of discovery are authorized: (1..., discovery is available only as ordered by the presiding officer. The presiding officer shall regulate the...

  2. 22 CFR 128.6 - Discovery.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Discovery. 128.6 Section 128.6 Foreign... Discovery. (a) Discovery by the respondent. The respondent, through the Administrative Law Judge, may... discovery if the interests of national security or foreign policy so require, or if necessary to comply with...

  3. 24 CFR 26.42 - Discovery.

    Science.gov (United States)

    2010-04-01

    ... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false Discovery. 26.42 Section 26.42... PROCEDURES Hearings Pursuant to the Administrative Procedure Act Discovery § 26.42 Discovery. (a) General. The parties are encouraged to engage in voluntary discovery procedures, which may commence at any time...

  4. 49 CFR 386.37 - Discovery.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 5 2010-10-01 2010-10-01 false Discovery. 386.37 Section 386.37 Transportation... and Hearings § 386.37 Discovery. (a) Parties may obtain discovery by one or more of the following...; and requests for admission. (b) Discovery may not commence until the matter is pending before the...

  5. 29 CFR 1955.32 - Discovery.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Discovery. 1955.32 Section 1955.32 Labor Regulations...) PROCEDURES FOR WITHDRAWAL OF APPROVAL OF STATE PLANS Preliminary Conference and Discovery § 1955.32 Discovery... allow discovery by any other appropriate procedure, such as by interrogatories upon a party or request...

  6. Improving ATLAS computing resource utilization with HammerCloud

    CERN Document Server

    Schovancova, Jaroslava; The ATLAS collaboration

    2018-01-01

    HammerCloud is a framework to commission, test, and benchmark ATLAS computing resources and components of various distributed systems with realistic full-chain experiment workflows. HammerCloud contributes to ATLAS Distributed Computing (ADC) operations and automation efforts by providing automated resource exclusion and recovery tools that help refocus operational manpower on areas which have yet to be automated, and improve utilization of available computing resources. We present the recent evolution of the auto-exclusion/recovery tools: faster inclusion of new resources in the testing machinery, machine-learning algorithms for anomaly detection, resources categorized as master vs. slave for the purpose of blacklisting, and a tool for auto-exclusion/recovery of resources triggered by Event Service job failures that is being extended to workflows beyond the Event Service. We describe how HammerCloud helped commission various concepts and components of distributed systems: simplified configuration of qu...
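The auto-exclusion/recovery behaviour the abstract describes can be pictured as a simple threshold policy over recent test-job results. The class below is a hedged, illustrative sketch, not HammerCloud's actual API: the class name, window size, failure-rate threshold, and recovery streak are all assumed parameters invented for this example.

```python
from collections import deque

class ResourceMonitor:
    """Illustrative auto-exclusion/recovery policy: a resource is
    blacklisted when its recent test-job failure rate exceeds a
    threshold, and re-included once enough consecutive test jobs
    succeed. Names and thresholds are assumptions, not HammerCloud's."""

    def __init__(self, window=20, exclude_rate=0.5, recover_streak=5):
        self.window = window                  # recent test jobs considered
        self.exclude_rate = exclude_rate      # failure fraction triggering exclusion
        self.recover_streak = recover_streak  # consecutive successes needed to recover
        self.results = {}                     # resource -> deque of bools (True = success)
        self.excluded = set()

    def record(self, resource, success):
        hist = self.results.setdefault(resource, deque(maxlen=self.window))
        hist.append(success)
        failure_rate = 1 - sum(hist) / len(hist)
        if resource not in self.excluded and failure_rate > self.exclude_rate:
            self.excluded.add(resource)       # too many recent failures: blacklist
        elif resource in self.excluded:
            recent = list(hist)[-self.recover_streak:]
            if len(recent) == self.recover_streak and all(recent):
                self.excluded.discard(resource)  # sustained recovery: re-include
```

In this sketch, production payloads would simply skip any resource for which `is_excluded`-style membership in `excluded` holds, while test jobs keep feeding `record` so a recovered site is re-included automatically.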

  7. Using the open Web as an information resource and scholarly Web search engines as retrieval tools for academic and research purposes

    Directory of Open Access Journals (Sweden)

    Filistea Naude

    2010-08-01

    Full Text Available This study provided insight into the significance of the open Web as an information resource and Web search engines as research tools amongst academics. The academic staff establishment of the University of South Africa (Unisa) was invited to participate in a questionnaire survey; it included 1188 staff members from five colleges. This study culminated in a PhD dissertation in 2008. One hundred and eighty-seven respondents participated in the survey, which gave a response rate of 15.7%. The results of this study show that academics have indeed accepted the open Web as a useful information resource and Web search engines as retrieval tools when seeking information for academic and research work. The majority of respondents used the open Web and Web search engines on a daily or weekly basis to source academic and research information. The main obstacles to using the open Web and Web search engines included lack of time to search and browse the Web, information overload, poor network speed and the slow downloading speed of webpages.

  8. Using the open Web as an information resource and scholarly Web search engines as retrieval tools for academic and research purposes

    Directory of Open Access Journals (Sweden)

    Filistea Naude

    2010-12-01

    Full Text Available This study provided insight into the significance of the open Web as an information resource and Web search engines as research tools amongst academics. The academic staff establishment of the University of South Africa (Unisa) was invited to participate in a questionnaire survey; it included 1188 staff members from five colleges. This study culminated in a PhD dissertation in 2008. One hundred and eighty-seven respondents participated in the survey, which gave a response rate of 15.7%. The results of this study show that academics have indeed accepted the open Web as a useful information resource and Web search engines as retrieval tools when seeking information for academic and research work. The majority of respondents used the open Web and Web search engines on a daily or weekly basis to source academic and research information. The main obstacles to using the open Web and Web search engines included lack of time to search and browse the Web, information overload, poor network speed and the slow downloading speed of webpages.

  9. De-novo discovery of differentially abundant transcription factor binding sites including their positional preference.

    Science.gov (United States)

    Keilwagen, Jens; Grau, Jan; Paponov, Ivan A; Posch, Stefan; Strickert, Marc; Grosse, Ivo

    2011-02-10

    Transcription factors are a main component of gene regulation, as they activate or repress gene expression by binding to specific binding sites in promoters. The de-novo discovery of transcription factor binding sites in target regions obtained by wet-lab experiments is a challenging problem in computational biology, which has not been fully solved yet. Here, we present a de-novo motif discovery tool called Dispom for finding differentially abundant transcription factor binding sites; it models existing positional preferences of binding sites and adjusts the length of the motif during learning. Evaluating Dispom, we find that its prediction performance is superior to existing tools for de-novo motif discovery on 18 benchmark data sets with planted binding sites, and on a metazoan compendium based on experimental data from microarray, ChIP-chip, ChIP-DSL, and DamID as well as Gene Ontology data. Finally, we apply Dispom to find binding sites differentially abundant in promoters of auxin-responsive genes extracted from Arabidopsis thaliana microarray data, and we find a motif that can be interpreted as a refined auxin-responsive element predominantly positioned in the 250-bp region upstream of the transcription start site. Using an independent data set of auxin-responsive genes, we find in genome-wide predictions that the refined motif is more specific for auxin-responsive genes than the canonical auxin-responsive element. In general, Dispom can be used to find differentially abundant motifs in sequences of any origin. However, the positional distribution learned by Dispom is especially beneficial if all sequences are aligned to some anchor point, such as the transcription start site in the case of promoter sequences. We demonstrate that the combination of searching for differentially abundant motifs and inferring a position distribution from the data is beneficial for de-novo motif discovery. Hence, we make the tool freely available as a component of the open
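The two ideas the abstract combines, differential abundance between foreground and background sequences and a positional preference relative to an anchor point, can be illustrated with a far simpler k-mer counter. The sketch below is not Dispom's algorithm: it merely ranks k-mers by their foreground-to-background abundance ratio and reports each top k-mer's mean position relative to the sequence end (the anchor, e.g. the transcription start site). The function name and parameters are hypothetical.

```python
from collections import Counter

def differential_kmers(fg_seqs, bg_seqs, k=6, top=3):
    """Rank k-mers by abundance in foreground vs. background promoters
    and report the mean position of each top k-mer relative to the
    sequence end. Illustrative only, not Dispom's learning procedure."""
    def counts(seqs):
        c = Counter()
        for s in seqs:
            for i in range(len(s) - k + 1):
                c[s[i:i + k]] += 1
        return c

    fg, bg = counts(fg_seqs), counts(bg_seqs)
    # pseudocount of 1 keeps unseen background k-mers from dividing by zero
    scored = sorted(fg, key=lambda m: fg[m] / (bg[m] + 1), reverse=True)[:top]
    report = []
    for m in scored:
        # positions are negative offsets from the sequence end (the anchor)
        pos = [i - len(s) for s in fg_seqs for i in range(len(s) - k + 1)
               if s[i:i + k] == m]
        report.append((m, fg[m] / (bg[m] + 1), sum(pos) / len(pos)))
    return report
```

A real tool like Dispom learns a probabilistic motif model and a positional distribution jointly; this sketch only shows why combining the two signals is informative: a k-mer that is both enriched in the foreground and concentrated near the anchor is a much stronger candidate than one satisfying either criterion alone.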

  10. 42 CFR 426.432 - Discovery.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Discovery. 426.432 Section 426.432 Public Health... § 426.432 Discovery. (a) General rule. If the ALJ orders discovery, the ALJ must establish a reasonable timeframe for discovery. (b) Protective order—(1) Request for a protective order. Any party receiving a...

  11. 10 CFR 13.21 - Discovery.

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Discovery. 13.21 Section 13.21 Energy NUCLEAR REGULATORY COMMISSION PROGRAM FRAUD CIVIL REMEDIES § 13.21 Discovery. (a) The following types of discovery are...) Unless mutually agreed to by the parties, discovery is available only as ordered by the ALJ. The ALJ...

  12. 49 CFR 1121.2 - Discovery.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 8 2010-10-01 2010-10-01 false Discovery. 1121.2 Section 1121.2 Transportation... TRANSPORTATION RULES OF PRACTICE RAIL EXEMPTION PROCEDURES § 1121.2 Discovery. Discovery shall follow the procedures set forth at 49 CFR part 1114, subpart B. Discovery may begin upon the filing of the petition for...

  13. 38 CFR 42.21 - Discovery.

    Science.gov (United States)

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 2 2010-07-01 2010-07-01 false Discovery. 42.21 Section... IMPLEMENTING THE PROGRAM FRAUD CIVIL REMEDIES ACT § 42.21 Discovery. (a) The following types of discovery are... creation of a document. (c) Unless mutually agreed to by the parties, discovery is available only as...

  14. 22 CFR 521.21 - Discovery.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 2 2010-04-01 2010-04-01 true Discovery. 521.21 Section 521.21 Foreign... Discovery. (a) The following types of discovery are authorized: (1) Requests for production of documents for... interpreted to require the creation of a document. (c) Unless mutually agreed to by the parties, discovery is...

  15. 31 CFR 10.71 - Discovery.

    Science.gov (United States)

    2010-07-01

    ... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false Discovery. 10.71 Section 10.71 Money... SERVICE Rules Applicable to Disciplinary Proceedings § 10.71 Discovery. (a) In general. Discovery may be... relevance, materiality and reasonableness of the requested discovery and subject to the requirements of § 10...

  16. 39 CFR 955.15 - Discovery.

    Science.gov (United States)

    2010-07-01

    ... 39 Postal Service 1 2010-07-01 2010-07-01 false Discovery. 955.15 Section 955.15 Postal Service... APPEALS § 955.15 Discovery. (a) The parties are encouraged to engage in voluntary discovery procedures. In connection with any deposition or other discovery procedure, the Board may issue any order which justice...

  17. 43 CFR 35.21 - Discovery.

    Science.gov (United States)

    2010-10-01

    ... 43 Public Lands: Interior 1 2010-10-01 2010-10-01 false Discovery. 35.21 Section 35.21 Public... AND STATEMENTS § 35.21 Discovery. (a) The following types of discovery are authorized: (1) Requests...) Unless mutually agreed to by the parties, discovery is available only as ordered by the ALJ. The ALJ...

  18. 15 CFR 766.9 - Discovery.

    Science.gov (United States)

    2010-01-01

    ... 15 Commerce and Foreign Trade 2 2010-01-01 2010-01-01 false Discovery. 766.9 Section 766.9... PROCEEDINGS § 766.9 Discovery. (a) General. The parties are encouraged to engage in voluntary discovery... provisions of the Federal Rules of Civil Procedure relating to discovery apply to the extent consistent with...

  19. Renewable material resource potential

    NARCIS (Netherlands)

    van Weenen, H.; Wever, R.; Quist, J.; Tukker, A.; Woudstra, J.; Boons, F.A.A.; Beute, N.

    2010-01-01

    Renewable material resources consist of complex systems and parts. Their sub-systems and sub-sub-systems have unique, specific, general and common properties. The character of the use that is made of these resources depends on the availability of knowledge, experience, methods, tools, machines

  20. INIS-based Japanese literature materials of bibliographic tools for human resource development

    International Nuclear Information System (INIS)

    Kunii, Katsuhiko; Gonda, Mayuki; Ikeda, Kiyoshi; Nagaya, Shun; Itabashi, Keizo; Nakajima, Hidemitsu; Mineo, Yukinobu

    2011-01-01

    The Library of the Japan Atomic Energy Agency (JAEA) has developed two Japanese-language bibliographic tools based on the International Nuclear Information System (INIS) of the IAEA, which contains over 3.3 million records from 127 countries and 24 international organizations. These materials were compiled by appropriately matching Japanese nuclear-field terminology with the corresponding English terminology, and vice versa. One is the 'Transliterated Japanese journal title list' and the other is the 'INIS Thesaurus in Japanese'. The former serves as a reference that enables users to access articles in Japanese journals that better match their needs; the latter serves as a dictionary that bridges the terminology gap in the nuclear field between over 30,000 English terms and their semantically corresponding Japanese terms. Applying these materials to the INIS full-text collection, an archive of over 280,000 technical reports, proceedings, etc., is helpful for enhancing human resource development. The authors describe the effectiveness of these INIS-based materials with bibliographic references to the Fukushima Daiichi NPS accident. (author)
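The bidirectional English–Japanese mapping that a thesaurus like the 'INIS Thesaurus in Japanese' provides can be pictured as a pair of dictionaries built from term pairs. The sketch below is illustrative only: the term pairs are invented examples for this sketch, not entries copied from the actual thesaurus, and the function name is hypothetical.

```python
# Assumed, illustrative descriptor pairs (not actual thesaurus entries).
PAIRS = [
    ("REACTOR SAFETY", "原子炉安全"),
    ("RADIATION MONITORING", "放射線モニタリング"),
    ("SPENT FUELS", "使用済燃料"),
]

# Build both lookup directions once from the same pair list,
# so the two mappings cannot drift out of sync.
EN_TO_JA = {en: ja for en, ja in PAIRS}
JA_TO_EN = {ja: en for en, ja in PAIRS}

def translate(term):
    """Look up a descriptor in either direction; None if unmapped."""
    return EN_TO_JA.get(term) or JA_TO_EN.get(term)
```

A user searching INIS in Japanese could pass each query term through such a mapping to retrieve records indexed under the English descriptor, which is the gap-bridging role the abstract describes.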