WorldWideScience

Sample records for engineering communities database

  1. Effective Engineering Outreach through an Undergraduate Mentoring Team and Module Database

    Science.gov (United States)

    Young, Colin; Butterfield, Anthony E.

    2014-01-01

    The rising need for engineers has led to increased interest in community outreach in engineering departments nationwide. We present a sustainable outreach model involving trained undergraduate mentors to build ties with K-12 teachers and students. An associated online module database of chemical engineering demonstrations, available to educators…

  2. Security Research on Engineering Database System

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

The engine engineering database system is a CAD-oriented database management system with the capability of managing distributed data. The paper discusses the security of the engine engineering database management system (EDBMS). By studying and analyzing database security, a series of security rules is derived that reaches the B1-level security standard, which includes discretionary access control (DAC), mandatory access control (MAC) and audit. The EDBMS implements functions of DAC, ...

  3. Database reliability engineering designing and operating resilient database systems

    CERN Document Server

    Campbell, Laine

    2018-01-01

    The infrastructure-as-code revolution in IT is also affecting database administration. With this practical book, developers, system administrators, and junior to mid-level DBAs will learn how the modern practice of site reliability engineering applies to the craft of database architecture and operations. Authors Laine Campbell and Charity Majors provide a framework for professionals looking to join the ranks of today’s database reliability engineers (DBRE). You’ll begin by exploring core operational concepts that DBREs need to master. Then you’ll examine a wide range of database persistence options, including how to implement key technologies to provide resilient, scalable, and performant data storage and retrieval. With a firm foundation in database reliability engineering, you’ll be ready to dive into the architecture and operations of any modern database. This book covers: Service-level requirements and risk management Building and evolving an architecture for operational visibility ...

  4. BioCarian: search engine for exploratory searches in heterogeneous biological databases.

    Science.gov (United States)

    Zaki, Nazar; Tennakoon, Chandana

    2017-10-02

There are a large number of biological databases publicly available to scientists on the web. Also, there are many private databases generated in the course of research projects. These databases are in a wide variety of formats. Web standards have evolved in recent times and semantic web technologies are now available to interconnect diverse and heterogeneous sources of data. Therefore, integration and querying of biological databases can be facilitated by techniques used in the semantic web. Heterogeneous databases can be converted into Resource Description Framework (RDF) and queried using the SPARQL language. Searching for exact queries in these databases is trivial. However, exploratory searches need customized solutions, especially when multiple databases are involved. This process is cumbersome and time consuming for those without a sufficient background in computer science. In this context, a search engine facilitating exploratory searches of databases would be of great help to the scientific community. We present BioCarian, an efficient and user-friendly search engine for performing exploratory searches on biological databases. The search engine is an interface for SPARQL queries over RDF databases. We note that many of the databases can be converted to tabular form. We first convert the tabular databases to RDF. The search engine provides a graphical interface based on facets to explore the converted databases. The facet interface is more advanced than conventional facets. It allows complex queries to be constructed and has additional features such as ranking of facet values based on several criteria, visually indicating the relevance of a facet value and presenting the most important facet values when a large number of choices are available. For advanced users, SPARQL queries can be run directly on the databases. Using this feature, users will be able to incorporate federated searches of SPARQL endpoints. We used the search engine to do an exploratory search
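
    A minimal sketch of the general workflow this abstract describes (tabular data converted to RDF and queried with SPARQL), written with the rdflib Python library; the namespace, predicate names and example rows are invented for illustration and are not BioCarian's actual schema.

```python
# Minimal sketch: convert a tiny "tabular database" to RDF and query it with SPARQL.
# The namespace, predicates and rows are hypothetical examples, not BioCarian's schema.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/bio/")

rows = [  # gene id, name, organism
    ("g1", "BRCA1", "Homo sapiens"),
    ("g2", "TP53", "Homo sapiens"),
]

g = Graph()
for gene_id, name, organism in rows:
    subject = EX[gene_id]
    g.add((subject, EX.name, Literal(name)))
    g.add((subject, EX.organism, Literal(organism)))

# SPARQL query: all genes annotated for Homo sapiens
query = """
PREFIX ex: <http://example.org/bio/>
SELECT ?gene ?name WHERE {
    ?gene ex:name ?name ;
          ex:organism "Homo sapiens" .
}
"""
for row in g.query(query):
    print(row.gene, row.name)
```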

  5. Database Search Engines: Paradigms, Challenges and Solutions.

    Science.gov (United States)

    Verheggen, Kenneth; Martens, Lennart; Berven, Frode S; Barsnes, Harald; Vaudel, Marc

    2016-01-01

The first step in identifying proteins from mass spectrometry-based shotgun proteomics data is to infer peptides from tandem mass spectra, a task generally achieved using database search engines. In this chapter, the basic principles of database search engines are introduced with a focus on open source software, and the use of database search engines is demonstrated using the freely available SearchGUI interface. This chapter also discusses how to tackle general issues related to sequence database searching and shows how to minimize their impact.
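
    The following toy sketch illustrates only the candidate-lookup step of the database-search idea mentioned above: digest protein sequences into tryptic peptides and match an observed precursor mass within a tolerance. The tiny mass table and protein database are invented, and there is no fragment-ion scoring; this is not how SearchGUI or any production search engine works.

```python
# Toy illustration of database searching: digest sequences into tryptic peptides,
# compute monoisotopic masses, and match an observed precursor mass within a tolerance.
import re

MONO = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
        "V": 99.06841, "T": 101.04768, "L": 113.08406, "K": 128.09496,
        "R": 156.10111, "E": 129.04259, "D": 115.02694}
WATER = 18.01056

def tryptic_peptides(sequence):
    # cleave after K or R (ignoring the proline rule for brevity)
    return [p for p in re.split(r"(?<=[KR])", sequence) if p]

def peptide_mass(peptide):
    return sum(MONO[aa] for aa in peptide) + WATER

database = {"protA": "GASPVKTLEDR", "protB": "VVKLEDTKGASR"}  # made-up sequences

def search(precursor_mass, tolerance=0.02):
    hits = []
    for protein, seq in database.items():
        for pep in tryptic_peptides(seq):
            if abs(peptide_mass(pep) - precursor_mass) <= tolerance:
                hits.append((protein, pep))
    return hits

print(search(peptide_mass("TLEDR")))  # finds TLEDR in protA
```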

  6. Development of an Engineering Soil Database

    Science.gov (United States)

    2017-12-27

ERDC TR-17-15, Rapid Airfield Damage Recovery (RADR) Program: Development of an Engineering Soil Database. ... distribution is unlimited. The US Army Engineer Research and Development Center (ERDC) solves the nation's toughest engineering and environmental challenges. ERDC develops innovative solutions in civil and military engineering, geospatial sciences, water resources, and environmental sciences

  7. DataBase on Demand

    International Nuclear Information System (INIS)

    Aparicio, R Gaspar; Gomez, D; Wojcik, D; Coz, I Coterillo

    2012-01-01

At CERN a number of key database applications are running on user-managed MySQL database services. The Database on Demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the actual centralised Oracle-based database services. The Database on Demand (DBoD) empowers the user to perform certain actions that had traditionally been done by database administrators (DBAs), providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines, presently the open community version of MySQL and a single-instance Oracle database server. This article describes a technology approach to face this challenge, the service level agreement (SLA) that the project provides, and possible evolution scenarios.

  8. Ei Compendex: A new database makes life easier for engineers

    CERN Multimedia

    2001-01-01

    The Library is expanding its range of databases. The latest arrival, called Ei Compendex, is the world's most comprehensive engineering database, which indexes engineering literature published throughout the world. It also offers bibliographic entries for articles published in scientific journals and for conference proceedings and covers an extensive range of subjects from mechanical engineering to the environment, materials science, solid state physics and superconductivity. Moreover, it is the most relevant quality control and engineering management database. Ei Compendex contains over 4.6 million references from over 2600 journals, conference proceedings and technical reports dating from 1966 to the present. Every year, 220,000 new abstracts are added to the database which is also updated on a weekly basis. In the case of articles published in recent years, it provides an electronic link to the full texts of all major publishers. The database also contains the full texts of Elsevier periodicals (over 250...

  9. Method and electronic database search engine for exposing the content of an electronic database

    NARCIS (Netherlands)

    Stappers, P.J.

    2000-01-01

    The invention relates to an electronic database search engine comprising an electronic memory device suitable for storing and releasing elements from the database, a display unit, a user interface for selecting and displaying at least one element from the database on the display unit, and control

  10. Engineering method to build the composite structure ply database

    Directory of Open Access Journals (Sweden)

    Qinghua Shi

Full Text Available In this paper, a new method to build a composite ply database with engineering design constraints is proposed. This method has two levels: the core stacking sequence design and the whole stacking sequence design. The core stacking sequences are obtained by a full permutation algorithm considering the ply ratio requirement and a dispersion measure that characterizes the spread of ply angles. The whole stacking sequences are combinations of the core stacking sequences. By excluding the ply sequences that do not meet the engineering requirements, the final ply database is obtained. One example with the constraints that the total layer number is 100 and the ply ratio is 30:60:10 is presented to validate the method. This method provides a new way to set up the ply database based on the engineering requirements without adopting intelligent optimization algorithms. Keywords: Composite ply database, VBA program, Structure design, Stacking sequence
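
    As a rough illustration of the core idea (full permutation under a ply ratio constraint, filtered by engineering rules), the sketch below enumerates stacking sequences for 10 plies at a 30:60:10 ratio. The single acceptance rule used here is an invented stand-in for the paper's constraints, and the scale is far smaller than the 100-layer example.

```python
# Simplified sketch of building a ply stacking-sequence database by full permutation
# under a ply-ratio constraint: 10 plies at 30:60:10 (3 x 0deg, 6 x 45deg, 1 x 90deg).
# The single rule (no three identical adjacent plies) is an illustrative placeholder
# for the full set of engineering constraints described in the paper.
from itertools import permutations

plies = [0] * 3 + [45] * 6 + [90] * 1   # 30:60:10 ratio over 10 plies

def acceptable(seq):
    # reject three or more identical consecutive ply angles
    return all(not (seq[i] == seq[i + 1] == seq[i + 2])
               for i in range(len(seq) - 2))

# note: iterating all 10! permutations of the multiset takes a few seconds
database = sorted({seq for seq in permutations(plies) if acceptable(seq)})
print(len(database), "candidate stacking sequences")
print(database[0])
```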

  11. Database resources for the tuberculosis community.

    Science.gov (United States)

    Lew, Jocelyne M; Mao, Chunhong; Shukla, Maulik; Warren, Andrew; Will, Rebecca; Kuznetsov, Dmitry; Xenarios, Ioannis; Robertson, Brian D; Gordon, Stephen V; Schnappinger, Dirk; Cole, Stewart T; Sobral, Bruno

    2013-01-01

    Access to online repositories for genomic and associated "-omics" datasets is now an essential part of everyday research activity. It is important therefore that the Tuberculosis community is aware of the databases and tools available to them online, as well as for the database hosts to know what the needs of the research community are. One of the goals of the Tuberculosis Annotation Jamboree, held in Washington DC on March 7th-8th 2012, was therefore to provide an overview of the current status of three key Tuberculosis resources, TubercuList (tuberculist.epfl.ch), TB Database (www.tbdb.org), and Pathosystems Resource Integration Center (PATRIC, www.patricbrc.org). Here we summarize some key updates and upcoming features in TubercuList, and provide an overview of the PATRIC site and its online tools for pathogen RNA-Seq analysis. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. An approach in building a chemical compound search engine in oracle database.

    Science.gov (United States)

    Wang, H; Volarath, P; Harrison, R

    2005-01-01

Searching for and identifying chemical compounds is an important process in drug design and in chemistry research. An efficient search engine involves a close coupling of the search algorithm and the database implementation. The database must process chemical structures, which demands approaches to represent, store, and retrieve structures in a database system. In this paper, a general database framework for working as a chemical compound search engine in an Oracle database is described. The framework is devoted to eliminating data type constraints for potential search algorithms, which is a crucial step toward building a domain-specific query language on top of SQL. A search engine implementation based on the database framework is also demonstrated. The convenience of the implementation emphasizes the efficiency and simplicity of the framework.
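
    A generic sketch of coupling a search pre-filter with the database layer, using Python's sqlite3 in place of Oracle; the table schema and the crude element-count "fingerprint" are invented for illustration and do not reflect the paper's actual framework.

```python
# Generic sketch: a compound table with a crude element-count "fingerprint" used
# to pre-filter candidates inside SQL. Schema and fingerprint scheme are invented;
# the paper's framework targets Oracle and real structure representations.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE compound (id INTEGER PRIMARY KEY, name TEXT, "
             "formula TEXT, n_carbon INTEGER, n_oxygen INTEGER)")
compounds = [
    (1, "ethanol", "C2H6O", 2, 1),
    (2, "acetic acid", "C2H4O2", 2, 2),
    (3, "benzene", "C6H6", 6, 0),
]
conn.executemany("INSERT INTO compound VALUES (?, ?, ?, ?, ?)", compounds)

# Pre-filter: candidates with at least as many C and O atoms as the query structure
query_c, query_o = 2, 1
rows = conn.execute(
    "SELECT name, formula FROM compound WHERE n_carbon >= ? AND n_oxygen >= ?",
    (query_c, query_o),
).fetchall()
print(rows)  # a full search engine would now run an exact structure match on these
```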

  13. State analysis requirements database for engineering complex embedded systems

    Science.gov (United States)

    Bennett, Matthew B.; Rasmussen, Robert D.; Ingham, Michel D.

    2004-01-01

    It has become clear that spacecraft system complexity is reaching a threshold where customary methods of control are no longer affordable or sufficiently reliable. At the heart of this problem are the conventional approaches to systems and software engineering based on subsystem-level functional decomposition, which fail to scale in the tangled web of interactions typically encountered in complex spacecraft designs. Furthermore, there is a fundamental gap between the requirements on software specified by systems engineers and the implementation of these requirements by software engineers. Software engineers must perform the translation of requirements into software code, hoping to accurately capture the systems engineer's understanding of the system behavior, which is not always explicitly specified. This gap opens up the possibility for misinterpretation of the systems engineer's intent, potentially leading to software errors. This problem is addressed by a systems engineering tool called the State Analysis Database, which provides a tool for capturing system and software requirements in the form of explicit models. This paper describes how requirements for complex aerospace systems can be developed using the State Analysis Database.

  14. The LAILAPS Search Engine: Relevance Ranking in Life Science Databases

    Directory of Open Access Journals (Sweden)

    Lange Matthias

    2010-06-01

Full Text Available Search engines and retrieval systems are popular tools at a life science desktop. The manual inspection of hundreds of database entries that reflect a life science concept or fact is time-intensive daily work. Here, it is not the number of query results that matters, but their relevance. In this paper, we present the LAILAPS search engine for life science databases. The concept is to combine a novel feature model for relevance ranking, a machine learning approach to model user relevance profiles, ranking improvement by user feedback tracking, and an intuitive, slim web user interface that estimates relevance rank by tracking user interactions. Queries are formulated as simple keyword lists and are expanded by synonyms. Supporting a flexible text index and a simple data import format, LAILAPS can easily be used both as a search engine for comprehensive integrated life science databases and for small in-house project databases.

  15. Software Engineering Laboratory (SEL) database organization and user's guide

    Science.gov (United States)

    So, Maria; Heller, Gerard; Steinberg, Sandra; Spiegel, Douglas

    1989-01-01

    The organization of the Software Engineering Laboratory (SEL) database is presented. Included are definitions and detailed descriptions of the database tables and views, the SEL data, and system support data. The mapping from the SEL and system support data to the base tables is described. In addition, techniques for accessing the database, through the Database Access Manager for the SEL (DAMSEL) system and via the ORACLE structured query language (SQL), are discussed.

  16. Quantifying complexity in metabolic engineering using the LASER database

    Directory of Open Access Journals (Sweden)

    James D. Winkler

    2016-12-01

Full Text Available We previously introduced the LASER database (Learning Assisted Strain EngineeRing, https://bitbucket.org/jdwinkler/laser_release) (Winkler et al. 2015) to serve as a platform for understanding past and present metabolic engineering practices. Over the past year, LASER has been expanded by 50% to include over 600 engineered strains from 450 papers, including their growth conditions, genetic modifications, and other information in an easily searchable format. Here, we present the results of our efforts to use LASER as a means for defining the complexity of a metabolic engineering “design”. We evaluate two complexity metrics based on the concepts of construction difficulty and novelty. No correlation is observed between expected product yield and complexity, allowing minimization of complexity without a performance trade-off. We envision the use of such complexity metrics to filter and prioritize designs prior to implementation of metabolic engineering efforts, thereby potentially reducing the time, labor, and expenses of large-scale projects. Possible future developments based on an expanding LASER database are then discussed. Keywords: Metabolic engineering, Synthetic biology, Standardization, Design tools
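
    A small illustrative sketch of scoring designs by a complexity metric built from construction difficulty and novelty, as the abstract describes conceptually; the formula, weights and example designs below are invented and are not the paper's actual metrics.

```python
# Illustrative sketch of ranking metabolic-engineering designs by "complexity".
# The scoring formula, weights and example designs are invented stand-ins for the
# construction-difficulty and novelty metrics evaluated in the paper.
designs = [
    {"name": "strain_A", "modifications": ["del_ldhA", "oe_pdc", "oe_adhB"],
     "novel_parts": 1},
    {"name": "strain_B", "modifications": ["del_ldhA"], "novel_parts": 0},
]

def complexity(design, novelty_weight=2.0):
    construction = len(design["modifications"])       # construction-difficulty proxy
    novelty = design["novel_parts"] * novelty_weight  # novelty proxy
    return construction + novelty

# lowest-complexity designs first, e.g. to prioritize them for implementation
for d in sorted(designs, key=complexity):
    print(d["name"], complexity(d))
```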

  17. Presentation of Knovel - technical information portal for the engineering community | 15 February

    CERN Multimedia

    CERN Library

    2013-01-01

The Library invites you to a presentation of Knovel, given by Gary Kearns, Knovel Managing Director - EMEA.   Friday 15 February 2013 from 11:00 to 12:30 room 30-7-018 (Kjell Johnsen Auditorium) Knovel is a web-based discovery platform meeting the information needs of the engineering community. It combines the functionalities of an e-book platform and of a search engine querying a plurality of online databases. These functionalities are complemented by analytical tools that permit the extraction and manipulation of data from e-book content. The agenda of the presentation is available here.

  18. The CAPEC Database

    DEFF Research Database (Denmark)

    Nielsen, Thomas Lund; Abildskov, Jens; Harper, Peter Mathias

    2001-01-01

The Computer-Aided Process Engineering Center (CAPEC) database of measured data was established with the aim to promote greater data exchange in the chemical engineering community. The target properties are pure component properties, mixture properties, and special drug solubility data. The database divides pure component properties into primary, secondary, and functional properties. Mixture properties are categorized in terms of the number of components in the mixture and the number of phases present. The compounds in the database have been classified on the basis of the functional groups in the compound. This classification makes the CAPEC database a very useful tool, for example, in the development of new property models, since properties of chemically similar compounds are easily obtained. A program with efficient search and retrieval functions of properties has been developed.

  19. Engaging Community College Students Using an Engineering Learning Community

    Science.gov (United States)

    Maccariella, James, Jr.

    The study investigated whether community college engineering student success was tied to a learning community. Three separate data collection sources were utilized: surveys, interviews, and existing student records. Mann-Whitney tests were used to assess survey data, independent t-tests were used to examine pre-test data, and independent t-tests, analyses of covariance (ANCOVA), chi-square tests, and logistic regression were used to examine post-test data. The study found students that participated in the Engineering TLC program experienced a significant improvement in grade point values for one of the three post-test courses studied. In addition, the analysis revealed the odds of fall-to-spring retention were 5.02 times higher for students that participated in the Engineering TLC program, and the odds of graduating or transferring were 4.9 times higher for students that participated in the Engineering TLC program. However, when confounding variables were considered in the study (engineering major, age, Pell Grant participation, gender, ethnicity, and full-time/part-time status), the analyses revealed no significant relationship between participation in the Engineering TLC program and course success, fall-to-spring retention, and graduation/transfer. Thus, the confounding variables provided alternative explanations for results. The Engineering TLC program was also found to be effective in providing mentoring opportunities, engagement and motivation opportunities, improved self confidence, and a sense of community. It is believed the Engineering TLC program can serve as a model for other community college engineering programs, by striving to build a supportive environment, and provide guidance and encouragement throughout an engineering student's program of study.

  20. PubData: search engine for bioinformatics databases worldwide

    OpenAIRE

    Vand, Kasra; Wahlestedt, Thor; Khomtchouk, Kelly; Sayed, Mohammed; Wahlestedt, Claes; Khomtchouk, Bohdan

    2016-01-01

    We propose a search engine and file retrieval system for all bioinformatics databases worldwide. PubData searches biomedical data in a user-friendly fashion similar to how PubMed searches biomedical literature. PubData is built on novel network programming, natural language processing, and artificial intelligence algorithms that can patch into the file transfer protocol servers of any user-specified bioinformatics database, query its contents, retrieve files for download, and adapt to the use...

  1. Towards a sustainable community database: taking advantage of ...

    African Journals Online (AJOL)

    This study aimed to assess whether the magnitude of possession and retention of RTH cards is adequate to serve as a community database for monitoring and evaluating health interventions targeting under fives. ... database. This paper discusses some of the strategies to increase retention of the cards by caretakers.

  2. Database on Demand: insight how to build your own DBaaS

    CERN Document Server

    Aparicio, Ruben Gaspar

    2015-01-01

At CERN, a number of key database applications are running on user-managed MySQL, PostgreSQL and Oracle database services. The Database on Demand (DBoD) project was born out of an idea to provide the CERN user community with an environment to develop and run database services as a complement to the central Oracle-based database service. The Database on Demand empowers the user to perform certain actions that had traditionally been done by database administrators, providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines; presently three major RDBMS (relational database management system) vendors are offered. In this article we show the actual status of the service after almost three years of operations, some insight into our software redesign, and the near-future evolution of the service.

  3. Database on Demand: insight how to build your own DBaaS

    Science.gov (United States)

    Gaspar Aparicio, Ruben; Coterillo Coz, Ignacio

    2015-12-01

At CERN, a number of key database applications are running on user-managed MySQL, PostgreSQL and Oracle database services. The Database on Demand (DBoD) project was born out of an idea to provide the CERN user community with an environment to develop and run database services as a complement to the central Oracle-based database service. The Database on Demand empowers the user to perform certain actions that had traditionally been done by database administrators, providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines; presently three major RDBMS (relational database management system) vendors are offered. In this article we show the actual status of the service after almost three years of operations, some insight into our software redesign, and the near-future evolution of the service.

  4. Developing of database on nuclear power engineering and purchase of ORACLE system

    International Nuclear Information System (INIS)

    Liu Renkang

    1996-01-01

This paper presents a point of view on the development of a database for nuclear power engineering and on the performance of the ORACLE database management system. The ORACLE system is a practical database system for purchase.

  5. DCODE: A Distributed Column-Oriented Database Engine for Big Data Analytics

    OpenAIRE

    Liu, Yanchen; Cao, Fang; Mortazavi, Masood; Chen, Mengmeng; Yan, Ning; Ku, Chi; Adnaik, Aniket; Morgan, Stephen; Shi, Guangyu; Wang, Yuhu; Fang, Fan

    2015-01-01

We propose a novel Distributed Column-Oriented Database Engine (DCODE) for efficient analytic query processing that combines advantages of both column storage and parallel processing. In DCODE, we enhance an existing open-source columnar database engine by adding the capability for handling queries over a cluster. Specifically, we studied parallel query execution and optimization techniques such as horizontal partitioning, exchange op...
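
    The sketch below illustrates horizontal partitioning with partial aggregation, one of the techniques named in the abstract, using plain Python lists as stand-ins for cluster nodes; it is a conceptual toy, not DCODE's implementation.

```python
# Toy sketch of horizontal partitioning with partial aggregation. Plain Python
# lists stand in for cluster nodes; a real engine would exchange partial results
# over the network via exchange operators.
rows = [("us", 10), ("de", 7), ("us", 3), ("fr", 5), ("de", 2)]
N_NODES = 2

# Partition rows across nodes by hashing the grouping key
partitions = [[] for _ in range(N_NODES)]
for key, value in rows:
    partitions[hash(key) % N_NODES].append((key, value))

# Each node computes a partial SUM(value) GROUP BY key
def partial_sum(partition):
    acc = {}
    for key, value in partition:
        acc[key] = acc.get(key, 0) + value
    return acc

# A coordinator merges the partial aggregates into the final result
result = {}
for partial in map(partial_sum, partitions):
    for key, value in partial.items():
        result[key] = result.get(key, 0) + value
print(result)  # {'us': 13, 'de': 9, 'fr': 5}
```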

  6. A Taxonomic Search Engine: federating taxonomic databases using web services.

    Science.gov (United States)

    Page, Roderic D M

    2005-03-09

    The taxonomic name of an organism is a key link between different databases that store information on that organism. However, in the absence of a single, comprehensive database of organism names, individual databases lack an easy means of checking the correctness of a name. Furthermore, the same organism may have more than one name, and the same name may apply to more than one organism. The Taxonomic Search Engine (TSE) is a web application written in PHP that queries multiple taxonomic databases (ITIS, Index Fungorum, IPNI, NCBI, and uBIO) and summarises the results in a consistent format. It supports "drill-down" queries to retrieve a specific record. The TSE can optionally suggest alternative spellings the user can try. It also acts as a Life Science Identifier (LSID) authority for the source taxonomic databases, providing globally unique identifiers (and associated metadata) for each name. The Taxonomic Search Engine is available at http://darwin.zoology.gla.ac.uk/~rpage/portal/ and provides a simple demonstration of the potential of the federated approach to providing access to taxonomic names.
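
    The federation pattern described here (query several services, normalise the answers, merge them) can be sketched as below; the endpoint URLs, JSON response shape and field names are hypothetical placeholders, not the actual ITIS/Index Fungorum/IPNI/NCBI/uBio interfaces used by TSE (which is written in PHP).

```python
# Sketch of federated name lookup: query several taxonomic services in parallel
# and merge the answers into one consistent record format. Endpoints, response
# parsing and field names are hypothetical placeholders.
import concurrent.futures
import requests

ENDPOINTS = {
    "source_a": "https://example.org/taxa?name={name}",
    "source_b": "https://example.net/lookup?q={name}",
}

def query_source(source, url_template, name):
    try:
        resp = requests.get(url_template.format(name=name), timeout=10)
        resp.raise_for_status()
        payload = resp.json()  # assumed JSON; real services differ
    except (requests.RequestException, ValueError):
        return []
    # normalise to a common record format
    return [{"source": source, "name": rec.get("name"), "id": rec.get("id")}
            for rec in payload.get("results", [])]

def federated_search(name):
    records = []
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(query_source, s, t, name)
                   for s, t in ENDPOINTS.items()]
        for fut in concurrent.futures.as_completed(futures):
            records.extend(fut.result())
    return records

print(federated_search("Homo sapiens"))
```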

  7. A Taxonomic Search Engine: Federating taxonomic databases using web services

    Directory of Open Access Journals (Sweden)

    Page Roderic DM

    2005-03-01

Full Text Available Abstract Background The taxonomic name of an organism is a key link between different databases that store information on that organism. However, in the absence of a single, comprehensive database of organism names, individual databases lack an easy means of checking the correctness of a name. Furthermore, the same organism may have more than one name, and the same name may apply to more than one organism. Results The Taxonomic Search Engine (TSE) is a web application written in PHP that queries multiple taxonomic databases (ITIS, Index Fungorum, IPNI, NCBI, and uBIO) and summarises the results in a consistent format. It supports "drill-down" queries to retrieve a specific record. The TSE can optionally suggest alternative spellings the user can try. It also acts as a Life Science Identifier (LSID) authority for the source taxonomic databases, providing globally unique identifiers (and associated metadata) for each name. Conclusion The Taxonomic Search Engine is available at http://darwin.zoology.gla.ac.uk/~rpage/portal/ and provides a simple demonstration of the potential of the federated approach to providing access to taxonomic names.

  8. Software Engineering Laboratory (SEL) database organization and user's guide, revision 2

    Science.gov (United States)

    Morusiewicz, Linda; Bristow, John

    1992-01-01

    The organization of the Software Engineering Laboratory (SEL) database is presented. Included are definitions and detailed descriptions of the database tables and views, the SEL data, and system support data. The mapping from the SEL and system support data to the base table is described. In addition, techniques for accessing the database through the Database Access Manager for the SEL (DAMSEL) system and via the ORACLE structured query language (SQL) are discussed.

  9. Analysis and Design of Web-Based Database Application for Culinary Community

    Directory of Open Access Journals (Sweden)

    Choirul Huda

    2017-03-01

Full Text Available This research is based on the rapid development of the culinary field and of information technology. The difficulties in communicating with culinary experts and in documenting recipes make proper media support very important. Therefore, a web-based database application for the public is important to help the culinary community in communication, searching and recipe management. The aim of the research was to design a web-based database application that could be used as social media for the culinary community. This research used literature review, user interviews, and questionnaires. Moreover, the database system development life cycle was used as a guide for designing the database, especially for conceptual database design, logical database design, and physical database design. The web-based application design used the eight golden rules for user interface design. The result of this research is the availability of a web-based database application that can fulfill the needs of users in the culinary field related to communication and recipe management.

  10. Preparing engineers for the challenges of community engagement

    Science.gov (United States)

    Harsh, Matthew; Bernstein, Michael J.; Wetmore, Jameson; Cozzens, Susan; Woodson, Thomas; Castillo, Rafael

    2017-11-01

    Despite calls to address global challenges through community engagement, engineers are not formally prepared to engage with communities. Little research has been done on means to address this 'engagement gap' in engineering education. We examine the efficacy of an intensive, two-day Community Engagement Workshop for engineers, designed to help engineers better look beyond technology, listen to and learn from people, and empower communities. We assessed the efficacy of the workshop in a non-experimental pre-post design using a questionnaire and a concept map. Questionnaire results indicate participants came away better able to ask questions more broadly inclusive of non-technological dimensions of engineering projects. Concept map results indicate participants have a greater understanding of ways social factors shape complex material systems after completing the programme. Based on the workshop's strengths and weaknesses, we discuss the potential of expanding and supplementing the programme to help engineers account for social aspects central to engineered systems.

  11. Designing the database for a reliability aware Model-Based System Engineering process

    International Nuclear Information System (INIS)

    Cressent, Robin; David, Pierre; Idasiak, Vincent; Kratz, Frederic

    2013-01-01

This article outlines the need for a reliability database to implement model-based description of components failure modes and dysfunctional behaviors. We detail the requirements such a database should honor and describe our own solution: the Dysfunctional Behavior Database (DBD). Through the description of its meta-model, the benefits of integrating the DBD in the system design process are highlighted. The main advantages depicted are the possibility to manage feedback knowledge at various granularity and semantic levels and to ease drastically the interactions between system engineering activities and reliability studies. The compliance of the DBD with other reliability databases such as FIDES is presented and illustrated. - Highlights: ► Model-Based System Engineering is more and more used in the industry. ► It results in a need for a reliability database able to deal with model-based description of dysfunctional behavior. ► The Dysfunctional Behavior Database aims to fulfill that need. ► It helps dealing with feedback management thanks to its structured meta-model. ► The DBD can profit from other reliability databases such as FIDES.

  12. News from the Library: Knovel, a technical information portal for the engineering community

    CERN Multimedia

    CERN Library

    2013-01-01

    Knovel is a web-based discovery platform meeting the information needs of the engineering community.   Knovel combines the functionalities of an e-book platform and of a search engine querying a plurality of online databases. These functionalities are complemented by analytical tools that permit the extraction and manipulation of data from e-book content. Knovel provides subscribers with access to more than 4,000 leading reference works and databases from more than 100 international publishers and professional societies (AIAA, AIChE, ASME and NACE, among others) through a single interface. Knovelʼs comprehensive collection of content, covering 31 subject areas, is continually updated as new titles become available to reflect the evolving needs of users. Knovelʼs tools - including its interactive tables and graphs - not only help users to find information hidden in complex graphs, equations and tables quickly, but also to analyse and manipulate data as easily as sorting a spread sheet. Us...

  13. Engineering chemical interactions in microbial communities.

    Science.gov (United States)

    Kenny, Douglas J; Balskus, Emily P

    2018-03-05

    Microbes living within host-associated microbial communities (microbiotas) rely on chemical communication to interact with surrounding organisms. These interactions serve many purposes, from supplying the multicellular host with nutrients to antagonizing invading pathogens, and breakdown of chemical signaling has potentially negative consequences for both the host and microbiota. Efforts to engineer microbes to take part in chemical interactions represent a promising strategy for modulating chemical signaling within these complex communities. In this review, we discuss prominent examples of chemical interactions found within host-associated microbial communities, with an emphasis on the plant-root microbiota and the intestinal microbiota of animals. We then highlight how an understanding of such interactions has guided efforts to engineer microbes to participate in chemical signaling in these habitats. We discuss engineering efforts in the context of chemical interactions that enable host colonization, promote host health, and exclude pathogens. Finally, we describe prominent challenges facing this field and propose new directions for future engineering efforts.

  14. Databases in welding engineering - definition and starting phase of the integrated welding engineering information system

    International Nuclear Information System (INIS)

    Barthelmess, H.; Queren, W.; Stracke, M.

    1989-01-01

The structure and function of the Information Association for Welding Engineering, newly established by the Deutscher Verband fuer Schweisstechnik, are presented. Examined are: special literature for welding techniques - value and prospects; databases accessible to the public for information on welding techniques; the concept for the Information Association for Welding Engineering; the four phases of establishing databases for facts and expert systems of the Information Association for Welding Engineering; the pilot project 'MVT-Database' (hot crack database for data of modified varestraint-transvarestraint tests). (orig./MM) [de

  15. The LAILAPS search engine: a feature model for relevance ranking in life science databases.

    Science.gov (United States)

    Lange, Matthias; Spies, Karl; Colmsee, Christian; Flemming, Steffen; Klapperstück, Matthias; Scholz, Uwe

    2010-03-25

Efficient and effective information retrieval in life sciences is one of the most pressing challenges in bioinformatics. The incredible growth of life science databases into a vast network of interconnected information systems is to the same extent a big challenge and a great chance for life science research. The knowledge found on the Web, in particular in life-science databases, is a valuable major resource. In order to bring it to the scientist's desktop, it is essential to have well-performing search engines. Here, neither the response time nor the number of results is the decisive factor; for millions of query results, the most crucial factor is the relevance ranking. In this paper, we present a feature model for relevance ranking in life science databases and its implementation in the LAILAPS search engine. Motivated by the observation of user behavior during the inspection of search engine results, we condensed a set of 9 relevance-discriminating features. These features are intuitively used by scientists, who briefly screen database entries for potential relevance. The features are both sufficient to estimate the potential relevance and efficiently quantifiable. The derivation of a relevance prediction function that computes the relevance from these features constitutes a regression problem. To solve this problem, we used artificial neural networks that have been trained with a reference set of relevant database entries for 19 protein queries. These concepts are implemented in the LAILAPS search engine, which supports a flexible text index and a simple data import format. It can easily be used both as a search engine for comprehensive integrated life science databases and for small in-house project databases. LAILAPS is publicly available for SWISSPROT data at http://lailaps.ipk-gatersleben.de.
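
    A compact sketch of the "features, learned relevance score, ranking" pipeline described above, using scikit-learn's MLPRegressor on made-up feature vectors and labels; the real system uses nine features and reference sets of relevant entries for 19 protein queries.

```python
# Sketch of features -> learned relevance score -> ranking. The three features
# and the training labels are made up for illustration only.
import numpy as np
from sklearn.neural_network import MLPRegressor

# each row: e.g. (query-term frequency, field weight, annotation quality)
X_train = np.array([[0.9, 1.0, 0.8],
                    [0.2, 0.5, 0.3],
                    [0.7, 0.8, 0.9],
                    [0.1, 0.2, 0.1]])
y_train = np.array([1.0, 0.2, 0.9, 0.0])   # reference relevance judgements

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X_train, y_train)

# rank unseen database entries by predicted relevance
candidates = {"entry_1": [0.8, 0.9, 0.7], "entry_2": [0.3, 0.4, 0.2]}
scores = {k: float(model.predict(np.array([v]))[0]) for k, v in candidates.items()}
for entry, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(entry, round(score, 3))
```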

  16. An analysis of extended entity relationship constructs extraction in database reverse engineering approaches

    International Nuclear Information System (INIS)

    Jilani, M.A.; Aziz, A.; Hussain, T.

    2008-01-01

Database reverse engineering is a technique for transforming a relational schema into a conceptual schema in order to find and fix design flaws, maintain and re-engineer database systems, integrate one database system with another, and migrate a database system from one platform to another. We studied the approaches from 1993 to 2006 to find out which EER constructs cannot be retrieved by most DBRE approaches, so that they can be retrieved in the future. For each EER construct that can be retrieved using a given DBRE approach, we show whether it is retrieved without user involvement (automatically), with partial user involvement (semi-automatically) or with full user involvement (manually). We also discuss the relevant advantages and limitations of each DBRE technique considered in this paper. (author)

  17. Perspectives on a Big Data Application: What Database Engineers and IT Students Need to Know

    Directory of Open Access Journals (Sweden)

    E. Erturk

    2015-10-01

    Full Text Available Cloud Computing and Big Data are important and related current trends in the world of information technology. They will have significant impact on the curricula of computer engineering and information systems at universities and higher education institutions. Learning about big data is useful for both working database professionals and students, in accordance with the increase in jobs requiring these skills. It is also important to address a broad gamut of database engineering skills, i.e. database design, installation, and operation. Therefore the authors have investigated MongoDB, a popular application, both from the perspective of industry retraining for database specialists and for teaching. This paper demonstrates some practical activities that can be done by students at the Eastern Institute of Technology New Zealand. In addition to testing and preparing new content for future students, this paper contributes to the very recent and emerging academic literature in this area. This paper concludes with general recommendations for IT educators, database engineers, and other IT professionals.
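
    A minimal pymongo session of the kind such a student exercise might involve, assuming a MongoDB server on localhost:27017; the database and collection names are arbitrary examples, not material from the paper.

```python
# Minimal pymongo session for a teaching exercise; assumes a MongoDB server on
# localhost:27017 and uses arbitrary database/collection names.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")
db = client["teaching_demo"]
students = db["students"]

students.insert_one({"name": "Alice", "programme": "IT", "year": 2})
students.insert_many([
    {"name": "Bob", "programme": "Engineering", "year": 1},
    {"name": "Chen", "programme": "IT", "year": 3},
])

students.create_index("programme")          # a basic operational task
for doc in students.find({"programme": "IT"}).sort("year", -1):
    print(doc["name"], doc["year"])

print(students.count_documents({"year": {"$gte": 2}}))
```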

  18. Community Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This excel spreadsheet is the result of merging at the port level of several of the in-house fisheries databases in combination with other demographic databases such...

  19. Data collection procedures for the Software Engineering Laboratory (SEL) database

    Science.gov (United States)

    Heller, Gerard; Valett, Jon; Wild, Mary

    1992-01-01

    This document is a guidebook to collecting software engineering data on software development and maintenance efforts, as practiced in the Software Engineering Laboratory (SEL). It supersedes the document entitled Data Collection Procedures for the Rehosted SEL Database, number SEL-87-008 in the SEL series, which was published in October 1987. It presents procedures to be followed on software development and maintenance projects in the Flight Dynamics Division (FDD) of Goddard Space Flight Center (GSFC) for collecting data in support of SEL software engineering research activities. These procedures include detailed instructions for the completion and submission of SEL data collection forms.

  20. Anatomy and evolution of database search engines-a central component of mass spectrometry based proteomic workflows.

    Science.gov (United States)

    Verheggen, Kenneth; Raeder, Helge; Berven, Frode S; Martens, Lennart; Barsnes, Harald; Vaudel, Marc

    2017-09-13

    Sequence database search engines are bioinformatics algorithms that identify peptides from tandem mass spectra using a reference protein sequence database. Two decades of development, notably driven by advances in mass spectrometry, have provided scientists with more than 30 published search engines, each with its own properties. In this review, we present the common paradigm behind the different implementations, and its limitations for modern mass spectrometry datasets. We also detail how the search engines attempt to alleviate these limitations, and provide an overview of the different software frameworks available to the researcher. Finally, we highlight alternative approaches for the identification of proteomic mass spectrometry datasets, either as a replacement for, or as a complement to, sequence database search engines. © 2017 Wiley Periodicals, Inc.

  1. Seismic Search Engine: A distributed database for mining large scale seismic data

    Science.gov (United States)

    Liu, Y.; Vaidya, S.; Kuzma, H. A.

    2009-12-01

    The International Monitoring System (IMS) of the CTBTO collects terabytes worth of seismic measurements from many receiver stations situated around the earth with the goal of detecting underground nuclear testing events and distinguishing them from other benign, but more common events such as earthquakes and mine blasts. The International Data Center (IDC) processes and analyzes these measurements, as they are collected by the IMS, to summarize event detections in daily bulletins. Thereafter, the data measurements are archived into a large format database. Our proposed Seismic Search Engine (SSE) will facilitate a framework for data exploration of the seismic database as well as the development of seismic data mining algorithms. Analogous to GenBank, the annotated genetic sequence database maintained by NIH, through SSE, we intend to provide public access to seismic data and a set of processing and analysis tools, along with community-generated annotations and statistical models to help interpret the data. SSE will implement queries as user-defined functions composed from standard tools and models. Each query is compiled and executed over the database internally before reporting results back to the user. Since queries are expressed with standard tools and models, users can easily reproduce published results within this framework for peer-review and making metric comparisons. As an illustration, an example query is “what are the best receiver stations in East Asia for detecting events in the Middle East?” Evaluating this query involves listing all receiver stations in East Asia, characterizing known seismic events in that region, and constructing a profile for each receiver station to determine how effective its measurements are at predicting each event. The results of this query can be used to help prioritize how data is collected, identify defective instruments, and guide future sensor placements.

  2. Archiving, ordering and searching: search engines, algorithms, databases and deep mediatization

    DEFF Research Database (Denmark)

    Andersen, Jack

    2018-01-01

This article argues that search engines, algorithms, and databases can be considered as a way of understanding deep mediatization (Couldry & Hepp, 2016). They are embedded in a variety of social and cultural practices and as such they change our communicative actions to be shaped by their logic. Having reviewed recent trends in mediatization research, the argument is discussed and unfolded in-between the material and social constructivist-phenomenological interpretations of mediatization. In conclusion, it is discussed how deep this form of mediatization can be taken to be.

  3. Role of Database Management Systems in Selected Engineering Institutions of Andhra Pradesh: An Analytical Survey

    Directory of Open Access Journals (Sweden)

    Kutty Kumar

    2016-06-01

    Full Text Available This paper aims to analyze the function of database management systems from the perspective of librarians working in engineering institutions in Andhra Pradesh. Ninety-eight librarians from one hundred thirty engineering institutions participated in the study. The paper reveals that training by computer suppliers and software packages are the significant mode of acquiring DBMS skills by librarians; three-fourths of the librarians are postgraduate degree holders. Most colleges use database applications for automation purposes and content value. Electrical problems and untrained staff seem to be major constraints faced by respondents for managing library databases.

  4. Sagace: A web-based search engine for biomedical databases in Japan

    Directory of Open Access Journals (Sweden)

    Morita Mizuki

    2012-10-01

Full Text Available Abstract Background In the big data era, biomedical research continues to generate a large amount of data, and the generated information is often stored in a database and made publicly available. Although combining data from multiple databases should accelerate further studies, the current number of life sciences databases is too large to grasp features and contents of each database. Findings We have developed Sagace, a web-based search engine that enables users to retrieve information from a range of biological databases (such as gene expression profiles and proteomics data) and biological resource banks (such as mouse models of disease and cell lines). With Sagace, users can search more than 300 databases in Japan. Sagace offers features tailored to biomedical research, including manually tuned ranking, a faceted navigation to refine search results, and rich snippets constructed with retrieved metadata for each database entry. Conclusions Sagace will be valuable for experts who are involved in biomedical research and drug development in both academia and industry. Sagace is freely available at http://sagace.nibio.go.jp/en/.
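
    Faceted navigation with counts and refinement, as mentioned above, can be sketched as follows; the records and facet fields are mock data, not the Japanese databases indexed by Sagace.

```python
# Sketch of faceted navigation: count facet values over a hit list, then refine
# the hits by a chosen facet value. Records and facet fields are mock data.
from collections import Counter

hits = [
    {"title": "liver expression profile", "data_type": "expression", "organism": "mouse"},
    {"title": "plasma proteome", "data_type": "proteomics", "organism": "human"},
    {"title": "brain expression atlas", "data_type": "expression", "organism": "human"},
]

def facet_counts(records, field):
    return Counter(r[field] for r in records)

def refine(records, field, value):
    return [r for r in records if r[field] == value]

print(facet_counts(hits, "data_type"))                      # expression: 2, proteomics: 1
print([r["title"] for r in refine(hits, "organism", "human")])
```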

  5. Municipal solid waste management: Identification and analysis of engineering indexes representing demand and costs generated in virtuous Italian communities

    Energy Technology Data Exchange (ETDEWEB)

    Gamberini, R., E-mail: rita.gamberini@unimore.it; Del Buono, D.; Lolli, F.; Rimini, B.

    2013-11-15

    Highlights: • Collection and analysis of real life data in the field of Municipal Solid Waste (MSW) generation and costs for management. • Study of 92 virtuous Italian communities. • Elaboration of trends of engineering indexes useful during design and evaluation of MSWM systems. - Abstract: The definition and utilisation of engineering indexes in the field of Municipal Solid Waste Management (MSWM) is an issue of interest for technicians and scientists, which is widely discussed in literature. Specifically, the availability of consolidated engineering indexes is useful when new waste collection services are designed, along with when their performance is evaluated after a warm-up period. However, most published works in the field of MSWM complete their study with an analysis of isolated case studies. Conversely, decision makers require tools for information collection and exchange in order to trace the trends of these engineering indexes in large experiments. In this paper, common engineering indexes are presented and their values analysed in virtuous Italian communities, with the aim of contributing to the creation of a useful database whose data could be used during experiments, by indicating examples of MSWM demand profiles and the costs required to manage them.

  6. Towards BioDBcore: a community-defined information specification for biological databases

    Science.gov (United States)

    Gaudet, Pascale; Bairoch, Amos; Field, Dawn; Sansone, Susanna-Assunta; Taylor, Chris; Attwood, Teresa K.; Bateman, Alex; Blake, Judith A.; Bult, Carol J.; Cherry, J. Michael; Chisholm, Rex L.; Cochrane, Guy; Cook, Charles E.; Eppig, Janan T.; Galperin, Michael Y.; Gentleman, Robert; Goble, Carole A.; Gojobori, Takashi; Hancock, John M.; Howe, Douglas G.; Imanishi, Tadashi; Kelso, Janet; Landsman, David; Lewis, Suzanna E.; Mizrachi, Ilene Karsch; Orchard, Sandra; Ouellette, B. F. Francis; Ranganathan, Shoba; Richardson, Lorna; Rocca-Serra, Philippe; Schofield, Paul N.; Smedley, Damian; Southan, Christopher; Tan, Tin Wee; Tatusova, Tatiana; Whetzel, Patricia L.; White, Owen; Yamasaki, Chisato

    2011-01-01

    The present article proposes the adoption of a community-defined, uniform, generic description of the core attributes of biological databases, BioDBCore. The goals of these attributes are to provide a general overview of the database landscape, to encourage consistency and interoperability between resources and to promote the use of semantic and syntactic standards. BioDBCore will make it easier for users to evaluate the scope and relevance of available resources. This new resource will increase the collective impact of the information present in biological databases. PMID:21097465

  7. Database for propagation models

    Science.gov (United States)

    Kantak, Anil V.

    1991-07-01

A propagation researcher or a systems engineer who intends to use the results of a propagation experiment is generally faced with various database tasks such as the selection of the computer software, the hardware, and the writing of the programs to pass the data through the models of interest. This task is repeated every time a new experiment is conducted or the same experiment is carried out at a different location, generating different data. Thus the users of this data have to spend a considerable portion of their time learning how to implement the computer hardware and the software towards the desired end. This situation may be improved considerably if an easily accessible propagation database is created that has all the accepted (standardized) propagation phenomena models approved by the propagation research community. Also, the handling of data will become easier for the user. Such a database construction can only stimulate the growth of propagation research if it is available to all the researchers, so that the results of the experiment conducted by one researcher can be examined independently by another, without different hardware and software being used. The database may be made flexible so that the researchers need not be confined only to the contents of the database. Another way in which the database may help the researchers is by the fact that they will not have to document the software and hardware tools used in their research, since the propagation research community will know the database already. The following sections show a possible database construction, as well as properties of the database for propagation research.

  8. BioMart Central Portal: an open database network for the biological community

    Science.gov (United States)

    Guberman, Jonathan M.; Ai, J.; Arnaiz, O.; Baran, Joachim; Blake, Andrew; Baldock, Richard; Chelala, Claude; Croft, David; Cros, Anthony; Cutts, Rosalind J.; Di Génova, A.; Forbes, Simon; Fujisawa, T.; Gadaleta, E.; Goodstein, D. M.; Gundem, Gunes; Haggarty, Bernard; Haider, Syed; Hall, Matthew; Harris, Todd; Haw, Robin; Hu, S.; Hubbard, Simon; Hsu, Jack; Iyer, Vivek; Jones, Philip; Katayama, Toshiaki; Kinsella, R.; Kong, Lei; Lawson, Daniel; Liang, Yong; Lopez-Bigas, Nuria; Luo, J.; Lush, Michael; Mason, Jeremy; Moreews, Francois; Ndegwa, Nelson; Oakley, Darren; Perez-Llamas, Christian; Primig, Michael; Rivkin, Elena; Rosanoff, S.; Shepherd, Rebecca; Simon, Reinhard; Skarnes, B.; Smedley, Damian; Sperling, Linda; Spooner, William; Stevenson, Peter; Stone, Kevin; Teague, J.; Wang, Jun; Wang, Jianxin; Whitty, Brett; Wong, D. T.; Wong-Erasmus, Marie; Yao, L.; Youens-Clark, Ken; Yung, Christina; Zhang, Junjun; Kasprzyk, Arek

    2011-01-01

    BioMart Central Portal is a first of its kind, community-driven effort to provide unified access to dozens of biological databases spanning genomics, proteomics, model organisms, cancer data, ontology information and more. Anybody can contribute an independently maintained resource to the Central Portal, allowing it to be exposed to and shared with the research community, and linking it with the other resources in the portal. Users can take advantage of the common interface to quickly utilize different sources without learning a new system for each. The system also simplifies cross-database searches that might otherwise require several complicated steps. Several integrated tools streamline common tasks, such as converting between ID formats and retrieving sequences. The combination of a wide variety of databases, an easy-to-use interface, robust programmatic access and the array of tools make Central Portal a one-stop shop for biological data querying. Here, we describe the structure of Central Portal and show example queries to demonstrate its capabilities. Database URL: http://central.biomart.org. PMID:21930507

  9. Database 'catalogue of techniques applied to materials and products of nuclear engineering'

    International Nuclear Information System (INIS)

    Lebedeva, E.E.; Golovanov, V.N.; Podkopayeva, I.A.; Temnoyeva, T.A.

    2002-01-01

The database 'Catalogue of techniques applied to materials and products of nuclear engineering' (IS MERI) was developed to provide informational support for SSC RF RIAR and other enterprises in scientific investigations. This database contains information on the techniques used at RF Minatom enterprises for the investigation of reactor material properties. The main purpose of this system is to assess the current status of the reactor materials science experimental base for further planning of experimental activities and improvement of methodical support. (author)

  10. Computer-Aided Systems Engineering for Flight Research Projects Using a Workgroup Database

    Science.gov (United States)

    Mizukami, Masahi

    2004-01-01

    An online systems engineering tool for flight research projects has been developed through the use of a workgroup database. Capabilities are implemented for typical flight research systems engineering needs in document library, configuration control, hazard analysis, hardware database, requirements management, action item tracking, project team information, and technical performance metrics. Repetitive tasks are automated to reduce workload and errors. Current data and documents are instantly available online and can be worked on collaboratively. Existing forms and conventional processes are used, rather than inventing or changing processes to fit the tool. An integrated tool set offers advantages by automatically cross-referencing data, minimizing redundant data entry, and reducing the number of programs that must be learned. With a simplified approach, significant improvements are attained over existing capabilities for minimal cost. By using a workgroup-level database platform, personnel most directly involved in the project can develop, modify, and maintain the system, thereby saving time and money. As a pilot project, the system has been used to support an in-house flight experiment. Options are proposed for developing and deploying this type of tool on a more extensive basis.

  11. Performance engineering in the community atmosphere model

    International Nuclear Information System (INIS)

    Worley, P; Mirin, A; Drake, J; Sawyer, W

    2006-01-01

    The Community Atmosphere Model (CAM) is the atmospheric component of the Community Climate System Model (CCSM) and is the primary consumer of computer resources in typical CCSM simulations. Performance engineering has been an important aspect of CAM development throughout its existence. This paper briefly summarizes these efforts and their impacts over the past five years

  12. Humanitarian engineering placements in our own communities

    Science.gov (United States)

    VanderSteen, J. D. J.; Hall, K. R.; Baillie, C. A.

    2010-05-01

    There is an increasing interest in the humanitarian engineering curriculum, and a service-learning placement could be an important component of such a curriculum. International placements offer some important pedagogical advantages, but also have some practical and ethical limitations. Local community-based placements have the potential to be transformative for both the student and the community, although this potential is not always seen. In order to investigate the role of local placements, qualitative research interviews were conducted. Thirty-two semi-structured research interviews were conducted and analysed, resulting in a distinct outcome space. It is concluded that local humanitarian engineering placements greatly complement international placements and are strongly recommended if international placements are conducted. More importantly it is seen that we are better suited to address the marginalised in our own community, although it is often easier to see the needs of an outside populace.

  13. Design and implementation of Web-based SDUV-FEL engineering database system

    International Nuclear Information System (INIS)

    Sun Xiaoying; Shen Liren; Dai Zhimin; Xie Dong

    2006-01-01

    The design and implementation of a Web-based SDUV-FEL engineering database are introduced. The system stores and serves static and archived SDUV-FEL data, and provides a practical and effective platform for sharing SDUV-FEL data. It offers usable and reliable SDUV-FEL data to operators and scientists. (authors)

  14. Analysis and Design of Web-Based Database Application for Culinary Community

    OpenAIRE

    Huda, Choirul; Awang, Osel Dharmawan; Raymond, Raymond; Raynaldi, Raynaldi

    2017-01-01

    This research is motivated by the rapid development of the culinary field and of information technology. Difficulties in communicating with culinary experts and in documenting recipes make proper media support very important. A web-based database application for the public is therefore needed to help the culinary community with communication, searching and recipe management. The aim of the research was to design a web-based database application that could be used as social media for the cu...

  15. Women Engineering Transfer Students: The Community College Experience

    Science.gov (United States)

    Patterson, Susan J.

    2011-01-01

    An interpretative philosophical framework was applied to a case study to document the particular experiences and perspectives of ten women engineering transfer students who once attended a community college and are currently enrolled in one of two university professional engineering programs. This study is important because women still do not earn…

  16. Centralized database for interconnection system design. [for spacecraft

    Science.gov (United States)

    Billitti, Joseph W.

    1989-01-01

    A database application called DFACS (Database, Forms and Applications for Cabling and Systems) is described. The objective of DFACS is to improve the speed and accuracy of interconnection system information flow during the design and fabrication stages of a project, while simultaneously supporting both the horizontal (end-to-end wiring) and the vertical (wiring by connector) design stratagems used by the Jet Propulsion Laboratory (JPL) project engineering community. The DFACS architecture is centered around a centralized database and program methodology which emulates the manual design process hitherto used at JPL. DFACS has been tested and successfully applied to existing JPL hardware tasks with a resulting reduction in schedule time and costs.
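
    DFACS itself is not documented here beyond the abstract, but the dual "end-to-end wiring" and "wiring by connector" views it describes can be illustrated with a single normalized table queried two ways. The sketch below uses SQLite with entirely hypothetical table and column names; it is a conceptual illustration, not the JPL schema.

    ```python
    # Hedged sketch: one wire table serving both design views described above.
    # Table and column names are hypothetical, not the actual DFACS schema.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE wire (
            wire_id        TEXT PRIMARY KEY,
            signal_name    TEXT,
            from_connector TEXT,
            from_pin       TEXT,
            to_connector   TEXT,
            to_pin         TEXT
        )""")
    conn.executemany(
        "INSERT INTO wire VALUES (?, ?, ?, ?, ?, ?)",
        [("W001", "PWR_28V", "J1", "1", "P4", "7"),
         ("W002", "DATA_A",  "J1", "2", "P9", "3")])

    # Horizontal view: end-to-end wiring, one row per signal run.
    for row in conn.execute(
            "SELECT signal_name, from_connector, from_pin, to_connector, to_pin "
            "FROM wire ORDER BY signal_name"):
        print("end-to-end:", row)

    # Vertical view: wiring by connector, every pin touching connector J1.
    for row in conn.execute(
            "SELECT wire_id, from_pin AS pin FROM wire WHERE from_connector = 'J1' "
            "UNION ALL "
            "SELECT wire_id, to_pin FROM wire WHERE to_connector = 'J1'"):
        print("J1 pin usage:", row)
    ```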

  17. Imprinting Community College Computer Science Education with Software Engineering Principles

    Science.gov (United States)

    Hundley, Jacqueline Holliday

    Although the two-year curriculum guide includes coverage of all eight software engineering core topics, the computer science courses taught in Alabama community colleges limit student exposure to the programming, or coding, phase of the software development lifecycle and offer little experience in requirements analysis, design, testing, and maintenance. We proposed that some software engineering principles can be incorporated into the introductory-level of the computer science curriculum. Our vision is to give community college students a broader exposure to the software development lifecycle. For those students who plan to transfer to a baccalaureate program subsequent to their community college education, our vision is to prepare them sufficiently to move seamlessly into mainstream computer science and software engineering degrees. For those students who plan to move from the community college to a programming career, our vision is to equip them with the foundational knowledge and skills required by the software industry. To accomplish our goals, we developed curriculum modules for teaching seven of the software engineering knowledge areas within current computer science introductory-level courses. Each module was designed to be self-supported with suggested learning objectives, teaching outline, software tool support, teaching activities, and other material to assist the instructor in using it.

  18. Municipal solid waste management: identification and analysis of engineering indexes representing demand and costs generated in virtuous Italian communities.

    Science.gov (United States)

    Gamberini, R; Del Buono, D; Lolli, F; Rimini, B

    2013-11-01

    The definition and utilisation of engineering indexes in the field of Municipal Solid Waste Management (MSWM) is an issue of interest for technicians and scientists, which is widely discussed in literature. Specifically, the availability of consolidated engineering indexes is useful when new waste collection services are designed, along with when their performance is evaluated after a warm-up period. However, most published works in the field of MSWM complete their study with an analysis of isolated case studies. Conversely, decision makers require tools for information collection and exchange in order to trace the trends of these engineering indexes in large experiments. In this paper, common engineering indexes are presented and their values analysed in virtuous Italian communities, with the aim of contributing to the creation of a useful database whose data could be used during experiments, by indicating examples of MSWM demand profiles and the costs required to manage them. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. High Performance Protein Sequence Database Scanning on the Cell Broadband Engine

    Directory of Open Access Journals (Sweden)

    Adrianto Wirawan

    2009-01-01

    Full Text Available The enormous growth of biological sequence databases has caused bioinformatics to be rapidly moving towards a data-intensive, computational science. As a result, the computational power needed by bioinformatics applications is growing rapidly as well. The recent emergence of low cost parallel multicore accelerator technologies has made it possible to reduce execution times of many bioinformatics applications. In this paper, we demonstrate how the Cell Broadband Engine can be used as a computational platform to accelerate two approaches for protein sequence database scanning: exhaustive and heuristic. We present efficient parallelization techniques for two representative algorithms: the dynamic programming based Smith–Waterman algorithm and the popular BLASTP heuristic. Their implementation on a Playstation®3 leads to significant runtime savings compared to corresponding sequential implementations.
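
    The record above parallelizes the Smith-Waterman algorithm on the Cell Broadband Engine. The SIMD parallelization is beyond a short example, but the underlying dynamic-programming recurrence it accelerates can be sketched sequentially, as below; this is a plain reference version with arbitrary match/mismatch/gap scores, not the authors' optimized implementation.

    ```python
    # Sequential reference sketch of Smith-Waterman local alignment scoring.
    # Scoring parameters are arbitrary illustrative choices.
    def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
        """Return the best local alignment score between sequences a and b."""
        rows, cols = len(a) + 1, len(b) + 1
        h = [[0] * cols for _ in range(rows)]
        best = 0
        for i in range(1, rows):
            for j in range(1, cols):
                diag = h[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                h[i][j] = max(0,                  # local alignment: never below zero
                              diag,               # match/mismatch
                              h[i - 1][j] + gap,  # gap in b
                              h[i][j - 1] + gap)  # gap in a
                best = max(best, h[i][j])
        return best

    print(smith_waterman_score("HEAGAWGHEE", "PAWHEAE"))
    ```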

  20. Decision making in family medicine: randomized trial of the effects of the InfoClinique and Trip database search engines.

    Science.gov (United States)

    Labrecque, Michel; Ratté, Stéphane; Frémont, Pierre; Cauchon, Michel; Ouellet, Jérôme; Hogg, William; McGowan, Jessie; Gagnon, Marie-Pierre; Njoya, Merlin; Légaré, France

    2013-10-01

    To compare the ability of users of 2 medical search engines, InfoClinique and the Trip database, to provide correct answers to clinical questions and to explore the perceived effects of the tools on the clinical decision-making process. Randomized trial. Three family medicine units of the family medicine program of the Faculty of Medicine at Laval University in Quebec city, Que. Fifteen second-year family medicine residents. Residents generated 30 structured questions about therapy or preventive treatment (2 questions per resident) based on clinical encounters. Using an Internet platform designed for the trial, each resident answered 20 of these questions (their own 2, plus 18 of the questions formulated by other residents, selected randomly) before and after searching for information with 1 of the 2 search engines. For each question, 5 residents were randomly assigned to begin their search with InfoClinique and 5 with the Trip database. The ability of residents to provide correct answers to clinical questions using the search engines, as determined by third-party evaluation. After answering each question, participants completed a questionnaire to assess their perception of the engine's effect on the decision-making process in clinical practice. Of 300 possible pairs of answers (1 answer before and 1 after the initial search), 254 (85%) were produced by 14 residents. Of these, 132 (52%) and 122 (48%) pairs of answers concerned questions that had been assigned an initial search with InfoClinique and the Trip database, respectively. Both engines produced an important and similar absolute increase in the proportion of correct answers after searching (26% to 62% for InfoClinique, for an increase of 36%; 24% to 63% for the Trip database, for an increase of 39%; P = .68). For all 30 clinical questions, at least 1 resident produced the correct answer after searching with either search engine. The mean (SD) time of the initial search for each question was 23.5 (7

  1. Assessment and application of national environmental databases and mapping tools at the local level to two community case studies.

    Science.gov (United States)

    Hammond, Davyda; Conlon, Kathryn; Barzyk, Timothy; Chahine, Teresa; Zartarian, Valerie; Schultz, Brad

    2011-03-01

    Communities are concerned over pollution levels and seek methods to systematically identify and prioritize the environmental stressors in their communities. Geographic information system (GIS) maps of environmental information can be useful tools for communities in their assessment of environmental-pollution-related risks. Databases and mapping tools that supply community-level estimates of ambient concentrations of hazardous pollutants, risk, and potential health impacts can provide relevant information for communities to understand, identify, and prioritize potential exposures and risk from multiple sources. An assessment of existing databases and mapping tools was conducted as part of this study to explore the utility of publicly available databases, and three of these databases were selected for use in a community-level GIS mapping application. Queried data from the U.S. EPA's National-Scale Air Toxics Assessment, Air Quality System, and National Emissions Inventory were mapped at the appropriate spatial and temporal resolutions for identifying risks of exposure to air pollutants in two communities. The maps combine monitored and model-simulated pollutant and health risk estimates, along with local survey results, to assist communities with the identification of potential exposure sources and pollution hot spots. Findings from this case study analysis will provide information to advance the development of new tools to assist communities with environmental risk assessments and hazard prioritization. © 2010 Society for Risk Analysis.

  2. BioMart Central Portal: an open database network for the biological community

    OpenAIRE

    Guberman, Jonathan M.; Ai, J.; Arnaiz, O.; Baran, Joachim; Blake, Andrew; Baldock, Richard; Chelala, Claude; Croft, David; Cros, Anthony; Cutts, Rosalind J.; Di Genova, A.; Forbes, Simon; Fujisawa, T.; Gadaleta, E.; Goodstein, D. M.

    2011-01-01

    BioMart Central Portal is a first of its kind, community-driven effort to provide unified access to dozens of biological databases spanning genomics, proteomics, model organisms, cancer data, ontology information and more. Anybody can contribute an independently maintained resource to the Central Portal, allowing it to be exposed to and shared with the research community, and linking it with the other resources in the portal. Users can take advantage of the common inte...

  3. Information Management in Creative Engineering Design and Capabilities of Database Transactions

    DEFF Research Database (Denmark)

    Jacobsen, Kim; Eastman, C. A.; Jeng, T. S.

    1997-01-01

    This paper examines the information management requirements and sets forth the general criteria for collaboration and concurrency control in creative engineering design. Our work attempts to recognize the full range of concurrency, collaboration and complex transaction structure now practiced in manual and semi-automated design, and the range of capabilities needed as the demands for enhanced but flexible electronic information management unfold. The objective of this paper is to identify new issues that may advance the use of databases to support creative engineering design. We start with a generalized description of the structure of design tasks and how information management in design is dealt with today. After this review, we identify extensions to current information management capabilities that have been realized and/or proposed to support/augment what designers can do now. Given...

  4. Usage of the Jess Engine, Rules and Ontology to Query a Relational Database

    Science.gov (United States)

    Bak, Jaroslaw; Jedrzejek, Czeslaw; Falkowski, Maciej

    We present a prototypical implementation of a library tool, the Semantic Data Library (SDL), which integrates the Jess (Java Expert System Shell) engine, rules and ontology to query a relational database. The tool extends functionalities of previous OWL2Jess with SWRL implementations and takes full advantage of the Jess engine, by separating forward and backward reasoning. The optimization of integration of all these technologies is an advancement over previous tools. We discuss the complexity of the query algorithm. As a demonstration of capability of the SDL library, we execute queries using crime ontology which is being developed in the Polish PPBW project.

  5. Tinkering and Technical Self-Efficacy of Engineering Students at the Community College

    Science.gov (United States)

    Baker, Dale R.; Wood, Lorelei; Corkins, James; Krause, Stephen

    2015-01-01

    Self-efficacy in engineering is important because individuals with low self-efficacy have lower levels of achievement and persistence in engineering majors. To examine self-efficacy among community college engineering students, an instrument to specifically measure two important aspects of engineering, tinkering and technical self-efficacy, was…

  6. Study of Scientific Production of Community Medicines' Department Indexed in ISI Citation Databases.

    Science.gov (United States)

    Khademloo, Mohammad; Khaseh, Ali Akbar; Siamian, Hasan; Aligolbandi, Kobra; Latifi, Mahsoomeh; Yaminfirooz, Mousa

    2016-10-01

    In scientometrics, the main criterion for determining the scientific position and ranking of scientific centres, particularly universities, is the rate of scientific production and innovation and the degree of participation in global scientific development. Medical science is one of the fields most closely tied to science and technology and most influential on the improvement of health. In this research, using scientometric and citation analysis, we studied the rate of scientific production in the field of community medicine, measured as the number of articles published and indexed in the ISI database from 2000 to 2010. The study samples included all of the articles in the ISI database from 2000 to 2010. Data were collected using the advanced search facility of the ISI database, and the ISI analysis software and descriptive statistics were used for data analysis. Results showed that, among the five top universities producing documents, Tehran University of Medical Sciences ranked first in scientific production with 88 (22.22%) documents. M. Askarian, with 36 (90/9%) published documents, was the most active author in community medicine in the international arena. In collaboration with other writers, Iranian departments of Community Medicine, with 27 published articles, had the greatest participation with English-language authors. The results also showed that scientific output was at its lowest in the years 2000 to 2004, while 2009 accounted for the largest share of the Community Medicine departments' production. The Iranian Journal of Public Health and the Saudi Medical Journal, with 16 articles each, had the highest participation rates in publishing the community medicine departments' work. On the type of carrier, community medicine's department by

  7. Database design using entity-relationship diagrams

    CERN Document Server

    Bagui, Sikha

    2011-01-01

    Table of contents (excerpt): Data, Databases, and the Software Engineering Process; Data; Building a Database; What is the Software Engineering Process?; Entity Relationship Diagrams and the Software Engineering Life Cycle (Phase 1: Get the Requirements for the Database; Phase 2: Specify the Database; Phase 3: Design the Database); Data and Data Models; Files, Records, and Data Items; Moving from 3 × 5 Cards to Computers; Database Models (The Hierarchical Model; The Network Model; The Relational Model); The Relational Model and Functional Dependencies; Fundamental Relational Database; Relational Database and Sets; Functional

  8. MSblender: A probabilistic approach for integrating peptide identifications from multiple database search engines.

    Science.gov (United States)

    Kwon, Taejoon; Choi, Hyungwon; Vogel, Christine; Nesvizhskii, Alexey I; Marcotte, Edward M

    2011-07-01

    Shotgun proteomics using mass spectrometry is a powerful method for protein identification but suffers limited sensitivity in complex samples. Integrating peptide identifications from multiple database search engines is a promising strategy to increase the number of peptide identifications and reduce the volume of unassigned tandem mass spectra. Existing methods pool statistical significance scores such as p-values or posterior probabilities of peptide-spectrum matches (PSMs) from multiple search engines after high scoring peptides have been assigned to spectra, but these methods lack reliable control of identification error rates as data are integrated from different search engines. We developed a statistically coherent method for integrative analysis, termed MSblender. MSblender converts raw search scores from search engines into a probability score for every possible PSM and properly accounts for the correlation between search scores. The method reliably estimates false discovery rates and identifies more PSMs than any single search engine at the same false discovery rate. Increased identifications increment spectral counts for most proteins and allow quantification of proteins that would not have been quantified by individual search engines. We also demonstrate that enhanced quantification contributes to improve sensitivity in differential expression analyses.
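
    MSblender's actual model converts raw search scores into probabilities while modelling the correlation between engines; that model is not reproduced here. The sketch below shows only the naive baseline such integration improves on: combining per-engine posterior probabilities for the same peptide-spectrum match under an (unrealistic) independence assumption. The engine names and probability values are placeholders.

    ```python
    # Hedged sketch: naive combination of per-engine PSM posterior probabilities.
    # Assumes independence between engines, which MSblender explicitly does NOT;
    # shown only to make the integration idea concrete.
    from math import prod

    def naive_combined_probability(posteriors):
        """Probability that the PSM is correct if at least one engine's call is
        right, treating engines as independent (a simplifying assumption)."""
        p_all_wrong = prod(1.0 - p for p in posteriors.values())
        return 1.0 - p_all_wrong

    psm_scores = {"engine_A": 0.80, "engine_B": 0.65, "engine_C": 0.40}
    print(f"combined: {naive_combined_probability(psm_scores):.3f}")
    ```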

  9. A community-based, interdisciplinary rehabilitation engineering course.

    Science.gov (United States)

    Lundy, Mary; Aceros, Juan

    2016-08-01

    A novel, community-based course was created through collaboration between the School of Engineering and the Physical Therapy program at the University of North Florida. This course offers a hands-on, interdisciplinary training experience for undergraduate engineering students through team-based design projects where engineering students are partnered with physical therapy students. Students learn the process of design, fabrication and testing of low-tech and high-tech rehabilitation technology for children with disabilities, and are exposed to a clinical experience under the guidance of licensed therapists. This course was taught in two consecutive years and pre-test/post-test data evaluating the impact of this interprofessional education experience on the students is presented using the Public Service Motivation Scale, Civic Actions Scale, Civic Attitudes Scale, and the Interprofessional Socialization and Valuing Scale.

  10. Database on wind characteristics - Structure and philosophy

    Energy Technology Data Exchange (ETDEWEB)

    Larsen, G.C.; Hansen, K.S.

    2001-11-01

    The main objective of IEA R and D Wind Annex XVII - Database on Wind Characteristics - is to provide wind energy planners and designers, as well as the international wind engineering community in general, with easy access to quality-controlled measured wind field time series observed in a wide range of environments. The project partners are Sweden, Norway, U.S.A., The Netherlands, Japan and Denmark, with Denmark as the Operating Agent. The reporting of IEA R and D Annex XVII falls in three separate parts. Part one deals with the overall structure and philosophy behind the database, part two accounts in detail for the available data in the established database bank and part three is the Users Manual describing the various ways to access and analyse the data. The present report constitutes the first part of the Annex XVII reporting, and it contains a detailed description of the database structure, the data quality control procedures, the selected indexing of the data and the hardware system. (au)

  11. Effects of antagonistic ecosystem engineers on macrofauna communities in a patchy, intertidal mudflat landscape

    NARCIS (Netherlands)

    Eklof, J. S.; Donadi, S.; van der Heide, T.; van der Zee, E. M.; Eriksson, B. K.

    Ecosystem engineers are organisms that strongly modify abiotic conditions and in the process alter associated communities. Different types of benthic ecosystem engineers have been suggested to facilitate different communities in otherwise similar marine environments, partly because they alter

  12. ExplorEnz: a MySQL database of the IUBMB enzyme nomenclature.

    Science.gov (United States)

    McDonald, Andrew G; Boyce, Sinéad; Moss, Gerard P; Dixon, Henry B F; Tipton, Keith F

    2007-07-27

    We describe the database ExplorEnz, which is the primary repository for EC numbers and enzyme data that are being curated on behalf of the IUBMB. The enzyme nomenclature is incorporated into many other resources, including the ExPASy-ENZYME, BRENDA and KEGG bioinformatics databases. The data, which are stored in a MySQL database, preserve the formatting of chemical and enzyme names. A simple, easy to use, web-based query interface is provided, along with an advanced search engine for more complex queries. The database is publicly available at http://www.enzyme-database.org. The data are available for download as SQL and XML files via FTP. ExplorEnz has powerful and flexible search capabilities and provides the scientific community with the most up-to-date version of the IUBMB Enzyme List.
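
    Since the record notes that the ExplorEnz data are stored in MySQL and distributed as SQL dumps, a locally restored copy could in principle be queried directly. The sketch below is a generic example using the mysql-connector-python driver against a hypothetical table and column layout; the real schema would have to be taken from the dump itself, and the connection details are placeholders.

    ```python
    # Hedged sketch: querying a locally restored copy of an enzyme database dump.
    # Table name, column names and connection details are assumptions, not the
    # actual ExplorEnz schema.
    import mysql.connector

    conn = mysql.connector.connect(
        host="localhost", user="reader", password="secret", database="enzymes")
    cursor = conn.cursor()

    # Find entries whose accepted name mentions a keyword (hypothetical columns).
    cursor.execute(
        "SELECT ec_number, accepted_name FROM entry "
        "WHERE accepted_name LIKE %s ORDER BY ec_number LIMIT 10",
        ("%kinase%",))
    for ec_number, accepted_name in cursor:
        print(ec_number, accepted_name)

    cursor.close()
    conn.close()
    ```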

  13. Database on wind characteristics. Users manual

    Energy Technology Data Exchange (ETDEWEB)

    Larsen, G.C.; Hansen, K.S.

    2001-11-01

    The main objective of IEA R and D Wind Annex XVII - Database on Wind Characteristics - is to provide wind energy planners and designers, as well as the international wind engineering community in general, with easy access to quality-controlled measured wind field time series observed in a wide range of environments. The project partners are Sweden, Norway, U.S.A., The Netherlands, Japan and Denmark, with Denmark as the Operating Agent. The reporting of IEA R and D Annex XVII falls in three separate parts. Part one deals with the overall structure and philosophy behind the database (including the applied data quality control procedures), part two accounts in detail for the available data in the established database bank and part three is the Users Manual describing the various ways to access and analyse the data. The present report constitutes part three of the Annex XVII reporting and contains a thorough description of the available online facilities for identifying, selecting, downloading and handling measured wind field time series and resource data from 'Database on Wind Characteristics'. (au)

  14. Developing a Global Database of Historic Flood Events to Support Machine Learning Flood Prediction in Google Earth Engine

    Science.gov (United States)

    Tellman, B.; Sullivan, J.; Kettner, A.; Brakenridge, G. R.; Slayback, D. A.; Kuhn, C.; Doyle, C.

    2016-12-01

    There is an increasing need to understand flood vulnerability as the societal and economic effects of flooding increase. Risk models from insurance companies and flood models from hydrologists must be calibrated based on flood observations in order to make future predictions that can improve planning and help societies reduce future disasters. Specifically, to improve these models, both traditional flood prediction methods based on physical models and data-driven techniques, such as machine learning, require spatial flood observations to validate model outputs and quantify uncertainty. A key dataset that is missing for flood model validation is a global historical geo-database of flood event extents. Currently, the most advanced database of historical flood extent is hosted and maintained at the Dartmouth Flood Observatory (DFO), which has catalogued 4320 floods (1985-2015) but has only mapped 5% of these floods. We are addressing this data gap by mapping the inventory of floods in the DFO database to create a first-of-its-kind, comprehensive, global and historical geospatial database of flood events. To do so, we combine water detection algorithms on MODIS and Landsat 5, 7 and 8 imagery in Google Earth Engine to map discrete flood events. The created database will be available in the Earth Engine Catalogue for download by country, region, or time period. This dataset can be leveraged for new data-driven hydrologic modeling using machine learning algorithms in Earth Engine's highly parallelized computing environment, and we will show examples for New York and Senegal.
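
    The record combines water-detection algorithms over MODIS and Landsat imagery in Google Earth Engine. A minimal sketch of that kind of step, thresholding an NDWI composite from Landsat 8 surface reflectance with the Earth Engine Python API, might look as follows; the collection ID, band names, dates, region and threshold are assumptions chosen for illustration rather than the authors' actual flood-mapping algorithm.

    ```python
    # Hedged sketch: a simple NDWI water mask in Google Earth Engine (Python API).
    # Collection ID, band names, dates, region and threshold are illustrative
    # assumptions, not the flood-mapping algorithm described in the record.
    import ee

    ee.Initialize()

    region = ee.Geometry.Rectangle([-74.3, 40.5, -73.6, 41.0])  # example area

    composite = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
                 .filterBounds(region)
                 .filterDate("2015-06-01", "2015-09-30")
                 .median())

    # NDWI = (green - NIR) / (green + NIR); for this collection, SR_B3 and SR_B5.
    ndwi = composite.normalizedDifference(["SR_B3", "SR_B5"]).rename("ndwi")
    water_mask = ndwi.gt(0.1).selfMask()  # threshold chosen arbitrarily

    # Report the water area (km^2) inside the region at 30 m resolution.
    area_km2 = (water_mask.multiply(ee.Image.pixelArea()).divide(1e6)
                .reduceRegion(reducer=ee.Reducer.sum(), geometry=region,
                              scale=30, maxPixels=1e9))
    print(area_km2.getInfo())
    ```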

  15. KNOVEL: A NEW SERVICE FOR THE ENGINEERING COMMUNITY

    CERN Multimedia

    2001-01-01

    Electronic books available on trial at CERN Electronic preprints and journals have become tools used on a daily basis, but so far the CERN Library did not provide a significant collection of electronic books. This is now about to change, so searching for scientific information, and in particular engineering-related references, is now easier than ever before. CERN has access to the electronic book collection via Knovel on a trial base until Xmas. Knovel has a database of some of the leading engineering reference handbooks and conference proceedings, published by Reed Elsevier, ASME, ASM, Butterworth, CRC Press, McGraw-Hill and others. The full-text of all e-books can be searched simultaneously. Another Knovel feature allows users to search for data (materials properties) across the whole digital collection. The Web search engine and display interface has been developed to support a wide range of information and file types: text, tables, equations, graphics, figures. This new resource is linked to from the Libr...

  16. Tight-coupling of groundwater flow and transport modelling engines with spatial databases and GIS technology: a new approach integrating Feflow and ArcGIS

    Directory of Open Access Journals (Sweden)

    Ezio Crestaz

    2012-09-01

    Full Text Available Implementation of groundwater flow and transport numerical models is generally a challenging, time-consuming and financially demanding task, entrusted to specialized modelers and consulting firms. At a later stage, within clearly stated limits of applicability, these models are often expected to be made available to less knowledgeable personnel to support the design and running of predictive simulations within environments more familiar than specialized simulation systems. GIS systems coupled with spatial databases appear to be ideal candidates to address the problem above, due to their much wider diffusion and the availability of expertise. The current paper discusses the issue from a tight-coupling architecture perspective, aimed at the integration of spatial databases, GIS and numerical simulation engines, addressing the management, retrieval and spatio-temporal analysis of both observed and computed data. Observed data can be migrated to the central database repository and then used to set up transient simulation conditions in the background, at run time, while limiting additional complexity and integrity failure risks such as data duplication during data transfer through proprietary file formats. Similarly, simulation scenarios can be set up in a familiar GIS system and stored in the spatial database for later reference. As the numerical engine is tightly coupled with the GIS, simulations can be run within the environment and the results themselves saved to the database. Further tasks, such as spatio-temporal analysis (i.e. for post-calibration auditing scopes), cartography production and geovisualization, can then be addressed using traditional GIS tools. Benefits of such an approach include more effective data management practices, integration and availability of modeling facilities in a familiar environment, and streamlining of spatial analysis processes and geovisualization requirements for the non-modeler community. Major drawbacks include limited 3D and time-dependent support in

  17. High-Fidelity Aerothermal Engineering Analysis for Planetary Probes Using DOTNET Framework and OLAP Cubes Database

    Directory of Open Access Journals (Sweden)

    Prabhakar Subrahmanyam

    2009-01-01

    Full Text Available This publication presents the architecture, integration and implementation of the various modules in the Sparta framework. Sparta is a trajectory engine hooked to an Online Analytical Processing (OLAP) database for multi-dimensional analysis capability. The OLAP database holds a comprehensive list of atmospheric entry probes and their vehicle dimensions, trajectory data, aero-thermal data and material properties such as carbon, silicon and carbon-phenolic based ablators. An approach is presented for dynamic TPS design. OLAP has the capability to run several different trajectory conditions in one simulation; the output is stored back into the database and can be queried for the appropriate trajectory type. An OLAP simulation can be set up by spawning individual threads to run three types of trajectory: nominal, undershoot and overshoot. The Sparta graphical user interface provides capabilities to choose from a list of flight vehicles or to enter trajectory and geometry information for a vehicle in design. The DOTNET framework acts as a middleware layer between the trajectory engine and the user interface, and also between the web user interface and the OLAP database. Trajectory output can be obtained in TecPlot format, as Excel output or in KML (Keyhole Markup Language) format. The framework employs an API (application programming interface) to convert trajectory data into a formatted KML file that is used by Google Earth for simulating Earth-entry fly-by visualizations.
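
    KML itself is a plain XML dialect, so the trajectory-to-KML conversion mentioned above can be approximated without any special framework. The sketch below writes a minimal KML LineString from a list of longitude/latitude/altitude points; it is a generic illustration of the format, not the Sparta API, and the sample points are made up.

    ```python
    # Hedged sketch: writing trajectory points as a KML LineString for Google Earth.
    # A generic illustration of the KML format, not Sparta's converter; the sample
    # points below are placeholders.
    from xml.sax.saxutils import escape

    def trajectory_to_kml(name, points):
        """points are (longitude, latitude, altitude_m) tuples."""
        coords = " ".join(f"{lon},{lat},{alt}" for lon, lat, alt in points)
        return f"""<?xml version="1.0" encoding="UTF-8"?>
    <kml xmlns="http://www.opengis.net/kml/2.2">
      <Document>
        <Placemark>
          <name>{escape(name)}</name>
          <LineString>
            <altitudeMode>absolute</altitudeMode>
            <coordinates>{coords}</coordinates>
          </LineString>
        </Placemark>
      </Document>
    </kml>"""

    entry_points = [(-120.0, 34.0, 120000.0), (-119.5, 34.2, 80000.0), (-119.0, 34.4, 40000.0)]
    with open("trajectory.kml", "w", encoding="utf-8") as fh:
        fh.write(trajectory_to_kml("Example entry trajectory", entry_points))
    ```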

  18. ENGINEERING INSIDE PROCESS OF URBAN RENEWAL AND COMMUNITY MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Barrantes, K.

    2015-06-01

    Full Text Available This paper aims to show the community management process and the interdisciplinary work involved in the Social Action project named "University social work: Calle de la Amargura towards a physical, recreational and cultural renewal", which belongs to the Civil Engineering School of Universidad de Costa Rica (UCR). This initiative began in 2005 as a response to security issues in a place known as "Calle de la Amargura" in Costa Rica, a street that has been stigmatized as an unsafe and run-down spot. Although the place carries a negative image, it has great urban potential as a meeting point for youth because of its closeness to Universidad de Costa Rica. Nevertheless, problems such as drug dealing and violence have created a negative perception among people all around the country. From the beginning, this urban renewal project has sought to improve the perception of "Calle de la Amargura" along three axes: the development of educational and leisure activities, the founding of community working networks, and the improvement of physical conditions. Interdisciplinary groups were created in different areas such as engineering, arts, social sciences, health and education. Today, the plan is a recognized project that involves hard work on the appropriation of public space. Indeed, this paper seeks to present the strong social action content and the community management process of an urban renewal effort led by an Engineering Faculty

  19. Predicting Minimum Control Speed on the Ground (VMCG) and Minimum Control Airspeed (VMCA) of Engine Inoperative Flight Using Aerodynamic Database and Propulsion Database Generators

    Science.gov (United States)

    Hadder, Eric Michael

    There are many computer aided engineering tools and software used by aerospace engineers to design and predict specific parameters of an airplane. These tools help a design engineer predict and calculate such parameters such as lift, drag, pitching moment, takeoff range, maximum takeoff weight, maximum flight range and much more. However, there are very limited ways to predict and calculate the minimum control speeds of an airplane in engine inoperative flight. There are simple solutions, as well as complicated solutions, yet there is neither standard technique nor consistency throughout the aerospace industry. To further complicate this subject, airplane designers have the option of using an Automatic Thrust Control System (ATCS), which directly alters the minimum control speeds of an airplane. This work addresses this issue with a tool used to predict and calculate the Minimum Control Speed on the Ground (VMCG) as well as the Minimum Control Airspeed (VMCA) of any existing or design-stage airplane. With simple line art of an airplane, a program called VORLAX is used to generate an aerodynamic database used to calculate the stability derivatives of an airplane. Using another program called Numerical Propulsion System Simulation (NPSS), a propulsion database is generated to use with the aerodynamic database to calculate both VMCG and VMCA. This tool was tested using two airplanes, the Airbus A320 and the Lockheed Martin C130J-30 Super Hercules. The A320 does not use an Automatic Thrust Control System (ATCS), whereas the C130J-30 does use an ATCS. The tool was able to properly calculate and match known values of VMCG and VMCA for both of the airplanes. The fact that this tool was able to calculate the known values of VMCG and VMCA for both airplanes means that this tool would be able to predict the VMCG and VMCA of an airplane in the preliminary stages of design. This would allow design engineers the ability to use an Automatic Thrust Control System (ATCS) as part

  20. Making a search engine for Indocean - A database of abstracts: An experience

    Digital Repository Service at National Institute of Oceanography (India)

    Tapaswi, M.P.; Haravu, L.J.

    Information Management: Trends and Issues (Festschrift in honour of Prof S. Seetharama). Making a Search Engine for Indocean - A Database of Abstracts: An Experience. Murari P Tapaswi and L J Haravu. Documentation Officer, National Information...

  1. A Qualitative Study of African American Women in Engineering Technology Programs in Community Colleges

    Science.gov (United States)

    Blakley, Jacquelyn

    2016-01-01

    This study examined the experiences of African American women in engineering technology programs in community colleges. There is a lack of representation of African American women in engineering technology programs throughout higher education, especially in community/technical colleges. There is also lack of representation of African American…

  2. SAADA: Astronomical Databases Made Easier

    Science.gov (United States)

    Michel, L.; Nguyen, H. N.; Motch, C.

    2005-12-01

    Many astronomers wish to share datasets with their community but lack the manpower to develop databases with the functionality required for high-level scientific applications. The SAADA project aims at automating the creation and deployment of such databases. A generic but scientifically relevant data model has been designed which allows one to build databases by providing only a limited number of product mapping rules. Databases created by SAADA rely on a relational database supporting JDBC, covered by a Java layer that includes a large amount of generated code. Such databases can simultaneously host spectra, images, source lists and plots. Data are grouped in user-defined collections whose content can be seen as one unique set per data type even if their formats differ. Datasets can be correlated with each other using qualified links. These links help, for example, to capture the nature of a cross-identification (e.g., a distance or a likelihood) or to describe its scientific content (e.g., by associating a spectrum with a catalog entry). The SAADA query engine is based on a language well suited to the data model which can handle constraints on linked data, in addition to classical astronomical queries. These constraints can be applied to the linked objects (number, class and attributes) and/or to the link qualifier values. Databases created by SAADA are accessed through a rich web interface or a Java API. We are currently developing an interoperability module implementing VO protocols.

  3. Development of a Civil Engineer Corps Community Portal Prototype

    National Research Council Canada - National Science Library

    Rader, Neil

    2002-01-01

    The Civil Engineer Corps (CEC) is a relatively small Navy community consisting of approximately 1300 officers. Billet locations for the CEC range from Bahrain, Saudi Arabia to Keflavik, Iceland. CEC officers have a broad range...

  4. Development and application of characteristic database for uranium mining and metallurgy in the library of Beijing Research Institute of Chemical Engineering and Metallurgy

    International Nuclear Information System (INIS)

    Gao Renxi

    2012-01-01

    Beijing Research Institute of Chemical Engineering and Metallurgy (BRICEM) is a multidisciplinary, comprehensive research institute engaged in uranium mining, engineering design and related materials research. After 53 years of research and development, BRICEM has accumulated a wealth of valuable data and resources. By analyzing the actual conditions of BRICEM's technological database, this paper proposes the idea of building a characteristic database for uranium mining and metallurgy. It gives an in-depth analysis of the content design, development status and problems of database development, in order to come up with solutions to these problems, as well as suggestions on future development plans for the characteristic database. (author)

  5. FoodMicrobionet: A database for the visualisation and exploration of food bacterial communities based on network analysis.

    Science.gov (United States)

    Parente, Eugenio; Cocolin, Luca; De Filippis, Francesca; Zotta, Teresa; Ferrocino, Ilario; O'Sullivan, Orla; Neviani, Erasmo; De Angelis, Maria; Cotter, Paul D; Ercolini, Danilo

    2016-02-16

    Amplicon targeted high-throughput sequencing has become a popular tool for the culture-independent analysis of microbial communities. Although the data obtained with this approach are portable and the number of sequences available in public databases is increasing, no tool has been developed yet for the analysis and presentation of data obtained in different studies. This work describes an approach for the development of a database for the rapid exploration and analysis of data on food microbial communities. Data from seventeen studies investigating the structure of bacterial communities in dairy, meat, sourdough and fermented vegetable products, obtained by 16S rRNA gene targeted high-throughput sequencing, were collated and analysed using Gephi, a network analysis software. The resulting database, which we named FoodMicrobionet, was used to analyse nodes and network properties and to build an interactive web-based visualisation. The latter allows the visual exploration of the relationships between Operational Taxonomic Units (OTUs) and samples and the identification of core- and sample-specific bacterial communities. It also provides additional search tools and hyperlinks for the rapid selection of food groups and OTUs and for rapid access to external resources (NCBI taxonomy, digital versions of the original articles). Microbial interaction network analysis was carried out using CoNet on datasets extracted from FoodMicrobionet: the complexity of interaction networks was much lower than that found for other bacterial communities (human microbiome, soil and other environments). This may reflect both a bias in the dataset (which was dominated by fermented foods and starter cultures) and the lower complexity of food bacterial communities. Although some technical challenges exist, and are discussed here, the net result is a valuable tool for the exploration of food bacterial communities by the scientific community and food industry. Copyright © 2015. Published by
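
    FoodMicrobionet itself was assembled and visualised with Gephi, but the underlying idea of an OTU-sample network can be sketched with any graph library. The example below builds a small bipartite graph from a toy abundance table with networkx; the sample and OTU names and the abundance threshold are made up for illustration and do not come from the dataset described above.

    ```python
    # Hedged sketch: an OTU-sample bipartite network from a toy abundance table.
    # Names and the 1% threshold are illustrative; this is not the FoodMicrobionet
    # pipeline itself (which was assembled in Gephi).
    import networkx as nx

    abundances = {                      # sample -> {OTU: relative abundance}
        "cheese_A":  {"Lactococcus": 0.70, "Leuconostoc": 0.15, "Pseudomonas": 0.005},
        "salami_B":  {"Lactobacillus": 0.60, "Staphylococcus": 0.25},
        "sourdough": {"Lactobacillus": 0.80, "Leuconostoc": 0.10},
    }

    g = nx.Graph()
    for sample, otus in abundances.items():
        g.add_node(sample, bipartite="sample")
        for otu, value in otus.items():
            if value >= 0.01:           # drop very rare OTUs; threshold is arbitrary
                g.add_node(otu, bipartite="otu")
                g.add_edge(sample, otu, weight=value)

    # OTUs shared by more than one sample hint at a "core" community.
    core = [n for n, d in g.nodes(data=True)
            if d["bipartite"] == "otu" and g.degree(n) > 1]
    print("core OTUs:", core)
    ```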

  6. Developing an Understanding of Higher Education Science and Engineering Learning Communities

    Science.gov (United States)

    Coll, Richard K.; Eames, Chris

    2008-01-01

    This article sets the scene for this special issue of "Research in Science & Technological Education", dedicated to understanding higher education science and engineering learning communities. We examine what the literature has to say about the nature of, and factors influencing, higher education learning communities. A discussion of…

  7. The Emdros Text Database Engine as a Platform for Persuasive Computing

    DEFF Research Database (Denmark)

    Sandborg-Petersen, Ulrik

    2013-01-01

    This paper describes the nature and scope of Emdros, a text database engine for annotated text. Three case-studies of persuasive learning systems using Emdros as an important architectural component are described, and their status as to participation in the three legs of BJ Fogg's Functional Triad of Persuasive Design is assessed. Various properties of Emdros are discussed, both with respect to competing systems, and with respect to the three case studies. It is argued that these properties together enable Emdros to form part of the foundation for a large class of systems whose primary function involves...

  8. Benefiting Female Students in Science, Math, and Engineering: The Nuts and Bolts of Establishing a WISE (Women in Science and Engineering) Learning Community

    Science.gov (United States)

    Pace, Diana; Witucki, Laurie; Blumreich, Kathleen

    2008-01-01

    This paper describes the rationale and the step by step process for setting up a WISE (Women in Science and Engineering) learning community at one institution. Background information on challenges for women in science and engineering and the benefits of a learning community for female students in these major areas are described. Authors discuss…

  9. "Mr. Database" : Jim Gray and the History of Database Technologies.

    Science.gov (United States)

    Hanwahr, Nils C

    2017-12-01

    Although the widespread use of the term "Big Data" is comparatively recent, it invokes a phenomenon in the developments of database technology with distinct historical contexts. The database engineer Jim Gray, known as "Mr. Database" in Silicon Valley before his disappearance at sea in 2007, was involved in many of the crucial developments since the 1970s that constitute the foundation of exceedingly large and distributed databases. Jim Gray was involved in the development of relational database systems based on the concepts of Edgar F. Codd at IBM in the 1970s before he went on to develop principles of Transaction Processing that enable the parallel and highly distributed performance of databases today. He was also involved in creating forums for discourse between academia and industry, which influenced industry performance standards as well as database research agendas. As a co-founder of the San Francisco branch of Microsoft Research, Gray increasingly turned toward scientific applications of database technologies, e. g. leading the TerraServer project, an online database of satellite images. Inspired by Vannevar Bush's idea of the memex, Gray laid out his vision of a Personal Memex as well as a World Memex, eventually postulating a new era of data-based scientific discovery termed "Fourth Paradigm Science". This article gives an overview of Gray's contributions to the development of database technology as well as his research agendas and shows that central notions of Big Data have been occupying database engineers for much longer than the actual term has been in use.

  10. Russian Academy of Engineering: a strong power for integration of engineering community

    Directory of Open Access Journals (Sweden)

    GUSEV Boris Vladimirovich

    2015-04-01

    Full Text Available The Russian Academy of Engineering is the legal successor of the Engineering Academy of the USSR, founded by 20 ministries and departments of the USSR and RSFSR on May 13, 1990. From the very beginning of its operation, the Engineering Academy of the USSR pursued task-oriented activity aimed at strengthening the links between science and industry and at solving the problems of using the results of basic (fundamental) research and accelerating their adoption by industry. In the post-Soviet period, on the basis of the Academy, the Ministry of Justice of the Russian Federation registered, on December 24, 1991, the All-Russian Public Organization Russian Academy of Engineering (RAE). At present, RAE includes over 1350 full and corresponding members, prominent Russian scientists, engineers and industry organizers; over 200 member societies, including major Russian science and technology organizations; and over 40 regional engineering-technical structures, the departments of RAE. RAE carries out large-scale work on the development of science and technology areas, the creation of new machinery and technologies, and the organization of efficient functioning of the Russian engineering community. During its 25 years of work, about 4.5 thousand new technologies were developed, over 6.5 thousand monographs were published, and over 4 thousand patents were obtained. 209 members of RAE became laureates of the State Prize of the USSR and the Russian Federation, and 376 members of RAE became laureates of the Government Prize of the USSR and the Russian Federation. The annual value of scientific research, project and other work in the area of engineering ranges from 0.5 to 1 billion roubles. This information and reference edition of the Encyclopedia of the Russian Academy of Engineering is dedicated to the 25th anniversary of the Russian Academy of Engineering. The Encyclopedia includes creative biographies of more than 1750 full and corresponding members of RAE, prominent scientists, distinguished engineers and organizers of industry

  11. Assessment of community-submitted ontology annotations from a novel database-journal partnership.

    Science.gov (United States)

    Berardini, Tanya Z; Li, Donghui; Muller, Robert; Chetty, Raymond; Ploetz, Larry; Singh, Shanker; Wensel, April; Huala, Eva

    2012-01-01

    As the scientific literature grows, leading to an increasing volume of published experimental data, so does the need to access and analyze this data using computational tools. The most commonly used method to convert published experimental data on gene function into controlled vocabulary annotations relies on a professional curator, employed by a model organism database or a more general resource such as UniProt, to read published articles and compose annotation statements based on the articles' contents. A more cost-effective and scalable approach capable of capturing gene function data across the whole range of biological research organisms in computable form is urgently needed. We have analyzed a set of ontology annotations generated through collaborations between the Arabidopsis Information Resource and several plant science journals. Analysis of the submissions entered using the online submission tool shows that most community annotations were well supported and the ontology terms chosen were at an appropriate level of specificity. Of the 503 individual annotations that were submitted, 97% were approved and community submissions captured 72% of all possible annotations. This new method for capturing experimental results in a computable form provides a cost-effective way to greatly increase the available body of annotations without sacrificing annotation quality. Database URL: www.arabidopsis.org.

  12. Broadening engineering education: bringing the community in : commentary on "social responsibility in French engineering education: a historical and sociological analysis".

    Science.gov (United States)

    Conlon, Eddie

    2013-12-01

    Two issues of particular interest in the Irish context are (1) the motivation for broadening engineering education to include the humanities, and an emphasis on social responsibility and (2) the process by which broadening can take place. Greater community engagement, arising from a socially-driven model of engineering education, is necessary if engineering practice is to move beyond its present captivity by corporate interests.

  13. 600 MW nuclear power database

    International Nuclear Information System (INIS)

    Cao Ruiding; Chen Guorong; Chen Xianfeng; Zhang Yishu

    1996-01-01

    The 600 MW nuclear power database, based on ORACLE 6.0, consists of three parts: a nuclear power plant database, a nuclear power position database and a nuclear power equipment database. The database holds a great deal of technical data and pictures relating to nuclear power, provided by engineering design units and individuals. The database can assist the designers of nuclear power

  14. Estimating Survival Rates in Engineering for Community College Transfer Students Using Grades in Calculus and Physics

    Science.gov (United States)

    Laugerman, Marcia; Shelley, Mack; Rover, Diane; Mickelson, Steve

    2015-01-01

    This study uses a unique synthesized set of data for community college students transferring to engineering by combining several cohorts of longitudinal data along with transcript-level data, from both the Community College and the University, to measure success rates in engineering. The success rates are calculated by developing Kaplan-Meier…

  15. Reactome graph database: Efficient access to complex pathway data

    Science.gov (United States)

    Korninger, Florian; Viteri, Guilherme; Marin-Garcia, Pablo; Ping, Peipei; Wu, Guanming; Stein, Lincoln; D’Eustachio, Peter

    2018-01-01

    Reactome is a free, open-source, open-data, curated and peer-reviewed knowledgebase of biomolecular pathways. One of its main priorities is to provide easy and efficient access to its high quality curated data. At present, biological pathway databases typically store their contents in relational databases. This limits access efficiency because there are performance issues associated with queries traversing highly interconnected data. The same data in a graph database can be queried more efficiently. Here we present the rationale behind the adoption of a graph database (Neo4j) as well as the new ContentService (REST API) that provides access to these data. The Neo4j graph database and its query language, Cypher, provide efficient access to the complex Reactome data model, facilitating easy traversal and knowledge discovery. The adoption of this technology greatly improved query efficiency, reducing the average query time by 93%. The web service built on top of the graph database provides programmatic access to Reactome data by object oriented queries, but also supports more complex queries that take advantage of the new underlying graph-based data storage. By adopting graph database technology we are providing a high performance pathway data resource to the community. The Reactome graph database use case shows the power of NoSQL database engines for complex biological data types. PMID:29377902
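
    As a rough illustration of the kind of Cypher traversal the record describes, the sketch below runs a query with the official neo4j Python driver against a locally installed copy of the graph database; the connection details are placeholders, and the node label, relationship type and property names are assumptions based on common Reactome conventions that should be verified against the actual schema.

    ```python
    # Hedged sketch: a Cypher query over a locally installed Reactome graph database.
    # Connection details, the Pathway label, the hasEvent relationship and the
    # displayName property are assumptions to be checked against the real schema.
    from neo4j import GraphDatabase

    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

    CYPHER = """
    MATCH (p:Pathway {displayName: $name})-[:hasEvent*1..2]->(e)
    RETURN DISTINCT e.displayName AS event
    LIMIT 20
    """

    with driver.session() as session:
        for record in session.run(CYPHER, name="Apoptosis"):
            print(record["event"])

    driver.close()
    ```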

  16. Reactome graph database: Efficient access to complex pathway data.

    Directory of Open Access Journals (Sweden)

    Antonio Fabregat

    2018-01-01

    Full Text Available Reactome is a free, open-source, open-data, curated and peer-reviewed knowledgebase of biomolecular pathways. One of its main priorities is to provide easy and efficient access to its high quality curated data. At present, biological pathway databases typically store their contents in relational databases. This limits access efficiency because there are performance issues associated with queries traversing highly interconnected data. The same data in a graph database can be queried more efficiently. Here we present the rationale behind the adoption of a graph database (Neo4j) as well as the new ContentService (REST API) that provides access to these data. The Neo4j graph database and its query language, Cypher, provide efficient access to the complex Reactome data model, facilitating easy traversal and knowledge discovery. The adoption of this technology greatly improved query efficiency, reducing the average query time by 93%. The web service built on top of the graph database provides programmatic access to Reactome data by object oriented queries, but also supports more complex queries that take advantage of the new underlying graph-based data storage. By adopting graph database technology we are providing a high performance pathway data resource to the community. The Reactome graph database use case shows the power of NoSQL database engines for complex biological data types.

  17. Reactome graph database: Efficient access to complex pathway data.

    Science.gov (United States)

    Fabregat, Antonio; Korninger, Florian; Viteri, Guilherme; Sidiropoulos, Konstantinos; Marin-Garcia, Pablo; Ping, Peipei; Wu, Guanming; Stein, Lincoln; D'Eustachio, Peter; Hermjakob, Henning

    2018-01-01

    Reactome is a free, open-source, open-data, curated and peer-reviewed knowledgebase of biomolecular pathways. One of its main priorities is to provide easy and efficient access to its high quality curated data. At present, biological pathway databases typically store their contents in relational databases. This limits access efficiency because there are performance issues associated with queries traversing highly interconnected data. The same data in a graph database can be queried more efficiently. Here we present the rationale behind the adoption of a graph database (Neo4j) as well as the new ContentService (REST API) that provides access to these data. The Neo4j graph database and its query language, Cypher, provide efficient access to the complex Reactome data model, facilitating easy traversal and knowledge discovery. The adoption of this technology greatly improved query efficiency, reducing the average query time by 93%. The web service built on top of the graph database provides programmatic access to Reactome data by object oriented queries, but also supports more complex queries that take advantage of the new underlying graph-based data storage. By adopting graph database technology we are providing a high performance pathway data resource to the community. The Reactome graph database use case shows the power of NoSQL database engines for complex biological data types.

  18. A community sharing hands-on centers in engineer's training

    Directory of Open Access Journals (Sweden)

    jean-pierre jpt Taboy

    2006-02-01

    Full Text Available As teachers in technical universities, we must think about engineers' training. We need good applicants and up-to-date hardware and software for hands-on work. No single university has enough money and technical staff to cover the new needs. A community sharing remote hands-on centers could be a solution.

  19. BGD: a database of bat genomes.

    Science.gov (United States)

    Fang, Jianfei; Wang, Xuan; Mu, Shuo; Zhang, Shuyi; Dong, Dong

    2015-01-01

    Bats account for ~20% of mammalian species, and are the only mammals with true powered flight. Because of their specialized phenotypic traits, many studies have been devoted to examining the evolution of bats. To date, several whole-genome sequences of bats have been assembled and annotated; however, a uniform resource for the annotated bat genomes has been unavailable. To make the extensive data associated with the bat genomes accessible to the general biological community, we established a Bat Genome Database (BGD). BGD is an open-access, web-available portal that integrates available data on bat genomes and genes. It hosts data from six bat species, including two megabats and four microbats. Users can query the gene annotations using an efficient search engine, and the portal offers browsable tracks of bat genomes. Furthermore, an easy-to-use phylogenetic analysis tool is provided to facilitate online phylogenetic study of genes. To the best of our knowledge, BGD is the first database of bat genomes. It will extend our understanding of bat evolution and facilitate the analysis of bat sequences. BGD is freely available at: http://donglab.ecnu.edu.cn/databases/BatGenome/.

  20. BGD: a database of bat genomes.

    Directory of Open Access Journals (Sweden)

    Jianfei Fang

    Full Text Available Bats account for ~20% of mammalian species, and are the only mammals with true powered flight. Because of their specialized phenotypic traits, many studies have been devoted to examining the evolution of bats. To date, several whole-genome sequences of bats have been assembled and annotated; however, a uniform resource for the annotated bat genomes has been unavailable. To make the extensive data associated with the bat genomes accessible to the general biological community, we established a Bat Genome Database (BGD). BGD is an open-access, web-available portal that integrates available data on bat genomes and genes. It hosts data from six bat species, including two megabats and four microbats. Users can query the gene annotations using an efficient search engine, and the portal offers browsable tracks of bat genomes. Furthermore, an easy-to-use phylogenetic analysis tool is provided to facilitate online phylogenetic study of genes. To the best of our knowledge, BGD is the first database of bat genomes. It will extend our understanding of bat evolution and facilitate the analysis of bat sequences. BGD is freely available at: http://donglab.ecnu.edu.cn/databases/BatGenome/.

  1. JT-60 database system, 2

    International Nuclear Information System (INIS)

    Itoh, Yasuhiro; Kurihara, Kenichi; Kimura, Toyoaki.

    1987-07-01

    The JT-60 central control system, ''ZENKEI'', collects the control and instrumentation data relevant to each discharge together with device status data for plant monitoring. The discharge-related engineering data amount to about 3 Mbytes per shot. The ''ZENKEI'' control system, which consists of seven minicomputers for on-line real-time control, has insufficient capacity for handling such a large amount of data for physical and engineering analysis. In order to solve this problem, it was planned to establish the experimental database on the front-end processor (FEP) of the general-purpose mainframe computer in the JAERI Computer Center. A database management system (DBMS) has therefore been developed for creating the database during the shot interval. The engineering data are shipped from ''ZENKEI'' to the FEP through a dedicated communication line after each shot. A hierarchical data model has been adopted for this database, which consists of data files organized in a tree structure keyed by system, discharge type and shot number. The JT-60 DBMS provides packages of data-handling subroutines for interfacing the database with users' application programs. Subroutine packages supporting graphic processing and access control functions for database security are also provided in this DBMS. (author)
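
    The tree-structured, three-key organization described above (system, discharge type, shot number) can be pictured with a small in-memory sketch. The Python below only illustrates that keying scheme; it is not the JAERI DBMS or its Fortran subroutine packages, and the key values and record contents are invented.

      from collections import defaultdict

      class ShotDatabase:
          """Toy hierarchical store keyed by system / discharge type / shot number."""

          def __init__(self):
              # system -> discharge type -> shot number -> record
              self._tree = defaultdict(lambda: defaultdict(dict))

          def put(self, system, discharge_type, shot, record):
              self._tree[system][discharge_type][shot] = record

          def get(self, system, discharge_type, shot):
              return self._tree[system][discharge_type][shot]

          def shots(self, system, discharge_type):
              """Shot numbers stored under one branch of the tree."""
              return sorted(self._tree[system][discharge_type])

      db = ShotDatabase()
      db.put("diagnostics", "ohmic", 1234, {"size_mbytes": 3})  # invented record
      print(db.shots("diagnostics", "ohmic"))                   # -> [1234]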

  2. Teaching `community engagement' in engineering education for international development: Integration of an interdisciplinary social work curriculum

    Science.gov (United States)

    Gilbert, Dorie J.; Lehman Held, Mary; Ellzey, Janet L.; Bailey, William T.; Young, Laurie B.

    2015-05-01

    This article reviews the literature on challenges faced by engineering faculty in educating their students on community-engaged, sustainable technical solutions in developing countries. We review a number of approaches to increasing teaching modules on social and community components of international development education, from adding capstone courses and educational track seminars to integrating content from other disciplines, particularly the social sciences. After summarising recent pedagogical strategies to increase content on community-focused development, we present a case study of how one engineering programme incorporates social work students and faculty to infuse strategies for community engagement in designing and implementing student-led global engineering development projects. We outline how this interdisciplinary pedagogical approach teaches students from the two disciplines to work together in addressing power balances, economic and social issues and overall sustainability of international development projects.

  3. Proceedings of the 3. Canada-US rock mechanics symposium and 20. Canadian rock mechanics symposium : rock engineering 2009 : rock engineering in difficult conditions

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2009-07-01

    This conference provided a forum for geologists, mining operators and engineers to discuss the application of rock mechanics in engineering designs. Members of the scientific and engineering communities discussed challenges and interdisciplinary elements involved in rock engineering. New geological models and methods of characterizing rock masses and ground conditions in underground engineering projects were discussed along with excavation and mining methods. Papers presented at the conference discussed the role of rock mechanics in forensic engineering. Geophysics, geomechanics, and risk-based approaches to rock engineering designs were reviewed. Issues related to high pressure and high flow water conditions were discussed, and new rock physics models designed to enhance hydrocarbon recovery were presented. The conference featured 84 presentations, of which 9 have been catalogued separately for inclusion in this database. tabs., figs.

  4. Imprinting Community College Computer Science Education with Software Engineering Principles

    Science.gov (United States)

    Hundley, Jacqueline Holliday

    2012-01-01

    Although the two-year curriculum guide includes coverage of all eight software engineering core topics, the computer science courses taught in Alabama community colleges limit student exposure to the programming, or coding, phase of the software development lifecycle and offer little experience in requirements analysis, design, testing, and…

  5. IntegromeDB: an integrated system and biological search engine.

    Science.gov (United States)

    Baitaluk, Michael; Kozhenkov, Sergey; Dubinina, Yulia; Ponomarenko, Julia

    2012-01-19

    With the growth of biological data in volume and heterogeneity, web search engines have become key tools for researchers. However, general-purpose search engines are not specialized for the search of biological data. Here, we present an approach to developing a biological web search engine based on Semantic Web technologies and demonstrate its implementation for retrieving gene- and protein-centered knowledge. The engine is available at http://www.integromedb.org. The IntegromeDB search engine allows scanning data on gene regulation, gene expression, protein-protein interactions, pathways, metagenomics, mutations, diseases, and other gene- and protein-related data that are automatically retrieved from publicly available databases and web pages using biological ontologies. To perfect the resource design and usability, we welcome and encourage community feedback.

  6. Developing an Inhouse Database from Online Sources.

    Science.gov (United States)

    Smith-Cohen, Deborah

    1993-01-01

    Describes the development of an in-house bibliographic database by the U.S. Army Corps of Engineers Cold Regions Research and Engineering Laboratory on arctic wetlands research. Topics discussed include planning; identifying relevant search terms and commercial online databases; downloading citations; criteria for software selection; management…

  7. The Neotoma Paleoecology Database: An International Community-Curated Resource for Paleoecological and Paleoenvironmental Data

    Science.gov (United States)

    Williams, J. W.; Grimm, E. C.; Ashworth, A. C.; Blois, J.; Charles, D. F.; Crawford, S.; Davis, E.; Goring, S. J.; Graham, R. W.; Miller, D. A.; Smith, A. J.; Stryker, M.; Uhen, M. D.

    2017-12-01

    The Neotoma Paleoecology Database supports global change research at the intersection of geology and ecology by providing a high-quality, community-curated data repository for paleoecological data. These data are widely used to study biological responses and feedbacks to past environmental change at local to global scales. The Neotoma data model is flexible and can store multiple kinds of fossil, biogeochemical, or physical variables measured from sedimentary archives. Data additions to Neotoma are growing and include >3.5 million observations, >16,000 datasets, and >8,500 sites. Dataset types include fossil pollen, vertebrates, diatoms, ostracodes, macroinvertebrates, plant macrofossils, insects, testate amoebae, geochronological data, and the recently added organic biomarkers, stable isotopes, and specimen-level data. Neotoma data can be found and retrieved in multiple ways, including the Explorer map-based interface, a RESTful Application Programming Interface, the neotoma R package, and digital object identifiers. Neotoma has partnered with the Paleobiology Database to produce a common data portal for paleobiological data, called the Earth Life Consortium. A new embargo management system is designed to allow investigators to put their data into Neotoma and then make use of Neotoma's value-added services. Neotoma's distributed scientific governance model is flexible and scalable, with many open pathways for welcoming new members, data contributors, stewards, and research communities. As the volume and variety of scientific data grow, community-curated data resources such as Neotoma have become foundational infrastructure for big data science.
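
    The abstract lists several programmatic access routes, including a RESTful API and the neotoma R package. The snippet below sketches the REST route from Python with the requests library; the host name, endpoint path and parameters follow the publicly documented Neotoma API as far as this sketch assumes them, so treat them as assumptions, and the site-name filter is only an example.

      import requests

      # Assumed Neotoma API v2.0 endpoint; parameters are illustrative only.
      BASE = "https://api.neotomadb.org/v2.0/data"

      def find_sites(name_fragment, limit=5):
          """Return a few site records whose names contain the given fragment."""
          resp = requests.get(
              f"{BASE}/sites",
              params={"sitename": f"%{name_fragment}%", "limit": limit},
              timeout=30,
          )
          resp.raise_for_status()
          return resp.json().get("data", [])

      for site in find_sites("Lake"):
          print(site.get("siteid"), site.get("sitename"))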

  8. GenderMedDB: an interactive database of sex and gender-specific medical literature.

    Science.gov (United States)

    Oertelt-Prigione, Sabine; Gohlke, Björn-Oliver; Dunkel, Mathias; Preissner, Robert; Regitz-Zagrosek, Vera

    2014-01-01

    Searches for sex and gender-specific publications are complicated by the absence of a specific algorithm within search engines and by the lack of adequate archives to collect the retrieved results. We previously addressed this issue by initiating the first systematic archive of medical literature containing sex and/or gender-specific analyses. This initial collection has now been greatly enlarged and re-organized as a free user-friendly database with multiple functions: GenderMedDB (http://gendermeddb.charite.de). GenderMedDB retrieves the included publications from the PubMed database. Manuscripts containing sex and/or gender-specific analysis are continuously screened and the relevant findings organized systematically into disciplines and diseases. Publications are furthermore classified by research type, subject and participant numbers. More than 11,000 abstracts are currently included in the database, after screening more than 40,000 publications. The main functions of the database include searches by publication data or content analysis based on pre-defined classifications. In addition, registrants are enabled to upload relevant publications, access descriptive publication statistics and interact in an open user forum. Overall, GenderMedDB offers the advantages of a discipline-specific search engine as well as the functions of a participative tool for the gender medicine community.

  9. Women in Science and Engineering Building Community Online

    Science.gov (United States)

    Kleinman, Sharon S.

    This article explores the constructs of online community and online social support and discusses a naturalistic case study of a public, unmoderated, online discussion group dedicated to issues of interest to women in science and engineering. The benefits of affiliation with OURNET (a pseudonym) were explored through participant observation over a 4-year period, telephone interviews with 21 subscribers, and content analysis of e-mail messages posted to the discussion group during a 125-day period. The case study findings indicated that through affiliation with the online discussion group, women in traditionally male-dominated fields expanded their professional networks, increased their knowledge, constituted and validated positive social identities, bolstered their self-confidence, obtained social support and information from people with a wide range of experiences and areas of expertise, and, most significantly, found community.

  10. Database on wind characteristics - contents of database bank

    Energy Technology Data Exchange (ETDEWEB)

    Larsen, G.C.; Hansen, K.S.

    2004-06-01

    The main objective of IEA R and D Wind Annex XVII - Database on Wind Characteristics - has been to provide wind energy planners, designers and researchers, as well as the international wind engineering community in general, with a source of actual wind field data (time series and resource data) observed in a wide range of different wind climates and terrain types. In connection with an extension of the initial Annex period, the scope of the continuation was widened to also include support for the international wind turbine standardisation efforts. The project partners are Sweden, Norway, U.S.A., The Netherlands and Denmark, with Denmark as the Operating Agent. The reporting of the continuation of Annex XVII falls into two separate parts. Part one accounts in detail for the available data in the established database bank, and part two describes various data analyses performed with the overall purpose of improving the design load cases relevant to wind turbine structures. The present report constitutes the second part of the Annex XVII reporting. Both fatigue and extreme load aspects are dealt with, however, with the main emphasis on the latter. The work has been supported by The Ministry of Environment and Energy, Danish Energy Agency, The Netherlands Agency for Energy and the Environment (NOVEM), The Norwegian Water Resources and Energy Administration (NVE), The Swedish National Energy Administration (STEM) and The Government of the United States of America. (au)

  11. TRENDS: The aeronautical post-test database management system

    Science.gov (United States)

    Bjorkman, W. S.; Bondi, M. J.

    1990-01-01

    TRENDS, an engineering-test database operating system developed by NASA to support rotorcraft flight tests, is described. Capabilities and characteristics of the system are presented, with examples of its use in recalling and analyzing rotorcraft flight-test data from a TRENDS database. The importance of system user-friendliness in gaining users' acceptance is stressed, as is the importance of integrating supporting narrative data with numerical data in engineering-test databases. Considerations relevant to the creation and maintenance of flight-test databases are discussed, and TRENDS' solutions to database management problems are described. Requirements, constraints, and other considerations which led to the system's configuration are discussed and some of the lessons learned during TRENDS' development are presented. Potential applications of TRENDS to a wide range of aeronautical and other engineering tests are identified.

  12. Influence of Precollege Experience on Self-Concept among Community College Students in Science, Mathematics, and Engineering

    Science.gov (United States)

    Starobin, Soko S.; Laanan, Frankie Santos

    Female and minority students have historically been underrepresented in the field of science, mathematics, and engineering at colleges and universities. Although a plethora of research has focused on students enrolled in 4-year colleges or universities, limited research addresses the factors that influence gender differences in community college students in science, mathematics, and engineering. Using a target population of 1,599 aspirants in science, mathematics, and engineering majors in public community colleges, this study investigates the determinants of self-concept by examining a hypothetical structural model. The findings suggest that background characteristics, high school academic performance, and attitude toward science have unique contributions to the development of self-concept among female community college students. The results add to the literature by providing new theoretical constructs and the variables that predict students' self-concept.

  13. Assessment of community noise for a medium-range airplane with open-rotor engines

    Science.gov (United States)

    Kopiev, V. F.; Shur, M. L.; Travin, A. K.; Belyaev, I. V.; Zamtfort, B. S.; Medvedev, Yu. V.

    2017-11-01

    Community noise of a hypothetical medium-range airplane equipped with open-rotor engines is assessed by numerical modeling of the aeroacoustic characteristics of an isolated open rotor with the simplest blade geometry. Various open-rotor configurations are considered at constant thrust, and the lowest-noise configuration is selected. A two-engine medium-range airplane at known thrust of bypass turbofan engines at different segments of the takeoff-landing trajectory is considered, after the replacement of those engines by the open-rotor engines. It is established that a medium-range airplane with two open-rotor engines meets the requirements of Chapter 4 of the ICAO standard with a significant margin. It is shown that airframe noise makes a significant contribution to the total noise of an airplane with open-rotor engines at landing.

  14. Astronomical databases of Nikolaev Observatory

    Science.gov (United States)

    Protsyuk, Y.; Mazhaev, A.

    2008-07-01

    Several astronomical databases were created at Nikolaev Observatory during the last years. The databases are built using MySQL and PHP scripts. They are available on the NAO website http://www.mao.nikolaev.ua.

  15. Restoring rocky intertidal communities: Lessons from a benthic macroalgal ecosystem engineer

    International Nuclear Information System (INIS)

    Bellgrove, Alecia; McKenzie, Prudence F.; Cameron, Hayley; Pocklington, Jacqueline B.

    2017-01-01

    As coastal population growth increases globally, effective waste management practices are required to protect biodiversity. Water authorities are under increasing pressure to reduce the impact of sewage effluent discharged into the coastal environment and restore disturbed ecosystems. We review the role of benthic macroalgae as ecosystem engineers and focus particularly on the temperate Australasian fucoid Hormosira banksii as a case study for rocky intertidal restoration efforts. Research focussing on the roles of ecosystem engineers is lagging behind restoration research of ecosystem engineers. As such, management decisions are being made without a sound understanding of the ecology of ecosystem engineers. For successful restoration of rocky intertidal shores it is important that we assess the thresholds of engineering traits (discussed herein) and the environmental conditions under which they are important. - Highlights: • Fucoid algae can be important ecosystem engineers in rocky reef ecosystems • Sewage-effluent disposal negatively affects fucoids and associated communities • Restoring fucoid populations can improve biodiversity of degraded systems • Clarifying the roles of fucoids in ecosystem function can improve restoration efforts • Thresholds of engineering traits and associated environmental conditions important

  16. The analysis of long-term changes in plant communities using large databases: the effect of stratified resampling.

    NARCIS (Netherlands)

    Haveman, R.; Janssen, J.A.M.

    2008-01-01

    Question: Releves in large phytosociological databases used for analysing long-term changes in plant communities are biased towards easily accessible places and species-rich stands. How does this bias influence trend analysis of floristic composition within a priori determined vegetation types and

  17. Application of database management system for data-ware of work on power engineering problems in the National Nuclear Center of the Republic of Kazakhstan

    International Nuclear Information System (INIS)

    Afanas'eva, T.Yu.

    2001-01-01

    This article describes some development trends of state-of-the-art data management systems. It also describes databases on the status of world power engineering and on power engineering in Kazakhstan, created in the National Nuclear Center of the Republic of Kazakhstan. (author)

  18. Navigating Community College Transfer in Science, Technical, Engineering, and Mathematics Fields

    Science.gov (United States)

    Packard, Becky Wai-Ling; Gagnon, Janelle L.; Senas, Arleen J.

    2012-01-01

    Given financial barriers facing community college students today, and workforce projections in science, technical, engineering, and math (STEM) fields, the costs of unnecessary delays while navigating transfer pathways are high. In this phenomenological study, we analyzed the delay experiences of 172 students (65% female) navigating community…

  19. Acoustic Database for Turbofan Engine Core-Noise Sources, Volume I

    Science.gov (United States)

    Gordon, Grant

    2015-01-01

    In this program, a database of dynamic temperature and dynamic pressure measurements was acquired inside the core of a TECH977 turbofan engine to support investigations of indirect combustion noise. Dynamic temperature and pressure measurements were recorded for engine gas dynamics up to temperatures of 3100 degrees Fahrenheit and transient responses as high as 1000 hertz. These measurements were made at the entrance of the high pressure turbine (HPT) and at the entrance and exit of the low pressure turbine (LPT). Measurements were made at two circumferential clocking positions. In the combustor and inter-turbine duct (ITD), measurements were made at two axial locations to enable the exploration of time delays. The dynamic temperature measurements were made using dual thin-wire thermocouple probes. The dynamic pressure measurements were made using semi-infinite probes. Prior to the engine test, a series of bench, oven, and combustor rig tests were conducted to characterize the performance of the dual-wire temperature probes and to define and characterize the data acquisition systems. A measurement solution for acquiring dynamic temperature and pressure data on the engine was defined. A suite of hardware modifications was designed to incorporate the dynamic temperature and pressure instrumentation into the TECH977 engine. In particular, a probe actuation system was developed to protect the delicate temperature probes during engine startup and transients in order to maximize sensor life. A set of temperature probes was procured and the TECH977 engine was assembled with the suite of new and modified hardware. The engine was tested at four steady-state operating speeds, with repeats. Dynamic pressure and temperature data were acquired at each condition for at least one minute. At the two highest power settings, temperature data could not be obtained at the forward probe locations since the mean temperatures exceeded the capability of the probes. The temperature data

  20. TRENDS: A flight test relational database user's guide and reference manual

    Science.gov (United States)

    Bondi, M. J.; Bjorkman, W. S.; Cross, J. L.

    1994-01-01

    This report is designed to be a user's guide and reference manual for users intending to access rotorcraft test data via TRENDS, the relational database system which was developed as a tool for the aeronautical engineer with no programming background. This report has been written to assist novice and experienced TRENDS users. TRENDS is a complete system for retrieving, searching, and analyzing both numerical and narrative data, and for displaying time history and statistical data in graphical and numerical formats. This manual provides a 'guided tour' and a 'user's guide' for the new and intermediate-skilled users. Examples of the use of each menu item within TRENDS are provided in the Menu Reference section of the manual, including full coverage of TIMEHIST, one of the key tools. This manual is written around the XV-15 Tilt Rotor database, but does include an appendix on the UH-60 Blackhawk database. This user's guide and reference manual establishes a referable source for the research community and augments NASA TM-101025, TRENDS: The Aeronautical Post-Test, Database Management System, Jan. 1990, written by the same authors.

  1. Network-based Database Course

    DEFF Research Database (Denmark)

    Nielsen, J.N.; Knudsen, Morten; Nielsen, Jens Frederik Dalsgaard

    A course in database design and implementation has been designed, utilizing existing network facilities. The course is an elementary course for students of computer engineering. Its purpose is to give the students a theoretical database knowledge as well as practical experience with design...... and implementation. A tutorial relational database and the students' self-designed databases are implemented on the UNIX system of Aalborg University, thus giving the teacher the possibility of live demonstrations in the lecture room, and the students the possibility of interactive learning in their working rooms...

  2. Solar Sail Propulsion Technology Readiness Level Database

    Science.gov (United States)

    Adams, Charles L.

    2004-01-01

    The NASA In-Space Propulsion Technology (ISPT) Projects Office has been sponsoring 2 solar sail system design and development hardware demonstration activities over the past 20 months. Able Engineering Company (AEC) of Goleta, CA is leading one team and L'Garde, Inc. of Tustin, CA is leading the other team. Component, subsystem and system fabrication and testing have been completed successfully. The goal of these activities is to advance the technology readiness level (TRL) of solar sail propulsion from 3 towards 6 by 2006. These activities will culminate in the deployment and testing of 20-meter solar sail system ground demonstration hardware in the 30-meter-diameter thermal-vacuum chamber at NASA Glenn's Plum Brook facility in 2005. This paper will describe the features of a computer database system that documents the results of the solar sail development activities to date. Illustrations of the hardware components and systems, test results, analytical models, relevant space environment definitions and current TRL assessments, as stored and manipulated within the database, are presented. This database could serve as a central repository for all data related to the advancement of solar sail technology sponsored by the ISPT, providing an up-to-date assessment of the TRL of this technology. Current plans are to eventually make the database available to the solar sail community through the Space Transportation Information Network (STIN).

  3. Construction of a bibliographic information database and a web directory for the nuclear science and engineering

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Jeong Hoon; Kim, Tae Whan; Lee, Ji Ho; Chun, Young Chun; Yu, An Na

    2005-11-15

    The objective of this project is to construct a bibliographic information database and a web directory in the nuclear field. Its construction is timely and important, because nuclear science and technology, being a giant and complex engineering field, has a considerable effect on many other sciences and technologies. We aimed to firmly build up a basis for efficient management of the bibliographic information database and the web directory in the nuclear field. The results of this project achieved this year are as follows: first, construction of the bibliographic information database in the nuclear field (target: 1,500 titles; research reports: 1,000 titles, full-text reports: 250 titles, full-text articles: 250 titles). Second, completion of the web directory in the nuclear field by using SWING (total achieved: 2,613 titles). We plan to actively provide information to the general public interested in the nuclear field and to experts in the field through this bibliographic information database on KAERI's home page, KAERI's electronic library and other related sites, as well as through participation in various seminars and meetings related to the nuclear field.

  4. DB90: A Fortran Callable Relational Database Routine for Scientific and Engineering Computer Programs

    Science.gov (United States)

    Wrenn, Gregory A.

    2005-01-01

    This report describes a database routine called DB90 which is intended for use with scientific and engineering computer programs. The software is written in the Fortran 90/95 programming language standard with file input and output routines written in the C programming language. These routines should be completely portable to any computing platform and operating system that has Fortran 90/95 and C compilers. DB90 allows a program to supply relation names and up to 5 integer key values to uniquely identify each record of each relation. This permits the user to select records or retrieve data in any desired order.
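
    To make the keying scheme described above concrete, here is a small Python analogue in which each relation is addressed by name plus up to five integer key values that uniquely identify a record. This is only a conceptual sketch of the idea, not the DB90 Fortran interface itself; the relation name and records are invented.

      class KeyedStore:
          """Toy analogue of relations addressed by name plus up to 5 integer keys."""

          MAX_KEYS = 5

          def __init__(self):
              self._relations = {}  # relation name -> {key tuple: record}

          def put(self, relation, keys, record):
              if not 1 <= len(keys) <= self.MAX_KEYS:
                  raise ValueError("between 1 and 5 integer key values are required")
              self._relations.setdefault(relation, {})[tuple(keys)] = record

          def get(self, relation, keys):
              return self._relations[relation][tuple(keys)]

          def scan(self, relation):
              """Yield (keys, record) pairs in sorted key order."""
              yield from sorted(self._relations.get(relation, {}).items())

      store = KeyedStore()
      store.put("loads", (1, 3), {"value": 42.0})  # invented relation and records
      store.put("loads", (1, 1), {"value": 17.5})
      for keys, rec in store.scan("loads"):
          print(keys, rec)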

  5. A high-energy nuclear database proposal

    International Nuclear Information System (INIS)

    Brown, D.A.; Vogt, R.; UC Davis, CA

    2006-01-01

    We propose to develop a high-energy heavy-ion experimental database and make it accessible to the scientific community through an on-line interface. This database will be searchable and cross-indexed with relevant publications, including published detector descriptions. Since this database will be a community resource, it requires the high-energy nuclear physics community's financial and manpower support. This database should eventually contain all published data from the Bevalac, AGS and SPS to RHIC and LHC energies, proton-proton to nucleus-nucleus collisions as well as other relevant systems, and all measured observables. Such a database would have tremendous scientific payoff as it makes systematic studies easier and allows simpler benchmarking of theoretical models to a broad range of old and new experiments. Furthermore, there is a growing need for compilations of high-energy nuclear data for applications including stockpile stewardship, technology development for inertial confinement fusion and target and source development for upcoming facilities such as the Next Linear Collider. To enhance the utility of this database, we propose periodically performing evaluations of the data and summarizing the results in topical reviews. (author)

  6. NIMS structural materials databases and cross search engine - MatNavi

    Energy Technology Data Exchange (ETDEWEB)

    Yamazaki, M.; Xu, Y.; Murata, M.; Tanaka, H.; Kamihira, K.; Kimura, K. [National Institute for Materials Science, Tokyo (Japan)

    2007-06-15

    The Materials Database Station (MDBS) of the National Institute for Materials Science (NIMS) owns the world's largest Internet materials database for academic and industrial purposes, which is composed of twelve databases: five concerning structural materials, five concerning basic physical properties, one for superconducting materials and one for polymers. All of these databases are open to Internet access at the website http://mits.nims.go.jp/en. Online tools for predicting properties of polymers and composite materials are also available. The NIMS structural materials databases are composed of the structural materials data sheet online version (creep, fatigue, corrosion and space-use materials strength), a microstructure database for crept materials, a pressure vessel materials database and CCT diagrams for welding. (orig.)

  7. Abstract databases in nuclear medicine; New database for articles not indexed in PubMed

    International Nuclear Information System (INIS)

    Ugrinska, A.; Mustafa, B.

    2004-01-01

    Full text: Abstract databases available on the Internet free of charge were searched for nuclear medicine content. The only comprehensive database found was PubMed. An analysis of nuclear medicine journals included in PubMed was performed. PubMed contains 25 medical journals that have the phrase 'nuclear medicine', in different languages, in their title. Searching the Internet with the search engine 'Google', we found four more peer-reviewed journals with the phrase 'nuclear medicine' in their title. In addition, we are fully aware that many articles related to nuclear medicine are published in national medical journals devoted to general medicine. For example, in the year 2000, colleagues from the Institute of Pathophysiology and Nuclear Medicine, Skopje, Macedonia published 10 articles, none of which could be found on PubMed. This suggests that a large amount of research work is not accessible to people professionally involved in nuclear medicine. Therefore, we have created a database framework for abstracts that cannot be found in PubMed. The database is organized in a user-friendly manner. There are two main sections: 'post an abstract' and 'search for abstracts'. Authors of articles are expected to submit their work in the section 'post an abstract'. During the submission process authors should fill in separate boxes with the title in English, the title in the original language, the country of origin, the journal name, volume, issue and pages. Authors should choose up to five keywords from a drop-down menu. If the abstract is not published in English, authors are encouraged to translate it. The section 'search for abstracts' is searchable by author, keywords, and words and phrases in the English title. The abstract database currently resides on an MS Access back-end, with a front-end in ASP (Active Server Pages). In the future, we plan to migrate the database to MS SQL Server, which should provide a faster and more reliable framework for hosting a

  8. Database Independent Migration of Objects into an Object-Relational Database

    CERN Document Server

    Ali, A; Munir, K; Waseem-Hassan, M; Willers, I

    2002-01-01

    CERN's (European Organization for Nuclear Research) WISDOM project [1] deals with the replication of data between homogeneous sources in a Wide Area Network (WAN) using the Extensible Markup Language (XML). The last phase of the WISDOM (Wide-area, database Independent Serialization of Distributed Objects for data Migration) project [2] indicates that the future direction for this work is to incorporate heterogeneous sources, as opposed to the homogeneous sources described by [3]. This work will become essential for the CERN community once the need arises to transfer their legacy data to some source other than Objectivity [4]. Oracle 9i - an Object-Relational Database (including support for abstract data types, ADTs) - appears to be a potential candidate for the physics event store in the CERN CMS experiment, as suggested by [4] & [5]. Consequently this database has been selected for study. As a result of this work the HEP community will get a tool for migrating their data from Objectivity to Oracle9i.

  9. Connecting Urban Students with Engineering Design: Community-Focused, Student-Driven Projects

    Science.gov (United States)

    Parker, Carolyn; Kruchten, Catherine; Moshfeghian, Audrey

    2017-01-01

    The STEM Achievement in Baltimore Elementary Schools (SABES) program is a community partnership initiative that includes both in-school and afterschool STEM education for grades 3-5. It was designed to broaden participation and achievement in STEM education by bringing science and engineering to the lives of low-income urban elementary school…

  10. Building community partnerships to implement the new Science and Engineering component of the NGSS

    Science.gov (United States)

    Burke, M. P.; Linn, F.

    2013-12-01

    Partnerships between science professionals in the community and professional educators can help facilitate the adoption of the Next Generation Science Standards (NGSS). Classroom teachers have been trained in content areas but may be less familiar with the new required Science and Engineering component of the NGSS. This presentation will offer a successful model for building classroom and community partnerships and highlight the particulars of a collaborative lesson taught to Rapid City High School students. Local environmental issues provided a framework for learning activities that encompassed several Crosscutting Concepts and Science and Engineering Practices for a lesson focused on Life Science Ecosystems: Interactions, Energy, and Dynamics. Specifically, students studied local water quality impairments, collected and measured stream samples, and analyzed their data. A visiting hydrologist supplied additional water quality data from ongoing studies to extend the students' datasets both temporally and spatially, helping students to identify patterns and draw conclusions based on their findings. Context was provided through discussions of how science professionals collect and analyze data and communicate results to the public, using an example of a recent bacterial contamination of a local stream. Working with Rapid City High School students added additional challenges due to their high truancy and poverty rates. Creating a relevant classroom experience was especially critical for engaging these at-risk youth and demonstrating that science is a viable career path for them. Connecting science in the community with the problem-solving nature of engineering is a critical component of NGSS, and this presentation will elucidate strategies to help prospective partners maneuver through the challenges that we've encountered. We recognize that the successful implementation of the NGSS is a challenge that requires the support of the scientific community. This partnership

  11. Proposal for a High Energy Nuclear Database

    International Nuclear Information System (INIS)

    Brown, David A.; Vogt, Ramona

    2005-01-01

    We propose to develop a high-energy heavy-ion experimental database and make it accessible to the scientific community through an on-line interface. This database will be searchable and cross-indexed with relevant publications, including published detector descriptions. Since this database will be a community resource, it requires the high-energy nuclear physics community's financial and manpower support. This database should eventually contain all published data from Bevalac and AGS to RHIC to CERN-LHC energies, proton-proton to nucleus-nucleus collisions as well as other relevant systems, and all measured observables. Such a database would have tremendous scientific payoff as it makes systematic studies easier and allows simpler benchmarking of theoretical models to a broad range of old and new experiments. Furthermore, there is a growing need for compilations of high-energy nuclear data for applications including stockpile stewardship, technology development for inertial confinement fusion and target and source development for upcoming facilities such as the Next Linear Collider. To enhance the utility of this database, we propose periodically performing evaluations of the data and summarizing the results in topical reviews

  12. Database Description - The Rice Growth Monitoring for The Phenotypic Functional Analysis | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available Creator/affiliation: Department of Biosciences, Faculty of Science and Engineering, Teikyo University (Tomoko Shinomura). Contact address: 1-1 Toyosatodai, Utsunomiya-shi, Tochigi 320-8551, Japan. Database classification: Plant databases.

  13. Study of event sequence database for a nuclear power domain

    International Nuclear Information System (INIS)

    Kusumi, Yoshiaki

    1998-01-01

    A retrieval engine developed to extract event sequences from an accident information database using a time-series retrieval formula expressed with ordered retrieval terms is explored. This engine outputs not only sequences which completely match the time-series retrieval formula, but also sequences which approximately match the formula (fuzzy retrieval). An event sequence database was constructed in which records consist of three ordered parameters, namely the causal event, the process and the result. The database was then used to assess the feasibility of this engine, and favorable results were obtained. (author)
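
    The retrieval described above matches an ordered list of retrieval terms against stored event sequences and also returns approximate (fuzzy) matches. The sketch below shows one simple way such ordered matching could be scored in Python; it is a toy reconstruction of the idea rather than the engine described in the paper, and the example sequences are invented.

      def ordered_match_score(query_terms, sequence):
          """Fraction of query terms found in the sequence in the required order."""
          matched, pos = 0, 0
          for term in query_terms:
              try:
                  pos = sequence.index(term, pos) + 1
                  matched += 1
              except ValueError:
                  continue  # term missing here; keep scanning for the rest
          return matched / len(query_terms)

      # Invented event sequences of (causal event, process, result) entries.
      database = [
          ["pump trip", "flow reduction", "reactor scram"],
          ["valve failure", "pressure rise", "relief valve opens"],
      ]
      query = ["pump trip", "reactor scram"]

      for seq in database:
          score = ordered_match_score(query, seq)
          label = "exact" if score == 1.0 else "fuzzy" if score > 0 else "no match"
          print(f"{label:8s} score={score:.2f}  {seq}")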

  14. A community effort to construct a gravity database for the United States and an associated Web portal

    Science.gov (United States)

    Keller, Gordon R.; Hildenbrand, T.G.; Kucks, R.; Webring, M.; Briesacher, A.; Rujawitz, K.; Hittleman, A.M.; Roman, D.R.; Winester, D.; Aldouri, R.; Seeley, J.; Rasillo, J.; Torres, R.; Hinze, W. J.; Gates, A.; Kreinovich, V.; Salayandia, L.

    2006-01-01

    Potential field data (gravity and magnetic measurements) are both useful and cost-effective tools for many geologic investigations. Significant amounts of these data are traditionally in the public domain. A new magnetic database for North America was released in 2002, and as a result, a cooperative effort between government agencies, industry, and universities to compile an upgraded digital gravity anomaly database, grid, and map for the conterminous United States was initiated and is the subject of this paper. This database is being crafted into a data system that is accessible through a Web portal. This data system features the database, software tools, and convenient access. The Web portal will enhance the quality and quantity of data contributed to the gravity database that will be a shared community resource. The system's totally digital nature ensures that it will be flexible so that it can grow and evolve as new data, processing procedures, and modeling and visualization tools become available. Another goal of this Web-based data system is facilitation of the efforts of researchers and students who wish to collect data from regions currently not represented adequately in the database. The primary goal of upgrading the United States gravity database and this data system is to provide more reliable data that support societal and scientific investigations of national importance. An additional motivation is the international intent to compile an enhanced North American gravity database, which is critical to understanding regional geologic features, the tectonic evolution of the continent, and other issues that cross national boundaries. © 2006 Geological Society of America. All rights reserved.

  15. A Case for Database Filesystems

    Energy Technology Data Exchange (ETDEWEB)

    Adams, P A; Hax, J C

    2009-05-13

    Data intensive science is offering new challenges and opportunities for Information Technology and traditional relational databases in particular. Database filesystems offer the potential to store Level Zero data and analyze Level 1 and Level 3 data within the same database system [2]. Scientific data is typically composed of both unstructured files and scalar data. Oracle SecureFiles is a new database filesystem feature in Oracle Database 11g that is specifically engineered to deliver high performance and scalability for storing unstructured or file data inside the Oracle database. SecureFiles presents the best of both the filesystem and the database worlds for unstructured content. Data stored inside SecureFiles can be queried or written at performance levels comparable to that of traditional filesystems while retaining the advantages of the Oracle database.
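
    As a generic illustration of the database-filesystem idea sketched above, namely keeping unstructured file content and scalar metadata queryable through the same database interface, the snippet below uses Python's built-in sqlite3 module rather than Oracle SecureFiles; the schema and the file are invented and nothing here reflects the Oracle 11g feature itself.

      import sqlite3

      # Toy 'database filesystem': file bytes and scalar metadata share one table.
      conn = sqlite3.connect(":memory:")
      conn.execute("""
          CREATE TABLE files (
              path    TEXT PRIMARY KEY,
              size    INTEGER,
              content BLOB
          )
      """)

      payload = b"example level-zero data"  # invented file content
      conn.execute(
          "INSERT INTO files (path, size, content) VALUES (?, ?, ?)",
          ("/raw/run001.dat", len(payload), payload),
      )

      # Metadata queries and content retrieval go through the same SQL interface.
      for path, size in conn.execute("SELECT path, size FROM files WHERE size > 10"):
          print(path, size)
      (content,) = conn.execute(
          "SELECT content FROM files WHERE path = ?", ("/raw/run001.dat",)
      ).fetchone()
      print(content[:7])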

  16. Dynamic tables: an architecture for managing evolving, heterogeneous biomedical data in relational database management systems.

    Science.gov (United States)

    Corwin, John; Silberschatz, Avi; Miller, Perry L; Marenco, Luis

    2007-01-01

    Data sparsity and schema evolution issues affecting clinical informatics and bioinformatics communities have led to the adoption of vertical or object-attribute-value-based database schemas to overcome limitations posed when using conventional relational database technology. This paper explores these issues and discusses why biomedical data are difficult to model using conventional relational techniques. The authors propose a solution to these obstacles based on a relational database engine using a sparse, column-store architecture. The authors provide benchmarks comparing the performance of queries and schema-modification operations using three different strategies: (1) the standard conventional relational design; (2) past approaches used by biomedical informatics researchers; and (3) their sparse, column-store architecture. The performance results show that their architecture is a promising technique for storing and processing many types of data that are not handled well by the other two semantic data models.
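
    The vertical, entity-attribute-value layout mentioned above, and the sparseness it accommodates, can be illustrated with a short example using Python's sqlite3 module; the table and attribute names are invented, and this plain EAV layout is only a stand-in for the sparse column-store engine the authors actually benchmark.

      import sqlite3

      conn = sqlite3.connect(":memory:")

      # Vertical (entity-attribute-value) layout: new attributes need no schema change.
      conn.execute("CREATE TABLE eav (entity_id INTEGER, attribute TEXT, value TEXT)")
      rows = [
          (1, "species", "Rattus norvegicus"),  # invented biomedical annotations
          (1, "brain_region", "hippocampus"),
          (2, "species", "Mus musculus"),
          (2, "recording_khz", "20"),           # attribute absent for entity 1
      ]
      conn.executemany("INSERT INTO eav VALUES (?, ?, ?)", rows)

      # Pivot a few attributes back into a row-per-entity view for analysis.
      query = """
      SELECT entity_id,
             MAX(CASE WHEN attribute = 'species'      THEN value END) AS species,
             MAX(CASE WHEN attribute = 'brain_region' THEN value END) AS brain_region
      FROM eav
      GROUP BY entity_id
      """
      for row in conn.execute(query):
          print(row)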

  17. The community FabLab platform: applications and implications in biomedical engineering.

    Science.gov (United States)

    Stephenson, Makeda K; Dow, Douglas E

    2014-01-01

    Skill development in science, technology, engineering and math (STEM) education presents one of the most formidable challenges of modern society. The Community FabLab platform presents a viable solution. Each FabLab contains a suite of modern computer numerical control (CNC) equipment, electronics and computing hardware and design, programming, computer aided design (CAD) and computer aided machining (CAM) software. FabLabs are community and educational resources and open to the public. Development of STEM-based workforce skills such as digital fabrication and advanced manufacturing can be enhanced using this platform. Particularly notable is the potential of the FabLab platform in STEM education. The active learning environment engages and supports a diversity of learners, while the iterative learning that is supported by the FabLab rapid prototyping platform facilitates depth of understanding, creativity, innovation and mastery. The product- and project-based learning that occurs in FabLabs develops in the student a personal sense of accomplishment, self-awareness, and command of the material and technology. This helps build the interest and confidence necessary to excel in STEM and throughout life. Finally, the introduction and use of relevant technologies at every stage of the education process ensures the technical familiarity and broad knowledge base needed for work in STEM-based fields. Biomedical engineering education strives to cultivate broad technical adeptness, creativity, interdisciplinary thought, and an ability to form deep conceptual understanding of complex systems. The FabLab platform is well designed to enhance biomedical engineering education.

  18. Fire test database

    International Nuclear Information System (INIS)

    Lee, J.A.

    1989-01-01

    This paper describes a project recently completed for EPRI by Impell. The purpose of the project was to develop a reference database of fire tests performed on non-typical fire rated assemblies. The database is designed for use by utility fire protection engineers to locate test reports for power plant fire rated assemblies. As utilities prepare to respond to Information Notice 88-04, the database will identify utilities, vendors or manufacturers who have specific fire test data. The database contains fire test report summaries for 729 tested configurations. For each summary, a contact is identified from whom a copy of the complete fire test report can be obtained. Five types of configurations are included: doors, dampers, seals, wraps and walls. The database is computerized. One version for IBM; one for Mac. Each database is accessed through user-friendly software which allows adding, deleting, browsing, etc. through the database. There are five major database files. One each for the five types of tested configurations. The contents of each provides significant information regarding the test method and the physical attributes of the tested configuration. 3 figs

  19. Proposal for a high-energy nuclear database

    International Nuclear Information System (INIS)

    Brown, D.A.; Vogt, R.

    2006-01-01

    We propose to develop a high-energy heavy-ion experimental database and make it accessible to the scientific community through an on-line interface. This database will be searchable and cross-indexed with relevant publications, including published detector descriptions. Since this database will be a community resource, it requires the high-energy nuclear physics community's financial and manpower support. This database should eventually contain all published data from Bevalac, AGS and SPS to RHIC and LHC energies, proton-proton to nucleus-nucleus collisions as well as other relevant systems, and all measured observables. Such a database would have tremendous scientific payoff as it makes systematic studies easier and allows simpler benchmarking of theoretical models to a broad range of old and new experiments. Furthermore, there is a growing need for compilations of high-energy nuclear data for applications including stockpile stewardship, technology development for inertial confinement fusion and target and source development for upcoming facilities such as the Next Linear Collider. To enhance the utility of this database, we propose periodically performing evaluations of the data and summarizing the results in topical reviews. (author)

  20. Proposal for a High Energy Nuclear Database

    International Nuclear Information System (INIS)

    Brown, D A; Vogt, R

    2005-01-01

    The authors propose to develop a high-energy heavy-ion experimental database and make it accessible to the scientific community through an on-line interface. This database will be searchable and cross-indexed with relevant publications, including published detector descriptions. Since this database will be a community resource, it requires the high-energy nuclear physics community's financial and manpower support. This database should eventually contain all published data from Bevalac, AGS and SPS to RHIC and CERN-LHC energies, proton-proton to nucleus-nucleus collisions as well as other relevant systems, and all measured observables. Such a database would have tremendous scientific payoff as it makes systematic studies easier and allows simpler benchmarking of theoretical models to a broad range of old and new experiments. Furthermore, there is a growing need for compilations of high-energy nuclear data for applications including stockpile stewardship, technology development for inertial confinement fusion and target and source development for upcoming facilities such as the Next Linear Collider. To enhance the utility of this database, they propose periodically performing evaluations of the data and summarizing the results in topical reviews

  1. Teaching "Community Engagement" in Engineering Education for International Development: Integration of an Interdisciplinary Social Work Curriculum

    Science.gov (United States)

    Gilbert, Dorie J.; Held, Mary Lehman; Ellzey, Janet L.; Bailey, William T.; Young, Laurie B.

    2015-01-01

    This article reviews the literature on challenges faced by engineering faculty in educating their students on community-engaged, sustainable technical solutions in developing countries. We review a number of approaches to increasing teaching modules on social and community components of international development education, from adding capstone…

  2. Investigation on structuring the human body function database; Shintai kino database no kochiku ni kansuru chosa kenkyu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-03-01

    Based on the concept of a human life engineering database, a study was made of how to technically construct such a database suited to elderly people in an aging society. It was proposed that a human life engineering database for the elderly should be prepared to serve the development and design of life technology to be applied in the aging society. A method of structuring the database was established through case studies of 'bathing' and 'going out', selected as actions in the daily life of elderly people. As a result of the study, it was proposed that a human body function database for the elderly should be prepared as an R and D base for life technology in the aged society. Based on the above proposal, a master plan was mapped out to structure this database, with a concrete method studied for putting it into action. At the first stage of the investigation, documentation was carried out using existing documentary databases. Enterprises were also interviewed for the investigation. About 500 documents pertaining to the functions of elderly people were extracted, with many vague points not yet clarified. The investigation will restart in the next fiscal year. 4 refs., 38 figs., 30 tabs.

  3. Community | College of Engineering & Applied Science

    Science.gov (United States)

  4. Ariadne: a database search engine for identification and chemical analysis of RNA using tandem mass spectrometry data.

    Science.gov (United States)

    Nakayama, Hiroshi; Akiyama, Misaki; Taoka, Masato; Yamauchi, Yoshio; Nobe, Yuko; Ishikawa, Hideaki; Takahashi, Nobuhiro; Isobe, Toshiaki

    2009-04-01

    We present here a method to correlate tandem mass spectra of sample RNA nucleolytic fragments with an RNA nucleotide sequence in a DNA/RNA sequence database, thereby allowing tandem mass spectrometry (MS/MS)-based identification of RNA in biological samples. Ariadne, a unique web-based database search engine, identifies RNA by two probability-based evaluation steps of MS/MS data. In the first step, the software evaluates the matches between the masses of product ions generated by MS/MS of an RNase digest of sample RNA and those calculated from a candidate nucleotide sequence in a DNA/RNA sequence database, which then predicts the nucleotide sequences of these RNase fragments. In the second step, the candidate sequences are mapped for all RNA entries in the database, and each entry is scored for a function of occurrences of the candidate sequences to identify a particular RNA. Ariadne can also predict post-transcriptional modifications of RNA, such as methylation of nucleotide bases and/or ribose, by estimating mass shifts from the theoretical mass values. The method was validated with MS/MS data of RNase T1 digests of in vitro transcripts. It was applied successfully to identify an unknown RNA component in a tRNA mixture and to analyze post-transcriptional modification in yeast tRNA(Phe-1).
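
    The first evaluation step described above compares the observed product-ion masses of RNase fragments with masses computed from candidate sequences in a DNA/RNA database. The snippet below shows a minimal version of that matching step in Python; the candidate fragments, their masses and the tolerance are illustrative numbers only, and the scoring is far simpler than Ariadne's probability-based evaluation.

      def match_fragments(observed_masses, theoretical, tol=0.02):
          """Pair each observed fragment mass with candidates within tol (Da)."""
          hits = {}
          for mass in observed_masses:
              hits[mass] = [
                  seq for seq, m in theoretical.items() if abs(m - mass) <= tol
              ]
          return hits

      # Illustrative numbers only: candidate RNase T1 fragment masses that a real
      # engine would compute from nucleotide sequences in the database.
      theoretical = {"AUCG": 1253.18, "AACG": 1276.21, "UUCG": 1230.15}
      observed = [1253.17, 1230.16]

      for mass, candidates in match_fragments(observed, theoretical).items():
          print(mass, "->", candidates or "no candidate within tolerance")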

  5. The Microbial Database for Danish wastewater treatment plants with nutrient removal (MiDas-DK) - a tool for understanding activated sludge population dynamics and community stability.

    Science.gov (United States)

    Mielczarek, A T; Saunders, A M; Larsen, P; Albertsen, M; Stevenson, M; Nielsen, J L; Nielsen, P H

    2013-01-01

    Since 2006 more than 50 Danish full-scale wastewater treatment plants with nutrient removal have been investigated in a project called 'The Microbial Database for Danish Activated Sludge Wastewater Treatment Plants with Nutrient Removal (MiDas-DK)'. Comprehensive sets of samples have been collected, analyzed and associated with extensive operational data from the plants. The community composition was analyzed by quantitative fluorescence in situ hybridization (FISH) supported by 16S rRNA amplicon sequencing and deep metagenomics. MiDas-DK has been a powerful tool to study the complex activated sludge ecosystems, and, besides many scientific articles on fundamental issues on mixed communities encompassing nitrifiers, denitrifiers, bacteria involved in P-removal, hydrolysis, fermentation, and foaming, the project has provided results that can be used to optimize the operation of full-scale plants and carry out trouble-shooting. A core microbial community has been defined comprising the majority of microorganisms present in the plants. Time series have been established, providing an overview of temporal variations in the different plants. Interestingly, although most microorganisms were present in all plants, there seemed to be plant-specific factors that controlled the population composition thereby keeping it unique in each plant over time. Statistical analyses of FISH and operational data revealed some correlations, but less than expected. MiDas-DK (www.midasdk.dk) will continue over the next years and we hope the approach can inspire others to make similar projects in other parts of the world to get a more comprehensive understanding of microbial communities in wastewater engineering.

  6. SU-D-BRB-02: Combining a Commercial Autoplanning Engine with Database Dose Predictions to Further Improve Plan Quality

    Energy Technology Data Exchange (ETDEWEB)

    Robertson, SP; Moore, JA; Hui, X; Cheng, Z; McNutt, TR [Johns Hopkins University, Baltimore, MD (United States); DeWeese, TL; Tran, P; Quon, H [John Hopkins Hospital, Baltimore, MD (United States); Bzdusek, K [Philips, Fitchburg, WI (United States); Kumar, P [Philips India Limited, Bangalore, Karnataka (India)

    2016-06-15

    Purpose: Database dose predictions and a commercial autoplanning engine both improve treatment plan quality in different but complementary ways. The combination of these planning techniques is hypothesized to further improve plan quality. Methods: Four treatment plans were generated for each of 10 head and neck (HN) and 10 prostate cancer patients, including Plan-A: traditional IMRT optimization using clinically relevant default objectives; Plan-B: traditional IMRT optimization using database dose predictions; Plan-C: autoplanning using default objectives; and Plan-D: autoplanning using database dose predictions. One optimization was used for each planning method. Dose distributions were normalized to 95% of the planning target volume (prostate: 8000 cGy; HN: 7000 cGy). Objectives used in plan optimization and analysis were the larynx (25%, 50%, 90%), left and right parotid glands (50%, 85%), spinal cord (0%, 50%), rectum and bladder (0%, 20%, 50%, 80%), and left and right femoral heads (0%, 70%). Results: All objectives except larynx 25% and 50% resulted in statistically significant differences between plans (Friedman's χ² ≥ 11.2; p ≤ 0.011). Maximum doses to the rectum (Plans A-D: 8328, 8395, 8489, 8537 cGy) and bladder (Plans A-D: 8403, 8448, 8527, 8569 cGy) were significantly increased. All other significant differences reflected a decrease in dose. Plans B-D were significantly different from Plan-A for 3, 17, and 19 objectives, respectively. Plans C-D were also significantly different from Plan-B for 8 and 13 objectives, respectively. In one case (cord 50%), Plan-D provided significantly lower dose than Plan-C (p = 0.003). Conclusion: Combining database dose predictions with a commercial autoplanning engine resulted in significant plan quality differences for the greatest number of objectives. This translated to plan quality improvements in most cases, although special care may be needed for maximum dose constraints. Further evaluation is warranted.
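
    To make the quoted statistic concrete, the sketch below shows how one dose objective could be compared across the four plan types with a Friedman test; the dose values are invented and scipy is assumed to be available, so this is only an illustration of the test, not the study's analysis code.

        # Illustrative Friedman test across four planning methods (Plans A-D)
        # for a single dose objective; the dose values below are made up.
        from scipy.stats import friedmanchisquare

        # Each list holds one objective's dose (cGy) for the same 10 patients.
        plan_a = [2600, 2550, 2700, 2650, 2580, 2620, 2590, 2610, 2640, 2570]
        plan_b = [2500, 2480, 2620, 2560, 2490, 2530, 2510, 2540, 2550, 2470]
        plan_c = [2400, 2390, 2510, 2450, 2410, 2430, 2420, 2440, 2460, 2380]
        plan_d = [2350, 2340, 2480, 2400, 2360, 2380, 2370, 2390, 2410, 2330]

        stat, p = friedmanchisquare(plan_a, plan_b, plan_c, plan_d)
        print(f"Friedman chi-square = {stat:.1f}, p = {p:.4f}")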

  7. The Zebrafish Model Organism Database (ZFIN)

    Data.gov (United States)

    U.S. Department of Health & Human Services — ZFIN serves as the zebrafish model organism database. It aims to: a) be the community database resource for the laboratory use of zebrafish, b) develop and support...

  8. Database Design and Management in Engineering Optimization.

    Science.gov (United States)

    1988-02-01

    ... method in the mid-1950s, along with modern digital computers, have made ... scientific and engineering applications. The paper highlights the differences ... application software can call standard subroutines from the DBMS library to define ... is continuously redefined in an application program, DDL must have ... operations ... type data usually encountered in engineering applications. GFDGT: computes the number of digits needed to display ... A user

  9. The Software Engineering Community at DLR: How we got where we are

    OpenAIRE

    Haupt, Carina; Schlauch, Tobias

    2017-01-01

    Sustainable software and reproducible results are becoming vital in research. Awareness of the topic, as well as the required knowledge, cannot be taken for granted. We show how scientists at DLR are encouraged to form a self-reliant software engineering community and how we supported this by providing information resources and opportunities for collaboration and exchange.

  10. Memetic Engineering as a Basis for Learning in Robotic Communities

    Science.gov (United States)

    Truszkowski, Walter F.; Rouff, Christopher; Akhavannik, Mohammad H.

    2014-01-01

    This paper represents a new contribution to the growing literature on memes. While most memetic thought has focused on its implications for humans, this paper speculates on the role that memetics can have in robotic communities. Though speculative, the concepts are based on proven advanced multi-agent technology work done at NASA Goddard Space Flight Center and Lockheed Martin. The paper is composed of the following sections: 1) An introductory section which gently leads the reader into the realm of memes. 2) A section on memetic engineering which addresses some of the central issues with robotic learning via memes. 3) A section on related work which very concisely identifies three other areas of memetic applications, i.e., news, psychology, and the study of human behaviors. 4) A section which discusses the proposed approach for realizing memetic behaviors in robots and robotic communities. 5) A section which presents an exploration scenario for a community of robots working on Mars. 6) A final section which discusses future research which will be required to realize a comprehensive science of robotic memetics.

  11. Educating the Engineer for Sustainable Community Development

    Science.gov (United States)

    Munoz, D. R.

    2008-12-01

    More than ever before, we are confronting the challenges of limited resources (water, food, energy and mineral), while also facing complex challenges with the environment and related social unrest. Resource access problems are exacerbated by multi-scale geopolitical instability. We seek a balance that will allow profit but also leave a world fit for our children to inherit. Many are working with small groups to make positive change through finding solutions that address these challenges. In fact, some say that in sum, it is the largest human movement that has ever existed. In this talk I will share our experiences to alleviate vulnerabilities for populations of humans in need while working with students, corporate entities and non governmental organizations. Our main focus is to educate a new cadre of engineers that have an enhanced awareness of and better communication skills for a different cultural environment than the one in which they were raised and are hungry to seek new opportunities to serve humanity at a basic level. The results of a few of the more than forty humanitarian engineering projects completed since 2003 will be superimposed on a theoretical framework for sustainable community development. This will be useful information to those seeking a social corporate position of responsibility and a world that more closely approaches a sustainable equilibrium.

  12. Gene composer: database software for protein construct design, codon engineering, and gene synthesis.

    Science.gov (United States)

    Lorimer, Don; Raymond, Amy; Walchli, John; Mixon, Mark; Barrow, Adrienne; Wallace, Ellen; Grice, Rena; Burgin, Alex; Stewart, Lance

    2009-04-21

    To improve efficiency in high throughput protein structure determination, we have developed a database software package, Gene Composer, which facilitates the information-rich design of protein constructs and their codon engineered synthetic gene sequences. With its modular workflow design and numerous graphical user interfaces, Gene Composer enables researchers to perform all common bio-informatics steps used in modern structure guided protein engineering and synthetic gene engineering. An interactive Alignment Viewer allows the researcher to simultaneously visualize sequence conservation in the context of known protein secondary structure, ligand contacts, water contacts, crystal contacts, B-factors, solvent accessible area, residue property type and several other useful property views. The Construct Design Module enables the facile design of novel protein constructs with altered N- and C-termini, internal insertions or deletions, point mutations, and desired affinity tags. The modifications can be combined and permuted into multiple protein constructs, and then virtually cloned in silico into defined expression vectors. The Gene Design Module uses a protein-to-gene algorithm that automates the back-translation of a protein amino acid sequence into a codon engineered nucleic acid gene sequence according to a selected codon usage table with minimal codon usage threshold, defined G:C% content, and desired sequence features achieved through synonymous codon selection that is optimized for the intended expression system. The gene-to-oligo algorithm of the Gene Design Module plans out all of the required overlapping oligonucleotides and mutagenic primers needed to synthesize the desired gene constructs by PCR, and for physically cloning them into selected vectors by the most popular subcloning strategies. We present a complete description of Gene Composer functionality, and an efficient PCR-based synthetic gene assembly procedure with mis-match specific endonuclease
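
    As a hedged illustration of the back-translation idea described above (not Gene Composer's actual algorithm), the sketch below selects synonymous codons from a toy usage table while excluding codons below a minimal usage threshold.

        # Hedged sketch of back-translation with a codon-usage threshold.
        # The usage table is a toy fragment with E. coli-like frequencies,
        # not data taken from Gene Composer.
        import random

        CODON_USAGE = {
            "L": {"CTG": 0.50, "TTA": 0.13, "TTG": 0.13, "CTT": 0.10, "CTC": 0.10, "CTA": 0.04},
            "K": {"AAA": 0.76, "AAG": 0.24},
        }

        def back_translate(protein, min_usage=0.10, seed=0):
            """Pick a synonymous codon for each residue, excluding rare codons."""
            rng = random.Random(seed)
            gene = []
            for aa in protein:
                table = {c: f for c, f in CODON_USAGE[aa].items() if f >= min_usage}
                codons, freqs = zip(*table.items())
                gene.append(rng.choices(codons, weights=freqs, k=1)[0])
            return "".join(gene)

        print(back_translate("KLLK"))  # a 12-nt sequence of weighted synonymous codons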

  13. Gene Composer: database software for protein construct design, codon engineering, and gene synthesis

    Directory of Open Access Journals (Sweden)

    Mixon Mark

    2009-04-01

    Full Text Available Abstract Background To improve efficiency in high throughput protein structure determination, we have developed a database software package, Gene Composer, which facilitates the information-rich design of protein constructs and their codon engineered synthetic gene sequences. With its modular workflow design and numerous graphical user interfaces, Gene Composer enables researchers to perform all common bio-informatics steps used in modern structure guided protein engineering and synthetic gene engineering. Results An interactive Alignment Viewer allows the researcher to simultaneously visualize sequence conservation in the context of known protein secondary structure, ligand contacts, water contacts, crystal contacts, B-factors, solvent accessible area, residue property type and several other useful property views. The Construct Design Module enables the facile design of novel protein constructs with altered N- and C-termini, internal insertions or deletions, point mutations, and desired affinity tags. The modifications can be combined and permuted into multiple protein constructs, and then virtually cloned in silico into defined expression vectors. The Gene Design Module uses a protein-to-gene algorithm that automates the back-translation of a protein amino acid sequence into a codon engineered nucleic acid gene sequence according to a selected codon usage table with minimal codon usage threshold, defined G:C% content, and desired sequence features achieved through synonymous codon selection that is optimized for the intended expression system. The gene-to-oligo algorithm of the Gene Design Module plans out all of the required overlapping oligonucleotides and mutagenic primers needed to synthesize the desired gene constructs by PCR, and for physically cloning them into selected vectors by the most popular subcloning strategies. Conclusion We present a complete description of Gene Composer functionality, and an efficient PCR-based synthetic gene

  14. Humanitarian engineering in the engineering curriculum

    Science.gov (United States)

    Vandersteen, Jonathan Daniel James

    There are many opportunities to use engineering skills to improve the conditions for marginalized communities, but our current engineering education praxis does not instruct on how engineering can be a force for human development. In a time of great inequality and exploitation, the desire to work with the impoverished is prevalent, and it has been proposed to adjust the engineering curriculum to include a larger focus on human needs. This proposed curriculum philosophy is called humanitarian engineering. Professional engineers have played an important role in the modern history of power, wealth, economic development, war, and industrialization; they have also contributed to infrastructure, sanitation, and energy sources necessary to meet human need. Engineers are currently at an important point in time when they must look back on their history in order to be more clear about how to move forward. The changing role of the engineer in history puts into context the call for a more balanced, community-centred engineering curriculum. Qualitative, phenomenographic research was conducted in order to understand the need, opportunity, benefits, and limitations of a proposed humanitarian engineering curriculum. The potential role of the engineer in marginalized communities and details regarding what a humanitarian engineering program could look like were also investigated. Thirty-two semi-structured research interviews were conducted in Canada and Ghana in order to collect a pool of understanding before a phenomenographic analysis resulted in five distinct outcome spaces. The data suggests that an effective curriculum design will include teaching technical skills in conjunction with instructing about issues of social justice, social location, cultural awareness, root causes of marginalization, a broader understanding of technology, and unlearning many elements about the role of the engineer and the dominant economic/political ideology. Cross-cultural engineering development

  15. A comparison of three design tree based search algorithms for the detection of engineering parts constructed with CATIA V5 in large databases

    Directory of Open Access Journals (Sweden)

    Robin Roj

    2014-07-01

    Full Text Available This paper presents three different search engines for the detection of CAD parts in large databases. The analysis of the contained information is performed by exporting the data stored in the structure trees of the CAD models. A preparation program generates one XML file for every model, which, in addition to the data of the structure tree, also contains certain physical properties of each part. The first search engine specializes in the discovery of standard parts, like screws or washers. The second program uses certain user input as search parameters, and therefore has the ability to perform personalized queries. The third one compares one given reference part with all parts in the database, and locates files that are identical, or similar to, the reference part. All approaches run automatically, and have the analysis of the structure tree in common. Files constructed with CATIA V5, and search engines written in Python, have been used for the implementation. The paper also includes a short comparison of the advantages and disadvantages of each program, as well as a performance test.
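
    The comparison of exported structure trees can be pictured with the short sketch below. The XML element and attribute names are assumptions made for illustration, since the record does not reproduce the XML schema; each exported file is reduced to a set of (part name, mass) pairs and ranked against a reference part by set overlap.

        # Hedged sketch of comparing exported structure trees; the XML element and
        # attribute names ("part", "name", "mass") are invented for illustration.
        import xml.etree.ElementTree as ET
        from pathlib import Path

        def tree_signature(xml_path):
            """Collect (part name, rounded mass) pairs from one exported model."""
            root = ET.parse(xml_path).getroot()
            return {(p.get("name"), round(float(p.get("mass", "0")), 3))
                    for p in root.iter("part")}

        def similarity(ref_sig, other_sig):
            """Jaccard similarity between two structure-tree signatures."""
            if not ref_sig and not other_sig:
                return 1.0
            return len(ref_sig & other_sig) / len(ref_sig | other_sig)

        def rank_against_reference(reference_xml, database_dir):
            """Rank every exported model in a directory against the reference part."""
            ref = tree_signature(reference_xml)
            scores = {f.name: similarity(ref, tree_signature(f))
                      for f in Path(database_dir).glob("*.xml")}
            return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

        # Usage (paths are placeholders):
        # print(rank_against_reference("reference_part.xml", "exported_models/"))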

  16. Integrated Community Energy Systems: engineering analysis and design bibliography. [368 citations

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M.; Sapienza, G.R.

    1979-05-01

    This bibliography cites 368 documents that may be helpful in the planning, analysis, and design of Integrated Community Energy Systems. It has been prepared for use primarily by engineers and others involved in the development and implementation of ICES concepts. These documents include products of a number of Government research, development, demonstration, and commercialization programs; selected studies and references from the literature of various technical societies and institutions; and other selected material. The key programs which have produced cited reports are the Department of Energy Community Systems Program (DOE/CSP), the Department of Housing and Urban Development Modular Integrated Utility Systems Program (HUD/MIUS), and the Department of Health, Education, and Welfare Integrated Utility Systems Program (HEW/IUS). The cited documents address experience gained both in the U.S. and in other countries. Several general engineering references and bibliographies pertaining to technologies or analytical methods that may be helpful in the analysis and design of ICES are also included. The body of relevant literature is rapidly growing and future updates are therefore planned. Each citation includes identifying information, a source, descriptive information, and an abstract. The citations are indexed both by subjects and authors, and the subject index is extensively cross-referenced to simplify its use.

  17. Databases of the marine metagenomics

    KAUST Repository

    Mineta, Katsuhiko

    2015-10-28

    The metagenomic data obtained from marine environments is significantly useful for understanding marine microbial communities. In comparison with the conventional amplicon-based approach to metagenomics, the recent shotgun sequencing-based approach has become a powerful tool that provides an efficient way of grasping the diversity of the entire microbial community at a sampling point in the sea. However, this approach accelerates the accumulation of metagenome data as well as increasing data complexity. Moreover, when the metagenomic approach is used for monitoring temporal changes of marine environments at multiple seawater locations, metagenomic data accumulate at an enormous speed. Because this situation has started to become a reality at many marine research institutions and stations all over the world, it is clear that data management and analysis will be confronted by so-called Big Data issues, such as how the database can be constructed in an efficient way and how useful knowledge should be extracted from a vast amount of data. In this review, we summarize all the major databases of marine metagenomes that are currently publicly available, noting that there is no database devoted exclusively to marine metagenomes, and that only six metagenome databases include marine metagenome data, an unexpectedly small number. We also extend our explanation to what we call reference databases, which will be useful for constructing a marine metagenome database as well as for complementing it with important information. We then point out a number of challenges to be conquered in constructing the marine metagenome database.

  18. Computer Application Of Object Oriented Database Management ...

    African Journals Online (AJOL)

    Object Oriented Systems (OOS) have been widely adopted in software engineering because of their superiority with respect to data extensibility. The present trend in the software engineering process (SEP) towards concurrent computing raises novel concerns for the facilities and technology available in database ...

  19. The Competence Readiness of the Electrical Engineering Vocational High School Teachers in Manado towards the ASEAN Economic Community Blueprint in 2025

    Directory of Open Access Journals (Sweden)

    Fid Jantje Tasiam

    2017-08-01

    Full Text Available This paper presents the competence readiness of electrical engineering vocational high school teachers in Manado towards the ASEAN Economic Community blueprint in 2025. The objective of this study is to describe the competence readiness of these teachers with respect to the blueprint. The study used quantitative and qualitative approaches, with statistical analysis of the quantitative data and inductive analysis of the qualitative data. Forty-six electrical engineering vocational high school teachers in Manado were observed. The results show that the teachers' readiness in the pedagogical, professional, personality, and social competencies was 13.04%, 19.56%, 19.56%, and 19.56%, respectively. These results are still far from the focus of the ASEAN Economic Community blueprint in 2025, so the competencies need to be improved through in-house training, internship programs, school partnerships, distance learning, tiered and special training, short courses in educational institutions, internal coaching by schools, discussion of educational issues, workshops, research and community service, textbook writing, learning media making, and the creation of technology and art.

  20. TWRS technical baseline database manager definition document

    International Nuclear Information System (INIS)

    Acree, C.D.

    1997-01-01

    This document serves as a guide for using the TWRS Technical Baseline Database Management Systems Engineering (SE) support tool in performing SE activities for the Tank Waste Remediation System (TWRS). This document will provide a consistent interpretation of the relationships between the TWRS Technical Baseline Database Management software and the present TWRS SE practices. The Database Manager currently utilized is the RDD-1000 System manufactured by the Ascent Logic Corporation. In other documents, the term RDD-1000 may be used interchangeably with TWRS Technical Baseline Database Manager

  1. From document to database: modernizing requirements management

    International Nuclear Information System (INIS)

    Giajnorio, J.; Hamilton, S.

    2007-01-01

    The creation, communication, and management of design requirements are central to the successful completion of any large engineering project, both technically and commercially. Design requirements in the Canadian nuclear industry are typically numbered lists in multiple documents created using word processing software. As an alternative, GE Nuclear Products implemented a central requirements management database for a major project at Bruce Power. The database configured the off-the-shelf software product, Telelogic Doors, to GE's requirements structure. This paper describes the advantages realized by this scheme. Examples include traceability from customer requirements through to test procedures, concurrent engineering, and automated change history. (author)
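
    The traceability benefit described above can be sketched with a small relational schema linking customer requirements, derived requirements, and test procedures; the table and column names are invented for illustration and are not the actual Doors configuration used on the project.

        # Minimal sketch of requirements traceability in a relational database;
        # schema names are invented, not the Telelogic Doors configuration.
        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
        CREATE TABLE requirement (
            req_id   TEXT PRIMARY KEY,
            text     TEXT NOT NULL,
            parent   TEXT REFERENCES requirement(req_id)  -- customer -> derived
        );
        CREATE TABLE test_procedure (
            test_id  TEXT PRIMARY KEY,
            req_id   TEXT REFERENCES requirement(req_id),
            title    TEXT NOT NULL
        );
        """)
        con.executemany("INSERT INTO requirement VALUES (?, ?, ?)", [
            ("CR-1", "Customer requirement", None),
            ("DR-1", "Derived design requirement", "CR-1"),
        ])
        con.execute("INSERT INTO test_procedure VALUES ('TP-1', 'DR-1', 'Verify DR-1')")

        # Trace from a customer requirement down to its test procedures.
        rows = con.execute("""
            SELECT r.req_id, t.test_id
            FROM requirement r JOIN test_procedure t ON t.req_id = r.req_id
            WHERE r.parent = 'CR-1' OR r.req_id = 'CR-1'
        """).fetchall()
        print(rows)  # [('DR-1', 'TP-1')]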

  2. Engineering Review Information System

    Science.gov (United States)

    Grems, III, Edward G. (Inventor); Henze, James E. (Inventor); Bixby, Jonathan A. (Inventor); Roberts, Mark (Inventor); Mann, Thomas (Inventor)

    2015-01-01

    A disciplinal engineering review computer information system and method by defining a database of disciplinal engineering review process entities for an enterprise engineering program, opening a computer supported engineering item based upon the defined disciplinal engineering review process entities, managing a review of the opened engineering item according to the defined disciplinal engineering review process entities, and closing the opened engineering item according to the opened engineering item review.

  3. The EREC-STRESA database. Internet application

    International Nuclear Information System (INIS)

    Davydov, M.V.; Annunziato, A.

    2004-01-01

    A considerable amount of experimental data in the field of NPP safety and reliability has been produced and gathered at the Electrogorsk Research and Engineering Centre on NPPs Safety. In order to ensure proper preservation of, and easy access to, the data, the EREC Database was created. This paper gives a description of the EREC Database and the supporting web-based informatic platform STRESA. (author)

  4. Database/Operating System Co-Design

    OpenAIRE

    Giceva, Jana

    2016-01-01

    We want to investigate how to improve the information flow between a database and an operating system, aiming for better scheduling and smarter resource management. We are interested in identifying the potential optimizations that can be achieved with a better interaction between a database engine and the underlying operating system, especially by allowing the application to get more control over scheduling and memory management decisions. Therefore, we explored some of the issues that arise ...

  5. HEND: A Database for High Energy Nuclear Data

    International Nuclear Information System (INIS)

    Brown, D; Vogt, R

    2007-01-01

    We propose to develop a high-energy heavy-ion experimental database and make it accessible to the scientific community through an on-line interface. The database will be searchable and cross-indexed with relevant publications, including published detector descriptions. It should eventually contain all published data from older heavy-ion programs such as the Bevalac, AGS, SPS and FNAL fixed-target programs, as well as published data from current programs at RHIC and new facilities at GSI (FAIR), KEK/Tsukuba and the LHC collider. This data includes all proton-proton, proton-nucleus to nucleus-nucleus collisions as well as other relevant systems and all measured observables. Such a database would have tremendous scientific payoff as it makes systematic studies easier and allows simpler benchmarking of theoretical models to a broad range of experiments. To enhance the utility of the database, we propose periodic data evaluations and topical reviews. These reviews would provide an alternative and impartial mechanism to resolve discrepancies between published data from rival experiments and between theory and experiment. Since this database will be a community resource, it requires the high-energy nuclear physics community's financial and manpower support

  6. Construction of a bibliographic information database for the nuclear science and engineering

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Whan; Kim, Soon Ja; Choi, Kwang [Korea Atomic Energy Reasech Institute, Taejon (Korea)

    1997-12-01

    The major goal of this project is to construct a database of nuclear science and engineering information materials, to support the R and D activities of users in this field, and finally to support the KRISTAL (Korea Research Information for Science and Technology Access Line) DB through the KREONet (Korea Research Environment Open Network), one of the five national information networks. The contents of this project are as follows: (1) materials selection and collection, (2) indexing and abstract preparation, (3) data input and transmission, and (4) document delivery service. In this seventh year, a total of 45,000 input records were added to the existing SATURN DB. These records cover articles from nuclear-related core journals, proceedings, seminars, research reports, etc. Using the Web, this project made it possible for users to obtain the information they need and to receive requested materials online. It will thus give users the chance not only to improve the effectiveness of their R and D activities, but also to avoid duplicated research work. (author). 1 fig., 1 tab.

  7. Blending Education and Polymer Science: Semiautomated Creation of a Thermodynamic Property Database

    Science.gov (United States)

    Tchoua, Roselyne B.; Qin, Jian; Audus, Debra J.; Chard, Kyle; Foster, Ian T.; de Pablo, Juan

    2016-01-01

    Structured databases of chemical and physical properties play a central role in the everyday research activities of scientists and engineers. In materials science, researchers and engineers turn to these databases to quickly query, compare, and aggregate various properties, thereby allowing for the development or application of new materials. The…

  8. Technology Education; Engineering Technology and Industrial Technology in California Community Colleges: A Curriculum Guide.

    Science.gov (United States)

    Schon, James F.

    In order to identify the distinguishing characteristics of technical education programs in engineering and industrial technology currently offered by post-secondary institutions in California, a body of data was collected by visiting 25 community colleges, 5 state universities, and 8 industrial firms; by a questionnaire sampling of 72 California…

  9. The Competence Readiness of the Electrical Engineering Vocational High School Teachers in Manado towards the ASEAN Economic Community Blueprint in 2025

    OpenAIRE

    Fid Jantje Tasiam; Djoko Kustono; Purnomo Purnomo; Hakkun Elmunsyah

    2017-01-01

    This paper presents the competence readiness of the electrical engineering vocational high school teachers in Manado towards ASEAN Economic Community blueprint in 2025. The objective of this study is to get the competencies readiness description of the electrical engineering vocational high school teachers in Manado towards ASEAN Economic Community blueprint in 2025. Method used quantitative and qualitative approach which the statistical analysis in quantitative and the inductive analysis use...

  10. ProtaBank: A repository for protein design and engineering data.

    Science.gov (United States)

    Wang, Connie Y; Chang, Paul M; Ary, Marie L; Allen, Benjamin D; Chica, Roberto A; Mayo, Stephen L; Olafson, Barry D

    2018-03-25

    We present ProtaBank, a repository for storing, querying, analyzing, and sharing protein design and engineering data in an actively maintained and updated database. ProtaBank provides a format to describe and compare all types of protein mutational data, spanning a wide range of properties and techniques. It features a user-friendly web interface and programming layer that streamlines data deposition and allows for batch input and queries. The database schema design incorporates a standard format for reporting protein sequences and experimental data that facilitates comparison of results across different data sets. A suite of analysis and visualization tools are provided to facilitate discovery, to guide future designs, and to benchmark and train new predictive tools and algorithms. ProtaBank will provide a valuable resource to the protein engineering community by storing and safeguarding newly generated data, allowing for fast searching and identification of relevant data from the existing literature, and exploring correlations between disparate data sets. ProtaBank invites researchers to contribute data to the database to make it accessible for search and analysis. ProtaBank is available at https://protabank.org. © 2018 The Authors Protein Science published by Wiley Periodicals, Inc. on behalf of The Protein Society.
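
    To make the idea of a common format for mutational data concrete, the sketch below defines a generic measurement record and groups the same variant and property across datasets; the field names are assumptions for illustration and do not reflect ProtaBank's actual schema or API.

        # Generic mutational-data records and a cross-dataset comparison sketch;
        # field names are illustrative, not ProtaBank's schema.
        from dataclasses import dataclass
        from collections import defaultdict

        @dataclass
        class Measurement:
            dataset: str      # e.g. a study identifier
            sequence: str     # variant identifier, e.g. "WT" or "A42G"
            prop: str         # measured property, e.g. "Tm_celsius"
            value: float

        records = [
            Measurement("study_1", "WT",   "Tm_celsius", 55.0),
            Measurement("study_1", "A42G", "Tm_celsius", 58.5),
            Measurement("study_2", "A42G", "Tm_celsius", 57.9),
        ]

        # Group the same variant/property across datasets to expose correlations.
        by_key = defaultdict(dict)
        for m in records:
            by_key[(m.sequence, m.prop)][m.dataset] = m.value

        for (variant, prop), per_study in by_key.items():
            if len(per_study) > 1:
                print(variant, prop, per_study)  # A42G Tm_celsius {'study_1': 58.5, 'study_2': 57.9}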

  11. CHID: a unique health information and education database.

    OpenAIRE

    Lunin, L F; Stein, R S

    1987-01-01

    The public's growing interest in health information and the health professions' increasing need to locate health education materials can be answered in part by the new Combined Health Information Database (CHID). This unique database focuses on materials and programs in professional and patient education, general health education, and community risk reduction. Accessible through BRS, CHID suggests sources for procuring brochures, pamphlets, articles, and films on community services, programs ...

  12. The eNanoMapper database for nanomaterial safety information.

    Science.gov (United States)

    Jeliazkova, Nina; Chomenidis, Charalampos; Doganis, Philip; Fadeel, Bengt; Grafström, Roland; Hardy, Barry; Hastings, Janna; Hegi, Markus; Jeliazkov, Vedrin; Kochev, Nikolay; Kohonen, Pekka; Munteanu, Cristian R; Sarimveis, Haralambos; Smeets, Bart; Sopasakis, Pantelis; Tsiliki, Georgia; Vorgrimmler, David; Willighagen, Egon

    2015-01-01

    The NanoSafety Cluster, a cluster of projects funded by the European Commision, identified the need for a computational infrastructure for toxicological data management of engineered nanomaterials (ENMs). Ontologies, open standards, and interoperable designs were envisioned to empower a harmonized approach to European research in nanotechnology. This setting provides a number of opportunities and challenges in the representation of nanomaterials data and the integration of ENM information originating from diverse systems. Within this cluster, eNanoMapper works towards supporting the collaborative safety assessment for ENMs by creating a modular and extensible infrastructure for data sharing, data analysis, and building computational toxicology models for ENMs. The eNanoMapper database solution builds on the previous experience of the consortium partners in supporting diverse data through flexible data storage, open source components and web services. We have recently described the design of the eNanoMapper prototype database along with a summary of challenges in the representation of ENM data and an extensive review of existing nano-related data models, databases, and nanomaterials-related entries in chemical and toxicogenomic databases. This paper continues with a focus on the database functionality exposed through its application programming interface (API), and its use in visualisation and modelling. Considering the preferred community practice of using spreadsheet templates, we developed a configurable spreadsheet parser facilitating user friendly data preparation and data upload. We further present a web application able to retrieve the experimental data via the API and analyze it with multiple data preprocessing and machine learning algorithms. We demonstrate how the eNanoMapper database is used to import and publish online ENM and assay data from several data sources, how the "representational state transfer" (REST) API enables building user friendly

  13. HCSD: the human cancer secretome database

    DEFF Research Database (Denmark)

    Feizi, Amir; Banaei-Esfahani, Amir; Nielsen, Jens

    2015-01-01

    The human cancer secretome database (HCSD) is a comprehensive database for human cancer secretome data. The cancer secretome describes proteins secreted by cancer cells, and structuring information about the cancer secretome will enable further analysis of how this is related to tumor biology...... database is limiting the ability to query the increasing community knowledge. We therefore developed the Human Cancer Secretome Database (HCSD) to fill this gap. HCSD contains >80 000 measurements for about 7000 nonredundant human proteins collected from up to 35 high-throughput studies on 17 cancer...

  14. Integrating design and purchasing [in nuclear engineering] with Ingecad

    International Nuclear Information System (INIS)

    Anon.

    1989-01-01

    Ingecad was developed by the Ingevision division of Framatome to overcome deficiencies in traditional computer-aided design. It was developed for nuclear power project engineering around the principle of shared management of a common database, thus making it possible to integrate several engineering disciplines. The multiuser database is managed and accessed by the different application software packages, corresponding to particular aspects of the engineering task: electrical and process control schematics, plant piping design, pressurized equipment design, etc. The use of a common database ensures coherence between the different engineering disciplines, particularly between the process engineering, the plant layout design, the piping, and the instrumentation and control engineering. (author)

  15. Report of the SRC working party on databases and database management systems

    International Nuclear Information System (INIS)

    Crennell, K.M.

    1980-10-01

    An SRC working party, set up to consider the subject of support for databases within the SRC, were asked to identify interested individuals and user communities, establish which features of database management systems they felt were desirable, arrange demonstrations of possible systems and then make recommendations for systems, funding and likely manpower requirements. This report describes the activities and lists the recommendations of the working party and contains a list of databases maintained or proposed by those who replied to a questionnaire. (author)

  16. Design and utilization of a Flight Test Engineering Database Management System at the NASA Dryden Flight Research Facility

    Science.gov (United States)

    Knighton, Donna L.

    1992-01-01

    A Flight Test Engineering Database Management System (FTE DBMS) was designed and implemented at the NASA Dryden Flight Research Facility. The X-29 Forward Swept Wing Advanced Technology Demonstrator flight research program was chosen for the initial system development and implementation. The FTE DBMS greatly assisted in planning and 'mass production' card preparation for an accelerated X-29 research program. Improved Test Plan tracking and maneuver management for a high flight-rate program were proven, and flight rates of up to three flights per day, two times per week were maintained.

  17. Ceramics Technology Project database: September 1991 summary report. [Materials for piston ring-cylinder liner for advanced heat/diesel engines

    Energy Technology Data Exchange (ETDEWEB)

    Keyes, B.L.P.

    1992-06-01

    The piston ring-cylinder liner area of the internal combustion engine must withstand very high temperature gradients, highly corrosive environments, and constant friction. Improving the efficiency of the engine requires ring and cylinder liner materials that can survive this abusive environment and lubricants that resist decomposition at elevated temperatures. Wear and friction tests have been done on many material combinations in environments similar to actual use to find the right materials for the situation. This report covers tribology information produced from 1986 through July 1991 by Battelle Columbus Laboratories, Caterpillar Inc., and Cummins Engine Company, Inc. for the Ceramic Technology Project (CTP). All data in this report were taken from the project's semiannual and bimonthly progress reports and cover base materials, coatings, and lubricants. The data, including test rig descriptions and material characterizations, are stored in the CTP database and are available to all project participants on request. The objective of this report is to make the test results from these studies available, not to draw conclusions from these data.

  18. An engineering context for software engineering

    OpenAIRE

    Riehle, Richard D.

    2008-01-01

    New engineering disciplines are emerging in the late Twentieth and early Twenty-first Century. One such emerging discipline is software engineering. The engineering community at large has long harbored a sense of skepticism about the validity of the term software engineering. During most of the fifty-plus years of software practice, that skepticism was probably justified. Professional education of software developers often fell short of the standard expected for conventional engineers; so...

  19. Iowa community college Science, Engineering and Mathematics (SEM) faculty: Demographics and job satisfaction

    Science.gov (United States)

    Rogotzke, Kathy

    Community college faculty members play an increasingly important role in the educational system in the United States. However, over the past decade, concerns have arisen, especially in several high-demand fields of science, technology, engineering and mathematics (STEM), that a shortage of qualified faculty in these fields exists. Furthermore, the average age of community college faculty is increasing, which creates added concern about an increased shortage of qualified faculty due to a potentially large number of faculty retiring. To help further understand the current population of community college faculty, as well as their training needs and their satisfaction with their jobs, data need to be collected from them and examined. Currently, several national surveys are given to faculty at institutions of higher education, most notably the Higher Education Research Institute Faculty Survey, the National Study of Postsecondary Faculty, and the Community College Faculty Survey of Student Engagement. Of these surveys, the Community College Faculty Survey of Student Engagement is the only survey focused solely on community college faculty. This creates a problem because community college faculty members differ from faculty at 4-year institutions in several significant ways. First, qualifications for hiring community college faculty differ from those at 4-year colleges or universities. Whereas universities and colleges typically require their faculty to have a Ph.D., community colleges require their arts and science faculty to have only a master's degree and their career faculty to have experience and the appropriate training and certification in their field with only a bachelor's degree. The work duties and expectations for community college faculty also differ from those at 4-year colleges or universities. Community college faculty typically teach 14 to 19 credit hours a semester and do little, if any, research, whereas faculty at 4-year colleges typically teach 9 to 12 credit

  20. The NGDC Seafloor Sediment Geotechnical Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NGDC Seafloor Sediment Geotechnical Properties Database contains test engineering properties data coded by students at NGDC from primarily U.S. Naval...

  1. Long Term Benefits for Women in a Science, Technology, Engineering, and Mathematics Living-Learning Community

    Science.gov (United States)

    Maltby, Jennifer L.; Brooks, Christopher; Horton, Marjorie; Morgan, Helen

    2016-01-01

    Science, technology, engineering and math (STEM) degrees provide opportunities for economic mobility. Yet women, underrepresented minority (URM), and first-generation college students remain disproportionately underrepresented in STEM fields. This study examined the effectiveness of a living-learning community (LLC) for URM and first-generation…

  2. Issues in Big-Data Database Systems

    Science.gov (United States)

    2014-06-01

    that big data will not be manageable using conventional relational database technology, and it is true that alternative paradigms, such as NoSQL systems and search engines, have much to offer...scale well, and because integration with external data sources is so difficult. NoSQL systems are more open to this integration, and provide excellent

  3. A Framework for Mapping User-Designed Forms to Relational Databases

    Science.gov (United States)

    Khare, Ritu

    2011-01-01

    In the quest for database usability, several applications enable users to design custom forms using a graphical interface, and forward engineer the forms into new databases. The path-breaking aspect of such applications is that users are completely shielded from the technicalities of database creation. Despite this innovation, the process of…
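
    The forward-engineering step can be illustrated with a short sketch in which a user-designed form, represented as a list of labeled fields, is translated into a CREATE TABLE statement; the form structure and type mapping are assumptions for illustration, not the framework proposed in the paper.

        # Hedged sketch: turning a user-designed form into a relational table.
        # The form representation and type mapping are illustrative assumptions.
        import sqlite3

        SQL_TYPE = {"text": "TEXT", "number": "REAL", "date": "TEXT", "checkbox": "INTEGER"}

        def form_to_ddl(form_name, fields):
            """Build a CREATE TABLE statement from (label, field kind) pairs."""
            cols = ", ".join(f'"{label}" {SQL_TYPE[kind]}' for label, kind in fields)
            return f'CREATE TABLE "{form_name}" (id INTEGER PRIMARY KEY, {cols})'

        form = [("Patient name", "text"), ("Visit date", "date"), ("Weight kg", "number")]
        ddl = form_to_ddl("clinic_visit", form)

        con = sqlite3.connect(":memory:")
        con.execute(ddl)
        con.execute('INSERT INTO "clinic_visit" ("Patient name", "Visit date", "Weight kg") '
                    'VALUES (?, ?, ?)', ("A. Example", "2011-01-01", 72.5))
        print(con.execute('SELECT * FROM "clinic_visit"').fetchall())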

  4. Engineering a plant community to deliver multiple ecosystem services.

    Science.gov (United States)

    Storkey, Jonathan; Döring, Thomas; Baddeley, John; Collins, Rosemary; Roderick, Stephen; Jones, Hannah; Watson, Christine

    2015-06-01

    The sustainable delivery of multiple ecosystem services requires the management of functionally diverse biological communities. In an agricultural context, an emphasis on food production has often led to a loss of biodiversity to the detriment of other ecosystem services such as the maintenance of soil health and pest regulation. In scenarios where multiple species can be grown together, it may be possible to better balance environmental and agronomic services through the targeted selection of companion species. We used the case study of legume-based cover crops to engineer a plant community that delivered the optimal balance of six ecosystem services: early productivity, regrowth following mowing, weed suppression, support of invertebrates, soil fertility building (measured as yield of following crop), and conservation of nutrients in the soil. An experimental species pool of 12 cultivated legume species was screened for a range of functional traits and ecosystem services at five sites across a geographical gradient in the United Kingdom. All possible species combinations were then analyzed, using a process-based model of plant competition, to identify the community that delivered the best balance of services at each site. In our system, low to intermediate levels of species richness (one to four species) that exploited functional contrasts in growth habit and phenology were identified as being optimal. The optimal solution was determined largely by the number of species and functional diversity represented by the starting species pool, emphasizing the importance of the initial selection of species for the screening experiments. The approach of using relationships between functional traits and ecosystem services to design multifunctional biological communities has the potential to inform the design of agricultural systems that better balance agronomic and environmental services and meet the current objective of European agricultural policy to maintain viable food

  5. Mammalian engineers drive soil microbial communities and ecosystem functions across a disturbance gradient.

    Science.gov (United States)

    Eldridge, David J; Delgado-Baquerizo, Manuel; Woodhouse, Jason N; Neilan, Brett A

    2016-11-01

    The effects of mammalian ecosystem engineers on soil microbial communities and ecosystem functions in terrestrial ecosystems are poorly known. Disturbance from livestock has been widely reported to reduce soil function, but disturbance by animals that forage in the soil may partially offset these negative effects of livestock, directly and/or indirectly by shifting the composition and diversity of soil microbial communities. Understanding the role of disturbance from livestock and ecosystem engineers in driving soil microbes and functions is essential for formulating sustainable ecosystem management and conservation policies. We compared soil bacterial community composition and enzyme concentrations within four microsites: foraging pits of two vertebrates, the indigenous short-beaked echidna (Tachyglossus aculeatus) and the exotic European rabbit (Oryctolagus cuniculus), and surface and subsurface soils along a gradient in grazing-induced disturbance in an arid woodland. Microbial community composition varied little across the disturbance gradient, but there were substantial differences among the four microsites. Echidna pits supported a lower relative abundance of Acidobacteria and Cyanobacteria, but a higher relative abundance of Proteobacteria than rabbit pits and surface microsites. Moreover, these microsite differences varied with disturbance. Rabbit pits had a similar profile to the subsoil or the surface soils under moderate and high, but not low disturbance. Overall, echidna foraging pits had the greatest positive effect on function, assessed as mean enzyme concentrations, but rabbits had the least. The positive effects of echidna foraging on function were indirectly driven via microbial community composition. In particular, increasing activity was positively associated with increasing relative abundance of Proteobacteria, but decreasing Acidobacteria. Our study suggests that soil disturbance by animals may offset, to some degree, the oft-reported negative

  6. Relational Database Design in Information Science Education.

    Science.gov (United States)

    Brooks, Terrence A.

    1985-01-01

    Reports on database management system (dbms) applications designed by library school students for university community at University of Iowa. Three dbms design issues are examined: synthesis of relations, analysis of relations (normalization procedure), and data dictionary usage. Database planning prior to automation using data dictionary approach…

  7. Comparison of District-level Smoking Prevalence and Their Income Gaps from Two National Databases: the National Health Screening Database and the Community Health Survey in Korea, 2009-2014.

    Science.gov (United States)

    Kim, Ikhan; Bahk, Jinwook; Kim, Yeon Yong; Lee, Jeehye; Kang, Hee Yeon; Lee, Juyeon; Yun, Sung Cheol; Park, Jong Heon; Shin, Soon Ae; Khang, Young Ho

    2018-02-05

    We compared the age-standardized prevalence of cigarette smoking and its income gaps at the district level in Korea using the National Health Screening Database (NHSD) and the Community Health Survey (CHS). Between 2009 and 2014, 39,049,485 subjects participating in the NHSD and 989,292 participants in the CHS were analyzed. The age-standardized prevalence of smoking and its interquintile income differences were calculated for 245 districts of Korea. We examined between-period correlations for the age-standardized smoking prevalence at the district level and investigated the district-level differences in smoking prevalence and income gaps between the two databases. The between-period correlation coefficients of smoking prevalence for both genders were 0.92-0.97 in NHSD and 0.58-0.69 in CHS, respectively. When using NHSD, we found significant income gaps in all districts for men and 244 districts for women. However, when CHS was analyzed, only 167 and 173 districts for men and women, respectively, showed significant income gaps. While correlation coefficients of district-level smoking prevalence from the two databases were 0.87 for men and 0.85 for women, a relatively weak correlation between income gaps from the two databases was found. Based on the two databases, income gaps in smoking prevalence were evident for nearly all districts of Korea. Because of the large sample size for each district, NHSD may provide stable district-level smoking prevalence and its income gap and thus should be considered as a valuable data source for monitoring district-level smoking prevalence and its socioeconomic inequality. © 2018 The Korean Academy of Medical Sciences.
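
    For readers unfamiliar with the metrics, the sketch below shows direct age standardization of a smoking prevalence estimate and an interquintile income gap; the age bands, standard weights, and counts are invented for illustration and are not taken from either database.

        # Direct age standardization and an interquintile gap, with invented numbers.

        # Standard population weights for three age bands (must sum to 1).
        std_weights = {"19-39": 0.45, "40-59": 0.35, "60+": 0.20}

        def age_standardized_prevalence(counts):
            """counts: {age_band: (smokers, population)} -> weighted prevalence."""
            return sum(std_weights[band] * smokers / pop
                       for band, (smokers, pop) in counts.items())

        lowest_quintile  = {"19-39": (420, 1000), "40-59": (380, 1000), "60+": (210, 1000)}
        highest_quintile = {"19-39": (280, 1000), "40-59": (250, 1000), "60+": (140, 1000)}

        low  = age_standardized_prevalence(lowest_quintile)
        high = age_standardized_prevalence(highest_quintile)
        print(f"lowest quintile  : {low:.1%}")
        print(f"highest quintile : {high:.1%}")
        print(f"interquintile gap: {low - high:.1%}")  # absolute prevalence difference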

  8. Classical databases and knowledge organization

    DEFF Research Database (Denmark)

    Hjørland, Birger

    2015-01-01

    This paper considers classical bibliographic databases based on the Boolean retrieval model (such as MEDLINE and PsycInfo). This model is challenged by modern search engines and information retrieval (IR) researchers, who often consider Boolean retrieval a less efficient approach. The paper...
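
    A minimal illustration of the Boolean retrieval model discussed above: a toy inverted index over a few invented records, queried with AND and NOT in the spirit of classical bibliographic searching.

        # Toy inverted index with Boolean AND / NOT, in the spirit of classical
        # bibliographic retrieval; documents and terms are invented.
        from collections import defaultdict

        docs = {
            1: "smoking cessation community intervention",
            2: "community database engineering methods",
            3: "smoking prevalence survey methods",
        }

        index = defaultdict(set)
        for doc_id, text in docs.items():
            for term in text.split():
                index[term].add(doc_id)

        def boolean_and(*terms):
            """Documents containing every query term."""
            sets = [index[t] for t in terms]
            return set.intersection(*sets) if sets else set()

        def boolean_not(result, term):
            """Remove documents containing the excluded term."""
            return result - index[term]

        hits = boolean_not(boolean_and("smoking", "community"), "database")
        print(sorted(hits))  # [1]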

  9. Study on managing EPICS database using ORACLE

    International Nuclear Information System (INIS)

    Liu Shu; Wang Chunhong; Zhao Jijiu

    2007-01-01

    EPICS is used as a development toolkit for the BEPCII control system. The core of EPICS is a distributed database residing in front-end machines. The distributed database is usually created with tools such as VDCT or a text editor on the host and then loaded to front-end target IOCs through the network. In the BEPCII control system there are about 20,000 signals, distributed across more than 20 IOCs. All the databases are developed by device control engineers using VDCT or a text editor; there is no uniform tool providing transparent management. The paper first presents the current status of EPICS database management issues in many labs. Second, it studies the EPICS database and the interface between ORACLE and the EPICS database. Finally, it introduces the software development and its application in the BEPCII control system. (authors)
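
    The kind of uniform, transparent management discussed above can be sketched as generating EPICS record definitions from rows held in a relational table; the signal names, record fields, and table layout below are invented for illustration and are not the BEPCII implementation.

        # Hedged sketch: generate EPICS .db record text from rows in a relational
        # table; signal names and fields are invented, not the BEPCII database.
        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE signal (name TEXT, rec_type TEXT, desc TEXT, egu TEXT)")
        con.executemany("INSERT INTO signal VALUES (?, ?, ?, ?)", [
            ("LINAC:GUN:Current", "ai", "Gun current readback", "mA"),
            ("LINAC:GUN:HVOn",    "bo", "Gun high-voltage switch", ""),
        ])

        def to_epics_db(rows):
            """Render one record() block per row of the signal table."""
            blocks = []
            for name, rec_type, desc, egu in rows:
                fields = [f'    field(DESC, "{desc}")']
                if egu:
                    fields.append(f'    field(EGU,  "{egu}")')
                blocks.append(f'record({rec_type}, "{name}") {{\n' + "\n".join(fields) + "\n}")
            return "\n".join(blocks)

        print(to_epics_db(con.execute("SELECT * FROM signal").fetchall()))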

  10. Application of Database Approaches to the Study of Earth's Aeolian Environments: Community Needs and Goals

    Science.gov (United States)

    Scuderi, Louis A.; Weissmann, Gary S.; Hartley, Adrian J.; Yang, Xiaoping; Lancaster, Nicholas

    2017-08-01

    Aeolian science is faced with significant challenges that impact its ability to benefit from recent advances in information technology. The discipline deals with high-end systems in the form of ground and satellite based sensors, computer modeling and simulation, and wind tunnel experiments. Aeolian scientists also collect field data manually with observational methods that may differ significantly between studies, with little agreement on even basic morphometric parameters and terminology. Data produced from these studies, while forming the core of research papers and reports, is rarely available to the community at large. Recent advances are also superimposed on an underlying semantic structure, dating to the 1800s or earlier, that is confusing, with ambiguously defined and at times even contradictory meanings. The aeolian "world-view" does not always fit within neat increments, nor is it defined by crisp objects. Instead change is continuous and features are fuzzy. Development of an ontological framework to guide spatiotemporal research is the fundamental starting point for organizing data in aeolian science. This requires a "rethinking" of how we define, collect, process, store and share data, along with the development of a community-wide collaborative approach designed to bring the discipline into a data-rich future. There is also a pressing need to develop efficient methods to integrate, analyze and manage spatial and temporal data and to promote data produced by aeolian scientists so it is available for preparing diagnostic studies, as input into a range of environmental models, and for advising national and international bodies that drive research agendas. This requires the establishment of working groups within the discipline to deal with content, format, processing pipelines, knowledge discovery tools and database access issues unique to aeolian science. Achieving this goal requires the development of comprehensive and highly organized databases, tools

  11. A Hydrate Database: Vital to the Technical Community

    Directory of Open Access Journals (Sweden)

    D Sloan

    2007-06-01

    Full Text Available Natural gas hydrates may contain more energy than all the other fossil fuels combined, making hydrates a potentially vital aspect of both energy and climate change. This article is an overview of the motivation, history, and future of hydrate data management using a CODATA vehicle to connect international hydrate databases. The basis is an introduction to the Gas Hydrate Markup Language (GHML) used to connect various hydrate databases. The accompanying four articles on laboratory hydrate data by Smith et al., on field hydrate data by Löwner et al., on hydrate modeling by Wang et al., and on construction of a Chinese gas hydrate system by Xiao et al. provide details of GHML in their respective areas.

  12. Development of a Data Citations Database for an Interdisciplinary Data Center

    Science.gov (United States)

    Chen, R. S.; Downs, R. R.; Schumacher, J.; Gerard, A.

    2017-12-01

    The scientific community has long depended on consistent citation of the scientific literature to enable traceability, support replication, and facilitate analysis and debate about scientific hypotheses, theories, assumptions, and conclusions. However, only in the past few years has the community focused on consistent citation of scientific data, e.g., through the application of Digital Object Identifiers (DOIs) to data, the development of peer-reviewed data publications, community principles and guidelines, and other mechanisms. This means that, moving ahead, it should be easier to identify and track data citations and conduct systematic bibliometric studies. However, this still leaves the problem that many legacy datasets and past citations lack DOIs, making it difficult to develop a historical baseline or assess trends. With this in mind, the NASA Socioeconomic Data and Applications Center (SEDAC) has developed a searchable citations database, containing more than 3,400 citations of SEDAC data and information products over the past 20 years. These citations were collected through various indices and search tools and in some cases through direct contacts with authors. The citations come from a range of natural, social, health, and engineering science journals, books, reports, and other media. The database can be used to find and extract citations filtered by a range of criteria, enabling quantitative analysis of trends, intercomparisons between data collections, and categorization of citations by type. We present a preliminary analysis of citations for selected SEDAC data collections, in order to establish a baseline and assess options for ongoing metrics to track the impact of SEDAC data on interdisciplinary science. We also present an analysis of the uptake of DOIs within data citations reported in published studies that used SEDAC data.
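
    A small sketch of the kind of filtering and trend counting such a citations database supports; the table layout and sample rows are invented and are not SEDAC's actual citations database.

        # Hedged sketch of a citations table with filtered queries and yearly counts;
        # schema and rows are invented, not the SEDAC citations database.
        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("""CREATE TABLE citation (
            id INTEGER PRIMARY KEY, year INTEGER, pub_type TEXT,
            data_collection TEXT, has_doi INTEGER)""")
        con.executemany("INSERT INTO citation VALUES (NULL, ?, ?, ?, ?)", [
            (2012, "journal", "gpw", 0),
            (2015, "journal", "gpw", 1),
            (2015, "report",  "wildareas", 0),
            (2016, "journal", "gpw", 1),
        ])

        # Citations per year for one data collection (trend analysis).
        for year, n in con.execute("""SELECT year, COUNT(*) FROM citation
                                      WHERE data_collection = 'gpw'
                                      GROUP BY year ORDER BY year"""):
            print(year, n)

        # Share of citations that used a DOI (uptake of data citation practice).
        total, with_doi = con.execute(
            "SELECT COUNT(*), SUM(has_doi) FROM citation").fetchone()
        print(f"DOI uptake: {with_doi / total:.0%}")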

  13. Construction of a bibliographic information database for the nuclear science and engineering (X)

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Whan; Lee, Ji Ho; Oh, Jeong Hun; Choi, Kwang; Chun, Young Chun; Yoo, Jae Bok; Yu, Anna [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2000-12-01

    The major goal of this project is to construct a database of nuclear science and engineering information materials, to support the R and D activities of users in this field, and ultimately to support the KRISTAL (Korea Research Information for Science and Technology Access Line) DB through KREONet (Korea Research Environment Open Network), one of the five national information networks. The contents of this project are as follows: 1) materials selection and collection, 2) indexing and abstract preparation, 3) data input and transmission, and 4) document delivery service. In this tenth year, a total of 30,500 input records were added to the existing SATURN DB. These records cover articles from nuclear-related core journals, proceedings, seminars, research reports, etc. Through the Web, the project enables users to obtain the information they need and to receive requested materials online. It thereby gives users the chance not only to improve the effectiveness of their R and D activities, but also to avoid duplicated research work. 2 tabs. (Author)

  14. Heterogeneous distributed databases: A case study

    Science.gov (United States)

    Stewart, Tracy R.; Mukkamala, Ravi

    1991-01-01

    Alternatives are reviewed for accessing distributed heterogeneous databases and a recommended solution is proposed. The current study is limited to the Automated Information Systems Center at the Naval Sea Combat Systems Engineering Station at Norfolk, VA. This center maintains two databases located on Digital Equipment Corporation's VAX computers running under the VMS operating system. The first database, ICMS, resides on a VAX-11/780 and has been implemented using VAX DBMS, a CODASYL-based system. The second database, CSA, resides on a VAX 6460 and has been implemented using the ORACLE relational database management system (RDBMS). Both databases are used for configuration management within the U.S. Navy. Different customer bases are supported by each database. ICMS tracks U.S. Navy ships and major systems (anti-sub, sonar, etc.). Even though the major systems on ships and submarines have totally different functions, some of the equipment within the major systems is common to both ships and submarines.

  15. Childhood immunization rates in rural Intibucá, Honduras: an analysis of a local database tool and community health center records for assessing and improving vaccine coverage.

    Science.gov (United States)

    He, Yuan; Zarychta, Alan; Ranz, Joseph B; Carroll, Mary; Singleton, Lori M; Wilson, Paria M; Schlaudecker, Elizabeth P

    2012-12-07

    Vaccines are highly effective at preventing infectious diseases in children, and prevention is especially important in resource-limited countries where treatment is difficult to access. In Honduras, the World Health Organization (WHO) reports very high immunization rates in children. To determine whether or not these estimates accurately depict the immunization coverage in non-urban regions of the country, we compared the WHO data to immunization rates obtained from a local database tool and community health center records in rural Intibucá, Honduras. We used data from two sources to comprehensively evaluate immunization rates in the area: 1) census data from a local database and 2) immunization data collected at health centers. We compared these rates using logistic regression, and we compared them to publicly available WHO-reported estimates using confidence interval inclusion. We found that mean immunization rates for each vaccine were high (range 84.4 to 98.8 percent), but rates recorded at the health centers were significantly higher than those reported from the census data (p ≤ 0.001). Combining the results from both databases, the mean rates of four out of five vaccines were less than WHO-reported rates (p 0.05), except for diphtheria/tetanus/pertussis vaccine (p=0.02) and oral polio vaccine (p Honduras were high across data sources, though most of the rates recorded in rural Honduras were less than WHO-reported rates. Despite geographical difficulties and barriers to access, the local database and Honduran community health workers have developed a thorough system for ensuring that children receive their immunizations on time. The successful integration of community health workers and a database within the Honduran decentralized health system may serve as a model for other immunization programs in resource-limited countries where health care is less accessible.
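    The comparison described above (census records versus health-center records, assessed with logistic regression) can be illustrated on aggregated counts. The sketch below is a minimal, generic version using statsmodels; the counts and variable names are invented for illustration and are not the study's data.

```python
# Minimal sketch: compare recorded immunization coverage from two record
# sources with logistic regression on aggregated counts.
# All numbers below are hypothetical, not the study's data.
import numpy as np
import statsmodels.api as sm

# Hypothetical aggregated data: (vaccinated, total) per source
census = (422, 500)          # local census database
health_center = (480, 500)   # health-center records

successes = np.array([census[0], health_center[0]])
failures = np.array([census[1] - census[0], health_center[1] - health_center[0]])
source = np.array([0, 1])    # 0 = census, 1 = health center

X = sm.add_constant(source)                  # intercept + source indicator
y = np.column_stack([successes, failures])   # binomial (success, failure) counts

model = sm.GLM(y, X, family=sm.families.Binomial()).fit()
print(model.summary())
# The coefficient on the source indicator estimates the log-odds difference in
# recorded coverage between the two sources; its p-value tests equality of the
# two rates, analogous to the comparison reported in the abstract.
```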

  16. Engineering qualifications for competence

    International Nuclear Information System (INIS)

    Levy, J.C.

    1990-01-01

    The Engineering Council has a responsibility across all fields of engineering to set standards for those who are registered as Chartered Engineers, Incorporated Engineers or Engineering Technicians. These standards amount to a basic specification of professional competence to which must be added the features needed in each different branch of engineering, such as nuclear, and each particular occupation, such as quality assurance. This article describes The Engineering Council's general standards and includes a guide to the roles and responsibilities which should lie within the domain of those who are registered with the Council. The concluding section describes the title of European Engineer and its relationship to the European Community directive governing the movement of professionals across community frontiers. (author)

  17. Educating the humanitarian engineer.

    Science.gov (United States)

    Passino, Kevin M

    2009-12-01

    The creation of new technologies that serve humanity holds the potential to help end global poverty. Unfortunately, relatively little is done in engineering education to support engineers' humanitarian efforts. Here, various strategies are introduced to augment the teaching of engineering ethics with the goal of encouraging engineers to serve as effective volunteers for community service. First, codes of ethics, moral frameworks, and comparative analysis of professional service standards lay the foundation for expectations for voluntary service in the engineering profession. Second, standard coverage of global issues in engineering ethics educates humanitarian engineers about aspects of the community that influence technical design constraints encountered in practice. Sample assignments on volunteerism are provided, including a prototypical design problem that integrates community constraints into a technical design problem in a novel way. Third, it is shown how extracurricular engineering organizations can provide a theory-practice approach to education in volunteerism. Sample completed projects are described for both undergraduates and graduate students. The student organization approach is contrasted with the service-learning approach. Finally, long-term goals for establishing better infrastructure are identified for educating the humanitarian engineer in the university, and supporting life-long activities of humanitarian engineers.

  18. International shock-wave database project : report of the requirements workshop.

    Energy Technology Data Exchange (ETDEWEB)

    Aidun, John Bahram (Institute of Problems of chemical Physics of Russian Academy of Sciences); Lomonosov, Igor V. (Institute of Problems of chemical Physics of Russian Academy of Sciences); Levashov, Pavel R. (Joint Institute for High Temperatures of Russian Academy of Sciences)

    2012-03-01

    We report on the requirements workshop for a new project, the International Shock-Wave database (ISWdb), which was held October 31 - November 2, 2011, at GSI, Darmstadt, Germany. Participants considered the idea of this database, its structure, technical requirements, content, and principles of operation. This report presents the consensus conclusions from the workshop, key discussion points, and the goals and plan for near-term and intermediate-term development of the ISWdb. The main points of consensus from the workshop were: (1) this international database is of interest and of practical use for the shock-wave and high-pressure physics communities; (2) intermediate-state and off-Hugoniot information is important and should be included in the ISWdb; (3) other relevant high-pressure and auxiliary data should be added to the database in the future; (4) information on the ISWdb needs to be communicated broadly to the research community; and (5) the operating structure will consist of an Advisory Board, subject-matter expert Moderators to vet submitted data, and the database Project Team. This brief report is intended to inform the shock-wave research community and interested funding agencies about the project, as its success ultimately depends on both of these groups finding sufficient value in the database to use it, contribute to it, and support it.

  19. A comparative gene expression database for invertebrates

    Directory of Open Access Journals (Sweden)

    Ormestad Mattias

    2011-08-01

    Full Text Available Abstract Background As whole genome and transcriptome sequencing get cheaper and faster, a great number of 'exotic' animal models are emerging, rapidly adding valuable data to the ever-expanding Evo-Devo field. All these new organisms serve as a fantastic resource for the research community, but the sheer amount of data, some published, some not, makes gene expression patterns very difficult to compare and summarize - a problem sometimes even noticeable within a single lab. The need to merge existing data with new information in an organized manner that is publicly available to the research community is now more pressing than ever. Description In order to offer a homogeneous way of storing and handling gene expression patterns from a variety of organisms, we have developed the first web-based comparative gene expression database for invertebrates that allows species-specific as well as cross-species gene expression comparisons. The database can be queried by gene name, developmental stage and/or expression domains. Conclusions This database provides a unique tool for the Evo-Devo research community that allows the retrieval, analysis and comparison of gene expression patterns within or among species. In addition, this database enables a quick identification of putative syn-expression groups that can be used to initiate, among other things, gene regulatory network (GRN) projects.

  20. Examples how to use atomic and molecular databases

    International Nuclear Information System (INIS)

    Murakami, Izumi

    2012-01-01

    As examples of how to use atomic and molecular databases, the atomic spectra database (ASD) and chemical kinetics database of the National Institute of Standards and Technology (NIST), the collision cross sections of the National Institute for Fusion Science (NIFS), the Open Atomic Data and Analysis Structure (Open-ADAS) and the chemical reaction rate coefficients of GRI-Mech are presented. The sorting method differs between databases, and several search options are provided. Atomic wavelengths/transition probabilities and electron-collision ionization, excitation and recombination cross sections/rate coefficients can be searched simply by specifying the atom or ion using the General Internet Search Engine (GENIE) of the IAEA. (T. Tanaka)

  1. Astronaut Demographic Database: Everything You Want to Know About Astronauts and More

    Science.gov (United States)

    Keeton, Kathryn; Patterson, Holly

    2011-01-01

    A wealth of information regarding the astronaut population is available that could be especially useful to researchers. However, until now, it has been difficult to obtain that information in a systematic way. Therefore, this "astronaut database" began as a way for researchers within the Behavioral Health and Performance Group to keep track of the ever-growing astronaut corps population. Before our effort, compilations of such data existed, but not in a form that was easily acquired or accessible. One would have to use internet search engines, read through lengthy and potentially inaccurate informational sites, or read through astronaut biographies compiled by NASA. Astronauts are a unique class of individuals and, by examining such information, which we dubbed "Demographics," we hoped to find some commonalities that may be useful for other research areas and future research topics. By organizing the information pertaining to astronauts in a formal, unified catalog, we believe we have made the information more easily accessible, readily usable, and user-friendly. Our end goal is to provide this database to others as a highly functional resource within the research community. Perhaps the database can eventually be an official, published document for researchers to gain full access.

  2. Exploring the Academic and Social Experiences of Latino Engineering Community College Transfer Students at a 4-Year Institution: A Qualitative Research Study

    Science.gov (United States)

    Hagler, LaTesha R.

    As growing numbers of students from historically underrepresented populations transfer from community colleges to universities to pursue baccalaureate degrees in science, technology, engineering, and mathematics (STEM), little research exists about the challenges and successes Latino students experience as they transition from 2-year colleges to 4-year universities. Thus, institutions of higher education have limited insight to inform their policies, practices, and strategic planning in developing effective sources of support, services, and programs for underrepresented students in STEM disciplines. This qualitative research study explored the academic and social experiences of 14 Latino engineering community college transfer students at one university. Specifically, this study examined the lived experiences of minority community college transfer students' transition into and persistence at a 4-year institution. The conceptual framework applied to this study was Schlossberg's Transition Theory, which framed the analysis of the participants' social and academic experiences that led to their successful transition from community college to university. Three themes emerged from the narrative data analysis: (a) Academic Experiences, (b) Social Experiences, and (c) Sources of Support. The findings indicate that engineering community college transfer students experience many challenges in their transition into and persistence at 4-year institutions. Some of the challenges include lack of academic preparedness, environmental challenges, lack of time management skills, and the role of faculty as institutional agents.

  3. Databases for rRNA gene profiling of microbial communities

    Energy Technology Data Exchange (ETDEWEB)

    Ashby, Matthew

    2013-07-02

    The present invention relates to methods for performing surveys of the genetic diversity of a population. The invention also relates to methods for performing genetic analyses of a population. The invention further relates to methods for the creation of databases comprising the survey information and the databases created by these methods. The invention also relates to methods for analyzing the information to correlate the presence of nucleic acid markers with desired parameters in a sample. These methods have application in the fields of geochemical exploration, agriculture, bioremediation, environmental analysis, clinical microbiology, forensic science and medicine.

  4. Respiratory cancer database: An open access database of respiratory cancer gene and miRNA.

    Science.gov (United States)

    Choubey, Jyotsna; Choudhari, Jyoti Kant; Patel, Ashish; Verma, Mukesh Kumar

    2017-01-01

    Respiratory cancer database (RespCanDB) is a genomic and proteomic database of cancers of the respiratory organs. It also includes information on medicinal plants used for the treatment of various respiratory cancers, with the structures of their active constituents, as well as pharmacological and chemical information on drugs associated with various respiratory cancers. Data in RespCanDB have been manually collected from published research articles and from other databases. Data have been integrated using MySQL, a relational database management system. MySQL manages all data in the back-end and provides commands to retrieve and store data in the database. The web interface of the database has been built in ASP. RespCanDB is expected to contribute to the scientific community's understanding of respiratory cancer biology as well as to the development of new ways of diagnosing and treating respiratory cancer. Currently, the database contains oncogenomic information on lung cancer, laryngeal cancer, and nasopharyngeal cancer. Data for other cancers, such as oral and tracheal cancers, will be added in the near future. The URL of RespCanDB is http://ridb.subdic-bioinformatics-nitrr.in/.

  5. Rrsm: The European Rapid Raw Strong-Motion Database

    Science.gov (United States)

    Cauzzi, C.; Clinton, J. F.; Sleeman, R.; Domingo Ballesta, J.; Kaestli, P.; Galanis, O.

    2014-12-01

    We introduce the European Rapid Raw Strong-Motion database (RRSM), a Europe-wide system that provides parameterised strong motion information, as well as access to waveform data, within minutes of the occurrence of strong earthquakes. The RRSM significantly differs from traditional earthquake strong motion dissemination in Europe, which has focused on providing reviewed, processed strong motion parameters, typically with significant delays. As the RRSM provides rapid open access to raw waveform data and metadata and does not rely on external manual waveform processing, RRSM information is tailored to seismologists and strong-motion data analysts, earthquake and geotechnical engineers, international earthquake response agencies and the educated general public. Access to the RRSM database is via a portal at http://www.orfeus-eu.org/rrsm/ that allows users to query earthquake information, peak ground motion parameters and amplitudes of spectral response; and to select and download earthquake waveforms. All information is available within minutes of any earthquake with magnitude ≥ 3.5 occurring in the Euro-Mediterranean region. Waveform processing and database population are performed using the waveform processing module scwfparam, which is integrated in SeisComP3 (SC3; http://www.seiscomp3.org/). Earthquake information is provided by the EMSC (http://www.emsc-csem.org/) and all the seismic waveform data is accessed at the European Integrated waveform Data Archive (EIDA) at ORFEUS (http://www.orfeus-eu.org/index.html), where all on-scale data is used in the fully automated processing. As the EIDA community is continually growing, the already significant number of strong motion stations is also increasing and the importance of this product is expected to also increase. Real-time RRSM processing started in June 2014, while past events have been processed in order to provide a complete database back to 2005.

  6. Overview of NoSQL and comparison with SQL database ...

    African Journals Online (AJOL)

    Overview of NoSQL and comparison with SQL database management systems. ... Abstract. The increasing need for space in the database community has caused the revolution named NoSQL, 'Not Only SQL'. ...

  7. Constructing a Geology Ontology Using a Relational Database

    Science.gov (United States)

    Hou, W.; Yang, L.; Yin, S.; Ye, J.; Clarke, K.

    2013-12-01

    In the geology community, the creation of a common geology ontology has become a useful means to solve problems of data integration, knowledge transformation and the interoperation of multi-source, heterogeneous and multiple-scale geological data. Currently, human-computer interaction methods and relational database-based methods are the primary ontology construction methods. Several human-computer interaction methods, such as the Geo-rule-based method, the ontology life cycle method and the module design method, have been proposed for applied geological ontologies. Essentially, the relational database-based method is a reverse engineering of abstracted semantic information from an existing database. The key is to construct rules for the transformation of database entities into the ontology. Relative to human-computer interaction methods, relational database-based methods can use existing resources and the stated semantic relationships among geological entities. However, two problems challenge their development and application. One is the transformation of multiple inheritances and nested relationships and their representation in an ontology. The other is that most of these methods do not measure the semantic retention of the transformation process. In this study, we focused on constructing a rule set to convert the semantics in a geological database into a geological ontology. Based on the relational schema of a geological database, a conversion approach is presented that converts a geological spatial database to an OWL-based geological ontology by identifying semantics such as entities, relationships, inheritance relationships, nested relationships and cluster relationships. The semantic integrity of the transformation was verified using an inverse mapping process. In the geological ontology, inheritance and union operations between superclass and subclass were used to represent the nested relationships in a geochronology and the multiple inheritances
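    The core idea, mapping relational-schema elements (tables, inheritance links, columns) onto OWL classes and properties, can be sketched compactly. The snippet below is a minimal illustration using rdflib; the toy schema, table names and namespace URI are assumptions for illustration and the rule set is far simpler than the one described in the paper.

```python
# Minimal sketch of a relational-to-OWL mapping with rdflib.
# The toy schema, table/column names and namespace URI are hypothetical.
from rdflib import Graph, Namespace, RDF, RDFS, OWL

GEO = Namespace("http://example.org/geo-ontology#")

# A toy "relational schema": table -> (parent table or None, columns)
schema = {
    "GeologicUnit": (None,           ["name", "age_ma"]),
    "IgneousUnit":  ("GeologicUnit", ["rock_type"]),   # inheritance relationship
    "Fault":        (None,           ["name", "displacement_m"]),
}

g = Graph()
g.bind("geo", GEO)
g.bind("owl", OWL)

for table, (parent, columns) in schema.items():
    cls = GEO[table]
    g.add((cls, RDF.type, OWL.Class))            # rule: table -> owl:Class
    if parent:                                   # rule: parent table -> rdfs:subClassOf
        g.add((cls, RDFS.subClassOf, GEO[parent]))
    for col in columns:                          # rule: column -> owl:DatatypeProperty
        prop = GEO[f"{table}_{col}"]
        g.add((prop, RDF.type, OWL.DatatypeProperty))
        g.add((prop, RDFS.domain, cls))

print(g.serialize(format="turtle"))
```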

  8. The Danish Inguinal Hernia database

    DEFF Research Database (Denmark)

    Friis-Andersen, Hans; Bisgaard, Thue

    2016-01-01

    AIM OF DATABASE: To monitor and improve nation-wide surgical outcome after groin hernia repair based on scientific evidence-based surgical strategies for the national and international surgical community. STUDY POPULATION: Patients ≥18 years operated for groin hernia. MAIN VARIABLES: Type and size...... access to their own data stratified on individual surgeons. Registrations are based on a closed, protected Internet system requiring personal codes also identifying the operating institution. A national steering committee consisting of 13 voluntary and dedicated surgeons, 11 of whom are unpaid, handles...... the medical management of the database. RESULTS: The Danish Inguinal Hernia Database comprises intraoperative data from >130,000 repairs (May 2015). A total of 49 peer-reviewed national and international publications have been published from the database (June 2015). CONCLUSION: The Danish Inguinal Hernia...

  9. The eNanoMapper database for nanomaterial safety information

    Directory of Open Access Journals (Sweden)

    Nina Jeliazkova

    2015-07-01

    Full Text Available Background: The NanoSafety Cluster, a cluster of projects funded by the European Commission, identified the need for a computational infrastructure for toxicological data management of engineered nanomaterials (ENMs). Ontologies, open standards, and interoperable designs were envisioned to empower a harmonized approach to European research in nanotechnology. This setting provides a number of opportunities and challenges in the representation of nanomaterials data and the integration of ENM information originating from diverse systems. Within this cluster, eNanoMapper works towards supporting the collaborative safety assessment for ENMs by creating a modular and extensible infrastructure for data sharing, data analysis, and building computational toxicology models for ENMs. Results: The eNanoMapper database solution builds on the previous experience of the consortium partners in supporting diverse data through flexible data storage, open source components and web services. We have recently described the design of the eNanoMapper prototype database along with a summary of challenges in the representation of ENM data and an extensive review of existing nano-related data models, databases, and nanomaterials-related entries in chemical and toxicogenomic databases. This paper continues with a focus on the database functionality exposed through its application programming interface (API), and its use in visualisation and modelling. Considering the preferred community practice of using spreadsheet templates, we developed a configurable spreadsheet parser facilitating user-friendly data preparation and data upload. We further present a web application able to retrieve the experimental data via the API and analyze it with multiple data preprocessing and machine learning algorithms. Conclusion: We demonstrate how the eNanoMapper database is used to import and publish online ENM and assay data from several data sources, how the “representational state
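    Since the abstract stresses that the database is consumed through its API, a small client sketch may help picture the workflow. The base URL, path, query parameters and response fields below are assumptions for illustration only; the actual contract should be taken from the eNanoMapper documentation.

```python
# Minimal sketch of pulling nanomaterial records from a REST API as JSON.
# The base URL, path, parameters and field names are assumptions for
# illustration; consult the eNanoMapper API documentation for the real contract.
import requests

BASE_URL = "https://data.enanomapper.net"   # assumed public instance

resp = requests.get(
    f"{BASE_URL}/substance",
    params={"page": 0, "pagesize": 10},
    headers={"Accept": "application/json"},
    timeout=30,
)
resp.raise_for_status()

for substance in resp.json().get("substance", []):
    # Field names are illustrative; inspect a real payload before relying on them.
    print(substance.get("name"), substance.get("substanceType"))
```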

  10. DataCell: Exploiting the Power of Relational Databases for Efficient Stream Processing

    NARCIS (Netherlands)

    E. Liarou (Erietta); M.L. Kersten (Martin)

    2009-01-01

    Designed for complex event processing, DataCell is a research prototype database system in the area of sensor stream systems. Under development at CWI, it belongs to the MonetDB database system family. CWI researchers innovatively built a stream engine directly on top of a database

  11. Exploiting the Power of Relational Databases for Efficient Stream Processing

    NARCIS (Netherlands)

    E. Liarou (Erietta); R.A. Goncalves (Romulo); S. Idreos (Stratos)

    2009-01-01

    Stream applications have gained significant popularity over the last years, which has led to the development of specialized stream engines. These systems are designed from scratch with a different philosophy than today's database engines in order to cope with the stream applications

  12. The NIDDK Information Network: A Community Portal for Finding Data, Materials, and Tools for Researchers Studying Diabetes, Digestive, and Kidney Diseases.

    Directory of Open Access Journals (Sweden)

    Patricia L Whetzel

    Full Text Available The NIDDK Information Network (dkNET; http://dknet.org) was launched to serve the needs of basic and clinical investigators in metabolic, digestive and kidney disease by facilitating access to research resources that advance the mission of the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK). By research resources, we mean the multitude of data, software tools, materials, services, projects and organizations available to researchers in the public domain. Most of these are accessed via web-accessible databases or web portals, each developed, designed and maintained by numerous different projects, organizations and individuals. While many of the large government funded databases, maintained by agencies such as European Bioinformatics Institute and the National Center for Biotechnology Information, are well known to researchers, many more that have been developed by and for the biomedical research community are unknown or underutilized. At least part of the problem is the nature of dynamic databases, which are considered part of the "hidden" web, that is, content that is not easily accessed by search engines. dkNET was created specifically to address the challenge of connecting researchers to research resources via these types of community databases and web portals. dkNET functions as a "search engine for data", searching across millions of database records contained in hundreds of biomedical databases developed and maintained by independent projects around the world. A primary focus of dkNET is centers and projects specifically created to provide high quality data and resources to NIDDK researchers. Through the novel data ingest process used in dkNET, additional data sources can easily be incorporated, allowing it to scale with the growth of digital data and the needs of the dkNET community. Here, we provide an overview of the dkNET portal and its functions. We show how dkNET can be used to address a variety of use cases

  13. User's Guide: Database of literature pertaining to the unsaturated zone and surface water-ground water interactions at the Idaho National Engineering Laboratory

    International Nuclear Information System (INIS)

    Hall, L.F.

    1993-05-01

    Since its beginnings in 1949, hydrogeologic investigations at the Idaho National Engineering Laboratory (INEL) have resulted in an extensive collection of technical publications providing information concerning ground water hydraulics and contaminant transport within the unsaturated zone. Funding has been provided by the Department of Energy through the Department of Energy Idaho Field Office in a grant to compile an INEL-wide summary of unsaturated zone studies based on a literature search. University of Idaho researchers are conducting a review of technical documents produced at or pertaining to the INEL, which present or discuss processes in the unsaturated zone and surface water-ground water interactions. Results of this review are being compiled as an electronic database. Fields are available in this database for document title and associated identification number, author, source, abstract, and summary of information (including types of data and parameters). AskSam®, a text-based database system, was chosen. WordPerfect 5.1© is being used as a text editor to input data records into askSam

  14. Significance of FIZ Technik Databases in nuclear safety and environmental protection

    International Nuclear Information System (INIS)

    Das, N.K.

    1993-01-01

    The language of the abstracts in the FIZ Technik databases is primarily German (e.g. DOMA 80%, SDIM 70%). Furthermore, FIZ Technik offers licensed databases on engineering and technology, management, manufacturers, products, contacts, standards and specifications, geosciences and natural resources. The contents and structure of the databases are described in the FIZ Technik bluesheets and the database news. The significance of the FIZ Technik databases DOMA, ZDEE, SDIM, SILI and MEDI for nuclear safety and environmental protection is shown with some examples. (orig.)

  15. Custom Search Engines: Tools & Tips

    Science.gov (United States)

    Notess, Greg R.

    2008-01-01

    Few have the resources to build a Google or Yahoo! from scratch. Yet anyone can build a search engine based on a subset of the large search engines' databases. Use Google Custom Search Engine or Yahoo! Search Builder or any of the other similar programs to create a vertical search engine targeting sites of interest to users. The basic steps to…
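    The step-by-step instructions are truncated in this record; as a complementary illustration, a custom engine built this way can also be queried programmatically through Google's Custom Search JSON API. In the sketch below, the API key and the engine identifier (cx) are placeholders, not real credentials, and the query string is an arbitrary example.

```python
# Minimal sketch: query a Google Custom Search Engine through the JSON API.
# API_KEY and CX are placeholders; obtain real values from the Google console.
import requests

API_KEY = "YOUR_API_KEY"          # placeholder
CX = "YOUR_SEARCH_ENGINE_ID"      # placeholder: the custom engine's cx value

resp = requests.get(
    "https://www.googleapis.com/customsearch/v1",
    params={"key": API_KEY, "cx": CX, "q": "aeolian geomorphology database"},
    timeout=30,
)
resp.raise_for_status()

# Print the title and link of each result returned by the custom engine.
for item in resp.json().get("items", []):
    print(item["title"], "->", item["link"])
```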

  16. A generic framework for extracting XML data from legacy databases

    NARCIS (Netherlands)

    Thiran, Ph.; Estiévenart, F.; Hainaut, J.L.; Houben, G.J.P.M.

    2005-01-01

    This paper describes a generic framework with which semantics-based XML data can be derived from legacy databases. It consists of first recovering the conceptual schema of the database through reverse engineering techniques, and then converting this schema, or part of it, into XML-compliant data
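    The final step of such a pipeline, serializing relational rows as XML once the schema is understood, can be shown in a few lines. The sketch below is not the authors' framework; it uses sqlite3 and ElementTree on a single hypothetical table purely to illustrate the relational-to-XML conversion.

```python
# Minimal sketch: export rows of a legacy relational table as XML.
# The table and column names are hypothetical, not from the paper.
import sqlite3
import xml.etree.ElementTree as ET

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
conn.executemany("INSERT INTO customer VALUES (?, ?, ?)",
                 [(1, "Acme Ltd", "Namur"), (2, "Globex", "Eindhoven")])

# Each row becomes a <customer> element; each column becomes a child element.
root = ET.Element("customers")
for cid, name, city in conn.execute("SELECT id, name, city FROM customer"):
    elem = ET.SubElement(root, "customer", id=str(cid))
    ET.SubElement(elem, "name").text = name
    ET.SubElement(elem, "city").text = city

print(ET.tostring(root, encoding="unicode"))
```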

  17. Hanford Site technical baseline database. Revision 1

    International Nuclear Information System (INIS)

    Porter, P.E.

    1995-01-01

    This report lists the Hanford specific files (Table 1) that make up the Hanford Site Technical Baseline Database. Table 2 includes the delta files that delineate the differences between this revision and revision 0 of the Hanford Site Technical Baseline Database. This information is being managed and maintained on the Hanford RDD-100 System, which uses the capabilities of RDD-100, a systems engineering software system of Ascent Logic Corporation (ALC). This revision of the Hanford Site Technical Baseline Database uses RDD-100 version 3.0.2.2 (see Table 3). Directories reflect those controlled by the Hanford RDD-100 System Administrator. Table 4 provides information regarding the platform. A cassette tape containing the Hanford Site Technical Baseline Database is available

  18. Experiment Databases

    Science.gov (United States)

    Vanschoren, Joaquin; Blockeel, Hendrik

    Next to running machine learning algorithms based on inductive queries, much can be learned by immediately querying the combined results of many prior studies. Indeed, all around the globe, thousands of machine learning experiments are being executed on a daily basis, generating a constant stream of empirical information on machine learning techniques. While the information contained in these experiments might have many uses beyond their original intent, results are typically described very concisely in papers and discarded afterwards. If we properly store and organize these results in central databases, they can be immediately reused for further analysis, thus boosting future research. In this chapter, we propose the use of experiment databases: databases designed to collect all the necessary details of these experiments, and to intelligently organize them in online repositories to enable fast and thorough analysis of a myriad of collected results. They constitute an additional, queriable source of empirical meta-data based on principled descriptions of algorithm executions, without reimplementing the algorithms in an inductive database. As such, they engender a very dynamic, collaborative approach to experimentation, in which experiments can be freely shared, linked together, and immediately reused by researchers all over the world. They can be set up for personal use, to share results within a lab or to create open, community-wide repositories. Here, we provide a high-level overview of their design, and use an existing experiment database to answer various interesting research questions about machine learning algorithms and to verify a number of recent studies.
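    As a concrete illustration of the idea, the sketch below stores a few experiment records in a local SQLite table and answers a simple meta-level question across them. The schema and the records are toy examples, far simpler than the principled experiment descriptions the chapter proposes.

```python
# Minimal sketch of an "experiment database": store runs, then query across them.
# The schema and the inserted records are toy examples, not the chapter's design.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE experiment (
    algorithm TEXT, dataset TEXT, params TEXT, accuracy REAL)""")
conn.executemany(
    "INSERT INTO experiment VALUES (?, ?, ?, ?)",
    [("random_forest", "iris",   "trees=100", 0.95),
     ("random_forest", "digits", "trees=100", 0.93),
     ("svm_rbf",       "iris",   "C=1.0",     0.96),
     ("svm_rbf",       "digits", "C=1.0",     0.97)],
)

# Meta-level question: which algorithm performs best on average across datasets?
for algo, mean_acc in conn.execute(
        "SELECT algorithm, AVG(accuracy) FROM experiment "
        "GROUP BY algorithm ORDER BY AVG(accuracy) DESC"):
    print(f"{algo}: {mean_acc:.3f}")
```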

  19. OLIO+: an osteopathic medicine database.

    Science.gov (United States)

    Woods, S E

    1991-01-01

    OLIO+ is a bibliographic database designed to meet the information needs of the osteopathic medical community. Produced by the American Osteopathic Association (AOA), OLIO+ is devoted exclusively to the osteopathic literature. The database is available only by subscription through AOA and may be accessed from any data terminal with modem or IBM-compatible personal computer with telecommunications software that can emulate VT100 or VT220. Apple access is also available, but some assistance from OLIO+ support staff may be necessary to modify the Apple keyboard.

  20. Construction of a bibliographic information database for the Nuclear Science and Engineering (IX)

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Whan; Oh, Jeong Hun; Choi, Kwang; Keum, Jong Yong [Korea Atomic Energy Research Institute, Taejon (Korea)

    1999-12-01

    The major goal of this project is to construct a database of nuclear science and engineering information materials, to support the R and D activities of users in this field, and ultimately to support the KRISTAL (Korea Research Information for Science and Technology Access Line) DB through KREONet (Korea Research Environment Open Network), one of the five national information networks. The contents of this project are as follows: 1) materials selection and collection, 2) indexing and abstract preparation, 3) data input and transmission, and 4) document delivery service. In this seventh year, a total of 40,000 input records were added to the existing SATURN DB. These records cover articles from nuclear-related core journals, proceedings, seminars, research reports, etc. Through the Web, the project enables users to obtain the information they need and to receive requested materials online. It thereby gives users the chance not only to improve the effectiveness of their R and D activities, but also to avoid duplicated research work. 1 fig., 1 tab. (Author)

  1. Construction of a bibliographic information database for the nuclear science and engineering (VIII)

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Whan; Kim, Soon Ja; Kim, Young Min; Oh, Jeong Hun; Choi, Kwang [Korea Atomic Energy Research Institute, Taejon (Korea)

    1998-12-01

    The major goal of this project is to construct a database of nuclear science and engineering information materials, to support the R and D activities of users in this field, and ultimately to support the KRISTAL (Korea Research Information for Science and Technology Access Line) DB through KREONet (Korea Research Environment Open Network), one of the five national information networks. The contents of this project are as follows: 1) materials selection and collection, 2) indexing and abstract preparation, 3) data input and transmission, and 4) document delivery service. In this seventh year, a total of 35,000 input records were added to the existing SATURN DB using the input system. These records cover articles from nuclear-related core journals, proceedings, seminars, research reports, etc. Through the Web, the project enables users to obtain the information they need and to receive requested materials online. It thereby gives users the chance not only to improve the effectiveness of their R and D activities, but also to avoid duplicated research work. 1 tab. (Author)

  2. Scale out databases for CERN use cases

    International Nuclear Information System (INIS)

    Baranowski, Zbigniew; Grzybek, Maciej; Canali, Luca; Garcia, Daniel Lanza; Surdy, Kacper

    2015-01-01

    Data generation rates are expected to grow very fast for some database workloads going into LHC run 2 and beyond. In particular this is expected for data coming from controls, logging and monitoring systems. Storing, administering and accessing big data sets in a relational database system can quickly become a very hard technical challenge, as the size of the active data set and the number of concurrent users increase. Scale-out database technologies are a rapidly developing set of solutions for deploying and managing very large data warehouses on commodity hardware and with open source software. In this paper we will describe the architecture and tests on database systems based on Hadoop and the Cloudera Impala engine. We will discuss the results of our tests, including tests of data loading and integration with existing data sources and in particular with relational databases. We will report on query performance tests done with various data sets of interest at CERN, notably data from the accelerator log database. (paper)
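    The tests described query Hadoop-resident data through the Cloudera Impala engine. A minimal sketch of issuing such a query from Python is shown below, assuming the third-party impyla client; the host, port, table and column names are placeholders, not CERN's actual schema.

```python
# Minimal sketch: run an SQL query against Cloudera Impala from Python.
# Assumes the third-party "impyla" package; host, table and columns are
# placeholders, not the actual CERN accelerator-log schema.
from impala.dbapi import connect

conn = connect(host="impala-host.example.org", port=21050)
cur = conn.cursor()

# Aggregate a hypothetical accelerator-log table by variable name.
cur.execute("""
    SELECT variable_name, COUNT(*) AS n, AVG(value) AS mean_value
    FROM accelerator_log
    WHERE log_time >= '2015-01-01'
    GROUP BY variable_name
    ORDER BY n DESC
    LIMIT 10
""")

for variable_name, n, mean_value in cur.fetchall():
    print(variable_name, n, mean_value)

cur.close()
conn.close()
```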

  3. Djeen (Database for Joomla!'s Extensible Engine): a research information management system for flexible multi-technology project administration.

    Science.gov (United States)

    Stahl, Olivier; Duvergey, Hugo; Guille, Arnaud; Blondin, Fanny; Vecchio, Alexandre Del; Finetti, Pascal; Granjeaud, Samuel; Vigy, Oana; Bidaut, Ghislain

    2013-06-06

    With the advance of post-genomic technologies, the need for tools to manage large-scale data in biology becomes more pressing. This involves annotating and storing data securely, as well as granting permissions flexibly with several technologies (all array types, flow cytometry, proteomics) for collaborative work and data sharing. This task is not easily achieved with most systems available today. We developed Djeen (Database for Joomla!'s Extensible Engine), a new Research Information Management System (RIMS) for collaborative projects. Djeen is a user-friendly application, designed to streamline data storage and annotation collaboratively. Its database model, kept simple, is compliant with most technologies and allows storing and managing heterogeneous data with the same system. Advanced permissions are managed through different roles. Templates allow Minimum Information (MI) compliance. Djeen allows managing projects associated with heterogeneous data types while enforcing annotation integrity and minimum information. Projects are managed within a hierarchy and user permissions are finely-grained for each project, user and group. Djeen Component source code (version 1.5.1) and installation documentation are available under CeCILL license from http://sourceforge.net/projects/djeen/files and supplementary material.

  4. On the Compliance of Women Engineers with a Gendered Scientific System.

    Directory of Open Access Journals (Sweden)

    Gita Ghiasi

    Full Text Available There has been considerable effort in the last decade to increase the participation of women in engineering through various policies. However, there has been little empirical research on gender disparities in engineering which help underpin the effective preparation, co-ordination, and implementation of the science and technology (S&T) policies. This article aims to present a comprehensive gendered analysis of engineering publications across different specialties and provide a cross-gender analysis of research output and scientific impact of engineering researchers in academic, governmental, and industrial sectors. For this purpose, 679,338 engineering articles published from 2008 to 2013 are extracted from the Web of Science database and 974,837 authorships are analyzed. The structures of co-authorship collaboration networks in different engineering disciplines are examined, highlighting the role of female scientists in the diffusion of knowledge. The findings reveal that men dominate 80% of all the scientific production in engineering. Women engineers publish their papers in journals with higher Impact Factors than their male peers, but their work receives lower recognition (fewer citations) from the scientific community. Engineers-regardless of their gender-contribute to the reproduction of the male-dominated scientific structures through forming and repeating their collaborations predominantly with men. The results of this study call for integration of data driven gender-related policies in existing S&T discourse.

  5. On the Compliance of Women Engineers with a Gendered Scientific System.

    Science.gov (United States)

    Ghiasi, Gita; Larivière, Vincent; Sugimoto, Cassidy R

    2015-01-01

    There has been considerable effort in the last decade to increase the participation of women in engineering through various policies. However, there has been little empirical research on gender disparities in engineering which help underpin the effective preparation, co-ordination, and implementation of the science and technology (S&T) policies. This article aims to present a comprehensive gendered analysis of engineering publications across different specialties and provide a cross-gender analysis of research output and scientific impact of engineering researchers in academic, governmental, and industrial sectors. For this purpose, 679,338 engineering articles published from 2008 to 2013 are extracted from the Web of Science database and 974,837 authorships are analyzed. The structures of co-authorship collaboration networks in different engineering disciplines are examined, highlighting the role of female scientists in the diffusion of knowledge. The findings reveal that men dominate 80% of all the scientific production in engineering. Women engineers publish their papers in journals with higher Impact Factors than their male peers, but their work receives lower recognition (fewer citations) from the scientific community. Engineers-regardless of their gender-contribute to the reproduction of the male-dominated scientific structures through forming and repeating their collaborations predominantly with men. The results of this study call for integration of data driven gender-related policies in existing S&T discourse.
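    The co-authorship collaboration networks examined in this study can be represented as weighted graphs built from per-paper author lists. The sketch below does this with networkx on a few invented authorship records, not on the Web of Science data used in the paper, and reports each author's number of distinct collaborators.

```python
# Minimal sketch: build a co-authorship network from per-paper author lists.
# The author lists are invented examples, not the study's Web of Science data.
from itertools import combinations
import networkx as nx

papers = [
    ["A. Ahmed", "B. Brown", "C. Chen"],
    ["B. Brown", "D. Diaz"],
    ["A. Ahmed", "D. Diaz", "E. Evans"],
]

G = nx.Graph()
for authors in papers:
    # Every pair of co-authors on a paper gets an edge; repeated pairs add weight.
    for a, b in combinations(authors, 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Simple structural summary: number of distinct collaborators per author.
for author, degree in sorted(G.degree, key=lambda x: -x[1]):
    print(f"{author}: {degree} collaborators")
```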

  6. Database Resources of the BIG Data Center in 2018.

    Science.gov (United States)

    2018-01-04

    The BIG Data Center at Beijing Institute of Genomics (BIG) of the Chinese Academy of Sciences provides freely open access to a suite of database resources in support of worldwide research activities in both academia and industry. With the vast amounts of omics data generated at ever-greater scales and rates, the BIG Data Center is continually expanding, updating and enriching its core database resources through big-data integration and value-added curation, including BioCode (a repository archiving bioinformatics tool codes), BioProject (a biological project library), BioSample (a biological sample library), Genome Sequence Archive (GSA, a data repository for archiving raw sequence reads), Genome Warehouse (GWH, a centralized resource housing genome-scale data), Genome Variation Map (GVM, a public repository of genome variations), Gene Expression Nebulas (GEN, a database of gene expression profiles based on RNA-Seq data), Methylation Bank (MethBank, an integrated databank of DNA methylomes), and Science Wikis (a series of biological knowledge wikis for community annotations). In addition, three featured web services are provided, viz., BIG Search (search as a service; a scalable inter-domain text search engine), BIG SSO (single sign-on as a service; a user access control system to gain access to multiple independent systems with a single ID and password) and Gsub (submission as a service; a unified submission service for all relevant resources). All of these resources are publicly accessible through the home page of the BIG Data Center at http://bigd.big.ac.cn. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  7. Engineering success: Undergraduate Latina women's persistence in an undergraduate engineering program

    Science.gov (United States)

    Rosbottom, Steven R.

    The purpose and focus of this narrative inquiry case study were to explore the personal stories of four undergraduate Latina students who persist in their engineering programs. This study was guided by two overarching research questions: a) What are the lived experiences of undergraduate Latina engineering students? b) What are the contributing factors that influence undergraduate Latina students to persist in an undergraduate engineering program? Yosso's (2005) community cultural wealth model was used to analyze the data. Findings suggest that, through Yosso's (2005) aspirational capital, familial capital, social capital, navigational capital, and resistant capital, the Latina students persisted in their engineering programs. These contributing factors brought to light five emergent themes: the discovery of academic passions, guidance and support of family and teachers, preparation for and commitment to persistence, the power of community and collective engagement, and commitment to helping others. The themes supported their persistence in their engineering programs. Thus, this study informs policies, practices, and programs that support undergraduate Latina engineering students' persistence in engineering programs.

  8. A Study To Determine the Job Satisfaction of the Engineering/Industrial Technology Faculty at Delgado Community College.

    Science.gov (United States)

    Satterlee, Brian

    A study assessed job satisfaction among Engineering/Industrial Technology faculty at Delgado Community College (New Orleans, Louisiana). A secondary purpose was to confirm Herzberg's Two-Factor Theory of Job Satisfaction (1966) that workers derived satisfaction from the work itself and that causes of dissatisfaction stemmed from conditions…

  9. Federated Database Services for Wind Tunnel Experiment Workflows

    Directory of Open Access Journals (Sweden)

    A. Paventhan

    2006-01-01

    Full Text Available Enabling the full life cycle of scientific and engineering workflows requires robust middleware and services that support effective data management, near-realtime data movement and custom data processing. Many existing solutions exploit the database as a passive metadata catalog. In this paper, we present an approach that makes use of federation of databases to host data-centric wind tunnel application workflows. The user is able to compose customized application workflows based on database services. We provide a reference implementation that leverages typical business tools and technologies: Microsoft SQL Server for database services and Windows Workflow Foundation for workflow services. The application data and user's code are both hosted in federated databases. With the growing interest in XML Web Services in scientific Grids, and with databases beginning to support native XML types and XML Web services, we can expect the role of databases in scientific computation to grow in importance.

  10. Vertical distribution of chlorophyll a concentration and phytoplankton community composition from in situ fluorescence profiles: a first database for the global ocean

    Science.gov (United States)

    Sauzède, R.; Lavigne, H.; Claustre, H.; Uitz, J.; Schmechtig, C.; D'Ortenzio, F.; Guinet, C.; Pesant, S.

    2015-10-01

    In vivo chlorophyll a fluorescence is a proxy of chlorophyll a concentration, and is one of the most frequently measured biogeochemical properties in the ocean. Thousands of profiles are available from historical databases and the integration of fluorescence sensors to autonomous platforms has led to a significant increase of chlorophyll fluorescence profile acquisition. To our knowledge, this important source of environmental data has not yet been included in global analyses. A total of 268 127 chlorophyll fluorescence profiles from several databases as well as published and unpublished individual sources were compiled. Following a robust quality control procedure detailed in the present paper, about 49 000 chlorophyll fluorescence profiles were converted into phytoplankton biomass (i.e., chlorophyll a concentration) and size-based community composition (i.e., microphytoplankton, nanophytoplankton and picophytoplankton), using a method specifically developed to harmonize fluorescence profiles from diverse sources. The data span over 5 decades from 1958 to 2015, including observations from all major oceanic basins and all seasons, and depths ranging from the surface to a median maximum sampling depth of around 700 m. Global maps of chlorophyll a concentration and phytoplankton community composition are presented here for the first time. Monthly climatologies were computed for three of Longhurst's ecological provinces in order to exemplify the potential use of the data product. Original data sets (raw fluorescence profiles) as well as calibrated profiles of phytoplankton biomass and community composition are available on open access at PANGAEA, Data Publisher for Earth and Environmental Science. Raw fluorescence profiles: http://doi.pangaea.de/10.1594/PANGAEA.844212 and Phytoplankton biomass and community composition: http://doi.pangaea.de/10.1594/PANGAEA.844485
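    The calibrated profiles are distributed through PANGAEA as tab-delimited exports. The sketch below shows how such an export, once downloaded locally, could be summarized with pandas; the file name and column labels are assumptions, so the actual headers of the downloaded file should be checked first.

```python
# Minimal sketch: summarize a downloaded PANGAEA tab-delimited export with pandas.
# The file name and column labels are assumptions; check the real headers first.
import pandas as pd

df = pd.read_csv("phytoplankton_profiles.tab", sep="\t", comment="#")

# Hypothetical column labels for depth and chlorophyll a concentration.
depth_col, chla_col = "Depth water [m]", "Chl a [mg/m**3]"

# Median chlorophyll a concentration in 50 m depth bins down to ~700 m.
depth_bins = pd.cut(df[depth_col], bins=range(0, 750, 50))
print(df.groupby(depth_bins, observed=True)[chla_col].median())
```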

  11. Petaminer: Using ROOT for efficient data storage in MySQL database

    Science.gov (United States)

    Cranshaw, J.; Malon, D.; Vaniachine, A.; Fine, V.; Lauret, J.; Hamill, P.

    2010-04-01

    High Energy and Nuclear Physics (HENP) experiments store Petabytes of event data and Terabytes of calibration data in ROOT files. The Petaminer project is developing a custom MySQL storage engine to enable the MySQL query processor to directly access experimental data stored in ROOT files. Our project is addressing the problem of efficient navigation to PetaBytes of HENP experimental data described with event-level TAG metadata, which is required by data intensive physics communities such as the LHC and RHIC experiments. Physicists need to be able to compose a metadata query and rapidly retrieve the set of matching events, where improved efficiency will facilitate the discovery process by permitting rapid iterations of data evaluation and retrieval. Our custom MySQL storage engine enables the MySQL query processor to directly access TAG data stored in ROOT TTrees. As ROOT TTrees are column-oriented, reading them directly provides improved performance over traditional row-oriented TAG databases. Leveraging the flexible and powerful SQL query language to access data stored in ROOT TTrees, the Petaminer approach enables rich MySQL index-building capabilities for further performance optimization.
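    Petaminer's custom MySQL storage engine is not reproduced here, but the column-oriented access to TAG data it relies on can be illustrated with the third-party uproot package, which reads ROOT TTrees branch by branch. The file, tree and branch names below are hypothetical.

```python
# Minimal sketch: column-oriented reads of event-level TAG data from a ROOT TTree.
# Uses the third-party "uproot" package; file, tree and branch names are hypothetical.
import uproot

with uproot.open("tags.root") as f:
    tree = f["TagTree"]
    # Read only the branches needed for the selection -- this column-oriented
    # access pattern is what makes TAG-style queries fast.
    data = tree.arrays(["RunNumber", "EventNumber", "NJets"], library="np")

selected = data["EventNumber"][data["NJets"] >= 4]
print(f"{len(selected)} events pass the NJets >= 4 selection")
```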

  12. Petaminer: Using ROOT for efficient data storage in MySQL database

    International Nuclear Information System (INIS)

    Cranshaw, J; Malon, D; Vaniachine, A; Fine, V; Lauret, J; Hamill, P

    2010-01-01

    High Energy and Nuclear Physics (HENP) experiments store Petabytes of event data and Terabytes of calibration data in ROOT files. The Petaminer project is developing a custom MySQL storage engine to enable the MySQL query processor to directly access experimental data stored in ROOT files. Our project is addressing the problem of efficient navigation to PetaBytes of HENP experimental data described with event-level TAG metadata, which is required by data intensive physics communities such as the LHC and RHIC experiments. Physicists need to be able to compose a metadata query and rapidly retrieve the set of matching events, where improved efficiency will facilitate the discovery process by permitting rapid iterations of data evaluation and retrieval. Our custom MySQL storage engine enables the MySQL query processor to directly access TAG data stored in ROOT TTrees. As ROOT TTrees are column-oriented, reading them directly provides improved performance over traditional row-oriented TAG databases. Leveraging the flexible and powerful SQL query language to access data stored in ROOT TTrees, the Petaminer approach enables rich MySQL index-building capabilities for further performance optimization.

  13. The design and implementation of embedded database in HIRFL-CSR

    International Nuclear Information System (INIS)

    Xu Yang; Liu Wufeng; Long Yindong; Chinese Academy of Sciences, Beijing; Qiao Weimin; Guo Yuhui

    2008-01-01

    This article introduces the design and implementation of an embedded database for the HIRFL-CSR control system. The control system has a three-level database for centralized management and distributed control. Levels I and II are based on Windows and connect with each other through Oracle ODBC. Level III is based on embedded Linux and communicates with the other levels through the SQLite database engine. The central control room sends waveform data, event tables and other data to the front-end embedded database in advance; during an experiment, the SQLite embedded database relays the data to the waveform generator DSP. Under the control of a synchronization trigger, the DSP generates waveform data to control the power supply and magnetic field. (authors)
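    The Level III role, buffering pre-loaded waveform data in SQLite and reading it back on a trigger, can be sketched as below. The table and column names are hypothetical, since the paper does not publish the actual schema.

```python
# Minimal sketch of an embedded SQLite buffer for pre-loaded waveform data.
# Table and column names are hypothetical; the paper does not give the schema.
import sqlite3

conn = sqlite3.connect("frontend_buffer.db")
conn.execute("""CREATE TABLE IF NOT EXISTS waveform (
    cycle_id INTEGER, sample_index INTEGER, amplitude REAL,
    PRIMARY KEY (cycle_id, sample_index))""")

# Pre-load data received from the central control room before the experiment run.
samples = [(1, i, 0.001 * i) for i in range(1000)]
conn.executemany("INSERT OR REPLACE INTO waveform VALUES (?, ?, ?)", samples)
conn.commit()

# On the synchronization trigger, read the cycle back for relay to the DSP.
wave = [amp for (amp,) in conn.execute(
    "SELECT amplitude FROM waveform WHERE cycle_id = ? ORDER BY sample_index", (1,))]
print(f"relaying {len(wave)} samples to the DSP")
conn.close()
```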

  14. UNM in the Community.

    Science.gov (United States)

    Quantum: Research & Scholarship, 1998

    1998-01-01

    Profiles 10 University of New Mexico community programs: University Art Museum, Rio Grande and Four Corners Writing Projects, Blacks in the Southwest (exhibit), New Mexico Engineering Research Institute's Environmental Finance Center, Adolescent Social Action Program, Minority Engineering Programs, Rural Community College Initiative, Valencia…

  15. Construction of a bibliographic information database for the nuclear engineering

    International Nuclear Information System (INIS)

    Kim, Tae Whan; Lim, Yeon Soo; Kwac, Dong Chul

    1991-12-01

    The major goal of the project is to develop a nuclear science database of materials that have been published in Korea and to establish a network system that will give relevant information to people in the nuclear industry by linking this system with the proposed National Science Technical Information Network. This project aims to establish a database consisting of about 1,000 research reports prepared by KAERI from 1979 to 1990. The contents of the project are as follows: 1. Materials Selection and Collection 2. Index and Abstract Preparation 3. Data Input and Transmission. This project is intended to achieve the goal of maximum utilization of nuclear information in Korea. (Author)

  16. Deja vu: a database of highly similar citations in the scientific literature.

    Science.gov (United States)

    Errami, Mounir; Sun, Zhaohui; Long, Tara C; George, Angela C; Garner, Harold R

    2009-01-01

    In the scientific research community, plagiarism and covert multiple publications of the same data are considered unacceptable because they undermine the public confidence in the scientific integrity. Yet, little has been done to help authors and editors to identify highly similar citations, which sometimes may represent cases of unethical duplication. For this reason, we have made available Déjà vu, a publicly available database of highly similar Medline citations identified by the text similarity search engine eTBLAST. Following manual verification, highly similar citation pairs are classified into various categories ranging from duplicates with different authors to sanctioned duplicates. Déjà vu records also contain user-provided commentary and supporting information to substantiate each document's categorization. Déjà vu and eTBLAST are available to authors, editors, reviewers, ethicists and sociologists to study, intercept, annotate and deter questionable publication practices. These tools are part of a sustained effort to enhance the quality of Medline as 'the' biomedical corpus. The Déjà vu database is freely accessible at http://spore.swmed.edu/dejavu. The tool eTBLAST is also freely available at http://etblast.org.
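    Déjà vu is built on the eTBLAST similarity engine, which is not reproduced here; as a rough illustration of the general idea of flagging highly similar citation pairs, the sketch below applies TF-IDF cosine similarity from scikit-learn to a few invented abstracts with an arbitrary cut-off.

```python
# Rough illustration only: flag highly similar abstract pairs with TF-IDF cosine
# similarity (this is NOT the eTBLAST algorithm used by Deja vu).
from itertools import combinations
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented abstracts keyed by invented identifiers.
abstracts = {
    "PMID:1": "We describe a database of highly similar biomedical citations.",
    "PMID:2": "A database of highly similar biomedical citations is described.",
    "PMID:3": "Shock-wave equation-of-state data for metals at high pressure.",
}

ids = list(abstracts)
tfidf = TfidfVectorizer(stop_words="english").fit_transform(abstracts.values())
sim = cosine_similarity(tfidf)

THRESHOLD = 0.6   # arbitrary cut-off for "highly similar"
for i, j in combinations(range(len(ids)), 2):
    if sim[i, j] >= THRESHOLD:
        print(f"{ids[i]} and {ids[j]} look highly similar (cosine={sim[i, j]:.2f})")
```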

  17. Scale out databases for CERN use cases

    CERN Document Server

    Baranowski, Zbigniew; Canali, Luca; Garcia, Daniel Lanza; Surdy, Kacper

    2015-01-01

    Data generation rates are expected to grow very fast for some database workloads going into LHC run 2 and beyond. In particular this is expected for data coming from controls, logging and monitoring systems. Storing, administering and accessing big data sets in a relational database system can quickly become a very hard technical challenge, as the size of the active data set and the number of concurrent users increase. Scale-out database technologies are a rapidly developing set of solutions for deploying and managing very large data warehouses on commodity hardware and with open source software. In this paper we will describe the architecture and tests on database systems based on Hadoop and the Cloudera Impala engine. We will discuss the results of our tests, including tests of data loading and integration with existing data sources and in particular with relational databases. We will report on query performance tests done with various data sets of interest at CERN, notably data from the accelerator log dat...
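
    As an illustration of the kind of test described in this record, here is a hedged sketch of staging offloaded log data as an external Parquet table and running an aggregation through Impala, assuming the open-source impyla client; the host, port, table name, and columns are placeholders rather than CERN's actual configuration.

```python
from impala.dbapi import connect  # impyla client, assumed to be installed

# Hypothetical connection details; the paper does not publish its cluster endpoints.
conn = connect(host="impala-coordinator.example.org", port=21050)
cur = conn.cursor()

# Expose a partition of offloaded log data (already staged as Parquet in HDFS)
# as an external table, then run an aggregation of the kind used in such tests.
cur.execute("""
    CREATE EXTERNAL TABLE IF NOT EXISTS accelerator_log (
        variable_name STRING,
        ts            TIMESTAMP,
        value         DOUBLE)
    STORED AS PARQUET
    LOCATION '/data/accelerator_log'
""")
cur.execute("""
    SELECT variable_name, COUNT(*) AS n, AVG(value) AS mean_value
    FROM accelerator_log
    WHERE ts >= '2015-01-01'
    GROUP BY variable_name
    ORDER BY n DESC
    LIMIT 10
""")
for row in cur.fetchall():
    print(row)
```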

  18. Research reactor records in the INIS database

    International Nuclear Information System (INIS)

    Marinkovic, N.

    2001-01-01

    This report presents a statistical analysis of more than 13,000 records of publications concerned with research and technology in the field of research and experimental reactors which are included in the INIS Bibliographic Database for the period from 1970 to 2001. The main objectives of this bibliometric study were: to make an inventory of research reactor related records in the INIS Database; to provide statistics and scientific indicators for INIS users, namely science managers, researchers, engineers, operators, scientific editors and publishers, and decision-makers in fields related to research reactors; and to extract other useful information from the INIS Bibliographic Database about articles published on research reactor research and technology. (author)

  19. Knowledge Based Engineering for Spatial Database Management and Use

    Science.gov (United States)

    Peuquet, D. (Principal Investigator)

    1984-01-01

    The use of artificial intelligence techniques that are applicable to Geographic Information Systems (GIS) is examined. Questions involving performance and modification of the database structure, the definition of spectra in quadtree structures and their use in search heuristics, extension of the knowledge base, and learning algorithm concepts are investigated.

  20. Report from the 3rd Workshop on Extremely Large Databases

    Directory of Open Access Journals (Sweden)

    Jacek Becla

    2010-02-01

    Full Text Available Academic and industrial users are increasingly facing the challenge of petabytes of data, but managing and analyzing such large data sets still remains a daunting task. Both the database and the map/reduce communities worldwide are working on addressing these issues. The 3rd Extremely Large Databases workshop was organized to examine the needs of scientific communities beginning to face these issues, to reach out to European communities working on extremely large scale data challenges, and to brainstorm possible solutions. The science benchmark that emerged from the 2nd workshop in this series was also debated. This paper is the final report of the discussions and activities at this workshop.

  1. Palaeo sea-level and ice-sheet databases: problems, strategies and perspectives

    Science.gov (United States)

    Rovere, Alessio; Düsterhus, André; Carlson, Anders; Barlow, Natasha; Bradwell, Tom; Dutton, Andrea; Gehrels, Roland; Hibbert, Fiona; Hijma, Marc; Horton, Benjamin; Klemann, Volker; Kopp, Robert; Sivan, Dorit; Tarasov, Lev; Törnqvist, Torbjorn

    2016-04-01

    Databases of palaeoclimate data have driven many major developments in understanding the Earth system. The measurement and interpretation of palaeo sea-level and ice-sheet data that form such databases pose considerable challenges to the scientific communities that use them for further analyses. In this paper, we build on the experience of the PALSEA (PALeo constraints on SEA level rise) community, which is a working group inside the PAGES (Past Global Changes) project, to describe the challenges and best strategies that can be adopted to build a self-consistent and standardised database of geological and geochemical data related to palaeo sea levels and ice sheets. Our aim in this paper is to identify key points that need attention and subsequent funding when undertaking the task of database creation. We conclude that any sea-level or ice-sheet database must be divided into three instances: i) measurement; ii) interpretation; iii) database creation. Measurement should include position, age, description of geological features, and quantification of uncertainties. All must be described as objectively as possible. Interpretation can be subjective, but it should always include uncertainties and all possible interpretations, without unjustified a priori exclusions. We propose that, in the creation of a database, an approach based on Accessibility, Transparency, Trust, Availability, Continued updating, Completeness and Communication of content (ATTAC3) must be adopted. It is also essential to consider the community structure that creates and benefits from a database. We conclude that funding sources should consider addressing not only the creation of original data in specific, research-question-oriented projects, but also the possibility of using part of the funding for IT-related and database-creation tasks, which are essential to guarantee accessibility and maintenance of the collected data.

  2. BBGD: an online database for blueberry genomic data

    Directory of Open Access Journals (Sweden)

    Matthews Benjamin F

    2007-01-01

    Full Text Available Abstract Background Blueberry is a member of the Ericaceae family, which also includes closely related cranberry and more distantly related rhododendron, azalea, and mountain laurel. Blueberry is a major berry crop in the United States, and one that has great nutritional and economic value. Extreme low temperatures, however, reduce crop yield and cause major losses to US farmers. A better understanding of the genes and biochemical pathways that are up- or down-regulated during cold acclimation is needed to produce blueberry cultivars with enhanced cold hardiness. To that end, the blueberry genomics database (BBGD) was developed. Along with the analysis tools and web-based query interfaces, the database serves both the broader Ericaceae research community and the blueberry research community specifically by making available ESTs and gene expression data in searchable formats and in elucidating the underlying mechanisms of cold acclimation and freeze tolerance in blueberry. Description BBGD is the world's first database for blueberry genomics. BBGD is both a sequence and gene expression database. It stores both EST and microarray data and allows scientists to correlate expression profiles with gene function. BBGD is a public online database. Presently, the main focus of the database is the identification of genes in blueberry that are significantly induced or suppressed after low temperature exposure. Conclusion By using the database, researchers have developed EST-based markers for mapping and have identified a number of "candidate" cold tolerance genes that are highly expressed in blueberry flower buds after exposure to low temperatures.

  3. Djeen (Database for Joomla!’s Extensible Engine): a research information management system for flexible multi-technology project administration

    Science.gov (United States)

    2013-01-01

    Background With the advance of post-genomic technologies, the need for tools to manage large-scale data in biology becomes more pressing. This involves annotating and storing data securely, as well as granting permissions flexibly with several technologies (all array types, flow cytometry, proteomics) for collaborative work and data sharing. This task is not easily achieved with most systems available today. Findings We developed Djeen (Database for Joomla!’s Extensible Engine), a new Research Information Management System (RIMS) for collaborative projects. Djeen is a user-friendly application, designed to streamline data storage and annotation collaboratively. Its database model, kept simple, is compliant with most technologies and allows storing and managing heterogeneous data within the same system. Advanced permissions are managed through different roles. Templates allow Minimum Information (MI) compliance. Conclusion Djeen allows managing projects associated with heterogeneous data types while enforcing annotation integrity and minimum information. Projects are managed within a hierarchy and user permissions are finely grained for each project, user and group. Djeen Component source code (version 1.5.1) and installation documentation are available under CeCILL license from http://sourceforge.net/projects/djeen/files and supplementary material. PMID:23742665

  4. Persistence of community college engineering science students: The impact of selected cognitive and noncognitive characteristics

    Science.gov (United States)

    Chatman, Lawrence M., Jr.

    If the United States is to remain technologically competitive, persistence in engineering programs must improve. This study on student persistence employed a mixed-method design to identify the cognitive and noncognitive factors which contribute to students remaining in an engineering science curriculum or switching from an engineering curriculum at a community college in the northeast United States. Records from 372 students were evaluated to determine the characteristics of two groups: those students that persisted with the engineering curriculum and those that switched from engineering; the dropout phenomenon was also evaluated. The quantitative portion of the study used logistic regression analysis on 22 independent variables, while the qualitative portion of the study used group interviews to investigate the noncognitive factors that influenced persisting or switching. The qualitative portion of the study added depth and credibility to the results from the quantitative portion. The study revealed that (1) high grades in first-year calculus, physics and chemistry courses, (2) a lower number of semesters enrolled, (3) attendance with full-time status, and (4) not participating in an English as a Second Language (ESL) program were significant variables for predicting student persistence. The group interviews confirmed several of these contributing factors. Students that dropped out of college began with (1) the lowest levels of remediation, (2) the lowest grade point averages, and (3) the fewest credits earned.

  5. Extended functions of the database machine FREND for interactive systems

    International Nuclear Information System (INIS)

    Hikita, S.; Kawakami, S.; Sano, K.

    1984-01-01

    Well-designed visual interfaces encourage non-expert users to use relational database systems. In systems such as office automation systems or engineering database systems, non-expert users interactively access the database from visual terminals. Depending on the situation, some users may want exclusive use of the database while others may share it. Because those jobs need a lot of time to complete, concurrency control must be well designed to enhance concurrency. The extended method of concurrency control in FREND is presented in this paper. The authors assume that systems are composed of workstations, a local area network and the database machine FREND. This paper also stresses that the workstations and FREND must cooperate to complete concurrency control for interactive applications.

  6. Developments in diffraction databases

    International Nuclear Information System (INIS)

    Jenkins, R.

    1999-01-01

    Full text: There are a number of databases available to the diffraction community. Two of the more important of these are the Powder Diffraction File (PDF) maintained by the International Centre for Diffraction Data (ICDD), and the Inorganic Crystal Structure Database (ICSD) maintained by Fachinformationszentrum (FIZ, Karlsruhe). In application, the PDF has been used as an indispensable tool in phase identification and identification of unknowns. The ICSD database has extensive and explicit reference to the structures of compounds: atomic coordinates, space group and even thermal vibration parameters. A similar database, but for organic compounds, is maintained by the Cambridge Crystallographic Data Centre. These databases are often used as independent sources of information. However, little thought has been given to how to exploit the combined properties of structural database tools. A recently completed agreement between ICDD and FIZ, plus ICDD and Cambridge, provides a first step in the complementary use of the PDF and the ICSD databases. The focus of this paper (as indicated below) is to examine ways of exploiting the combined properties of both databases. In 1996, there were approximately 76,000 entries in the PDF and approximately 43,000 entries in the ICSD database. The ICSD database has now been used to calculate entries in the PDF. Thus, deriving d-spacing and peak intensity data requires the synthesis of full diffraction patterns, i.e., we use the structural data in the ICSD database and then add instrumental resolution information. The combined data from the PDF and ICSD can be effectively used in many ways. For example, we can calculate PDF data for an ideally random crystal distribution and also in the absence of preferred orientation. Again, we can use systematic studies of intermediate members in solid solution series to help produce reliable quantitative phase analyses. In some cases, we can study how solid solution properties vary with composition and

  7. Engineering Hybrid Learning Communities: The Case of a Regional Parent Community

    Directory of Open Access Journals (Sweden)

    Sven Strickroth

    2014-09-01

    Full Text Available We present an approach (and a corresponding system design) for supporting regionally bound hybrid learning communities (i.e., communities which combine traditional face-to-face elements with web-based media such as online community platforms, e-mail and SMS newsletters). The goal of the example community used to illustrate the approach was to support and motivate (especially hard-to-reach, underprivileged) parents in the education of their young children. The article describes the design process used and the challenges faced during the socio-technical system design. An analysis of the community over more than one year indicates that the hybrid approach works better than either of the two “traditional” approaches on its own. Synergy effects occurred, such as the offline trainings advertising the online platform and vice versa, and regular newsletters turned out to have a noticeable effect on the community.

  8. Computer Aided Design for Soil Classification Relational Database ...

    African Journals Online (AJOL)

    unique firstlady

    engineering, several developers were asked what rules they applied to identify ... classification is actually a part of all good science. As Michalski ... by a large number of soil scientists. .... and use. The calculus relational database processing is.

  9. A user-friendly phytoremediation database: creating the searchable database, the users, and the broader implications.

    Science.gov (United States)

    Famulari, Stevie; Witz, Kyla

    2015-01-01

    Designers, students, teachers, gardeners, farmers, landscape architects, architects, engineers, homeowners, and others have uses for the practice of phytoremediation. This research looks at the creation of a phytoremediation database which is designed for ease of use for a non-scientific user, as well as for students in an educational setting ( http://www.steviefamulari.net/phytoremediation ). During 2012, Environmental Artist & Professor of Landscape Architecture Stevie Famulari, with assistance from Kyla Witz, a landscape architecture student, created an online searchable database designed for high public accessibility. The database is a record of research of plant species that aid in the uptake of contaminants, including metals, organic materials, biodiesels & oils, and radionuclides. The database consists of multiple interconnected indexes categorized into common and scientific plant name, contaminant name, and contaminant type. It includes photographs, hardiness zones, specific plant qualities, full citations to the original research, and other relevant information intended to aid those designing with phytoremediation in searching for potential plants which may be used to address their site's needs. The objective of the terminology section is to remove uncertainty for less experienced users, and to clarify terms for a more user-friendly experience. Implications of the work, including education and ease of browsing, as well as use of the database in teaching, are discussed.
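
    A minimal sketch of the interconnected-index idea described above, using SQLite tables for plants, contaminants, and the links between them; every table, column, and row here is invented for illustration and does not reproduce the actual database.

```python
import sqlite3

# Illustrative sketch of interconnected indexes: plants, contaminants, and a
# link table joining them. All names and values are hypothetical.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE plant (
        plant_id     INTEGER PRIMARY KEY,
        common_name  TEXT,
        latin_name   TEXT,
        hardiness    TEXT);
    CREATE TABLE contaminant (
        contaminant_id INTEGER PRIMARY KEY,
        name           TEXT,
        category       TEXT);        -- e.g. metal, organic, oil, radionuclide
    CREATE TABLE uptake (
        plant_id       INTEGER REFERENCES plant(plant_id),
        contaminant_id INTEGER REFERENCES contaminant(contaminant_id),
        citation       TEXT);
""")
db.execute("INSERT INTO plant VALUES (1, 'Sunflower', 'Helianthus annuus', '2-11')")
db.execute("INSERT INTO contaminant VALUES (1, 'Lead', 'metal')")
db.execute("INSERT INTO uptake VALUES (1, 1, 'Example citation')")

# A non-scientific user searches by contaminant name and gets candidate plants.
query = """
    SELECT p.common_name, p.latin_name, p.hardiness, u.citation
    FROM plant p
    JOIN uptake u      ON u.plant_id = p.plant_id
    JOIN contaminant c ON c.contaminant_id = u.contaminant_id
    WHERE c.name LIKE ?
"""
print(db.execute(query, ("%lead%",)).fetchall())
```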

  10. Community Nursing Home (CNH)

    Data.gov (United States)

    Department of Veterans Affairs — The Community Nursing Home (CNH) database contains a list of all Community Nursing Home facilities under local contract to Veterans Health Administration (VHA). CNH...

  11. A Growing Opportunity: Community Gardens Affiliated with US Hospitals and Academic Health Centers.

    Science.gov (United States)

    George, Daniel R; Rovniak, Liza S; Kraschnewski, Jennifer L; Hanson, Ryan; Sciamanna, Christopher N

    Community gardens can reduce public health disparities through promoting physical activity and healthy eating, growing food for underserved populations, and accelerating healing from injury or disease. Despite their potential to contribute to comprehensive patient care, no prior studies have investigated the prevalence of community gardens affiliated with US healthcare institutions, or the demographic characteristics of communities served by these gardens. In 2013, national community garden databases, scientific abstracts, and public search engines (e.g., Google Scholar) were used to identify gardens. Outcomes included the prevalence of hospital-based community gardens by US region, and the demographic characteristics (age, race/ethnicity, education, household income, and obesity rates) of communities served by gardens. There were 110 healthcare-based gardens, with 39 in the Midwest, 25 in the South, 24 in the Northeast, and 22 in the West. Compared to US population averages, communities served by healthcare-based gardens had similar demographic characteristics but significantly lower rates of obesity (27% versus 34%). Healthcare-based gardens are located in regions that are demographically representative of the US population, and are associated with lower rates of obesity in the communities they serve.

  12. Metabolic Engineering X Conference

    Energy Technology Data Exchange (ETDEWEB)

    Flach, Evan [American Institute of Chemical Engineers

    2015-05-07

    The International Metabolic Engineering Society (IMES) and the Society for Biological Engineering (SBE), both technological communities of the American Institute of Chemical Engineers (AIChE), hosted the Metabolic Engineering X Conference (ME-X) on June 15-19, 2014 at the Westin Bayshore in Vancouver, British Columbia. It attracted 395 metabolic engineers from academia, industry and government from around the globe.

  13. SPIRE Data-Base Management System

    Science.gov (United States)

    Fuechsel, C. F.

    1984-01-01

    Spacelab Payload Integration and Rocket Experiment (SPIRE) data-base management system (DBMS) based on relational model of data bases. Data bases typically used for engineering and mission analysis tasks and, unlike most commercially available systems, allow data items and data structures stored in forms suitable for direct analytical computation. SPIRE DBMS designed to support data requests from interactive users as well as applications programs.

  14. Ecosystem engineers on plants: indirect facilitation of arthropod communities by leaf-rollers at different scales.

    Science.gov (United States)

    Vieira, Camila; Romero, Gustavo Q

    2013-07-01

    Ecosystem engineering is a process by which organisms change the distribution of resources and create new habitats for other species via non-trophic interactions. Leaf-rolling caterpillars can act as ecosystem engineers because they provide shelter to secondary users. In this study, we report the influence of leaf-rolling caterpillars on speciose tropical arthropod communities along both spatial scales (leaf-level and plant-level effects) and temporal scales (dry and rainy seasons). We predict that rolled leaves can amplify arthropod diversity at both the leaf and plant levels and that this effect is stronger in dry seasons, when arthropods are prone to desiccation. Our results show that the abundance, richness, and biomass of arthropods within several guilds increased up to 22-fold in naturally and artificially created leaf shelters relative to unaltered leaves. These effects were observed at similar magnitudes at both the leaf and plant scales. Variation in shelter architecture (funnels, cylinders) did not influence arthropod parameters such as diversity, abundance, or biomass, but rolled leaves had a distinct species composition compared with unaltered leaves. As expected, these arthropod parameters on plants with rolled leaves were on average approximately twofold higher in the dry season. Empty leaf rolls and whole plants were rapidly recolonized by arthropods over time, implying a fast replacement of individuals; within 15-day intervals the rolls and plants reached species saturation. This study is the first to examine the extended effects of engineering caterpillars as diversity amplifiers at different temporal and spatial scales. Because shelter-building caterpillars are ubiquitous organisms in tropical and temperate forests, they can be considered key structuring elements for arthropod communities on plants.

  15. Integration of Oracle and Hadoop: Hybrid Databases Affordable at Scale

    Science.gov (United States)

    Canali, L.; Baranowski, Z.; Kothuri, P.

    2017-10-01

    This work reports on the activities aimed at integrating Oracle and Hadoop technologies for the use cases of CERN database services, and in particular on the development of solutions for offloading data and queries from Oracle databases into Hadoop-based systems. The goal and interest of this investigation is to increase the scalability and optimize the cost/performance footprint for some of our largest Oracle databases. These concepts have been applied, among others, to build offline copies of CERN accelerator controls and logging databases. The tested solution allows reports to be run on the controls data offloaded into Hadoop without affecting the critical production database, providing both performance benefits and cost reduction for the underlying infrastructure. Other use cases discussed include building hybrid database solutions with Oracle and Hadoop, offering the combined advantages of a mature relational database system with a scalable analytics engine.
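
    The following is a hedged sketch of one way such offloading can be done with Spark's JDBC reader, writing a partitioned Parquet copy for offline reporting; the connection URL, credentials, table, and partition column are placeholders, an Oracle JDBC driver is assumed to be on the Spark classpath, and the record does not specify the exact offloading tooling used.

```python
from pyspark.sql import SparkSession

# Sketch: copy an Oracle table into a Hadoop-side Parquet dataset so that
# reports can run without touching the production database. All connection
# details, table names, and the partition column are hypothetical.
spark = SparkSession.builder.appName("oracle-offload-sketch").getOrCreate()

controls_df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:oracle:thin:@//oracle.example.org:1521/CONTROLS")
    .option("dbtable", "LOGGING.MEASUREMENTS")
    .option("user", "reporting_user")
    .option("password", "***")
    .option("fetchsize", 10000)   # stream rows instead of loading them all at once
    .load()
)

# Write an offline, partitioned copy for the analytics/reporting side.
(controls_df
    .write.mode("overwrite")
    .partitionBy("MEAS_DAY")      # hypothetical partition column
    .parquet("hdfs:///offload/logging/measurements"))
```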

  16. Implementation of a consolidated, standardized database of environmental measurements data

    International Nuclear Information System (INIS)

    James, T.L.

    1996-10-01

    This report discusses the benefits of a consolidated and standardized database; reasons for resistance to the consolidation of data; implementing a consolidated database, including attempts at standardization, deciding what to include in the consolidated database, establishing lists of valid values, and addressing quality assurance/quality control (QA/QC) issues; and the evolution of a consolidated database, which includes developing and training a user community, resolving configuration control issues, incorporating historical data, identifying emerging standards, and developing pointers to other data. OREIS is used to illustrate these topics
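
    To make the valid-values and QA/QC idea concrete, here is a small illustrative check of incoming records against valid-value lists; the field names and allowed values are invented and are not taken from OREIS.

```python
# Sketch of the kind of valid-value checking a consolidated environmental
# database needs before accepting records. Field names and value lists are
# invented for illustration.
VALID_VALUES = {
    "matrix":  {"SOIL", "GROUNDWATER", "SURFACE WATER", "SEDIMENT"},
    "units":   {"mg/kg", "ug/L", "pCi/g"},
    "qa_flag": {"", "U", "J", "R"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of QA/QC problems; an empty list means the record is acceptable."""
    problems = []
    for field, allowed in VALID_VALUES.items():
        if record.get(field) not in allowed:
            problems.append(f"{field}={record.get(field)!r} is not in the valid-value list")
    if record.get("result") is not None and record["result"] < 0:
        problems.append("negative analytical result")
    return problems

sample = {"matrix": "SOIL", "units": "mg/kg", "qa_flag": "J", "result": 4.2}
print(validate_record(sample))   # [] -> passes the checks
```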

  17. The Steward Observatory asteroid relational database

    Science.gov (United States)

    Sykes, Mark V.; Alvarezdelcastillo, Elizabeth M.

    1991-01-01

    The Steward Observatory Asteroid Relational Database (SOARD) was created as a flexible tool for undertaking studies of asteroid populations and sub-populations, to probe the biases intrinsic to asteroid databases, to ascertain the completeness of data pertaining to specific problems, to aid in the development of observational programs, and to develop pedagogical materials. To date, SOARD has compiled an extensive list of data available on asteroids and made it accessible through a single menu-driven database program. Users may obtain tailored lists of asteroid properties for any subset of asteroids or output files which are suitable for plotting spectral data on individual asteroids. The program has online help as well as user and programmer documentation manuals. The SOARD already has provided data to fulfill requests by members of the astronomical community. The SOARD continues to grow as data is added to the database and new features are added to the program.

  18. Configuration management program plan for Hanford site systems engineering

    International Nuclear Information System (INIS)

    Hoffman, A.G.

    1994-01-01

    This plan establishes the integrated configuration management program for the evolving technical baseline developed through the systems engineering process. This configuration management program aligns with the criteria identified in the DOE Standard, DOE-STD-1073-93. Included are specific requirements for control of the systems engineering RDD-100 database, and electronic data incorporated in the database that establishes the Hanford site technical baseline

  19. SSC lattice database and graphical interface

    International Nuclear Information System (INIS)

    Trahern, C.G.; Zhou, J.

    1991-11-01

    When completed, the Superconducting Super Collider will be the world's largest accelerator complex. In order to build this system on schedule, the use of database technologies will be essential. In this paper we discuss one of the database efforts underway at the SSC, the lattice database. The SSC lattice database provides a centralized source for the design of each major component of the accelerator complex. This includes the two collider rings, the High Energy Booster, Medium Energy Booster, Low Energy Booster, and the LINAC, as well as transfer and test beam lines. These designs have been created using a menagerie of programs such as SYNCH, DIMAD, MAD, TRANSPORT, MAGIC, TRACE3D and TEAPOT. However, once a design has been completed, it is entered into a uniform database schema in the database system. In this paper we discuss the reasons for creating the lattice database and its implementation via the commercial database system SYBASE. Each lattice in the lattice database is composed of a set of tables whose data structure can describe any of the SSC accelerator lattices. In order to allow the user community access to the databases, a programmatic interface known as dbsf (for database to several formats) has been written. Dbsf creates ascii input files appropriate to the above mentioned accelerator design programs. In addition it has a binary dataset output using the Self Describing Standard data discipline provided with the Integrated Scientific Tool Kit software tools. Finally we discuss the graphical interfaces to the lattice database. The primary interface, known as OZ, is a simulation environment as well as a database browser

  20. Metabolic pathways for the whole community.

    Science.gov (United States)

    Hanson, Niels W; Konwar, Kishori M; Hawley, Alyse K; Altman, Tomer; Karp, Peter D; Hallam, Steven J

    2014-07-22

    A convergence of high-throughput sequencing and computational power is transforming biology into information science. Despite these technological advances, converting bits and bytes of sequence information into meaningful insights remains a challenging enterprise. Biological systems operate on multiple hierarchical levels from genomes to biomes. Holistic understanding of biological systems requires agile software tools that permit comparative analyses across multiple information levels (DNA, RNA, protein, and metabolites) to identify emergent properties, diagnose system states, or predict responses to environmental change. Here we adopt the MetaPathways annotation and analysis pipeline and Pathway Tools to construct environmental pathway/genome databases (ePGDBs) that describe microbial community metabolism using MetaCyc, a highly curated database of metabolic pathways and components covering all domains of life. We evaluate Pathway Tools' performance on three datasets with different complexity and coding potential, including simulated metagenomes, a symbiotic system, and the Hawaii Ocean Time-series. We define accuracy and sensitivity relationships between read length, coverage and pathway recovery and evaluate the impact of taxonomic pruning on ePGDB construction and interpretation. Resulting ePGDBs provide interactive metabolic maps, predict emergent metabolic pathways associated with biosynthesis and energy production and differentiate between genomic potential and phenotypic expression across defined environmental gradients. This multi-tiered analysis provides the user community with specific operating guidelines, performance metrics and prediction hazards for more reliable ePGDB construction and interpretation. Moreover, it demonstrates the power of Pathway Tools in predicting metabolic interactions in natural and engineered ecosystems.

  1. Renal Gene Expression Database (RGED): a relational database of gene expression profiles in kidney disease

    Science.gov (United States)

    Zhang, Qingzhou; Yang, Bo; Chen, Xujiao; Xu, Jing; Mei, Changlin; Mao, Zhiguo

    2014-01-01

    We present a bioinformatics database named Renal Gene Expression Database (RGED), which contains comprehensive gene expression data sets from renal disease research. The web-based interface of RGED allows users to query the gene expression profiles in various kidney-related samples, including renal cell lines, human kidney tissues and murine model kidneys. Researchers can explore profiles of genes of interest and the relationships between them, and identify biomarkers or even drug targets in kidney diseases. The aim of this work is to provide a user-friendly utility for the renal disease research community to query expression profiles of genes of their own interest without the requirement of advanced computational skills. Availability and implementation: The website is implemented in PHP, R, MySQL and Nginx and freely available from http://rged.wall-eva.net. Database URL: http://rged.wall-eva.net PMID:25252782

  2. Visibiome: an efficient microbiome search engine based on a scalable, distributed architecture.

    Science.gov (United States)

    Azman, Syafiq Kamarul; Anwar, Muhammad Zohaib; Henschel, Andreas

    2017-07-24

    Given the current influx of 16S rRNA profiles of microbiota samples, it is conceivable that large amounts of them eventually become available for search, comparison and contextualization with respect to novel samples. This process facilitates the identification of similar compositional features in microbiota elsewhere and therefore can help to understand the driving factors for microbial community assembly. We present Visibiome, a microbiome search engine that can perform exhaustive, phylogeny-based similarity search and contextualization of user-provided samples against a comprehensive dataset of 16S rRNA profiles from a wide range of environments, while tackling several computational challenges. In order to scale to high demands, we developed a distributed system that combines web framework technology, task queueing and scheduling, cloud computing and a dedicated database server. To further ensure speed and efficiency, we have deployed Nearest Neighbor search algorithms, capable of sublinear searches in high-dimensional metric spaces, in combination with an optimized Earth Mover Distance based implementation of weighted UniFrac. The search also incorporates pairwise (adaptive) rarefaction and, optionally, 16S rRNA copy number correction. The result of a query microbiome sample is its contextualization against a comprehensive database of microbiome samples from a diverse range of environments, visualized through a rich set of interactive figures and diagrams, including barchart-based compositional comparisons and a ranking of the closest matches in the database. Visibiome is a convenient, scalable and efficient framework to search microbiomes against a comprehensive database of environmental samples. The search engine leverages a popular but computationally expensive, phylogeny-based distance metric, while providing numerous advantages over the current state of the art tool.
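
    As a toy illustration of distance-based ranking of microbiome samples, the sketch below ranks reference profiles by a one-dimensional Earth Mover Distance between abundance profiles; this stands in for the phylogeny-aware weighted UniFrac the engine actually uses, and all sample names, positions, and abundances are invented.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Toy sketch: rank reference microbiome samples against a query by the Earth
# Mover Distance between relative-abundance profiles placed on a 1-D taxon
# ordering. Positions, abundances, and sample names are invented.
taxon_positions = np.arange(5, dtype=float)   # 1-D stand-in for phylogenetic placement

reference_samples = {
    "soil_A":  np.array([0.50, 0.20, 0.10, 0.10, 0.10]),
    "gut_B":   np.array([0.05, 0.05, 0.30, 0.40, 0.20]),
    "ocean_C": np.array([0.30, 0.30, 0.20, 0.10, 0.10]),
}
query = np.array([0.45, 0.25, 0.10, 0.10, 0.10])

ranked = sorted(
    reference_samples.items(),
    key=lambda kv: wasserstein_distance(
        taxon_positions, taxon_positions, query, kv[1]),
)
for name, _profile in ranked:
    print(name)   # closest environments first
```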

  3. Agency, Social and Healthcare Supports for Adults with Intellectual Disability at the End of Life in Out-of-Home, Non-Institutional Community Residences in Western Nations: A Literature Review

    Science.gov (United States)

    Moro, Teresa T.; Savage, Teresa A.; Gehlert, Sarah

    2017-01-01

    Background: The nature and quality of end-of-life care received by adults with intellectual disabilities in out-of-home, non-institutional community agency residences in Western nations is not well understood. Method: A range of databases and search engines were used to locate conceptual, clinical and research articles from relevant peer-reviewed…

  4. Impact of Commercial Search Engines and International Databases on Engineering Teaching and Research

    Science.gov (United States)

    Chanson, Hubert

    2007-01-01

    For the last three decades, the engineering higher education and professional environments have been completely transformed by the "electronic/digital information revolution" that has included the introduction of personal computer, the development of email and world wide web, and broadband Internet connections at home. Herein the writer compares…

  5. Collaborative and Multilingual Approach to Learn Database Topics Using Concept Maps

    Science.gov (United States)

    Calvo, Iñaki

    2014-01-01

    Authors report on a study using the concept mapping technique in computer engineering education for learning theoretical introductory database topics. In addition, the learning of multilingual technical terminology by means of the collaborative drawing of a concept map is also pursued in this experiment. The main characteristics of a study carried out in the database subject at the University of the Basque Country during the 2011/2012 course are described. This study contributes to the field of concept mapping as these kinds of cognitive tools have proved to be valid to support learning in computer engineering education. It contributes to the field of computer engineering education, providing a technique that can be incorporated with several educational purposes within the discipline. Results reveal the potential that a collaborative concept map editor offers to fulfil the above mentioned objectives. PMID:25538957

  6. Design and Implementation of a Virtual Calculation Centre (VCC) for Engineering Students

    Directory of Open Access Journals (Sweden)

    Alaeddine Mokri

    2010-02-01

    Full Text Available Most academic institutions all over the world provide their attendees with databases where courses and other materials can be uploaded, downloaded and checked by faculty and students. Those materials are mostly PDF files, MS Word files, MS PowerPoint presentations or downloadable computer programs. Even though those databases are very beneficial, they need to be improved to meet students' needs, especially in engineering faculties where students may need thermo-physical properties of some substances, charts, diagrams, conversion factors and so forth. In addition, many students find it cumbersome to download and install a computer program that they do not need often in their studies. As an attempt to satisfy the academic community's needs in the Faculty of Engineering at Abou Bekr Belkaid University (Tlemcen, Algeria), we devised Web technologies and techniques to design an interactive virtual space wherein many engineering-related Web applications are accessible on-line. Students and professors can access on-line the properties of many substances, convert physical quantities between a variety of units, use computer programs on-line without installing them, generate tables and charts, and work with diagrams on-line by means of the mouse. The set of those applications is called a Virtual Calculation Center. This paper goes through the different services that could be implemented in a Virtual Calculation Center, and describes the techniques and technologies used to build those applications.

  7. Community-Supported Data Repositories in Paleobiology: A 'Middle Tail' Between the Geoscientific and Informatics Communities

    Science.gov (United States)

    Williams, J. W.; Ashworth, A. C.; Betancourt, J. L.; Bills, B.; Blois, J.; Booth, R.; Buckland, P.; Charles, D.; Curry, B. B.; Goring, S. J.; Davis, E.; Grimm, E. C.; Graham, R. W.; Smith, A. J.

    2015-12-01

    Community-supported data repositories (CSDRs) in paleoecology and paleoclimatology have a decades-long tradition and serve multiple critical scientific needs. CSDRs facilitate synthetic large-scale scientific research by providing open-access and curated data that employ community-supported metadata and data standards. CSDRs serve as a 'middle tail' or boundary organization between information scientists and the long-tail community of individual geoscientists collecting and analyzing paleoecological data. Over the past decades, a distributed network of CSDRs has emerged, each serving a particular suite of data and research communities, e.g. Neotoma Paleoecology Database, Paleobiology Database, International Tree Ring Database, NOAA NCEI for Paleoclimatology, Morphobank, iDigPaleo, and Integrated Earth Data Alliance. Recently, these groups have organized into a common Paleobiology Data Consortium dedicated to improving interoperability and sharing best practices and protocols. The Neotoma Paleoecology Database offers one example of an active and growing CSDR, designed to facilitate research into ecological and evolutionary dynamics during recent past global change. Neotoma combines a centralized database structure with distributed scientific governance via multiple virtual constituent data working groups. The Neotoma data model is flexible and can accommodate a variety of paleoecological proxies from many depositional contexts. Data input into Neotoma is done by trained Data Stewards, drawn from their communities. Neotoma data can be searched, viewed, and returned to users through multiple interfaces, including the interactive Neotoma Explorer map interface, REST-ful Application Programming Interfaces (APIs), the neotoma R package, and the Tilia stratigraphic software. Neotoma is governed by geoscientists and provides community engagement through training workshops for data contributors, stewards, and users. Neotoma is engaged in the Paleobiological Data Consortium

  8. Microorganisms in heavy metal bioremediation: strategies for applying microbial-community engineering to remediate soils

    Directory of Open Access Journals (Sweden)

    Jennifer L. Wood

    2016-06-01

    Full Text Available The remediation of heavy-metal-contaminated soils is essential as heavy metals persist and do not degrade in the environment. Remediating heavy-metal-contaminated soils requires metals to be mobilized for extraction whilst, at the same time, employing strategies to avoid mobilized metals leaching into ground-water or aquatic systems. Phytoextraction is a bioremediation strategy that extracts heavy metals from soils by sequestration in plant tissues and is currently the predominant bioremediation strategy investigated for remediating heavy-metal-contaminated soils. Although the efficiency of phytoextraction remains a limiting feature of the technology, there are numerous reports that soil microorganisms can improve rates of heavy metal extraction. This review highlights the unique challenges faced when remediating heavy-metal-contaminated soils as compared to static aquatic systems and suggests new strategies for using microorganisms to improve phytoextraction. We compare how microorganisms are used in soil bioremediation (i.e., phytoextraction) and water bioremediation processes, discussing how the engineering of microbial communities, used in water remediation, could be applied to phytoextraction. We briefly outline possible approaches for the engineering of soil communities to improve phytoextraction, either by mobilizing metals in the rhizosphere of the plant or by promoting plant growth to increase the root-surface area available for uptake of heavy metals. We highlight the technological advances that make this research direction possible and how these technologies could be employed in future research.

  9. Software engineers and nuclear engineers: teaming up to do testing

    International Nuclear Information System (INIS)

    Kelly, D.; Cote, N.; Shepard, T.

    2007-01-01

    The software engineering community has traditionally paid little attention to the specific needs of engineers and scientists who develop their own software. Recently there has been increased recognition that specific software engineering techniques need to be found for this group of developers. In this case study, a software engineering group teamed with a nuclear engineering group to develop a software testing strategy. This work examines the types of testing that proved to be useful and examines what each discipline brings to the table to improve the quality of the software product. (author)

  10. BioWarehouse: a bioinformatics database warehouse toolkit.

    Science.gov (United States)

    Lee, Thomas J; Pouliot, Yannick; Wagner, Valerie; Gupta, Priyanka; Stringer-Calvert, David W J; Tenenbaum, Jessica D; Karp, Peter D

    2006-03-23

    This article addresses the problem of interoperation of heterogeneous bioinformatics databases. We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. BioWarehouse embodies significant progress on the database integration problem for bioinformatics.
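
    The enzyme-activity example in this record suggests the style of warehouse query involved; below is a hedged sketch of such a query against invented tables (SQLite is used as a stand-in for the MySQL/Oracle back ends), and the schema shown is not the real BioWarehouse schema.

```python
import sqlite3  # stand-in for the MySQL/Oracle back ends targeted by the toolkit

# Hedged sketch of a multi-database style SQL query: find enzyme activities
# (EC numbers) with no associated sequence. Table and column names are invented.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE enzyme_activity (ec_number TEXT PRIMARY KEY, name TEXT);
    CREATE TABLE protein_sequence (seq_id INTEGER PRIMARY KEY, ec_number TEXT);
    INSERT INTO enzyme_activity VALUES ('1.1.1.1', 'alcohol dehydrogenase');
    INSERT INTO enzyme_activity VALUES ('4.2.1.20', 'tryptophan synthase');
    INSERT INTO protein_sequence VALUES (1, '1.1.1.1');
""")

orphans = db.execute("""
    SELECT ea.ec_number, ea.name
    FROM enzyme_activity ea
    LEFT JOIN protein_sequence ps ON ps.ec_number = ea.ec_number
    WHERE ps.seq_id IS NULL
""").fetchall()
print(orphans)   # EC numbers with no sequence in the warehouse
```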

  11. SINBAD: Shielding integral benchmark archive and database

    International Nuclear Information System (INIS)

    Hunter, H.T.; Ingersoll, D.T.; Roussin, R.W.

    1996-01-01

    SINBAD is a new electronic database developed to store a variety of radiation shielding benchmark data so that users can easily retrieve and incorporate the data into their calculations. SINBAD is an excellent data source for users who require the quality assurance necessary in developing cross-section libraries or radiation transport codes. The future needs of the scientific community are best served by the electronic database format of SINBAD and its user-friendly interface, combined with its data accuracy and integrity

  12. Governing Engineering

    DEFF Research Database (Denmark)

    Buch, Anders

    2012-01-01

    Most people agree that our world faces daunting problems and, correctly or not, technological solutions are seen as an integral part of an overall solution. But what exactly are the problems and how does the engineering ‘mind set’ frame these problems? This chapter sets out to unravel dominant perspectives in challenge perception in engineering in the US and Denmark. Challenge perception and response strategies are closely linked through discursive practices. Challenge perceptions within the engineering community and the surrounding society are thus critical for the shaping of engineering education and the engineering profession. Through an analysis of influential reports and position papers on engineering and engineering education the chapter sets out to identify how engineering is problematized and eventually governed. Drawing on insights from governmentality studies the chapter strives to elicit the bodies

  13. Governing Engineering

    DEFF Research Database (Denmark)

    Buch, Anders

    2011-01-01

    Abstract: Most people agree that our world faces daunting problems and, correctly or not, technological solutions are seen as an integral part of an overall solution. But what exactly are the problems and how does the engineering ‘mind set’ frame these problems? This chapter sets out to unravel dominant perspectives in challenge perception in engineering in the US and Denmark. Challenge perception and response strategies are closely linked through discursive practices. Challenge perceptions within the engineering community and the surrounding society are thus critical for the shaping of engineering education and the engineering profession. Through an analysis of influential reports and position papers on engineering and engineering education the chapter sets out to identify how engineering is problematized and eventually governed. Drawing on insights from governmentality studies the chapter

  14. Nuclear plant operations, maintenance, and configuration management using three-dimensional computer graphics and databases

    International Nuclear Information System (INIS)

    Tutos, N.C.; Reinschmidt, K.F.

    1987-01-01

    Stone and Webster Engineering Corporation has developed the Plant Digital Model concept as a new approach to Configuration Management of nuclear power plants. The Plant Digital Model development is a step-by-step process, based on existing manual procedures and computer applications, and is fully controllable by the plant managers and engineers. The Plant Digital Model is based on IBM computer graphics and relational database management systems, and therefore can be easily integrated with existing plant databases and corporate management-information systems

  15. A Parallel Relational Database Management System Approach to Relevance Feedback in Information Retrieval.

    Science.gov (United States)

    Lundquist, Carol; Frieder, Ophir; Holmes, David O.; Grossman, David

    1999-01-01

    Describes a scalable, parallel, relational database-driven information retrieval engine. To support portability across a wide range of execution environments, all algorithms adhere to the SQL-92 standard. By incorporating relevance feedback algorithms, accuracy is enhanced over prior database-driven information retrieval efforts. Presents…
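
    A minimal sketch of relational-database-driven retrieval in this spirit: documents and term postings stored as tables, with ranking done by a plain SQL aggregation. The schema and the raw term-frequency scoring are simplifications chosen for illustration, not the paper's relevance-feedback algorithms.

```python
import sqlite3

# Documents and term postings live in relations; ranking is a SQL aggregation.
# Raw term frequency is used as the score purely for brevity.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE doc     (doc_id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE posting (doc_id INTEGER, term TEXT, tf INTEGER);
    INSERT INTO doc VALUES (1, 'relational databases'), (2, 'parallel retrieval');
    INSERT INTO posting VALUES (1, 'database', 3), (1, 'parallel', 1),
                               (2, 'parallel', 2), (2, 'retrieval', 2);
""")

query_terms = ["parallel", "database"]
placeholders = ",".join("?" for _ in query_terms)
ranked = db.execute(f"""
    SELECT d.doc_id, d.title, SUM(p.tf) AS score
    FROM posting p JOIN doc d ON d.doc_id = p.doc_id
    WHERE p.term IN ({placeholders})
    GROUP BY d.doc_id, d.title
    ORDER BY score DESC
""", query_terms).fetchall()
print(ranked)   # documents ordered by descending score
```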

  16. Towards a Component Based Model for Database Systems

    Directory of Open Access Journals (Sweden)

    Octavian Paul ROTARU

    2004-02-01

    Full Text Available Due to their effectiveness in the design and development of software applications and due to their recognized advantages in terms of reusability, Component-Based Software Engineering (CBSE) concepts have been arousing a great deal of interest in recent years. This paper presents and extends a component-based approach to object-oriented database systems (OODB) introduced by us in [1] and [2]. Components are proposed as a new abstraction level for database systems, logical partitions of the schema. In this context, the scope is introduced as an escalated property for transactions. Components are studied from the integrity, consistency, and concurrency control perspective. The main benefits of our proposed component model for OODB are the reusability of the database design, including the access statistics required for a proper query optimization, and a smooth information exchange. The integration of crosscutting concerns into the component database model using aspect-oriented techniques is also discussed. One of the main goals is to define a method for the assessment of component composition capabilities. These capabilities are restricted by the component’s interface and measured in terms of adaptability, degree of composability and acceptability level. The above-mentioned metrics are extended from database components to generic software components. This paper extends and consolidates into one common view the ideas previously presented by us in [1, 2, 3]. [1] Octavian Paul Rotaru, Marian Dobre, Component Aspects in Object Oriented Databases, Proceedings of the International Conference on Software Engineering Research and Practice (SERP’04), Volume II, ISBN 1-932415-29-7, pages 719-725, Las Vegas, NV, USA, June 2004. [2] Octavian Paul Rotaru, Marian Dobre, Mircea Petrescu, Integrity and Consistency Aspects in Component-Oriented Databases, Proceedings of the International Symposium on Innovation in Information and Communication Technology (ISIICT

  17. The standardization of data relational mode in the materials database for nuclear power engineering

    International Nuclear Information System (INIS)

    Wang Xinxuan

    1996-01-01

    A relational database needs standardized data relationships. These relationships include hierarchical structures and repeating record sets. A code database is created, and relational links are established between spare parts, materials, and the properties of the materials. Data relationships which are not standard are eliminated, and all the relation modes are made to meet the requirements of 3NF (Third Normal Form)
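
    As an illustration of the normalization goal described above, here is a sketch of a 3NF-style layout for spare parts, materials, and material properties; the table names, columns, and sample rows are hypothetical and do not reproduce the engineering database in question.

```python
import sqlite3

# Normalized layout: spare parts, materials, and material properties in
# separate relations joined by keys, so every non-key attribute depends only
# on its own key (3NF). All names and rows are illustrative.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE material (
        material_code TEXT PRIMARY KEY,
        material_name TEXT);
    CREATE TABLE material_property (          -- repeating property sets split out
        material_code TEXT REFERENCES material(material_code),
        property_name TEXT,
        value         REAL,
        unit          TEXT,
        PRIMARY KEY (material_code, property_name));
    CREATE TABLE spare_part (
        part_code     TEXT PRIMARY KEY,
        part_name     TEXT,
        material_code TEXT REFERENCES material(material_code));
""")
db.execute("INSERT INTO material VALUES ('SA-508', 'low-alloy steel')")
db.execute("INSERT INTO material_property VALUES ('SA-508', 'yield strength', 345, 'MPa')")
db.execute("INSERT INTO spare_part VALUES ('P-001', 'closure stud', 'SA-508')")

# Properties are reached through the key chain part -> material -> property.
print(db.execute("""
    SELECT sp.part_name, mp.property_name, mp.value, mp.unit
    FROM spare_part sp
    JOIN material_property mp ON mp.material_code = sp.material_code
""").fetchall())
```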

  18. Capstone Engineering Design Projects for Community Colleges

    Science.gov (United States)

    Walz, Kenneth A.; Christian, Jon R.

    2017-01-01

    Capstone engineering design courses have been a feature at research universities and four-year schools for many years. Although such classes are less common at two-year colleges, the experience is equally beneficial for this population of students. With this in mind, Madison College introduced a project-based Engineering Design course in 2007.…

  19. The gerontechnology engineer

    NARCIS (Netherlands)

    Bronswijk, van J.E.M.H.; Brink, M.; Vlies, van der R.D.

    2011-01-01

    Pushing supportive technology to serve an aging society originated from the social sciences. Only about 20 years ago did engineers discover the field and formulate it as gerontechnology. The question arises whether engineers and social scientists have succeeded in forming a community of practice with

  20. Trends in Environmental Health Engineering

    Science.gov (United States)

    Rowe, D. R.

    1972-01-01

    Reviews the trends in environmental health engineering and describes programs in environmental engineering technology and the associated environmental engineering courses at Western Kentucky University (four-year program), Wytheville Community College (two-year program), and Rensselaer Polytechnic Institute (four-year program). (PR)

  1. In-depth analysis of protein inference algorithms using multiple search engines and well-defined metrics.

    Science.gov (United States)

    Audain, Enrique; Uszkoreit, Julian; Sachsenberg, Timo; Pfeuffer, Julianus; Liang, Xiao; Hermjakob, Henning; Sanchez, Aniel; Eisenacher, Martin; Reinert, Knut; Tabb, David L; Kohlbacher, Oliver; Perez-Riverol, Yasset

    2017-01-06

    In mass spectrometry-based shotgun proteomics, protein identifications are usually the desired result. However, most of the analytical methods are based on the identification of reliable peptides and not the direct identification of intact proteins. Thus, assembling peptides identified from tandem mass spectra into a list of proteins, referred to as protein inference, is a critical step in proteomics research. Currently, different protein inference algorithms and tools are available for the proteomics community. Here, we evaluated five software tools for protein inference (PIA, ProteinProphet, Fido, ProteinLP, MSBayesPro) using three popular database search engines: Mascot, X!Tandem, and MS-GF+. All the algorithms were evaluated using a highly customizable KNIME workflow using four different public datasets with varying complexities (different sample preparation, species and analytical instruments). We defined a set of quality control metrics to evaluate the performance of each combination of search engines, protein inference algorithm, and parameters on each dataset. We show that the results for complex samples vary not only regarding the actual numbers of reported protein groups but also concerning the actual composition of groups. Furthermore, the robustness of reported proteins when using databases of differing complexities is strongly dependent on the applied inference algorithm. Finally, merging the identifications of multiple search engines does not necessarily increase the number of reported proteins, but does increase the number of peptides per protein and thus can generally be recommended. Protein inference is one of the major challenges in MS-based proteomics nowadays. Currently, there are a vast number of protein inference algorithms and implementations available for the proteomics community. Protein assembly impacts the final results of the research, the quantitation values and the final claims in the research manuscript. Even though protein
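
    To illustrate two of the steps discussed (merging identifications from several engines and assembling peptides into proteins), here is a toy sketch with invented peptides and protein accessions; real inference tools use probabilistic or parsimony-based methods rather than this naive grouping.

```python
from collections import defaultdict

# Toy data: each engine reports peptides and the proteins they map to.
# Peptide sequences and protein accessions are invented.
engine_results = {
    "Mascot":   {"PEPTIDEK": ["P1"], "SEQWENCER": ["P2", "P3"]},
    "X!Tandem": {"PEPTIDEK": ["P1"], "ANOTHERPEPK": ["P3"]},
    "MS-GF+":   {"SEQWENCER": ["P2", "P3"]},
}

# Merge: keep every peptide seen by any engine, remembering which engines saw it.
merged = defaultdict(set)
for engine, peptides in engine_results.items():
    for peptide in peptides:
        merged[peptide].add(engine)

# Naive assembly: a protein is reported with all peptides that map to it.
protein_to_peptides = defaultdict(set)
for engine, peptides in engine_results.items():
    for peptide, proteins in peptides.items():
        for protein in proteins:
            protein_to_peptides[protein].add(peptide)

for protein, peps in sorted(protein_to_peptides.items()):
    engines = set.union(*(merged[p] for p in peps))
    print(protein, sorted(peps), "evidence from", sorted(engines))
```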

  2. BioWarehouse: a bioinformatics database warehouse toolkit

    Directory of Open Access Journals (Sweden)

    Stringer-Calvert David WJ

    2006-03-01

    Full Text Available Abstract Background This article addresses the problem of interoperation of heterogeneous bioinformatics databases. Results We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. Conclusion BioWarehouse embodies significant progress on the

  3. Development of a voice database to aid children with hearing impairments

    International Nuclear Information System (INIS)

    Kuzman, M G; Agüero, P D; Tulli, J C; Gonzalez, E L; Cervellini, M P; Uriz, A J

    2011-01-01

    In the development of software for voice analysis or training for people with hearing impairments, a database of sounds of properly pronounced words is of paramount importance. This paper shows the advantages of building a native voice database, rather than using databases from other countries (even those sharing the same language), when developing speech training software for people with hearing impairments. This database will be used by software developers at the School of Engineering of Mar del Plata National University.

  4. Retrieving high-resolution images over the Internet from an anatomical image database

    Science.gov (United States)

    Strupp-Adams, Annette; Henderson, Earl

    1999-12-01

    The Visible Human Data set is an important contribution to the national collection of anatomical images. To enhance the availability of these images, the National Library of Medicine has supported the design and development of a prototype object-oriented image database which imports, stores, and distributes high-resolution anatomical images in both pixel and voxel formats. One of the key database modules is its client-server Internet interface. This Web interface provides a query engine with retrieval access to high-resolution anatomical images that range in size from 100 KB for browser-viewable rendered images to 1 GB for anatomical structures in voxel file formats. The Web query and retrieval client-server system is composed of applet GUIs, servlets, and RMI application modules which communicate with each other to allow users to query for specific anatomical structures and retrieve image data as well as associated anatomical images from the database. Selected images can be downloaded individually as single files via HTTP or downloaded in batch mode over the Internet to the user's machine through an applet that uses Netscape's Object Signing mechanism. The image database uses ObjectDesign's object-oriented DBMS, ObjectStore, which has a Java interface. The query and retrieval system has been tested with a Java-CDE window system and on the x86 architecture using Windows NT 4.0. This paper describes the Java applet client search engine that queries the database; the Java client module that enables users to view anatomical images online; and the Java application server interface to the database, which organizes data returned to the user, and its distribution engine, which allows users to download image files individually and/or in batch mode.

  5. SISMA (Site of Italian Strong Motion Accelerograms): a Web-Database of Ground Motion Recordings for Engineering Applications

    International Nuclear Information System (INIS)

    Scasserra, Giuseppe; Lanzo, Giuseppe; D'Elia, Beniamino; Stewart, Jonathan P.

    2008-01-01

    The paper describes a new website called SISMA, i.e. Site of Italian Strong Motion Accelerograms, which is an Internet portal intended to provide natural records for use in engineering applications for dynamic analyses of structural and geotechnical systems. SISMA contains 247 three-component corrected motions recorded at 101 stations from 89 earthquakes that occurred in Italy in the period 1972-2002. The database of strong motion accelerograms was developed in the framework of a joint project between Sapienza University of Rome and the University of California at Los Angeles (USA) and is described elsewhere. Acceleration histories and pseudo-acceleration response spectra (5% damping) are available for download from the website. Recordings can be located using simple search parameters related to the seismic source and the recording station (e.g., magnitude, Vs30, etc.) as well as ground motion characteristics (e.g., peak ground acceleration, peak ground velocity, peak ground displacement, Arias intensity, etc.).

  6. Quality assurance database for the CBM silicon tracking system

    Energy Technology Data Exchange (ETDEWEB)

    Lymanets, Anton [Physikalisches Institut, Universitaet Tuebingen (Germany); Collaboration: CBM-Collaboration

    2015-07-01

    The Silicon Tracking System is the main tracking device of the CBM Experiment at FAIR. Its construction includes the production, quality assurance and assembly of a large number of components, e.g., 106 carbon fiber support structures, 1300 silicon microstrip sensors, 16.6k readout chips, analog microcables, etc. Detector construction is distributed over several production and assembly sites and calls for a database that is extensible and allows tracing the components, integrating the test data, and monitoring the component statuses and the data flow. A possible implementation of the above-mentioned requirements is being developed at GSI (Darmstadt) based on the FAIR DB Virtual Database Library, which provides connectivity to common SQL database engines (PostgreSQL, Oracle, etc.). The data structure, database architecture and status of the implementation are discussed.

  7. The GEM Global Active Faults Database: The growth and synthesis of a worldwide database of active structures for PSHA, research, and education

    Science.gov (United States)

    Styron, R. H.; Garcia, J.; Pagani, M.

    2017-12-01

    A global catalog of active faults is a resource of value to a wide swath of the geoscience, earthquake engineering, and hazards risk communities. Though construction of such a dataset has been attempted now and again through the past few decades, success has been elusive. The Global Earthquake Model (GEM) Foundation has been working on this problem as a fundamental step in its goal of making a global seismic hazard model. Progress on the assembly of the database is rapid, with the concatenation of many national-, orogen-, and continental-scale datasets produced by different research groups throughout the years. However, substantial data gaps exist throughout much of the deforming world, requiring new mapping based on existing publications as well as consideration of seismicity, geodesy and remote sensing data. Thus far, new fault datasets have been created for the Caribbean and Central America, North Africa, and northeastern Asia, with Madagascar, Canada and a few other regions in the queue. The second major task, as formidable as the initial data concatenation, is the 'harmonization' of the data. This entails the removal or recombination of duplicated structures, reconciliation of contrasting interpretations in areas of overlap, and the synthesis of many different types of attributes or metadata into a consistent whole. In a project of this scale, the methods used in the database construction are as critical to project success as the data themselves. After some experimentation, we have settled on an iterative methodology that involves rapid accumulation of data followed by successive episodes of data revision, and a computer-scripted data assembly using GIS file formats that is flexible, reproducible, and as able as possible to cope with updates to the constituent datasets. We find that this approach of initially maximizing coverage and then increasing resolution is the most robust to regional data problems and the most amenable to continued updates and
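    The scripted, reproducible assembly described above concatenates regional fault datasets in GIS file formats and then removes duplicated structures. The sketch below illustrates that idea on GeoJSON-like feature dictionaries; the attribute names and the crude duplicate test are assumptions for illustration, not GEM's actual harmonization rules or file formats.

        # Illustrative concatenation + crude de-duplication of fault datasets.
        def merge_fault_datasets(datasets, tolerance_deg=0.01):
            merged = []
            for features in datasets:
                for feat in features:
                    if not any(_looks_duplicate(feat, kept, tolerance_deg) for kept in merged):
                        merged.append(feat)
            return merged

        def _looks_duplicate(a, b, tol):
            # Treat two traces as duplicates if their first vertices nearly coincide
            # and they carry the same name -- a deliberately crude stand-in for a
            # real geometric comparison.
            (ax, ay), (bx, by) = a["geometry"]["coordinates"][0], b["geometry"]["coordinates"][0]
            close = abs(ax - bx) < tol and abs(ay - by) < tol
            return close and a["properties"].get("name") == b["properties"].get("name")

        national = [{"geometry": {"type": "LineString", "coordinates": [[44.10, 40.20], [44.30, 40.40]]},
                     "properties": {"name": "Fault A", "slip_type": "Reverse"}}]
        regional = [{"geometry": {"type": "LineString", "coordinates": [[44.10, 40.20], [44.35, 40.42]]},
                     "properties": {"name": "Fault A", "slip_type": "Reverse"}}]
        print(len(merge_fault_datasets([national, regional])))  # -> 1 (duplicate dropped)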

  8. The Danish Inguinal Hernia Database

    Directory of Open Access Journals (Sweden)

    Friis-Andersen H

    2016-10-01

    Full Text Available Hans Friis-Andersen1,2, Thue Bisgaard2,3 1Surgical Department, Horsens Regional Hospital, Horsens, Denmark; 2Steering Committee, Danish Hernia Database, 3Surgical Gastroenterological Department 235, Copenhagen University Hospital, Hvidovre, Denmark Aim of database: To monitor and improve nation-wide surgical outcome after groin hernia repair based on scientific evidence-based surgical strategies for the national and international surgical community. Study population: Patients ≥18 years operated for groin hernia. Main variables: Type and size of hernia, primary or recurrent, type of surgical repair procedure, mesh and mesh fixation methods. Descriptive data: According to the Danish National Health Act, surgeons are obliged to register all hernia repairs immediately after surgery (3-minute registration time). All institutions have continuous access to their own data stratified on individual surgeons. Registrations are based on a closed, protected Internet system requiring personal codes also identifying the operating institution. A national steering committee consisting of 13 voluntary and dedicated surgeons, 11 of whom are unpaid, handles the medical management of the database. Results: The Danish Inguinal Hernia Database comprises intraoperative data from >130,000 repairs (May 2015). A total of 49 peer-reviewed national and international publications have been published from the database (June 2015). Conclusion: The Danish Inguinal Hernia Database is fully active monitoring surgical quality and contributes to the national and international surgical society to improve outcome after groin hernia repair. Keywords: nation-wide, recurrence, chronic pain, femoral hernia, surgery, quality improvement

  9. Reflections on Software Engineering Education

    NARCIS (Netherlands)

    van Vliet, H.

    2006-01-01

    In recent years, the software engineering community has focused on organizing its existing knowledge and finding opportunities to transform that knowledge into a university curriculum. SWEBOK (the Guide to the Software Engineering Body of Knowledge) and Software Engineering 2004 are two initiatives

  10. Where Is "Community"?: Engineering Education and Sustainable Community Development

    Science.gov (United States)

    Schneider, J.; Leydens, J. A.; Lucena, J.

    2008-01-01

    Sustainable development initiatives are proliferating in the US and Europe as engineering educators seek to provide students with knowledge and skills to design technologies that are environmentally sustainable. Many such initiatives involve students from the "North," or "developed" world building projects for villages or…

  11. The Danish Inguinal Hernia database.

    Science.gov (United States)

    Friis-Andersen, Hans; Bisgaard, Thue

    2016-01-01

    To monitor and improve nation-wide surgical outcome after groin hernia repair based on scientific evidence-based surgical strategies for the national and international surgical community. Patients ≥18 years operated for groin hernia. Type and size of hernia, primary or recurrent, type of surgical repair procedure, mesh and mesh fixation methods. According to the Danish National Health Act, surgeons are obliged to register all hernia repairs immediately after surgery (3-minute registration time). All institutions have continuous access to their own data stratified on individual surgeons. Registrations are based on a closed, protected Internet system requiring personal codes also identifying the operating institution. A national steering committee consisting of 13 voluntary and dedicated surgeons, 11 of whom are unpaid, handles the medical management of the database. The Danish Inguinal Hernia Database comprises intraoperative data from >130,000 repairs (May 2015). A total of 49 peer-reviewed national and international publications have been published from the database (June 2015). The Danish Inguinal Hernia Database is fully active monitoring surgical quality and contributes to the national and international surgical society to improve outcome after groin hernia repair.

  12. DRUMS: a human disease related unique gene mutation search engine.

    Science.gov (United States)

    Li, Zuofeng; Liu, Xingnan; Wen, Jingran; Xu, Ye; Zhao, Xin; Li, Xuan; Liu, Lei; Zhang, Xiaoyan

    2011-10-01

    With the completion of the human genome project and the development of new methods for gene variant detection, the integration of mutation data and their phenotypic consequences has become more important than ever. Among all available resources, locus-specific databases (LSDBs) curate one or more specific genes' mutation data along with high-quality phenotypes. Although some genotype-phenotype data from LSDBs have been integrated into central databases, little effort has been made to integrate all these data through a search engine approach. In this work, we have developed the disease-related unique gene mutation search engine (DRUMS), a convenient tool for biologists and physicians to retrieve gene variants and related phenotype information. Gene variant and phenotype information are stored in a gene-centred relational database. Moreover, the relationships between mutations and diseases are indexed by the uniform resource identifiers from the LSDBs or other central databases. By querying DRUMS, users can access the most popular mutation databases under one interface. DRUMS can be treated as a domain-specific search engine. By using web crawling, indexing, and searching technologies, it provides a competitively efficient interface for searching and retrieving mutation data and their relationships to diseases. The present system is freely accessible at http://www.scbit.org/glif/new/drums/index.html. © 2011 Wiley-Liss, Inc.

  13. Key Techniques for Dynamic Updating of National Fundamental Geographic Information Database

    Directory of Open Access Journals (Sweden)

    WANG Donghua

    2015-07-01

    Full Text Available One of the most important missions of fundamental surveying and mapping work is to keep fundamental geographic information up to date. In this respect, the National Administration of Surveying, Mapping and Geoinformation launched the project of dynamic updating of the national fundamental geographic information database in 2012, which aims to update the 1:50 000, 1:250 000 and 1:1 000 000 national fundamental geographic information databases continuously and quickly, with updates published once a year. This paper introduces the general technical approach to dynamic updating, describes the main technical methods, such as dynamic updating of the fundamental database, linked updating of derived databases, and multi-temporal database management and services, and finally introduces the main technical characteristics and engineering applications.

  14. Automatic sorting of toxicological information into the IUCLID (International Uniform Chemical Information Database) endpoint-categories making use of the semantic search engine Go3R.

    Science.gov (United States)

    Sauer, Ursula G; Wächter, Thomas; Hareng, Lars; Wareing, Britta; Langsch, Angelika; Zschunke, Matthias; Alvers, Michael R; Landsiedel, Robert

    2014-06-01

    The knowledge-based search engine Go3R, www.Go3R.org, has been developed to assist scientists from industry and regulatory authorities in collecting comprehensive toxicological information with a special focus on identifying available alternatives to animal testing. The semantic search paradigm of Go3R makes use of expert knowledge on 3Rs methods and regulatory toxicology, laid down in the ontology, a network of concepts, terms, and synonyms, to recognize the contents of documents. Search results are automatically sorted into a dynamic table of contents presented alongside the list of documents retrieved. This table of contents allows the user to quickly filter the set of documents by topics of interest. Documents containing hazard information are automatically assigned to a user interface following the endpoint-specific IUCLID5 categorization scheme required, e.g. for REACH registration dossiers. For this purpose, complex endpoint-specific search queries were compiled and integrated into the search engine (based upon a gold standard of 310 references that had been assigned manually to the different endpoint categories). Go3R sorts 87% of the references concordantly into the respective IUCLID5 categories. Currently, Go3R searches in the 22 million documents available in the PubMed and TOXNET databases. However, it can be customized to search in other databases including in-house databanks. Copyright © 2013 Elsevier Ltd. All rights reserved.
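    The record above describes endpoint-specific search queries that sort retrieved references into IUCLID5 endpoint categories. The sketch below shows the general idea with simple keyword matching; the category names and keyword lists are illustrative assumptions and are far simpler than Go3R's ontology-based queries.

        # Toy keyword-based sorting of abstracts into endpoint categories.
        ENDPOINT_QUERIES = {
            "skin irritation":  ["skin irritation", "dermal irritation"],
            "acute toxicity":   ["acute oral", "ld50"],
            "genetic toxicity": ["mutagenicity", "ames test", "micronucleus"],
        }

        def assign_endpoints(abstract):
            text = abstract.lower()
            return [endpoint for endpoint, terms in ENDPOINT_QUERIES.items()
                    if any(term in text for term in terms)]

        print(assign_endpoints("An Ames test and micronucleus assay were negative."))
        # -> ['genetic toxicity']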

  15. From data repositories to submission portals: rethinking the role of domain-specific databases in CollecTF.

    Science.gov (United States)

    Kılıç, Sefa; Sagitova, Dinara M; Wolfish, Shoshannah; Bely, Benoit; Courtot, Mélanie; Ciufo, Stacy; Tatusova, Tatiana; O'Donovan, Claire; Chibucos, Marcus C; Martin, Maria J; Erill, Ivan

    2016-01-01

    Domain-specific databases are essential resources for the biomedical community, leveraging expert knowledge to curate published literature and provide access to referenced data and knowledge. The limited scope of these databases, however, poses important challenges on their infrastructure, visibility, funding and usefulness to the broader scientific community. CollecTF is a community-oriented database documenting experimentally validated transcription factor (TF)-binding sites in the Bacteria domain. In its quest to become a community resource for the annotation of transcriptional regulatory elements in bacterial genomes, CollecTF aims to move away from the conventional data-repository paradigm of domain-specific databases. Through the adoption of well-established ontologies, identifiers and collaborations, CollecTF has progressively become also a portal for the annotation and submission of information on transcriptional regulatory elements to major biological sequence resources (RefSeq, UniProtKB and the Gene Ontology Consortium). This fundamental change in database conception capitalizes on the domain-specific knowledge of contributing communities to provide high-quality annotations, while leveraging the availability of stable information hubs to promote long-term access and provide high-visibility to the data. As a submission portal, CollecTF generates TF-binding site information through direct annotation of RefSeq genome records, definition of TF-based regulatory networks in UniProtKB entries and submission of functional annotations to the Gene Ontology. As a database, CollecTF provides enhanced search and browsing, targeted data exports, binding motif analysis tools and integration with motif discovery and search platforms. This innovative approach will allow CollecTF to focus its limited resources on the generation of high-quality information and the provision of specialized access to the data.Database URL: http://www.collectf.org/. © The Author(s) 2016

  16. Database Description - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information: Database name: Trypanosomes Database. Database maintenance site: National Institute of Genetics, Research Organization of Information and Systems, Yata 1111, Mishima, Shizuoka 411-8540, Japan. Organism taxonomy: Trypanosoma (Taxonomy ID: 5690); Homo sapiens (Taxonomy ID: 9606). External links: PDB (Protein Data Bank), KEGG PATHWAY Database, DrugPort. Entry list: available. Query search: available.

  17. The ChArMEx database

    Science.gov (United States)

    Ferré, Hélène; Descloitres, Jacques; Fleury, Laurence; Boichard, Jean-Luc; Brissebrat, Guillaume; Focsa, Loredana; Henriot, Nicolas; Mastrorillo, Laurence; Mière, Arnaud; Vermeulen, Anne

    2013-04-01

    The Chemistry-Aerosol Mediterranean Experiment (ChArMEx, http://charmex.lsce.ipsl.fr/) aims at a scientific assessment of the present and future state of the atmospheric environment in the Mediterranean Basin, and of its impacts on the regional climate, air quality, and marine biogeochemistry. The project includes long term monitoring of environmental parameters, intensive field campaigns, use of satellite data and modelling studies. Therefore ChARMEx scientists produce and need to access a wide diversity of data. In this context, the objective of the database task is to organize data management, distribution system and services such as facilitating the exchange of information and stimulating the collaboration between researchers within the ChArMEx community, and beyond. The database relies on a strong collaboration between OMP and ICARE data centres and falls within the scope of the Mediterranean Integrated Studies at Regional And Locals Scales (MISTRALS) program data portal. All the data produced by or of interest for the ChArMEx community will be documented in the data catalogue and accessible through the database website: http://mistrals.sedoo.fr/ChArMEx. The database website offers different tools: - A registration procedure which enables any scientist to accept the data policy and apply for a user database account. - Forms to document observations or products that will be provided to the database in compliance with metadata international standards (ISO 19115-19139; INSPIRE; Global Change Master Directory Thesaurus). - A search tool to browse the catalogue using thematic, geographic and/or temporal criteria. - Sorted lists of the datasets by thematic keywords, by measured parameters, by instruments or by platform type. - A shopping-cart web interface to order in situ data files. At present datasets from the background monitoring station of Ersa, Cape Corsica and from the 2012 ChArMEx pre-campaign are available. - A user-friendly access to satellite products

  18. The ChArMEx database

    Science.gov (United States)

    Ferré, Helene; Belmahfoud, Nizar; Boichard, Jean-Luc; Brissebrat, Guillaume; Descloitres, Jacques; Fleury, Laurence; Focsa, Loredana; Henriot, Nicolas; Mastrorillo, Laurence; Mière, Arnaud; Vermeulen, Anne

    2014-05-01

    The Chemistry-Aerosol Mediterranean Experiment (ChArMEx, http://charmex.lsce.ipsl.fr/) aims at a scientific assessment of the present and future state of the atmospheric environment in the Mediterranean Basin, and of its impacts on the regional climate, air quality, and marine biogeochemistry. The project includes long term monitoring of environmental parameters, intensive field campaigns, use of satellite data and modelling studies. Therefore ChARMEx scientists produce and need to access a wide diversity of data. In this context, the objective of the database task is to organize data management, distribution system and services, such as facilitating the exchange of information and stimulating the collaboration between researchers within the ChArMEx community, and beyond. The database relies on a strong collaboration between OMP and ICARE data centres and has been set up in the framework of the Mediterranean Integrated Studies at Regional And Locals Scales (MISTRALS) program data portal. All the data produced by or of interest for the ChArMEx community will be documented in the data catalogue and accessible through the database website: http://mistrals.sedoo.fr/ChArMEx. At present, the ChArMEx database contains about 75 datasets, including 50 in situ datasets (2012 and 2013 campaigns, Ersa background monitoring station), 25 model outputs (dust model intercomparison, MEDCORDEX scenarios), and a high resolution emission inventory over the Mediterranean. Many in situ datasets have been inserted in a relational database, in order to enable more accurate data selection and download of different datasets in a shared format. The database website offers different tools: - A registration procedure which enables any scientist to accept the data policy and apply for a user database account. - A data catalogue that complies with metadata international standards (ISO 19115-19139; INSPIRE European Directive; Global Change Master Directory Thesaurus). - Metadata forms to document

  19. TRY – a global database of plant traits

    DEFF Research Database (Denmark)

    Kattge, J.; Diaz, S.; Lavorel, S.

    2011-01-01

    species richness to ecosystem functional diversity. Trait data thus represent the raw material for a wide range of research from evolutionary biology, community and functional ecology to biogeography. Here we present the global database initiative named TRY, which has united a wide range of the plant...... trait research community worldwide and gained an unprecedented buy‐in of trait data: so far 93 trait databases have been contributed. The data repository currently contains almost three million trait entries for 69 000 out of the world's 300 000 plant species, with a focus on 52 groups of traits...... is between species (interspecific), but significant intraspecific variation is also documented, up to 40% of the overall variation. Plant functional types (PFTs), as commonly used in vegetation models, capture a substantial fraction of the observed variation – but for several traits most variation occurs...

  20. Database Description - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information: Database name: SKIP Stemcell Database. Contact address: http://www.skip.med.keio.ac.jp/en/contact/. Database classification: Human Genes and Diseases; Stemcell. Organism taxonomy: Homo sapiens (Taxonomy ID: 9606). Database maintenance site: Center for Medical Genetics, School of Medicine. Web services: not available. User registration: not available.

  1. Database Description - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information: Database name: Arabidopsis Phenome Database. Database maintenance: BioResource Center (Hiroshi Masuya). Database classification: Plant databases - Arabidopsis thaliana. Organism taxonomy: Arabidopsis thaliana (Taxonomy ID: 3702). Database description: The new Arabidopsis Phenome Database integrates two novel databases ... useful materials for their experimental research. The other, the "Database of Curated Plant Phenome", focusing

  2. A new online database of nuclear electromagnetic moments

    Science.gov (United States)

    Mertzimekis, Theo J.

    2017-09-01

    Nuclear electromagnetic (EM) moments, i.e., the magnetic dipole and the electric quadrupole moments, provide important information about nuclear structure. As with other types of experimental data available to the community, measurements of nuclear EM moments have been organized systematically in compilations since the dawn of nuclear science. However, the wealth of recent moment measurements with radioactive beams, as well as earlier existing measurements, lacks an online, easy-to-access, systematically organized presence for disseminating information to researchers. In addition, the available printed compilations have a rather long life cycle and lag behind experimental measurements published in journals or elsewhere. A new online database (http://magneticmoments.info) focusing on nuclear EM moments has recently been developed to disseminate experimental data to the community. The database includes non-evaluated experimental data on nuclear EM moments, with a strong emphasis on frequent updates (a three-month life cycle) and direct connection to the sources via DOI and NSR hyperlinks. It has recently been integrated into IAEA LiveChart [1], but can also be found as a standalone webapp [2]. A detailed review of the database features, as well as plans for further development and expansion in the near future, is discussed.

  3. Data exchange between databases using API technology (Pertukaran Data Antar Database Dengan Menggunakan Teknologi API)

    Directory of Open Access Journals (Sweden)

    Ahmad Hanafi

    2017-03-01

    Full Text Available Electronic data interchange between institutions or companies must be supported by data storage media of appropriate capacity. MySQL is a database engine used to store data in the form of information, where the data can be used as needed. MySQL has the advantages of ease of use and the ability to work on different platforms. Because the system must be reliable and capable of multitasking, the database serves not only as a data storage medium but can also be used as a means of data exchange. The Dropbox API is presented as the best solution for enabling the database to exchange data. The combination of the Dropbox API and a database can be used as a very cheap data exchange solution for small companies, because it requires only a relatively small Internet connection.
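    The record proposes pairing a MySQL database with the Dropbox API for low-cost data exchange. The sketch below uses the official dropbox Python SDK (pip install dropbox) to push and pull a database dump; the access token, file names and paths are placeholders, and the article's own implementation may differ in detail.

        # Hedged sketch: exchange a database dump file through Dropbox.
        import dropbox

        ACCESS_TOKEN = "YOUR_DROPBOX_ACCESS_TOKEN"   # placeholder
        LOCAL_DUMP   = "company_a_dump.sql"          # e.g. produced by mysqldump
        REMOTE_PATH  = "/exchange/company_a_dump.sql"

        def push_dump():
            dbx = dropbox.Dropbox(ACCESS_TOKEN)
            with open(LOCAL_DUMP, "rb") as fh:
                dbx.files_upload(fh.read(), REMOTE_PATH,
                                 mode=dropbox.files.WriteMode.overwrite)

        def pull_dump(target="incoming_dump.sql"):
            dbx = dropbox.Dropbox(ACCESS_TOKEN)
            _, response = dbx.files_download(REMOTE_PATH)
            with open(target, "wb") as fh:
                fh.write(response.content)  # afterwards, load into MySQL (e.g. mysql < target)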

  4. High-Performance Secure Database Access Technologies for HEP Grids

    Energy Technology Data Exchange (ETDEWEB)

    Matthew Vranicar; John Weicher

    2006-04-17

    secure authorization is pushed into the database engine will eliminate inefficient data transfer bottlenecks. Furthermore, traditionally separated database and security layers provide an extra vulnerability, leaving a weak clear-text password authorization as the only protection on the database core systems. Due to the legacy limitations of the systems’ security models, the allowed passwords often cannot even comply with the DOE password guideline requirements. We see an opportunity for the tight integration of the secure authorization layer with the database server engine, resulting in both improved performance and improved security. Phase I has focused on the development of a proof-of-concept prototype using Argonne National Laboratory’s (ANL) Argonne Tandem-Linac Accelerator System (ATLAS) project as a test scenario. By developing a grid-security enabled version of the ATLAS project’s current relational database solution, MySQL, PIOCON Technologies aims to offer a more efficient solution to secure database access.

  5. High-Performance Secure Database Access Technologies for HEP Grids

    International Nuclear Information System (INIS)

    Vranicar, Matthew; Weicher, John

    2006-01-01

    authorization is pushed into the database engine will eliminate inefficient data transfer bottlenecks. Furthermore, traditionally separated database and security layers provide an extra vulnerability, leaving a weak clear-text password authorization as the only protection on the database core systems. Due to the legacy limitations of the systems' security models, the allowed passwords often cannot even comply with the DOE password guideline requirements. We see an opportunity for the tight integration of the secure authorization layer with the database server engine, resulting in both improved performance and improved security. Phase I has focused on the development of a proof-of-concept prototype using Argonne National Laboratory's (ANL) Argonne Tandem-Linac Accelerator System (ATLAS) project as a test scenario. By developing a grid-security enabled version of the ATLAS project's current relational database solution, MySQL, PIOCON Technologies aims to offer a more efficient solution to secure database access.

  6. Using Internet search engines to estimate word frequency.

    Science.gov (United States)

    Blair, Irene V; Urland, Geoffrey R; Ma, Jennifer E

    2002-05-01

    The present research investigated Internet search engines as a rapid, cost-effective alternative for estimating word frequencies. Frequency estimates for 382 words were obtained and compared across four methods: (1) Internet search engines, (2) the Kucera and Francis (1967) analysis of a traditional linguistic corpus, (3) the CELEX English linguistic database (Baayen, Piepenbrock, & Gulikers, 1995), and (4) participant ratings of familiarity. The results showed that Internet search engines produced frequency estimates that were highly consistent with those reported by Kucera and Francis and those calculated from CELEX, highly consistent across search engines, and very reliable over a 6-month period of time. Additional results suggested that Internet search engines are an excellent option when traditional word frequency analyses do not contain the necessary data (e.g., estimates for forenames and slang). In contrast, participants' familiarity judgments did not correspond well with the more objective estimates of word frequency. Researchers are advised to use search engines with large databases (e.g., AltaVista) to ensure the greatest representativeness of the frequency estimates.
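    The study's core method is to treat search-engine hit counts as word-frequency estimates and compare their logarithms with corpus norms. The sketch below shows that transformation with toy counts; the numbers are made up, and the engines used in the 2002 study (e.g., AltaVista) no longer report comparable hit counts, so a current count source would have to be substituted.

        # Log-transforming hit counts for comparison with corpus frequency norms.
        import math

        toy_hit_counts = {"table": 48_000_000, "quixotic": 310_000}  # illustrative only

        def log_frequency(word, counts=toy_hit_counts):
            return math.log10(max(counts.get(word, 0), 1))

        for word in toy_hit_counts:
            print(word, round(log_frequency(word), 2))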

  7. Loopedia, a database for loop integrals

    Science.gov (United States)

    Bogner, C.; Borowka, S.; Hahn, T.; Heinrich, G.; Jones, S. P.; Kerner, M.; von Manteuffel, A.; Michel, M.; Panzer, E.; Papara, V.

    2018-04-01

    Loopedia is a new database at loopedia.org for information on Feynman integrals, intended to provide both bibliographic information and results made available by the community. Its bibliometry is complementary to that of INSPIRE or arXiv in the sense that it admits searching for integrals by graph-theoretical objects, e.g. their topology.

  8. ATLAS database application enhancements using Oracle 11g

    International Nuclear Information System (INIS)

    Dimitrov, G; Canali, L; Blaszczyk, M; Sorokoletov, R

    2012-01-01

    The ATLAS experiment at LHC relies on databases for detector online data-taking, storage and retrieval of configurations, calibrations and alignments, post data-taking analysis, file management over the grid, job submission and management, and condition data replication to remote sites. The Oracle Relational Database Management System (RDBMS) has been addressing the ATLAS database requirements to a great extent for many years. Ten database clusters are currently deployed for the needs of the different applications, divided into production, integration and standby databases. The data volume, complexity and demands from the users are increasing steadily with time. Nowadays more than 20 TB of data are stored in the ATLAS production Oracle databases at CERN (not including the index overhead), but the most impressive number is the hosted 260 database schemas (in the most common case each schema is related to a dedicated client application with its own requirements). At the beginning of 2012 all ATLAS databases at CERN were upgraded to the newest Oracle version at the time: Oracle 11g Release 2. Oracle 11g comes with several key improvements compared to previous database engine versions. In this work we present our evaluation of the most relevant new features of Oracle 11g of interest for ATLAS applications and use cases. Notably we report on the performance and scalability enhancements obtained in production since the Oracle 11g deployment during Q1 2012 and we outline plans for future work in this area.

  9. Utilizing Civil Engineering Senior Design Capstone Projects to Evaluate Students' Sustainability Education across Engineering Curriculum

    Science.gov (United States)

    Dancz, Claire L. A.; Ketchman, Kevin J.; Burke, Rebekah D.; Hottle, Troy A.; Parrish, Kristen; Bilec, Melissa M.; Landis, Amy E.

    2017-01-01

    While many institutions express interest in integrating sustainability into their civil engineering curriculum, the engineering community lacks consensus on established methods for infusing sustainability into curriculum and verified approaches to assess engineers' sustainability knowledge. This paper presents the development of a sustainability…

  10. Database on wind characteristics - Analyses of wind turbine design loads

    Energy Technology Data Exchange (ETDEWEB)

    Larsen, G.C.; Hansen, K.S.

    2004-06-01

    The main objective of IEA R and D Wind Annex XVII - Database on Wind Characteristics - has been to provide wind energy planners, designers and researchers, as well as the international wind engineering community in general, with a source of actual wind field data (time series and resource data) observed in a wide range of different wind climates and terrain types. In connection with an extension of the initial Annex period, the scope of the continuation was widened to also include support for the international wind turbine standardisation efforts. The project partners are Sweden, Norway, U.S.A., The Netherlands and Denmark, with Denmark as the Operating Agent. The reporting of the continuation of Annex XVII falls into two separate parts. Part one accounts in detail for the available data in the established database bank, and part two describes various data analyses performed with the overall purpose of improving the design load cases with relevance to wind turbine structures. The present report constitutes the second part of the Annex XVII reporting. Both fatigue and extreme load aspects are dealt with, with the main emphasis on the latter. The work has been supported by The Ministry of Environment and Energy, Danish Energy Agency, The Netherlands Agency for Energy and the Environment (NOVEM), The Norwegian Water Resources and Energy Administration (NVE), The Swedish National Energy Administration (STEM) and The Government of the United States of America. (au)
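    The report above estimates 50-year extreme winds from the database. One standard, generic way to obtain such a return value is a Gumbel fit to annual maxima; the sketch below uses the method of moments and made-up numbers, and is shown only for orientation, since the report's actual procedure is not reproduced here.

        # Generic Gumbel annual-maxima estimate of a 50-year wind (illustrative).
        import math
        import statistics

        def gumbel_return_value(annual_maxima, return_period_years=50.0):
            mean = statistics.mean(annual_maxima)
            std = statistics.stdev(annual_maxima)
            beta = std * math.sqrt(6.0) / math.pi   # scale parameter
            mu = mean - 0.5772 * beta               # location (Euler-Mascheroni constant)
            p = 1.0 - 1.0 / return_period_years     # non-exceedance probability
            return mu - beta * math.log(-math.log(p))

        annual_max_wind = [24.1, 27.3, 22.8, 29.5, 25.0, 26.7, 23.9, 28.2]  # m/s, made up
        print(f"Estimated 50-year wind: {gumbel_return_value(annual_max_wind):.1f} m/s")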

  11. Memory aware query scheduling in a database cluster

    NARCIS (Netherlands)

    F. Waas; M.L. Kersten (Martin)

    2000-01-01

    textabstractQuery throughput is one of the primary optimization goals in interactive web-based information systems in order to achieve the performance necessary to serve large user communities. Queries in this application domain differ significantly from those in traditional database applications:

  12. ALFRED: An Allele Frequency Database for Microevolutionary Studies

    Directory of Open Access Journals (Sweden)

    Kenneth K Kidd

    2005-01-01

    Full Text Available Many kinds of microevolutionary studies require data on multiple polymorphisms in multiple populations. Increasingly, and especially for human populations, multiple research groups collect relevant data, and those data are dispersed widely in the literature. ALFRED has been designed to hold data from many sources and make them available over the web. Data are assembled from multiple sources, curated, and entered into the database. Multiple links to other resources are also established by the curators. A variety of search options are available, and additional geographically based interfaces are being developed. The database can serve the human anthropological genetic community by identifying which loci are already typed in many populations, thereby helping to focus efforts on a common set of markers. The database can also serve as a model for databases handling similar DNA polymorphism data for other species.

  13. CERN database services for the LHC computing grid

    International Nuclear Information System (INIS)

    Girone, M

    2008-01-01

    Physics meta-data stored in relational databases play a crucial role in the Large Hadron Collider (LHC) experiments and also in the operation of the Worldwide LHC Computing Grid (WLCG) services. A large proportion of non-event data such as detector conditions, calibration, geometry and production bookkeeping relies heavily on databases. Also, the core Grid services that catalogue and distribute LHC data cannot operate without a reliable database infrastructure at CERN and elsewhere. The Physics Services and Support group at CERN provides database services for the physics community. With an installed base of several TB-sized database clusters, the service is designed to accommodate growth for data processing generated by the LHC experiments and LCG services. During the last year, the physics database services went through a major preparation phase for LHC start-up and are now fully based on Oracle clusters on Intel/Linux. Over 100 database server nodes are deployed today in some 15 clusters serving almost 2 million database sessions per week. This paper will detail the architecture currently deployed in production and the results achieved in the areas of high availability, consolidation and scalability. Service evolution plans for the LHC start-up will also be discussed

  14. CERN database services for the LHC computing grid

    Energy Technology Data Exchange (ETDEWEB)

    Girone, M [CERN IT Department, CH-1211 Geneva 23 (Switzerland)], E-mail: maria.girone@cern.ch

    2008-07-15

    Physics meta-data stored in relational databases play a crucial role in the Large Hadron Collider (LHC) experiments and also in the operation of the Worldwide LHC Computing Grid (WLCG) services. A large proportion of non-event data such as detector conditions, calibration, geometry and production bookkeeping relies heavily on databases. Also, the core Grid services that catalogue and distribute LHC data cannot operate without a reliable database infrastructure at CERN and elsewhere. The Physics Services and Support group at CERN provides database services for the physics community. With an installed base of several TB-sized database clusters, the service is designed to accommodate growth for data processing generated by the LHC experiments and LCG services. During the last year, the physics database services went through a major preparation phase for LHC start-up and are now fully based on Oracle clusters on Intel/Linux. Over 100 database server nodes are deployed today in some 15 clusters serving almost 2 million database sessions per week. This paper will detail the architecture currently deployed in production and the results achieved in the areas of high availability, consolidation and scalability. Service evolution plans for the LHC start-up will also be discussed.

  15. Open Source Vulnerability Database Project

    Directory of Open Access Journals (Sweden)

    Jake Kouns

    2008-06-01

    Full Text Available This article introduces the Open Source Vulnerability Database (OSVDB) project, which manages a global collection of computer security vulnerabilities, available for free use by the information security community. This collection contains information on known security weaknesses in operating systems, software products, protocols, hardware devices, and other infrastructure elements of information technology. The OSVDB project is intended to be the centralized global open source vulnerability collection on the Internet.

  16. Biomaterials for tissue engineering applications.

    Science.gov (United States)

    Keane, Timothy J; Badylak, Stephen F

    2014-06-01

    With advancements in biological and engineering sciences, the definition of an ideal biomaterial has evolved over the past 50 years from a substance that is inert to one that has select bioinductive properties and integrates well with adjacent host tissue. Biomaterials are a fundamental component of tissue engineering, which aims to replace diseased, damaged, or missing tissue with reconstructed functional tissue. Most biomaterials are less than satisfactory for pediatric patients because the scaffold must adapt to the growth and development of the surrounding tissues and organs over time. The pediatric community, therefore, provides a distinct challenge for the tissue engineering community. Copyright © 2014. Published by Elsevier Inc.

  17. Implementation of a fuzzy relational database. Case study: academic tutoring

    Directory of Open Access Journals (Sweden)

    Ciro Saguay

    2017-02-01

    Full Text Available This paper describes the implementation of a fuzzy relational database for the practical case of the academic tutoring of the Faculty of Engineering Sciences of the Equinoctial Technological University (UTE). For the implementation, the ANSI-SPARC database architecture was used as the methodology, which abstracts the information into levels: at the external level the functional requirements were obtained, and at the conceptual level the fuzzy relational model was obtained. To achieve this model, the transformation of the data into fuzzy form was performed through mathematical models using the Fuzzy-Lookup tool, and at the physical level the fuzzy relational database was implemented. In addition, a user interface was developed in Java through which data are entered and queries are made against the fuzzy relational database to verify its operation.
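    The transformation step described above maps crisp attribute values to membership degrees in linguistic terms before they are stored in the fuzzy relational database. The sketch below shows this with trapezoidal membership functions; the linguistic labels and breakpoints are illustrative assumptions, not the actual model used in the UTE tutoring system.

        # Fuzzifying a crisp grade into membership degrees (illustrative).
        def trapezoid(x, a, b, c, d):
            """Membership of x in a trapezoid with feet a, d and shoulders b, c."""
            if x <= a or x >= d:
                return 0.0
            if b <= x <= c:
                return 1.0
            return (x - a) / (b - a) if x < b else (d - x) / (d - c)

        GRADE_LABELS = {           # assumed linguistic terms on a 0-10 scale
            "low":    (-1, 0, 4, 6),
            "medium": (4, 6, 7, 8.5),
            "high":   (7, 8.5, 10, 11),
        }

        def fuzzify(grade):
            return {label: round(trapezoid(grade, *params), 2)
                    for label, params in GRADE_LABELS.items()}

        print(fuzzify(7.5))  # -> {'low': 0.0, 'medium': 0.67, 'high': 0.33}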

  18. A Components Database Design and Implementation for Accelerators and Detectors

    International Nuclear Information System (INIS)

    Chan, A.; Meyer, S.

    2011-01-01

    Many accelerator and detector systems being fabricated for the PEP-II Accelerator and BABAR Detector needed configuration control and calibration measurements tracked for their components. Instead of building a database for each distinct system, a Components Database was designed and implemented that can encompass any type of component and any type of measurement. In this paper we describe this database design that is especially suited for the engineering and fabrication processes of the accelerator and detector environments where there are thousands of unique component types. We give examples of information stored in the Components Database, which includes accelerator configuration, calibration measurements, fabrication history, design specifications, inventory, etc. The World Wide Web interface is used to access the data, and templates are available for international collaborations to collect data off-line.

  19. Present and future status of distributed database for nuclear materials (Data-Free-Way)

    International Nuclear Information System (INIS)

    Fujita, Mitsutane; Xu, Yibin; Kaji, Yoshiyuki; Tsukada, Takashi

    2004-01-01

    Data-Free-Way (DFW) is a distributed database for nuclear materials. DFW has been developed since 1990 by three organizations: the National Institute for Materials Science (NIMS), the Japan Atomic Energy Research Institute (JAERI) and the Japan Nuclear Cycle Development Institute (JNC). Each organization constructs a materials database in its strongest field, and members of the three organizations can use these databases via the Internet. The construction of DFW, the stored data, an outline of the knowledge data system, the preparation of knowledge notes, and the activities of the three organizations are described. For NIMS, the nuclear reaction database for materials is explained. For JAERI, data analysis using IASCC data in JMPD is included. The main JNC databases are the 'Experimental database of coexistence of engineering ceramics in liquid sodium at high temperature', the 'Tensile test database of irradiated 304 stainless steel' and the 'Technical information database'. (S.Y.)

  20. WGDB: Wood Gene Database with search interface.

    Science.gov (United States)

    Goyal, Neha; Ginwal, H S

    2014-01-01

    Wood quality can be defined in terms of a particular end use and involves several traits. Over the last fifteen years researchers have assessed wood quality traits in forest trees. Wood quality has been categorized into cell wall biochemical traits and fibre properties, including microfibril angle, density and stiffness, in loblolly pine [1]. A user-friendly, open-access database named Wood Gene Database (WGDB) has been developed to describe wood genes along with protein information and published research articles. It contains 720 wood genes from species including Pinus and deodar as well as the fast-growing trees poplar and Eucalyptus. WGDB is designed to encompass the majority of publicly accessible genes coding for cellulose, hemicellulose and lignin in tree species that are relevant to wood formation and quality. It is an interactive platform for collecting, managing and searching specific wood genes; it also enables data mining related to genomic information, specifically in Arabidopsis thaliana, Populus trichocarpa, Eucalyptus grandis, Pinus taeda, Pinus radiata, Cedrus deodara and Cedrus atlantica. For user convenience, the database is cross-linked with the public databases NCBI, EMBL and Dendrome and with the Google search engine to make it more informative, and it provides the bioinformatics tools BLAST and COBALT. The database is freely available on www.wgdb.in.

  1. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): A Completed Reference Database of Lung Nodules on CT Scans

    International Nuclear Information System (INIS)

    2011-01-01

    Purpose: The development of computer-aided diagnostic (CAD) methods for lung nodule detection, classification, and quantitative assessment can be facilitated through a well-characterized repository of computed tomography (CT) scans. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) completed such a database, establishing a publicly available reference for the medical imaging research community. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process. Methods: Seven academic centers and eight medical imaging companies collaborated to identify, address, and resolve challenging organizational, technical, and clinical issues to provide a solid foundation for a robust database. The LIDC/IDRI Database contains 1018 cases, each of which includes images from a clinical thoracic CT scan and an associated XML file that records the results of a two-phase image annotation process performed by four experienced thoracic radiologists. In the initial blinded-read phase, each radiologist independently reviewed each CT scan and marked lesions belonging to one of three categories ("nodule≥3 mm," "nodule<3 mm," and "non-nodule≥3 mm"). In the subsequent unblinded-read phase, each radiologist independently reviewed their own marks along with the anonymized marks of the three other radiologists to render a final opinion. The goal of this process was to identify as completely as possible all lung nodules in each CT scan without requiring forced consensus. Results: The Database contains 7371 lesions marked "nodule" by at least one radiologist. 2669 of these lesions were marked "nodule≥3 mm" by at least one radiologist, of which 928 (34.7%) received such marks from all
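    Each case described above pairs a CT scan with an XML file recording the radiologists' marks. The sketch below shows how such per-case annotations might be tallied with Python's xml.etree; the tag and attribute names here are simplified assumptions, not the actual LIDC/IDRI XML schema (which uses namespaces, reading sessions and ROI edge maps), so the paths must be adapted to the real files.

        # Tallying annotation marks from a simplified, made-up XML layout.
        import xml.etree.ElementTree as ET

        SAMPLE = """<case>
          <reader id="1"><mark category="nodule&gt;=3mm" x="101" y="212" z="34"/></reader>
          <reader id="2"><mark category="non-nodule&gt;=3mm" x="87" y="54" z="12"/></reader>
        </case>"""

        def marks_by_category(xml_text):
            counts = {}
            for mark in ET.fromstring(xml_text).iter("mark"):
                counts[mark.get("category")] = counts.get(mark.get("category"), 0) + 1
            return counts

        print(marks_by_category(SAMPLE))  # {'nodule>=3mm': 1, 'non-nodule>=3mm': 1}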

  2. Community participation in rural health: a scoping review

    Directory of Open Access Journals (Sweden)

    Kenny Amanda

    2013-02-01

    Full Text Available Abstract Background Major health inequities between urban and rural populations have resulted in rural health as a reform priority across a number of countries. However, while there is some commonality between rural areas, there is increasing recognition that a one size fits all approach to rural health is ineffective as it fails to align healthcare with local population need. Community participation is proposed as a strategy to engage communities in developing locally responsive healthcare. Current policy in several countries reflects a desire for meaningful, high level community participation, similar to Arnstein’s definition of citizen power. There is a significant gap in understanding how higher level community participation is best enacted in the rural context. The aim of our study was to identify examples, in the international literature, of higher level community participation in rural healthcare. Methods A scoping review was designed to map the existing evidence base on higher level community participation in rural healthcare planning, design, management and evaluation. Key search terms were developed and mapped. Selected databases and internet search engines were used that identified 99 relevant studies. Results We identified six articles that most closely demonstrated higher level community participation; Arnstein’s notion of citizen power. While the identified studies reflected key elements for effective higher level participation, little detail was provided about how groups were established and how the community was represented. The need for strong partnerships was reiterated, with some studies identifying the impact of relational interactions and social ties. In all studies, outcomes from community participation were not rigorously measured. Conclusions In an environment characterised by increasing interest in community participation in healthcare, greater understanding of the purpose, process and outcomes is a priority for

  3. Function and organization of CPC database system

    International Nuclear Information System (INIS)

    Yoshida, Tohru; Tomiyama, Mineyoshi.

    1986-02-01

    Developing computer programs is very time-consuming and expensive work. Therefore, it is desirable to make effective use of existing programs. For this purpose, researchers and technical staff need to obtain the relevant information easily. CPC (Computer Physics Communications) is a journal published to facilitate the exchange of physics programs and of relevant information about the use of computers in the physics community. There are about 1300 CPC programs in the JAERI computing center, and the number of programs is increasing. A new database system (the CPC database) has been developed to manage the CPC programs and their information. Users obtain information about all the programs stored in the CPC database. Users can also find and copy the necessary program by inputting the program name, the catalogue number and the volume number. In this system, each operation is done by menu selection. Every CPC program is compressed and stored in the database; the required storage size is one third of the non-compressed format. Programs unused for a long time are moved to magnetic tape. The present report describes the CPC database system and the procedures for its use. (author)
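    The record notes that every CPC program is stored in compressed form (about one third of its original size) and retrieved by name or catalogue number. The sketch below illustrates that storage idea with zlib and SQLite; the table layout is an assumption for illustration, and the original JAERI system was a mainframe application with magnetic tape archiving, not SQLite.

        # Illustrative compressed program storage and retrieval.
        import sqlite3
        import zlib

        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE cpc_program (
                            catalogue_no TEXT PRIMARY KEY,
                            name         TEXT,
                            source       BLOB)""")

        def store(catalogue_no, name, source_text):
            conn.execute("INSERT INTO cpc_program VALUES (?, ?, ?)",
                         (catalogue_no, name, zlib.compress(source_text.encode())))

        def fetch(catalogue_no):
            name, blob = conn.execute(
                "SELECT name, source FROM cpc_program WHERE catalogue_no = ?",
                (catalogue_no,)).fetchone()
            return name, zlib.decompress(blob).decode()

        store("ABCD", "DEMO", "PROGRAM DEMO\n" * 200)   # made-up catalogue number
        print(fetch("ABCD")[0])  # -> DEMO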

  4. Engineering solutions for sustainability materials and resources II

    CERN Document Server

    Mishra, Brajendra; Anderson, Dayan; Sarver, Emily; Neelameggham, Neale

    2016-01-01

    With impending and burgeoning societal issues affecting both developed and emerging nations, the global engineering community has a responsibility and an opportunity to truly make a difference and contribute. The papers in this collection address what materials and resources are integral to meeting basic societal sustainability needs in critical areas of energy, transportation, housing, and recycling. Contributions focus on the engineering answers for cost-effective, sustainable pathways; the strategies for effective use of engineering solutions; and the role of the global engineering community. Authors share perspectives on the major engineering challenges that face our world today; identify, discuss, and prioritize engineering solution needs; and establish how these fit into developing global-demand pressures for materials and human resources.

  5. Pathbase: A new reference resource and database for laboratory mouse pathology

    International Nuclear Information System (INIS)

    Schofield, P. N.; Bard, J. B. L.; Boniver, J.; Covelli, V.; Delvenne, P.; Ellender, M.; Engstrom, W.; Goessner, W.; Gruenberger, M.; Hoefler, H.; Hopewell, J. W.; Mancuso, M.; Mothersill, C.; Quintanilla-Martinez, L.; Rozell, B.; Sariola, H.; Sundberg, J. P.; Ward, A.

    2004-01-01

    Pathbase (http://www.pathbase.net) is a web-accessible database of histopathological images of laboratory mice, developed as a resource for the coding and archiving of data derived from the analysis of mutant or genetically engineered mice and their background strains. The metadata for the images, which allows retrieval and inter-operability with other databases, is derived from a series of orthogonal ontologies and controlled vocabularies. One of these controlled vocabularies, MPATH, was developed by the Pathbase Consortium as a formal description of the content of mouse histopathological images. The database currently has over 1000 images online, with 2000 more under curation, and presents a paradigm for the development of future databases dedicated to aspects of experimental biology. (authors)

  6. Respiratory cancer database: An open access database of respiratory cancer gene and miRNA

    Directory of Open Access Journals (Sweden)

    Jyotsna Choubey

    2017-01-01

    Results and Conclusions: RespCanDB is expected to contribute to the scientific community's understanding of respiratory cancer biology as well as to the development of new ways of diagnosing and treating respiratory cancer. Currently, the database contains the oncogenomic information of lung cancer, laryngeal cancer, and nasopharyngeal cancer. Data for other cancers, such as oral and tracheal cancers, will be added in the near future. The URL of RespCanDB is http://ridb.subdic-bioinformatics-nitrr.in/.

  7. WAsP engineering 2000

    Energy Technology Data Exchange (ETDEWEB)

    Mann, J.; Ott, S.; Hoffmann Joergensen, B.; Frank, H.P.

    2002-08-01

    This report summarizes the findings of the EFP project WAsP Engineering Version 2000. The main product of this project is the computer program WAsP Engineering, which is used for the estimation of extreme wind speeds, wind shears, profiles, and turbulence in complex terrain. At the web page http://www.waspengineering.dk more information about the program can be obtained and a copy of the manual can be downloaded. The report contains a complete description of the turbulence modelling in moderately complex terrain implemented in WAsP Engineering. Experimental validation of the model, together with comparison with spectra from engineering codes, is also presented. Some shortcomings of the linear flow model LINCOM, which is at the core of WAsP Engineering, are pointed out and modifications to eliminate the problem are presented. The global database of meteorological 'reanalysis' data from NCAP/NCEP is used to estimate the extreme wind climate around Denmark. Among various alternative physical parameters in the database, such as surface winds, wind at various pressure levels or geostrophic winds at various heights, the surface geostrophic wind seems to give the most realistic results. Because of spatial filtering and intermittent temporal sampling, the 50 year winds are underestimated by approximately 12%. Whether the method applies to larger areas of the world remains to be seen. The 50 year winds in Denmark are estimated from data using the flow model in WAsP Engineering, and the values are approximately 1 m/s larger than in a previous analysis (Kristensen et al. 2000). A tool is developed to crudely estimate an extreme wind climate from a WAsP lib file. (au)
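
    The 50-year wind estimation mentioned above is commonly illustrated with an annual-maxima extreme value fit. The sketch below uses a Gumbel (type I) distribution from SciPy on invented data; it is a generic textbook approach, not the WAsP Engineering algorithm or its reanalysis-based method.

```python
import numpy as np
from scipy.stats import gumbel_r

# Estimate a T-year return wind speed from a series of annual maximum wind
# speeds with a Gumbel (type I extreme value) fit. Data are made up.
annual_maxima = np.array([21.3, 24.8, 19.7, 26.1, 22.4, 23.9, 25.2,
                          20.8, 27.3, 22.1])          # m/s, invented

loc, scale = gumbel_r.fit(annual_maxima)

T = 50                                                # return period in years
u50 = gumbel_r.ppf(1.0 - 1.0 / T, loc=loc, scale=scale)
print(f"Estimated {T}-year wind speed: {u50:.1f} m/s")
```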

  8. Statistical analysis of the ASME KIc database

    International Nuclear Information System (INIS)

    Sokolov, M.A.

    1998-01-01

    The American Society of Mechanical Engineers (ASME) KIc curve is a function of test temperature (T) normalized to a reference nil-ductility temperature, RT(NDT), namely, T - RT(NDT). It was constructed as the lower boundary to the available KIc database. Being a lower bound to the unique but limited database, the ASME KIc curve concept does not address probability. However, a continuing evolution of fracture mechanics advances has led to the employment of the Weibull distribution function to model the scatter of fracture toughness values in the transition range. The Weibull statistic/master curve approach was applied to analyze the current ASME KIc database. It is shown that the Weibull distribution function models the scatter in KIc data from different materials very well, while the temperature dependence is described by the master curve. Probabilistic-based tolerance-bound curves are suggested to describe lower-bound KIc values
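
    The Weibull/master-curve idea can be sketched as follows, under the usual master-curve convention of a three-parameter Weibull with a fixed shape of 4 and a fixed threshold of 20 MPa·√m, so that only the scale (and hence K0) is estimated. The toughness values below are invented and the percentile chosen for the tolerance bound is arbitrary; this is not the analysis performed in the paper.

```python
import numpy as np
from scipy.stats import weibull_min

# Fracture toughness scatter modelled with a three-parameter Weibull
# distribution: shape fixed at 4, threshold fixed at 20 MPa*sqrt(m), only
# the scale parameter is estimated. Data values are invented.
k_jc = np.array([58.0, 73.5, 90.2, 66.4, 81.7, 104.3, 77.9, 95.1])  # MPa*sqrt(m)

shape, loc, scale = weibull_min.fit(k_jc, fc=4.0, floc=20.0)
k0 = loc + scale        # toughness at 63.2 % cumulative failure probability

# A lower tolerance bound, here the 5th percentile of the fitted distribution:
k_lower = weibull_min.ppf(0.05, shape, loc=loc, scale=scale)
print(f"K0 = {k0:.1f} MPa*sqrt(m), 5% bound = {k_lower:.1f} MPa*sqrt(m)")
```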

  9. Bridging the Engineering and Medicine Gap

    Science.gov (United States)

    Walton, M.; Antonsen, E.

    2018-01-01

    A primary challenge NASA faces is communication between the disparate entities of engineers and human system experts in life sciences. Clear communication is critical for exploration mission success from the perspective of both risk analysis and data handling. The engineering community uses probabilistic risk assessment (PRA) models to inform their own risk analysis and has extensive experience managing mission data, but does not always fully consider human systems integration (HSI). The medical community, as a part of HSI, has been working 1) to develop a suite of tools to express medical risk in quantitative terms that are relatable to the engineering approaches commonly in use, and 2) to manage and integrate HSI data with engineering data. This talk will review the development of the Integrated Medical Model as an early attempt to bridge the communication gap between the medical and engineering communities in the language of PRA. This will also address data communication between the two entities in the context of data management considerations of the Medical Data Architecture. Lessons learned from these processes will help identify important elements to consider in future communication and integration of these two groups.

  10. A Review of Stellar Abundance Databases and the Hypatia Catalog Database

    Science.gov (United States)

    Hinkel, Natalie Rose

    2018-01-01

    The astronomical community is interested in elements from lithium to thorium, from solar twins to peculiarities of stellar evolution, because they give insight into different regimes of star formation and evolution. However, while some trends between elements and other stellar or planetary properties are well known, many other trends are not as obvious and are a point of conflict. For example, stars that host giant planets are found to be consistently enriched in iron, but the same cannot be definitively said for any other element. Therefore, it is time to take advantage of large stellar abundance databases in order to better understand not only the large-scale patterns, but also the more subtle, small-scale trends within the data. In this overview to the special session, I will present a review of large stellar abundance databases that are both currently available (i.e. RAVE, APOGEE) and those that will soon be online (i.e. Gaia-ESO, GALAH). Additionally, I will discuss the Hypatia Catalog Database (www.hypatiacatalog.com) -- which includes abundances from individual literature sources that observed stars within 150pc. The Hypatia Catalog currently contains 72 elements as measured within ~6000 stars, with a total of ~240,000 unique abundance determinations. The online database offers a variety of solar normalizations, stellar properties, and planetary properties (where applicable) that can all be viewed through multiple interactive plotting interfaces as well as in a tabular format. By analyzing stellar abundances for large populations of stars and from a variety of different perspectives, a wealth of information can be revealed on both large and small scales.

  11. Tandem Mass Spectrum Sequencing: An Alternative to Database Search Engines in Shotgun Proteomics.

    Science.gov (United States)

    Muth, Thilo; Rapp, Erdmann; Berven, Frode S; Barsnes, Harald; Vaudel, Marc

    2016-01-01

    Protein identification via database searches has become the gold standard in mass spectrometry based shotgun proteomics. However, as the quality of tandem mass spectra improves, direct mass spectrum sequencing gains interest as a database-independent alternative. In this chapter, the general principle of this so-called de novo sequencing is introduced along with pitfalls and challenges of the technique. The main tools available are presented with a focus on user friendly open source software which can be directly applied in everyday proteomic workflows.

  12. An Overview of the Literature: Research in P-12 Engineering Education

    Science.gov (United States)

    Mendoza Díaz, Noemi V.; Cox, Monica F.

    2012-01-01

    This paper presents an extensive overview of preschool to 12th grade (P-12) engineering education literature published between 2001 and 2011. Searches were conducted through education and engineering library engines and databases as well as queries in established publications in engineering education. More than 50 publications were found,…

  13. The Fragment Network: A Chemistry Recommendation Engine Built Using a Graph Database.

    Science.gov (United States)

    Hall, Richard J; Murray, Christopher W; Verdonk, Marcel L

    2017-07-27

    The hit validation stage of a fragment-based drug discovery campaign involves probing the SAR around one or more fragment hits. This often requires a search for similar compounds in a corporate collection or from commercial suppliers. The Fragment Network is a graph database that allows a user to efficiently search chemical space around a compound of interest. The result set is chemically intuitive, naturally grouped by substitution pattern and meaningfully sorted according to the number of observations of each transformation in medicinal chemistry databases. This paper describes the algorithms used to construct and search the Fragment Network and provides examples of how it may be used in a drug discovery context.
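
    The search pattern described above (neighbouring compounds grouped by transformation and ranked by how often that transformation is observed) can be illustrated with a small in-memory graph. The sketch below uses networkx with invented SMILES strings and counts; the actual Fragment Network is built on a dedicated graph database and a fragmentation algorithm not reproduced here.

```python
import networkx as nx

# Toy fragment graph: nodes are fragments (SMILES strings), edges represent
# a substitution/transformation and carry a count of how often that
# transformation is observed in reference databases. Counts are invented.
g = nx.Graph()
g.add_edge("c1ccccc1", "c1ccccc1C", count=4213)   # benzene -> toluene
g.add_edge("c1ccccc1", "c1ccccc1F", count=1876)   # benzene -> fluorobenzene
g.add_edge("c1ccccc1", "c1ccccc1O", count=3590)   # benzene -> phenol

def similar_fragments(query: str):
    """Return neighbouring fragments sorted by observation count."""
    hits = [(nbr, g[query][nbr]["count"]) for nbr in g.neighbors(query)]
    return sorted(hits, key=lambda t: t[1], reverse=True)

print(similar_fragments("c1ccccc1"))
```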

  14. Advanced SPARQL querying in small molecule databases.

    Science.gov (United States)

    Galgonek, Jakub; Hurt, Tomáš; Michlíková, Vendula; Onderka, Petr; Schwarz, Jan; Vondrášek, Jiří

    2016-01-01

    In recent years, the Resource Description Framework (RDF) and the SPARQL query language have become more widely used in the area of cheminformatics and bioinformatics databases. These technologies allow better interoperability of various data sources and powerful searching facilities. However, we identified several deficiencies that make usage of such RDF databases restrictive or challenging for common users. We extended a SPARQL engine to be able to use special procedures inside SPARQL queries. This allows the user to work with data that cannot be simply precomputed and thus cannot be directly stored in the database. We designed an algorithm that checks a query against data ontology to identify possible user errors. This greatly improves query debugging. We also introduced an approach to visualize retrieved data in a user-friendly way, based on templates describing visualizations of resource classes. To integrate all of our approaches, we developed a simple web application. Our system was implemented successfully, and we demonstrated its usability on the ChEBI database transformed into RDF form. To demonstrate procedure call functions, we employed compound similarity searching based on OrChem. The application is publicly available at https://bioinfo.uochb.cas.cz/projects/chemRDF.
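
    As a generic illustration of issuing SPARQL queries programmatically against an RDF service of this kind, the sketch below uses the SPARQLWrapper package. The endpoint URL and the label-based filter are placeholder assumptions and do not reflect the actual schema or the procedure-call extensions of the system described.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Generic SPARQL query from Python. The endpoint URL and predicate below
# are placeholders, not the actual ChEBI RDF schema described above.
endpoint = SPARQLWrapper("https://example.org/sparql")   # hypothetical endpoint
endpoint.setQuery("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?compound ?label
    WHERE {
        ?compound rdfs:label ?label .
        FILTER(CONTAINS(LCASE(?label), "caffeine"))
    }
    LIMIT 10
""")
endpoint.setReturnFormat(JSON)
results = endpoint.query().convert()

for row in results["results"]["bindings"]:
    print(row["compound"]["value"], row["label"]["value"])
```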

  15. Pembangunan Database Destinasi Pariwisata Indonesia Pengumpulan dan Pengolahan Data Tahap I

    Directory of Open Access Journals (Sweden)

    Yosafati Hulu

    2014-12-01

    Considering the increasing need of local governments and communities to develop tourism destinations in the era of regional autonomy, the need to select appropriate attractions according to their respective criteria, and the needs of travel and hotel businesses to offer attractions suited to potential tourists, it is necessary to develop a database of tourist destinations in Indonesia that can facilitate these needs. The database being built is a web-based database that is widely accessible and capable of storing complete information about Indonesian tourism destinations in a comprehensive, systematic and structured way. Attractions in the database are classified by attributes: location (island name, province, district), type of tourism product, how to reach the attraction, cost, and various informal information such as local knowledge about the attraction from local communities or tourists. This study is a continuation of previous research, the second of three planned phases. Phase two focuses on the collection and processing of data as well as testing and refinement of the model design and database structure created in Phase I. The study was conducted in stages: 1) design of the model and database structure, 2) development of a web-based program, 3) installation and hosting, 4) data collection, 5) data processing and data entry, and 6) evaluation and improvement.

  16. The MPI facial expression database--a validated database of emotional and conversational facial expressions.

    Directory of Open Access Journals (Sweden)

    Kathrin Kaulard

    The ability to communicate is one of the core aspects of human life. For this, we use not only verbal but also nonverbal signals of remarkable complexity. Among the latter, facial expressions belong to the most important information channels. Despite the large variety of facial expressions we use in daily life, research on facial expressions has so far mostly focused on the emotional aspect. Consequently, most databases of facial expressions available to the research community also include only emotional expressions, neglecting the largely unexplored aspect of conversational expressions. To fill this gap, we present the MPI facial expression database, which contains a large variety of natural emotional and conversational expressions. The database contains 55 different facial expressions performed by 19 German participants. Expressions were elicited with the help of a method-acting protocol, which guarantees both well-defined and natural facial expressions. The method-acting protocol was based on every-day scenarios, which are used to define the necessary context information for each expression. All facial expressions are available in three repetitions, in two intensities, as well as from three different camera angles. A detailed frame annotation is provided, from which a dynamic and a static version of the database have been created. In addition to describing the database in detail, we also present the results of an experiment with two conditions that serve to validate the context scenarios as well as the naturalness and recognizability of the video sequences. Our results provide clear evidence that conversational expressions can be recognized surprisingly well from visual information alone. The MPI facial expression database will enable researchers from different research fields (including the perceptual and cognitive sciences, but also affective computing, as well as computer vision) to investigate the processing of a wider range of natural

  17. Mitigating Climate Change at the Carbon Water Nexus: A Call to Action for the Environmental Engineering Community.

    Science.gov (United States)

    Clarens, Andres F; Peters, Catherine A

    2016-10-01

    Environmental engineers have played a critical role in improving human and ecosystem health over the past several decades. These contributions have focused on providing clean water and air as well as managing waste streams and remediating polluted sites. As environmental problems have become more global in scale and more deeply entrenched in sociotechnical systems, the discipline of environmental engineering must grow to be ready to respond to the challenges of the coming decades. Here we make the case that environmental engineers should play a leadership role in the development of climate change mitigation technologies at the carbon-water nexus (CWN). Climate change, driven largely by unfettered emissions of fossil carbon into the atmosphere, is a far-reaching and enormously complex environmental risk with the potential to negatively affect food security, human health, infrastructure, and other systems. Solving this problem will require a massive mobilization of existing and innovative new technology. The environmental engineering community is uniquely positioned to do pioneering work at the CWN using a skillset that has been honed, solving related problems. The focus of this special issue, on "The science and innovation of emerging subsurface energy technologies," provides one example domain within which environmental engineers and related disciplines are beginning to make important contributions at the CWN. In this article, we define the CWN and describe how environmental engineers can bring their considerable expertise to bear in this area. Then we review some of the topics that appear in this special issue, for example, mitigating the impacts of hydraulic fracturing and geologic carbon storage, and we provide perspective on emergent research directions, for example, enhanced geothermal energy, energy storage in sedimentary formations, and others.

  18. The INGV Real Time Strong Motion Database

    Science.gov (United States)

    Massa, Marco; D'Alema, Ezio; Mascandola, Claudia; Lovati, Sara; Scafidi, Davide; Gomez, Antonio; Carannante, Simona; Franceschina, Gianlorenzo; Mirenna, Santi; Augliera, Paolo

    2017-04-01

    The INGV real time strong motion data sharing is assured by the INGV Strong Motion Database. ISMD (http://ismd.mi.ingv.it) was designed in the last months of 2011 in cooperation among different INGV departments, with the aim of organizing the distribution of the INGV strong-motion data using standard procedures for data acquisition and processing. The first version of the web portal was published soon after the occurrence of the 2012 Emilia (Northern Italy), Mw 6.1, seismic sequence. At that time ISMD was the first European real time web portal devoted to the engineering seismology community. After four years of successful operation, the thousands of accelerometric waveforms collected in the archive made a technological upgrade of the system necessary, in order to better organize the archiving of new data and to answer user requests more efficiently. ISMD 2.0 was based on PostgreSQL (www.postgresql.org), an open source object-relational database. The main purpose of the web portal is to distribute, a few minutes after the origin time, the accelerometric waveforms and related metadata of Italian earthquakes with ML≥3.0. Data are provided both in raw SAC (counts) and automatically corrected ASCII (gal) formats. The web portal also provides, for each event, a detailed description of the ground motion parameters (i.e. peak ground acceleration, velocity and displacement, Arias and Housner intensities), data converted to velocity and displacement, response spectra up to 10.0 s, and general maps concerning the recent and historical seismicity of the area, together with information about its seismic hazard. The focal parameters of the events are provided by the INGV National Earthquake Center (CNT, http://cnt.rm.ingv.it). Moreover, the database provides a detailed site characterization section for each strong motion station, based on geological, geomorphological and geophysical information. At present (i.e. January 2017), ISMD includes 987 (121
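
    A few of the ground-motion parameters listed above have standard definitions that can be computed directly from an accelerogram, as in the hedged sketch below (synthetic signal, NumPy only). This is a generic illustration of PGA, PGV and Arias intensity, not the ISMD processing chain.

```python
import numpy as np

# Standard definitions of a few ground-motion parameters, applied to a
# synthetic accelerogram. Generic illustration only.
g = 9.81                          # m/s^2
dt = 0.005                        # sampling interval in seconds
t = np.arange(0, 20, dt)
acc = 0.3 * g * np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.2 * t)   # m/s^2, synthetic

pga = np.max(np.abs(acc))                                # peak ground acceleration
vel = np.cumsum(acc) * dt                                # crude integration to velocity
pgv = np.max(np.abs(vel))                                # peak ground velocity
arias = (np.pi / (2 * g)) * np.trapz(acc ** 2, dx=dt)    # Arias intensity (m/s)

print(f"PGA = {pga:.3f} m/s^2, PGV = {pgv:.3f} m/s, Ia = {arias:.3f} m/s")
```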

  19. Knowledge base technology for CT-DIMS: Report 1. [CT-DIMS (Cutting Tool - Database and Information Management System)

    Energy Technology Data Exchange (ETDEWEB)

    Kelley, E.E.

    1993-05-01

    This report discusses progress on the Cutting Tool-Database and Information Management System (CT-DIMS) project being conducted by the University of Illinois Urbana-Champaign (UIUC) under contract to the Department of Energy. This project was initiated in October 1991 by UIUC. The Knowledge-Based Engineering Systems Research Laboratory (KBESRL) at UIUC is developing knowledge base technology and prototype software for the presentation and manipulation of the cutting tool databases at Allied-Signal Inc., Kansas City Division (KCD). The graphical tool selection capability being developed for CT-DIMS in the Intelligent Design Environment for Engineering Automation (IDEEA) will provide a concurrent environment for simultaneous access to tool databases, tool standard libraries, and cutting tool knowledge.

  20. Databases in the Central Government : State-of-the-art and the Future

    Science.gov (United States)

    Ohashi, Tomohiro

    The Management and Coordination Agency, Prime Minister’s Office, conducted a questionnaire survey of all Japanese Ministries and Agencies in November 1985 on the present status of databases produced, or planned to be produced, by the central government. According to the results, 132 databases had been produced by 19 Ministries and Agencies. Many of these databases are held by the Defence Agency, the Ministry of Construction, the Ministry of Agriculture, Forestry & Fisheries, and the Ministry of International Trade & Industry, and are in the fields of architecture & civil engineering, science & technology, R & D, agriculture, forestry and fishery. However, the databases available to other Ministries and Agencies amount to only 39 percent of all produced databases, while 60 percent are unavailable to them because they are in-house databases and so forth. The survey results are outlined, and the databases produced by the central government are introduced under the headings of (1) databases commonly used by all Ministries and Agencies, (2) integrated databases, (3) statistical databases and (4) bibliographic databases. Future issues are also described from the viewpoints of technology development and mutual use of databases.

  1. Deconstructing Engineering Education Programmes: The DEEP Project to Reform the Mechanical Engineering Curriculum

    Science.gov (United States)

    Busch-Vishniac, Ilene; Kibler, Tom; Campbell, Patricia B.; Patterson, Eann; Guillaume, Darrell; Jarosz, Jeffrey; Chassapis, Constantin; Emery, Ashley; Ellis, Glenn; Whitworth, Horace; Metz, Susan; Brainard, Suzanne; Ray, Pradosh

    2011-01-01

    The goal of the Deconstructing Engineering Education Programmes project is to revise the mechanical engineering undergraduate curriculum to make the discipline more able to attract and retain a diverse community of students. The project seeks to reduce and reorder the prerequisite structure linking courses to offer greater flexibility for…

  2. Biogas composition and engine performance, including database and biogas property model

    NARCIS (Netherlands)

    Bruijstens, A.J.; Beuman, W.P.H.; Molen, M. van der; Rijke, J. de; Cloudt, R.P.M.; Kadijk, G.; Camp, O.M.G.C. op den; Bleuanus, W.A.J.

    2008-01-01

    In order to enable an evaluation of the current biogas quality situation in the EU, results are presented in a biogas database. Furthermore, the key gas parameter Sonic Bievo Index (influence on open loop A/F-ratio) is defined and other key gas parameters like the Methane Number (knock resistance)

  3. Exploring Counseling Services and Their Impact on Female, Underrepresented Minority Community College Students in Science, Technology, Engineering, and Math: A Qualitative Study

    Science.gov (United States)

    Strother, Elizabeth

    The economic future of the United States depends on developing a workforce of professionals in science, technology, engineering and mathematics (Adkins, 2012; Mokter Hossain & Robinson, 2012). In California, the college population is increasingly female and underrepresented minority, a population that has historically chosen to study majors other than STEM. In California, community colleges provide a major inroad for students seeking to further their education in one of the many universities in the state. The recent passage of Senate Bill 1456 and the Student Success and Support Program mandate increased counseling services for all California community college students (California Community College Chancellors Office, 2014). This dissertation is designed to explore the perceptions of female, underrepresented minority college students who are majoring in an area of science, technology, engineering and math, as they relate to community college counseling services. Specifically, it aims to understand what counseling services are most effective, and what community college counselors can do to increase the level of interest in STEM careers in this population. This is a qualitative study. Eight participants were interviewed for the case study, all of whom are current or former community college students who have declared a major in a STEM discipline. The semi-structured interviews were designed to help understand what community college counselors can do to better serve this population, and to encourage more students to pursue STEM majors and careers. Through the interviews, themes emerged to explain what counseling services are the most helpful. Successful STEM students benefited from counselors who showed empathy and support. Counselors who understood the intricacies of educational planning for STEM majors were considered the most efficacious. Counselors who could connect students with enrichment activities, such as internships, were highly valued, as were counseling

  4. The Neotoma Paleoecology Database

    Science.gov (United States)

    Grimm, E. C.; Ashworth, A. C.; Barnosky, A. D.; Betancourt, J. L.; Bills, B.; Booth, R.; Blois, J.; Charles, D. F.; Graham, R. W.; Goring, S. J.; Hausmann, S.; Smith, A. J.; Williams, J. W.; Buckland, P.

    2015-12-01

    The Neotoma Paleoecology Database (www.neotomadb.org) is a multiproxy, open-access, relational database that includes fossil data for the past 5 million years (the late Neogene and Quaternary Periods). Modern distributional data for various organisms are also being made available for calibration and paleoecological analyses. The project is a collaborative effort among individuals from more than 20 institutions worldwide, including domain scientists representing a spectrum of Pliocene-Quaternary fossil data types, as well as experts in information technology. Working groups are active for diatoms, insects, ostracodes, pollen and plant macroscopic remains, testate amoebae, rodent middens, vertebrates, age models, geochemistry and taphonomy. Groups are also active in developing online tools for data analyses and for developing modules for teaching at different levels. A key design concept of NeotomaDB is that stewards for various data types are able to remotely upload and manage data. Cooperatives for different kinds of paleo data, or from different regions, can appoint their own stewards. Over the past year, much progress has been made on development of the steward software-interface that will enable this capability. The steward interface uses web services that provide access to the database. More generally, these web services enable remote programmatic access to the database, which both desktop and web applications can use and which provide real-time access to the most current data. Use of these services can alleviate the need to download the entire database, which can be out-of-date as soon as new data are entered. In general, the Neotoma web services deliver data either from an entire table or from the results of a view. Upon request, new web services can be quickly generated. Future developments will likely expand the spatial and temporal dimensions of the database. NeotomaDB is open to receiving new datasets and stewards from the global Quaternary community
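
    Programmatic access of the kind described (web services returning current data without downloading the whole database) can be sketched with a simple HTTP request. The endpoint path and query parameters below are assumptions based on the public Neotoma API and may differ from the actual services; the example only shows the general pattern.

```python
import requests

# Assumed endpoint and parameters based on the public Neotoma API
# (https://api.neotomadb.org); adjust to the actual service documentation.
url = "https://api.neotomadb.org/v2.0/data/sites"
resp = requests.get(url, params={"sitename": "%Lake%", "limit": 5}, timeout=30)
resp.raise_for_status()

# Responses are assumed to wrap results in a "data" list of site records.
for site in resp.json().get("data", []):
    print(site.get("siteid"), site.get("sitename"))
```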

  5. THE Be STAR SPECTRA (BeSS) DATABASE

    International Nuclear Information System (INIS)

    Neiner, C.; De Batz, B.; Cochard, F.; Floquet, M.; Mekkas, A.; Desnoux, V.

    2011-01-01

    Be stars vary on many timescales, from hours to decades. A long time base of observations to analyze certain phenomena in these stars is therefore necessary. Collecting all existing and future Be star spectra into one database has thus emerged as an important tool for the Be star community. Moreover, for statistical studies, it is useful to have centralized information on all known Be stars via an up-to-date catalog. These two goals are what the Be Star Spectra (BeSS, http://basebe.obspm.fr) database proposes to achieve. The database contains an as-complete-as-possible catalog of known Be stars with stellar parameters, as well as spectra of Be stars from all origins (any wavelength, any epoch, any resolution, etc.). It currently contains over 54,000 spectra of more than 600 different Be stars among the ∼2000 Be stars in the catalog. A user can access and query this database to retrieve information on Be stars or spectra. Registered members can also upload spectra to enrich the database. Spectra obtained by professional as well as amateur astronomers are individually validated in terms of format and science before being included in BeSS. In this paper, we present the database itself as well as examples of the use of BeSS data in terms of statistics and the study of individual stars.

  6. Discourse Communities and Communities of Practice

    DEFF Research Database (Denmark)

    Pogner, Karl-Heinz

    2005-01-01

    This paper aims at giving a more detailed description and discussion of two concepts of `community' developed in the research areas of text production/writing and social learning/information management/knowledge sharing, and comparing them with each other. The purpose of this theoretical exercise ... production at different Danish workplaces (a consulting engineering company, a university department and a bank) and discusses their significance in the context of co-located as well as geographically distributed communities.

  7. Statistical models of petrol engines vehicles dynamics

    Science.gov (United States)

    Ilie, C. O.; Marinescu, M.; Alexa, O.; Vilău, R.; Grosu, D.

    2017-10-01

    This paper focuses on statistical models of vehicle dynamics. A one-year testing programme was designed and performed, using several cars of the same type with petrol engines and different mileages. Experimental data were collected from on-board sensors and from the engine test stand. A database containing data from 64 tests was created. Several mathematical models were developed from the database using the system identification method. Each model is a SISO or MISO linear predictive ARMAX (AutoRegressive Moving-Average with eXogenous inputs) model, i.e. a difference equation with constant coefficients. Sixty-four equations were obtained for each dependency, for example engine torque as output with engine load and intake manifold pressure as inputs, giving strings of 64 values for each type of model. The final models were obtained using the average values of the coefficients, and the accuracy of the models was assessed.
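
    An ARMAX-type identification of the kind described (output with autoregressive and moving-average terms plus exogenous inputs) can be sketched with statsmodels, as below. The data are synthetic, the variable names and model orders are arbitrary, and this is not the authors' identification procedure.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# ARMAX-style model: engine torque as output, engine load and intake
# manifold pressure as exogenous inputs. Data are synthetic; orders arbitrary.
rng = np.random.default_rng(0)
n = 500
load = rng.uniform(0.2, 1.0, n)
manifold_p = 40 + 60 * load + rng.normal(0, 2, n)            # kPa, synthetic
torque = 150 * load + 0.5 * manifold_p + rng.normal(0, 5, n)  # N*m, synthetic

exog = pd.DataFrame({"load": load, "manifold_p": manifold_p})
model = SARIMAX(torque, exog=exog, order=(2, 0, 1))          # ARMA(2,1) + inputs
result = model.fit(disp=False)
print(result.params)
```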

  8. Community Drive

    DEFF Research Database (Denmark)

    Magnussen, Rikke

    2018-01-01

    Schools and educational institutions are challenged by not adequately educating students for independent knowledge collaboration and solving of complex societal challenges (Bundsgaard & Hansen, 2016; Slot et al., 2017). As an alternative strategy to formal learning, community-driven research ... opportunity to break boundaries between research institutions and surrounding communities through the involvement of new types of actors, knowledge forms and institutions (OECD, 2011). This paper presents the project Community Drive, a three-year cross-disciplinary community-driven game- and data-based project. ... In the paper we present how the project Community Drive, initiated in May 2018, is based on results from pilot projects conducted from 2014 to 2017. Overall these studies showed that it is a strong motivational factor for students to be given the task of changing their living conditions through redesign...

  9. Database of Legal Terms for Communicative and Knowledge Information Tools

    DEFF Research Database (Denmark)

    Nielsen, Sandro

    2014-01-01

    foundations of online dictionaries in light of the technical options available for online information tools combined with modern lexicographic principles. The above discussion indicates that the legal database is a repository of structured data serving online dictionaries that search for data in databases......, retrieve the relevant data, and present them to users in predetermined ways. Lawyers, students and translators can thus access the data through targeted searches relating directly to the problems they need to solve, because search engines are designed according to dictionary functions, i.e. the type...

  10. WAsP engineering 2000

    DEFF Research Database (Denmark)

    Mann, J.; Ott, Søren; Jørgensen, B.H.

    2002-01-01

    This report summarizes the findings of the EFP project WAsP Engineering Version 2000. The main product of this project is the computer program WAsP Engineering, which is used for the estimation of extreme wind speeds, wind shears, profiles, and turbulence in complex terrain. At the web page http://www.waspengineering.dk more information about the program can be obtained and a copy of the manual can be downloaded. The report contains a complete description of the turbulence modelling in moderately complex terrain, implemented in WAsP Engineering. Experimental validation of the model, together with comparison with spectra from engineering codes, is also presented. Some shortcomings of the linear flow model LINCOM, which is at the core of WAsP Engineering, are pointed out and modifications to eliminate the problem are presented. The global database of meteorological "reanalysis" data from NCAP/NCEP is used to estimate...

  11. Database Description - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Yeast Interacting Proteins Database. DOI: 10.18908/lsdba.nbdc00742-000. Contact: Tel: +81-4-7136-3989, Fax: +81-4-7136-3979 (Chiba-ken 277-8561). Taxonomy: Saccharomyces cerevisiae (Taxonomy ID: 4932). Database description: information on interactions and related information ... Reference: Proc Natl Acad Sci U S A. 2001 Apr 10;98(8):4569-74 (Epub 2001 Mar 13).

  12. Shedding light on the subject: introduction to illumination engineering and design for multidiscipline engineering students

    Science.gov (United States)

    Ronen, Ram S.; Smith, R. Frank

    1995-10-01

    Educating engineers and architects in Illumination Engineering and related subjects has become a very important field and a very satisfying and rewarding one. The main reasons include the need to significantly conserve lighting energy and meet government regulations while supplying appropriate light levels and achieving aesthetic requirements. The proliferation of new lamps, luminaires and lighting controllers, many of which are 'energy savers', also reinforces the trend to seek help from lighting engineers when designing new commercial and residential buildings. That trend is believed to continue and grow as the benefits become attractive and new government conservation regulations take effect. Engineering and science students in most disciplines make excellent candidates for illumination engineers because of their background, so teaching them can move ahead at a brisk pace and be a rewarding experience. In the past two years, the Cal Poly Pomona College of Engineering has been the beneficiary of a DOE/California grant. Its purpose was to precipitate and oversee lighting curricula in various California community colleges and also to develop and launch an Illumination Engineering minor at Cal Poly University. Both objectives have successfully been met. Numerous community colleges throughout California developed and are offering a sequence of six lighting courses leading to a certificate; the first graduating class is now coming out of both Cypress and Consumnes Community Colleges. At Cal Poly University a four course/laboratory sequence leading to a minor in Illumination Engineering (ILE) is now offered to upper division students in the College of Engineering, College of Science and College of Architecture and Design. The ILE sequence will briefly be described. The first course, Introduction to Illumination Engineering, and its laboratory are described in more detail later. Various methods of instruction including lectures, self work

  13. Update History of This Database - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Update history of the Trypanosomes Database: 2014/05/07 - the contact information was corrected; the features and manner of utilization of the database were corrected. 2014/02/04 - the Trypanosomes Database English archive site was opened. 2011/04/04 - the Trypanosomes Database (http://www.tanpaku.org/tdb/) was opened.

  14. Renal Gene Expression Database (RGED): a relational database of gene expression profiles in kidney disease.

    Science.gov (United States)

    Zhang, Qingzhou; Yang, Bo; Chen, Xujiao; Xu, Jing; Mei, Changlin; Mao, Zhiguo

    2014-01-01

    We present a bioinformatics database named Renal Gene Expression Database (RGED), which contains comprehensive gene expression data sets from renal disease research. The web-based interface of RGED allows users to query gene expression profiles in various kidney-related samples, including renal cell lines, human kidney tissues and murine model kidneys. Researchers can explore the profiles of particular genes, examine relationships between genes of interest, and identify biomarkers or even drug targets in kidney diseases. The aim of this work is to provide a user-friendly utility for the renal disease research community to query expression profiles of genes of interest without requiring advanced computational skills. The website is implemented in PHP, R, MySQL and Nginx and is freely available at http://rged.wall-eva.net. © The Author(s) 2014. Published by Oxford University Press.

  15. Biomedical Engineering curriculum at UAM-I: a critical review.

    Science.gov (United States)

    Martinez Licona, Fabiola; Azpiroz-Leehan, Joaquin; Urbina Medal, E Gerardo; Cadena Mendez, Miguel

    2014-01-01

    The Biomedical Engineering (BME) curriculum at Universidad Autónoma Metropolitana (UAM) has undergone at least four major transformations since the founding of the BME undergraduate program in 1974. This work is a critical assessment of the curriculum from the point of view of its results as derived from an analysis of, among other resources, institutional databases on students, graduates and their academic performance. The results of the evaluation can help us define admission policies as well as reasonable limits on the maximum duration of undergraduate studies. Other results linked to the faculty composition and the social environment can be used to define a methodology for the evaluation of teaching and the implementation of mentoring and tutoring programs. Changes resulting from this evaluation may be the only way to assure and maintain leadership and recognition from the BME community.

  16. Phytophthora database 2.0: update and future direction.

    Science.gov (United States)

    Park, Bongsoo; Martin, Frank; Geiser, David M; Kim, Hye-Seon; Mansfield, Michele A; Nikolaeva, Ekaterina; Park, Sook-Young; Coffey, Michael D; Russo, Joseph; Kim, Seong H; Balci, Yilmaz; Abad, Gloria; Burgess, Treena; Grünwald, Niklaus J; Cheong, Kyeongchae; Choi, Jaeyoung; Lee, Yong-Hwan; Kang, Seogchan

    2013-12-01

    The online community resource Phytophthora database (PD) was developed to support accurate and rapid identification of Phytophthora and to help characterize and catalog the diversity and evolutionary relationships within the genus. Since its release in 2008, the sequence database has grown to cover 1 to 12 loci for ≈2,600 isolates (representing 138 described and provisional species). Sequences of multiple mitochondrial loci were added to complement nuclear loci-based phylogenetic analyses and diagnostic tool development. Key characteristics of most newly described and provisional species have been summarized. Other additions to improve the PD functionality include: (i) geographic information system tools that enable users to visualize the geographic origins of chosen isolates on a global-scale map, (ii) a tool for comparing genetic similarity between isolates via microsatellite markers to support population genetic studies, (iii) a comprehensive review of molecular diagnostics tools and relevant references, (iv) sequence alignments used to develop polymerase chain reaction-based diagnostics tools to support their utilization and new diagnostic tool development, and (v) an online community forum for sharing and preserving experience and knowledge accumulated in the global Phytophthora community. Here we present how these improvements can support users and discuss the PD's future direction.

  17. Protein engineering for metabolic engineering: current and next-generation tools

    Science.gov (United States)

    Marcheschi, Ryan J.; Gronenberg, Luisa S.; Liao, James C.

    2014-01-01

    Protein engineering in the context of metabolic engineering is increasingly important to the field of industrial biotechnology. As the demand for biologically-produced food, fuels, chemicals, food additives, and pharmaceuticals continues to grow, the ability to design and modify proteins to accomplish new functions will be required to meet the high productivity demands for the metabolism of engineered organisms. This article reviews advances of selecting, modeling, and engineering proteins to improve or alter their activity. Some of the methods have only recently been developed for general use and are just beginning to find greater application in the metabolic engineering community. We also discuss methods of generating random and targeted diversity in proteins to generate mutant libraries for analysis. Recent uses of these techniques to alter cofactor use, produce non-natural amino acids, alcohols, and carboxylic acids, and alter organism phenotypes are presented and discussed as examples of the successful engineering of proteins for metabolic engineering purposes. PMID:23589443

  18. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): A Completed Reference Database of Lung Nodules on CT Scans

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2011-02-15

    Purpose: The development of computer-aided diagnostic (CAD) methods for lung nodule detection, classification, and quantitative assessment can be facilitated through a well-characterized repository of computed tomography (CT) scans. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) completed such a database, establishing a publicly available reference for the medical imaging research community. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process. Methods: Seven academic centers and eight medical imaging companies collaborated to identify, address, and resolve challenging organizational, technical, and clinical issues to provide a solid foundation for a robust database. The LIDC/IDRI Database contains 1018 cases, each of which includes images from a clinical thoracic CT scan and an associated XML file that records the results of a two-phase image annotation process performed by four experienced thoracic radiologists. In the initial blinded-read phase, each radiologist independently reviewed each CT scan and marked lesions belonging to one of three categories ("nodule ≥3 mm," "nodule <3 mm," and "non-nodule ≥3 mm"). In the subsequent unblinded-read phase, each radiologist independently reviewed their own marks along with the anonymized marks of the three other radiologists to render a final opinion. The goal of this process was to identify as completely as possible all lung nodules in each CT scan without requiring forced consensus. Results: The Database contains 7371 lesions marked "nodule" by at least one radiologist. 2669 of these lesions were marked

  19. Sustainable Development in Engineering Education

    Science.gov (United States)

    Taoussanidis, Nikolaos N.; Antoniadou, Myrofora A.

    2006-01-01

    The principles and practice of environmentally and socially sustainable engineering are in line with growing community expectations and the strengthening voice of civil society in engineering interventions. Pressures towards internationalization and globalization are reflected in new course accreditation criteria and higher education structures.…

  20. MaizeGDB: The Maize Genetics and Genomics Database.

    Science.gov (United States)

    Harper, Lisa; Gardiner, Jack; Andorf, Carson; Lawrence, Carolyn J

    2016-01-01

    MaizeGDB is the community database for biological information about the crop plant Zea mays. Genomic, genetic, sequence, gene product, functional characterization, literature reference, and person/organization contact information are among the datatypes stored at MaizeGDB. At the project's website ( http://www.maizegdb.org ) are custom interfaces enabling researchers to browse data and to seek out specific information matching explicit search criteria. In addition, pre-compiled reports are made available for particular types of data and bulletin boards are provided to facilitate communication and coordination among members of the community of maize geneticists.

  1. The MPI Facial Expression Database — A Validated Database of Emotional and Conversational Facial Expressions

    Science.gov (United States)

    Kaulard, Kathrin; Cunningham, Douglas W.; Bülthoff, Heinrich H.; Wallraven, Christian

    2012-01-01

    The ability to communicate is one of the core aspects of human life. For this, we use not only verbal but also nonverbal signals of remarkable complexity. Among the latter, facial expressions belong to the most important information channels. Despite the large variety of facial expressions we use in daily life, research on facial expressions has so far mostly focused on the emotional aspect. Consequently, most databases of facial expressions available to the research community also include only emotional expressions, neglecting the largely unexplored aspect of conversational expressions. To fill this gap, we present the MPI facial expression database, which contains a large variety of natural emotional and conversational expressions. The database contains 55 different facial expressions performed by 19 German participants. Expressions were elicited with the help of a method-acting protocol, which guarantees both well-defined and natural facial expressions. The method-acting protocol was based on every-day scenarios, which are used to define the necessary context information for each expression. All facial expressions are available in three repetitions, in two intensities, as well as from three different camera angles. A detailed frame annotation is provided, from which a dynamic and a static version of the database have been created. In addition to describing the database in detail, we also present the results of an experiment with two conditions that serve to validate the context scenarios as well as the naturalness and recognizability of the video sequences. Our results provide clear evidence that conversational expressions can be recognized surprisingly well from visual information alone. The MPI facial expression database will enable researchers from different research fields (including the perceptual and cognitive sciences, but also affective computing, as well as computer vision) to investigate the processing of a wider range of natural facial expressions

  2. Ergonomics, Engineering, and Business: Repairing a Tricky Divorce

    DEFF Research Database (Denmark)

    Jensen, Per Langaa; Broberg, Ole; Møller, Niels

    2009-01-01

    This paper discusses how the ergonomics community can contribute to make ergonomics a strategic element in business decisions on strategy and implementation of strategy. The ergonomics community is seen as a heterogeneous entity made up of educational and research activities in universities......, ergonomists and engineers with ergonomic skills, professional ergonomics and engineering societies, and the complex of occupational health and safety regulation. This community interacts in different ways with companies and hereby influences how companies are dealing with ergonomics. The paper argues...

  3. The AMMA database

    Science.gov (United States)

    Boichard, Jean-Luc; Brissebrat, Guillaume; Cloche, Sophie; Eymard, Laurence; Fleury, Laurence; Mastrorillo, Laurence; Moulaye, Oumarou; Ramage, Karim

    2010-05-01

    The AMMA project includes aircraft, ground-based and ocean measurements, an intensive use of satellite data and diverse modelling studies. Therefore, the AMMA database aims at storing a great amount and a large variety of data, and at providing the data as rapidly and safely as possible to the AMMA research community. In order to stimulate the exchange of information and collaboration between researchers from different disciplines or using different tools, the database provides a detailed description of the products and uses standardized formats. The AMMA database contains: AMMA field campaign datasets; historical data in West Africa from 1850 (operational networks and previous scientific programs); satellite products from past and future satellites, (re-)mapped on a regular latitude/longitude grid and stored in NetCDF format (CF Convention); and model outputs from atmosphere or ocean operational (re-)analyses and forecasts, and from research simulations. The model outputs are processed in the same way as the satellite products. Before accessing the data, any user has to sign the AMMA data and publication policy. This charter only covers the use of data in the framework of scientific objectives and categorically excludes the redistribution of data to third parties and usage for commercial applications. Some collaboration between data producers and users, and the mention of the AMMA project in any publication, is also required. The AMMA database and the associated on-line tools have been fully developed and are managed by two teams in France (IPSL Database Centre, Paris and OMP, Toulouse). Users can access data from both data centres using a unique web portal. This website is composed of different modules: Registration (forms to register, read and sign the data use charter when a user visits for the first time); Data access interface (a friendly tool allowing the user to build a data extraction request by selecting various criteria like location, time, parameters...). The request can
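
    Working with the kind of CF-convention NetCDF products served by the database can be sketched with xarray, as below. The file name, variable name and coordinate names are hypothetical; real AMMA products will differ, so this only illustrates the general subsetting pattern (region, time window, resampling).

```python
import xarray as xr

# Hypothetical CF-convention NetCDF product; names are assumptions.
ds = xr.open_dataset("amma_precip_regridded.nc")

subset = ds["precip"].sel(lat=slice(5, 20),                  # West Africa box
                          lon=slice(-20, 10),
                          time=slice("2006-06-01", "2006-09-30"))
monthly_mean = subset.resample(time="1MS").mean()            # monthly means
print(monthly_mean)
```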

  4. RAACFDb: Rheumatoid arthritis ayurvedic classical formulations database.

    Science.gov (United States)

    Mohamed Thoufic Ali, A M; Agrawal, Aakash; Sajitha Lulu, S; Mohana Priya, A; Vino, S

    2017-02-02

    In the past years, the treatment of rheumatoid arthritis (RA) has undergone remarkable changes in all therapeutic modes. A current focus in clinical research is to identify new directions for better treatment options for RA. Recent ethnopharmacological investigations revealed that traditional herbal remedies are the most preferred modality of complementary and alternative medicine (CAM). However, several ayurvedic modes of treatment and formulations for RA from the Indian traditional system of medicine have not been well studied or documented. This directed us to develop an integrated database, RAACFDb (acronym: Rheumatoid Arthritis Ayurvedic Classical Formulations Database), by consolidating data from the repository of Vedic Samhita - The Ayurveda, so that the available formulation information can be retrieved easily. Literature data were gathered using several search engines and from ayurvedic practitioners for loading into the database. To represent the collected information about classical ayurvedic formulations, an integrated database was constructed and implemented on a MySQL and PHP back-end. The database describes all the ayurvedic classical formulations used for the treatment of rheumatoid arthritis, including composition, usage, plant parts used, active ingredients present in the composition and their structures. The prime objective is to identify ayurvedic formulations that have proven to be successful and highly effective in patients, with reduced side effects. The database (freely available at www.beta.vit.ac.in/raacfdb/index.html) hopefully enables easy access for clinical researchers and students to discover novel leads with reduced side effects. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  5. The STEP database through the end-users eyes--USABILITY STUDY.

    Science.gov (United States)

    Salunke, Smita; Tuleu, Catherine

    2015-08-15

    The user-designed database of Safety and Toxicity of Excipients for Paediatrics ("STEP") is created to address the shared need of drug development community to access the relevant information of excipients effortlessly. Usability testing was performed to validate if the database satisfies the need of the end-users. Evaluation framework was developed to assess the usability. The participants performed scenario based tasks and provided feedback and post-session usability ratings. Failure Mode Effect Analysis (FMEA) was performed to prioritize the problems and improvements to the STEP database design and functionalities. The study revealed several design vulnerabilities. Tasks such as limiting the results, running complex queries, location of data and registering to access the database were challenging. The three critical attributes identified to have impact on the usability of the STEP database included (1) content and presentation (2) the navigation and search features (3) potential end-users. Evaluation framework proved to be an effective method for evaluating database effectiveness and user satisfaction. This study provides strong initial support for the usability of the STEP database. Recommendations would be incorporated into the refinement of the database to improve its usability and increase user participation towards the advancement of the database. Copyright © 2015 Elsevier B.V. All rights reserved.
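
    The FMEA prioritisation step mentioned above is usually based on a risk priority number, RPN = severity × occurrence × detection. The sketch below applies that formula to a few invented usability problems and ratings; it is only an illustration of the calculation, not the study's actual scores.

```python
# Each usability problem gets severity, occurrence and detection ratings
# (1-10) and is ranked by the risk priority number RPN = S * O * D.
# Problems and ratings below are invented for illustration.
problems = [
    {"issue": "Registering to access the database", "S": 7, "O": 6, "D": 4},
    {"issue": "Limiting search results",             "S": 6, "O": 8, "D": 5},
    {"issue": "Running complex queries",             "S": 8, "O": 5, "D": 6},
]

for p in problems:
    p["RPN"] = p["S"] * p["O"] * p["D"]

for p in sorted(problems, key=lambda x: x["RPN"], reverse=True):
    print(f'{p["issue"]}: RPN = {p["RPN"]}')
```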

  6. A database on electric vehicle use in Sweden. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Fridstrand, Niklas [Lund Univ. (Sweden). Dept. of Industrial Electrical Engineering and Automation

    2000-05-01

    The Department of Industrial Electrical Engineering and Automation (IEA) at the Lund Institute of Technology (LTH), has taken responsibility for developing and maintaining a database on electric and hybrid road vehicles in Sweden. The Swedish Transport and Communications Research Board, (KFB) initiated the development of this database. Information is collected from three major cities in Sweden: Malmoe, Gothenburg and Stockholm, as well as smaller cities such as Skellefteaa and Haernoesand in northern Sweden. This final report summarises the experience gained during the development and maintenance of the database from February 1996 to December 1999. Our aim was to construct a well-functioning database for the evaluation of electric and hybrid road vehicles in Sweden. The database contains detailed information on several years' use of electric vehicles (EVs) in Sweden (for example, 220 million driving records). Two data acquisition systems were used, one less and one more complex with respect to the number of quantities logged. Unfortunately, data collection was not complete, due to malfunctioning of the more complex system, and due to human factors for the less complex system.

  7. An Object-Relational Ifc Storage Model Based on Oracle Database

    Science.gov (United States)

    Li, Hang; Liu, Hua; Liu, Yong; Wang, Yuan

    2016-06-01

    As building models become increasingly complicated, the level of collaboration across professions attracts more attention in the architecture, engineering and construction (AEC) industry. To support this change, buildingSMART developed the Industry Foundation Classes (IFC) to facilitate interoperability between software platforms. However, IFC data are currently shared in the form of text files, which has drawbacks. In this paper, considering the object-based inheritance hierarchy of IFC and the storage features of different database management systems (DBMS), we propose a novel object-relational storage model that uses an Oracle database to store IFC data. Firstly, we establish the mapping rules between data types in the IFC specification and the Oracle database. Secondly, we design the IFC database according to the relationships among IFC entities. Thirdly, we parse the IFC file and extract the IFC data. Lastly, we store the IFC data in the corresponding tables of the IFC database. In the experiments, three different building models are selected to demonstrate the effectiveness of our storage model. The comparison of experimental statistics proves that IFC data are lossless during data exchange.
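
    The "parse the IFC file and store entities in relational tables" step can be sketched as follows. IFC data are exchanged as STEP physical files whose entity lines look like "#123=IFCWALL('guid',...);". The sketch uses a regular expression and SQLite as a stand-in for Oracle and puts every entity into one generic table rather than the per-entity mapping proposed in the paper; the file name and table layout are assumptions.

```python
import re
import sqlite3

# Match STEP physical file entity lines of the form "#123=IFCWALL(...);".
entity_re = re.compile(r"^#(\d+)\s*=\s*([A-Z0-9_]+)\s*\((.*)\);\s*$")

con = sqlite3.connect("ifc_store.db")
con.execute("""CREATE TABLE IF NOT EXISTS ifc_entity (
                   step_id INTEGER, entity_type TEXT, attributes TEXT)""")

with open("building_model.ifc", encoding="utf-8") as fh:   # hypothetical file
    for line in fh:
        m = entity_re.match(line.strip())
        if m:
            con.execute("INSERT INTO ifc_entity VALUES (?, ?, ?)", m.groups())
con.commit()

count = con.execute("SELECT COUNT(*) FROM ifc_entity").fetchone()[0]
print(f"Stored {count} IFC entities")
```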

  8. Crescendo: A Protein Sequence Database Search Engine for Tandem Mass Spectra.

    Science.gov (United States)

    Wang, Jianqi; Zhang, Yajie; Yu, Yonghao

    2015-07-01

    A search engine that reliably discovers more peptides is essential to the progress of computational proteomics. We propose two new scoring functions (L- and P-scores), which aim to capture characteristics of a peptide-spectrum match (PSM) similar to those used by Sequest and Comet. Crescendo, introduced here, is a software program that implements these two scores for peptide identification. We applied Crescendo to test datasets and compared its performance with widely used search engines, including Mascot, Sequest, and Comet. The results indicate that Crescendo identifies a similar or larger number of peptides at various predefined false discovery rates (FDR). Importantly, it also provides a better separation between true and decoy PSMs, warranting the future development of a companion post-processing filtering algorithm.
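
    The L- and P-score formulas are not given in the abstract, so no attempt is made to reproduce them here. The sketch below only illustrates the standard target-decoy procedure behind phrases such as "peptides identified at predefined FDRs": rank PSMs by score and find the score cutoff at which the cumulative decoy/target ratio stays within the requested FDR. The scores are invented numbers.

```python
# Minimal sketch of target-decoy FDR estimation for ranked PSMs.
# This is the standard procedure the abstract alludes to, not Crescendo's code.
def score_threshold_at_fdr(psms, fdr_target=0.01):
    """psms: list of (score, is_decoy). Return the lowest score whose
    cumulative decoy/target ratio stays within the requested FDR."""
    psms = sorted(psms, key=lambda p: p[0], reverse=True)
    targets = decoys = 0
    threshold = None
    for score, is_decoy in psms:
        if is_decoy:
            decoys += 1
        else:
            targets += 1
        fdr = decoys / max(targets, 1)
        if fdr <= fdr_target:
            threshold = score
    return threshold

example = [(9.1, False), (8.7, False), (8.2, True), (7.9, False), (5.0, True)]
print(score_threshold_at_fdr(example, fdr_target=0.5))  # -> 7.9
```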

  9. Search Engine : an effective tool for exploring the Internet

    OpenAIRE

    Ranasinghe, W.M. Tharanga Dilruk

    2006-01-01

    The Internet has become the largest source of information. Today, millions of websites exist and this number continues to grow. Finding the right information at the right time is the challenge of the Internet age. A search engine is a searchable database that allows users to locate information on the Internet by submitting keywords. Search engines can be divided into two categories: individual search engines and meta search engines. This article discusses the features of these search engines in detail.

  10. The Engineering 4 Health Challenge - an interdisciplinary and intercultural initiative to foster student engagement in B.C. and improve health care for children in under-serviced communities.

    Science.gov (United States)

    Price, Morgan; Weber-Jahnke, Jens H

    2009-01-01

    This paper describes the Engineering 4 Health (E4H) Challenge, an interdisciplinary and intercultural initiative that, on the one hand, seeks to improve health education of children in under-serviced communities and, on the other, seeks to attract students in British Columbia to professions in engineering and health. The E4H Challenge engages high school and university students in BC to cooperatively design and develop health information and communication technology (ICT) to educate children living in under-serviced communities. The E4H Challenge works with the One Laptop Per Child (OLPC) program to integrate applications for health awareness into the school programs of communities in developing countries. Although applications developed by the E4H Challenge use the low-cost, innovative XO laptop (the "$100 laptop" developed by the OLPC foundation), the software can also be used with other inexpensive hardware.

  11. Civil Engineering: Improving the Quality of Life.

    Science.gov (United States)

    One Feather, Sandra

    2002-01-01

    American Indian civil engineers describe the educational paths that led them to their engineering careers, applications of civil engineering in reservation communities, necessary job skills, opportunities afforded by internship programs, continuing education, and the importance of early preparation in math and science. Addresses of 12 resource Web…

  12. Update History of This Database - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update History of This Database: 2017/02/27 - Arabidopsis Phenome Database English archive site is opened; Arabidopsis Phenome Database (http://jphenome.info/?page_id=95) is opened.

  13. Interactive Multi-Instrument Database of Solar Flares

    Science.gov (United States)

    Ranjan, Shubha S.; Spaulding, Ryan; Deardorff, Donald G.

    2018-01-01

    The fundamental motivation of the project is that the scientific output of solar research can be greatly enhanced by better exploitation of the existing solar/heliosphere space-data products jointly with ground-based observations. Our primary focus is on developing a specific innovative methodology based on recent advances in "big data" intelligent databases applied to the growing amount of high-spatial and multi-wavelength resolution, high-cadence data from NASA's missions and supporting ground-based observatories. Our flare database is not simply a manually searchable time-based catalog of events or list of web links pointing to data. It is a preprocessed metadata repository enabling fast search and automatic identification of all recorded flares sharing a specifiable set of characteristics, features, and parameters. The result is a new and unique database of solar flares and data search and classification tools for the Heliophysics community, enabling multi-instrument/multi-wavelength investigations of flare physics and supporting further development of flare-prediction methodologies.

  14. The Coral Trait Database, a curated database of trait information for coral species from the global oceans

    Science.gov (United States)

    Madin, Joshua S.; Anderson, Kristen D.; Andreasen, Magnus Heide; Bridge, Tom C. L.; Cairns, Stephen D.; Connolly, Sean R.; Darling, Emily S.; Diaz, Marcela; Falster, Daniel S.; Franklin, Erik C.; Gates, Ruth D.; Hoogenboom, Mia O.; Huang, Danwei; Keith, Sally A.; Kosnik, Matthew A.; Kuo, Chao-Yang; Lough, Janice M.; Lovelock, Catherine E.; Luiz, Osmar; Martinelli, Julieta; Mizerek, Toni; Pandolfi, John M.; Pochon, Xavier; Pratchett, Morgan S.; Putnam, Hollie M.; Roberts, T. Edward; Stat, Michael; Wallace, Carden C.; Widman, Elizabeth; Baird, Andrew H.

    2016-03-01

    Trait-based approaches advance ecological and evolutionary research because traits provide a strong link to an organism’s function and fitness. Trait-based research might lead to a deeper understanding of the functions of, and services provided by, ecosystems, thereby improving management, which is vital in the current era of rapid environmental change. Coral reef scientists have long collected trait data for corals; however, these are difficult to access and often under-utilized in addressing large-scale questions. We present the Coral Trait Database initiative that aims to bring together physiological, morphological, ecological, phylogenetic and biogeographic trait information into a single repository. The database houses species- and individual-level data from published field and experimental studies alongside contextual data that provide important framing for analyses. In this data descriptor, we release data for 56 traits for 1547 species, and present a collaborative platform on which other trait data are being actively federated. Our overall goal is for the Coral Trait Database to become an open-source, community-led data clearinghouse that accelerates coral reef research.

  15. The Coral Trait Database, a curated database of trait information for coral species from the global oceans.

    Science.gov (United States)

    Madin, Joshua S; Anderson, Kristen D; Andreasen, Magnus Heide; Bridge, Tom C L; Cairns, Stephen D; Connolly, Sean R; Darling, Emily S; Diaz, Marcela; Falster, Daniel S; Franklin, Erik C; Gates, Ruth D; Harmer, Aaron; Hoogenboom, Mia O; Huang, Danwei; Keith, Sally A; Kosnik, Matthew A; Kuo, Chao-Yang; Lough, Janice M; Lovelock, Catherine E; Luiz, Osmar; Martinelli, Julieta; Mizerek, Toni; Pandolfi, John M; Pochon, Xavier; Pratchett, Morgan S; Putnam, Hollie M; Roberts, T Edward; Stat, Michael; Wallace, Carden C; Widman, Elizabeth; Baird, Andrew H

    2016-03-29

    Trait-based approaches advance ecological and evolutionary research because traits provide a strong link to an organism's function and fitness. Trait-based research might lead to a deeper understanding of the functions of, and services provided by, ecosystems, thereby improving management, which is vital in the current era of rapid environmental change. Coral reef scientists have long collected trait data for corals; however, these are difficult to access and often under-utilized in addressing large-scale questions. We present the Coral Trait Database initiative that aims to bring together physiological, morphological, ecological, phylogenetic and biogeographic trait information into a single repository. The database houses species- and individual-level data from published field and experimental studies alongside contextual data that provide important framing for analyses. In this data descriptor, we release data for 56 traits for 1547 species, and present a collaborative platform on which other trait data are being actively federated. Our overall goal is for the Coral Trait Database to become an open-source, community-led data clearinghouse that accelerates coral reef research.

  16. HRGFish: A database of hypoxia responsive genes in fishes

    Science.gov (United States)

    Rashid, Iliyas; Nagpure, Naresh Sahebrao; Srivastava, Prachi; Kumar, Ravindra; Pathak, Ajey Kumar; Singh, Mahender; Kushwaha, Basdeo

    2017-02-01

    Several studies have highlighted changes in gene expression due to the hypoxia response in fishes, but a systematic organization of this information and an analytical platform for such genes have been lacking. In the present study, an attempt was made to develop a database of hypoxia responsive genes in fishes (HRGFish), integrated with analytical tools, using LAMPP technology. Genes reported to be involved in the hypoxia response in fishes were compiled through a literature survey, and the database presently covers 818 gene sequences and 35 gene types from 38 fishes. The upstream fragments (3,000 bp) covered in this database enable computation of CG dinucleotide frequencies, motif finding for the hypoxia response element, identification of CpG islands and mapping against the zebrafish reference promoter. The database also includes functional annotation of genes and provides tools for analyzing sequences and designing primers for selected gene fragments. This may be the first database on hypoxia response genes in fishes that provides a workbench to the scientific community involved in studying the evolution and ecological adaptation of fish species in relation to hypoxia.
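
    As an illustration of the kind of upstream-sequence metrics such a workbench computes, the sketch below calculates GC content and the observed/expected CpG ratio, the usual ingredients of CpG-island criteria. It is not the HRGFish implementation; the sequence and the thresholds mentioned in the comments are only illustrative.

```python
# Minimal sketch of promoter-sequence metrics of the kind HRGFish supports:
# GC content and observed/expected CpG ratio. Toy sequence, not HRGFish code.
def gc_content(seq):
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def obs_exp_cpg(seq):
    seq = seq.upper()
    cg = sum(1 for i in range(len(seq) - 1) if seq[i:i + 2] == "CG")
    c, g = seq.count("C"), seq.count("G")
    expected = (c * g) / len(seq) if c and g else 0.0
    return cg / expected if expected else 0.0

upstream = "CGCGTACGGCGCGAATTCGCGGGCCGCGTATATCGCG"
print(f"GC content:  {gc_content(upstream):.2f}")
print(f"Obs/Exp CpG: {obs_exp_cpg(upstream):.2f}")
# A common rule of thumb flags a CpG island when GC > 0.5 and Obs/Exp > 0.6
# over windows of at least 200 bp (the toy sequence here is far shorter).
```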

  17. Drinking water quality in Indigenous communities in Canada and health outcomes: a scoping review.

    Science.gov (United States)

    Bradford, Lori E A; Okpalauwaekwe, Udoka; Waldner, Cheryl L; Bharadwaj, Lalita A

    2016-01-01

    Many Indigenous communities in Canada live with high-risk drinking water systems and drinking water advisories and experience health status and water quality below that of the general population. A scoping review of research examining drinking water quality and its relationship to Indigenous health was conducted. The study was undertaken to identify the extent of the literature, summarize current reports and identify research needs. A scoping review was designed to identify peer-reviewed literature that examined challenges related to drinking water and health in Indigenous communities in Canada. Key search terms were developed and mapped on five bibliographic databases (MEDLINE/PubMED, Web of Knowledge, SciVerse Scopus, Taylor and Francis online journal and Google Scholar). Online searches for grey literature using relevant government websites were completed. Sixteen articles (of 518; 156 bibliographic search engines, 362 grey literature) met criteria for inclusion (contained keywords; publication year 2000-2015; peer-reviewed and from Canada). Studies were quantitative (8), qualitative (5) or mixed (3) and included case, cohort, cross-sectional and participatory designs. In most articles, no definition of "health" was given (14/16), and the primary health issue described was gastrointestinal illness (12/16). Challenges to the study of health and well-being with respect to drinking water in Indigenous communities included irregular funding, remote locations, ethical approval processes, small sample sizes and missing data. Research on drinking water and health outcomes in Indigenous communities in Canada is limited and occurs on an opportunistic basis. There is a need for more research funding, and inquiry to inform policy decisions for improvements of water quality and health-related outcomes in Indigenous communities. A coordinated network looking at First Nations water and health outcomes, a database to store and create access to research findings, increased

  18. Drinking water quality in Indigenous communities in Canada and health outcomes: a scoping review

    Directory of Open Access Journals (Sweden)

    Lori E. A. Bradford

    2016-07-01

    Full Text Available Background: Many Indigenous communities in Canada live with high-risk drinking water systems and drinking water advisories and experience health status and water quality below that of the general population. A scoping review of research examining drinking water quality and its relationship to Indigenous health was conducted. Objective: The study was undertaken to identify the extent of the literature, summarize current reports and identify research needs. Design: A scoping review was designed to identify peer-reviewed literature that examined challenges related to drinking water and health in Indigenous communities in Canada. Key search terms were developed and mapped on five bibliographic databases (MEDLINE/PubMED, Web of Knowledge, SciVerse Scopus, Taylor and Francis online journal and Google Scholar). Online searches for grey literature using relevant government websites were completed. Results: Sixteen articles (of 518; 156 bibliographic search engines, 362 grey literature) met criteria for inclusion (contained keywords; publication year 2000–2015; peer-reviewed and from Canada). Studies were quantitative (8), qualitative (5) or mixed (3) and included case, cohort, cross-sectional and participatory designs. In most articles, no definition of "health" was given (14/16), and the primary health issue described was gastrointestinal illness (12/16). Challenges to the study of health and well-being with respect to drinking water in Indigenous communities included irregular funding, remote locations, ethical approval processes, small sample sizes and missing data. Conclusions: Research on drinking water and health outcomes in Indigenous communities in Canada is limited and occurs on an opportunistic basis. There is a need for more research funding, and inquiry to inform policy decisions for improvements of water quality and health-related outcomes in Indigenous communities. A coordinated network looking at First Nations water and health outcomes, a

  19. Refactoring databases evolutionary database design

    CERN Document Server

    Ambler, Scott W

    2006-01-01

    Refactoring has proven its value in a wide range of development projects–helping software professionals improve system designs, maintainability, extensibility, and performance. Now, for the first time, leading agile methodologist Scott Ambler and renowned consultant Pramodkumar Sadalage introduce powerful refactoring techniques specifically designed for database systems. Ambler and Sadalage demonstrate how small changes to table structures, data, stored procedures, and triggers can significantly enhance virtually any database design–without changing semantics. You’ll learn how to evolve database schemas in step with source code–and become far more effective in projects relying on iterative, agile methodologies. This comprehensive guide and reference helps you overcome the practical obstacles to refactoring real-world databases by covering every fundamental concept underlying database refactoring. Using start-to-finish examples, the authors walk you through refactoring simple standalone databas...

  20. The SIB Swiss Institute of Bioinformatics' resources: focus on curated databases

    OpenAIRE

    Bultet, Lisandra Aguilar; Aguilar Rodriguez, Jose; Ahrens, Christian H; Ahrne, Erik Lennart; Ai, Ni; Aimo, Lucila; Akalin, Altuna; Aleksiev, Tyanko; Alocci, Davide; Altenhoff, Adrian; Alves, Isabel; Ambrosini, Giovanna; Pedone, Pascale Anderle; Angelina, Paolo; Anisimova, Maria

    2016-01-01

    The SIB Swiss Institute of Bioinformatics (www.isb-sib.ch) provides world-class bioinformatics databases, software tools, services and training to the international life science community in academia and industry. These solutions allow life scientists to turn the exponentially growing amount of data into knowledge. Here, we provide an overview of SIB's resources and competence areas, with a strong focus on curated databases and SIB's most popular and widely used resources. In particular, SIB'...

  1. Metagenomic Taxonomy-Guided Database-Searching Strategy for Improving Metaproteomic Analysis.

    Science.gov (United States)

    Xiao, Jinqiu; Tanca, Alessandro; Jia, Ben; Yang, Runqing; Wang, Bo; Zhang, Yu; Li, Jing

    2018-04-06

    Metaproteomics provides a direct measure of the functional information by investigating all proteins expressed by a microbiota. However, due to the complexity and heterogeneity of microbial communities, it is very hard to construct a sequence database suitable for a metaproteomic study. Using a public database, researchers might not be able to identify proteins from poorly characterized microbial species, while a sequencing-based metagenomic database may not provide adequate coverage for all potentially expressed protein sequences. To address this challenge, we propose a metagenomic taxonomy-guided database-search strategy (MT), in which a merged database is employed, consisting of both taxonomy-guided reference protein sequences from public databases and proteins from metagenome assembly. By applying our MT strategy to a mock microbial mixture, about two times as many peptides were detected as with the metagenomic database only. According to the evaluation of the reliability of taxonomic attribution, the rate of misassignments was comparable to that obtained using an a priori matched database. We also evaluated the MT strategy with a human gut microbial sample, and we found 1.7 times as many peptides as using a standard metagenomic database. In conclusion, our MT strategy allows the construction of databases able to provide high sensitivity and precision in peptide identification in metaproteomic studies, enabling the detection of proteins from poorly characterized species within the microbiota.
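
    As a rough sketch of the database-construction step the MT strategy describes (merging taxonomy-guided reference proteins with metagenome-assembly proteins into one search space), the code below concatenates FASTA files while skipping exact duplicate sequences. The file names and the deduplication rule are assumptions; this is not the authors' pipeline.

```python
# Minimal sketch of building a merged FASTA search database from
# taxonomy-guided reference proteins plus metagenome-assembly proteins,
# dropping exact duplicate sequences. File names are hypothetical.
def read_fasta(path):
    header, seq = None, []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                if header is not None:
                    yield header, "".join(seq)
                header, seq = line[1:], []
            elif line:
                seq.append(line)
    if header is not None:
        yield header, "".join(seq)

def merge_databases(out_path, *fasta_paths):
    seen, kept = set(), 0
    with open(out_path, "w") as out:
        for path in fasta_paths:
            for header, seq in read_fasta(path):
                if seq in seen:          # skip exact duplicates across sources
                    continue
                seen.add(seq)
                out.write(f">{header}\n{seq}\n")
                kept += 1
    return kept

# Example call (hypothetical file names):
# merge_databases("merged_MT.fasta",
#                 "taxonomy_guided_reference.fasta",
#                 "metagenome_assembly_proteins.fasta")
```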

  2. Update History of This Database - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update History of This Database: 2017/03/13 - SKIP Stemcell Database English archive site is opened. 2013/03/29 - SKIP Stemcell Database (https://www.skip.med.keio.ac.jp/SKIPSearch/top?lang=en) is opened.

  3. Engineering Values Into Genetic Engineering: A Proposed Analytic Framework for Scientific Social Responsibility.

    Science.gov (United States)

    Sankar, Pamela L; Cho, Mildred K

    2015-01-01

    Recent experiments have been used to "edit" genomes of various plant, animal and other species, including humans, with unprecedented precision. Furthermore, inserting the Cas9 endonuclease gene, together with a gene encoding the desired guide RNA, into an organism adjacent to an altered gene could create a "gene drive" that could spread a trait through an entire population of organisms. These experiments represent advances along a spectrum of technological abilities that genetic engineers have been working on since the advent of recombinant DNA techniques. The scientific and bioethics communities have built substantial literatures about the ethical and policy implications of genetic engineering, especially in the age of bioterrorism. However, recent CRISPR/Cas experiments have triggered a rehashing of previous policy discussions, suggesting that the scientific community requires guidance on how to think about social responsibility. We propose a framework to enable analysis of social responsibility, using two examples of genetic engineering experiments.

  4. Carbon dioxide (CO 2 ) utilizing strain database | Saini | African ...

    African Journals Online (AJOL)

    Culling of excess carbon dioxide from our environment is one of the major challenges to scientific communities. Many physical, chemical and biological methods have been practiced to overcome this problem. The biological means of CO2 fixation using various microorganisms is gaining importance because database of ...

  5. Gas Turbine Engine Control Design Using Fuzzy Logic and Neural Networks

    Directory of Open Access Journals (Sweden)

    M. Bazazzadeh

    2011-01-01

    Full Text Available This paper presents a successful approach to designing a Fuzzy Logic Controller (FLC) for a specific jet engine. First, a suitable mathematical model for the jet engine is presented with the aid of SIMULINK. Then, by applying different reasonable fuel flow functions to the engine model, some important engine-transient operation parameters (such as thrust, compressor surge margin and turbine inlet temperature) are obtained. These parameters provide a valuable database, which is used to train a neural network. In the second step, a feedforward multilayer perceptron neural network is designed and trained on this database, and a number of different reasonable fuel flow functions for various engine acceleration operations are determined. These functions are used to define the desired fuzzy fuel functions. Indeed, the neural networks serve as an effective method to define the optimum fuzzy fuel functions. In the next step, we propose an FLC using the engine simulation model and the neural network results. The proposed control scheme is verified by computer simulation using the designed engine model. The simulation results of the engine model with the FLC illustrate that the proposed controller achieves the desired performance and stability.
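
    As an illustration of the second step described above (training a feedforward MLP on the simulated engine database), the sketch below fits a small scikit-learn MLPRegressor to synthetic placeholder data. The network size, the input/output variables and the toy relationships are assumptions, not the authors' SIMULINK-derived data or architecture.

```python
# Minimal sketch of fitting a feedforward MLP to an engine-transient database
# (fuel-flow profile -> transient response). Data are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Features: samples of a fuel-flow schedule; targets: thrust and surge-margin
# proxies. The relationships below are invented purely for illustration.
fuel_profiles = rng.uniform(0.2, 1.0, size=(200, 5))
thrust = fuel_profiles.sum(axis=1, keepdims=True)
surge_margin = 1.0 - fuel_profiles.max(axis=1, keepdims=True) * 0.5
targets = np.hstack([thrust, surge_margin])

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(fuel_profiles, targets)

candidate = rng.uniform(0.2, 1.0, size=(1, 5))
print(model.predict(candidate))  # predicted [thrust, surge margin]
```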

  6. The Mars Climate Database (MCD version 5.3)

    Science.gov (United States)

    Millour, Ehouarn; Forget, Francois; Spiga, Aymeric; Vals, Margaux; Zakharov, Vladimir; Navarro, Thomas; Montabone, Luca; Lefevre, Franck; Montmessin, Franck; Chaufray, Jean-Yves; Lopez-Valverde, Miguel; Gonzalez-Galindo, Francisco; Lewis, Stephen; Read, Peter; Desjean, Marie-Christine; MCD/GCM Development Team

    2017-04-01

    Our Global Circulation Model (GCM) simulates the atmospheric environment of Mars. It is developed at LMD (Laboratoire de Meteorologie Dynamique, Paris, France) in close collaboration with several teams in Europe (LATMOS, France; the University of Oxford; The Open University; the Instituto de Astrofisica de Andalucia), and with the support of ESA (European Space Agency) and CNES (French Space Agency). GCM outputs are compiled to build the Mars Climate Database, a freely available tool useful for the scientific and engineering communities. The Mars Climate Database (MCD) has over the years been distributed to more than 300 teams around the world. The latest series of reference simulations has been compiled in a new version (v5.3) of the MCD, released in the first half of 2017. To summarize, MCD v5.3 provides: - Climatologies over a series of synthetic dust scenarios: standard (climatology) year, cold (i.e. low dust), warm (i.e. dusty atmosphere) and dust storm, all topped by various cases of extreme UV solar input (low, mean or maximum). These scenarios have been derived from a home-made, instrument-derived (TES, THEMIS, MCS, MERs) dust climatology of the last 8 Martian years. The MCD also provides simulation outputs (MY24-31) representative of these actual years. - Mean values and statistics of the main meteorological variables (atmospheric temperature, density, pressure and winds), as well as surface pressure and temperature, CO2 ice cover, thermal and solar radiative fluxes, dust column opacity and mixing ratio, [H2O] vapor and ice columns, and concentrations of many species: [CO], [O2], [O], [N2], [H2], [O3], ... - A high resolution mode which combines high resolution (32 pixels/degree) MOLA topography records and Viking Lander 1 pressure records with raw lower resolution GCM results to yield, within the restrictions of the procedure, high resolution values of atmospheric variables. - The possibility to reconstruct realistic conditions by combining the provided climatology with
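
    The high-resolution mode mentioned above motivates the following illustrative sketch: adjusting a coarse-grid surface pressure to a high-resolution MOLA altitude with a simple isothermal scale-height correction. This conveys only the general idea, under assumed values for temperature, gas constant and gravity, and is not the MCD's actual procedure.

```python
# Illustrative sketch (not the MCD code) of adjusting a coarse-grid surface
# pressure to high-resolution MOLA altitude via a scale-height correction,
# assuming an isothermal CO2 atmosphere.
import math

def high_res_pressure(p_coarse_pa, z_coarse_m, z_highres_m, temp_k=210.0):
    """Rescale surface pressure from coarse-grid altitude to MOLA altitude."""
    r_co2 = 189.0   # J kg^-1 K^-1, assumed specific gas constant for CO2
    g_mars = 3.71   # m s^-2
    scale_height = r_co2 * temp_k / g_mars
    return p_coarse_pa * math.exp(-(z_highres_m - z_coarse_m) / scale_height)

# Example: coarse GCM cell at -1500 m with 700 Pa, MOLA point at -3900 m.
print(round(high_res_pressure(700.0, -1500.0, -3900.0), 1))
```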

  7. Model Driven Engineering

    Science.gov (United States)

    Gaševic, Dragan; Djuric, Dragan; Devedžic, Vladan

    A relevant initiative from the software engineering community called Model Driven Engineering (MDE) is being developed in parallel with the Semantic Web (Mellor et al. 2003a). The MDE approach to software development suggests that one should first develop a model of the system under study, which is then transformed into the real thing (i.e., an executable software entity). The most important research initiative in this area is the Model Driven Architecture (MDA), which is being developed under the umbrella of the Object Management Group (OMG). This chapter describes the basic concepts of this software engineering effort.

  8. Development and implementation of an institutional repository within a Science, Engineering and Technology (SET) environment

    CSIR Research Space (South Africa)

    Van der Merwe, Adèle

    2008-10-01

    Full Text Available -based searches. The scholarly federated search engine of Google (http://scholar.google.com) has been used extensively but not exclusively. Subscription databases such as ISI’s Web of Knowledge were also used. An analysis of the existing proprietary database... internal controls to prevent unauthorized changes. • Registration of the IR with search engines and service providers such as Google, OAIster and DOAR demands that the IR manager keep abreast of developments in terms of suitable search engines...

  9. M4FT-16LL080302052-Update to Thermodynamic Database Development and Sorption Database Integration

    Energy Technology Data Exchange (ETDEWEB)

    Zavarin, Mavrik [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Glenn T. Seaborg Inst.. Physical and Life Sciences; Wolery, T. J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Akima Infrastructure Services, LLC; Atkins-Duffin, C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Global Security

    2016-08-16

    This progress report (Level 4 Milestone Number M4FT-16LL080302052) summarizes research conducted at Lawrence Livermore National Laboratory (LLNL) within the Argillite Disposal R&D Work Package Number FT-16LL08030205. The focus of this research is the thermodynamic modeling of Engineered Barrier System (EBS) materials and properties and development of thermodynamic databases and models to evaluate the stability of EBS materials and their interactions with fluids at various physico-chemical conditions relevant to subsurface repository environments. The development and implementation of equilibrium thermodynamic models are intended to describe chemical and physical processes such as solubility, sorption, and diffusion.

  10. Database design and database administration for a kindergarten

    OpenAIRE

    Vítek, Daniel

    2009-01-01

    The bachelor thesis deals with the creation of a database design for a standard kindergarten, the installation of the designed database in the database system Oracle Database 10g Express Edition, and a demonstration of administration tasks in this database system. The design was verified by means of a purpose-built access application.

  11. OCL2Trigger: Deriving active mechanisms for relational databases using Model-Driven Architecture

    OpenAIRE

    Al-Jumaily, Harith T.; Cuadra, Dolores; Martínez, Paloma

    2008-01-01

    16 pages, 10 figures.-- Issue title: "Best papers from the 2007 Australian Software Engineering Conference (ASWEC 2007), Melbourne, Australia, April 10-13, 2007, Australian Software Engineering Conference 2007". Transforming integrity constraints into active rules or triggers for verifying database consistency produces a serious and complex problem related to real time behaviour that must be considered for any implementation. Our main contribution to this work is to provide a complete appr...

  12. Writing-to-learn in undergraduate science education: a community-based, conceptually driven approach.

    Science.gov (United States)

    Reynolds, Julie A; Thaiss, Christopher; Katkin, Wendy; Thompson, Robert J

    2012-01-01

    Despite substantial evidence that writing can be an effective tool to promote student learning and engagement, writing-to-learn (WTL) practices are still not widely implemented in science, technology, engineering, and mathematics (STEM) disciplines, particularly at research universities. Two major deterrents to progress are the lack of a community of science faculty committed to undertaking and applying the necessary pedagogical research, and the absence of a conceptual framework to systematically guide study designs and integrate findings. To address these issues, we undertook an initiative, supported by the National Science Foundation and sponsored by the Reinvention Center, to build a community of WTL/STEM educators who would undertake a heuristic review of the literature and formulate a conceptual framework. In addition to generating a searchable database of empirically validated and promising WTL practices, our work lays the foundation for multi-university empirical studies of the effectiveness of WTL practices in advancing student learning and engagement.

  13. The Eruption Forecasting Information System (EFIS) database project

    Science.gov (United States)

    Ogburn, Sarah; Harpel, Chris; Pesicek, Jeremy; Wellik, Jay; Pallister, John; Wright, Heather

    2016-04-01

    The Eruption Forecasting Information System (EFIS) project is a new initiative of the U.S. Geological Survey-USAID Volcano Disaster Assistance Program (VDAP) with the goal of enhancing VDAP's ability to forecast the outcome of volcanic unrest. The EFIS project seeks to: (1) move away from reliance on collective memory toward probability estimation using databases; (2) create databases useful for pattern recognition and for answering common VDAP questions, e.g. how commonly does unrest lead to eruption? how commonly do phreatic eruptions portend magmatic eruptions, and what is the range of antecedence times?; (3) create generic probabilistic event trees using global data for different volcano 'types'; (4) create background, volcano-specific, probabilistic event trees for frequently active or particularly hazardous volcanoes in advance of a crisis; and (5) quantify and communicate uncertainty in probabilities. A major component of the project is the global EFIS relational database, which contains multiple modules designed to aid in the construction of probabilistic event trees and to answer common questions that arise during volcanic crises. The primary module contains chronologies of volcanic unrest, including the timing of phreatic eruptions, column heights, eruptive products, etc., and will be initially populated using chronicles of eruptive activity from Alaskan volcanic eruptions in the GeoDIVA database (Cameron et al. 2013). This database module allows us to query across other global databases such as the WOVOdat database of monitoring data and the Smithsonian Institution's Global Volcanism Program (GVP) database of eruptive histories and volcano information. The EFIS database is in the early stages of development and population; thus, this contribution also serves as a request for feedback from the community.
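
    As an illustration of the kind of question the EFIS relational database is meant to answer ("how commonly does unrest lead to eruption?"), the sketch below runs such a frequency query against a hypothetical, heavily simplified unrest-episode table with invented data; the schema is not the actual EFIS design.

```python
# Minimal sketch of an unrest-to-eruption frequency query against a
# hypothetical, simplified schema. All data are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE unrest_episode (
    volcano TEXT, start_year INTEGER, led_to_eruption INTEGER)""")
conn.executemany(
    "INSERT INTO unrest_episode VALUES (?, ?, ?)",
    [("Volcano A", 1995, 1), ("Volcano A", 2004, 0),
     ("Volcano B", 2001, 1), ("Volcano B", 2010, 1),
     ("Volcano C", 2013, 0)],
)
rate, count = conn.execute(
    "SELECT AVG(led_to_eruption), COUNT(*) FROM unrest_episode"
).fetchone()
print(f"P(eruption | unrest) ~ {rate:.2f} from {count} episodes")
```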

  14. High Energy Nuclear Database: A Testbed for Nuclear Data Information Technology

    International Nuclear Information System (INIS)

    Brown, D A; Vogt, R; Beck, B; Pruet, J

    2007-01-01

    We describe the development of an on-line high-energy heavy-ion experimental database. When completed, the database will be searchable and cross-indexed with relevant publications, including published detector descriptions. While this effort is relatively new, it will eventually contain all published data from older heavy-ion programs as well as published data from current and future facilities. These data include all measured observables in proton-proton, proton-nucleus and nucleus-nucleus collisions. Once in general use, this database will have tremendous scientific payoff as it makes systematic studies easier and allows simpler benchmarking of theoretical models for a broad range of experiments. Furthermore, there is a growing need for compilations of high-energy nuclear data for applications including stockpile stewardship, technology development for inertial confinement fusion, target and source development for upcoming facilities such as the International Linear Collider and homeland security. This database is part of a larger proposal that includes the production of periodic data evaluations and topical reviews. These reviews would provide an alternative and impartial mechanism to resolve discrepancies between published data from rival experiments and between theory and experiment. Since this database will be a community resource, it requires the high-energy nuclear physics community's financial and manpower support. This project serves as a testbed for the further development of an object-oriented nuclear data format and database system. By using "off-the-shelf" software tools and techniques, the system is simple, robust, and extensible. Eventually we envision a "Grand Unified Nuclear Format" encapsulating data types used in the ENSDF, ENDF/B, EXFOR, NSR and other formats, including processed data formats.

  15. ECOS: a configurable, multi-terabyte database supporting engineering and technical computing at Sizewell B

    International Nuclear Information System (INIS)

    Binns, F.; Fish, A.

    1992-01-01

    One of the three main classes of computing support systems is concerned with the technical and engineering aspects of Sizewell-B power station. These aspects are primarily concerned with engineering means to optimise plant use to maximise power output by increasing availability and efficiency. At Sizewell-B the Engineering Computer system (ECOS) will provide the necessary support facilities, and is described. ECOS is being used by the station commissioning team and for monitoring the state of some plant already in service. (Author)

  16. Database Description - Open TG-GATEs Pathological Image Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description - General information: Database name: Open TG-GATEs Pathological Image Database; Alternative name: -; DOI: 10.18908/lsdba.nbdc00954-0...; Contact: ...Biomedical Innovation, 7-6-8 Saito-asagi, Ibaraki-city, Osaka 567-0085, Japan, TEL: 81-72-641-9826; Database classification: Toxicogenomics Database; Organism: Rattus norvegi...

  17. An ontology-based search engine for protein-protein interactions.

    Science.gov (United States)

    Park, Byungkyu; Han, Kyungsook

    2010-01-18

    Keyword matching or ID matching is the most common searching method in a large database of protein-protein interactions. They are purely syntactic methods, and retrieve the records in the database that contain a keyword or ID specified in a query. Such syntactic search methods often retrieve too few search results or no results despite many potential matches present in the database. We have developed a new method for representing protein-protein interactions and the Gene Ontology (GO) using modified Gödel numbers. This representation is hidden from users but enables a search engine using the representation to efficiently search protein-protein interactions in a biologically meaningful way. Given a query protein with optional search conditions expressed in one or more GO terms, the search engine finds all the interaction partners of the query protein by unique prime factorization of the modified Gödel numbers representing the query protein and the search conditions. Representing the biological relations of proteins and their GO annotations by modified Gödel numbers makes a search engine efficiently find all protein-protein interactions by prime factorization of the numbers. Keyword matching or ID matching search methods often miss the interactions involving a protein that has no explicit annotations matching the search condition, but our search engine retrieves such interactions as well if they satisfy the search condition with a more specific term in the ontology.
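
    The sketch below illustrates the prime-encoding idea in its simplest form: give each GO term a distinct prime, encode a protein's annotation set as the product of those primes, and turn "satisfies the search condition" into a divisibility test. The authors' modified Gödel numbers additionally handle the GO hierarchy and the interaction records themselves; the term set, protein annotations and prime assignment here are toy assumptions.

```python
# Minimal sketch of the prime-encoding idea: each GO term maps to a distinct
# prime, a protein's annotation set becomes the product of its primes, and a
# search condition is checked by divisibility. Toy data; this is not the
# authors' exact "modified Goedel number" construction.
go_terms = ["GO:0005634", "GO:0003677", "GO:0006355"]
term_prime = dict(zip(go_terms, [2, 3, 5]))

def encode(annotations):
    code = 1
    for term in annotations:
        code *= term_prime[term]
    return code

proteins = {
    "P1": encode(["GO:0005634", "GO:0003677"]),   # 2 * 3 = 6
    "P2": encode(["GO:0006355"]),                 # 5
    "P3": encode(["GO:0005634", "GO:0006355"]),   # 2 * 5 = 10
}

def search(condition_terms):
    """Proteins whose encoded annotations are divisible by the condition."""
    condition = encode(condition_terms)
    return [name for name, code in proteins.items() if code % condition == 0]

print(search(["GO:0005634"]))   # -> ['P1', 'P3']
```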

  18. Invention through bricolage: epistemic engineering in scientific communities

    Directory of Open Access Journals (Sweden)

    Alexander James Gillett

    2018-03-01

    Full Text Available It is widely recognised that knowledge accumulation is an important aspect of scientific communities. In this essay, drawing on a range of material from theoretical biology and behavioural science, I discuss a particular aspect of the intergenerational nature of human communities – "virtual collaboration" (Tomasello 1999) – and how it can lead to epistemic progress without any explicit intentional creativity (Henrich 2016). My aim in this paper is to make this work relevant to theorists working on the social structures of science so that these processes can be utilised and optimised in scientific communities.

  19. Application of ABWR construction database to nuclear power plant project

    International Nuclear Information System (INIS)

    Takashima, Atsushi; Katsube, Yasuhiko

    1999-01-01

    Tokyo Electric Power Company (TEPCO) successfully completed the construction of Kashiwazaki-Kariwa Nuclear Power Station Units No. 6 and No. 7 (K-6/7), the first advanced boiling water reactors (ABWR) in the world. K-6 and K-7 started commercial operation in November 1996 and July 1997, respectively. We consider the ABWR a standard BWR in the world as well as in Japan because the ABWR is highly regarded. However, because the intervals between our nuclear power plant construction projects are becoming longer, our engineering capability for plant construction is at risk of declining. Hence it is necessary for us to maintain this engineering capability. In addition, we are planning wider application of separate purchase orders for further cost reduction. There is also an expectation that we will contribute to ABWR plant construction overseas. Facing these circumstances, we have developed a construction database based on our ABWR construction experience. As the first step in developing the database, we analyzed our own activities in the previous ABWR construction. Through this analysis, we could define the activity units of which the project consists. As the second step, we clarified the data handled in each activity unit and the interfaces among them. By taking these steps, we could develop our database efficiently. (author)

  20. OxDBase: a database of oxygenases involved in biodegradation

    Directory of Open Access Journals (Sweden)

    Raghava Gajendra PS

    2009-04-01

    Full Text Available Abstract Background Oxygenases belong to the oxidoreductive group of enzymes (E.C. Class 1), which oxidize substrates by transferring oxygen from molecular oxygen (O2) and utilize FAD/NADH/NADPH as the co-substrate. Oxygenases can further be grouped into two categories, i.e. monooxygenases and dioxygenases, on the basis of the number of oxygen atoms used for oxidation. They play a key role in the metabolism of organic compounds by increasing their reactivity or water solubility or bringing about cleavage of the aromatic ring. Findings We compiled a database of biodegradative oxygenases (OxDBase) which provides a compilation of oxygenase data, as sourced from the primary literature, in the form of a web-accessible database. There are two separate search engines for searching the database, i.e. a monooxygenase and a dioxygenase database respectively. Each enzyme entry contains its common name and synonym, the reaction in which the enzyme is involved, family and subfamily, structure and gene links and literature citations. The entries are also linked to several external databases including BRENDA, KEGG, ENZYME and UM-BBD, providing wide background information. At present the database contains information on over 235 oxygenases including both dioxygenases and monooxygenases. This database is freely available online at http://www.imtech.res.in/raghava/oxdbase/. Conclusion OxDBase is the first database that is dedicated only to oxygenases and provides comprehensive information about them. Due to the importance of oxygenases in the chemical synthesis of drug intermediates and the oxidation of xenobiotic compounds, the OxDBase database would be a very useful tool in the field of synthetic chemistry as well as bioremediation.

  1. Model-Based Systems Engineering in Concurrent Engineering Centers

    Science.gov (United States)

    Iwata, Curtis; Infeld, Samantha; Bracken, Jennifer Medlin; McGuire, Melissa; McQuirk, Christina; Kisdi, Aron; Murphy, Jonathan; Cole, Bjorn; Zarifian, Pezhman

    2015-01-01

    Concurrent Engineering Centers (CECs) are specialized facilities with a goal of generating and maturing engineering designs by enabling rapid design iterations. This is accomplished by co-locating a team of experts (either physically or virtually) in a room with a narrow design goal and a limited timeline of a week or less. The systems engineer uses a model of the system to capture the relevant interfaces and manage the overall architecture. A single model that integrates other design information and modeling allows the entire team to visualize the concurrent activity and identify conflicts more efficiently, potentially resulting in a systems model that will continue to be used throughout the project lifecycle. Performing systems engineering using such a system model is the definition of model-based systems engineering (MBSE); therefore, CECs evolving their approach to incorporate advances in MBSE are more successful in reducing time and cost needed to meet study goals. This paper surveys space mission CECs that are in the middle of this evolution, and the authors share their experiences in order to promote discussion within the community.

  2. An unlikely suitor: Industrial Engineering in health promotion

    Directory of Open Access Journals (Sweden)

    Hattingh, T. S.

    2013-05-01

    Full Text Available Primary healthcare forms the foundation for transforming healthcare in South Africa. The primary healthcare system is based on five pillars, one of them being health promotion. The principles of health promotion advocate that promoting health and wellness within communities will reduce the burden of disease at both primary and higher levels of the healthcare system. The challenge in South Africa is that the factors affecting communities often inhibit their ability to control their health. In addition, the health promotion function within clinics is under-resourced: each health promoter serves impoverished communities of up to 50,000 people. This study aims to identify how industrial engineering principles can be applied to assess and improve the impact of health promotion on communities, and ultimately on the healthcare system as a whole. An industrial engineering approach was used to analyse five clinics within the Ekurhuleni Municipality in Gauteng. The results show a distinct lack of consistency between clinics. Common issues include a lack of standard processes, structures, measures, resources, and training to support health promotion. The problems identified are commonly analysed and addressed by industrial engineering in organisations, and industrial engineering could be a useful method for evaluating and improving the impact of health promotion on communities. Recommendations for improvement and further work were made based on the findings.

  3. The Development of a Combined Search for a Heterogeneous Chemistry Database

    Directory of Open Access Journals (Sweden)

    Lulu Jiang

    2015-05-01

    Full Text Available A combined search, which joins a slow molecular structure search with a fast compound property search, yields more accurate results and has been applied in several chemistry databases. However, the difference in search speeds and the combining of the two separate result sets are two major challenges. In this paper, two search strategies, synchronous search and asynchronous search, are proposed to solve these problems in the heterogeneous structure database and the property database found in ChemDB, a chemistry database owned by the Institute of Process Engineering, CAS. Their advantages and disadvantages under different conditions are discussed in detail. Furthermore, we applied these two searches to ChemDB and used them to screen for potential molecules that can work as CO2 absorbents. The results reveal that this combined search discovers reasonable target molecules within an acceptable time frame.
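
    As an illustration of the asynchronous variant described above, the sketch below launches a fast property search and a slow structure search concurrently and intersects the hits once both finish. The two search functions are stand-ins with invented results, not ChemDB's actual API.

```python
# Minimal sketch of an asynchronous combined search: run a fast property
# search and a slow structure search concurrently, then intersect the hits.
import time
from concurrent.futures import ThreadPoolExecutor

def property_search(max_boiling_point):       # fast, e.g. an indexed SQL query
    time.sleep(0.1)
    return {"mol-1", "mol-2", "mol-3"}

def structure_search(smarts_pattern):         # slow, e.g. substructure matching
    time.sleep(1.0)
    return {"mol-2", "mol-3", "mol-7"}

def combined_search(max_bp, smarts):
    with ThreadPoolExecutor(max_workers=2) as pool:
        prop_future = pool.submit(property_search, max_bp)
        struct_future = pool.submit(structure_search, smarts)
        # The property hits arrive first and could already be streamed to the
        # user; the final answer is the intersection once both searches finish.
        return prop_future.result() & struct_future.result()

print(combined_search(400, "[NX3;H2]"))       # -> mol-2 and mol-3
```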

  4. Integrated spent nuclear fuel database system

    International Nuclear Information System (INIS)

    Henline, S.P.; Klingler, K.G.; Schierman, B.H.

    1994-01-01

    The Distributed Information Systems software Unit at the Idaho National Engineering Laboratory has designed and developed an Integrated Spent Nuclear Fuel Database System (ISNFDS), which maintains a computerized inventory of all US Department of Energy (DOE) spent nuclear fuel (SNF). Commercial SNF is not included in the ISNFDS unless it is owned or stored by DOE. The ISNFDS is an integrated, single data source containing accurate, traceable, and consistent data and provides extensive data for each fuel, extensive facility data for every facility, and numerous data reports and queries

  5. Uniform standards for genome databases in forest and fruit trees

    Science.gov (United States)

    TreeGenes and tfGDR serve the international forestry and fruit tree genomics research communities, respectively. These databases hold similar sequence data and provide resources for the submission and recovery of this information in order to enable comparative genomics research. Large-scale genotype...

  6. VAMDC - The Virtual Atomic and Molecular Data Centre: A New Era in Database Collaboration

    International Nuclear Information System (INIS)

    Mason, N.J.

    2011-01-01

    Atomic and molecular data (A and M) are of critical importance in developing models of radiation chemistry including track structures. Currently these vital and fundamental A and M data resources are highly fragmented and only available through a variety of often poorly documented interfaces. The Virtual Atomic and Molecular Data Centre (VAMDC) is an EU funded e-infrastructure (www.vamdc.eu) that aims to provide the scientific community with access to a comprehensive, federated set of Atomic and Molecular (A and M) data. These structures have been created by initiatives such as the Euro-VO (http://www.euro-vo.org) and EGEE (Enabling Grids for E-sciencE, (http://www.eu-egee.org/). VAMDC will be built upon existing A and M databases. It has the specific aim of creating an infrastructure that on the one hand can directly extract data from the existing depositories while on the other hand is sufficiently flexible to be tuned to the needs of a wide variety of users from academic, governmental, industrial communities or even the general public. Central to VAMDC is the task of overcoming the current fragmentation of the A and M database community. VAMDC will alleviate this by: - developing the largest and most comprehensive atomic and molecular e-infrastructure to be shared, fed and expanded by A and M scientists; - providing a major distributed infrastructure which can be accessed, referenced and exploited by the wider research community. In fulfilling these aims, the VAMDC project will organise a series of Networking Activities (NAs). NAs are specifically aimed at: Engaging data providers; Coordinating activities among existing database providers; Ascertaining and responding to the needs of different user communities; Providing training and awareness of the VAMDC across the international A and M community and other use communities such as the radiation chemistry community. In this talk I will therefore outline the aims, methodology and mechanisms of the VAMDC project

  7. Using communities that care for community child maltreatment prevention.

    Science.gov (United States)

    Salazar, Amy M; Haggerty, Kevin P; de Haan, Benjamin; Catalano, Richard F; Vann, Terri; Vinson, Jean; Lansing, Michaele

    2016-03-01

    The prevention of mental, emotional, and behavioral (MEB) disorders among children and adolescents is a national priority. One mode of implementing community-wide MEB prevention efforts is through evidence-based community mobilization approaches such as Communities That Care (CTC). This article provides an overview of the CTC framework and discusses the adaptation process of CTC to prevent development of MEBs through preventing child abuse and neglect and bolstering child well-being in children aged 0 to 10. Adaptations include those to the intervention itself as well as those to the evaluation approach. Preliminary findings from the Keeping Families Together pilot study of this evolving approach suggest that the implementation was manageable for sites, and community board functioning and community adoption of a science-based approach to prevention in pilot sites looks promising. Implications and next steps are outlined. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  8. Engineering education research: Impacts of an international network of female engineers on the persistence of Liberian undergraduate women studying engineering

    Science.gov (United States)

    Rimer, Sara; Reddivari, Sahithya; Cotel, Aline

    2015-11-01

    As international efforts to educate and empower women continue to rise, engineering educators are in a unique position to be a part of these efforts by encouraging and supporting women across the world at the university level through STEM education and outreach. For the past two years, the University of Michigan has been part of a grassroots effort to encourage and support the persistence of female engineering students at the University of Liberia. This effort led to the implementation of a leadership camp this past August for Liberian engineering undergraduate women, meant (i) to empower engineering students with the skills, support, and inspiration necessary to become successful and well-rounded engineering professionals in a global engineering market; and (ii) to strengthen the community of Liberian female engineers by building cross-cultural partnerships among students, resulting in an international network of women engineers. This session will present qualitative research findings on the impact of this grassroots effort on Liberian female students' persistence in engineering, and the future directions of this work.

  9. PAGES-Powell North America 2k database

    Science.gov (United States)

    McKay, N.

    2014-12-01

    Syntheses of paleoclimate data in North America are essential for understanding long-term spatiotemporal variability in climate and for properly assessing risk on decadal and longer timescales. Existing reconstructions of the past 2,000 years rely almost exclusively on tree-ring records, which can underestimate low-frequency variability and rarely extend beyond the last millennium. Meanwhile, many records from the full spectrum of paleoclimate archives are available and hold the potential of enhancing our understanding of past climate across North America over the past 2000 years. The second phase of the Past Global Changes (PAGES) North America 2k project began in 2014, with a primary goal of assembling these disparate paleoclimate records into a unified database. This effort is currently supported by the USGS Powell Center together with PAGES. Its success requires grassroots support from the community of researchers developing and interpreting paleoclimatic evidence relevant to the past 2000 years. Most likely, fewer than half of the published records appropriate for this database are publicly archived, and far fewer include the data needed to quantify geochronologic uncertainty, or to concisely describe how best to interpret the data in context of a large-scale paleoclimatic synthesis. The current version of the database includes records that (1) have been published in a peer-reviewed journal (including evidence of the record's relationship to climate), (2) cover a substantial portion of the past 2000 yr (>300 yr for annual records, >500 yr for lower frequency records) at relatively high resolution (<50 yr/observation), and (3) have reasonably small and quantifiable age uncertainty. Presently, the database includes records from boreholes, ice cores, lake and marine sediments, speleothems, and tree rings. This poster presentation will display the site locations and basic metadata of the records currently in the database. We invite anyone with interest in

  10. CORE: a phylogenetically-curated 16S rDNA database of the core oral microbiome.

    Directory of Open Access Journals (Sweden)

    Ann L Griffen

    2011-04-01

    Full Text Available Comparing bacterial 16S rDNA sequences to GenBank and other large public databases via BLAST often provides results of little use for identification and taxonomic assignment of the organisms of interest. The human microbiome, and in particular the oral microbiome, includes many taxa, and accurate identification of sequence data is essential for studies of these communities. For this purpose, a phylogenetically curated 16S rDNA database of the core oral microbiome, CORE, was developed. The goal was to include a comprehensive and minimally redundant representation of the bacteria that regularly reside in the human oral cavity with computationally robust classification at the level of species and genus. Clades of cultivated and uncultivated taxa were formed based on sequence analyses using multiple criteria, including maximum-likelihood-based topology and bootstrap support, genetic distance, and previous naming. A number of classification inconsistencies for previously named species, especially at the level of genus, were resolved. The performance of the CORE database for identifying clinical sequences was compared to that of three publicly available databases, GenBank nr/nt, RDP and HOMD, using a set of sequencing reads that had not been used in creation of the database. CORE offered improved performance compared to other public databases for identification of human oral bacterial 16S sequences by a number of criteria. In addition, the CORE database and phylogenetic tree provide a framework for measures of community divergence, and the focused size of the database offers advantages of efficiency for BLAST searching of large datasets. The CORE database is available as a searchable interface and for download at http://microbiome.osu.edu.

  11. A Community Patient Demographic System

    OpenAIRE

    Gabler, James M.; Simborg, Donald W.

    1985-01-01

    A Community Patient Demographic System is described. Its purpose is to link patient identification, demographic and insurance information among multiple organizations in a community or among multiple registration systems within the same organization. This function requires that there be a competent patient identification methodology and clear definition of local responsibilities for number assignment and database editing.

  12. The BioMart community portal: an innovative alternative to large, centralized data repositories

    Science.gov (United States)

    The BioMart Community Portal (www.biomart.org) is a community-driven effort to provide a unified interface to biomedical databases that are distributed worldwide. The portal provides access to numerous database projects supported by 30 scientific organizations. It includes over 800 different biologi...

  13. A Model-driven Role-based Access Control for SQL Databases

    Directory of Open Access Journals (Sweden)

    Raimundas Matulevičius

    2015-07-01

    Full Text Available Nowadays security has become an important aspect in information systems engineering. A mainstream method for information system security is Role-based Access Control (RBAC), which restricts system access to authorised users. While the benefits of RBAC are widely acknowledged, the implementation and administration of RBAC policies remains a human-intensive activity, typically postponed until the implementation and maintenance phases of system development. This deferred security engineering approach makes it difficult for security requirements to be accurately captured and for the system’s implementation to be kept aligned with these requirements as the system evolves. In this paper we propose a model-driven approach to manage SQL database access under the RBAC paradigm. The starting point of the approach is an RBAC model captured in SecureUML. This model is automatically translated to Oracle Database views and instead-of triggers code, which implements the security constraints. The approach has been fully instrumented as a prototype and its effectiveness has been validated by means of a case study.
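
    As a rough illustration of the view-plus-trigger pattern the paper targets (not the authors' SecureUML-generated code), a role-restricted view can expose only the rows a role may read, with an INSTEAD OF trigger mediating writes. The table, role, column, and predicate names below are invented, and the emitted SQL is kept generic rather than reproducing the actual translation.

```python
# Hypothetical sketch of the "views + instead-of triggers" pattern described
# above. Table, role, column, and predicate names are invented; the generated
# SQL is generic and not the authors' SecureUML translation.

def role_view_ddl(role: str, table: str, read_predicate: str) -> str:
    """Emit SQL for a view exposing only the rows the role may read."""
    return (
        f"CREATE VIEW {table}_{role}_v AS\n"
        f"  SELECT * FROM {table}\n"
        f"  WHERE {read_predicate};"
    )

def guard_trigger_ddl(role: str, table: str) -> str:
    """Emit a skeleton INSTEAD OF trigger so writes pass through role checks."""
    return (
        f"CREATE TRIGGER {table}_{role}_ins\n"
        f"  INSTEAD OF INSERT ON {table}_{role}_v\n"
        f"  FOR EACH ROW\n"
        f"BEGIN\n"
        f"  -- role-specific checks would be generated here\n"
        f"  INSERT INTO {table} VALUES (:NEW.id, :NEW.department);\n"
        f"END;"
    )

print(role_view_ddl("clerk", "patient", "department = 'outpatient'"))
print(guard_trigger_ddl("clerk", "patient"))
```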

  14. PCACE-Personal-Computer-Aided Cabling Engineering

    Science.gov (United States)

    Billitti, Joseph W.

    1987-01-01

    PCACE computer program developed to provide inexpensive, interactive system for learning and using engineering approach to interconnection systems. Basically database system that stores information as files of individual connectors and handles wiring information in circuit groups stored as records. Directly emulates typical manual engineering methods of handling data, thus making interface between user and program very natural. Apple version written in P-Code Pascal and IBM PC version of PCACE written in TURBO Pascal 3.0.

  15. Update History of This Database - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update History of This Database. 2010/03/29: Yeast Interacting Proteins Database English archive site is opened. 2000/12/4: Yeast Interacting Proteins Database ( http://itolab.cb.k.u-tokyo.ac.jp/Y2H/ ) is released.

  16. Development of Human Face Literature Database Using Text Mining Approach: Phase I.

    Science.gov (United States)

    Kaur, Paramjit; Krishan, Kewal; Sharma, Suresh K

    2018-06-01

    The face is an important part of the human body through which an individual communicates in society; its importance is underlined by the fact that a person deprived of a face cannot function in the living world. The number of experiments performed and research papers published in the domain of the human face has surged in the past few decades. Several scientific disciplines conduct research on the human face, including Medical Science, Anthropology, Information Technology (Biometrics, Robotics, Artificial Intelligence, etc.), Psychology, Forensic Science, and Neuroscience. This highlights the need to collect and manage data concerning the human face so that free public access can be provided to the scientific community, which can be attained by developing databases and tools on the human face using a bioinformatics approach. The current research emphasizes creating a database of the literature on the human face. The database can be accessed on the basis of specific keywords, journal name, date of publication, author's name, etc. The collected research papers are stored in the database, so the database will be beneficial to the research community as comprehensive information dedicated to the human face can be found in one place. Information related to facial morphologic features, facial disorders, facial asymmetry, facial abnormalities, and many other parameters can be extracted from this database. The front end has been developed using Hypertext Markup Language and Cascading Style Sheets, and the back end using PHP (hypertext preprocessor); JavaScript is used as the scripting language. MySQL is used for database development as it is the most widely used relational database management system. XAMPP (cross-platform Apache, MySQL, PHP, Perl) open-source web application software has been used as the server. The database is still under the

  17. Civil Engineering Technology Needs Assessment.

    Science.gov (United States)

    Oakland Community Coll., Farmington, MI. Office of Institutional Planning and Analysis.

    In 1991, a study was conducted by Oakland Community College (OCC) to evaluate the need for a proposed Civil Engineering Technology program. An initial examination of the literature focused on industry needs and the job market for civil engineering technicians. In order to gather information on local area employers' hiring practices and needs, a…

  18. JASPAR 2010: the greatly expanded open-access database of transcription factor binding profiles

    DEFF Research Database (Denmark)

    Portales-Casamar, Elodie; Thongjuea, Supat; Kwon, Andrew T

    2009-01-01

    JASPAR (http://jaspar.genereg.net) is the leading open-access database of matrix profiles describing the DNA-binding patterns of transcription factors (TFs) and other proteins interacting with DNA in a sequence-specific manner. Its fourth major release is the largest expansion of the core database...... to an active research community. As binding models are refined by newer data, the JASPAR database now uses versioning of matrices: in this release, 12% of the older models were updated to improved versions. Classification of TF families has been improved by adopting a new DNA-binding domain nomenclature...

  19. Database Description - RMOS | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information of database. Database name: RMOS. Contact: Shoshi Kikuchi. Database classification: Plant databases - Rice Microarray Data and other Gene Expression Databases. Organism: Oryza sativa (Taxonomy ID: 4530). Referenced databases: Rice Expression Database (RED), Rice full-length cDNA Database (KOME), Rice Genome Integrated Map Database (INE), Rice Mutant Panel Database (Tos17), Rice Genome Annotation Database.

  20. Updated Palaeotsunami Database for Aotearoa/New Zealand

    Science.gov (United States)

    Gadsby, M. R.; Goff, J. R.; King, D. N.; Robbins, J.; Duesing, U.; Franz, T.; Borrero, J. C.; Watkins, A.

    2016-12-01

    The updated configuration, design, and implementation of a national palaeotsunami (pre-historic tsunami) database for Aotearoa/New Zealand (A/NZ) is near completion. This tool enables correlation of events along different stretches of the NZ coastline, provides information on frequency and extent of local, regional and distant-source tsunamis, and delivers detailed information on the science and proxies used to identify the deposits. In A/NZ a plethora of data, scientific research and experience surrounds palaeotsunami deposits, but much of this information has been difficult to locate, has variable reporting standards, and lacked quality assurance. The original database was created by Professor James Goff while working at the National Institute of Water & Atmospheric Research in A/NZ, but has subsequently been updated during his tenure at the University of New South Wales. The updating and establishment of the national database was funded by the Ministry of Civil Defence and Emergency Management (MCDEM), led by Environment Canterbury Regional Council, and supported by all 16 regions of A/NZ's local government. Creation of a single database has consolidated a wide range of published and unpublished research contributions from many science providers on palaeotsunamis in A/NZ. The information is now easily accessible and quality assured and allows examination of frequency, extent and correlation of events. This provides authoritative scientific support for coastal-marine planning and risk management. The database will complement the GNS New Zealand Historical Database, and contributes to a heightened public awareness of tsunami by being a "one-stop-shop" for information on past tsunami impacts. There is scope for this to become an international database, enabling the pacific-wide correlation of large events, as well as identifying smaller regional ones. The Australian research community has already expressed an interest, and the database is also compatible with a

  1. SoyFN: a knowledge database of soybean functional networks.

    Science.gov (United States)

    Xu, Yungang; Guo, Maozu; Liu, Xiaoyan; Wang, Chunyu; Liu, Yang

    2014-01-01

    Many databases for soybean genomic analysis have been built and made publicly available, but few of them contain knowledge specifically targeting the omics-level gene-gene, gene-microRNA (miRNA) and miRNA-miRNA interactions. Here, we present SoyFN, a knowledge database of soybean functional gene networks and miRNA functional networks. SoyFN provides user-friendly interfaces to retrieve, visualize, analyze and download the functional networks of soybean genes and miRNAs. In addition, it incorporates much information about KEGG pathways, gene ontology annotations and 3'-UTR sequences as well as many useful tools including SoySearch, ID mapping, Genome Browser, eFP Browser and promoter motif scan. SoyFN is a schema-free database that can be accessed as a Web service from any modern programming language using a simple Hypertext Transfer Protocol call. The Web site is implemented in Java, JavaScript, PHP, HTML and Apache, with all major browsers supported. We anticipate that this database will be useful for members of research communities both in soybean experimental science and bioinformatics. Database URL: http://nclab.hit.edu.cn/SoyFN.

  2. Software Engineering Improvement Plan

    Science.gov (United States)

    2006-01-01

    In performance of this task order, bd Systems personnel provided support to the Flight Software Branch and the Software Working Group through multiple tasks related to software engineering improvement and to activities of the independent Technical Authority (iTA) Discipline Technical Warrant Holder (DTWH) for software engineering. To ensure that the products, comments, and recommendations complied with customer requirements and the statement of work, bd Systems personnel maintained close coordination with the customer. These personnel performed work in areas such as update of agency requirements and directives database, software effort estimation, software problem reports, a web-based process asset library, miscellaneous documentation review, software system requirements, issue tracking software survey, systems engineering NPR, and project-related reviews. This report contains a summary of the work performed and the accomplishments in each of these areas.

  3. KALIMER database development (database configuration and design methodology)

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Kwon, Young Min; Lee, Young Bum; Chang, Won Pyo; Hahn, Do Hee

    2001-10-01

    KALIMER Database is an advanced database for the integrated management of Liquid Metal Reactor Design Technology Development using Web applications. The KALIMER design database consists of a Results Database, Inter-Office Communication (IOC), a 3D CAD database, a Team Cooperation system, and Reserved Documents. The Results Database holds research results from phase II of Liquid Metal Reactor Design Technology Development under the mid-term and long-term nuclear R and D program. IOC is a linkage control system between sub-projects to share and integrate research results for KALIMER. The 3D CAD Database is a schematic design overview for KALIMER. The Team Cooperation System informs team members of research cooperation and meetings. Finally, KALIMER Reserved Documents was developed to manage data and documents collected since project accomplishment. This report describes the features of the hardware and software and the database design methodology for KALIMER

  4. Functional Enzyme-Based Approach for Linking Microbial Community Functions with Biogeochemical Process Kinetics

    Energy Technology Data Exchange (ETDEWEB)

    Li, Minjing [School; Qian, Wei-jun [Pacific Northwest National Laboratory, Richland, Washington 99354, United States; Gao, Yuqian [Pacific Northwest National Laboratory, Richland, Washington 99354, United States; Shi, Liang [School; Liu, Chongxuan [Pacific Northwest National Laboratory, Richland, Washington 99354, United States; School

    2017-09-28

    The kinetics of biogeochemical processes in natural and engineered environmental systems are typically described using Monod-type or modified Monod-type models. These models rely on biomass as a surrogate for the functional enzymes in the microbial community that catalyze biogeochemical reactions. A major challenge in applying such models is the difficulty of quantitatively measuring functional biomass for constraining and validating the models. On the other hand, omics-based approaches have been increasingly used to characterize microbial community structure, functions, and metabolites. Here we proposed an enzyme-based model that can incorporate omics data to link microbial community functions with biogeochemical process kinetics. The model treats enzymes as time-variable catalysts for biogeochemical reactions and applies a biogeochemical reaction network to incorporate intermediate metabolites. The sequences of genes and proteins from metagenomes, as well as those from the UniProt database, were used for targeted enzyme quantification and to provide insights into the dynamic linkage among functional genes, enzymes, and metabolites that needs to be incorporated in the model. The application of the model was demonstrated using denitrification as an example, by comparing model-simulated with measured functional enzymes, genes, denitrification substrates and intermediates
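
    For orientation, a classical Monod-type rate law uses biomass as the catalyst surrogate, while the enzyme-based formulation described above replaces it with a time-variable functional enzyme concentration E(t); a generic (not the authors' exact) form of the two is

    \[ \frac{dC}{dt} = -\mu_{\max}\, B\, \frac{C}{K_s + C} \qquad \text{versus} \qquad \frac{dC}{dt} = -k_{\mathrm{cat}}\, E(t)\, \frac{C}{K_m + C}, \]

    where C is the substrate concentration, B the functional biomass, and K_s, K_m are half-saturation constants.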

  5. A computational platform to maintain and migrate manual functional annotations for BioCyc databases.

    Science.gov (United States)

    Walsh, Jesse R; Sen, Taner Z; Dickerson, Julie A

    2014-10-12

    BioCyc databases are an important resource for information on biological pathways and genomic data. Such databases represent the accumulation of biological data, some of which has been manually curated from literature. An essential feature of these databases is the continuing data integration as new knowledge is discovered. As functional annotations are improved, scalable methods are needed for curators to manage annotations without detailed knowledge of the specific design of the BioCyc database. We have developed CycTools, a software tool which allows curators to maintain functional annotations in a model organism database. This tool builds on existing software to improve and simplify annotation data imports of user provided data into BioCyc databases. Additionally, CycTools automatically resolves synonyms and alternate identifiers contained within the database into the appropriate internal identifiers. Automating steps in the manual data entry process can improve curation efforts for major biological databases. The functionality of CycTools is demonstrated by transferring GO term annotations from MaizeCyc to matching proteins in CornCyc, both maize metabolic pathway databases available at MaizeGDB, and by creating strain specific databases for metabolic engineering.
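
    The synonym-resolution step described above can be pictured as a lookup from any known alias to a canonical internal identifier before the annotation import. The following is a minimal, hypothetical sketch with invented identifiers, not CycTools code.

```python
# Hypothetical sketch of resolving synonyms/alternate IDs to internal
# database identifiers before an annotation import. All IDs are invented.
SYNONYMS = {
    "gene1_alias": "G-0001",
    "GRMZM2G000001": "G-0001",
    "proteinX": "MONOMER-0042",
}

def resolve(identifier: str) -> str | None:
    """Return the internal frame ID for any known alias, else None."""
    return SYNONYMS.get(identifier)

annotations = [("GRMZM2G000001", "GO:0008150"), ("unknown_id", "GO:0003674")]
resolved = [(resolve(acc), go) for acc, go in annotations]
print(resolved)  # [('G-0001', 'GO:0008150'), (None, 'GO:0003674')]
```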

  6. Kalium: a database of potassium channel toxins from scorpion venom.

    Science.gov (United States)

    Kuzmenkov, Alexey I; Krylov, Nikolay A; Chugunov, Anton O; Grishin, Eugene V; Vassilevski, Alexander A

    2016-01-01

    Kalium (http://kaliumdb.org/) is a manually curated database that accumulates data on potassium channel toxins purified from scorpion venom (KTx). This database is an open-access resource, and provides easy access to pages of other databases of interest, such as UniProt, PDB, NCBI Taxonomy Browser, and PubMed. General achievements of Kalium are a strict and easy regulation of KTx classification based on the unified nomenclature supported by researchers in the field, removal of peptides with partial sequence and entries supported by transcriptomic information only, classification of β-family toxins, and addition of a novel λ-family. Molecules presented in the database can be processed by the Clustal Omega server using a one-click option. Molecular masses of mature peptides are calculated and available activity data are compiled for all KTx. We believe that Kalium is not only of high interest to professional toxinologists, but also of general utility to the scientific community.Database URL:http://kaliumdb.org/. © The Author(s) 2016. Published by Oxford University Press.

  7. The UKNG database: a simple audit tool for interventional neuroradiology

    International Nuclear Information System (INIS)

    Millar, J.S.; Burke, M.

    2007-01-01

    The UK Neurointerventional Group (UKNG) has developed a unified database for the purposes of recording, analysis and clinical audit of neuroangiography and neurointerventional procedures. It has been in use since January 2002. The database utilizes an Access platform (Microsoft) comprising separate but linked programs for data collection and analysis. The program that analyses aneurysm therapy has been designed to mirror the criteria used in the International Subarachnoid Aneurysm Trial (ISAT). Data entered into the main database immediately update the analysis program producing clinical outcome scores in the form of a report. Our local database (Wessex) now contains records on more than 1,750 patients including nearly 350 aneurysm coilings and a total of approximately 500 neurointerventional, vascular procedures. Every time a new piece of information is added to the main database the reporting database is automatically updated which allows 'real-time' audit and analysis of one's clinical practice. The clinical outcome scores for aneurysm treatment are presented in such a way that we can directly compare our results with the 'Clinical Standard' set by ISAT. This database provides a unique opportunity to monitor and review practice at national level. The UKNG wishes to share this database with the wider neurointerventional community and a copy of the software can be obtained free of charge from the authors. (orig.)

  8. The UKNG database: a simple audit tool for interventional neuroradiology

    Energy Technology Data Exchange (ETDEWEB)

    Millar, J.S.; Burke, M. [Southampton General Hospital, Departments of Neuroradiology and IT, Wessex Neurological Centre, Southampton (United Kingdom)

    2007-06-15

    The UK Neurointerventional Group (UKNG) has developed a unified database for the purposes of recording, analysis and clinical audit of neuroangiography and neurointerventional procedures. It has been in use since January 2002. The database utilizes an Access platform (Microsoft) comprising separate but linked programs for data collection and analysis. The program that analyses aneurysm therapy has been designed to mirror the criteria used in the International Subarachnoid Aneurysm Trial (ISAT). Data entered into the main database immediately update the analysis program producing clinical outcome scores in the form of a report. Our local database (Wessex) now contains records on more than 1,750 patients including nearly 350 aneurysm coilings and a total of approximately 500 neurointerventional, vascular procedures. Every time a new piece of information is added to the main database the reporting database is automatically updated which allows 'real-time' audit and analysis of one's clinical practice. The clinical outcome scores for aneurysm treatment are presented in such a way that we can directly compare our results with the 'Clinical Standard' set by ISAT. This database provides a unique opportunity to monitor and review practice at national level. The UKNG wishes to share this database with the wider neurointerventional community and a copy of the software can be obtained free of charge from the authors. (orig.)

  9. Databases and coordinated research projects at the IAEA on atomic processes in plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Braams, Bastiaan J.; Chung, Hyun-Kyung [Nuclear Data Section, NAPC Division, International Atomic Energy Agency P. O. Box 100, Vienna International Centre, AT-1400 Vienna (Austria)

    2012-05-25

    The Atomic and Molecular Data Unit at the IAEA works with a network of national data centres to encourage and coordinate production and dissemination of fundamental data for atomic, molecular and plasma-material interaction (A+M/PMI) processes that are relevant to the realization of fusion energy. The Unit maintains numerical and bibliographical databases and has started a Wiki-style knowledge base. The Unit also contributes to A+M database interface standards and provides a search engine that offers a common interface to multiple numerical A+M/PMI databases. Coordinated Research Projects (CRPs) bring together fusion energy researchers and atomic, molecular and surface physicists for joint work towards the development of new data and new methods. The databases and current CRPs on A+M/PMI processes are briefly described here.

  10. Databases and coordinated research projects at the IAEA on atomic processes in plasmas

    Science.gov (United States)

    Braams, Bastiaan J.; Chung, Hyun-Kyung

    2012-05-01

    The Atomic and Molecular Data Unit at the IAEA works with a network of national data centres to encourage and coordinate production and dissemination of fundamental data for atomic, molecular and plasma-material interaction (A+M/PMI) processes that are relevant to the realization of fusion energy. The Unit maintains numerical and bibliographical databases and has started a Wiki-style knowledge base. The Unit also contributes to A+M database interface standards and provides a search engine that offers a common interface to multiple numerical A+M/PMI databases. Coordinated Research Projects (CRPs) bring together fusion energy researchers and atomic, molecular and surface physicists for joint work towards the development of new data and new methods. The databases and current CRPs on A+M/PMI processes are briefly described here.

  11. Databases and coordinated research projects at the IAEA on atomic processes in plasmas

    International Nuclear Information System (INIS)

    Braams, Bastiaan J.; Chung, Hyun-Kyung

    2012-01-01

    The Atomic and Molecular Data Unit at the IAEA works with a network of national data centres to encourage and coordinate production and dissemination of fundamental data for atomic, molecular and plasma-material interaction (A+M/PMI) processes that are relevant to the realization of fusion energy. The Unit maintains numerical and bibliographical databases and has started a Wiki-style knowledge base. The Unit also contributes to A+M database interface standards and provides a search engine that offers a common interface to multiple numerical A+M/PMI databases. Coordinated Research Projects (CRPs) bring together fusion energy researchers and atomic, molecular and surface physicists for joint work towards the development of new data and new methods. The databases and current CRPs on A+M/PMI processes are briefly described here.

  12. Spent nuclear fuel project cold vacuum drying facility supporting data and calculation database

    International Nuclear Information System (INIS)

    IRWIN, J.J.

    1999-01-01

    This document provides a database of supporting calculations for the Cold Vacuum Drying Facility (CVDF). The database was developed in conjunction with HNF-SD-SNF-SAR-002, "Safety Analysis Report for the Cold Vacuum Drying Facility", Phase 2, "Supporting Installation of Processing Systems" (Garvin 1998); HNF-SD-SNF-DRD-002, 1997, "Cold Vacuum Drying Facility Design Requirements", Rev. 2; and the CVDF Summary Design Report. The database contains calculation report entries for all process, safety and facility systems in the CVDF, a general CVD operations sequence, and the CVDF System Design Descriptions (SDDs). It has been developed for the SNFP CVDF Engineering Organization and shall be updated, expanded, and revised in accordance with future design, construction and startup phases of the CVDF until the CVDF final ORR is approved.

  13. Moving to Google Cloud: Renovation of Global Borehole Temperature Database for Climate Research

    Science.gov (United States)

    Xiong, Y.; Huang, S.

    2013-12-01

    Borehole temperature comprises an independent archive of information on climate change which is complementary to the instrumental and other proxy climate records. With support from the international geothermal community, a global database of borehole temperatures has been constructed for the specific purpose of the study on climate change. Although this database has become an important data source in climate research, there are certain limitations partially because the framework of the existing borehole temperature database was hand-coded some twenty years ago. A database renovation work is now underway to take the advantages of the contemporary online database technologies. The major intended improvements include 1) dynamically linking a borehole site to Google Earth to allow for inspection of site specific geographical information; 2) dynamically linking an original key reference of a given borehole site to Google Scholar to allow for a complete list of related publications; and 3) enabling site selection and data download based on country, coordinate range, and contributor. There appears to be a good match between the enhancement requirements for this database and the functionalities of the newly released Google Fusion Tables application. Google Fusion Tables is a cloud-based service for data management, integration, and visualization. This experimental application can consolidate related online resources such as Google Earth, Google Scholar, and Google Drive for sharing and enriching an online database. It is user friendly, allowing users to apply filters and to further explore the internet for additional information regarding the selected data. The users also have ways to map, to chart, and to calculate on the selected data, and to download just the subset needed. The figure below is a snapshot of the database currently under Google Fusion Tables renovation. We invite contribution and feedback from the geothermal and climate research community to make the

  14. A proposal for a drug information database and text templates for generating package inserts

    Directory of Open Access Journals (Sweden)

    Okuya R

    2013-07-01

    Full Text Available Ryo Okuya,1 Masaomi Kimura,2 Michiko Ohkura,2 Fumito Tsuchiya3 1Graduate School of Engineering and Science, 2Faculty of Engineering, Shibaura Institute of Technology, Tokyo, 3School of Pharmacy, International University of Health and Welfare, Tokyo, Japan Abstract: To prevent prescription errors caused by information systems, a database that stores complete and accurate drug information in a user-friendly format is needed. In previous studies, the primary method for obtaining data stored in a database has been to extract drug information from package inserts by employing pattern matching or more sophisticated methods such as text mining. However, it is difficult to obtain a complete database in this way because there is no strict rule governing the expressions used to describe drug information in package inserts. The authors' strategy was to first build a database and then automatically generate package inserts by embedding data from the database in templates. To create this database, the support of pharmaceutical companies in entering accurate data is required. It is expected that this system will work because these companies benefit, for newly developed drugs, from a reduced effort to create package inserts from scratch. This study designed the table schemata for the database and the text templates to generate the package inserts. To handle the variety of drug-specific information in the package inserts, such information in drug composition descriptions was replaced with labels, and the resulting descriptions were analyzed using cluster analysis. To improve the way frequently repeated ingredient information and/or supplementary information is stored, the method was modified by introducing repeat tags in the templates to indicate repetition and by improving the insertion of data into the database. The validity of this method was confirmed by inputting the drug information described in existing package inserts and checking that the method could
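
    A toy rendering of the template idea, including a repeat section for ingredient rows, might look like the following; the tag syntax and field names are invented for illustration and are not the authors' format.

```python
# Toy illustration of template-based package-insert generation with a
# repeat section for ingredients. Template structure and fields are invented.

TEMPLATE = (
    "Product: {name}\n"
    "Ingredients:\n"
    "{ingredient_rows}"
)
ROW_TEMPLATE = "  - {ingredient}: {amount}\n"

def render_insert(drug: dict) -> str:
    """Expand the repeated ingredient rows, then fill the main template."""
    rows = "".join(ROW_TEMPLATE.format(**row) for row in drug["ingredients"])
    return TEMPLATE.format(name=drug["name"], ingredient_rows=rows)

drug = {
    "name": "ExampleDrug 10 mg tablets",
    "ingredients": [
        {"ingredient": "example active substance", "amount": "10 mg"},
        {"ingredient": "lactose hydrate", "amount": "50 mg"},
    ],
}
print(render_insert(drug))
```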

  15. Development of Tsunami Trace Database with reliability evaluation on Japan coasts

    International Nuclear Information System (INIS)

    Iwabuchi, Yoko; Sugino, Hideharu; Imamura, Fumihiko; Imai, Kentaro; Tsuji, Yoshinobu; Matsuoka, Yuya; Shuto, Nobuo

    2012-01-01

    The purpose of this research is to develop a Tsunami Trace Database by collecting historical materials and documents concerning tsunamis that have hit Japan, taking into account the reliability of tsunami run-up and related data. Based on the acquisition and survey of references, tsunami trace data covering the past 400 years in Japan has been collected into a database, and the reliability of each trace record was evaluated according to the categorization of the Japan Society of Civil Engineers (2002). As a result, trace data can now be searched and filtered by reliability level while being used for verification of tsunami numerical analyses and estimation of tsunami sources. By analyzing this database, we have quantitatively shown that the amount of reliable data tends to diminish with age. (author)

  16. Software engineering knowledge at your fingertips: Experiences with a software engineering-portal

    OpenAIRE

    Punter, T.; Kalmar, R.

    2003-01-01

    In order to keep up the pace with technology development, knowledge on Software Engineering (SE) methods, techniques, and tools is required. For an effective and efficient knowledge transfer, especially Small and Medium-sized Enterprises (SMEs) might benefit from Software Engineering Portals (SE-Portals). This paper provides an analysis of SE-Portals by distinguishing two types: 1) the Knowledge Portal and 2) the Knowledge & Community Portal. On behalf of the analysis we conclude that most SE...

  17. Women In Engineering Learning Community: What We Learned The First Year

    OpenAIRE

    LaBoone, Kimberly; Lazar, Maureen; Watford, Bevlee

    2007-01-01

    The College of Engineering at Virginia Tech reflects national trends with respect to women in engineering. With first year enrollments hovering around 17%, the retention through graduation of these women is critical to increasing the number of women in the engineering profession. When examining year to year retention rates, it is observed that the largest percentage of women drop out of engineering during or immediately following their first year. It is therefore believed that efforts to incr...

  18. Web geoprocessing services on GML with a fast XML database ...

    African Journals Online (AJOL)

    Nowadays there exist quite a lot of Spatial Database Infrastructures (SDI) that facilitate the Geographic Information Systems (GIS) user community in getting access to distributed spatial data through web technology. However, sometimes the users first have to process available spatial data to obtain the needed information.

  19. Database architecture evolution: Mammals flourished long before dinosaurs became extinct

    NARCIS (Netherlands)

    S. Manegold (Stefan); M.L. Kersten (Martin); P.A. Boncz (Peter)

    2009-01-01

    textabstractThe holy grail for database architecture research is to find a solution that is Scalable & Speedy, to run on anything from small ARM processors up to globally distributed compute clusters, Stable & Secure, to service a broad user community, Small & Simple, to be comprehensible to a small

  20. Forensic utilization of familial searches in DNA databases.

    Science.gov (United States)

    Gershaw, Cassandra J; Schweighardt, Andrew J; Rourke, Linda C; Wallace, Margaret M

    2011-01-01

    DNA evidence is widely recognized as an invaluable tool in the process of investigation and identification, as well as one of the most sought after types of evidence for presentation to a jury. In the United States, the development of state and federal DNA databases has greatly impacted the forensic community by creating an efficient, searchable system that can be used to eliminate or include suspects in an investigation based on matching DNA profiles - the profile already in the database to the profile of the unknown sample in evidence. Recent changes in legislation have begun to allow for the possibility to expand the parameters of DNA database searches, taking into account the possibility of familial searches. This article discusses prospective positive outcomes of utilizing familial DNA searches and acknowledges potential negative outcomes, thereby presenting both sides of this very complicated, rapidly evolving situation. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  1. An extensible database architecture for nationwide power quality monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Kuecuek, Dilek; Inan, Tolga; Salor, Oezguel; Demirci, Turan; Buhan, Serkan; Boyrazoglu, Burak [TUBITAK Uzay, Power Electronics Group, TR 06531 Ankara (Turkey); Akkaya, Yener; Uensar, Oezguer; Altintas, Erinc; Haliloglu, Burhan [Turkish Electricity Transmission Co. Inc., TR 06490 Ankara (Turkey); Cadirci, Isik [TUBITAK Uzay, Power Electronics Group, TR 06531 Ankara (Turkey); Hacettepe University, Electrical and Electronics Eng. Dept., TR 06532 Ankara (Turkey); Ermis, Muammer [METU, Electrical and Electronics Eng. Dept., TR 06531 Ankara (Turkey)

    2010-07-15

    Electrical power quality (PQ) data is one of the prevalent types of engineering data. Its measurement at relevant sampling rates leads to large volumes of PQ data to be managed and analyzed. In this paper, an extensible database architecture is presented based on a novel generic data model for PQ data. The proposed architecture is operated on the nationwide PQ data of the Turkish Electricity Transmission System measured in the field by mobile PQ monitoring systems. The architecture is extensible in the sense that it can be used to store and manage PQ data collected by any means with little or no customization. The architecture has three modules: a PQ database corresponding to the implementation of the generic data model, a visual user query interface to enable its users to specify queries to the PQ database and a query processor acting as a bridge between the query interface and the database. The operation of the architecture is illustrated on the field PQ data with several query examples through the visual query interface. The execution of the architecture on this data of considerable volume supports its applicability and convenience for PQ data. (author)
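
    One common way to make such a schema extensible is a "long" measurement table keyed by site, time, and parameter, so that new PQ quantities require no schema change. The SQLite sketch below is a guess at the flavour of such a generic model, not the paper's actual data model.

```python
# Illustrative generic "long" schema for power-quality measurements,
# using SQLite for self-containment; not the architecture described above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE site (
    site_id     INTEGER PRIMARY KEY,
    name        TEXT NOT NULL
);
CREATE TABLE measurement (
    site_id     INTEGER REFERENCES site(site_id),
    ts_utc      TEXT NOT NULL,        -- ISO 8601 timestamp
    parameter   TEXT NOT NULL,        -- e.g. 'voltage_thd', 'flicker_pst'
    value       REAL NOT NULL,
    PRIMARY KEY (site_id, ts_utc, parameter)
);
""")
conn.execute("INSERT INTO site VALUES (1, 'Substation A')")
conn.execute(
    "INSERT INTO measurement VALUES (1, '2010-01-01T00:00:00Z', 'voltage_thd', 2.4)"
)
row = conn.execute(
    "SELECT value FROM measurement WHERE parameter = 'voltage_thd'"
).fetchone()
print(row)  # (2.4,)
```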

  2. Database Description - SAHG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information of database. Database name: SAHG. Contact address: Chie Motono, Tel: +81-3-3599-8067. Database classification: Structure Databases; Protein properties. Organism: Homo sapiens (Taxonomy ID: 9606). Database maintenance site: The Molecular Profiling Research Center for D… Need for user registration: Not available.

  3. Engineering the object-relation database model in O-Raid

    Science.gov (United States)

    Dewan, Prasun; Vikram, Ashish; Bhargava, Bharat

    1989-01-01

    Raid is a distributed database system based on the relational model. O-Raid is an extension of the Raid system and will support complex data objects. The design of O-Raid is evolutionary and retains all features of relational database systems and those of a general purpose object-oriented programming language. O-Raid has several novel properties. Objects, classes, and inheritance are supported together with a predicate-based relational query language. O-Raid objects are compatible with C++ objects and may be read and manipulated by a C++ program without any 'impedance mismatch'. Relations and columns within relations may themselves be treated as objects with associated variables and methods. Relations may contain heterogeneous objects, that is, objects of more than one class in a certain column, which can individually evolve by being reclassified. Special facilities are provided to reduce the data search in a relation containing complex objects.

  4. High energy nuclear database: a test-bed for nuclear data information technology

    International Nuclear Information System (INIS)

    Brown, D.A.; Vogt, R.; Beck, B.; Pruet, J.; Vogt, R.

    2008-01-01

    We describe the development of an on-line high-energy heavy-ion experimental database. When completed, the database will be searchable and cross-indexed with relevant publications, including published detector descriptions. While this effort is relatively new, it will eventually contain all published data from older heavy-ion programs as well as published data from current and future facilities. These data include all measured observables in proton-proton, proton-nucleus and nucleus-nucleus collisions. Once in general use, this database will have tremendous scientific payoff as it makes systematic studies easier and allows simpler benchmarking of theoretical models for a broad range of experiments. Furthermore, there is a growing need for compilations of high-energy nuclear data for applications including stockpile stewardship, technology development for inertial confinement fusion, target and source development for upcoming facilities such as the International Linear Collider and homeland security. This database is part of a larger proposal that includes the production of periodic data evaluations and topical reviews. These reviews would provide an alternative and impartial mechanism to resolve discrepancies between published data from rival experiments and between theory and experiment. Since this database will be a community resource, it requires the high-energy nuclear physics community's financial and manpower support. This project serves as a test-bed for the further development of an object-oriented nuclear data format and database system. By using 'off-the-shelf' software tools and techniques, the system is simple, robust, and extensible. Eventually we envision a 'Grand Unified Nuclear Format' encapsulating data types used in the ENSDF, Endf/B, EXFOR, NSR and other formats, including processed data formats. (authors)

  5. Comparison of Problem Solving from Engineering Design to Software Design

    DEFF Research Database (Denmark)

    Ahmed-Kristensen, Saeema; Babar, Muhammad Ali

    2012-01-01

    Observational studies of engineering design activities can inform the research community on the problem solving models that are employed by professional engineers. Design is defined as an ill-defined problem which includes both engineering design and software design, hence understanding problem...... solving models from other design domains is of interest to the engineering design community. For this paper an observational study of two software design sessions performed for the workshop on “Studying professional Software Design” is compared to analysis from engineering design. These findings provide...... useful insights of how software designers move from a problem domain to a solution domain and the commonalities between software designers’ and engineering designers’ design activities. The software designers were found to move quickly to a detailed design phase, employ co-.evolution and adopt...

  7. Database Description - PSCDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information of database. Database name: PSCDB. Creator: Takayuki Amemiya, National Institute of Advanced Industrial Science and Technology (AIST). Database classification: Structure Databases - Protein structure. Database maintenance site: Graduate School of Informat… Need for user registration: Not available.

  8. Database Description - ASTRA | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information of database. Database name: ASTRA. Database classification: Nucleotide Sequence Databases - Gene structure. Organisms: Taxonomy ID 3702; Oryza sativa (Taxonomy ID: 4530). Need for user registration: Not available.

  9. Relational database hybrid model, of high performance and storage capacity for nuclear engineering applications

    International Nuclear Information System (INIS)

    Gomes Neto, Jose

    2008-01-01

    The objective of this work is to present the relational database named FALCAO. It was created and implemented to support the storage of the monitored variables of the IEA-R1 research reactor, located at the Instituto de Pesquisas Energeticas e Nucleares, IPEN/CNEN-SP. The logical data model and its direct influence on the integrity of the provided information are carefully considered. The concepts and steps of normalization and denormalization, including the entities and relations involved in the logical model, are presented. The effects of the model rules on the acquisition, loading and availability of the final information are also presented from a performance standpoint, since the acquisition process loads and provides large amounts of information at short intervals. The SACD application, through its functionalities, presents the information stored in the FALCAO database in a practical and optimized form. The implementation of the FALCAO database was successful and its existence leads to a considerably favorable situation. It is now essential to the routine of the researchers involved, not only because of the substantial improvement of the process but also because of the reliability associated with it. (author)

  10. Post-Inpatient Brain Injury Rehabilitation Outcomes: Report from the National OutcomeInfo Database

    OpenAIRE

    Malec, James F.; Kean, Jacob

    2016-01-01

    This study examined outcomes for intensive residential and outpatient/community-based post-inpatient brain injury rehabilitation (PBIR) programs compared with supported living programs. The goal of supported living programs was stable functioning (no change). Data were obtained for a large cohort of adults with acquired brain injury (ABI) from the OutcomeInfo national database, a web-based database system developed through National Institutes of Health (NIH) Small Business Technology Transfer...

  11. SoyDB: a knowledge database of soybean transcription factors

    Directory of Open Access Journals (Sweden)

    Valliyodan Babu

    2010-01-01

    Full Text Available Abstract Background Transcription factors play the crucial role of regulating gene expression and influence almost all biological processes. Systematically identifying and annotating transcription factors can greatly aid further understanding of their functions and mechanisms. In this article, we present SoyDB, a user-friendly database containing comprehensive knowledge of soybean transcription factors. Description The soybean genome was recently sequenced by the Department of Energy-Joint Genome Institute (DOE-JGI) and is publicly available. Mining of this sequence identified 5,671 soybean genes as putative transcription factors. These genes were comprehensively annotated as an aid to the soybean research community. We developed SoyDB - a knowledge database for all the transcription factors in the soybean genome. The database contains protein sequences, predicted tertiary structures, putative DNA binding sites, domains, homologous templates in the Protein Data Bank (PDB), protein family classifications, multiple sequence alignments, consensus protein sequence motifs, a web logo of each family, and web links to the soybean transcription factor database PlantTFDB, known EST sequences, and other general protein databases including Swiss-Prot, Gene Ontology, KEGG, EMBL, TAIR, InterPro, SMART, PROSITE, NCBI, and Pfam. The database can be accessed via an interactive and convenient web server, which supports full-text search, PSI-BLAST sequence search, database browsing by protein family, and automatic classification of a new protein sequence into one of 64 annotated transcription factor families by hidden Markov models. Conclusions A comprehensive soybean transcription factor database was constructed and made publicly accessible at http://casp.rnet.missouri.edu/soydb/.

  12. Vesiclepedia: a compendium for extracellular vesicles with continuous community annotation.

    Directory of Open Access Journals (Sweden)

    Hina Kalra

    Full Text Available Extracellular vesicles (EVs) are membranous vesicles released by a variety of cells into their microenvironment. Recent studies have elucidated the role of EVs in intercellular communication, pathogenesis, drug, vaccine and gene-vector delivery, and as possible reservoirs of biomarkers. These findings have generated immense interest, along with an exponential increase in molecular data pertaining to EVs. Here, we describe Vesiclepedia, a manually curated compendium of molecular data (lipid, RNA, and protein) identified in different classes of EVs from more than 300 independent studies published over the past several years. Even though databases are indispensable resources for the scientific community, recent studies have shown that more than 50% of the databases are not regularly updated. In addition, more than 20% of the database links are inactive. To prevent such database and link decay, we have initiated a continuous community annotation project with the active involvement of EV researchers. The EV research community can set a gold standard in data sharing with Vesiclepedia, which could evolve as a primary resource for the field.

  13. The GLIMS Glacier Database

    Science.gov (United States)

    Raup, B. H.; Khalsa, S. S.; Armstrong, R.

    2007-12-01

    The Global Land Ice Measurements from Space (GLIMS) project has built a geospatial and temporal database of glacier data, composed of glacier outlines and various scalar attributes. These data are being derived primarily from satellite imagery, such as from ASTER and Landsat. Each "snapshot" of a glacier is from a specific time, and the database is designed to store multiple snapshots representative of different times. We have implemented two web-based interfaces to the database; one enables exploration of the data via interactive maps (web map server), while the other allows searches based on text-field constraints. The web map server is an Open Geospatial Consortium (OGC) compliant Web Map Server (WMS) and Web Feature Server (WFS). This means that other web sites can display glacier layers from our site over the Internet, or retrieve glacier features in vector format. All components of the system are implemented using Open Source software: Linux, PostgreSQL, PostGIS (geospatial extensions to the database), MapServer (WMS and WFS), and several supporting components such as Proj.4 (a geographic projection library) and PHP. These tools are robust and provide a flexible and powerful framework for web mapping applications. As a service to the GLIMS community, the database contains metadata on all ASTER imagery acquired over glacierized terrain. Reduced-resolution of the images (browse imagery) can be viewed either as a layer in the MapServer application, or overlaid on the virtual globe within Google Earth. The interactive map application allows the user to constrain by time what data appear on the map. For example, ASTER or glacier outlines from 2002 only, or from Autumn in any year, can be displayed. The system allows users to download their selected glacier data in a choice of formats. The results of a query based on spatial selection (using a mouse) or text-field constraints can be downloaded in any of these formats: ESRI shapefiles, KML (Google Earth), Map
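
    Because the service is a standard OGC WMS, any client can request a rendered glacier layer with an ordinary GetMap URL. The sketch below builds such a request with generic WMS 1.1.1 parameters; the endpoint and layer name are placeholders rather than the exact GLIMS values.

```python
# Illustrative OGC WMS GetMap request; the endpoint and layer name are
# placeholders, while the query parameters themselves are standard WMS 1.1.1.
from urllib.parse import urlencode

params = {
    "SERVICE": "WMS",
    "VERSION": "1.1.1",
    "REQUEST": "GetMap",
    "LAYERS": "glacier_outlines",      # placeholder layer name
    "SRS": "EPSG:4326",
    "BBOX": "-180,-90,180,90",
    "WIDTH": 1024,
    "HEIGHT": 512,
    "FORMAT": "image/png",
}
url = "https://example.org/glims/wms?" + urlencode(params)
print(url)  # paste into a browser or any WMS-capable GIS client
```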

  14. Database Description - RPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information of database. Database name: RPD. Alternative name: Rice Proteome Database. Creator: Setsuko Komatsu, Institute of Crop Science, National Agriculture and Food Research Organization. Database classification: Proteomics Resources; Plant databases - Rice. Organism: Oryza sativa (Taxonomy ID: 4530). Database description: the Rice Proteome Database contains information on proteins … entered in the Rice Proteome Database. The database is searchable by keyword,

  15. Database Description - PLACE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information of database. Database name: PLACE. Creator: National Institute of Agrobiological Sciences (Kannondai, Tsukuba, Ibaraki 305-8602, Japan). Database classification: Plant databases. Organism: Tracheophyta (Taxonomy ID: 58023). Reference: 1999, Vol. 27, No. 1: 297-300. Need for user registration: Not available.

  16. Database Description - JSNP | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information of database. Database name: JSNP. Creator: Japan Science and Technology Agency. Organism: Homo sapiens (Taxonomy ID: 9606). Database description: a database of about 197,000 polymorphisms in the Japanese population. Database maintenance site: Institute of Medical Science… Need for user registration: Not available.

  17. Database Description - RED | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information of database. Database name: RED. Alternative name: Rice Expression Database. Creator: Shoshi Kikuchi (Genome Research Unit). Database classification: Plant databases - Rice; Microarray, Gene Expression. Organism: Oryza sativa (Taxonomy ID: 4530). Reference: "Rice Expression Database: the gateway to rice functional genomics", Trends in Plant Science (2002) Dec 7(12): 563-564.

  18. The "GeneTrustee": a universal identification system that ensures privacy and confidentiality for human genetic databases.

    Science.gov (United States)

    Burnett, Leslie; Barlow-Stewart, Kris; Proos, Anné L; Aizenberg, Harry

    2003-05-01

    This article describes a generic model for access to samples and information in human genetic databases. The model utilises a "GeneTrustee", a third-party intermediary independent of the subjects and of the investigators or database custodians. The GeneTrustee model has been implemented successfully in various community genetics screening programs and has facilitated research access to genetic databases while protecting the privacy and confidentiality of research subjects. The GeneTrustee model could also be applied to various types of non-conventional genetic databases, including neonatal screening Guthrie card collections, and to forensic DNA samples.
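
    Conceptually, the trustee holds the only table linking identities to research tokens, so investigators only ever see the tokens. A minimal sketch of that separation, with invented identifiers and not the published protocol, is shown below.

```python
# Minimal sketch of third-party pseudonymisation in the GeneTrustee spirit:
# only the trustee holds the identity-to-token mapping; researchers see tokens.
import secrets

class GeneTrustee:
    def __init__(self) -> None:
        self._identity_to_token: dict[str, str] = {}  # held only by the trustee

    def token_for(self, subject_id: str) -> str:
        """Return a stable research token for a subject, creating one if needed."""
        if subject_id not in self._identity_to_token:
            self._identity_to_token[subject_id] = secrets.token_hex(8)
        return self._identity_to_token[subject_id]

trustee = GeneTrustee()
token = trustee.token_for("subject-0001")              # invented subject ID
research_record = {"token": token, "genotype": "A/G"}  # no identity attached
print(research_record)
```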

  19. StraPep: a structure database of bioactive peptides

    Science.gov (United States)

    Wang, Jian; Yin, Tailang; Xiao, Xuwen; He, Dan; Xue, Zhidong; Jiang, Xinnong; Wang, Yan

    2018-01-01

    Abstract Bioactive peptides, with a variety of biological activities and wide distribution in nature, have attracted great research interest in biological and medical fields, especially in pharmaceutical industry. The structural information of bioactive peptide is important for the development of peptide-based drugs. Many databases have been developed cataloguing bioactive peptides. However, to our knowledge, database dedicated to collect all the bioactive peptides with known structure is not available yet. Thus, we developed StraPep, a structure database of bioactive peptides. StraPep holds 3791 bioactive peptide structures, which belong to 1312 unique bioactive peptide sequences. About 905 out of 1312 (68%) bioactive peptides in StraPep contain disulfide bonds, which is significantly higher than that (21%) of PDB. Interestingly, 150 out of 616 (24%) bioactive peptides with three or more disulfide bonds form a structural motif known as cystine knot, which confers considerable structural stability on proteins and is an attractive scaffold for drug design. Detailed information of each peptide, including the experimental structure, the location of disulfide bonds, secondary structure, classification, post-translational modification and so on, has been provided. A wide range of user-friendly tools, such as browsing, sequence and structure-based searching and so on, has been incorporated into StraPep. We hope that this database will be helpful for the research community. Database URL: http://isyslab.info/StraPep PMID:29688386

  20. Community health workers and mobile technology: a systematic review of the literature.

    Science.gov (United States)

    Braun, Rebecca; Catalani, Caricia; Wimbush, Julian; Israelski, Dennis

    2013-01-01

    In low-resource settings, community health workers are frontline providers who shoulder the health service delivery burden. Increasingly, mobile technologies are developed, tested, and deployed with community health workers to facilitate tasks and improve outcomes. We reviewed the evidence for the use of mobile technology by community health workers to identify opportunities and challenges for strengthening health systems in resource-constrained settings. We conducted a systematic review of peer-reviewed literature from health, medical, social science, and engineering databases, using PRISMA guidelines. We identified a total of 25 unique full-text research articles on community health workers and their use of mobile technology for the delivery of health services. Community health workers have used mobile tools to advance a broad range of health aims throughout the globe, particularly maternal and child health, HIV/AIDS, and sexual and reproductive health. Most commonly, community health workers use mobile technology to collect field-based health data, receive alerts and reminders, facilitate health education sessions, and conduct person-to-person communication. Programmatic efforts to strengthen health service delivery focus on improving adherence to standards and guidelines, community education and training, and programmatic leadership and management practices. Those studies that evaluated program outcomes provided some evidence that mobile tools help community health workers to improve the quality of care provided, efficiency of services, and capacity for program monitoring. Evidence suggests mobile technology presents promising opportunities to improve the range and quality of services provided by community health workers. Small-scale efforts, pilot projects, and preliminary descriptive studies are increasing, and there is a trend toward using feasible and acceptable interventions that lead to positive program outcomes through operational improvements and

  1. Community health workers and mobile technology: a systematic review of the literature.

    Directory of Open Access Journals (Sweden)

    Rebecca Braun

    Full Text Available In low-resource settings, community health workers are frontline providers who shoulder the health service delivery burden. Increasingly, mobile technologies are developed, tested, and deployed with community health workers to facilitate tasks and improve outcomes. We reviewed the evidence for the use of mobile technology by community health workers to identify opportunities and challenges for strengthening health systems in resource-constrained settings. We conducted a systematic review of peer-reviewed literature from health, medical, social science, and engineering databases, using PRISMA guidelines. We identified a total of 25 unique full-text research articles on community health workers and their use of mobile technology for the delivery of health services. Community health workers have used mobile tools to advance a broad range of health aims throughout the globe, particularly maternal and child health, HIV/AIDS, and sexual and reproductive health. Most commonly, community health workers use mobile technology to collect field-based health data, receive alerts and reminders, facilitate health education sessions, and conduct person-to-person communication. Programmatic efforts to strengthen health service delivery focus on improving adherence to standards and guidelines, community education and training, and programmatic leadership and management practices. Those studies that evaluated program outcomes provided some evidence that mobile tools help community health workers to improve the quality of care provided, efficiency of services, and capacity for program monitoring. Evidence suggests mobile technology presents promising opportunities to improve the range and quality of services provided by community health workers. Small-scale efforts, pilot projects, and preliminary descriptive studies are increasing, and there is a trend toward using feasible and acceptable interventions that lead to positive program outcomes through operational

  2. Andromeda - a peptide search engine integrated into the MaxQuant environment

    DEFF Research Database (Denmark)

    Cox, Jurgen; Neuhauser, Nadin; Michalski, Annette

    2011-01-01

    A key step in mass spectrometry (MS)-based proteomics is the identification of peptides in sequence databases by their fragmentation spectra. Here we describe Andromeda, a novel peptide search engine using a probabilistic scoring model. On proteome data Andromeda performs as well as Mascot, a widely used commercial search engine, as judged by sensitivity and specificity analysis based on target decoy searches. Furthermore, it can handle data with arbitrarily high fragment mass accuracy, is able to assign and score complex patterns of post-translational modifications, such as highly phosphorylated peptides, and accommodates extremely large databases. The algorithms of Andromeda are provided. Andromeda can function independently or as an integrated search engine of the widely used MaxQuant computational proteomics platform and both are freely available at www.maxquant.org. The combination...
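
    A note on the target-decoy evaluation mentioned above: sensitivity and specificity are typically assessed by searching a database that contains both real (target) and reversed or shuffled (decoy) sequences, then choosing a score cutoff at a desired false discovery rate. The minimal Python sketch below illustrates that general idea; it is not Andromeda code, and the data layout is an assumption.

        # Illustrative target-decoy score cutoff (not Andromeda's implementation).
        # psms: list of (score, is_decoy) peptide-spectrum matches.
        def score_threshold_at_fdr(psms, fdr=0.01):
            """Lowest score at which the running decoy/target ratio is still <= fdr."""
            threshold = None
            targets = decoys = 0
            for score, is_decoy in sorted(psms, key=lambda p: p[0], reverse=True):
                decoys += is_decoy
                targets += not is_decoy
                if targets and decoys / targets <= fdr:
                    threshold = score
            return threshold

        example = [(95.2, False), (90.1, False), (88.7, True), (85.0, False), (60.3, True)]
        print(score_threshold_at_fdr(example, fdr=0.5))  # -> 85.0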

  3. MICA: desktop software for comprehensive searching of DNA databases

    Directory of Open Access Journals (Sweden)

    Glick Benjamin S

    2006-10-01

    Full Text Available Abstract Background Molecular biologists work with DNA databases that often include entire genomes. A common requirement is to search a DNA database to find exact matches for a nondegenerate or partially degenerate query. The software programs available for such purposes are normally designed to run on remote servers, but an appealing alternative is to work with DNA databases stored on local computers. We describe a desktop software program termed MICA (K-Mer Indexing with Compact Arrays) that allows large DNA databases to be searched efficiently using very little memory. Results MICA rapidly indexes a DNA database. On a Macintosh G5 computer, the complete human genome could be indexed in about 5 minutes. The indexing algorithm recognizes all 15 characters of the DNA alphabet and fully captures the information in any DNA sequence, yet for a typical sequence of length L, the index occupies only about 2L bytes. The index can be searched to return a complete list of exact matches for a nondegenerate or partially degenerate query of any length. A typical search of a long DNA sequence involves reading only a small fraction of the index into memory. As a result, searches are fast even when the available RAM is limited. Conclusion MICA is suitable as a search engine for desktop DNA analysis software.
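
    As a rough illustration of the k-mer indexing idea described above (not MICA's compact-array implementation, which is far more memory-efficient than a Python dictionary), here is a minimal sketch for exact, nondegenerate queries; the k value and example sequence are made up.

        # Minimal k-mer index sketch: build k-mer -> positions, then verify extensions.
        from collections import defaultdict

        K = 8  # assumed k-mer length for this example

        def build_index(sequence, k=K):
            index = defaultdict(list)
            for i in range(len(sequence) - k + 1):
                index[sequence[i:i + k]].append(i)
            return index

        def find_exact(sequence, index, query, k=K):
            """Return start positions of exact matches of a nondegenerate query (len >= k)."""
            if len(query) < k:
                raise ValueError("query shorter than k")
            hits = []
            for pos in index.get(query[:k], []):
                if sequence[pos:pos + len(query)] == query:
                    hits.append(pos)
            return hits

        genome = "ACGTACGTGGTTACGTACGTAA"
        idx = build_index(genome)
        print(find_exact(genome, idx, "ACGTACGT"))  # -> [0, 12]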

  4. Database Description - ConfC | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description - General information: database name ConfC; contact: Tamotsu Noguchi (Tel: 042-495-8736); database classification: Structure Databases - Protein structure, Structure Databases - Small molecules, Structure Databases - Nucleic acid structure.

  5. ArcGIS 9.3 ed i database spaziali: gli scenari di utilizzo

    Directory of Open Access Journals (Sweden)

    Francesco Bartoli

    2009-03-01

    Full Text Available ArcGIS 9.3 and spatial databases: usage scenarios. The latest news from ESRI suggests that it will soon be possible to link to the PostgreSQL database. This brings the PostGis geometry model together with SDOGEOMETRY - its Oracle counterpart - in a hierarchical, spatial database design, with a direct impact on the PMI review and the business models of local governments. ArcSde would be replaced by Zig-Gis 2.0, providing greater offerings to the GIS community. Harnessing this system will take advantage of human resources to aid in the design of potent conceptual data models. Further funds are still required to promote the product under a prominent license.

  6. Conceptual study of nuclear power generation facilities life-cycle support versatile engineering database. Procedure of development and consideration of fundamental functions

    International Nuclear Information System (INIS)

    Endo, Hidetoshi

    2009-05-01

    The International Atomic Energy Agency (IAEA) has highlighted knowledge-management activities for nuclear safety and the movement to introduce life-cycle management into the quality control of maintenance at nuclear power generation facilities, in order to assure knowledge preservation and the handing down of facility technology. The Japan Atomic Energy Agency (JAEA) also pursues such activities as the preservation of research and development knowledge and related information. If the information and knowledge generated at each step of the facility life cycle can be accumulated, the performance reliability of the facilities can be checked easily by referring to that information and knowledge, drawing on data-processing technology established in general industry and on the results of work on knowledge repositories, knowledge-transfer technology and knowledge management. This report presents a strategy for constructing an engineering database that supports the facility life cycle, together with the basic functions of its management system. (author)

  7. The Development of a Graphical User Interface Engine for the Convenient Use of the HL7 Version 2.x Interface Engine.

    Science.gov (United States)

    Kim, Hwa Sun; Cho, Hune; Lee, In Keun

    2011-12-01

    The Health Level Seven Interface Engine (HL7 IE), developed by Kyungpook National University, has been employed in health information systems; however, users without a programming background have reported difficulties in using it. Therefore, we developed a graphical user interface (GUI) engine to make the use of the HL7 IE more convenient. The GUI engine was directly connected with the HL7 IE to handle HL7 version 2.x messages. Furthermore, the information exchange rules (called the mapping data), represented by a conceptual graph in the GUI engine, were transformed into program objects that were made available to the HL7 IE; the mapping data were stored as binary files for reuse. The usefulness of the GUI engine was examined through information exchange tests between an HL7 version 2.x message and a health information database system. Users could easily create HL7 version 2.x messages by creating a conceptual graph through the GUI engine without requiring assistance from programmers. In addition, time could be saved when creating new information exchange rules by reusing the stored mapping data. The GUI engine was not able to incorporate information types (e.g., extensible markup language, XML) other than HL7 version 2.x messages and the database, because it was designed exclusively for the HL7 IE protocol. However, in future work, by including additional parsers to manage XML-based information such as Continuity of Care Documents (CCD) and Continuity of Care Records (CCR), we plan to ensure that the GUI engine will be more widely applicable in the health field.
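
    For readers unfamiliar with the message format involved, the minimal Python sketch below shows the kind of field-to-column mapping such an engine automates. It is a simplified, hypothetical example: the field addresses, the mapping table and the sample message are assumptions, not the paper's mapping data format.

        # Map pipe-delimited HL7 v2.x fields (e.g. "PID-5.1") to database columns.
        SAMPLE_MSG = ("MSH|^~\\&|HIS|HOSP|LIS|LAB|202401011200||ADT^A01|123|P|2.5\r"
                      "PID|1||9876^^^HOSP||Doe^John")

        def parse_hl7(message):
            segments = {}
            for raw in message.split("\r"):          # segments are CR-separated
                fields = raw.split("|")              # fields are pipe-separated
                segments.setdefault(fields[0], fields)
            return segments

        def get_field(segments, address):
            seg, rest = address.split("-")
            field_no, _, comp_no = rest.partition(".")
            field = segments[seg][int(field_no)]
            return field.split("^")[int(comp_no) - 1] if comp_no else field

        MAPPING = {"patient_id": "PID-3.1", "family_name": "PID-5.1", "given_name": "PID-5.2"}

        row = {col: get_field(parse_hl7(SAMPLE_MSG), addr) for col, addr in MAPPING.items()}
        print(row)  # {'patient_id': '9876', 'family_name': 'Doe', 'given_name': 'John'}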

  8. Report from the 4th Workshop on Extremely Large Databases

    Directory of Open Access Journals (Sweden)

    Jacek Becla

    2011-02-01

    Full Text Available Academic and industrial users are increasingly facing the challenge of petabytes of data, but managing and analyzing such large data sets still remains a daunting task. The 4th Extremely Large Databases workshop was organized to examine the needs of communities facing these issues that were under-represented at past workshops. Approaches to big-data statistical analytics, as well as opportunities related to emerging hardware technologies, were also debated. Writable extreme-scale databases and the science benchmark were discussed. This paper is the final report of the discussions and activities at this workshop.

  9. Do nuclear engineering educators have a special responsibility

    International Nuclear Information System (INIS)

    Weinberg, A.M.

    1977-01-01

    Each 1000 MW(e) reactor in equilibrium contains 15 x 10^9 Ci of radioactivity. To handle this material safely requires an extremely high level of expertise and commitment - in many respects, an expertise that goes beyond what is demanded of any other technology. If one grants that nuclear engineering is more demanding than other engineering because the price of failure is greater, one must ask how we can inculcate into the coming generations of nuclear engineers a full sense of the responsibility they bear in practising their profession. Clearly a first requirement is that all elements of the nuclear community - utility executives, equipment engineers, operating engineers, nuclear engineers, administrators - must recognize and accept the idea that nuclear energy is something special, and that therefore its practitioners must be special. This sense must be instilled into young nuclear engineers during their education. A special responsibility therefore devolves upon nuclear engineering educators: first, to recognize the special character of their profession, and second, to convey this sense to their students. The possibility of institutionalizing this sense of responsibility by establishing a nuclear Hippocratic Oath or special canon of ethics for nuclear engineers ought to be discussed within the nuclear community. (author)

  10. Database management systems understanding and applying database technology

    CERN Document Server

    Gorman, Michael M

    1991-01-01

    Database Management Systems: Understanding and Applying Database Technology focuses on the processes, methodologies, techniques, and approaches involved in database management systems (DBMSs). The book first takes a look at ANSI database standards and DBMS applications and components. The discussion focuses on application components and DBMS components, implementing the dynamic relationship application, problems and benefits of dynamic relationship DBMSs, the nature of a dynamic relationship application, ANSI/NDL, and DBMS standards. The manuscript then ponders on logical database, interrogation, and phy

  11. Object-Oriented Database for Managing Building Modeling Components and Metadata: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Long, N.; Fleming, K.; Brackney, L.

    2011-12-01

    Building simulation enables users to explore and evaluate multiple building designs. When tools for optimization, parametrics, and uncertainty analysis are combined with analysis engines, the sheer number of discrete simulation datasets makes it difficult to keep track of the inputs. The integrity of the input data is critical to designers, engineers, and researchers for code compliance, validation, and building commissioning long after the simulations are finished. This paper discusses an application that stores inputs needed for building energy modeling in a searchable, indexable, flexible, and scalable database to help address the problem of managing simulation input data.

  12. Spent nuclear fuel project cold vacuum drying facility supporting data and calculation database

    Energy Technology Data Exchange (ETDEWEB)

    IRWIN, J.J.

    1999-02-26

    This document provides a database of supporting calculations for the Cold Vacuum Drying Facility (CVDF). The database was developed in conjunction with HNF-SD-SNF-SAR-002, "Safety Analysis Report for the Cold Vacuum Drying Facility", Phase 2, "Supporting Installation of Processing Systems" (Garvin 1998); HNF-SD-SNF-DRD-002, 1997, "Cold Vacuum Drying Facility Design Requirements", Rev. 2; and the CVDF Summary Design Report. The database contains calculation report entries for all process, safety and facility systems in the CVDF, a general CVD operations sequence and the CVDF System Design Descriptions (SDDs). This database has been developed for the SNFP CVDF Engineering Organization and shall be updated, expanded, and revised in accordance with future design, construction and startup phases of the CVDF until the CVDF final ORR is approved.

  13. ProCarDB: a database of bacterial carotenoids.

    Science.gov (United States)

    Nupur, L N U; Vats, Asheema; Dhanda, Sandeep Kumar; Raghava, Gajendra P S; Pinnaka, Anil Kumar; Kumar, Ashwani

    2016-05-26

    Carotenoids have important functions in bacteria, ranging from harvesting light energy to neutralizing oxidants and acting as virulence factors. However, information pertaining to carotenoids is scattered throughout the literature. Furthermore, information about the genes/proteins involved in the biosynthesis of carotenoids has increased tremendously in the post-genomic era. A web server providing information about microbial carotenoids in a structured manner is therefore required and will be a valuable resource for the scientific community working with microbial carotenoids. Here, we have created a manually curated, open-access, comprehensive compilation of bacterial carotenoids named ProCarDB - the Prokaryotic Carotenoid Database. ProCarDB includes 304 unique carotenoids arising from 50 biosynthetic pathways distributed among 611 prokaryotes. ProCarDB provides important information on carotenoids, such as 2D and 3D structures, molecular weight, molecular formula, SMILES, InChI, InChIKey, IUPAC name, KEGG Id, PubChem Id, and ChEBI Id. The database also provides NMR data, UV-vis absorption data, IR data, MS data and HPLC data that play key roles in the identification of carotenoids. An important feature of this database is the extension of biosynthetic pathways from the literature and through the presence of the genes/enzymes in different organisms. The information contained in the database was mined from published literature and databases such as KEGG, PubChem, ChEBI, LipidBank, LPSN, and Uniprot. The database integrates user-friendly browsing and searching with carotenoid analysis tools to help the user. We believe that this database will serve as a major information centre for researchers working on bacterial carotenoids.

  14. VKCDB: Voltage-gated potassium channel database

    Directory of Open Access Journals (Sweden)

    Gallin Warren J

    2004-01-01

    Full Text Available Abstract Background The family of voltage-gated potassium channels comprises a functionally diverse group of membrane proteins. They help maintain and regulate the potassium ion-based component of the membrane potential and are thus central to many critical physiological processes. VKCDB (Voltage-gated potassium [K] Channel DataBase) is a database of structural and functional data on these channels. It is designed as a resource for research on the molecular basis of voltage-gated potassium channel function. Description Voltage-gated potassium channel sequences were identified by using BLASTP to search GENBANK and SWISSPROT. Annotations for all voltage-gated potassium channels were selectively parsed and integrated into VKCDB. Electrophysiological and pharmacological data for the channels were collected from published journal articles. Transmembrane domain predictions by TMHMM and PHD are included for each VKCDB entry. Multiple sequence alignments of conserved domains of channels of the four Kv families and the KCNQ family are also included. Currently VKCDB contains 346 channel entries. It can be browsed and searched using a set of functionally relevant categories. Protein sequences can also be searched using a local BLAST engine. Conclusions VKCDB is a resource for comparative studies of voltage-gated potassium channels. The methods used to construct VKCDB are general; they can be used to create specialized databases for other protein families. VKCDB is accessible at http://vkcdb.biology.ualberta.ca.

  15. A Reaction Database for Small Molecule Pharmaceutical Processes Integrated with Process Information

    Directory of Open Access Journals (Sweden)

    Emmanouil Papadakis

    2017-10-01

    Full Text Available This article describes the development of a reaction database with the objective of collecting data for multiphase reactions involved in small-molecule pharmaceutical processes, with a search engine to retrieve the data needed in investigations of reaction-separation schemes, such as the role of organic solvents in improving reaction performance. The focus of this reaction database is to provide a data-rich environment with process information available to assist the early-stage synthesis of pharmaceutical products. The database is structured in terms of the classification of reaction types; compounds participating in the reaction; use of organic solvents and their function; information for single-step and multistep reactions; target products; reaction conditions and reaction data. Information for reactor scale-up, together with information for the separation and other relevant information for each reaction and its reference, is also available in the database. Additionally, the information retrieved from the database can be evaluated in terms of sustainability using well-known “green” metrics published in the scientific literature. The application of the database is illustrated through the synthesis of ibuprofen, for which data on different reaction pathways have been retrieved from the database and compared using “green” chemistry metrics.
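
    To make the link between stored reaction data and "green" metrics concrete, here is a minimal Python sketch of a reaction record with an E-factor calculation (a commonly used waste-per-product metric); the field names and values are hypothetical and do not reproduce the published database schema.

        # Illustrative reaction record with one common "green" metric:
        # E-factor = (total mass in - product mass) / product mass.
        from dataclasses import dataclass
        from typing import List

        @dataclass
        class ReactionRecord:
            reaction_type: str
            reactants_kg: List[float]
            solvents_kg: List[float]
            product_kg: float
            conditions: str = ""

            def e_factor(self) -> float:
                total_input = sum(self.reactants_kg) + sum(self.solvents_kg)
                return (total_input - self.product_kg) / self.product_kg

        step = ReactionRecord("Friedel-Crafts acylation", [10.0, 4.0], [25.0], 9.0, "80 C, 2 h")
        print(round(step.e_factor(), 2))  # -> 3.33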

  16. A performance study of grid workflow engines

    NARCIS (Netherlands)

    Stratan, C.; Iosup, A.; Epema, D.H.J.

    2008-01-01

    To benefit from grids, scientists require grid workflow engines that automatically manage the execution of inter-related jobs on the grid infrastructure. So far, the workflows community has focused on scheduling algorithms and on interface tools. Thus, while several grid workflow engines have been

  17. Restoring rocky intertidal communities: Lessons from a benthic macroalgal ecosystem engineer.

    Science.gov (United States)

    Bellgrove, Alecia; McKenzie, Prudence F; Cameron, Hayley; Pocklington, Jacqueline B

    2017-04-15

    As coastal population growth increases globally, effective waste management practices are required to protect biodiversity. Water authorities are under increasing pressure to reduce the impact of sewage effluent discharged into the coastal environment and restore disturbed ecosystems. We review the role of benthic macroalgae as ecosystem engineers and focus particularly on the temperate Australasian fucoid Hormosira banksii as a case study for rocky intertidal restoration efforts. Research focussing on the roles of ecosystem engineers is lagging behind restoration research of ecosystem engineers. As such, management decisions are being made without a sound understanding of the ecology of ecosystem engineers. For successful restoration of rocky intertidal shores it is important that we assess the thresholds of engineering traits (discussed herein) and the environmental conditions under which they are important. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. The Microbial Database for Danish wastewater treatment plants with nutrient removal (MiDas-DK) – a tool for understanding activated sludge population dynamics and community stability

    DEFF Research Database (Denmark)

    Mielczarek, Artur Tomasz; Saunders, Aaron Marc; Larsen, Poul

    2013-01-01

    Since 2006 more than 50 Danish full-scale wastewater treatment plants with nutrient removal have been investigated in a project called ‘The Microbial Database for Danish Activated Sludge Wastewater Treatment Plants with Nutrient Removal (MiDas-DK)’. Comprehensive sets of samples have been collected, analyzed and associated with extensive operational data from the plants. The community composition was analyzed by quantitative fluorescence in situ hybridization (FISH) supported by 16S rRNA amplicon sequencing and deep metagenomics. MiDas-DK has been a powerful tool to study the complex activated sludge

  19. Database Description - RMG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description - General information: database name RMG; maintained by the National Institute of Agrobiological Sciences, Ibaraki 305-8602, Japan; database classification: Nucleotide Sequence Databases; organism: Oryza sativa Japonica Group (Taxonomy ID: 39947); reference journal: Mol Genet Genomics (2002) 268: 434–445.

  20. Using Bibliographic Knowledge for Ranking in Scientific Publication Databases

    CERN Document Server

    Vesely, Martin; Le Meur, Jean-Yves

    2008-01-01

    Document ranking for scientific publications involves a variety of specialized resources (e.g. author or citation indexes) that are usually difficult to use within standard general-purpose search engines, which typically operate on large-scale heterogeneous document collections for which the required specialized resources are not always available for all the documents present in the collections. Integrating such resources into specialized information retrieval engines is therefore important to cope with community-specific user expectations that strongly influence the perception of relevance within the considered community. In this perspective, this paper extends the notion of ranking with various methods exploiting different types of bibliographic knowledge that represent a crucial resource for measuring the relevance of scientific publications. In our work, we experimentally evaluated the adequacy of two such ranking methods (one based on freshness, i.e. the publication date, and the other on a novel index, the ...
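
    As an illustration of freshness-based ranking in general (the paper's exact weighting scheme is not reproduced here), the minimal Python sketch below mixes a text-relevance score with an exponentially decaying recency term; the half-life and mixing weight are assumed values.

        from datetime import date

        HALF_LIFE_DAYS = 3 * 365.0   # assumed: freshness halves every three years

        def freshness(pub_date, today):
            age_days = (today - pub_date).days
            return 0.5 ** (age_days / HALF_LIFE_DAYS)

        def combined_score(text_score, pub_date, today, weight=0.3):
            # Linear mix of query/text relevance and publication recency.
            return (1 - weight) * text_score + weight * freshness(pub_date, today)

        today = date(2008, 1, 1)
        for title, score, pub in [("older survey", 0.9, date(2001, 1, 1)),
                                  ("recent paper", 0.7, date(2007, 6, 1))]:
            # With these numbers the fresher paper outranks the older, more relevant one.
            print(title, round(combined_score(score, pub, today), 3))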

  1. Development of a New Web Portal for the Database on Demand Service

    CERN Document Server

    Altinigne, Can Yilmaz

    2017-01-01

    The Database on Demand service allows members of CERN communities to provision and manage database instances of different flavours (MySQL, Oracle, PostgreSQL and InfluxDB). Users can create and edit these instances using the web interface of DB On Demand. This web front end is currently based on Java technologies and the ZK web framework, for which it is generally difficult to find experienced developers and which has fallen behind more modern web stacks in capabilities and usability.

  2. Relational databases

    CERN Document Server

    Bell, D A

    1986-01-01

    Relational Databases explores the major advances in relational databases and provides a balanced analysis of the state of the art in relational databases. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration. The

  3. RaftProt: mammalian lipid raft proteome database.

    Science.gov (United States)

    Shah, Anup; Chen, David; Boda, Akash R; Foster, Leonard J; Davis, Melissa J; Hill, Michelle M

    2015-01-01

    RaftProt (http://lipid-raft-database.di.uq.edu.au/) is a database of mammalian lipid raft-associated proteins as reported in high-throughput mass spectrometry studies. Lipid rafts are specialized membrane microdomains enriched in cholesterol and sphingolipids thought to act as dynamic signalling and sorting platforms. Given their fundamental roles in cellular regulation, there is a plethora of information on the size, composition and regulation of these membrane microdomains, including a large number of proteomics studies. To facilitate the mining and analysis of published lipid raft proteomics studies, we have developed the searchable database RaftProt. In addition to browsing the studies, performing basic queries by protein and gene names, and searching experiments by cell, tissue and organism, we have implemented several advanced features to facilitate data mining. To address the issue of potential bias due to the biochemical preparation procedures used, we have captured the lipid raft preparation methods and implemented an advanced search option for methodology and sample treatment conditions, such as cholesterol depletion. Furthermore, we have identified a list of high-confidence proteins and enabled searching only from this list of likely bona fide lipid raft proteins. Given the apparent biological importance of lipid rafts and their associated proteins, this database would constitute a key resource for the scientific community. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  4. Fluids engineering

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    Fluids engineering has played an important role in many applications, from ancient flood control to the design of high-speed compact turbomachinery. New applications of fluids engineering, such as high-technology materials processing, biotechnology, and advanced combustion systems, have kept up unwaning interest in the subject. More accurate and sophisticated computational and measurement techniques are also constantly being developed and refined. On a more fundamental level, nonlinear dynamics and chaotic behavior of fluid flow are no longer an intellectual curiosity, and fluids engineers are increasingly interested in finding practical applications for these emerging sciences. Applications of fluid technology to new areas, as well as the need to improve the design and to enhance the flexibility and reliability of flow-related machines and devices, will continue to spur interest in fluids engineering. The objectives of the present seminar were: to exchange current information on the art, science, and technology of fluids engineering; to promote scientific cooperation between the fluids engineering communities of both nations; and to provide an opportunity for the participants and their colleagues to explore possible joint research programs in topics of high priority and mutual interest to both countries. The Seminar provided an excellent forum for reviewing the current state and future needs of fluids engineering for the two nations. With the Seminar marking the first formal scientific exchange between Korea and the United States in the area of fluids engineering, the scope was deliberately left broad and general

  5. Database Description - DGBY | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description - General information: database name DGBY (Database for Gene expression of Baker's Yeast); contact TEL: +81-29-838-8066; database classification: Microarray Data and other Gene Expression Databases; organism: Saccharomyces cerevisiae (Taxonomy ID: 4932); database description: genome-wide gene expression data (so-called phenomics) uploaded to this website; reference journal: Yeast. 2008 Mar;25(3):179-90.

  6. Database Description - KOME | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description - General information: database name KOME; contact: Shoshi Kikuchi, Plant Genome Research Unit, National Institute of Agrobiological Sciences; database classification: Plant databases - Rice; organism: Oryza sativa (Taxonomy ID: 4530); reference journal: PLoS One. 2007 Nov 28; 2(11):e1235; related resources: Rice mutant panel database (Tos17), A Database of Plant Cis-acting Regulatory DNA Elements.

  7. Deep Time Data Infrastructure: Integrating Our Current Geologic and Biologic Databases

    Science.gov (United States)

    Kolankowski, S. M.; Fox, P. A.; Ma, X.; Prabhu, A.

    2016-12-01

    As our knowledge of Earth's geologic and mineralogical history grows, we require more efficient methods of sharing immense amounts of data. Databases across numerous disciplines have been utilized to offer extensive information on very specific epochs of Earth's history up to its current state, e.g. the fossil record, rock composition, proteins, etc. These databases could be a powerful force in identifying previously unseen correlations such as relationships between minerals and proteins. Creating a unifying site that provides a portal to these databases will aid our ability as a collaborative scientific community to utilize our findings more effectively. The Deep-Time Data Infrastructure (DTDI) is currently being defined as part of a larger effort to accomplish this goal. DTDI will not be a new database, but an integration of existing resources. Current geologic and related databases were identified, documentation of their schemas was established, and this will be presented as a stage-by-stage progression. Through conceptual modeling focused on variables from their combined records, we will determine the best way to integrate these databases using common factors. The Deep-Time Data Infrastructure will allow geoscientists to bridge gaps in data and further our understanding of our Earth's history.

  8. Ceramics Technology Project database: September 1991 summary report

    Energy Technology Data Exchange (ETDEWEB)

    Keyes, B.L.P.

    1992-06-01

    The piston ring-cylinder liner area of the internal combustion engine must withstand very high temperature gradients, highly corrosive environments, and constant friction. Improving the efficiency of the engine requires ring and cylinder liner materials that can survive this abusive environment and lubricants that resist decomposition at elevated temperatures. Wear and friction tests have been done on many material combinations in environments similar to actual use to find the right materials for the situation. This report covers tribology information produced from 1986 through July 1991 by Battelle Columbus Laboratories, Caterpillar Inc., and Cummins Engine Company, Inc. for the Ceramic Technology Project (CTP). All data in this report were taken from the project's semiannual and bimonthly progress reports and cover base materials, coatings, and lubricants. The data, including test rig descriptions and material characterizations, are stored in the CTP database and are available to all project participants on request. The objective of this report is to make available the test results from these studies, but not to draw conclusions from these data.

  9. Computational methods in earthquake engineering

    CERN Document Server

    Plevris, Vagelis; Lagaros, Nikos

    2017-01-01

    This is the third book in a series on Computational Methods in Earthquake Engineering. The purpose of this volume is to bring together the scientific communities of Computational Mechanics and Structural Dynamics, offering a wide coverage of timely issues in contemporary Earthquake Engineering. This volume will facilitate the exchange of ideas in topics of mutual interest and can serve as a platform for establishing links between research groups with complementary activities. The computational aspects are emphasized in order to address difficult engineering problems of great social and economic importance.

  10. Evaluating Air Force Civil Engineer's Current Automated Information Systems

    National Research Council Canada - National Science Library

    Phillips, Edward

    2002-01-01

    ...) to the Automated Civil Engineer System (ACES). This research focused on users' perceptions of both database and data importance to determine if significant differences existed between various user sub-groups...

  11. Engineering geological mapping in Wallonia (Belgium) : present state and recent computerized approach

    Science.gov (United States)

    Delvoie, S.; Radu, J.-P.; Ruthy, I.; Charlier, R.

    2012-04-01

    An engineering geological map can be defined as a geological map with a generalized representation of all the components of a geological environment that are strongly required for the spatial planning, design, construction and maintenance of civil engineering works. In Wallonia (Belgium), 24 engineering geological maps were developed between the 70s and the 90s at 1/5,000 or 1/10,000 scale, covering some areas of the most industrialized and urbanized cities (Liège, Charleroi and Mons). They were based on soil and subsoil data points (borings, drillings, penetration tests, geophysical tests, outcrops…). Some of the displayed data present the depth (with isoheights) or the thickness (with isopachs) of the different subsoil layers up to about 50 m depth. Information about the geomechanical properties of each subsoil layer, useful for engineers and urban planners, is also synthesized. However, these maps were produced only on paper and progressively needed to be updated with new soil and subsoil data. The Public Service of Wallonia and the University of Liège have recently initiated a study to evaluate the feasibility of developing engineering geological mapping with a computerized approach. Numerous and varied data (about soil and subsoil) are stored in a georelational database (the geotechnical database - using Access, Microsoft®). All the data are geographically referenced. The database is linked to a GIS project (using ArcGIS, ESRI®). Together, the database and the GIS project constitute a powerful tool for spatial data management and analysis. This approach involves a methodology using interpolation methods to update the previous maps and to extend the coverage to new areas. The location (x, y, z) of each subsoil layer is then computed from the data points. The geomechanical data of these layers are synthesized in an explanatory booklet accompanying the maps.
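
    As an illustration of the kind of interpolation step mentioned above (the project's actual interpolation methods are not specified in this abstract), here is a minimal inverse-distance-weighting sketch in Python for estimating a layer depth between boreholes; the coordinates and depths are made up.

        # Inverse-distance-weighted (IDW) estimate of subsoil layer depth at (x, y).
        import math

        def idw_depth(boreholes, x, y, power=2.0):
            """boreholes: list of (x, y, depth) observations from point data."""
            num = den = 0.0
            for bx, by, depth in boreholes:
                d = math.hypot(x - bx, y - by)
                if d == 0.0:
                    return depth  # query point coincides with a borehole
                w = 1.0 / d ** power
                num += w * depth
                den += w
            return num / den

        holes = [(0.0, 0.0, 12.5), (100.0, 0.0, 14.0), (0.0, 100.0, 11.0)]
        print(round(idw_depth(holes, 40.0, 30.0), 2))  # depth estimate at an unsampled point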

  12. An open-source, mobile-friendly search engine for public medical knowledge.

    Science.gov (United States)

    Samwald, Matthias; Hanbury, Allan

    2014-01-01

    The World Wide Web has become an important source of information for medical practitioners. To complement the capabilities of currently available web search engines we developed FindMeEvidence, an open-source, mobile-friendly medical search engine. In a preliminary evaluation, the quality of results from FindMeEvidence proved to be competitive with those from TRIP Database, an established, closed-source search engine for evidence-based medicine.

  13. Automated knowledge base development from CAD/CAE databases

    Science.gov (United States)

    Wright, R. Glenn; Blanchard, Mary

    1988-01-01

    Knowledge base development requires a substantial investment in time, money, and resources in order to capture the knowledge and information necessary for anything other than trivial applications. This paper addresses a means to integrate the design and knowledge base development process through automated knowledge base development from CAD/CAE databases and files. Benefits of this approach include the development of a more efficient means of knowledge engineering, resulting in the timely creation of large knowledge based systems that are inherently free of error.

  14. Effect of engineered environment on microbial community structure in biofilter and biofilm on reverse osmosis membrane.

    Science.gov (United States)

    Jeong, Sanghyun; Cho, Kyungjin; Jeong, Dawoon; Lee, Seockheon; Leiknes, TorOve; Vigneswaran, Saravanamuthu; Bae, Hyokwan

    2017-11-01

    Four dual media filters (DMFs) were operated in a biofiltration mode with different engineered environments (DMF I and II: coagulation with/without acidification and DMF III and IV: without/with chlorination). Designed biofilm enrichment reactors (BERs), containing the removable reverse osmosis (RO) coupons, were connected at the end of the DMFs in parallel to analyze the biofilm formed on the RO membrane by the DMF effluents. Filtration performances were evaluated in terms of dissolved organic carbon (DOC) and assimilable organic carbon (AOC). Organic foulants on the RO membrane were also quantified and fractionized. The bacterial community structures in liquid (seawater and effluent) and biofilm (DMF and RO) samples were analyzed using 454-pyrosequencing. The DMF IV fed with the chlorinated seawater demonstrated the highest reductions of DOC including LMW-N as well as AOC among the other DMFs. The DMF IV was also effective in reducing organic foulants on the RO membrane surface. The bacterial community structure was grouped according to the sample phase (i.e., liquid and biofilm samples), sampling location (i.e., DMF and RO samples), and chlorination (chlorinated and non-chlorinated samples). In particular, the biofilm community in the DMF IV differed from the other DMF treatments, suggesting that chlorination exerted a stronger selective pressure than pH adjustment or coagulation on the biofilm community. In the DMF IV, several chemoorganotrophic chlorine-resistant biofilm-forming bacteria such as Hyphomonas, Erythrobacter, and Sphingomonas were predominant, and they may enhance organic carbon degradation efficiency. Diverse halophilic or halotolerant organic degraders were also found in other DMFs (i.e., DMF I, II, and III). Various kinds of dominant biofilm-forming bacteria were also investigated in RO membrane samples; the results provided possible candidates that cause biofouling when the DMF process is applied as the pretreatment option for the RO process. Copyright

  15. Effect of engineered environment on microbial community structure in biofilter and biofilm on reverse osmosis membrane

    KAUST Repository

    Jeong, Sanghyun

    2017-07-25

    Four dual media filters (DMFs) were operated in a biofiltration mode with different engineered environments (DMF I and II: coagulation with/without acidification and DMF III and IV: without/with chlorination). Designed biofilm enrichment reactors (BERs), containing the removable reverse osmosis (RO) coupons, were connected at the end of the DMFs in parallel to analyze the biofilm formed on the RO membrane by the DMF effluents. Filtration performances were evaluated in terms of dissolved organic carbon (DOC) and assimilable organic carbon (AOC). Organic foulants on the RO membrane were also quantified and fractionized. The bacterial community structures in liquid (seawater and effluent) and biofilm (DMF and RO) samples were analyzed using 454-pyrosequencing. The DMF IV fed with the chlorinated seawater demonstrated the highest reductions of DOC including LMW-N as well as AOC among the other DMFs. The DMF IV was also effective in reducing organic foulants on the RO membrane surface. The bacterial community structure was grouped according to the sample phase (i.e., liquid and biofilm samples), sampling location (i.e., DMF and RO samples), and chlorination (chlorinated and non-chlorinated samples). In particular, the biofilm community in the DMF IV differed from the other DMF treatments, suggesting that chlorination exerted a stronger selective pressure than pH adjustment or coagulation on the biofilm community. In the DMF IV, several chemoorganotrophic chlorine-resistant biofilm-forming bacteria such as Hyphomonas, Erythrobacter, and Sphingomonas were predominant, and they may enhance organic carbon degradation efficiency. Diverse halophilic or halotolerant organic degraders were also found in other DMFs (i.e., DMF I, II, and III). Various kinds of dominant biofilm-forming bacteria were also investigated in RO membrane samples; the results provided possible candidates that cause biofouling when the DMF process is applied as the pretreatment option for the RO process.

  16. Efficient hemodynamic event detection utilizing relational databases and wavelet analysis

    Science.gov (United States)

    Saeed, M.; Mark, R. G.

    2001-01-01

    Development of a temporal query framework for time-oriented medical databases has hitherto been a challenging problem. We describe a novel method for the detection of hemodynamic events in multiparameter trends utilizing wavelet coefficients in a MySQL relational database. Storage of the wavelet coefficients allowed for a compact representation of the trends, and provided robust descriptors for the dynamics of the parameter time series. A data model was developed to allow for simplified queries along several dimensions and time scales. Of particular importance, the data model and wavelet framework allowed for queries to be processed with minimal table-join operations. A web-based search engine was developed to allow for user-defined queries. Typical queries required between 0.01 and 0.02 seconds, with at least two orders of magnitude improvement in speed over conventional queries. This powerful and innovative structure will facilitate research on large-scale time-oriented medical databases.
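
    A minimal sketch of the general storage idea follows, using SQLite instead of MySQL and assuming the PyWavelets package is available; it is not the authors' schema. The trend is decomposed into multi-scale wavelet coefficients, which are stored row-wise so that queries can address individual scales.

        import sqlite3
        import pywt  # PyWavelets, assumed installed

        # A short heart-rate trend with one abrupt rise and one abrupt drop.
        signal = [72, 71, 73, 74, 75, 92, 91, 90, 89, 88, 74, 73, 72, 71, 70, 69]
        coeffs = pywt.wavedec(signal, "haar", level=2)   # [approximation, coarse detail, fine detail]

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE wavelet (signal_id TEXT, level INTEGER, idx INTEGER, coeff REAL)")
        for level, arr in enumerate(coeffs):             # level 0 = approximation
            con.executemany("INSERT INTO wavelet VALUES (?, ?, ?, ?)",
                            [("hr-001", level, i, float(c)) for i, c in enumerate(arr)])

        # Large detail coefficients (level > 0) flag abrupt changes in the trend.
        rows = con.execute("SELECT level, idx, coeff FROM wavelet "
                           "WHERE signal_id = 'hr-001' AND level > 0 "
                           "ORDER BY ABS(coeff) DESC LIMIT 3").fetchall()
        print(rows)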

  17. Databases

    Directory of Open Access Journals (Sweden)

    Nick Ryan

    2004-01-01

    Full Text Available Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.

  18. Improvement of Engineering Work Efficiency through System Integration

    International Nuclear Information System (INIS)

    Lee, Sangdae; Jo, Sunghan; Hyun, Jinwoo

    2016-01-01

    This paper presents the concept of developing an integrated engineering system for ER to improve efficiency and utilization of engineering system. Each process including computer system and database was introduced separately by each department at that different time. Each engineering process has a close relation with other engineering processes. The introduction of processes in a different time has caused the several problems such as lack of interrelationship between engineering processes, lack of integration fleet-wide statistical data, lack of the function of data comparison among plants and increase of access time by different access location on internet. These problems have caused inefficiency of engineering system utilization to get proper information and degraded engineering system utilization. KHNP has introduced and conducted advanced engineering processes to maintain equipment effectively in a highly reliable condition since 2000s. But engineering systems for process implementation have been developed in each department at a different time. This has caused the problems of process inefficiency and data discordance. Integrated Engineering System(IES) to integrate dispersed engineering processes will improve work efficiency and utilization of engineering system because integration system would enable engineer to get total engineering information easily and do engineering work efficiently

  19. Improvement of Engineering Work Efficiency through System Integration

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sangdae; Jo, Sunghan; Hyun, Jinwoo [KHNP CRI, Daejeon (Korea, Republic of)

    2016-10-15

    This paper presents the concept of developing an integrated engineering system for ER to improve efficiency and utilization of engineering system. Each process including computer system and database was introduced separately by each department at that different time. Each engineering process has a close relation with other engineering processes. The introduction of processes in a different time has caused the several problems such as lack of interrelationship between engineering processes, lack of integration fleet-wide statistical data, lack of the function of data comparison among plants and increase of access time by different access location on internet. These problems have caused inefficiency of engineering system utilization to get proper information and degraded engineering system utilization. KHNP has introduced and conducted advanced engineering processes to maintain equipment effectively in a highly reliable condition since 2000s. But engineering systems for process implementation have been developed in each department at a different time. This has caused the problems of process inefficiency and data discordance. Integrated Engineering System(IES) to integrate dispersed engineering processes will improve work efficiency and utilization of engineering system because integration system would enable engineer to get total engineering information easily and do engineering work efficiently.

  20. Ontology to relational database transformation for web application development and maintenance

    Science.gov (United States)

    Mahmudi, Kamal; Inggriani Liem, M. M.; Akbar, Saiful

    2018-03-01

    An ontology is used for knowledge representation, while a database is used to record facts in a KMS (Knowledge Management System). In most applications, data are managed in a database system and updated through the application, and they are then transformed into knowledge as needed. Once a domain conceptor defines the knowledge in the ontology, the application and database can be generated from the ontology. Most existing frameworks generate the application from its database. In this research, the ontology is used to generate the application. As the data are updated through the application, a mechanism is designed to trigger an update to the ontology so that the application can be rebuilt based on the newest ontology. With this approach, a knowledge engineer has full flexibility to renew the application based on the latest ontology without depending on a software developer. In many cases, the concept needs to be updated when the data change. The framework was built and tested in a Spring Java environment. A case study was conducted to prove the concept.
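
    A minimal sketch of the class-to-table direction of such a transformation is shown below, assuming the rdflib package is available (the paper's framework, which also regenerates the application itself and runs on Spring/Java, is not reproduced here); the tiny ontology and type mapping are invented for illustration.

        from rdflib import Graph, Namespace
        from rdflib.namespace import OWL, RDF, RDFS, XSD

        EX = Namespace("http://example.org/kms#")
        g = Graph()
        g.add((EX.Employee, RDF.type, OWL.Class))
        for prop, rng in ((EX.name, XSD.string), (EX.salary, XSD.integer)):
            g.add((prop, RDF.type, OWL.DatatypeProperty))
            g.add((prop, RDFS.domain, EX.Employee))
            g.add((prop, RDFS.range, rng))

        SQL_TYPE = {XSD.string: "TEXT", XSD.integer: "INTEGER"}

        def local(uri):
            return uri.split("#")[-1]                    # strip the namespace part

        def ddl_for(graph):
            """One CREATE TABLE per OWL class; one column per datatype property."""
            statements = []
            for cls in graph.subjects(RDF.type, OWL.Class):
                cols = ["id INTEGER PRIMARY KEY"]
                for prop in graph.subjects(RDFS.domain, cls):
                    sql_type = SQL_TYPE.get(graph.value(prop, RDFS.range), "TEXT")
                    cols.append(f"{local(prop)} {sql_type}")
                statements.append(f"CREATE TABLE {local(cls)} ({', '.join(cols)});")
            return statements

        print("\n".join(ddl_for(g)))   # column order may vary between runs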