WorldWideScience

Sample records for model organism databases

  1. The Zebrafish Model Organism Database (ZFIN)

    Data.gov (United States)

    U.S. Department of Health & Human Services — ZFIN serves as the zebrafish model organism database. It aims to: a) be the community database resource for the laboratory use of zebrafish, b) develop and support...

  2. Xanthusbase: adapting wikipedia principles to a model organism database.

    Science.gov (United States)

    Arshinoff, Bradley I; Suen, Garret; Just, Eric M; Merchant, Sohel M; Kibbe, Warren A; Chisholm, Rex L; Welch, Roy D

    2007-01-01

    xanthusBase (http://www.xanthusbase.org) is the official model organism database (MOD) for the social bacterium Myxococcus xanthus. In many respects, M. xanthus represents the pioneer model organism (MO) for studying the genetic, biochemical, and mechanistic basis of prokaryotic multicellularity, a topic that has garnered considerable attention due to the significance of biofilms in both basic and applied microbiology research. To facilitate its utility, the design of xanthusBase incorporates open-source software, leveraging the cumulative experience made available through the Generic Model Organism Database (GMOD) project, MediaWiki (http://www.mediawiki.org), and dictyBase (http://www.dictybase.org), to create a MOD that is both highly useful and easily navigable. In addition, we have incorporated a unique Wikipedia-style curation model which exploits the internet's inherent interactivity, thus enabling M. xanthus and other myxobacterial researchers to contribute directly toward the ongoing genome annotation.

  3. IntPath--an integrated pathway gene relationship database for model organisms and important pathogens.

    Science.gov (United States)

    Zhou, Hufeng; Jin, Jingjing; Zhang, Haojun; Yi, Bo; Wozniak, Michal; Wong, Limsoon

    2012-01-01

    Pathway data are important for understanding the relationship between genes, proteins and many other molecules in living organisms. Pathway gene relationships are crucial information for guidance, prediction, reference and assessment in biochemistry, computational biology, and medicine. Many well-established databases--e.g., KEGG, WikiPathways, and BioCyc--are dedicated to collecting pathway data for public access. However, the effectiveness of these databases is hindered by issues such as incompatible data formats, inconsistent molecular representations, inconsistent molecular relationship representations, inconsistent referrals to pathway names, and incomplete data coverage across databases. In this paper, we overcome these issues through extraction, normalization and integration of pathway data from several major public databases (KEGG, WikiPathways, BioCyc, etc.). We build a database that not only hosts our integrated pathway gene relationship data for public access but also maintains the necessary updates in the long run. This public repository is named IntPath (Integrated Pathway gene relationship database for model organisms and important pathogens). Four organisms--S. cerevisiae, M. tuberculosis H37Rv, H. sapiens and M. musculus--are included in this version (V2.0) of IntPath. IntPath uses the "full unification" approach to ensure no deletion and no introduced noise in this process. Therefore, IntPath contains much richer pathway-gene and pathway-gene pair relationships and a much larger number of non-redundant genes and gene pairs than any of the single-source databases. The gene relationships of each gene (measured by average node degree) per pathway are significantly richer. The gene relationships in each pathway (measured by average number of gene pairs per pathway) are also considerably richer in the integrated pathways. Moderate manual curation is involved to remove errors and noise from the source data (e.g., the gene ID errors in WikiPathways and ...
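
    The two richness metrics the abstract names (average node degree per pathway, and gene pairs per pathway) are easy to make concrete. The sketch below is purely illustrative, not IntPath's code, and the demo pathway and gene names are invented:

```python
# Illustrative sketch (not IntPath's implementation): compute the two
# pathway-richness metrics mentioned above from pathway gene-pair lists.
from collections import defaultdict

def pathway_metrics(pathways):
    """pathways: dict mapping pathway name -> set of (geneA, geneB) pairs."""
    metrics = {}
    for name, pairs in pathways.items():
        degree = defaultdict(int)
        for a, b in pairs:       # each undirected pair adds one edge
            degree[a] += 1
            degree[b] += 1
        genes = len(degree)
        avg_degree = sum(degree.values()) / genes if genes else 0.0
        metrics[name] = {"genes": genes,
                         "gene_pairs": len(pairs),
                         "avg_node_degree": avg_degree}
    return metrics

# Hypothetical demo pathway with three gene pairs.
demo = {"glycolysis": {("HK1", "GPI"), ("GPI", "PFKL"), ("PFKL", "ALDOA")}}
print(pathway_metrics(demo))
```

    A unified database would simply yield larger pair sets per pathway, and hence larger values of both metrics, which is what the abstract reports.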

  4. Classical databases and knowledge organization

    DEFF Research Database (Denmark)

    Hjørland, Birger

    2015-01-01

    This paper considers classical bibliographic databases based on the Boolean retrieval model (such as MEDLINE and PsycInfo). This model is challenged by modern search engines and information retrieval (IR) researchers, who often consider Boolean retrieval a less efficient approach. The paper ... examines this claim and argues for the continued value of Boolean systems, which suggests two further considerations: (1) the important role of human expertise in searching (expert searchers and “information literate” users) and (2) the role of library and information science and knowledge organization (KO) ... implications for the maintenance of information science and KO as research fields, as well as for the information profession as a profession in its own right. ...

  5. Combining next-generation sequencing and online databases for microsatellite development in non-model organisms.

    Science.gov (United States)

    Rico, Ciro; Normandeau, Eric; Dion-Côté, Anne-Marie; Rico, María Inés; Côté, Guillaume; Bernatchez, Louis

    2013-12-03

    Next-generation sequencing (NGS) is revolutionising marker development and the rapidly increasing amount of transcriptomes published across a wide variety of taxa is providing valuable sequence databases for the identification of genetic markers without the need to generate new sequences. Microsatellites are still the most important source of polymorphic markers in ecology and evolution. Motivated by our long-term interest in the adaptive radiation of a non-model species complex of whitefishes (Coregonus spp.), in this study, we focus on microsatellite characterisation and multiplex optimisation using transcriptome sequences generated by Illumina® and Roche-454, as well as online databases of Expressed Sequence Tags (EST) for the study of whitefish evolution and demographic history. We identified and optimised 40 polymorphic loci in multiplex PCR reactions and validated the robustness of our analyses by testing several population genetics and phylogeographic predictions using 494 fish from five lakes and 2 distinct ecotypes.
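
    Microsatellite characterisation from transcriptome or EST sequences typically starts with a repeat scan. The sketch below is a generic illustration of that first step, not the pipeline used in the paper; the example sequence is invented:

```python
import re

# Generic microsatellite scan (illustrative, not the paper's pipeline):
# find motifs of 2-6 bp tandemly repeated at least 5 times.
MICROSAT = re.compile(r"(([ACGT]{2,6}?)\2{4,})")

def find_microsatellites(seq):
    hits = []
    for match in MICROSAT.finditer(seq.upper()):
        repeat, motif = match.group(1), match.group(2)
        hits.append({"motif": motif,
                     "copies": len(repeat) // len(motif),
                     "start": match.start()})
    return hits

# Invented test sequence: an (AC)n dinucleotide and a (CAG)n trinucleotide.
seq = "TTGAC" + "AC" * 8 + "GGAT" + "CAG" * 6 + "TT"
for hit in find_microsatellites(seq):
    print(hit)
```

    Real pipelines add flanking-sequence checks so that PCR primers can be designed around each locus before multiplex optimisation.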

  6. Combining next-generation sequencing and online databases for microsatellite development in non-model organisms

    Science.gov (United States)

    Rico, Ciro; Normandeau, Eric; Dion-Côté, Anne-Marie; Rico, María Inés; Côté, Guillaume; Bernatchez, Louis

    2013-01-01

    Next-generation sequencing (NGS) is revolutionising marker development and the rapidly increasing amount of transcriptomes published across a wide variety of taxa is providing valuable sequence databases for the identification of genetic markers without the need to generate new sequences. Microsatellites are still the most important source of polymorphic markers in ecology and evolution. Motivated by our long-term interest in the adaptive radiation of a non-model species complex of whitefishes (Coregonus spp.), in this study, we focus on microsatellite characterisation and multiplex optimisation using transcriptome sequences generated by Illumina® and Roche-454, as well as online databases of Expressed Sequence Tags (EST) for the study of whitefish evolution and demographic history. We identified and optimised 40 polymorphic loci in multiplex PCR reactions and validated the robustness of our analyses by testing several population genetics and phylogeographic predictions using 494 fish from five lakes and 2 distinct ecotypes. PMID:24296905

  7. Immediate dissemination of student discoveries to a model organism database enhances classroom-based research experiences.

    Science.gov (United States)

    Wiley, Emily A; Stover, Nicholas A

    2014-01-01

    Use of inquiry-based research modules in the classroom has soared over recent years, largely in response to national calls for teaching that provides experience with scientific processes and methodologies. To increase the visibility of in-class studies among interested researchers and to strengthen their impact on student learning, we have extended the typical model of inquiry-based labs to include a means for targeted dissemination of student-generated discoveries. This initiative required: 1) creating a set of research-based lab activities with the potential to yield results that a particular scientific community would find useful and 2) developing a means for immediate sharing of student-generated results. Working toward these goals, we designed guides for course-based research aimed to fulfill the need for functional annotation of the Tetrahymena thermophila genome, and developed an interactive Web database that links directly to the official Tetrahymena Genome Database for immediate, targeted dissemination of student discoveries. This combination of research via the course modules and the opportunity for students to immediately "publish" their novel results on a Web database actively used by outside scientists culminated in a motivational tool that enhanced students' efforts to engage the scientific process and pursue additional research opportunities beyond the course.

  8. Croatian Cadastre Database Modelling

    Directory of Open Access Journals (Sweden)

    Zvonko Biljecki

    2013-04-01

    The Cadastral Data Model has been developed as part of a larger programme to improve the products and production environment of the Croatian Cadastral Service of the State Geodetic Administration (SGA). The goal of the project was to create a cadastral data model conforming to the relevant standards and specifications in the field of geoinformation (GI) adopted by the international organisations for standardisation competent for GI (ISO TC211 and OpenGIS), and its implementations. The main guidelines during the project were object-oriented conceptual modelling of the updated users' requests and a "new" cadastral data model designed by the SGA - Faculty of Geodesy - Geofoto LLC project team. The UML of the conceptual model is given for all feature categories and is described only at the class level. The next step was the UML technical model, which was developed from the UML conceptual model. The technical model integrates the different UML schemas into one unified schema. XML (eXtensible Markup Language) was applied for the XML description of the UML models, and the XML schema was then transferred into a GML (Geography Markup Language) application schema. With this procedure we have completely described the behaviour of each cadastral feature and the rules for the transfer and storage of cadastral features into the database.

  9. HIERARCHICAL ORGANIZATION OF INFORMATION, IN RELATIONAL DATABASES

    Directory of Open Access Journals (Sweden)

    Demian Horia

    2008-05-01

    In this paper I present different types of representation of hierarchical information inside a relational database, and compare them to find the best organization for specific scenarios.
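
    One representation commonly compared in this literature is the adjacency list, where each row stores a pointer to its parent and subtrees are retrieved with a recursive query. A minimal sketch (not from the paper; the table and data are invented), using SQLite's recursive CTE support:

```python
import sqlite3

# Adjacency-list representation of a hierarchy in a relational table,
# traversed with a recursive CTE (SQLite 3.8.3+).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE category (id INTEGER PRIMARY KEY, name TEXT, parent_id INTEGER)")
con.executemany("INSERT INTO category VALUES (?, ?, ?)", [
    (1, "root", None),
    (2, "hardware", 1),
    (3, "software", 1),
    (4, "databases", 3),
])
rows = con.execute("""
    WITH RECURSIVE subtree(id, name, depth) AS (
        SELECT id, name, 0 FROM category WHERE id = 1
        UNION ALL
        SELECT c.id, c.name, s.depth + 1
        FROM category c JOIN subtree s ON c.parent_id = s.id
    )
    SELECT name, depth FROM subtree ORDER BY depth, name
""").fetchall()
print(rows)
```

    Alternatives such as nested sets or materialized paths trade cheaper subtree reads for more expensive updates, which is exactly the kind of scenario-dependent comparison the paper describes.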

  10. HIERARCHICAL ORGANIZATION OF INFORMATION, IN RELATIONAL DATABASES

    OpenAIRE

    Demian Horia

    2008-01-01

    In this paper I present different types of representation of hierarchical information inside a relational database, and compare them to find the best organization for specific scenarios.

  11. Integration of an Evidence Base into a Probabilistic Risk Assessment Model. The Integrated Medical Model Database: An Organized Evidence Base for Assessing In-Flight Crew Health Risk and System Design

    Science.gov (United States)

    Saile, Lynn; Lopez, Vilma; Bickham, Grandin; FreiredeCarvalho, Mary; Kerstman, Eric; Byrne, Vicky; Butler, Douglas; Myers, Jerry; Walton, Marlei

    2011-01-01

    This slide presentation reviews the Integrated Medical Model (IMM) database, an organized evidence base for assessing in-flight crew health risk. The database is a relational database accessible to many users. It quantifies the model inputs through a ranking based on the highest value of the data, the Level of Evidence (LOE), together with a Quality of Evidence (QOE) score that assesses the evidence base for each medical condition. The IMM evidence base has already been able to provide invaluable information for designers, and for other uses.

  12. Building a Database for a Quantitative Model

    Science.gov (United States)

    Kahn, C. Joseph; Kleinhammer, Roger

    2014-01-01

    A database can greatly benefit a quantitative analysis. The defining characteristic of a quantitative risk, or reliability, model is the use of failure-estimate data. Models can easily contain a thousand Basic Events, relying on hundreds of individual data sources. Obviously, entering so much data by hand will eventually lead to errors. Less obviously, entering data this way does not help link the Basic Events to their data sources. The best way to organize large amounts of data on a computer is with a database, but a model does not require a large, enterprise-level database with dedicated developers and administrators; a database built in Excel can be quite sufficient. A simple spreadsheet database can link every Basic Event to the individual data source selected for it. This database can also contain the manipulations appropriate for how the data are used in the model. These manipulations include stressing factors based on use and maintenance cycles, dormancy, unique failure modes, the modeling of multiple items as a single "super component" Basic Event, and Bayesian updating based on flight and testing experience. A simple, unique metadata field in both the model and the database provides a link from any Basic Event in the model to its data source and all relevant calculations. The credibility of the entire model often rests on the credibility and traceability of the data.
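
    The linking-plus-updating idea can be sketched in a few lines. This is a hedged illustration, not the SEL/NASA tooling: the source keys, event names, and prior parameters below are all invented, and the Bayesian step uses the standard conjugate Gamma prior for a Poisson failure rate:

```python
# Hypothetical sketch: a metadata key links each Basic Event to its data
# source, and the failure rate is Bayesian-updated against experience.
def gamma_update(alpha, beta, failures, exposure_hours):
    """Gamma(alpha, beta) prior on a Poisson failure rate -> posterior."""
    return alpha + failures, beta + exposure_hours

data_sources = {"SRC-017": {"rate": 1e-5, "reference": "vendor handbook"}}
basic_events = {"BE-VALVE-FTO": {"source_key": "SRC-017"}}

be = basic_events["BE-VALVE-FTO"]
prior_rate = data_sources[be["source_key"]]["rate"]
# Encode the prior as a Gamma with mean alpha/beta equal to the source rate.
alpha, beta = 0.5, 0.5 / prior_rate
# Observed experience (invented): 1 failure over 20,000 operating hours.
alpha, beta = gamma_update(alpha, beta, failures=1, exposure_hours=20000)
posterior_mean = alpha / beta
print(f"updated failure rate: {posterior_mean:.2e} per hour")
```

    The metadata key (`source_key` here) is the analogue of the unique field the abstract describes: any Basic Event can be traced back to its source and the calculations applied to it.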

  13. Table of 3D organ model IDs and organ names (IS-A Tree) - BodyParts3D | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Correspondence between 3D organ model IDs and organ names available in the IS-A Tree. Data file name: isa_parts_list_e.txt (IS-A Tree). File URL: ftp://ftp.biosciencedbc.jp/archive/bodyparts3d/LATEST/isa_parts_list_e.txt. File size: 126 KB. Simple search URL: http://togodb.biosciencedbc.jp/togodb/view/bodyparts3d_isa_parts_list_e

  14. Table of 3D organ model IDs and organ names (PART-OF Tree) - BodyParts3D | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Correspondence between 3D organ model IDs and organ names available in the PART-OF Tree. Data file name: partof_parts_list_e.txt (PART-OF Tree). File URL: ftp://ftp.biosciencedbc.jp/archive/bodyparts3d/LATEST/partof_parts_list_e.txt. File size: 58 KB. Simple search URL: http://togodb.biosciencedbc.jp/togodb/view/bodyparts3d_partof_parts_list_e

  15. Function and organization of CPC database system

    International Nuclear Information System (INIS)

    Yoshida, Tohru; Tomiyama, Mineyoshi.

    1986-02-01

    It is very time-consuming and expensive to develop computer programs, so it is desirable to make effective use of existing programs. For this purpose, researchers and technical staff need to obtain the relevant information easily. CPC (Computer Physics Communications) is a journal published to facilitate the exchange of physics programs and of relevant information about the use of computers in the physics community. There are about 1300 CPC programs in the JAERI computing center, and the number of programs is increasing. A new database system (the CPC database) has been developed to manage the CPC programs and their information. Users can obtain information about all the programs stored in the CPC database, and can find and copy a needed program by entering the program name, the catalogue number and the volume number. In this system, each operation is done by menu selection. Every CPC program is compressed and stored in the database; the required storage size is one third of that of the non-compressed format. Programs unused for a long time are moved to magnetic tape. The present report describes the CPC database system and the procedures for its use. (author)

  16. Software Engineering Laboratory (SEL) database organization and user's guide

    Science.gov (United States)

    So, Maria; Heller, Gerard; Steinberg, Sandra; Spiegel, Douglas

    1989-01-01

    The organization of the Software Engineering Laboratory (SEL) database is presented. Included are definitions and detailed descriptions of the database tables and views, the SEL data, and system support data. The mapping from the SEL and system support data to the base tables is described. In addition, techniques for accessing the database, through the Database Access Manager for the SEL (DAMSEL) system and via the ORACLE structured query language (SQL), are discussed.

  17. Database modeling and design logical design

    CERN Document Server

    Teorey, Toby J; Nadeau, Tom; Jagadish, HV

    2011-01-01

    Database systems and database design technology have undergone significant evolution in recent years. The relational data model and relational database systems dominate business applications; in turn, they are extended by other technologies like data warehousing, OLAP, and data mining. How do you model and design your database application in consideration of new technology or new business needs? In the extensively revised fifth edition, you'll get clear explanations, lots of terrific examples and an illustrative case, and the really practical advice you have come to count on--with design rules

  18. Database modeling and design logical design

    CERN Document Server

    Teorey, Toby J; Nadeau, Tom; Jagadish, HV

    2005-01-01

    Database systems and database design technology have undergone significant evolution in recent years. The relational data model and relational database systems dominate business applications; in turn, they are extended by other technologies like data warehousing, OLAP, and data mining. How do you model and design your database application in consideration of new technology or new business needs? In the extensively revised fourth edition, you'll get clear explanations, lots of terrific examples and an illustrative case, and the really practical advice you have come to count on--with design rules

  19. Property Modelling and Databases in Product-Process Design

    DEFF Research Database (Denmark)

    Gani, Rafiqul; Sansonetti, Sascha

    development, however, it is necessary to have a large database of measured property data that has been checked for consistency and accuracy. The presentation will first introduce a database, in terms of its knowledge representation structure, the type and range of properties and chemical systems covered, ... and their internal consistency-accuracy checks. The database includes properties of organic chemicals, polymers and ionic liquids. There are also chemical-class-specific database sections, such as for solvents, aroma-chemicals, surfactants and emulsifiers. The use of this property database for model development ... of the PC-SAFT is used. The developed database and property prediction models have been combined into a properties software that allows different product-process design related applications. The presentation will also briefly highlight applications of the software for virtual product-process design ...

  20. Self-organizing strategies for a column-store database

    NARCIS (Netherlands)

    Ivanova, M.; Kersten, M.L.; Nes, N.; Kemper, A.; Valduriez, P.; Mouaddib, N.; Teubner, J.; Bouzeghoub, M.; Markl, V.; Amsaleg, L.; Manolescu, I.

    2008-01-01

    Column-store database systems open new vistas for improved maintenance through self-organization. Individual columns are the focal point, which simplifies balancing conflicting requirements. This work presents two workload-driven self-organizing techniques in a column-store, i.e. adaptive segmentation

  1. Self-organizing strategies for a column-store database

    NARCIS (Netherlands)

    M.G. Ivanova (Milena); M.L. Kersten (Martin); N.J. Nes (Niels)

    2008-01-01

    Column-store database systems open new vistas for improved maintenance through self-organization. Individual columns are the focal point, which simplifies balancing conflicting requirements. This work presents two workload-driven self-organizing techniques in a column-store, i.e. adaptive

  2. Solid Waste Projection Model: Database User's Guide

    International Nuclear Information System (INIS)

    Blackburn, C.L.

    1993-10-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC) specifically to address Hanford solid waste management issues. This document is one of a set of documents supporting the SWPM system and providing instructions in the use and maintenance of SWPM components. This manual contains instructions for using Version 1.4 of the SWPM database: system requirements and preparation, entering and maintaining data, and performing routine database functions. This document supports only those operations which are specific to SWPM database menus and functions and does not provide instruction in the use of Paradox, the database management system in which the SWPM database is established

  3. Databases

    Directory of Open Access Journals (Sweden)

    Nick Ryan

    2004-01-01

    Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.

  4. Hierarchical clustering techniques for image database organization and summarization

    Science.gov (United States)

    Vellaikal, Asha; Kuo, C.-C. Jay

    1998-10-01

    This paper investigates clustering techniques as a method of organizing image databases to support popular visual management functions such as searching, browsing and navigation. Different types of hierarchical agglomerative clustering techniques are studied as a method of organizing the feature space, as well as of summarizing image groups by selecting a few appropriate representatives. Retrieval performance using both single- and multiple-level hierarchies is experimented with, and the algorithms show an interesting relationship between the top-k correct retrievals and the number of comparisons required. Some arguments are given to support the use of such cluster-based techniques for managing distributed image databases.
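
    The core procedure the abstract describes, agglomerative clustering of feature vectors followed by choosing a representative per cluster, can be sketched compactly. This is an illustrative single-linkage version with invented 2-D features, not the paper's algorithm or data:

```python
# Illustrative sketch: single-linkage agglomerative clustering down to
# k clusters, then one representative (the medoid) per cluster.
def dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def agglomerate(points, k):
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # single linkage: distance between closest members
                d = min(dist(points[a], points[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)   # merge the closest pair
    return clusters

def medoid(points, cluster):
    # member with the smallest total distance to the rest of its cluster
    return min(cluster, key=lambda a: sum(dist(points[a], points[b]) for b in cluster))

feats = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10)]   # invented feature vectors
clusters = agglomerate(feats, 2)
reps = [medoid(feats, c) for c in clusters]
print(clusters, reps)
```

    In an image database the points would be high-dimensional feature vectors, and the medoids would serve as the cluster summaries used for browsing.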

  5. A GIS database for crop modelling

    NARCIS (Netherlands)

    Burrill, A.; Vossen, P.; Diepen, van C.A.

    1995-01-01

    The EC land information system has been combined with meteorological, topographical and crop parameter data, and with historical agricultural statistics, to produce an integrated database suitable as input to a European-level crop growth modelling system. The selection of variables to be included in

  6. The database for reaching experiments and models.

    Directory of Open Access Journals (Sweden)

    Ben Walker

    Reaching is one of the central experimental paradigms in the field of motor control, and many computational models of reaching have been published. While most of these models try to explain subject data (such as movement kinematics, reaching performance, forces, etc.) from only a single experiment, distinct experiments often share experimental conditions and record similar kinematics. This suggests that reaching models could be applied to (and falsified by) multiple experiments. However, using multiple datasets is difficult because experimental data formats vary widely. Standardizing data formats promises to enable scientists to test model predictions against many experiments and to compare experimental results across labs. Here we report on the development of a new resource available to scientists: a database of reaching called the Database for Reaching Experiments And Models (DREAM). DREAM collects both experimental datasets and models and facilitates their comparison by standardizing formats. The DREAM project promises to be useful for experimentalists who want to understand how their data relate to models, for modelers who want to test their theories, and for educators who want to help students better understand reaching experiments, models, and data analysis.

  7. Combining Soil Databases for Topsoil Organic Carbon Mapping in Europe.

    Science.gov (United States)

    Aksoy, Ece; Yigini, Yusuf; Montanarella, Luca

    2016-01-01

    Accuracy in assessing the distribution of soil organic carbon (SOC) is an important issue because SOC plays a key role in the functions of both natural ecosystems and agricultural systems. Several studies in the literature aim at finding the best method to assess and map the distribution of SOC content for Europe. This study therefore examines another aspect of the issue by evaluating the performance of using aggregated soil samples coming from different studies and land uses. The total number of soil samples in this study was 23,835, collected from the "Land Use/Cover Area frame Statistical Survey" (LUCAS) Project (samples from agricultural soil), the BioSoil Project (samples from forest soil), and the "Soil Transformations in European Catchments" (SoilTrEC) Project (local soil data from six different critical zone observatories (CZOs) in Europe). Moreover, 15 spatial indicators (slope, aspect, elevation, compound topographic index (CTI), CORINE land-cover classification, parent material, texture, world reference base (WRB) soil classification, geological formations, annual average temperature, min-max temperature, total precipitation and average precipitation (for the years 1960-1990 and 2000-2010)) were used as auxiliary variables in the prediction. Regression-Kriging (RK), one of the most popular geostatistical techniques, was applied to build the model and assess the distribution of SOC. This study showed that, even though the RK method was appropriate for successful SOC mapping, using combined databases did not help to increase the statistical significance of the results for assessing the SOC distribution. According to our results, SOC variation was mainly affected by the elevation, slope, CTI, average temperature, average and total precipitation, texture, WRB and CORINE variables at the European scale in our model. Moreover, the highest average SOC contents were found in the wetland areas; agricultural

  8. Spatial Database Modeling for Indoor Navigation Systems

    Science.gov (United States)

    Gotlib, Dariusz; Gnat, Miłosz

    2013-12-01

    For many years, cartographers have been involved in designing GIS and navigation systems. Most GIS applications use outdoor data; increasingly, similar applications are used inside buildings. It is therefore important to find a proper model for an indoor spatial database. The development of indoor navigation systems should utilize advanced teleinformation, geoinformatics, geodetic and cartographical knowledge. The authors present the fundamental requirements for an indoor data model for navigation purposes. Presenting some of the solutions adopted around the world, they emphasize that navigation applications require specific data to present navigation routes in the right way. An original solution for an indoor data model, created by the authors on the basis of the BISDM model, is presented. Its purpose is to expand the opportunities for use in indoor navigation.

  9. Integrated Space Asset Management Database and Modeling

    Science.gov (United States)

    MacLeod, Todd; Gagliano, Larry; Percy, Thomas; Mason, Shane

    2015-01-01

    Effective Space Asset Management is one key to addressing the ever-growing issue of space congestion. It is imperative that agencies around the world have access to data regarding the numerous active assets and pieces of space junk currently tracked in orbit around the Earth. At the center of this issue is the effective management of many types of data related to orbiting objects. As the population of tracked objects grows, so too should the data management structure used to catalog technical specifications, orbital information, and metadata related to those populations. Marshall Space Flight Center's Space Asset Management Database (SAM-D) was implemented in order to effectively catalog a broad set of data related to known objects in space by ingesting information from a variety of databases and processing that data into useful technical information. Using the universal NORAD number as a unique identifier, SAM-D processes two-line element data into orbital characteristics and cross-references this technical data with metadata related to functional status, country of ownership, and application category. SAM-D began as an Excel spreadsheet and was later upgraded to an Access database. While SAM-D performs its task very well, it is limited by its current platform and is not available outside of the local user base. Further, while modeling and simulation can be powerful tools to exploit the information contained in SAM-D, the current system does not allow proper integration options for combining the data with both legacy and new M&S tools. This paper provides a summary of SAM-D development efforts to date and outlines a proposed data management infrastructure that extends SAM-D to support the larger data sets to be generated. A service-oriented architecture model using an information sharing platform named SIMON will allow it to easily expand to incorporate new capabilities, including advanced analytics, M&S tools, fusion techniques and user interface for
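
    Deriving orbital characteristics from two-line element (TLE) data, as the abstract describes, rests on standard orbital mechanics. A minimal sketch (not SAM-D's code; the mean-motion value is an ISS-like example, not a record from the database):

```python
import math

# Convert a TLE mean motion (revolutions per day) into semi-major axis
# and orbital period, using Earth's gravitational parameter.
MU_EARTH = 398600.4418  # km^3/s^2

def orbit_from_mean_motion(revs_per_day):
    n = revs_per_day * 2 * math.pi / 86400.0        # mean motion in rad/s
    semi_major_km = (MU_EARTH / n**2) ** (1.0 / 3.0)  # Kepler's third law
    period_min = 2 * math.pi / n / 60.0
    return semi_major_km, period_min

# Example mean motion as it would appear on TLE line 2 (ISS-like value).
a, period = orbit_from_mean_motion(15.50103472)
print(f"semi-major axis {a:.1f} km, period {period:.1f} min")
```

    Cross-referencing such derived quantities with ownership and status metadata, keyed on the NORAD number, is the cataloging pattern the abstract outlines.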

  10. Carotenoids Database: structures, chemical fingerprints and distribution among organisms.

    Science.gov (United States)

    Yabuzaki, Junko

    2017-01-01

    To promote understanding of how organisms are related via carotenoids, either evolutionarily or symbiotically, or in food chains through natural histories, we built the Carotenoids Database. This provides chemical information on 1117 natural carotenoids with 683 source organisms. For extracting organisms closely related through the biosynthesis of carotenoids, we offer a new similarity search system, 'Search similar carotenoids', using our original chemical fingerprint, 'Carotenoid DB Chemical Fingerprints'. These Carotenoid DB Chemical Fingerprints describe the chemical substructure and the modification details based upon the International Union of Pure and Applied Chemistry (IUPAC) semi-systematic names of the carotenoids. The fingerprints also allow (i) easier prediction of six biological functions of carotenoids: provitamin A, membrane stabilizers, odorous substances, allelochemicals, antiproliferative activity and reverse MDR activity against cancer cells, (ii) easier classification of carotenoid structures, (iii) partial and exact structure searching and (iv) easier extraction of structural isomers and stereoisomers. We believe this to be the first attempt to establish fingerprints using IUPAC semi-systematic names. For extracting closely profiled organisms, we provide a new tool, 'Search similar profiled organisms'. Our current statistics show some insights into natural history: carotenoids seem to have been spread largely by bacteria, as they produce C30, C40, C45 and C50 carotenoids, with the widest range of end groups, and they share a small portion of C40 carotenoids with eukaryotes. Archaea share an even smaller portion with eukaryotes. Eukaryotes then have evolved a considerable variety of C40 carotenoids. Considering carotenoids, eukaryotes seem more closely related to bacteria than to archaea, apart from 16S rRNA lineage analysis. Database URL: http://carotenoiddb.jp. © The Author(s) 2017. Published by Oxford University Press.
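
    A fingerprint-based similarity search of the kind described typically scores pairs of structures with a Tanimoto (Jaccard) coefficient over their fingerprint bits. The sketch below illustrates the idea only; the feature sets are invented stand-ins, not the actual Carotenoid DB Chemical Fingerprints:

```python
# Tanimoto similarity between fingerprint bit sets (illustrative only;
# these structural features are simplified stand-ins, not the DB's bits).
def tanimoto(fp_a, fp_b):
    """Jaccard/Tanimoto coefficient between two fingerprint bit sets."""
    if not fp_a and not fp_b:
        return 1.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

beta_carotene = {"C40", "beta-end-group:2", "conjugated-double-bonds:11"}
zeaxanthin = {"C40", "beta-end-group:2", "conjugated-double-bonds:11", "3-hydroxy:2"}
lycopene = {"C40", "psi-end-group:2", "conjugated-double-bonds:11"}

print(tanimoto(beta_carotene, zeaxanthin))
print(tanimoto(beta_carotene, lycopene))
```

    Ranking all database entries by this score against a query carotenoid is the essence of a 'Search similar carotenoids' style query.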

  11. On the modelling of microsegregation in steels involving thermodynamic databases

    International Nuclear Information System (INIS)

    You, D; Bernhard, C; Michelic, S; Wieser, G; Presoly, P

    2016-01-01

A microsegregation model, based on Ohnaka's model and involving a thermodynamic database, is proposed. In the model, the thermodynamic database is applied for equilibrium calculations. Multicomponent alloy effects on partition coefficients and equilibrium temperatures are accounted for. Microsegregation and partition coefficients calculated using different databases exhibit significant differences. The segregated concentrations predicted using the optimized database are in good agreement with the measured inter-dendritic concentrations. (paper)

  12. Database modeling to integrate macrobenthos data in Spatial Data Infrastructure

    Directory of Open Access Journals (Sweden)

    José Alberto Quintanilha

    2012-08-01

Full Text Available Coastal zones are complex areas that include marine and terrestrial environments. Besides their huge environmental wealth, they also attract humans because they provide food, recreation, business, and transportation, among others. Some difficulties in managing these areas are related to their complexity, the diversity of interests, and the absence of standards for collecting and sharing data with the scientific community, public agencies, and others. Organizing, standardizing and sharing this information through a Web Atlas is essential to support planning and decision making. The construction of a spatial database integrating the environmental business, for use in a Spatial Data Infrastructure (SDI), is illustrated by a bioindicator of sediment quality. The models show the phases required to build the Macrobenthos spatial database, using the Santos Metropolitan Region as a reference. It is concluded that, when working with environmental data, structuring knowledge in a conceptual model is essential for its subsequent integration into the SDI. During the modeling process it was noticed that methodological issues related to the collection process may obstruct or prejudice the integration of data from different studies of the same area. The database model developed in this study can be used as a reference for further research with similar goals.

  13. Databases

    Digital Repository Service at National Institute of Oceanography (India)

    Kunte, P.D.

    Information on bibliographic as well as numeric/textual databases relevant to coastal geomorphology has been included in a tabular form. Databases cover a broad spectrum of related subjects like coastal environment and population aspects, coastline...

  14. First Database Course--Keeping It All Organized

    Science.gov (United States)

    Baugh, Jeanne M.

    2015-01-01

    All Computer Information Systems programs require a database course for their majors. This paper describes an approach to such a course in which real world examples, both design projects and actual database application projects are incorporated throughout the semester. Students are expected to apply the traditional database concepts to actual…

  15. Techniques to Access Databases and Integrate Data for Hydrologic Modeling

    International Nuclear Information System (INIS)

    Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.; Coleman, Andre M.; Ward, Duane L.; Droppo, James G.; Meyer, Philip D.; Dorow, Kevin E.; Taira, Randal Y.

    2009-01-01

    This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. 
The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and

  16. Techniques to Access Databases and Integrate Data for Hydrologic Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.; Coleman, Andre M.; Ward, Duane L.; Droppo, James G.; Meyer, Philip D.; Dorow, Kevin E.; Taira, Randal Y.

    2009-06-17

    This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. 
The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and

  17. A Framework for Cloudy Model Optimization and Database Storage

    Science.gov (United States)

    Calvén, Emilia; Helton, Andrew; Sankrit, Ravi

    2018-01-01

We present a framework for producing Cloudy photoionization models of the nebular emission from novae ejecta and storing a subset of the results in an SQL database for later use. The database can be searched for the models that best fit observed spectral line ratios. Additionally, the framework includes an optimization feature that can be used in tandem with the database to search for and improve on models by creating new Cloudy models while varying the parameters. The database search and optimization can be used to explore the structures of nebulae by deriving their properties from the best-fit models. The goal is to provide the community with a large database of Cloudy photoionization models, generated from parameters reflecting conditions within novae ejecta, that can be easily fitted to observed spectral lines, either by directly accessing the database using the framework code or through a website specifically made for this purpose.
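As a rough illustration of the kind of search such a framework performs, the stored models can be ranked by their squared distance to observed line ratios directly in SQL. The table layout, column names, line ratios and parameter values below are invented for the sketch, not the authors' actual schema:

```python
import sqlite3

# Hypothetical sketch: store each Cloudy model's predicted line ratios
# and find the model closest to a set of observed ratios.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE models (
    id INTEGER PRIMARY KEY,
    density REAL,          -- illustrative grid parameters
    temperature REAL,
    oiii_hbeta REAL,       -- predicted line ratios
    nii_halpha REAL)""")
conn.executemany("INSERT INTO models VALUES (?,?,?,?,?)", [
    (1, 1e4, 1.0e4, 3.2, 0.45),
    (2, 1e5, 1.2e4, 5.1, 0.30),
    (3, 1e6, 1.5e4, 7.8, 0.22),
])

observed = {"oiii_hbeta": 5.0, "nii_halpha": 0.31}
# Rank models by squared distance to the observed ratios, computed in SQL.
best = conn.execute(
    """SELECT id,
              (oiii_hbeta - :oiii_hbeta)*(oiii_hbeta - :oiii_hbeta)
            + (nii_halpha - :nii_halpha)*(nii_halpha - :nii_halpha) AS dist
       FROM models ORDER BY dist LIMIT 1""", observed).fetchone()
print(best[0])  # model 2 is the closest match here
```

A real database of this kind would hold many more parameters and line ratios, but the best-fit query stays a simple ORDER BY over a distance expression.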

  18. Cyclebase 3.0: a multi-organism database on cell-cycle regulation and phenotypes

    DEFF Research Database (Denmark)

    Santos Delgado, Alberto; Wernersson, Rasmus; Jensen, Lars Juhl

    2015-01-01

The data from these many large-scale efforts are not easily accessed, analyzed and combined due to their inherent heterogeneity. To address this, we have created Cyclebase, available at http://www.cyclebase.org, an online database that allows users to easily visualize and download results from genome-wide cell-cycle-related experiments. In Cyclebase version 3.0, we have updated the content of the database to reflect changes to genome annotation, added new mRNA and protein expression data, and integrated cell-cycle phenotype information from high-content screens and model-organism databases. The new version of Cyclebase also features a new web interface, designed around an overview figure that summarizes all the cell-cycle-related data for a gene.

  19. Sequence modelling and an extensible data model for genomic database

    Energy Technology Data Exchange (ETDEWEB)

    Li, Peter Wei-Der [California Univ., San Francisco, CA (United States); Univ. of California, Berkeley, CA (United States)

    1992-01-01

The Human Genome Project (HGP) plans to sequence the human genome by the beginning of the next century. It will generate DNA sequences of more than 10 billion bases and complex marker sequences (maps) of more than 100 million markers. All of this information will be stored in database management systems (DBMSs). However, existing data models do not have the abstraction mechanisms for modelling sequences, and existing DBMSs do not have operations for complex sequences. This work addresses the problem of sequence modelling in the context of the HGP and the more general problem of an extensible object data model that can incorporate the sequence model as well as existing and future data constructs and operators. First, we proposed a general sequence model that is application and implementation independent. This model is used to capture the sequence information found in the HGP at the conceptual level. In addition, abstract and biological sequence operators are defined for manipulating the modelled sequences. Second, we combined many features of semantic and object-oriented data models into an extensible framework, which we called the 'Extensible Object Model', to address the need for a modelling framework for incorporating the sequence data model with other types of data constructs and operators. This framework is based on the conceptual separation between constructors and constraints. We then used this modelling framework to integrate the constructs for the conceptual sequence model. The Extensible Object Model is also defined with a graphical representation, which is useful as a tool for database designers. Finally, we defined a query language to support this model and implemented a query processor to demonstrate the feasibility of the extensible framework and the usefulness of the conceptual sequence model.

  20. Sequence modelling and an extensible data model for genomic database

    Energy Technology Data Exchange (ETDEWEB)

    Li, Peter Wei-Der (California Univ., San Francisco, CA (United States) Lawrence Berkeley Lab., CA (United States))

    1992-01-01

The Human Genome Project (HGP) plans to sequence the human genome by the beginning of the next century. It will generate DNA sequences of more than 10 billion bases and complex marker sequences (maps) of more than 100 million markers. All of this information will be stored in database management systems (DBMSs). However, existing data models do not have the abstraction mechanisms for modelling sequences, and existing DBMSs do not have operations for complex sequences. This work addresses the problem of sequence modelling in the context of the HGP and the more general problem of an extensible object data model that can incorporate the sequence model as well as existing and future data constructs and operators. First, we proposed a general sequence model that is application and implementation independent. This model is used to capture the sequence information found in the HGP at the conceptual level. In addition, abstract and biological sequence operators are defined for manipulating the modelled sequences. Second, we combined many features of semantic and object-oriented data models into an extensible framework, which we called the 'Extensible Object Model', to address the need for a modelling framework for incorporating the sequence data model with other types of data constructs and operators. This framework is based on the conceptual separation between constructors and constraints. We then used this modelling framework to integrate the constructs for the conceptual sequence model. The Extensible Object Model is also defined with a graphical representation, which is useful as a tool for database designers. Finally, we defined a query language to support this model and implemented a query processor to demonstrate the feasibility of the extensible framework and the usefulness of the conceptual sequence model.

  1. Development of a database for chemical mechanism assignments for volatile organic emissions.

    Science.gov (United States)

    Carter, William P L

    2015-10-01

The development of a database for making model species assignments when preparing total organic gas (TOG) emissions input for atmospheric models is described. This database currently has assignments of model species for 12 different gas-phase chemical mechanisms for over 1700 chemical compounds and covers over 3000 chemical categories used in five different anthropogenic TOG profile databases or output by two different biogenic emissions models. This involved developing a unified chemical classification system, assigning compounds to mixtures, assigning model species for the mechanisms to the compounds, and making assignments for unknown, unassigned, and nonvolatile mass. The comprehensiveness of the assignments, the contributions of various types of speciation categories to current profile and total emissions data, inconsistencies with existing undocumented model species assignments, and remaining speciation issues and areas of needed work are also discussed. The use of the system to prepare input for SMOKE, the Speciation Tool, and biogenic models is described in the supplementary materials. The database, associated programs and files, and a user's manual are available online at http://www.cert.ucr.edu/~carter/emitdb . Assigning air quality model species to the hundreds of emitted chemicals is a necessary link between emissions data and modeling the effects of emissions on air quality. This is not easy, and it makes it difficult to implement new and more chemically detailed mechanisms in models. If done incorrectly, the effect is similar to errors in the emissions speciation or in the chemical mechanism used. Nevertheless, making such assignments is often an afterthought in chemical mechanism development and emissions processing, and existing assignments are usually undocumented and have errors and inconsistencies. This work is designed to address some of these problems.
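The core operation such an assignment database supports, mapping speciated emissions onto a mechanism's lumped model species, can be sketched as follows. All compound names, model species names, and mass fractions here are illustrative stand-ins, not entries from the actual database:

```python
# Hypothetical sketch: translate a speciation profile (mass fraction per
# compound) into mechanism model species totals via an assignment table.
assignments = {            # compound -> lumped mechanism model species
    "ethane":   "ALK1",
    "n-butane": "ALK3",
    "toluene":  "ARO1",
    "m-xylene": "ARO2",
}

profile = {                # emitted mass fractions from one TOG profile
    "ethane": 0.20, "n-butane": 0.35, "toluene": 0.30, "m-xylene": 0.15,
}

model_input = {}
for compound, mass in profile.items():
    # Mass with no assignment goes to a placeholder species so none is lost.
    species = assignments.get(compound, "UNASSIGNED")
    model_input[species] = model_input.get(species, 0.0) + mass

print(model_input)
```

The point of the real database is that `assignments` exists, documented and consistent, for each of the 12 supported mechanisms, so the same profile can feed any of them.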

  2. Fish Karyome version 2.1: a chromosome database of fishes and other aquatic organisms.

    Science.gov (United States)

    Nagpure, Naresh Sahebrao; Pathak, Ajey Kumar; Pati, Rameshwar; Rashid, Iliyas; Sharma, Jyoti; Singh, Shri Prakash; Singh, Mahender; Sarkar, Uttam Kumar; Kushwaha, Basdeo; Kumar, Ravindra; Murali, S

    2016-01-01

Voluminous information is available from karyological studies of fishes; however, limited efforts have been made to compile and curate the available karyological data in digital form. The 'Fish Karyome' database was a preliminary attempt to compile and digitize the available karyological information on finfishes of the Indian subcontinent. But the database had limitations, since it covered data only on Indian finfishes, with limited search options. In response to user feedback and the database's utility in fish cytogenetic studies, Fish Karyome was upgraded using Linux, Apache, MySQL and PHP (hypertext preprocessor) (LAMP) technologies. In the present version, the scope of the system was increased by compiling and curating the available chromosomal information from across the globe on fishes and other aquatic organisms, such as echinoderms, molluscs and arthropods, especially those of aquaculture importance. Thus, Fish Karyome version 2.1 presently covers 866 chromosomal records for 726 species, supported by 253 published articles, and the information is being updated regularly. The database provides information on chromosome number and morphology, sex chromosomes, chromosome banding, molecular cytogenetic markers, etc., supported by fish and karyotype images through interactive tools. It also enables users to browse and view chromosomal information by habitat, family, conservation status and chromosome number. The system also displays chromosome numbers in model organisms, protocols for chromosome preparation and allied techniques, and a glossary of cytogenetic terms. A data submission facility is provided through a data submission panel. The database can serve as a unique and useful resource for cytogenetic characterization, sex determination, chromosomal mapping, cytotaxonomy, karyo-evolution and systematics of fishes. Database URL: http://mail.nbfgr.res.in/Fish_Karyome. © The Author(s) 2016. 
Published by Oxford University Press.

  3. Klaim-DB: A Modeling Language for Distributed Database Applications

    DEFF Research Database (Denmark)

    Wu, Xi; Li, Ximeng; Lluch Lafuente, Alberto

    2015-01-01

We present the modelling language Klaim-DB for distributed database applications. Klaim-DB borrows the distributed nets of the coordination language Klaim but essentially re-incarnates the tuple spaces of Klaim as databases, and provides high-level language abstractions for the access and manipulation...

  4. Solid Waste Projection Model: Database (Version 1.3)

    International Nuclear Information System (INIS)

    Blackburn, C.L.

    1991-11-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC). The SWPM system provides a modeling and analysis environment that supports decisions in the process of evaluating various solid waste management alternatives. This document, one of a series describing the SWPM system, contains detailed information regarding the software and data structures utilized in developing the SWPM Version 1.3 Database. This document is intended for use by experienced database specialists and supports database maintenance, utility development, and database enhancement

  5. Solid Waste Projection Model: Database (Version 1.4)

    International Nuclear Information System (INIS)

    Blackburn, C.; Cillan, T.

    1993-09-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC). The SWPM system provides a modeling and analysis environment that supports decisions in the process of evaluating various solid waste management alternatives. This document, one of a series describing the SWPM system, contains detailed information regarding the software and data structures utilized in developing the SWPM Version 1.4 Database. This document is intended for use by experienced database specialists and supports database maintenance, utility development, and database enhancement. Those interested in using the SWPM database should refer to the SWPM Database User's Guide. This document is available from the PNL Task M Project Manager (D. L. Stiles, 509-372-4358), the PNL Task L Project Manager (L. L. Armacost, 509-372-4304), the WHC Restoration Projects Section Manager (509-372-1443), or the WHC Waste Characterization Manager (509-372-1193)

  6. Software Engineering Laboratory (SEL) database organization and user's guide, revision 2

    Science.gov (United States)

    Morusiewicz, Linda; Bristow, John

    1992-01-01

    The organization of the Software Engineering Laboratory (SEL) database is presented. Included are definitions and detailed descriptions of the database tables and views, the SEL data, and system support data. The mapping from the SEL and system support data to the base table is described. In addition, techniques for accessing the database through the Database Access Manager for the SEL (DAMSEL) system and via the ORACLE structured query language (SQL) are discussed.

  7. Viewpoints: a framework for object oriented database modelling and distribution

    Directory of Open Access Journals (Sweden)

    Fouzia Benchikha

    2006-01-01

    Full Text Available The viewpoint concept has received widespread attention recently. Its integration into a data model improves the flexibility of the conventional object-oriented data model and allows one to improve the modelling power of objects. The viewpoint paradigm can be used as a means of providing multiple descriptions of an object and as a means of mastering the complexity of current database systems enabling them to be developed in a distributed manner. The contribution of this paper is twofold: to define an object data model integrating viewpoints in databases and to present a federated database system integrating multiple sources following a local-as-extended-view approach.

  8. Building spatio-temporal database model based on ontological approach using relational database environment

    International Nuclear Information System (INIS)

    Mahmood, N.; Burney, S.M.A.

    2017-01-01

Everything in this world is encapsulated by the fence of space and time. Our daily activities are closely linked and related to other objects in our vicinity; our current location, the time (past, present and future) and the events through which we move as objects also affect our activities. Ontology development and its integration with databases are vital for a true understanding of complex systems involving both spatial and temporal dimensions. In this paper we propose a conceptual framework for building a spatio-temporal database model based on an ontological approach. We use the relational data model for modelling spatio-temporal data content, and present our methodology with spatio-temporal ontological aspects and their transformation into a spatio-temporal database model. We illustrate the implementation of our conceptual model through a case study of a cultivated land parcel used for agriculture, to exhibit the spatio-temporal behaviour of agricultural land and related entities. Moreover, the framework provides a generic approach for designing spatio-temporal databases based on ontology. The proposed model is capable of understanding ontological and, to some extent, epistemological commitments, and of building a spatio-temporal ontology and transforming it into a spatio-temporal data model. Finally, we highlight existing and future research challenges. (author)
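A minimal sketch of the kind of relational spatio-temporal table the land-parcel case study implies; the schema, column names, geometries and dates below are assumptions for illustration, not the authors' actual design:

```python
import sqlite3

# Sketch: one row per state of a cultivated parcel, carrying both a spatial
# attribute (the boundary geometry) and a valid-time interval.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE parcel_state (
    parcel_id   INTEGER,
    crop        TEXT,
    boundary    TEXT,      -- geometry placeholder, e.g. a WKT polygon
    valid_from  TEXT,      -- ISO dates delimiting the valid-time interval
    valid_to    TEXT)""")
conn.executemany("INSERT INTO parcel_state VALUES (?,?,?,?,?)", [
    (7, "wheat", "POLYGON(...)", "2015-01-01", "2016-01-01"),
    (7, "maize", "POLYGON(...)", "2016-01-01", "9999-12-31"),
])

# A typical temporal query: what was grown on parcel 7 on a given date?
row = conn.execute(
    """SELECT crop FROM parcel_state
       WHERE parcel_id = 7 AND valid_from <= :d AND :d < valid_to""",
    {"d": "2015-06-15"}).fetchone()
print(row[0])
```

The ontological layer the paper proposes would sit above such tables, constraining which entities, relations and time semantics the schema is allowed to express.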

  9. Insertion algorithms for network model database management systems

    Science.gov (United States)

    Mamadolimov, Abdurashid; Khikmat, Saburov

    2017-12-01

The network model is a database model conceived as a flexible way of representing objects and their relationships. Its distinguishing feature is that the schema, viewed as a graph in which object types are nodes and relationship types are arcs, forms a partial order. When a database is large and query comparisons are expensive, the efficiency requirement on managing algorithms is to minimize the number of query comparisons. We consider the update operation for network model database management systems. We develop a new sequential algorithm for the update operation, and we also suggest a distributed version of the algorithm.

  10. Cyclebase 3.0: a multi-organism database on cell-cycle regulation and phenotypes.

    Science.gov (United States)

    Santos, Alberto; Wernersson, Rasmus; Jensen, Lars Juhl

    2015-01-01

    The eukaryotic cell division cycle is a highly regulated process that consists of a complex series of events and involves thousands of proteins. Researchers have studied the regulation of the cell cycle in several organisms, employing a wide range of high-throughput technologies, such as microarray-based mRNA expression profiling and quantitative proteomics. Due to its complexity, the cell cycle can also fail or otherwise change in many different ways if important genes are knocked out, which has been studied in several microscopy-based knockdown screens. The data from these many large-scale efforts are not easily accessed, analyzed and combined due to their inherent heterogeneity. To address this, we have created Cyclebase--available at http://www.cyclebase.org--an online database that allows users to easily visualize and download results from genome-wide cell-cycle-related experiments. In Cyclebase version 3.0, we have updated the content of the database to reflect changes to genome annotation, added new mRNA and protein expression data, and integrated cell-cycle phenotype information from high-content screens and model-organism databases. The new version of Cyclebase also features a new web interface, designed around an overview figure that summarizes all the cell-cycle-related data for a gene. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  11. Modeling and analysis of metrics databases

    OpenAIRE

    Paul, Raymond A.

    1999-01-01

The main objective of this research is to propose a comprehensive framework for quality and risk management in the software development process based on analysis and modeling of software metrics data. Existing software metrics work has focused mainly on the type of metrics to be collected ...

  12. Database structure for plasma modeling programs

    International Nuclear Information System (INIS)

    Dufresne, M.; Silvester, P.P.

    1993-01-01

    Continuum plasma models often use a finite element (FE) formulation. Another approach is simulation models based on particle-in-cell (PIC) formulation. The model equations generally include four nonlinear differential equations specifying the plasma parameters. In simulation a large number of equations must be integrated iteratively to determine the plasma evolution from an initial state. The complexity of the resulting programs is a combination of the physics involved and the numerical method used. The data structure requirements of plasma programs are stated by defining suitable abstract data types. These abstractions are then reduced to data structures and a group of associated algorithms. These are implemented in an object oriented language (C++) as object classes. Base classes encapsulate data management into a group of common functions such as input-output management, instance variable updating and selection of objects by Boolean operations on their instance variables. Operations are thereby isolated from specific element types and uniformity of treatment is guaranteed. Creation of the data structures and associated functions for a particular plasma model is reduced merely to defining the finite element matrices for each equation, or the equations of motion for PIC models. Changes in numerical method or equation alterations are readily accommodated through the mechanism of inheritance, without modification of the data management software. The central data type is an n-relation implemented as a tuple of variable internal structure. Any finite element program may be described in terms of five relational tables: nodes, boundary conditions, sources, material/particle descriptions, and elements. Equivalently, plasma simulation programs may be described using four relational tables: cells, boundary conditions, sources, and particle descriptions
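The five relational tables and the Boolean selection operations this abstract describes can be sketched in miniature, here in Python rather than the authors' C++, with invented values throughout:

```python
# Sketch of the abstract's claim that any finite element program can be
# described by five relational tables. Each table is a list of tuples; a
# real implementation would wrap them in object classes sharing common
# input-output, updating and selection functions.
nodes = [                                     # (node_id, x, y)
    (1, 0.0, 0.0), (2, 1.0, 0.0), (3, 0.0, 1.0),
]
boundary_conditions = [(1, "fixed")]          # (node_id, condition)
sources = [(3, 2.5)]                          # (node_id, source strength)
materials = [("plasma", 1.0e18)]              # (name, e.g. a density)
elements = [(1, (1, 2, 3), "plasma")]         # (elem_id, node ids, material)

def select(table, predicate):
    """Select rows by a Boolean predicate on their instance variables,
    as the abstract describes for its base classes."""
    return [row for row in table if predicate(row)]

# Example selection: nodes carrying no boundary condition.
constrained = {b[0] for b in boundary_conditions}
free_nodes = select(nodes, lambda n: n[0] not in constrained)
print([n[0] for n in free_nodes])
```

The abstract's point is that once these tables and generic operations exist, defining a new plasma model reduces to supplying the element matrices (or, for PIC models, the particle tables and equations of motion).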

  13. Imprecision and Uncertainty in the UFO Database Model.

    Science.gov (United States)

    Van Gyseghem, Nancy; De Caluwe, Rita

    1998-01-01

    Discusses how imprecision and uncertainty are dealt with in the UFO (Uncertainty and Fuzziness in an Object-oriented) database model. Such information is expressed by means of possibility distributions, and modeled by means of the proposed concept of "role objects." The role objects model uncertain, tentative information about objects,…

  14. Impact of Prior Knowledge of Informational Content and Organization on Learning Search Principles in a Database.

    Science.gov (United States)

    Linde, Lena; Bergstrom, Monica

    1988-01-01

The importance of prior knowledge of informational content and organization for search performance on a database was evaluated for 17 undergraduates. Pretraining related to content and organization did facilitate learning logical search principles in a relational database; content pretraining was more efficient. (SLD)

  15. Using LUCAS topsoil database to estimate soil organic carbon content in local spectral libraries

    Science.gov (United States)

    Castaldi, Fabio; van Wesemael, Bas; Chabrillat, Sabine; Chartin, Caroline

    2017-04-01

    The quantification of the soil organic carbon (SOC) content over large areas is mandatory to obtain accurate soil characterization and classification, which can improve site specific management at local or regional scale exploiting the strong relationship between SOC and crop growth. The estimation of the SOC is not only important for agricultural purposes: in recent years, the increasing attention towards global warming highlighted the crucial role of the soil in the global carbon cycle. In this context, soil spectroscopy is a well consolidated and widespread method to estimate soil variables exploiting the interaction between chromophores and electromagnetic radiation. The importance of spectroscopy in soil science is reflected by the increasing number of large soil spectral libraries collected in the world. These large libraries contain soil samples derived from a consistent number of pedological regions and thus from different parent material and soil types; this heterogeneity entails, in turn, a large variability in terms of mineralogical and organic composition. In the light of the huge variability of the spectral responses to SOC content and composition, a rigorous classification process is necessary to subset large spectral libraries and to avoid the calibration of global models failing to predict local variation in SOC content. In this regard, this study proposes a method to subset the European LUCAS topsoil database into soil classes using a clustering analysis based on a large number of soil properties. The LUCAS database was chosen to apply a standardized multivariate calibration approach valid for large areas without the need for extensive field and laboratory work for calibration of local models. 
Seven soil classes were detected by the clustering analyses, and the samples belonging to each class were used to calibrate specific partial least squares regression (PLSR) models to estimate the SOC content of three local libraries collected in Belgium (Loam belt
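The per-class calibration strategy can be sketched as follows, with ordinary least squares standing in for PLSR and synthetic one-band spectra. This is a toy illustration of the idea (one model per soil class, applied to samples of that class), not the authors' pipeline.

```python
import numpy as np

def fit_class_models(spectra, soc, classes):
    """Fit one linear calibration per soil class (OLS as a stand-in for PLSR)."""
    models = {}
    for c in np.unique(classes):
        idx = classes == c
        X = np.column_stack([spectra[idx], np.ones(idx.sum())])  # add intercept column
        coef, *_ = np.linalg.lstsq(X, soc[idx], rcond=None)
        models[c] = coef
    return models

def predict_soc(models, spectrum, soil_class):
    """Predict SOC with the model calibrated for the sample's soil class."""
    x = np.append(np.asarray(spectrum, dtype=float), 1.0)
    return float(x @ models[soil_class])

# Synthetic data: class 0 follows soc = 2*band + 1, class 1 follows soc = 5*band.
spectra = np.array([[0.1], [0.2], [0.3], [1.0], [2.0], [3.0]])
soc = np.array([1.2, 1.4, 1.6, 5.0, 10.0, 15.0])
classes = np.array([0, 0, 0, 1, 1, 1])
models = fit_class_models(spectra, soc, classes)
```

The point of the per-class split is visible here: a single global model over all six samples would compromise between the two regimes, while each class-specific model recovers its own relationship exactly.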

  16. Database organization for computer-aided characterization of laser diode

    International Nuclear Information System (INIS)

    Oyedokun, Z.O.

    1988-01-01

Computer-aided data logging involves a huge amount of data which must be properly managed for optimized storage space and easy access, retrieval and utilization. An organization method is developed to enhance the advantages of computer-based data logging of semiconductor injection laser testing; it optimizes storage space, permits authorized users easy access, and inhibits penetration. This method is based on a unique file-identification protocol, a tree structure, and command-file-oriented access procedures

  17. Museum Information System of Serbia: a recent approach to database modeling

    OpenAIRE

    Gavrilović, Goran

    2007-01-01

The paper offers an illustration of the main parameters for museum database projection (a case study of the Integrated Museum Information System of Serbia). A simple case of museum data model development and implementation is described. The main aim is to present the advantages of the ORM (Object Role Modeling) methodology, using Microsoft Visio as suitable software support for formalizing museum business rules.

  18. A new world lakes database for global hydrological modelling

    Science.gov (United States)

    Pimentel, Rafael; Hasan, Abdulghani; Isberg, Kristina; Arheimer, Berit

    2017-04-01

Lakes are crucial systems in global hydrology; they constitute approximately 65% of the total amount of surface water in the world. Recent advances in remote sensing technology have provided global water-body information at higher spatiotemporal resolution, notably the ESA global map of water bodies, a stationary map at 150 m spatial resolution (Lamarche et al., 2015), and the new high-resolution mapping of global surface water and its long-term changes, a 32-year product at 30 m spatial resolution (Pekel et al., 2016). Nevertheless, these databases identify all water bodies; they do not distinguish between lakes, rivers, wetlands and seas. Some global databases with isolated lake information are available, e.g. GLWD (Global Lakes and Wetland Database) (Lehner and Döll, 2004); however, the locations of some of the lakes are shifted in relation to the topography, and their extents have also changed since the creation of the database. This work presents a new world lake database based on the ESA global map of water bodies and relying on the lakes in GLWD. Lakes from the ESA global map of water bodies were identified using a flood fill algorithm, initialized at the centroid of each lake defined in GLWD. Some manual checks were done to split lakes that are really connected but identified as different lakes in the GLWD database. In this way the associated information provided in GLWD is maintained. Moreover, the locations of the outlets of all lakes were included in the new database, using the high-resolution upstream area information provided by the Global Width Database for Large Rivers (GWD-LR). These additional point locations constitute very useful information for watershed delineation in global hydrological modelling. The methodology was validated using in situ information from Swedish lakes and extended over the world. 13 500 lakes greater than 0.1 km2 were identified.
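The centroid-seeded flood fill described above can be sketched as a breadth-first fill over a binary water mask. This is a minimal illustration of the algorithm class; the actual work operates on the 150 m ESA raster, not on toy grids like this one.

```python
from collections import deque

def flood_fill_lake(water_mask, seed):
    """Collect all water cells 4-connected to the seed (a GLWD-style centroid)."""
    rows, cols = len(water_mask), len(water_mask[0])
    r0, c0 = seed
    if not water_mask[r0][c0]:
        return set()          # seed fell on dry land: no lake recovered
    lake, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and water_mask[nr][nc] and (nr, nc) not in lake:
                lake.add((nr, nc))
                queue.append((nr, nc))
    return lake
```

Seeding from the GLWD centroid is what separates lakes from other water bodies here: only the connected component containing a known lake centroid is kept, so rivers and seas elsewhere in the mask are never touched.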

  19. Modelling antibody side chain conformations using heuristic database search.

    Science.gov (United States)

    Ritchie, D W; Kemp, G J

    1997-01-01

We have developed a knowledge-based system which models the side chain conformations of residues in the variable domains of antibody Fv fragments. The system is written in Prolog and uses an object-oriented database of aligned antibody structures in conjunction with a side chain rotamer library. The antibody database provides 3-dimensional clusters of side chain conformations which can be copied en masse into the model structure. The object-oriented database architecture facilitates a navigational style of database access, necessary to assemble side chain clusters. Around 60% of the model is built using side chain clusters, and this eliminates much of the combinatorial complexity associated with many other side chain placement algorithms. Construction and placement of side chain clusters is guided by a heuristic cost function based on a simple model of side chain packing interactions. Even with a simple model, we find that a large proportion of side chain conformations are modelled accurately. We expect our approach could be used with other homologous protein families, in addition to antibodies, both to improve the quality of model structures and to give a "smart start" to the side chain placement problem.

  20. 3MdB: the Mexican Million Models database

    Science.gov (United States)

    Morisset, C.; Delgado-Inglada, G.

    2014-10-01

The 3MdB is an original effort to construct a large multipurpose database of photoionization models. It is a more modern version of a previous attempt based on Cloudy3D and IDL tools, and is accessed via MySQL queries. The models are obtained using the well-known and widely used Cloudy photoionization code (Ferland et al., 2013). The database is designed to host grids of models, with different references to identify each project and to facilitate the extraction of the desired data. We present here a description of the way the database is managed and some of the projects that use 3MdB. Anybody can ask for a grid to be run and stored in 3MdB, to increase the visibility of the grid and its potential side applications.
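As a rough illustration of MySQL-style access to such a grid database, the sketch below builds a tiny in-memory table and pulls one project's grid with a SELECT. The table and column names are invented for the example, and SQLite stands in for MySQL; the query shape is the point, not the schema.

```python
import sqlite3

# Hypothetical table and column names standing in for a 3MdB-style grid.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE models (ref TEXT, logU REAL, OIII_5007 REAL)")
conn.executemany(
    "INSERT INTO models VALUES (?, ?, ?)",
    [("PNe_2014", -2.0, 1.3), ("PNe_2014", -3.0, 0.7), ("HII_grid", -2.5, 1.1)],
)
# Extract one project's grid by its reference, as a user would over MySQL.
rows = conn.execute(
    "SELECT logU, OIII_5007 FROM models WHERE ref = ? ORDER BY logU",
    ("PNe_2014",),
).fetchall()
```

Filtering on a per-project reference column is what lets many unrelated grids share one database while each user extracts only the slice relevant to their own analysis.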

  1. Prefix list for each organism - Gclust Server | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Gclust Server: Prefix list for each organism. DOI: 10.18908/lsdba.nbdc00464-006. Description of data contents: list of the prefixes for the organisms used in Gclust; each prefix is applied to the top of the sequence ID according to each organism. The first line specifies the number of organism species (95); from the second line, the prefix of each organism follows.

  2. Fedora Content Modelling for Improved Services for Research Databases

    DEFF Research Database (Denmark)

    Elbæk, Mikael Karstensen; Heller, Alfred; Pedersen, Gert Schmeltz

    A re-implementation of the research database of the Technical University of Denmark, DTU, is based on Fedora. The backbone consists of content models for primary and secondary entities and their relationships, giving flexible and powerful extraction capabilities for interoperability and reporting...

  3. Comparison of thermodynamic databases used in geochemical modelling

    International Nuclear Information System (INIS)

    Chandratillake, M.R.; Newton, G.W.A.; Robinson, V.J.

    1988-05-01

    Four thermodynamic databases used by European groups for geochemical modelling have been compared. Thermodynamic data for both aqueous species and solid species have been listed. When the values are directly comparable any differences between them have been highlighted at two levels of significance. (author)

  4. Technical Work Plan for: Thermodynamic Databases for Chemical Modeling

    International Nuclear Information System (INIS)

    C.F. Jovecolon

    2006-01-01

The objective of the work scope covered by this Technical Work Plan (TWP) is to correct and improve the Yucca Mountain Project (YMP) thermodynamic databases, to update their documentation, and to ensure reasonable consistency among them. In addition, the work scope will continue to generate database revisions, which are organized and named so as to be transparent to internal and external users and reviewers. Regarding consistency among databases, it is noted that aqueous speciation and mineral solubility data for a given system may differ according to how solubility was determined, and the method used for subsequent retrieval of thermodynamic parameter values from measured data. Of particular concern are the details of the determination of "infinite dilution" constants, which involve the use of specific methods for activity coefficient corrections. That is, equilibrium constants developed for a given system for one set of conditions may not be consistent with constants developed for other conditions, depending on the species considered in the chemical reactions and the methods used in the reported studies. Hence, there will be some differences (for example in log K values) between the Pitzer and "B-dot" database parameters for the same reactions or species
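The "B-dot" activity-coefficient correction mentioned above has a standard closed form (an extended Debye-Hückel expression). A minimal sketch at 25 °C follows; the A, B, B-dot and ion-size values here are illustrative textbook-style constants, not entries from the YMP databases.

```python
import math

def log10_gamma_bdot(z, ionic_strength, a0, A=0.5092, B=0.3283, bdot=0.041):
    """Extended Debye-Hückel ('B-dot') activity coefficient, log10 scale.

    z: ion charge; ionic_strength: molal ionic strength I;
    a0: ion-size parameter in angstroms. Constants are nominal 25 °C values.
    """
    sqrt_i = math.sqrt(ionic_strength)
    return -A * z**2 * sqrt_i / (1.0 + B * a0 * sqrt_i) + bdot * ionic_strength
```

The correction is what ties a measured equilibrium constant back to an "infinite dilution" log K: activity coefficients approach 1 as I goes to 0, and two databases that apply different corrections (e.g. B-dot vs. Pitzer) will report somewhat different log K values for the same reaction, as the abstract notes.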

  5. Technical Work Plan for: Thermodynamic Database for Chemical Modeling

    Energy Technology Data Exchange (ETDEWEB)

    C.F. Jovecolon

    2006-09-07

The objective of the work scope covered by this Technical Work Plan (TWP) is to correct and improve the Yucca Mountain Project (YMP) thermodynamic databases, to update their documentation, and to ensure reasonable consistency among them. In addition, the work scope will continue to generate database revisions, which are organized and named so as to be transparent to internal and external users and reviewers. Regarding consistency among databases, it is noted that aqueous speciation and mineral solubility data for a given system may differ according to how solubility was determined, and the method used for subsequent retrieval of thermodynamic parameter values from measured data. Of particular concern are the details of the determination of "infinite dilution" constants, which involve the use of specific methods for activity coefficient corrections. That is, equilibrium constants developed for a given system for one set of conditions may not be consistent with constants developed for other conditions, depending on the species considered in the chemical reactions and the methods used in the reported studies. Hence, there will be some differences (for example in log K values) between the Pitzer and "B-dot" database parameters for the same reactions or species.

  6. Designation of organism group - Gclust Server | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Gclust Server: Designation of organism group. DOI: 10.18908/lsdba.nbdc00464-007. Description of data contents: the definition used to group the 95 organism species. The first line specifies the number of organism species, and //END marks the end of the data. Field 1: prefix of the sequence ID of each organism.

  7. From ISIS to CouchDB: Databases and Data Models for Bibliographic Records

    Directory of Open Access Journals (Sweden)

    Luciano Ramalho

    2011-04-01

For decades bibliographic data has been stored in non-relational databases, and thousands of libraries in developing countries still use ISIS databases to run their OPACs. Fast forward to 2010, and the NoSQL movement has shown that non-relational databases are good enough for Google, Amazon.com and Facebook. Meanwhile, several open source NoSQL systems have appeared. This paper discusses the data model of one class of NoSQL products, semistructured document-oriented databases exemplified by Apache CouchDB and MongoDB, and why they are well-suited to collective cataloging applications. Also shown are the methods, tools, and scripts used to convert, from ISIS to CouchDB, bibliographic records of LILACS, a key Latin American and Caribbean health sciences index operated by the Pan-American Health Organization.
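A sketch of why document stores fit bibliographic data: repeatable fields (authors, subjects) that are awkward in a fixed relational schema are plain lists in a JSON document, which is exactly what CouchDB or MongoDB stores. Identifiers and field values below are invented for illustration.

```python
import json

# A bibliographic record as a semistructured document: repeatable fields
# are simply lists, so no join tables or fixed column layout are needed.
record = {
    "_id": "lilacs-000001",                      # hypothetical identifier
    "title": "Example health sciences article",  # illustrative data only
    "authors": [{"name": "Silva, A."}, {"name": "Pérez, B."}],
    "subjects": ["Public Health", "Epidemiology"],
}
doc = json.dumps(record, ensure_ascii=False)     # the form a document store would keep
restored = json.loads(doc)
```

A record with one author or ten has the same shape, which is the property that makes collective cataloging (many libraries contributing heterogeneous records) workable without schema migrations.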

  8. Artificial intelligence techniques for modeling database user behavior

    Science.gov (United States)

    Tanner, Steve; Graves, Sara J.

    1990-01-01

    The design and development of the adaptive modeling system is described. This system models how a user accesses a relational database management system in order to improve its performance by discovering use access patterns. In the current system, these patterns are used to improve the user interface and may be used to speed data retrieval, support query optimization and support a more flexible data representation. The system models both syntactic and semantic information about the user's access and employs both procedural and rule-based logic to manipulate the model.
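A minimal sketch of the pattern-discovery idea, assuming nothing about the original system beyond counting which tables each query touches; the class and method names are invented for the example.

```python
from collections import Counter

class AccessModel:
    """Track which tables each query touches and rank them by use frequency.

    A crude stand-in for discovering 'use access patterns': the ranking could
    drive interface ordering or hint at what to cache or pre-fetch.
    """
    def __init__(self):
        self.counts = Counter()

    def record_query(self, tables):
        self.counts.update(tables)

    def ranked_tables(self):
        # Most frequently accessed tables first.
        return [t for t, _ in self.counts.most_common()]
```

This captures only the syntactic side (which objects are touched); the system described also models semantic information and applies rule-based logic on top, which a frequency counter alone does not represent.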

  9. Database application for changing data models in environmental engineering

    Energy Technology Data Exchange (ETDEWEB)

    Hussels, Ulrich; Camarinopoulos, Stephanos; Luedtke, Torsten; Pampoukis, Georgios [RISA Sicherheitsanalysen GmbH, Berlin-Charlottenburg (Germany)

    2013-07-01

Whenever a technical task is to be solved with the help of a database application and uncertainties exist regarding the structure, scope or level of detail of the data model (either currently or in the future), the use of a generic database application can considerably reduce the cost of implementation and maintenance. Simultaneously, the approach described in this contribution permits operating with different views on the data, and even finding and defining new views that had not been considered before. The prerequisite is that the preliminary information (structure as well as data) stored in the generic application matches the intended use. In that case, parts of the model developed with the generic approach can be reused, and the corresponding effort for a major rebuild is saved. This significantly reduces the development time. At the same time, flexibility is achieved concerning the environmental data model, which is not given in the context of conventional developments. (orig.)

  10. NGNP Risk Management Database: A Model for Managing Risk

    Energy Technology Data Exchange (ETDEWEB)

    John Collins

    2009-09-01

    To facilitate the implementation of the Risk Management Plan, the Next Generation Nuclear Plant (NGNP) Project has developed and employed an analytical software tool called the NGNP Risk Management System (RMS). A relational database developed in Microsoft® Access, the RMS provides conventional database utility including data maintenance, archiving, configuration control, and query ability. Additionally, the tool’s design provides a number of unique capabilities specifically designed to facilitate the development and execution of activities outlined in the Risk Management Plan. Specifically, the RMS provides the capability to establish the risk baseline, document and analyze the risk reduction plan, track the current risk reduction status, organize risks by reference configuration system, subsystem, and component (SSC) and Area, and increase the level of NGNP decision making.
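A toy sketch of the relational layout described, with SQLite standing in for Microsoft Access; the schema, SSC names and scores are hypothetical. It shows the two capabilities the abstract highlights: organizing risks by SSC and tracking reduction against a baseline.

```python
import sqlite3

# Hypothetical schema: risks organized by SSC and Area, with baseline and
# current risk scores for tracking reduction status.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE risks (
    id INTEGER PRIMARY KEY,
    ssc TEXT, area TEXT,
    baseline INTEGER, current INTEGER)""")
conn.executemany(
    "INSERT INTO risks (ssc, area, baseline, current) VALUES (?, ?, ?, ?)",
    [("Reactor", "Design", 5, 3), ("Reactor", "Design", 4, 4), ("IHX", "Testing", 3, 1)],
)
# Risk-reduction status per SSC: how far each system has moved from baseline.
status = conn.execute(
    "SELECT ssc, SUM(baseline - current) FROM risks GROUP BY ssc ORDER BY ssc"
).fetchall()
```

Conventional GROUP BY queries like this one are what a relational risk register buys over a spreadsheet: the same rows answer per-SSC, per-Area, or project-wide status questions without restructuring the data.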

  11. Cambridge Structural Database as a tool for studies of general structural features of organic molecular crystals

    International Nuclear Information System (INIS)

    Kuleshova, Lyudmila N; Antipin, Mikhail Yu

    1999-01-01

    The review surveys and generalises data on the use of the Cambridge Structural Database (CSD) for studying and revealing general structural features of organic molecular crystals. It is demonstrated that software and facilities of the CSD allow one to test the applicability of a number of known concepts of organic crystal chemistry (the principle of close packing, the frequency of occurrence of space groups, the preferred formation of centrosymmetrical molecular crystals, etc.) on the basis of abundant statistical data. Examples of the use of the Cambridge Structural Database in engineering of molecular crystals and in the systematic search for compounds with specified properties are given. The bibliography includes 122 references.

  12. Flare parameters inferred from a 3D loop model database

    Science.gov (United States)

    Cuambe, Valente A.; Costa, J. E. R.; Simões, P. J. A.

    2018-04-01

We developed a database of pre-calculated flare images and spectra exploring a set of parameters which describe the physical characteristics of coronal loops and the accelerated electron distribution. Due to the large number of parameters involved in describing the geometry and the flaring atmosphere in the model used (Costa et al. 2013), we built a large database of models (~250 000) to facilitate flare analysis. The geometry and characteristics of the non-thermal electrons are defined on a discrete grid with spatial resolution greater than 4 arcsec. The database was constructed based on general properties of known solar flares and convolved with instrumental resolution to replicate the observations from Nobeyama radio polarimeter (NoRP) spectra and Nobeyama radioheliograph (NoRH) brightness maps. Observed spectra and brightness distribution maps are easily compared with the modelled spectra and images in the database, indicating a possible range of solutions. The parameter search efficiency in this finite database is discussed. Eight out of ten parameters analysed for one thousand simulated flare searches were recovered with a relative error of less than 20 per cent on average. In addition, some statistical properties were derived from the analysis of the observed correlation between NoRH flare sizes and intensities at 17 GHz. From these statistics the energy spectral index was found to be δ ~ 3, with non-thermal electron densities showing a peak distribution ≲ 10⁷ cm⁻³, and B_photosphere ≳ 2000 G. Some bias towards larger loops with heights as great as ~2.6 × 10⁹ cm, and towards looptop events, was noted. An excellent match of the spectrum and the brightness distribution at 17 and 34 GHz of the 2002 May 31 flare is presented as well.
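At its core, searching such a finite database means finding the pre-calculated model whose synthetic observables best match the observation. A minimal sketch of that lookup (field names and toy "spectra" are invented; the real search compares NoRP spectra and NoRH maps):

```python
def nearest_model(grid, observed):
    """Return the grid entry whose precomputed spectrum best matches the
    observation, by least summed squared error."""
    return min(
        grid,
        key=lambda m: sum((a - b) ** 2 for a, b in zip(m["spectrum"], observed)),
    )
```

Because the grid is finite and discrete, the result is a possible range of solutions around the best entry rather than a unique fit, which is why the abstract reports recovery statistics instead of exact parameter values.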

  13. Accelerating Information Retrieval from Profile Hidden Markov Model Databases.

    Science.gov (United States)

    Tamimi, Ahmad; Ashhab, Yaqoub; Tamimi, Hashem

    2016-01-01

    Profile Hidden Markov Model (Profile-HMM) is an efficient statistical approach to represent protein families. Currently, several databases maintain valuable protein sequence information as profile-HMMs. There is an increasing interest to improve the efficiency of searching Profile-HMM databases to detect sequence-profile or profile-profile homology. However, most efforts to enhance searching efficiency have been focusing on improving the alignment algorithms. Although the performance of these algorithms is fairly acceptable, the growing size of these databases, as well as the increasing demand for using batch query searching approach, are strong motivations that call for further enhancement of information retrieval from profile-HMM databases. This work presents a heuristic method to accelerate the current profile-HMM homology searching approaches. The method works by cluster-based remodeling of the database to reduce the search space, rather than focusing on the alignment algorithms. Using different clustering techniques, 4284 TIGRFAMs profiles were clustered based on their similarities. A representative for each cluster was assigned. To enhance sensitivity, we proposed an extended step that allows overlapping among clusters. A validation benchmark of 6000 randomly selected protein sequences was used to query the clustered profiles. To evaluate the efficiency of our approach, speed and recall values were measured and compared with the sequential search approach. Using hierarchical, k-means, and connected component clustering techniques followed by the extended overlapping step, we obtained an average reduction in time of 41%, and an average recall of 96%. Our results demonstrate that representation of profile-HMMs using a clustering-based approach can significantly accelerate data retrieval from profile-HMM databases.
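The cluster-based remodeling can be sketched as a two-stage search: score the query against cluster representatives first, then scan only the members of the best clusters. The similarity function and numeric "profiles" below are toy stand-ins for profile-HMM alignment scores.

```python
def cluster_search(clusters, score, query, n_clusters=1):
    """Two-stage search over a clustered database.

    Stage 1: rank clusters by the score of their representative.
    Stage 2: scan only members of the top n_clusters clusters.
    Overlap between clusters is allowed simply by listing a profile in
    several clusters' member lists, which raises recall.
    """
    ranked = sorted(
        clusters, key=lambda cl: score(query, cl["representative"]), reverse=True
    )
    best = None
    for cl in ranked[:n_clusters]:
        for profile in cl["members"]:
            s = score(query, profile)
            if best is None or s > best[1]:
                best = (profile, s)
    return best
```

The speed-up comes from stage 1: with k clusters of roughly n/k members each, a query costs about k + n/k scorings instead of n, at the price of missing hits whose cluster representative scores poorly, hence the reported ~96% (not 100%) recall.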

  14. Accelerating Information Retrieval from Profile Hidden Markov Model Databases.

    Directory of Open Access Journals (Sweden)

    Ahmad Tamimi

Profile Hidden Markov Model (Profile-HMM) is an efficient statistical approach to represent protein families. Currently, several databases maintain valuable protein sequence information as profile-HMMs. There is an increasing interest to improve the efficiency of searching Profile-HMM databases to detect sequence-profile or profile-profile homology. However, most efforts to enhance searching efficiency have been focusing on improving the alignment algorithms. Although the performance of these algorithms is fairly acceptable, the growing size of these databases, as well as the increasing demand for using batch query searching approach, are strong motivations that call for further enhancement of information retrieval from profile-HMM databases. This work presents a heuristic method to accelerate the current profile-HMM homology searching approaches. The method works by cluster-based remodeling of the database to reduce the search space, rather than focusing on the alignment algorithms. Using different clustering techniques, 4284 TIGRFAMs profiles were clustered based on their similarities. A representative for each cluster was assigned. To enhance sensitivity, we proposed an extended step that allows overlapping among clusters. A validation benchmark of 6000 randomly selected protein sequences was used to query the clustered profiles. To evaluate the efficiency of our approach, speed and recall values were measured and compared with the sequential search approach. Using hierarchical, k-means, and connected component clustering techniques followed by the extended overlapping step, we obtained an average reduction in time of 41%, and an average recall of 96%. Our results demonstrate that representation of profile-HMMs using a clustering-based approach can significantly accelerate data retrieval from profile-HMM databases.

  15. Towards a Component Based Model for Database Systems

    Directory of Open Access Journals (Sweden)

    Octavian Paul ROTARU

    2004-02-01

Due to their effectiveness in the design and development of software applications and due to their recognized advantages in terms of reusability, Component-Based Software Engineering (CBSE) concepts have been arousing a great deal of interest in recent years. This paper presents and extends a component-based approach to object-oriented database systems (OODB) introduced by us in [1] and [2]. Components are proposed as a new abstraction level for database systems: logical partitions of the schema. In this context, scope is introduced as an escalated property of transactions. Components are studied from the integrity, consistency, and concurrency control perspectives. The main benefits of our proposed component model for OODB are the reusability of the database design, including the access statistics required for proper query optimization, and a smooth information exchange. The integration of crosscutting concerns into the component database model using aspect-oriented techniques is also discussed. One of the main goals is to define a method for the assessment of component composition capabilities. These capabilities are restricted by the component's interface and measured in terms of adaptability, degree of composability and acceptability level. The above-mentioned metrics are extended from database components to generic software components. This paper extends and consolidates into one common view the ideas previously presented by us in [1, 2, 3]. [1] Octavian Paul Rotaru, Marian Dobre, Component Aspects in Object Oriented Databases, Proceedings of the International Conference on Software Engineering Research and Practice (SERP'04), Volume II, ISBN 1-932415-29-7, pages 719-725, Las Vegas, NV, USA, June 2004. [2] Octavian Paul Rotaru, Marian Dobre, Mircea Petrescu, Integrity and Consistency Aspects in Component-Oriented Databases, Proceedings of the International Symposium on Innovation in Information and Communication Technology (ISIICT

  16. Applications of the Cambridge Structural Database in organic chemistry and crystal chemistry.

    Science.gov (United States)

    Allen, Frank H; Motherwell, W D Samuel

    2002-06-01

    The Cambridge Structural Database (CSD) and its associated software systems have formed the basis for more than 800 research applications in structural chemistry, crystallography and the life sciences. Relevant references, dating from the mid-1970s, and brief synopses of these papers are collected in a database, DBUse, which is freely available via the CCDC website. This database has been used to review research applications of the CSD in organic chemistry, including supramolecular applications, and in organic crystal chemistry. The review concentrates on applications that have been published since 1990 and covers a wide range of topics, including structure correlation, conformational analysis, hydrogen bonding and other intermolecular interactions, studies of crystal packing, extended structural motifs, crystal engineering and polymorphism, and crystal structure prediction. Applications of CSD information in studies of crystal structure precision, the determination of crystal structures from powder diffraction data, together with applications in chemical informatics, are also discussed.

  17. ECOS E-MATRIX Methane and Volatile Organic Carbon (VOC) Emissions Best Practices Database

    Energy Technology Data Exchange (ETDEWEB)

    Parisien, Lia [The Environmental Council Of The States, Washington, DC (United States)

    2016-01-31

    This final scientific/technical report on the ECOS e-MATRIX Methane and Volatile Organic Carbon (VOC) Emissions Best Practices Database provides a disclaimer and acknowledgement, table of contents, executive summary, description of project activities, and briefing/technical presentation link.

  18. Using the Cambridge Structural Database to Teach Molecular Geometry Concepts in Organic Chemistry

    Science.gov (United States)

    Wackerly, Jay Wm.; Janowicz, Philip A.; Ritchey, Joshua A.; Caruso, Mary M.; Elliott, Erin L.; Moore, Jeffrey S.

    2009-01-01

    This article reports a set of two homework assignments that can be used in a second-year undergraduate organic chemistry class. These assignments were designed to help reinforce concepts of molecular geometry and to give students the opportunity to use a technological database and data mining to analyze experimentally determined chemical…

  19. CyanOmics: an integrated database of omics for the model cyanobacterium Synechococcus sp. PCC 7002.

    Science.gov (United States)

    Yang, Yaohua; Feng, Jie; Li, Tao; Ge, Feng; Zhao, Jindong

    2015-01-01

Cyanobacteria are an important group of organisms that carry out oxygenic photosynthesis and play vital roles in both the carbon and nitrogen cycles of the Earth. The annotated genome of Synechococcus sp. PCC 7002, an ideal model cyanobacterium, is available. A series of transcriptomic and proteomic studies of Synechococcus sp. PCC 7002 cells grown under different conditions has been reported. However, no database of such integrated omics studies has been constructed. Here we present CyanOmics, a database based on the results of Synechococcus sp. PCC 7002 omics studies. CyanOmics comprises one genomic dataset, 29 transcriptomic datasets and one proteomic dataset, and should prove useful for systematic and comprehensive analysis of all those data. Powerful browsing and searching tools are integrated to help users directly access information of interest with enhanced visualization of the analytical results. Furthermore, Blast is included for sequence-based similarity searching, and Cluster 3.0 as well as the R hclust function are provided for cluster analyses, to increase CyanOmics's usefulness. To the best of our knowledge, it is the first integrated omics analysis database for cyanobacteria. This database should further the understanding of transcriptional patterns and proteomic profiling of Synechococcus sp. PCC 7002 and other cyanobacteria. Additionally, the entire database framework is applicable to any sequenced prokaryotic genome and could be applied to other integrated omics analysis projects. Database URL: http://lag.ihb.ac.cn/cyanomics. © The Author(s) 2015. Published by Oxford University Press.
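The hclust-style grouping such a database offers can be illustrated with a tiny single-linkage agglomerative clustering over expression-like profiles. This is a pure-Python sketch of the algorithm family, not the R implementation or Cluster 3.0.

```python
def single_linkage(points, k):
    """Merge the two closest clusters until k remain (single linkage).

    points: sequence of equal-length numeric tuples (e.g. expression profiles).
    Returns a sorted list of clusters, each a sorted list of point indices.
    """
    clusters = [[i] for i in range(len(points))]

    def dist(a, b):
        # Single linkage: minimum squared distance over cross-cluster pairs.
        return min(
            sum((points[i][d] - points[j][d]) ** 2 for d in range(len(points[i])))
            for i in a for j in b
        )

    while len(clusters) > k:
        _, x, y = min(
            (dist(clusters[x], clusters[y]), x, y)
            for x in range(len(clusters))
            for y in range(x + 1, len(clusters))
        )
        clusters[x].extend(clusters[y])
        del clusters[y]
    return sorted(sorted(c) for c in clusters)
```

In an omics setting the indices would be genes and the tuples their expression across the 29 transcriptomic conditions, so co-regulated genes fall into the same cluster.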

  20. Database on organic composite materials for cryogenic use. Goku teion yo yuki fukugo zairyo (yuki zairyo) no database

    Energy Technology Data Exchange (ETDEWEB)

    Nishijima, S.; Okada, T. (Osaka Univ., Osaka (Japan). Institute of Scientific and Industrial Research)

    1990-10-25

A description is given of a database (DB) on organic composite materials for cryogenic use, which has been set up at the Institute of Scientific and Industrial Research of Osaka University. The principal features of the DB are as follows: first, the DB holds only those data on the physical properties of the materials that have been obtained with the measuring apparatus available at the institute, so that the data are free from quality limitations arising from the use of different measuring methods; second, the name of the supplier and the trade name of each material are included in its data as subsidiary information, so that users of the DB can easily obtain the material in case of need. Concerning usage of the DB, the arrangement of directories in the DB, the meanings of some abbreviations, and the kinds of contents each data file holds are described. It is noted that the DB will be supplied in the form of a floppy disk on which the data are recorded with the aid of commercially available software. 3 refs., 6 figs., 4 tabs.

  1. Filling Terrorism Gaps: VEOs, Evaluating Databases, and Applying Risk Terrain Modeling to Terrorism

    Energy Technology Data Exchange (ETDEWEB)

    Hagan, Ross F. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-08-29

    This paper aims to address three issues: the lack of literature differentiating terrorism and violent extremist organizations (VEOs), terrorism incident databases, and the applicability of Risk Terrain Modeling (RTM) to terrorism. Current open source literature and publicly available government sources do not differentiate between terrorism and VEOs; furthermore, they fail to define them. Addressing the lack of a comprehensive comparison of existing terrorism data sources, a matrix comparing a dozen terrorism databases is constructed, providing insight toward the array of data available. RTM, a method for spatial risk analysis at a micro level, has some applicability to terrorism research, particularly for studies looking at risk indicators of terrorism. Leveraging attack data from multiple databases, combined with RTM, offers one avenue for closing existing research gaps in terrorism literature.

  2. Database and Interim Glass Property Models for Hanford HLW Glasses

    International Nuclear Information System (INIS)

    Hrma, Pavel R; Piepel, Gregory F; Vienna, John D; Cooley, Scott K; Kim, Dong-Sang; Russell, Renee L

    2001-01-01

    The purpose of this report is to provide a methodology for an increase in the efficiency and a decrease in the cost of vitrifying high-level waste (HLW) by optimizing HLW glass formulation. This methodology consists in collecting and generating a database of glass properties that determine HLW glass processability and acceptability and relating these properties to glass composition. The report explains how the property-composition models are developed, fitted to data, used for glass formulation optimization, and continuously updated in response to changes in HLW composition estimates and changes in glass processing technology. Further, the report reviews the glass property-composition literature data and presents their preliminary critical evaluation and screening. Finally the report provides interim property-composition models for melt viscosity, for liquidus temperature (with spinel and zircon primary crystalline phases), and for the product consistency test normalized releases of B, Na, and Li. Models were fitted to a subset of the screened database deemed most relevant for the current HLW composition region

  3. Database and Interim Glass Property Models for Hanford HLW Glasses

    Energy Technology Data Exchange (ETDEWEB)

    Hrma, Pavel R.; Piepel, Gregory F.; Vienna, John D.; Cooley, Scott K.; Kim, Dong-Sang; Russell, Renee L.

    2001-07-24

    The purpose of this report is to provide a methodology for an increase in the efficiency and a decrease in the cost of vitrifying high-level waste (HLW) by optimizing HLW glass formulation. This methodology consists of collecting and generating a database of glass properties that determine HLW glass processability and acceptability, and of relating these properties to glass composition. The report explains how the property-composition models are developed, fitted to data, used for glass formulation optimization, and continuously updated in response to changes in HLW composition estimates and changes in glass processing technology. Further, the report reviews the glass property-composition literature data and presents their preliminary critical evaluation and screening. Finally, the report provides interim property-composition models for melt viscosity, for liquidus temperature (with spinel and zircon primary crystalline phases), and for the product consistency test normalized releases of B, Na, and Li. Models were fitted to a subset of the screened database deemed most relevant for the current HLW composition region.

  4. Object-oriented modeling and design of database federations

    NARCIS (Netherlands)

    Balsters, H.

    2003-01-01

    We describe a logical architecture and a general semantic framework for precise specification of so-called database federations. A database federation provides for tight coupling of a collection of heterogeneous component databases into a global integrated system. Our approach to database federation

  5. Evolution of computational models in BioModels Database and the Physiome Model Repository.

    Science.gov (United States)

    Scharm, Martin; Gebhardt, Tom; Touré, Vasundra; Bagnacani, Andrea; Salehzadeh-Yazdi, Ali; Wolkenhauer, Olaf; Waltemath, Dagmar

    2018-04-12

    A useful model is one that is being (re)used. The development of a successful model does not finish with its publication. During reuse, models are being modified, i.e. expanded, corrected, and refined. Even small changes in the encoding of a model can, however, significantly affect its interpretation. Our motivation for the present study is to identify changes in models and make them transparent and traceable. We analysed 13734 models from BioModels Database and the Physiome Model Repository. For each model, we studied the frequencies and types of updates between its first and latest release. To demonstrate the impact of changes, we explored the history of a Repressilator model in BioModels Database. We observed continuous updates in the majority of models. Surprisingly, even the early models are still being modified. We furthermore detected that many updates target annotations, which improves the information one can gain from models. To support the analysis of changes in model repositories we developed MoSt, an online tool for visualisations of changes in models. The scripts used to generate the data and figures for this study are available from GitHub https://github.com/binfalse/BiVeS-StatsGenerator and as a Docker image at https://hub.docker.com/r/binfalse/bives-statsgenerator/. The website https://most.bio.informatik.uni-rostock.de/ provides interactive access to model versions and their evolutionary statistics. The reuse of models is still impeded by a lack of trust and documentation. A detailed and transparent documentation of all aspects of the model, including its provenance, will improve this situation. Knowledge about a model's provenance can avoid the repetition of mistakes that others already faced. More insights are gained into how the system evolves from initial findings to a profound understanding. We argue that it is the responsibility of the maintainers of model repositories to offer transparent model provenance to their users.
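The record above studies how encoded models change between releases. As a minimal sketch of that idea (not the BiVeS/MoSt implementation), the following uses Python's stdlib difflib to classify line-level changes between two hypothetical versions of an XML-encoded model; the element names and values are invented for illustration:

```python
import difflib

# Two releases of a hypothetical SBML-like model fragment (illustrative only).
v1 = [
    '<species id="X" initialConcentration="1.0"/>',
    '<reaction id="R1" reversible="true"/>',
]
v2 = [
    '<species id="X" initialConcentration="2.0"/>',
    '<reaction id="R1" reversible="true"/>',
    '<annotation resource="urn:miriam:obo.go:GO%3A0006355"/>',
]

def classify_changes(old, new):
    """Count inserted/deleted/replaced lines between two model versions."""
    counts = {"insert": 0, "delete": 0, "replace": 0}
    sm = difflib.SequenceMatcher(a=old, b=new)
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op == "insert":
            counts["insert"] += j2 - j1
        elif op == "delete":
            counts["delete"] += i2 - i1
        elif op == "replace":
            counts["replace"] += max(i2 - i1, j2 - j1)
    return counts

print(classify_changes(v1, v2))  # → {'insert': 1, 'delete': 0, 'replace': 1}
```

A real comparison would diff the XML tree rather than raw lines, which is exactly the kind of structural awareness a dedicated tool provides.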

  6. Modeling, Measurements, and Fundamental Database Development for Nonequilibrium Hypersonic Aerothermodynamics

    Science.gov (United States)

    Bose, Deepak

    2012-01-01

    The design of entry vehicles requires predictions of the aerothermal environment during the hypersonic phase of their flight trajectories. These predictions are made using computational fluid dynamics (CFD) codes that often rely on physics and chemistry models of nonequilibrium processes. The primary processes of interest are gas phase chemistry, internal energy relaxation, electronic excitation, nonequilibrium emission and absorption of radiation, and gas-surface interaction leading to surface recession and catalytic recombination. NASA's Hypersonics Project is advancing the state of the art in modeling of nonequilibrium phenomena by making detailed spectroscopic measurements in shock tubes and arcjets, using ab initio quantum mechanical techniques to develop fundamental chemistry and spectroscopic databases, making fundamental measurements of finite-rate gas-surface interactions, and implementing detailed mechanisms in state-of-the-art CFD codes. The development of new models is based on validation with relevant experiments. We will present the latest developments and a roadmap for the technical areas mentioned above.

  7. Coarse sediment and oil database and fate model

    International Nuclear Information System (INIS)

    Humphrey, B.; Owens, E.H.; Patrick, G.

    1992-09-01

    Oil spills in Canadian waters have a high probability of impacting coarse sediment beaches, and there is a need to be able to predict oil fate and estimate natural self-cleaning rates. Data are lacking on many oil-sediment interactions and shoreline interactions have historically been considered using fairly simple concepts. The processes which may occur on a coarse sediment beach were examined. Those considered important are developed into a fate and persistence model for stranded oil. The processes are divided into stages relative to the spill event, and the factors which affect each stage were evaluated. Three areas of special interest are the capacity of a beach to hold oil, the residual capacity of a beach for oil, and the long-term fate of the oil. Model algorithms are developed and the outputs compared to a database of information collected during the Exxon Valdez spill. The database includes files relating to the location and wave energy of beach sediments, surface oil cover for the segments at various times, subsurface oil character, and pit oiling data. Over 10,000 oil cover records are included, from January 1990 to August 1991, along with some total hydrocarbon data. The model provides information at two levels: a general level which can be used for planning and sensitivity mapping, and a more detailed model for prediction of oil fate on specific known beaches. The strengths and weaknesses of the model are assessed in terms of data deficiencies. The type and nature of data most useful for spill planning and monitoring are identified. 42 refs., 23 figs., 7 tabs

  8. Models in Translational Oncology: A Public Resource Database for Preclinical Cancer Research.

    Science.gov (United States)

    Galuschka, Claudia; Proynova, Rumyana; Roth, Benjamin; Augustin, Hellmut G; Müller-Decker, Karin

    2017-05-15

    The devastating diseases of human cancer are mimicked in basic and translational cancer research by a steadily increasing number of tumor models, a situation requiring a platform with standardized reports to share model data. The Models in Translational Oncology (MiTO) database was developed as a unique Web platform aiming for a comprehensive overview of preclinical models covering genetically engineered organisms, models of transplantation, chemical/physical induction, or spontaneous development, reviewed here. MiTO serves data entry for metastasis profiles and interventions. Moreover, cell lines and animal lines including tool strains can be recorded. Hyperlinks for connection with other databases and file uploads as supplementary information are supported. Several communication tools are offered to facilitate exchange of information. Notably, intellectual property can be protected prior to publication by inventor-defined accessibility of any given model. Data recall is via a highly configurable keyword search. Genome editing is expected to result in changes of the spectrum of model organisms, a reason to open MiTO for species-independent data. Registered users may deposit their own model fact sheets (FS). MiTO experts check them for plausibility. Independently, manually curated FS are provided to principal investigators for revision and publication. Importantly, noneditable versions of reviewed FS can be cited in peer-reviewed journals. Cancer Res; 77(10); 2557-63. ©2017 AACR.

  9. Multiscale modeling of alloy solidification using a database approach

    Science.gov (United States)

    Tan, Lijian; Zabaras, Nicholas

    2007-11-01

    A two-scale model based on a database approach is presented to investigate alloy solidification. Appropriate assumptions are introduced to describe the behavior of macroscopic temperature, macroscopic concentration, liquid volume fraction and microstructure features. These assumptions lead to a macroscale model with two unknown functions: liquid volume fraction and microstructure features. These functions are computed using information from microscale solutions of selected problems. This work addresses the selection of sample problems relevant to the problem of interest and the utilization of data from the microscale solution of the selected sample problems. A computationally efficient model, which is different from the microscale and macroscale models, is utilized to find relevant sample problems. In this work, the computationally efficient model is a sharp-interface solidification model of a pure material. Similarities between the sample problems and the problem of interest are explored by assuming that the liquid volume fraction and microstructure features are functions of solution features extracted from the solution of the computationally efficient model. The solution features of the computationally efficient model are selected as the interface velocity and thermal gradient in the liquid at the time the sharp solid-liquid interface passes through. An analytical solution of the computationally efficient model is utilized to select sample problems relevant to solution features obtained at any location of the domain of the problem of interest. The microscale solution of selected sample problems is then utilized to evaluate the two unknown functions (liquid volume fraction and microstructure features) in the macroscale model. The temperature solution of the macroscale model is further used to improve the estimation of the liquid volume fraction and microstructure features. Interpolation is utilized in the feature space to greatly reduce the number of required
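The database lookup step described above can be sketched as follows: microscale results indexed by solution features (interface velocity, thermal gradient) are interpolated to estimate the liquid volume fraction at a query point. The sample values and the inverse-distance-weighting scheme below are illustrative assumptions, not the paper's actual interpolation method:

```python
import math

# Precomputed microscale samples, keyed by (interface velocity, thermal gradient).
# The liquid-fraction values are invented for illustration.
samples = {
    (0.1, 10.0): 0.80,
    (0.5, 10.0): 0.55,
    (0.1, 50.0): 0.70,
    (0.5, 50.0): 0.40,
}

def interpolate(v, g, power=2.0):
    """Inverse-distance-weighted estimate of liquid fraction at feature point (v, g)."""
    num = den = 0.0
    for (sv, sg), frac in samples.items():
        d = math.hypot(v - sv, (g - sg) / 100.0)  # crude feature scaling
        if d == 0.0:
            return frac  # query coincides with a stored sample
        w = 1.0 / d ** power
        num += w * frac
        den += w
    return num / den

print(interpolate(0.1, 10.0))  # → 0.8 (exact sample hit)
print(interpolate(0.3, 30.0))  # a blend of the four surrounding samples
```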

  10. On the Perceptual Organization of Image Databases Using Cognitive Discriminative Biplots

    Directory of Open Access Journals (Sweden)

    Spiros Fotopoulos

    2007-01-01

    Full Text Available A human-centered approach to image database organization is presented in this study. The management of a generic image database is pursued using a standard psychophysical experimental procedure followed by a well-suited data analysis methodology that is based on simple geometrical concepts. The end result is a cognitive discriminative biplot, which is a visualization of the intrinsic organization of the image database that best reflects the user's perception. The discriminating power of the introduced cognitive biplot makes it an appealing tool for image retrieval and a flexible interface for visual data mining tasks. These ideas were evaluated in two ways. First, the separability of semantically distinct image classes was measured according to their reduced representations on the biplot. Then, a nearest-neighbor retrieval scheme was run on the emerged low-dimensional terrain to measure the suitability of the biplot for performing content-based image retrieval (CBIR). The achieved organization performance was found superior to that of a contemporary system. This promoted the further discussion of packing these ideas into a realizable algorithmic procedure for an efficient and effective personalized CBIR system.

  11. BUSINESS MODELLING AND DATABASE DESIGN IN CLOUD COMPUTING

    Directory of Open Access Journals (Sweden)

    Mihai-Constantin AVORNICULUI

    2015-04-01

    Full Text Available Electronic commerce has grown constantly from one year to the next over the last decade; few areas register comparable growth. It covers computerized data exchange, but also electronic messaging, online data banks and electronic payment transfer. Cloud computing, a relatively new concept and term, is a model for on-demand internet access to distributed systems of configurable computing resources, which can be made available quickly with minimum management effort and intervention from the client and the provider. Behind an electronic commerce system in the cloud there is a database which contains the information necessary for the transactions in the system. Using business modelling, we gain many benefits, which makes the design of the database used by electronic commerce systems in the cloud considerably easier.

  12. Data-based Non-Markovian Model Inference

    Science.gov (United States)

    Ghil, Michael

    2015-04-01

    This talk concentrates on obtaining stable and efficient data-based models for simulation and prediction in the geosciences and life sciences. The proposed model derivation relies on using a multivariate time series of partial observations from a large-dimensional system, and the resulting low-order models are compared with the optimal closures predicted by the non-Markovian Mori-Zwanzig formalism of statistical physics. Multilayer stochastic models (MSMs) are introduced as both a very broad generalization and a time-continuous limit of existing multilevel, regression-based approaches to data-based closure, in particular of empirical model reduction (EMR). We show that the multilayer structure of MSMs can provide a natural Markov approximation to the generalized Langevin equation (GLE) of the Mori-Zwanzig formalism. A simple correlation-based stopping criterion for an EMR-MSM model is derived to assess how well it approximates the GLE solution. Sufficient conditions are given for the nonlinear cross-interactions between the constitutive layers of a given MSM to guarantee the existence of a global random attractor. This existence ensures that no blow-up can occur for a very broad class of MSM applications. The EMR-MSM methodology is first applied to a conceptual, nonlinear, stochastic climate model of coupled slow and fast variables, in which only slow variables are observed. The resulting reduced model with energy-conserving nonlinearities captures the main statistical features of the slow variables, even when there is no formal scale separation and the fast variables are quite energetic. Second, an MSM is shown to successfully reproduce the statistics of a partially observed, generalized Lotka-Volterra model of population dynamics in its chaotic regime. The positivity constraint on the solutions' components replaces here the quadratic-energy-preserving constraint of fluid-flow problems and it successfully prevents blow-up. This work is based on a close
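A one-layer toy version of the regression-based closure idea described above: EMR-type approaches fit the resolved tendency by regression and treat the residual as an additional stochastic layer. The generating equation, coefficients, and noise level below are invented for illustration and are far simpler than a real multilayer model:

```python
import random

random.seed(0)

# Synthetic "observed" time series from dx = -0.5*x*dt + noise (illustrative).
dt, n = 0.1, 2000
x, xs = 1.0, []
for _ in range(n):
    xs.append(x)
    x += -0.5 * x * dt + 0.02 * random.gauss(0.0, 1.0)

# Layer 1: least-squares fit of the resolved tendency dx/dt = a*x + b.
dxdt = [(xs[i + 1] - xs[i]) / dt for i in range(n - 1)]
xbar = sum(xs[:-1]) / (n - 1)
ybar = sum(dxdt) / (n - 1)
a = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(xs, dxdt)) / \
    sum((xi - xbar) ** 2 for xi in xs[:-1])
b = ybar - a * xbar

# Layer 2: the unresolved residual, modelled here only by its standard deviation.
res = [yi - (a * xi + b) for xi, yi in zip(xs, dxdt)]
sigma = (sum(r * r for r in res) / len(res)) ** 0.5

print(a, sigma)  # the fitted slope should sit near the generating value of -0.5
```

In a genuine MSM the residual itself would be regressed against further hidden layers rather than summarized by a single variance.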

  13. Database Description - SAHG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us SAHG Database Description General information of database Database name SAHG Alternative nam...h: Contact address Chie Motono Tel : +81-3-3599-8067 E-mail : Database classification Structure Databases - ...Protein structure Human and other Vertebrate Genomes - Human ORFs Protein sequence database...s - Protein properties Organism Taxonomy Name: Homo sapiens Taxonomy ID: 9606 Database description...42,577 domain-structure models in ~24900 unique human protein sequences from the RefSeq database. Features a

  14. Developing High-resolution Soil Database for Regional Crop Modeling in East Africa

    Science.gov (United States)

    Han, E.; Ines, A. V. M.

    2014-12-01

    The most readily available soil data for regional crop modeling in Africa is the World Inventory of Soil Emission potentials (WISE) dataset, which has 1125 soil profiles for the world, but does not extensively cover Ethiopia, Kenya, Uganda and Tanzania in East Africa. Another dataset available is the HC27 (Harvest Choice by IFPRI) in a gridded format (10km) but composed of generic soil profiles based on only three criteria (texture, rooting depth, and organic carbon content). In this paper, we present the development and application of a high-resolution (1km), gridded soil database for regional crop modeling in East Africa. Basic soil information is extracted from the Africa Soil Information Service (AfSIS), which provides essential soil properties (bulk density, soil organic carbon, soil pH and percentages of sand, silt and clay) for 6 different standardized soil layers (5, 15, 30, 60, 100 and 200 cm) in 1km resolution. Soil hydraulic properties (e.g., field capacity and wilting point) are derived from the AfSIS soil dataset using well-proven pedo-transfer functions and are customized for DSSAT-CSM soil data requirements. The crop model is used to evaluate crop yield forecasts using the new high-resolution soil database and compared with WISE and HC27. In this paper we also present the results of DSSAT loosely coupled with a hydrologic model (VIC) to assimilate root-zone soil moisture. Creating a grid-based soil database that provides a consistent soil input for two different models (DSSAT and VIC) is a critical part of this work. The created soil database is expected to contribute to future applications of DSSAT crop simulation in East Africa, where food security is highly vulnerable.
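The pedo-transfer step above maps texture and organic carbon to hydraulic properties. The sketch below shows the shape of such a function; the linear coefficients are invented for illustration and are not the published PTFs (e.g., Saxton-Rawls) that a study like this would actually use:

```python
def ptf_water_retention(sand, clay, organic_carbon):
    """
    Illustrative pedo-transfer function: estimate field capacity and wilting
    point (volumetric fraction) from texture (%) and organic carbon (%).
    The coefficients below are invented placeholders, not fitted values.
    """
    wilting_point = 0.026 + 0.005 * clay + 0.013 * organic_carbon
    field_capacity = 0.118 + 0.0045 * clay - 0.0008 * sand + 0.025 * organic_carbon
    return field_capacity, wilting_point

# Example: a clay-loam layer with 30% sand, 35% clay, 1.2% organic carbon.
fc, wp = ptf_water_retention(sand=30.0, clay=35.0, organic_carbon=1.2)
print(fc, wp, fc - wp)  # fc - wp approximates plant-available water capacity
```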

  15. Greedy Sampling and Incremental Surrogate Model-Based Tailoring of Aeroservoelastic Model Database for Flexible Aircraft

    Science.gov (United States)

    Wang, Yi; Pant, Kapil; Brenner, Martin J.; Ouellette, Jeffrey A.

    2018-01-01

    This paper presents a data analysis and modeling framework to tailor and develop a linear parameter-varying (LPV) aeroservoelastic (ASE) model database for flexible aircraft in a broad 2D flight parameter space. The Kriging surrogate model is constructed using ASE models at a fraction of grid points within the original model database, and then the ASE model at any flight condition can be obtained simply through surrogate model interpolation. The greedy sampling algorithm is developed to select the next sample point that carries the worst relative error between the surrogate model prediction and the benchmark model in the frequency domain among all input-output channels. The process is iterated to incrementally improve surrogate model accuracy till a pre-determined tolerance or iteration budget is met. The methodology is applied to the ASE model database of a flexible aircraft currently being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies indicate that the proposed method can reduce the number of models in the original database by 67%. Even so, the ASE models obtained through Kriging interpolation match the models in the original database constructed directly from the physics-based tool, with the worst relative error far below 1%. The interpolated ASE model exhibits continuously-varying gains along a set of prescribed flight conditions. More importantly, the selected grid points are distributed non-uniformly in the parameter space, a) capturing the distinctly different dynamic behavior and its dependence on flight parameters, and b) reiterating the need and utility for adaptive space sampling techniques for ASE model database compaction. The present framework is directly extendible to high-dimensional flight parameter space, and can be used to guide ASE model development, model order reduction, robust control synthesis and novel vehicle design of flexible aircraft.
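The greedy sampling loop described above can be sketched in one dimension. Here a piecewise-linear interpolant stands in for the Kriging surrogate and a simple analytic function stands in for the benchmark ASE models; both are illustrative assumptions, not the paper's implementation:

```python
import math

# Toy "model database": a scalar response over a 1D flight-parameter grid.
grid = [i / 50.0 for i in range(51)]
benchmark = {x: math.sin(6.0 * x) + 0.3 * x for x in grid}

def surrogate(x, knots):
    """Piecewise-linear surrogate through the sampled knots (stand-in for Kriging)."""
    pts = sorted(knots)
    if x <= pts[0]:
        return benchmark[pts[0]]
    for lo, hi in zip(pts, pts[1:]):
        if x <= hi:
            t = (x - lo) / (hi - lo)
            return (1 - t) * benchmark[lo] + t * benchmark[hi]
    return benchmark[pts[-1]]

# Greedy loop: start from the endpoints, repeatedly add the worst-error point.
knots = {grid[0], grid[-1]}
tol = 0.05
while True:
    worst_x = max(grid, key=lambda x: abs(surrogate(x, knots) - benchmark[x]))
    if abs(surrogate(worst_x, knots) - benchmark[worst_x]) < tol:
        break
    knots.add(worst_x)

print(f"{len(knots)} of {len(grid)} models retained")
```

The loop terminates because each iteration adds a point at which the error was at least `tol`, and with every grid point sampled the error is zero; the retained points cluster where the response bends most, mirroring the non-uniform sampling the paper reports.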

  16. LANL High-Level Model (HLM) database development letter report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-10-01

    Traditional methods of evaluating munitions have been able to successfully compare like munitions' capabilities. On the modern battlefield, however, many different types of munitions compete for the same set of targets. Assessing the overall stockpile capability and proper mix of these weapons is not a simple task, as their use depends upon the specific geographic region of the world, the threat capabilities, the tactics and operational strategy used by both the US and Threat commanders, and of course the type and quantity of munitions available to the CINC. To sort out these types of issues, a hierarchical set of dynamic, two-sided combat simulations are generally used. The DoD has numerous suitable models for this purpose, but rarely are the models focused on munitions expenditures. Rather, they are designed to perform overall platform assessments and force mix evaluations. However, in some cases, the models could be easily adapted to provide this information, since it is resident in the model's database. Unfortunately, these simulations' complexity (their greatest strength) precludes quick turnaround assessments of the type and scope required by senior decision-makers.

  17. MOSAIC: An organic geochemical and sedimentological database for marine surface sediments

    Science.gov (United States)

    Tavagna, Maria Luisa; Usman, Muhammed; De Avelar, Silvania; Eglinton, Timothy

    2015-04-01

    Modern ocean sediments serve as the interface between the biosphere and the geosphere, play a key role in biogeochemical cycles and provide a window on how contemporary processes are written into the sedimentary record. Research over past decades has resulted in a wealth of information on the content and composition of organic matter in marine sediments, with ever-more sophisticated techniques continuing to yield information of greater detail and at an accelerating pace. However, there has been no attempt to synthesize this wealth of information. We are establishing a new database that incorporates information relevant to local, regional and global-scale assessment of the content, source and fate of organic materials accumulating in contemporary marine sediments. In the MOSAIC (Modern Ocean Sediment Archive and Inventory of Carbon) database, particular emphasis is placed on molecular and isotopic information, coupled with relevant contextual information (e.g., sedimentological properties) relevant to elucidating factors that influence the efficiency and nature of organic matter burial. The main features of MOSAIC include: (i) Emphasis on continental margin sediments as major loci of carbon burial, and as the interface between terrestrial and oceanic realms; (ii) Bulk to molecular-level organic geochemical properties and parameters, including concentration and isotopic compositions; (iii) Inclusion of extensive contextual data regarding the depositional setting, in particular with respect to sedimentological and redox characteristics. The ultimate goal is to create an open-access instrument, available on the web, to be utilized for research and education by the international community, who can both contribute to and interrogate the database. Submission will be accomplished by means of a pre-configured table available on the MOSAIC webpage. The information in the filled tables will be checked and eventually imported, via the Structured Query Language (SQL), into

  18. Podiform chromite deposits--database and grade and tonnage models

    Science.gov (United States)

    Mosier, Dan L.; Singer, Donald A.; Moring, Barry C.; Galloway, John P.

    2012-01-01

    Chromite ((Mg, Fe++)(Cr, Al, Fe+++)2O4) is the only source for the metallic element chromium, which is used in the metallurgical, chemical, and refractory industries. Podiform chromite deposits are small magmatic chromite bodies formed in the ultramafic section of an ophiolite complex in the oceanic crust. These deposits have been found in midoceanic ridge, off-ridge, and suprasubduction tectonic settings. Most podiform chromite deposits are found in dunite or peridotite near the contact of the cumulate and tectonite zones in ophiolites. We have identified 1,124 individual podiform chromite deposits, based on a 100-meter spatial rule, and have compiled them in a database. Of these, 619 deposits have been used to create three new grade and tonnage models for podiform chromite deposits. The major podiform chromite model has a median tonnage of 11,000 metric tons and a mean grade of 45 percent Cr2O3. The minor podiform chromite model has a median tonnage of 100 metric tons and a mean grade of 43 percent Cr2O3. The banded podiform chromite model has a median tonnage of 650 metric tons and a mean grade of 42 percent Cr2O3. Observed frequency distributions are also given for grades of rhodium, iridium, ruthenium, palladium, and platinum. In resource assessment applications, both major and minor podiform chromite models may be used for any ophiolite complex regardless of its tectonic setting or ophiolite zone. Expected sizes of undiscovered podiform chromite deposits, with respect to degree of deformation or ore-forming process, may determine which model is appropriate. The banded podiform chromite model may be applicable for ophiolites in both suprasubduction and midoceanic ridge settings.
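In resource assessment, grade-and-tonnage models like those above are commonly treated as lognormal distributions summarized by their median. The sketch below simulates undiscovered-deposit tonnages from the major-model median reported in the record; the lognormal spread parameter is an assumed value for illustration, not taken from the report:

```python
import math
import random

random.seed(42)

# Major podiform chromite model: median tonnage 11,000 t (from the record).
# The lognormal spread (sigma of ln-tonnage) is an assumed illustrative value.
median_t = 11_000.0
sigma = 1.5

mu = math.log(median_t)  # for a lognormal, median = exp(mu)
draws = [random.lognormvariate(mu, sigma) for _ in range(100_000)]
draws.sort()

sim_median = draws[len(draws) // 2]
sim_mean = sum(draws) / len(draws)
print(round(sim_median), round(sim_mean))  # mean >> median: heavy right tail
```

The gap between mean and median illustrates why such models quote medians: a few large deposits dominate the aggregate tonnage.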

  19. Global and Regional Ecosystem Modeling: Databases of Model Drivers and Validation Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Olson, R.J.

    2002-03-19

    NPP for 0.5°-grid cells for which inventory, modeling, or remote-sensing tools were used to scale up the point measurements. Documentation of the content and organization of the EMDI databases is provided.

  20. MicrobesFlux: a web platform for drafting metabolic models from the KEGG database

    Directory of Open Access Journals (Sweden)

    Feng Xueyang

    2012-08-01

    Full Text Available Abstract Background Concurrent with the efforts currently underway in mapping microbial genomes using high-throughput sequencing methods, systems biologists are building metabolic models to characterize and predict cell metabolisms. One of the key steps in building a metabolic model is using multiple databases to collect and assemble essential information about genome-annotations and the architecture of the metabolic network for a specific organism. To speed up metabolic model development for a large number of microorganisms, we need a user-friendly platform to construct metabolic networks and to perform constraint-based flux balance analysis based on genome databases and experimental results. Results We have developed a semi-automatic, web-based platform (MicrobesFlux) for generating and reconstructing metabolic models for annotated microorganisms. MicrobesFlux is able to automatically download the metabolic network (including enzymatic reactions and metabolites) of ~1,200 species from the KEGG database (Kyoto Encyclopedia of Genes and Genomes) and then convert it to a metabolic model draft. The platform also provides diverse customized tools, such as gene knockouts and the introduction of heterologous pathways, for users to reconstruct the model network. The reconstructed metabolic network can be formulated to a constraint-based flux model to predict and analyze the carbon fluxes in microbial metabolisms. The simulation results can be exported in the SBML format (the Systems Biology Markup Language). Furthermore, we also demonstrated the platform functionalities by developing an FBA model (including 229 reactions) for a recently annotated bioethanol producer, Thermoanaerobacter sp. strain X514, to predict its biomass growth and ethanol production. Conclusion MicrobesFlux is an installation-free and open-source platform that enables biologists without prior programming knowledge to develop metabolic models for annotated microorganisms in the KEGG
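Constraint-based flux balance analysis, mentioned above, maximizes an objective flux subject to steady-state stoichiometry (S · v = 0) and flux bounds. The toy network below is invented for illustration and is solved by brute-force search over its two independent fluxes rather than by the linear-programming solver a real FBA tool (such as MicrobesFlux) would use:

```python
import itertools

# Toy metabolic network (illustrative, not the MicrobesFlux code):
#   R_up:    -> A      bounds [0, 10]  (substrate uptake)
#   R_maint: A ->      bounds [1, 10]  (non-growth maintenance drain)
#   R_grow:  A -> B    bounds [0, 10]
#   R_bio:   B ->      bounds [0, 10]  (biomass objective)
# Steady state requires the balance on each metabolite A and B to be zero.

bounds = {"R_up": (0, 10), "R_maint": (1, 10), "R_grow": (0, 10), "R_bio": (0, 10)}

best = None
step = 0.5
ups = [bounds["R_up"][0] + step * i for i in range(int(10 / step) + 1)]
maints = [bounds["R_maint"][0] + step * i for i in range(int(9 / step) + 1)]
for v_up, v_maint in itertools.product(ups, maints):
    v_grow = v_up - v_maint          # from the steady-state balance on A
    v_bio = v_grow                   # from the steady-state balance on B
    flux = {"R_up": v_up, "R_maint": v_maint, "R_grow": v_grow, "R_bio": v_bio}
    if all(lo <= flux[r] <= hi for r, (lo, hi) in bounds.items()):
        if best is None or v_bio > best[0]:
            best = (v_bio, flux)

print("max biomass flux:", best[0])  # 9.0: uptake limit minus maintenance
```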

  1. Global search tool for the Advanced Photon Source Integrated Relational Model of Installed Systems (IRMIS) database

    International Nuclear Information System (INIS)

    Quock, D.E.R.; Cianciarulo, M.B.

    2007-01-01

    The Integrated Relational Model of Installed Systems (IRMIS) is a relational database tool that has been implemented at the Advanced Photon Source to maintain an updated account of approximately 600 control system software applications, 400,000 process variables, and 30,000 control system hardware components. To effectively display this large amount of control system information to operators and engineers, IRMIS was initially built with nine Web-based viewers: Applications Organizing Index, IOC, PLC, Component Type, Installed Components, Network, Controls Spares, Process Variables, and Cables. However, since each viewer is designed to provide details from only one major category of the control system, the necessity for a one-stop global search tool for the entire database became apparent. The user requirements for extremely fast database search time and ease of navigation through search results led to the choice of Asynchronous JavaScript and XML (AJAX) technology in the implementation of the IRMIS global search tool. Unique features of the global search tool include a two-tier level of displayed search results, and a database data integrity validation and reporting mechanism.

  2. GIS-based hydrogeological databases and groundwater modelling

    Science.gov (United States)

    Gogu, Radu Constantin; Carabin, Guy; Hallet, Vincent; Peters, Valerie; Dassargues, Alain

    2001-12-01

    Reliability and validity of groundwater analysis strongly depend on the availability of large volumes of high-quality data. Putting all data into a coherent and logical structure supported by a computing environment helps ensure validity and availability and provides a powerful tool for hydrogeological studies. A hydrogeological geographic information system (GIS) database that offers facilities for groundwater-vulnerability analysis and hydrogeological modelling has been designed in Belgium for the Walloon region. Data from five river basins, chosen for their contrasting hydrogeological characteristics, have been included in the database, and a set of applications that have been developed now allow further advances. Interest is growing in the potential for integrating GIS technology and groundwater simulation models. A "loose-coupling" tool was created between the spatial-database scheme and the groundwater numerical model interface GMS (Groundwater Modelling System). Following time and spatial queries, the hydrogeological data stored in the database can be easily used within different groundwater numerical models.
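The "loose coupling" above passes the results of time and spatial queries to the model interface. A sketch of such a query against a hypothetical hydrogeological table, using Python's stdlib sqlite3 (the schema, well IDs, and values are invented for illustration):

```python
import sqlite3

# Hypothetical piezometric-head observation table (schema invented for illustration).
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE head_obs (
    well_id TEXT, x REAL, y REAL, obs_date TEXT, head_m REAL)""")
con.executemany(
    "INSERT INTO head_obs VALUES (?, ?, ?, ?, ?)",
    [
        ("W1", 1000.0, 2000.0, "2001-03-15", 104.2),
        ("W2", 1500.0, 2100.0, "2001-06-01", 98.7),
        ("W3", 9000.0, 9000.0, "2001-04-20", 87.3),
    ],
)

# Spatial + time query: wells inside a model sub-domain, within a stress period.
rows = con.execute(
    """SELECT well_id, head_m FROM head_obs
       WHERE x BETWEEN 500 AND 2000 AND y BETWEEN 1500 AND 2500
         AND obs_date BETWEEN '2001-01-01' AND '2001-05-01'
       ORDER BY obs_date"""
).fetchall()
print(rows)  # → [('W1', 104.2)]: W2 is outside the period, W3 outside the window
```

The result set is exactly what a coupling layer would hand to a numerical model's input writer.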

  3. Model organisms and target discovery.

    Science.gov (United States)

    Muda, Marco; McKenna, Sean

    2004-09-01

The wealth of information harvested from full genomic sequencing projects has not generated a parallel increase in the number of novel targets for therapeutic intervention. Several pharmaceutical companies have realized that novel drug targets can be identified and validated using simple model organisms. After decades of service in basic research laboratories, yeasts, worms, flies, fishes, and mice are now the cornerstones of modern drug discovery programs. © 2004 Elsevier Ltd. All rights reserved.

  4. A Propose Model For Distributed Database System On Academic ...

    African Journals Online (AJOL)

    This paper takes a look at distributed database systems and its implementation and suitability to the academic environment of Nigeria tertiary institutions. It also takes cognizance of network operating system since the implementation of distributed database system highly depends upon computer networks. A simplified ...

  5. Modelling of physical properties - databases, uncertainties and predictive power

    DEFF Research Database (Denmark)

    Gani, Rafiqul

Physical and thermodynamic properties in the form of raw data or estimated values for pure compounds and mixtures are important pre-requisites for performing tasks such as process design, simulation and optimization; computer aided molecular/mixture (product) design; and product-process analysis...... in the estimated/predicted property values, how to assess the quality and reliability of the estimated/predicted property values? The paper will review a class of models for prediction of physical and thermodynamic properties of organic chemicals and their mixtures based on the combined group contribution – atom...

  6. Generic models of deep formation water calculated with PHREEQC using the "gebo"-database

    Science.gov (United States)

    Bozau, E.; van Berk, W.

    2012-04-01

To identify processes during the use of formation waters for geothermal energy production, an extended hydrogeochemical thermodynamic database (named "gebo"-database) for the well-known and commonly used software PHREEQC has been developed by collecting and inserting data from the literature. The following solution master species: Fe(+2), Fe(+3), S(-2), C(-4), Si, Zn, Pb, and Al are added to the database "pitzer.dat", which is provided with the code PHREEQC. According to the solution master species, the necessary solution species and phases (solid phases and gases) are implemented. Furthermore, temperature and pressure adaptations of the mass action law constants, Pitzer parameters for the calculation of activity coefficients in waters of high ionic strength, and solubility equilibria among gaseous and aqueous species of CO2, methane, and hydrogen sulphide are implemented into the "gebo"-database. Combined with the "gebo"-database, the code PHREEQC can be used to test the behaviour of highly concentrated solutions (e.g. formation waters, brines). Chemical changes caused by temperature and pressure gradients, as well as by exposure of the water to the atmosphere and to technical equipment, can be modelled. To check the plausibility of the additional and adapted data/parameters, experimental solubility data from the literature (e.g. for sulfate and carbonate minerals) are compared to modelled mineral solubilities at elevated levels of Total Dissolved Solids (TDS), temperature, and pressure. First results show good matches between modelled and experimental mineral solubility for barite, celestite, anhydrite, and calcite in high-TDS waters, indicating the plausibility of the additional and adapted data and parameters. Furthermore, chemical parameters of geothermal wells in the North German Basin are used to test the "gebo"-database.
The analysed water composition (starting with the main cations and anions) is calculated by thermodynamic equilibrium reactions of pure water with the minerals found in

  7. GlobTherm, a global database on thermal tolerances for aquatic and terrestrial organisms

    Science.gov (United States)

    Bennett, Joanne M.; Calosi, Piero; Clusella-Trullas, Susana; Martínez, Brezo; Sunday, Jennifer; Algar, Adam C.; Araújo, Miguel B.; Hawkins, Bradford A.; Keith, Sally; Kühn, Ingolf; Rahbek, Carsten; Rodríguez, Laura; Singer, Alexander; Villalobos, Fabricio; Ángel Olalla-Tárraga, Miguel; Morales-Castilla, Ignacio

    2018-03-01

    How climate affects species distributions is a longstanding question receiving renewed interest owing to the need to predict the impacts of global warming on biodiversity. Is climate change forcing species to live near their critical thermal limits? Are these limits likely to change through natural selection? These and other important questions can be addressed with models relating geographical distributions of species with climate data, but inferences made with these models are highly contingent on non-climatic factors such as biotic interactions. Improved understanding of climate change effects on species will require extensive analysis of thermal physiological traits, but such data are both scarce and scattered. To overcome current limitations, we created the GlobTherm database. The database contains experimentally derived species' thermal tolerance data currently comprising over 2,000 species of terrestrial, freshwater, intertidal and marine multicellular algae, plants, fungi, and animals. The GlobTherm database will be maintained and curated by iDiv with the aim to keep expanding it, and enable further investigations on the effects of climate on the distribution of life on Earth.

  8. Validity of Physician Billing Claims to Identify Deceased Organ Donors in Large Healthcare Databases

    Science.gov (United States)

    Li, Alvin Ho-ting; Kim, S. Joseph; Rangrej, Jagadish; Scales, Damon C.; Shariff, Salimah; Redelmeier, Donald A.; Knoll, Greg; Young, Ann; Garg, Amit X.

    2013-01-01

Objective: We evaluated the validity of physician billing claims to identify deceased organ donors in large provincial healthcare databases. Methods: We conducted a population-based retrospective validation study of all deceased donors in Ontario, Canada from 2006 to 2011 (n = 988). We included all registered deaths during the same period (n = 458,074). Our main outcome measures included sensitivity, specificity, positive predictive value, and negative predictive value of various algorithms consisting of physician billing claims to identify deceased organ donors and organ-specific donors, compared to a reference standard of medical chart abstraction. Results: The best performing algorithm consisted of any one of 10 different physician billing claims. This algorithm had a sensitivity of 75.4% (95% CI: 72.6% to 78.0%) and a positive predictive value of 77.4% (95% CI: 74.7% to 80.0%) for the identification of deceased organ donors. As expected, specificity and negative predictive value were near 100%. The number of organ donors identified by the algorithm each year was similar to the expected value, and this included the pre-validation period (1991 to 2005). Algorithms to identify organ-specific donors performed poorly (e.g. sensitivity ranged from 0% for small intestine to 67% for heart; positive predictive values ranged from 0% for small intestine to 37% for heart). Interpretation: Primary data abstraction to identify deceased organ donors should be used whenever possible, particularly for the detection of organ-specific donations. The limitations of physician billing claims should be considered whenever they are used. PMID:23967114
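The validation metrics named in this record (sensitivity, specificity, PPV, NPV) follow directly from a 2x2 confusion matrix. A minimal sketch; the counts are illustrative values chosen only to roughly reproduce the reported sensitivity and PPV, not figures taken from the study:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Return sensitivity, specificity, PPV, and NPV as fractions."""
    sensitivity = tp / (tp + fn)   # true donors flagged by the algorithm
    specificity = tn / (tn + fp)   # non-donors correctly not flagged
    ppv = tp / (tp + fp)           # flagged cases that are true donors
    npv = tn / (tn + fn)           # unflagged cases that are true non-donors
    return sensitivity, specificity, ppv, npv

# Illustrative counts (hypothetical): 988 true donors, 458,074 total deaths.
sens, spec, ppv, npv = diagnostic_metrics(tp=745, fp=218, fn=243, tn=456868)
print(f"sensitivity={sens:.1%}  specificity={spec:.3%}  ppv={ppv:.1%}  npv={npv:.3%}")
```

With these counts the sketch yields roughly the 75.4% sensitivity and 77.4% PPV reported above, and near-100% specificity and NPV, as expected when true positives are rare relative to the population.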

  9. Representativeness of the Spinal Cord Injury Model Systems National Database.

    Science.gov (United States)

    Ketchum, Jessica M; Cuthbert, Jeffrey P; Deutsch, Anne; Chen, Yuying; Charlifue, Susan; Chen, David; Dijkers, Marcel P; Graham, James E; Heinemann, Allen W; Lammertse, Daniel P; Whiteneck, Gale G

    2018-02-01

Secondary analysis of prospectively collected observational data. To assess the representativeness of the Spinal Cord Injury Model Systems National Database (SCIMS-NDB) of all adults aged 18 years or older receiving inpatient rehabilitation in the United States (US) for new onset traumatic spinal cord injury (TSCI). Inpatient rehabilitation centers in the US. We compared demographic, functional status, and injury characteristics (nine categorical variables comprising 46 categories and two continuous variables) between the SCIMS-NDB (N = 5969) and UDS-PRO/eRehabData (N = 99,142) cases discharged from inpatient rehabilitation in 2000-2010. Negligible differences exist for age categories, sex, race/ethnicity, marital status, FIM Motor score, and time from injury to rehabilitation admission. Important differences (>10%) exist in mean age and preinjury occupational status; the SCIMS-NDB sample was younger and included a higher percentage of individuals who were employed (62.7 vs. 41.7%) and fewer who were retired (10.2 vs. 36.1%). Adults in the SCIMS-NDB are largely representative of the population of adults receiving inpatient rehabilitation for new onset TSCI in the US. However, users of the SCIMS-NDB may need to adjust statistically for differences in age and preinjury occupational status to improve generalizability of findings.

  10. Model Adequacy Analysis of Matching Record Versions in Nosql Databases

    Directory of Open Access Journals (Sweden)

    E. V. Tsviashchenko

    2015-01-01

The article investigates a model of matching record versions; the goal of this work is to analyse the model's adequacy. The model allows estimating the distribution of a user's processing time for record versions and the distribution of the record version count. The second variant of the model was used, according to which the time a client needs to process record versions depends explicitly on the number of updates performed by other users between the sequential updates performed by the current client. To demonstrate the model's adequacy, a real experiment was conducted on a cloud cluster of 10 virtual nodes provided by DigitalOcean. Ubuntu Server 14.04 was used as the operating system (OS), and the NoSQL system Riak was chosen for the experiments. Riak versions 2.0 and later provide the "dotted version vectors" (DVV) option, an extension of the classic vector clock. Its use guarantees that the number of versions simultaneously stored in the DB will not exceed the number of clients operating on a record in parallel, which is very important when conducting experiments. The application was developed using the Java library provided by Riak, and the processes ran directly on the nodes. Two records were used in the experiment: Z, the record whose versions are handled by clients, and RZ, a service record containing record update counters. The application algorithm can be briefly described as follows: every client reads the versions of record Z, processes its updates using the RZ record counters, and saves the treated record in the database while old versions are deleted from the DB. Then the client rereads the RZ record and increments the update counters for the other clients. After that, the client rereads the Z record, saves the necessary statistics, and evaluates the results of processing. In the case of a conflict arising from simultaneous updates of the RZ record, the client obtains all versions of that
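The DVV guarantee mentioned in this record builds on the partial order defined by version vectors: two versions conflict (are "siblings") exactly when neither clock dominates the other. A toy sketch of that ordering test, illustrative only and not Riak's implementation:

```python
# Toy version-vector ordering; clocks are {node_id: update_count} dicts.
def descends(a, b):
    """True if clock `a` dominates `b`, i.e. has seen every update in `b`."""
    return all(a.get(node, 0) >= count for node, count in b.items())

def concurrent(a, b):
    """Two versions conflict iff neither clock dominates the other."""
    return not descends(a, b) and not descends(b, a)

v1 = {"client_a": 2, "client_b": 1}  # written without seeing b's 2nd update
v2 = {"client_a": 1, "client_b": 2}  # written without seeing a's 2nd update
v3 = {"client_a": 2, "client_b": 2}  # has seen both clients' updates

print(concurrent(v1, v2))  # True  -> both versions must be kept as siblings
print(concurrent(v1, v3))  # False -> v3 supersedes v1 and may replace it
```

Dotted version vectors refine this scheme so that the stored sibling count stays bounded by the number of concurrently writing clients, the property the experiment relies on.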

  11. Conceptual data modeling on the KRR-1 and 2 decommissioning database

    International Nuclear Information System (INIS)

    Park, Hee Seoung; Park, Seung Kook; Lee, Kune Woo; Park, Jin Ho

    2002-01-01

A study of conceptual data modeling to realize the decommissioning database for KRR-1 and 2 was carried out. In this study, the current state of decommissioning databases abroad was investigated as a reference for the database. A scope for the construction of the decommissioning database was set up based on user requirements. Then, a theory of database construction was established and a classification scheme for the decommissioning information was drawn up. The facility information, work information, radioactive waste information, and radiological information handled by the decommissioning database were extracted through interviews with an expert group, and the system configuration of the decommissioning database was decided upon. A 17-bit code was produced considering the construction, scheme, and information. The results of the conceptual data modeling and the classification scheme will be used as basic data to create a prototype design of the decommissioning database

  12. Clinical Prediction Models for Cardiovascular Disease: Tufts Predictive Analytics and Comparative Effectiveness Clinical Prediction Model Database.

    Science.gov (United States)

    Wessler, Benjamin S; Lai Yh, Lana; Kramer, Whitney; Cangelosi, Michael; Raman, Gowri; Lutz, Jennifer S; Kent, David M

    2015-07-01

    Clinical prediction models (CPMs) estimate the probability of clinical outcomes and hold the potential to improve decision making and individualize care. For patients with cardiovascular disease, there are numerous CPMs available although the extent of this literature is not well described. We conducted a systematic review for articles containing CPMs for cardiovascular disease published between January 1990 and May 2012. Cardiovascular disease includes coronary heart disease, heart failure, arrhythmias, stroke, venous thromboembolism, and peripheral vascular disease. We created a novel database and characterized CPMs based on the stage of development, population under study, performance, covariates, and predicted outcomes. There are 796 models included in this database. The number of CPMs published each year is increasing steadily over time. Seven hundred seventeen (90%) are de novo CPMs, 21 (3%) are CPM recalibrations, and 58 (7%) are CPM adaptations. This database contains CPMs for 31 index conditions, including 215 CPMs for patients with coronary artery disease, 168 CPMs for population samples, and 79 models for patients with heart failure. There are 77 distinct index/outcome pairings. Of the de novo models in this database, 450 (63%) report a c-statistic and 259 (36%) report some information on calibration. There is an abundance of CPMs available for a wide assortment of cardiovascular disease conditions, with substantial redundancy in the literature. The comparative performance of these models, the consistency of effects and risk estimates across models and the actual and potential clinical impact of this body of literature is poorly understood. © 2015 American Heart Association, Inc.

  13. Identification of organization name variants in large databases using rule-based scoring and clustering: With a case study on the web of science database

    NARCIS (Netherlands)

    E.A.M. Caron (Emiel); H.A.M. Daniels (Hennie)

    2016-01-01

This research describes a general method to automatically clean organizational and business name variants within large databases, such as patent databases, bibliographic databases, databases in business information systems, or any other database containing organisational name

  14. Avibase – a database system for managing and organizing taxonomic concepts

    Directory of Open Access Journals (Sweden)

    Denis Lepage

    2014-06-01

Scientific names of biological entities offer an imperfect resolution of the concepts that they are intended to represent. Often they are labels applied to entities ranging from entire populations to individual specimens representing those populations, even though such names only unambiguously identify the type specimen to which they were originally attached. Thus the real-life referents of names are constantly changing as biological circumscriptions are redefined and thereby alter the sets of individuals bearing those names. This problem is compounded by other characteristics of names that make them ambiguous identifiers of biological concepts, including emendations, homonymy and synonymy. Taxonomic concepts have been proposed as a way to address issues related to scientific names, but they have yet to receive broad recognition or implementation. Some efforts have been made towards building systems that address these issues by cataloguing and organizing taxonomic concepts, but most are still in conceptual or proof-of-concept stage. We present the on-line database Avibase as one possible approach to organizing taxonomic concepts. Avibase has been successfully used to describe and organize 844,000 species-level and 705,000 subspecies-level taxonomic concepts across every major bird taxonomic checklist of the last 125 years. The use of taxonomic concepts in place of scientific names, coupled with efficient resolution services, is a major step toward addressing some of the main deficiencies in the current practices of scientific name dissemination and use.
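The core data-modeling idea, that the same name string resolves to different circumscriptions depending on the checklist it appears in, can be sketched with a toy mapping. All identifiers and checklist labels below are invented for illustration; this is not Avibase's schema or API:

```python
# Hypothetical mapping: a taxonomic concept is a (name, checklist) pair,
# keyed to a stable concept identifier rather than to the bare name.
concepts = {
    ("Larus argentatus", "Checklist-1990"): "concept:A1",  # broad circumscription
    ("Larus argentatus", "Checklist-2015"): "concept:A2",  # narrower, after a split
}

def resolve(name, checklist):
    """Resolve a (name, checklist) pair to a taxonomic concept, if known."""
    return concepts.get((name, checklist))

print(resolve("Larus argentatus", "Checklist-1990"))  # concept:A1
print(resolve("Larus argentatus", "Checklist-2015"))  # concept:A2 - same name, different concept
```

The point of the indirection is that downstream data (observations, specimens) can cite the concept identifier, which stays stable even as the name's circumscription changes between checklists.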

  15. Kidney transplantation after previous liver transplantation: analysis of the organ procurement transplant network database.

    Science.gov (United States)

    Gonwa, Thomas A; McBride, Maureen A; Mai, Martin L; Wadei, Hani M

    2011-07-15

Patients after liver transplant have a high incidence of chronic kidney disease and end-stage renal disease (ESRD). We investigated kidney transplantation after liver transplantation using the Organ Procurement Transplant Network database. The Organ Procurement Transplant Network database was queried for patients who received kidney transplantation after previous liver transplantation. These patients were compared with patients who received primary kidney transplantation alone during the same time period. Between 1997 and 2008, 157,086 primary kidney transplants were performed. Of these, 680 deceased donor kidney transplants and 410 living donor kidney transplants were performed in previous recipients of liver transplants. The number of kidney-after-liver transplants performed each year has increased from 37 per year to 124 per year in 2008. The time from liver transplant to kidney transplant increased from 8.2 to 9.0 years for living donor transplants and from 5.4 to 9.6 years for deceased donor transplants. The 1-, 3-, and 5-year actuarial graft survival of both living donor and deceased donor kidney-after-liver transplants is lower than that of kidney-transplant-alone patients. However, the death-censored graft survivals are equal. Patient survival is also lower, but is similar to what would be expected in liver transplant recipients who did not have ESRD. In 2008, kidney-after-liver transplantation represented 0.9% of the total kidney-alone transplants performed in the United States. Kidney transplantation is an appropriate therapy for selected patients who develop ESRD after liver transplantation.

  16. A database for estimating organ dose for coronary angiography and brain perfusion CT scans for arbitrary spectra and angular tube current modulation

    Energy Technology Data Exchange (ETDEWEB)

    Rupcich, Franco; Badal, Andreu; Kyprianou, Iacovos; Schmidt, Taly Gilat [Department of Biomedical Engineering, Marquette University, Milwaukee, Wisconsin 53233 (United States); Division of Imaging and Applied Mathematics (OSEL/CDRH), US Food and Drug Administration, Silver Spring, Maryland 20905 (United States); Department of Biomedical Engineering, Marquette University, Milwaukee, Wisconsin 53233 (United States)

    2012-09-15

Purpose: The purpose of this study was to develop a database for estimating organ dose in a voxelized patient model for coronary angiography and brain perfusion CT acquisitions with any spectra and angular tube current modulation setting. The database enables organ dose estimation for existing and novel acquisition techniques without requiring Monte Carlo simulations. Methods: The study simulated transport of monoenergetic photons between 5 and 150 keV for 1000 projections over 360° through anthropomorphic voxelized female chest and head (0° and 30° tilt) phantoms and standard head and body CTDI dosimetry cylinders. The simulations resulted in tables of normalized dose deposition for several radiosensitive organs quantifying the organ dose per emitted photon for each incident photon energy and projection angle for coronary angiography and brain perfusion acquisitions. The values in a table can be multiplied by an incident spectrum and number of photons at each projection angle and then summed across all energies and angles to estimate total organ dose. Scanner-specific organ dose may be approximated by normalizing the database-estimated organ dose by the database-estimated CTDIvol and multiplying by a physical CTDIvol measurement. Two examples are provided demonstrating how to use the tables to estimate relative organ dose. In the first, the change in breast and lung dose during coronary angiography CT scans is calculated for reduced kVp, angular tube current modulation, and partial angle scanning protocols relative to a reference protocol. In the second example, the change in dose to the eye lens is calculated for a brain perfusion CT acquisition in which the gantry is tilted 30° relative to a nontilted scan. Results: Our database provides tables of normalized dose deposition for several radiosensitive organs irradiated during coronary angiography and brain perfusion CT scans. Validation results indicate

  17. A database for estimating organ dose for coronary angiography and brain perfusion CT scans for arbitrary spectra and angular tube current modulation.

    Science.gov (United States)

    Rupcich, Franco; Badal, Andreu; Kyprianou, Iacovos; Schmidt, Taly Gilat

    2012-09-01

    The purpose of this study was to develop a database for estimating organ dose in a voxelized patient model for coronary angiography and brain perfusion CT acquisitions with any spectra and angular tube current modulation setting. The database enables organ dose estimation for existing and novel acquisition techniques without requiring Monte Carlo simulations. The study simulated transport of monoenergetic photons between 5 and 150 keV for 1000 projections over 360° through anthropomorphic voxelized female chest and head (0° and 30° tilt) phantoms and standard head and body CTDI dosimetry cylinders. The simulations resulted in tables of normalized dose deposition for several radiosensitive organs quantifying the organ dose per emitted photon for each incident photon energy and projection angle for coronary angiography and brain perfusion acquisitions. The values in a table can be multiplied by an incident spectrum and number of photons at each projection angle and then summed across all energies and angles to estimate total organ dose. Scanner-specific organ dose may be approximated by normalizing the database-estimated organ dose by the database-estimated CTDI(vol) and multiplying by a physical CTDI(vol) measurement. Two examples are provided demonstrating how to use the tables to estimate relative organ dose. In the first, the change in breast and lung dose during coronary angiography CT scans is calculated for reduced kVp, angular tube current modulation, and partial angle scanning protocols relative to a reference protocol. In the second example, the change in dose to the eye lens is calculated for a brain perfusion CT acquisition in which the gantry is tilted 30° relative to a nontilted scan. Our database provides tables of normalized dose deposition for several radiosensitive organs irradiated during coronary angiography and brain perfusion CT scans. Validation results indicate total organ doses calculated using our database are within 1% of those
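The table-lookup arithmetic described in this record, multiplying a precomputed normalized-dose table by the incident spectrum and the per-projection photon counts, then summing over all energies and angles, can be sketched as follows. Array shapes and values are invented for illustration; a real table would come from the Monte Carlo simulations the authors describe:

```python
import numpy as np

rng = np.random.default_rng(0)
n_energies, n_projections = 146, 1000        # e.g. 5-150 keV bins, 1000 projections

# D[e, p]: normalized organ dose per emitted photon at energy e, projection p
D = rng.random((n_energies, n_projections))
spectrum = rng.random(n_energies)            # relative photon count per energy bin
tube_current = rng.random(n_projections)     # angular tube current modulation weights

# Photons emitted at energy e and projection p: spectrum[e] * tube_current[p]
photons = np.outer(spectrum, tube_current)

# Total organ dose: weight the table by the photon counts, sum over both axes
organ_dose = np.sum(D * photons)
print(f"estimated organ dose: {organ_dose:.4f} (arbitrary units)")
```

Scanner-specific scaling would then divide this estimate by the database-derived CTDIvol and multiply by a measured CTDIvol, as the abstract outlines.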

  18. A database for estimating organ dose for coronary angiography and brain perfusion CT scans for arbitrary spectra and angular tube current modulation

    International Nuclear Information System (INIS)

    Rupcich, Franco; Badal, Andreu; Kyprianou, Iacovos; Schmidt, Taly Gilat

    2012-01-01

Purpose: The purpose of this study was to develop a database for estimating organ dose in a voxelized patient model for coronary angiography and brain perfusion CT acquisitions with any spectra and angular tube current modulation setting. The database enables organ dose estimation for existing and novel acquisition techniques without requiring Monte Carlo simulations. Methods: The study simulated transport of monoenergetic photons between 5 and 150 keV for 1000 projections over 360° through anthropomorphic voxelized female chest and head (0° and 30° tilt) phantoms and standard head and body CTDI dosimetry cylinders. The simulations resulted in tables of normalized dose deposition for several radiosensitive organs quantifying the organ dose per emitted photon for each incident photon energy and projection angle for coronary angiography and brain perfusion acquisitions. The values in a table can be multiplied by an incident spectrum and number of photons at each projection angle and then summed across all energies and angles to estimate total organ dose. Scanner-specific organ dose may be approximated by normalizing the database-estimated organ dose by the database-estimated CTDIvol and multiplying by a physical CTDIvol measurement. Two examples are provided demonstrating how to use the tables to estimate relative organ dose. In the first, the change in breast and lung dose during coronary angiography CT scans is calculated for reduced kVp, angular tube current modulation, and partial angle scanning protocols relative to a reference protocol. In the second example, the change in dose to the eye lens is calculated for a brain perfusion CT acquisition in which the gantry is tilted 30° relative to a nontilted scan. Results: Our database provides tables of normalized dose deposition for several radiosensitive organs irradiated during coronary angiography and brain perfusion CT scans. Validation results indicate total organ doses calculated using our database are

  19. Database Administrator

    Science.gov (United States)

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  20. Solid Waste Projection Model: Database (Version 1.4). Technical reference manual

    Energy Technology Data Exchange (ETDEWEB)

    Blackburn, C.; Cillan, T.

    1993-09-01

The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC). The SWPM system provides a modeling and analysis environment that supports decisions in the process of evaluating various solid waste management alternatives. This document, one of a series describing the SWPM system, contains detailed information regarding the software and data structures utilized in developing the SWPM Version 1.4 Database. This document is intended for use by experienced database specialists and supports database maintenance, utility development, and database enhancement. Those interested in using the SWPM database should refer to the SWPM Database User's Guide. This document is available from the PNL Task M Project Manager (D. L. Stiles, 509-372-4358), the PNL Task L Project Manager (L. L. Armacost, 509-372-4304), the WHC Restoration Projects Section Manager (509-372-1443), or the WHC Waste Characterization Manager (509-372-1193).

  1. Modeling Spatial Data within Object Relational-Databases

    Directory of Open Access Journals (Sweden)

    Iuliana BOTHA

    2011-03-01

Spatial data can refer to elements that help place a certain object in a certain area. These elements are latitude, longitude, points, geometric figures represented by points, etc. However, when translating these elements into data that can be stored in a computer, it all comes down to numbers. The interesting part that requires attention is how to store them in order to obtain fast and varied spatial queries. This is where the DBMS (Database Management System) containing the database comes in. In this paper, we analyze and compare two object-relational DBMSs that work with spatial data: Oracle and PostgreSQL.
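The storage question the article raises, spatial objects reducing to numbers that the DBMS must index and query, can be illustrated minimally. Real object-relational systems (Oracle Spatial, PostGIS) add dedicated geometry types and R-tree indexes; this sketch fakes a point table and a bounding-box query with plain SQLite, and the table name and coordinates are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE poi (name TEXT, lat REAL, lon REAL)")
conn.executemany("INSERT INTO poi VALUES (?, ?, ?)", [
    ("station", 45.75, 21.23),
    ("museum",  45.76, 21.22),
    ("airport", 45.81, 21.32),
])

# Naive spatial query: all points inside a latitude/longitude rectangle.
rows = conn.execute(
    "SELECT name FROM poi WHERE lat BETWEEN ? AND ? AND lon BETWEEN ? AND ?",
    (45.74, 45.77, 21.20, 21.25),
).fetchall()
print([r[0] for r in rows])  # station and museum fall inside the box
```

The range predicate works, but without a spatial index every query scans the whole table; the dedicated geometry types and indexes the paper compares exist precisely to avoid that.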

  2. Organization Development: Strategies and Models.

    Science.gov (United States)

    Beckhard, Richard

    This book, written for managers, specialists, and students of management, is based largely on the author's experience in helping organization leaders with planned-change efforts, and on related experience of colleagues in the field. Chapter 1 presents the background and causes for the increased concern with organization development and planned…

  3. The commercial use of segmentation and predictive modeling techniques for database marketing in the Netherlands

    NARCIS (Netherlands)

    Verhoef, PC; Spring, PN; Hoekstra, JC; Leeflang, PSH

    Although the application of segmentation and predictive modeling is an important topic in the database marketing (DBM) literature, no study has yet investigated the extent of adoption of these techniques. We present the results of a Dutch survey involving 228 database marketing companies. We find

  4. Organization of central database for implementation of ionizing radiation protection in the Republic of Croatia

    International Nuclear Information System (INIS)

    Kubelka, D.; Svilicic, N.

    2000-01-01

The paper is intended to give an overview of the situation in the Republic of Croatia resulting from the passing of the new ionizing radiation protection law. The organization of data collection and the structure of record keeping will be highlighted in particular, as well as data exchange between the individual services involved in ionizing radiation protection. The Radiation Protection Act has been prepared in compliance with the international standards and Croatian regulations governing the ionizing radiation protection field. Its enforcement shall probably commence in October 1999, when the necessary bylaws regulating in detail numerous specific and technical issues of particular importance for ionizing radiation protection implementation are expected to be adopted. Within the Croatian Government, the Ministry of Health is in charge of ionizing radiation protection. Such competence is traditional in our country and common throughout the world. This Ministry has authorized three institutions to carry out technical tasks related to radiation protection, such as radiation source inspections and personal dosimetry. Such distribution of work demands coordination of all the involved institutions, control of their work, and record keeping. The Croatian Radiation Protection Institute has been entrusted to coordinate the work of these institutions, control their activities, and set up the central national registry of radiation sources and workers, as well as of doses received by staff during their work. Since the Croatian Radiation Protection Institute is a newly established institution, we could freely determine our operational framework. Due to its publicly accessible source code and wide base of users and developers, the best prospects for stability and long-term accessibility are offered by the Linux operating system.
For the database development, Oracle RDBMS was used, partly because it is a leading manufacturer of database management systems, and partly because our staff is very familiar

  5. Database principles programming performance

    CERN Document Server

    O'Neil, Patrick

    2014-01-01

    Database: Principles, Programming, Performance provides an introduction to the fundamental principles of database systems. This book focuses on database programming and the relationships between principles, programming, and performance.Organized into 10 chapters, this book begins with an overview of database design principles and presents a comprehensive introduction to the concepts used by a DBA. This text then provides grounding in many abstract concepts of the relational model. Other chapters introduce SQL, describing its capabilities and covering the statements and functions of the programmi

  6. Distribution of the Object Oriented Databases. A Viewpoint of the MVDB Model's Methodology and Architecture

    Directory of Open Access Journals (Sweden)

    Marian Pompiliu CRISTESCU

    2008-01-01

    Full Text Available In databases, much work has been done towards extending models with advanced tools such as view technology, schema evolution support, multiple classification, role modeling and viewpoints. Over the past years, most of the research dealing with multiple object representation and evolution has proposed to enrich the monolithic vision of the classical object approach, in which an object belongs to one class hierarchy. In particular, the integration of the viewpoint mechanism into the conventional object-oriented data model gives it flexibility and improves the modeling power of objects. The viewpoint paradigm refers to the multiple descriptions, the distribution, and the evolution of objects. It can also be an undeniable contribution to the distributed design of complex databases. The motivation of this paper is to define an object data model integrating viewpoints in databases and to present a federated database architecture integrating multiple viewpoint sources following a local-as-extended-view data integration approach.
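
    As a rough illustration of the viewpoint idea described in this record, the sketch below lets one object carry several coexisting descriptions instead of belonging to a single class hierarchy. The class and attribute names are hypothetical, not the MVDB model's actual interface.

```python
# Hypothetical sketch (not the MVDB model's actual API): one object carries
# several coexisting descriptions ("viewpoints") instead of belonging to a
# single class hierarchy.
class MultiViewObject:
    def __init__(self, oid):
        self.oid = oid
        self.views = {}                 # viewpoint name -> attribute dict

    def add_view(self, name, **attrs):
        self.views[name] = attrs        # each viewpoint describes the same object

    def as_seen_by(self, name):
        return self.views.get(name, {})

person = MultiViewObject("p1")
person.add_view("employee", salary=50000, dept="R&D")   # one description...
person.add_view("patient", blood_type="O+")             # ...and another, coexisting
print(person.as_seen_by("employee"))  # {'salary': 50000, 'dept': 'R&D'}
```

    A federated architecture would then expose each source's viewpoint as a partial description of the same underlying object.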

  7. Air Quality Modelling and the National Emission Database

    DEFF Research Database (Denmark)

    Jensen, S. S.

    The project focuses on institutional strengthening to enable national air emission inventories based on the CORINAIR methodology. The present report describes the link between emission inventories and air quality modelling to ensure that the new national air emission...... inventory is able to take into account the data requirements of air quality models...

  8. Data-Based Mechanistic Modeling of Flow-Concentration Dynamics ...

    African Journals Online (AJOL)

    Journal of Civil Engineering Research and Practice ... The resulting model showed that it is possible to use the DBM modelling approach to address the problem of representing the potential lag between polluting activity and its effect as well as provide more salient information about the system dynamics. This kind of ...

  9. Parameters for Organism Grouping - Gclust Server | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Gclust Server: Parameters for Organism Grouping. Data name: Parameters for Organism Grouping. For each organism group, a threshold is defined for the ratio of organism species in the group that must show homology when a cluster is allocated to that group. For example, with a designated value of 0.5, a cluster is determined as belonging to the Plants group if the sequences of four or more organism species out of the seven species in this organism group exist in the cluster. Data file: pat_def1. Field 1: group number; Field 2: designated value for allocation to organism group; Field 3: group n
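
    The allocation rule in this record (a cluster joins a group once enough of that group's species appear in it) can be sketched as follows; the function and variable names are illustrative and not part of Gclust itself.

```python
# Sketch of the grouping rule (names illustrative, not Gclust's own code):
# a cluster is allocated to an organism group when the fraction of that
# group's species present in the cluster reaches the group's designated value.
def assign_groups(cluster_species, groups, designated):
    """cluster_species: species IDs whose sequences occur in the cluster.
    groups: group name -> set of species IDs in that group.
    designated: group name -> required fraction (e.g. 0.5)."""
    assigned = []
    for name, members in groups.items():
        if len(cluster_species & members) / len(members) >= designated[name]:
            assigned.append(name)
    return assigned

# Mirroring the record: 7 plant species with a designated value of 0.5,
# so sequences from 4 or more species place the cluster in the Plants group.
plants = {f"plant{i}" for i in range(1, 8)}
cluster = {"plant1", "plant2", "plant3", "plant4", "fungus1"}
print(assign_groups(cluster, {"Plants": plants}, {"Plants": 0.5}))  # ['Plants']
```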

  10. Model Organisms Fact Sheet: Using Model Organisms to Study Health and Disease

    Science.gov (United States)

    ... research organisms to explore the basic biology and chemistry of life. Scientists decide which organism to study ... and much is already known about their genetic makeup . For these and other reasons, studying model organisms ...

  11. Modelling organic particles in the atmosphere

    International Nuclear Information System (INIS)

    Couvidat, Florian

    2012-01-01

    Organic aerosol formation in the atmosphere is investigated via the development of a new model named H2O (Hydrophilic/Hydrophobic Organics). First, a parameterization is developed to take into account secondary organic aerosol formation from isoprene oxidation. It takes into account the effect of nitrogen oxides on organic aerosol formation and the hydrophilic properties of the aerosols. This parameterization is then implemented in H2O along with some other developments, and the results of the model are compared to organic carbon measurements over Europe. Model performance is greatly improved by taking into account emissions of primary semi-volatile compounds, which can form secondary organic aerosols after oxidation or can condense when temperature decreases. If those emissions are not taken into account, a significant underestimation of organic aerosol concentrations occurs in winter. The formation of organic aerosols over an urban area was also studied by simulating organic aerosol concentrations over the Paris area during the summer campaign of Megapoli (July 2009). H2O gives satisfactory results over the Paris area, although a peak of organic aerosol concentrations from traffic, which does not appear in the measurements, appears in the model simulation during rush hours. It could be due to an underestimation of the volatility of organic aerosols. It is also possible that primary and secondary organic compounds do not mix well together and that primary semi-volatile compounds do not condense on an organic aerosol that is mostly secondary and highly oxidized. Finally, the impact of aqueous-phase chemistry was studied. The mechanism for the formation of secondary organic aerosol includes in-cloud oxidation of glyoxal, methylglyoxal, methacrolein and methyl vinyl ketone, formation of methyltetrols in the aqueous phase of particles and cloud droplets, and the in-cloud aging of organic aerosols. 
The impact of wet deposition is also studied to better estimate the

  12. Solid Waste Projection Model: Database user's guide (Version 1.3)

    International Nuclear Information System (INIS)

    Blackburn, C.L.

    1991-11-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC) specifically to address Hanford solid waste management issues. This document is one of a set of documents supporting the SWPM system and providing instructions in the use and maintenance of SWPM components. This manual contains instructions for preparing to use Version 1.3 of the SWPM database, for entering and maintaining data, and for performing routine database functions. This document supports only those operations which are specific to SWPM database menus and functions and does not provide instruction in the use of Paradox, the database management system in which the SWPM database is established

  13. An online database for informing ecological network models: http://kelpforest.ucsc.edu

    Science.gov (United States)

    Beas-Luna, Rodrigo; Tinker, M. Tim; Novak, Mark; Carr, Mark H.; Black, August; Caselle, Jennifer E.; Hoban, Michael; Malone, Dan; Iles, Alison C.

    2014-01-01

    Ecological network models and analyses are recognized as valuable tools for understanding the dynamics and resiliency of ecosystems, and for informing ecosystem-based approaches to management. However, few databases exist that can provide the life history, demographic and species interaction information necessary to parameterize ecological network models. Faced with the difficulty of synthesizing the information required to construct models for kelp forest ecosystems along the West Coast of North America, we developed an online database (http://kelpforest.ucsc.edu/) to facilitate the collation and dissemination of such information. Many of the database's attributes are novel, yet the structure is applicable and adaptable to other ecosystem modeling efforts. Information for each taxonomic unit includes stage-specific life history, demography, and body-size allometries. Species interactions include trophic, competitive, facilitative, and parasitic forms. Each data entry is temporally and spatially explicit. The online data entry interface allows researchers anywhere to contribute and access information. Quality control is facilitated by attributing each entry to unique contributor identities and source citations. The database has proven useful as an archive of species and ecosystem-specific information in the development of several ecological network models, for informing management actions, and for education purposes (e.g., undergraduate and graduate training). To facilitate adaptation of the database by other researchers for other ecosystems, the code and technical details on how to customize this database and apply it to other ecosystems are freely available and located at the following link (https://github.com/kelpforest-cameo/databaseui).
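
    A hypothetical sketch of the kind of species-interaction entry the database holds: temporally and spatially explicit, and attributed to a contributor identity and source citation for quality control. Field names and example values are illustrative, not the database's actual schema.

```python
# Hypothetical species-interaction record (field names illustrative only).
from dataclasses import dataclass

@dataclass
class Interaction:
    source: str        # taxonomic unit exerting the interaction
    target: str        # taxonomic unit receiving it
    kind: str          # trophic | competitive | facilitative | parasitic
    site: str          # spatially explicit
    year: int          # temporally explicit
    contributor: str   # unique contributor identity (quality control)
    citation: str      # source citation (quality control)

entries = [
    Interaction("Enhydra lutris", "Strongylocentrotus purpuratus",
                "trophic", "Monterey Bay", 2012, "contributor1", "citation A"),
    Interaction("Macrocystis pyrifera", "Pterygophora californica",
                "competitive", "Point Lobos", 2011, "contributor2", "citation B"),
]

# e.g. extract the trophic links needed to parameterize a network model
trophic = [(e.source, e.target) for e in entries if e.kind == "trophic"]
print(trophic)  # [('Enhydra lutris', 'Strongylocentrotus purpuratus')]
```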

  14. Approach for ontological modeling of database schema for the generation of semantic knowledge on the web

    Science.gov (United States)

    Rozeva, Anna

    2015-11-01

    Currently there is a large quantity of content on web pages that is generated from relational databases. Conceptual domain models provide for the integration of heterogeneous content on the semantic level. The use of an ontology as the conceptual model of a relational data source makes it available to web agents and services and provides for the employment of ontological techniques for data access, navigation and reasoning. The achievement of interoperability between relational databases and ontologies enriches the web with semantic knowledge. The establishment of a semantic database conceptual model based on ontology facilitates the development of data integration systems that use the ontology as a unified global view. An approach for the generation of an ontologically based conceptual model is presented. The ontology representing the database schema is obtained by matching schema elements to ontology concepts. An algorithm for the matching process is designed. An infrastructure for the inclusion of mediation between database and ontology, bridging legacy data with formal semantic meaning, is presented. An implementation of the knowledge modeling approach on a sample database is performed.
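
    The matching step described here can be illustrated with a minimal name-similarity matcher; real schema-to-ontology matching systems combine lexical, structural and instance-based matchers, so `difflib` stands in here purely for illustration and is not the paper's algorithm.

```python
# Minimal sketch: match schema elements (table/column names) to ontology
# concepts by normalized name similarity. Illustrative only.
import difflib

def match_schema_to_ontology(schema_elements, ontology_concepts, cutoff=0.6):
    lowered = [c.lower() for c in ontology_concepts]
    mapping = {}
    for element in schema_elements:
        hits = difflib.get_close_matches(element.lower(), lowered,
                                         n=1, cutoff=cutoff)
        if hits:
            # recover the original concept spelling
            mapping[element] = ontology_concepts[lowered.index(hits[0])]
    return mapping

schema = ["customer", "order_item", "addr"]
concepts = ["Customer", "OrderItem", "Address", "Invoice"]
print(match_schema_to_ontology(schema, concepts))
# {'customer': 'Customer', 'order_item': 'OrderItem', 'addr': 'Address'}
```

    Unmatched schema elements (here, nothing maps to "Invoice") are where a mediation layer or manual curation would step in.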

  15. An Online Database for Informing Ecological Network Models: http://kelpforest.ucsc.edu

    Science.gov (United States)

    Beas-Luna, Rodrigo; Novak, Mark; Carr, Mark H.; Tinker, Martin T.; Black, August; Caselle, Jennifer E.; Hoban, Michael; Malone, Dan; Iles, Alison

    2014-01-01

    Ecological network models and analyses are recognized as valuable tools for understanding the dynamics and resiliency of ecosystems, and for informing ecosystem-based approaches to management. However, few databases exist that can provide the life history, demographic and species interaction information necessary to parameterize ecological network models. Faced with the difficulty of synthesizing the information required to construct models for kelp forest ecosystems along the West Coast of North America, we developed an online database (http://kelpforest.ucsc.edu/) to facilitate the collation and dissemination of such information. Many of the database's attributes are novel, yet the structure is applicable and adaptable to other ecosystem modeling efforts. Information for each taxonomic unit includes stage-specific life history, demography, and body-size allometries. Species interactions include trophic, competitive, facilitative, and parasitic forms. Each data entry is temporally and spatially explicit. The online data entry interface allows researchers anywhere to contribute and access information. Quality control is facilitated by attributing each entry to unique contributor identities and source citations. The database has proven useful as an archive of species and ecosystem-specific information in the development of several ecological network models, for informing management actions, and for education purposes (e.g., undergraduate and graduate training). To facilitate adaptation of the database by other researchers for other ecosystems, the code and technical details on how to customize this database and apply it to other ecosystems are freely available and located at the following link (https://github.com/kelpforest-cameo/databaseui). PMID:25343723

  16. An online database for informing ecological network models: http://kelpforest.ucsc.edu.

    Science.gov (United States)

    Beas-Luna, Rodrigo; Novak, Mark; Carr, Mark H; Tinker, Martin T; Black, August; Caselle, Jennifer E; Hoban, Michael; Malone, Dan; Iles, Alison

    2014-01-01

    Ecological network models and analyses are recognized as valuable tools for understanding the dynamics and resiliency of ecosystems, and for informing ecosystem-based approaches to management. However, few databases exist that can provide the life history, demographic and species interaction information necessary to parameterize ecological network models. Faced with the difficulty of synthesizing the information required to construct models for kelp forest ecosystems along the West Coast of North America, we developed an online database (http://kelpforest.ucsc.edu/) to facilitate the collation and dissemination of such information. Many of the database's attributes are novel, yet the structure is applicable and adaptable to other ecosystem modeling efforts. Information for each taxonomic unit includes stage-specific life history, demography, and body-size allometries. Species interactions include trophic, competitive, facilitative, and parasitic forms. Each data entry is temporally and spatially explicit. The online data entry interface allows researchers anywhere to contribute and access information. Quality control is facilitated by attributing each entry to unique contributor identities and source citations. The database has proven useful as an archive of species and ecosystem-specific information in the development of several ecological network models, for informing management actions, and for education purposes (e.g., undergraduate and graduate training). To facilitate adaptation of the database by other researchers for other ecosystems, the code and technical details on how to customize this database and apply it to other ecosystems are freely available and located at the following link (https://github.com/kelpforest-cameo/databaseui).

  17. Completion of autobuilt protein models using a database of protein fragments

    International Nuclear Information System (INIS)

    Cowtan, Kevin

    2012-01-01

    Two developments in the process of automated protein model building in the Buccaneer software are described: the use of a database of protein fragments in improving the model completeness and the assembly of disconnected chain fragments into complete molecules. Two developments in the process of automated protein model building in the Buccaneer software are presented. A general-purpose library for protein fragments of arbitrary size is described, with a highly optimized search method allowing the use of a larger database than in previous work. The problem of assembling an autobuilt model into complete chains is discussed. This involves the assembly of disconnected chain fragments into complete molecules and the use of the database of protein fragments in improving the model completeness. Assembly of fragments into molecules is a standard step in existing model-building software, but the methods have not received detailed discussion in the literature

  18. Combining computational models, semantic annotations and simulation experiments in a graph database

    Science.gov (United States)

    Henkel, Ron; Wolkenhauer, Olaf; Waltemath, Dagmar

    2015-01-01

    Model repositories such as the BioModels Database, the CellML Model Repository or JWS Online are frequently accessed to retrieve computational models of biological systems. However, their storage concepts support only restricted types of queries and not all data inside the repositories can be retrieved. In this article we present a storage concept that meets this challenge. It is grounded in a graph database, reflects the models' structure, incorporates semantic annotations and simulation descriptions and ultimately connects different types of model-related data. The connections between heterogeneous model-related data and bio-ontologies enable efficient search via biological facts and grant access to new model features. The introduced concept notably improves the access of computational models and associated simulations in a model repository. This has positive effects on tasks such as model search, retrieval, ranking, matching and filtering. Furthermore, our work for the first time enables CellML- and Systems Biology Markup Language-encoded models to be effectively maintained in one database. We show how these models can be linked via annotations and queried. Database URL: https://sems.uni-rostock.de/projects/masymos/ PMID:25754863
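
    The storage idea can be sketched as a small property graph, with plain dictionaries standing in for a real graph database such as Neo4j. Node identifiers, relation names and the example ontology term are illustrative, not the paper's actual schema: a search starts from a biological fact (an ontology term) and walks the graph to the models and simulations that reference it.

```python
# Toy property graph: models, model parts, semantic annotations and
# simulation descriptions are nodes; edges connect them.
nodes = {
    "model:BIOMD001": {"kind": "model", "format": "SBML"},
    "species:glucose": {"kind": "entity"},
    "CHEBI:17234": {"kind": "ontology_term"},   # glucose in ChEBI
    "sim:exp1": {"kind": "simulation"},
}
edges = [
    ("model:BIOMD001", "hasPart", "species:glucose"),
    ("species:glucose", "isVersionOf", "CHEBI:17234"),
    ("sim:exp1", "simulates", "model:BIOMD001"),
]

def reachable(start, target):
    # breadth-first walk over outgoing edges
    seen, frontier = {start}, [start]
    while frontier:
        frontier = [d for s, _, d in edges if s in frontier and d not in seen]
        seen.update(frontier)
    return target in seen

# query: which models mention the ontology term CHEBI:17234?
models = [n for n, props in nodes.items()
          if props["kind"] == "model" and reachable(n, "CHEBI:17234")]
print(models)  # ['model:BIOMD001']
```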

  19. Table of Cluster and Organism Species Number - Gclust Server | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Gclust Server: Table of Cluster and Organism Species Number. Data name: Table of Cluster and Organism Species Number. For each cluster, the table gives the representative sequence ID of the cluster, its length, the number of sequences contained in the cluster, the organism species, and the number of sequences belonging to the cluster for each of the 95 organism species.

  20. Database specification for the Worldwide Port System (WPS) Regional Integrated Cargo Database (ICDB)

    Energy Technology Data Exchange (ETDEWEB)

    Faby, E.Z.; Fluker, J.; Hancock, B.R.; Grubb, J.W.; Russell, D.L. [Univ. of Tennessee, Knoxville, TN (United States); Loftis, J.P.; Shipe, P.C.; Truett, L.F. [Oak Ridge National Lab., TN (United States)

    1994-03-01

    This Database Specification for the Worldwide Port System (WPS) Regional Integrated Cargo Database (ICDB) describes the database organization and storage allocation, provides the detailed data model of the logical and physical designs, and provides information for the construction of parts of the database such as tables, data elements, and associated dictionaries and diagrams.

  1. Hydraulic fracture propagation modeling and data-based fracture identification

    Science.gov (United States)

    Zhou, Jing

    Successful shale gas and tight oil production is enabled by the engineering innovation of horizontal drilling and hydraulic fracturing. Hydraulically induced fractures will most likely deviate from the bi-wing planar pattern and generate complex fracture networks due to mechanical interactions and reservoir heterogeneity, both of which render the conventional fracture simulators insufficient to characterize the fractured reservoir. Moreover, in reservoirs with ultra-low permeability, the natural fractures are widely distributed, which will result in hydraulic fractures branching and merging at the interface and consequently lead to the creation of more complex fracture networks. Thus, developing a reliable hydraulic fracturing simulator, including both mechanical interaction and fluid flow, is critical in maximizing hydrocarbon recovery and optimizing fracture/well design and completion strategy in multistage horizontal wells. A novel fully coupled reservoir flow and geomechanics model based on the dual-lattice system is developed to simulate multiple nonplanar fractures' propagation in both homogeneous and heterogeneous reservoirs with or without pre-existing natural fractures. Initiation, growth, and coalescence of the microcracks will lead to the generation of macroscopic fractures, which is explicitly mimicked by failure and removal of bonds between particles from the discrete element network. This physics-based modeling approach leads to realistic fracture patterns without using the empirical rock failure and fracture propagation criteria required in conventional continuum methods. Based on this model, a sensitivity study is performed to investigate the effects of perforation spacing, in-situ stress anisotropy, rock properties (Young's modulus, Poisson's ratio, and compressive strength), fluid properties, and natural fracture properties on hydraulic fracture propagation. In addition, since reservoirs are buried thousands of feet below the surface, the

  2. Organ Impairment—Drug–Drug Interaction Database: A Tool for Evaluating the Impact of Renal or Hepatic Impairment and Pharmacologic Inhibition on the Systemic Exposure of Drugs

    Science.gov (United States)

    Yeung, CK; Yoshida, K; Kusama, M; Zhang, H; Ragueneau-Majlessi, I; Argon, S; Li, L; Chang, P; Le, CD; Zhao, P; Zhang, L; Sugiyama, Y; Huang, S-M

    2015-01-01

    The organ impairment and drug–drug interaction (OI-DDI) database is the first rigorously assembled database of pharmacokinetic drug exposure data from publicly available renal and hepatic impairment studies presented together with the maximum change in drug exposure from drug interaction inhibition studies. The database was used to conduct a systematic comparison of the effect of renal/hepatic impairment and pharmacologic inhibition on drug exposure. Additional applications are feasible with the public availability of this database. PMID:26380158

  3. ExtraTrain: a database of Extragenic regions and Transcriptional information in prokaryotic organisms

    Science.gov (United States)

    Pareja, Eduardo; Pareja-Tobes, Pablo; Manrique, Marina; Pareja-Tobes, Eduardo; Bonal, Javier; Tobes, Raquel

    2006-01-01

    Background Transcriptional regulation processes are the principal mechanisms of adaptation in prokaryotes. In these processes, the regulatory proteins and the regulatory DNA signals located in extragenic regions are the key elements involved. As all extragenic spaces are putative regulatory regions, ExtraTrain covers all extragenic regions of available genomes and regulatory proteins from bacteria and archaea included in the UniProt database. Description ExtraTrain provides integrated and easily manageable information for 679816 extragenic regions and for the genes delimiting each of them. In addition, ExtraTrain supplies Palinsight, a tool for exploring extragenic regions designed to detect and search for palindromic patterns. This interactive visual tool is totally integrated in the database, allowing the search for regulatory signals in user-defined sets of extragenic regions. The 26046 regulatory proteins included in ExtraTrain belong to the families AraC/XylS, ArsR, AsnC, Cold shock domain, CRP-FNR, DeoR, GntR, IclR, LacI, LuxR, LysR, MarR, MerR, NtrC/Fis, OmpR and TetR. The database follows the InterPro criteria to define these families. The information about regulators includes manually curated sets of references specifically associated to regulator entries. In order to achieve a sustainable and maintainable knowledge database ExtraTrain is a platform open to the contribution of knowledge by the scientific community providing a system for the incorporation of textual knowledge. Conclusion ExtraTrain is a new database for exploring Extragenic regions and Transcriptional information in bacteria and archaea. ExtraTrain database is available at . PMID:16539733
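
    Palinsight itself is not described in code here; the sketch below shows the generic idea of detecting reverse-complement (inverted-repeat) palindromes of a fixed length in a DNA sequence, the kind of pattern regulatory signals often form. All names are illustrative.

```python
# Detect reverse-complement palindromes: windows equal to their own
# reverse complement (e.g. restriction and regulatory binding sites).
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq: str) -> str:
    return seq.translate(COMP)[::-1]

def find_palindromes(seq: str, length: int = 6):
    """Yield (position, site) for every window equal to its reverse complement."""
    for i in range(len(seq) - length + 1):
        site = seq[i:i + length]
        if site == revcomp(site):
            yield i, site

# GAATTC (the EcoRI site) is a classic 6-bp palindrome
print(list(find_palindromes("TTGAATTCAG")))  # [(2, 'GAATTC')]
```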

  4. CONCEPTUAL DATA MODELING OF THE INTEGRATED DATABASE FOR THE RADIOACTIVE WASTE MANAGEMENT

    International Nuclear Information System (INIS)

    Park, H.S; Shon, J.S; Kim, K.J; Park, J.H; Hong, K.P; Park, S.H

    2003-01-01

    A study of a database system that can manage radioactive waste collectively on a network has been carried out. A conceptual data modeling that is based on the theory of information engineering (IE), which is the first step of the whole database development, has been studied to manage effectively information and data related to radioactive waste. In order to establish the scope of the database, user requirements and system configuration for radioactive waste management were analyzed. The major information extracted from user requirements are solid waste, liquid waste, gaseous waste, and waste related to spent fuel. The radioactive waste management system is planning to share information with associated companies

  5. The European fossil-fuelled power station database used in the SEI CASM model

    International Nuclear Information System (INIS)

    Bailey, P.

    1996-01-01

    The database contains details of power stations in Europe that burn fossil-fuels. All countries are covered from Ireland to the European region of Russia as far as the Urals. The following data are given for each station: Location (country and EMEP square), capacity (net MWe and boiler size), year of commissioning, and fuels burnt. A listing of the database is included in the report. The database is primarily used for estimation of emissions and abatement costs of sulfur and nitrogen oxides in the SEI acid rain model CASM. 24 refs, tabs
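
    The kind of emission estimate such a station database supports can be sketched as follows; all station names and numbers are hypothetical, and the calculation exploits the convenient fact that 1 g/MJ equals 1 kt/PJ, so no unit conversion is needed.

```python
# Illustrative annual SO2 estimate per station from fuel use, an emission
# factor and abatement efficiency (all numbers hypothetical).
stations = [
    # (name, fuel use in PJ/yr, emission factor in g SO2/MJ, abatement fraction)
    ("Station A", 60.0, 0.85, 0.0),   # unabated coal plant
    ("Station B", 45.0, 0.85, 0.90),  # same fuel, 90% flue-gas desulfurization
]

results = {}
for name, fuel_pj, factor, abated in stations:
    results[name] = fuel_pj * factor * (1 - abated)   # kt SO2 per year
    print(f"{name}: {results[name]:.1f} kt SO2/yr")
```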

  6. The European fossil-fuelled power station database used in the SEI CASM model

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, P. [comp.] [Stockholm Environment Inst. at York (United Kingdom)

    1996-06-01

    The database contains details of power stations in Europe that burn fossil-fuels. All countries are covered from Ireland to the European region of Russia as far as the Urals. The following data are given for each station: Location (country and EMEP square), capacity (net MWe and boiler size), year of commissioning, and fuels burnt. A listing of the database is included in the report. The database is primarily used for estimation of emissions and abatement costs of sulfur and nitrogen oxides in the SEI acid rain model CASM. 24 refs, tabs

  7. Content-based organization of the information space in multi-database networks

    NARCIS (Netherlands)

    Papazoglou, M.; Milliner, S.

    1998-01-01

    Abstract. Rapid growth in the volume of network-available data, together with complexity, diversity and terminological fluctuations at different data sources, renders network-accessible information increasingly difficult to locate and exploit. The situation is particularly cumbersome for users of multi-database systems who

  8. The EDEN-IW ontology model for sharing knowledge and water quality data between heterogenous databases

    DEFF Research Database (Denmark)

    Stjernholm, M.; Poslad, S.; Zuo, L.

    2004-01-01

    The Environmental Data Exchange Network for Inland Water (EDEN-IW) project's main aim is to develop a system for making disparate and heterogeneous databases of Inland Water quality more accessible to users. The core technology is based upon a combination of: ontological model to represent...... a Semantic Web based data model for IW; software agents as an infrastructure to share and reason about the IW semantic data model and XML to make the information accessible to Web portals and mainstream Web services. This presentation focuses on the Semantic Web or Ontological model. Currently, we have...... successfully demonstrated the use of our systems to semantically integrate two main database resources from IOW and NERI - these are available on-line. We are in the process of adding further databases and supporting a wider variety of user queries such as Decision Support System queries....

  9. Cardiac Electromechanical Models: From Cell to Organ

    Directory of Open Access Journals (Sweden)

    Natalia A Trayanova

    2011-08-01

    Full Text Available The heart is a multiphysics and multiscale system that has driven the development of the most sophisticated mathematical models at the frontiers of computational physiology and medicine. This review focuses on electromechanical (EM) models of the heart from the molecular level of myofilaments to anatomical models of the organ. Because of the coupling in terms of function and emergent behaviors at each level of biological hierarchy, separation of behaviors at a given scale is difficult. Here, a separation is drawn at the cell level so that the first half addresses subcellular/single cell models and the second half addresses organ models. At the subcellular level, myofilament models represent actin-myosin interaction and Ca-based activation. Myofilament models and their refinements represent an overview of the development in the field. The discussion of specific models emphasizes the roles of cooperative mechanisms and sarcomere length dependence of contraction force, considered the cellular basis of the Frank-Starling law. A model of electrophysiology and Ca handling can be coupled to a myofilament model to produce an EM cell model, and representative examples are summarized to provide an overview of the progression of the field. The second half of the review covers organ-level models that require solution of the electrical component as a reaction-diffusion system and the mechanical component, in which active tension generated by the myocytes produces deformation of the organ as described by the equations of continuum mechanics. As outlined in the review, different organ-level models have chosen to use different ionic and myofilament models depending on the specific application; this choice has been largely dictated by compromises between model complexity and computational tractability. The review also addresses application areas of EM models such as cardiac resynchronization therapy and the role of mechano-electric coupling in arrhythmias and
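
    The organ-level electrical component mentioned in this record is typically posed as a reaction-diffusion system coupled to cellular state equations. One common (monodomain) formulation, given here only as a sketch and not necessarily the form used by any specific model in the review, is

```latex
\beta C_m \frac{\partial V_m}{\partial t}
  = \nabla \cdot \left( \boldsymbol{\sigma}\, \nabla V_m \right)
    - \beta\, I_{\mathrm{ion}}(V_m, \mathbf{u}),
\qquad
\frac{d\mathbf{u}}{dt} = \mathbf{f}(V_m, \mathbf{u}),
```

    where V_m is the transmembrane potential, sigma the conductivity tensor, beta the membrane surface-to-volume ratio, C_m the membrane capacitance, and u the ionic state variables supplied by the chosen cell model; the active tension produced by the myofilament model then enters the continuum-mechanics equations as a stress term.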

  10. Project-matrix models of marketing organization

    Directory of Open Access Journals (Sweden)

    Gutić Dragutin

    2009-01-01

    Full Text Available Unlike the theory and practice of corporate organization, marketing organization has to this day not developed a comparable range of forms and content. It can safely be said that marketing organization in most of our companies, and in almost all its parts, lags noticeably behind corporate organization. Marketing managers have always been occupied by basic, narrow marketing activities such as: sales growth, market analysis, market growth and market share, marketing research, introduction of new products, modification of products, promotion, distribution etc. They rarely found it necessary to focus more closely on other aspects of marketing management, for example: marketing planning and marketing control, marketing organization and leading. This paper deals with aspects of project-matrix marketing organization management. Two-dimensional and more-dimensional models are presented. Among the two-dimensional, the following models are analyzed: Market management/products management model; Products management/management of product lifecycle phases on market model; Customers management/marketing functions management model; Demand management/marketing functions management model; Market positions management/marketing functions management model.

  11. Modeling Virtual Organization Architecture with the Virtual Organization Breeding Methodology

    Science.gov (United States)

    Paszkiewicz, Zbigniew; Picard, Willy

    While Enterprise Architecture Modeling (EAM) methodologies become more and more popular, an EAM methodology tailored to the needs of virtual organizations (VO) is still to be developed. Among the most popular EAM methodologies, TOGAF has been chosen as the basis for a new EAM methodology taking into account characteristics of VOs presented in this paper. In this new methodology, referred to as the Virtual Organization Breeding Methodology (VOBM), concepts developed within the ECOLEAD project, e.g. the concept of a Virtual Breeding Environment (VBE) or the VO creation schema, serve as fundamental elements for the development of VOBM. VOBM is a generic methodology that should be adapted to a given VBE. VOBM defines the structure of VBE and VO architectures in a service-oriented environment, as well as an architecture development method for virtual organizations (ADM4VO). Finally, a preliminary set of tools and methods for VOBM is given in this paper.

  12. Complex Systems and Self-organization Modelling

    CERN Document Server

    Bertelle, Cyrille; Kadri-Dahmani, Hakima

    2009-01-01

    The concern of this book is the use of emergent computing and self-organization modelling within various applications of complex systems. The authors focus both on innovative concepts and implementations for modelling self-organization and on the application domains in which they can be used efficiently. This book is the outcome of a workshop meeting within ESM 2006 (Eurosis), held in Toulouse, France in October 2006.

  13. Information structure design for databases a practical guide to data modelling

    CERN Document Server

    Mortimer, Andrew J

    2014-01-01

    Computer Weekly Professional Series: Information Structure Design for Databases: A Practical Guide to Data Modelling focuses on practical data modelling covering business and information systems. The publication first offers information on data and information, business analysis, and entity relationship model basics. Discussions cover degree of relationship symbols, relationship rules, membership markers, types of information systems, data driven systems, cost and value of information, importance of data modelling, and quality of information. The book then takes a look at entity relationship mode

  14. Transport and Environment Database System (TRENDS): Maritime Air Pollutant Emission Modelling

    DEFF Research Database (Denmark)

    Georgakaki, Aliki; Coffey, Robert; Lock, Grahm

    2005-01-01

    changes from findings reported in Methodologies for Estimating air pollutant Emissions from Transport (MEET). The database operates on statistical data provided by Eurostat, which describe vessel and freight movements from and towards EU 15 major ports. Data are at port to Maritime Coastal Area (MCA...) level, so a bottom-up approach is used. A port to MCA distance database has also been constructed for the purpose of the study. This was the first attempt to use Eurostat maritime statistics for emission modelling, and the problems encountered, since the statistical data collection was not undertaken... with a view to this purpose, are mentioned. Examples of the results obtained by the database are presented. These include detailed air pollutant emission calculations for bulk carriers entering the port of Helsinki, as an example of the database operation, and aggregate results for different types...
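The bottom-up calculation implied above, multiplying each reported vessel movement by a leg distance and per-distance fuel and emission factors, can be sketched as follows. Every number here is an invented placeholder, not a TRENDS or MEET value:

```python
# Bottom-up emission estimate for port-to-MCA vessel movements.
# All distances and factors below are illustrative placeholders only.

def movement_emissions(distance_km, fuel_kg_per_km, ef_g_per_kg_fuel):
    """Emissions (g) for one voyage leg: fuel burned times emission factor."""
    return distance_km * fuel_kg_per_km * ef_g_per_kg_fuel

movements = [
    # (distance_km, fuel_kg_per_km, NOx_g_per_kg_fuel) -- hypothetical
    (350.0, 45.0, 60.0),
    (120.0, 30.0, 60.0),
]

total_g = sum(movement_emissions(*m) for m in movements)
print(f"total NOx: {total_g / 1e6:.3f} t")
```

Aggregating such per-movement terms over all port-to-MCA pairs gives the kind of totals the database reports.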

  15. Database Description - RPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us RPD Database Description General information of database Database name RPD Alternative name Rice Proteome Database...titute of Crop Science, National Agriculture and Food Research Organization Setsuko Komatsu E-mail: Database... classification Proteomics Resources Plant databases - Rice Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database... description Rice Proteome Database contains information on protei...AGE) reference maps. Features and manner of utilization of database Proteins extracted from organs and subce

  16. A data model and database for high-resolution pathology analytical image informatics

    Directory of Open Access Journals (Sweden)

    Fusheng Wang

    2011-01-01

    Full Text Available Background: The systematic analysis of imaged pathology specimens often results in a vast amount of morphological information at both the cellular and sub-cellular scales. While microscopy scanners and computerized analysis are capable of capturing and analyzing data rapidly, microscopy image data remain underutilized in research and clinical settings. One major obstacle which tends to reduce wider adoption of these new technologies throughout the clinical and scientific communities is the challenge of managing, querying, and integrating the vast amounts of data resulting from the analysis of large digital pathology datasets. This paper presents a data model, which addresses these challenges, and demonstrates its implementation in a relational database system. Context: This paper describes a data model, referred to as Pathology Analytic Imaging Standards (PAIS), and a database implementation, which are designed to support the data management and query requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines on whole-slide images and tissue microarrays (TMAs). Aims: (1) Development of a data model capable of efficiently representing and storing virtual slide related image, annotation, markup, and feature information. (2) Development of a database, based on the data model, capable of supporting queries for data retrieval based on analysis and image metadata, queries for comparison of results from different analyses, and spatial queries on segmented regions, features, and classified objects. Settings and Design: The work described in this paper is motivated by the challenges associated with characterization of micro-scale features for comparative and correlative analyses involving whole-slide tissue images and TMAs. Technologies for digitizing tissues have advanced significantly in the past decade. Slide scanners are capable of producing high-magnification, high-resolution images from whole

  17. Product Licenses Database Application

    CERN Document Server

    Tonkovikj, Petar

    2016-01-01

    The goal of this project is to organize and centralize the data about software tools available to CERN employees, as well as to provide a system that simplifies the license management process by providing information about the available licenses and their expiry dates. The project development process consists of two steps: modeling the products (software tools), product licenses, legal agreements and other data related to these entities in a relational database, and developing the front-end user interface so that users can interact with the database. The result is an ASP.NET MVC web application with interactive views for displaying and managing the data in the underlying database.
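A relational model of the kind described, products joined to licenses carrying expiry dates, might be sketched as below. The schema, table names, and data are assumptions for illustration, not CERN's actual design:

```python
import sqlite3

# Minimal sketch of a product/license data model; names are invented.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE product (
    id      INTEGER PRIMARY KEY,
    name    TEXT NOT NULL
);
CREATE TABLE license (
    id          INTEGER PRIMARY KEY,
    product_id  INTEGER NOT NULL REFERENCES product(id),
    seats       INTEGER NOT NULL,
    expires_on  TEXT NOT NULL          -- ISO date string
);
""")
con.execute("INSERT INTO product (id, name) VALUES (1, 'AcmeCAD')")
con.execute("INSERT INTO license (product_id, seats, expires_on) "
            "VALUES (1, 25, '2026-06-30')")

# Licenses expiring before a cutoff date: the kind of query a front end
# would issue to warn about upcoming expiries.
rows = con.execute("""
    SELECT p.name, l.seats, l.expires_on
    FROM license l JOIN product p ON p.id = l.product_id
    WHERE l.expires_on < '2027-01-01'
""").fetchall()
print(rows)
```

Because ISO dates sort lexicographically, the string comparison on `expires_on` doubles as a date comparison.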

  18. High-Throughput Computational Screening of the Metal Organic Framework Database for CH4/H2 Separations.

    Science.gov (United States)

    Altintas, Cigdem; Erucar, Ilknur; Keskin, Seda

    2018-01-31

    Metal organic frameworks (MOFs) have been considered as one of the most exciting porous materials discovered in the last decade. Large surface areas, high pore volumes, and tailorable pore sizes make MOFs highly promising in a variety of applications, mainly in gas separations. The number of MOFs has been increasing very rapidly, and experimental identification of materials exhibiting high gas separation potential is simply impractical. High-throughput computational screening studies in which thousands of MOFs are evaluated to identify the best candidates for a target gas separation are crucial in directing experimental efforts to the most useful materials. In this work, we used molecular simulations to screen the most complete and recent collection of MOFs from the Cambridge Structural Database to unlock their CH4/H2 separation performances. This is the first study in the literature to examine the potential of all existing MOFs for adsorption-based CH4/H2 separation. The 4350 MOFs were ranked based on several adsorbent evaluation metrics including selectivity, working capacity, adsorbent performance score, sorbent selection parameter, and regenerability. A large number of MOFs were identified to have extraordinarily large CH4/H2 selectivities compared to traditional adsorbents such as zeolites and activated carbons. We examined the relations between structural properties of MOFs such as pore sizes, porosities, and surface areas and their selectivities. Correlations between the heat of adsorption, adsorbability, metal type of MOFs, and selectivities were also studied. On the basis of these relations, a simple mathematical model that can predict the CH4/H2 selectivity of MOFs was suggested, which will be very useful in guiding the design and development of new MOFs with extraordinarily high CH4/H2 separation performances.
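The adsorbent evaluation metrics named above have standard textbook definitions; a sketch using those definitions with invented uptake values (not results from the screening study):

```python
# Common adsorbent evaluation metrics used in screening studies; the
# formulas are standard definitions, the uptakes below are made up.

def selectivity(q_ch4, q_h2, y_ch4, y_h2):
    """Adsorption selectivity: (q_CH4/q_H2) / (y_CH4/y_H2)."""
    return (q_ch4 / q_h2) / (y_ch4 / y_h2)

def working_capacity(q_ads, q_des):
    """Uptake difference between adsorption and desorption pressures."""
    return q_ads - q_des

# Hypothetical uptakes (mol/kg) at a 10:90 CH4/H2 bulk composition.
s = selectivity(q_ch4=2.0, q_h2=0.1, y_ch4=0.1, y_h2=0.9)
wc = working_capacity(q_ads=2.0, q_des=0.4)
aps = s * wc  # adsorbent performance score: selectivity x working capacity
print(s, wc, aps)
```

Ranking thousands of simulated materials by such scalar metrics is what allows a screening study to shortlist candidates for synthesis.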

  19. A New Optimized Model to Handle Temporal Data using Open Source Database

    Directory of Open Access Journals (Sweden)

    KUMAR, S.

    2017-05-01

    Full Text Available The majority of database applications nowadays deal with temporal data. Temporal records change over time, and facilities to manage multiple snapshots of these records are generally missing in conventional databases. Consequently, different temporal data models have been proposed and implemented as extensions of conventional, non-temporal database systems. In the single-relation model, the present and past instances are stored in a single relation, which makes its handling cumbersome and inefficient. This paper emphasizes storing the past instances of records in multiple historical relations, while the current relations manage the most recent snapshot of the data. The tuple-timestamping approach is used to timestamp the temporal records. This paper proposes a temporal model for the management of time-varying data built on top of a conventional open-source database. Indexing is used to enhance the performance of the model. The proposed model is also compared with the single-relation model.
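The scheme described, a current relation for the latest snapshot plus a historical relation for superseded tuples with their timestamps, can be sketched on top of a conventional open-source database such as SQLite. Table and column names here are illustrative, not the paper's:

```python
import sqlite3

# Current relation holds the latest snapshot; the history relation
# stores superseded tuples with their valid-time interval.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE employee_current (
    emp_id     INTEGER PRIMARY KEY,
    salary     INTEGER,
    valid_from TEXT
);
CREATE TABLE employee_history (
    emp_id     INTEGER,
    salary     INTEGER,
    valid_from TEXT,
    valid_to   TEXT
);
""")

def update_salary(emp_id, new_salary, today):
    """Move the current tuple to history, then install the new snapshot."""
    con.execute("""INSERT INTO employee_history
                   SELECT emp_id, salary, valid_from, ? FROM employee_current
                   WHERE emp_id = ?""", (today, emp_id))
    con.execute("""UPDATE employee_current
                   SET salary = ?, valid_from = ? WHERE emp_id = ?""",
                (new_salary, today, emp_id))

con.execute("INSERT INTO employee_current VALUES (7, 1000, '2016-01-01')")
update_salary(7, 1200, today='2017-03-01')
print(con.execute("SELECT * FROM employee_history").fetchall())
print(con.execute("SELECT * FROM employee_current").fetchall())
```

Queries about the present touch only the compact current relation, which is the efficiency argument the paper makes against the single-relation model.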

  20. Clinical Prediction Models for Cardiovascular Disease: The Tufts PACE CPM Database

    Science.gov (United States)

    Wessler, Benjamin S.; Lana Lai, YH; Kramer, Whitney; Cangelosi, Michael; Raman, Gowri; Lutz, Jennifer S.; Kent, David M.

    2015-01-01

    Background Clinical prediction models (CPMs) estimate the probability of clinical outcomes and hold the potential to improve decision making and individualize care. For patients with cardiovascular disease (CVD) there are numerous CPMs available, though the extent of this literature is not well described. Methods and Results We conducted a systematic review for articles containing CPMs for CVD published from January 1990 through May 2012. CVD includes coronary heart disease (CHD), heart failure (HF), arrhythmias, stroke, venous thromboembolism (VTE) and peripheral vascular disease (PVD). We created a novel database and characterized CPMs based on the stage of development, population under study, performance, covariates, and predicted outcomes. There are 796 models included in this database. The number of CPMs published each year is increasing steadily over time. 717 (90%) are de novo CPMs, 21 (3%) are CPM recalibrations, and 58 (7%) are CPM adaptations. This database contains CPMs for 31 index conditions, including 215 CPMs for patients with CAD, 168 CPMs for population samples, and 79 models for patients with HF. There are 77 distinct index/outcome (I/O) pairings. Of the de novo models in this database, 450 (63%) report a c-statistic and 259 (36%) report some information on calibration. Conclusions There is an abundance of CPMs available for a wide assortment of CVD conditions, with substantial redundancy in the literature. The comparative performance of these models, the consistency of effects and risk estimates across models, and the actual and potential clinical impact of this body of literature are poorly understood. PMID:26152680
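The c-statistic most of these models report is the probability that a randomly chosen patient with the outcome received a higher predicted risk than a randomly chosen patient without it (ties counting one half); a minimal sketch with toy risk scores:

```python
from itertools import product

def c_statistic(scores_events, scores_nonevents):
    """Concordance probability: fraction of event/non-event pairs in
    which the event received the higher predicted risk (ties count 1/2)."""
    concordant = 0.0
    pairs = 0
    for se, sn in product(scores_events, scores_nonevents):
        pairs += 1
        if se > sn:
            concordant += 1.0
        elif se == sn:
            concordant += 0.5
    return concordant / pairs

# Toy predicted risks for patients with and without the outcome.
events = [0.9, 0.7, 0.6]
nonevents = [0.8, 0.4, 0.3, 0.2]
print(round(c_statistic(events, nonevents), 3))  # 0.833
```

A value of 0.5 means the model discriminates no better than chance; 1.0 means perfect separation of events from non-events.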

  1. Some aspects of the file organization and retrieval strategy in large data-bases

    International Nuclear Information System (INIS)

    Arnaudov, D.D.; Govorun, N.N.

    1977-01-01

    Methods of organizing a large information retrieval system are described. Special attention is paid to file organization, and an adaptive file structure is described in more detail. The discussed method makes it possible to organize large files in such a way that the response time of the system is minimized as the file grows. In connection with the retrieval strategy, a method is proposed that uses the frequencies of descriptors and of descriptor pairs to forecast the expected number of relevant documents. Programs based on these methods have been written and are used in the information retrieval systems of JINR

  2. Quantitative Developments of Biomolecular Databases, Measurement Methodology, and Comprehensive Transport Models for Bioanalytical Microfluidics

    Science.gov (United States)

    2006-10-01

    chemistry models (beads and surfaces) [38]; M11. Biochemistry database integrated with electrochemistry; M12. Hydrogel models for surface biochemistry [30]; M13. Least-squares-based engine for extraction of kinetic coefficients [38]; M14. Rapid ANN ...bacteria and λ-phage DNA. This device relies on the balance between electroosmotic flow and DEP force on suspended particles. In another application...

  3. Modeling and implementing a database on drugs into a hospital intranet.

    Science.gov (United States)

    François, M; Joubert, M; Fieschi, D; Fieschi, M

    1998-09-01

    Our objective was to develop a drug information service by implementing a database on drugs in our university hospitals' information system. Thériaque is a database on all drugs available in France, maintained by a group of pharmacists and physicians. Before its implementation we modeled its content (chemical classes, active components, excipients, indications, contra-indications, side effects, and so on) according to an object-oriented method. We then designed HTML pages whose layout reflects the structure of the classes of objects in the model. Fields in the pages are filled dynamically with the results of queries to a relational database in which the information on drugs is stored. This allowed a fast implementation and avoided porting a client application to the thousands of workstations on the network. The interface provides end users with an easy-to-use and natural way to access drug-related information in an internet environment.
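The pattern described, an HTML page whose fields are filled dynamically from a relational query, can be sketched as follows; the schema and data are invented for illustration, not Thériaque's:

```python
import sqlite3
from html import escape

# Tiny stand-in for a drug table; schema and content are invented.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE drug (name TEXT, indication TEXT)")
con.execute("INSERT INTO drug VALUES ('Drug A', 'hypertension')")

def drug_page(name):
    """Render an HTML fragment from the row matching the drug name."""
    row = con.execute(
        "SELECT name, indication FROM drug WHERE name = ?", (name,)
    ).fetchone()
    return (f"<h1>{escape(row[0])}</h1>"
            f"<p>Indication: {escape(row[1])}</p>")

print(drug_page("Drug A"))
```

Because the HTML is generated server-side from the database, nothing beyond a browser needs to be installed on the client workstations, which is the deployment advantage the paper highlights.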

  4. Crystal Plasticity Modeling of Microstructure Evolution and Mechanical Fields During Processing of Metals Using Spectral Databases

    Science.gov (United States)

    Knezevic, Marko; Kalidindi, Surya R.

    2017-05-01

    This article reviews the advances made in the development and implementation of a novel approach to speeding up crystal plasticity simulations of metal processing by one to three orders of magnitude when compared with the conventional approaches, depending on the specific details of implementation. This is mainly accomplished through the use of spectral crystal plasticity (SCP) databases grounded in the compact representation of the functions central to crystal plasticity computations. A key benefit of the databases is that they allow for a noniterative retrieval of constitutive solutions for any arbitrary plastic stretching tensor (i.e., deformation mode) imposed on a crystal of arbitrary orientation. The article emphasizes the latest developments in terms of embedding SCP databases within implicit finite elements. To illustrate the potential of these novel implementations, the results from several process modeling applications including equal channel angular extrusion and rolling are presented and compared with experimental measurements and predictions from other models.

  5. Hydrologic Derivatives for Modeling and Analysis—A new global high-resolution database

    Science.gov (United States)

    Verdin, Kristine L.

    2017-07-17

    The U.S. Geological Survey has developed a new global high-resolution hydrologic derivative database. Loosely modeled on the HYDRO1k database, this new database, entitled Hydrologic Derivatives for Modeling and Analysis, provides comprehensive and consistent global coverage of topographically derived raster layers (digital elevation model data, flow direction, flow accumulation, slope, and compound topographic index) and vector layers (streams and catchment boundaries). The coverage of the data is global, and the underlying digital elevation model is a hybrid of three datasets: HydroSHEDS (Hydrological data and maps based on SHuttle Elevation Derivatives at multiple Scales), GMTED2010 (Global Multi-resolution Terrain Elevation Data 2010), and the SRTM (Shuttle Radar Topography Mission). For most of the globe south of 60°N., the raster resolution of the data is 3 arc-seconds, corresponding to the resolution of the SRTM. For the areas north of 60°N., the resolution is 7.5 arc-seconds (the highest resolution of the GMTED2010 dataset) except for Greenland, where the resolution is 30 arc-seconds. The streams and catchments are attributed with Pfafstetter codes, based on a hierarchical numbering system, that carry important topological information. This database is appropriate for use in continental-scale modeling efforts. The work described in this report was conducted by the U.S. Geological Survey in cooperation with the National Aeronautics and Space Administration Goddard Space Flight Center.

  6. Estimating soil water-holding capacities by linking the Food and Agriculture Organization Soil map of the world with global pedon databases and continuous pedotransfer functions

    Science.gov (United States)

    Reynolds, C. A.; Jackson, T. J.; Rawls, W. J.

    2000-12-01

    Spatial soil water-holding capacities were estimated for the Food and Agriculture Organization (FAO) digital Soil Map of the World (SMW) by employing continuous pedotransfer functions (PTF) within global pedon databases and linking these results to the SMW. The procedure first estimated representative soil properties for the FAO soil units by statistical analyses and taxotransfer depth algorithms [Food and Agriculture Organization (FAO), 1996]. The representative soil properties estimated for two depth layers (0-30 and 30-100 cm) included particle-size distribution, dominant soil texture, organic carbon content, coarse fragments, bulk density, and porosity. After representative soil properties for the FAO soil units were estimated, these values were substituted into three different pedotransfer function (PTF) models by Rawls et al. [1982], Saxton et al. [1986], and Batjes [1996a]. The Saxton PTF model was finally selected to calculate available water content because it requires only particle-size distribution data, and its results closely agreed with those of the Rawls and Batjes PTF models, which use both particle-size distribution and organic matter data. Soil water-holding capacities were then estimated by multiplying the available water content by the soil layer thickness and integrating over an effective crop root depth of 1 m or less (i.e., where shallow impermeable layers were encountered) and another soil depth data layer of 2.5 m or less.
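The final step described, multiplying available water content by layer thickness and integrating down to the effective root depth, can be sketched as follows. The layer values are hypothetical, not FAO estimates, and this is not the Saxton PTF itself (which would first derive the available water content from particle-size data):

```python
# Water-holding capacity: available water content (a volume fraction)
# integrated over layer thicknesses down to the effective root depth.

def water_holding_capacity(layers, root_depth_cm):
    """layers: (thickness_cm, awc_fraction) pairs from the surface down.
    Returns plant-available water in cm over the rooted profile."""
    whc, depth = 0.0, 0.0
    for thickness, awc in layers:
        usable = min(thickness, max(root_depth_cm - depth, 0.0))
        whc += usable * awc
        depth += thickness
    return whc

# Two layers (0-30 and 30-100 cm) with illustrative AWC fractions.
print(water_holding_capacity([(30, 0.15), (70, 0.12)], root_depth_cm=100))
```

Truncating the integration at a shallower root depth, as the paper does where impermeable layers occur, simply shortens the `usable` portion of the deeper layers.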

  7. Database Description - RED | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us RED Database Description General information of database Database name RED Alternative name Rice Expression Database...enome Research Unit Shoshi Kikuchi E-mail : Database classification Plant databases - Rice Database classifi...cation Microarray, Gene Expression Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database descripti...on The Rice Expression Database (RED) is a database that aggregates the gene expr...icroarray Project and other research groups. Features and manner of utilization of database

  8. A virtual observatory for photoionized nebulae: the Mexican Million Models database (3MdB).

    Science.gov (United States)

    Morisset, C.; Delgado-Inglada, G.; Flores-Fajardo, N.

    2015-04-01

    Photoionization models obtained with numerical codes are widely used to study the physics of the interstellar medium (planetary nebulae, HII regions, etc.). Grids of models are computed to understand the effects of the different parameters used to describe the regions on the observables (mainly emission line intensities). Most of the time, only a small part of the computed results of such grids is published, and the results are sometimes hard to obtain in a user-friendly format. We present here the Mexican Million Models dataBase (3MdB), an effort to resolve both of these issues in the form of a database of photoionization models, easily accessible through the MySQL protocol and containing many useful outputs from the models, such as the intensities of 178 emission lines, the ionic fractions of all the ions, etc. Some examples of the use of the 3MdB are also presented.

  9. Database Description - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Trypanosomes Database Database Description General information of database Database name Trypanosomes Database...stitute of Genetics Research Organization of Information and Systems Yata 1111, Mishima, Shizuoka 411-8540, JAPAN E mail: Database... classification Protein sequence databases Organism Taxonom...y Name: Trypanosoma Taxonomy ID: 5690 Taxonomy Name: Homo sapiens Taxonomy ID: 9606 Database description The Trypanosomes database... is a database providing the comprehensive information of proteins that is effective t

  10. TogoTable: cross-database annotation system using the Resource Description Framework (RDF) data model.

    Science.gov (United States)

    Kawano, Shin; Watanabe, Tsutomu; Mizuguchi, Sohei; Araki, Norie; Katayama, Toshiaki; Yamaguchi, Atsuko

    2014-07-01

    TogoTable (http://togotable.dbcls.jp/) is a web tool that adds user-specified annotations to a table that a user uploads. Annotations are drawn from several biological databases that use the Resource Description Framework (RDF) data model. TogoTable uses database identifiers (IDs) in the table as a query key for searching. RDF data, which form a network called Linked Open Data (LOD), can be searched from SPARQL endpoints using the SPARQL query language. Because TogoTable uses RDF, it can integrate annotations not only from the reference database to which the IDs originally belong, but also from externally linked databases via the LOD network. For example, annotations in the Protein Data Bank can be retrieved using GeneID through links provided by the UniProt RDF. Because RDF has been standardized by the World Wide Web Consortium, any database with annotations based on the RDF data model can be easily incorporated into this tool. We believe that TogoTable is a valuable Web tool, particularly for experimental biologists who need to process huge amounts of data such as high-throughput experimental output. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
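The kind of SPARQL lookup described, resolving an uploaded ID to an annotation at an endpoint, can be sketched as a query builder. The base URI and predicate below are placeholders for illustration, not TogoTable's actual queries:

```python
# Build a SPARQL query fetching one annotation for a database ID.
# The ID base URI and predicate are hypothetical examples.

def annotation_query(id_prefix, identifier, predicate):
    """Return a SPARQL SELECT resolving identifier -> annotation value."""
    return (
        "SELECT ?value WHERE { "
        f"<{id_prefix}{identifier}> <{predicate}> ?value . "
        "}"
    )

q = annotation_query(
    "http://identifiers.org/ncbigene/",              # assumed ID base URI
    "7157",
    "http://www.w3.org/2000/01/rdf-schema#label",    # example predicate
)
print(q)
```

Posting such a query to a SPARQL endpoint returns the annotation value, which the tool then writes into the corresponding table column.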

  11. The conceptual model of organization social responsibility

    OpenAIRE

    LUO, Lan; WEI, Jingfu

    2014-01-01

    With the development of CSR research, people have become increasingly aware that corporations should take responsibility. Should other organizations besides corporations not also take responsibilities beyond their field? This paper puts forward the concept of organization social responsibility (OSR) on the basis of the concept of corporate social responsibility and other theories. Conceptual models are built on this basis, introducing OSR from three angles: the types of organi...

  12. An Object-Oriented Language-Database Integration Model: The Composition-Filters Approach

    NARCIS (Netherlands)

    Aksit, Mehmet; Bergmans, Lodewijk; Vural, S.; Vural, Sinan; Lehrmann Madsen, O.

    1992-01-01

    This paper introduces a new model, based on so-called object-composition filters, that uniformly integrates database-like features into an object-oriented language. The focus is on providing persistent dynamic data structures, data sharing, transactions, multiple views and associative access,

  13. Modelling of phase diagrams and thermodynamic properties using Calphad method – Development of thermodynamic databases

    Czech Academy of Sciences Publication Activity Database

    Kroupa, Aleš

    2013-01-01

    Roč. 66, JAN (2013), s. 3-13 ISSN 0927-0256 R&D Projects: GA MŠk(CZ) OC08053 Institutional support: RVO:68081723 Keywords : Calphad method * phase diagram modelling * thermodynamic database development Subject RIV: BJ - Thermodynamics Impact factor: 1.879, year: 2013

  14. An object-oriented language-database integration model: The composition filters approach

    NARCIS (Netherlands)

    Aksit, Mehmet; Bergmans, Lodewijk; Vural, Sinan; Vural, S.

    1991-01-01

    This paper introduces a new model, based on so-called object-composition filters, that uniformly integrates database-like features into an object-oriented language. The focus is on providing persistent dynamic data structures, data sharing, transactions, multiple views and associative access,

  15. Exposure Modeling Tools and Databases for Consideration for Relevance to the Amended TSCA (ISES)

    Science.gov (United States)

    The Agency’s Office of Research and Development (ORD) maintains a number of exposure modeling tools and databases. These efforts are anticipated to be useful in supporting ongoing implementation of the amended Toxic Substances Control Act (TSCA). Under ORD’s Chemic...

  16. The Subject-Object Relationship Interface Model in Database Management Systems.

    Science.gov (United States)

    Yannakoudakis, Emmanuel J.; Attar-Bashi, Hussain A.

    1989-01-01

    Describes a model that displays structures necessary to map between the conceptual and external levels in database management systems, using an algorithm that maps the syntactic representations of tuples onto semantic representations. A technique for translating tuples into natural language sentences is introduced, and a system implemented in…

  17. PK/DB: database for pharmacokinetic properties and predictive in silico ADME models.

    Science.gov (United States)

    Moda, Tiago L; Torres, Leonardo G; Carrara, Alexandre E; Andricopulo, Adriano D

    2008-10-01

    The study of pharmacokinetic properties (PK) is of great importance in drug discovery and development. In the present work, PK/DB (a new freely available database for PK) was designed with the aim of creating robust databases for pharmacokinetic studies and in silico absorption, distribution, metabolism and excretion (ADME) prediction. Comprehensive, web-based and easy to access, PK/DB manages 1203 compounds which represent 2973 pharmacokinetic measurements, including five models for in silico ADME prediction (human intestinal absorption, human oral bioavailability, plasma protein binding, blood-brain barrier and water solubility). http://www.pkdb.ifsc.usp.br

  18. Environmental Education Organizations and Programs in Texas: Identifying Patterns through a Database and Survey Approach for Establishing Frameworks for Assessment and Progress

    Science.gov (United States)

    Lloyd-Strovas, Jenny D.; Arsuffi, Thomas L.

    2016-01-01

    We examined the diversity of environmental education (EE) in Texas, USA, by developing a framework to assess EE organizations and programs at a large scale: the Environmental Education Database of Organizations and Programs (EEDOP). This framework consisted of the following characteristics: organization/visitor demographics, pedagogy/curriculum,…

  19. The LAILAPS search engine: a feature model for relevance ranking in life science databases.

    Science.gov (United States)

    Lange, Matthias; Spies, Karl; Colmsee, Christian; Flemming, Steffen; Klapperstück, Matthias; Scholz, Uwe

    2010-03-25

    Efficient and effective information retrieval in the life sciences is one of the most pressing challenges in bioinformatics. The incredible growth of life science databases into a vast network of interconnected information systems is to the same extent a big challenge and a great chance for life science research. The knowledge found on the Web, and in particular in life-science databases, is a valuable major resource, and to bring it to the scientist's desktop it is essential to have well-performing search engines. Here, neither the response time nor the number of results is the decisive factor: for millions of query results, the most crucial factor is relevance ranking. In this paper, we present a feature model for relevance ranking in life science databases and its implementation in the LAILAPS search engine. Motivated by observations of user behaviour during the inspection of search engine results, we condensed a set of 9 relevance-discriminating features. These features are intuitively used by scientists who briefly screen database entries for potential relevance; they are both sufficient to estimate potential relevance and efficiently quantifiable. The derivation of a relevance prediction function that computes relevance from these features constitutes a regression problem. To solve this problem, we used artificial neural networks trained with a reference set of relevant database entries for 19 protein queries. Supporting a flexible text index and a simple data import format, these concepts are implemented in the LAILAPS search engine. It can easily be used both as a search engine for comprehensive integrated life science databases and for small in-house project databases. LAILAPS is publicly available for SWISSPROT data at http://lailaps.ipk-gatersleben.de.
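The core idea, scoring each entry from a handful of quantified relevance features, can be sketched with a fixed weighted combination standing in for the trained neural network; the features and weights here are invented, not LAILAPS's nine features:

```python
# Toy feature-based relevance ranking: each database entry is scored
# from normalised feature values by a weighted combination. LAILAPS
# learns such a predictor with a neural network; fixed weights stand
# in for it here, and all names and values are invented.

def relevance(features, weights):
    """Weighted sum of normalised feature values in [0, 1]."""
    return sum(weights[name] * value for name, value in features.items())

weights = {"term_in_title": 0.5, "term_frequency": 0.3, "entry_age": 0.2}
entries = {
    "P12345": {"term_in_title": 1.0, "term_frequency": 0.4, "entry_age": 0.9},
    "Q99999": {"term_in_title": 0.0, "term_frequency": 0.8, "entry_age": 0.5},
}
ranked = sorted(entries, key=lambda e: relevance(entries[e], weights),
                reverse=True)
print(ranked)
```

Replacing the fixed weights with a trained regression model is what turns this sketch into a learned ranking function.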

  20. Development of a database of organ doses for paediatric and young adult CT scans in the United Kingdom

    Science.gov (United States)

    Kim, K. P.; Berrington de González, A.; Pearce, M. S.; Salotti, J. A.; Parker, L.; McHugh, K.; Craft, A. W.; Lee, C.

    2012-01-01

    Despite great potential benefits, there are concerns about the possible harm from medical imaging, including the risk of radiation-related cancer. There are particular concerns about computed tomography (CT) scans in children because both the radiation dose and the sensitivity to radiation for children are typically higher than for adults undergoing equivalent procedures. As direct empirical data on the cancer risks from CT scans are lacking, the authors are conducting a retrospective cohort study of over 240 000 children in the UK who underwent CT scans. The main objective of the study is to quantify the magnitude of the cancer risk in relation to the radiation dose from CT scans. In this paper, the methods used to estimate typical organ-specific doses delivered by CT scans to children are described. An organ dose database from Monte Carlo radiation transport-based computer simulations, using a series of computational human phantoms from newborn to adult for both males and females, was established. Organ doses vary with patient size and sex, examination type and CT technical settings. Therefore, information on patient age, sex and examination type from electronic radiology information systems, together with technical settings obtained from two national surveys in the UK, was used to estimate radiation dose. Absorbed doses to the brain, thyroid, breast and red bone marrow were calculated for reference male and female individuals at ages newborn, 1, 5, 10, 15 and 20 y, for a total of 17 different scan types in the pre- and post-2001 time periods. In general, estimated organ doses were slightly higher for females than for males, which might be attributed to the smaller body size of females. The younger children received higher doses in the pre-2001 period, when adult CT settings were typically used for children. Paediatric-specific adjustments are assumed to have been used more frequently after 2001; since then, radiation doses to children have often been smaller than those to adults. The

  1. Effective modelling of percolation at the landscape scale using data-based approaches

    Science.gov (United States)

    Selle, Benny; Lischeid, Gunnar; Huwe, Bernd

    2008-06-01

    Process-based models have been extensively applied to assess the impact of land-use change on water quantity and quality at landscape scales. However, the routine application of those models suffers from large computational efforts, lack of transparency and the requirement of many input parameters. Data-based models such as Feed-Forward Multilayer Perceptrons (MLP) and Classification and Regression Trees (CART) may be used as effective models, i.e. simple approximations of complex process-based models. These data-based approaches can subsequently be applied for scenario analysis and as a transparent management tool, provided climatic boundary conditions and the basic model assumptions of the process-based models do not change dramatically. In this study, we apply MLP, CART and Multiple Linear Regression (LR) to model the spatially distributed and spatially aggregated percolation in soils using weather, groundwater and soil data. The percolation data are obtained via numerical experiments with Hydrus1D. Thus, the complex process-based model is approximated using simpler data-based approaches. The MLP model explains most of the percolation variance in time and space without using any soil information. This reflects the effective dimensionality of the process-based model and suggests that percolation in the study area may be modelled far more simply than with Hydrus1D. The CART model shows that soil properties play a negligible role for percolation under wet climatic conditions; however, they become more important as conditions turn drier. The LR method does not yield satisfactory predictions for the spatially distributed percolation, but the spatially aggregated percolation is well approximated. This may indicate that the soils behave more simply (i.e. more linearly) when percolation dynamics are upscaled.
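
The surrogate-modelling idea can be sketched in a few lines. The toy example below is invented for illustration (it is not the authors' Hydrus1D setup; the "process model", its forcings and their ranges are assumptions): an expensive nonlinear model is sampled, a multiple linear regression is fitted to its outputs, and R² measures how much of the process-model variance the effective model reproduces.

```python
import math
import random

# Hypothetical stand-in for an expensive process-based model (NOT Hydrus1D):
# percolation as a nonlinear, thresholded function of rainfall and
# evapotranspiration. Forcing ranges are invented for illustration.
def process_model(rain, et):
    return max(0.0, rain - et - 5.0 * math.tanh(rain / 50.0))

random.seed(42)
# "Numerical experiments": sample the process model over plausible forcings.
data = [(random.uniform(0.0, 100.0), random.uniform(0.0, 30.0)) for _ in range(500)]
y = [process_model(r, e) for r, e in data]

def solve3(A, b):
    # Gauss-Jordan elimination with partial pivoting for a 3x3 system.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(3):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    return [M[i][3] / M[i][i] for i in range(3)]

# Fit the effective model (multiple linear regression) via normal equations.
X = [[1.0, r, e] for r, e in data]
XtX = [[sum(row[i] * row[j] for row in X) for j in range(3)] for i in range(3)]
Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(3)]
beta = solve3(XtX, Xty)

# Fraction of the process model's variance reproduced by the surrogate.
pred = [beta[0] + beta[1] * r + beta[2] * e for r, e in data]
ybar = sum(y) / len(y)
r2 = 1.0 - (sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
            / sum((yi - ybar) ** 2 for yi in y))
```

In this toy setting the surrogate captures most of the variance; the same diagnostic (variance explained in space and time) is what distinguishes the MLP, CART and LR results in the abstract.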

  2. Predicting 30-day Hospital Readmission with Publicly Available Administrative Database. A Conditional Logistic Regression Modeling Approach.

    Science.gov (United States)

    Zhu, K; Lou, Z; Zhou, J; Ballester, N; Kong, N; Parikh, P

    2015-01-01

    This article is part of the Focus Theme of Methods of Information in Medicine on "Big Data and Analytics in Healthcare". Hospital readmissions raise healthcare costs and cause significant distress to providers and patients. It is, therefore, of great interest to healthcare organizations to predict which patients are at risk of being readmitted to their hospitals. However, current logistic regression based risk prediction models have limited prediction power when applied to hospital administrative data. Meanwhile, although decision trees and random forests have been applied, they tend to be too complex for hospital practitioners to interpret. Our objective was to explore the use of conditional logistic regression (CLR) to increase prediction accuracy. We analyzed an HCUP statewide inpatient discharge record dataset, which includes patient demographics, clinical and care utilization data from California. We extracted records of heart failure Medicare beneficiaries who had inpatient experience during an 11-month period. We corrected the data imbalance issue with under-sampling. In our study, we first applied standard logistic regression and a decision tree to obtain influential variables and derive practically meaningful decision rules. We then stratified the original data set accordingly and applied logistic regression on each data stratum. We further explored the effect of interacting variables in the logistic regression modeling. We conducted cross validation to assess the overall prediction performance of CLR and compared it with standard classification models. The developed CLR models outperformed several standard classification models (e.g., straightforward logistic regression, stepwise logistic regression, random forest, support vector machine). For example, the best CLR model improved the classification accuracy by nearly 20% over the straightforward logistic regression model. 
Furthermore, the developed CLR models tend to achieve better sensitivity of
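
The stratify-then-fit idea can be sketched on entirely synthetic data (the strata, coefficients and sample sizes below are invented, not the HCUP cohort): fit one logistic regression per stratum and compare its training log-likelihood against a single pooled model. When risk depends on a predictor differently in each stratum, the stratified fit wins.

```python
import math
import random

random.seed(0)

# Synthetic cohort: readmission risk depends on scaled age with a different
# slope in each hypothetical stratum (e.g., strata from a decision-tree rule).
def simulate(stratum, n=400):
    rows = []
    for _ in range(n):
        x = random.uniform(-1.0, 1.0)  # scaled age
        logit = 2.0 * x if stratum == 0 else -0.5 * x + 1.0
        p = 1.0 / (1.0 + math.exp(-logit))
        rows.append((x, 1 if random.random() < p else 0, stratum))
    return rows

data = simulate(0) + simulate(1)

def fit_logistic(rows, iters=3000):
    # Full-batch gradient ascent on the log-likelihood (intercept + slope).
    b0 = b1 = 0.0
    lr = 0.5 / len(rows)
    for _ in range(iters):
        g0 = g1 = 0.0
        for x, yv, _ in rows:
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += yv - p
            g1 += (yv - p) * x
        b0 += lr * g0
        b1 += lr * g1
    return b0, b1

def loglik(rows, model_for_stratum):
    ll = 0.0
    for x, yv, s in rows:
        b0, b1 = model_for_stratum(s)
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
        p = min(max(p, 1e-12), 1.0 - 1e-12)
        ll += yv * math.log(p) + (1 - yv) * math.log(1.0 - p)
    return ll

pooled = fit_logistic(data)
per_stratum = {s: fit_logistic([r for r in data if r[2] == s]) for s in (0, 1)}

ll_pooled = loglik(data, lambda s: pooled)
ll_conditional = loglik(data, lambda s: per_stratum[s])
# Fitting within strata should fit the training data at least as well.
```

In practice the comparison would of course be made by cross validation on held-out data, as in the study, rather than on the training likelihood.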

  3. Gas Chromatography and Mass Spectrometry Measurements and Protocols for Database and Library Development Relating to Organic Species in Support of the Mars Science Laboratory

    Science.gov (United States)

    Misra, P.; Garcia, R.; Mahaffy, P. R.

    2010-04-01

    An organic contaminant database and library has been developed for use with the Sample Analysis at Mars (SAM) instrumentation utilizing laboratory-based Gas Chromatography-Mass Spectrometry measurements of pyrolyzed and baked material samples.

  4. 3D Digital Model Database Applied to Conservation and Research of Wooden Construction in China

    Science.gov (United States)

    Zheng, Y.

    2013-07-01

    Protected by the Tai-Hang Mountains, Shanxi Province, located in north central China, is a highly prosperous, densely populated valley and considered to be one of the cradles of Chinese civilization. Its continuous habitation and rich culture have given rise to a large number of temple complexes and pavilions. Among these structures, 153 in the Southern Shanxi area can be dated from the Tang dynasty (618-907 C.E.) to the end of the Yuan dynasty (1279-1368 C.E.). The buildings are the best-preserved examples of wooden Chinese architecture in existence, exemplifying historic building technology and displaying highly intricate architectural decoration and detailing. They have survived war, earthquakes, and, in the last hundred years, neglect. In 2005, a decade-long conservation project was initiated by the State Administration of Cultural Heritage of China (SACH) to conserve and document these important buildings. The conservation process requires stabilization, conservation of important features, and, where necessary, partial dismantlement in order to replace unsound structural elements. The CHCC project team has developed a practical recording system that creates a record of all building components prior to and during the conservation process. Building on this, we are establishing a comprehensive database covering all 153 early buildings, through which information on the wooden constructions, down to component details, can easily be entered, browsed and indexed. The database enables comparative studies of these wooden structures and provides important support for their continued conservation. For the most important wooden structures, we have established three-dimensional models. By connecting the database with the 3D digital models in ArcGIS, we have developed a 3D Digital Model Database for these cherished buildings. 
The 3D Digital Model Database helps us set up an integrated information inventory

  5. 3D DIGITAL MODEL DATABASE APPLIED TO CONSERVATION AND RESEARCH OF WOODEN CONSTRUCTION IN CHINA

    Directory of Open Access Journals (Sweden)

    Y. Zheng

    2013-07-01

    Full Text Available Protected by the Tai-Hang Mountains, Shanxi Province, located in north central China, is a highly prosperous, densely populated valley and considered to be one of the cradles of Chinese civilization. Its continuous habitation and rich culture have given rise to a large number of temple complexes and pavilions. Among these structures, 153 in the Southern Shanxi area can be dated from the Tang dynasty (618-907 C.E.) to the end of the Yuan dynasty (1279-1368 C.E.). The buildings are the best-preserved examples of wooden Chinese architecture in existence, exemplifying historic building technology and displaying highly intricate architectural decoration and detailing. They have survived war, earthquakes, and, in the last hundred years, neglect. In 2005, a decade-long conservation project was initiated by the State Administration of Cultural Heritage of China (SACH) to conserve and document these important buildings. The conservation process requires stabilization, conservation of important features, and, where necessary, partial dismantlement in order to replace unsound structural elements. The CHCC project team has developed a practical recording system that creates a record of all building components prior to and during the conservation process. Building on this, we are establishing a comprehensive database covering all 153 early buildings, through which information on the wooden constructions, down to component details, can easily be entered, browsed and indexed. The database enables comparative studies of these wooden structures and provides important support for their continued conservation. For the most important wooden structures, we have established three-dimensional models. By connecting the database with the 3D digital models in ArcGIS, we have developed a 3D Digital Model Database for these cherished buildings. The 3D Digital Model Database helps us set up an integrated

  6. COMPUTER MODEL FOR ORGANIC FERTILIZER EVALUATION

    OpenAIRE

    Lončarić, Zdenko; Vukobratović, Marija; Ragaly, Peter; Filep, Tibor; Popović, Brigita; Karalić, Krunoslav; Vukobratović, Želimir

    2009-01-01

    Evaluation of manures, composts and growing media quality should include enough properties to enable optimal use from both productivity and environmental points of view. The aim of this paper is to describe the basic structure of an organic fertilizer (and growing media) evaluation model, to present an example of the model through a comparison of different manures, and to show how a plant growth experiment can be used to calculate the impact of pH and EC of growing media on lettuce growth. The basic structure of ...

  7. Resveratrol and Lifespan in Model Organisms.

    Science.gov (United States)

    Pallauf, Kathrin; Rimbach, Gerald; Rupp, Petra Maria; Chin, Dawn; Wolf, Insa M A

    2016-01-01

    Resveratrol may possess life-prolonging and health-benefitting properties, some of which may resemble the effects of caloric restriction (CR). CR appears to prolong the lifespan of model organisms in some studies and may benefit human health. However, for humans, restricting food intake for an extended period of time seems impracticable, and substances imitating the beneficial effects of CR without the need to reduce food intake could improve health in an aging and overweight population. We have reviewed the literature studying the influence of resveratrol on the lifespan of model organisms, including yeast, flies, worms, and rodents. We summarize the in vivo findings, describe modulations of molecular targets and gene expression observed in vivo and in vitro, and discuss how these changes may contribute to lifespan extension. Data from clinical studies are summarized to provide insight into the potential of resveratrol supplementation in humans. Resveratrol supplementation has been shown to prolong lifespan in approximately 60% of the studies conducted in model organisms. However, the current literature is contradictory, indicating that the lifespan effects of resveratrol vary strongly depending on the model organism. While worms and killifish seemed very responsive to resveratrol, resveratrol failed to affect lifespan in the majority of the studies conducted in flies and mice. Furthermore, factors such as dose, gender, genetic background and diet composition may contribute to the high variance in the observed effects. It remains inconclusive whether resveratrol is indeed a CR mimetic and possesses life-prolonging properties. The limited bioavailability of resveratrol may further impede its potential effects.

  8. Integrated modelling of two xenobiotic organic compounds

    DEFF Research Database (Denmark)

    Lindblom, Erik Ulfson; Gernaey, K.V.; Henze, Mogens

    2006-01-01

    This paper presents a dynamic mathematical model that describes the fate and transport of two selected xenobiotic organic compounds (XOCs) in a simplified representation of an integrated urban wastewater system. A simulation study, where the xenobiotics bisphenol A and pyrene are used as reference...

  9. A STRATEGIC MANAGEMENT MODEL FOR SERVICE ORGANIZATIONS

    OpenAIRE

    Andreea ZAMFIR

    2013-01-01

    This paper provides a knowledge-based model for the strategic management of services, with a view to emphasising an approach to gaining competitive advantage through knowledge, people and networking. The long-term evolution of the service organization is associated with the way in which strategic management is practised.

  10. Modeling of Organic Effects on Aerosols Growth

    Science.gov (United States)

    Caboussat, A.; Amundson, N. R.; He, J.; Seinfeld, J. H.

    2006-05-01

    Over the last two decades, a series of modules has been developed in the atmospheric modeling community to predict the phase transitions, multistage growth phenomena, crystallization and evaporation of inorganic aerosols. At the same time, the water interactions of particles containing organic constituents have been recognized as an important factor in aerosol activation and cloud formation. However, research on the hygroscopicity of organic-containing aerosols, motivated by the organic effect on aerosol growth and activation, has received much less attention. We present here a new model (UHAERO) that efficiently and rigorously computes phase separation and liquid-liquid equilibrium for organic particles, as well as the dynamic partitioning between gas and particulate phases, with emphasis on the role of water vapor in gas-liquid partitioning. The model does not rely on any a priori specification of the phases present under certain atmospheric conditions. The determination of the thermodynamic equilibrium is based on the minimization of the Gibbs free energy. The mass transfer between the particle and the bulk gas phase is dynamically driven by the difference between the bulk gas pressure and the gas pressure at the surface of a particle. The multicomponent phase equilibrium for a closed organic aerosol system at constant temperature and pressure and for specified feeds is the solution to the liquid-liquid equilibrium problem arising from the constrained minimization of the Gibbs free energy. A geometrical concept of a phase simplex (phase separation) is introduced to characterize the thermodynamic equilibrium. The computation of the mass fluxes is achieved by coupling the thermodynamics of the organic aerosol particle with the determination of the mass fluxes. Numerical results show the efficiency of the model, which makes it suitable for insertion into global three-dimensional air quality models. 
The Gibbs free energy is modeled by the UNIFAC model to illustrate
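
The role of Gibbs-energy minimization in predicting phase separation without pre-specifying the phases can be illustrated with a one-parameter binary mixing model. This is a Flory-Huggins-style toy, not the UNIFAC parameterization used by UHAERO: for a weak interaction the mixing energy has a single minimum (one phase), while a strong interaction produces two minima, i.e. a liquid-liquid split.

```python
import math

# Dimensionless Gibbs energy of mixing for a binary system with an
# interaction parameter chi (illustrative toy model, not UNIFAC).
def g_mix(x, chi):
    return x * math.log(x) + (1.0 - x) * math.log(1.0 - x) + chi * x * (1.0 - x)

def local_minima(chi, n=2000):
    # Grid search for interior local minima of the mixing energy.
    xs = [i / n for i in range(1, n)]
    gs = [g_mix(x, chi) for x in xs]
    return [xs[i] for i in range(1, len(xs) - 1)
            if gs[i] < gs[i - 1] and gs[i] < gs[i + 1]]

one_phase = local_minima(1.0)  # weak interaction: a single stable composition
two_phase = local_minima(3.0)  # strong interaction: two coexisting phases
```

The critical value here is chi = 2; above it the energy surface develops two wells, and minimizing the total Gibbs energy of the closed system selects the two-phase split, exactly the kind of decision UHAERO makes, in many dimensions, without a priori phase assumptions.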

  11. Using the Cambridge Structural Database of organic and organometallic compounds in structural biology

    Czech Academy of Sciences Publication Activity Database

    Hašek, Jindřich

    2010-01-01

    Roč. 17, 1a (2010), b24-b26 ISSN 1211-5894. [Discussions in Structural Molecular Biology /8./. Nové Hrady, 18.03.2010-20.03.2010] R&D Projects: GA AV ČR IAA500500701; GA ČR GA305/07/1073 Institutional research plan: CEZ:AV0Z40500505 Keywords : organic chemistry * Cambridge Structural Database * molecular structure Subject RIV: CD - Macromolecular Chemistry http://xray.cz/ms/bul2010-1a/friday2.pdf

  12. A scalable database model for multiparametric time series: a volcano observatory case study

    Science.gov (United States)

    Montalto, Placido; Aliotta, Marco; Cassisi, Carmelo; Prestifilippo, Michele; Cannata, Andrea

    2014-05-01

    The variables collected by a sensor network constitute a heterogeneous data source that needs to be properly organized in order to be used in research and geophysical monitoring. With the term time series we refer to a set of observations of a given phenomenon acquired sequentially in time; when the time intervals are equally spaced, one speaks of a sampling period or sampling frequency. Our work describes in detail a possible methodology for the storage and management of time series using a specific data structure. We designed a framework, hereinafter called TSDSystem (Time Series Database System), to acquire time series from different data sources and standardize them within a relational database. This standardization provides the ability to perform operations, such as queries and visualization, on many measures, synchronizing them on a common time scale. The proposed architecture follows a multiple-layer paradigm (Loaders layer, Database layer and Business Logic layer). Each layer is specialized in performing particular operations for the reorganization and archiving of data from different sources, such as ASCII, Excel, ODBC (Open DataBase Connectivity) and files accessible from the Internet (web pages, XML). In particular, the Loaders layer performs a security check of the working status of each running software component through a heartbeat system, in order to automate the discovery of acquisition issues and other warning conditions. Although our system has to manage huge amounts of data, performance is guaranteed by a smart table partitioning strategy that keeps the percentage of data stored in each database table balanced. TSDSystem also contains modules for the visualization of acquired data, which provide the possibility to query different time series over a specified time range, or to follow the real-time signal acquisition, according to a data access policy for the users.
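
The standardization step can be sketched with a simplified two-table relational layout (invented for illustration; not the actual TSDSystem schema): heterogeneous series share one table keyed on a common time scale, so a single range query returns synchronized measurements from sources with different sampling periods.

```python
import sqlite3

# Assumed minimal schema: one row per series, samples keyed on a common
# time scale (here, seconds from an arbitrary epoch).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE series (id INTEGER PRIMARY KEY, name TEXT, unit TEXT);
CREATE TABLE sample (series_id INTEGER REFERENCES series(id),
                     t INTEGER, value REAL,
                     PRIMARY KEY (series_id, t));
""")
con.execute("INSERT INTO series VALUES (1, 'tremor_rms', 'um/s')")
con.execute("INSERT INTO series VALUES (2, 'SO2_flux', 't/d')")
# Two sources with different sampling periods, standardized into one table.
con.executemany("INSERT INTO sample VALUES (1, ?, ?)",
                [(t, 0.1 * t) for t in range(0, 600, 60)])    # 60 s period
con.executemany("INSERT INTO sample VALUES (2, ?, ?)",
                [(t, 5.0) for t in range(0, 600, 120)])       # 120 s period

# One range query returns both series synchronized on the common time scale.
rows = con.execute("""
    SELECT s.name, sa.t, sa.value
    FROM sample sa JOIN series s ON s.id = sa.series_id
    WHERE sa.t BETWEEN 120 AND 360
    ORDER BY sa.t, s.name
""").fetchall()
```

A production system would add the partitioning and heartbeat machinery described in the abstract; the point here is only that a common time key makes cross-source queries a single relational operation.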

  13. Database Description - RMOS | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us RMOS Database Description General information of database Database name RMOS Alternative nam...arch Unit Shoshi Kikuchi E-mail : Database classification Plant databases - Rice Microarray Data and other Gene Expression Database...s Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database description The Ric...e Microarray Opening Site is a database of comprehensive information for Rice Mic...es and manner of utilization of database You can refer to the information of the

  14. Emergent organization in a model market

    Science.gov (United States)

    Yadav, Avinash Chand; Manchanda, Kaustubh; Ramaswamy, Ramakrishna

    2017-09-01

    We study the collective behaviour of interacting agents in a simple model of market economics that was originally introduced by Nørrelykke and Bak. A general theoretical framework for interacting traders on an arbitrary network is presented, with the interaction consisting of buying (namely consumption) and selling (namely production) of commodities. Extremal dynamics is introduced by having the agent with the least profit in the market readjust prices, causing the market to self-organize. In addition to examining this model market on regular lattices in two dimensions, we also study the cases of random complex networks both with and without community structure. Fluctuations in an activity signal exhibit properties that are characteristic of avalanches observed in models of self-organized criticality, and these can be described by power-law distributions when the system is in the critical state.
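
The extremal dynamics described above can be sketched as a Bak-Sneppen-style toy on a ring (the update rule below is a simplification invented for illustration, not the Nørrelykke-Bak model itself): at each step the least-profitable agent readjusts, perturbing its neighbours, and the system self-organizes so that the selected minimum stays below a threshold.

```python
import random

random.seed(1)
N = 100
steps = 20000

# One profit value per agent on a ring of N traders.
profit = [random.random() for _ in range(N)]

minima = []
for _ in range(steps):
    # Extremal dynamics: the least-profitable agent readjusts, and the
    # readjustment perturbs its two neighbours as well.
    i = min(range(N), key=lambda k: profit[k])
    minima.append(profit[i])
    for j in (i - 1, i, (i + 1) % N):  # i - 1 == -1 wraps in Python
        profit[j] = random.random()

# After a transient, the selected minimum hovers below a self-organized
# threshold, as in the Bak-Sneppen model this rule resembles.
late = minima[steps // 2:]
avg_min = sum(late) / len(late)
```

Tracking which sites are updated between returns to a given threshold yields the avalanche statistics whose power-law distributions the abstract refers to.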

  15. Biophysical Modeling of Respiratory Organ Motion

    Science.gov (United States)

    Werner, René

    Methods to estimate respiratory organ motion can be divided into two groups: biophysical modeling and image registration. In image registration, motion fields are directly extracted from 4D (3D + t) image sequences, often without detailed consideration of the underlying anatomy and physiology. In contrast, biophysical approaches aim at identifying the anatomical and physiological aspects of breathing dynamics that are to be modeled. In the context of radiation therapy, biophysical modeling of respiratory organ motion commonly refers to the frameworks of continuum mechanics and elasticity theory. The underlying ideas and corresponding boundary value problems of those approaches are described in this chapter, along with a brief comparison to image registration-based motion field estimation.

  16. WholeCellSimDB: a hybrid relational/HDF database for whole-cell model predictions.

    Science.gov (United States)

    Karr, Jonathan R; Phillips, Nolan C; Covert, Markus W

    2014-01-01

    Mechanistic 'whole-cell' models are needed to develop a complete understanding of cell physiology. However, extracting biological insights from whole-cell models requires running and analyzing large numbers of simulations. We developed WholeCellSimDB, a database for organizing whole-cell simulations. WholeCellSimDB was designed to enable researchers to search simulation metadata to identify simulations for further analysis, and to quickly slice and aggregate simulation results data. In addition, WholeCellSimDB enables users to share simulations with the broader research community. The database uses a hybrid relational/hierarchical data format architecture to efficiently store and retrieve both simulation setup metadata and results data. WholeCellSimDB provides a graphical Web-based interface to search, browse, plot and export simulations; a JavaScript Object Notation (JSON) Web service to retrieve data for Web-based visualizations; a command-line interface to deposit simulations; and a Python API to retrieve data for advanced analysis. Overall, we believe WholeCellSimDB will help researchers use whole-cell models to advance basic biological science and bioengineering. Availability: http://www.wholecellsimdb.org; source code repository: http://github.com/CovertLab/WholeCellSimDB. © The Author(s) 2014. Published by Oxford University Press.
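
The hybrid relational/hierarchical idea can be sketched as follows, with an invented metadata schema (not WholeCellSimDB's actual tables) and a plain packed binary file standing in for HDF5: searchable metadata lives in the relational store, and only the bulk results matching a metadata query are read from disk.

```python
import os
import sqlite3
import struct
import tempfile

# Invented layout: relational metadata store + one binary results file per
# simulation (the binary file stands in for an HDF5 results hierarchy).
bulk_dir = tempfile.mkdtemp()
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE simulation (
    id INTEGER PRIMARY KEY, label TEXT, n_steps INTEGER, results_path TEXT)""")

def deposit(sim_id, label, series):
    # Bulk results go to packed binary; only small metadata enters SQL.
    path = os.path.join(bulk_dir, "sim_%d.bin" % sim_id)
    with open(path, "wb") as f:
        f.write(struct.pack("%dd" % len(series), *series))
    con.execute("INSERT INTO simulation VALUES (?, ?, ?, ?)",
                (sim_id, label, len(series), path))

deposit(1, "wildtype", [0.0, 0.5, 1.0])
deposit(2, "knockout", [0.0, 0.1])

# Search the metadata first, then read only the matching bulk results.
n_steps, path = con.execute(
    "SELECT n_steps, results_path FROM simulation WHERE label = 'wildtype'"
).fetchone()
with open(path, "rb") as f:
    values = struct.unpack("%dd" % n_steps, f.read())
```

The design point is the division of labour: the relational side answers "which simulations?" cheaply, while the hierarchical/binary side delivers large numeric arrays without bloating the database.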

  17. Data extraction tool and colocation database for satellite and model product evaluation (Invited)

    Science.gov (United States)

    Ansari, S.; Zhang, H.; Privette, J. L.; Del Greco, S.; Urzen, M.; Pan, Y.; Cook, R. B.; Wilson, B. E.; Wei, Y.

    2009-12-01

    The Satellite Product Evaluation Center (SPEC) is an ongoing project to integrate operational monitoring of data products from satellite and model analysis, with support for quantitative calibration, validation and algorithm improvement. The system uniquely allows scientists and others to rapidly access, subset, visualize, statistically compare and download multi-temporal data from multiple in situ, satellite, weather radar and model sources without reference to native data and metadata formats, packaging or physical location. Although still in initial development, the SPEC database and services will contain a wealth of integrated data for evaluation, validation, and discovery science activities across many different disciplines. The SPEC data extraction architecture departs from traditional dataset- and research-driven approaches through the use of standards and relational database technology. The NetCDF for Java API is used as a framework for data decoding and abstraction. The data are treated as generic feature types (such as Grid or Swath) as defined by the NetCDF Climate and Forecast (CF) metadata conventions. Colocation data for various field measurement networks, such as the Climate Reference Network (CRN) and the Ameriflux network, are extracted offline, from local disk or distributed sources. The resulting data subsets are loaded into a relational database for fast access. URL-based (Representational State Transfer, REST) web services are provided for simple database access by application programmers and scientists. SPEC supports broad NOAA, U.S. Global Change Research Program (USGCRP) and World Climate Research Programme (WCRP) initiatives, including the National Polar-orbiting Operational Environmental Satellite System (NPOESS) and NOAA’s Climate Data Record (CDR) programs. SPEC is a collaboration between NOAA’s National Climatic Data Center (NCDC) and DOE’s Oak Ridge National Laboratory (ORNL). In this presentation we will describe the data extraction

  18. S-World: A high resolution global soil database for simulation modelling (Invited)

    Science.gov (United States)

    Stoorvogel, J. J.

    2013-12-01

    There is an increasing call for high-resolution soil information at the global level. A good example of such a call is the Global Gridded Crop Model Intercomparison carried out within AgMIP. While local studies can make use of surveying techniques to collect additional data, this is practically impossible at the global level. It is therefore important to rely on legacy data like the Harmonized World Soil Database. Several efforts exist that aim at the development of global gridded soil property databases. These estimates of the variation of soil properties can be used to assess, e.g., global soil carbon stocks. However, they do not allow for simulation runs with, e.g., crop growth simulation models, as these models require a description of the entire pedon rather than a few soil properties. This study provides the required quantitative description of pedons at a 1 km resolution for simulation modelling. It uses the Harmonized World Soil Database (HWSD) for the spatial distribution of soil types, the ISRIC-WISE soil profile database to derive information on soil properties per soil type, and a range of co-variables on topography, climate, and land cover to further disaggregate the available data. The methodology aims to take stock of these available data. The soil database is developed in five main steps. Step 1: All 148 soil types are ordered on the basis of their expected topographic position using, e.g., drainage, salinization, and pedogenesis. Using this topographic ordering and combining the HWSD with a digital elevation model allows for the spatial disaggregation of the composite soil units. This results in a new soil map with homogeneous soil units. Step 2: The ranges of major soil properties for the topsoil and subsoil of each of the 148 soil types are derived from the ISRIC-WISE soil profile database. 
Step 3: A model of soil formation is developed that focuses on the basic conceptual question of where we are within the range of a particular soil property

  19. Relational database hybrid model, of high performance and storage capacity for nuclear engineering applications

    International Nuclear Information System (INIS)

    Gomes Neto, Jose

    2008-01-01

    The objective of this work is to present the relational database named FALCAO, created and implemented to support the storage of the monitored variables of the IEA-R1 research reactor, located at the Instituto de Pesquisas Energeticas e Nucleares, IPEN/CNEN-SP. The logical data model and its direct influence on the integrity of the provided information are carefully considered. The concepts and steps of normalization and denormalization, including the entities and relations involved in the logical model, are presented. The effects of the model rules on the acquisition, loading and availability of the final information are also presented from a performance perspective, since the acquisition process loads and provides large amounts of information at short intervals. The SACD application, through its functionalities, presents the information stored in the FALCAO database in a practical and optimized form. The implementation of the FALCAO database was successful, and its existence has proved considerably beneficial. It is now essential to the routine of the researchers involved, not only due to the substantial improvement of the process but also due to the reliability associated with it. (author)

  20. Quinolone antibiotics and suicidal behavior: analysis of the World Health Organization's adverse drug reactions database and discussion of potential mechanisms.

    Science.gov (United States)

    Samyde, Julie; Petit, Pierre; Hillaire-Buys, Dominique; Faillie, Jean-Luc

    2016-07-01

    Several case reports suggest that the use of quinolones may increase the risk of psychiatric adverse reactions such as suicidal behaviors. The aim of this study is to investigate whether there is a safety signal for quinolone-related suicidal behaviors in a global adverse drug reactions database. All antibiotic-related adverse reactions were extracted from VigiBase, the World Health Organization (WHO) global Individual Case Safety Report (ICSR) database. Disproportionality analyses were performed to investigate the association between reports of suicidal behavior and exposure to quinolones, in comparison with other antibiotics. From December 1970 through January 2015, we identified 992,097 antibiotic-related adverse reactions. Among them, 608 were quinolone-related suicidal behaviors, including 97 cases of completed suicide. There was increased reporting of suicidal behavior (adjusted reporting odds ratio [ROR] 2.78, 95% CI 2.51-3.08) with quinolones as compared to other antibiotics. Candidate mechanisms for quinolone-induced suicidal behaviors include GABAA antagonism, activation of NMDA receptors, decreased serotonin levels, oxidative stress, and altered microRNA expression. We found a strong safety signal suggesting an increased risk of suicidal behaviors associated with quinolone use. Plausible psychopharmacological mechanisms could underlie this association. Further investigations are urgently needed to confirm and better understand these findings.
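
The disproportionality measure used in such studies, the reporting odds ratio, is straightforward to compute from a 2x2 table of report counts. In the sketch below only the 608 quinolone case reports come from the abstract; the other three cells are invented to make the arithmetic concrete, so the resulting crude ROR will not match the adjusted ROR of 2.78 reported in the study.

```python
import math

# 2x2 report-count table for a drug class vs. comparator antibiotics.
a = 608       # quinolone reports WITH suicidal behaviour (from the abstract)
b = 120_000   # quinolone reports without (hypothetical)
c = 1_500     # other-antibiotic reports WITH suicidal behaviour (hypothetical)
d = 870_000   # other-antibiotic reports without (hypothetical)

ror = (a / b) / (c / d)                   # reporting odds ratio
se = math.sqrt(1/a + 1/b + 1/c + 1/d)     # standard error of log(ROR)
lo = math.exp(math.log(ror) - 1.96 * se)  # 95% CI lower bound
hi = math.exp(math.log(ror) + 1.96 * se)  # 95% CI upper bound

# A safety signal is conventionally flagged when the lower bound exceeds 1.
signal = lo > 1.0
```

In practice the ROR is adjusted for confounders (as in the study's 2.78 estimate) and complemented by other disproportionality measures before a signal is declared.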

  1. Database Description - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Arabidopsis Phenome Database Database Description General information of database Database n...ame Arabidopsis Phenome Database Alternative name - DOI 10.18908/lsdba.nbdc01509-000 Creator Creator Name: H... BioResource Center Hiroshi Masuya Database classification Plant databases - Arabidopsis thaliana Organism T...axonomy Name: Arabidopsis thaliana Taxonomy ID: 3702 Database description The Arabidopsis thaliana phenome i...heir effective application. We developed the new Arabidopsis Phenome Database integrating two novel database

  2. A COMPARISON STUDY FOR INTRUSION DATABASE (KDD99, NSL-KDD) BASED ON SELF ORGANIZATION MAP (SOM) ARTIFICIAL NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    LAHEEB M. IBRAHIM

    2013-02-01

    Full Text Available Detecting anomalous traffic on the internet has remained an issue of concern for the community of security researchers over the years. Advances in computing performance, in terms of processing power and storage, have fostered the ability to host resource-intensive intelligent algorithms to detect intrusive activity in a timely manner. As part of this project, we study and analyse the performance of a Self Organization Map (SOM) Artificial Neural Network when implemented as part of an Intrusion Detection System to detect anomalies on the Knowledge Discovery in Databases (KDD 99) and NSL-KDD datasets of simulated internet traffic activity. Results obtained are compared and analysed based on several performance metrics: the detection rate for the KDD 99 dataset is 92.37%, while the detection rate for the NSL-KDD dataset is 75.49%.
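
A minimal self-organizing map can be sketched in a few lines. The example below uses toy two-cluster data rather than KDD99 features, and the learning-rate and neighbourhood schedules are arbitrary choices: each input pulls its best-matching unit, and that unit's chain neighbours, toward itself.

```python
import math
import random

random.seed(3)

# Toy data: two well-separated 2-D clusters (NOT KDD99 features).
data = ([(random.gauss(0.0, 0.1), random.gauss(0.0, 0.1)) for _ in range(200)]
        + [(random.gauss(3.0, 0.1), random.gauss(3.0, 0.1)) for _ in range(200)])
# A 1-D chain of 10 map units with random initial weights.
units = [[random.uniform(0.0, 3.0), random.uniform(0.0, 3.0)] for _ in range(10)]

def dist2(u, x):
    return (u[0] - x[0]) ** 2 + (u[1] - x[1]) ** 2

for epoch in range(30):
    lr = 0.5 * (1.0 - epoch / 30.0)                # decaying learning rate
    radius = max(1.0, 3.0 * (1.0 - epoch / 30.0))  # shrinking neighbourhood
    for x in data:
        # Best-matching unit, then a Gaussian pull on its chain neighbours.
        bmu = min(range(len(units)), key=lambda i: dist2(units[i], x))
        for i, u in enumerate(units):
            h = math.exp(-((i - bmu) ** 2) / (2.0 * radius ** 2))
            u[0] += lr * h * (x[0] - u[0])
            u[1] += lr * h * (x[1] - u[1])

# Average quantization error: distance from each input to its nearest unit.
qe = sum(math.sqrt(min(dist2(u, x) for u in units)) for x in data) / len(data)
```

For anomaly detection, inputs whose quantization error greatly exceeds that of the training data are flagged as not matching any learned prototype of normal traffic.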

  3. Chess databases as a research vehicle in psychology: Modeling large data.

    Science.gov (United States)

    Vaci, Nemanja; Bilalić, Merim

    2017-08-01

    The game of chess has often been used for psychological investigations, particularly in cognitive science. The clear-cut rules and well-defined environment of chess provide a model for investigations of basic cognitive processes, such as perception, memory, and problem solving, while the precise rating system for the measurement of skill has enabled investigations of individual differences and expertise-related effects. In the present study, we focus on another appealing feature of chess-namely, the large archive databases associated with the game. The German national chess database presented in this study represents a fruitful ground for the investigation of multiple longitudinal research questions, since it collects the data of over 130,000 players and spans over 25 years. The German chess database collects the data of all players, including hobby players, and all tournaments played. This results in a rich and complete collection of the skill, age, and activity of the whole population of chess players in Germany. The database therefore complements the commonly used expertise approach in cognitive science by opening up new possibilities for the investigation of multiple factors that underlie expertise and skill acquisition. Since large datasets are not common in psychology, their introduction also raises the question of optimal and efficient statistical analysis. We offer the database for download and illustrate how it can be used by providing concrete examples and a step-by-step tutorial using different statistical analyses on a range of topics, including skill development over the lifetime, birth cohort effects, effects of activity and inactivity on skill, and gender differences.

  4. Technical report on implementation of reactor internal 3D modeling and visual database system

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yeun Seung; Eom, Young Sam; Lee, Suk Hee; Ryu, Seung Hyun [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1996-06-01

    This report describes a prototype of a reactor internals 3D modeling and visual database (VDB) system for NSSS design quality improvement. To improve NSSS design quality, several integrated computer-aided engineering systems from nations with developed nuclear industries were studied, such as Mitsubishi's NUWINGS (Japan), AECL's CANDID (Canada) and Duke Power's PASCE (USA). On the basis of these studies, the strategy for an NSSS design improvement system was extracted and the detailed work scope was implemented as follows: 3D modeling of the reactor internals was implemented using a parametric solid modeler, a prototype system for design document computerization and database was suggested, and a walk-through simulation integrated with 3D modeling and the VDB was accomplished. The major effects of an NSSS design quality improvement system using 3D modeling and a VDB are plant design optimization by simulation, improved reliability through a single design database system, and engineering cost reduction through improved productivity and efficiency. For applying the VDB to the full scope of NSSS system design, 3D models of the reactor coolant system and of the nuclear fuel assembly and fuel rod are attached as appendices. 2 tabs., 31 figs., 7 refs. (Author)

  5. Modeling and Design of Capacitive Micromachined Ultrasonic Transducers Based-on Database Optimization

    International Nuclear Information System (INIS)

    Chang, M W; Gwo, T J; Deng, T M; Chang, H C

    2006-01-01

    A capacitive micromachined ultrasonic transducer (CMUT) simulation database, based on electromechanical coupling theory, has been fully developed for versatile capacitive microtransducer design and analysis. Both arithmetic and graphic configurations are used to find optimal parameters based on serial coupling simulations. The key modeling parameters identified can effectively improve a microtransducer's characteristics and reliability. This method can be used to reduce design time and fabrication cost by eliminating trial-and-error procedures. Various microtransducers with optimized characteristics can be developed economically using the developed database. A simulation to design an ultrasonic microtransducer is completed as a worked example. The dependent relationship between membrane geometry, vibration displacement and output response is demonstrated. The electromechanical coupling effects, mechanical impedance and frequency response are also taken into consideration for optimal microstructures. The microdevice parameters with the best output signal response are predicted, and microfabrication processing constraints and realities are also taken into consideration

  6. Design of Cognitive Radio Database using Terrain Maps and Validated Propagation Models

    Directory of Open Access Journals (Sweden)

    Anwar Mohamed Fanan

    2017-09-01

    Full Text Available Cognitive Radio (CR) encompasses a number of technologies which enable adaptive self-programing of systems at different levels to provide more effective use of the increasingly congested radio spectrum. CRs have the potential to use spectrum allocated to TV services, which is not used by the primary user (TV), without causing disruptive interference to licensed users, by using appropriate propagation modelling in TV White Spaces (TVWS). In this paper we address two related aspects of channel occupancy prediction for cognitive radio. Firstly, we continue to investigate the best propagation model among three propagation models (Extended-Hata, Davidson-Hata and Egli) for use in the TV band, whilst also finding the optimum terrain data resolution to use (1000, 100 or 30 m). We compare modelled results with measurements taken in randomly-selected locations around Hull, UK, using the two comparison criteria of implementation time and accuracy, when used for predicting TVWS system performance. Secondly, we describe how such models can be integrated into a database-driven tool for CR channel selection within the TVWS environment by creating a flexible simulation system for creating a TVWS database.

  7. Performance of a TV white space database with different terrain resolutions and propagation models

    Directory of Open Access Journals (Sweden)

    A. M. Fanan

    2017-11-01

    Full Text Available Cognitive Radio has now become a realistic option for the solution of the spectrum scarcity problem in wireless communication. TV channels (the primary user) can be protected from secondary-user interference by accurate prediction of TV White Spaces (TVWS) by using appropriate propagation modelling. In this paper we address two related aspects of channel occupancy prediction for cognitive radio. Firstly, we investigate the best combination of empirical propagation model and spatial resolution of terrain data for predicting TVWS by examining the performance of three propagation models (Extended-Hata, Davidson-Hata and Egli) in the TV band 470 to 790 MHz, along with terrain data resolutions of 1000, 100 and 30 m, when compared with a comprehensive set of propagation measurements taken in randomly-selected locations around Hull, UK. Secondly, we describe how such models can be integrated into a database-driven tool for cognitive radio channel selection within the TVWS environment.
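Of the three empirical models compared above, Egli is the simplest to state in closed form, which makes it a convenient illustration of what such a propagation model computes. The constants below follow one commonly cited statement of the Egli median path-loss equation (frequency in MHz, distance in km, antenna heights in metres); treat them as an assumption rather than the authors' exact implementation.

```python
import math

def egli_path_loss(f_mhz, d_km, h_b, h_m):
    """Median path loss (dB) for the Egli model, in a commonly cited form.

    f_mhz : frequency in MHz (the TV band studied here is roughly 470-790 MHz)
    d_km  : link distance in km
    h_b   : base/transmitter antenna height in metres
    h_m   : mobile/receiver antenna height in metres
    """
    loss = (20 * math.log10(f_mhz) + 40 * math.log10(d_km)
            - 20 * math.log10(h_b))
    # The receiver-height correction changes form at 10 m.
    if h_m <= 10:
        loss += 76.3 - 10 * math.log10(h_m)
    else:
        loss += 85.9 - 20 * math.log10(h_m)
    return loss
```

Doubling the distance adds about 12 dB (40 log10 2), reflecting the fourth-power distance dependence the Egli model assumes.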

  8. Experiment Databases

    Science.gov (United States)

    Vanschoren, Joaquin; Blockeel, Hendrik

    Next to running machine learning algorithms based on inductive queries, much can be learned by immediately querying the combined results of many prior studies. Indeed, all around the globe, thousands of machine learning experiments are being executed on a daily basis, generating a constant stream of empirical information on machine learning techniques. While the information contained in these experiments might have many uses beyond their original intent, results are typically described very concisely in papers and discarded afterwards. If we properly store and organize these results in central databases, they can be immediately reused for further analysis, thus boosting future research. In this chapter, we propose the use of experiment databases: databases designed to collect all the necessary details of these experiments, and to intelligently organize them in online repositories to enable fast and thorough analysis of a myriad of collected results. They constitute an additional, queriable source of empirical meta-data based on principled descriptions of algorithm executions, without reimplementing the algorithms in an inductive database. As such, they engender a very dynamic, collaborative approach to experimentation, in which experiments can be freely shared, linked together, and immediately reused by researchers all over the world. They can be set up for personal use, to share results within a lab or to create open, community-wide repositories. Here, we provide a high-level overview of their design, and use an existing experiment database to answer various interesting research questions about machine learning algorithms and to verify a number of recent studies.

  9. The Fluka Linebuilder and Element Database: Tools for Building Complex Models of Accelerators Beam Lines

    CERN Document Server

    Mereghetti, A; Cerutti, F; Versaci, R; Vlachoudis, V

    2012-01-01

    Extended FLUKA models of accelerator beam lines can be extremely complex: cumbersome to manipulate, poorly versatile and prone to mismatched positioning. We developed a framework capable of creating the FLUKA model of an arbitrary portion of a given accelerator, starting from the optics configuration and a small amount of other information provided by the user. The framework includes a builder (LineBuilder), an element database and a series of configuration and analysis scripts. The LineBuilder is a Python program aimed at dynamically assembling complex FLUKA models of accelerator beam lines: positions, magnetic fields and scorings are automatically set up, and geometry details such as apertures of collimators, tilting and misalignment of elements, beam pipes and tunnel geometries can be entered at the user's will. The element database (FEDB) is a collection of detailed FLUKA geometry models of machine elements. This framework has been widely used for recent LHC and SPS beam-machine interaction studies at CERN, and led to a dra...

  10. Development of fauna, micro flora and aquatic organisms database at the vicinity of Gamma Green House in Malaysian Nuclear Agency

    International Nuclear Information System (INIS)

    Nur Humaira Lau Abdullah; Mohd Zaidan Kandar; Phua Choo Kwai Hoe

    2012-01-01

    The biodiversity database of non-human biota, consisting of flora, fauna, aquatic organisms and micro flora at the vicinity of the Gamma Greenhouse (GGH) in the Malaysian Nuclear Agency, is under development. In 2011, a workshop on biodiversity and the sampling of flora and fauna by local experts was conducted in BAB to provide the necessary knowledge to all those involved in this study. Since then, several field surveys have been successfully carried out covering terrestrial and aquatic ecosystems, in order to observe species distribution patterns and to collect non-human biota samples. The surveys were conducted according to standard survey procedures, and the samples collected were preserved and identified using appropriate techniques. In this paper, the work on fauna, micro flora and aquatic organisms is presented. The fauna and micro flora specimens were kept in the Biodiversity Laboratory in Block 44. Based on those field surveys, several species of terrestrial vertebrate and invertebrate organisms were spotted, and a diverse group of mushrooms was found at the study site. The presence of aquatic zooplankton (for example Cyclops and Nauplius), phytoplankton and bacteria (for example Klebsiella sp., Enterobacter sp. and others) in the nearby pond indicates that the pond ecosystem is in good condition. Through this study, a preliminary biodiversity list of fauna at the vicinity of the nuclear facility, GGH, has been developed, and the work will continue towards complete baseline data development. In addition, many of the principles and methodologies used in ecological surveys have been learnt and applied, but the skills involved still need to be polished through workshops, collaboration and consultation with local experts. Thus far, several agencies have been approached for collaboration and consultation, such as Institut Perikanan Malaysia, UKM, UPM and UMT. (author)

  11. Emissions databases for polycyclic aromatic compounds in the Canadian Athabasca oil sands region - development using current knowledge and evaluation with passive sampling and air dispersion modelling data

    Science.gov (United States)

    Qiu, Xin; Cheng, Irene; Yang, Fuquan; Horb, Erin; Zhang, Leiming; Harner, Tom

    2018-03-01

    Two speciated and spatially resolved emissions databases for polycyclic aromatic compounds (PACs) in the Athabasca oil sands region (AOSR) were developed. The first database was derived from volatile organic compound (VOC) emissions data provided by the Cumulative Environmental Management Association (CEMA) and the second database was derived from additional data collected within the Joint Canada-Alberta Oil Sands Monitoring (JOSM) program. CALPUFF modelling results for atmospheric polycyclic aromatic hydrocarbons (PAHs), alkylated PAHs, and dibenzothiophenes (DBTs), obtained using each of the emissions databases, are presented and compared with measurements from a passive air monitoring network. The JOSM-derived emissions resulted in better model-measurement agreement in the total PAH concentrations and for most PAH species concentrations compared to results using CEMA-derived emissions. At local sites near oil sands mines, the percent error of the model compared to observations decreased from 30 % using the CEMA-derived emissions to 17 % using the JOSM-derived emissions. The improvement at local sites was likely attributed to the inclusion of updated tailings pond emissions estimated from JOSM activities. In either the CEMA-derived or JOSM-derived emissions scenario, the model underestimated PAH concentrations by a factor of 3 at remote locations. Potential reasons for the disagreement include forest fire emissions, re-emissions of previously deposited PAHs, and long-range transport not considered in the model. Alkylated PAH and DBT concentrations were also significantly underestimated. The CALPUFF model is expected to predict higher concentrations because of the limited chemistry and deposition modelling. Thus the model underestimation of PACs is likely due to gaps in the emissions database for these compounds and uncertainties in the methodology for estimating the emissions. Future work is required that focuses on improving the PAC emissions estimation and

  12. An expression database for roots of the model legume Medicago truncatula under salt stress

    Directory of Open Access Journals (Sweden)

    Dong Jiangli

    2009-11-01

    Full Text Available Abstract Background: Medicago truncatula is a model legume whose genome is currently being sequenced by an international consortium. Abiotic stresses such as salt stress limit plant growth and crop productivity, including those of legumes. We anticipate that studies on M. truncatula will shed light on other economically important legumes across the world. Here, we report the development of a database called MtED that contains gene expression profiles of the roots of M. truncatula based on time-course salt stress experiments using the Affymetrix Medicago GeneChip. Our hope is that MtED will provide information to assist in improving abiotic stress resistance in legumes. Description: The results of our microarray experiment with roots of M. truncatula under 180 mM sodium chloride were deposited in the MtED database. Additionally, sequence and annotation information regarding microarray probe sets were included. MtED provides functional category analysis based on Gene and GeneBins Ontology, and other Web-based tools for querying and retrieving query results, browsing pathways and transcription factor families, showing metabolic maps, and comparing and visualizing expression profiles. Utilities such as mapping probe sets to the M. truncatula genome and in-silico PCR were implemented with the BLAT software suite and are also available through the MtED database. Conclusion: MtED was built in the PHP script language and as a MySQL relational database system on a Linux server. It has an integrated Web interface, which facilitates ready examination and interpretation of the results of microarray experiments. It is intended to help in selecting gene markers to improve abiotic stress resistance in legumes. MtED is available at http://bioinformatics.cau.edu.cn/MtED/.

  13. Solubility Database

    Science.gov (United States)

    SRD 106 IUPAC-NIST Solubility Database (Web, free access)   These solubilities are compiled from 18 volumes of the International Union for Pure and Applied Chemistry(IUPAC)-NIST Solubility Data Series. The database includes liquid-liquid, solid-liquid, and gas-liquid systems. Typical solvents and solutes include water, seawater, heavy water, inorganic compounds, and a variety of organic compounds such as hydrocarbons, halogenated hydrocarbons, alcohols, acids, esters and nitrogen compounds. There are over 67,500 solubility measurements and over 1800 references.

  14. Analysis of isotropic turbulence using a public database and the Web service model, and applications to study subgrid models

    Science.gov (United States)

    Meneveau, Charles; Yang, Yunke; Perlman, Eric; Wan, Minpin; Burns, Randal; Szalay, Alex; Chen, Shiyi; Eyink, Gregory

    2008-11-01

    A public database system archiving a direct numerical simulation (DNS) data set of isotropic, forced turbulence is used for studying basic turbulence dynamics. The data set consists of the DNS output on 1024-cubed spatial points and 1024 time-samples spanning about one large-scale turn-over timescale. This complete space-time history of turbulence is accessible to users remotely through an interface that is based on the Web-services model (see http://turbulence.pha.jhu.edu). Users may write and execute analysis programs on their host computers, while the programs make subroutine-like calls that request desired parts of the data over the network. The architecture of the database is briefly explained, as are some of the new functions such as Lagrangian particle tracking and spatial box-filtering. These tools are used to evaluate and compare subgrid stresses and models.

  15. Dynamic Model for Hydro-Turbine Generator Units Based on a Database Method for Guide Bearings

    Directory of Open Access Journals (Sweden)

    Yong Xu

    2013-01-01

    Full Text Available A suitable dynamic model of a rotor system is of great significance not only for supplying knowledge of the fault mechanism, but also for assisting machine health monitoring research. Many techniques have been developed for properly modeling the radial vibration of large hydro-turbine generator units. However, an applicable dynamic model has not yet been reported in the literature, due to the complexity of the boundary conditions and exciting forces. In this paper, a finite element (FE) rotor dynamic model of radial vibration taking account of operating conditions is proposed. A brief and practical database method is employed to model the guide bearing. Taking advantage of this method, rotating speed and bearing clearance can be considered in the model. A novel algorithm, which can handle both transient and steady-state analysis, is proposed to solve the model. The dynamic response of the rotor model of 125 MW hydro-turbine generator units in Gezhouba Power Station is simulated. Field data from the Optimal Maintenance Information System for Hydro power plants (HOMIS) are analyzed and compared with the simulation. Results illustrate the application value of the model in providing knowledge of the fault mechanism and in failure diagnosis.

  16. Virtuous organization: A structural equation modeling approach

    Directory of Open Access Journals (Sweden)

    Majid Zamahani

    2013-02-01

    Full Text Available For years, the idea of virtue was unfavorable among researchers and virtues were traditionally considered as culture-specific, relativistic and they were supposed to be associated with social conservatism, religious or moral dogmatism, and scientific irrelevance. Virtue and virtuousness have been recently considered seriously among organizational researchers. The proposed study of this paper examines the relationships between leadership, organizational culture, human resource, structure and processes, care for community and virtuous organization. Structural equation modeling is employed to investigate the effects of each variable on other components. The data used in this study consists of questionnaire responses from employees in Payam e Noor University in Yazd province. A total of 250 questionnaires were sent out and a total of 211 valid responses were received. Our results have revealed that all the five variables have positive and significant impacts on virtuous organization. Among the five variables, organizational culture has the most direct impact (0.80) and human resource has the most total impact (0.844) on virtuous organization.

  17. Object-Oriented Database for Managing Building Modeling Components and Metadata: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Long, N.; Fleming, K.; Brackney, L.

    2011-12-01

    Building simulation enables users to explore and evaluate multiple building designs. When tools for optimization, parametrics, and uncertainty analysis are combined with analysis engines, the sheer number of discrete simulation datasets makes it difficult to keep track of the inputs. The integrity of the input data is critical to designers, engineers, and researchers for code compliance, validation, and building commissioning long after the simulations are finished. This paper discusses an application that stores inputs needed for building energy modeling in a searchable, indexable, flexible, and scalable database to help address the problem of managing simulation input data.

  18. Soil profile organic carbon prediction with Visible Near Infrared Reflectance spectroscopy based on a national database

    DEFF Research Database (Denmark)

    Deng, Fan; Knadel, Maria; Peng, Yi

    This study focuses on the application of the Danish national soil Visible Near Infrared Reflectance spectroscopy (NIRs) database for predicting SOC in a field. The Conditioned Latin hypercube sampling (cLHS) method was used for the selection of 120 soil profiles based on DualEM21s and DEM data (elevation, slope, profile curvature). All the soil profile cores were taken by a 1 m long hydraulic auger with plastic liners inside. A Labspec 5100 equipped with a contact probe was used to acquire spectra (350-2500 nm) in each 5 cm depth interval. The results show that after the removal of moisture effects using an external parameter orthogonalisation algorithm, most of the spectra collected at field moisture content can be projected onto the national spectral library. Moreover, the prediction of SOC improved compared to the model based on absorbance spectra.

  19. Conception and development of a bibliographic database of blood nutrient fluxes across organs and tissues in ruminants: data gathering and management prior to meta-analysis.

    Science.gov (United States)

    Vernet, Jean; Ortigues-Marty, Isabelle

    2006-01-01

    In the organism, nutrient exchanges among tissues and organs are subject to numerous sources of physiological or nutritional variation, and the contribution of individual factors needs to be quantified before establishing general response laws. To achieve this, meta-analysis of data from publications is a useful tool. The objective of this work was to develop a bibliographic database of nutrient fluxes across organs and tissues of ruminant animals (Flora) under Access using the Merise method. The most important criteria for Flora were the ease of relating the various pieces of information, the exhaustiveness and accuracy of the data input, a complete description of the diets, taking into account the methodological procedures for the measurement and analysis of blood nutrients, and the traceability of the information. The conceptual data model was built in 6 parts. The first part describes the authors and source of each publication, and the person in charge of data input. It clearly separates and identifies the experiments, the groups of animals and the treatments within a publication. The second part is concerned with feeds, diets and their chemical composition and nutritional value. The third and fourth parts describe the infusion of any substrates and the methods employed, respectively. The fifth part is devoted to the results of blood flows and nutrient fluxes. The sixth part gathers miscellaneous experimental information. All these parts are inter-connected. To model this database, the Merise method was utilised and 26 entities and 32 relationships were created. At the physical level, 93 tables were created, corresponding, for the majority, to entities and relationships of the data model. They were divided into reference tables (n = 65) and data tables (n = 28). Data processing was developed in Flora and included the control of the data, generic calculations of unknown data from given data, the automation of the estimation of the missing data or the chemical

  20. A Model-driven Role-based Access Control for SQL Databases

    Directory of Open Access Journals (Sweden)

    Raimundas Matulevičius

    2015-07-01

    Full Text Available Nowadays security has become an important aspect in information systems engineering. A mainstream method for information system security is Role-based Access Control (RBAC), which restricts system access to authorised users. While the benefits of RBAC are widely acknowledged, the implementation and administration of RBAC policies remain a human-intensive activity, typically postponed until the implementation and maintenance phases of system development. This deferred security engineering approach makes it difficult for security requirements to be accurately captured and for the system’s implementation to be kept aligned with these requirements as the system evolves. In this paper we propose a model-driven approach to manage SQL database access under the RBAC paradigm. The starting point of the approach is an RBAC model captured in SecureUML. This model is automatically translated to Oracle Database views and instead-of triggers code, which implements the security constraints. The approach has been fully instrumented as a prototype and its effectiveness has been validated by means of a case study.
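The model-driven idea above, an access-control model mechanically translated into database code, can be sketched in miniature. The toy generator below turns a hypothetical role-permission model into view and grant DDL strings; the real approach starts from SecureUML and emits Oracle views plus instead-of triggers, so the model format and all names here are purely illustrative assumptions.

```python
# Hypothetical RBAC model: role -> permitted table, columns and actions.
rbac_model = {
    "clerk":   {"table": "orders", "columns": ["id", "status"], "actions": {"SELECT"}},
    "manager": {"table": "orders", "columns": ["*"],            "actions": {"SELECT", "UPDATE"}},
}

def generate_view_ddl(role, perm):
    """Mechanically translate one role's permission entry into SQL DDL text.

    One view per role: column filtering in the view body encodes the RBAC
    constraint, and GRANT statements restrict the allowed actions.
    """
    cols = ", ".join(perm["columns"])
    ddl = (f"CREATE VIEW {perm['table']}_{role}_v AS\n"
           f"  SELECT {cols} FROM {perm['table']};")
    grants = [f"GRANT {a} ON {perm['table']}_{role}_v TO {role};"
              for a in sorted(perm["actions"])]
    return "\n".join([ddl] + grants)

for role, perm in rbac_model.items():
    print(generate_view_ddl(role, perm))
```

Because the DDL is derived from the model rather than written by hand, regenerating it after a model change keeps the database's security constraints aligned with the requirements, which is the main point of the deferred-engineering critique in the abstract.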

  1. Emissions databases for polycyclic aromatic compounds in the Canadian Athabasca oil sands region – development using current knowledge and evaluation with passive sampling and air dispersion modelling data

    Directory of Open Access Journals (Sweden)

    X. Qiu

    2018-03-01

    Full Text Available Two speciated and spatially resolved emissions databases for polycyclic aromatic compounds (PACs) in the Athabasca oil sands region (AOSR) were developed. The first database was derived from volatile organic compound (VOC) emissions data provided by the Cumulative Environmental Management Association (CEMA) and the second database was derived from additional data collected within the Joint Canada–Alberta Oil Sands Monitoring (JOSM) program. CALPUFF modelling results for atmospheric polycyclic aromatic hydrocarbons (PAHs), alkylated PAHs, and dibenzothiophenes (DBTs), obtained using each of the emissions databases, are presented and compared with measurements from a passive air monitoring network. The JOSM-derived emissions resulted in better model–measurement agreement in the total PAH concentrations and for most PAH species concentrations compared to results using CEMA-derived emissions. At local sites near oil sands mines, the percent error of the model compared to observations decreased from 30 % using the CEMA-derived emissions to 17 % using the JOSM-derived emissions. The improvement at local sites was likely attributed to the inclusion of updated tailings pond emissions estimated from JOSM activities. In either the CEMA-derived or JOSM-derived emissions scenario, the model underestimated PAH concentrations by a factor of 3 at remote locations. Potential reasons for the disagreement include forest fire emissions, re-emissions of previously deposited PAHs, and long-range transport not considered in the model. Alkylated PAH and DBT concentrations were also significantly underestimated. The CALPUFF model is expected to predict higher concentrations because of the limited chemistry and deposition modelling. Thus the model underestimation of PACs is likely due to gaps in the emissions database for these compounds and uncertainties in the methodology for estimating the emissions. Future work is required that focuses on improving the PAC

  2. Prediction model of potential hepatocarcinogenicity of rat hepatocarcinogens using a large-scale toxicogenomics database

    International Nuclear Information System (INIS)

    Uehara, Takeki; Minowa, Yohsuke; Morikawa, Yuji; Kondo, Chiaki; Maruyama, Toshiyuki; Kato, Ikuo; Nakatsu, Noriyuki; Igarashi, Yoshinobu; Ono, Atsushi; Hayashi, Hitomi; Mitsumori, Kunitoshi; Yamada, Hiroshi; Ohno, Yasuo; Urushidani, Tetsuro

    2011-01-01

    The present study was performed to develop a robust gene-based prediction model for early assessment of potential hepatocarcinogenicity of chemicals in rats by using our toxicogenomics database, TG-GATEs (Genomics-Assisted Toxicity Evaluation System developed by the Toxicogenomics Project in Japan). The positive training set consisted of high- or middle-dose groups that received 6 different non-genotoxic hepatocarcinogens during a 28-day period. The negative training set consisted of high- or middle-dose groups of 54 non-carcinogens. Support vector machine combined with wrapper-type gene selection algorithms was used for modeling. Consequently, our best classifier yielded prediction accuracies for hepatocarcinogenicity of 99% sensitivity and 97% specificity in the training data set, and false positive prediction was almost completely eliminated. Pathway analysis of feature genes revealed that the mitogen-activated protein kinase p38- and phosphatidylinositol-3-kinase-centered interactome and the v-myc myelocytomatosis viral oncogene homolog-centered interactome were the 2 most significant networks. The usefulness and robustness of our predictor were further confirmed in an independent validation data set obtained from the public database. Interestingly, similar positive predictions were obtained in several genotoxic hepatocarcinogens as well as non-genotoxic hepatocarcinogens. These results indicate that the expression profiles of our newly selected candidate biomarker genes might be common characteristics in the early stage of carcinogenesis for both genotoxic and non-genotoxic carcinogens in the rat liver. Our toxicogenomic model might be useful for the prospective screening of hepatocarcinogenicity of compounds and prioritization of compounds for carcinogenicity testing. Highlights: We developed a toxicogenomic model to predict the hepatocarcinogenicity of chemicals; the optimized model, consisting of 9 probes, had 99% sensitivity and 97% specificity.
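The wrapper-type gene selection the authors combine with their support vector machine can be illustrated schematically. The sketch below substitutes a simple nearest-centroid classifier and synthetic "expression" data for the real SVM and TG-GATEs profiles, so everything here (data, feature count, stopping rule) is an illustrative assumption; only the wrapper idea, scoring feature subsets by cross-validated classifier accuracy, mirrors the paper.

```python
import random

random.seed(1)

# Synthetic "expression" data: feature 0 separates the two classes,
# features 1-4 are noise (stand-ins for uninformative probes).
def sample(label):
    x = [random.gauss(3.0 if label else 0.0, 0.5)]   # informative feature
    x += [random.gauss(0.0, 1.0) for _ in range(4)]  # noise features
    return x

X = [sample(0) for _ in range(20)] + [sample(1) for _ in range(20)]
y = [0] * 20 + [1] * 20

def loo_accuracy(features):
    """Leave-one-out accuracy of a nearest-centroid classifier on a feature subset."""
    correct = 0
    for i in range(len(X)):
        cents = {}
        for c in (0, 1):
            rows = [X[j] for j in range(len(X)) if j != i and y[j] == c]
            cents[c] = [sum(r[f] for r in rows) / len(rows) for f in features]
        xi = [X[i][f] for f in features]
        pred = min((0, 1), key=lambda c: sum((a - b) ** 2 for a, b in zip(xi, cents[c])))
        correct += (pred == y[i])
    return correct / len(X)

# Wrapper-type greedy forward selection: repeatedly add the feature that
# most improves the classifier's cross-validated accuracy, stopping when
# no addition helps.
selected, remaining = [], list(range(5))
while remaining:
    best = max(remaining, key=lambda f: loo_accuracy(selected + [f]))
    if selected and loo_accuracy(selected + [best]) <= loo_accuracy(selected):
        break
    selected.append(best)
    remaining.remove(best)
```

On this toy data the informative feature is selected first and the noise features are rejected, which is the behaviour a wrapper method is chosen for: the classifier itself, not a univariate statistic, decides which features survive.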

  3. A parallel model for SQL astronomical databases based on solid state storage. Application to the Gaia Archive PostgreSQL database

    Science.gov (United States)

    González-Núñez, J.; Gutiérrez-Sánchez, R.; Salgado, J.; Segovia, J. C.; Merín, B.; Aguado-Agelet, F.

    2017-07-01

    Query planning and optimisation algorithms in most popular relational databases were developed at a time when hard disk drives were the only storage technology available. The advent of devices with higher parallel random access capacity, such as solid state disks, opens up the way for intra-machine parallel computing over large datasets. We describe a two-phase parallel model for the implementation of heavy analytical processes in single-instance PostgreSQL astronomical databases. This model is particularised to two frequent astronomical problems: density maps and crossmatch computation with Quad Tree Cube (Q3C) indexes. They are implemented as part of the relational database infrastructure for the Gaia Archive, and performance is assessed. A speedup factor of 28.40 relative to sequential execution is observed in the reference implementation of a histogram computation. Speedup ratios of 3.7 and 4.0 are attained for the reference positional crossmatches considered. We observe large performance enhancements over sequential execution for both CPU- and disk-access-intensive computations, suggesting these methods might be useful with the growing data volumes in Astronomy.
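The two-phase pattern described above (parallel workers each compute a partial result over a disjoint slice, then the partials are merged) can be sketched outside PostgreSQL as well. This toy version histograms synthetic "magnitude" values with thread workers; it illustrates the phase structure only and is not the paper's SQL implementation.

```python
import random
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

random.seed(2)

# Toy catalogue: one "magnitude" value per source.
values = [random.uniform(10, 20) for _ in range(100_000)]

def partial_histogram(chunk, bin_width=0.5):
    """Phase 1: each worker bins its own disjoint slice of the data."""
    h = Counter()
    for v in chunk:
        h[int(v // bin_width)] += 1
    return h

def parallel_histogram(data, workers=4):
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(partial_histogram, chunks))
    # Phase 2: merge the per-worker partial histograms; because bin counts
    # are independent sums, merging is a cheap element-wise addition.
    merged = Counter()
    for p in partials:
        merged.update(p)
    return merged

hist = parallel_histogram(values)
```

The merge step is what makes the approach attractive for density maps: phase 1 dominates the cost and parallelises freely, while phase 2 touches only one small record per bin.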

  4. An open source web interface for linking models to infrastructure system databases

    Science.gov (United States)

    Knox, S.; Mohamed, K.; Harou, J. J.; Rheinheimer, D. E.; Medellin-Azuara, J.; Meier, P.; Tilmant, A.; Rosenberg, D. E.

    2016-12-01

    Models of networked engineered resource systems, such as water or energy systems, are often built collaboratively by developers from different domains working at different locations. These models can be linked to large-scale real-world databases, and they are constantly being improved and extended. As the development and application of these models becomes more sophisticated, and the computing power required for simulations and/or optimisations increases, so too has the need grown for online services and tools which enable the efficient development and deployment of these models. Hydra Platform is an open source, web-based data management system which allows modellers of network-based models to remotely store network topology and associated data in a generalised manner, allowing it to serve multiple disciplines. Hydra Platform exposes a web API using JSON to allow external programs (referred to as 'Apps') to interact with its stored networks and perform actions such as importing data, running models, or exporting the networks to different formats. Hydra Platform supports multiple users accessing the same network and has a suite of functions for managing users and data. We present ongoing development in Hydra Platform, the Hydra Web User Interface, through which users can collaboratively manage network data and models in a web browser. The web interface allows multiple users to graphically access, edit and share their networks, run apps and view results. Through apps, which are located on the server, the web interface can give users access to external data sources and models without the need to install or configure any software. This also ensures model results can be reproduced by removing platform or version dependence. Managing data and deploying models via the web interface provides a way for multiple modellers to collaboratively manage data, deploy and monitor model runs, and analyse results.

  5. Modeling global persistent organic chemicals in clouds

    Science.gov (United States)

    Mao, Xiaoxuan; Gao, Hong; Huang, Tao; Zhang, Lisheng; Ma, Jianmin

    2014-10-01

    A cloud model was implemented in a global atmospheric transport model to simulate cloud liquid water content and quantify the influence of clouds on gas/aqueous phase partitioning of persistent organic chemicals (POCs). Partitioning fractions of the gas/aqueous and particle phases in clouds were estimated for three POCs, α-hexachlorocyclohexane (α-HCH), polychlorinated biphenyl-28 (PCB-28), and PCB-138, in a cloudy atmosphere. Results show that the partition fractions of these selected chemicals depend on cloud liquid water content (LWC) and air temperature. We calculated the global distribution of water droplet/ice particle-air partitioning coefficients of the three chemicals in clouds. The partition fractions at selected model grids in the Northern Hemisphere show that α-HCH, a hydrophilic chemical, is sorbed strongly onto cloud water droplets. The computed partition fractions at four selected model grids show that α-HCH tends to be sorbed onto clouds over land (source region) from summer to early fall, and over ocean from late spring to early fall. 20-60% of α-HCH can be sorbed to cloud water over mid-latitude oceans during summer days. PCB-138, a hydrophobic POC, on the other hand, tends to be sorbed to particles in the atmosphere depending on air temperature. We also show that, on seasonal or annual average, 10-20% of averaged PCB-28 over the Northern Hemisphere could be sorbed onto clouds, leading to a reduction of its gas-phase concentration in the atmosphere.
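
    The dependence of the aqueous fraction on LWC and temperature can be sketched with a simple equilibrium (Henry's law) partitioning calculation. All numerical values below are illustrative assumptions, not the paper's inputs, and particle sorption is ignored.

```python
R = 8.314  # gas constant, J mol-1 K-1

def k_aw_from_henry(h_pa_m3_per_mol, temp_k):
    """Dimensionless air-water partition coefficient (C_air / C_water)
    from a Henry's law constant expressed in Pa m3 mol-1."""
    return h_pa_m3_per_mol / (R * temp_k)

def aqueous_fraction(lwc, k_aw):
    """Equilibrium fraction of the (non-particle) chemical held in cloud
    water, for a volumetric liquid water content lwc (m3 water / m3 air)."""
    return lwc / (lwc + k_aw)

# Illustrative values only: a fairly soluble, alpha-HCH-like chemical
# (assumed Henry constant 0.5 Pa m3/mol) in a cool cloud at 283 K with
# an assumed LWC of 0.5 g of water per m3 of air (= 5e-7 m3/m3).
k = k_aw_from_henry(0.5, 283.0)
f = aqueous_fraction(5e-7, k)
```

    Wetter clouds (larger LWC) and more soluble chemicals (smaller air-water coefficient) both push the fraction toward the aqueous phase, which is the qualitative behaviour the abstract reports.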

  6. Biomine: predicting links between biological entities using network models of heterogeneous databases

    Directory of Open Access Journals (Sweden)

    Eronen Lauri

    2012-06-01

    Full Text Available Abstract Background Biological databases contain large amounts of data concerning the functions and associations of genes and proteins. Integration of data from several such databases into a single repository can aid the discovery of previously unknown connections spanning multiple types of relationships and databases. Results Biomine is a system that integrates cross-references from several biological databases into a graph model with multiple types of edges, such as protein interactions, gene-disease associations and gene ontology annotations. Edges are weighted based on their type, reliability, and informativeness. We present Biomine and evaluate its performance in link prediction, where the goal is to predict pairs of nodes that will be connected in the future, based on current data. In particular, we formulate protein interaction prediction and disease gene prioritization tasks as instances of link prediction. The predictions are based on a proximity measure computed on the integrated graph. We consider and experiment with several such measures, and perform a parameter optimization procedure where different edge types are weighted to optimize link prediction accuracy. We also propose a novel method for disease-gene prioritization, defined as finding a subset of candidate genes that cluster together in the graph. We experimentally evaluate Biomine by predicting future annotations in the source databases and prioritizing lists of putative disease genes. Conclusions The experimental results show that Biomine has strong potential for predicting links when a set of selected candidate links is available. The predictions obtained using the entire Biomine dataset are shown to clearly outperform ones obtained using any single source of data alone, when different types of links are suitably weighted. In the gene prioritization task, an established reference set of disease-associated genes is useful, but the results show that under favorable
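
    One simple proximity measure of the kind the abstract describes can be sketched as follows: treat edge weights as probabilities in (0, 1] and score a node pair by the maximum product of weights over any connecting path, computed by running Dijkstra on -log(weight). Biomine combines several such measures; this is only an illustrative one, on an invented toy graph.

```python
import heapq
from math import exp, log

def best_path_probability(graph, source, target):
    """Maximum product of edge weights over any path from source to
    target, found as a shortest path in -log(weight) space.
    graph: dict mapping node -> iterable of (neighbour, weight)."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        if u == target:
            return exp(-d)          # convert summed -log weights back
        for v, w in graph.get(u, ()):
            nd = d - log(w)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return 0.0                      # no connecting path

# Toy integrated graph: the gene-protein-disease path (0.5 * 0.5 = 0.25)
# beats the weak direct gene-disease edge (0.2).
g = {
    "geneA": [("protA", 0.5), ("disease1", 0.2)],
    "protA": [("disease1", 0.5)],
}
```

    In link prediction, candidate node pairs would be ranked by this score; weighting different edge types differently (as Biomine's parameter optimization does) simply changes the weights fed into the same computation.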

  7. The Mouse Tumor Biology Database: A Comprehensive Resource for Mouse Models of Human Cancer.

    Science.gov (United States)

    Krupke, Debra M; Begley, Dale A; Sundberg, John P; Richardson, Joel E; Neuhauser, Steven B; Bult, Carol J

    2017-11-01

    Research using laboratory mice has led to fundamental insights into the molecular genetic processes that govern cancer initiation, progression, and treatment response. Although thousands of scientific articles have been published about mouse models of human cancer, collating information and data for a specific model is hampered by the fact that many authors do not adhere to existing annotation standards when describing models. The interpretation of experimental results in mouse models can also be confounded when researchers do not factor in the effect of genetic background on tumor biology. The Mouse Tumor Biology (MTB) database is an expertly curated, comprehensive compendium of mouse models of human cancer. Through the enforcement of nomenclature and related annotation standards, MTB supports aggregation of data about a cancer model from diverse sources and assessment of how the genetic background of a mouse strain influences the biological properties of a specific tumor type and model utility. Cancer Res; 77(21); e67-70. ©2017 American Association for Cancer Research.

  8. Web application and database modeling of traffic impact analysis using Google Maps

    Science.gov (United States)

    Yulianto, Budi; Setiono

    2017-06-01

    Traffic impact analysis (TIA) is a traffic study that aims to identify the impact of traffic generated by development or a change in land use. In addition to identifying the traffic impact, a TIA also includes mitigation measures to minimize the resulting traffic impact. TIA has become increasingly important since it was defined in legislation as one of the requirements for a Building Permit proposal. The legislation has encouraged a number of TIA studies in various cities in Indonesia, including Surakarta. For that reason, it is necessary to study the development of TIA by adopting the Transportation Impact Control (TIC) concept in implementing the TIA standard document and multimodal modeling. This includes standardizing TIA technical guidelines, the database, and inspection by providing TIA checklists, monitoring and evaluation. The research was undertaken by collecting historical junction data, modeling the data as a relational database, and building a user interface for CRUD (Create, Read, Update and Delete) operations on the TIA data as a web application using the Google Maps libraries. The result of the research is a system that provides information supporting the improvement and repair of existing TIA documents, making them more transparent, reliable and credible.
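
    A minimal sketch of such a CRUD layer over a relational junction table is shown below. The table and column names are invented for illustration (the actual system defines its own schema and serves a Google Maps front end); an in-memory SQLite database stands in for the production store.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE junction (
    id        INTEGER PRIMARY KEY,
    name      TEXT NOT NULL,
    lat       REAL,            -- coordinates for the map overlay
    lon       REAL,
    peak_flow INTEGER          -- vehicles/hour from the TIA survey
)""")

def create_junction(name, lat, lon, peak_flow):
    cur = conn.execute(
        "INSERT INTO junction (name, lat, lon, peak_flow) VALUES (?, ?, ?, ?)",
        (name, lat, lon, peak_flow))
    return cur.lastrowid

def read_junction(junction_id):
    return conn.execute(
        "SELECT name, lat, lon, peak_flow FROM junction WHERE id = ?",
        (junction_id,)).fetchone()

def update_peak_flow(junction_id, peak_flow):
    conn.execute("UPDATE junction SET peak_flow = ? WHERE id = ?",
                 (peak_flow, junction_id))

def delete_junction(junction_id):
    conn.execute("DELETE FROM junction WHERE id = ?", (junction_id,))
```

    In the web application these four operations would back the map interface: stored coordinates place junction markers, and the survey fields feed the TIA checklists.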

  9. NoSQL Databases

    OpenAIRE

    PANYKO, Tomáš

    2013-01-01

    This thesis deals with database systems referred to as NoSQL databases. In the second chapter, I explain basic terms and the theory of database systems. A short explanation is dedicated to database systems based on the relational data model and the SQL standardized query language. Chapter Three explains the concept and history of the NoSQL databases, and also presents database models, major features and the use of NoSQL databases in comparison with traditional database systems. In the fourth ...

  10. Database Description - RMG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us RMG Database Description General information of database Database name RMG Alternative name ...raki 305-8602, Japan National Institute of Agrobiological Sciences E-mail : Database... classification Nucleotide Sequence Databases Organism Taxonomy Name: Oryza sativa Japonica Group Taxonomy ID: 39947 Database... description This database contains information on the rice mitochondrial genome. You ca...sis results. Features and manner of utilization of database The mitochondrial genome information can be used

  11. Database Description - JSNP | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us JSNP Database Description General information of database Database name JSNP Alternative nam...n Science and Technology Agency Creator Affiliation: Contact address E-mail : Database... classification Human Genes and Diseases - General polymorphism databases Organism Taxonomy Name: Homo ...sapiens Taxonomy ID: 9606 Database description A database of about 197,000 polymorphisms in Japanese populat... and manner of utilization of database Allele frequencies in Japanese population are also available. License

  12. A Conceptual Model and Database to Integrate Data and Project Management

    Science.gov (United States)

    Guarinello, M. L.; Edsall, R.; Helbling, J.; Evaldt, E.; Glenn, N. F.; Delparte, D.; Sheneman, L.; Schumaker, R.

    2015-12-01

    Data management is critically foundational to doing effective science in our data-intensive research era; done well, it can enhance collaboration, increase the value of research data, and support requirements by funding agencies to make scientific data and other research products available through publicly accessible online repositories. However, there are few examples (but see the Long-term Ecological Research Network Data Portal) of these data being provided in a manner that allows exploration within the context of the research process: What specific research questions do these data seek to answer? What data were used to answer these questions? What data would have been helpful to answer these questions but were not available? We propose an agile conceptual model and database design, as well as example results, that integrate data management with project management, not only to maximize the value of research data products but to enhance collaboration during the project and the process of project management itself. In our project, which we call 'Data Map,' we used agile principles by adopting a user-focused approach and by designing our database to be simple, responsive, and expandable. We initially designed Data Map for the Idaho EPSCoR project "Managing Idaho's Landscapes for Ecosystem Services (MILES)" (see https://www.idahoecosystems.org//) and will present example results for this work. We consulted with our primary users (project managers, data managers, and researchers) to design the Data Map. Results will be useful to project managers and to funding agencies reviewing progress because they will readily provide answers to the questions "For which research projects/questions are data available and/or being generated by MILES researchers?" and "Which research projects/questions are associated with each of the 3 primary questions from the MILES proposal?" To be responsive to the needs of the project, we chose to streamline our design for the prototype
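
    The linkage at the heart of such a design can be sketched as a many-to-many relationship between research questions and datasets, so that "which data answer which question?" becomes a simple join. The table names below are invented for illustration; the actual Data Map schema is richer.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE question (id INTEGER PRIMARY KEY, text TEXT NOT NULL);
CREATE TABLE dataset  (id INTEGER PRIMARY KEY, name TEXT NOT NULL,
                       status TEXT NOT NULL);     -- e.g. 'available'/'planned'
CREATE TABLE question_dataset (
    question_id INTEGER NOT NULL REFERENCES question(id),
    dataset_id  INTEGER NOT NULL REFERENCES dataset(id),
    PRIMARY KEY (question_id, dataset_id));
""")

def datasets_for_question(question_id):
    """Answer 'which data are available or being generated for this
    research question?' with a join over the linking table."""
    return conn.execute("""
        SELECT d.name, d.status FROM dataset d
        JOIN question_dataset qd ON qd.dataset_id = d.id
        WHERE qd.question_id = ?""", (question_id,)).fetchall()

conn.executescript("""
INSERT INTO question VALUES (1, 'How do land-use changes alter ecosystem services?');
INSERT INTO dataset  VALUES (1, 'lidar-canopy-2014', 'available'),
                            (2, 'household-survey',  'planned');
INSERT INTO question_dataset VALUES (1, 1), (1, 2);
""")
```

    A 'planned' status also captures the abstract's third question, data that would be helpful but are not yet available.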

  13. The use of extracorporeal membrane oxygenation in blunt thoracic trauma: A study of the Extracorporeal Life Support Organization database.

    Science.gov (United States)

    Jacobs, Jordan V; Hooft, Nicole M; Robinson, Brenton R; Todd, Emily; Bremner, Ross M; Petersen, Scott R; Smith, Michael A

    2015-12-01

    Reports documenting the use of extracorporeal membrane oxygenation (ECMO) after blunt thoracic trauma are scarce. We used a large, multicenter database to examine outcomes when ECMO was used in treating patients with blunt thoracic trauma. We performed a retrospective analysis of ECMO patients in the Extracorporeal Life Support Organization database between 1998 and 2014. The diagnostic code for blunt pulmonary contusion (861.21, DRG International Classification of Diseases-9th Rev.) was used to identify patients treated with ECMO after blunt thoracic trauma. Variations of pre-ECMO respiratory support were also evaluated. The primary outcome was survival to discharge; the secondary outcome was hemorrhagic complication associated with ECMO. Eighty-five patients met inclusion criteria. The mean ± SEM age of the cohort was 28.9 ± 1.1 years; 71 (83.5%) were male. The mean ± SEM pre-ECMO PaO2/FIO2 ratio was 59.7 ± 3.5, and the mean ± SEM pre-ECMO length of ventilation was 94.7 ± 13.2 hours. Pre-ECMO support included inhaled nitric oxide (15 patients, 17.6%), high-frequency oscillation (10, 11.8%), and vasopressor agents (57, 67.1%). The mean ± SEM duration of ECMO was 207.4 ± 23.8 hours, and 63 patients (74.1%) were treated with venovenous ECMO. Thirty-two patients (37.6%) underwent invasive procedures before ECMO, and 12 patients (14.1%) underwent invasive procedures while on ECMO. Hemorrhagic complications occurred in 25 cases (29.4%), including 12 patients (14.1%) with surgical site bleeding and 16 (18.8%) with cannula site bleeding (6 patients had both). The rate of survival to discharge was 74.1%. Multivariate analysis showed that shorter duration of ECMO and the use of venovenous ECMO predicted survival. Outcomes after the use of ECMO in blunt thoracic trauma can be favorable. Some trauma patients are appropriate candidates for this therapy. Further study may discern which subpopulations of trauma patients will benefit most from ECMO. Therapeutic

  14. High resolution topography and land cover databases for wind resource assessment using mesoscale models

    Science.gov (United States)

    Barranger, Nicolas; Stathopoulos, Christos; Kallos, Georges

    2013-04-01

    In wind resource assessment, mesoscale models can provide wind flow characteristics without the use of mast measurements. In complex terrain, local orography and land cover data assimilation are essential to accurately simulating the wind flow pattern within the atmospheric boundary layer. State-of-the-art mesoscale models such as RAMS usually provide orography and land use data at a resolution of 30 s (about 1 km). This resolution is adequate for resolving mesoscale phenomena accurately, but not sufficient when the aim is to quantitatively estimate the wind flow characteristics over sharp hills or ridges. Furthermore, abrupt changes in land cover are not always captured by the model when a low-resolution land use database is used. Where land cover characteristics change dramatically, parameters such as roughness, albedo or soil moisture can strongly influence the meteorological characteristics of the atmospheric boundary layer, and therefore need to be accurately assimilated into the model. In recent years, high resolution databases derived from satellite imagery (MODIS, SRTM, Landsat, SPOT) have become available online. Once converted to RAMS input requirements, they need to be evaluated in the model. For this purpose, three new high resolution land cover databases and two topographical databases are implemented and tested in RAMS. The analysis of terrain variability is performed using basis functions of spatial frequency and amplitude. In practice, one- and two-dimensional Fast Fourier Transforms are applied to the terrain height to reveal the main characteristics of the local orography through the resulting wave spectrum. In this way, a comparison between different topographic data sets is performed, based on the terrain power spectrum entailed in the terrain height input. Furthermore, this analysis is a powerful tool for determining the proper horizontal grid resolution required to resolve most of the energy containing spectrum
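
    The one-dimensional version of that spectral analysis can be sketched in a few lines: remove the mean elevation, take an FFT of the transect, and read off which wavelengths carry the terrain's energy. The synthetic transect below is an invented example, not one of the evaluated databases.

```python
import numpy as np

def terrain_power_spectrum(height, dx):
    """One-dimensional power spectrum of an evenly sampled terrain
    transect.  height: elevations (m) sampled every dx metres.
    Returns (wavenumbers in cycles/m, power per frequency bin)."""
    h = np.asarray(height, dtype=float)
    h = h - h.mean()                              # remove mean elevation
    power = np.abs(np.fft.rfft(h)) ** 2 / len(h)
    wavenumbers = np.fft.rfftfreq(len(h), d=dx)
    return wavenumbers, power

# Synthetic 50 km transect: a 5 km ridge wave plus finer 800 m relief.
x = np.arange(0.0, 50_000.0, 100.0)               # 100 m sampling
z = 300.0 * np.sin(2 * np.pi * x / 5000.0) + 20.0 * np.sin(2 * np.pi * x / 800.0)
k, p = terrain_power_spectrum(z, 100.0)
dominant = k[np.argmax(p[1:]) + 1]                # skip the (zero) mean bin
```

    Comparing where two databases' spectra diverge shows which scales of orography one of them is missing, and the shortest energetic wavelength indicates the grid spacing needed to resolve it.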

  15. A Multiquantum State-to-State Model for the Fundamental States of Air: The Stellar Database

    Science.gov (United States)

    Lino da Silva, M.; Lopez, B.; Guerra, V.; Loureiro, J.

    2012-12-01

    We present a detailed database of vibrationally specific heavy-impact multiquantum rates for transitions between the fundamental states of neutral air species (N2, O2, NO, N and O). The most up-to-date datasets for atom-diatom collisions are first selected from the literature, scaled to accurate vibrational level manifolds obtained using realistic intramolecular potentials, and extrapolated to high temperatures when necessary. For diatom-diatom collisions, vibrationally specific rates are produced using the Forced Harmonic Oscillator theory. An adequate manifold of vibrational levels is obtained from an accurate intramolecular potential, and available intermolecular potentials are approximated by a simplified isotropic Morse potential, or assumed through scaling of similar potentials otherwise. The database's state-specific rates are valid over a large temperature range, from low to very high temperatures, making it suitable for applications such as the modeling of high-enthalpy plasma sources or atmospheric entry. As experimentally determined state-specific rates are scarce, especially at high temperatures, emphasis has instead been placed on verifying that the obtained rates are physically consistent and that they scale within the bounds of equilibrium rates available in the literature. The STELLAR database provides a complete and adequate set of heavy-impact rates for vibrational excitation, exchange, dissociation and recombination, which can then be coupled to more detailed datasets for the simulation of physical-chemical processes in high-temperature plasmas. An application to the dissociation and exchange processes occurring behind a hypersonic shock wave is also presented in this work.
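
    To illustrate what a vibrational level manifold from a simple Morse fit looks like, the sketch below evaluates the standard Morse term values G(v) = we(v + 1/2) - wexe(v + 1/2)^2 up to the point where the level spacing closes. The constants are approximate literature values for ground-state N2; this is textbook material, not the database's own manifolds.

```python
def morse_levels(we, wexe):
    """Vibrational term values (cm-1) of a Morse oscillator,
    G(v) = we*(v + 1/2) - wexe*(v + 1/2)**2, kept only while the
    level spacing stays positive (i.e. below dissociation)."""
    levels = []
    v = 0
    while True:
        g = we * (v + 0.5) - wexe * (v + 0.5) ** 2
        if levels and g <= levels[-1]:
            break               # spacing closed up: manifold ends
        levels.append(g)
        v += 1
    return levels

# Approximate spectroscopic constants for N2 (ground state), in cm-1.
n2_levels = morse_levels(2358.57, 14.32)
```

    A pure Morse manifold like this tends to overestimate the number of bound levels relative to manifolds from realistic intramolecular potentials, which is why the database anchors its scaled rates to accurate level manifolds rather than to the Morse fit alone.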

  16. Subject and authorship of records related to the Organization for Tropical Studies (OTS in BINABITROP, a comprehensive database about Costa Rican biology

    Directory of Open Access Journals (Sweden)

    Julián Monge-Nájera

    2013-06-01

    Full Text Available BINABITROP is a bibliographical database of more than 38 000 records about the ecosystems and organisms of Costa Rica. In contrast with commercial databases, such as Web of Knowledge and Scopus, which exclude most of the scientific journals published in tropical countries, BINABITROP is a comprehensive record of knowledge on the tropical ecosystems and organisms of Costa Rica. We analyzed its contents for three sites (La Selva, Palo Verde and Las Cruces) and recorded scientific field, taxonomic group and authorship. We found that most records dealt with ecology and systematics, and that most authors published only one article in the study period (1963-2011). Most research was published in four journals: Biotropica, Revista de Biología Tropical/International Journal of Tropical Biology and Conservation, Zootaxa and Brenesia. This may be the first study of such a comprehensive database of tropical biology literature.

  17. Subject and authorship of records related to the Organization for Tropical Studies (OTS) in BINABITROP, a comprehensive database about Costa Rican biology.

    Science.gov (United States)

    Monge-Nájera, Julián; Nielsen-Muñoz, Vanessa; Azofeifa-Mora, Ana Beatriz

    2013-06-01

    BINABITROP is a bibliographical database of more than 38000 records about the ecosystems and organisms of Costa Rica. In contrast with commercial databases, such as Web of Knowledge and Scopus, which exclude most of the scientific journals published in tropical countries, BINABITROP is a comprehensive record of knowledge on the tropical ecosystems and organisms of Costa Rica. We analyzed its contents for three sites (La Selva, Palo Verde and Las Cruces) and recorded scientific field, taxonomic group and authorship. We found that most records dealt with ecology and systematics, and that most authors published only one article in the study period (1963-2011). Most research was published in four journals: Biotropica, Revista de Biología Tropical/International Journal of Tropical Biology and Conservation, Zootaxa and Brenesia. This may be the first study of such a comprehensive database of tropical biology literature.

  18. Overall models and experimental database for UO2 and MOX fuel increasing performance

    International Nuclear Information System (INIS)

    Bernard, L.C.; Blanpain, P.

    2001-01-01

    The Framatome steady-state fission gas release database includes more than 290 fuel rods irradiated in commercial and experimental reactors with rod average burnups up to 67 GWd/tM. The transient database includes close to 60 fuel rods with burnups up to 62 GWd/tM. The hold time for these rods ranged from several minutes to many hours, and the linear heat generation rates ranged from 30 kW/m to 50 kW/m. The quality of the fission gas release model is state-of-the-art, as its uncertainty is comparable to that of other code models. Framatome is also greatly concerned with MOX fuel performance and modeling given that, since 1997, more than 1500 MOX fuel assemblies have been delivered to French and foreign PWRs. The paper focuses on the significant data acquired through surveillance and analytical programs used for the validation and improvement of the MOX fuel modeling. (author)

  19. Bootstrap imputation with a disease probability model minimized bias from misclassification due to administrative database codes.

    Science.gov (United States)

    van Walraven, Carl

    2017-04-01

    Diagnostic codes used in administrative databases cause bias due to misclassification of patient disease status. It is unclear which methods minimize this bias. Serum creatinine measures were used to determine severe renal failure status in 50,074 hospitalized patients. The true prevalence of severe renal failure and its association with covariates were measured. These were compared to results in which renal failure status was determined using surrogate measures, including: (1) diagnostic codes; (2) categorization of probability estimates of renal failure determined from a previously validated model; or (3) bootstrap imputation of disease status using model-derived probability estimates. Bias in estimates of severe renal failure prevalence and its association with covariates was minimal when bootstrap methods were used to impute renal failure status from model-based probability estimates. In contrast, biases were extensive when renal failure status was determined using codes or methods in which the model-based condition probability was categorized. Bias due to misclassification from inaccurate diagnostic codes can be minimized by using bootstrap methods to impute condition status from multivariable model-derived probability estimates. Copyright © 2017 Elsevier Inc. All rights reserved.
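
    The contrast the abstract draws can be sketched in a few lines: imputing each patient's status as a Bernoulli draw from a model-derived probability recovers the expected prevalence, while thresholding ("categorizing") the same probabilities does not. The cohort below is synthetic, and this is only the idea, not the paper's exact procedure.

```python
import random

def imputed_prevalence(probs, n_boot=200, seed=7):
    """Average prevalence over bootstrap replicates in which each
    patient's disease status is imputed as a Bernoulli draw from a
    model-derived probability."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_boot):
        positives = sum(1 for p in probs if rng.random() < p)
        total += positives / len(probs)
    return total / n_boot

def thresholded_prevalence(probs, cutoff=0.5):
    """The 'categorized probability' surrogate: call disease present
    only when the model probability clears a cutoff."""
    return sum(p >= cutoff for p in probs) / len(probs)

# Synthetic cohort: many moderate-risk patients, few high-risk ones.
probs = [0.3] * 800 + [0.9] * 200
expected = sum(probs) / len(probs)    # true expected prevalence = 0.42
```

    Thresholding discards all the disease mass carried by the sub-cutoff patients (here 800 patients at 30% risk), which is exactly the categorization bias the study reports.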

  20. COMPUTER MODEL FOR ORGANIC FERTILIZER EVALUATION

    Directory of Open Access Journals (Sweden)

    Zdenko Lončarić

    2009-12-01

    Full Text Available Evaluation of the quality of manures, composts and growing media should include enough properties to enable optimal use from both productivity and environmental points of view. The aim of this paper is to describe the basic structure of an organic fertilizer (and growing media) evaluation model, to present an example of the model by comparing different manures, and to give an example of using a plant growth experiment to calculate the impact of pH and EC of growing media on lettuce growth. The basic structure of the model includes selection of quality indicators, interpretation of indicator values, and integration of the interpreted values into new indexes. The first step includes data input and selection of available data as basic or additional indicators, depending on possible use as fertilizer or growing media. The second part of the model uses the inputs to calculate derived quality indicators. The third step integrates the values into three new indexes: a fertilizer index, a growing media index, and an environmental index. All three indexes are calculated on the basis of three different groups of indicators: basic value indicators, additional value indicators and limiting factors. The possible range of index values is 0-10, where 0-3 means low, 3-7 medium and 7-10 high quality. Comparing fresh and composted manures, higher fertilizer and environmental indexes were determined for composted manures; the highest fertilizer index was determined for composted pig manure (9.6) and the lowest for fresh cattle manure (3.2). Composted manures had a high environmental index (6.0-10) for conventional agriculture, but some had no value (environmental index = 0) for organic agriculture because of too-high zinc, copper or cadmium concentrations. Growing media indexes were determined according to their impact on lettuce growth. Growing media with different pH and EC had very significant impacts on the height, dry matter mass and leaf area of lettuce seedlings. The highest lettuce
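
    The three-step integration can be sketched as below. The 0.7/0.3 weights and the rule that a violated limiting factor zeroes the index are assumptions for illustration, not the paper's actual formulas; only the 0-10 range and the low/medium/high classes follow the text.

```python
def integrate_index(basic, additional, limiting_ok):
    """Combine interpreted indicator scores (each on a 0-10 scale) into
    one quality index.  The weights are invented for illustration; a
    failed limiting factor (e.g. excess Zn, Cu or Cd) nullifies the
    index, mirroring the organic-agriculture example in the text."""
    if not limiting_ok:
        return 0.0
    base = sum(basic) / len(basic)
    extra = sum(additional) / len(additional)
    return min(10.0, 0.7 * base + 0.3 * extra)

def quality_class(index):
    """0-3 low, 3-7 medium, 7-10 high, per the model's ranges."""
    if index < 3.0:
        return "low"
    if index < 7.0:
        return "medium"
    return "high"
```

    Under this scheme a manure with strong basic and additional indicators but excess heavy metals still scores 0 for organic use, which is the behaviour the abstract reports for some composted manures.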

  1. Volcanogenic Massive Sulfide Deposits of the World - Database and Grade and Tonnage Models

    Science.gov (United States)

    Mosier, Dan L.; Berger, Vladimir I.; Singer, Donald A.

    2009-01-01

    Grade and tonnage models are useful in quantitative mineral-resource assessments. The models and database presented in this report are an update of earlier publications about volcanogenic massive sulfide (VMS) deposits. These VMS deposits include what were formerly classified as kuroko, Cyprus, and Besshi deposits. The update was necessary because of new or changed information about some deposits (such as grades, tonnages, or ages), revised locations of some deposits, and reclassification of subtypes. In this report we have added new VMS deposits and removed a few incorrectly classified deposits. This global compilation of VMS deposits contains 1,090 deposits; however, it was not our intent to include every known deposit in the world. These data were recently used for mineral-deposit density models (Mosier and others, 2007; Singer, 2008). In this paper, 867 deposits were used to construct revised grade and tonnage models. Our new models are based on a reclassification of deposits by host lithology: Felsic, Bimodal-Mafic, and Mafic volcanogenic massive sulfide deposits. Mineral-deposit models are important in exploration planning and quantitative resource assessments for two reasons: (1) grades and tonnages among deposit types vary significantly, and (2) deposits of different types occur in distinct geologic settings that can be identified from geologic maps. Mineral-deposit models combine the diverse geoscience information on geology, mineral occurrences, geophysics, and geochemistry used in resource assessments and mineral exploration. Globally based deposit models allow recognition of important features and demonstrate how common different features are. Well-designed deposit models allow geologists to deduce possible mineral-deposit types in a given geologic environment and economists to determine the possible economic viability of these resources. Thus, mineral-deposit models play a central role in presenting geoscience

  2. Fiscal 1998 research report. Construction model project of the human sensory database; 1998 nendo ningen kankaku database kochiku model jigyo seika hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-03-01

    This report summarizes the fiscal 1998 research results on construction of the human sensory database. A human sensory database for evaluating working environments was constructed on the basis of measurements of human sensory data (stress and fatigue) from 400 examinees in the field (transport settings, control rooms and offices) and in a laboratory. Using a newly developed standard measurement protocol for evaluating summer clothing (shirts, slacks and underwear), a database was constructed comprising the evaluation experiment results and the results of comparative experiments on the human physiological and sensory data of elderly and young people. The database features easy retrieval of the relevant information corresponding to the requirements of tasks and purposes of use. To evaluate the large volume of highly time-variable data read out for each scene according to the purpose of use, a data detection support technique was adopted that focuses on physical and psychological variable phases and on mind and body events. The meaning of the reaction and a hint for necessary measures are shown for every phase and event. (NEDO)

  3. Organic production in a dynamic CGE model

    DEFF Research Database (Denmark)

    Jacobsen, Lars Bo

    2004-01-01

    Concerns about the impact of modern agriculture on the environment have in recent years led to an interest in supporting the development of organic farming. In addition to environmental benefits, the aim is to encourage the provision of other “multifunctional” properties of organic farming...... agricultural sector and each secondary food industry has been split into two separate industries: one producing organic products, the other producing conventional products. The substitution nests in private consumption have also been altered to emphasise the pair wise substitution between organic...... and conventional products. One of the most important regulations regarding organic production concerns the conversion period, that is the period where the farmer starts to use organic production methods until the farmland and the production are considered organic. Currently organic production methods have...

  4. Modelling the Geographical Origin of Rice Cultivation in Asia Using the Rice Archaeological Database

    Science.gov (United States)

    Silva, Fabio; Stevens, Chris J.; Weisskopf, Alison; Castillo, Cristina; Qin, Ling; Bevan, Andrew; Fuller, Dorian Q.

    2015-01-01

    We have compiled an extensive database of archaeological evidence for rice across Asia, including 400 sites from mainland East Asia, Southeast Asia and South Asia. This dataset is used to compare several models for the geographical origins of rice cultivation and infer the most likely region(s) for its origins and subsequent outward diffusion. The approach is based on regression modelling wherein goodness of fit is obtained from power law quantile regressions of the archaeologically inferred age versus a least-cost distance from the putative origin(s). The Fast Marching method is used to estimate the least-cost distances based on simple geographical features. The origin region that best fits the archaeobotanical data is also compared to other hypothetical geographical origins derived from the literature, including from genetics, archaeology and historical linguistics. The model that best fits all available archaeological evidence is a dual origin model with two centres for the cultivation and dispersal of rice focused on the Middle Yangtze and the Lower Yangtze valleys. PMID:26327225
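
    The fitting idea can be sketched with a pinball (quantile) loss: a candidate origin is scored by how well site ages decline with distance from it, with a high quantile tracking the oldest evidence at each distance. Euclidean distance and a linear decay stand in here for the paper's least-cost (Fast Marching) distances and power-law quantile regressions, and the sites are synthetic.

```python
from math import hypot

def pinball_loss(residual, tau=0.9):
    """Quantile-regression loss; tau near 1 penalizes under-prediction
    of age, so the fit tracks the oldest sites at a given distance."""
    return tau * residual if residual >= 0 else (tau - 1.0) * residual

def origin_score(origin, sites, spread_rate, start_age, tau=0.9):
    """Total pinball loss of a candidate origin, predicting
    age = start_age - distance / spread_rate for every site.
    sites: iterable of (x, y, age); lower scores fit better."""
    ox, oy = origin
    return sum(
        pinball_loss(age - (start_age - hypot(x - ox, y - oy) / spread_rate), tau)
        for x, y, age in sites)

# Synthetic sites whose ages decay away from (0, 0) at 1 unit/year.
sites = [(d, 0.0, 9000.0 - d) for d in (0.0, 500.0, 1000.0, 2000.0, 3000.0)]
true_origin_score = origin_score((0.0, 0.0), sites, 1.0, 9000.0)
wrong_origin_score = origin_score((3000.0, 0.0), sites, 1.0, 9000.0)
```

    Comparing scores across candidate origins (here (0, 0) versus (3000, 0)) is the same model-selection move the paper makes between hypothesized centres such as the Middle and Lower Yangtze.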

  5. Combining a weed traits database with a population dynamics model predicts shifts in weed communities.

    Science.gov (United States)

    Storkey, J; Holst, N; Bøjer, O Q; Bigongiali, F; Bocci, G; Colbach, N; Dorner, Z; Riemens, M M; Sartorato, I; Sønderskov, M; Verschwele, A

    2015-04-01

    A functional approach to predicting shifts in weed floras in response to management or environmental change requires the combination of data on weed traits with analytical frameworks that capture the filtering effect of selection pressures on traits. A weed traits database (WTDB) was designed, populated and analysed, initially using data for 19 common European weeds, to begin to consolidate trait data in a single repository. The initial choice of traits was driven by the requirements of empirical models of weed population dynamics to identify correlations between traits and model parameters. These relationships were used to build a generic model, operating at the level of functional traits, to simulate the impact of increasing herbicide and fertiliser use on virtual weeds along gradients of seed weight and maximum height. The model generated 'fitness contours' (defined as population growth rates) within this trait space in different scenarios, onto which two sets of weed species, defined as common or declining in the UK, were mapped. The effect of increasing inputs on the weed flora was successfully simulated; 77% of common species were predicted to have stable or increasing populations under high fertiliser and herbicide use, in contrast with only 29% of the species that have declined. Future development of the WTDB will aim to increase the number of species covered, incorporate a wider range of traits and analyse intraspecific variability under contrasting management and environments.
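
    A sketch of how such 'fitness contours' over a trait space might be generated is given below. Every coefficient is invented; only the qualitative shape, in which larger-seeded, taller weeds cope better with high inputs, follows the abstract's result, and the real model's parameters come from the WTDB trait-parameter correlations.

```python
def growth_rate(seed_weight_mg, height_cm, herbicide, fertiliser):
    """Hypothetical annual population growth rate for a virtual weed
    with the given traits, under input intensities scaled 0-1.
    All coefficients are invented for illustration."""
    competitiveness = 0.01 * height_cm + 0.2 * seed_weight_mg
    herbicide_escape = 0.3 * seed_weight_mg   # larger seeds assumed hardier
    rate = 1.0                                # stable population baseline
    rate += fertiliser * (competitiveness - 1.0) * 0.5
    rate -= herbicide * max(0.0, 0.8 - herbicide_escape)
    return max(0.0, rate)

def fitness_grid(herbicide, fertiliser, weights, heights):
    """'Fitness contours': growth rates over a seed-weight x height grid."""
    return [[growth_rate(w, h, herbicide, fertiliser) for h in heights]
            for w in weights]

grid = fitness_grid(herbicide=1.0, fertiliser=1.0,
                    weights=[0.5, 1.0, 2.0, 4.0], heights=[20, 50, 100, 150])
```

    Mapping real species' trait values onto such a grid and reading off whether their growth rate exceeds 1 is how the declining-versus-common comparison in the abstract is made.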

  6. HTO transfer from contaminated surfaces to the atmosphere: a database for model validation

    International Nuclear Information System (INIS)

    Davis, P.A.; Amiro, B.D.; Workman, W.J.G.; Corbett, B.J.

    1996-12-01

    This report comprises a detailed database that can be used to validate models of the emission of tritiated water vapour (HTO) from natural contaminated surfaces to the atmosphere. The data were collected in 1992 July during an intensive field study based on the flux-gradient method of micrometeorology. The measurements were made over a wetland area at the Chalk River Laboratories, and over a grassed field near the Pickering Nuclear Generating Station. The study sites, the sampling protocols and the analytical techniques are described in detail, and the measured fluxes are presented. The report also contains a detailed listing of HTO concentrations in air at two heights, HTO concentrations in the source compartments (soil, surface water and vegetation), supporting meteorological data, and various vegetation and soil properties. The uncertainties in all of the measured data are estimated. (author). 15 refs., 23 tabs., 9 figs
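
    The flux-gradient method underlying these measurements rests on one relation: the vertical flux is proportional to the concentration difference between the two sampling heights, F = -K · dC/dz. A minimal sketch, with the eddy diffusivity K treated as a given input (the real method estimates it from wind and temperature profiles):

```python
def flux_gradient(c_lower, c_upper, z_lower, z_upper, K):
    """Flux-gradient estimate of the vertical HTO flux, F = -K * dC/dz.

    c_lower, c_upper: HTO concentrations at heights z_lower < z_upper;
    K is the eddy diffusivity (assumed known here). A positive result
    means an upward flux, i.e. emission from the surface.
    """
    return -K * (c_upper - c_lower) / (z_upper - z_lower)

# Concentration decreasing with height -> upward (emission) flux.
f = flux_gradient(10.0, 8.0, 1.0, 2.0, 0.5)
```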

  7. Gene-disease relationship discovery based on model-driven data integration and database view definition.

    Science.gov (United States)

    Yilmaz, S; Jonveaux, P; Bicep, C; Pierron, L; Smaïl-Tabbone, M; Devignes, M D

    2009-01-15

    Computational methods are widely used to discover gene-disease relationships hidden in vast masses of available genomic and post-genomic data. In most current methods, a similarity measure is calculated between gene annotations and known disease genes or disease descriptions. However, more explicit gene-disease relationships are required for better insights into the molecular bases of diseases, especially for complex multi-gene diseases. Explicit relationships between genes and diseases are formulated as candidate gene definitions that may include intermediary genes, e.g. orthologous or interacting genes. These definitions guide data modelling in our database approach for gene-disease relationship discovery and are expressed as views which ultimately lead to the retrieval of documented sets of candidate genes. A system called ACGR (Approach for Candidate Gene Retrieval) has been implemented and tested with three case studies including a rare orphan gene disease.

  8. Logical database design principles

    CERN Document Server

    Garmany, John; Clark, Terry

    2005-01-01

    INTRODUCTION TO LOGICAL DATABASE DESIGN: Understanding a Database; Database Architectures; Relational Databases; Creating the Database; System Development Life Cycle (SDLC); Systems Planning: Assessment and Feasibility; System Analysis: Requirements; System Analysis: Requirements Checklist; Models Tracking and Schedules; Design Modeling; Functional Decomposition Diagram; Data Flow Diagrams; Data Dictionary; Logical Structures and Decision Trees; System Design: Logical. SYSTEM DESIGN AND IMPLEMENTATION: The ER Approach; Entities and Entity Types; Attribute Domains; Attributes; Set-Valued Attributes; Weak Entities; Constraint

  9. Studying Oogenesis in a Non-model Organism Using Transcriptomics: Assembling, Annotating, and Analyzing Your Data.

    Science.gov (United States)

    Carter, Jean-Michel; Gibbs, Melanie; Breuker, Casper J

    2016-01-01

    This chapter provides a guide to processing and analyzing RNA-Seq data in a non-model organism. This approach was implemented for studying oogenesis in the Speckled Wood Butterfly Pararge aegeria. We focus in particular on how to perform a more informative primary annotation of your non-model organism by implementing our multi-BLAST annotation strategy. We also provide a general guide to other essential steps in the next-generation sequencing analysis workflow. Before undertaking these methods, we recommend you familiarize yourself with command line usage and fundamental concepts of database handling. Most of the operations in the primary annotation pipeline can be performed in Galaxy (or equivalent standalone versions of the tools) and through the use of common database operations (e.g. to remove duplicates) but other equivalent programs and/or custom scripts can be implemented for further automation.

  10. How I do it: a practical database management system to assist clinical research teams with data collection, organization, and reporting.

    Science.gov (United States)

    Lee, Howard; Chapiro, Julius; Schernthaner, Rüdiger; Duran, Rafael; Wang, Zhijun; Gorodetski, Boris; Geschwind, Jean-François; Lin, MingDe

    2015-04-01

    The objective of this study was to demonstrate that an intra-arterial liver therapy clinical research database system is a more workflow-efficient and robust tool for clinical research than a spreadsheet storage system. The database system could also be used to generate clinical research study populations easily with custom search and retrieval criteria. A questionnaire was designed and distributed to 21 board-certified radiologists to assess current data storage problems and clinician reception to a database management system. Based on the questionnaire findings, a customized database and user interface system were created to perform automatic calculations of clinical scores, including staging systems such as Child-Pugh and Barcelona Clinic Liver Cancer, and to facilitate data input and output. Questionnaire participants were favorable to a database system. The interface retrieved study-relevant data accurately and effectively. The database effectively produced easy-to-read study-specific patient populations with custom-defined inclusion/exclusion criteria. The database management system is workflow efficient and robust in retrieving, storing, and analyzing data.
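
    As an illustration of the kind of automatic score calculation described, here is a sketch of the Child-Pugh score using the standard published thresholds. This is not the (unpublished) implementation of the system above, and the 0/1/2 encodings for ascites and encephalopathy grades are assumptions of this sketch:

```python
def child_pugh(bilirubin_mg_dl, albumin_g_dl, inr, ascites, encephalopathy):
    """Child-Pugh score and class from the standard published thresholds.

    ascites and encephalopathy are graded 0 (absent), 1 (mild/moderate),
    2 (severe) -- an encoding assumed for this sketch.
    """
    points = 1 if bilirubin_mg_dl < 2 else 2 if bilirubin_mg_dl <= 3 else 3
    points += 1 if albumin_g_dl > 3.5 else 2 if albumin_g_dl >= 2.8 else 3
    points += 1 if inr < 1.7 else 2 if inr <= 2.3 else 3
    points += ascites + 1          # each categorical grade scores 1-3
    points += encephalopathy + 1
    grade = "A" if points <= 6 else "B" if points <= 9 else "C"
    return points, grade
```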

  11. Analysis of the Properties of Working Substances for the Organic Rankine Cycle based Database “REFPROP”

    Directory of Open Access Journals (Sweden)

    Galashov Nikolay

    2016-01-01

    Full Text Available The objects of the study are substances used as working fluids in systems operating on the basis of an organic Rankine cycle. The purpose of the research is to find substances with the best thermodynamic, thermal and environmental properties. The research was conducted by analysing the thermodynamic and thermal properties of substances from the "REFPROP" database and by numerical simulation of a triple-cycle combined-cycle plant in which the bottoming cycle is an organic Rankine cycle. The "REFPROP" database describes, and allows one to calculate, the thermodynamic and thermophysical parameters of most of the main substances used in production processes. On the basis of scientific publications on working fluids for the organic Rankine cycle, the following ozone-friendly low-boiling substances were selected for analysis: ammonia, butane, pentane and the freons R134a, R152a, R236fa and R245fa. For these substances, the molecular weight, triple-point temperature, boiling point at atmospheric pressure, critical-point parameters, the derivative of temperature with respect to entropy along the saturated vapour line, and the ozone depletion and global warming potentials were identified and tabulated. The thermodynamic and thermophysical parameters of the vapour and liquid phases in the saturated state at 15 °C were also tabulated; this temperature is adopted as the minimum heat-rejection temperature in the Rankine cycle when water is the working fluid. The studies showed that, of the substances considered, pentane, butane and R245fa have the best thermodynamic, thermal and environmental properties. For a more thorough analysis, a mathematical model of a triple-cycle combined-cycle gas turbine (CCGT) plant based on the NK-36ST gas turbine was developed, in which the bottoming cycle is an organic Rankine cycle with an air-cooled condenser.

  12. ADANS database specification

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-01-16

    The purpose of the Air Mobility Command (AMC) Deployment Analysis System (ADANS) Database Specification (DS) is to describe the database organization and storage allocation and to provide the detailed data model of the physical design and information necessary for the construction of the parts of the database (e.g., tables, indexes, rules, defaults). The DS includes entity relationship diagrams, table and field definitions, reports on other database objects, and a description of the ADANS data dictionary. ADANS is the automated system used by Headquarters AMC and the Tanker Airlift Control Center (TACC) for airlift planning and scheduling of peacetime and contingency operations as well as for deliberate planning. ADANS also supports planning and scheduling of Air Refueling Events by the TACC and the unit-level tanker schedulers. ADANS receives input in the form of movement requirements and air refueling requests. It provides a suite of tools for planners to manipulate these requirements/requests against mobility assets and to develop, analyze, and distribute schedules. Analysis tools are provided for assessing the products of the scheduling subsystems, and editing capabilities support the refinement of schedules. A reporting capability provides formatted screen, print, and/or file outputs of various standard reports. An interface subsystem handles message traffic to and from external systems. The database is an integral part of the functionality summarized above.

  13. The Time Is Right to Focus on Model Organism Metabolomes.

    Science.gov (United States)

    Edison, Arthur S; Hall, Robert D; Junot, Christophe; Karp, Peter D; Kurland, Irwin J; Mistrik, Robert; Reed, Laura K; Saito, Kazuki; Salek, Reza M; Steinbeck, Christoph; Sumner, Lloyd W; Viant, Mark R

    2016-02-15

    Model organisms are an essential component of biological and biomedical research that can be used to study specific biological processes. These organisms are in part selected for facile experimental study. However, just as importantly, intensive study of a small number of model organisms yields important synergies as discoveries in one area of science for a given organism shed light on biological processes in other areas, even for other organisms. Furthermore, the extensive knowledge bases compiled for each model organism enable systems-level understandings of these species, which enhance the overall biological and biomedical knowledge for all organisms, including humans. Building upon extensive genomics research, we argue that the time is now right to focus intensively on model organism metabolomes. We propose a grand challenge for metabolomics studies of model organisms: to identify and map all metabolites onto metabolic pathways, to develop quantitative metabolic models for model organisms, and to relate organism metabolic pathways within the context of evolutionary metabolomics, i.e., phylometabolomics. These efforts should focus on a series of established model organisms in microbial, animal and plant research.


  15. Towards Global QSAR Model Building for Acute Toxicity: Munro Database Case Study

    Directory of Open Access Journals (Sweden)

    Swapnil Chavan

    2014-10-01

    Full Text Available A series of 436 Munro database chemicals were studied with respect to their corresponding experimental LD50 values to investigate the possibility of establishing a global QSAR model for acute toxicity. Dragon molecular descriptors were used for QSAR model development, and genetic algorithms were used to select the descriptors best correlated with the toxicity data. Toxicity values were discretized into qualitative classes on the basis of the Globally Harmonized Scheme: the 436 chemicals were divided into three classes based on their experimental LD50 values: highly toxic, intermediately toxic, and low to non-toxic. The k-nearest neighbour (k-NN) classification method was calibrated on 25 molecular descriptors and gave a non-error rate (NER) equal to 0.66 and 0.57 for the internal and external prediction sets, respectively. Although the classification performance is not optimal, the subsequent analysis of the selected descriptors and their relationship with toxicity levels constitutes a step towards the development of a global QSAR model for acute toxicity.
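
    The k-NN classification step can be illustrated with a minimal sketch; the one-dimensional descriptor values and class labels below are invented for demonstration, not taken from the Munro dataset or the Dragon descriptor set:

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, query, k=3):
    """Assign the query compound the majority class among its k nearest
    training compounds, by Euclidean distance in descriptor space."""
    ranked = sorted((math.dist(xi, query), yi)
                    for xi, yi in zip(train_X, train_y))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Invented one-descriptor toy data.
train_X = [(0.0,), (1.0,), (10.0,), (11.0,)]
train_y = ["toxic", "toxic", "non-toxic", "non-toxic"]
```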

  16. Research of Cadastral Data Modelling and Database Updating Based on Spatio-temporal Process

    Directory of Open Access Journals (Sweden)

    ZHANG Feng

    2016-02-01

    Full Text Available The core of modern cadastre management is to keep the cadastral database current, topologically consistent and complete. This paper analyses the changes to various cadastral objects, and the linkages among them, during the update process. Combining object-oriented modelling techniques with the expression of spatio-temporal objects' evolution, the paper proposes a cadastral data updating model based on the spatio-temporal process, following the way people think about change. Change rules based on the spatio-temporal topological relations of evolving cadastral spatio-temporal objects are drafted, and cascade updating and history trace-back of cadastral features, land use and buildings are realized. This model is implemented in the cadastral management system ReGIS. Cascade changes are triggered by a direct driving force or by perceived external events. The system records the evolution process of spatio-temporal objects to facilitate reconstruction of history, change tracking, and analysis and forecasting of future changes.

  17. Multiple imputation as one tool to provide longitudinal databases for modelling human height and weight development.

    Science.gov (United States)

    Aßmann, C

    2016-06-01

    Besides large efforts regarding field work, provision of valid databases requires statistical and informational infrastructure to enable long-term access to longitudinal data sets on height, weight and related issues. To foster use of longitudinal data sets within the scientific community, provision of valid databases has to address data-protection regulations. It is, therefore, of major importance to hinder identifiability of individuals from publicly available databases. To reach this goal, one possible strategy is to provide a synthetic database to the public allowing for pretesting strategies for data analysis. The synthetic databases can be established using multiple imputation tools. Given the approval of the strategy, verification is based on the original data. Multiple imputation by chained equations is illustrated to facilitate provision of synthetic databases as it allows for capturing a wide range of statistical interdependencies. Also missing values, typically occurring within longitudinal databases for reasons of item non-response, can be addressed via multiple imputation when providing databases. The provision of synthetic databases using multiple imputation techniques is one possible strategy to ensure data protection, increase visibility of longitudinal databases and enhance the analytical potential.
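
    A heavily simplified sketch of the chained-equations idea for two numeric columns: each sweep regresses one column on the other and refills its missing entries. Real MICE additionally draws random residuals and produces several imputed datasets; the deterministic version below only shows the iteration structure:

```python
import statistics

def mice_impute(cols, n_iter=25):
    """Chained-equations imputation for two numeric columns containing
    missing values (None). Each sweep regresses one column on the other
    and refills its missing entries; real MICE also adds random residuals
    and repeats over multiple imputed datasets."""
    x, y = (list(c) for c in cols)
    miss_x = [i for i, v in enumerate(x) if v is None]
    miss_y = [i for i, v in enumerate(y) if v is None]
    for idx, col in ((miss_x, x), (miss_y, y)):
        mean = statistics.fmean(v for v in col if v is not None)
        for i in idx:          # start from the observed column mean
            col[i] = mean

    def fit(a, b):             # least-squares fit of b ~ a
        ma, mb = statistics.fmean(a), statistics.fmean(b)
        slope = (sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
                 / sum((ai - ma) ** 2 for ai in a))
        return slope, mb - slope * ma

    for _ in range(n_iter):    # chained sweeps until values stabilise
        s, c = fit(y, x)
        for i in miss_x:
            x[i] = s * y[i] + c
        s, c = fit(x, y)
        for i in miss_y:
            y[i] = s * x[i] + c
    return x, y
```

    On a toy column pair where y = 2x exactly, the missing x entry converges to the value on the regression line.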

  18. An empirical modeling tool and glass property database in development of US-DOE radioactive waste glasses

    International Nuclear Information System (INIS)

    Muller, I.; Gan, H.

    1997-01-01

    An integrated glass database has been developed at the Vitreous State Laboratory of Catholic University of America. The major objective of this tool was to support glass formulation using the MAWS approach (Minimum Additives Waste Stabilization). An empirical modeling capability, based on the properties of over 1000 glasses in the database, was also developed to help formulate glasses from waste streams under multiple user-imposed constraints. The use of this modeling capability, the performance of resulting models in predicting properties of waste glasses, and the correlation of simple structural theories to glass properties are the subjects of this paper. (authors)

  19. Predicting Expected Organ Donor Numbers in Australian Hospitals Outside of the Donate-Life Network Using the Anzics Adult Patient Database.

    Science.gov (United States)

    O'Brien, Yvette; Chavan, Shaila; Huckson, Sue; Russ, Graeme; Opdam, Helen; Pilcher, David

    2018-02-21

    The majority of organ donations in Australia occur in the DonateLife Network of hospitals, but limited monitoring at other sites may allow donation opportunities to be missed. Our aim was to estimate expected donor numbers using routinely collected data from the Australian and New Zealand Intensive Care Society (ANZICS) Adult Patient Database (APD), and to determine whether unrecognised potential donors might exist in non-DonateLife hospitals. All deaths at 150 Australian ICUs contributing to the ANZICS APD were analysed between January 2010 and December 2015. Donor numbers were extracted from the Australian and New Zealand Organ Donor registry. A univariate linear regression model was developed to estimate expected donor numbers in DonateLife hospitals, then applied to non-DonateLife hospitals. Of 33,614 deaths at 71 DonateLife hospitals, 6835 (20%) met criteria as 'ICU deaths potentially suitable to be donors' and 1992 (6%) were actual donors. There was a consistent relationship between these groups (R2 = 0.626, p < 0.001), allowing the development of a prediction model that adequately estimated expected donors. Of 8,077 deaths in 79 non-DonateLife ICUs, 452 (6%) met criteria as potentially suitable donors. Applying the prediction model developed in DonateLife hospitals, the estimated number of expected donors in non-DonateLife hospitals was 130; however, there were only 75 actual donors. It is possible to estimate the expected number of Australian organ donors using routinely collected registry data. These findings suggest there may be a small but significant pool of under-utilised potential donors in non-DonateLife hospitals, which may provide an opportunity to increase donation rates.
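
    The estimation idea can be sketched in a few lines; the through-origin fit and the numbers below are illustrative assumptions, not the study's actual model or data (the study used an ordinary univariate linear regression):

```python
def expected_donors(potential_train, actual_train, potential_new):
    """Fit actual ~ potential donors through the origin on hospitals with
    full donation monitoring, then project the expected donor count for
    hospitals outside the network. Through-origin least squares:
    slope = sum(p*a) / sum(p*p)."""
    slope = (sum(p * a for p, a in zip(potential_train, actual_train))
             / sum(p * p for p in potential_train))
    return slope * sum(potential_new)

# Invented numbers for illustration only: 3 actual donors per 10
# potential donors implies ~30 expected donors per 100 potential.
estimate = expected_donors([10, 20], [3, 6], [100])
```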

  20. Saccharomyces genome database informs human biology

    OpenAIRE

    Skrzypek, Marek S; Nash, Robert S; Wong, Edith D; MacPherson, Kevin A; Hellerstedt, Sage T; Engel, Stacia R; Karra, Kalpana; Weng, Shuai; Sheppard, Travis K; Binkley, Gail; Simison, Matt; Miyasato, Stuart R; Cherry, J Michael

    2017-01-01

    Abstract The Saccharomyces Genome Database (SGD; http://www.yeastgenome.org) is an expertly curated database of literature-derived functional information for the model organism budding yeast, Saccharomyces cerevisiae. SGD constantly strives to synergize new types of experimental data and bioinformatics predictions with existing data, and to organize them into a comprehensive and up-to-date information resource. The primary mission of SGD is to facilitate research into the biology of yeast and...

  1. Database Description - DGBY | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Contact us. DGBY Database Description. General information of database: Database name: DGBY; Alternative name: ... TEL: +81-29-838-8066; E-mail: ... Database classification: Microarray Data and other Gene Expression Databases. Organism Taxonomy Name: Saccharomyces cerevisiae; Taxonomy ID: 4932. Database description: ... (so-called phenomics). We uploaded these data on this website, which is designated DGBY (Database for Gene expression and function of Baker's yeast). Features and manner of utilization of database: This database

  2. Organization model and formalized description of nuclear enterprise information system

    International Nuclear Information System (INIS)

    Yuan Feng; Song Yafeng; Li Xudong

    2012-01-01

    Organization model is one of the most important models of a Nuclear Enterprise Information System (NEIS). A scientific and reasonable organization model is a prerequisite for the robustness and extendibility of an NEIS, and is also the foundation for integrating heterogeneous systems. Firstly, the paper describes the conceptual model of the NEIS in an ontology chart, which provides a consistent semantic framework for the organization. It then discusses the relations between the concepts in detail. Finally, it gives a formalized description of the organization model of the NEIS based on a six-tuple array. (authors)

  3. A method for building and evaluating formal specifications of object-oriented conceptual models of database systems

    NARCIS (Netherlands)

    Wieringa, Roelf J.

    1993-01-01

    This report describes a method called MCM (Method for Conceptual Modeling) for building and evaluating formal specifications of object-oriented models of database system behavior. An important aim of MCM is to bridge the gap between formal specification and informal understanding. Building a MCM

  4. Designing and Development of an Imitation Model of a Multi-Tenant Database Cluster

    Directory of Open Access Journals (Sweden)

    E. A. Boytsov

    2013-01-01

    Full Text Available One of the main trends of recent years in software design is a shift to a Software as a Service (SaaS paradigm which brings a number of advantages for both software developers and end users. However, along with these benefits this transition brings new architectural challenges. One of such challenges is the implementation of a data storage that would meet the needs of a service-provider, at the same time providing a fairly simple application programming interface for software developers. In order to develop effective solutions in this area, the architectural features of cloud-based applications should be taken into account. Among others, such key features are the need for scalability and quick adaptation to changing conditions. This paper provides a brief analysis of the problems in the field of cloud data storage systems based on the relational model and it proposes the concept of database cluster designed for applications with a multi-tenant architecture. Besides, the article describes a simulation model of such a cluster, as well as the main stages of its development and the main principles forming its foundation.

  5. The Mouse Genome Database (MGD): facilitating mouse as a model for human biology and disease.

    Science.gov (United States)

    Eppig, Janan T; Blake, Judith A; Bult, Carol J; Kadin, James A; Richardson, Joel E

    2015-01-01

    The Mouse Genome Database (MGD, http://www.informatics.jax.org) serves the international biomedical research community as the central resource for integrated genomic, genetic and biological data on the laboratory mouse. To facilitate use of mouse as a model in translational studies, MGD maintains a core of high-quality curated data and integrates experimentally and computationally generated data sets. MGD maintains a unified catalog of genes and genome features, including functional RNAs, QTL and phenotypic loci. MGD curates and provides functional and phenotype annotations for mouse genes using the Gene Ontology and Mammalian Phenotype Ontology. MGD integrates phenotype data and associates mouse genotypes to human diseases, providing critical mouse-human relationships and access to repositories holding mouse models. MGD is the authoritative source of nomenclature for genes, genome features, alleles and strains following guidelines of the International Committee on Standardized Genetic Nomenclature for Mice. A new addition to MGD, the Human-Mouse: Disease Connection, allows users to explore gene-phenotype-disease relationships between human and mouse. MGD has also updated search paradigms for phenotypic allele attributes, incorporated incidental mutation data, added a module for display and exploration of genes and microRNA interactions and adopted the JBrowse genome browser. MGD resources are freely available to the scientific community. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  6. Spectral database constitutive representation within a spectral micromechanical solver for computationally efficient polycrystal plasticity modelling

    Science.gov (United States)

    Eghtesad, Adnan; Zecevic, Miroslav; Lebensohn, Ricardo A.; McCabe, Rodney J.; Knezevic, Marko

    2018-02-01

    We present the first successful implementation of a spectral crystal plasticity (SCP) model into a spectral visco-plastic fast Fourier transform (VPFFT) full-field solver. The SCP database allows for non-iterative retrieval of constitutive solutions for a crystal of any orientation subjected to any state of deformation at every voxel representing an FFT point of the overall voxel-based polycrystalline microstructure. Details of this approach are described and validated through example case studies involving a rigid-visco-plastic response and microstructure evolution of polycrystalline copper. It is observed that the novel implementation is able to speed up the overall VPFFT calculations because the conventional Newton-Raphson iterative solution procedure for single crystals in VPFFT is replaced by the more efficient SCP constitutive representation of the solution. As a result, the implementation facilitates efficient simulations of large voxel-based microstructures. Additionally, it provides an incentive for conceiving a multi-level SCP-VPFFT computational scheme. Here, every FFT point of the model is a polycrystal whose response is calculated using a Taylor-type homogenization.
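
    The core idea, replacing a per-voxel iterative solve with retrieval from a precomputed database, can be sketched generically. The nearest-neighbour lookup and the toy one-dimensional "response" below are deliberate simplifications of the interpolation over orientation and deformation space that a real SCP database performs:

```python
def build_table(response_fn, grid):
    """Precompute the constitutive response once on a coarse grid
    (the 'database'); response_fn stands in for the expensive
    per-crystal Newton-Raphson solve."""
    return {g: response_fn(g) for g in grid}

def lookup(table, x):
    """Non-iterative retrieval by nearest precomputed grid point,
    replacing an iterative solve at every FFT voxel."""
    key = min(table, key=lambda g: abs(g - x))
    return table[key]

# Toy 1-D "response": stress as the square of a strain-like variable.
table = build_table(lambda s: s * s, [0, 1, 2, 3])
```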

  7. Data-based modelling and environmental sensitivity of vegetation in China

    Science.gov (United States)

    Wang, H.; Prentice, I. C.; Ni, J.

    2013-09-01

    A process-oriented niche specification (PONS) model was constructed to quantify climatic controls on the distribution of ecosystems, based on the vegetation map of China. PONS uses general hypotheses about bioclimatic controls to provide a "bridge" between statistical niche models and more complex process-based models. Canonical correspondence analysis provided an overview of relationships between the abundances of 55 plant communities in 0.1° grid cells and associated mean values of 20 predictor variables. Of these, GDD0 (accumulated degree days above 0 °C), Cramer-Prentice α (an estimate of the ratio of actual to equilibrium evapotranspiration) and mGDD5 (mean temperature during the period above 5 °C) showed the greatest predictive power. These three variables were used to develop generalized linear models for the probability of occurrence of 16 vegetation classes, aggregated from the original 55 types by k-means clustering according to bioclimatic similarity. Each class was hypothesized to possess a unimodal relationship to each bioclimate variable, independently of the other variables. A simple calibration was used to generate vegetation maps from the predicted probabilities of the classes. Modelled and observed vegetation maps showed good to excellent agreement (κ = 0.745). A sensitivity study examined modelled responses of vegetation distribution to spatially uniform changes in temperature, precipitation and [CO2], the latter included via an offset to α (based on an independent, data-based light use efficiency model for forest net primary production). Warming shifted the boundaries of most vegetation classes northward and westward while temperate steppe and desert replaced alpine tundra and steppe in the southeast of the Tibetan Plateau. Increased precipitation expanded mesic vegetation at the expense of xeric vegetation. The effect of [CO2] doubling was roughly equivalent to increasing precipitation by ~ 30%, favouring woody vegetation types

  8. Topobathymetric elevation model development using a new methodology: Coastal National Elevation Database

    Science.gov (United States)

    Danielson, Jeffrey J.; Poppenga, Sandra K.; Brock, John C.; Evans, Gayla A.; Tyler, Dean; Gesch, Dean B.; Thatcher, Cindy A.; Barras, John

    2016-01-01

    During the coming decades, coastlines will respond to widely predicted sea-level rise, storm surge, and coastal inundation flooding from disastrous events. Because physical processes in coastal environments are controlled by the geomorphology of over-the-land topography and underwater bathymetry, many applications of geospatial data in coastal environments require detailed knowledge of the near-shore topography and bathymetry. In this paper, an updated methodology used by the U.S. Geological Survey Coastal National Elevation Database (CoNED) Applications Project is presented for developing coastal topobathymetric elevation models (TBDEMs) from multiple topographic data sources with adjacent intertidal topobathymetric and offshore bathymetric sources to generate seamlessly integrated TBDEMs. This repeatable, updatable, and logically consistent methodology assimilates topographic data (land elevation) and bathymetry (water depth) into a seamless coastal elevation model. Within the overarching framework, vertical datum transformations are standardized in a workflow that interweaves spatially consistent interpolation (gridding) techniques with a land/water boundary mask delineation approach. Output gridded raster TBDEMs are stacked into a file storage system of mosaic datasets within an Esri ArcGIS geodatabase for efficient updating while maintaining current and updated spatially referenced metadata. Topobathymetric data provide a required seamless elevation product for several science application studies, such as shoreline delineation, coastal inundation mapping, sediment transport, sea-level rise, storm surge models, and tsunami impact assessment. These detailed coastal elevation data are critical to depict regions prone to climate change impacts and are essential to planners and managers responsible for mitigating the associated risks and costs to both human communities and ecosystems. The CoNED methodology approach has been used to construct integrated TBDEM models

  9. JAK/STAT signalling--an executable model assembled from molecule-centred modules demonstrating a module-oriented database concept for systems and synthetic biology.

    Science.gov (United States)

    Blätke, Mary Ann; Dittrich, Anna; Rohr, Christian; Heiner, Monika; Schaper, Fred; Marwan, Wolfgang

    2013-06-01

    Mathematical models of molecular networks regulating biological processes in cells or organisms are most frequently designed as sets of ordinary differential equations. Various modularisation methods have been applied to reduce the complexity of models, to analyse their structural properties, to separate biological processes, or to reuse model parts. Taking the JAK/STAT signalling pathway, with the extensive combinatorial cross-talk of its components, as a case study, we take a natural approach to modularisation by creating one module for each biomolecule. Each module consists of a Petri net and associated metadata and is organised in a database publicly accessible through a web interface (). The Petri net describes the reaction mechanism of a given biomolecule and its functional interactions with other components, including relevant conformational states. The database is designed to support the curation, documentation, version control, and update of individual modules, and to assist the user in automatically composing complex models from modules. Biomolecule-centred modules, associated metadata, and database support together allow the automatic creation of models by considering differential gene expression in given cell types or under certain physiological conditions or states of disease. Modularity also facilitates exploring the consequences of alternative molecular mechanisms by comparative simulation of automatically created models, even for users without mathematical skills. Models may be selectively executed as ODE systems, stochastic models, qualitative models, or hybrids, and exported in the SBML format. The fully automated generation of models of redesigned networks by metadata-guided modification of modules representing biomolecules with mutated function or specificity is proposed.
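    The module idea above can be illustrated with a minimal Petri-net executor: places hold tokens, and a transition fires when its pre-places carry enough tokens. The place and transition names below are invented for illustration and are not drawn from the actual JAK/STAT database.

```python
# Minimal Petri-net step semantics, sketching how a molecule-centred
# module (e.g. receptor/ligand binding) could be executed qualitatively.
def enabled(marking, pre):
    """A transition is enabled if every pre-place holds enough tokens."""
    return all(marking[p] >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Consume tokens from pre-places and produce tokens in post-places."""
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Illustrative transition: cytokine + receptor -> receptor/ligand complex
pre = {"cytokine": 1, "receptor": 1}
post = {"complex": 1}
marking = {"cytokine": 2, "receptor": 1, "complex": 0}
assert enabled(marking, pre)
marking = fire(marking, pre, post)
print(marking)  # {'cytokine': 1, 'receptor': 0, 'complex': 1}
```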

  10. Anatomical database generation for radiation transport modeling from computed tomography (CT) scan data

    International Nuclear Information System (INIS)

    Margle, S.M.; Tinnel, E.P.; Till, L.E.; Eckerman, K.F.; Durfee, R.C.

    1989-01-01

    Geometric models of the anatomy are used routinely in calculations of the radiation dose in organs and tissues of the body. Development of such models has been hampered by a lack of detailed anatomical information on children, and the models themselves have been limited to quadratic conic sections. This summary reviews the development of an image processing workstation used to extract anatomical information from routine diagnostic CT procedures. A standard IBM PC/AT microcomputer has been augmented with an automatically loading 9-track magnetic tape drive, an 8-bit 1024 x 1024 pixel graphics adapter/monitor/film recording package, a mouse/trackball assembly, dual 20 MB removable cartridge media, a 72 MB disk drive, and a printer. Software utilized by the workstation includes a Geographic Information System (modified for manipulation of CT images), CAD software, imaging software, and various modules to ease data transfer among the software packages. 5 refs., 3 figs.

  11. MARRVEL: Integration of Human and Model Organism Genetic Resources to Facilitate Functional Annotation of the Human Genome.

    Science.gov (United States)

    Wang, Julia; Al-Ouran, Rami; Hu, Yanhui; Kim, Seon-Young; Wan, Ying-Wooi; Wangler, Michael F; Yamamoto, Shinya; Chao, Hsiao-Tuan; Comjean, Aram; Mohr, Stephanie E; Perrimon, Norbert; Liu, Zhandong; Bellen, Hugo J

    2017-06-01

    One major challenge encountered with interpreting human genetic variants is the limited understanding of the functional impact of genetic alterations on biological processes. Furthermore, there remains an unmet demand for an efficient survey of the wealth of information on human homologs in model organisms across numerous databases. To efficiently assess the large volume of publicly available information, it is important to provide a concise summary of the most relevant information in a rapid user-friendly format. To this end, we created MARRVEL (model organism aggregated resources for rare variant exploration). MARRVEL is a publicly available website that integrates information from six human genetic databases and seven model organism databases. For any given variant or gene, MARRVEL displays information from OMIM, ExAC, ClinVar, Geno2MP, DGV, and DECIPHER. Importantly, it curates model organism-specific databases to concurrently display a concise summary regarding the human gene homologs in budding and fission yeast, worm, fly, fish, mouse, and rat on a single webpage. Experiment-based information on tissue expression, protein subcellular localization, biological process, and molecular function for the human gene and homologs in the seven model organisms are arranged into a concise output. Hence, rather than visiting multiple separate databases for variant and gene analysis, users can obtain important information by searching once through MARRVEL. Altogether, MARRVEL dramatically improves efficiency and accessibility to data collection and facilitates analysis of human genes and variants by cross-disciplinary integration of 18 million records available in public databases to facilitate clinical diagnosis and basic research. Copyright © 2017 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
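    The aggregation pattern MARRVEL describes (query once, collect whatever each source reports) can be sketched in a few lines. The source names are real databases, but the records and the `aggregate` helper below are invented for illustration; the actual site queries live web services.

```python
# Hypothetical sketch of cross-database aggregation for a single gene:
# each source that knows the gene contributes one entry to the summary.
def aggregate(gene, sources):
    summary = {"gene": gene}
    for name, db in sources.items():
        if gene in db:
            summary[name] = db[gene]
    return summary

sources = {
    "OMIM": {"TP53": "Li-Fraumeni syndrome"},
    "ClinVar": {"TP53": "pathogenic variants reported"},
    "FlyBase": {},  # no fly record in this toy example
}
print(aggregate("TP53", sources))
```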

  12. Investigation of an artificial intelligence technology--Model trees. Novel applications for an immediate release tablet formulation database.

    Science.gov (United States)

    Shao, Q; Rowe, R C; York, P

    2007-06-01

    This study has investigated an artificial intelligence technology - model trees - as a modelling tool applied to an immediate release tablet formulation database. The modelling performance was compared with artificial neural networks, which have been well established and widely applied in the pharmaceutical product formulation field. The predictability of the generated models was validated on unseen data and judged by the correlation coefficient R². Output from the model tree analyses produced multivariate linear equations which predicted tablet tensile strength, disintegration time, and drug dissolution profiles of similar quality to neural network models. However, additional and valuable knowledge hidden in the formulation database was extracted from these equations. It is concluded that, as a transparent technology, model trees are useful tools for formulators.
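    The transparency the abstract highlights comes from model trees fitting linear equations in each leaf of a decision tree. A toy one-level version can be sketched with numpy; this illustrates the piecewise-linear idea only, not the M5 model-tree algorithm used in practice.

```python
import numpy as np

def fit_linear(x, y):
    """Least-squares line through (x, y); returns coefficients and SSE."""
    A = np.column_stack([x, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return coef, float(resid @ resid)

def one_split_model_tree(x, y):
    """Toy one-level model tree: pick the split on x that minimises the
    summed squared error of a linear model fitted in each leaf."""
    best = None
    for t in np.unique(x)[1:-1]:
        left, right = x < t, x >= t
        _, e_left = fit_linear(x[left], y[left])
        _, e_right = fit_linear(x[right], y[right])
        if best is None or e_left + e_right < best[1]:
            best = (t, e_left + e_right)
    return best[0]

x = np.array([0., 1, 2, 3, 4, 5, 6, 7])
y = np.array([0., 1, 2, 3, 10, 12, 14, 16])  # slope changes at x = 4
print(one_split_model_tree(x, y))
```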

  13. Modelling the fate of persistent organic pollutants in Europe: parameterisation of a gridded distribution model

    International Nuclear Information System (INIS)

    Prevedouros, Konstantinos; MacLeod, Matthew; Jones, Kevin C.; Sweetman, Andrew J.

    2004-01-01

    A regionally segmented multimedia fate model for the European continent is described, together with an illustrative steady-state case study examining the fate of γ-HCH (lindane) based on 1998 emission data. The study builds on the regionally segmented BETR North America model structure and describes the regional segmentation and parameterisation for Europe. The European continent is described by a 5 deg. x 5 deg. grid, leading to 50 regions, together with four perimetric boxes representing regions buffering the European environment. Each zone comprises seven compartments: upper and lower atmosphere, soil, vegetation, fresh water and sediment, and coastal water. Inter-region flows of air and water are described, exploiting information originating from GIS databases and other georeferenced data. The model is primarily designed to describe the fate of Persistent Organic Pollutants (POPs) within the European environment by examining chemical partitioning and degradation in each region, and inter-region transport, either under steady-state conditions or fully dynamically. A test case scenario is presented which examines the fate of estimated spatially resolved atmospheric emissions of lindane throughout Europe within the lower atmosphere and surface soil compartments. In accordance with the predominant wind direction in Europe, the model predicts high concentrations close to the major sources as well as towards Central and Northeast regions. Elevated soil concentrations in Scandinavian soils provide further evidence of the potential for increased scavenging by forests and subsequent accumulation by organic-rich terrestrial surfaces. Initial model predictions have revealed a factor of 5-10 underestimation of lindane concentrations in the atmosphere. This is explained by an underestimation of source strength and/or an underestimation of European background levels. 
The model presented can further be used to predict deposition fluxes and chemical inventories, and it
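    At steady state, a segmented fate model of this kind reduces to a linear system: for each box, emissions balance degradation plus net inter-box transfer. A two-box sketch with invented rate constants and emissions, in the spirit of (but far simpler than) the BETR-style model described above:

```python
import numpy as np

# Toy steady-state two-box fate model: for each box,
#   0 = emissions + transfers in - (degradation + transfers out).
# All rate constants (1/h) and emissions (kg/h) are illustrative.
k_deg = np.array([0.1, 0.05])   # degradation rate in box 1, box 2
k_12, k_21 = 0.02, 0.01         # exchange: box 1 -> 2, box 2 -> 1
E = np.array([1.0, 0.0])        # emission only into box 1

# Linear system A m = E, where A collects all loss/transfer terms.
A = np.array([
    [k_deg[0] + k_12, -k_21],
    [-k_12,           k_deg[1] + k_21],
])
m = np.linalg.solve(A, E)
print(m)  # steady-state masses in each box
```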

  14. EPAUS9R - An Energy Systems Database for use with the Market Allocation (MARKAL) Model

    Science.gov (United States)

    EPA’s MARKAL energy system databases estimate future-year technology dispersals and associated emissions. These databases are valuable tools for exploring a variety of future scenarios for the U.S. energy-production systems that can impact climate change c

  15. A Staffing Profile of United States Online Database Producers: A Model and Discussion of Educational Implications.

    Science.gov (United States)

    Lowry, Glenn R.

    1983-01-01

    Reports the results of a survey of United States online database producers designed to determine the number of personnel employed in the intellectual production of databases and to provide a basis for a model of the people employed in frequently recurring staff categories. Implications for education based on needs suggested by staffing patterns are examined.…

  16. A Database Query Processing Model in Peer-To-Peer Network ...

    African Journals Online (AJOL)

    Peer-to-peer databases are becoming more prevalent on the internet for sharing and distributing applications, documents, files, and other digital media. The problem associated with answering large-scale ad hoc analysis queries, aggregation queries, on these databases poses unique challenges. This paper presents an ...

  17. Spatio-Semantic Comparison of Large 3d City Models in Citygml Using a Graph Database

    Science.gov (United States)

    Nguyen, S. H.; Yao, Z.; Kolbe, T. H.

    2017-10-01

    A city may have multiple CityGML documents recorded at different times or surveyed by different users. To analyse the city's evolution over a given period of time, as well as to update or edit the city model without negating modifications made by other users, it is of utmost importance to first compare, detect and locate spatio-semantic changes between CityGML datasets. This is however difficult due to the fact that CityGML elements belong to a complex hierarchical structure containing multi-level deep associations, which can basically be considered as a graph. Moreover, CityGML allows multiple syntactic ways to define an object, leading to syntactic ambiguities in the exchange format. Furthermore, CityGML is capable of including not only 3D urban objects' graphical appearances but also their semantic properties. Since to date, no known algorithm is capable of detecting spatio-semantic changes in CityGML documents, a frequent approach is to replace the older models completely with the newer ones, which not only costs computational resources, but also loses track of collaborative and chronological changes. Thus, this research proposes an approach capable of comparing two arbitrarily large-sized CityGML documents on both the semantic and geometric levels. Detected deviations are then attached to their respective sources and can easily be retrieved on demand. As a result, updating a 3D city model using this approach is much more efficient as only real changes are committed. To achieve this, the research employs a graph database as the main data structure for storing and processing CityGML datasets in three major steps: mapping, matching and updating. The mapping process transforms input CityGML documents into respective graph representations. The matching process compares these graphs and attaches edit operations on the fly. Found changes can then be executed using the Web Feature Service (WFS), the standard interface for updating geographical features across the web.
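    The matching step, stripped to its essentials, is a diff over two object graphs that emits edit operations (insert, delete, update). The sketch below uses flat {id: {attribute: value}} dicts as stand-ins for city-object graphs; real CityGML matching also compares geometry and deep associations.

```python
# Diff two tiny "city object" maps into edit operations, illustrating
# the matching step of the approach described above (toy data only).
def diff(old, new):
    ops = []
    for oid in old.keys() - new.keys():
        ops.append(("delete", oid))
    for oid in new.keys() - old.keys():
        ops.append(("insert", oid))
    for oid in old.keys() & new.keys():
        for attr in set(old[oid]) | set(new[oid]):
            if old[oid].get(attr) != new[oid].get(attr):
                ops.append(("update", oid, attr))
    return sorted(ops)

old = {"bldg1": {"height": 10.0}, "bldg2": {"height": 7.5}}
new = {"bldg1": {"height": 12.5}, "bldg3": {"height": 5.0}}
print(diff(old, new))
```

Committing only these operations, rather than replacing the whole model, is what makes the update step efficient.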

  18. Development of Pipeline Database and CAD Model for Selection of Core Security Zone in Nuclear Power Plants

    International Nuclear Information System (INIS)

    Jung, Woo Sik; Choi, Seong Soo

    2010-06-01

    The goals of this report are (1) to develop a piping database for safety class 1 and 2 piping in Ulchin Units 3 and 4 in order to identify vital areas, (2) to develop a CAD model for vital area visualization, and (3) to realize a 3D program for a virtual reality rendering of vital areas. We have performed a piping segmentation and an accident consequence analysis and developed a piping database. We have also developed a CAD model for the primary auxiliary building, containment building, secondary auxiliary building, and turbine building

  19. Sediment-hosted gold deposits of the world: database and grade and tonnage models

    Science.gov (United States)

    Berger, Vladimir I.; Mosier, Dan L.; Bliss, James D.; Moring, Barry C.

    2014-01-01

    All sediment-hosted gold deposits (as a single population) share one characteristic—they all have disseminated micron-sized invisible gold in sedimentary rocks. Sediment-hosted gold deposits are recognized in the Great Basin province of the western United States and in China along with a few recognized deposits in Indonesia, Iran, and Malaysia. Three new grade and tonnage models for sediment-hosted gold deposits are presented in this paper: (1) a general sediment-hosted gold type model, (2) a Carlin subtype model, and (3) a Chinese subtype model. These models are based on grade and tonnage data from a database compilation of 118 sediment-hosted gold deposits including a total of 123 global deposits. The new general grade and tonnage model for sediment-hosted gold deposits (n=118) has a median tonnage of 5.7 million metric tonnes (Mt) and a gold grade of 2.9 grams per tonne (g/t). This new grade and tonnage model is remarkable in that the estimated parameters of the resulting grade and tonnage distributions are comparable to the previous model of Mosier and others (1992). A notable change is in the reporting of silver in more than 10 percent of deposits; moreover, the previous model had not considered deposits in China. From this general grade and tonnage model, two significantly different subtypes of sediment-hosted gold deposits are differentiated: Carlin and Chinese. The Carlin subtype includes 88 deposits in the western United States, Indonesia, Iran, and Malaysia, with median tonnage and grade of 7.1 Mt and 2.0 g/t Au, respectively. The silver grade is 0.78 g/t Ag for the 10th percentile of deposits. The Chinese subtype represents 30 deposits in China, with a median tonnage of 3.9 Mt and median grade of 4.6 g/t Au. Important differences are recognized in the mineralogy and alteration of the two sediment-hosted gold subtypes, such as increased sulfide minerals in the Chinese subtype and decalcification alteration dominant in the Carlin type. We therefore
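    Grade and tonnage models summarise a deposit population by percentiles of its (typically lognormal) tonnage and grade distributions, which is why the abstract reports medians and a 10th percentile. The numbers below are invented toy data, not the USGS compilation:

```python
import numpy as np

# Toy deposit population; a real grade-tonnage model would use the
# full database of deposits (n = 118 in the general model above).
tonnage_Mt = np.array([0.5, 1.2, 2.0, 3.9, 5.7, 7.1, 12.0, 40.0, 90.0])
grade_gpt = np.array([0.8, 1.5, 2.0, 2.9, 3.3, 4.6, 5.0, 6.2, 8.0])

print("median tonnage:", np.median(tonnage_Mt), "Mt")
print("median grade:", np.median(grade_gpt), "g/t")
# A low percentile, analogous to the 10th-percentile silver grade reported
print("P10 grade:", np.percentile(grade_gpt, 10), "g/t")
```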

  20. Pediatric Contractures in Burn Injury: A Burn Model System National Database Study.

    Science.gov (United States)

    Goverman, Jeremy; Mathews, Katie; Goldstein, Richard; Holavanahalli, Radha; Kowalske, Karen; Esselman, Peter; Gibran, Nicole; Suman, Oscar; Herndon, David; Ryan, Colleen M; Schneider, Jeffrey C

    Joint contractures are a major cause of morbidity and functional deficit. The incidence of postburn contractures and their associated risk factors in the pediatric population has not yet been reported. This study examines the incidence and severity of contractures in a large, multicenter, pediatric burn population. Associated risk factors for the development of contractures are determined. Data from the National Institute on Disability and Rehabilitation Research Burn Model System database, for pediatric (younger than 18 years) burn survivors from 1994 to 2003, were analyzed. Demographic and medical data were collected on each subject. The primary outcome measures included the presence of contractures, number of contractures per patient, and severity of contractures at each of nine locations (shoulder, elbow, hip, knee, ankle, wrist, neck, lumbar, and thoracic) at time of hospital discharge. Regression analysis was performed to determine predictors of the presence, severity, and number of contractures; significant predictors included total body surface area (TBSA) burned and TBSA grafted. This is the first study to report the epidemiology of postburn contractures in the pediatric population. Approximately one quarter of children with a major burn injury developed a contracture at hospital discharge, and this could potentially increase as the child grows. Contractures develop despite early therapeutic interventions such as positioning and splinting; therefore, it is essential that we identify novel and more effective prevention strategies.

  1. The Neotoma Paleoecology Database

    Science.gov (United States)

    Grimm, E. C.; Ashworth, A. C.; Barnosky, A. D.; Betancourt, J. L.; Bills, B.; Booth, R.; Blois, J.; Charles, D. F.; Graham, R. W.; Goring, S. J.; Hausmann, S.; Smith, A. J.; Williams, J. W.; Buckland, P.

    2015-12-01

    The Neotoma Paleoecology Database (www.neotomadb.org) is a multiproxy, open-access, relational database that includes fossil data for the past 5 million years (the late Neogene and Quaternary Periods). Modern distributional data for various organisms are also being made available for calibration and paleoecological analyses. The project is a collaborative effort among individuals from more than 20 institutions worldwide, including domain scientists representing a spectrum of Pliocene-Quaternary fossil data types, as well as experts in information technology. Working groups are active for diatoms, insects, ostracodes, pollen and plant macroscopic remains, testate amoebae, rodent middens, vertebrates, age models, geochemistry and taphonomy. Groups are also active in developing online tools for data analyses and for developing modules for teaching at different levels. A key design concept of NeotomaDB is that stewards for various data types are able to remotely upload and manage data. Cooperatives for different kinds of paleo data, or from different regions, can appoint their own stewards. Over the past year, much progress has been made on development of the steward software-interface that will enable this capability. The steward interface uses web services that provide access to the database. More generally, these web services enable remote programmatic access to the database, which both desktop and web applications can use and which provide real-time access to the most current data. Use of these services can alleviate the need to download the entire database, which can be out-of-date as soon as new data are entered. In general, the Neotoma web services deliver data either from an entire table or from the results of a view. Upon request, new web services can be quickly generated. Future developments will likely expand the spatial and temporal dimensions of the database. NeotomaDB is open to receiving new datasets and stewards from the global Quaternary community
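    The web services described above return structured responses that client applications consume directly, avoiding a full database download. The sketch below parses an invented JSON payload shaped loosely like such a site-listing response; the field names are illustrative, not the documented Neotoma API schema.

```python
import json

# Invented example of a JSON web-service response for a site query.
# A real client would fetch this over HTTP from the service endpoint.
payload = json.dumps({
    "success": 1,
    "data": [
        {"SiteID": 101, "SiteName": "Example Bog", "LatitudeNorth": 45.2},
        {"SiteID": 102, "SiteName": "Sample Lake", "LatitudeNorth": 47.9},
    ],
})

response = json.loads(payload)
if response["success"]:
    names = [site["SiteName"] for site in response["data"]]
    print(names)  # ['Example Bog', 'Sample Lake']
```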

  2. MODELING OF MANAGEMENT PROCESSES IN AN ORGANIZATION

    Directory of Open Access Journals (Sweden)

    Stefan Iovan

    2016-05-01

    Full Text Available When driving any major change within an organization, strategy and execution are intrinsic to a project’s success. Nevertheless, closing the gap between strategy and execution remains a challenge for many organizations [1]. Companies tend to focus more on execution than strategy for quick results, instead of taking the time needed to understand the parts that make up the whole, so the right execution plan can be put in place to deliver the best outcomes. A large part of this is understanding that business operations don’t fit neatly within the traditional organizational hierarchy. Business processes are often messy, collaborative efforts that cross teams, departments and systems, making them difficult to manage within a hierarchical structure [2]. Business process management (BPM) fills this gap by redefining an organization according to its end-to-end processes, so opportunities for improvement can be identified and processes streamlined for growth, revenue and transformation. This white paper provides guidelines on what to consider when using business process applications to solve BPM initiatives, and the unique capabilities software systems provide that can help ensure both a project’s success and the success of the organization as a whole, whether in medium and small businesses, big companies, or even governmental organizations [2].

  3. Self-Organizing Map Models of Language Acquisition

    Directory of Open Access Journals (Sweden)

    Ping Li

    2013-11-01

    Full Text Available Connectionist models have had a profound impact on theories of language. While most early models were inspired by the classic PDP architecture, recent models of language have explored various other types of models, including self-organizing models for language acquisition. In this paper we aim at providing a review of the latter type of models, and highlight a number of simulation experiments that we have conducted based on these models. We show that self-organizing connectionist models can provide significant insights into long-standing debates in both monolingual and bilingual language development.
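    The self-organizing models reviewed above share a simple core training rule: each input pulls its best-matching map unit (and that unit's neighbours) toward it. A minimal 1-D self-organizing map over 2-D inputs can be sketched in numpy; real models of language acquisition use far richer input representations.

```python
import numpy as np

# Toy self-organizing map (SOM): 5 map units in a 1-D lattice,
# trained on random 2-D inputs. Parameters are illustrative.
rng = np.random.default_rng(0)
weights = rng.random((5, 2))            # unit weight vectors
data = rng.random((200, 2))             # training inputs

for t, x in enumerate(data):
    lr = 0.5 * (1 - t / len(data))      # decaying learning rate
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best match
    for j in range(len(weights)):
        h = np.exp(-((j - bmu) ** 2) / 2.0)  # neighbourhood kernel
        weights[j] += lr * h * (x - weights[j])

print(np.round(weights, 2))
```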

  4. Physiological Information Database (PID)

    Science.gov (United States)

    EPA has developed a physiological information database (created using Microsoft ACCESS) intended to be used in PBPK modeling. The database contains physiological parameter values for humans from early childhood through senescence as well as similar data for laboratory animal spec...

  5. A Database for Propagation Models and Conversion to C++ Programming Language

    Science.gov (United States)

    Kantak, Anil V.; Angkasa, Krisjani; Rucker, James

    1996-01-01

    The telecommunications system design engineer generally needs a quantification of the effects of the propagation medium (definition of the propagation channel) to design an optimal communications system. To obtain the definition of the channel, the systems engineer generally has a few choices. A search of the relevant publications such as the IEEE Transactions, CCIR reports, the NASA propagation handbook, etc., may be conducted to find the desired channel values. This method may need excessive amounts of time and effort on the systems engineer's part, and there is a possibility that the search may not even yield the needed results. To help researchers and systems engineers, it was recommended by the conference participants of NASA Propagation Experimenters (NAPEX) XV (London, Ontario, Canada, June 28 and 29, 1991) that software should be produced containing propagation models and the necessary prediction methods for most propagation phenomena. Moreover, the software should be flexible enough for the user to make slight changes to the models without expending substantial effort in programming. In the past few years, software was produced to fit these requirements as well as possible. The software was distributed to all NAPEX participants for evaluation and use; participant reactions and suggestions were gathered and used to improve subsequent releases of the software. The existing database program is in the Microsoft Excel application software and works fine within the guidelines of that environment; however, recently there have been some questions about the robustness and survivability of the Excel software in the ever changing (hopefully improving) world of software packages.
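    As an example of the kind of model such a propagation database would hold, the standard free-space path loss formula (with distance in km and frequency in MHz) is easy to encode; this particular formula is a well-known textbook model and is shown here for illustration, not as the contents of the NAPEX software.

```python
import math

def free_space_path_loss_db(distance_km, freq_mhz):
    """Free-space path loss in dB:
    FSPL = 32.45 + 20*log10(d_km) + 20*log10(f_MHz)."""
    return 32.45 + 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz)

# Geostationary-satellite-like link: 36,000 km at 12 GHz (12,000 MHz)
print(round(free_space_path_loss_db(36000, 12000), 1))  # about 205.2 dB
```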

  6. A Modeling methodology for NoSQL Key-Value databases

    Directory of Open Access Journals (Sweden)

    Gerardo ROSSEL

    2017-08-01

    Full Text Available In recent years, there has been an increasing interest in the field of non-relational databases. However, far too little attention has been paid to design methodology. Key-value data stores are an important component of a class of non-relational technologies that are grouped under the name of NoSQL databases. The aim of this paper is to propose a design methodology for this type of database that allows overcoming the limitations of the traditional techniques. The proposed methodology leads to a clean design that also allows for better data management and consistency.
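    A common design idiom in key-value stores, of the kind such a methodology must produce, is to encode the access path in a composite key so that related records cluster together and can be retrieved by prefix. A plain dict stands in for the store below; the entity names and key layout are invented for illustration.

```python
# Sketch of composite-key design for a key-value store:
# key = "<entity>:<id>:<field>", so all fields of one entity share a prefix.
store = {}

def put(entity, entity_id, field, value):
    store[f"{entity}:{entity_id}:{field}"] = value

put("user", 42, "name", "Ada")
put("user", 42, "email", "ada@example.org")
put("user", 7, "name", "Grace")

# "Query": gather all fields for user 42 by scanning the key prefix
prefix = "user:42:"
user42 = {k[len(prefix):]: v for k, v in store.items() if k.startswith(prefix)}
print(user42)  # {'name': 'Ada', 'email': 'ada@example.org'}
```

Designing keys around the queries the application will run, rather than around normalized entities, is the central shift from relational design that the paper's methodology addresses.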

  7. Database Description - PLACE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database name: PLACE. Alternative name: A Database… Contact: National Institute of Agrobiological Sciences, Kannondai, Tsukuba, Ibaraki 305-8602, Japan. Database classification: Plant databases. Organism: Tracheophyta (Taxonomy ID: 58023). Database description: PLACE is a database of motifs found in plant cis-acting regulatory DNA elements… that have been identified in these motifs in other genes or in other plant species in later publications.

  8. Consolidated Human Activity Database (CHAD) for use in human exposure and health studies and predictive models

    Science.gov (United States)

    EPA scientists have compiled detailed data on human behavior from 22 separate exposure and time-use studies into CHAD. The database includes more than 54,000 individual study days of detailed human behavior.

  9. National Solar Radiation Database (NSRDB) SolarAnywhere 10 km Model Output for 1989 to 2009

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The National Solar Radiation Database (NSRDB) was produced by the National Renewable Energy Laboratory under the U.S. Department of Energy's Office of Energy...

  10. Assessment of vapor pressure estimation methods for secondary organic aerosol modeling

    Science.gov (United States)

    Camredon, Marie; Aumont, Bernard

    Vapor pressure (Pvap) is a fundamental property controlling the gas-particle partitioning of organic species. Therefore this pure substance property is a critical parameter for modeling the formation of secondary organic aerosols (SOA). Structure-property relationships are needed to estimate Pvap because (i) very few experimental data for Pvap are available for semi-volatile organics and (ii) the number of contributors to SOA is extremely large. The Lee and Kesler method, a modified form of the Mackay equation, the Myrdal and Yalkowsky method and the UNIFAC-pLo method are commonly used to estimate Pvap in gas-particle partitioning models. The objectives of this study are (i) to assess the accuracy of these four methods on a large experimental database selected to be representative of SOA contributors and (ii) to compare the estimates provided by the various methods for compounds detected in the aerosol phase.
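    Simple Pvap estimation methods of the kind compared above build on the Clausius-Clapeyron relation, extrapolating from the normal boiling point with an assumed enthalpy of vaporization. A minimal sketch of that backbone (not any of the four specific methods, which add corrections for entropy and heteroatom effects):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def pvap_clausius_clapeyron(T, Tb, dHvap):
    """Estimate vapor pressure (atm) at temperature T from the normal
    boiling point Tb (where Pvap = 1 atm), assuming a constant enthalpy
    of vaporization dHvap (J/mol)."""
    return math.exp(-dHvap / R * (1 / T - 1 / Tb))

# Water-like example: Tb = 373 K, dHvap ~ 40.7 kJ/mol, evaluated at 298 K
print(pvap_clausius_clapeyron(298.0, 373.0, 40700.0))
```

The constant-dHvap assumption is the main source of error, which is precisely why structure-property methods with additional corrections are assessed in studies like this one.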

  11. The initiative on Model Organism Proteomes (iMOP) Session

    DEFF Research Database (Denmark)

    Schrimpf, Sabine P; Mering, Christian von; Bendixen, Emøke

    2012-01-01

    iMOP – the Initiative on Model Organism Proteomes – was accepted as a new HUPO initiative at the Ninth HUPO meeting in Sydney in 2010. A goal of iMOP is to integrate research groups working on a great diversity of species into a model organism community. At the Tenth HUPO meeting in Geneva...

  12. Competency modeling targeted on promotion of organizations towards VO involvement

    NARCIS (Netherlands)

    Ermilova, E.; Afsarmanesh, H.

    2008-01-01

    During the last decades, a number of models have been introduced in research addressing different perspectives of organizations’ competencies in collaborative networks. This paper introduces the "4C-model", developed to address the competencies of organizations involved in Virtual organizations Breeding

  13. Modeling the Explicit Chemistry of Anthropogenic and Biogenic Organic Aerosols

    Energy Technology Data Exchange (ETDEWEB)

    Madronich, Sasha [Univ. Corporation for Atmospheric Research, Boulder, CO (United States)

    2015-12-09

    The atmospheric burden of Secondary Organic Aerosols (SOA) remains one of the most important yet uncertain aspects of the radiative forcing of climate. This grant focused on improving our quantitative understanding of SOA formation and evolution, by developing, applying, and improving a highly detailed model of atmospheric organic chemistry, the Generation of Explicit Chemistry and Kinetics of Organics in the Atmosphere (GECKO-A) model. Eleven (11) publications have resulted from this grant.

  14. Database and Library Development of Organic Species using Gas Chromatography and Mass Spectral Measurements in Support of the Mars Science Laboratory

    Science.gov (United States)

    Garcia, Raul; Mahaffy, Paul; Misra, Prabhakar

    2010-02-01

    Our work involves the development of an organic contaminants database that will allow us to determine which compounds are found here on Earth and would be inadvertently detected in Mars soil and gaseous samples as impurities. It will be used for the Sample Analysis at Mars (SAM) instrumentation analysis on the Mars Science Laboratory (MSL) rover scheduled for launch in 2011. In order to develop a comprehensive target database, we utilize the NIST Mass Spectral Library, the Automated Mass Spectral Deconvolution and Identification System (AMDIS) and Ion Fingerprint Deconvolution (IFD) software to analyze the GC-MS data. We have analyzed data from commercial samples, such as paint and polymers, which have not been implemented into the rover, and are now analyzing actual pyrolysis data from the rover. We have successfully developed an initial target compound database that will aid SAM in determining whether the components being analyzed come from Mars or are contaminants from either the rover itself or the Earth environment, and we are continuing to make improvements and add data to the target contaminants database.
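    Matching an observed mass spectrum against a library of known contaminants commonly uses a cosine (dot-product) similarity over m/z intensity patterns. The spectra below are invented stick patterns for illustration, not entries from the NIST library or the SAM target database.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two spectra given as {m/z: intensity}."""
    mz = set(a) | set(b)
    dot = sum(a.get(m, 0) * b.get(m, 0) for m in mz)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

sample = {43: 100, 58: 45, 71: 20}        # observed spectrum (invented)
library_hit = {43: 95, 58: 50, 71: 18}    # close library match (invented)
unrelated = {105: 100, 77: 60}            # no shared fragment ions

print(round(cosine_similarity(sample, library_hit), 3))
print(cosine_similarity(sample, unrelated))
```

A high score against a known terrestrial contaminant flags the component as an impurity rather than a Martian signal.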

  15. using stereochemistry models in teaching organic compounds

    African Journals Online (AJOL)

    Preferred Customer

    (Stereochemistry Model); the treatment had a significant effect: students taught using Stereochemistry Models … Apart from the heavy conceptual demand on the memory capacity required of the … colors and sizes compared with the sketches on the chart that appear to be mock forms of the compounds.

  16. Saccharomyces cerevisiae as a model organism: a comparative study.

    Directory of Open Access Journals (Sweden)

    Hiren Karathia

    Full Text Available BACKGROUND: Model organisms are used for research because they provide a framework on which to develop and optimize methods that facilitate and standardize analysis. Such organisms should be representative of the living beings for which they are to serve as proxy. However, in practice, a model organism is often selected ad hoc, and without considering its representativeness, because a systematic and rational method to include this consideration in the selection process is still lacking. METHODOLOGY/PRINCIPAL FINDINGS: In this work we propose such a method and apply it in a pilot study of strengths and limitations of Saccharomyces cerevisiae as a model organism. The method relies on the functional classification of proteins into different biological pathways and processes and on full proteome comparisons between the putative model organism and other organisms for which we would like to extrapolate results. Here we compare S. cerevisiae to 704 other organisms from various phyla. For each organism, our results identify the pathways and processes for which S. cerevisiae is predicted to be a good model to extrapolate from. We find that animals in general and Homo sapiens in particular are some of the non-fungal organisms for which S. cerevisiae is likely to be a good model in which to study a significant fraction of common biological processes. We validate our approach by correctly predicting which organisms are phenotypically more distant from S. cerevisiae with respect to several different biological processes. CONCLUSIONS/SIGNIFICANCE: The method we propose could be used to choose appropriate substitute model organisms for the study of biological processes in other species that are harder to study. For example, one could identify appropriate models to study either pathologies in humans or specific biological processes in species with a long development time, such as plants.
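    The full-proteome comparison described above boils down to scoring, per pathway or process, how much of the target organism's functional repertoire the model organism shares. A set-overlap (Jaccard) sketch with invented labels; the real method classifies whole proteomes, not hand-picked enzyme names.

```python
# Score how well one organism can proxy for another in a given process
# by overlapping the sets of functional roles present (toy labels only).
def overlap(model_set, target_set):
    """Jaccard similarity between two functional repertoires."""
    if not model_set | target_set:
        return 0.0
    return len(model_set & target_set) / len(model_set | target_set)

yeast_glycolysis = {"HXK", "PGI", "PFK", "FBA", "TPI", "GAPDH", "PYK"}
human_glycolysis = {"HXK", "PGI", "PFK", "FBA", "TPI", "GAPDH", "PYK", "PKLR"}
print(round(overlap(yeast_glycolysis, human_glycolysis), 2))  # high overlap
```

Processes scoring high would be those for which S. cerevisiae is predicted to be a good model to extrapolate from; low-scoring processes flag its limitations.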

  17. Database Description - Open TG-GATEs Pathological Image Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available Database name: Open TG-GATEs Pathological Image Database. Alternative name: -. DOI: 10.18908/lsdba.nbdc00954-0... Biomedical Innovation, 7-6-8, Saito-asagi, Ibaraki-city, Osaka 567-0085, Japan. TEL: 81-72-641-9826. Email: ... Database classification: Toxicogenomics Database. Organism: Rattus norvegicus (Taxonomy ID: 10116). Database description: On the pathological image database, over 53,000 high-resolution

  18. MODEL OF LEARNING ORGANIZATION IN BROADCASTING ORGANIZATION OF ISLAMIC REPUBLIC OF IRAN

    Directory of Open Access Journals (Sweden)

    Reza Najafbagy

    2010-11-01

Full Text Available This article tries to present a model of learning organization for the Iran Broadcasting Organization (IBO), which is under the management of the spiritual leader of Iran. The study is based on the characteristics of Peter Senge's original learning organization, namely personal mastery, mental models, shared vision, team learning and systems thinking. The methodology was a survey research employing a questionnaire among sample employees and managers of the Organization. Findings showed that the Organization is fairly far from an effective learning organization. Moreover, it seems that employees' performance in team learning and changes in mental models is more satisfactory than managers'. Regarding other characteristics of learning organizations, there are similarities in learning attempts by employees and managers. The Organization lacks an organizational vision, and consequently there is no shared vision in the Organization. It is also in need of an organizational culture. As a kind of state-owned organization, it has no need of financial support, which affects the need for a learning organization. It also does not face the threat of sustainability, because there is no competing organization. Findings also show that IBO needs a fundamental change in its organizational learning process. In this context, the general idea is to unfreeze the mindset of the leadership of IBO and create a vision and organizational culture based on learning and staff development. Then, gradually, through incremental effective change and a continual organizational learning process at the individual, team and organization levels, the development and reinforcement of the skills of personal mastery, mental models, shared vision, team learning and systems thinking should lead IBO to a learning organization.

  19. System and method employing a self-organizing map load feature database to identify electric load types of different electric loads

    Science.gov (United States)

    Lu, Bin; Harley, Ronald G.; Du, Liang; Yang, Yi; Sharma, Santosh K.; Zambare, Prachi; Madane, Mayura A.

    2014-06-17

    A method identifies electric load types of a plurality of different electric loads. The method includes providing a self-organizing map load feature database of a plurality of different electric load types and a plurality of neurons, each of the load types corresponding to a number of the neurons; employing a weight vector for each of the neurons; sensing a voltage signal and a current signal for each of the loads; determining a load feature vector including at least four different load features from the sensed voltage signal and the sensed current signal for a corresponding one of the loads; and identifying by a processor one of the load types by relating the load feature vector to the neurons of the database by identifying the weight vector of one of the neurons corresponding to the one of the load types that is a minimal distance to the load feature vector.
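The identification step in the claim above reduces to a nearest-neighbour lookup in the trained self-organizing map: each neuron stores a weight vector and a load-type label, and a measured feature vector is assigned the label of the neuron at minimal distance. A minimal sketch, with invented feature values and load types:

```python
# Minimal sketch of SOM-based load identification: classify a load feature
# vector by the label of the neuron whose weight vector is closest to it.
# The feature values, neuron weights, and load types are illustrative only.
import math

def nearest_neuron(feature_vec, neurons):
    """Return the label of the neuron whose weight vector is at minimal
    Euclidean distance from the measured load feature vector."""
    def dist(weights):
        return math.sqrt(sum((w - f) ** 2 for w, f in zip(weights, feature_vec)))
    return min(neurons, key=lambda n: dist(n["weights"]))["label"]

# Toy trained map; features: (current THD, power factor, crest factor, P/Q ratio).
neurons = [
    {"weights": [0.05, 0.99, 1.41, 9.0], "label": "resistive heater"},
    {"weights": [0.80, 0.60, 2.50, 1.5], "label": "switch-mode supply"},
    {"weights": [0.30, 0.85, 1.60, 3.0], "label": "induction motor"},
]

measured = [0.78, 0.62, 2.40, 1.6]  # derived from sensed voltage/current
label = nearest_neuron(measured, neurons)
```

In a full SOM, several neurons map to each load type; the lookup is the same, only over a larger codebook.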

  20. A prediction model-based algorithm for computer-assisted database screening of adverse drug reactions in the Netherlands.

    Science.gov (United States)

    Scholl, Joep H G; van Hunsel, Florence P A M; Hak, Eelko; van Puijenbroek, Eugène P

    2018-02-01

    The statistical screening of pharmacovigilance databases containing spontaneously reported adverse drug reactions (ADRs) is mainly based on disproportionality analysis. The aim of this study was to improve the efficiency of full database screening using a prediction model-based approach. A logistic regression-based prediction model containing 5 candidate predictors was developed and internally validated using the Summary of Product Characteristics as the gold standard for the outcome. All drug-ADR associations, with the exception of those related to vaccines, with a minimum of 3 reports formed the training data for the model. Performance was based on the area under the receiver operating characteristic curve (AUC). Results were compared with the current method of database screening based on the number of previously analyzed associations. A total of 25 026 unique drug-ADR associations formed the training data for the model. The final model contained all 5 candidate predictors (number of reports, disproportionality, reports from healthcare professionals, reports from marketing authorization holders, Naranjo score). The AUC for the full model was 0.740 (95% CI; 0.734-0.747). The internal validity was good based on the calibration curve and bootstrapping analysis (AUC after bootstrapping = 0.739). Compared with the old method, the AUC increased from 0.649 to 0.740, and the proportion of potential signals increased by approximately 50% (from 12.3% to 19.4%). A prediction model-based approach can be a useful tool to create priority-based listings for signal detection in databases consisting of spontaneous ADRs. © 2017 The Authors. Pharmacoepidemiology & Drug Safety Published by John Wiley & Sons Ltd.
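The evaluation metric used above, the area under the ROC curve, can be computed directly from the model's scores and the gold-standard labels via the rank-sum (Mann-Whitney) identity. A self-contained sketch with invented scores and labels, not the published model:

```python
# Illustrative AUC computation for a scored listing of drug-ADR associations.
# Labels and scores are invented; 1 = association confirmed in the gold standard.

def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formula.
    Ties in scores are not handled in this toy version."""
    ranked = sorted(zip(scores, labels))  # ascending by predicted score
    pos_rank_sum = sum(rank for rank, (_, y) in enumerate(ranked, start=1) if y == 1)
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    return (pos_rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

labels = [1, 0, 1, 0, 0, 1]
scores = [0.9, 0.2, 0.7, 0.8, 0.6, 0.3]
value = auc(labels, scores)
```

Associations can then be listed in descending score order to produce the priority-based listing the study describes.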

  1. Transport and Environment Database System (TRENDS): Maritime Air Pollutant Emission Modelling

    DEFF Research Database (Denmark)

    Georgakaki, Aliki; Coffey, R. A.; Lock, G.

    2003-01-01

This paper reports the development of the maritime module within the framework of the TRENDS project. A detailed database has been constructed, which includes all stages of the energy consumption and air pollutant emission calculations. The technical assumptions and factors incorporated in the database are described, and problems encountered, since the statistical data collection was not undertaken with a view to this purpose, are mentioned. Examples of the results obtained by the database are presented. These range from detailed air pollutant emission results per port and vessel type, to aggregate results for different types of movements - short sea or deep-sea shipping. Key Words: Air Pollution, Maritime Transport, Air Pollutant Emissions

  2. Modeling the influence of organic acids on soil weathering

    Science.gov (United States)

    Lawrence, Corey R.; Harden, Jennifer W.; Maher, Kate

    2014-01-01

    Biological inputs and organic matter cycling have long been regarded as important factors in the physical and chemical development of soils. In particular, the extent to which low molecular weight organic acids, such as oxalate, influence geochemical reactions has been widely studied. Although the effects of organic acids are diverse, there is strong evidence that organic acids accelerate the dissolution of some minerals. However, the influence of organic acids at the field-scale and over the timescales of soil development has not been evaluated in detail. In this study, a reactive-transport model of soil chemical weathering and pedogenic development was used to quantify the extent to which organic acid cycling controls mineral dissolution rates and long-term patterns of chemical weathering. Specifically, oxalic acid was added to simulations of soil development to investigate a well-studied chronosequence of soils near Santa Cruz, CA. The model formulation includes organic acid input, transport, decomposition, organic-metal aqueous complexation and mineral surface complexation in various combinations. Results suggest that although organic acid reactions accelerate mineral dissolution rates near the soil surface, the net response is an overall decrease in chemical weathering. Model results demonstrate the importance of organic acid input concentrations, fluid flow, decomposition and secondary mineral precipitation rates on the evolution of mineral weathering fronts. In particular, model soil profile evolution is sensitive to kaolinite precipitation and oxalate decomposition rates. The soil profile-scale modeling presented here provides insights into the influence of organic carbon cycling on soil weathering and pedogenesis and supports the need for further field-scale measurements of the flux and speciation of reactive organic compounds.
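The ligand-promoted dissolution idea at the core of this kind of model can be illustrated with a toy rate law: the mineral dissolution rate is the sum of a proton-promoted term and an oxalate-dependent term. The rate constants, exponent, and concentrations below are invented for illustration and are not the calibrated values from the Santa Cruz simulations:

```python
# Toy ligand-promoted dissolution rate law: proton-promoted term plus an
# oxalate (ligand)-promoted term. All parameter values are illustrative.

def dissolution_rate(h_conc, oxalate_conc, k_h=1e-9, k_ox=5e-8, n=0.5):
    """Dissolution rate (mol m^-2 s^-1) as a function of proton activity
    and oxalate concentration (both in mol/L)."""
    return k_h * h_conc ** n + k_ox * oxalate_conc

r_without = dissolution_rate(h_conc=1e-5, oxalate_conc=0.0)   # no organic acid
r_with = dissolution_rate(h_conc=1e-5, oxalate_conc=1e-4)     # oxalate present
speedup = r_with / r_without
```

In the full reactive-transport model this local acceleration competes with oxalate decomposition, transport, and secondary-mineral precipitation, which is why the net profile-scale effect can still be a decrease in weathering.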

  3. Daphnia as an Emerging Epigenetic Model Organism

    Directory of Open Access Journals (Sweden)

    Kami D. M. Harris

    2012-01-01

    Full Text Available Daphnia offer a variety of benefits for the study of epigenetics. Daphnia’s parthenogenetic life cycle allows the study of epigenetic effects in the absence of confounding genetic differences. Sex determination and sexual reproduction are epigenetically determined as are several other well-studied alternate phenotypes that arise in response to environmental stressors. Additionally, there is a large body of ecological literature available, recently complemented by the genome sequence of one species and transgenic technology. DNA methylation has been shown to be altered in response to toxicants and heavy metals, although investigation of other epigenetic mechanisms is only beginning. More thorough studies on DNA methylation as well as investigation of histone modifications and RNAi in sex determination and predator-induced defenses using this ecologically and evolutionarily important organism will contribute to our understanding of epigenetics.

  4. A geospatial database model for the management of remote sensing datasets at multiple spectral, spatial, and temporal scales

    Science.gov (United States)

    Ifimov, Gabriela; Pigeau, Grace; Arroyo-Mora, J. Pablo; Soffer, Raymond; Leblanc, George

    2017-10-01

In this study, the development and implementation of a geospatial database model for the management of multiscale datasets encompassing airborne imagery and associated metadata is presented. To develop the multi-source geospatial database, we used a Relational Database Management System (RDBMS) on a Structured Query Language (SQL) server, which was then integrated into ArcGIS and implemented as a geodatabase. The acquired datasets were compiled, standardized, and integrated into the RDBMS, where logical associations between different types of information were linked (e.g. location, date, and instrument). Airborne data, at different processing levels (digital numbers through geocorrected reflectance), were implemented in the geospatial database, where the datasets are linked spatially and temporally. An example dataset consisting of airborne hyperspectral imagery, collected for inter- and intra-annual vegetation characterization and detection of potential hydrocarbon seepage events over pipeline areas, is presented. Our work provides a model for the management of airborne imagery, which is a challenging aspect of data management in remote sensing, especially when large volumes of data are collected.
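The relational linking described above (acquisitions tied to instrument, date, and site; image products at several processing levels tied to their acquisition) can be sketched with SQLite. The table and column names are invented for illustration; the actual study used SQL Server and ArcGIS:

```python
# Minimal relational sketch of a geodatabase for airborne imagery metadata.
# Schema and sample values are hypothetical, for illustration only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE instrument (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE acquisition (
    id INTEGER PRIMARY KEY,
    instrument_id INTEGER REFERENCES instrument(id),
    acquired_on TEXT,   -- ISO date
    site TEXT           -- survey location
);
CREATE TABLE image_product (
    id INTEGER PRIMARY KEY,
    acquisition_id INTEGER REFERENCES acquisition(id),
    level TEXT,         -- e.g. 'DN', 'geocorrected reflectance'
    path TEXT
);
""")
conn.execute("INSERT INTO instrument VALUES (1, 'hyperspectral imager')")
conn.execute("INSERT INTO acquisition VALUES (1, 1, '2016-07-12', 'pipeline corridor')")
conn.executemany("INSERT INTO image_product VALUES (?, ?, ?, ?)",
                 [(1, 1, 'DN', '/data/raw/a1.img'),
                  (2, 1, 'geocorrected reflectance', '/data/l2/a1_refl.img')])

# Query: all processing levels available for a given site, via the joins
# that link products to their acquisition metadata.
levels = [row[0] for row in conn.execute(
    """SELECT p.level FROM image_product p
       JOIN acquisition a ON p.acquisition_id = a.id
       WHERE a.site = 'pipeline corridor' ORDER BY p.id""")]
```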

  5. Database Description - KOME | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available Database name: KOME. Alternative name: Knowledge-base... Sciences, Plant Genome Research Unit, Shoshi Kikuchi. E-mail: ... Database classification: Plant databases - Rice. Organism: Oryza sativa (Taxonomy ID: 4530). Database description: Information about approximately ...ngth cDNA project is shown in the database. The full-length cDNA clones were collected from various tissues ...treated under various stress conditions. The database contains not only information about complete nucleotid

  6. Satisfaction with life after burn: A Burn Model System National Database Study.

    Science.gov (United States)

    Goverman, J; Mathews, K; Nadler, D; Henderson, E; McMullen, K; Herndon, D; Meyer, W; Fauerbach, J A; Wiechman, S; Carrougher, G; Ryan, C M; Schneider, J C

    2016-08-01

    While mortality rates after burn are low, physical and psychosocial impairments are common. Clinical research is focusing on reducing morbidity and optimizing quality of life. This study examines self-reported Satisfaction With Life Scale scores in a longitudinal, multicenter cohort of survivors of major burns. Risk factors associated with Satisfaction With Life Scale scores are identified. Data from the National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR) Burn Model System (BMS) database for burn survivors greater than 9 years of age, from 1994 to 2014, were analyzed. Demographic and medical data were collected on each subject. The primary outcome measures were the individual items and total Satisfaction With Life Scale (SWLS) scores at time of hospital discharge (pre-burn recall period) and 6, 12, and 24 months after burn. The SWLS is a validated 5-item instrument with items rated on a 1-7 Likert scale. The differences in scores over time were determined and scores for burn survivors were also compared to a non-burn, healthy population. Step-wise regression analysis was performed to determine predictors of SWLS scores at different time intervals. The SWLS was completed at time of discharge (1129 patients), 6 months after burn (1231 patients), 12 months after burn (1123 patients), and 24 months after burn (959 patients). There were no statistically significant differences between these groups in terms of medical or injury demographics. The majority of the population was Caucasian (62.9%) and male (72.6%), with a mean TBSA burned of 22.3%. Mean total SWLS scores for burn survivors were unchanged and significantly below that of a non-burn population at all examined time points after burn. Although the mean SWLS score was unchanged over time, a large number of subjects demonstrated improvement or decrement of at least one SWLS category. Gender, TBSA burned, LOS, and school status were associated with SWLS scores at 6 months

  7. Extension of the Representativeness of the Traumatic Brain Injury Model Systems National Database: 2001 to 2010

    Science.gov (United States)

    Cuthbert, Jeffrey P; Corrigan, John D.; Whiteneck, Gale G.; Harrison-Felix, Cynthia; Graham, James E.; Bell, Jeneita M.; Coronado, Victor G.

    2017-01-01

Objective: To extend the representativeness analyses of the Traumatic Brain Injury Model Systems National Database (TBIMS-NDB) completed by Corrigan and colleagues, for individuals aged 16 years and older admitted for acute, inpatient rehabilitation in the United States with a primary diagnosis of traumatic brain injury (TBI), by comparing this dataset to national data for patients admitted to inpatient rehabilitation with identical inclusion criteria, including 3 additional years of data and 2 new demographic variables. Design: Secondary analysis of existing datasets; extension of previously published analyses. Setting: Acute inpatient rehabilitation facilities. Participants: Patients 16 years of age and older with a primary rehabilitation diagnosis of TBI; US TBI rehabilitation population n = 156,447; TBIMS-NDB population n = 7373. Interventions: None. Main Outcome Measures: Demographics, functional status and hospital length of stay. Results: The TBIMS-NDB was largely representative of patients 16 years and older admitted for rehabilitation in the United States with a primary diagnosis of TBI on or after October 1, 2001 and discharged as of December 31, 2010. The results of the extended analyses were similar to those reported by Corrigan and colleagues. Age accounted for the largest difference between the samples, with the TBIMS-NDB including a smaller proportion of patients aged 65 and older compared with all those admitted for rehabilitation with a primary diagnosis of TBI in the United States. After partitioning each dataset at age 65, most distributional differences between the samples were markedly reduced; however, differences in pre-injury vocational status (employed) and rehabilitation lengths of stay between 1 and 9 days remained robust. The subsample of patients aged 64 and younger differed only slightly on all remaining variables, while those aged 65 and older showed meaningful differences in insurance type and age distribution

  8. Biogas composition and engine performance, including database and biogas property model

    NARCIS (Netherlands)

    Bruijstens, A.J.; Beuman, W.P.H.; Molen, M. van der; Rijke, J. de; Cloudt, R.P.M.; Kadijk, G.; Camp, O.M.G.C. op den; Bleuanus, W.A.J.

    2008-01-01

In order to enable the evaluation of the current biogas quality situation in the EU, results are presented in a biogas database. Furthermore, the key gas parameter Sonic Bievo Index (influence on open-loop A/F ratio) is defined, along with other key gas parameters like the Methane Number (knock resistance)

  9. CARD 2017: expansion and model-centric curation of the Comprehensive Antibiotic Resistance Database

    Science.gov (United States)

    The Comprehensive Antibiotic Resistance Database (CARD; http://arpcard.mcmaster.ca) is a manually curated resource containing high quality reference data on the molecular basis of antimicrobial resistance (AMR), with an emphasis on the genes, proteins, and mutations involved in AMR. CARD is ontologi...

  10. A context-aware preference model for database querying in an Ambient Intelligent environment

    NARCIS (Netherlands)

    van Bunningen, A.H.; Feng, L.; Apers, Peter M.G.

    2006-01-01

    Users' preferences have traditionally been exploited in query personalization to better serve their information needs. With the emerging ubiquitous computing technologies, users will be situated in an Ambient Intelligent (AmI) environment, where users' database access will not occur at a single

  11. Personality organization, five-factor model, and mental health.

    Science.gov (United States)

    Laverdière, Olivier; Gamache, Dominick; Diguer, Louis; Hébert, Etienne; Larochelle, Sébastien; Descôteaux, Jean

    2007-10-01

    Otto Kernberg has developed a model of personality and psychological functioning centered on the concept of personality organization. The purpose of this study is to empirically examine the relationships between this model, the five-factor model, and mental health. The Personality Organization Diagnostic Form (Diguer et al., The Personality Organization Diagnostic Form-II (PODF-II), 2001), the NEO Five-Factor Inventory (Costa and McCrae, Revised NEO Personality Inventory (NEO-PI-R) and NEO Five-Factor Inventory (NEO-FFI) Professional Manual. 1992a), and the Health-Sickness Rating Scale (Luborsky, Arch Gen Psychiatry. 1962;7:407-417) were used to assess these constructs. Results show that personality organization and personality factors are distinct but interrelated constructs and that both contribute in similar proportion to mental health. Results also suggest that the integration of personality organization and factors can provide clinicians and researchers with an enriched understanding of psychological functioning.

  12. Data describing the inclusion relationships between two organs (PART-OF Tree) - BodyParts3D | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available Data describing the inclusion relationships between two organs (PART-OF Tree) - BodyParts3D. File name: partof_inclusion_relation_list.txt. File URL: ftp://ftp.biosciencedbc.jp/archive/bodyparts3d/LATEST/partof_inclusion_relation_list.txt. File size: 90 KB. Simple search URL: http://togodb.biosciencedbc.jp/togodb/view/bodyparts3d_... Data describing the inclusion relationships between two organs (PART-OF Tree) - BodyParts3D | LSDB Archive ...

  13. Designing a Composite Service Organization (Through Mathematical Modeling)

    Directory of Open Access Journals (Sweden)

    Prof. Dr. A. Z. Memon

    2006-01-01

    Full Text Available Suppose we have a class of similar service organizations each of which is characterized by the same numerically measurable input/output characteristics. Even if the amount of any input does not differ in them, one or more organizations can be expected to outperform the others in one or more production aspects. Our interest lies in comparing the output efficiency levels of all service organizations. For it we use mathematical modeling, mainly linear programming to design a composite organization with new input measures which relative to a specific organization should have a higher level of efficiency with regard to all output measures. The other purpose of this paper is to evaluate the output characteristics of this proposed service organization. The paper also touches some other highly important planning features of this organization.
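The core construction above, a composite organization built from a convex combination of peers that dominates a target on all output measures, can be illustrated directly. A real analysis would solve a linear program (as in data envelopment analysis); this hand-rolled sketch with invented input/output figures only checks dominance for a fixed weighting:

```python
# Toy composite-organization construction: a convex combination of peer
# organizations' (inputs, outputs) vectors, checked for dominance over a
# target organization. Data and weights are invented for illustration.

def combine(orgs, weights):
    """Convex combination of (inputs, outputs) vectors."""
    n_in, n_out = len(orgs[0][0]), len(orgs[0][1])
    inputs = [sum(w * o[0][i] for o, w in zip(orgs, weights)) for i in range(n_in)]
    outputs = [sum(w * o[1][j] for o, w in zip(orgs, weights)) for j in range(n_out)]
    return inputs, outputs

def dominates(composite, target):
    """True if the composite uses no more of any input and produces at
    least as much of every output as the target organization."""
    (ci, co), (ti, to) = composite, target
    return all(a <= b for a, b in zip(ci, ti)) and all(a >= b for a, b in zip(co, to))

# (inputs: staff, budget), (outputs: clients served, services delivered)
org_a = ([10, 100], [50, 40])
org_b = ([20, 120], [90, 80])
target = ([16, 115], [60, 50])

composite = combine([org_a, org_b], [0.5, 0.5])
better = dominates(composite, target)
```

If such a dominating combination exists, the target organization is inefficient relative to its peers; the linear program searches over all weightings rather than testing one.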

  14. (Tropical) soil organic matter modelling: problems and prospects

    NARCIS (Netherlands)

    Keulen, van H.

    2001-01-01

    Soil organic matter plays an important role in many physical, chemical and biological processes. However, the quantitative relations between the mineral and organic components of the soil and the relations with the vegetation are poorly understood. In such situations, the use of models is an

  15. Accessing and using chemical databases

    DEFF Research Database (Denmark)

    Nikolov, Nikolai Georgiev; Pavlov, Todor; Niemelä, Jay Russell

    2013-01-01

Computer-based representation of chemicals makes it possible to organize data in chemical databases - collections of chemical structures and associated properties. Databases are widely used wherever efficient processing of chemical information is needed, including search, storage, retrieval, and dissemination. Structure and functionality of chemical databases are considered. The typical kinds of information found in a chemical database are considered: identification, structural, and associated data. Functionality of chemical databases is presented, with examples of search and access types. More details are included about the OASIS database and platform and the Danish (Q)SAR Database online. Various types of chemical database resources are discussed, together with a list of examples.
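The record types and search modes described above (identification, structural, and associated data; lookup and property search) can be miniaturized as follows. The field names, compounds, and property values are illustrative only, not drawn from OASIS or the Danish (Q)SAR Database:

```python
# Hypothetical miniature chemical database: each record holds identification
# (CAS number, name), structural (SMILES), and associated (logP) data.
# Values are approximate and for illustration only.
records = [
    {"cas": "50-00-0", "name": "formaldehyde", "smiles": "C=O",      "logP": 0.35},
    {"cas": "64-17-5", "name": "ethanol",      "smiles": "CCO",      "logP": -0.31},
    {"cas": "71-43-2", "name": "benzene",      "smiles": "c1ccccc1", "logP": 2.13},
]

def by_cas(cas):
    """Exact-identifier lookup."""
    return next((r for r in records if r["cas"] == cas), None)

def by_logp_range(lo, hi):
    """Associated-data (property range) search."""
    return [r["name"] for r in records if lo <= r["logP"] <= hi]

hit = by_cas("64-17-5")
lipophilic = by_logp_range(1.0, 3.0)
```

Real chemical databases add substructure and similarity search over the structural representation, which require cheminformatics toolkits rather than plain dictionaries.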

  16. Self-organizing map models of language acquisition

    Science.gov (United States)

    Li, Ping; Zhao, Xiaowei

    2013-01-01

    Connectionist models have had a profound impact on theories of language. While most early models were inspired by the classic parallel distributed processing architecture, recent models of language have explored various other types of models, including self-organizing models for language acquisition. In this paper, we aim at providing a review of the latter type of models, and highlight a number of simulation experiments that we have conducted based on these models. We show that self-organizing connectionist models can provide significant insights into long-standing debates in both monolingual and bilingual language development. We suggest future directions in which these models can be extended, to better connect with behavioral and neural data, and to make clear predictions in testing relevant psycholinguistic theories. PMID:24312061

  17. Social organization in the Minority Game model

    Science.gov (United States)

    Slanina, František

    2000-10-01

    We study the role of imitation within the Minority Game model of market. The players can exchange information locally, which leads to formation of groups which act as if they were single players. Coherent spatial areas of rich and poor agents result. We found that the global effectivity is optimized at certain value of the imitation probability, which decreases with increasing memory length. The social tensions are suppressed for large imitation probability, but generally the requirements of high global effectivity and low social tensions are in conflict.
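A single round of the underlying game can be sketched in a few lines. This toy omits the imitation and local information-exchange mechanism studied above and shows only the core rule: agents choose one of two sides, and those on the minority side win.

```python
# Bare-bones Minority Game round (core rule only; the imitation mechanism
# from the study is omitted). Agent choices are random for illustration.
import random

random.seed(42)  # fixed seed for a reproducible example

def play_round(n_agents=101):
    """Each agent picks side 0 or 1; agents on the minority side win.
    n_agents is odd so a minority always exists."""
    choices = [random.randint(0, 1) for _ in range(n_agents)]
    ones = sum(choices)
    minority = 1 if ones < n_agents - ones else 0
    winners = [i for i, c in enumerate(choices) if c == minority]
    return minority, winners

minority_side, winners = play_round()
n_winners = len(winners)
```

In the full model, agents choose via strategies conditioned on a shared memory of past minority sides, and (in the study above) may copy the choices of neighbours with some imitation probability.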

  18. International workshop of the Confinement Database and Modelling Expert Group in collaboration with the Edge and Pedestal Physics Expert Group

    International Nuclear Information System (INIS)

    Cordey, J.; Kardaun, O.

    2001-01-01

A Workshop of the Confinement Database and Modelling Expert Group (EG) was held on 2-6 April at the Plasma Physics Research Center of Lausanne (CRPP), Switzerland. Presentations were held on the present status of the plasma pedestal (temperature and energy) scalings from an empirical and theoretical perspective. An integrated approach to modelling tokamaks incorporating core transport, edge pedestal and SOL, together with a model for ELMs, was presented by the JCT. New experimental data on global H-mode confinement were discussed and presentations on L-H threshold power were made

  19. Modelling the self-organization and collapse of complex networks

    Indian Academy of Sciences (India)

Modelling the self-organization and collapse of complex networks. Sanjay Jain, Department of Physics and Astrophysics, University of Delhi; Jawaharlal Nehru Centre for Advanced Scientific Research, Bangalore; Santa Fe Institute, Santa Fe, New Mexico.

  20. MiDAS 2.0: an ecosystem-specific taxonomy and online database for the organisms of wastewater treatment systems expanded for anaerobic digester groups.

    Science.gov (United States)

    McIlroy, Simon Jon; Kirkegaard, Rasmus Hansen; McIlroy, Bianca; Nierychlo, Marta; Kristensen, Jannie Munk; Karst, Søren Michael; Albertsen, Mads; Nielsen, Per Halkjær

    2017-01-01

    Wastewater is increasingly viewed as a resource, with anaerobic digester technology being routinely implemented for biogas production. Characterising the microbial communities involved in wastewater treatment facilities and their anaerobic digesters is considered key to their optimal design and operation. Amplicon sequencing of the 16S rRNA gene allows high-throughput monitoring of these systems. The MiDAS field guide is a public resource providing amplicon sequencing protocols and an ecosystem-specific taxonomic database optimized for use with wastewater treatment facility samples. The curated taxonomy endeavours to provide a genus-level-classification for abundant phylotypes and the online field guide links this identity to published information regarding their ecology, function and distribution. This article describes the expansion of the database resources to cover the organisms of the anaerobic digester systems fed primary sludge and surplus activated sludge. The updated database includes descriptions of the abundant genus-level-taxa in influent wastewater, activated sludge and anaerobic digesters. Abundance information is also included to allow assessment of the role of emigration in the ecology of each phylotype. MiDAS is intended as a collaborative resource for the progression of research into the ecology of wastewater treatment, by providing a public repository for knowledge that is accessible to all interested in these biotechnologically important systems. http://www.midasfieldguide.org. © The Author(s) 2017. Published by Oxford University Press.

  1. How valuable are model organisms for transposable element studies?

    Science.gov (United States)

    Kidwell, M G; Evgen'ev, M B

    1999-01-01

    Model organisms have proved to be highly informative for many types of genetic studies involving 'conventional' genes. The results have often been successfully generalized to other closely related organisms and also, perhaps surprisingly frequently, to more distantly related organisms. Because of the wealth of previous knowledge and their availability and convenience, model organisms were often the species of choice for many of the earlier studies of transposable elements. The question arises whether the results of genetic studies of transposable elements in model organisms can be extrapolated in the same ways as those of conventional genes? A number of observations suggest that special care needs to be taken in generalizing the results from model organisms to other species. A hallmark of many transposable elements is their ability to amplify rapidly in species genomes. Rapid spread of a newly invaded element throughout a species range has also been demonstrated. The types and genomic copy numbers of transposable elements have been shown to differ greatly between some closely related species. Horizontal transfer of transposable elements appears to be more frequent than for nonmobile genes. Furthermore, the population structure of some model organisms has been subject to drastic recent changes that may have some bearing on their transposable element genomic complements. In order to initiate discussion of this question, several case studies of transposable elements in well-studied Drosophila species are presented.

  2. Application of Large-Scale Database-Based Online Modeling to Plant State Long-Term Estimation

    Science.gov (United States)

    Ogawa, Masatoshi; Ogai, Harutoshi

Recently, attention has been drawn to local modeling techniques based on a new idea called "Just-In-Time (JIT) modeling". To apply JIT modeling online to a large database, "Large-scale database-based Online Modeling (LOM)" has been proposed. LOM is a technique that makes the retrieval of neighboring data more efficient by using both "stepwise selection" and quantization. In order to predict the long-term state of the plant without using future data of manipulated variables, an Extended Sequential Prediction method of LOM (ESP-LOM) has been proposed. In this paper, the LOM and the ESP-LOM are introduced.
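The Just-In-Time modeling idea behind LOM can be sketched simply: instead of fitting one global model, each query is answered by retrieving its nearest stored samples and fitting a trivial local model on the spot (here, a distance-weighted average). The toy plant history and weighting scheme are illustrative, not the LOM retrieval or prediction machinery itself:

```python
# JIT-modeling sketch: predict a query point from the k nearest database
# records using a distance-weighted average. Data are invented (y = x0 + x1).

def jit_predict(query, database, k=3):
    """database: list of (input_vector, output) pairs. Retrieve the k
    nearest neighbours of `query` and return their distance-weighted
    mean output."""
    def dist(x):
        return sum((a - b) ** 2 for a, b in zip(x, query)) ** 0.5
    nearest = sorted(database, key=lambda rec: dist(rec[0]))[:k]
    weights = [1.0 / (dist(x) + 1e-9) for x, _ in nearest]  # avoid div by 0
    return sum(w * y for w, (_, y) in zip(weights, nearest)) / sum(weights)

# Toy plant history: sensor-reading pair -> measured state.
history = [((0.0, 0.0), 0.0), ((1.0, 0.0), 1.0), ((0.0, 1.0), 1.0),
           ((1.0, 1.0), 2.0), ((5.0, 5.0), 10.0)]
pred = jit_predict((0.9, 0.1), history)  # true value would be 1.0
```

LOM's contribution is making the neighbour retrieval step efficient over a very large database (via stepwise selection and quantization); the local modeling step is then cheap.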

  3. Labour Quality Model for Organic Farming Food Chains

    OpenAIRE

    Gassner, B.; Freyer, B.; Leitner, H.

    2008-01-01

The debate on labour quality is controversial in science as well as in the organic agriculture community. Therefore, we reviewed literature on different labour quality models and definitions, and conducted key informant interviews on labour quality issues with stakeholders in a regionally oriented organic agriculture bread food chain. We developed a labour quality model with nine quality categories and discussed linkages to labour satisfaction, ethical values and IFOAM principles.

  4. Business Model Innovation in Incumbent Organizations: : Challenges and Success Routes

    OpenAIRE

    Salama, Ahmad; Parvez, Khawar

    2015-01-01

In this thesis, major challenges of creating business models at incumbents within mature industries are identified, along with a mitigation plan. Incumbent organizations are under pressure to keep up with the latest rapid technological advancements, the launching of startups that cover almost every field of business, and continuous change in customers' tastes and needs. These pressures, along with various other factors, have forced organizations to continually reevaluate their current business models ...

  5. Reverse Osmosis Processing of Organic Model Compounds and Fermentation Broths

    Science.gov (United States)

    2006-04-01

AFRL-ML-TY-TP-2007-4545 POSTPRINT. Reverse Osmosis Processing of Organic Model Compounds and Fermentation Broths. Robert Diltz... Bioresource Technology 98 (2007) 686-695. ...key species found in the fermentation broth: ethanol, butanol, acetic acid, oxalic acid, lactic acid, and butyric acid. Correlations of the rejection...

  6. The Cardiac Atlas Project—an imaging database for computational modeling and statistical atlases of the heart

    Science.gov (United States)

    Fonseca, Carissa G.; Backhaus, Michael; Bluemke, David A.; Britten, Randall D.; Chung, Jae Do; Cowan, Brett R.; Dinov, Ivo D.; Finn, J. Paul; Hunter, Peter J.; Kadish, Alan H.; Lee, Daniel C.; Lima, Joao A. C.; Medrano-Gracia, Pau; Shivkumar, Kalyanam; Suinesiaputra, Avan; Tao, Wenchao; Young, Alistair A.

    2011-01-01

    Motivation: Integrative mathematical and statistical models of cardiac anatomy and physiology can play a vital role in understanding cardiac disease phenotype and planning therapeutic strategies. However, the accuracy and predictive power of such models is dependent upon the breadth and depth of noninvasive imaging datasets. The Cardiac Atlas Project (CAP) has established a large-scale database of cardiac imaging examinations and associated clinical data in order to develop a shareable, web-accessible, structural and functional atlas of the normal and pathological heart for clinical, research and educational purposes. A goal of CAP is to facilitate collaborative statistical analysis of regional heart shape and wall motion and characterize cardiac function among and within population groups. Results: Three main open-source software components were developed: (i) a database with web-interface; (ii) a modeling client for 3D + time visualization and parametric description of shape and motion; and (iii) open data formats for semantic characterization of models and annotations. The database was implemented using a three-tier architecture utilizing MySQL, JBoss and Dcm4chee, in compliance with the DICOM standard to provide compatibility with existing clinical networks and devices. Parts of Dcm4chee were extended to access image specific attributes as search parameters. To date, approximately 3000 de-identified cardiac imaging examinations are available in the database. All software components developed by the CAP are open source and are freely available under the Mozilla Public License Version 1.1 (http://www.mozilla.org/MPL/MPL-1.1.txt). Availability: http://www.cardiacatlas.org Contact: a.young@auckland.ac.nz Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21737439

  7. Relational databases

    CERN Document Server

    Bell, D A

    1986-01-01

    Relational Databases explores the major advances in relational databases and provides a balanced analysis of the state of the art in relational databases. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration. The

  8. Modelling a critical infrastructure-driven spatial database for proactive disaster management: A developing country context

    Directory of Open Access Journals (Sweden)

    David O. Baloye

    2016-04-01

    Full Text Available The understanding and institutionalisation of the seamless link between urban critical infrastructure and disaster management has greatly helped the developed world to establish effective disaster management processes. However, this link is conspicuously missing in developing countries, where disaster management has been more reactive than proactive. The consequence of this is typified in poor response time and uncoordinated ways in which disasters and emergency situations are handled. As is the case with many Nigerian cities, the challenges of urban development in the city of Abeokuta have limited the effectiveness of disaster and emergency first responders and managers. Using geospatial techniques, the study attempted to design and deploy a spatial database running a web-based information system to track the characteristics and distribution of critical infrastructure for effective use during disaster and emergencies, with the purpose of proactively improving disaster and emergency management processes in Abeokuta. Keywords: Disaster Management; Emergency; Critical Infrastructure; Geospatial Database; Developing Countries; Nigeria
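
    As an illustration of the kind of query such a critical-infrastructure database supports, the sketch below finds the facility nearest to a reported incident using great-circle distance. The facility records and coordinates are hypothetical, not taken from the study.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    r = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical critical-infrastructure records (name, lat, lon) near Abeokuta.
facilities = [
    ("Fire station A", 7.15, 3.35),
    ("General hospital", 7.16, 3.36),
    ("Police HQ", 7.10, 3.34),
]

def nearest_facility(incident_lat, incident_lon):
    """Return the stored facility closest to an incident location."""
    return min(facilities, key=lambda f: haversine_km(f[1], f[2], incident_lat, incident_lon))

print(nearest_facility(7.158, 3.358)[0])
```

    A production spatial database would index such records (e.g. with an R-tree) rather than scan them linearly, but the query semantics are the same.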

  9. NEW MODEL FOR QUANTIFICATION OF ICT DEPENDABLE ORGANIZATIONS RESILIENCE

    Directory of Open Access Journals (Sweden)

    Zora Arsovski

    2011-03-01

    Full Text Available Today's business environment demands highly reliable organizations in every segment in order to be competitive on the global market. Besides that, the ICT sector is becoming irreplaceable in many fields of business, from communication to complex systems for process control and production. To fulfil those requirements and to develop further, many organizations worldwide are implementing a business paradigm called organizational resilience. Although resilience is a well-known term in many fields of science, it is not well studied due to its complex nature. This paper deals with developing a new model for the assessment and quantification of the resilience of ICT-dependable organizations.

  10. Knowledge Loss: A Defensive Model In Nuclear Research Organization Memory

    International Nuclear Information System (INIS)

    Mohamad Safuan Bin Sulaiman; Muhd Noor Muhd Yunus

    2013-01-01

    Knowledge is an essential part of a research-based organization. It should be properly managed to ensure that the pitfalls of knowledge retention due to the loss of both tacit and explicit knowledge are mitigated. An audit of the knowledge entities existing in the organization is important to identify the size of its critical knowledge, which is closely related to how many know-what, know-how and know-why experts exist in the organization. This study conceptually proposes a defensive model for Nuclear Malaysia's organization memory and the application of Knowledge Loss Risk Assessment (KLRA) as an important tool for critical knowledge identification. (author)

  11. Uncertainty assessment of a polygon database of soil organic carbon for greenhouse gas reporting in Canada’s Arctic and sub-arctic

    Directory of Open Access Journals (Sweden)

    M.F. Hossain

    2014-08-01

    Full Text Available Canada’s Arctic and sub-arctic make up 46% of Canada’s landmass and contain 45% of Canada’s total soil organic carbon (SOC). Pronounced climate warming and increasing human disturbance could induce the release of this SOC to the atmosphere as greenhouse gases. Canada is committed to estimating and reporting the greenhouse gas emissions and removals induced by land use change in the Arctic and sub-arctic. To assess the uncertainty of the estimate, we compiled a site-measured SOC database for Canada’s north and used it for comparison with a polygon database that will be used for estimating SOC for the UNFCCC reporting. In 10 polygons where 3 or more measured sites were well located in each polygon, the site-averaged SOC content agreed with the polygon data within ±33% for the top 30 cm and within ±50% for the top 1 m of soil. If we directly compared the SOC of the 382 measured sites with the polygon mean SOC, there was poor agreement: the relative error was less than 50% at 40% of the sites, and less than 100% at 68% of the sites. The relative errors were more than 400% at 10% of the sites. These comparisons indicate that the polygon database is too coarse to represent the SOC conditions of individual sites, and the difference is close to the uncertainty range for reporting. The spatial database could be improved by relating site and polygon SOC data to more easily observable surface features that can be identified and derived from remote sensing imagery.

  12. Livestock Anaerobic Digester Database

    Science.gov (United States)

    The Anaerobic Digester Database provides basic information about anaerobic digesters on livestock farms in the United States, organized in Excel spreadsheets. It includes projects that are under construction, operating, or shut down.

  13. Influence of dissolved organic carbon content on modelling natural organic matter acid-base properties.

    Science.gov (United States)

    Garnier, Cédric; Mounier, Stéphane; Benaïm, Jean Yves

    2004-10-01

    Natural organic matter (NOM) behaviour towards protons is an important parameter for understanding NOM fate in the environment. Moreover, it is necessary to determine NOM acid-base properties before investigating trace metal complexation by natural organic matter. This work focuses on the possibility of determining these acid-base properties by accurate and simple titrations, even at low organic matter concentrations. The experiments were conducted on concentrated and diluted solutions of humic and fulvic acids extracted from the Laurentian River, on concentrated and diluted model solutions of well-known simple molecules (acetic and phenolic acids), and on natural samples from the Seine river (France) which were not pre-concentrated. Titration experiments were modelled by a discrete model with six acidic sites, except for the model solutions. The modelling software used, called PROSECE (Programme d'Optimisation et de SpEciation Chimique dans l'Environnement) and developed in our laboratory, is based on mass balance equilibrium resolution. The results obtained on extracted organic matter and model solutions point out a threshold value for a confident determination of the studied organic matter's acid-base properties. They also show an aberrant decrease of the carboxylic/phenolic ratio with increasing sample dilution. This shift is neither due to any conformational effect, since it is also observed in model solutions, nor to ionic strength variations, which were controlled during all experiments. On the other hand, it could be the result of an electrode malfunction occurring at basic pH values, whose effect is amplified at low total concentrations of acidic sites. So, in our conditions, the limit for correct modelling of NOM acid-base properties is defined as 0.04 meq of total analysed acidic sites. As for the analysed natural samples, due to their high acidic site content, it is possible to model their behaviour despite the low organic carbon concentration.
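
    A discrete-site model of the kind described can be sketched as follows: each site class is treated as an independent monoprotic acid, and the titrated (deprotonated) charge at a given pH is the sum over site classes. This is a minimal illustration with assumed concentrations and pKa values; it is not the PROSECE code, which additionally fits these parameters to measured titration curves.

```python
# Hypothetical discrete acidic-site model: (total concentration in meq/L, pKa).
# A PROSECE-style fit would optimize these parameters; here they are assumed.
sites = [(0.02, 4.0), (0.015, 5.5), (0.01, 7.0),
         (0.008, 8.5), (0.006, 9.5), (0.004, 10.5)]

def deprotonated(pH):
    """Total deprotonated (titrated) site concentration at a given pH,
    treating each site class as an independent monoprotic acid
    (Henderson-Hasselbalch fraction per site)."""
    return sum(ct / (1.0 + 10.0 ** (pKa - pH)) for ct, pKa in sites)

# A simple simulated titration curve: deprotonation rises with pH.
for pH in (3.0, 5.0, 7.0, 9.0, 11.0):
    print(pH, round(deprotonated(pH), 4))
```

    Fitting the inverse problem (recovering site concentrations and pKa values from a measured curve) is what becomes ill-conditioned below the ~0.04 meq threshold reported in the abstract.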

  14. Drosophila melanogaster as a model organism to study nanotoxicity.

    Science.gov (United States)

    Ong, Cynthia; Yung, Lin-Yue Lanry; Cai, Yu; Bay, Boon-Huat; Baeg, Gyeong-Hun

    2015-05-01

    Drosophila melanogaster has been used as an in vivo model organism for the study of genetics and development for over 100 years. Recently, the fruit fly Drosophila was also developed as an in vivo model organism for toxicology studies, in particular in the field of nanotoxicity. The incorporation of nanomaterials into consumer and biomedical products is a cause for concern, as nanomaterials are often associated with toxicity in many in vitro studies. In vivo animal studies of the toxicity of nanomaterials with rodents and other mammals are, however, limited due to high operational cost and ethical objections. Hence, Drosophila, a genetically tractable organism with distinct developmental stages and a short life cycle, serves as an ideal organism to study nanomaterial-mediated toxicity. This review discusses the basic biology of Drosophila, the toxicity of nanomaterials, as well as how the Drosophila model can be used to study the toxicity of various types of nanomaterials.

  15. Investigating ecological speciation in non-model organisms

    DEFF Research Database (Denmark)

    Foote, Andrew David

    2012-01-01

    ...speciation in non-model organisms that lead to this bias? What alternative approaches might redress the balance? Organism: Genetically differentiated types of the killer whale (Orcinus orca) exhibiting differences in prey preference, habitat use, morphology, and behaviour. Methods: Review of the literature on killer whale evolutionary ecology in search of any difficulty in demonstrating causal links between variation in phenotype, ecology, and reproductive isolation in this non-model organism. Results: At present, we do not have enough evidence to conclude that adaptive phenotype traits linked to ecological variation underlie reproductive isolation between sympatric killer whale types. Perhaps ecological speciation has occurred, but it is hard to prove. We will probably face this outcome whenever we wish to address non-model organisms – species in which it is not easy to apply experimental approaches...

  16. Modelling the fate of oxidisable organic contaminants in groundwater

    DEFF Research Database (Denmark)

    Barry, D.A.; Prommer, H.; Miller, C.T.

    2002-01-01

    Subsurface contamination by organic chemicals is a pervasive environmental problem, susceptible to remediation by natural or enhanced attenuation approaches or more highly engineered methods such as pump-and-treat, amongst others. Such remediation approaches, along with risk assessment... The modelling framework is illustrated by pertinent examples, showing the degradation of dissolved organics by microbial activity limited by the availability of nutrients or electron acceptors (i.e., changing redox states), as well as concomitant secondary reactions. Two field-scale modelling examples are discussed, the Vejen landfill (Denmark) and an example where metal contamination is remediated by redox changes wrought by injection of a dissolved organic compound. A summary is provided of current and likely future challenges to modelling of oxidisable organics in the subsurface. (C) 2002 Elsevier Science...

  17. Community Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This excel spreadsheet is the result of merging at the port level of several of the in-house fisheries databases in combination with other demographic databases such...

  18. Biofuel Database

    Science.gov (United States)

    Biofuel Database (Web, free access)   This database brings together structural, biological, and thermodynamic data for enzymes that are either in current use or are being considered for use in the production of biofuels.

  19. Microsoft Access Small Business Solutions State-of-the-Art Database Models for Sales, Marketing, Customer Management, and More Key Business Activities

    CERN Document Server

    Hennig, Teresa; Linson, Larry; Purvis, Leigh; Spaulding, Brent

    2010-01-01

    Database models developed by a team of leading Microsoft Access MVPs that provide ready-to-use solutions for sales, marketing, customer management and other key business activities for most small businesses. As the most popular relational database in the world, Microsoft Access is widely used by small business owners. This book responds to the growing need for resources that help business managers and end users design and build effective Access database solutions for specific business functions. Coverage includes::; Elements of a Microsoft Access Database; Relational Data Model; Dealing with C

  20. MORPHIN: a web tool for human disease research by projecting model organism biology onto a human integrated gene network.

    Science.gov (United States)

    Hwang, Sohyun; Kim, Eiru; Yang, Sunmo; Marcotte, Edward M; Lee, Insuk

    2014-07-01

    Despite recent advances in human genetics, model organisms are indispensable for human disease research. Most human disease pathways are evolutionally conserved among other species, where they may phenocopy the human condition or be associated with seemingly unrelated phenotypes. Much of the known gene-to-phenotype association information is distributed across diverse databases, growing rapidly due to new experimental techniques. Accessible bioinformatics tools will therefore facilitate translation of discoveries from model organisms into human disease biology. Here, we present a web-based discovery tool for human disease studies, MORPHIN (model organisms projected on a human integrated gene network), which prioritizes the most relevant human diseases for a given set of model organism genes, potentially highlighting new model systems for human diseases and providing context to model organism studies. Conceptually, MORPHIN investigates human diseases by an orthology-based projection of a set of model organism genes onto a genome-scale human gene network. MORPHIN then prioritizes human diseases by relevance to the projected model organism genes using two distinct methods: a conventional overlap-based gene set enrichment analysis and a network-based measure of closeness between the query and disease gene sets capable of detecting associations undetectable by the conventional overlap-based methods. MORPHIN is freely accessible at http://www.inetbio.org/morphin. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
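
    The conventional overlap-based measure mentioned in the abstract above can be sketched with a one-sided hypergeometric test on the overlap between the orthology-projected query genes and a disease gene set (the network-based closeness measure is not reproduced here). All counts below are hypothetical.

```python
from math import comb

def hypergeom_tail(N, K, n, k):
    """P(overlap >= k) when drawing n genes from a genome of N genes,
    K of which belong to the disease set (one-sided hypergeometric test)."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Hypothetical example: genome of 20,000 genes, disease set of 100 genes,
# a query of 50 orthology-projected model-organism genes, 5 of which overlap.
p = hypergeom_tail(20000, 100, 50, 5)
print(f"{p:.2e}")
```

    In a MORPHIN-style ranking, such a p-value would be computed per disease gene set and the diseases sorted by significance (with multiple-testing correction across sets).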

  1. Organism-level models: When mechanisms and statistics fail us

    Science.gov (United States)

    Phillips, M. H.; Meyer, J.; Smith, W. P.; Rockhill, J. K.

    2014-03-01

    Purpose: To describe the unique characteristics of models that represent the entire course of radiation therapy at the organism level and to highlight the uses to which such models can be put. Methods: At the level of an organism, traditional model-building runs into severe difficulties. We do not have sufficient knowledge to devise a complete biochemistry-based model. Statistical model-building fails due to the vast number of variables and the inability to control many of them in any meaningful way. Finally, building surrogate models, such as animal-based models, can result in excluding some of the most critical variables. Bayesian probabilistic models (Bayesian networks) provide a useful alternative that has the advantages of being mathematically rigorous, incorporating the knowledge that we do have, and being practical. Results: Bayesian networks representing radiation therapy pathways for prostate cancer and head & neck cancer were used to highlight the important aspects of such models and some techniques of model-building. A more specific model representing the treatment of occult lymph nodes in head & neck cancer was provided as an example of how such a model can inform clinical decisions. A model of the possible role of PET imaging in brain cancer was used to illustrate the means by which clinical trials can be modelled in order to come up with a trial design that will have meaningful outcomes. Conclusions: Probabilistic models are currently the most useful approach to representing the entire therapy outcome process.
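
    A minimal sketch of the kind of Bayesian-network reasoning described: a two-node network in which occult nodal disease influences an imaging finding, with the posterior probability of disease given a positive finding obtained by enumeration. The probabilities are illustrative assumptions, not clinical values from the paper.

```python
# Toy Bayesian network: occult nodal disease (D) -> positive imaging finding (F).
# All probabilities are illustrative assumptions, not clinical values.
p_d = 0.30              # prior P(D = true)
p_f_given_d = 0.80      # imaging sensitivity, P(F | D)
p_f_given_not_d = 0.10  # false-positive rate, P(F | not D)

def posterior_d_given_f():
    """P(D | F) by enumeration (Bayes' rule over the two joint cases)."""
    joint_true = p_d * p_f_given_d
    joint_false = (1 - p_d) * p_f_given_not_d
    return joint_true / (joint_true + joint_false)

print(round(posterior_d_given_f(), 3))
```

    Full-scale clinical networks extend this enumeration over many more nodes (stage, dose, toxicity, outcome), but the inference principle is the same.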

  2. PROCARB: A Database of Known and Modelled Carbohydrate-Binding Protein Structures with Sequence-Based Prediction Tools

    Directory of Open Access Journals (Sweden)

    Adeel Malik

    2010-01-01

    Full Text Available Understanding the three-dimensional structures of proteins that interact with carbohydrates covalently (glycoproteins) as well as noncovalently (protein-carbohydrate complexes) is essential to many biological processes and plays a significant role in normal and disease-associated functions. It is important to have a central repository of knowledge available about these protein-carbohydrate complexes, as well as preprocessed data of predicted structures. This can be significantly enhanced by tools that predict carbohydrate-binding sites de novo for proteins without an experimentally known binding site structure. PROCARB is an open-access database comprising three independently working components, namely, (i) the Core PROCARB module, consisting of three-dimensional structures of protein-carbohydrate complexes taken from the Protein Data Bank (PDB); (ii) the Homology Models module, consisting of manually developed three-dimensional models of N-linked and O-linked glycoproteins of unknown three-dimensional structure; and (iii) the CBS-Pred prediction module, consisting of web servers to predict carbohydrate-binding sites using a single sequence or server-generated PSSM. Several precomputed structural and functional properties of complexes are also included in the database for quick analysis. In particular, information about function, secondary structure, solvent accessibility, hydrogen bonds, literature references, and so forth is included. In addition, each protein in the database is mapped to Uniprot, Pfam, PDB, and so forth.

  3. A Framework for Formal Modeling and Analysis of Organizations

    NARCIS (Netherlands)

    Jonker, C.M.; Sharpanskykh, O.; Treur, J.; P., Yolum

    2007-01-01

    A new, formal, role-based, framework for modeling and analyzing both real world and artificial organizations is introduced. It exploits static and dynamic properties of the organizational model and includes the (frequently ignored) environment. The transition is described from a generic framework of

  4. Database Description - RPSD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us RPSD Database Description General information of database Database name RPSD Alternative name Rice Protein Structure Database DOI 10.18908/lsdba.nbdc00749-000 Creator Creator Name: Toshimasa Yamazaki ... Ibaraki 305-8602, Japan National Institute of Agrobiological Sciences Toshimasa Yamazaki E-mail: Database classification Structure Databases - Protein structure Organism Taxonomy Name: Or...max Taxonomy ID: 3847 Database description We have determined the three-dimensional structures of the protei

  5. Database Description - GETDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us GETDB Database Description General information of database Database name GETDB Alternative name Gal4 Enhancer Trap Insertion Database DOI 10.18908/lsdba.nbdc00236-000 Creator Creator Name: Shigeo Haya... Chuo-ku, Kobe 650-0047 Tel: +81-78-306-3185 FAX: +81-78-306-3183 E-mail: Database classification Expression... Invertebrate genome database Organism Taxonomy Name: Drosophila melanogaster Taxonomy ID: 7227 Database description About 4,600 insertion lines of enhancer trap lines based on the Gal4-UAS

  6. Spatial arrangement of organic compounds on a model mineral surface: implications for soil organic matter stabilization.

    Science.gov (United States)

    Petridis, Loukas; Ambaye, Haile; Jagadamma, Sindhu; Kilbey, S Michael; Lokitz, Bradley S; Lauter, Valeria; Mayes, Melanie A

    2014-01-01

    The complexity of the mineral-organic carbon interface may influence the extent of stabilization of organic carbon compounds in soils, which is important for global climate futures. The nanoscale structure of a model interface was examined here by depositing films of organic carbon compounds of contrasting chemical character, hydrophilic glucose and amphiphilic stearic acid, onto a soil mineral analogue (Al2O3). Neutron reflectometry, a technique which provides depth-sensitive insight into the organization of the thin films, indicates that glucose molecules reside in a layer between Al2O3 and stearic acid, a result that was verified by water contact angle measurements. Molecular dynamics simulations reveal the thermodynamic driving force behind glucose partitioning on the mineral interface: The entropic penalty of confining the less mobile glucose on the mineral surface is lower than for stearic acid. The fundamental information obtained here helps rationalize how complex arrangements of organic carbon on soil mineral surfaces may arise.

  7. Investigating ecological speciation in non-model organisms

    DEFF Research Database (Denmark)

    Foote, Andrew David

    2012-01-01

    Background: Studies of ecological speciation tend to focus on a few model biological systems. In contrast, few studies on non-model organisms have been able to infer ecological speciation as the underlying mechanism of evolutionary divergence. Questions: What are the pitfalls in studying ecological...... on killer whale evolutionary ecology in search of any difficulty in demonstrating causal links between variation in phenotype, ecology, and reproductive isolation in this non-model organism. Results: At present, we do not have enough evidence to conclude that adaptive phenotype traits linked to ecological...... variation underlie reproductive isolation between sympatric killer whale types. Perhaps ecological speciation has occurred, but it is hard to prove. We will probably face this outcome whenever we wish to address non-model organisms – species in which it is not easy to apply experimental approaches...

  8. A self-organized criticality model for plasma transport

    International Nuclear Information System (INIS)

    Carreras, B.A.; Newman, D.; Lynch, V.E.

    1996-01-01

    Many models of natural phenomena manifest the basic hypothesis of self-organized criticality (SOC). The SOC concept brings together the self-similarity on space and time scales that is common to many of these phenomena. The application of the SOC modelling concept to the plasma dynamics near marginal stability opens new possibilities of understanding issues such as Bohm scaling, profile consistency, broad band fluctuation spectra with universal characteristics and fast time scales. A model realization of self-organized criticality for plasma transport in a magnetic confinement device is presented. The model is based on subcritical resistive pressure-gradient-driven turbulence. Three-dimensional nonlinear calculations based on this model show the existence of transport under subcritical conditions. This model that includes fluctuation dynamics leads to results very similar to the running sandpile paradigm
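
    The running sandpile paradigm mentioned above can be sketched in a few lines: grains rain down at random, and any local slope exceeding a critical gradient relaxes by moving a fixed number of grains downhill, with an open boundary at one end. This is a generic illustration of SOC dynamics, not the authors' resistive pressure-gradient-driven turbulence model; all parameters are arbitrary.

```python
import random

def run_sandpile(cells=50, steps=2000, z_crit=8, n_transfer=3, seed=1):
    """Minimal 1D running sandpile: random deposition plus local toppling
    whenever a cell's height exceeds its neighbour's by more than z_crit.
    Grains leaving the last cell fall off the open boundary."""
    random.seed(seed)
    h = [0] * cells
    for _ in range(steps):
        h[random.randrange(cells)] += 1  # random rain of grains
        unstable = True
        while unstable:  # relax until no local slope exceeds z_crit
            unstable = False
            for i in range(cells - 1):
                if h[i] - h[i + 1] > z_crit:
                    h[i] -= n_transfer
                    h[i + 1] += n_transfer
                    unstable = True
            if h[-1] > z_crit:  # open boundary: grains leave the system
                h[-1] -= n_transfer
                unstable = True
    return h

profile = run_sandpile()
print(max(profile), min(profile))
```

    After many drops the pile self-organizes to a profile near the critical slope, and the sizes of the relaxation avalanches exhibit the broad-band statistics that motivate the SOC analogy for plasma transport.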

  9. An Ising model for metal-organic frameworks

    Science.gov (United States)

    Höft, Nicolas; Horbach, Jürgen; Martín-Mayor, Victor; Seoane, Beatriz

    2017-08-01

    We present a three-dimensional Ising model where lines of equal spins are frozen such that they form an ordered framework structure. The frame spins impose an external field on the rest of the spins (active spins). We demonstrate that this "porous Ising model" can be seen as a minimal model for condensation transitions of gas molecules in metal-organic frameworks. Using Monte Carlo simulation techniques, we compare the phase behavior of a porous Ising model with that of a particle-based model for the condensation of methane (CH4) in the isoreticular metal-organic framework IRMOF-16. For both models, we find a line of first-order phase transitions that end in a critical point. We show that the critical behavior in both cases belongs to the 3D Ising universality class, in contrast to other phase transitions in confinement such as capillary condensation.
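
    A minimal sketch of the porous-Ising idea, reduced to 2D for brevity (the paper's model is 3D): spins on a frozen sublattice act as the framework and never flip, while the remaining active spins evolve under the standard Metropolis rule on a periodic lattice. Lattice size, temperature, and the frame geometry below are arbitrary choices, not those of the study.

```python
import math
import random

def metropolis_porous_ising(L=16, beta=0.6, sweeps=200, seed=2):
    """Porous-Ising sketch: spins on every 4th row are frozen at +1
    ("frame" spins); the rest are active and updated with the standard
    single-spin-flip Metropolis rule (coupling J = 1, periodic lattice)."""
    random.seed(seed)
    frozen = [[(i % 4 == 0) for j in range(L)] for i in range(L)]
    s = [[1 if frozen[i][j] else random.choice((-1, 1)) for j in range(L)]
         for i in range(L)]
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = random.randrange(L), random.randrange(L)
            if frozen[i][j]:
                continue  # frame spins never flip
            nn = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
                  + s[i][(j + 1) % L] + s[i][(j - 1) % L])
            dE = 2 * s[i][j] * nn  # energy change of flipping s[i][j]
            if dE <= 0 or random.random() < math.exp(-beta * dE):
                s[i][j] = -s[i][j]
    return s

s = metropolis_porous_ising()
m = sum(sum(row) for row in s) / (16 * 16)
print(round(m, 3))
```

    The frozen rows act on the active spins exactly like the external field imposed by the framework in the paper's model; condensation in the MOF analogy corresponds to the active spins ordering.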

  10. Linking Land Surface Phenology and Vegetation-Plot Databases to Model Terrestrial Plant α-Diversity of the Okavango Basin

    Directory of Open Access Journals (Sweden)

    Rasmus Revermann

    2016-04-01

    Full Text Available In many parts of Africa, spatially-explicit information on plant α-diversity, i.e., the number of species in a given area, is missing as baseline information for spatial planning. We present an approach on how to combine vegetation-plot databases and remotely-sensed land surface phenology (LSP) metrics to predict plant α-diversity on a regional scale. We gathered data on plant α-diversity, measured as species density, from 999 vegetation plots sized 20 m × 50 m covering all major vegetation units of the Okavango basin in the countries of Angola, Namibia and Botswana. As predictor variables, we used MODIS LSP metrics averaged over 12 years (250-m spatial resolution) and three topographic attributes calculated from the SRTM digital elevation model. Furthermore, we tested whether additional climatic data could improve predictions. We tested three predictor subsets: (1) remote sensing variables; (2) climatic variables; and (3) all variables combined. We used two statistical modeling approaches, random forests and boosted regression trees, to predict vascular plant α-diversity. The resulting maps showed that the Miombo woodlands of the Angolan Central Plateau featured the highest diversity, and the lowest values were predicted for the thornbush savanna in the Okavango Delta area. Models built on the entire dataset exhibited the best performance, followed by climate-only models and remote sensing-only models. However, models including climate data showed artifacts. In spite of lower model performance, models based only on LSP metrics produced the most realistic maps. Furthermore, they revealed local differences in plant diversity of the landscape mosaic that were blurred by homogenous belts as predicted by climate-based models. This study pinpoints the high potential of LSP metrics used in conjunction with biodiversity data derived from vegetation-plot databases to produce spatial information on a regional scale that is urgently needed for basic
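
    The bagging idea behind a random forest can be sketched with a toy ensemble of regression stumps, each fitted to a bootstrap sample on one randomly chosen feature. This only illustrates the principle; the study's actual random forest and boosted regression tree models would be fitted with standard statistical packages, and the plot data below are hypothetical.

```python
import random

def fit_stump(data):
    """Best single-split regression stump on one randomly chosen feature.
    Returns (feature index, threshold, left-side mean, right-side mean)."""
    f = random.randrange(len(data[0][0]))
    best = None
    for x, _ in data:
        t = x[f]
        left = [y for xx, y in data if xx[f] <= t]
        right = [y for xx, y in data if xx[f] > t]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        err = (sum((y - ml) ** 2 for y in left)
               + sum((y - mr) ** 2 for y in right))
        if best is None or err < best[0]:
            best = (err, f, t, ml, mr)
    return best[1:]

def forest_predict(forest, x):
    """Average the stump predictions (bagging)."""
    preds = [(ml if x[f] <= t else mr) for f, t, ml, mr in forest]
    return sum(preds) / len(preds)

random.seed(0)
# Hypothetical plots: (LSP amplitude, elevation in km) -> species density.
data = [((a / 10, e / 100), 20 + 30 * (a / 10))
        for a in range(10) for e in range(0, 500, 100)]
# Bagging: each stump sees a bootstrap sample and a random feature.
forest = [fit_stump([random.choice(data) for _ in data]) for _ in range(30)]
print(round(forest_predict(forest, (0.8, 2.5)), 1))
```

    Stumps drawn on the uninformative elevation feature average out, while those on the phenology amplitude capture the trend; real forests grow deep trees and sample features at every split.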

  11. Regional Persistent Organic Pollutants' Environmental Impact Assessment and Control Model

    Directory of Open Access Journals (Sweden)

    Jurgis Staniskis

    2008-10-01

    Full Text Available The sources of formation, environmental distribution and fate of persistent organic pollutants (POPs) are increasingly seen as topics to be addressed and solved at the global scale. Therefore, there are already two international agreements concerning persistent organic pollutants: the 1998 Protocol on Persistent Organic Pollutants to the 1979 Convention on Long-Range Transboundary Air Pollution (Aarhus Protocol), and the Stockholm Convention on Persistent Organic Pollutants. For the assessment of environmental pollution by POPs, for risk assessment, and for the evaluation of new pollutants as potential candidates for inclusion in the POPs lists of the Stockholm Convention and/or the Aarhus Protocol, a set of different models has been developed or is under development. Multimedia models help describe and understand the environmental processes leading to global contamination by POPs and the actual risk to the environment and human health. However, there is a lack of tools based on a systematic and integrated approach to POPs management difficulties in the region.

  12. Making Organisms Model Human Behavior: Situated Models in North-American Alcohol Research, 1950-onwards

    Science.gov (United States)

    Leonelli, Sabina; Ankeny, Rachel A.; Nelson, Nicole C.; Ramsden, Edmund

    2014-01-01

    Argument We examine the criteria used to validate the use of nonhuman organisms in North-American alcohol addiction research from the 1950s to the present day. We argue that this field, where the similarities between behaviors in humans and non-humans are particularly difficult to assess, has addressed questions of model validity by transforming the situatedness of non-human organisms into an experimental tool. We demonstrate that model validity does not hinge on the standardization of one type of organism in isolation, as often the case with genetic model organisms. Rather, organisms are viewed as necessarily situated: they cannot be understood as a model for human behavior in isolation from their environmental conditions. Hence the environment itself is standardized as part of the modeling process; and model validity is assessed with reference to the environmental conditions under which organisms are studied. PMID:25233743

  13. Making organisms model human behavior: situated models in North-American alcohol research, since 1950.

    Science.gov (United States)

    Ankeny, Rachel A; Leonelli, Sabina; Nelson, Nicole C; Ramsden, Edmund

    2014-09-01

    We examine the criteria used to validate the use of nonhuman organisms in North-American alcohol addiction research from the 1950s to the present day. We argue that this field, where the similarities between behaviors in humans and non-humans are particularly difficult to assess, has addressed questions of model validity by transforming the situatedness of non-human organisms into an experimental tool. We demonstrate that model validity does not hinge on the standardization of one type of organism in isolation, as often the case with genetic model organisms. Rather, organisms are viewed as necessarily situated: they cannot be understood as a model for human behavior in isolation from their environmental conditions. Hence the environment itself is standardized as part of the modeling process; and model validity is assessed with reference to the environmental conditions under which organisms are studied.

  14. MODELLING CONSUMERS' DEMAND FOR ORGANIC FOOD PRODUCTS: THE SWEDISH EXPERIENCE

    Directory of Open Access Journals (Sweden)

    Manuchehr Irandoust

    2016-07-01

    Full Text Available This paper attempts to examine a few factors characterizing consumer preferences and behavior towards organic food products in the south of Sweden, using a proportional odds model which captures the natural ordering of the dependent variable and any inherent nonlinearities. The findings show that consumers' choice of organic food depends on the perceived benefits of organic food (environment, health, and quality) and on consumers' perception of and attitudes towards the labelling system, message framing, and local origin. In addition, high willingness to pay and income level increase the probability of buying organic food, while cultural differences and socio-demographic characteristics have no effect on consumer behaviour and attitudes towards organic food products. Policy implications are offered.
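
The proportional odds (ordered logit) model named in this abstract can be sketched in a few lines. A minimal illustration; the predictors, coefficients, and cutpoints below are hypothetical and not taken from the paper:

```python
import math

def proportional_odds_probs(x, beta, cutpoints):
    """Category probabilities under a proportional-odds (ordered logit) model.

    P(Y <= k | x) = logistic(c_k - beta . x); the probability of each ordered
    category is the difference of adjacent cumulative probabilities.
    `cutpoints` must be strictly increasing.
    """
    logistic = lambda z: 1.0 / (1.0 + math.exp(-z))
    eta = sum(b * xi for b, xi in zip(beta, x))
    cum = [logistic(c - eta) for c in cutpoints] + [1.0]
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]

# Hypothetical example: two predictors (say, perceived health benefit and
# income level) and three ordered responses (never / sometimes / regularly).
probs = proportional_odds_probs(x=[1.0, 0.5], beta=[0.8, 1.2],
                                cutpoints=[-0.5, 1.5])
```

Because a single linear predictor shifts all cumulative thresholds by the same amount, the model respects the natural ordering of the response, which is exactly the property the abstract highlights.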

  15. Modelization of tritium transfer into the organic compartments of algae

    International Nuclear Information System (INIS)

    Bonotto, S.; Gerber, G.B.; Arapis, G.; Kirchmann, R.

    1982-01-01

    Uptake of tritium oxide and its conversion into organic tritium was studied in four different types of algae with widely varying size and growth characteristics (Acetabularia acetabulum, Boergesenia forbesii, two strains of Chlamydomonas and Dunaliella bioculata). Water in the cell and the vacuoles equilibrates rapidly with external tritiated water. Tritium is actively incorporated into organically bound form as the organisms grow. During the stationary phase, incorporation of tritium is slow. There exists a discrimination against the incorporation of tritium into organically bound form. A model has been elaborated taking into account these different factors. It appears that transfer of organic tritium by algae growing near the sites of release would be significant only for actively growing algae. Algae growing slowly may, however, be useful as cumulative indicators of discontinuous tritium release. (author)
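
The growth-dependent incorporation described above can be sketched as a two-pool balance. A minimal sketch with illustrative numbers; the discrimination factor `alpha` and all rates are hypothetical, not the paper's fitted values:

```python
def simulate_obt(c_water, growth_rate, alpha=0.6, days=30, dt=0.1):
    """Euler integration of a minimal two-pool tritium model for algae.

    Free cell water is taken to equilibrate instantly with the external
    tritiated-water activity c_water (Bq/L); organically bound tritium (OBT)
    forms in proportion to biomass growth, reduced by an isotopic
    discrimination factor alpha < 1. Returns final biomass and the specific
    OBT activity per unit biomass. All numbers are illustrative only.
    """
    biomass, obt, t = 1.0, 0.0, 0.0
    while t < days:
        growth = growth_rate * biomass * dt   # exponential growth phase
        obt += alpha * c_water * growth       # tritium fixed with new biomass
        biomass += growth
        t += dt
    return biomass, obt / biomass

biomass, specific_obt = simulate_obt(c_water=100.0, growth_rate=0.2)
```

In this toy model the specific OBT activity of actively growing algae approaches `alpha * c_water`, while at zero growth (stationary phase) no new OBT forms, mirroring the transfer behaviour the abstract describes.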

  16. Dilbert-Peter model of organization effectiveness: computer simulations

    OpenAIRE

    Sobkowicz, Pawel

    2010-01-01

    We describe a computer model of the general effectiveness of a hierarchical organization, depending on two main aspects: the effects of promotion to managerial levels and the self-promotion efforts of individual employees, which reduce their actual productivity. The combination of judgment by appearance in promotion to higher levels of the hierarchy and the Peter Principle (which states that people are promoted to their level of incompetence) results in fast declines in the effectiveness of the organization. The...
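
The mechanism the abstract describes can be illustrated with a toy simulation. The structure and parameters below are my own assumptions for illustration, not Sobkowicz's actual model:

```python
import random

def simulate_org(levels=4, width=8, steps=200, by_appearance=True, seed=1):
    """Toy hierarchy in the spirit of the Dilbert-Peter mechanism (assumed form).

    Each employee is a (competence, self_promotion) pair, both in [0, 1].
    A vacancy one level up is filled by the lower-level employee with the
    highest visible self-promotion score (or highest competence, if
    by_appearance=False); per the Peter Principle, competence at the new
    level is re-drawn at random. Returns mean competence weighted toward
    higher levels as a crude effectiveness measure.
    """
    rng = random.Random(seed)
    org = [[(rng.random(), rng.random()) for _ in range(width)]
           for _ in range(levels)]
    for _ in range(steps):
        lvl = rng.randrange(1, levels)                 # vacancy at this level
        slot = rng.randrange(width)
        key = 1 if by_appearance else 0
        best = max(range(width), key=lambda i: org[lvl - 1][i][key])
        promoted = org[lvl - 1][best]
        org[lvl][slot] = (rng.random(), promoted[1])   # competence re-drawn
        org[lvl - 1][best] = (rng.random(), rng.random())  # replacement hire
    weights = range(1, levels + 1)
    return sum(w * sum(c for c, _ in layer) / width
               for w, layer in zip(weights, org)) / sum(weights)

effectiveness = simulate_org()
```

Comparing runs with `by_appearance=True` and `False` reproduces the qualitative point: when promotion rewards visibility rather than competence, and competence does not transfer across levels, effectiveness at the top decouples from merit.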

  17. Modeling nanostructure-enhanced light trapping in organic solar cells

    DEFF Research Database (Denmark)

    Adam, Jost

    A promising approach for improving the power conversion efficiencies of organic solar cells (OSCs) is by incorporating nanostructures in their thin film architecture to improve the light absorption in the device’s active polymer layers. Here, we present a modelling framework for the prediction....... Diffraction by fractal metallic supergratings. Optics Express, 15(24), 15628–15636 (2007) [3] Goszczak, A. J. et al. Nanoscale Aluminum dimples for light trapping in organic thin films (submitted)...

  18. A 3-Dimensional Facial Morpho-Dynamic Database in the development of a prediction model in orthognathic surgery.

    Science.gov (United States)

    Peretta, Redento; Concheri, Gianmaria; Comelli, Daniele; Meneghello, Roberto; Galzignato, Pier Francesco; Ferronato, Giuseppe

    2008-01-01

    Current methodologies for the prediction of post-surgical features of the face in orthognathic surgery are mainly 2-D. An improvement is certainly given by the introduction of CT, but its acceptance is controversial due to its high biological cost. As an alternative, this study presents an effective procedure for the construction of a 3-D textured digital model of the face and dental arches of patients with dentofacial malformations using a 3-D laser scanner, at no biological cost. A Konica-Minolta VIVID 910 3-D laser scanner is used to obtain multiple scans from different perspectives of the face of patients with dentofacial malocclusions requiring orthognathic surgery. These multiple views are then recombined, integrating also the maxillary and mandibular arch plaster casts, to obtain the 3-D textured model of the face and occlusion with minimal error. A viable methodology was identified for the face and occlusal modeling of orthognathic patients and validated in a test case, confirming its effectiveness: the 3-D model created accurately describes the actual features of the patient's face; the proposed methodology can be easily applied in the clinical routine to accurately record the steps of the surgical treatment and to perform accurate anthropometric analyses of the facial morphology, and thus constitutes the necessary database for the development of predictive tools in orthognathic surgery. The proposed method is effective in recording all the morphological facial features of patients with dentofacial malformations, to develop a facial modification database and tools for virtual surgery.

  19. Ectocarpus: a model organism for the brown algae.

    Science.gov (United States)

    Coelho, Susana M; Scornet, Delphine; Rousvoal, Sylvie; Peters, Nick T; Dartevelle, Laurence; Peters, Akira F; Cock, J Mark

    2012-02-01

    The brown algae are an interesting group of organisms from several points of view. They are the dominant organisms in many coastal ecosystems, where they often form large, underwater forests. They also have an unusual evolutionary history, being members of the stramenopiles, which are very distantly related to well-studied animal and green plant models. As a consequence of this history, brown algae have evolved many novel features, for example in terms of their cell biology and metabolic pathways. They are also one of only a small number of eukaryotic groups to have independently evolved complex multicellularity. Despite these interesting features, the brown algae have remained a relatively poorly studied group. This situation has started to change over the last few years, however, with the emergence of the filamentous brown alga Ectocarpus as a model system that is amenable to the genomic and genetic approaches that have proved to be so powerful in more classical model organisms such as Drosophila and Arabidopsis.

  20. GPCR-SSFE: A comprehensive database of G-protein-coupled receptor template predictions and homology models

    Directory of Open Access Journals (Sweden)

    Kreuchwig Annika

    2011-05-01

    Full Text Available Abstract Background G protein-coupled receptors (GPCRs) transduce a wide variety of extracellular signals to within the cell and therefore have a key role in regulating cell activity and physiological function. GPCR malfunction is responsible for a wide range of diseases including cancer, diabetes and hyperthyroidism and a large proportion of drugs on the market target these receptors. The three dimensional structure of GPCRs is important for elucidating the molecular mechanisms underlying these diseases and for performing structure-based drug design. Although structural data are restricted to only a handful of GPCRs, homology models can be used as a proxy for those receptors not having crystal structures. However, many researchers working on GPCRs are not experienced homology modellers and are therefore unable to benefit from the information that can be gleaned from such three-dimensional models. Here, we present a comprehensive database called the GPCR-SSFE, which provides initial homology models of the transmembrane helices for a large variety of family A GPCRs. Description Extending on our previous theoretical work, we have developed an automated pipeline for GPCR homology modelling and applied it to a large set of family A GPCR sequences. Our pipeline is a fragment-based approach that exploits available family A crystal structures. The GPCR-SSFE database stores the template predictions, sequence alignments, identified sequence and structure motifs and homology models for 5025 family A GPCRs. Users are able to browse the GPCR dataset according to their pharmacological classification or search for results using a UniProt entry name. It is also possible for a user to submit a GPCR sequence that is not contained in the database for analysis and homology model building. The models can be viewed using a Jmol applet and are also available for download along with the alignments. 
Conclusions The data provided by GPCR-SSFE are useful for investigating

  1. GPCR-SSFE: a comprehensive database of G-protein-coupled receptor template predictions and homology models.

    Science.gov (United States)

    Worth, Catherine L; Kreuchwig, Annika; Kleinau, Gunnar; Krause, Gerd

    2011-05-23

    G protein-coupled receptors (GPCRs) transduce a wide variety of extracellular signals to within the cell and therefore have a key role in regulating cell activity and physiological function. GPCR malfunction is responsible for a wide range of diseases including cancer, diabetes and hyperthyroidism and a large proportion of drugs on the market target these receptors. The three dimensional structure of GPCRs is important for elucidating the molecular mechanisms underlying these diseases and for performing structure-based drug design. Although structural data are restricted to only a handful of GPCRs, homology models can be used as a proxy for those receptors not having crystal structures. However, many researchers working on GPCRs are not experienced homology modellers and are therefore unable to benefit from the information that can be gleaned from such three-dimensional models. Here, we present a comprehensive database called the GPCR-SSFE, which provides initial homology models of the transmembrane helices for a large variety of family A GPCRs. Extending on our previous theoretical work, we have developed an automated pipeline for GPCR homology modelling and applied it to a large set of family A GPCR sequences. Our pipeline is a fragment-based approach that exploits available family A crystal structures. The GPCR-SSFE database stores the template predictions, sequence alignments, identified sequence and structure motifs and homology models for 5025 family A GPCRs. Users are able to browse the GPCR dataset according to their pharmacological classification or search for results using a UniProt entry name. It is also possible for a user to submit a GPCR sequence that is not contained in the database for analysis and homology model building. The models can be viewed using a Jmol applet and are also available for download along with the alignments. 
The data provided by GPCR-SSFE are useful for investigating general and detailed sequence-structure-function relationships

  2. Modelling the fate of organic micropollutants in stormwater ponds

    DEFF Research Database (Denmark)

    Vezzaro, Luca; Eriksson, Eva; Ledin, Anna

    2011-01-01

    Urban water managers need to estimate the potential removal of organic micropollutants (MP) in stormwater treatment systems to support MP pollution control strategies. This study documents how the potential removal of organic MP in stormwater treatment systems can be quantified by using multimedia...... models. The fate of four different MP in a stormwater retention pond was simulated by applying two steady-state multimedia fate models (EPI Suite and SimpleBox) commonly applied in chemical risk assessment and a dynamic multimedia fate model (Stormwater Treatment Unit Model for Micro Pollutants — STUMP...... substance inherent properties to calculate MP fate but differ in their ability to represent the small physical scale and high temporal variability of stormwater treatment systems. Therefore the three models generate different results. A Global Sensitivity Analysis (GSA) highlighted that settling...
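
Though the record is truncated, the competition it describes between removal processes and hydraulic throughput can be sketched with a steady-state mass balance. This is a deliberate simplification with invented parameter values, not the STUMP model itself:

```python
def pond_removal_fraction(q_in, volume, k_settle, k_deg=0.0):
    """Steady-state removal of a micropollutant in a fully mixed pond.

    A minimal stand-in for the multimedia fate models named in the
    abstract: first-order settling (k_settle, 1/d) and degradation
    (k_deg, 1/d) compete with hydraulic washout q_in/volume (1/d).
    Returns the removed fraction of the inflow load, derived from the
    CSTR balance  V dC/dt = Q*C_in - Q*C - (k_settle + k_deg)*V*C = 0.
    """
    k_wash = q_in / volume            # hydraulic washout rate, 1/d
    k_loss = k_settle + k_deg
    return k_loss / (k_loss + k_wash)

# Illustrative numbers: 500 m3/d inflow into a 2000 m3 pond.
f = pond_removal_fraction(q_in=500.0, volume=2000.0, k_settle=0.3)
```

The ratio makes the abstract's point explicit: removal depends not only on substance-inherent properties (the loss rates) but also on the small physical scale of the system, which sets the washout rate.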

  3. The power of an ontology-driven developmental toxicity database for data mining and computational modeling

    Science.gov (United States)

    Modeling of developmental toxicology presents a significant challenge to computational toxicology due to endpoint complexity and lack of data coverage. These challenges largely account for the relatively few modeling successes using the structure–activity relationship (SAR) parad...

  4. General approach to characterizing reservoir fluids for EoS models using a large PVT database

    DEFF Research Database (Denmark)

    Varzandeh, Farhad; Stenby, Erling Halfdan; Yan, Wei

    2017-01-01

    Fluid characterization is needed when applying any EoS model to reservoir fluids. It is especially important for non-cubic models such as PC-SAFT, where fluid characterization is less mature. Furthermore, there is great interest in applying non-cubic models to high pressure high temperature reservoir

  5. Improved AIOMFAC model parameterisation of the temperature dependence of activity coefficients for aqueous organic mixtures

    Science.gov (United States)

    Ganbavale, G.; Zuend, A.; Marcolli, C.; Peter, T.

    2015-01-01

    This study presents a new, improved parameterisation of the temperature dependence of activity coefficients in the AIOMFAC (Aerosol Inorganic-Organic Mixtures Functional groups Activity Coefficients) model applicable for aqueous as well as water-free organic solutions. For electrolyte-free organic and organic-water mixtures the AIOMFAC model uses a group-contribution approach based on UNIFAC (UNIversal quasi-chemical Functional-group Activity Coefficients). This group-contribution approach explicitly accounts for interactions among organic functional groups and between organic functional groups and water. The previous AIOMFAC version uses a simple parameterisation of the temperature dependence of activity coefficients, aimed to be applicable in the temperature range from ~ 275 to ~ 400 K. With the goal to improve the description of a wide variety of organic compounds found in atmospheric aerosols, we extend the AIOMFAC parameterisation for the functional groups carboxyl, hydroxyl, ketone, aldehyde, ether, ester, alkyl, aromatic carbon-alcohol, and aromatic hydrocarbon to atmospherically relevant low temperatures. To this end we introduce a new parameterisation for the temperature dependence. The improved temperature dependence parameterisation is derived from classical thermodynamic theory by describing effects from changes in molar enthalpy and heat capacity of a multi-component system. Thermodynamic equilibrium data of aqueous organic and water-free organic mixtures from the literature are carefully assessed and complemented with new measurements to establish a comprehensive database, covering a wide temperature range (~ 190 to ~ 440 K) for many of the functional group combinations considered. Different experimental data types and their processing for the estimation of AIOMFAC model parameters are discussed. 
The new AIOMFAC parameterisation for the temperature dependence of activity coefficients from low to high temperatures shows an overall improvement of 28% in
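
The classical thermodynamic route mentioned above (molar excess enthalpy plus heat capacity effects) can be written down directly. This is a generic Gibbs-Helmholtz sketch, not the published AIOMFAC equations; `dH` and `dCp` are hypothetical inputs:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def ln_gamma_T(ln_gamma_ref, dH, dCp, T, T_ref=298.15):
    """Extrapolate ln(activity coefficient) in temperature via Gibbs-Helmholtz.

    Assumes a partial molar excess enthalpy dH (J/mol) at T_ref and a
    constant excess heat capacity dCp (J/(mol K)), i.e.
    d(ln gamma)/d(1/T) = h_E(T)/R  with  h_E(T) = dH + dCp*(T - T_ref).
    Integrating in 1/T gives the two correction terms below. This is a
    generic textbook form, not the actual AIOMFAC parameterisation.
    """
    return (ln_gamma_ref
            + (dH / R) * (1.0 / T - 1.0 / T_ref)
            + (dCp / R) * (1.0 - T_ref / T - math.log(T / T_ref)))
```

The heat-capacity term is what matters at the atmospherically relevant low temperatures the abstract targets: far from `T_ref` it grows while the enthalpy term alone would extrapolate linearly in 1/T.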

  6. Database Replication

    Directory of Open Access Journals (Sweden)

    Marius Cristian MAZILU

    2010-12-01

    Full Text Available For someone who has worked in an environment in which the same database is used for data entry and reporting, or perhaps managed a single database server that was utilized by too many users, the advantages brought by data replication are clear. The main purpose of this paper is to emphasize those advantages, as well as to present the different types of database replication and the cases in which their use is recommended.

  7. Identification of fire modeling issues based on an analysis of real events from the OECD FIRE database

    Energy Technology Data Exchange (ETDEWEB)

    Hermann, Dominik [Swiss Federal Nuclear Safety Inspectorate ENSI, Brugg (Switzerland)

    2017-03-15

    Precursor analysis is widely used in the nuclear industry to judge the significance of events relevant to safety. However, in the case of events that may damage equipment through effects that are not ordinary functional dependencies, the analysis may not always fully appreciate the potential for further evolution of the event. For fires, which are one class of such events, this paper discusses modelling challenges that need to be overcome when performing a probabilistic precursor analysis. The events analyzed were selected from the Organisation for Economic Cooperation and Development (OECD) Fire Incidents Records Exchange (FIRE) Database.

  8. A Comparative Data-Based Modeling Study on Respiratory CO2 Gas Exchange during Mechanical Ventilation

    Directory of Open Access Journals (Sweden)

    Chang-Sei eKim

    2016-02-01

    Full Text Available The goal of this study is to derive a minimally complex but credible model of respiratory CO2 gas exchange that may be used in systematic design and pilot testing of closed-loop end-tidal CO2 controllers in mechanical ventilation. We first derived a candidate model that captures the essential mechanisms involved in the respiratory CO2 gas exchange process. Then, we simplified the candidate model to derive two lower-order candidate models. We compared these candidate models for predictive capability and reliability using experimental data collected from 25 pediatric subjects undergoing dynamically varying mechanical ventilation during surgical procedures. A two-compartment model equipped with transport delay to account for CO2 delivery between the lungs and the tissues showed modest but statistically significant improvement in predictive capability over the same model without transport delay. Aggregating the lungs and the tissues into a single compartment further degraded the predictive fidelity of the model. In addition, the model equipped with transport delay demonstrated superior reliability to the one without transport delay. Further, the respiratory parameters derived from the model equipped with transport delay, but not the one without transport delay, were physiologically plausible. The results suggest that gas transport between the lungs and the tissues must be taken into account to accurately reproduce the respiratory CO2 gas exchange process under conditions of wide-ranging and dynamically varying mechanical ventilation conditions.
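
The role of the transport delay highlighted above can be sketched with a minimal lumped model. Compartment volumes, flows, and the delay below are illustrative round numbers, not the paper's identified parameters:

```python
from collections import deque

def simulate_co2(minutes=20.0, dt=0.01, delay=0.5,
                 ventilation=5.0, vco2=0.2, q=5.0,
                 v_lung=3.0, v_tissue=15.0):
    """Two-compartment CO2 exchange with a circulatory transport delay.

    Lung and tissue CO2 fractions are coupled by blood flow q (L/min);
    blood leaving one compartment reaches the other `delay` minutes later,
    implemented as FIFO buffers. Metabolic production vco2 loads the tissue
    pool and alveolar ventilation washes CO2 out of the lung pool.
    Forward-Euler integration; all units and values are nominal.
    """
    n_delay = max(1, int(delay / dt))
    f_lung, f_tissue = 0.05, 0.06
    to_lung = deque([f_tissue] * n_delay)     # venous blood in transit
    to_tissue = deque([f_lung] * n_delay)     # arterial blood in transit
    t = 0.0
    while t < minutes:
        venous, arterial = to_lung.popleft(), to_tissue.popleft()
        f_lung += dt * (q * (venous - f_lung) - ventilation * f_lung) / v_lung
        f_tissue += dt * (q * (arterial - f_tissue) + vco2) / v_tissue
        to_lung.append(f_tissue)
        to_tissue.append(f_lung)
        t += dt
    return f_lung, f_tissue

f_lung, f_tissue = simulate_co2()
```

At steady state the lung fraction tends to `vco2 / ventilation` and the tissue pool sits higher by `vco2 / q`, so the venous-arterial gradient carries the metabolic load; the delay changes the transient response, which is the feature the model comparison in the abstract turns on.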

  9. A Methodology, Based on Analytical Modeling, for the Design of Parallel and Distributed Architectures for Relational Database Query Processors.

    Science.gov (United States)

    1987-12-01

    Figure titles: AFIT MPOA Architecture; DIRECT Architecture; Teradata Ynet Architecture. ...commercially available database machines, the Britton-Lee and Teradata machines. There have been other companies that announced database machines. ... The Britton-Lee IDM-500 series database machine is the most well known and widely used database machine

  10. The UCSC Genome Browser Database: update 2006

    DEFF Research Database (Denmark)

    Hinrichs, A S; Karolchik, D; Baertsch, R

    2006-01-01

    The University of California Santa Cruz Genome Browser Database (GBD) contains sequence and annotation data for the genomes of about a dozen vertebrate species and several major model organisms. Genome annotations typically include assembly data, sequence composition, genes and gene predictions, ...

  11. Public Opinion Poll Question Databases: An Evaluation

    Science.gov (United States)

    Woods, Stephen

    2007-01-01

    This paper evaluates five polling resources: iPOLL, Polling the Nations, Gallup Brain, Public Opinion Poll Question Database, and Polls and Surveys. Content was evaluated against disclosure standards from major polling organizations, scope against a model for public opinion polls, and presentation against a flow chart discussing search limitations and usability.

  12. Subject and authorship of records related to the Organization for Tropical Studies (OTS) in BINABITROP, a comprehensive database about Costa Rican biology

    Directory of Open Access Journals (Sweden)

    Julián Monge-Nájera

    2013-06-01

    Full Text Available BINABITROP is a bibliographical database of more than 38 000 records about the ecosystems and organisms of Costa Rica. In contrast with commercial databases, such as Web of Knowledge and Scopus, which exclude most of the scientific journals published in tropical countries, BINABITROP is a comprehensive record of knowledge on the tropical ecosystems and organisms of Costa Rica. We analyzed its contents for three sites (La Selva, Palo Verde and Las Cruces) and recorded scientific field, taxonomic group and authorship. We found that most records dealt with ecology and systematics, and that most authors published only one article in the study period (1963-2011). Most research was published in four journals: Biotropica, Revista de Biología Tropical / International Journal of Tropical Biology and Conservation, Zootaxa and Brenesia. This may be the first study of such a comprehensive database for any body of tropical biology literature.

  13. Towards model evaluation and identification using Self-Organizing Maps

    Directory of Open Access Journals (Sweden)

    M. Herbst

    2008-04-01

    Full Text Available The reduction of information contained in model time series through the use of aggregating statistical performance measures is very high compared to the amount of information that one would like to draw from it for model identification and calibration purposes. It has been readily shown that this loss imposes important limitations on model identification and diagnostics and thus constitutes an element of the overall model uncertainty. In this contribution we present an approach using a Self-Organizing Map (SOM) to circumvent the identifiability problem induced by the low discriminatory power of aggregating performance measures. Instead, a Self-Organizing Map is used to differentiate the spectrum of model realizations, obtained from Monte-Carlo simulations with a distributed conceptual watershed model, based on the recognition of different patterns in time series. Further, the SOM is used instead of a classical optimization algorithm to identify those model realizations among the Monte-Carlo simulation results that most closely approximate the pattern of the measured discharge time series. The results are analyzed and compared with the manually calibrated model as well as with the results of the Shuffled Complex Evolution algorithm (SCE-UA). In our study the latter slightly outperformed the SOM results. The SOM method, however, yields a set of equivalent model parameterizations and therefore also allows for confining the parameter space to a region that closely represents a measured data set. This particular feature renders the SOM potentially useful for future model identification applications.
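
The SOM-based pattern differentiation can be sketched in a few lines of NumPy. Grid size, learning-rate schedule, and neighbourhood width below are generic choices, not those of the paper:

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=30, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal Self-Organizing Map trained by sequential updates.

    `data` is an (n_samples, n_features) array, e.g. simulated discharge
    time series from Monte-Carlo model runs. Returns the trained weight
    grid; the best-matching unit of a measured series can then pick out
    model realizations with a similar pattern.
    """
    rng = np.random.default_rng(seed)
    gx, gy = grid
    weights = rng.random((gx, gy, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(gx), np.arange(gy),
                                  indexing="ij"), axis=-1)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in data:
            lr = lr0 * (1 - step / n_steps)          # decaying learning rate
            sigma = sigma0 * (1 - step / n_steps) + 0.5  # shrinking radius
            d = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(d), d.shape)  # best-matching unit
            dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
            h = np.exp(-dist2 / (2 * sigma ** 2))    # Gaussian neighbourhood
            weights += lr * h[..., None] * (x - weights)
            step += 1
    return weights

def best_matching_unit(weights, x):
    d = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)
```

After training, mapping each Monte-Carlo realization and the measured series to their best-matching units groups realizations by time-series pattern rather than by a single aggregated score, which is the core idea of the approach.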

  14. Transferable Atomic Multipole Machine Learning Models for Small Organic Molecules.

    Science.gov (United States)

    Bereau, Tristan; Andrienko, Denis; von Lilienfeld, O Anatole

    2015-07-14

    Accurate representation of the molecular electrostatic potential, which is often expanded in distributed multipole moments, is crucial for an efficient evaluation of intermolecular interactions. Here we introduce a machine learning model for multipole coefficients of atom types H, C, O, N, S, F, and Cl in any molecular conformation. The model is trained on quantum-chemical results for atoms in varying chemical environments drawn from thousands of organic molecules. Multipoles in systems with neutral, cationic, and anionic molecular charge states are treated with individual models. The models' predictive accuracy and applicability are illustrated by evaluating intermolecular interaction energies of nearly 1,000 dimers and the cohesive energy of the benzene crystal.
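
A shallow regression of this general kind (descriptor in, multipole coefficient out) can be sketched with kernel ridge regression. The descriptor, kernel width, and data below are invented stand-ins, not the paper's actual features, kernel, or training set:

```python
import numpy as np

def krr_fit(X, y, gamma=2.0, lam=1e-6):
    """Fit Gaussian-kernel ridge regression: alpha = (K + lam*I)^-1 y.

    X: (n, d) descriptors of atomic environments (hypothetical),
    y: (n,) target values, e.g. one multipole coefficient per atom.
    """
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(X_train, alpha, X_new, gamma=2.0):
    """Predict targets for new descriptors from the fitted dual weights."""
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2) @ alpha
```

Training one such model per charge state, as the abstract describes for neutral, cationic, and anionic systems, amounts to fitting separate `alpha` vectors on the corresponding subsets of the reference data.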

  15. Self-organized Criticality Model for Ocean Internal Waves

    International Nuclear Information System (INIS)

    Wang Gang; Hou Yijun; Lin Min; Qiao Fangli

    2009-01-01

    In this paper, we present a simple spring-block model for ocean internal waves based on self-organized criticality (SOC). The oscillations of the water blocks in the model display power-law behavior with an exponent of -2 in the frequency domain, which is similar to the current and sea water temperature spectra in the actual ocean and the universal Garrett and Munk deep ocean internal wave model [Geophysical Fluid Dynamics 2 (1972) 225; J. Geophys. Res. 80 (1975) 291]. The influence of the ratio of the driving force to the spring coefficient on SOC behaviors in the model is also discussed. (general)
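
The authors' spring-block model is not reproduced here, but the SOC mechanism it builds on can be illustrated with the classic Bak-Tang-Wiesenfeld sandpile, a standard stand-in for self-organized criticality:

```python
import random

def avalanche_sizes(n=20, n_drops=3000, seed=0):
    """2-D Bak-Tang-Wiesenfeld sandpile, a textbook SOC toy model
    (an illustration of the mechanism, not the paper's spring-block model).

    Grains are dropped at random cells; a cell holding 4 or more grains
    topples, sending one grain to each of its 4 neighbours (grains fall off
    at the edges). Avalanche size = number of topplings per drop; after a
    loading transient, sizes follow a broad, power-law-like distribution.
    """
    rng = random.Random(seed)
    z = [[0] * n for _ in range(n)]
    sizes = []
    for _ in range(n_drops):
        z[rng.randrange(n)][rng.randrange(n)] += 1
        size = 0
        stack = [(i, j) for i in range(n) for j in range(n) if z[i][j] >= 4]
        while stack:
            i, j = stack.pop()
            if z[i][j] < 4:
                continue
            z[i][j] -= 4
            size += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                a, b = i + di, j + dj
                if 0 <= a < n and 0 <= b < n:
                    z[a][b] += 1
                    if z[a][b] >= 4:
                        stack.append((a, b))
            if z[i][j] >= 4:
                stack.append((i, j))
        sizes.append(size)
    return sizes

sizes = avalanche_sizes()
```

The system drives itself to a critical state without parameter tuning: most drops cause nothing, while occasional system-spanning avalanches produce the scale-free statistics analogous to the power-law spectra discussed in the abstract.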

  16. A global database of seismically and non-seismically triggered landslides for 2D/3D numerical modeling

    Science.gov (United States)

    Domej, Gisela; Bourdeau, Céline; Lenti, Luca; Pluta, Kacper

    2017-04-01

    Landsliding is a common phenomenon worldwide. Every year, landslides ranging in size from very small to enormous all too often cause loss of life and disastrous damage to infrastructure, property and the environment. One main reason for more frequent catastrophes is the growth of the population on Earth, which entails extending urbanization to areas at risk. Landslides are triggered by a variety and combination of causes, among which the role of water and seismic activity appear to have the most serious consequences. In this regard, seismic shaking is of particular interest since topographic elevation as well as the landslide mass itself can trap waves and hence amplify incoming surface waves - a phenomenon known as "site effects". Research on the topic of landsliding due to seismic and non-seismic activity is extensive and a broad spectrum of methods for modeling slope deformation is available. Those methods range from pseudo-static and rigid-block based models to numerical models. The majority is limited to 2D modeling since more sophisticated approaches in 3D are still under development or calibration. However, the effect of lateral confinement as well as the mechanical properties of the adjacent bedrock might be of great importance because they may enhance the focusing of trapped waves in the landslide mass. A database was created to study 3D landslide geometries. It currently contains 277 distinct seismically and non-seismically triggered landslides spread all around the globe, whose rupture bodies were measured in all available detail. A specific methodology was therefore developed to maintain predefined standards, keep the bias as low as possible, and set up a query tool to explore the database. Besides geometry, additional information such as location, date, triggering factors, material, sliding mechanisms, event chronology, consequences and related literature, among other things, is stored for every case. The aim of the database is to enable

  17. There Is No Simple Model of the Plasma Membrane Organization

    Science.gov (United States)

    Bernardino de la Serna, Jorge; Schütz, Gerhard J.; Eggeling, Christian; Cebecauer, Marek

    2016-01-01

    Ever since technologies enabled the characterization of eukaryotic plasma membranes, heterogeneities in the distributions of their constituents have been observed. Over the years this led to the proposal of various models describing the plasma membrane organization, such as lipid shells, picket-and-fences, lipid rafts, or protein islands, as addressed in numerous publications and reviews. Instead of emphasizing one model, in this review we give a brief overview of current models and highlight how current experimental work, in one way or another, does not support the existence of a single overarching model. Instead, we highlight the vast variety of membrane properties and components, their influences and impacts. We believe that highlighting such controversial discoveries will stimulate unbiased research on plasma membrane organization and functionality, leading to a better understanding of this essential cellular structure. PMID:27747212

  18. Device model investigation of bilayer organic light emitting diodes

    International Nuclear Information System (INIS)

    Crone, B. K.; Davids, P. S.; Campbell, I. H.; Smith, D. L.

    2000-01-01

    Organic materials that have desirable luminescence properties, such as a favorable emission spectrum and high luminescence efficiency, are not necessarily suitable for single layer organic light-emitting diodes (LEDs) because the material may have unequal carrier mobilities or contact limited injection properties. As a result, single layer LEDs made from such organic materials are inefficient. In this article, we present device model calculations of single layer and bilayer organic LED characteristics that demonstrate the improvements in device performance that can occur in bilayer devices. We first consider an organic material where the mobilities of the electrons and holes are significantly different. The role of the bilayer structure in this case is to move the recombination away from the electrode that injects the low mobility carrier. We then consider an organic material with equal electron and hole mobilities but where it is not possible to make a good contact for one carrier type, say electrons. The role of a bilayer structure in this case is to prevent the holes from traversing the device without recombining. In both cases, single layer device limitations can be overcome by employing a two organic layer structure. The results are discussed using the calculated spatial variation of the carrier densities, electric field, and recombination rate density in the structures. (c) 2000 American Institute of Physics

  19. Financial incentives: alternatives to the altruistic model of organ donation.

    Science.gov (United States)

    Siminoff, L A; Leonard, M D

    1999-12-01

    Improvements in transplantation techniques have resulted in a demand for transplantable organs that far outpaces supply. Present efforts to secure organs use an altruistic system designed to appeal to a public that will donate organs because they are needed. Efforts to secure organs under this system have not been as successful as hoped. Many refinements to the altruistic model have been or are currently being proposed, such as "required request," "mandated choice," "routine notification," and "presumed consent." Recent calls for market approaches to organ procurement reflect growing doubts about the efficacy of these refinements. Market approaches generally use a "futures market," with benefits payable either periodically or when or if organs are procured. Lump-sum arrangements could include donations to surviving family or contributions to charities or to funeral costs. Possibilities for a periodic system of payments include reduced premiums for health or life insurance, or a reciprocity system whereby individuals who periodically reaffirm their willingness to donate are given preference if they require a transplant. Market approaches do raise serious ethical issues, including potential exploitation of the poor. Such approaches may also be effectively proscribed by the 1984 National Organ Transplant Act.

  20. Cleanup of a HLW nuclear fuel-reprocessing center using 3-D database modeling technology

    International Nuclear Information System (INIS)

    Sauer, R.C.

    1992-01-01

    A significant challenge in decommissioning any large nuclear facility is how to solidify the large volume of residual high-level radioactive waste (HLW) without structurally interfering with the equipment and piping of the original facility, and without requiring rework due to interferences not identified during the design process. This problem is further compounded when the facility to be decommissioned is a 35-year-old nuclear fuel reprocessing center designed to recover usable uranium and plutonium. Facilities of this vintage tend to lack full documentation of the design changes made over the years; as a result, crude traps or pockets of high-level contamination may not be fully identified. Any miscalculation in the construction or modification sequences could complicate the overall dismantling and decontamination of the facility. This paper reports that development of a 3-dimensional (3-D) computer database tool was considered critical in defining the most complex portions of this one-of-a-kind vitrification facility.

  1. Combining a weed traits database with a population dynamics model predicts shifts in weed communities

    DEFF Research Database (Denmark)

    Storkey, Jonathan; Holst, Niels; Bøjer, Ole Mission

    2015-01-01

    , populated and analysed, initially using data for 19 common European weeds, to begin to consolidate trait data in a single repository. The initial choice of traits was driven by the requirements of empirical models of weed population dynamics to identify correlations between traits and model parameters...

  2. Targeted Therapy Database (TTD): a model to match patient's molecular profile with current knowledge on cancer biology.

    Science.gov (United States)

    Mocellin, Simone; Shrager, Jeff; Scolyer, Richard; Pasquali, Sandro; Verdi, Daunia; Marincola, Francesco M; Briarava, Marta; Gobbel, Randy; Rossi, Carlo; Nitti, Donato

    2010-08-10

    The efficacy of current anticancer treatments is far from satisfactory and many patients still die of their disease. A general agreement exists on the urgency of developing molecularly targeted therapies, although their implementation in the clinical setting is in its infancy. In fact, despite the wealth of preclinical studies addressing these issues, the difficulty of testing each targeted therapy hypothesis in the clinical arena represents an intrinsic obstacle. As a consequence, we are witnessing a paradoxical situation where most hypotheses about the molecular and cellular biology of cancer remain clinically untested and therefore do not translate into a therapeutic benefit for patients. To present a computational method aimed to comprehensively exploit the scientific knowledge in order to foster the development of personalized cancer treatment by matching the patient's molecular profile with the available evidence on targeted therapy. To this aim we focused on melanoma, an increasingly diagnosed malignancy for which the need for novel therapeutic approaches is paradigmatic since no effective treatment is available in the advanced setting. Relevant data were manually extracted from peer-reviewed full-text original articles describing any type of anti-melanoma targeted therapy tested in any type of experimental or clinical model. To this purpose, Medline, Embase, Cancerlit and the Cochrane databases were searched. We created a manually annotated database (Targeted Therapy Database, TTD) where the relevant data are gathered in a formal representation that can be computationally analyzed. Dedicated algorithms were set up for the identification of the prevalent therapeutic hypotheses based on the available evidence and for ranking treatments based on the molecular profile of individual patients. In this essay we describe the principles and computational algorithms of an original method developed to fully exploit the available knowledge on cancer biology with the

  3. Modeling of the transient mobility in disordered organic semiconductors

    NARCIS (Netherlands)

    Germs, W.C.; Van der Holst, J.M.M.; Van Mensfoort, S.L.M.; Bobbert, P.A.; Coehoorn, R.

    2011-01-01

    In non-steady-state experiments, the electrical response of devicesbased on disordered organic semiconductors often shows a large transient contribution due to relaxation of the out-of-equilibrium charge-carrier distribution. We have developed a model describing this process, based only on the

  4. There Is No Simple Model of the Plasma Membrane Organization

    Czech Academy of Sciences Publication Activity Database

    de la serna, J. B.; Schütz, G.; Eggeling, Ch.; Cebecauer, Marek

    2016-01-01

    Roč. 4, SEP 2016 (2016), 106 ISSN 2296-634X R&D Projects: GA ČR GA15-06989S Institutional support: RVO:61388955 Keywords : plasma membrane * membrane organization models * heterogeneous distribution Subject RIV: CF - Physical ; Theoretical Chemistry

  5. Waste Reduction Model (WARM) Resources for Small Businesses and Organizations

    Science.gov (United States)

    This page provides a brief overview of how EPA’s Waste Reduction Model (WARM) can be used by small businesses and organizations. The page includes a brief summary of uses of WARM for the audience and links to other resources.

  6. Editorial: Plant organ abscission: from models to crops

    Science.gov (United States)

    The shedding of plant organs is a highly coordinated process essential for both vegetative and reproductive development (Addicott, 1982; Sexton and Roberts, 1982; Roberts et al., 2002; Leslie et al., 2007; Roberts and Gonzalez-Carranza, 2007; Estornell et al., 2013). Research with model plants, name...

  7. Modeling growth of specific spoilage organisms in tilapia ...

    African Journals Online (AJOL)

    2012-03-29

    Tilapia is an important aquaculture fish, but severe spoilage of tilapia is a serious problem for global aquaculture. The spoilage is mostly caused by specific spoilage organisms (SSO). Therefore, it is very important to use microbial models to predict the growth of SSO in tilapia. This study firstly verified.

  8. A model of virtual organization for corporate visibility and ...

    African Journals Online (AJOL)

    This paper considers the existing body of research in business and Information and Communication Technology (ICT) and examines a theoretical framework for value creation in a virtual world. Following a proposed model, a new strategic paradigm is created for corporate value, and virtual organizations (VO) apply the use of ...

  9. Modeling growth of specific spoilage organisms in tilapia ...

    African Journals Online (AJOL)

    Tilapia is an important aquaculture fish, but severe spoilage of tilapia is a serious problem for global aquaculture. The spoilage is mostly caused by specific spoilage organisms (SSO). Therefore, it is very important to use microbial models to predict the growth of SSO in tilapia. This study firstly verified Pseudomonas and Vibrio ...

  10. Promoting Representational Competence with Molecular Models in Organic Chemistry

    Science.gov (United States)

    Stull, Andrew T.; Gainer, Morgan; Padalkar, Shamin; Hegarty, Mary

    2016-01-01

    Mastering the many different diagrammatic representations of molecules used in organic chemistry is challenging for students. This article summarizes recent research showing that manipulating 3-D molecular models can facilitate the understanding and use of these representations. Results indicate that students are more successful in translating…

  11. GRACE Data-based High Accuracy Global Static Earth's Gravity Field Model

    Directory of Open Access Journals (Sweden)

    CHEN Qiujie

    2016-04-01

    Recovering a highly accurate static Earth gravity field from GRACE satellite data is one of the central topics in geodesy. Since linearization errors of the dynamic approach increase quickly as the satellite arc length is extended, we established in this paper a modified dynamic approach for processing GRACE orbit and range-rate measurements, which treats the orbit observations of the twin GRACE satellites as approximate values for linearization. Using GRACE data spanning the period January 2003 to December 2010, comprising satellite attitudes, orbits, range rates, and non-conservative forces, we developed two global static gravity field models. One is the unconstrained solution, Tongji-Dyn01s, complete to degree and order 180; the other is the Tongji-Dyn01k model, computed by applying a Kaula constraint. Comparisons between our models and the latest GRACE-only models published by different international groups (including AIUB-GRACE03, GGM05S, ITSG-Grace2014k and Tongji-GRACE01), together with external validations against marine gravity anomalies from the DTU13 product and height anomalies from GPS/levelling data, were performed in this study. The results demonstrate that Tongji-Dyn01s is at the same accuracy level as the latest GRACE-only models, while the Tongji-Dyn01k model is closer to EIGEN6C2 than the other GRACE-only models as a whole.
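    The Kaula constraint mentioned for the Tongji-Dyn01k solution amounts to a degree-dependent (Tikhonov-style) regularization of the least-squares adjustment. A minimal sketch under stated assumptions: the design matrix and observations are random stand-ins rather than GRACE data, one coefficient per degree, and the weight matrix follows Kaula's rule of thumb that coefficient magnitudes decay roughly as 1e-5/l^2 — none of these numbers come from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
degrees = np.arange(2, 12)               # hypothetical SH degrees, one coefficient each
A = rng.normal(size=(30, degrees.size))  # stand-in design matrix (not GRACE partials)
y = rng.normal(size=30)                  # stand-in observations

# Inverse Kaula variances: sigma_l ~ 1e-5 / l^2, so W_ll = (l^2 / 1e-5)^2
W = np.diag((degrees.astype(float) ** 2 / 1e-5) ** 2)
lam = 1e-12  # trade-off between data fit and constraint (illustrative)

# Constrained vs. unconstrained normal-equation solutions
x_reg = np.linalg.solve(A.T @ A + lam * W, A.T @ y)
x_unc = np.linalg.solve(A.T @ A, A.T @ y)

def wnorm(x):
    """Kaula-weighted squared norm x^T W x."""
    return float(x @ W @ x)

print("constrained W-norm  :", wnorm(x_reg))
print("unconstrained W-norm:", wnorm(x_unc))
```

    A guaranteed property of this estimator is that the W-weighted norm of the constrained solution never exceeds that of the unconstrained one, which is exactly the damping the Kaula constraint is meant to impose on poorly determined high-degree coefficients.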

  12. YMDB: the Yeast Metabolome Database

    Science.gov (United States)

    Jewison, Timothy; Knox, Craig; Neveu, Vanessa; Djoumbou, Yannick; Guo, An Chi; Lee, Jacqueline; Liu, Philip; Mandal, Rupasri; Krishnamurthy, Ram; Sinelnikov, Igor; Wilson, Michael; Wishart, David S.

    2012-01-01

    The Yeast Metabolome Database (YMDB, http://www.ymdb.ca) is a richly annotated ‘metabolomic’ database containing detailed information about the metabolome of Saccharomyces cerevisiae. Modeled closely after the Human Metabolome Database, the YMDB contains >2000 metabolites with links to 995 different genes/proteins, including enzymes and transporters. The information in YMDB has been gathered from hundreds of books, journal articles and electronic databases. In addition to its comprehensive literature-derived data, the YMDB also contains an extensive collection of experimental intracellular and extracellular metabolite concentration data compiled from detailed Mass Spectrometry (MS) and Nuclear Magnetic Resonance (NMR) metabolomic analyses performed in our lab. This is further supplemented with thousands of NMR and MS spectra collected on pure, reference yeast metabolites. Each metabolite entry in the YMDB contains an average of 80 separate data fields including comprehensive compound description, names and synonyms, structural information, physico-chemical data, reference NMR and MS spectra, intracellular/extracellular concentrations, growth conditions and substrates, pathway information, enzyme data, gene/protein sequence data, as well as numerous hyperlinks to images, references and other public databases. Extensive searching, relational querying and data browsing tools are also provided that support text, chemical structure, spectral, molecular weight and gene/protein sequence queries. Because of S. cerevisiae's importance as a model organism for biologists and as a biofactory for industry, we believe this kind of database could have considerable appeal not only to metabolomics researchers, but also to yeast biologists, systems biologists, the industrial fermentation industry, as well as the beer, wine and spirit industry. PMID:22064855

  13. Federal databases

    International Nuclear Information System (INIS)

    Welch, M.J.; Welles, B.W.

    1988-01-01

    Accident statistics on all modes of transportation are available as risk assessment analytical tools through several federal agencies. This paper reports on the examination of the accident databases by personal contact with the federal staff responsible for administration of the database programs. This activity, sponsored by the Department of Energy through Sandia National Laboratories, is an overview of the national accident data on highway, rail, air, and marine shipping. For each mode, the definition or reporting requirements of an accident are determined and the method of entering the accident data into the database is established. Availability of the database to others, ease of access, costs, and who to contact were prime questions to each of the database program managers. Additionally, how the agency uses the accident data was of major interest

  14. Predictive Modeling for Strongly Correlated f-electron Systems: A first-principles and database driven machine learning approach

    Science.gov (United States)

    Ahmed, Towfiq; Khair, Adnan; Abdullah, Mueen; Harper, Heike; Eriksson, Olle; Wills, John; Zhu, Jian-Xin; Balatsky, Alexander

    Data-driven computational tools are being developed for theoretical understanding of electronic properties in f-electron based materials, e.g., lanthanide and actinide compounds. Here we show our preliminary work on Ce compounds. Due to a complex interplay among the hybridization of f-electrons with the non-interacting conduction band, spin-orbit coupling, and strong Coulomb repulsion of f-electrons, no model- or first-principles-based theory can fully explain all the structural and functional phases of f-electron systems. Motivated by the large need for predictive modeling of actinide compounds, we adopted a data-driven approach. We found a negative correlation between the hybridization and the atomic volume. Mutual information between these two features was also investigated. In order to extend our search space with more features and predictability to new compounds, we are currently developing an electronic structure database. Our f-electron database will potentially be aided by machine learning (ML) algorithms to extract complex electronic, magnetic and structural properties in f-electron systems, and thus will open up new pathways for predictive capabilities and design principles of complex materials. NSEC, IMS at LANL.
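    The correlation and mutual-information screening described above can be sketched in a few lines. The descriptor values below are synthetic stand-ins (an anti-correlated pair by construction), not Ce-compound data, and the histogram estimator of mutual information is a generic choice, not necessarily what the authors used.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
# Synthetic descriptors: hybridization decreases with atomic volume by construction
atomic_volume = rng.uniform(20.0, 40.0, size=200)
hybridization = 5.0 - 0.1 * atomic_volume + rng.normal(0.0, 0.1, size=200)

# Linear association: Pearson correlation coefficient and p-value
r, p = pearsonr(atomic_volume, hybridization)

def mutual_information(x, y, bins=8):
    """Histogram estimate of I(X;Y) in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

mi = mutual_information(atomic_volume, hybridization)
print(f"Pearson r = {r:.2f}, mutual information = {mi:.2f} nats")
```

    With real descriptors the same two numbers separate linear association (r) from general statistical dependence (mutual information), which is why both are worth reporting.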

  15. A new model validation database for evaluating AERMOD, NRPB R91 and ADMS using krypton-85 data from BNFL Sellafield

    International Nuclear Information System (INIS)

    Hill, R.; Taylor, J.; Lowles, I.; Emmerson, K.; Parker, T.

    2004-01-01

    The emission of krypton-85 (85Kr) from nuclear fuel reprocessing operations provides a classical passive tracer for the study of atmospheric dispersion. This is because of the persistence of this radioisotope in the atmosphere, due to its long radioactive half-life and inert chemistry, and the low background levels that result from the limited number of anthropogenic sources globally. The BNFL Sellafield site in Cumbria (UK) is one of the most significant point sources of 85Kr in the northern hemisphere, with 85Kr being discharged from two stacks on the site, MAGNOX and THORP. Field experiments have been conducted since October 1996 using a cryogenic distillation technique (Janssens et al., 1986) to quantify the ground-level concentration of 85Kr. This paper reports on the construction of a model validation database to allow evaluation of regulatory atmospheric dispersion models using the measured 85Kr concentrations as a tracer. The results of the database for local- and regional-scale dispersion are presented. (orig.)

  16. Modeling secondary organic aerosol formation through cloud processing of organic compounds

    Directory of Open Access Journals (Sweden)

    J. Chen

    2007-10-01

    Interest in the potential formation of secondary organic aerosol (SOA) through reactions of organic compounds in condensed aqueous phases is growing. In this study, the potential formation of SOA from irreversible aqueous-phase reactions of organic species in clouds was investigated. A newly proposed aqueous-phase chemistry mechanism (AqChem) is coupled with the existing gas-phase Caltech Atmospheric Chemistry Mechanism (CACM) and the Model to Predict the Multiphase Partitioning of Organics (MPMPO) that simulate SOA formation. AqChem treats irreversible organic reactions that lead mainly to the formation of carboxylic acids, which are usually less volatile than the corresponding aldehydic compounds. Zero-dimensional model simulations were performed for tropospheric conditions with clouds present for three consecutive hours per day. These simulations show that 48-h average SOA formation is increased by 27% for a rural scenario with strong monoterpene emissions and by 7% for an urban scenario with strong emissions of aromatic compounds when irreversible organic reactions in clouds are considered. AqChem was also incorporated into the Community Multiscale Air Quality Model (CMAQ) version 4.4 with CACM/MPMPO and applied to a previously studied photochemical episode (3-4 August 2004) focusing on the eastern United States. The CMAQ study indicates that the maximum contribution of SOA formation from irreversible reactions of organics in clouds is 0.28 μg/m3 for 24-h average concentrations and 0.60 μg/m3 for one-hour average concentrations at certain locations. On average, domain-wide surface SOA predictions for the episode are increased by 9% when irreversible, in-cloud processing of organics is considered. Because aldehydes of carbon number greater than four are assumed to convert fully to the corresponding carboxylic acids upon reaction with OH in cloud droplets and this assumption may overestimate

  17. A database of wavefront measurements for laser system modeling, optical component development and fabrication process qualification

    International Nuclear Information System (INIS)

    Wolfe, C.R.; Lawson, J.K.; Aikens, D.M.; English, R.E.

    1995-01-01

    In the second half of the 1990's, LLNL and others anticipate designing and beginning construction of the National Ignition Facility (NIF). The NIF will be capable of producing the world's first laboratory-scale fusion ignition and burn reaction by imploding a small target. The NIF will utilize approximately 192 simultaneous laser beams for this purpose. The laser will be capable of producing a shaped energy pulse of at least 1.8 million joules (MJ) with peak power of at least 500 trillion watts (TW). In total, the facility will require more than 7,000 large optical components. The performance of a high power laser of this kind can be seriously degraded by the presence of low amplitude, periodic modulations in the surface and transmitted wavefronts of the optics used. At high peak power, these phase modulations can convert into large intensity modulations by non-linear optical processes. This in turn can lead to loss of energy on target via many well known mechanisms. In some cases laser damage to the optics downstream of the source of the phase modulation can occur. The database described here contains wavefront phase maps of early prototype optical components for the NIF. It has only recently become possible to map the wavefront of these large aperture components with high spatial resolution. Modern large aperture static fringe and phase shifting interferometers equipped with large area solid state detectors have made this possible. In a series of measurements with these instruments, wide spatial bandwidth can be detected in the wavefront

  18. Modeling of activation data in the BrainMapTM database: Detection of outliers

    DEFF Research Database (Denmark)

    Nielsen, Finn Årup; Hansen, Lars Kai

    2002-01-01

    We describe a system for meta-analytical modeling of activation foci from functional neuroimaging studies. Our main vehicle is a set of density models in Talairach space capturing the distribution of activation foci in sets of experiments labeled by lobar anatomy. One important use of such densit...... of atlases for outlier detection. Hum. Brain Mapping 15:146-156, 2002. © 2002 Wiley-Liss, Inc....
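    The density modeling described above lends itself to a simple sketch of outlier detection among activation foci: fit a density model to the foci coordinates and flag the lowest-density focus. The coordinates below are random stand-ins for Talairach coordinates (in mm), not BrainMap data, and scipy's gaussian_kde is a generic density estimator rather than the authors' exact model.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
# 60 plausible foci clustered in one region, plus one implausible focus
foci = rng.normal(loc=[40.0, -20.0, 50.0], scale=8.0, size=(60, 3))
foci = np.vstack([foci, [[-60.0, 70.0, -40.0]]])

kde = gaussian_kde(foci.T)         # gaussian_kde expects shape (dims, n_samples)
density = kde(foci.T)              # model density evaluated at each focus
outlier = int(np.argmin(density))  # lowest-density focus = candidate outlier
print("candidate outlier:", foci[outlier])
```

    In the actual meta-analysis the density models are conditioned on anatomical labels, so a focus is flagged as an outlier relative to the reported lobar anatomy rather than relative to all foci at once.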

  19. Modeling of secondary organic aerosol yields from laboratory chamber data

    Directory of Open Access Journals (Sweden)

    M. N. Chan

    2009-08-01

    Laboratory chamber data serve as the basis for constraining models of secondary organic aerosol (SOA) formation. Current models fall into three categories: empirical two-product (Odum), product-specific, and volatility basis set. The product-specific and volatility basis set models are applied here to represent laboratory data on the ozonolysis of α-pinene under dry, dark, and low-NOx conditions in the presence of ammonium sulfate seed aerosol. Using five major identified products, the model is fit to the chamber data. From the optimal fitting, SOA oxygen-to-carbon (O/C) and hydrogen-to-carbon (H/C) ratios are modeled. The discrepancy between measured H/C ratios and those based on the oxidation products used in the model fitting suggests the potential importance of particle-phase reactions. Data fitting is also carried out using the volatility basis set, wherein oxidation products are parsed into volatility bins. The product-specific model is most likely hindered by lack of explicit inclusion of particle-phase accretion compounds. While prospects for identification of the majority of SOA products for major volatile organic compound (VOC) classes remain promising, for the near future empirical product or volatility basis set models remain the approaches of choice.
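    The empirical two-product (Odum) model mentioned above has a standard closed form for the aerosol yield as a function of absorbing organic mass M_o: Y(M_o) = M_o * sum_i alpha_i*K_i / (1 + K_i*M_o). A minimal sketch; the alpha_i (mass stoichiometric yields) and K_i (partitioning coefficients, m^3/ug) below are illustrative placeholders, not values fitted in the paper.

```python
def two_product_yield(M_o, params):
    """Odum two-product SOA yield for organic aerosol mass M_o (ug/m^3).

    params is a list of (alpha_i, K_i) pairs; the yield saturates at
    sum(alpha_i) as M_o grows large.
    """
    return M_o * sum(a * K / (1.0 + K * M_o) for a, K in params)

params = [(0.125, 0.088), (0.102, 0.0788)]  # (alpha_i, K_i), illustrative only
for M_o in (1.0, 10.0, 50.0):
    print(f"M_o = {M_o:5.1f} ug/m^3 -> yield = {two_product_yield(M_o, params):.3f}")
```

    The characteristic behavior — yield rising with M_o and saturating at sum(alpha_i) — is what the two-product fit captures from chamber data with only four parameters.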

  20. On the influence of the exposure model on organ doses

    International Nuclear Information System (INIS)

    Drexler, G.; Eckerl, H.

    1988-01-01

    Based on the design characteristics of the MIRD-V phantom, two sex-specific adult phantoms, ADAM and EVA, were introduced especially for the calculation of organ doses resulting from external irradiation. Although the body characteristics of all the phantoms are in good agreement with those of the reference man and woman, they have some disadvantages related to the location and shape of organs and the form of the whole body. To overcome these disadvantages and to obtain more realistic phantoms, a technique based on computed tomographic data (voxel phantoms) was developed. This technique allows any physical phantom or real body to be converted into computer files. The improvements are of special importance with regard to the skeleton, because better modeling of the bone surfaces and separation of hard bone and bone marrow can be achieved. For photon irradiation, the sensitivity of the organ doses or the effective dose equivalent to the choice of exposure model is important for operational radiation protection

  1. Sustainable Organic Farming For Environmental Health A Social Development Model

    Directory of Open Access Journals (Sweden)

    Ijun Rijwan Susanto

    2015-05-01

    In this study the researcher attempted (1) to understand the basic features of organic farming in the Paguyuban Pasundan Cianjur; (2) to describe and understand how the stakeholders were able to internalize the challenges of organic farming in their lived experiences in the community; (3) to describe and understand how the stakeholders were able to internalize and apply the values and benefits of organic farming in support of environmental health in their lived experiences in the community; (4) to describe and understand how the stakeholders articulate their ideas regarding a model of sustainable organic farming; and (5) to formulate policy recommendations for organic farming. The researcher employed triangulation, a thorough approach that provides breadth and depth to an investigation, offering researchers a more accurate picture of the phenomenon. In implementing triangulation, the researcher conducted several interviews until saturation was reached. After completion of the interviews, the results were written up, compiled, and shown to the participants so that every statement could be checked by every participant. In addition, the researcher also checked the relevant documents and made direct observations in the field. The participants of this study were the stakeholders, namely (1) the leader of the Paguyuban Pasundan Organic Farmers Cianjur (PPOFC); (2) members of the Paguyuban Pasundan Organic Farmers Cianjur; (3) an NGO leader; (4) government officials of agriculture; (5) organic food businesses; and (6) organic food consumers. Generally, the findings of the study revealed the following: (1) PPOFC began to see the reality of the impact of modern agriculture in fertility problems caused by soil contaminated with residues of agricultural chemicals, such as chemical fertilizers and chemical pesticides, and so its leader wants to restore soil fertility through environmentally friendly farming practices; (2) regarding the challenges of organic farming in their lived experiences in the community, farmers did not

  2. Branching and self-organization in marine modular colonial organisms: a model.

    Science.gov (United States)

    Sánchez, Juan Armando; Lasker, Howard R; Nepomuceno, Erivelton G; Sánchez, J Dario; Woldenberg, Michael J

    2004-03-01

    Despite the universality of branching patterns in marine modular colonial organisms, there is neither a clear explanation of the growth of their branching forms nor an understanding of how these organisms conserve their shape during development. This study develops a model of branching and colony growth using parameters and variables related to actual modular structures (e.g., branches) in Caribbean gorgonian corals (Cnidaria). Gorgonians exhibiting treelike networks branch subapically, creating hierarchical mother-daughter relationships among branches. We modeled the intrinsic subapical branching along with an ecological-physiological limit to growth, or maximum number of mother branches (k). Shape is preserved by maintaining a constant ratio (c) between the total number of branches and the mother branches. The size-frequency distribution of mother branches follows a scaling power law, suggesting self-organized criticality. Differences in branching among species with the same k values are determined by r (branching rate) and c. Species with rr/2 or c>r>0). Ecological/physiological constraints limit growth without altering colony form or the interaction between r and c. The model describes the branching dynamics giving form to colonies and how colony growth declines over time without altering the branching pattern. This model provides a theoretical basis to study branching as a simple function of the number of branches, independently of ordering- and bifurcation-based schemes.
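    The growth rules described — subapical branching at rate r, a cap of k mother branches, and a fixed ratio c of total branches to mother branches — can be sketched as a toy stochastic simulation. All parameter values below are illustrative, not estimates from the gorgonian data.

```python
import random

def grow_colony(r=0.3, c=3.0, k=100, steps=60, seed=1):
    """Grow mother branches at rate r up to the limit k; shape is
    preserved by holding total/mother branches fixed at c."""
    random.seed(seed)
    mothers = 1
    for _ in range(steps):
        # each mother branch may spawn one new mother branch (subapical branching)
        new = sum(1 for _ in range(mothers) if random.random() < r)
        mothers = min(k, mothers + new)   # ecological/physiological growth limit k
    total = int(c * mothers)              # constant ratio c preserves colony shape
    return mothers, total

mothers, total = grow_colony()
print(f"mother branches = {mothers}, total branches = {total}")
```

    The cap k halts growth without changing the c ratio, mirroring the paper's point that ecological constraints limit colony size without altering colony form.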

  3. EcoClimate: a database of climate data from multiple models for past, present, and future for macroecologists and biogeographers

    Directory of Open Access Journals (Sweden)

    Matheus Souza Lima-Ribeiro

    2015-08-01

    Studies in biogeography and macroecology have increased massively since climate and biodiversity databases became easily accessible. Climate simulations for past, present, and future have enabled macroecologists and biogeographers to combine data on species' occurrences with detailed information on climatic conditions through time to predict biological responses across large spatial and temporal scales. Here we present and describe ecoClimate, a free and open data repository developed to serve useful climate data to macroecologists and biogeographers. ecoClimate arose from the need for climate layers with which to build ecological niche models and test macroecological and biogeographic hypotheses in the past, present, and future. ecoClimate offers a suite of processed, multi-temporal climate data sets from the most recent multi-model ensembles developed by the Coupled Model Intercomparison Project (CMIP5) and Paleoclimate Modelling Intercomparison Project (PMIP3) across past, present, and future time frames, at global extent and 0.5° spatial resolution, in convenient formats for analysis and manipulation. A priority of ecoClimate is consistency across these diverse data, while retaining information on uncertainties among model predictions. The ecoClimate research group intends to keep the web repository continuously updated as new model outputs become available, as well as software that makes our workflows broadly accessible.

  4. Finite-element model of the active organ of Corti

    Science.gov (United States)

    Elliott, Stephen J.; Baumgart, Johannes

    2016-01-01

    The cochlear amplifier that provides our hearing with its extraordinary sensitivity and selectivity is thought to be the result of an active biomechanical process within the sensory auditory organ, the organ of Corti. Although imaging techniques are developing rapidly, it is not currently possible, in a fully active cochlea, to obtain detailed measurements of the motion of individual elements within a cross section of the organ of Corti. This motion is predicted using a two-dimensional finite-element model. The various solid components are modelled using elastic elements, the outer hair cells (OHCs) as piezoelectric elements and the perilymph and endolymph as viscous and nearly incompressible fluid elements. The model is validated by comparison with existing measurements of the motions within the passive organ of Corti, calculated when it is driven either acoustically, by the fluid pressure or electrically, by excitation of the OHCs. The transverse basilar membrane (BM) motion and the shearing motion between the tectorial membrane and the reticular lamina are calculated for these two excitation modes. The fully active response of the BM to acoustic excitation is predicted using a linear superposition of the calculated responses and an assumed frequency response for the OHC feedback. PMID:26888950

  5. A geographic information system on the potential distribution and abundance of Fasciola hepatica and F. gigantica in east Africa based on Food and Agriculture Organization databases.

    Science.gov (United States)

    Malone, J B; Gommes, R; Hansen, J; Yilma, J M; Slingenberg, J; Snijders, F; Nachtergaele, F; Ataman, E

    1998-07-31

    An adaptation of a previously developed climate forecast computer model and digital agroecologic database resources available from FAO for developing countries were used to develop a geographic information system risk assessment model for fasciolosis in East Africa, a region where both F. hepatica and F. gigantica occur as a cause of major economic losses in livestock. Regional F. hepatica and F. gigantica forecast index maps were created. Results were compared to environmental data parameters, known life cycle micro-environment requirements and to available Fasciola prevalence survey data and distribution patterns reported in the literature for each species (F. hepatica above 1200 m elevation, F. gigantica below 1800 m, both at 1200-1800 m). The greatest risk, for both species, occurred in areas of extended high annual rainfall associated with high soil moisture and surplus water, with risk diminishing in areas of shorter wet season and/or lower temperatures. Arid areas were generally unsuitable (except where irrigation, water bodies or floods occur) due to soil moisture deficit and/or, in the case of F. hepatica, high average annual mean temperature >23 degrees C. Regions in the highlands of Ethiopia and Kenya were identified as unsuitable for F. gigantica due to inadequate thermal regime, below the 600 growing degree days required for completion of the life cycle in a single year. The combined forecast index (F. hepatica+F. gigantica) was significantly correlated to prevalence data available for 260 of the 1220 agroecologic crop production system zones (CPSZ) and to average monthly normalized difference vegetation index (NDVI) values derived from the advanced very high resolution radiometer (AVHRR) sensor on board the NOAA polar-orbiting satellites. For use in Fasciola control programs, results indicate that monthly forecast parameters, developed in a GIS with digital agroecologic zone databases and monthly climate databases, can be used to define the
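    The 600-growing-degree-day criterion used above for F. gigantica rests on a standard heat-sum calculation: accumulate the daily excess of mean temperature over a base temperature. A minimal sketch; the base temperature of 10 °C and the temperature series are illustrative assumptions, not the parameters used in the FAO model.

```python
def growing_degree_days(daily_mean_temps, t_base=10.0):
    """Annual heat sum: degrees above t_base accumulated per day."""
    return sum(max(0.0, t - t_base) for t in daily_mean_temps)

# A cool highland site hovering around 11.2 C all year (synthetic series):
highland = [11.2] * 365
gdd = growing_degree_days(highland)
print(f"GDD = {gdd:.0f} -> {'suitable' if gdd >= 600 else 'unsuitable'} for F. gigantica")
```

    Thresholding the heat sum at 600 GDD is how the GIS model flags highland CPSZ cells where the parasite cannot complete its life cycle within a single year.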

  6. Invertebrates as model organisms for research on aging biology.

    Science.gov (United States)

    Murthy, Mahadev; Ram, Jeffrey L

    2015-01-30

    Invertebrate model systems, such as nematodes and fruit flies, have provided valuable information about the genetics and cellular biology involved in aging. However, limitations of these simple, genetically tractable organisms suggest the need for other model systems, some of them invertebrate, to facilitate further advances in the understanding of mechanisms of aging and longevity in mammals, including humans. This paper introduces 10 review articles about the use of invertebrate model systems for the study of aging by authors who participated in an 'NIA-NIH symposium on aging in invertebrate model systems' at the 2013 International Congress for Invertebrate Reproduction and Development. In contrast to the highly derived characteristics of nematodes and fruit flies as members of the superphylum Ecdysozoa, cnidarians, such as Hydra, are more 'basal' organisms that have a greater number of genetic orthologs in common with humans. Moreover, some other new model systems, such as the urochordate Botryllus schlosseri, the tunicate Ciona, and the sea urchins (Echinodermata), are members of the Deuterostomia, the same superphylum that includes all vertebrates, and thus have mechanisms that are likely to be more closely related to those occurring in humans. Additional characteristics of these new model systems, such as the recent development of new molecular and genetic tools and patterns of regeneration and stem cell function more similar to those of humans, suggest that these new model systems may have unique advantages for the study of mechanisms of aging and longevity.

  7. UNIDIRECTIONAL REPLICATION IN HETEROGENEOUS DATABASES

    OpenAIRE

    Hendro Nindito; Evaristus Didik Madyatmadja; Albert Verasius Dian Sano

    2013-01-01

    The use of diverse database technologies in enterprises today cannot be avoided. Thus, technology is needed to generate information in real time. The purpose of this research is to discuss a database replication technology that can be applied in heterogeneous database environments. In this study we replicate from a Windows-based MS SQL Server database as the source to a Linux-based Oracle database as the target. The research method used is prototyping, where development can be done quickly, with testing of working models of the...

  8. Database Replication

    CERN Document Server

    Kemme, Bettina

    2010-01-01

    Database replication is widely used for fault-tolerance, scalability and performance. The failure of one database replica does not stop the system from working, as available replicas can take over the tasks of the failed replica. Scalability can be achieved by distributing the load across all replicas, and adding new replicas should the load increase. Finally, database replication can provide fast local access, even if clients are geographically distributed, when data copies are located close to them. Despite its advantages, replication is not a straightforward technique to apply, and

  9. Organizing the space and behavior of semantic models.

    Science.gov (United States)

    Rubin, Timothy N; Kievit-Kylar, Brent; Willits, Jon A; Jones, Michael N

    Semantic models play an important role in cognitive science. These models use statistical learning to model word meanings from co-occurrences in text corpora. A wide variety of semantic models have been proposed, and the literature has typically emphasized situations in which one model outperforms another. However, because these models often vary with respect to multiple sub-processes (e.g., their normalization or dimensionality-reduction methods), it can be difficult to delineate which of these processes are responsible for observed performance differences. Furthermore, the fact that any two models may vary along multiple dimensions makes it difficult to understand where these models fall within the space of possible psychological theories. In this paper, we propose a general framework for organizing the space of semantic models. We then illustrate how this framework can be used to understand model comparisons in terms of individual manipulations along sub-processes. Using several artificial datasets we show how both representational structure and dimensionality-reduction influence a model's ability to pick up on different types of word relationships.
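
The separable sub-processes the authors organize semantic models around (co-occurrence counting, normalization, dimensionality reduction) can be illustrated as independent, swappable stages. This is a generic numpy sketch with a toy corpus, not the authors' framework:

```python
import numpy as np

corpus = ["the cat sat on the mat", "the dog sat on the log", "cats and dogs"]

# Stage 1: word-by-word co-occurrence counts (sentence-sized window).
vocab = sorted({w for s in corpus for w in s.split()})
idx = {w: i for i, w in enumerate(vocab)}
counts = np.zeros((len(vocab), len(vocab)))
for s in corpus:
    words = s.split()
    for w1 in words:
        for w2 in words:
            if w1 != w2:
                counts[idx[w1], idx[w2]] += 1

# Stage 2: normalization (row-normalize here; PPMI would be another choice).
row_sums = counts.sum(axis=1, keepdims=True)
normed = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

# Stage 3: dimensionality reduction (truncated SVD to k latent dimensions).
k = 2
U, S, Vt = np.linalg.svd(normed)
embeddings = U[:, :k] * S[:k]  # one k-dimensional vector per word
print(embeddings.shape)
```

Swapping the choice made at any one stage (e.g. PPMI instead of row normalization) while holding the others fixed is exactly the kind of controlled comparison the proposed framework supports.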

  10. Generation of Comprehensive Surrogate Kinetic Models and Validation Databases for Simulating Large Molecular Weight Hydrocarbon Fuels

    Science.gov (United States)

    2012-10-25

    The experimental setup included a counterflow burner, a vaporization system, flow controllers, an online Fourier transform infrared (FTIR) spectrometer, and a laser-induced fluorescence system. [Figure residue removed: laser sheet for LIF, air heater, N2, fuel atomization and evaporation, temperature measurements.] ... amounts of indene being formed. The model simulates the fuel decay and formation of most of the intermediates accurately for all the experimental data

  11. Refactoring databases evolutionary database design

    CERN Document Server

    Ambler, Scott W

    2006-01-01

    Refactoring has proven its value in a wide range of development projects–helping software professionals improve system designs, maintainability, extensibility, and performance. Now, for the first time, leading agile methodologist Scott Ambler and renowned consultant Pramodkumar Sadalage introduce powerful refactoring techniques specifically designed for database systems. Ambler and Sadalage demonstrate how small changes to table structures, data, stored procedures, and triggers can significantly enhance virtually any database design–without changing semantics. You’ll learn how to evolve database schemas in step with source code–and become far more effective in projects relying on iterative, agile methodologies. This comprehensive guide and reference helps you overcome the practical obstacles to refactoring real-world databases by covering every fundamental concept underlying database refactoring. Using start-to-finish examples, the authors walk you through refactoring simple standalone databases...

  12. Topsoil organic carbon content of Europe, a new map based on a generalised additive model

    Science.gov (United States)

    de Brogniez, Delphine; Ballabio, Cristiano; Stevens, Antoine; Jones, Robert J. A.; Montanarella, Luca; van Wesemael, Bas

    2014-05-01

    There is an increasing demand for up-to-date spatially continuous organic carbon (OC) data for global environment and climatic modeling. Whilst the current map of topsoil organic carbon content for Europe (Jones et al., 2005) was produced by applying expert-knowledge based pedo-transfer rules on large soil mapping units, the aim of this study was to replace it by applying digital soil mapping techniques on the first European harmonised geo-referenced topsoil (0-20 cm) database, which arises from the LUCAS (land use/cover area frame statistical survey) survey. A generalized additive model (GAM) was calibrated on 85% of the dataset (ca. 17 000 soil samples) and a backward stepwise approach selected slope, land cover, temperature, net primary productivity, latitude and longitude as environmental covariates (500 m resolution). The validation of the model (applied on 15% of the dataset) gave an R2 of 0.27. We observed that most organic soils were under-predicted by the model and that soils of Scandinavia were also poorly predicted. The model showed an RMSE of 42 g kg-1 for mineral soils and of 287 g kg-1 for organic soils. The map of predicted OC content showed the lowest values in Mediterranean countries and in croplands across Europe, whereas the highest OC contents were predicted in wetlands, woodlands and in mountainous areas. The map of standard error of the OC model predictions showed high values in northern latitudes, wetlands, moors and heathlands, whereas low uncertainty was mostly found in croplands. A comparison of our results with the map of Jones et al. (2005) showed a general agreement on the prediction of mineral soils' OC content, most probably because the models use some common covariates, namely land cover and temperature. Our model however failed to predict values of OC content greater than 200 g kg-1, which we explain by the imposed unimodal distribution of our model, whose mean is tilted towards the majority of soils, which are mineral.
Finally, average
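
The 85/15 calibration/validation split and the reported R2 and RMSE correspond to standard hold-out statistics. A generic sketch on synthetic data (a linear model stands in for the GAM; nothing here is the LUCAS dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))  # stand-in environmental covariates
y = X @ np.array([2.0, -1.0, 0.5, 0.0]) + rng.normal(scale=1.0, size=1000)

# 85/15 calibration/validation split, as in the study.
n_cal = int(0.85 * len(y))
Xc, yc, Xv, yv = X[:n_cal], y[:n_cal], X[n_cal:], y[n_cal:]

# Fit on the calibration set (least squares here, not a GAM).
beta, *_ = np.linalg.lstsq(np.c_[np.ones(n_cal), Xc], yc, rcond=None)
pred = np.c_[np.ones(len(yv)), Xv] @ beta

# Hold-out validation metrics.
rmse = np.sqrt(np.mean((yv - pred) ** 2))
r2 = 1 - np.sum((yv - pred) ** 2) / np.sum((yv - yv.mean()) ** 2)
print(round(rmse, 2), round(r2, 2))
```

The low validation R2 reported in the study (0.27) versus the much higher value this synthetic example yields illustrates how much unexplained variance remains in real soil OC mapping.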

  13. IT Business Value Model for Information Intensive Organizations

    Directory of Open Access Journals (Sweden)

    Antonio Carlos Gastaud Maçada

    2012-01-01

    Full Text Available Many studies have highlighted the capacity Information Technology (IT) has for generating value for organizations. Investments in IT made by organizations have increased each year. Therefore, the purpose of the present study is to analyze the IT Business Value for Information Intensive Organizations (IIO), e.g. banks, insurance companies and securities brokers. The research method consisted of a survey that used and combined the models from Weill and Broadbent (1998) and Gregor, Martin, Fernandez, Stern and Vitale (2006). Data was gathered using an adapted instrument containing 5 dimensions (Strategic, Informational, Transactional, Transformational and Infrastructure) with 27 items. The instrument was refined by employing statistical techniques such as Exploratory and Confirmatory Factorial Analysis through Structural Equations (first- and second-order measurement models). The final model is composed of four factors related to IT Business Value: Strategic, Informational, Transactional and Transformational, arranged in 15 items. The dimension Infrastructure was excluded during the model refinement process because it was discovered during interviews that managers were unable to perceive it as a distinct dimension of IT Business Value.

  14. DESIGN OF A NETWORK MODEL ON A NON-SPATIAL DATABASE ENGINE FOR ELECTRICAL NETWORK MANEUVERS IN THE DISTRIBUTION SECTOR USING PL SQL

    Directory of Open Access Journals (Sweden)

    I Made Sukarsa

    2009-06-01

    Full Text Available Many GIS applications today are developed on top of non-spatial DBMS (Database Management System) engines, so that they can support client-server data presentation and handle large data volumes. One such application has been developed to handle electrical network data. In practice, however, DBMS engines are not equipped with network analysis capabilities such as network maneuvering, which is the basis for developing various other applications. A network model for electrical network maneuvers, with their various particularities, therefore needs to be developed. Through several research stages, a network model that can handle network maneuvers was developed. The model was built with attention to integration with the existing system, minimizing changes to existing applications. Implementing it in PL SQL (Programmable Language Structured Query Language) provides several advantages, including system performance. The model has been tested for outage simulation and for computing changes in the network loading structure, and it can be extended to power system analyses such as losses, load flow and so on, so that the GIS application will ultimately be able to substitute for, and overcome the weaknesses of, widely used power system analysis applications such as EDSA (Electrical Design System Analysis).
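
Although the paper implements its network model in PL SQL inside the DBMS, the core maneuver (switching/outage) analysis it describes is a graph-connectivity computation. A hypothetical Python sketch of an outage simulation on a toy feeder (node and switch names are invented for illustration):

```python
from collections import deque

# Feeder modeled as an edge set; each edge is a switchable line segment.
edges = {("src", "a"), ("a", "b"), ("b", "c"), ("a", "d")}

def energized(edges, source="src"):
    """Return the set of nodes still supplied from the source (BFS)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    seen, queue = {source}, deque([source])
    while queue:
        n = queue.popleft()
        for m in adj.get(n, ()):
            if m not in seen:
                seen.add(m)
                queue.append(m)
    return seen

# Outage simulation: opening switch (a, b) de-energizes b and c.
after_open = energized(edges - {("a", "b")})
print(sorted(after_open))  # ['a', 'd', 'src']
```

In the paper's setting the same traversal would run as stored PL SQL procedures against network tables, so existing applications query results through the DBMS rather than a separate analysis tool.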

  15. Prediction of residual stress in the welding zone of dissimilar metals using data-based models and uncertainty analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lim, Dong Hyuk; Bae, In Ho [Department of Nuclear Engineering, Chosun University, 375 Seosuk-dong, Dong-gu, Gwangju 501-759 (Korea, Republic of); Na, Man Gyun, E-mail: magyna@chosun.ac.k [Department of Nuclear Engineering, Chosun University, 375 Seosuk-dong, Dong-gu, Gwangju 501-759 (Korea, Republic of); Kim, Jin Weon [Department of Nuclear Engineering, Chosun University, 375 Seosuk-dong, Dong-gu, Gwangju 501-759 (Korea, Republic of)

    2010-10-15

    Since welding residual stress is one of the major factors in the generation of primary water stress-corrosion cracking (PWSCC), it is essential to examine the welding residual stress to prevent PWSCC. Therefore, several artificial intelligence methods have been developed and studied to predict these residual stresses. In this study, three data-based models, support vector regression (SVR), fuzzy neural network (FNN), and their combined (FNN + SVR) models were used to predict the residual stress for dissimilar metal welding under a variety of welding conditions. By using a subtractive clustering (SC) method, informative data that demonstrate the characteristic behavior of the system were selected to train the models from the numerical data obtained from finite element analysis under a range of welding conditions. The FNN model was optimized using a genetic algorithm. The statistical and analytical uncertainty analysis methods of the models were applied, and their uncertainties were evaluated using 60 sampled training and optimization data sets, as well as a fixed test data set.
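
The subtractive clustering (SC) step used above to select informative training points can be sketched generically. This follows the standard SC potential-update scheme (Chiu, 1994) with illustrative radii and synthetic data, not the authors' settings:

```python
import numpy as np

def subtractive_clustering(X, ra=0.5, rb=0.75, n_centers=3):
    """Pick cluster centers as the points with the highest 'potential',
    discounting the neighborhood of each chosen center."""
    alpha = 4.0 / ra**2
    beta = 4.0 / rb**2
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    potential = np.exp(-alpha * d2).sum(axis=1)
    centers = []
    for _ in range(n_centers):
        i = int(np.argmax(potential))
        centers.append(i)
        # Reduce potential near the chosen center so the next pick is elsewhere.
        potential = potential - potential[i] * np.exp(-beta * d2[i])
    return centers

# Three well-separated synthetic blobs; SC should pick one point from each.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.05, size=(20, 2)) for c in (0.0, 1.0, 2.0)])
print(subtractive_clustering(X))
```

Selecting centers this way yields a small set of training points that covers the characteristic behavior of the data, which is the role SC plays before the SVR/FNN fitting in the study.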

  16. Construction and analysis of a human hepatotoxicity database suitable for QSAR modeling using post-market safety data.

    Science.gov (United States)

    Zhu, Xiao; Kruhlak, Naomi L

    2014-07-03

    Drug-induced liver injury (DILI) is one of the most common drug-induced adverse events (AEs) leading to life-threatening conditions such as acute liver failure. It has also been recognized as the single most common cause of safety-related post-market withdrawals or warnings. Efforts to develop new predictive methods to assess the likelihood of a drug being a hepatotoxicant have been challenging due to the complexity and idiosyncrasy of clinical manifestations of DILI. The FDA adverse event reporting system (AERS) contains post-market data that depict the morbidity of AEs. Here, we developed a scalable approach to construct a hepatotoxicity database using post-market data for the purpose of quantitative structure-activity relationship (QSAR) modeling. A set of 2029 unique and modelable drug entities with 13,555 drug-AE combinations was extracted from the AERS database using 37 hepatotoxicity-related query preferred terms (PTs). In order to determine the optimal classification scheme to partition positive from negative drugs, a manually-curated DILI calibration set composed of 105 negatives and 177 positives was developed based on the published literature. The final classification scheme combines hepatotoxicity-related PT data with supporting information that optimizes the predictive performance across the calibration set. Data for other toxicological endpoints related to liver injury, such as liver enzyme abnormalities, cholestasis, and bile duct disorders, were also extracted and classified. Collectively, these datasets can be used to generate a battery of QSAR models that assess a drug's potential to cause DILI. Published by Elsevier Ireland Ltd.

  17. Construction and analysis of a human hepatotoxicity database suitable for QSAR modeling using post-market safety data

    International Nuclear Information System (INIS)

    Zhu, Xiao; Kruhlak, Naomi L.

    2014-01-01

    Drug-induced liver injury (DILI) is one of the most common drug-induced adverse events (AEs) leading to life-threatening conditions such as acute liver failure. It has also been recognized as the single most common cause of safety-related post-market withdrawals or warnings. Efforts to develop new predictive methods to assess the likelihood of a drug being a hepatotoxicant have been challenging due to the complexity and idiosyncrasy of clinical manifestations of DILI. The FDA adverse event reporting system (AERS) contains post-market data that depict the morbidity of AEs. Here, we developed a scalable approach to construct a hepatotoxicity database using post-market data for the purpose of quantitative structure–activity relationship (QSAR) modeling. A set of 2029 unique and modelable drug entities with 13,555 drug-AE combinations was extracted from the AERS database using 37 hepatotoxicity-related query preferred terms (PTs). In order to determine the optimal classification scheme to partition positive from negative drugs, a manually-curated DILI calibration set composed of 105 negatives and 177 positives was developed based on the published literature. The final classification scheme combines hepatotoxicity-related PT data with supporting information that optimizes the predictive performance across the calibration set. Data for other toxicological endpoints related to liver injury, such as liver enzyme abnormalities, cholestasis, and bile duct disorders, were also extracted and classified. Collectively, these datasets can be used to generate a battery of QSAR models that assess a drug's potential to cause DILI

  18. Dealer Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The dealer reporting databases contain the primary data reported by federally permitted seafood dealers in the northeast. Electronic reporting was implemented May 1,...

  19. RDD Databases

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database was established to oversee documents issued in support of fishery research activities including experimental fishing permits (EFP), letters of...

  20. Snowstorm Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Snowstorm Database is a collection of over 500 snowstorms dating back to 1900 and updated operationally. Only storms having large areas of heavy snowfall (10-20...

  1. Designing Predictive Models for Beta-Lactam Allergy Using the Drug Allergy and Hypersensitivity Database.

    Science.gov (United States)

    Chiriac, Anca Mirela; Wang, Youna; Schrijvers, Rik; Bousquet, Philippe Jean; Mura, Thibault; Molinari, Nicolas; Demoly, Pascal

    Beta-lactam antibiotics represent the main cause of allergic reactions to drugs, inducing both immediate and nonimmediate allergies. The diagnosis is well established, usually based on skin tests and drug provocation tests, but cumbersome. To design predictive models for the diagnosis of beta-lactam allergy, based on the clinical history of patients with suspicions of allergic reactions to beta-lactams. The study included a retrospective phase, in which records of patients explored for a suspicion of beta-lactam allergy (in the Allergy Unit of the University Hospital of Montpellier between September 1996 and September 2012) were used to construct predictive models based on a logistic regression and decision tree method; a prospective phase, in which we performed an external validation of the chosen models in patients with suspicion of beta-lactam allergy recruited from 3 allergy centers (Montpellier, Nîmes, Narbonne) between March and November 2013. Data related to clinical history and allergy evaluation results were retrieved and analyzed. The retrospective and prospective phases included 1991 and 200 patients, respectively, with a different prevalence of confirmed beta-lactam allergy (23.6% vs 31%, P = .02). For the logistic regression method, performances of the models were similar in both samples: sensitivity was 51% (vs 60%), specificity 75% (vs 80%), positive predictive value 40% (vs 57%), and negative predictive value 83% (vs 82%). The decision tree method reached a sensitivity of 29.5% (vs 43.5%), specificity of 96.4% (vs 94.9%), positive predictive value of 71.6% (vs 79.4%), and negative predictive value of 81.6% (vs 81.3%). Two different independent methods using clinical history predictors were unable to accurately predict beta-lactam allergy and replace a conventional allergy evaluation for suspected beta-lactam allergy. Copyright © 2017 American Academy of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.
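
The reported predictive values follow from sensitivity, specificity, and prevalence via Bayes' rule. As a consistency check, plugging in the retrospective-phase numbers approximately reproduces the reported PPV and NPV:

```python
def predictive_values(sens, spec, prev):
    """Positive and negative predictive value from Bayes' rule."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

# Retrospective phase: sensitivity 51%, specificity 75%, prevalence 23.6%.
ppv, npv = predictive_values(0.51, 0.75, 0.236)
print(round(ppv, 2), round(npv, 2))  # ~0.39 and ~0.83, close to the reported 40% / 83%
```

This also shows why the prospective phase, with a higher prevalence (31%), yields a higher PPV for similar sensitivity and specificity.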

  2. Mobility dependent recombination models for organic solar cells

    Science.gov (United States)

    Wagenpfahl, Alexander

    2017-09-01

    Modern solar cell technologies are driven by the effort to enhance power conversion efficiencies. A main mechanism limiting power conversion efficiency is charge carrier recombination, which is a direct function of the encounter probability of both recombination partners. In inorganic solar cells with rather high charge carrier mobilities, charge carrier recombination is often dominated by energetic states which subsequently trap both recombination partners for recombination. Free charge carriers move fast enough for Coulomb attraction to be irrelevant to the encounter probability; thus, charge carrier recombination is independent of charge carrier mobilities. In organic semiconductors, charge carrier mobilities are much lower, so electrons and holes have more time to react to mutual Coulomb forces. This results in the strong dependence of the observed recombination rates on charge carrier mobility. In 1903 Paul Langevin published a fundamental model to describe the recombination of ions in the gas phase or in aqueous solutions, known today as Langevin recombination. During the last decades this model was used to interpret and model recombination in organic semiconductors. However, certain experiments, especially with bulk-heterojunction solar cells, reveal much lower recombination rates than predicted by Langevin. In search of an explanation, many material and device properties, such as morphology and energetics, have been examined in order to extend the validity of the Langevin model. A key argument in most of these extended models is that electrons and holes must find each other at the same spatial location. This encounter may be limited, for instance, by trapping of charges in trap states, by selective electrodes separating electrons and holes, or simply by the morphology of the involved semiconductors, making it impossible for electrons and holes to recombine at high rates. In this review, we discuss the development of mobility limited
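
The Langevin rate constant referred to above is gamma = q(mu_e + mu_h)/(eps0 * eps_r), giving a recombination rate R = gamma * n * p. A quick numerical sketch with illustrative organic-semiconductor values (the mobilities and permittivity are assumed typical figures, not values from the review):

```python
q = 1.602e-19        # elementary charge, C
eps0 = 8.854e-12     # vacuum permittivity, F/m
eps_r = 3.5          # relative permittivity of an organic blend (assumed)
mu_e = mu_h = 1e-8   # carrier mobilities, m^2/(V s) (assumed; ~1e-4 cm^2/(V s))

gamma = q * (mu_e + mu_h) / (eps0 * eps_r)   # Langevin prefactor, m^3/s
n = p = 1e22                                  # carrier densities, m^-3 (assumed)
R = gamma * n * p                             # recombination rate, m^-3 s^-1
print(f"{gamma:.2e}", f"{R:.2e}")
```

The experiments mentioned above effectively measure rate constants well below gamma; the extended models parameterize this as a reduction factor below unity multiplying the Langevin prefactor.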

  3. Absence of respiratory inflammatory reaction of elemental sulfur using the California Pesticide Illness Database and a mouse model.

    Science.gov (United States)

    Lee, Kiyoung; Smith, Jodi L; Last, Jerold A

    2005-01-01

    Elemental sulfur, a natural substance, is used as a fungicide. Elemental sulfur is the most heavily used agricultural chemical in California. In 2003, annual sulfur usage in California was about 34% of the total weight of pesticide active ingredient used in production agriculture. Even though sulfur is mostly used in dust form, the respiratory health effects of elemental sulfur are not well documented. The purpose of this paper is to address the possible respiratory effect of elemental sulfur using the California Pesticide Illness Database and laboratory experiments with mice. We analyzed the California Pesticide Illness Database between 1991 and 2001. Among 127 reports of definite, probable, and possible illness involving sulfur, 21 cases (16%) were identified as respiratory related. A mouse model was used to examine whether there was an inflammatory or fibrotic response to elemental sulfur. Dust solutions were injected intratracheally into ovalbumin sensitized mice and lung damage was evaluated. Lung inflammatory response was analyzed via total lavage cell counts and differentials, and airway collagen content was analyzed histologically and biochemically. No significant differences from controls were seen in animals exposed to sulfur particles. The findings suggest that acute exposure of elemental sulfur itself may not cause an inflammatory reaction. However, further studies are needed to understand the possible health effects of chronic sulfur exposure and environmental weathering of sulfur dust.

  4. Model checking software for phylogenetic trees using distribution and database methods.

    Science.gov (United States)

    Requeno, José Ignacio; Colom, José Manuel

    2013-12-01

    Model checking, a generic and formal paradigm stemming from computer science based on temporal logics, has been proposed for the study of biological properties that emerge from the labeling of the states defined over the phylogenetic tree. This strategy allows us to use generic software tools already present in the industry. However, the performance of traditional model checking is penalized when scaling the system for large phylogenies. To this end, two strategies are presented here. The first one consists of partitioning the phylogenetic tree into a set of subgraphs each one representing a subproblem to be verified so as to speed up the computation time and distribute the memory consumption. The second strategy is based on uncoupling the information associated to each state of the phylogenetic tree (mainly, the DNA sequence) and exporting it to an external tool for the management of large information systems. The integration of all these approaches outperforms the results of monolithic model checking and helps us to execute the verification of properties in a real phylogenetic tree.
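
The first strategy, partitioning the tree into independently verifiable subgraphs, can be illustrated generically. The tree encoding and the toy state-labeling property below are hypothetical, not the authors' temporal-logic tooling:

```python
# A phylogenetic tree as (label, [children]); leaves have no children.
tree = ("root", [("A", [("A1", []), ("A2", [])]),
                 ("B", [("B1", []), ("B2", [("B2a", [])])])])

def subtrees_at_root(tree):
    """Partition: each child of the root becomes an independent subproblem."""
    label, children = tree
    return children

def holds_everywhere(tree, prop):
    """Monolithic check: prop must hold for every state (node) label."""
    label, children = tree
    return prop(label) and all(holds_everywhere(c, prop) for c in children)

prop = lambda label: label != ""  # toy property on state labels

# For properties that decompose over subtrees, checking each partition
# independently agrees with the monolithic check.
monolithic = holds_everywhere(tree, prop)
distributed = prop("root") and all(holds_everywhere(s, prop) for s in subtrees_at_root(tree))
assert monolithic == distributed
print(monolithic)  # True
```

Each subtree can then be verified on a separate worker, which is what distributes the memory consumption for large phylogenies.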

  5. Model checking software for phylogenetic trees using distribution and database methods

    Directory of Open Access Journals (Sweden)

    Requeno José Ignacio

    2013-12-01

    Full Text Available Model checking, a generic and formal paradigm stemming from computer science based on temporal logics, has been proposed for the study of biological properties that emerge from the labeling of the states defined over the phylogenetic tree. This strategy allows us to use generic software tools already present in the industry. However, the performance of traditional model checking is penalized when scaling the system for large phylogenies. To this end, two strategies are presented here. The first one consists of partitioning the phylogenetic tree into a set of subgraphs each one representing a subproblem to be verified so as to speed up the computation time and distribute the memory consumption. The second strategy is based on uncoupling the information associated to each state of the phylogenetic tree (mainly, the DNA sequence) and exporting it to an external tool for the management of large information systems. The integration of all these approaches outperforms the results of monolithic model checking and helps us to execute the verification of properties in a real phylogenetic tree.

  6. Technical Note: High-resolution mineralogical database of dust-productive soils for atmospheric dust modeling

    Directory of Open Access Journals (Sweden)

    S. Nickovic

    2012-01-01

    Full Text Available Dust storms and associated mineral aerosol transport are driven primarily by meso- and synoptic-scale atmospheric processes. It is therefore essential that the dust aerosol process and background atmospheric conditions that drive dust emissions and atmospheric transport are represented with sufficiently well-resolved spatial and temporal features. The effects of airborne dust interactions with the environment determine the mineral composition of dust particles. The fractions of various minerals in aerosol are determined by the mineral composition of arid soils; therefore, a high-resolution specification of the mineral and physical properties of dust sources is needed.

    Several current dust atmospheric models simulate and predict the evolution of dust concentrations; however, in most cases, these models do not consider the fractions of minerals in the dust. The accumulated knowledge about the impacts of the mineral composition in dust on weather and climate processes emphasizes the importance of including minerals in modeling systems. Accordingly, in this study, we developed a global dataset consisting of the mineral composition of the current potentially dust-producing soils. In our study, we (a) mapped mineral data to a high-resolution 30 s grid, (b) included several mineral-carrying soil types in dust-productive regions that were not considered in previous studies, and (c) included phosphorus.

  7. Data-based model and parameter evaluation in dynamic transcriptional regulatory networks.

    Science.gov (United States)

    Cavelier, German; Anastassiou, Dimitris

    2004-05-01

    Finding the causality and strength of connectivity in transcriptional regulatory networks from time-series data will provide a powerful tool for the analysis of cellular states. Presented here is the design of tools for the evaluation of the network's model structure and parameters. The most effective tools are found to be based on evolution strategies. We evaluate models of increasing complexity, from lumped, algebraic phenomenological models to Hill functions and thermodynamically derived functions. These last functions provide the free energies of binding of transcription factors to their operators, as well as cooperativity energies. Optimization results based on published experimental data from a synthetic network in Escherichia coli are presented. The free energies of binding and cooperativity found by our tools are in the same physiological ranges as those experimentally derived in the bacteriophage lambda system. We also use time-series data from high-density oligonucleotide microarrays of yeast meiotic expression patterns. The algorithm appropriately finds the parameters of pairs of regulated regulatory yeast genes, showing that for related genes an overall reasonable computation effort is sufficient to find the strength and causality of the connectivity of large numbers of them. Copyright 2004 Wiley-Liss, Inc.
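
A minimal (1+1) evolution strategy of the kind used for parameter evaluation can be sketched as follows, here fitting a Hill-function exponent to synthetic data. This is an illustration of the optimizer family, not the authors' implementation:

```python
import random

random.seed(0)

def hill(x, n, k=1.0):
    """Hill activation function with coefficient n and half-saturation k."""
    return x**n / (k**n + x**n)

# Synthetic 'measurements' generated with a true Hill coefficient of 2.
xs = [0.2 * i for i in range(1, 11)]
data = [hill(x, 2.0) for x in xs]

def loss(n):
    return sum((hill(x, n) - d) ** 2 for x, d in zip(xs, data))

# (1+1)-ES: mutate the parent; keep the child only if it is no worse.
n, sigma = 0.5, 0.3
for _ in range(2000):
    child = n + random.gauss(0.0, sigma)
    if child > 0 and loss(child) <= loss(n):
        n = child
print(round(n, 2))  # converges near the true value 2.0
```

Real applications, as in the paper, optimize many coupled parameters (binding and cooperativity energies) with population-based evolution strategies, but the accept-if-no-worse loop is the same core idea.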

  8. A taxonomy of nursing care organization models in hospitals.

    Science.gov (United States)

    Dubois, Carl-Ardy; D'Amour, Danielle; Tchouaket, Eric; Rivard, Michèle; Clarke, Sean; Blais, Régis

    2012-08-28

    Over the last decades, converging forces in hospital care, including cost-containment policies, rising healthcare demands and nursing shortages, have driven the search for new operational models of nursing care delivery that maximize the use of available nursing resources while ensuring safe, high-quality care. Little is known, however, about the distinctive features of these emergent nursing care models. This article contributes to filling this gap by presenting a theoretically and empirically grounded taxonomy of nursing care organization models in the context of acute care units in Quebec and comparing their distinctive features. This study was based on a survey of 22 medical units in 11 acute care facilities in Quebec. Data collection methods included questionnaire, interviews, focus groups and administrative data census. The analytical procedures consisted of first generating unit profiles based on qualitative and quantitative data collected at the unit level, then applying hierarchical cluster analysis to the units' profile data. The study identified four models of nursing care organization: two professional models that draw mainly on registered nurses as professionals to deliver nursing services and reflect stronger support to nurses' professional practice, and two functional models that draw more significantly on licensed practical nurses (LPNs) and assistive staff (orderlies) to deliver nursing services and are characterized by registered nurses' perceptions that the practice environment is less supportive of their professional work. This study showed that medical units in acute care hospitals exhibit diverse staff mixes, patterns of skill use, work environment design, and support for innovation. The four models reflect not only distinct approaches to dealing with the numerous constraints in the nursing care environment, but also different degrees of approximations to an "ideal" nursing professional practice model described by some leaders in the

  9. Modeling regional secondary organic aerosol using the Master Chemical Mechanism

    Science.gov (United States)

    Li, Jingyi; Cleveland, Meredith; Ziemba, Luke D.; Griffin, Robert J.; Barsanti, Kelley C.; Pankow, James F.; Ying, Qi

    2015-02-01

    A modified near-explicit Master Chemical Mechanism (MCM, version 3.2) with 5727 species and 16,930 reactions and an equilibrium partitioning module was incorporated into the Community Multiscale Air Quality (CMAQ) model to predict the regional concentrations of secondary organic aerosol (SOA) from volatile organic compounds (VOCs) in the eastern United States (US). In addition to the semi-volatile SOA from equilibrium partitioning, reactive surface uptake processes were used to simulate SOA formation from isoprene epoxydiol, glyoxal and methylglyoxal. The CMAQ-MCM-SOA model was applied to simulate SOA formation during a two-week episode from August 28 to September 7, 2006. The southeastern US has the highest SOA concentrations, with a maximum episode-averaged concentration of ∼12 μg m-3. Primary organic aerosol (POA) and SOA concentrations predicted by CMAQ-MCM-SOA agree well with AMS-derived hydrocarbon-like organic aerosol (HOA) and oxygenated organic aerosol (OOA) urban concentrations at the Moody Tower at the University of Houston. Predicted molecular properties of SOA (O/C, H/C, N/C and OM/OC ratios) at the site are similar to those reported in other urban areas, and O/C values agree with measured O/C at the same site. Isoprene epoxydiol is predicted to be the largest contributor to total SOA concentration in the southeastern US, followed by methylglyoxal and glyoxal. The semi-volatile SOA components are dominated by products of β-caryophyllene oxidation, but the major species and their concentrations are sensitive to errors in saturation vapor pressure estimation. A uniform decrease of saturation vapor pressure by a factor of 100 for all condensable compounds can lead to a 150% increase in total SOA. A sensitivity simulation with UNIFAC-calculated activity coefficients (ignoring phase separation and water partitioning into the organic phase) led to a 10% change in the predicted semi-volatile SOA concentrations.
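
The equilibrium partitioning underlying the semi-volatile SOA can be illustrated with a small absorptive-partitioning calculation in the volatility-basis-set spirit: each condensable species splits between gas and aerosol according to its effective saturation concentration and the total absorbing organic mass, which must be found self-consistently. The concentrations below are hypothetical, not values from the paper.

```python
# Absorptive-partitioning sketch: given total concentrations c_tot_i (ug/m3)
# and effective saturation concentrations c_star_i, iterate the condensed
# organic mass C_OA to self-consistency. All numbers are hypothetical.

def partition(c_tot, c_star, seed=0.0, tol=1e-9):
    """Return (C_OA, per-species condensed fractions) at equilibrium."""
    c_oa = max(seed, 1e-6) + 1e-6  # small positive initial guess
    for _ in range(1000):
        # fraction of species i residing in the condensed phase
        f = [1.0 / (1.0 + cs / c_oa) for cs in c_star]
        new = seed + sum(ci * fi for ci, fi in zip(c_tot, f))
        if abs(new - c_oa) < tol:
            break
        c_oa = new
    return c_oa, f

c_tot = [1.0, 2.0, 4.0]      # ug/m3 per volatility bin (hypothetical)
c_star = [0.1, 1.0, 10.0]    # ug/m3 saturation concentrations (hypothetical)
c_oa, f = partition(c_tot, c_star)
```

Dividing every `c_star` by 100 (i.e., lowering all saturation vapor pressures a hundredfold) sharply increases the equilibrium organic aerosol mass, mirroring the vapor-pressure sensitivity noted in the abstract.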

  10. National database

    DEFF Research Database (Denmark)

    Kristensen, Helen Grundtvig; Stjernø, Henrik

    1995-01-01

    Article about a national database for nursing research established at Dansk Institut for Sundheds- og Sygeplejeforskning (the Danish Institute for Health and Nursing Research). The aim of the database is to gather knowledge about research and development activities within nursing.

  11. The geothermal energy potential in Denmark - updating the database and new structural and thermal models

    Science.gov (United States)

    Nielsen, Lars Henrik; Sparre Andersen, Morten; Balling, Niels; Boldreel, Lars Ole; Fuchs, Sven; Leth Hjuler, Morten; Kristensen, Lars; Mathiesen, Anders; Olivarius, Mette; Weibel, Rikke

    2017-04-01

    Knowledge of the structural, hydraulic and thermal conditions of the subsurface is fundamental for the planning and use of hydrothermal energy. In the framework of a project under the Danish research program 'Sustainable Energy and Environment', funded by the Danish Agency for Science, Technology and Innovation, fundamental geological and geophysical information of importance for the utilization of geothermal energy in Denmark was compiled, analyzed and re-interpreted. A 3D geological model was constructed and used as the structural basis for the development of a national subsurface temperature model. In that frame, all available reflection seismic data were interpreted, quality controlled and integrated to improve the regional structural understanding. The analysis and interpretation of the available relevant data (i.e. old and new seismic profiles, core and well-log data, literature data) and a new time-depth conversion allowed a consistent correlation of seismic surfaces across the whole of Denmark and across tectonic features. On this basis, new topologically consistent depth and thickness maps for 16 geological units from the top pre-Zechstein to the surface were drawn. A new 3D structural geological model was developed with special emphasis on potential geothermal reservoirs. The interpretation of petrophysical data (core data and well-logs) made it possible to evaluate the hydraulic and thermal properties of potential geothermal reservoirs and to develop a parameterized numerical 3D conductive subsurface temperature model. Reservoir properties and quality were estimated by integrating petrography and diagenesis studies with porosity-permeability data. Detailed interpretation of the reservoir quality of the geological formations was made by estimating net reservoir sandstone thickness based on well-log analysis, determination of mineralogy including sediment provenance analysis, and burial history data. 
New local surface heat-flow values (range: 64-84 mW/m2) were determined for the Danish

  12. Organic carbon stock modelling for the quantification of the carbon sinks in terrestrial ecosystems

    Science.gov (United States)

    Durante, Pilar; Algeet, Nur; Oyonarte, Cecilio

    2017-04-01

    Given the recent environmental policies motivated by the serious threats posed by global change, practical measures to decrease net CO2 emissions have to be put in place. In this regard, carbon sequestration is a major measure for reducing atmospheric CO2 concentrations in the short and medium term, and terrestrial ecosystems play a basic role as carbon sinks. Developing tools for the quantification, assessment and management of organic carbon in ecosystems at different scales and under different management scenarios is essential to meeting these commitments. The aim of this study is to establish a methodological framework for modeling such a tool, applied to sustainable land use planning and management at spatial and temporal scales. The methodology for estimating the carbon stock of ecosystems merges the carbon stored in soils with that stored in aerial biomass. For this purpose, both a spatial variability map of soil organic carbon (SOC) and algorithms for calculating the biomass of forest species will be created. Modelling the SOC spatial distribution at different map scales requires harmonizing and screening the available legacy soil database information. SOC modelling will then be based on the SCORPAN model, a quantitative model used to assess the correlation among soil-forming factors measured at the same site locations. These factors will be selected from both static variables (terrain morphometric variables) and dynamic variables (climatic variables and vegetation indexes such as NDVI), giving the model its spatio-temporal character. Once the predictive model is fitted, spatial inference techniques will be used to produce the final map and to extrapolate to areas without data (automated random forest regression kriging). The estimated uncertainty will be calculated to assess model performance at different scales. The organic carbon in aerial biomass will be estimated using LiDAR (Light Detection And Ranging
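
The final mapping step, regression kriging, combines a regression on covariates with kriging of its residuals. The sketch below substitutes a linear trend for the study's random forest to stay dependency-light, and uses simple kriging with an assumed exponential covariance; locations, covariates and parameters are all hypothetical.

```python
import numpy as np

# Regression-kriging-style SOC predictor: a linear trend on SCORPAN-type
# covariates plus simple kriging of the residuals under an assumed
# exponential covariance. All data below are synthetic/hypothetical.

rng = np.random.default_rng(0)
xy = rng.uniform(0, 10, size=(25, 2))                 # sample locations
X = np.c_[np.ones(25), rng.uniform(0, 1, (25, 2))]    # intercept + covariates
soc = X @ np.array([2.0, 1.5, -0.8]) + rng.normal(0, 0.2, 25)

beta, *_ = np.linalg.lstsq(X, soc, rcond=None)        # trend fit
resid = soc - X @ beta

def cov(h, sill=0.04, corr_range=3.0):
    return sill * np.exp(-h / corr_range)             # exponential covariance

D = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
C = cov(D) + 1e-10 * np.eye(25)                       # tiny jitter for stability

def predict(pt, x_cov):
    c0 = cov(np.linalg.norm(xy - pt, axis=1))
    w = np.linalg.solve(C, c0)                        # simple-kriging weights
    return x_cov @ beta + w @ resid                   # trend + kriged residual

est = predict(xy[0], X[0])
```

With no nugget effect the residual kriging is (numerically) exact at sampled locations, so the prediction at `xy[0]` reproduces the observed value, a standard sanity check for kriging implementations.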

  13. Modeling the effect of age in T1-2 breast cancer using the SEER database

    Directory of Open Access Journals (Sweden)

    Lee Sang-Joon

    2005-10-01

    Background Modeling the relationship between age and mortality for breast cancer patients may have important prognostic and therapeutic implications. Methods Data from 9 registries of the Surveillance, Epidemiology, and End Results Program (SEER) of the United States were used. This study employed proportional hazards to model mortality in women with T1-2 breast cancers. The residuals of the model were used to examine the effect of age on mortality. This procedure was applied to node-negative (N0) and node-positive (N+) patients. All-cause mortality and breast cancer-specific mortality were evaluated. Results The relationship between age and mortality is biphasic. For both N0 and N+ patients among the T1-2 group, the analysis suggested two age components. One component is linear and corresponds to a natural increase of mortality with each year of age. The other component is quasi-quadratic and is centered around age 50. This component contributes to an increased risk of mortality as age increases beyond 50. It suggests a hormonally related process: the farther from menopause in either direction, the more prognosis is adversely influenced by the quasi-quadratic component. There is a complex relationship between hormone receptor status and other prognostic factors, like age. Conclusion The present analysis confirms the findings of many epidemiological and clinical trials that the relationship between age and mortality is biphasic. Compared with older patients, young women experience an abnormally high risk of death. Among elderly patients, the risk of death from breast cancer does not decrease with increasing age. These facts are important in the discussion of options for adjuvant treatment with breast cancer patients.

  14. Modeling the effect of age in T1-2 breast cancer using the SEER database

    Science.gov (United States)

    Tai, Patricia; Cserni, Gábor; Van De Steene, Jan; Vlastos, Georges; Voordeckers, Mia; Royce, Melanie; Lee, Sang-Joon; Vinh-Hung, Vincent; Storme, Guy

    2005-01-01

    Background Modeling the relationship between age and mortality for breast cancer patients may have important prognostic and therapeutic implications. Methods Data from 9 registries of the Surveillance, Epidemiology, and End Results Program (SEER) of the United States were used. This study employed proportional hazards to model mortality in women with T1-2 breast cancers. The residuals of the model were used to examine the effect of age on mortality. This procedure was applied to node-negative (N0) and node-positive (N+) patients. All-cause mortality and breast cancer-specific mortality were evaluated. Results The relationship between age and mortality is biphasic. For both N0 and N+ patients among the T1-2 group, the analysis suggested two age components. One component is linear and corresponds to a natural increase of mortality with each year of age. The other component is quasi-quadratic and is centered around age 50. This component contributes to an increased risk of mortality as age increases beyond 50. It suggests a hormonally related process: the farther from menopause in either direction, the more prognosis is adversely influenced by the quasi-quadratic component. There is a complex relationship between hormone receptor status and other prognostic factors, like age. Conclusion The present analysis confirms the findings of many epidemiological and clinical trials that the relationship between age and mortality is biphasic. Compared with older patients, young women experience an abnormally high risk of death. Among elderly patients, the risk of death from breast cancer does not decrease with increasing age. These facts are important in the discussion of options for adjuvant treatment with breast cancer patients. PMID:16212670
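
The biphasic age effect described in the Results can be written down directly: a linear term that rises with each year of age plus a quasi-quadratic term centered at (menopausal) age 50. The coefficients below are hypothetical, chosen only to reproduce the qualitative shape, not fitted to SEER data.

```python
# Illustrative shape of the biphasic age effect: a linear component plus a
# quasi-quadratic component centered at age 50. Coefficients are hypothetical.

def log_relative_risk(age, a=0.02, b=0.003, center=50.0):
    linear = a * (age - center)            # steady rise with each year of age
    quasi_quad = b * (age - center) ** 2   # penalty growing away from age 50
    return linear + quasi_quad
```

Under this shape, risk is lowest near age 50 and rises in both directions away from it: young women carry an elevated risk relative to women near menopause, and risk keeps rising beyond 50, consistent with the abstract's conclusions.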

  15. Modeling of the hERG K+ Channel Blockage Using Online Chemical Database and Modeling Environment (OCHEM).

    Science.gov (United States)

    Li, Xiao; Zhang, Yuan; Li, Huanhuan; Zhao, Yong

    2017-12-01

    The human ether-à-go-go-related gene (hERG) K+ channel plays an important role in the cardiac action potential. Blockage of the hERG channel may result in long QT syndrome (LQTS) and even sudden cardiac death. Many drugs have been withdrawn from the market because of serious hERG-related cardiotoxicity. It is therefore essential to estimate chemical blockage of hERG in the early stages of drug discovery. In this study, a diverse set of 3721 compounds with hERG inhibition data was assembled from the literature. We then made full use of the Online Chemical Modeling Environment (OCHEM), which supplies a rich set of machine learning methods and descriptor sets, to build a series of classification models for hERG blockage. We also generated two consensus models based on the top-performing individual models. The consensus models performed much better than the individual models in both 5-fold cross-validation and external validation. In particular, consensus model II yielded a prediction accuracy of 89.5% and an MCC of 0.670 on external validation, indicating that its predictive power is stronger than that of most previously reported models. The 17 top-performing individual models, the consensus models, and the data sets used for model development are available at https://ochem.eu/article/103592. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
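
The consensus step can be sketched as simple probability averaging over the individual classifiers, with the Matthews correlation coefficient (MCC) as the reported quality measure. The model outputs below are hypothetical stand-ins, not OCHEM predictions.

```python
# Consensus classification sketch: average per-compound blocker probabilities
# from several individual models and threshold at 0.5. Probabilities below
# are hypothetical stand-ins for the OCHEM individual models.

def consensus_predict(prob_lists, threshold=0.5):
    """prob_lists: one list of per-compound probabilities per model."""
    n = len(prob_lists[0])
    avg = [sum(p[i] for p in prob_lists) / len(prob_lists) for i in range(n)]
    return [int(p >= threshold) for p in avg], avg

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient, the metric quoted in the abstract."""
    denom = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    return (tp * tn - fp * fn) / denom if denom else 0.0

# three hypothetical models scoring four compounds
models = [[0.9, 0.2, 0.6, 0.4], [0.8, 0.1, 0.4, 0.45], [0.7, 0.3, 0.65, 0.2]]
labels, avg = consensus_predict(models)
```

Averaging damps the idiosyncratic errors of any single model, which is the usual rationale for a consensus outperforming its members.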

  16. Lean construction as an effective organization model in Arctic

    Directory of Open Access Journals (Sweden)

    Balashova Elena S.

    2017-01-01

    In recent years, due to sharp climatic changes, the Arctic has attracted increased interest from the world powers as a strategically important region. In 2013, the President approved the development strategy of the Arctic zone of the Russian Federation and national security for the period up to 2020. Within this strategy, the socio-economic development of the region in terms of improving the quality of life, expressed in housing and civil engineering, is very important. The goal of the study is to identify an effective organizational model of construction in the Arctic zone of the Russian Federation. Lean construction, a methodology developing dynamically abroad, is analyzed. The characteristics of this organizational model of construction meet the necessary requirements for the construction of various infrastructure objects in the Arctic. The concept of lean construction can therefore be an effective development strategy for the Arctic regions of Russia as well as for other Arctic countries.

  17. Database of Pb-free soldering materials, surface tension and density, experiment vs. modeling

    Directory of Open Access Journals (Sweden)

    Z Moser

    2006-01-01

    Experimental studies of surface tension and density by the maximum bubble pressure method and the dilatometric technique were undertaken, and the accumulated data for pure liquid components and binary, ternary and multicomponent alloys were used to create the SURDAT database of Pb-free soldering materials. The database also enabled comparison of the experimental results with those obtained from Butler's model and with existing literature data. This comparison has been extended by including experimental data for Sn-Ag-Cu-Sb alloys.
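
Butler's model, against which the SURDAT measurements are compared, treats the surface as a separate monolayer whose composition adjusts until every component reports the same surface tension. A minimal sketch for an ideal binary solution, with invented property values rather than SURDAT data:

```python
import math

# Butler-equation sketch for an ideal binary liquid alloy: solve for the
# surface monolayer composition at which both components give the same
# surface tension. All property values below are hypothetical.

R = 8.314  # J/(mol K)

def butler_ideal(sig_a, sig_b, area_a, area_b, x_bulk_a, T):
    """Bisection solve; returns (surface mole fraction of A, surface tension)."""
    def mismatch(x_s):
        lhs = sig_a + R * T / area_a * math.log(x_s / x_bulk_a)
        rhs = sig_b + R * T / area_b * math.log((1 - x_s) / (1 - x_bulk_a))
        return lhs - rhs  # monotonically increasing in x_s
    lo, hi = 1e-9, 1 - 1e-9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mismatch(mid) < 0:
            lo = mid
        else:
            hi = mid
    x_s = 0.5 * (lo + hi)
    sigma = sig_a + R * T / area_a * math.log(x_s / x_bulk_a)
    return x_s, sigma

# hypothetical low-/high-tension pair at 500 K (sigma in N/m, area in m2/mol)
x_s, sigma = butler_ideal(0.55, 0.90, 6.5e4, 5.0e4, 0.5, 500.0)
```

As expected physically, the surface is enriched in the lower-surface-tension component, and the alloy's surface tension falls between the two pure-component values.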

  18. VizieR Online Data Catalog: Lowell Photometric Database asteroid models (Durech+, 2016)

    Science.gov (United States)

    Durech, J.; Hanus, J.; Oszkiewicz, D.; Vanco, R.

    2016-01-01

    List of new asteroid models. For each asteroid, there are one or two pole directions in ecliptic coordinates, the sidereal rotation period, the rotation period from the LCDB and its quality code (if available), the minimum and maximum lightcurve amplitude, the number of data points, and the method used to derive the unique rotation period. The accuracy of the sidereal rotation period is of the order of the last decimal place given. Asteroids marked with an asterisk were independently confirmed by Hanus et al. (2016A&A...586A.108H). (2 data files).

  19. Grid Database - Management, OGSA and Integration

    Directory of Open Access Journals (Sweden)

    Florentina Ramona PAVEL (EL BAABOUA)

    2011-06-01

    The description of data models and database types has given rise to extensive debate, driven by their complexity and the many factors involved in actual implementation. Grids encourage and promote the publication, sharing and integration of scientific data, distributed across Virtual Organizations. Scientists and researchers work on huge, complex and growing datasets. The complexity of data management within a grid environment comes from the distribution, heterogeneity and number of data sources. Early Grid applications focused principally on the storage, replication and movement of file-based data. Many Grid applications already use databases for managing metadata, but increasingly many are associated with large databases of domain-specific information. In this paper we discuss the fundamental concepts related to grid database access, management, OGSA and integration.

  20. Understanding rare disease pathogenesis: a grand challenge for model organisms.

    Science.gov (United States)

    Hieter, Philip; Boycott, Kym M

    2014-10-01

    In this commentary, Philip Hieter and Kym Boycott discuss the importance of model organisms for understanding pathogenesis of rare human genetic diseases, and highlight the work of Brooks et al., "Dysfunction of 60S ribosomal protein L10 (RPL10) disrupts neurodevelopment and causes X-linked microcephaly in humans," published in this issue of GENETICS. Copyright © 2014 by the Genetics Society of America.

  1. Quasi-dynamic model for an organic Rankine cycle

    International Nuclear Information System (INIS)

    Bamgbopa, Musbaudeen O.; Uzgoren, Eray

    2013-01-01

    Highlights: • The study presents a simplified transient modeling approach for an ORC under variable heat input. • The ORC model is presented as a synthesis of the models of its sub-components. • The model is compared to benchmark numerical simulations and experimental data at different stages. - Abstract: When considering solar-based thermal energy input to an organic Rankine cycle (ORC), the intermittent nature of the heat input not only adversely affects the power output but may also prevent the ORC from operating under steady-state conditions. To assess the reliability and efficiency of such systems, this paper presents a simplified transient modeling approach for an ORC operating under variable heat input. The approach assumes that the response of the system to heat input variations is mainly dictated by the evaporator. Consequently, the overall system is assembled from dynamic models of the heat exchangers (evaporator and condenser) and static models of the pump and the expander. In addition, pressure drops within the heat exchangers are neglected. The model is compared to benchmark numerical and experimental data, showing that the underlying assumptions are reasonable for cases where the thermal input varies in time. Furthermore, the model is applied to another configuration, and the mass flow rates of the working fluid and hot water, together with the hot water inlet temperature to the ORC unit, are shown to have a direct influence on the system's response
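
The quasi-dynamic structure can be sketched as a single lumped thermal capacitance for the evaporator driven by a time-varying hot-water temperature, with static relations elsewhere. All parameter values below are hypothetical, chosen only to illustrate the dynamic-evaporator/static-components split, not taken from the paper.

```python
# Quasi-dynamic ORC sketch: the evaporator wall carries the only thermal
# inertia (lumped capacitance); the working-fluid heat-up is a static map.
# All parameters are hypothetical.

def simulate(t_end, dt, t_hot, m_dot_wf=0.1):
    cp_wall, UA_hot, UA_wf = 5.0e3, 200.0, 150.0    # J/K, W/K, W/K
    t_wall, t_wf_in = 300.0, 300.0                  # K
    cp_wf = 1.5e3                                   # J/(kg K), working fluid
    hist = []
    for k in range(int(t_end / dt)):
        q_in = UA_hot * (t_hot(k * dt) - t_wall)    # hot water -> wall
        q_out = UA_wf * (t_wall - t_wf_in)          # wall -> working fluid
        t_wall += dt * (q_in - q_out) / cp_wall     # dynamic evaporator state
        t_wf_out = t_wf_in + q_out / (m_dot_wf * cp_wf)  # static heat-up map
        hist.append((t_wall, t_wf_out))
    return hist

# step change in hot-water inlet temperature, as under variable solar input
hist = simulate(t_end=600.0, dt=0.1, t_hot=lambda t: 380.0 if t > 60 else 300.0)
```

After the step, the wall temperature relaxes toward the weighted steady state (UA_hot·380 + UA_wf·300)/(UA_hot + UA_wf) ≈ 345.7 K with time constant cp_wall/(UA_hot + UA_wf) ≈ 14 s, illustrating why the evaporator dominates the transient response.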

  2. Categorical database generalization in GIS

    NARCIS (Netherlands)

    Liu, Y.

    2002-01-01

    Key words: Categorical database, categorical database generalization, Formal data structure, constraints, transformation unit, classification hierarchy, aggregation hierarchy, semantic similarity, data model,

  3. Uncertainty in geochemical modelling of CO2 and calcite dissolution in NaCl solutions due to different modelling codes and thermodynamic databases

    International Nuclear Information System (INIS)

    Haase, Christoph; Dethlefsen, Frank; Ebert, Markus; Dahmke, Andreas

    2013-01-01

    Highlights: • CO2 and calcite dissolution are calculated. • The codes PHREEQC, Geochemist’s Workbench, EQ3/6, and FactSage are used. • Comparison with Duan and Li (2008) shows the lowest deviation using phreeqc.dat and wateq4f.dat. • Using Pitzer databases does not improve the accuracy of the calculations. • Uncertainty in dissolved CO2 is largest among the geochemical models. - Abstract: A prognosis of the geochemical effects of CO2 storage induced by the injection of CO2 into geologic reservoirs, or by CO2 leakage into the overlying formations, can be performed by numerical modelling (non-invasive) and field experiments. Until now, research has focused on the geochemical processes of the CO2 reacting with the minerals of the storage formation, which mostly consists of quartzitic sandstones. Regarding the safety assessment, the reactions between the CO2 and the overlying formations in the case of a CO2 leakage are of equal importance to the reactions in the storage formation. In particular, limestone formations can react very sensitively to CO2 intrusion. The thermodynamic parameters necessary to model these reactions have not been determined experimentally over the full range of temperature and pressure conditions and are thus extrapolated by the simulation codes. The differences in the calculated results lead to different calcite and CO2 solubilities and can influence safety issues. This uncertainty study is performed by comparing the computed results of the geochemical modelling software codes The Geochemist’s Workbench, EQ3/6, PHREEQC and FactSage/ChemApp and their thermodynamic databases. The input parameters (1) total concentration of the solution, (2) temperature and (3) fugacity are varied within typical values for CO2 reservoirs, overlying formations and close-to-surface aquifers. The most sensitive input parameter in the system H2O–CO2–NaCl–CaCO3 for the calculated range of dissolved calcite and CO2 is the

  4. Turbulence and Self-Organization Modeling Astrophysical Objects

    CERN Document Server

    Marov, Mikhail Ya

    2013-01-01

    This book focuses on the development of continuum models of natural turbulent media. It provides a theoretical approach to the solutions of different problems related to the formation, structure and evolution of astrophysical and geophysical objects. A stochastic modeling approach is used in the mathematical treatment of these problems, which reflects self-organization processes in open dissipative systems. The authors also consider examples of ordering for various objects in space throughout their evolutionary processes. This volume is aimed at graduate students and researchers in the fields of mechanics, astrophysics, geophysics, planetary and space science.

  5. AGRICULTURAL COOPERATION IN RUSSIA: THE PROBLEM OF ORGANIZATION MODEL CHOICE

    Directory of Open Access Journals (Sweden)

    J. Nilsson

    2008-09-01

    In today's Russia many agricultural co-operatives are established from the top down. The national project "Development of Agroindustrial Complex" and other governmental programs initiate the formation of cooperative societies. These cooperatives are organized in accordance with the traditional cooperative model. Many of them, however, do not have any real business activities. The aim of this paper is to investigate whether traditional cooperatives (following principles such as collective ownership, one member one vote, equal treatment, solidarity, etc.) constitute the best organizational model for cooperative societies under the present conditions in Russian agriculture.

  6. Mapping model behaviour using Self-Organizing Maps

    Directory of Open Access Journals (Sweden)

    M. Herbst

    2009-03-01

    Hydrological model evaluation and identification essentially involve extracting and processing information from model time series. However, the type of information extracted by statistical measures has only very limited meaning because it does not relate to the hydrological context of the data. To overcome this inadequacy we exploit the diagnostic evaluation concept of Signature Indices, in which model performance is measured using theoretically relevant characteristics of system behaviour. In our study, a Self-Organizing Map (SOM) is used to process the Signatures extracted from Monte-Carlo simulations generated by the distributed conceptual watershed model NASIM. The SOM creates a hydrologically interpretable mapping of overall model behaviour, which immediately reveals deficits and trade-offs in the ability of the model to represent the different functional behaviours of the watershed. Further, it facilitates interpretation of the hydrological functions of the model parameters and provides preliminary information regarding their sensitivities. Most notably, we use this mapping to identify the set of model realizations (among the Monte-Carlo data) that most closely approximate the observed discharge time series in terms of the hydrologically relevant characteristics, and to confine the parameter space accordingly. Our results suggest that Signature Index based SOMs could potentially serve as tools for decision makers inasmuch as model realizations with specific Signature properties can be selected according to the purpose of the model application. Moreover, given that the approach helps to represent and analyze multi-dimensional distributions, it could be used to form the basis of an optimization framework that uses SOMs to characterize the model performance response surface. As such it provides a powerful and useful way to conduct model identification and model uncertainty analyses.
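
A minimal SOM of the kind used here can be implemented in a few lines: prototype vectors on a 2-D grid are pulled toward randomly drawn samples, with a learning rate and neighborhood that shrink over time. The "signature" data below are random stand-ins, not NASIM outputs.

```python
import numpy as np

# Minimal Self-Organizing Map sketch: map vectors of model "signatures" onto
# a small 2-D grid of prototype units so that similar model realizations land
# on nearby units. Data and grid sizes are hypothetical.

rng = np.random.default_rng(1)
data = rng.normal(size=(200, 4))            # 200 realizations, 4 signatures

grid = 5                                    # 5x5 SOM
w = rng.normal(size=(grid * grid, 4))       # prototype vectors
coords = np.array([(i, j) for i in range(grid) for j in range(grid)], float)

def quant_error(w):
    # mean distance from each sample to its best-matching prototype
    d = np.linalg.norm(data[:, None] - w[None], axis=-1)
    return d.min(axis=1).mean()

e0 = quant_error(w)
n_iter = 2000
for t in range(n_iter):
    x = data[rng.integers(len(data))]
    bmu = np.argmin(np.linalg.norm(w - x, axis=1))       # best-matching unit
    lr = 0.5 * (1 - t / n_iter)                          # decaying rate
    sig = 2.0 * (1 - t / n_iter) + 0.3                   # neighborhood width
    h = np.exp(-np.linalg.norm(coords - coords[bmu], axis=1) ** 2
               / (2 * sig ** 2))
    w += lr * h[:, None] * (x - w)                       # pull neighborhood
e1 = quant_error(w)
```

Training lowers the quantization error, and because updates move whole grid neighborhoods, nearby units end up representing similar signature combinations, which is what makes the resulting map interpretable.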

  7. [Geothermal system temperature-depth database and model for data analysis]. 5. quarterly technical progress report

    Energy Technology Data Exchange (ETDEWEB)

    Blackwell, D.D.

    1998-04-25

    During this first quarter of the second year of the contract, activity has involved several different tasks. The author has worked most intensively on three tasks this quarter: implementing the database for geothermal system temperature-depth data, maintaining the WWW site with the heat flow and gradient database, and developing a modeling capability for analysis of geothermal system exploration data. The author has completed the development of a database template for geothermal system temperature-depth data that can be used in conjunction with the regional database he had already developed, and is now implementing it. Progress is described.

  8. Bridge with a left ventricular assist device to a simultaneous heart and kidney transplant: Review of the United Network for Organ Sharing database.

    Science.gov (United States)

    Gaffey, Ann C; Chen, Carol W; Chung, Jennifer; Grandin, Edward Wilson; Porrett, Paige M; Acker, Michael A; Atluri, Pavan

    2017-03-01

    Left ventricular assist device (LVAD) implantation as a bridge to cardiac transplantation (BTT) is an effective treatment for end-stage heart failure patients. Currently, an increasing number of patients with an LVAD need a combined heart and kidney transplant (HKT). Little is known about prognostic outcomes in these patients. This study was undertaken to determine whether outcomes after HKT in LVAD-bridged patients are equivalent to those in a non-LVAD primary HKT cohort. We reviewed the United Network for Organ Sharing database from 2004 to 2013. Orthotopic heart transplant recipients (n = 49 799) were subcategorized as dual-organ HKT (n = 1 921) and then divided into cohorts of HKT following continuous-flow left ventricular assist device placement (CF-LVAD-HKT, n = 113) or no LVAD placement (HKT, n = 1 808). Survival after transplantation was analyzed. For the CF-LVAD-HKT and HKT cohorts, preoperative characteristics were similar regarding age (50.8 ± 13.7 vs 50.1 ± 13.7, p = 0.75) and panel reactive antibody (12.3 ± 18.4 vs 7.1 ± 18.4, p = 0.06). Donors were similar in age, gender, creatinine, and ejection fraction. Post-transplant, there was no difference in complications. Survival for CF-LVAD-HKT and HKT was similar at 1 year (77% vs 82%) and 3 years (75% vs 77%, log-rank p = 0.2814). For patients with advanced heart failure and persistent renal dysfunction, simultaneous HKT is a safe option. Survival after CF-LVAD-HKT is equivalent to conventional HKT. © 2017 Wiley Periodicals, Inc.

  9. Fruit tree model for uptake of organic compounds from soil

    DEFF Research Database (Denmark)

    Trapp, Stefan; Rasmussen, D.; Samsoe-Petersen, L.

    2003-01-01

    Apples and other fruits are frequently cultivated in gardens and are part of our daily diet. Uptake of pollutants into apples may therefore contribute to the human daily intake of toxic substances. In current risk assessment of polluted soils, regressions or models are in use that were not intended for tree fruits. A simple model for the uptake of neutral organic contaminants into fruits is developed. It considers xylem and phloem transport to fruits through the stem. The mass balance is solved for the steady-state, and an example calculation is given. The Fruit Tree Model is compared to the empirical equation of Travis and Arms (T&A) and to results from fruits collected in contaminated areas. For polar compounds, both T&A and the Fruit Tree Model predict bioconcentration factors fruit to soil (BCF, wet weight based...

  10. Targeted Therapy Database (TTD): a model to match the patient's molecular profile with current knowledge on cancer biology.

    Directory of Open Access Journals (Sweden)

    Simone Mocellin

    BACKGROUND: The efficacy of current anticancer treatments is far from satisfactory and many patients still die of their disease. A general agreement exists on the urgency of developing molecularly targeted therapies, although their implementation in the clinical setting is in its infancy. In fact, despite the wealth of preclinical studies addressing these issues, the difficulty of testing each targeted therapy hypothesis in the clinical arena represents an intrinsic obstacle. As a consequence, we are witnessing a paradoxical situation where most hypotheses about the molecular and cellular biology of cancer remain clinically untested and therefore do not translate into a therapeutic benefit for patients. OBJECTIVE: To present a computational method aimed at comprehensively exploiting the scientific knowledge in order to foster the development of personalized cancer treatment by matching the patient's molecular profile with the available evidence on targeted therapy. METHODS: To this aim we focused on melanoma, an increasingly diagnosed malignancy for which the need for novel therapeutic approaches is paradigmatic, since no effective treatment is available in the advanced setting. Relevant data were manually extracted from peer-reviewed full-text original articles describing any type of anti-melanoma targeted therapy tested in any type of experimental or clinical model. To this purpose, the Medline, Embase, Cancerlit and Cochrane databases were searched. RESULTS AND CONCLUSIONS: We created a manually annotated database (Targeted Therapy Database, TTD) where the relevant data are gathered in a formal representation that can be computationally analyzed. Dedicated algorithms were set up for the identification of the prevalent therapeutic hypotheses based on the available evidence and for ranking treatments based on the molecular profile of individual patients. In this essay we describe the principles and computational algorithms of an original method
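
The ranking idea can be sketched as an evidence-weighted match between a patient's altered genes and the targets recorded for each therapy. Gene names, therapies and weights below are invented for illustration, not TTD content.

```python
# Sketch of the matching/ranking idea: score each candidate therapy by summing
# evidence weights for targets altered in the patient's molecular profile.
# All entries below are hypothetical, not taken from the TTD.

evidence = {
    "vemurafenib-like": {"BRAF": 3.0, "NRAS": -1.0},  # + supports, - argues against
    "MEK-inhibitor":    {"BRAF": 2.0, "NRAS": 2.0},
    "KIT-inhibitor":    {"KIT": 3.0},
}

def rank_treatments(profile, evidence):
    """profile: set of altered genes; returns (therapy, score) best-first."""
    scores = {drug: sum(wt for gene, wt in targets.items() if gene in profile)
              for drug, targets in evidence.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

ranking = rank_treatments({"BRAF"}, evidence)
```

Negative weights let contradictory evidence (e.g. a resistance-associated alteration) push a therapy down the ranking, so the same database can both support and argue against a treatment for a given profile.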

  11. The NorWeST Stream Temperature Database, Model, and Climate Scenarios for the Northwest U.S. (Invited)

    Science.gov (United States)

    Isaak, D.; Wenger, S.; Peterson, E.; Ver Hoef, J.; Luce, C.; Hostetler, S. W.; Kershner, J.; Dunham, J.; Nagel, D.; Roper, B.

    2013-12-01

    Anthropogenic climate change is warming the Earth's rivers and streams and threatens significant changes to aquatic biodiversity. Effective threat response will require prioritization of limited conservation resources and coordinated interagency efforts guided by accurate information about climate, and climate change, at scales relevant to the distributions of species across landscapes. Here, we describe the NorWeST (i.e., NorthWest Stream Temperature) project to develop a comprehensive interagency stream temperature database and high-resolution climate scenarios across Washington, Oregon, Idaho, Montana, and Wyoming (~400,000 stream kilometers). The NorWeST database consists of stream temperature data contributed by >60 state, federal, tribal, and private resource agencies and may be the largest of its kind in the world (>45,000,000 hourly temperature recordings at >15,000 unique monitoring sites). These data are being used with spatial statistical network models to accurately downscale (R2 = 90%; RMSE networks at 1-kilometer resolution. Historic stream temperature scenarios are developed using air temperature data from RegCM3 runs for the NCEP historical reanalysis and future scenarios (2040s and 2080s) are developed by applying bias corrected air temperature and discharge anomalies from ensemble climate and hydrology model runs for A1B and A2 warming trajectories. At present, stream temperature climate scenarios have been developed for 230,000 stream kilometers across Idaho and western Montana using data from more than 7,000 monitoring sites. The raw temperature data and stream climate scenarios are made available as ArcGIS geospatial products for download through the NorWeST website as individual river basins are completed (http://www.fs.fed.us/rm/boise/AWAE/projects/NorWeST.shtml). By providing open access to temperature data and scenarios, the project is fostering new research on stream temperatures and better collaborative management of aquatic resources

  12. System hazards in managing laboratory test requests and results in primary care: medical protection database analysis and conceptual model.

    Science.gov (United States)

    Bowie, Paul; Price, Julie; Hepworth, Neil; Dinwoodie, Mark; McKay, John

    2015-11-27

    To analyse a medical protection organisation's database to identify hazards related to general practice systems for ordering laboratory tests, managing test results and communicating test result outcomes to patients. To integrate these data with other published evidence sources to inform design of a systems-based conceptual model of related hazards. A retrospective database analysis. General practices in the UK and Ireland. 778 UK and Ireland general practices participating in a medical protection organisation's clinical risk self-assessment (CRSA) programme from January 2008 to December 2014. Proportion of practices with system risks; categorisation of identified hazards; most frequently occurring hazards; development of a conceptual model of hazards; and potential impacts on health, well-being and organisational performance. CRSA visits were undertaken at 778 UK and Ireland general practices; a range of system hazards was recorded across the laboratory test ordering and results management systems in 647 practices (83.2%). A total of 45 discrete hazard categories were identified, with a mean of 3.6 per practice (SD=1.94). The most frequently occurring hazard was an inadequate process for matching test requests and results received (n=350, 54.1%). Of the 1604 instances where hazards were recorded, the most frequent occurred at the 'postanalytical test stage' (n=702, 43.8%), followed closely by 'communication outcomes issues' (n=628, 39.1%). Based on arguably the largest data set currently available on the subject matter, our study findings shed new light on the scale and nature of hazards related to test results handling systems, which can inform future efforts to research and improve the design and reliability of these systems. Published by the BMJ Publishing Group Limited.

  13. Modelling erosion and its interaction with soil organic carbon.

    Science.gov (United States)

    Oyesiku-Blakemore, Joseph; Verrot, Lucile; Geris, Josie; Zhang, Ganlin; Peng, Xinhua; Hallett, Paul; Smith, Jo

    2017-04-01

    Water-driven soil erosion removes and relocates a significant quantity of soil organic carbon. In China, the quantity of carbon removed from the soil through water erosion has been reported to be 180 ± 80 Mt y-1 (Yue et al., 2011). Being able to effectively model the movement of such a large quantity of carbon is important for the assessment of soil quality and carbon storage in the region and further afield. A large selection of erosion models is available, and much work has been done on evaluating their performance in developed countries (Merritt et al., 2006). Fewer studies have evaluated the application of these models to soils in developing countries. Here we evaluate and compare the performance of two of these models, WEPP (Laflen et al., 1997) and RUSLE (Renard et al., 1991), for simulations of soil erosion and deposition at the slope scale on a Chinese Red Soil under cultivation, using measurements taken at the site. We also describe work to dynamically couple the movement of carbon simulated in WEPP to a model of soil organic matter and nutrient turnover, ECOSSE (Smith et al., 2010). This aims to improve simulations of both erosion and carbon cycling by using the simulated rates of erosion to alter the distribution of soil carbon, the depth of soil and the clay content across the slopes, changing the simulated rate of carbon turnover. This, in turn, affects the soil carbon available to be eroded in the next timestep, improving estimates of carbon erosion. We compare the simulations of this coupled modelling approach with those of the unaltered ECOSSE and WEPP models to determine the importance of coupling erosion and turnover models for the simulation of carbon losses at the catchment scale.

  14. Ecotoxicological modelling of cosmetics for aquatic organisms: A QSTR approach.

    Science.gov (United States)

    Khan, K; Roy, K

    2017-07-01

    In this study, externally validated quantitative structure-toxicity relationship (QSTR) models were developed for the toxicity of cosmetic ingredients to three ecotoxicologically relevant organisms, namely Pseudokirchneriella subcapitata, Daphnia magna and Pimephales promelas, following the OECD guidelines. The final models were developed by the partial least squares (PLS) regression technique, which is more robust than multiple linear regression. The obtained model for P. subcapitata shows that molecular size and complexity have significant impacts on the toxicity of cosmetics. In the case of P. promelas and D. magna, we found that the largest contributions to toxicity came from hydrophobicity and van der Waals surface area, respectively. All models were validated using both internal and test compounds, employing multiple strategies. For each QSTR model, applicability domain studies were also performed using the "Distance to Model in X-space" method. A comparison was made with the ECOSAR predictions in order to demonstrate the good predictive performance of our models. Finally, the individual models were applied to predict toxicity for an external set of 596 personal care products having no experimental data for at least one of the endpoints, and the compounds were ranked in decreasing order of toxicity using a scaling approach.
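The PLS regression at the core of such QSTR models can be sketched in a few lines of NumPy. The following is a generic single-response PLS1 (NIPALS) implementation with synthetic stand-in "descriptors" and "toxicity" values, not the authors' code or data:

```python
import numpy as np

def pls1(X, y, n_components):
    """PLS1 (single-response) regression via NIPALS deflation.

    Returns (intercept, coef) so predictions are intercept + X_new @ coef.
    """
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc                      # weight: direction of max covariance with y
        w /= np.linalg.norm(w)
        t = Xc @ w                         # component scores
        tt = t @ t
        p = Xc.T @ t / tt                  # X loadings
        q = (yc @ t) / tt                  # y loading
        Xc = Xc - np.outer(t, p)           # deflate X and y
        yc = yc - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    coef = W @ np.linalg.solve(P.T @ W, Q)
    return y_mean - x_mean @ coef, coef

# Synthetic stand-in data: 50 "compounds" with 4 descriptors and a
# noise-free linear "toxicity" endpoint (illustrative only).
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 4))
true_coef = np.array([1.5, -0.7, 0.4, 2.0])
y = X @ true_coef + 0.3
b0, b = pls1(X, y, n_components=4)
```

With as many components as descriptors and noise-free data, PLS coincides with ordinary least squares, which makes the sketch easy to sanity-check; the practical value of PLS comes from using fewer components on collinear, noisy descriptor sets.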

  15. An Instructional Development Model for Global Organizations: The GOaL Model.

    Science.gov (United States)

    Hara, Noriko; Schwen, Thomas M.

    1999-01-01

    Presents an instructional development model, GOaL (Global Organization Localization), for use by global organizations. Topics include gaps in language, culture, and needs; decentralized processes; collaborative efforts; predetermined content; multiple perspectives; needs negotiation; learning within context; just-in-time training; and bilingual…

  16. Examples of New Models Applied in Selected Simulation Systems with Respect to Database

    Directory of Open Access Journals (Sweden)

    Z. Ignaszak

    2013-01-01

    Full Text Available The tolerance-of-damage rule is progressively gaining acceptance in casting part design procedures. This has brought new challenges and expectations for the continued development of process virtualization in the mechanical engineering industry. Virtualization is increasingly applied at the stage of product design and materials technology optimization. Design and process engineers have growing expectations of the practical effectiveness of simulation systems extended with newly proposed upgrade modules. The purpose is to obtain simulation tools that allow the most realistic prognosis of the casting structure possible, including indication, with the highest possible probability, of the places in the casting endangered by shrinkage- and gas-porosity formation. This 3D map of discontinuities, and the structure transformed into local mechanical characteristics, is used to calculate the local stresses and safety factors. The needs of damage tolerance and a new approach to evaluating the quality of such prognoses must be defined. These problems of validating the new models/modules used to predict shrinkage- and gas porosity, including chosen structure parameters, are discussed in the paper on the example of an AlSi7 alloy.

  18. VEMAP Phase 2 bioclimatic database. I. Gridded historical (20th century) climate for modeling ecosystem dynamics across the conterminous USA

    Science.gov (United States)

    Kittel, T.G.F.; Rosenbloom, N.A.; Royle, J. Andrew; Daly, Christopher; Gibson, W.P.; Fisher, H.H.; Thornton, P.; Yates, D.N.; Aulenbach, S.; Kaufman, C.; McKeown, R.; Bachelet, D.; Schimel, D.S.; Neilson, R.; Lenihan, J.; Drapek, R.; Ojima, D.S.; Parton, W.J.; Melillo, J.M.; Kicklighter, D.W.; Tian, H.; McGuire, A.D.; Sykes, M.T.; Smith, B.; Cowling, S.; Hickler, T.; Prentice, I.C.; Running, S.; Hibbard, K.A.; Post, W.M.; King, A.W.; Smith, T.; Rizzo, B.; Woodward, F.I.

    2004-01-01

    Analysis and simulation of biospheric responses to historical forcing require surface climate data that capture those aspects of climate that control ecological processes, including key spatial gradients and modes of temporal variability. We developed a multivariate, gridded historical climate dataset for the conterminous USA as a common input database for the Vegetation/Ecosystem Modeling and Analysis Project (VEMAP), a biogeochemical and dynamic vegetation model intercomparison. The dataset covers the period 1895-1993 on a 0.5° latitude/longitude grid. Climate is represented at both monthly and daily timesteps. Variables are: precipitation, minimum and maximum temperature, total incident solar radiation, daylight-period irradiance, vapor pressure, and daylight-period relative humidity. The dataset was derived from US Historical Climate Network (HCN), cooperative network, and snowpack telemetry (SNOTEL) monthly precipitation and mean minimum and maximum temperature station data. We employed techniques that rely on geostatistical and physical relationships to create the temporally and spatially complete dataset. We developed a local kriging prediction model to infill discontinuous and limited-length station records based on the spatial autocorrelation structure of climate anomalies. A spatial interpolation model (PRISM) that accounts for physiographic controls was used to grid the infilled monthly station data. We implemented a stochastic weather generator (modified WGEN) to disaggregate the gridded monthly series to dailies. Radiation and humidity variables were estimated from the dailies using a physically based empirical surface climate model (MTCLIM3). Derived datasets include a 100 yr model spin-up climate and a historical Palmer Drought Severity Index (PDSI) dataset. The VEMAP dataset exhibits statistically significant trends in temperature, precipitation, solar radiation, vapor pressure, and PDSI for US National Assessment regions. 
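The kriging infill step can be illustrated with a minimal ordinary-kriging predictor. This is a generic sketch, not the VEMAP implementation: the exponential variogram parameters and the "station" anomalies below are assumptions made up for illustration.

```python
import numpy as np

def ordinary_kriging(xy, z, x0, sill=1.0, length=50.0):
    """Ordinary kriging with an assumed exponential variogram
    gamma(h) = sill * (1 - exp(-h / length))."""
    gamma = lambda h: sill * (1.0 - np.exp(-h / length))
    n = len(z)
    # Variogram system bordered by the unbiasedness constraint (weights sum to 1)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(xy - x0, axis=1))
    weights = np.linalg.solve(A, b)[:n]   # kriging weights (last entry is the Lagrange multiplier)
    return weights @ z

# Made-up station anomalies on a 100 x 100 km domain (illustrative only)
rng = np.random.default_rng(1)
stations = rng.uniform(0.0, 100.0, size=(12, 2))
anomalies = rng.normal(0.0, 1.0, size=12)
infill = ordinary_kriging(stations, anomalies, np.array([50.0, 50.0]))
```

A useful property for checking such an implementation is exactness: predicting at an observed station location returns that station's value.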
The historical climate and

  19. A Data-Based Approach for Modeling and Analysis of Vehicle Collision by LPV-ARMAX Models

    Directory of Open Access Journals (Sweden)

    Qiugang Lu

    2013-01-01

    Full Text Available Vehicle crash test is considered to be the most direct and common approach to assess the vehicle crashworthiness. However, it suffers from the drawbacks of high experiment cost and huge time consumption. Therefore, the establishment of a mathematical model of vehicle crash which can simplify the analysis process is significantly attractive. In this paper, we present the application of LPV-ARMAX model to simulate the car-to-pole collision with different initial impact velocities. The parameters of the LPV-ARMAX are assumed to have dependence on the initial impact velocities. Instead of establishing a set of LTI models for vehicle crashes with various impact velocities, the LPV-ARMAX model is comparatively simple and applicable to predict the responses of new collision situations different from the ones used for identification. Finally, the comparison between the predicted response and the real test data is conducted, which shows the high fidelity of the LPV-ARMAX model.
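The identification step behind such models can be illustrated with a plain ARX fit by least squares (the LPV extension then lets the estimated coefficients depend on the initial impact velocity). The "crash pulse" below is synthetic data from a known second-order system, not the paper's test data.

```python
import numpy as np

def fit_arx(y, u, na=2, nb=2):
    """Least-squares fit of the ARX model
    y[k] = -a1*y[k-1] - ... - a_na*y[k-na] + b1*u[k-1] + ... + b_nb*u[k-nb]."""
    n0 = max(na, nb)
    Phi = np.array([
        np.concatenate([-y[k - na:k][::-1], u[k - nb:k][::-1]])
        for k in range(n0, len(y))
    ])
    theta, *_ = np.linalg.lstsq(Phi, y[n0:], rcond=None)
    return theta[:na], theta[na:]          # AR coefficients, exogenous coefficients

# Synthetic stand-in signal: simulate a known stable 2nd-order ARX system
rng = np.random.default_rng(2)
u = rng.standard_normal(500)
y = np.zeros(500)
for k in range(2, 500):
    y[k] = 0.5 * y[k - 1] - 0.2 * y[k - 2] + 1.0 * u[k - 1] + 0.3 * u[k - 2]

a, b = fit_arx(y, u)   # on noise-free data this recovers the true coefficients
```

Because the simulated data are noise-free, the least-squares estimate recovers a = [-0.5, 0.2] and b = [1.0, 0.3] under the sign convention in the docstring.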

  20. Rainfall-induced fecal indicator organisms transport from manured fields: model sensitivity analysis.

    Science.gov (United States)

    Martinez, Gonzalo; Pachepsky, Yakov A; Whelan, Gene; Yakirevich, Alexander M; Guber, Andrey; Gish, Timothy J

    2014-02-01

    Microbial quality of surface waters attracts attention due to food- and waterborne disease outbreaks. Fecal indicator organisms (FIOs) are commonly used to evaluate levels of microbial pollution. Models predicting the fate and transport of FIOs are required to design and evaluate best management practices that reduce microbial pollution in ecosystems and water sources, and thus help to predict the risk of food- and waterborne diseases. In this study we performed a sensitivity analysis for the KINEROS/STWIR model, developed to predict FIO transport out of manured fields to other fields and water bodies, in order to identify the input variables that control transport uncertainty. The distributions of model input parameters were set to encompass values found in three-year experiments at the USDA-ARS OPE3 experimental site in Beltsville and in publicly available information. Sobol' indices and complementary regression trees were used to perform the global sensitivity analysis of the model and to explore the interactions between model input parameters affecting the proportion of FIOs removed from fields. Regression trees provided a useful visualization of the differences in sensitivity of the model output in different parts of the input variable domain. Environmental controls such as soil saturation, rainfall duration and rainfall intensity had the largest influence on model behavior, whereas soil and manure properties ranked lower. Field length had only a moderate effect on the sensitivity of the model output to the model inputs. Among the manure-related properties, the parameter determining the shape of the FIO release kinetic curve had the largest influence on the removal of FIOs from the fields. That underscores the need to better characterize FIO release kinetics. Since the most sensitive model inputs are available in soil and weather databases or can be obtained using soil water models, the results indicate the opportunity of obtaining large-scale estimates of FIO
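First-order Sobol' indices of the kind used in this study can be estimated with a standard pick-and-freeze (Saltelli-type) Monte Carlo scheme. The "model" below is a toy stand-in for KINEROS/STWIR, chosen so the true indices are known analytically:

```python
import numpy as np

def toy_model(x):
    # Hypothetical stand-in for the rainfall-runoff model: output driven
    # mostly by input 0 ("rainfall intensity"), weakly by input 1 ("duration")
    return x[:, 0] ** 2 + 0.3 * x[:, 1]

def sobol_first_order(model, dim, n=200_000, seed=0):
    """Monte Carlo estimate of first-order Sobol' indices (Saltelli-type
    pick-and-freeze estimator) for inputs uniform on [0, 1]."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n, dim))
    B = rng.uniform(size=(n, dim))
    fA, fB = model(A), model(B)
    total_var = np.var(np.concatenate([fA, fB]))
    S = np.empty(dim)
    for i in range(dim):
        AB = A.copy()
        AB[:, i] = B[:, i]                 # vary only input i between the two samples
        S[i] = np.mean(fB * (model(AB) - fA)) / total_var
    return S

S = sobol_first_order(toy_model, dim=2)
```

For this additive toy function the analytic first-order indices are approximately 0.922 and 0.078, so the estimate can be checked directly; a real application would wrap the simulation model in place of `toy_model`.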

  1. A three-dimensional cellular automata model coupled with finite element method and thermodynamic database for alloy solidification

    Science.gov (United States)

    Zhao, Y.; Qin, R. S.; Chen, D. F.

    2013-08-01

    A three-dimensional (3D) cellular automata (CA) model has been developed for the simulation of microstructure evolution in alloy solidification. The governing rule for the CA model is associated with the phase transition driving force which is obtained via a thermodynamic database. This determines the migration rate of the non-equilibrium solid-liquid (SL) interface and is calculated according to the local temperature and chemical composition. The curvature of the interface and the anisotropic property of the surface energy are taken into consideration. A 3D finite element (FE) method is applied for the calculation of transient heat and mass transfer. Numerical calculations for the solidification of Fe-1.5 wt% C alloy have been performed. The morphological evolution of dendrites, carbon segregation and temperature distribution in both isothermal and non-isothermal conditions are studied. The parameters affecting the growth of equiaxed and columnar dendrites are discussed. The calculated results are verified using the analytical model and previous experiments. The method provides a sophisticated approach to the solidification of multi-phase and multi-component systems.
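A toy version of the CA capture rule can be sketched in two dimensions. This is illustrative only: `p_capture` is a made-up stand-in for the interface migration probability that the paper derives from the thermodynamic driving force, and there is no coupled FE heat or solute field here.

```python
import numpy as np

def grow(steps, n=41, p_capture=1.0, seed=3):
    """Toy 2D cellular-automaton growth from a central seed.

    p_capture stands in for the interface migration probability that a
    full model would compute from the local driving force."""
    rng = np.random.default_rng(seed)
    solid = np.zeros((n, n), dtype=bool)
    solid[n // 2, n // 2] = True                   # nucleation seed
    for _ in range(steps):
        neighbour = (np.roll(solid, 1, 0) | np.roll(solid, -1, 0) |
                     np.roll(solid, 1, 1) | np.roll(solid, -1, 1))
        frontier = neighbour & ~solid              # liquid cells touching the interface
        solid |= frontier & (rng.random((n, n)) < p_capture)
    return solid

solid = grow(steps=10)
# With p_capture = 1 growth is deterministic: after s steps the solid region is
# the diamond |dx| + |dy| <= s, containing 2*s*s + 2*s + 1 cells (221 for s = 10).
```

In a full solidification CA the capture probability varies per cell with local undercooling, composition, curvature and surface-energy anisotropy, which is what produces dendritic rather than compact growth.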

  2. Real-time model-based image reconstruction with a prior calculated database for electrical capacitance tomography

    Science.gov (United States)

    Rodriguez Frias, Marco A.; Yang, Wuqiang

    2017-04-01

    Image reconstruction for electrical capacitance tomography is a challenging task due to the severely underdetermined nature of the inverse problem. A model-based algorithm tackles this problem by reducing the number of unknowns to be calculated from the limited number of independent measurements. The conventional model-based algorithm is implemented with a finite element method to solve the forward problem at each iteration and can produce good results. However, it is time-consuming and hence the algorithm can be used for off-line image reconstruction only. In this paper, a solution to this limitation is proposed. The model-based algorithm is implemented with a database containing a set of previously solved forward problems. In this way, the time required to perform image reconstruction is drastically reduced without sacrificing accuracy, and real-time image reconstruction is achieved at up to 100 frames s-1. Further enhancement in speed may be accomplished by implementing the reconstruction algorithm on a parallel-processing general-purpose graphics processing unit.
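The precomputed-database idea can be sketched in a few lines: solve the forward problem offline for a library of candidate permittivity distributions, then reconstruct online by matching measured capacitances against the stored solutions. Everything below is an assumption made for illustration: a random linear map stands in for the FE forward solver, and a nearest-neighbour lookup stands in for the paper's model-based update.

```python
import numpy as np

rng = np.random.default_rng(4)
n_meas, n_pix = 66, 32            # e.g. a 12-electrode sensor gives 66 pair capacitances
S = rng.random((n_meas, n_pix))   # stand-in for the FE-derived forward operator

# Offline stage: forward-solve every candidate distribution once and store it
candidates = rng.random((500, n_pix))        # assumed library of permittivity maps
cap_db = candidates @ S.T

def reconstruct(measured_caps):
    """Online stage: return the stored distribution whose predicted
    capacitances are closest to the measured ones."""
    i = int(np.argmin(np.linalg.norm(cap_db - measured_caps, axis=1)))
    return candidates[i]

recovered = reconstruct(candidates[123] @ S.T)
```

The design point is that all expensive forward solves move offline; the online step is a single matrix-distance computation, which is what makes frame rates in the hundreds per second plausible.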

  3. Spatiotemporal Organization of Spin-Coated Supported Model Membranes

    Science.gov (United States)

    Simonsen, Adam Cohen

    All cells of living organisms are separated from their surroundings and organized internally by means of flexible lipid membranes. In fact, there is consensus that the minimal requirements for self-replicating life processes include the following three features: (1) information carriers (DNA, RNA), (2) a metabolic system, and (3) encapsulation in a container structure [1]. Therefore, encapsulation can be regarded as an essential part of life itself. In nature, membranes are highly diverse interfacial structures that compartmentalize cells [2]. While prokaryotic cells only have an outer plasma membrane and a less-well-developed internal membrane structure, eukaryotic cells have a number of internal membranes associated with the organelles and the nucleus. Many of these membrane structures, including the plasma membrane, are complex layered systems, but with the basic structure of a lipid bilayer. Biomembranes contain hundreds of different lipid species in addition to embedded or peripherally associated membrane proteins and connections to scaffolds such as the cytoskeleton. In vitro, lipid bilayers are spontaneously self-organized structures formed by a large group of amphiphilic lipid molecules in aqueous suspensions. Bilayer formation is driven by the entropic properties of the hydrogen bond network in water in combination with the amphiphilic nature of the lipids. The molecular shapes of the lipid constituents play a crucial role in bilayer formation, and only lipids with approximately cylindrical shapes are able to form extended bilayers. The bilayer structure of biomembranes was discovered by Gorter and Grendel in 1925 [3] using monolayer studies of lipid extracts from red blood cells. Later, a number of conceptual models were developed to rationalize the organization of lipids and proteins in biological membranes. One of the most celebrated is the fluid-mosaic model by Singer and Nicolson (1972) [4]. According to this model, the lipid bilayer component of

  4. The LHCb configuration database

    CERN Document Server

    Abadie, L; Van Herwijnen, Eric; Jacobsson, R; Jost, B; Neufeld, N

    2005-01-01

    The aim of the LHCb configuration database is to store information about all the controllable devices of the detector. The experiment's control system (which uses PVSS) will configure, start up and monitor the detector from the information in the configuration database. The database will contain devices with their properties, connectivity and hierarchy. The ability to store and rapidly retrieve huge amounts of data, and the navigability between devices, are important requirements. We have collected use cases to ensure the completeness of the design. Using the entity-relationship modelling technique we describe the use cases as classes with attributes and links. We designed the schema for the tables using relational diagrams. This methodology has been applied to the TFC (switches) and DAQ system. Other parts of the detector will follow later. The database has been implemented using Oracle to benefit from central CERN database support. The project also foresees the creation of tools to populate, maintain, and co...

  5. Danish Urogynaecological Database

    DEFF Research Database (Denmark)

    Hansen, Ulla Darling; Gradel, Kim Oren; Larsen, Michael Due

    2016-01-01

    The Danish Urogynaecological Database is established in order to ensure high quality of treatment for patients undergoing urogynecological surgery. The database contains details of all women in Denmark undergoing incontinence surgery or pelvic organ prolapse surgery, amounting to ~5,200 procedures per year. The variables are collected along the course of treatment of the patient from the referral to a postoperative control. Main variables are prior obstetrical and gynecological history, symptoms, symptom-related quality of life, objective urogynecological findings, type of operation, complications if relevant, implants used if relevant, and 3-6-month postoperative recording of symptoms, if any. A set of clinical quality indicators is being maintained by the steering committee for the database and is published in an annual report which also contains extensive descriptive statistics. The database...

  6. Danish Gynecological Cancer Database

    DEFF Research Database (Denmark)

    Sørensen, Sarah Mejer; Bjørn, Signe Frahm; Jochumsen, Kirsten Marie

    2016-01-01

    AIM OF DATABASE: The Danish Gynecological Cancer Database (DGCD) is a nationwide clinical cancer database and its aim is to monitor the treatment quality of Danish gynecological cancer patients, and to generate data for scientific purposes. DGCD also records detailed data on the diagnostic measures...... for gynecological cancer. STUDY POPULATION: DGCD was initiated January 1, 2005, and includes all patients treated at Danish hospitals for cancer of the ovaries, peritoneum, fallopian tubes, cervix, vulva, vagina, and uterus, including rare histological types. MAIN VARIABLES: DGCD data are organized within separate...... is the registration of oncological treatment data, which is incomplete for a large number of patients. CONCLUSION: The very complete collection of available data from more registries form one of the unique strengths of DGCD compared to many other clinical databases, and provides unique possibilities for validation...

  7. Enabling Semantic Queries Against the Spatial Database

    Directory of Open Access Journals (Sweden)

    PENG, X.

    2012-02-01

    Full Text Available The spatial database based upon the object-relational database management system (ORDBMS) has the merits of a clear data model, good operability and high query efficiency, which is why it has been widely used in spatial data organization and management. However, it cannot express the semantic relationships among geospatial objects, making it difficult for query results to meet users' requirements well. Therefore, this paper represents an attempt to combine Semantic Web technology with the spatial database so as to make up for the traditional database's disadvantages. In this way, on the one hand, users can take advantage of the ORDBMS to store and manage spatial data; on the other hand, if the spatial database is released in the form of the Semantic Web, users can describe a query more concisely, with a cognitive pattern similar to that of daily life. As a consequence, this methodology makes the benefits of both the Semantic Web and the object-relational database (ORDB) available. The paper systematically discusses the semantically enriched spatial database's architecture, key technologies and implementation. Subsequently, we demonstrate the function of spatial semantic queries via a practical prototype system. The query results indicate that the method used in this study is feasible.

  8. OBJECT ORIENTED MODELLING, A MODELLING METHOD OF AN ECONOMIC ORGANIZATION ACTIVITY

    Directory of Open Access Journals (Sweden)

    TĂNĂSESCU ANA

    2014-05-01

    Full Text Available Now, most economic organizations use different types of information systems in order to facilitate their activity. There are different methodologies, methods and techniques that can be used to design information systems. In this paper, I propose to present the advantages of using object oriented modelling in the design of an economic organization's information system. Thus, I have modelled the activity of a photo studio, using Visual Paradigm for UML as a modelling tool. For this purpose, I have identified the use cases for the analyzed system and presented the use case diagram. I have also realized the static and dynamic modelling of the system, through the best-known UML diagrams.

  9. MIANN models in medicinal, physical and organic chemistry.

    Science.gov (United States)

    González-Díaz, Humberto; Arrasate, Sonia; Sotomayor, Nuria; Lete, Esther; Munteanu, Cristian R; Pazos, Alejandro; Besada-Porto, Lina; Ruso, Juan M

    2013-01-01

    Reducing costs in terms of time, animal sacrifice, and material resources with computational methods has become a promising goal in Medicinal, Biological, Physical and Organic Chemistry. There are many computational techniques that can be used in this sense. In any case, almost all of these methods focus on a few fundamental aspects, including: type (1) methods to quantify the molecular structure, type (2) methods to link the structure with the biological activity, and others. In particular, MARCH-INSIDE (MI), an acronym for Markov Chain Invariants for Networks Simulation and Design, is a well-known method for QSAR analysis useful in step (1). In addition, the bio-inspired Artificial-Intelligence (AI) algorithms called Artificial Neural Networks (ANNs) are among the most powerful type (2) methods. We can combine MI with ANNs in order to seek QSAR models, a strategy which is called herein MIANN (MI & ANN models). One of the first applications of the MIANN strategy was in the development of new QSAR models for drug discovery. The MIANN strategy has since been expanded to the QSAR study of proteins, protein-drug interactions, and protein-protein interaction networks. In this paper, we review for the first time many interesting aspects of the MIANN strategy, including its theoretical basis, implementation in web servers, and examples of applications in Medicinal and Biological chemistry. We also report new applications of the MIANN strategy in Medicinal chemistry, as well as the first examples in Physical and Organic Chemistry. In so doing, we developed new MIANN models for several self-assembly physicochemical properties of surfactants and large reaction networks in organic synthesis. In some of the new examples we also present previously unpublished experimental results.

  10. Mesoscopic kinetic Monte Carlo modeling of organic photovoltaic device characteristics

    Science.gov (United States)

    Kimber, Robin G. E.; Wright, Edward N.; O'Kane, Simon E. J.; Walker, Alison B.; Blakesley, James C.

    2012-12-01

    Measured mobility and current-voltage characteristics of single layer and photovoltaic (PV) devices composed of poly{9,9-dioctylfluorene-co-bis[N,N'-(4-butylphenyl)]bis(N,N'-phenyl-1,4-phenylene)diamine} (PFB) and poly(9,9-dioctylfluorene-co-benzothiadiazole) (F8BT) have been reproduced by a mesoscopic model employing the kinetic Monte Carlo (KMC) approach. Our aim is to show how to avoid the uncertainties common in electrical transport models arising from the need to fit a large number of parameters when little information is available, for example, a single current-voltage curve. Here, simulation parameters are derived from a series of measurements using a self-consistent “building-blocks” approach, starting from data on the simplest systems. We found that site energies show disorder and that correlations in the site energies and a distribution of deep traps must be included in order to reproduce measured charge mobility-field curves at low charge densities in bulk PFB and F8BT. The parameter set from the mobility-field curves reproduces the unipolar current in single layers of PFB and F8BT and allows us to deduce charge injection barriers. Finally, by combining these disorder descriptions and injection barriers with an optical model, the external quantum efficiency and current densities of blend and bilayer organic PV devices can be successfully reproduced across a voltage range encompassing reverse and forward bias, with the recombination rate the only parameter to be fitted, found to be 1×10^7 s-1. These findings demonstrate an approach that removes some of the arbitrariness present in transport models of organic devices, which validates the KMC as an accurate description of organic optoelectronic systems, and provides information on the microscopic origins of the device behavior.
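The KMC core of such a model is compact: draw an exponential waiting time from the total event rate, then pick one event with probability proportional to its rate. The sketch below does this for a single carrier hopping on a 1D lattice; the hop rates are arbitrary stand-ins, not the PFB/F8BT parameters, and a device simulation would add many carriers, 3D sites, disordered site energies and field-dependent rates.

```python
import numpy as np

def kmc_1d_hops(k_right, k_left, steps, seed=5):
    """Minimal kinetic Monte Carlo: one carrier hopping on a 1D lattice."""
    rng = np.random.default_rng(seed)
    k_total = k_right + k_left
    x, t = 0, 0.0
    for _ in range(steps):
        t += rng.exponential(1.0 / k_total)          # waiting time to the next event
        if rng.random() < k_right / k_total:         # choose the event by its rate share
            x += 1
        else:
            x -= 1
    return x, t

# Arbitrary stand-in rates with a 3:1 bias to the right (e.g. a field-driven drift)
x, t = kmc_1d_hops(k_right=3.0, k_left=1.0, steps=50_000)
```

The statistics follow directly: the mean drift per step is (k_right - k_left) / k_total and the mean simulated time per step is 1 / k_total, which gives a quick consistency check on any KMC implementation.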

  11. Database Description - DMPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us DMPD Database Description General information of database Database name DMPD Alternative name Dynamic Macrophage Pathway CSML Database DOI 10.18908/lsdba.nbdc00558-000 Creator Creator Name: Masao Naga...ty of Tokyo 4-6-1 Shirokanedai, Minato-ku, Tokyo 108-8639 Tel: +81-3-5449-5615 FAX: +83-3-5449-5442 E-mail: Database...606 Taxonomy Name: Mammalia Taxonomy ID: 40674 Database description DMPD collects pathway models of transcriptional regulation and signal transduction in CSML format for dynamic simulation base

  12. Database and Expert Systems Applications

    DEFF Research Database (Denmark)

    Viborg Andersen, Kim; Debenham, John; Wagner, Roland

    This book constitutes the refereed proceedings of the 16th International Conference on Database and Expert Systems Applications, DEXA 2005, held in Copenhagen, Denmark, in August 2005. The 92 revised full papers presented together with 2 invited papers were carefully reviewed and selected from 390 submissions. The papers are organized in topical sections on workflow automation, database queries, data classification and recommendation systems, information retrieval in multimedia databases, Web applications, implementational aspects of databases, multimedia databases, XML processing, security, XML schemata, query evaluation, semantic processing, information retrieval, temporal and spatial databases, querying XML, organisational aspects of databases, natural language processing, ontologies, Web data extraction, semantic Web, data stream management, data extraction, distributed database systems...

  13. Genome Editing and Its Applications in Model Organisms

    Directory of Open Access Journals (Sweden)

    Dongyuan Ma

    2015-12-01

    Full Text Available Technological advances are important for innovative biological research. Development of molecular tools for DNA manipulation, such as zinc finger nucleases (ZFNs), transcription activator-like effector nucleases (TALENs), and the clustered regularly-interspaced short palindromic repeat (CRISPR)/CRISPR-associated (Cas), has revolutionized genome editing. These approaches can be used to develop potential therapeutic strategies to effectively treat heritable diseases. In the last few years, substantial progress has been made in CRISPR/Cas technology, including technical improvements and wide application in many model systems. This review describes recent advancements in genome editing with a particular focus on CRISPR/Cas, covering the underlying principles, technological optimization, and its application in zebrafish and other model organisms, disease modeling, and gene therapy used for personalized medicine.

  14. Genome Editing and Its Applications in Model Organisms.

    Science.gov (United States)

    Ma, Dongyuan; Liu, Feng

    2015-12-01

    Technological advances are important for innovative biological research. Development of molecular tools for DNA manipulation, such as zinc finger nucleases (ZFNs), transcription activator-like effector nucleases (TALENs), and the clustered regularly-interspaced short palindromic repeat (CRISPR)/CRISPR-associated (Cas), has revolutionized genome editing. These approaches can be used to develop potential therapeutic strategies to effectively treat heritable diseases. In the last few years, substantial progress has been made in CRISPR/Cas technology, including technical improvements and wide application in many model systems. This review describes recent advancements in genome editing with a particular focus on CRISPR/Cas, covering the underlying principles, technological optimization, and its application in zebrafish and other model organisms, disease modeling, and gene therapy used for personalized medicine. Copyright © 2016 The Authors. Production and hosting by Elsevier Ltd. All rights reserved.

  15. Geochemical databases. Part 1. Pmatch: a program to manage thermochemical data. Part 2. The experimental validation of geochemical computer models

    International Nuclear Information System (INIS)

    Pearson, F.J. Jr.; Avis, J.D.; Nilsson, K.; Skytte Jensen, B.

    1993-01-01

    This work is carried out under a cost-sharing contract with the European Atomic Energy Community in the framework of its programme on Management and Storage of Radioactive Wastes. Part 1: PMATCH, A Program to Manage Thermochemical Data, describes the development and use of a computer program by means of which new thermodynamic data from the literature may be referenced to a common frame and thereby become internally consistent with an existing database. The report presents the relevant thermodynamic expressions, and their use in the program is discussed. When there are not sufficient thermodynamic data available to describe a species' behaviour under all conceivable conditions, the problems arising are thoroughly discussed and the available data are handled by approximating expressions. Part 2: The Experimental Validation of Geochemical Computer Models presents the results of experimental investigations of the equilibria established in aqueous suspensions of mixtures of carbonate minerals (calcium, magnesium, manganese and europium carbonates), compared with theoretical calculations made by means of the geochemical JENSEN program. The study revealed that the geochemical computer program worked well and that its database was of sufficient validity. However, it was observed that experimental difficulties could hardly be avoided when, as here, a gaseous component took part in the equilibria. Whereas the magnesium and calcium carbonates did not demonstrate mutual solid solubility, abnormal effects appeared when manganese and calcium carbonates were mixed, resulting in a diminished solubility of both manganese and calcium. With tracer amounts of europium added to a suspension of calcite in sodium carbonate solutions, long-term experiments revealed a transition after 1-2 months, whereby the tracer became more strongly adsorbed onto calcite. The transition is interpreted as the nucleation and formation of a surface phase incorporating the 'species' NaEu(CO₃)₂.
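
    The core conversion behind referencing thermodynamic data to a common frame is the relation ΔG° = -RT ln K. The helper functions below are a minimal sketch of that bookkeeping, not the PMATCH implementation itself; the function names are our own.

```python
import math

R = 8.314462618          # gas constant, J mol^-1 K^-1

def log10_k(delta_g_joules, temp=298.15):
    """log10 of an equilibrium constant from the standard reaction
    Gibbs energy (J/mol), via delta_G = -RT ln K."""
    return -delta_g_joules / (math.log(10.0) * R * temp)

def delta_g(log_k, temp=298.15):
    """Inverse conversion, for re-referencing literature log K values
    to a consistent Gibbs-energy frame."""
    return -math.log(10.0) * R * temp * log_k
```

    A database manager such as PMATCH would apply conversions of this kind, together with reaction-addition rules, so that newly entered constants stay internally consistent with the existing data.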

  16. Developing an Enzyme Mediated Soil Organic Carbon Decomposition Model

    Science.gov (United States)

    Mayes, M. A.; Post, W. M.; Wang, G.; Jagadamma, S.; Steinweg, J. M.; Schadt, C. W.

    2012-12-01

    We developed the Microbial-ENzyme-mediated Decomposition (MEND) model in order to mechanistically model the decomposition of soil organic carbon (C). This presentation is an overview of the concept and development of the model and of the design of complementary lab-scale experiments. The model divides soil C into five pools of particulate, mineral-associated, dissolved, microbial, and enzyme organic C (Wang et al. 2012). There are three input types - cellulose, lignin, and dissolved C. Decomposition is mediated via microbial extracellular enzymes using the Michaelis-Menten equation, resulting in the production of a common pool of dissolved organic C. Parameters for the Michaelis-Menten equation are obtained through a literature review (Wang and Post, 2012a). The dissolved C is taken up by microbial biomass and proportioned according to microbial maintenance and growth, which were recalculated according to Wang and Post (2012b). The model allows dissolved C to undergo adsorption and desorption reactions with the mineral-associated C, which was also parameterized based upon a literature review and complementary laboratory experiments. In the lab, four ¹⁴C-labeled substrates (cellulose, fatty acid, glucose, and lignin-like) were incubated with either the particulate C pool, the mineral-associated C pool, or bulk soils. The rate of decomposition was measured via the production of ¹⁴CO₂ over time, along with incorporation into microbial biomass, production of dissolved C, and estimation of sorbed C. We performed steady-state and dynamic simulations and sensitivity analyses under temperature increases of 1-5°C for a period of 100 y. Simulations indicated an initial decrease in soil organic C consisting of both cellulose and lignin pools. Over longer time intervals (> 6 y), however, a shrinking microbial population, a concomitant decrease in enzyme production, and a decrease in microbial carbon use efficiency together decreased CO₂ production and resulted in greater
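
    As a rough, self-contained illustration of the enzyme-mediated Michaelis-Menten flux described in this record (not the actual MEND code; pool names, parameter values, and the single-flux structure are all simplifications):

```python
def mend_flux_sketch(p0, enzyme, vmax, km, dt, steps):
    """Forward-Euler integration of one MEND-style flux: extracellular
    enzymes depolymerize a particulate C pool P into dissolved organic C D
    via Michaelis-Menten kinetics, F = vmax * E * P / (km + P)."""
    p, d = p0, 0.0
    for _ in range(steps):
        flux = vmax * enzyme * p / (km + p)   # mg C per g soil per step
        p -= flux * dt
        d += flux * dt                        # mass-conserving transfer
    return p, d
```

    The full model couples several such fluxes (uptake, maintenance, growth, sorption/desorption, enzyme turnover) into a system of ODEs; this sketch shows only the depolymerization step.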

  17. Comprehensive mollusk acute toxicity database improves the use of Interspecies Correlation Estimation (ICE) models to predict toxicity of untested freshwater and endangered mussel species

    Science.gov (United States)

    Interspecies correlation estimation (ICE) models extrapolate acute toxicity data from surrogate test species to untested taxa. A suite of ICE models developed from a comprehensive database is available on the US Environmental Protection Agency’s web-based application, Web-I...

  18. Thermodynamic Modeling of Organic-Inorganic Aerosols with the Group-Contribution Model AIOMFAC

    Science.gov (United States)

    Zuend, A.; Marcolli, C.; Luo, B. P.; Peter, T.

    2009-04-01

    Liquid aerosol particles are - from a physicochemical viewpoint - mixtures of inorganic salts, acids, water and a large variety of organic compounds (Rogge et al., 1993; Zhang et al., 2007). Molecular interactions between these aerosol components lead to deviations from ideal thermodynamic behavior. Strong non-ideality between organics and dissolved ions may influence the aerosol phases at equilibrium by means of liquid-liquid phase separations into a mainly polar (aqueous) and a less polar (organic) phase. A number of activity models exist to successfully describe the thermodynamic equilibrium of aqueous electrolyte solutions. However, the large number of different, often multi-functional, organic compounds in mixed organic-inorganic particles is a challenging problem for the development of thermodynamic models. The group-contribution concept as introduced in the UNIFAC model by Fredenslund et al. (1975), is a practical method to handle this difficulty and to add a certain predictability for unknown organic substances. We present the group-contribution model AIOMFAC (Aerosol Inorganic-Organic Mixtures Functional groups Activity Coefficients), which explicitly accounts for molecular interactions between solution constituents, both organic and inorganic, to calculate activities, chemical potentials and the total Gibbs energy of mixed systems (Zuend et al., 2008). This model enables the computation of vapor-liquid (VLE), liquid-liquid (LLE) and solid-liquid (SLE) equilibria within one framework. Focusing on atmospheric applications, we considered eight different cations, five anions and a wide range of alcohols/polyols as organic compounds. With AIOMFAC, the activities of the components within an aqueous electrolyte solution are very well represented up to high ionic strength. We show that the semi-empirical middle-range parametrization of direct organic-inorganic interactions in alcohol-water-salt solutions enables accurate computations of vapor-liquid and liquid

  19. Development of the Croatian model of organ donation and transplantation

    Science.gov (United States)

    Živčić-Ćosić, Stela; Bušić, Mirela; Župan, Željko; Pelčić, Gordana; Anušić Juričić, Martina; Jurčić, Željka; Ivanovski, Mladen; Rački, Sanjin

    2013-01-01

    During the past ten years, the efforts to improve and organize the national transplantation system in Croatia have resulted in a steadily growing donor rate, which reached its highest level in 2011, with 33.6 utilized donors per million population (p.m.p.). Nowadays, Croatia is one of the leading countries in the world according to deceased donation and transplantation rates. Between 2008 and 2011, the waiting list for kidney transplantation decreased by 37.2% (from 430 to 270 persons waiting for a transplant) and the median waiting time decreased from 46 to 24 months. The Croatian model has been internationally recognized as successful and there are plans for its implementation in other countries. We analyzed the key factors that contributed to the development of this successful model for organ donation and transplantation. These are primarily the appointment of hospital and national transplant coordinators, implementation of a new financial model with donor hospital reimbursement, public awareness campaign, international cooperation, adoption of new legislation, and implementation of a donor quality assurance program. The selection of key factors is based on the authors' opinions; we are open for further discussion and propose systematic research into the issue. PMID:23444248

  20. Fast decision tree-based method to index large DNA-protein sequence databases using hybrid distributed-shared memory programming model.

    Science.gov (United States)

    Jaber, Khalid Mohammad; Abdullah, Rosni; Rashid, Nur'Aini Abdul

    2014-01-01

    In recent times, the size of biological databases has increased significantly, with continuous growth in the number of users and rate of queries, such that some databases have reached the terabyte size. There is, therefore, an increasing need to access databases at the fastest rates possible. In this paper, the decision tree indexing model (PDTIM) was parallelised, using a hybrid of distributed and shared memory on a resident database, with horizontal and vertical growth through the Message Passing Interface (MPI) and POSIX Thread (PThread), to accelerate the index building time. The PDTIM was implemented using 1, 2, 4 and 5 processors on 1, 2, 3 and 4 threads respectively. The results show that the hybrid technique improved the speedup, compared to a sequential version. It could be concluded from the results that the proposed PDTIM is appropriate for large data sets, in terms of index building time.
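
    The shared-memory half of such a hybrid scheme amounts to partitioning the database across threads and merging the partial indexes. A toy Python sketch of that structure follows (illustrative only: it builds a simple k-mer index rather than the paper's decision tree, and Python threads stand in for PThread; the MPI half would scatter chunks across ranks the same way):

```python
from concurrent.futures import ThreadPoolExecutor

def index_chunk(records, k):
    """Build a k-mer -> {sequence ids} index for one chunk of the database."""
    idx = {}
    for sid, seq in records:
        for i in range(len(seq) - k + 1):
            idx.setdefault(seq[i:i + k], set()).add(sid)
    return idx

def parallel_index(records, k, workers=4):
    """Partition the records across worker threads, index each chunk,
    then merge the partial indexes into one."""
    chunks = [records[w::workers] for w in range(workers)]
    merged = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for partial in pool.map(lambda c: index_chunk(c, k), chunks):
            for kmer, ids in partial.items():
                merged.setdefault(kmer, set()).update(ids)
    return merged
```

    In a C/MPI + PThread implementation the same split-index-merge pattern runs with true parallelism; the speedup reported in the paper comes from overlapping index construction across both processors and threads.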

  1. Modeling of Electronic Properties in Organic Semiconductor Device Structures

    Science.gov (United States)

    Chang, Hsiu-Chuang

    Organic semiconductors (OSCs) have recently become viable for a wide range of electronic devices, some of which have already been commercialized. With the mechanical flexibility of organic materials and promising performance of organic field effect transistors (OFETs) and organic bulk heterojunction devices, OSCs have been demonstrated in applications such as radio frequency identification tags, flexible displays, and photovoltaic cells. Transient phenomena play decisive roles in the performance of electronic devices and OFETs in particular. The dynamics of the establishment and depletion of the conducting channel in OFETs are investigated theoretically. The device structures explored resemble typical organic thin-film transistors with one of the channel contacts removed. By calculating the displacement current associated with charging and discharging of the channel in these capacitors, transient effects on the carrier transport in OSCs may be studied. In terms of the relevant models it is shown that the non-linearity of the process plays a key role. The non-linearity arises in the simplest case from the fact that channel resistance varies during the charging and discharging phases. Traps can be introduced into the models and their effects examined in some detail. When carriers are injected into the device, a conducting channel is established with traps that are initially empty. Gradual filling of the traps then modifies the transport characteristics of the injected charge carriers. In contrast, dc measurements as they are typically performed to characterize the transport properties of organic semiconductor channels investigate a steady state with traps partially filled. Numerical and approximate analytical models of the formation of the conducting channel and the resulting displacement currents are presented. For the process of transient carrier extraction, it is shown that if the channel capacitance is partially or completely discharged through the channel

  2. A Revised Iranian Model of Organ Donation as an Answer to the Current Organ Shortage Crisis.

    Science.gov (United States)

    Hamidian Jahromi, Alireza; Fry-Revere, Sigrid; Bastani, Bahar

    2015-09-01

    Kidney transplantation has become the treatment of choice for patients with end-stage renal disease. Six decades of success in the field of transplantation have made it possible to save thousands of lives every year. Unfortunately, in recent years success has been overshadowed by an ever-growing shortage of organs. In the United States, there are currently more than 100 000 patients waiting for kidneys. However, the supply of kidneys (combined cadaveric and live donations) has stagnated around 17 000 per year. The ever-widening gap between demand and supply has resulted in an illegal black market and unethical transplant tourism of global proportions. While we believe there is much room to improve the Iranian model of regulated incentivized live kidney donation, with some significant revisions, the Iranian Model could serve as an example for how other countries could make significant strides to lessening their own organ shortage crises.

  3. A Multiagent Modeling Environment for Simulating Work Practice in Organizations

    Science.gov (United States)

    Sierhuis, Maarten; Clancey, William J.; vanHoof, Ron

    2004-01-01

    In this paper we position Brahms as a tool for simulating organizational processes. Brahms is a modeling and simulation environment for analyzing human work practice, and for using such models to develop intelligent software agents to support the work practice in organizations. Brahms is the result of more than ten years of research at the Institute for Research on Learning (IRL), NYNEX Science & Technology (the former R&D institute of the Baby Bell telephone company in New York, now Verizon), and for the last six years at NASA Ames Research Center, in the Work Systems Design and Evaluation group, part of the Computational Sciences Division (Code IC). Brahms has been used on more than ten modeling and simulation research projects, and recently has been used as a distributed multiagent development environment for developing work practice support tools for human in-situ science exploration on planetary surfaces, in particular a human mission to Mars. Brahms was originally conceived of as a business process modeling and simulation tool that incorporates the social systems of work, by illuminating how formal process flow descriptions relate to people's actual located activities in the workplace. Our research started in the early nineties as a reaction to experiences with work process modeling and simulation. Although an effective tool for convincing management of the potential cost-savings of the newly designed work processes, the modeling and simulation environment was only able to describe work as a normative workflow. However, the social systems, uncovered in work practices studied by the design team, played a significant role in how work actually got done: actual lived work. Multi-tasking, informal assistance and circumstantial work interactions could not easily be represented in a tool with a strict workflow modeling paradigm.
In response, we began to develop a tool that would have the benefits of work process modeling and simulation, but be distinctively able to

  4. Brasilia’s Database Administrators

    Directory of Open Access Journals (Sweden)

    Jane Adriana

    2016-06-01

    Full Text Available Database administration has gained an essential role in the management of new database technologies. Different data models are being created to support the enormous data volume, beyond the traditional relational database. These new models are called NoSQL (Not only SQL) databases. The adoption of best practices and procedures has become essential for the operation of database management systems. Thus, this paper investigates some of the techniques and tools used by database administrators. The study highlights features and particularities of databases within the area of Brasilia, the capital of Brazil. The results point to which new technologies regarding database management are currently the most relevant, as well as the central issues in this area.

  5. Modeling the adsorption of weak organic acids on goethite : the ligand and charge distribution model

    NARCIS (Netherlands)

    Filius, J.D.

    2001-01-01

    A detailed study is presented in which the CD-MUSIC modeling approach is used in a new modeling approach that can describe the binding of large organic molecules by metal (hydr)oxides taking the full speciation of the adsorbed molecule into account. Batch equilibration experiments were

  6. Organic polyaromatic hydrocarbons as sensitizing model dyes for semiconductor nanoparticles.

    Science.gov (United States)

    Zhang, Yongyi; Galoppini, Elena

    2010-04-26

    The study of interfacial charge-transfer processes (sensitization) of a dye bound to large-bandgap nanostructured metal oxide semiconductors, including TiO₂, ZnO, and SnO₂, is continuing to attract interest in various areas of renewable energy, especially for the development of dye-sensitized solar cells (DSSCs). The scope of this Review is to describe how selected model sensitizers prepared from organic polyaromatic hydrocarbons have been used over the past 15 years to elucidate, through a variety of techniques, fundamental aspects of heterogeneous charge transfer at the surface of a semiconductor. This Review does not focus on the most recent or efficient dyes, but rather on how model dyes prepared from aromatic hydrocarbons have been used, over time, in key fundamental studies of heterogeneous charge transfer. In particular, we describe model chromophores prepared from anthracene, pyrene, perylene, and azulene. As the level of complexity of the model dye-bridge-anchor group compounds has increased, the understanding of some aspects of very complex charge transfer events has improved. The knowledge acquired from the study of the described model dyes is of importance not only for DSSC development but also to other fields of science for which electronic processes at the molecule/semiconductor interface are relevant.

  7. LSOT: A Lightweight Self-Organized Trust Model in VANETs

    Directory of Open Access Journals (Sweden)

    Zhiquan Liu

    2016-01-01

    Full Text Available With the advances in the automobile industry and wireless communication technology, Vehicular Ad hoc Networks (VANETs) have attracted the attention of a large number of researchers. Trust management plays an important role in VANETs. However, it is still at a preliminary stage, and the existing trust models cannot entirely conform to the characteristics of VANETs. This work proposes a novel Lightweight Self-Organized Trust (LSOT) model which contains trust certificate-based and recommendation-based trust evaluations. Neither supernodes nor trusted third parties are needed in our model. In addition, we comprehensively consider three factor weights to ease the collusion attack in trust certificate-based trust evaluation, and we utilize the testing interaction method to build and maintain the trust network and propose a maximum local trust (MLT) algorithm to identify trustworthy recommenders in recommendation-based trust evaluation. Furthermore, a fully distributed VANET scenario is deployed based on the famous Advogato dataset and a series of simulations and analysis are conducted. The results illustrate that our LSOT model significantly outperforms the excellent experience-based trust (EBT) and Lightweight Cross-domain Trust (LCT) models in terms of evaluation performance and robustness against the collusion attack.
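
    The general shape of combining certificate-based (direct) trust with recommendation-based (indirect) trust can be caricatured as a recommender-weighted blend. The function below is purely illustrative: the 50/50 split and the weighting rule are our own placeholders, not the LSOT model's actual formulas or factor weights.

```python
def aggregate_trust(direct, recommendations, w_direct=0.5):
    """Blend direct trust in a node with indirect trust reported by
    recommenders, each report weighted by how much we trust the recommender.
    recommendations: list of (trust_in_recommender, reported_trust) pairs."""
    if not recommendations:
        return direct
    total = sum(tr for tr, _ in recommendations)
    indirect = sum(tr * val for tr, val in recommendations) / total
    return w_direct * direct + (1.0 - w_direct) * indirect
```

    Weighting reports by trust in the recommender is what dampens collusion: a ring of low-trust nodes inflating each other's scores contributes little to the final value.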

  8. Assessment of predictivity of volatile organic compounds carcinogenicity and mutagenicity by freeware in silico models.

    Science.gov (United States)

    Guerra, Lília Ribeiro; de Souza, Alessandra Mendonça Teles; Côrtes, Juliana Alves; Lione, Viviane de Oliveira Freitas; Castro, Helena Carla; Alves, Gutemberg Gomes

    2017-12-01

    The application of in silico methods is increasing in toxicological risk prediction for human and environmental health. This work aimed to evaluate the performance of three freeware in silico models (OSIRIS v.2.0, LAZAR, and Toxtree) in predicting the carcinogenicity and mutagenicity of thirty-eight volatile organic compounds (VOC) relevant to chemical risk assessment for occupational exposure. Theoretical data were compared with assessments available in international databases. Confusion matrices and ROC curves were used to evaluate the sensitivity, specificity, and accuracy of each model. All three models (OSIRIS, LAZAR, and Toxtree) were able to identify VOC with a potential carcinogenicity or mutagenicity risk for humans, although they differed in specificity, sensitivity, and accuracy. The best predictive performances were found for OSIRIS and LAZAR for carcinogenicity and OSIRIS for mutagenicity, as these programs presented a combination of negative predictive power and a lower risk of false positives (high specificity) for those endpoints. The heterogeneity of the results obtained with the different programs reinforces the importance of using a combination of in silico models for occupational toxicological risk assessment. Copyright © 2017 Elsevier Inc. All rights reserved.
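
    The sensitivity, specificity, and accuracy figures used to compare such models derive directly from a 2×2 confusion matrix. A minimal sketch (the counts in the usage test are invented, not the paper's data):

```python
def confusion(y_true, y_pred):
    """Tally a binary confusion matrix from observed and predicted labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    tn = sum(1 for t, p in zip(y_true, y_pred) if not t and not p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    return tp, fp, tn, fn

def binary_metrics(tp, fp, tn, fn):
    """Sensitivity (true-positive rate), specificity (true-negative rate),
    and overall accuracy."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy
```

    High specificity (few false positives) is the property the abstract highlights for OSIRIS and LAZAR; an ROC curve is obtained by sweeping a decision threshold and plotting sensitivity against 1 - specificity.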

  9. Modeling financial markets by self-organized criticality

    Science.gov (United States)

    Biondo, Alessio Emanuele; Pluchino, Alessandro; Rapisarda, Andrea

    2015-10-01

    We present a financial market model, characterized by self-organized criticality, that is able to generate endogenously a realistic price dynamics and to reproduce well-known stylized facts. We consider a community of heterogeneous traders, composed by chartists and fundamentalists, and focus on the role of informative pressure on market participants, showing how the spreading of information, based on a realistic imitative behavior, drives contagion and causes market fragility. In this model imitation is not intended as a change in the agent's group of origin, but refers only to the price formation process. We introduce in the community also a variable number of random traders in order to study their possible beneficial role in stabilizing the market, as found in other studies. Finally, we also suggest some counterintuitive policy strategies able to dampen fluctuations by means of a partial reduction of information.
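
    Self-organized criticality is conventionally illustrated with the Bak-Tang-Wiesenfeld sandpile, where slow driving plus threshold relaxation yields avalanches of all sizes. The sketch below is that generic SOC toy, not the authors' market model, in which "avalanches" correspond instead to cascades of imitative trading decisions.

```python
def add_grain(grid, n, i, j, zc=4):
    """Drive the BTW sandpile at (i, j) and relax to stability.
    Returns the avalanche size (number of topplings)."""
    grid[i][j] += 1
    size = 0
    stack = [(i, j)]
    while stack:
        x, y = stack.pop()
        if grid[x][y] < zc:
            continue
        grid[x][y] -= zc                      # topple: shed zc grains
        size += 1
        if grid[x][y] >= zc:                  # may need to topple again
            stack.append((x, y))
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < n and 0 <= ny < n:   # grains leave at the open boundary
                grid[nx][ny] += 1
                stack.append((nx, ny))
    return size
```

    Driving the lattice repeatedly and histogramming the returned sizes produces the power-law avalanche statistics that are the signature of the critical state.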

  10. Models of charge pair generation in organic solar cells.

    Science.gov (United States)

    Few, Sheridan; Frost, Jarvist M; Nelson, Jenny

    2015-01-28

    Efficient charge pair generation is observed in many organic photovoltaic (OPV) heterojunctions, despite nominal electron-hole binding energies which greatly exceed the average thermal energy. Empirically, the efficiency of this process appears to be related to the choice of donor and acceptor materials, the resulting sequence of excited state energy levels and the structure of the interface. In order to establish a suitable physical model for the process, a range of different theoretical studies have addressed the nature and energies of the interfacial states, the energetic profile close to the heterojunction and the dynamics of excited state transitions. In this paper, we review recent developments underpinning the theory of charge pair generation and phenomena, focussing on electronic structure calculations, electrostatic models and approaches to excited state dynamics. We discuss the remaining challenges in achieving a predictive approach to charge generation efficiency.

  11. Self-Organized Criticality Theory Model of Thermal Sandpile

    International Nuclear Information System (INIS)

    Peng Xiao-Dong; Qu Hong-Peng; Xu Jian-Qiang; Han Zui-Jiao

    2015-01-01

    A self-organized criticality model of a thermal sandpile is formulated for the first time to simulate the dynamic process with interaction between avalanche events on the fast time scale and diffusive transports on the slow time scale. The main characteristics of the model are that both particle and energy avalanches of sand grains are considered simultaneously. Properties of intermittent transport and improved confinement are analyzed in detail. The results imply that the intermittent phenomenon such as blobs in the low confinement mode as well as edge localized modes in the high confinement mode observed in tokamak experiments are not only determined by the edge plasma physics, but also affected by the core plasma dynamics. (paper)

  12. Modeling the role of microplastics in Bioaccumulation of organic chemicals to marine aquatic organisms. Critical Review

    NARCIS (Netherlands)

    Koelmans, A.A.

    2015-01-01

    It has been shown that ingestion of microplastics may increase bioaccumulation of organic chemicals by aquatic organisms. This paper critically reviews the literature on the effects of plastic ingestion on the bioaccumulation of organic chemicals, emphasizing quantitative approaches and mechanistic

  13. Partitioning of Nanoparticles into Organic Phases and Model Cells

    Energy Technology Data Exchange (ETDEWEB)

    Posner, J.D.; Westerhoff, P.; Hou, W-C.

    2011-08-25

    There is a recognized need to understand and predict the fate, transport and bioavailability of engineered nanoparticles (ENPs) in aquatic and soil ecosystems. Recent research focuses on either collection of empirical data (e.g., removal of a specific NP through water or soil matrices under variable experimental conditions) or precise NP characterization (e.g. size, degree of aggregation, morphology, zeta potential, purity, surface chemistry, and stability). However, it is almost impossible to transition from these precise measurements to models suitable to assess the NP behavior in the environment with complex and heterogeneous matrices. For decades, the USEPA has developed and applied basic partitioning parameters (e.g., octanol-water partition coefficients) and models (e.g., EPI Suite, ECOSAR) to predict the environmental fate, bioavailability, and toxicity of organic pollutants (e.g., pesticides, hydrocarbons, etc.). In this project we have investigated the hypothesis that NP partition coefficients between water and organic phases (octanol or lipid bilayer) are highly dependent on their physiochemical properties, aggregation, and presence of natural constituents in aquatic environments (salts, natural organic matter), which may impact their partitioning into biological matrices (bioaccumulation) and human exposure (bioavailability) as well as the eventual usage in modeling the fate and bioavailability of ENPs. In this report, we use the terminology "partitioning" to operationally define the fraction of ENPs distributed among different phases. The mechanisms leading to this partitioning probably involve both chemical force interactions (hydrophobic association, hydrogen bonding, ligand exchange, etc.) and physical forces that bring the ENPs in close contact with the phase interfaces (diffusion, electrostatic interactions, mixing turbulence, etc.).
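
    For a dissolved solute, the textbook bookkeeping behind a two-phase partition coefficient is the mass-balance relation below. It is offered only as a sketch of the operational "fraction in each phase" definition used in the report; it deliberately ignores the interfacial and kinetic mechanisms the authors emphasize for ENPs.

```python
def organic_phase_fraction(k_part, v_org, v_water):
    """Equilibrium mass fraction of a solute found in the organic phase,
    given a dimensionless organic/water partition coefficient
    (concentration ratio C_org / C_water) and the two phase volumes."""
    return k_part * v_org / (k_part * v_org + v_water)
```

    For example, with equal phase volumes a partition coefficient of 1 splits the mass 50/50, while a strongly hydrophobic solute (large k_part) ends up almost entirely in the organic phase.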
Our work focuses on partitioning, but also provides insight into the relative behavior of ENPs as either "more like

  14. Safety of blood group A2-to-O liver transplantation: an analysis of the United Network of Organ Sharing database.

    Science.gov (United States)

    Kluger, Michael D; Guarrera, James V; Olsen, Sonja K; Brown, Robert S; Emond, Jean C; Cherqui, Daniel

    2012-09-15

    ABO-incompatible organ transplantation typically induces hyperacute rejection. A2-to-O liver transplantations have been successful. This study compared overall and graft survival in O recipients of A2 and O grafts based on Organ Procurement and Transplantation Network data. Scientific Registry of Transplant Recipients data were used. The first A2-to-O liver transplantation was entered on March 11, 1990; all previous transplantations were excluded. Between March 11, 1990, and September 3, 2010, 43,335 O recipients underwent transplantation, of whom 358 received A2 grafts. There were no significant differences in age, sex, and race between the groups. Recipients of A2 grafts versus O grafts were significantly more likely to be hospitalized at transplantation (45% vs. 38%, P≤0.05) and to have a higher mean (SD) model for end-stage liver disease score (24 [11] vs. 22 [10], P≤0.05). 10% of A2 recipients and 9% of O recipients underwent retransplantation. No significant differences existed in rejection during the transplantation admission and at 12 months: 7% versus 6% and 20% versus 22% for A2 recipients and O recipients, respectively; and there were no significant differences in contributing factors to graft failure or cause of death. At 5 years, overall survival of A2 and O graft recipients was 77% and 74%, respectively (log rank=0.71). At 5 years, graft survival was 66% in both groups (log rank=0.52). Donor blood group was insignificant on Cox regression for overall and graft survival. Using Organ Procurement and Transplantation Network/Scientific Registry of Transplant Recipients data, we present the largest series of A2-to-O liver transplantations and conclude this mismatch option to be safe with similar overall and graft survival. This opens possibilities to further meet the demands of a shrinking organ supply, especially with regard to expanding living-donor options.
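
    Survival percentages like the 5-year figures quoted above are typically product-limit (Kaplan-Meier) estimates, which the log-rank test then compares between groups. A minimal estimator follows (illustrative only; the toy data in the test are invented, not registry data):

```python
def kaplan_meier(times, events):
    """Product-limit survival curve.
    times: follow-up times; events: 1 = event observed (e.g. graft failure),
    0 = censored. Returns [(t, S(t))] at each event time."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    s, curve, k = 1.0, [], 0
    while k < len(data):
        t = data[k][0]
        d = n = 0
        while k < len(data) and data[k][0] == t:   # group ties at time t
            n += 1
            d += data[k][1]
            k += 1
        if d:                                      # update only at event times
            s *= 1.0 - d / at_risk
            curve.append((t, s))
        at_risk -= n                               # events + censored leave risk set
    return curve
```

    Censoring is what distinguishes this from a naive fraction-surviving calculation: patients lost to follow-up reduce the risk set without counting as events.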

  15. Mathematical modeling of wastewater-derived biodegradable dissolved organic nitrogen.

    Science.gov (United States)

    Simsek, Halis

    2016-11-01

    Wastewater-derived dissolved organic nitrogen (DON) typically constitutes the majority of total dissolved nitrogen (TDN) discharged to surface waters from advanced wastewater treatment plants (WWTPs). When considering the stringent regulations on nitrogen discharge limits in sensitive receiving waters, DON becomes problematic and needs to be reduced. Biodegradable DON (BDON) is a portion of DON that is biologically degradable by bacteria when the optimum environmental conditions are met. BDON in a two-stage trickling filter WWTP was estimated using artificial intelligence techniques, such as adaptive neuro-fuzzy inference systems, multilayer perceptron, radial basis neural networks (RBNN), and generalized regression neural networks. Nitrite, nitrate, ammonium, TDN, and DON data were used as input neurons. Wastewater samples were collected from four different locations in the plant. Model performances were evaluated using root mean square error, mean absolute error, mean bias error, and coefficient of determination statistics. Modeling results showed that the R² values were higher than 0.85 in all four models for all wastewater samples, except that R² for the final effluent sample in RBNN modeling was low (0.52). Overall, it was found that all four computing techniques could be employed successfully to predict BDON.
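
    The four performance statistics named in this record (RMSE, MAE, mean bias error, and R²) reduce to a few sums over observed/predicted pairs. A straightforward sketch:

```python
import math

def regression_metrics(observed, predicted):
    """RMSE, MAE, mean bias error (MBE), and coefficient of
    determination R^2 for paired observed/predicted values."""
    n = len(observed)
    mean_obs = sum(observed) / n
    errors = [p - o for o, p in zip(observed, predicted)]
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    mae = sum(abs(e) for e in errors) / n
    mbe = sum(errors) / n                     # sign reveals systematic over/underprediction
    ss_res = sum(e * e for e in errors)
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    r2 = 1.0 - ss_res / ss_tot
    return rmse, mae, mbe, r2
```

    Reporting all four together is informative because they fail differently: MBE can be near zero while RMSE is large (scattered but unbiased predictions), and R² alone says nothing about absolute error magnitude.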

  16. Database Description - ClEST | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available List Contact us ClEST Database Description General information of database Database name ClEST Alternative name... Tsukuba, Ibaraki 305-8566 Japan Tel: +81-29-861-6812 (ext.222-36812) Fax: +81-29-861-6812 E-mail: Database classification Nucleotide Sequence Databases Organism Taxonomy Name: Cimex lectularius Taxonomy ID: 79782 Database description Expressed sequence tags (EST) database of unique organs and whole bodies of the bedbug, Cimex lectularius Features and manner of utilization of database The bedbug Cimex

  17. A Guide RNA Sequence Design Platform for the CRISPR/Cas9 System for Model Organism Genomes

    Directory of Open Access Journals (Sweden)

    Ming Ma

    2013-01-01

    Full Text Available Cas9/CRISPR has been reported to efficiently induce targeted gene disruption and homologous recombination in both prokaryotic and eukaryotic cells. Thus, we developed a Guide RNA Sequence Design Platform for the Cas9/CRISPR silencing system for model organisms. The platform makes gRNA design easy: the user supplies query sequences, and it identifies potential targets by PAM and ranks them according to factors including uniqueness, SNPs, RNA secondary structure, and AT content. The platform also allows users to upload and share their experimental results. In addition, most guide RNA sequences from published papers have been deposited in our database.
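As an illustration of the first two scoring steps (PAM search and AT content), here is a minimal sketch for the SpCas9 NGG PAM. This is not the platform's actual code; the real pipeline additionally weighs uniqueness, SNPs, and RNA secondary structure, and all names here are invented:

```python
import re

def find_gRNA_targets(seq, guide_len=20):
    """Scan a DNA sequence for SpCas9 PAM sites (NGG) and return candidate
    guides with their AT content, one of the ranking factors the platform
    uses. A candidate guide is the guide_len bases immediately 5' of the PAM."""
    seq = seq.upper()
    targets = []
    # Zero-width lookahead so overlapping PAM sites are all found.
    for m in re.finditer(r"(?=([ACGT]GG))", seq):
        pam_start = m.start()
        if pam_start >= guide_len:
            guide = seq[pam_start - guide_len:pam_start]
            at = (guide.count("A") + guide.count("T")) / guide_len
            targets.append({"guide": guide, "pam": m.group(1), "at_content": at})
    # Illustrative ranking: prefer guides with balanced AT content
    targets.sort(key=lambda t: abs(t["at_content"] - 0.5))
    return targets
```

For example, `find_gRNA_targets("ATGCATGCATGCATGCATGCTGG")` yields one candidate whose guide is the 20 bases preceding the `TGG` PAM.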

  18. Coupling databases and advanced analytic tools (R)

    OpenAIRE

    Seakomo, Saviour Sedem Kofi

    2014-01-01

    Today, many organizations collect various kinds of data, creating large data repositories. But the capacity to perform advanced analytics over these large amounts of data stored in databases remains a significant challenge for statistical software (R, S, SAS, SPSS, etc.) and data management systems (DBMSs). This is because while statistical software provides comprehensive analytics and modelling functionalities, it can only handle limited amounts of data. The data management sys...
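The trade-off the abstract describes can be sketched in a few lines: aggregation pushed into the database moves only summary rows across the DB/analytics boundary, while the standalone-statistical-software pattern transfers every raw row into memory first. A toy illustration with SQLite and invented table names:

```python
import sqlite3

# Toy dataset inside an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurements (sensor TEXT, value REAL)")
conn.executemany("INSERT INTO measurements VALUES (?, ?)",
                 [("a", 1.0), ("a", 3.0), ("b", 2.0), ("b", 6.0)])

# In-database analytics: SQL computes the per-group mean; only two summary
# rows cross the boundary, so this scales to data that does not fit in RAM.
in_db = dict(conn.execute(
    "SELECT sensor, AVG(value) FROM measurements GROUP BY sensor"))

# In-memory analytics: every raw row is transferred and aggregated client-side,
# the way standalone statistical packages typically operate.
rows = conn.execute("SELECT sensor, value FROM measurements").fetchall()
groups = {}
for sensor, value in rows:
    groups.setdefault(sensor, []).append(value)
in_mem = {s: sum(v) / len(v) for s, v in groups.items()}

assert in_db == in_mem  # same answer, very different data movement
```

Coupling systems such as R-to-DBMS bridges aim to give analysts the first pattern's scalability with the second pattern's modelling flexibility.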

  19. Hierarchical Fuzzy Sets To Query Possibilistic Databases

    OpenAIRE

    Thomopoulos, Rallou; Buche, Patrice; Haemmerlé, Ollivier

    2008-01-01

    Within the framework of flexible querying of possibilistic databases, based on the fuzzy set theory, this chapter focuses on the case where the vocabulary used both in the querying language and in the data is hierarchically organized, which occurs in systems that use ontologies. We give an overview of previous works concerning two issues: firstly, flexible querying of imprecise data in the relational model; secondly, the introduction of fuzziness in hierarchies. Concerning the latter point, w...
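The combination of a concept hierarchy with fuzzy membership degrees can be sketched as follows. This is an invented scoring scheme for illustration only (the chapter's actual semantics for fuzzy hierarchies differ): a query term matches itself and its specializations fully, while each step of generalization attenuates the degree.

```python
# Toy ontology: child -> parent, as found in hierarchically organized vocabularies
PARENT = {"orange": "citrus", "lemon": "citrus", "citrus": "fruit",
          "apple": "fruit", "fruit": "food"}

def ancestors(term):
    """All broader terms of `term`, nearest first."""
    out = []
    while term in PARENT:
        term = PARENT[term]
        out.append(term)
    return out

def membership(query, item, penalty=0.5):
    """Fuzzy degree to which `item` satisfies a flexible query for `query`.
    Exact matches and specializations of the query score 1.0; each level of
    generalization multiplies the degree by `penalty`; unrelated terms score 0."""
    if item == query or query in ancestors(item):
        return 1.0  # item is the query term or a narrower (more specific) term
    anc = ancestors(query)
    if item in anc:
        return penalty ** (anc.index(item) + 1)
    return 0.0
```

Under this toy scheme, querying for "citrus" accepts "orange" with degree 1.0, the broader "fruit" with degree 0.5, and the sibling "apple" not at all.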

  20. Modeling evolutionary dynamics of epigenetic mutations in hierarchically organized tumors.

    Directory of Open Access Journals (Sweden)

    Andrea Sottoriva

    2011-05-01

    Full Text Available The cancer stem cell (CSC concept is a highly debated topic in cancer research. While experimental evidence in favor of the cancer stem cell theory is apparently abundant, the results are often criticized as being difficult to interpret. An important reason for this is that most experimental data that support this model rely on transplantation studies. In this study we use a novel cellular Potts model to elucidate the dynamics of established malignancies that are driven by a small subset of CSCs. Our results demonstrate that epigenetic mutations that occur during mitosis display highly altered dynamics in CSC-driven malignancies compared to a classical, non-hierarchical model of growth. In particular, the heterogeneity observed in CSC-driven tumors is considerably higher. We speculate that this feature could be used in combination with epigenetic (methylation sequencing studies of human malignancies to prove or refute the CSC hypothesis in established tumors without the need for transplantation. Moreover, our tumor growth simulations indicate that CSC-driven tumors display evolutionary features that can be considered beneficial during tumor progression. Besides increased heterogeneity, they also exhibit properties that allow clones to escape from local fitness peaks. This leads to more aggressive phenotypes in the long run and makes the neoplasm more adaptable to stringent selective forces such as cancer treatment. Indeed, when therapy is applied, the clone landscape of the regrown tumor is more aggressive than that of the primary tumor, whereas the classical model demonstrated similar patterns before and after therapy. Understanding these often counter-intuitive fundamental properties of (non-hierarchically organized malignancies is a crucial step in validating the CSC concept as well as providing insight into the therapeutic consequences of this model.
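The heterogeneity measurement at the heart of this comparison can be caricatured in a few lines. The sketch below is a deliberately simplified, non-spatial stand-in with invented parameters; it is not the authors' cellular Potts model, and it only illustrates how clone counts are tallied under the two growth modes:

```python
import random

def simulate(hierarchical, steps=300, mut_rate=0.1, diff_capacity=3, seed=42):
    """Toy division model. Each cell carries a frozenset of epigenetic marks;
    at mitosis the daughter inherits the parent's marks and gains a new one
    with probability mut_rate. In the hierarchical (CSC-driven) variant only
    the stem cell self-renews indefinitely, and its daughters differentiate
    with a limited division capacity. Returns the number of distinct
    epigenetic states (clones) in the final population."""
    rng = random.Random(seed)
    # cell = (is_stem, remaining_divisions, marks)
    cells = [(True, None, frozenset())] if hierarchical \
        else [(False, 10**9, frozenset())]
    next_mark = 0
    for _ in range(steps):
        i = rng.randrange(len(cells))
        is_stem, cap, marks = cells[i]
        if not is_stem and cap == 0:
            continue  # terminally differentiated: no further divisions
        child_marks = marks
        if rng.random() < mut_rate:  # epigenetic mutation at mitosis
            child_marks = marks | {next_mark}
            next_mark += 1
        if is_stem:
            # asymmetric division: stem self-renews, daughter differentiates
            cells.append((False, diff_capacity, child_marks))
        else:
            cells[i] = (False, cap - 1, marks)
            cells.append((False, cap - 1, child_marks))
    return len({marks for _, _, marks in cells})
```

Comparing `simulate(True)` with `simulate(False)` over many seeds and parameter settings is the kind of contrast the paper performs (spatially, and at far greater fidelity) to quantify heterogeneity differences between the two growth models.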