WorldWideScience

Sample records for model organism databases

  1. The Zebrafish Model Organism Database (ZFIN)

    Data.gov (United States)

    U.S. Department of Health & Human Services — ZFIN serves as the zebrafish model organism database. It aims to: a) be the community database resource for the laboratory use of zebrafish, b) develop and support...

  2. Xanthusbase: adapting wikipedia principles to a model organism database

    OpenAIRE

    Arshinoff, Bradley I.; Suen, Garret; Just, Eric M.; Merchant, Sohel M.; Kibbe, Warren A.; Chisholm, Rex L.; Welch, Roy D.

    2006-01-01

    xanthusBase () is the official model organism database (MOD) for the social bacterium Myxococcus xanthus. In many respects, M.xanthus represents the pioneer model organism (MO) for studying the genetic, biochemical, and mechanistic basis of prokaryotic multicellularity, a topic that has garnered considerable attention due to the significance of biofilms in both basic and applied microbiology research. To facilitate its utility, the design of xanthusBase incorporates open-source software, leve...

  3. IntPath--an integrated pathway gene relationship database for model organisms and important pathogens.

    Science.gov (United States)

    Zhou, Hufeng; Jin, Jingjing; Zhang, Haojun; Yi, Bo; Wozniak, Michal; Wong, Limsoon

    2012-01-01

    Pathway data are important for understanding the relationships between genes, proteins and many other molecules in living organisms. Pathway gene relationships are crucial information for guidance, prediction, reference and assessment in biochemistry, computational biology, and medicine. Many well-established databases--e.g., KEGG, WikiPathways, and BioCyc--are dedicated to collecting pathway data for public access. However, the effectiveness of these databases is hindered by issues such as incompatible data formats, inconsistent molecular representations, inconsistent molecular relationship representations, inconsistent referrals to pathway names, and incomplete data coverage across databases. In this paper, we overcome these issues through extraction, normalization and integration of pathway data from several major public databases (KEGG, WikiPathways, BioCyc, etc.). We build a database that not only hosts our integrated pathway gene relationship data for public access but also maintains the necessary updates in the long run. This public repository is named IntPath (Integrated Pathway gene relationship database for model organisms and important pathogens). Four organisms--S. cerevisiae, M. tuberculosis H37Rv, H. sapiens and M. musculus--are included in this version (V2.0) of IntPath. IntPath uses the "full unification" approach to ensure that no data are deleted and no noise is introduced in this process. Therefore, IntPath contains much richer pathway-gene and pathway-gene pair relationships and a much larger number of non-redundant genes and gene pairs than any of the single-source databases. The gene relationships of each gene (measured by average node degree) per pathway are significantly richer. The gene relationships in each pathway (measured by average number of gene pairs per pathway) are also considerably richer in the integrated pathways. Moderate manual curation is involved to remove errors and noise from the source data (e.g., the gene ID errors in WikiPathways and
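
    To make the two richness measures above concrete, the short Python sketch below computes the average node degree and the number of gene pairs per pathway from a toy pathway-to-gene-pair mapping; the pathway and gene names are invented for illustration and are not taken from IntPath.

```python
# Illustrative only: hypothetical pathway gene-pair data, not from IntPath itself.
from collections import defaultdict

# Each pathway maps to a set of undirected gene-gene relationship pairs.
pathways = {
    "glycolysis": {("HK1", "GPI"), ("GPI", "PFKL"), ("PFKL", "ALDOA")},
    "tca_cycle": {("CS", "ACO2"), ("ACO2", "IDH1")},
}

def average_node_degree(gene_pairs):
    """Average number of relationship partners per gene within one pathway."""
    degree = defaultdict(int)
    for a, b in gene_pairs:
        degree[a] += 1
        degree[b] += 1
    return sum(degree.values()) / len(degree)

for name, pairs in pathways.items():
    genes = {g for pair in pairs for g in pair}
    print(name,
          "genes:", len(genes),
          "gene pairs:", len(pairs),
          "average node degree:", round(average_node_degree(pairs), 2))
```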

  4. Table of 3D organ model IDs and organ names (PART-OF Tree) - BodyParts3D | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Data name: Table of 3D organ model IDs and organ names (PART-OF Tree). DOI: 10.18908/lsdba.nbdc00837-002. Description of data contents: a list of downloadable 3D organ models in a tab-delimited text file format, describing the correspondence between 3D organ model IDs and organ names available in the PART-OF Tree (BodyParts3D, LSDB Archive).

  5. Pancreatic Expression database: a generic model for the organization, integration and mining of complex cancer datasets

    Directory of Open Access Journals (Sweden)

    Lemoine Nicholas R

    2007-11-01

    of genes associated with the progression of cancer, cross-platform meta-analysis, SNP selection for pancreatic cancer association studies, cancer gene promoter analysis as well as mining cancer ontology information. The data model is generic and can be easily extended and applied to other types of cancer. The database is available online with no restrictions for the scientific community at http://www.pancreasexpression.org/.

  6. Combining next-generation sequencing and online databases for microsatellite development in non-model organisms.

    Science.gov (United States)

    Rico, Ciro; Normandeau, Eric; Dion-Côté, Anne-Marie; Rico, María Inés; Côté, Guillaume; Bernatchez, Louis

    2013-12-03

    Next-generation sequencing (NGS) is revolutionising marker development and the rapidly increasing amount of transcriptomes published across a wide variety of taxa is providing valuable sequence databases for the identification of genetic markers without the need to generate new sequences. Microsatellites are still the most important source of polymorphic markers in ecology and evolution. Motivated by our long-term interest in the adaptive radiation of a non-model species complex of whitefishes (Coregonus spp.), in this study, we focus on microsatellite characterisation and multiplex optimisation using transcriptome sequences generated by Illumina® and Roche-454, as well as online databases of Expressed Sequence Tags (EST) for the study of whitefish evolution and demographic history. We identified and optimised 40 polymorphic loci in multiplex PCR reactions and validated the robustness of our analyses by testing several population genetics and phylogeographic predictions using 494 fish from five lakes and 2 distinct ecotypes.

  7. Classical databases and knowledge organization

    DEFF Research Database (Denmark)

    Hjørland, Birger

    2015-01-01

    This paper considers classical bibliographic databases based on the Boolean retrieval model (such as MEDLINE and PsycInfo). This model is challenged by modern search engines and information retrieval (IR) researchers, who often consider Boolean retrieval a less efficient approach. The paper...

  8. Database for propagation models

    Science.gov (United States)

    Kantak, Anil V.

    1991-07-01

    A propagation researcher or a systems engineer who intends to use the results of a propagation experiment is generally faced with various database tasks such as the selection of the computer software, the hardware, and the writing of the programs to pass the data through the models of interest. This task is repeated every time a new experiment is conducted or the same experiment is carried out at a different location generating different data. Thus the users of these data have to spend a considerable portion of their time learning how to implement the computer hardware and the software towards the desired end. This situation may be facilitated considerably if an easily accessible propagation database is created that has all the accepted (standardized) propagation phenomena models approved by the propagation research community. Also, the handling of data will become easier for the user. Such a database can only stimulate the growth of propagation research if it is available to all researchers, so that the results of an experiment conducted by one researcher can be examined independently by another, without different hardware and software being used. The database may be made flexible so that researchers need not be confined only to the contents of the database. Another way in which the database may help researchers is that they will not have to document the software and hardware tools used in their research, since the propagation research community will already know the database. The following sections show a possible database construction, as well as properties of the database for propagation research.

  9. MaizeGDB: The Maize Model Organism Database for Basic, Translational, and Applied Research

    OpenAIRE

    Lawrence, Carolyn J.; Harper, Lisa C.; Schaeffer, Mary L.; Sen, Taner Z.; Seigfried, Trent E.; Campbell, Darwin A.

    2008-01-01

    In 2001 maize became the number one production crop in the world with the Food and Agriculture Organization of the United Nations reporting over 614 million tonnes produced. Its success is due to the high productivity per acre in tandem with a wide variety of commercial uses. Not only is maize an excellent source of food, feed, and fuel, but also its by-products are used in the production of various commercial products. Maize's unparalleled success in agriculture stems from basic research, th...

  10. Croatian Cadastre Database Modelling

    Directory of Open Access Journals (Sweden)

    Zvonko Biljecki

    2013-04-01

    Full Text Available The Cadastral Data Model has been developed as a part of a larger programme to improve the products and production environment of the Croatian Cadastral Service of the State Geodetic Administration (SGA). The goal of the project was to create a cadastral data model conforming to the relevant standards and specifications in the field of geoinformation (GI) adopted by the international organisations for standardisation with competence over GI (ISO TC211 and OpenGIS) and their implementations. The main guidelines during the project have been object-oriented conceptual modelling of the updated users' requests and a "new" cadastral data model designed by the SGA - Faculty of Geodesy - Geofoto LLC project team. The UML of the conceptual model is given for all feature categories and is described only at class level. The next step was the UML technical model, which was developed from the UML conceptual model. The technical model integrates different UML schemas in one united schema. XML (eXtensible Markup Language) was applied for the XML description of the UML models, and then the XML schema was transferred into a GML (Geography Markup Language) application schema. With this procedure we have completely described the behaviour of each cadastral feature and the rules for the transfer and storage of cadastral features into the database.

  11. HIERARCHICAL ORGANIZATION OF INFORMATION, IN RELATIONAL DATABASES

    Directory of Open Access Journals (Sweden)

    Demian Horia

    2008-05-01

    Full Text Available In this paper I will present different types of representation of hierarchical information inside a relational database. I will also compare them to find the best organization for specific scenarios.
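
    As a concrete illustration of one common representation of hierarchical information in a relational database (the adjacency list, queried with a recursive common table expression), the sketch below uses SQLite; the table and data are assumed examples, not taken from the paper.

```python
# A minimal adjacency-list sketch: each row points to its parent row.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE category (
    id        INTEGER PRIMARY KEY,
    parent_id INTEGER REFERENCES category(id),  -- NULL for the root
    name      TEXT NOT NULL
);
INSERT INTO category VALUES
    (1, NULL, 'Products'),
    (2, 1,    'Databases'),
    (3, 2,    'Relational'),
    (4, 2,    'Graph');
""")

# Walk the whole tree from the root, keeping track of depth.
rows = conn.execute("""
WITH RECURSIVE subtree(id, name, depth) AS (
    SELECT id, name, 0 FROM category WHERE parent_id IS NULL
    UNION ALL
    SELECT c.id, c.name, s.depth + 1
    FROM category c JOIN subtree s ON c.parent_id = s.id
)
SELECT depth, name FROM subtree ORDER BY depth, name;
""").fetchall()

for depth, name in rows:
    print("  " * depth + name)
```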

  12. The PMDB Protein Model Database

    Science.gov (United States)

    Castrignanò, Tiziana; De Meo, Paolo D'Onorio; Cozzetto, Domenico; Talamo, Ivano Giuseppe; Tramontano, Anna

    2006-01-01

    The Protein Model Database (PMDB) is a public resource aimed at storing manually built 3D models of proteins. The database is designed to provide access to models published in the scientific literature, together with validating experimental data. It is a relational database and it currently contains >74 000 models for ∼240 proteins. The system is accessible at and allows predictors to submit models along with related supporting evidence and users to download them through a simple and intuitive interface. Users can navigate in the database and retrieve models referring to the same target protein or to different regions of the same protein. Each model is assigned a unique identifier that allows interested users to directly access the data. PMID:16381873

  13. Integration of an Evidence Base into a Probabilistic Risk Assessment Model. The Integrated Medical Model Database: An Organized Evidence Base for Assessing In-Flight Crew Health Risk and System Design

    Science.gov (United States)

    Saile, Lynn; Lopez, Vilma; Bickham, Grandin; FreiredeCarvalho, Mary; Kerstman, Eric; Byrne, Vicky; Butler, Douglas; Myers, Jerry; Walton, Marlei

    2011-01-01

    This slide presentation reviews the Integrated Medical Model (IMM) database, an organized evidence base for assessing in-flight crew health risk. The database is a relational database accessible to many people. It quantifies the model inputs with a Level of Evidence (LOE) ranking, based on the highest value of the data, and a Quality of Evidence (QOE) score that assesses the evidence base for each medical condition. The IMM evidence base has already provided invaluable information for designers, and for other uses.

  14. Modeling biology using relational databases.

    Science.gov (United States)

    Peitzsch, Robert M

    2003-02-01

    There are several different methodologies that can be used for designing a database schema; none is best for all occasions. This unit demonstrates two different techniques for designing relational tables and discusses when each should be used. The two techniques presented are (1) traditional Entity-Relationship (E-R) modeling and (2) a hybrid method that combines aspects of data warehousing and E-R modeling. The method of choice depends on (1) how well the information and all its inherent relationships are understood, (2) what types of questions will be asked, (3) how many different types of data will be included, and (4) how much data exists.
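
    A minimal sketch of the first technique, traditional E-R modeling, is shown below: two entities (gene and protein) and one relationship expressed as relational tables in SQLite. The schema and data are assumed examples for illustration, not the unit's own design.

```python
# Traditional E-R modeling sketch: entities become tables, the "encodes"
# relationship becomes a foreign key from protein to gene.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE gene (
    gene_id  INTEGER PRIMARY KEY,
    symbol   TEXT NOT NULL UNIQUE,
    organism TEXT NOT NULL
);
CREATE TABLE protein (
    protein_id INTEGER PRIMARY KEY,
    gene_id    INTEGER NOT NULL REFERENCES gene(gene_id),
    name       TEXT NOT NULL
);
INSERT INTO gene VALUES (1, 'TP53', 'H. sapiens');
INSERT INTO protein VALUES (1, 1, 'Cellular tumor antigen p53');
""")

# A typical question phrased against the E-R schema: which proteins does each gene encode?
for symbol, protein_name in conn.execute("""
    SELECT g.symbol, p.name
    FROM gene g JOIN protein p ON p.gene_id = g.gene_id
"""):
    print(symbol, "->", protein_name)
```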

  15. Function and organization of CPC database system

    International Nuclear Information System (INIS)

    Yoshida, Tohru; Tomiyama, Mineyoshi.

    1986-02-01

    It is very time-consuming and expensive to develop computer programs. Therefore, it is desirable to make effective use of existing programs. For this purpose, researchers and technical staff need to be able to obtain the relevant information easily. CPC (Computer Physics Communications) is a journal published to facilitate the exchange of physics programs and of relevant information about the use of computers in the physics community. There are about 1300 CPC programs in the JAERI computing center, and the number of programs is increasing. A new database system (CPC database) has been developed to manage the CPC programs and their information. Users obtain information about all the programs stored in the CPC database. Users can also find and copy the necessary program by inputting the program name, the catalogue number or the volume number. In this system, each operation is done by menu selection. Every CPC program is compressed and stored in the database; the required storage size is one third of that of the non-compressed format. Programs unused for a long time are moved to magnetic tape. The present report describes the CPC database system and the procedures for its use. (author)

  16. Software Engineering Laboratory (SEL) database organization and user's guide

    Science.gov (United States)

    So, Maria; Heller, Gerard; Steinberg, Sandra; Spiegel, Douglas

    1989-01-01

    The organization of the Software Engineering Laboratory (SEL) database is presented. Included are definitions and detailed descriptions of the database tables and views, the SEL data, and system support data. The mapping from the SEL and system support data to the base tables is described. In addition, techniques for accessing the database, through the Database Access Manager for the SEL (DAMSEL) system and via the ORACLE structured query language (SQL), are discussed.

  17. Generic Database Cost Models for Hierarchical Memory Systems

    OpenAIRE

    Manegold, Stefan; Boncz, Peter; Kersten, Martin

    2002-01-01

    Accurate prediction of operator execution time is a prerequisite for database query optimization. Although extensively studied for conventional disk-based DBMSs, cost modeling in main-memory DBMSs is still an open issue. Recent database research has demonstrated that memory access is more and more becoming a significant---if not the major---cost component of database operations. If used properly, fast but small cache memories---usually organized in cascading hierarchy between CPU ...
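
    The sketch below illustrates the general idea of a hierarchical-memory cost model with a toy calculation: the cost of a sequential scan is estimated as the sum, over cache levels, of cache-line misses times miss latency. The formula and parameter values are simplified assumptions for illustration, not the authors' calibrated models.

```python
# Toy memory-hierarchy cost model: cost = sum over levels of (misses * miss latency).
def scan_cost(n_tuples, tuple_bytes, levels):
    """levels: list of (line_size_bytes, miss_latency_cycles) per cache level."""
    total_bytes = n_tuples * tuple_bytes
    cost = 0.0
    for line_size, miss_latency in levels:
        misses = total_bytes / line_size   # each cache line is fetched once in a scan
        cost += misses * miss_latency
    return cost

# Hypothetical hierarchy: 64-byte L1 lines (10-cycle miss), 64-byte L2 lines (100-cycle miss).
hierarchy = [(64, 10), (64, 100)]
print("estimated cycles:",
      int(scan_cost(n_tuples=1_000_000, tuple_bytes=16, levels=hierarchy)))
```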

  18. Data-based mechanistic modeling of dissolved organic carbon load through storms using continuous 15-minute resolution observations within UK upland watersheds

    Science.gov (United States)

    Jones, T.; Chappell, N. A.

    2013-12-01

    Few watershed modeling studies have addressed DOC dynamics through storm hydrographs (notable exceptions include Boyer et al., 1997 Hydrol Process; Jutras et al., 2011 Ecol Model; Xu et al., 2012 Water Resour Res). In part this has been a consequence of an incomplete understanding of the biogeochemical processes leading to DOC export to streams (Neff & Asner, 2001, Ecosystems) & an insufficient frequency of DOC monitoring to capture sometimes complex time-varying relationships between DOC & storm hydrographs (Kirchner et al., 2004, Hydrol Process). We present the results of a new & ongoing UK study that integrates two components - 1/ New observations of DOC concentrations (& derived load) continuously monitored at 15 minute intervals through multiple seasons for replicated watersheds; & 2/ A dynamic modeling technique that is able to quantify storage-decay effects, plus hysteretic, nonlinear, lagged & non-stationary relationships between DOC & controlling variables (including rainfall, streamflow, temperature & specific biogeochemical variables e.g., pH, nitrate). DOC concentration is being monitored continuously using the latest generation of UV spectrophotometers (i.e. S::CAN spectro::lysers) with in situ calibrations to laboratory analyzed DOC. The controlling variables are recorded simultaneously at the same stream stations. The watersheds selected for study are among the most intensively studied basins in the UK uplands, namely the Plynlimon & Llyn Brianne experimental basins. All contain areas of organic soils, with three having improved grasslands & three conifer afforested. The dynamic response characteristics (DRCs) that describe detailed DOC behaviour through sequences of storms are simulated using the latest identification routines for continuous time transfer function (CT-TF) models within the Matlab-based CAPTAIN toolbox (some incorporating nonlinear components). To our knowledge this is the first application of CT-TFs to modelling DOC processes
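
    For readers unfamiliar with transfer-function models, the sketch below simulates a toy discrete-time, first-order, lagged response of DOC load to rainfall pulses. The study itself identifies continuous-time transfer functions with the CAPTAIN toolbox; the equation form, parameter values and inputs here are invented purely to illustrate the model class, not to reproduce the paper's results.

```python
# Toy discrete-time transfer function: y_k = a*y_{k-1} + b*u_{k-delay}
# (a lagged, exponentially decaying response of DOC load to a rainfall input).
def simulate_tf(u, a=0.9, b=0.05, delay=2):
    y = [0.0] * len(u)
    for k in range(1, len(u)):
        u_lagged = u[k - delay] if k >= delay else 0.0
        y[k] = a * y[k - 1] + b * u_lagged
    return y

# Hypothetical 15-minute rainfall pulses driving a DOC-load response.
rainfall = [0.0] * 5 + [4.0, 8.0, 3.0] + [0.0] * 20
doc_load = simulate_tf(rainfall)
print([round(v, 3) for v in doc_load[:12]])
```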

  19. Database modeling and design logical design

    CERN Document Server

    Teorey, Toby J; Nadeau, Tom; Jagadish, HV

    2011-01-01

    Database systems and database design technology have undergone significant evolution in recent years. The relational data model and relational database systems dominate business applications; in turn, they are extended by other technologies like data warehousing, OLAP, and data mining. How do you model and design your database application in consideration of new technology or new business needs? In the extensively revised fifth edition, you'll get clear explanations, lots of terrific examples and an illustrative case, and the really practical advice you have come to count on--with design rules

  20. Database modeling and design logical design

    CERN Document Server

    Teorey, Toby J; Nadeau, Tom; Jagadish, HV

    2005-01-01

    Database systems and database design technology have undergone significant evolution in recent years. The relational data model and relational database systems dominate business applications; in turn, they are extended by other technologies like data warehousing, OLAP, and data mining. How do you model and design your database application in consideration of new technology or new business needs? In the extensively revised fourth edition, you'll get clear explanations, lots of terrific examples and an illustrative case, and the really practical advice you have come to count on--with design rul

  1. Cyclebase 3.0: a multi-organism database on cell-cycle regulation and phenotypes

    DEFF Research Database (Denmark)

    Santos Delgado, Alberto; Wernersson, Rasmus; Jensen, Lars Juhl

    2015-01-01

    In Cyclebase version 3.0, we have updated the content of the database to reflect changes to genome annotation, added new mRNA and protein expression data, and integrated cell-cycle phenotype information from high-content screens and model-organism databases. The new version of Cyclebase also features a new web interface...

  2. Solid Waste Projection Model: Database User's Guide

    International Nuclear Information System (INIS)

    Blackburn, C.L.

    1993-10-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC) specifically to address Hanford solid waste management issues. This document is one of a set of documents supporting the SWPM system and providing instructions in the use and maintenance of SWPM components. This manual contains instructions for using Version 1.4 of the SWPM database: system requirements and preparation, entering and maintaining data, and performing routine database functions. This document supports only those operations which are specific to SWPM database menus and functions and does not provide instruction in the use of Paradox, the database management system in which the SWPM database is established.

  3. Databases

    Directory of Open Access Journals (Sweden)

    Nick Ryan

    2004-01-01

    Full Text Available Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.

  4. The database for reaching experiments and models.

    Directory of Open Access Journals (Sweden)

    Ben Walker

    Full Text Available Reaching is one of the central experimental paradigms in the field of motor control, and many computational models of reaching have been published. While most of these models try to explain subject data (such as movement kinematics, reaching performance, forces, etc.) from only a single experiment, distinct experiments often share experimental conditions and record similar kinematics. This suggests that reaching models could be applied to (and falsified by) multiple experiments. However, using multiple datasets is difficult because experimental data formats vary widely. Standardizing data formats promises to enable scientists to test model predictions against many experiments and to compare experimental results across labs. Here we report on the development of a new resource available to scientists: a database of reaching called the Database for Reaching Experiments And Models (DREAM). DREAM collects both experimental datasets and models and facilitates their comparison by standardizing formats. The DREAM project promises to be useful for experimentalists who want to understand how their data relates to models, for modelers who want to test their theories, and for educators who want to help students better understand reaching experiments, models, and data analysis.

  5. Towards a common thermodynamic database for speciation models

    International Nuclear Information System (INIS)

    Lee, J. van der; Lomenech, C.

    2004-01-01

    Bio-geochemical speciation models and reactive transport models are reaching an operational stage, allowing simulation of complex dynamic experiments and description of field observations. For decades, the main focus has been on model performance but at present, the availability and reliability of thermodynamic data is the limiting factor of the models. Thermodynamic models applied to real and complex geochemical systems require much more extended thermodynamic databases with many minerals, colloidal phases, humic and fulvic acids, cementitious phases and (dissolved) organic complexing agents. Here we propose a methodological approach to achieve, ultimately, a common, operational database including the reactions and constants of these phases. Provided they are coherent with the general thermodynamic laws, sorption reactions are included as well. We therefore focus on sorption reactions and parameter values associated with specific sorption models. The case of sorption on goethite has been used to illustrate the way the methodology handles the problem of inconsistency and data quality. (orig.)

  6. Assessment of the SFC database for analysis and modeling

    Science.gov (United States)

    Centeno, Martha A.

    1994-01-01

    SFC is one of the four clusters that make up the Integrated Work Control System (IWCS), which will integrate the shuttle processing databases at Kennedy Space Center (KSC). The IWCS framework will enable communication among the four clusters and add new data collection protocols. The Shop Floor Control (SFC) module has been operational for two and a half years; however, at this stage, automatic links to the other 3 modules have not been implemented yet, except for a partial link to IOS (CASPR). SFC revolves around a DB/2 database with PFORMS acting as the database management system (DBMS). PFORMS is an off-the-shelf DB/2 application that provides a set of data entry screens and query forms. The main dynamic entity in the SFC and IOS database is a task; thus, the physical storage location and update privileges are driven by the status of the WAD. As we explored the SFC values, we realized that there was much to do before actually engaging in continuous analysis of the SFC data. Half way into this effort, it was realized that full scale analysis would have to be a future third phase of this effort. So, we concentrated on getting to know the contents of the database, and in establishing an initial set of tools to start the continuous analysis process. Specifically, we set out to: (1) provide specific procedures for statistical models, so as to enhance the TP-OAO office analysis and modeling capabilities; (2) design a data exchange interface; (3) prototype the interface to provide inputs to SCRAM; and (4) design a modeling database. These objectives were set with the expectation that, if met, they would provide former TP-OAO engineers with tools that would help them demonstrate the importance of process-based analyses. The latter, in return, will help them obtain the cooperation of various organizations in charting out their individual processes.

  7. ZZ HATCHES-18, Database for radiochemical modelling

    International Nuclear Information System (INIS)

    Heath, T.G.

    2008-01-01

    1 - Description of program or function: HATCHES is a referenced, quality assured, thermodynamic database, developed by Serco Assurance for Nirex. Although originally compiled for use in radiochemical modelling work, HATCHES also includes data suitable for many other applications e.g. toxic waste disposal, effluent treatment and chemical processing. It is used in conjunction with chemical and geochemical computer programs, to simulate a wide variety of reactions in aqueous environments. The database includes thermodynamic data (the log formation constant and the enthalpy of formation for the chemical species) for the actinides, fission products and decay products. The datasets for Ni, Tc, U, Np, Pu and Am are based on the NEA reviews of the chemical thermodynamics of these elements. The data sets for these elements with oxalate, citrate and EDTA are based on the NEA-selected values. For iso-saccharinic acid, additional data (non-selected values) have been included from the NEA review as well as data derived from other sources. HATCHES also includes data for many toxic metals and for elements commonly found in groundwaters or geological materials. HARPHRQ operates by reference to the PHREEQE master species list. Thus the thermodynamic information supplied is: a) the log equilibrium constant for the formation reaction of the requested species from the PHREEQE master species for the corresponding elements; b) the enthalpy of reaction for the formation reaction of the requested species from the PHREEQE master species for the corresponding elements. This version of HATCHES has been updated since the previous release to provide consistency with the selected data from two recent publications in the OECD Nuclear Energy Agency series on chemical thermodynamics: Chemical Thermodynamics Series Volume 7 (2005): Chemical Thermodynamics of Selenium by Aeke Olin (Chairman), Bengt Nolaeng, Lars-Olof Oehman, Evgeniy Osadchii and Erik Rosen and Chemical Thermodynamics Series Volume 8

  8. Carotenoids Database: structures, chemical fingerprints and distribution among organisms.

    Science.gov (United States)

    Yabuzaki, Junko

    2017-01-01

    To promote understanding of how organisms are related via carotenoids, either evolutionarily or symbiotically, or in food chains through natural histories, we built the Carotenoids Database. This provides chemical information on 1117 natural carotenoids with 683 source organisms. For extracting organisms closely related through the biosynthesis of carotenoids, we offer a new similarity search system 'Search similar carotenoids' using our original chemical fingerprint 'Carotenoid DB Chemical Fingerprints'. These Carotenoid DB Chemical Fingerprints describe the chemical substructure and the modification details based upon International Union of Pure and Applied Chemistry (IUPAC) semi-systematic names of the carotenoids. The fingerprints also allow (i) easier prediction of six biological functions of carotenoids: provitamin A, membrane stabilizers, odorous substances, allelochemicals, antiproliferative activity and reverse MDR activity against cancer cells, (ii) easier classification of carotenoid structures, (iii) partial and exact structure searching and (iv) easier extraction of structural isomers and stereoisomers. We believe this to be the first attempt to establish fingerprints using the IUPAC semi-systematic names. For extracting close profiled organisms, we provide a new tool 'Search similar profiled organisms'. Our current statistics show some insights into natural history: carotenoids seem to have been spread largely by bacteria, as they produce C30, C40, C45 and C50 carotenoids, with the widest range of end groups, and they share a small portion of C40 carotenoids with eukaryotes. Archaea share an even smaller portion with eukaryotes. Eukaryotes then have evolved a considerable variety of C40 carotenoids. Considering carotenoids, eukaryotes seem more closely related to bacteria than to archaea aside from 16S rRNA lineage analysis. Database URL: http://carotenoiddb.jp. © The Author(s) 2017. Published by Oxford University Press.
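
    The sketch below illustrates the generic mechanics of a fingerprint-based "search similar carotenoids" query using the Tanimoto coefficient. The bit positions and compound fingerprints are invented; the actual Carotenoid DB Chemical Fingerprints are derived from IUPAC semi-systematic names and may be compared differently.

```python
# Generic fingerprint similarity search; fingerprints are sets of "on" bit positions.
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two fingerprints given as sets of bits."""
    shared = len(fp_a & fp_b)
    return shared / (len(fp_a) + len(fp_b) - shared) if (fp_a or fp_b) else 0.0

# Hypothetical fingerprints keyed by carotenoid name (bit positions are made up).
fingerprints = {
    "beta-carotene": {1, 2, 5, 8, 13},
    "zeaxanthin":    {1, 2, 5, 8, 21},
    "lycopene":      {1, 3, 5, 34},
}

query = fingerprints["beta-carotene"]
ranked = sorted(((tanimoto(query, fp), name) for name, fp in fingerprints.items()),
                reverse=True)
for score, name in ranked:
    print(f"{name:15s} {score:.2f}")
```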

  9. Integrated Space Asset Management Database and Modeling

    Science.gov (United States)

    MacLeod, Todd; Gagliano, Larry; Percy, Thomas; Mason, Shane

    2015-01-01

    Effective Space Asset Management is one key to addressing the ever-growing issue of space congestion. It is imperative that agencies around the world have access to data regarding the numerous active assets and pieces of space junk currently tracked in orbit around the Earth. At the center of this issue is the effective management of data of many types related to orbiting objects. As the population of tracked objects grows, so too should the data management structure used to catalog technical specifications, orbital information, and metadata related to those populations. Marshall Space Flight Center's Space Asset Management Database (SAM-D) was implemented in order to effectively catalog a broad set of data related to known objects in space by ingesting information from a variety of databases and processing that data into useful technical information. Using the universal NORAD number as a unique identifier, SAM-D processes two-line element data into orbital characteristics and cross-references this technical data with metadata related to functional status, country of ownership, and application category. SAM-D began as an Excel spreadsheet and was later upgraded to an Access database. While SAM-D performs its task very well, it is limited by its current platform and is not available outside of the local user base. Further, while modeling and simulation can be powerful tools to exploit the information contained in SAM-D, the current system does not allow proper integration options for combining the data with both legacy and new M&S tools. This paper provides a summary of SAM-D development efforts to date and outlines a proposed data management infrastructure that extends SAM-D to support the larger data sets to be generated. A service-oriented architecture model using an information sharing platform named SIMON will allow it to easily expand to incorporate new capabilities, including advanced analytics, M&S tools, fusion techniques and user interface for
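
    The sketch below shows the kind of processing described, turning two-line element (TLE) data into basic orbital characteristics keyed by the NORAD number. It assumes the standard NORAD TLE column layout and uses a well-known public ISS TLE as sample input; it is illustrative only and is not SAM-D code.

```python
# Parse a TLE and derive simple orbital characteristics (standard column layout assumed).
import math

MU_EARTH = 398600.4418  # Earth's gravitational parameter, km^3/s^2

def parse_tle(line1, line2):
    norad_id = line1[2:7].strip()
    inclination_deg = float(line2[8:16])
    eccentricity = float("0." + line2[26:33].strip())   # decimal point is implied
    mean_motion_rev_per_day = float(line2[52:63])
    n_rad_per_s = mean_motion_rev_per_day * 2.0 * math.pi / 86400.0
    semi_major_axis_km = (MU_EARTH / n_rad_per_s ** 2) ** (1.0 / 3.0)
    return {
        "norad_id": norad_id,
        "inclination_deg": inclination_deg,
        "eccentricity": eccentricity,
        "semi_major_axis_km": round(semi_major_axis_km, 1),
    }

# Well-known public sample TLE for the ISS.
l1 = "1 25544U 98067A   08264.51782528 -.00002182  00000-0 -11606-4 0  2927"
l2 = "2 25544  51.6416 247.4627 0006703 130.5360 325.0288 15.72125391563537"
print(parse_tle(l1, l2))
```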

  10. On the modelling of microsegregation in steels involving thermodynamic databases

    International Nuclear Information System (INIS)

    You, D; Bernhard, C; Michelic, S; Wieser, G; Presoly, P

    2016-01-01

    A microsegregation model based on Ohnaka's model and involving a thermodynamic database is proposed. In the model, the thermodynamic database is used for the equilibrium calculations. Multicomponent alloy effects on partition coefficients and equilibrium temperatures are accounted for. Microsegregation and partition coefficients calculated using different databases exhibit significant differences. The segregated concentrations predicted using the optimized database are in good agreement with the measured inter-dendritic concentrations. (paper)

  11. Database modeling to integrate macrobenthos data in Spatial Data Infrastructure

    Directory of Open Access Journals (Sweden)

    José Alberto Quintanilha

    2012-08-01

    Full Text Available Coastal zones are complex areas that include marine and terrestrial environments. Besides their huge environmental wealth, they also attract humans because they provide food, recreation, business, and transportation, among other things. Some of the difficulties in managing these areas are related to their complexity, the diversity of interests involved and the absence of standards for collecting and sharing data with the scientific community, public agencies and others. The idea of organizing, standardizing and sharing this information through a Web Atlas is essential to support planning and decision making. The construction of a spatial database integrating the environmental domain, to be used in a Spatial Data Infrastructure (SDI), is illustrated by a bioindicator that indicates the quality of the sediments. The models show the phases required to build the Macrobenthos spatial database, using the Santos Metropolitan Region as a reference. It is concluded that, when working with environmental data, the structuring of knowledge in a conceptual model is essential for its subsequent integration into the SDI. During the modeling process it was noticed that methodological issues related to the collection process may obstruct or prejudice the integration of data from different studies of the same area. The development of a database model, as presented in this study, can be used as a reference for further research with similar goals.

  12. Databases

    Digital Repository Service at National Institute of Oceanography (India)

    Kunte, P.D.

    Information on bibliographic as well as numeric/textual databases relevant to coastal geomorphology has been included in a tabular form. Databases cover a broad spectrum of related subjects like coastal environment and population aspects, coastline...

  13. First Database Course--Keeping It All Organized

    Science.gov (United States)

    Baugh, Jeanne M.

    2015-01-01

    All Computer Information Systems programs require a database course for their majors. This paper describes an approach to such a course in which real world examples, both design projects and actual database application projects are incorporated throughout the semester. Students are expected to apply the traditional database concepts to actual…

  14. Parameters for Organism Grouping - Gclust Server | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Data name: Parameters for Organism Grouping (Gclust Server, LSDB Archive).

  15. Techniques to Access Databases and Integrate Data for Hydrologic Modeling

    International Nuclear Information System (INIS)

    Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.; Coleman, Andre M.; Ward, Duane L.; Droppo, James G.; Meyer, Philip D.; Dorow, Kevin E.; Taira, Randal Y.

    2009-01-01

    This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and

  16. Techniques to Access Databases and Integrate Data for Hydrologic Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.; Coleman, Andre M.; Ward, Duane L.; Droppo, James G.; Meyer, Philip D.; Dorow, Kevin E.; Taira, Randal Y.

    2009-06-17

    This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and

  17. Sequence modelling and an extensible data model for genomic database

    Energy Technology Data Exchange (ETDEWEB)

    Li, Peter Wei-Der [California Univ., San Francisco, CA (United States); Univ. of California, Berkeley, CA (United States)

    1992-01-01

    The Human Genome Project (HGP) plans to sequence the human genome by the beginning of the next century. It will generate DNA sequences of more than 10 billion bases and complex marker sequences (maps) of more than 100 million markers. All of this information will be stored in database management systems (DBMSs). However, existing data models do not have the abstraction mechanisms for modelling sequences, and existing DBMSs do not have operations for complex sequences. This work addresses the problem of sequence modelling in the context of the HGP and the more general problem of an extensible object data model that can incorporate the sequence model as well as existing and future data constructs and operators. First, we proposed a general sequence model that is application and implementation independent. This model is used to capture the sequence information found in the HGP at the conceptual level. In addition, abstract and biological sequence operators are defined for manipulating the modelled sequences. Second, we combined many features of semantic and object oriented data models into an extensible framework, which we called the "Extensible Object Model", to address the need for a modelling framework for incorporating the sequence data model with other types of data constructs and operators. This framework is based on the conceptual separation between constructors and constraints. We then used this modelling framework to integrate the constructs for the conceptual sequence model. The Extensible Object Model is also defined with a graphical representation, which is useful as a tool for database designers. Finally, we defined a query language to support this model and implemented the query processor to demonstrate the feasibility of the extensible framework and the usefulness of the conceptual sequence model.

  18. Sequence modelling and an extensible data model for genomic database

    Energy Technology Data Exchange (ETDEWEB)

    Li, Peter Wei-Der (California Univ., San Francisco, CA (United States) Lawrence Berkeley Lab., CA (United States))

    1992-01-01

    The Human Genome Project (HGP) plans to sequence the human genome by the beginning of the next century. It will generate DNA sequences of more than 10 billion bases and complex marker sequences (maps) of more than 100 million markers. All of this information will be stored in database management systems (DBMSs). However, existing data models do not have the abstraction mechanisms for modelling sequences, and existing DBMSs do not have operations for complex sequences. This work addresses the problem of sequence modelling in the context of the HGP and the more general problem of an extensible object data model that can incorporate the sequence model as well as existing and future data constructs and operators. First, we proposed a general sequence model that is application and implementation independent. This model is used to capture the sequence information found in the HGP at the conceptual level. In addition, abstract and biological sequence operators are defined for manipulating the modelled sequences. Second, we combined many features of semantic and object oriented data models into an extensible framework, which we called the "Extensible Object Model", to address the need for a modelling framework for incorporating the sequence data model with other types of data constructs and operators. This framework is based on the conceptual separation between constructors and constraints. We then used this modelling framework to integrate the constructs for the conceptual sequence model. The Extensible Object Model is also defined with a graphical representation, which is useful as a tool for database designers. Finally, we defined a query language to support this model and implemented the query processor to demonstrate the feasibility of the extensible framework and the usefulness of the conceptual sequence model.

  19. Functional Decomposition of Modeling and Simulation Terrain Database Generation Process

    National Research Council Canada - National Science Library

    Yakich, Valerie R; Lashlee, J. D

    2008-01-01

    .... This report documents the conceptual procedure as implemented by Lockheed Martin Simulation, Training, and Support and decomposes terrain database construction using the Integration Definition for Function Modeling (IDEF...

  20. Fish Karyome version 2.1: a chromosome database of fishes and other aquatic organisms.

    Science.gov (United States)

    Nagpure, Naresh Sahebrao; Pathak, Ajey Kumar; Pati, Rameshwar; Rashid, Iliyas; Sharma, Jyoti; Singh, Shri Prakash; Singh, Mahender; Sarkar, Uttam Kumar; Kushwaha, Basdeo; Kumar, Ravindra; Murali, S

    2016-01-01

    Voluminous information is available on karyological studies of fishes; however, limited efforts have been made to compile and curate the available karyological data in a digital form. The 'Fish Karyome' database was a preliminary attempt to compile and digitize the available karyological information on finfishes belonging to the Indian subcontinent. But the database had limitations, since it covered data only on Indian finfishes with limited search options. Based on feedback from users and its utility in fish cytogenetic studies, the Fish Karyome database was upgraded using Linux, Apache, MySQL and PHP (Hypertext Preprocessor) (LAMP) technologies. In the present version, the scope of the system was increased by compiling and curating the available chromosomal information from across the globe on fishes and other aquatic organisms, such as echinoderms, molluscs and arthropods, especially those of aquaculture importance. Thus, Fish Karyome version 2.1 presently covers 866 chromosomal records for 726 species, supported by 253 published articles, and the information is being updated regularly. The database provides information on chromosome number and morphology, sex chromosomes, chromosome banding, molecular cytogenetic markers, etc., supported by fish and karyotype images through interactive tools. It also enables users to browse and view chromosomal information based on habitat, family, conservation status and chromosome number. The system also displays chromosome numbers in model organisms, protocols for chromosome preparation and allied techniques, and a glossary of cytogenetic terms. A data submission facility has also been provided through a data submission panel. The database can serve as a unique and useful resource for cytogenetic characterization, sex determination, chromosomal mapping, cytotaxonomy, karyo-evolution and systematics of fishes. Database URL: http://mail.nbfgr.res.in/Fish_Karyome. © The Author(s) 2016. Published by Oxford University Press.

  1. Integrated Space Asset Management Database and Modeling

    Science.gov (United States)

    Gagliano, L.; MacLeod, T.; Mason, S.; Percy, T.; Prescott, J.

    The Space Asset Management Database (SAM-D) was implemented in order to effectively track known objects in space by ingesting information from a variety of databases and performing calculations to determine the expected position of each object at a specified time. While SAM-D performs this task very well, it is limited by technology and is not available outside of the local user base. Modeling and simulation can be powerful tools to exploit the information contained in SAM-D. However, the current system does not allow proper integration options for combining the data with both legacy and new M&S tools. A more capable data management infrastructure would extend SAM-D to support the larger data sets to be generated by the COI. A service-oriented architecture model will allow it to easily expand to incorporate new capabilities, including advanced analytics, M&S tools, fusion techniques and user interfaces for visualizations. Based on a web-centric approach, the entire COI will be able to access the data and related analytics. In addition, tight control of information sharing policy will increase confidence in the system, which would encourage industry partners to provide commercial data. SIMON is a Government off the Shelf information sharing platform in use throughout DoD and DHS information sharing and situation awareness communities. SIMON provides fine-grained control to data owners, allowing them to determine exactly how and when their data are shared. SIMON supports a micro-service approach to system development, meaning M&S and analytic services can be easily built or adapted. It is uniquely positioned to fill this need as an information-sharing platform with a proven track record of successful situational awareness system deployments. Combined with the integration of new and legacy M&S tools, a SIMON-based architecture will provide a robust SA environment for the NASA SA COI that can be extended and expanded indefinitely.

  2. A Relational Database Model and Tools for Environmental Sound Recognition

    Directory of Open Access Journals (Sweden)

    Yuksel Arslan

    2017-12-01

    Full Text Available Environmental sound recognition (ESR) has become a hot topic in recent years. ESR is mainly based on machine learning (ML), and ML algorithms first require a training database. This database must comprise the sounds to be recognized and other related sounds. An ESR system needs the database during training, testing and in the production stage. In this paper, we present the design and pilot establishment of a database which will assist all researchers who want to establish an ESR system. This database employs the relational database model, which has not been used for this task before. We explain the design and implementation details of the database and the data collection and loading process. We also describe the tools and the graphical user interfaces developed for a desktop application and for the Web.
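
    A minimal sketch of what a relational model for an ESR training database could look like is given below: recordings, sound classes and a many-to-many labeling table in SQLite. The schema is an assumption for illustration, not the schema actually used in the paper.

```python
# Assumed relational schema for sound-recognition training data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sound_class (class_id INTEGER PRIMARY KEY, name TEXT UNIQUE NOT NULL);
CREATE TABLE recording (
    recording_id INTEGER PRIMARY KEY,
    file_path    TEXT NOT NULL,
    sample_rate  INTEGER NOT NULL,
    duration_s   REAL NOT NULL
);
CREATE TABLE label (
    recording_id INTEGER REFERENCES recording(recording_id),
    class_id     INTEGER REFERENCES sound_class(class_id),
    PRIMARY KEY (recording_id, class_id)
);
INSERT INTO sound_class VALUES (1, 'dog_bark'), (2, 'siren');
INSERT INTO recording VALUES (1, 'clips/0001.wav', 44100, 4.0);
INSERT INTO label VALUES (1, 1);
""")

# Fetch the training examples for one class.
print(conn.execute("""
    SELECT r.file_path FROM recording r
    JOIN label l ON l.recording_id = r.recording_id
    JOIN sound_class c ON c.class_id = l.class_id
    WHERE c.name = 'dog_bark'
""").fetchall())
```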

  3. Solid Waste Projection Model: Database (Version 1.3)

    International Nuclear Information System (INIS)

    Blackburn, C.L.

    1991-11-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC). The SWPM system provides a modeling and analysis environment that supports decisions in the process of evaluating various solid waste management alternatives. This document, one of a series describing the SWPM system, contains detailed information regarding the software and data structures utilized in developing the SWPM Version 1.3 Database. This document is intended for use by experienced database specialists and supports database maintenance, utility development, and database enhancement

  4. Software Engineering Laboratory (SEL) database organization and user's guide, revision 2

    Science.gov (United States)

    Morusiewicz, Linda; Bristow, John

    1992-01-01

    The organization of the Software Engineering Laboratory (SEL) database is presented. Included are definitions and detailed descriptions of the database tables and views, the SEL data, and system support data. The mapping from the SEL and system support data to the base table is described. In addition, techniques for accessing the database through the Database Access Manager for the SEL (DAMSEL) system and via the ORACLE structured query language (SQL) are discussed.

  5. Solid Waste Projection Model: Database (Version 1.4)

    International Nuclear Information System (INIS)

    Blackburn, C.; Cillan, T.

    1993-09-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC). The SWPM system provides a modeling and analysis environment that supports decisions in the process of evaluating various solid waste management alternatives. This document, one of a series describing the SWPM system, contains detailed information regarding the software and data structures utilized in developing the SWPM Version 1.4 Database. This document is intended for use by experienced database specialists and supports database maintenance, utility development, and database enhancement. Those interested in using the SWPM database should refer to the SWPM Database User's Guide. This document is available from the PNL Task M Project Manager (D. L. Stiles, 509-372-4358), the PNL Task L Project Manager (L. L. Armacost, 509-372-4304), the WHC Restoration Projects Section Manager (509-372-1443), or the WHC Waste Characterization Manager (509-372-1193)

  6. Framework Model for Database Replication within the Availability Zones

    OpenAIRE

    Al-Mughrabi, Ala'a Atallah; Owaied, Hussein

    2013-01-01

    This paper presents a proposed model for database replication in private cloud availability regions, which is an enhancement of the SQL Server AlwaysOn Layers of Protection Model presented by Microsoft in 2012. The enhancement concentrates on database replication for private cloud availability regions through the use of primary and secondary servers. The processes of the proposed model when the client sends a write/read request to the server, in synchronous and semi-synchronous replicatio...

  7. Databases for highway inventories. Proposal for a new model

    Energy Technology Data Exchange (ETDEWEB)

    Perez Casan, J.A.

    2016-07-01

    Database models for highway inventories are based on classical schemes for relational databases: many related tables, in which the database designer establishes, a priori, every detail that they consider relevant for inventory management. This kind of database presents several problems. First, adapting the model and its applications when new database features appear is difficult. In addition, the different needs of different sets of road inventory users are difficult to fulfil with these schemes. For example, maintenance management services, road authorities and emergency services have different needs. In addition, this kind of database cannot be adapted to new scenarios, such as other countries and regions (that may classify roads or name certain elements differently). The problem is more complex if the language used in these scenarios is not the same as that used in the database design. In addition, technicians need a long time to learn to use the database efficiently. This paper proposes a flexible, multilanguage and multipurpose database model, which gives an effective and simple solution to the aforementioned problems. (Author)
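
    The sketch below illustrates one way to obtain the kind of flexibility and multilanguage support argued for: element types, their attributes and their localized names are defined as data rather than as fixed tables. The structures and names are invented for illustration and are not the model actually proposed in the paper.

```python
# Metadata-driven inventory sketch: element types are data, not hard-coded tables.
element_types = {
    "guardrail": {
        "names": {"en": "guardrail", "es": "barrera de seguridad"},
        "attributes": ["length_m", "material", "condition"],
    },
    "sign": {
        "names": {"en": "traffic sign", "es": "señal de tráfico"},
        "attributes": ["code", "condition"],
    },
}

inventory = [
    {"type": "guardrail", "road": "A-7", "km": 12.4,
     "values": {"length_m": 60, "material": "steel", "condition": "good"}},
]

def describe(item, lang="en"):
    etype = element_types[item["type"]]
    label = etype["names"].get(lang, item["type"])
    attrs = ", ".join(f"{a}={item['values'].get(a, '?')}" for a in etype["attributes"])
    return f"{label} at {item['road']} km {item['km']}: {attrs}"

print(describe(inventory[0], lang="es"))
```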

  8. Expanding on Successful Concepts, Models, and Organization

    Science.gov (United States)

    If the goal of the AEP framework were to replace existing exposure models or databases for organizing exposure data with a concept, we would share Dr. von Göetz's concerns. Instead, the outcome we promote is broader use of an organizational framework for exposure science. The f...

  9. Cyclebase 3.0: a multi-organism database on cell-cycle regulation and phenotypes.

    Science.gov (United States)

    Santos, Alberto; Wernersson, Rasmus; Jensen, Lars Juhl

    2015-01-01

    The eukaryotic cell division cycle is a highly regulated process that consists of a complex series of events and involves thousands of proteins. Researchers have studied the regulation of the cell cycle in several organisms, employing a wide range of high-throughput technologies, such as microarray-based mRNA expression profiling and quantitative proteomics. Due to its complexity, the cell cycle can also fail or otherwise change in many different ways if important genes are knocked out, which has been studied in several microscopy-based knockdown screens. The data from these many large-scale efforts are not easily accessed, analyzed and combined due to their inherent heterogeneity. To address this, we have created Cyclebase--available at http://www.cyclebase.org--an online database that allows users to easily visualize and download results from genome-wide cell-cycle-related experiments. In Cyclebase version 3.0, we have updated the content of the database to reflect changes to genome annotation, added new mRNA and protein expression data, and integrated cell-cycle phenotype information from high-content screens and model-organism databases. The new version of Cyclebase also features a new web interface, designed around an overview figure that summarizes all the cell-cycle-related data for a gene. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  10. Building spatio-temporal database model based on ontological approach using relational database environment

    International Nuclear Information System (INIS)

    Mahmood, N.; Burney, S.M.A.

    2017-01-01

    Everything in this world is bounded by space and time. Our daily activities are closely linked to other objects in our vicinity, and they are strongly related to our current location, to time (past, present and future) and to the events through which we move. Ontology development and its integration with databases are vital for a true understanding of complex systems involving both spatial and temporal dimensions. In this paper we propose a conceptual framework for building a spatio-temporal database model based on an ontological approach. We use the relational data model for modelling spatio-temporal data content and present our methodology for capturing spatio-temporal ontological aspects and transforming them into a spatio-temporal database model. We illustrate the implementation of our conceptual model through a case study of cultivated land parcels used for agriculture, exhibiting the spatio-temporal behaviour of agricultural land and related entities. The approach is generic and can be used to design spatio-temporal databases based on ontology: the proposed model captures the ontological (and, to some extent, epistemological) commitments, builds a spatio-temporal ontology and transforms it into a spatio-temporal data model. Finally, we highlight existing and future research challenges. (author)
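
    As a rough illustration of the kind of spatio-temporal content the case study describes, the sketch below stores the state of a cultivated land parcel with valid-time columns and answers a time-slice query. The schema, geometry and dates are invented for illustration and are not taken from the paper.

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("""CREATE TABLE parcel_state (
            parcel_id INTEGER, crop TEXT, geometry_wkt TEXT,
            valid_from TEXT, valid_to TEXT)""")
        con.executemany("INSERT INTO parcel_state VALUES (?, ?, ?, ?, ?)", [
            (7, "wheat", "POLYGON((0 0, 1 0, 1 1, 0 1, 0 0))", "2015-10-01", "2016-07-31"),
            (7, "maize", "POLYGON((0 0, 1 0, 1 1, 0 1, 0 0))", "2016-08-01", "2017-09-30"),
        ])

        # Time-slice query: what was grown on parcel 7 on 2016-05-15?
        print(con.execute("""SELECT crop FROM parcel_state
                             WHERE parcel_id = ? AND ? BETWEEN valid_from AND valid_to""",
                          (7, "2016-05-15")).fetchone())        # ('wheat',)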

  11. Generic Database Cost Models for Hierarchical Memory Systems

    NARCIS (Netherlands)

    S. Manegold (Stefan); P.A. Boncz (Peter); M.L. Kersten (Martin)

    2002-01-01

    Accurate prediction of operator execution time is a prerequisite for database query optimization. Although extensively studied for conventional disk-based DBMSs, cost modeling in main-memory DBMSs is still an open issue. Recent database research has demonstrated that memory access is...

  12. Insertion algorithms for network model database management systems

    Science.gov (United States)

    Mamadolimov, Abdurashid; Khikmat, Saburov

    2017-12-01

    The network model is a database model conceived as a flexible way of representing objects and their relationships. Its distinguishing feature is that the schema, viewed as a graph in which object types are nodes and relationship types are arcs, forms a partial order. When a database is large and query comparisons are expensive, the efficiency requirement for management algorithms is to minimize the number of query comparisons. We consider the update operation for network model database management systems, develop a new sequential algorithm for it, and also suggest a distributed version of the algorithm.

  13. RANCANGAN DATABASE SUBSISTEM PRODUKSI DENGAN PENDEKATAN SEMANTIC OBJECT MODEL [Production Subsystem Database Design with the Semantic Object Model Approach]

    Directory of Open Access Journals (Sweden)

    Oviliani Yenty Yuliana

    2002-01-01

    Full Text Available To compete in the global market, businesses in industry must obtain information quickly and accurately so that they can make precise decisions. Traditional cost accounting systems cannot provide sufficient information, so many manufacturers have shifted to the Activity-Based Costing (ABC) system. An ABC system is more complex and requires more data to be stored and processed, so it depends on information technology and a database more than a traditional cost accounting system does. Recent advances in software technology mean that building the application program is no longer the problem; the primary problem is how to design a database that presents information quickly and accurately, and for that reason the model must be designed first. This paper discusses database modelling with the semantic object model approach. This model is easier to use and generates a more normalized database design than the commonly used entity-relationship model approach.

  14. Verification and Validation of Tropospheric Model/Database

    National Research Council Canada - National Science Library

    Junho, choi

    1998-01-01

    A verification and validation of tropospheric models and databases has been performed based on ray tracing algorithm, statistical analysis, test on real time system operation, and other technical evaluation process...

  15. Registry of EPA Applications, Models, and Databases

    Data.gov (United States)

    U.S. Environmental Protection Agency — READ is EPA's authoritative source for information about Agency information resources, including applications/systems, datasets and models. READ is one component of...

  16. Database structure for plasma modeling programs

    International Nuclear Information System (INIS)

    Dufresne, M.; Silvester, P.P.

    1993-01-01

    Continuum plasma models often use a finite element (FE) formulation. Another approach is simulation models based on particle-in-cell (PIC) formulation. The model equations generally include four nonlinear differential equations specifying the plasma parameters. In simulation a large number of equations must be integrated iteratively to determine the plasma evolution from an initial state. The complexity of the resulting programs is a combination of the physics involved and the numerical method used. The data structure requirements of plasma programs are stated by defining suitable abstract data types. These abstractions are then reduced to data structures and a group of associated algorithms. These are implemented in an object oriented language (C++) as object classes. Base classes encapsulate data management into a group of common functions such as input-output management, instance variable updating and selection of objects by Boolean operations on their instance variables. Operations are thereby isolated from specific element types and uniformity of treatment is guaranteed. Creation of the data structures and associated functions for a particular plasma model is reduced merely to defining the finite element matrices for each equation, or the equations of motion for PIC models. Changes in numerical method or equation alterations are readily accommodated through the mechanism of inheritance, without modification of the data management software. The central data type is an n-relation implemented as a tuple of variable internal structure. Any finite element program may be described in terms of five relational tables: nodes, boundary conditions, sources, material/particle descriptions, and elements. Equivalently, plasma simulation programs may be described using four relational tables: cells, boundary conditions, sources, and particle descriptions
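
    A minimal Python sketch of this organization (the paper itself works in C++) is given below: a generic n-relation base class encapsulates storage and selection of objects by Boolean conditions, while concrete relations such as nodes and elements only declare their fields. All names are illustrative, not taken from the paper.

        class Relation:
            """Generic n-relation: stores tuples and supports Boolean selection."""
            def __init__(self, *fields):
                self.fields, self.rows = fields, []

            def insert(self, **values):                      # instance-variable updating
                self.rows.append({f: values.get(f) for f in self.fields})

            def select(self, predicate):                     # selection by a Boolean condition
                return [row for row in self.rows if predicate(row)]

        # Concrete relations of a finite element model only declare their fields.
        nodes    = Relation("id", "x", "y")
        elements = Relation("id", "node_ids", "material")

        nodes.insert(id=1, x=0.0, y=0.0)
        nodes.insert(id=2, x=1.0, y=0.0)
        elements.insert(id=1, node_ids=(1, 2), material="plasma")

        print(nodes.select(lambda r: r["x"] > 0.5))          # [{'id': 2, 'x': 1.0, 'y': 0.0}]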

  17. Human Thermal Model Evaluation Using the JSC Human Thermal Database

    Science.gov (United States)

    Bue, Grant; Makinen, Janice; Cognata, Thomas

    2012-01-01

    Human thermal modeling has considerable long-term utility for human space flight. Such models provide a tool to predict crew survivability in support of vehicle design and to evaluate crew response in untested space environments. Any such model benefits not only from relevant experimental data against which it can be correlated, but also from an experimental standard or benchmark for future development, maintained in a readily and rapidly searchable, software-accessible format. The human thermal database project is intended to do just that: to collect relevant data from the literature and from experimentation, and to store the data in a database structure for immediate and future use as a benchmark against which human thermal models can be judged, in order to identify model strengths and weaknesses, to support model development and improve correlation, and to statistically quantify a model's predictive quality. The human thermal database developed at the Johnson Space Center (JSC) is intended to evaluate a set of widely used human thermal models. This set includes the Wissler human thermal model, a model that has been widely used to predict the human thermoregulatory response to a variety of cold and hot environments. These models are statistically compared to the current database, which contains experiments on human subjects, primarily in air, drawn from a literature survey spanning 1953 to 2004 and from a suited experiment recently performed by the authors, providing a quantitative study of the relative strength and predictive quality of the models.
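
    As a toy illustration of how such a benchmark database can be used to quantify predictive quality, the sketch below scores a predicted temperature history against archived measurements with RMSE and bias. The numbers are invented; they are not JSC database values or Wissler-model output.

        import numpy as np

        measured  = np.array([36.8, 37.0, 37.3, 37.6, 37.8])   # archived core temperatures (deg C)
        predicted = np.array([36.9, 37.1, 37.2, 37.7, 38.0])   # model output at the same times

        rmse = np.sqrt(np.mean((predicted - measured) ** 2))
        bias = np.mean(predicted - measured)
        print(f"RMSE = {rmse:.2f} C, bias = {bias:+.2f} C")     # one way to score predictive quality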

  18. Imprecision and Uncertainty in the UFO Database Model.

    Science.gov (United States)

    Van Gyseghem, Nancy; De Caluwe, Rita

    1998-01-01

    Discusses how imprecision and uncertainty are dealt with in the UFO (Uncertainty and Fuzziness in an Object-oriented) database model. Such information is expressed by means of possibility distributions, and modeled by means of the proposed concept of "role objects." The role objects model uncertain, tentative information about objects,…

  19. Using LUCAS topsoil database to estimate soil organic carbon content in local spectral libraries

    Science.gov (United States)

    Castaldi, Fabio; van Wesemael, Bas; Chabrillat, Sabine; Chartin, Caroline

    2017-04-01

    The quantification of the soil organic carbon (SOC) content over large areas is mandatory to obtain accurate soil characterization and classification, which can improve site specific management at local or regional scale exploiting the strong relationship between SOC and crop growth. The estimation of the SOC is not only important for agricultural purposes: in recent years, the increasing attention towards global warming highlighted the crucial role of the soil in the global carbon cycle. In this context, soil spectroscopy is a well consolidated and widespread method to estimate soil variables exploiting the interaction between chromophores and electromagnetic radiation. The importance of spectroscopy in soil science is reflected by the increasing number of large soil spectral libraries collected in the world. These large libraries contain soil samples derived from a consistent number of pedological regions and thus from different parent material and soil types; this heterogeneity entails, in turn, a large variability in terms of mineralogical and organic composition. In the light of the huge variability of the spectral responses to SOC content and composition, a rigorous classification process is necessary to subset large spectral libraries and to avoid the calibration of global models failing to predict local variation in SOC content. In this regard, this study proposes a method to subset the European LUCAS topsoil database into soil classes using a clustering analysis based on a large number of soil properties. The LUCAS database was chosen to apply a standardized multivariate calibration approach valid for large areas without the need for extensive field and laboratory work for calibration of local models. Seven soil classes were detected by the clustering analyses and the samples belonging to each class were used to calibrate specific partial least square regression (PLSR) models to estimate SOC content of three local libraries collected in Belgium (Loam belt
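
    The two-step idea described here (cluster a large library on soil properties, then calibrate one PLSR model per cluster) can be sketched with scikit-learn as follows. The arrays are synthetic stand-ins for the LUCAS data, and the code illustrates the approach rather than reproducing the authors' implementation.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        spectra    = rng.random((500, 200))     # stand-in for library reflectance spectra
        properties = rng.random((500, 6))       # soil properties used for the clustering step
        soc        = rng.random(500)            # measured soil organic carbon content

        # Step 1: split the library into soil classes.
        classes = KMeans(n_clusters=7, n_init=10, random_state=0).fit_predict(properties)

        # Step 2: calibrate one PLSR model per class.
        models = {c: PLSRegression(n_components=10).fit(spectra[classes == c], soc[classes == c])
                  for c in np.unique(classes)}

        # A local sample is assigned to its soil class and predicted with that class's model.
        local_spectrum = rng.random((1, 200))
        print(models[0].predict(local_spectrum))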

  20. Database organization for computer-aided characterization of laser diode

    International Nuclear Information System (INIS)

    Oyedokun, Z.O.

    1988-01-01

    Computer-aided data logging involves a huge amount of data which must be properly managed for optimized storage space and easy access, retrieval and utilization. An organization method is developed that enhances the advantages of computer-based data logging in the testing of semiconductor injection lasers: it optimizes storage space, permits authorized users easy access and inhibits penetration. The method is based on a unique file-identification protocol, a tree structure and command-file-oriented access procedures.

  1. Klaim-DB: A Modeling Language for Distributed Database Applications

    DEFF Research Database (Denmark)

    Wu, Xi; Li, Ximeng; Lluch Lafuente, Alberto

    2015-01-01

    We present the modelling language, Klaim-DB, for distributed database applications. Klaim-DB borrows the distributed nets of the coordination language Klaim but essentially re-incarnates the tuple spaces of Klaim as databases, and provides high-level language abstractions for the access and manipulation of structured data, with integrity and atomicity considerations. We present the formal semantics of Klaim-DB and illustrate the use of the language in a scenario where the sales from different branches of a chain of department stores are aggregated from their local databases. It can be seen that raising the abstraction level and encapsulating integrity checks (concerning the schema of tables, etc.) in the language primitives for database operations benefit the modelling task considerably.

  2. Design and Establishment of Quality Model of Fundamental Geographic Information Database

    Science.gov (United States)

    Ma, W.; Zhang, J.; Zhao, Y.; Zhang, P.; Dang, Y.; Zhao, T.

    2018-04-01

    In order to make the quality evaluation of Fundamental Geographic Information Databases (FGIDB) more comprehensive, objective and accurate, this paper establishes a quality model of FGIDB, formed by the standardization of database construction and quality control, the conformity of dataset quality and the functionality of the database management system. It also designs the overall principles, contents and methods of quality evaluation for FGIDB, providing a basis and reference for carrying out quality control and quality evaluation. The quality elements, evaluation items and properties are designed step by step on the basis of the quality model framework; organically connected, these quality elements and evaluation items constitute the quality model of the Fundamental Geographic Information Database. This model is the foundation for stipulating quality requirements and evaluating quality, and is of great significance for quality assurance in the design and development stage, requirement formulation in the testing and evaluation stage, and the construction of a standard system for quality evaluation technology of the Fundamental Geographic Information Database.

  3. Fedora Content Modelling for Improved Services for Research Databases

    DEFF Research Database (Denmark)

    Elbæk, Mikael Karstensen; Heller, Alfred; Pedersen, Gert Schmeltz

    A re-implementation of the research database of the Technical University of Denmark, DTU, is based on Fedora. The backbone consists of content models for primary and secondary entities and their relationships, giving flexible and powerful extraction capabilities for interoperability and reporting. By adopting such an abstract data model, the platform enables new and improved services for researchers, librarians and administrators.

  4. Analysis of a virtual memory model for maintaining database views

    Science.gov (United States)

    Kinsley, Kathryn C.; Hughes, Charles E.

    1992-01-01

    This paper presents an analytical model for predicting the performance of a new support strategy for database views. This strategy, called the virtual method, is compared with traditional methods for supporting views. The analytical model's predictions of improved performance by the virtual method are then validated by comparing these results with those achieved in an experimental implementation.

  5. 3MdB: the Mexican Million Models database

    Science.gov (United States)

    Morisset, C.; Delgado-Inglada, G.

    2014-10-01

    The 3MdB is an original effort to construct a large multipurpose database of photoionization models. It is a more modern version of a previous attempt based on Cloudy3D and IDL tools, and is accessed through MySQL requests. The models are obtained using the well-known and widely used Cloudy photoionization code (Ferland et al., 2013). The database is intended to host grids of models, with references that identify each project and facilitate the extraction of the desired data. We present here a description of the way the database is managed and some of the projects that use 3MdB. Anybody can ask for a grid to be run and stored in 3MdB, to increase the visibility of the grid and its potential side applications.

  6. Modelling antibody side chain conformations using heuristic database search.

    Science.gov (United States)

    Ritchie, D W; Kemp, G J

    1997-01-01

    We have developed a knowledge-based system which models the side chain conformations of residues in the variable domains of antibody Fv fragments. The system is written in Prolog and uses an object-oriented database of aligned antibody structures in conjunction with a side chain rotamer library. The antibody database provides 3-dimensional clusters of side chain conformations which can be copied en masse into the model structure. The object-oriented database architecture facilitates a navigational style of database access, necessary to assemble side chains clusters. Around 60% of the model is built using side chain clusters and this eliminates much of the combinatorial complexity associated with many other side chain placement algorithms. Construction and placement of side chain clusters is guided by a heuristic cost function based on a simple model of side chain packing interactions. Even with a simple model, we find that a large proportion of side chain conformations are modelled accurately. We expect our approach could be used with other homologous protein families, in addition to antibodies, both to improve the quality of model structures and to give a "smart start" to the side chain placement problem.

  7. Prefix list for each organism - Gclust Server | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Gclust Server: Prefix list for each organism. Data name: Prefix list for each organism. DOI: 10.18908/lsdba.nbdc00464-006. Description of data contents: list of prefixes for organisms used in Gclust. Each prefix is applied to the top of the sequence ID according to each organism. The first line specifies the number of organism species (95). From the second line, the prefix of each organi...

  8. Generic database cost models for hierarchical memory systems

    NARCIS (Netherlands)

    S. Manegold (Stefan); P.A. Boncz (Peter); M.L. Kersten (Martin)

    2002-01-01

    Accurate prediction of operator execution time is a prerequisite for database query optimization. Although extensively studied for conventional disk-based DBMSs, cost modeling in main-memory DBMSs is still an open issue. Recent database research has demonstrated that memory access is more...
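
    In the spirit of such generic cost models, the toy function below estimates the memory-access cost of a sequential scan by summing, per memory level, the number of cache-line misses times the miss latency. The level parameters are made up for illustration and are not calibrated values from the paper.

        def scan_cost(n_tuples, tuple_bytes, levels):
            """levels: list of (line_size_bytes, miss_latency_ns) from L1 cache down to RAM."""
            total_bytes = n_tuples * tuple_bytes
            cost = 0.0
            for line_size, latency in levels:
                misses = total_bytes / line_size     # one miss per line touched by the scan
                cost += misses * latency
            return cost                              # rough estimate in nanoseconds

        levels = [(64, 1.0), (64, 5.0), (128, 60.0)] # L1, L2, main memory (hypothetical values)
        print(scan_cost(n_tuples=1_000_000, tuple_bytes=16, levels=levels))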

  9. Comparison of thermodynamic databases used in geochemical modelling

    International Nuclear Information System (INIS)

    Chandratillake, M.R.; Newton, G.W.A.; Robinson, V.J.

    1988-05-01

    Four thermodynamic databases used by European groups for geochemical modelling have been compared. Thermodynamic data for both aqueous species and solid species have been listed. When the values are directly comparable any differences between them have been highlighted at two levels of significance. (author)

  10. Property Modelling and Databases in Product-Process Design

    DEFF Research Database (Denmark)

    Gani, Rafiqul; Sansonetti, Sascha

    ... of the PC-SAFT is used. The developed database and property prediction models have been combined into a properties software that allows different product-process design related applications. The presentation will also briefly highlight applications of the software for virtual product-process design...

  11. Space Object Radiometric Modeling for Hardbody Optical Signature Database Generation

    Science.gov (United States)

    2009-09-01

    This presentation summarizes recent activity in monitoring spacecraft health status using passive remote optical nonimaging... It is beneficial to the observer/analyst to understand the fundamental optical signature variability associated with these detection and

  12. Schema Design and Normalization Algorithm for XML Databases Model

    Directory of Open Access Journals (Sweden)

    Samir Abou El-Seoud

    2009-06-01

    Full Text Available In this paper we study the problem of schema design and normalization in the XML database model. We show that, like relational databases, XML documents may contain redundant information, and this redundancy may cause update anomalies. Furthermore, such problems are caused by certain functional dependencies among paths in the document. Based on our research work, in which we presented the functional dependencies and normal forms of XML Schema, we present a decomposition algorithm for converting any XML Schema into a normalized one that satisfies X-BCNF.

  13. From ISIS to CouchDB: Databases and Data Models for Bibliographic Records

    Directory of Open Access Journals (Sweden)

    Luciano Ramalho

    2011-04-01

    Full Text Available For decades bibliographic data has been stored in non-relational databases, and thousands of libraries in developing countries still use ISIS databases to run their OPACs. Fast forward to 2010 and the NoSQL movement has shown that non-relational databases are good enough for Google, Amazon.com and Facebook. Meanwhile, several Open Source NoSQL systems have appeared. This paper discusses the data model of one class of NoSQL products, semistructured, document-oriented databases exemplified by Apache CouchDB and MongoDB, and why they are well-suited to collective cataloging applications. Also shown are the methods, tools, and scripts used to convert, from ISIS to CouchDB, bibliographic records of LILACS, a key Latin American and Caribbean health sciences index operated by the Pan-American Health Organization.
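
    The conversion idea can be sketched with CouchDB's plain HTTP document API: an ISIS-style field/subfield record is mapped to a schema-free JSON document and PUT into a database. The field-tag mapping, database name and server URL below are placeholders (a local, unauthenticated CouchDB is assumed); they are not the LILACS conversion rules described in the paper.

        import json
        import requests

        isis_record = {"v10": "Ramalho, Luciano", "v12": "From ISIS to CouchDB", "v65": "2011"}

        doc = {                                     # schema-free, semistructured document
            "type":   "biblio",
            "author": isis_record.get("v10"),
            "title":  isis_record.get("v12"),
            "year":   isis_record.get("v65"),
        }

        base = "http://localhost:5984"              # assumes a local CouchDB without authentication
        requests.put(f"{base}/catalog")             # create the database (error body if it exists)
        resp = requests.put(f"{base}/catalog/rec-0001",
                            data=json.dumps(doc),
                            headers={"Content-Type": "application/json"})
        print(resp.status_code, resp.json())        # 201 and {'ok': True, ...} on success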

  14. Technical Work Plan for: Thermodynamic Databases for Chemical Modeling

    International Nuclear Information System (INIS)

    C.F. Jovecolon

    2006-01-01

    The objective of the work scope covered by this Technical Work Plan (TWP) is to correct and improve the Yucca Mountain Project (YMP) thermodynamic databases, to update their documentation, and to ensure reasonable consistency among them. In addition, the work scope will continue to generate database revisions, which are organized and named so as to be transparent to internal and external users and reviewers. Regarding consistency among databases, it is noted that aqueous speciation and mineral solubility data for a given system may differ according to how solubility was determined, and the method used for subsequent retrieval of thermodynamic parameter values from measured data. Of particular concern are the details of the determination of "infinite dilution" constants, which involve the use of specific methods for activity coefficient corrections. That is, equilibrium constants developed for a given system for one set of conditions may not be consistent with constants developed for other conditions, depending on the species considered in the chemical reactions and the methods used in the reported studies. Hence, there will be some differences (for example in log K values) between the Pitzer and "B-dot" database parameters for the same reactions or species.

  15. Artificial intelligence techniques for modeling database user behavior

    Science.gov (United States)

    Tanner, Steve; Graves, Sara J.

    1990-01-01

    The design and development of the adaptive modeling system is described. This system models how a user accesses a relational database management system in order to improve its performance by discovering use access patterns. In the current system, these patterns are used to improve the user interface and may be used to speed data retrieval, support query optimization and support a more flexible data representation. The system models both syntactic and semantic information about the user's access and employs both procedural and rule-based logic to manipulate the model.

  16. NGNP Risk Management Database: A Model for Managing Risk

    International Nuclear Information System (INIS)

    Collins, John

    2009-01-01

    To facilitate the implementation of the Risk Management Plan, the Next Generation Nuclear Plant (NGNP) Project has developed and employed an analytical software tool called the NGNP Risk Management System (RMS). A relational database developed in Microsoft® Access, the RMS provides conventional database utility including data maintenance, archiving, configuration control, and query ability. Additionally, the tool's design provides a number of unique capabilities specifically designed to facilitate the development and execution of activities outlined in the Risk Management Plan. Specifically, the RMS provides the capability to establish the risk baseline, document and analyze the risk reduction plan, track the current risk reduction status, organize risks by reference configuration system, subsystem, and component (SSC) and Area, and increase the level of NGNP decision making.

  17. NGNP Risk Management Database: A Model for Managing Risk

    Energy Technology Data Exchange (ETDEWEB)

    John Collins

    2009-09-01

    To facilitate the implementation of the Risk Management Plan, the Next Generation Nuclear Plant (NGNP) Project has developed and employed an analytical software tool called the NGNP Risk Management System (RMS). A relational database developed in Microsoft® Access, the RMS provides conventional database utility including data maintenance, archiving, configuration control, and query ability. Additionally, the tool’s design provides a number of unique capabilities specifically designed to facilitate the development and execution of activities outlined in the Risk Management Plan. Specifically, the RMS provides the capability to establish the risk baseline, document and analyze the risk reduction plan, track the current risk reduction status, organize risks by reference configuration system, subsystem, and component (SSC) and Area, and increase the level of NGNP decision making.

  18. Accelerating Information Retrieval from Profile Hidden Markov Model Databases.

    Science.gov (United States)

    Tamimi, Ahmad; Ashhab, Yaqoub; Tamimi, Hashem

    2016-01-01

    Profile Hidden Markov Model (Profile-HMM) is an efficient statistical approach to represent protein families. Currently, several databases maintain valuable protein sequence information as profile-HMMs. There is an increasing interest to improve the efficiency of searching Profile-HMM databases to detect sequence-profile or profile-profile homology. However, most efforts to enhance searching efficiency have been focusing on improving the alignment algorithms. Although the performance of these algorithms is fairly acceptable, the growing size of these databases, as well as the increasing demand for using batch query searching approach, are strong motivations that call for further enhancement of information retrieval from profile-HMM databases. This work presents a heuristic method to accelerate the current profile-HMM homology searching approaches. The method works by cluster-based remodeling of the database to reduce the search space, rather than focusing on the alignment algorithms. Using different clustering techniques, 4284 TIGRFAMs profiles were clustered based on their similarities. A representative for each cluster was assigned. To enhance sensitivity, we proposed an extended step that allows overlapping among clusters. A validation benchmark of 6000 randomly selected protein sequences was used to query the clustered profiles. To evaluate the efficiency of our approach, speed and recall values were measured and compared with the sequential search approach. Using hierarchical, k-means, and connected component clustering techniques followed by the extended overlapping step, we obtained an average reduction in time of 41%, and an average recall of 96%. Our results demonstrate that representation of profile-HMMs using a clustering-based approach can significantly accelerate data retrieval from profile-HMM databases.
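
    The cluster-based search strategy can be sketched as follows: profiles are grouped by hierarchical clustering of pairwise distances, one representative per cluster is kept, and a query is scored against representatives before being scored only within the best-matching cluster. The feature vectors, the choice of the first member as representative, and the score() placeholder are simplifications for illustration, not the authors' HMM scoring pipeline.

        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage
        from scipy.spatial.distance import pdist

        rng = np.random.default_rng(1)
        profiles = rng.random((100, 32))              # toy stand-ins for profile-HMM features

        # Group profiles by hierarchical clustering of pairwise distances (L1 distance here).
        labels = fcluster(linkage(pdist(profiles, metric="cityblock"), method="average"),
                          t=10, criterion="maxclust")
        reps = {c: np.flatnonzero(labels == c)[0] for c in np.unique(labels)}  # one rep per cluster

        def score(query, profile):                    # placeholder for a real profile scorer
            return -np.abs(query - profile).sum()

        def search(query):
            # Compare to cluster representatives first, then only within the best cluster.
            best = max(reps, key=lambda c: score(query, profiles[reps[c]]))
            members = np.flatnonzero(labels == best)
            return max(members, key=lambda i: score(query, profiles[i]))

        print(search(rng.random(32)))                 # index of the best-scoring profile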

  19. Accelerating Information Retrieval from Profile Hidden Markov Model Databases.

    Directory of Open Access Journals (Sweden)

    Ahmad Tamimi

    Full Text Available Profile Hidden Markov Model (Profile-HMM) is an efficient statistical approach to represent protein families. Currently, several databases maintain valuable protein sequence information as profile-HMMs. There is an increasing interest to improve the efficiency of searching Profile-HMM databases to detect sequence-profile or profile-profile homology. However, most efforts to enhance searching efficiency have been focusing on improving the alignment algorithms. Although the performance of these algorithms is fairly acceptable, the growing size of these databases, as well as the increasing demand for using batch query searching approach, are strong motivations that call for further enhancement of information retrieval from profile-HMM databases. This work presents a heuristic method to accelerate the current profile-HMM homology searching approaches. The method works by cluster-based remodeling of the database to reduce the search space, rather than focusing on the alignment algorithms. Using different clustering techniques, 4284 TIGRFAMs profiles were clustered based on their similarities. A representative for each cluster was assigned. To enhance sensitivity, we proposed an extended step that allows overlapping among clusters. A validation benchmark of 6000 randomly selected protein sequences was used to query the clustered profiles. To evaluate the efficiency of our approach, speed and recall values were measured and compared with the sequential search approach. Using hierarchical, k-means, and connected component clustering techniques followed by the extended overlapping step, we obtained an average reduction in time of 41%, and an average recall of 96%. Our results demonstrate that representation of profile-HMMs using a clustering-based approach can significantly accelerate data retrieval from profile-HMM databases.

  20. ECOS E-MATRIX Methane and Volatile Organic Carbon (VOC) Emissions Best Practices Database

    Energy Technology Data Exchange (ETDEWEB)

    Parisien, Lia [The Environmental Council Of The States, Washington, DC (United States)

    2016-01-31

    This final scientific/technical report on the ECOS e-MATRIX Methane and Volatile Organic Carbon (VOC) Emissions Best Practices Database provides a disclaimer and acknowledgement, table of contents, executive summary, description of project activities, and briefing/technical presentation link.

  1. Towards a Component Based Model for Database Systems

    Directory of Open Access Journals (Sweden)

    Octavian Paul ROTARU

    2004-02-01

    Full Text Available Due to their effectiveness in the design and development of software applications and due to their recognized advantages in terms of reusability, Component-Based Software Engineering (CBSE) concepts have been arousing a great deal of interest in recent years. This paper presents and extends a component-based approach to object-oriented database systems (OODB) introduced by us in [1] and [2]. Components are proposed as a new abstraction level for database systems: logical partitions of the schema. In this context, the scope is introduced as an escalated property for transactions. Components are studied from the integrity, consistency, and concurrency control perspective. The main benefits of our proposed component model for OODB are the reusability of the database design, including the access statistics required for a proper query optimization, and a smooth information exchange. The integration of crosscutting concerns into the component database model using aspect-oriented techniques is also discussed. One of the main goals is to define a method for the assessment of component composition capabilities. These capabilities are restricted by the component's interface and measured in terms of adaptability, degree of compose-ability and acceptability level. The above-mentioned metrics are extended from database components to generic software components. This paper extends and consolidates into one common view the ideas previously presented by us in [1, 2, 3]. [1] Octavian Paul Rotaru, Marian Dobre, Component Aspects in Object Oriented Databases, Proceedings of the International Conference on Software Engineering Research and Practice (SERP'04), Volume II, ISBN 1-932415-29-7, pages 719-725, Las Vegas, NV, USA, June 2004. [2] Octavian Paul Rotaru, Marian Dobre, Mircea Petrescu, Integrity and Consistency Aspects in Component-Oriented Databases, Proceedings of the International Symposium on Innovation in Information and Communication Technology (ISIICT

  2. Organizing, exploring, and analyzing antibody sequence data: the case for relational-database managers.

    Science.gov (United States)

    Owens, John

    2009-01-01

    Technological advances in the acquisition of DNA and protein sequence information and the resulting onrush of data can quickly overwhelm the scientist unprepared for the volume of information that must be evaluated and carefully dissected to discover its significance. Few laboratories have the luxury of dedicated personnel to organize, analyze, or consistently record a mix of arriving sequence data. A methodology based on a modern relational-database manager is presented that is both a natural storage vessel for antibody sequence information and a conduit for organizing and exploring sequence data and accompanying annotation text. The expertise necessary to implement such a plan is equal to that required by electronic word processors or spreadsheet applications. Antibody sequence projects maintained as independent databases are selectively unified by the relational-database manager into larger database families that contribute to local analyses, reports, interactive HTML pages, or exported to facilities dedicated to sophisticated sequence analysis techniques. Database files are transposable among current versions of Microsoft, Macintosh, and UNIX operating systems.
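
    One concrete way to realize the "independent project databases unified on demand" workflow described here is SQLite's ATTACH DATABASE, sketched below in Python. The file names, table and fields are invented for illustration and are not taken from the article.

        import sqlite3

        # Two independent project databases, each with its own antibody sequence table.
        for name in ("projectA.db", "projectB.db"):
            db = sqlite3.connect(name)
            db.execute("CREATE TABLE IF NOT EXISTS clone (id TEXT, cdr3 TEXT, note TEXT)")
            db.execute("INSERT INTO clone VALUES (?, ?, ?)", (name[:8], "CARDYWGQGTLV", "demo"))
            db.commit()
            db.close()

        # Unify them on demand into one "database family" and query across both.
        con = sqlite3.connect("projectA.db")
        con.execute("ATTACH DATABASE 'projectB.db' AS b")
        rows = con.execute("SELECT id, cdr3 FROM clone "
                           "UNION ALL SELECT id, cdr3 FROM b.clone").fetchall()
        print(rows)    # [('projectA', 'CARDYWGQGTLV'), ('projectB', 'CARDYWGQGTLV')]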

  3. Verification of road databases using multiple road models

    Science.gov (United States)

    Ziems, Marcel; Rottensteiner, Franz; Heipke, Christian

    2017-08-01

    In this paper a new approach for automatic road database verification based on remote sensing images is presented. In contrast to existing methods, the applicability of the new approach is not restricted to specific road types, context areas or geographic regions. This is achieved by combining several state-of-the-art road detection and road verification approaches that work well under different circumstances. Each one serves as an independent module representing a unique road model and a specific processing strategy. All modules provide independent solutions for the verification problem of each road object stored in the database in form of two probability distributions, the first one for the state of a database object (correct or incorrect), and a second one for the state of the underlying road model (applicable or not applicable). In accordance with the Dempster-Shafer Theory, both distributions are mapped to a new state space comprising the classes correct, incorrect and unknown. Statistical reasoning is applied to obtain the optimal state of a road object. A comparison with state-of-the-art road detection approaches using benchmark datasets shows that in general the proposed approach provides results with larger completeness. Additional experiments reveal that based on the proposed method a highly reliable semi-automatic approach for road data base verification can be designed.
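
    The fusion step described here rests on Dempster's rule of combination. The sketch below implements the generic rule on the three-state space {correct, incorrect, unknown} (the ignorance mass standing for "unknown"); the two input mass functions are made-up module outputs, not values from the paper.

        from itertools import product

        def combine(m1, m2):
            """Dempster's rule for two mass functions given as {frozenset: mass} dicts."""
            combined, conflict = {}, 0.0
            for (a, wa), (b, wb) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + wa * wb
                else:
                    conflict += wa * wb              # mass falling on the empty set
            return {s: w / (1.0 - conflict) for s, w in combined.items()}

        C, INC = frozenset({"correct"}), frozenset({"incorrect"})
        U = C | INC                                  # the full frame, i.e. "unknown"

        module1 = {C: 0.6, INC: 0.1, U: 0.3}         # hypothetical output of one road model
        module2 = {C: 0.2, INC: 0.5, U: 0.3}         # hypothetical output of another module
        print(combine(module1, module2))             # fused belief over the three states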

  4. CyanOmics: an integrated database of omics for the model cyanobacterium Synechococcus sp. PCC 7002.

    Science.gov (United States)

    Yang, Yaohua; Feng, Jie; Li, Tao; Ge, Feng; Zhao, Jindong

    2015-01-01

    Cyanobacteria are an important group of organisms that carry out oxygenic photosynthesis and play vital roles in both the carbon and nitrogen cycles of the Earth. The annotated genome of Synechococcus sp. PCC 7002, as an ideal model cyanobacterium, is available. A series of transcriptomic and proteomic studies of Synechococcus sp. PCC 7002 cells grown under different conditions have been reported. However, no database of such integrated omics studies has been constructed. Here we present CyanOmics, a database based on the results of Synechococcus sp. PCC 7002 omics studies. CyanOmics comprises one genomic dataset, 29 transcriptomic datasets and one proteomic dataset and should prove useful for systematic and comprehensive analysis of all those data. Powerful browsing and searching tools are integrated to help users directly access information of interest with enhanced visualization of the analytical results. Furthermore, Blast is included for sequence-based similarity searching and Cluster 3.0, as well as the R hclust function is provided for cluster analyses, to increase CyanOmics's usefulness. To the best of our knowledge, it is the first integrated omics analysis database for cyanobacteria. This database should further understanding of the transcriptional patterns, and proteomic profiling of Synechococcus sp. PCC 7002 and other cyanobacteria. Additionally, the entire database framework is applicable to any sequenced prokaryotic genome and could be applied to other integrated omics analysis projects. Database URL: http://lag.ihb.ac.cn/cyanomics. © The Author(s) 2015. Published by Oxford University Press.

  5. A database model for the radiological characterization of the RA reactor in the 'Vinca' Institute

    International Nuclear Information System (INIS)

    Steljic, M.; Ljubenov, V.

    2004-01-01

    During the preparation and realization of the radiological characterization of a nuclear facility it is necessary to organize, store, review and process a large amount of data of various types. The documentation has to be treated according to the quality assurance (QA) programme requirements, and the ultimate goal is to establish a unique record management system (RMS) for the nuclear facility decommissioning project. This paper presents the design details of the database model for the radiological characterization of the RA research reactor. (author)

  6. CyanOmics: an integrated database of omics for the model cyanobacterium Synechococcus sp. PCC 7002

    OpenAIRE

    Yang, Yaohua; Feng, Jie; Li, Tao; Ge, Feng; Zhao, Jindong

    2015-01-01

    Cyanobacteria are an important group of organisms that carry out oxygenic photosynthesis and play vital roles in both the carbon and nitrogen cycles of the Earth. The annotated genome of Synechococcus sp. PCC 7002, as an ideal model cyanobacterium, is available. A series of transcriptomic and proteomic studies of Synechococcus sp. PCC 7002 cells grown under different conditions have been reported. However, no database of such integrated omics studies has been constructed. Here we present Cyan...

  7. Filling Terrorism Gaps: VEOs, Evaluating Databases, and Applying Risk Terrain Modeling to Terrorism

    Energy Technology Data Exchange (ETDEWEB)

    Hagan, Ross F. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-08-29

    This paper aims to address three issues: the lack of literature differentiating terrorism and violent extremist organizations (VEOs), terrorism incident databases, and the applicability of Risk Terrain Modeling (RTM) to terrorism. Current open source literature and publicly available government sources do not differentiate between terrorism and VEOs; furthermore, they fail to define them. Addressing the lack of a comprehensive comparison of existing terrorism data sources, a matrix comparing a dozen terrorism databases is constructed, providing insight toward the array of data available. RTM, a method for spatial risk analysis at a micro level, has some applicability to terrorism research, particularly for studies looking at risk indicators of terrorism. Leveraging attack data from multiple databases, combined with RTM, offers one avenue for closing existing research gaps in terrorism literature.

  8. Object-oriented modeling and design of database federations

    NARCIS (Netherlands)

    Balsters, H.

    2003-01-01

    We describe a logical architecture and a general semantic framework for precise specification of so-called database federations. A database federation provides for tight coupling of a collection of heterogeneous component databases into a global integrated system. Our approach to database federation

  9. Database and Interim Glass Property Models for Hanford HLW Glasses

    International Nuclear Information System (INIS)

    Hrma, Pavel R; Piepel, Gregory F; Vienna, John D; Cooley, Scott K; Kim, Dong-Sang; Russell, Renee L

    2001-01-01

    The purpose of this report is to provide a methodology for an increase in the efficiency and a decrease in the cost of vitrifying high-level waste (HLW) by optimizing HLW glass formulation. This methodology consists of collecting and generating a database of glass properties that determine HLW glass processability and acceptability and relating these properties to glass composition. The report explains how the property-composition models are developed, fitted to data, used for glass formulation optimization, and continuously updated in response to changes in HLW composition estimates and changes in glass processing technology. Further, the report reviews the glass property-composition literature data and presents their preliminary critical evaluation and screening. Finally, the report provides interim property-composition models for melt viscosity, for liquidus temperature (with spinel and zircon primary crystalline phases), and for the product consistency test normalized releases of B, Na, and Li. Models were fitted to a subset of the screened database deemed most relevant for the current HLW composition region.
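
    The kind of first-order property-composition model described here can be illustrated by fitting a property as a linear function of component mass fractions with ordinary least squares. The sketch below uses synthetic data and invented coefficients, not the report's glass-property database.

        import numpy as np

        rng = np.random.default_rng(0)
        n_glasses, n_components = 40, 6

        X = rng.dirichlet(np.ones(n_components), size=n_glasses)  # mass fractions, rows sum to 1
        true_coeffs = np.array([4.0, -2.0, 1.5, 0.5, -1.0, 3.0])  # invented component effects
        y = X @ true_coeffs + rng.normal(0.0, 0.05, n_glasses)    # synthetic "measured" property

        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)            # fitted property-composition model
        print(np.round(coeffs, 2))

        candidate = np.array([0.3, 0.1, 0.2, 0.1, 0.1, 0.2])      # a candidate glass composition
        print(candidate @ coeffs)                                  # predicted property value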

  10. Modeling, Measurements, and Fundamental Database Development for Nonequilibrium Hypersonic Aerothermodynamics

    Science.gov (United States)

    Bose, Deepak

    2012-01-01

    The design of entry vehicles requires predictions of the aerothermal environment during the hypersonic phase of their flight trajectories. These predictions are made using computational fluid dynamics (CFD) codes that often rely on physics and chemistry models of nonequilibrium processes. The primary processes of interest are gas-phase chemistry, internal energy relaxation, electronic excitation, nonequilibrium emission and absorption of radiation, and gas-surface interaction leading to surface recession and catalytic recombination. NASA's Hypersonics Project is advancing the state of the art in the modeling of nonequilibrium phenomena by making detailed spectroscopic measurements in shock tubes and arcjets, using ab-initio quantum mechanical techniques to develop fundamental chemistry and spectroscopic databases, making fundamental measurements of finite-rate gas-surface interactions, and implementing detailed mechanisms in state-of-the-art CFD codes. The development of new models is based on validation with relevant experiments. We will present the latest developments and a roadmap for the technical areas mentioned above.

  11. Evolution of computational models in BioModels Database and the Physiome Model Repository.

    Science.gov (United States)

    Scharm, Martin; Gebhardt, Tom; Touré, Vasundra; Bagnacani, Andrea; Salehzadeh-Yazdi, Ali; Wolkenhauer, Olaf; Waltemath, Dagmar

    2018-04-12

    A useful model is one that is being (re)used. The development of a successful model does not finish with its publication. During reuse, models are being modified, i.e. expanded, corrected, and refined. Even small changes in the encoding of a model can, however, significantly affect its interpretation. Our motivation for the present study is to identify changes in models and make them transparent and traceable. We analysed 13734 models from BioModels Database and the Physiome Model Repository. For each model, we studied the frequencies and types of updates between its first and latest release. To demonstrate the impact of changes, we explored the history of a Repressilator model in BioModels Database. We observed continuous updates in the majority of models. Surprisingly, even the early models are still being modified. We furthermore detected that many updates target annotations, which improves the information one can gain from models. To support the analysis of changes in model repositories we developed MoSt, an online tool for visualisations of changes in models. The scripts used to generate the data and figures for this study are available from GitHub https://github.com/binfalse/BiVeS-StatsGenerator and as a Docker image at https://hub.docker.com/r/binfalse/bives-statsgenerator/ . The website https://most.bio.informatik.uni-rostock.de/ provides interactive access to model versions and their evolutionary statistics. The reuse of models is still impeded by a lack of trust and documentation. A detailed and transparent documentation of all aspects of the model, including its provenance, will improve this situation. Knowledge about a model's provenance can avoid the repetition of mistakes that others already faced. More insights are gained into how the system evolves from initial findings to a profound understanding. We argue that it is the responsibility of the maintainers of model repositories to offer transparent model provenance to their users.

  12. Demonstration of SLUMIS: a clinical database and management information system for a multi organ transplant program.

    OpenAIRE

    Kurtz, M.; Bennett, T.; Garvin, P.; Manuel, F.; Williams, M.; Langreder, S.

    1991-01-01

    Because of the rapid evolution of the heart, heart/lung, liver, kidney and kidney/pancreas transplant programs at our institution, and because of a lack of an existing comprehensive database, we were required to develop a computerized management information system capable of supporting both clinical and research requirements of a multifaceted transplant program. SLUMIS (ST. LOUIS UNIVERSITY MULTI-ORGAN INFORMATION SYSTEM) was developed for the following reasons: 1) to comply with the reportin...

  13. On the Perceptual Organization of Image Databases Using Cognitive Discriminative Biplots

    Directory of Open Access Journals (Sweden)

    Spiros Fotopoulos

    2007-01-01

    Full Text Available A human-centered approach to image database organization is presented in this study. The management of a generic image database is pursued using a standard psychophysical experimental procedure followed by a well-suited data analysis methodology that is based on simple geometrical concepts. The end result is a cognitive discriminative biplot, which is a visualization of the intrinsic organization of the image database best reflecting the user's perception. The discriminating power of the introduced cognitive biplot constitutes an appealing tool for image retrieval and a flexible interface for visual data mining tasks. These ideas were evaluated in two ways. First, the separability of semantically distinct image classes was measured according to their reduced representations on the biplot. Then, a nearest-neighbor retrieval scheme was run on the emerged low-dimensional terrain to measure the suitability of the biplot for performing content-based image retrieval (CBIR). The achieved organization performance, when compared with that of a contemporary system, was found to be superior. This promoted the further discussion of packing these ideas into a realizable algorithmic procedure for an efficient and effective personalized CBIR system.

  14. Construction and completion of flux balance models from pathway databases.

    Science.gov (United States)

    Latendresse, Mario; Krummenacker, Markus; Trupp, Miles; Karp, Peter D

    2012-02-01

    Flux balance analysis (FBA) is a well-known technique for genome-scale modeling of metabolic flux. Typically, an FBA formulation requires the accurate specification of four sets: biochemical reactions, biomass metabolites, nutrients and secreted metabolites. The development of FBA models can be time consuming and tedious because of the difficulty in assembling completely accurate descriptions of these sets, and in identifying errors in the composition of these sets. For example, the presence of a single non-producible metabolite in the biomass will make the entire model infeasible. Other difficulties in FBA modeling are that model distributions, and predicted fluxes, can be cryptic and difficult to understand. We present a multiple gap-filling method to accelerate the development of FBA models using a new tool, called MetaFlux, based on mixed integer linear programming (MILP). The method suggests corrections to the sets of reactions, biomass metabolites, nutrients and secretions. The method generates FBA models directly from Pathway/Genome Databases. Thus, FBA models developed in this framework are easily queried and visualized using the Pathway Tools software. Predicted fluxes are more easily comprehended by visualizing them on diagrams of individual metabolic pathways or of metabolic maps. MetaFlux can also remove redundant high-flux loops, solve FBA models once they are generated and model the effects of gene knockouts. MetaFlux has been validated through construction of FBA models for Escherichia coli and Homo sapiens. Pathway Tools with MetaFlux is freely available to academic users, and for a fee to commercial users. Download from: biocyc.org/download.shtml. mario.latendresse@sri.com Supplementary data are available at Bioinformatics online.
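
    The core FBA computation that such generated models feed into is a linear program: maximize flux through a biomass reaction subject to steady state (S·v = 0) and flux bounds. The sketch below solves a toy three-reaction network with SciPy; it is not a MetaFlux model or output.

        import numpy as np
        from scipy.optimize import linprog

        # Metabolites A and B; reactions R1 (uptake -> A), R2 (A -> B), R3 (biomass: B ->).
        S = np.array([[1, -1,  0],      # row for metabolite A
                      [0,  1, -1]])     # row for metabolite B
        bounds = [(0, 10), (0, 1000), (0, 1000)]        # uptake limited by the nutrient set

        c = np.zeros(3)
        c[2] = -1.0                                     # maximize v3 == minimize -v3
        res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
        print(res.x)                                    # optimal fluxes, here [10, 10, 10]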

  15. Multiscale modeling of alloy solidification using a database approach

    Science.gov (United States)

    Tan, Lijian; Zabaras, Nicholas

    2007-11-01

    A two-scale model based on a database approach is presented to investigate alloy solidification. Appropriate assumptions are introduced to describe the behavior of macroscopic temperature, macroscopic concentration, liquid volume fraction and microstructure features. These assumptions lead to a macroscale model with two unknown functions: liquid volume fraction and microstructure features. These functions are computed using information from microscale solutions of selected problems. This work addresses the selection of sample problems relevant to the problem of interest and the utilization of data from the microscale solution of the selected sample problems. A computationally efficient model, which is different from the microscale and macroscale models, is utilized to find relevant sample problems. In this work, the computationally efficient model is a sharp interface solidification model of a pure material. Similarities between the sample problems and the problem of interest are explored by assuming that the liquid volume fraction and microstructure features are functions of solution features extracted from the solution of the computationally efficient model. The solution features of the computationally efficient model are selected as the interface velocity and thermal gradient in the liquid at the time the sharp solid-liquid interface passes through. An analytical solution of the computationally efficient model is utilized to select sample problems relevant to solution features obtained at any location of the domain of the problem of interest. The microscale solution of selected sample problems is then utilized to evaluate the two unknown functions (liquid volume fraction and microstructure features) in the macroscale model. The temperature solution of the macroscale model is further used to improve the estimation of the liquid volume fraction and microstructure features. Interpolation is utilized in the feature space to greatly reduce the number of required
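
    The database/interpolation step can be sketched as a lookup table from microscale solution features (interface velocity, thermal gradient) to liquid volume fraction, interpolated in feature space for the values the macroscale model needs. The tabulated numbers below are invented for illustration and are not results from the paper.

        import numpy as np
        from scipy.interpolate import griddata

        # Microscale results indexed by solution features (interface velocity V, thermal gradient G).
        features = np.array([[0.1, 1.0], [0.1, 5.0], [0.5, 1.0], [0.5, 5.0]])
        liquid_fraction = np.array([0.80, 0.65, 0.55, 0.40])       # invented sample values

        query = np.array([[0.3, 3.0]])                  # features seen at some macroscale location
        print(griddata(features, liquid_fraction, query, method="linear"))   # -> [0.6]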

  16. BUSINESS MODELLING AND DATABASE DESIGN IN CLOUD COMPUTING

    Directory of Open Access Journals (Sweden)

    Mihai-Constantin AVORNICULUI

    2015-04-01

    Full Text Available Electronic commerce has grown constantly year after year over the last decade; few areas register such growth. It covers the exchange of computerized data, but also electronic messaging, linear data banks and electronic payment transfer. Cloud computing, a relatively new concept and term, is a model for accessing services via the internet on distributed systems of configurable computing resources on demand, which can be made available quickly with minimum management effort and intervention from the client and the provider. Behind an electronic commerce system in the cloud there is a database which contains the information necessary for the transactions in the system. Business modelling brings many benefits and makes the design of the database used by electronic commerce systems in the cloud considerably easier.

  17. Data-based Non-Markovian Model Inference

    Science.gov (United States)

    Ghil, Michael

    2015-04-01

    This talk concentrates on obtaining stable and efficient data-based models for simulation and prediction in the geosciences and life sciences. The proposed model derivation relies on using a multivariate time series of partial observations from a large-dimensional system, and the resulting low-order models are compared with the optimal closures predicted by the non-Markovian Mori-Zwanzig formalism of statistical physics. Multilayer stochastic models (MSMs) are introduced as both a very broad generalization and a time-continuous limit of existing multilevel, regression-based approaches to data-based closure, in particular of empirical model reduction (EMR). We show that the multilayer structure of MSMs can provide a natural Markov approximation to the generalized Langevin equation (GLE) of the Mori-Zwanzig formalism. A simple correlation-based stopping criterion for an EMR-MSM model is derived to assess how well it approximates the GLE solution. Sufficient conditions are given for the nonlinear cross-interactions between the constitutive layers of a given MSM to guarantee the existence of a global random attractor. This existence ensures that no blow-up can occur for a very broad class of MSM applications. The EMR-MSM methodology is first applied to a conceptual, nonlinear, stochastic climate model of coupled slow and fast variables, in which only slow variables are observed. The resulting reduced model with energy-conserving nonlinearities captures the main statistical features of the slow variables, even when there is no formal scale separation and the fast variables are quite energetic. Second, an MSM is shown to successfully reproduce the statistics of a partially observed, generalized Lotka-Volterra model of population dynamics in its chaotic regime. The positivity constraint on the solutions' components replaces here the quadratic-energy-preserving constraint of fluid-flow problems and it successfully prevents blow-up. This work is based on a close

  18. Developing High-resolution Soil Database for Regional Crop Modeling in East Africa

    Science.gov (United States)

    Han, E.; Ines, A. V. M.

    2014-12-01

    The most readily available soil data for regional crop modeling in Africa is the World Inventory of Soil Emission Potentials (WISE) dataset, which has 1125 soil profiles for the world, but does not extensively cover Ethiopia, Kenya, Uganda and Tanzania in East Africa. Another dataset available is the HC27 (Harvest Choice by IFPRI) in a gridded format (10km), but it is composed of generic soil profiles based on only three criteria (texture, rooting depth, and organic carbon content). In this paper, we present the development and application of a high-resolution (1km), gridded soil database for regional crop modeling in East Africa. Basic soil information is extracted from the Africa Soil Information Service (AfSIS), which provides essential soil properties (bulk density, soil organic carbon, soil pH and percentages of sand, silt and clay) for 6 different standardized soil layers (5, 15, 30, 60, 100 and 200 cm) at 1km resolution. Soil hydraulic properties (e.g., field capacity and wilting point) are derived from the AfSIS soil dataset using well-proven pedo-transfer functions and are customized for DSSAT-CSM soil data requirements. The crop model is used to evaluate crop yield forecasts using the new high-resolution soil database, and the results are compared with those from WISE and HC27. We also present the results of DSSAT loosely coupled with a hydrologic model (VIC) to assimilate root-zone soil moisture. Creating a grid-based soil database, which provides a consistent soil input for two different models (DSSAT and VIC), is a critical part of this work. The created soil database is expected to contribute to future applications of DSSAT crop simulation in East Africa where food security is highly vulnerable.
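
    As an illustration of how pedotransfer functions turn layer properties into the hydraulic limits a crop model needs, the sketch below applies a generic linear pedotransfer form to sand/clay/organic-carbon values; the coefficients are placeholders for illustration only, not the well-proven functions used in the study.

```python
# Illustrative pedotransfer step: derive field capacity and wilting point
# (volumetric fractions) from basic soil properties per layer.
# The linear coefficients below are placeholders, NOT a published PTF.
def pedotransfer(sand_pct, clay_pct, org_c_pct):
    field_capacity = 0.30 - 0.0020 * sand_pct + 0.0030 * clay_pct + 0.010 * org_c_pct
    wilting_point = 0.05 + 0.0040 * clay_pct + 0.005 * org_c_pct
    return field_capacity, wilting_point

# AfSIS-style standard layer depths (cm) with illustrative property values
layers = [(5, 65, 15, 1.2), (15, 60, 18, 1.0), (30, 55, 22, 0.8),
          (60, 50, 25, 0.5), (100, 48, 27, 0.3), (200, 45, 30, 0.2)]

for depth, sand, clay, oc in layers:
    fc, wp = pedotransfer(sand, clay, oc)
    print(f"layer to {depth:3d} cm: FC={fc:.3f}, WP={wp:.3f}")
```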

  19. SynechoNET: integrated protein-protein interaction database of a model cyanobacterium Synechocystis sp. PCC 6803

    OpenAIRE

    Kim, Woo-Yeon; Kang, Sungsoo; Kim, Byoung-Chul; Oh, Jeehyun; Cho, Seongwoong; Bhak, Jong; Choi, Jong-Soon

    2008-01-01

    Background Cyanobacteria are model organisms for studying photosynthesis, carbon and nitrogen assimilation, evolution of plant plastids, and adaptability to environmental stresses. Despite many studies on cyanobacteria, there is no web-based database of their regulatory and signaling protein-protein interaction networks to date. Description We report a database and website SynechoNET that provides predicted protein-protein interactions. SynechoNET shows cyanobacterial domain-domain interactio...

  20. MOSAIC: An organic geochemical and sedimentological database for marine surface sediments

    Science.gov (United States)

    Tavagna, Maria Luisa; Usman, Muhammed; De Avelar, Silvania; Eglinton, Timothy

    2015-04-01

    Modern ocean sediments serve as the interface between the biosphere and the geosphere, play a key role in biogeochemical cycles and provide a window on how contemporary processes are written into the sedimentary record. Research over past decades has resulted in a wealth of information on the content and composition of organic matter in marine sediments, with ever-more sophisticated techniques continuing to yield information of greater detail and at an accelerating pace. However, there has been no attempt to synthesize this wealth of information. We are establishing a new database that incorporates information relevant to local, regional and global-scale assessment of the content, source and fate of organic materials accumulating in contemporary marine sediments. In the MOSAIC (Modern Ocean Sediment Archive and Inventory of Carbon) database, particular emphasis is placed on molecular and isotopic information, coupled with contextual information (e.g., sedimentological properties) relevant to elucidating factors that influence the efficiency and nature of organic matter burial. The main features of MOSAIC include: (i) Emphasis on continental margin sediments as major loci of carbon burial, and as the interface between terrestrial and oceanic realms; (ii) Bulk to molecular-level organic geochemical properties and parameters, including concentration and isotopic compositions; (iii) Inclusion of extensive contextual data regarding the depositional setting, in particular with respect to sedimentological and redox characteristics. The ultimate goal is to create an open-access instrument, available on the web, to be utilized for research and education by the international community, which can both contribute to and interrogate the database. Submission will be accomplished by means of a pre-configured table available on the MOSAIC webpage. The information on the filled tables will be checked and eventually imported, via the Structured Query Language (SQL), into
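
    A minimal sketch of the kind of submission workflow described (a pre-configured table whose contents are imported via SQL) is given below using SQLite; the table name and columns are assumptions for illustration, not the actual MOSAIC schema.

```python
# Sketch: load a pre-configured sample table into a relational store.
# Table and column names are hypothetical, not the real MOSAIC schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE surface_sediment (
        sample_id      TEXT PRIMARY KEY,
        latitude       REAL,
        longitude      REAL,
        water_depth_m  REAL,
        toc_percent    REAL,      -- total organic carbon
        d13c_permil    REAL,      -- bulk carbon isotope composition
        grain_size_um  REAL       -- contextual sedimentological property
    )
""")
rows = [
    ("MARGIN-001", 43.2, 5.1, 120.0, 1.8, -22.4, 18.0),
    ("MARGIN-002", 43.4, 5.3, 310.0, 1.1, -21.0, 35.0),
]
conn.executemany("INSERT INTO surface_sediment VALUES (?,?,?,?,?,?,?)", rows)

# Example interrogation: organic-rich continental-margin samples
for row in conn.execute(
        "SELECT sample_id, toc_percent FROM surface_sediment WHERE toc_percent > 1.5"):
    print(row)
```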

  1. Greedy Sampling and Incremental Surrogate Model-Based Tailoring of Aeroservoelastic Model Database for Flexible Aircraft

    Science.gov (United States)

    Wang, Yi; Pant, Kapil; Brenner, Martin J.; Ouellette, Jeffrey A.

    2018-01-01

    This paper presents a data analysis and modeling framework to tailor and develop a linear parameter-varying (LPV) aeroservoelastic (ASE) model database for flexible aircraft in a broad 2D flight parameter space. The Kriging surrogate model is constructed using ASE models at a fraction of grid points within the original model database, and then the ASE model at any flight condition can be obtained simply through surrogate model interpolation. A greedy sampling algorithm is developed to select the next sample point that carries the worst relative error between the surrogate model prediction and the benchmark model in the frequency domain among all input-output channels. The process is iterated to incrementally improve surrogate model accuracy until a pre-determined tolerance or iteration budget is met. The methodology is applied to the ASE model database of a flexible aircraft currently being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies indicate that the proposed method can reduce the number of models in the original database by 67%. Even so, the ASE models obtained through Kriging interpolation match the models in the original database constructed directly from the physics-based tool with the worst relative error far below 1%. The interpolated ASE model exhibits continuously-varying gains along a set of prescribed flight conditions. More importantly, the selected grid points are distributed non-uniformly in the parameter space, a) capturing the distinctly different dynamic behavior and its dependence on flight parameters, and b) reiterating the need and utility for adaptive space sampling techniques for ASE model database compaction. The present framework is directly extendible to high-dimensional flight parameter space, and can be used to guide the ASE model development, model order reduction, robust control synthesis and novel vehicle design of flexible aircraft.
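
    The greedy loop described above can be sketched as follows: fit a Kriging (Gaussian-process) surrogate on the currently selected grid points, evaluate its relative error against the full benchmark set, and add the worst-error point until a tolerance is met. The scalar response used here stands in for the frequency-domain, all-channel error of the actual ASE models and is an illustrative simplification.

```python
# Greedy, incremental Kriging surrogate over a 2D flight-parameter grid.
# The benchmark "response" is a stand-in scalar; the real method compares
# frequency responses of LPV ASE models across all I/O channels.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

mach, alt = np.meshgrid(np.linspace(0.3, 0.9, 13), np.linspace(0.0, 1.0, 11))
X = np.column_stack([mach.ravel(), alt.ravel()])          # flight conditions
y = np.sin(6 * X[:, 0]) * np.exp(-2 * X[:, 1])            # benchmark database

selected = [0, len(X) - 1]                                # seed samples
tol = 0.02
for _ in range(50):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-8)
    gp.fit(X[selected], y[selected])
    pred = gp.predict(X)
    rel_err = np.abs(pred - y) / (np.abs(y).max() + 1e-12)
    worst = int(np.argmax(rel_err))
    if rel_err[worst] < tol:
        break
    selected.append(worst)                                # add worst-error point

print(f"kept {len(selected)} of {len(X)} grid points, max rel. error {rel_err.max():.3f}")
```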

  2. Virtual Organizations: Trends and Models

    Science.gov (United States)

    Nami, Mohammad Reza; Malekpour, Abbaas

    The use of ICT in business has changed views about traditional business. With VOs, organizations without physical, geographical, or structural constraints can collaborate in order to fulfill customer requests in a networked environment. This idea improves resource utilization, shortens the development process, reduces costs, and saves time. A Virtual Organization (VO) is always a form of partnership, and managing partners and handling partnerships are crucial. Virtual organizations are defined as temporary collections of enterprises that cooperate and share resources, knowledge, and competencies to better respond to business opportunities. This paper presents an overview of virtual organizations and the main issues in collaboration, such as security and management. It also presents a number of different model approaches according to their purpose and applications.

  3. MetRxn: a knowledgebase of metabolites and reactions spanning metabolic models and databases

    Directory of Open Access Journals (Sweden)

    Kumar Akhil

    2012-01-01

    Full Text Available Abstract Background Increasingly, metabolite and reaction information is organized in the form of genome-scale metabolic reconstructions that describe the reaction stoichiometry, directionality, and gene to protein to reaction associations. A key bottleneck in the pace of reconstruction of new, high-quality metabolic models is the inability to directly make use of metabolite/reaction information from biological databases or other models due to incompatibilities in content representation (i.e., metabolites with multiple names across databases and models), stoichiometric errors such as elemental or charge imbalances, and incomplete atomistic detail (e.g., use of generic R-groups or non-explicit specification of stereo-specificity). Description MetRxn is a knowledgebase that includes standardized metabolite and reaction descriptions by integrating information from BRENDA, KEGG, MetaCyc, Reactome.org and 44 metabolic models into a single unified data set. All metabolite entries have matched synonyms, resolved protonation states, and are linked to unique structures. All reaction entries are elementally and charge balanced. This is accomplished through the use of a workflow of lexicographic, phonetic, and structural comparison algorithms. MetRxn allows for the download of standardized versions of existing genome-scale metabolic models and the use of metabolic information for the rapid reconstruction of new ones. Conclusions The standardization in description allows for the direct comparison of the metabolite and reaction content between metabolic models and databases and the exhaustive prospecting of pathways for biotechnological production. This ever-growing dataset currently consists of over 76,000 metabolites participating in more than 72,000 reactions (including unresolved entries). MetRxn is hosted on a web-based platform that uses relational database models (MySQL).
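
    The lexicographic comparison step mentioned above can be illustrated with a toy synonym-resolution sketch; the normalization rules and the similarity scoring below are simplistic assumptions, not the MetRxn algorithms.

```python
# Toy metabolite-name matcher: normalize, then score string similarity.
# Normalization rules and thresholds are illustrative, not MetRxn's.
import re
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    name = name.lower()
    name = re.sub(r"\b(alpha|beta|d|l|dl)-", "", name)   # crude prefix stripping
    name = re.sub(r"[\s\-_,']", "", name)                # drop separators
    return name

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

pairs = [
    ("D-Glucose", "glucose"),
    ("alpha-D-glucose", "Glucose"),
    ("ATP", "adenosine triphosphate"),   # needs a synonym table, not string match
]
for a, b in pairs:
    print(f"{a!r} vs {b!r}: {similarity(a, b):.2f}")
```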

  4. BioModels Database: a repository of mathematical models of biological processes.

    Science.gov (United States)

    Chelliah, Vijayalakshmi; Laibe, Camille; Le Novère, Nicolas

    2013-01-01

    BioModels Database is a public online resource that allows storing and sharing of published, peer-reviewed quantitative, dynamic models of biological processes. The model components and behaviour are thoroughly checked to ensure that they correspond to the original publication and are manually curated to ensure reliability. Furthermore, the model elements are annotated with terms from controlled vocabularies as well as linked to relevant external data resources. This greatly helps in model interpretation and reuse. Models are accepted in SBML and CellML formats, stored in SBML, and available for download in various other common formats such as BioPAX, Octave, SciLab, VCML, XPP and PDF, in addition to SBML. The reaction network diagram of the models is also available in several formats. BioModels Database features a search engine, which provides simple and more advanced searches. Features such as online simulation and creation of smaller models (submodels) from the selected model elements of a larger one are provided. BioModels Database can be accessed both via a web interface and programmatically via web services. New models are made available in BioModels Database through regular releases, about every 4 months.

  5. Database and prediction model for CANDU pressure tube diameter

    Energy Technology Data Exchange (ETDEWEB)

    Jung, J.Y.; Park, J.H. [Korea Atomic Energy Research Inst., Daejeon (Korea, Republic of)

    2014-07-01

    The pressure tube (PT) diameter is a basic input for evaluating the CCP (critical channel power) of a CANDU reactor. Since the CCP directly affects the operational margin, an accurate prediction of the PT diameter is important for assessing that margin. However, the PT diameter increases by creep owing to the effects of neutron flux irradiation, stress, and reactor operating temperatures during the plant service period. Thus, it is necessary to collect measured PT diameter data, establish a database (DB), and develop a PT diameter prediction model. Accordingly, in this study, a DB of measured PT diameter data was established and a neural network (NN) based diameter prediction model was developed. The DB includes not only the measured diameter data but also operating conditions such as temperature, pressure, flux, and effective full power date. The NN based diameter prediction model currently considers only extrinsic variables such as the operating conditions, and will be enhanced to consider the effect of intrinsic variables such as the micro-structure of the PT material. (author)
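
    A minimal sketch of the kind of NN-based prediction model described (operating conditions in, diametral creep out) is shown below on synthetic data; the feature set, network size and the creep-like response used to generate the data are assumptions for illustration only.

```python
# Sketch: feed-forward NN regressor mapping operating conditions to
# pressure-tube diametral creep strain. Data are synthetic and illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(42)
n = 500
temp = rng.uniform(250, 310, n)        # coolant temperature, degC
flux = rng.uniform(1e17, 4e17, n)      # fast neutron flux, n/m^2/s
stress = rng.uniform(80, 140, n)       # hoop stress, MPa
efpd = rng.uniform(0, 7000, n)         # effective full power days

# Synthetic "measured" diametral strain with an assumed creep-like form
strain = (1e-24 * flux * efpd * (1 + 0.01 * (temp - 280)) * (stress / 100)
          + rng.normal(0, 0.0005, n))

X = np.column_stack([temp, flux, stress, efpd])
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                                   random_state=0))
model.fit(X, strain)
print("predicted strain:", model.predict([[290, 3e17, 120, 5000]])[0])
```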

  6. Global and Regional Ecosystem Modeling: Databases of Model Drivers and Validation Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Olson, R.J.

    2002-03-19

    NPP for 0.5°-grid cells for which inventory, modeling, or remote-sensing tools were used to scale up the point measurements. Documentation of the content and organization of the EMDI databases is provided.

  7. Model of organ dose combination

    International Nuclear Information System (INIS)

    Valley, J.-F.; Lerch, P.

    1977-01-01

    The ICRP recommendations are based on limiting the dose to each organ. In practice, and for a single source, the critical organ concept limits the calculation and represents the irradiation status of an individual. When several sources of radiation are involved, the dose contribution of each source to each organ must be derived. In order to represent the irradiation status, a new parameter has to be defined. Proposals have been made by some authors, in particular by Jacobi, introducing at this level biological parameters such as the incidence rate of detriment and its severity. The new concept is certainly richer than a simple dose notion. However, given the current state of knowledge about radiation effects, an intermediate parameter using only physical concepts and the maximum permissible organ doses seems more appropriate. The model, which is a generalization of the critical organ concept and shall be extended in the future to take biological effects into account, will be presented [fr

  8. Global search tool for the Advanced Photon Source Integrated Relational Model of Installed Systems (IRMIS) database

    International Nuclear Information System (INIS)

    Quock, D.E.R.; Cianciarulo, M.B.

    2007-01-01

    The Integrated Relational Model of Installed Systems (IRMIS) is a relational database tool that has been implemented at the Advanced Photon Source to maintain an updated account of approximately 600 control system software applications, 400,000 process variables, and 30,000 control system hardware components. To effectively display this large amount of control system information to operators and engineers, IRMIS was initially built with nine Web-based viewers: Applications Organizing Index, IOC, PLC, Component Type, Installed Components, Network, Controls Spares, Process Variables, and Cables. However, since each viewer is designed to provide details from only one major category of the control system, the necessity for a one-stop global search tool for the entire database became apparent. The user requirements for extremely fast database search time and ease of navigation through search results led to the choice of Asynchronous JavaScript and XML (AJAX) technology in the implementation of the IRMIS global search tool. Unique features of the global search tool include a two-tier level of displayed search results, and a database data integrity validation and reporting mechanism.

  9. MicrobesFlux: a web platform for drafting metabolic models from the KEGG database

    Directory of Open Access Journals (Sweden)

    Feng Xueyang

    2012-08-01

    Full Text Available Abstract Background Concurrent with the efforts currently underway in mapping microbial genomes using high-throughput sequencing methods, systems biologists are building metabolic models to characterize and predict cell metabolisms. One of the key steps in building a metabolic model is using multiple databases to collect and assemble essential information about genome-annotations and the architecture of the metabolic network for a specific organism. To speed up metabolic model development for a large number of microorganisms, we need a user-friendly platform to construct metabolic networks and to perform constraint-based flux balance analysis based on genome databases and experimental results. Results We have developed a semi-automatic, web-based platform (MicrobesFlux) for generating and reconstructing metabolic models for annotated microorganisms. MicrobesFlux is able to automatically download the metabolic network (including enzymatic reactions and metabolites) of ~1,200 species from the KEGG database (Kyoto Encyclopedia of Genes and Genomes) and then convert it to a metabolic model draft. The platform also provides diverse customized tools, such as gene knockouts and the introduction of heterologous pathways, for users to reconstruct the model network. The reconstructed metabolic network can be formulated as a constraint-based flux model to predict and analyze the carbon fluxes in microbial metabolisms. The simulation results can be exported in the SBML format (The Systems Biology Markup Language). Furthermore, we also demonstrated the platform functionalities by developing an FBA model (including 229 reactions) for a recently annotated bioethanol producer, Thermoanaerobacter sp. strain X514, to predict its biomass growth and ethanol production. Conclusion MicrobesFlux is an installation-free and open-source platform that enables biologists without prior programming knowledge to develop metabolic models for annotated microorganisms in the KEGG
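
    The constraint-based flux balance analysis that MicrobesFlux formulates can be illustrated with a toy network: maximize a biomass flux subject to the steady-state mass balance S·v = 0 and flux bounds. The small stoichiometric matrix below is made up and is not a KEGG-derived model.

```python
# Toy flux balance analysis: maximize biomass flux subject to S v = 0.
# The stoichiometric matrix is a made-up 2-metabolite, 4-reaction network.
import numpy as np
from scipy.optimize import linprog

# Metabolites: A, B. Reactions: uptake (->A), conversion (A->B),
# biomass (B->), byproduct secretion (A->).
S = np.array([
    [ 1, -1,  0, -1],   # A
    [ 0,  1, -1,  0],   # B
])
bounds = [(0, 10),      # uptake limited to 10 mmol/gDW/h
          (0, None),    # conversion
          (0, None),    # biomass
          (0, None)]    # secretion

c = np.zeros(S.shape[1])
c[2] = -1.0             # linprog minimizes, so negate the biomass flux

res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")
print("optimal fluxes:", res.x, "max biomass flux:", -res.fun)
```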

  10. Applying AN Object-Oriented Database Model to a Scientific Database Problem: Managing Experimental Data at Cebaf.

    Science.gov (United States)

    Ehlmann, Bryon K.

    Current scientific experiments are often characterized by massive amounts of very complex data and the need for complex data analysis software. Object-oriented database (OODB) systems have the potential of improving the description of the structure and semantics of this data and of integrating the analysis software with the data. This dissertation results from research to enhance OODB functionality and methodology to support scientific databases (SDBs) and, more specifically, to support a nuclear physics experiments database for the Continuous Electron Beam Accelerator Facility (CEBAF). This research to date has identified a number of problems related to the practical application of OODB technology to the conceptual design of the CEBAF experiments database and other SDBs: the lack of a generally accepted OODB design methodology, the lack of a standard OODB model, the lack of a clear conceptual level in existing OODB models, and the limited support in existing OODB systems for many common object relationships inherent in SDBs. To address these problems, the dissertation describes an Object-Relationship Diagram (ORD) and an Object-oriented Database Definition Language (ODDL) that provide tools that allow SDB design and development to proceed systematically and independently of existing OODB systems. These tools define multi-level, conceptual data models for SDB design, which incorporate a simple notation for describing common types of relationships that occur in SDBs. ODDL allows these relationships and other desirable SDB capabilities to be supported by an extended OODB system. A conceptual model of the CEBAF experiments database is presented in terms of ORDs and the ODDL to demonstrate their functionality and use and provide a foundation for future development of experimental nuclear physics software using an OODB approach.

  11. GIS-based hydrogeological databases and groundwater modelling

    Science.gov (United States)

    Gogu, Radu Constantin; Carabin, Guy; Hallet, Vincent; Peters, Valerie; Dassargues, Alain

    2001-12-01

    Reliability and validity of groundwater analysis strongly depend on the availability of large volumes of high-quality data. Putting all data into a coherent and logical structure supported by a computing environment helps ensure validity and availability and provides a powerful tool for hydrogeological studies. A hydrogeological geographic information system (GIS) database that offers facilities for groundwater-vulnerability analysis and hydrogeological modelling has been designed in Belgium for the Walloon region. Data from five river basins, chosen for their contrasting hydrogeological characteristics, have been included in the database, and a set of applications that have been developed now allow further advances. Interest is growing in the potential for integrating GIS technology and groundwater simulation models. A "loose-coupling" tool was created between the spatial-database scheme and the groundwater numerical model interface GMS (Groundwater Modelling System). Following time and spatial queries, the hydrogeological data stored in the database can be easily used within different groundwater numerical models.

  12. Modelling of physical properties - databases, uncertainties and predictive power

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    Physical and thermodynamic properties, in the form of raw data or estimated values for pure compounds and mixtures, are important pre-requisites for performing tasks such as process design, simulation and optimization; computer aided molecular/mixture (product) design; and product-process analysis. Given the uncertainties in the estimated/predicted property values, how can the quality and reliability of the estimated/predicted property values be assessed? The paper reviews a class of models for prediction of physical and thermodynamic properties of organic chemicals and their mixtures based on the combined group contribution – atom...

  13. Discovery of possible gene relationships through the application of self-organizing maps to DNA microarray databases.

    Science.gov (United States)

    Chavez-Alvarez, Rocio; Chavoya, Arturo; Mendez-Vazquez, Andres

    2014-01-01

    DNA microarrays and cell cycle synchronization experiments have made possible the study of the mechanisms of cell cycle regulation of Saccharomyces cerevisiae by simultaneously monitoring the expression levels of thousands of genes at specific time points. On the other hand, pattern recognition techniques can contribute to the analysis of such massive measurements, providing a model of gene expression level evolution through the cell cycle process. In this paper, we propose the use of one such technique, an unsupervised artificial neural network called a Self-Organizing Map (SOM), which has been successfully applied to processes involving very noisy signals, classifying and organizing them, and assisting in the discovery of behavior patterns without requiring prior knowledge about the process under analysis. As a test bed for the use of SOMs in finding possible relationships among genes and their possible contribution to some biological processes, we selected 282 S. cerevisiae genes that have been shown through biological experiments to have an activity during the cell cycle. The expression level of these genes was analyzed in five of the most cited time series DNA microarray databases used in the study of the cell cycle of this organism. With the use of SOM, it was possible to find clusters of genes with similar behavior in the five databases along two cell cycles. This result suggested that some of these genes might be biologically related or might have a regulatory relationship, as was corroborated by comparing some of the clusters obtained with SOMs against a previously reported regulatory network that was generated using biological knowledge, such as protein-protein interactions, gene expression levels, metabolism dynamics, promoter binding, and modification, regulation and transport of proteins. The methodology described in this paper could be applied to the study of gene relationships of other biological processes in different organisms.
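
    To make the SOM step concrete, the sketch below trains a tiny self-organizing map from scratch on synthetic periodic expression profiles and reports which map unit each gene falls into; the grid size, learning schedule and synthetic profiles are illustrative choices, not those of the study.

```python
# Minimal self-organizing map clustering synthetic cell-cycle expression profiles.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 24)                 # two "cell cycles", 24 time points
phases = rng.choice([0.0, np.pi / 2, np.pi], 60)  # three underlying behaviours
genes = np.array([np.sin(t + p) for p in phases]) + rng.normal(0, 0.2, (60, t.size))

grid = 3                                          # 3x3 map
weights = rng.normal(0, 0.1, (grid * grid, t.size))
coords = np.array([(i, j) for i in range(grid) for j in range(grid)], dtype=float)

for epoch in range(200):
    lr = 0.5 * (1 - epoch / 200)                  # decaying learning rate
    radius = 1.5 * (1 - epoch / 200) + 0.5        # decaying neighbourhood radius
    for x in genes[rng.permutation(len(genes))]:
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))      # best matching unit
        dist = np.linalg.norm(coords - coords[bmu], axis=1)
        h = np.exp(-(dist ** 2) / (2 * radius ** 2))           # neighbourhood function
        weights += lr * h[:, None] * (x - weights)

assignments = [int(np.argmin(((weights - x) ** 2).sum(axis=1))) for x in genes]
print("genes per map unit:", np.bincount(assignments, minlength=grid * grid))
```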

  14. Model Adequacy Analysis of Matching Record Versions in Nosql Databases

    Directory of Open Access Journals (Sweden)

    E. V. Tsviashchenko

    2015-01-01

    Full Text Available The article investigates a model of matching record versions. The goal of this work is to analyse the adequacy of the model, which allows estimating the distribution of a user's processing time for record versions and the distribution of the record version count. The second variant of the model was used, in which the time a client needs to process record versions depends explicitly on the number of updates performed by other users between the sequential updates performed by the current client. In order to prove the model adequacy, a real experiment was conducted in a cloud cluster. The cluster contains 10 virtual nodes provided by DigitalOcean. Ubuntu Server 14.04 was used as the operating system (OS). The NoSQL system Riak was chosen for the experiments. Riak versions 2.0 and later provide the "dotted version vectors" (DVV) option, an extension of the classic vector clock. Its use guarantees that the number of versions stored simultaneously in the DB will not exceed the number of clients operating on a record in parallel, which is very important while conducting experiments. The application was developed using the Java library provided by Riak, and the processes run directly on the nodes. Two records were used in the experiment: Z, the record whose versions are handled by clients, and RZ, a service record that contains record update counters. The application algorithm can be briefly described as follows: every client reads the versions of record Z, processes its updates using the RZ record counters, and saves the treated record in the database while old versions are deleted from the DB. Then the client rereads the RZ record and increments the update counters for the other clients. After that, the client rereads the Z record, saves the necessary statistics, and reports the results of processing. In the case of a conflict caused by simultaneous updates of the RZ record, the client obtains all versions of that

  15. Data model and relational database design for the New England Water-Use Data System (NEWUDS)

    Science.gov (United States)

    Tessler, Steven

    2001-01-01

    The New England Water-Use Data System (NEWUDS) is a database for the storage and retrieval of water-use data. NEWUDS can handle data covering many facets of water use, including (1) tracking various types of water-use activities (withdrawals, returns, transfers, distributions, consumptive-use, wastewater collection, and treatment); (2) the description, classification and location of places and organizations involved in water-use activities; (3) details about measured or estimated volumes of water associated with water-use activities; and (4) information about data sources and water resources associated with water use. In NEWUDS, each water transaction occurs unidirectionally between two site objects, and the sites and conveyances form a water network. The core entities in the NEWUDS model are site, conveyance, transaction/rate, location, and owner. Other important entities include water resources (used for withdrawals and returns), data sources, and aliases. Multiple water-exchange estimates can be stored for individual transactions based on different methods or data sources. Storage of user-defined details is accommodated for several of the main entities. Numerous tables containing classification terms facilitate detailed descriptions of data items and can be used for routine or custom data summarization. NEWUDS handles single-user and aggregate-user water-use data, can be used for large or small water-network projects, and is available as a stand-alone Microsoft Access database structure. Users can customize and extend the database, link it to other databases, or implement the design in other relational database applications.
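
    A reduced sketch of the core entities described (site, conveyance, transaction/rate) is given below using SQLite; the column choices are illustrative simplifications, not the published NEWUDS design.

```python
# Reduced sketch of a water-use data model: sites, conveyances between sites,
# and transactions (volumes) on conveyances. Columns are illustrative only.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE site (
    site_id   INTEGER PRIMARY KEY,
    name      TEXT,
    site_type TEXT          -- e.g. well, treatment plant, outfall
);
CREATE TABLE conveyance (
    conveyance_id INTEGER PRIMARY KEY,
    from_site     INTEGER REFERENCES site(site_id),
    to_site       INTEGER REFERENCES site(site_id)   -- unidirectional link
);
CREATE TABLE water_transaction (
    transaction_id INTEGER PRIMARY KEY,
    conveyance_id  INTEGER REFERENCES conveyance(conveyance_id),
    activity       TEXT,    -- withdrawal, return, transfer, ...
    volume_mgd     REAL,    -- estimated or measured rate
    method         TEXT     -- data source / estimation method
);
""")
db.execute("INSERT INTO site VALUES (1, 'River intake', 'withdrawal point')")
db.execute("INSERT INTO site VALUES (2, 'Town WTP', 'treatment plant')")
db.execute("INSERT INTO conveyance VALUES (1, 1, 2)")
db.execute("INSERT INTO water_transaction VALUES (1, 1, 'withdrawal', 2.4, 'metered')")

for row in db.execute("""SELECT s1.name, s2.name, t.activity, t.volume_mgd
                         FROM water_transaction t
                         JOIN conveyance c ON c.conveyance_id = t.conveyance_id
                         JOIN site s1 ON s1.site_id = c.from_site
                         JOIN site s2 ON s2.site_id = c.to_site"""):
    print(row)
```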

  16. Conceptual data modeling on the KRR-1 and 2 decommissioning database

    International Nuclear Information System (INIS)

    Park, Hee Seoung; Park, Seung Kook; Lee, Kune Woo; Park, Jin Ho

    2002-01-01

    A study of conceptual data modeling to realize the decommissioning database for KRR-1 and 2 was carried out. In this study, the current state of decommissioning databases abroad was investigated as a reference for the database. The scope of the decommissioning database construction was set up based on user requirements. Then, a theory of database construction was established and a classification scheme for the decommissioning information was defined. The facility information, work information, radioactive waste information, and radiological information handled by the decommissioning database were extracted through interviews with an expert group, and the system configuration of the decommissioning database was decided upon. A 17-bit code was produced considering the construction, scheme and information. The results of the conceptual data modeling and the classification scheme will be used as basic data to create a prototype design of the decommissioning database

  17. Data-based modelling of the Earth's dynamic magnetosphere: a review

    Directory of Open Access Journals (Sweden)

    N. A. Tsyganenko

    2013-10-01

    Full Text Available This paper reviews the main advances in the area of data-based modelling of the Earth's distant magnetic field achieved during the last two decades. The essence and the principal goal of the approach is to extract maximum information from available data, using physically realistic and flexible mathematical structures, parameterized by the most relevant and routinely accessible observables. Accordingly, the paper concentrates on three aspects of the modelling: (i) mathematical methods to develop a computational "skeleton" of a model, (ii) spacecraft databases, and (iii) parameterization of the magnetospheric models by the solar wind drivers and/or ground-based indices. The review is followed by a discussion of the main issues concerning further progress in the area, in particular, methods to assess the models' performance and the accuracy of the field line mapping. The material presented in the paper is organized along the lines of the author's Julius Bartels Medal Lecture during the General Assembly 2013 of the European Geosciences Union.

  18. Clinical Prediction Models for Cardiovascular Disease: Tufts Predictive Analytics and Comparative Effectiveness Clinical Prediction Model Database.

    Science.gov (United States)

    Wessler, Benjamin S; Lai Yh, Lana; Kramer, Whitney; Cangelosi, Michael; Raman, Gowri; Lutz, Jennifer S; Kent, David M

    2015-07-01

    Clinical prediction models (CPMs) estimate the probability of clinical outcomes and hold the potential to improve decision making and individualize care. For patients with cardiovascular disease, there are numerous CPMs available although the extent of this literature is not well described. We conducted a systematic review for articles containing CPMs for cardiovascular disease published between January 1990 and May 2012. Cardiovascular disease includes coronary heart disease, heart failure, arrhythmias, stroke, venous thromboembolism, and peripheral vascular disease. We created a novel database and characterized CPMs based on the stage of development, population under study, performance, covariates, and predicted outcomes. There are 796 models included in this database. The number of CPMs published each year is increasing steadily over time. Seven hundred seventeen (90%) are de novo CPMs, 21 (3%) are CPM recalibrations, and 58 (7%) are CPM adaptations. This database contains CPMs for 31 index conditions, including 215 CPMs for patients with coronary artery disease, 168 CPMs for population samples, and 79 models for patients with heart failure. There are 77 distinct index/outcome pairings. Of the de novo models in this database, 450 (63%) report a c-statistic and 259 (36%) report some information on calibration. There is an abundance of CPMs available for a wide assortment of cardiovascular disease conditions, with substantial redundancy in the literature. The comparative performance of these models, the consistency of effects and risk estimates across models and the actual and potential clinical impact of this body of literature is poorly understood. © 2015 American Heart Association, Inc.
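
    Since most de novo models in the database report a c-statistic, the sketch below shows how that discrimination measure (and a crude calibration check) is computed for a CPM's predicted risks; the predictions and outcomes are synthetic.

```python
# Computing the c-statistic (area under the ROC curve) for predicted risks.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
risk_score = rng.uniform(0, 1, 1000)                  # CPM predicted probabilities
outcome = rng.binomial(1, 0.1 + 0.5 * risk_score)     # synthetic observed events

c_statistic = roc_auc_score(outcome, risk_score)
print(f"c-statistic: {c_statistic:.3f}")

# Crude calibration check: mean predicted vs observed risk by decile
deciles = np.digitize(risk_score, np.quantile(risk_score, np.linspace(0.1, 0.9, 9)))
for d in range(10):
    m = deciles == d
    print(d, round(risk_score[m].mean(), 3), round(outcome[m].mean(), 3))
```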

  19. Avibase – a database system for managing and organizing taxonomic concepts

    Directory of Open Access Journals (Sweden)

    Denis Lepage

    2014-06-01

    Full Text Available Scientific names of biological entities offer an imperfect resolution of the concepts that they are intended to represent. Often they are labels applied to entities ranging from entire populations to individual specimens representing those populations, even though such names only unambiguously identify the type specimen to which they were originally attached. Thus the real-life referents of names are constantly changing as biological circumscriptions are redefined and thereby alter the sets of individuals bearing those names. This problem is compounded by other characteristics of names that make them ambiguous identifiers of biological concepts, including emendations, homonymy and synonymy. Taxonomic concepts have been proposed as a way to address issues related to scientific names, but they have yet to receive broad recognition or implementation. Some efforts have been made towards building systems that address these issues by cataloguing and organizing taxonomic concepts, but most are still in conceptual or proof-of-concept stage. We present the on-line database Avibase as one possible approach to organizing taxonomic concepts. Avibase has been successfully used to describe and organize 844,000 species-level and 705,000 subspecies-level taxonomic concepts across every major bird taxonomic checklist of the last 125 years. The use of taxonomic concepts in place of scientific names, coupled with efficient resolution services, is a major step toward addressing some of the main deficiencies in the current practices of scientific name dissemination and use.

  20. Zebrabase: An intuitive tracking solution for aquatic model organisms

    OpenAIRE

    Oltova, Jana; Bartunek, Petr; Machonova, Olga; Svoboda, Ondrej; Skuta, Ctibor; Jindrich, Jindrich

    2018-01-01

    Small fish species, like zebrafish or medaka, are constantly gaining popularity in basic research and disease modeling as a useful alternative to rodent model organisms. However, the tracking options for fish within a facility are rather limited. Here, we present an aquatic species tracking database, Zebrabase, developed in our zebrafish research and breeding facility that represents a practical and scalable solution and an intuitive platform for scientists, fish managers and caretakers, in b...

  1. Geospatial Database for Strata Objects Based on Land Administration Domain Model (ladm)

    Science.gov (United States)

    Nasorudin, N. N.; Hassan, M. I.; Zulkifli, N. A.; Rahman, A. Abdul

    2016-09-01

    Recently in our country, the construction of buildings has become more complex, and a strata objects database is becoming more important for registering the real world as people now own and use multiple levels of space. Furthermore, strata titles are increasingly important and need to be well managed. LADM, also known as ISO 19152, is a standard model for land administration that allows integrated 2D and 3D representation of spatial units. The aim of this paper is to develop a strata objects database using LADM. The paper discusses the current 2D geospatial database and the need for a 3D geospatial database in the future, and attempts to develop a strata objects database using the standard data model (LADM) and to analyze the developed database against that model. The current cadastre system in Malaysia, including strata titles, is also discussed. The problems in the 2D geospatial database are listed, and the need for a future 3D geospatial database is discussed. The processes used to design the strata objects database are conceptual, logical and physical database design. The strata objects database will allow us to find both non-spatial and spatial strata title information and thus shows the location of each strata unit. This development of a strata objects database may help in handling strata titles and related information.

  2. A database for estimating organ dose for coronary angiography and brain perfusion CT scans for arbitrary spectra and angular tube current modulation

    International Nuclear Information System (INIS)

    Rupcich, Franco; Badal, Andreu; Kyprianou, Iacovos; Schmidt, Taly Gilat

    2012-01-01

    Purpose: The purpose of this study was to develop a database for estimating organ dose in a voxelized patient model for coronary angiography and brain perfusion CT acquisitions with any spectra and angular tube current modulation setting. The database enables organ dose estimation for existing and novel acquisition techniques without requiring Monte Carlo simulations. Methods: The study simulated transport of monoenergetic photons between 5 and 150 keV for 1000 projections over 360° through anthropomorphic voxelized female chest and head (0° and 30° tilt) phantoms and standard head and body CTDI dosimetry cylinders. The simulations resulted in tables of normalized dose deposition for several radiosensitive organs quantifying the organ dose per emitted photon for each incident photon energy and projection angle for coronary angiography and brain perfusion acquisitions. The values in a table can be multiplied by an incident spectrum and number of photons at each projection angle and then summed across all energies and angles to estimate total organ dose. Scanner-specific organ dose may be approximated by normalizing the database-estimated organ dose by the database-estimated CTDIvol and multiplying by a physical CTDIvol measurement. Two examples are provided demonstrating how to use the tables to estimate relative organ dose. In the first, the change in breast and lung dose during coronary angiography CT scans is calculated for reduced kVp, angular tube current modulation, and partial angle scanning protocols relative to a reference protocol. In the second example, the change in dose to the eye lens is calculated for a brain perfusion CT acquisition in which the gantry is tilted 30° relative to a nontilted scan. Results: Our database provides tables of normalized dose deposition for several radiosensitive organs irradiated during coronary angiography and brain perfusion CT scans. Validation results indicate total organ doses calculated using our database are
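
    The dose-estimation step described (multiply the monoenergetic, per-projection dose table by the spectrum and the photons per view, then sum over energies and angles) reduces to a tensor contraction; the array shapes and values below are illustrative placeholders, not the published tables.

```python
# Organ dose from a precomputed table D[E, theta] of dose per emitted photon:
# dose = sum_E sum_theta D[E, theta] * spectrum[E] * photons[theta]
# All values are illustrative placeholders, not the published database.
import numpy as np

energies = np.arange(5, 151)                    # keV, 5-150 as in the database
angles = np.linspace(0, 360, 1000, endpoint=False)

dose_table = np.random.default_rng(3).uniform(1e-17, 5e-17,
                                              (energies.size, angles.size))
spectrum = np.exp(-((energies - 60) / 25.0) ** 2)        # toy spectrum shape
spectrum /= spectrum.sum()                               # relative photon fractions
photons = 1e9 * (1 + 0.3 * np.cos(np.radians(angles)))   # angular tube-current modulation

organ_dose = np.einsum("et,e,t->", dose_table, spectrum, photons)
print(f"estimated organ dose (arbitrary units): {organ_dose:.3e}")
```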

  3. Inorganic bromine in organic molecular crystals: Database survey and four case studies

    Science.gov (United States)

    Nemec, Vinko; Lisac, Katarina; Stilinović, Vladimir; Cinčić, Dominik

    2017-01-01

    We present a Cambridge Structural Database and experimental study of multicomponent molecular crystals containing bromine. The CSD study covers the supramolecular behaviour of bromide and tribromide anions as well as halogen-bonded dibromine molecules in crystal structures of organic salts and cocrystals, and a study of the geometries and complexities in polybromide anion systems. In addition, we present four case studies of organic structures with bromide, tribromide and polybromide anions as well as the neutral dibromine molecule. These include the first observed crystal with diprotonated phenazine, a double salt of phenazinium bromide and tribromide, a cocrystal of 4-methoxypyridine with the neutral dibromine molecule as a halogen bond donor, as well as bis(4-methoxypyridine)bromonium polybromide. Structural features of the four case studies are for the most part consistent with the statistically prevalent behaviour indicated by the CSD study for the given bromine species, although they do exhibit some unorthodox structural features and thereby indicate possible supramolecular causes for aberrations from the statistically most abundant (and presumably most favourable) geometries.

  4. Armada: a reference model for an evolving database system

    NARCIS (Netherlands)

    F.E. Groffen (Fabian); M.L. Kersten (Martin); S. Manegold (Stefan)

    2006-01-01

    textabstractThe current database deployment palette ranges from networked sensor-based devices to large data/compute Grids. Both extremes present common challenges for distributed DBMS technology. The local storage per device/node/site is severely limited compared to the total data volume being

  5. Data-based mathematical modeling of vectorial transport across double-transfected polarized cells.

    Science.gov (United States)

    Bartholomé, Kilian; Rius, Maria; Letschert, Katrin; Keller, Daniela; Timmer, Jens; Keppler, Dietrich

    2007-09-01

    Vectorial transport of endogenous small molecules, toxins, and drugs across polarized epithelial cells contributes to their half-life in the organism and to detoxification. To study vectorial transport in a quantitative manner, an in vitro model was used that includes polarized MDCKII cells stably expressing the recombinant human uptake transporter OATP1B3 in their basolateral membrane and the recombinant ATP-driven efflux pump ABCC2 in their apical membrane. These double-transfected cells enabled mathematical modeling of the vectorial transport of the anionic prototype substance bromosulfophthalein (BSP) that has frequently been used to examine hepatobiliary transport. Time-dependent analyses of (3)H-labeled BSP in the basolateral, intracellular, and apical compartments of cells cultured on filter membranes and efflux experiments in cells preloaded with BSP were performed. A mathematical model was fitted to the experimental data. Data-based modeling was optimized by including endogenous transport processes in addition to the recombinant transport proteins. The predominant contributions to the overall vectorial transport of BSP were mediated by OATP1B3 (44%) and ABCC2 (28%). Model comparison predicted a previously unrecognized endogenous basolateral efflux process as a negative contribution to total vectorial transport, amounting to 19%, which is in line with the detection of the basolateral efflux pump Abcc4 in MDCKII cells. Rate-determining steps in the vectorial transport were identified by calculating control coefficients. Data-based mathematical modeling of vectorial transport of BSP as a model substance resulted in a quantitative description of this process and its components. The same systems biology approach may be applied to other cellular systems and to different substances.
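
    A compartmental sketch of the vectorial-transport model (basolateral uptake, apical export, plus an endogenous basolateral efflux term) is given below; the rate constants are illustrative, not the fitted values of the study.

```python
# Three-compartment sketch of vectorial BSP transport across a polarized cell:
# basolateral medium -> cell (OATP1B3 uptake), cell -> apical medium (ABCC2),
# plus an endogenous basolateral efflux back out of the cell.
# Rate constants are illustrative, not fitted parameters.
import numpy as np
from scipy.integrate import solve_ivp

k_uptake, k_apical, k_baso_efflux = 0.8, 0.4, 0.2   # 1/h, illustrative

def rhs(t, y):
    baso, cell, apical = y
    uptake = k_uptake * baso
    export_apical = k_apical * cell
    export_baso = k_baso_efflux * cell
    return [-uptake + export_baso,
            uptake - export_apical - export_baso,
            export_apical]

sol = solve_ivp(rhs, (0, 8), [1.0, 0.0, 0.0], t_eval=np.linspace(0, 8, 9))
for t, (b, c, a) in zip(sol.t, sol.y.T):
    print(f"t={t:3.1f} h  basolateral={b:.3f}  intracellular={c:.3f}  apical={a:.3f}")
```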

  6. Understanding, modeling, and improving main-memory database performance

    OpenAIRE

    Manegold, S.

    2002-01-01

    textabstractDuring the last two decades, computer hardware has experienced remarkable developments. Especially CPU (clock-)speed has been following Moore's Law, i.e., doubling every 18 months; and there is no indication that this trend will change in the foreseeable future. Recent research has revealed that database performance, even with main-memory based systems, can hardly benefit from the ever increasing CPU power. The reason for this is that the performance of other hardware components h...

  7. Relational Database Extension Oriented, Self-adaptive Imagery Pyramid Model

    Directory of Open Access Journals (Sweden)

    HU Zhenghua

    2015-06-01

    Full Text Available With the development of remote sensing technology, and especially the improvement of sensor resolution, the amount of image data is increasing, which places higher demands on managing huge amounts of data efficiently and intelligently. How to access massive remote sensing data efficiently and smartly has therefore become an increasingly popular topic. In this paper, considering the current development status of spatial data management systems, we propose a self-adaptive strategy for image blocking and a method for LoD (level of detail) model construction that adapts to the combination of database storage, network transmission and client hardware. Experiments confirm that this imagery management mechanism achieves intelligent and efficient storage and access under a variety of database, network and client conditions. This study provides a feasible idea and method for efficient image data management, contributing to efficient, database-based access and management of remote sensing image data in a networked C/S architecture.
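
    The pyramid/LoD bookkeeping behind such a strategy can be sketched as follows: given the image size and a tile size, compute how many levels the pyramid needs and how many tiles each level holds. The halving-by-level scheme and the numbers are illustrative conventions, not the specific self-adaptive strategy of the paper.

```python
# Sketch of image-pyramid (LoD) bookkeeping: levels and tile counts for an image.
import math

def pyramid_layout(width, height, tile=256):
    levels = []
    level = 0
    w, h = width, height
    while True:
        cols, rows = math.ceil(w / tile), math.ceil(h / tile)
        levels.append((level, w, h, cols, rows, cols * rows))
        if w <= tile and h <= tile:
            break
        w, h = math.ceil(w / 2), math.ceil(h / 2)   # next coarser LoD level
        level += 1
    return levels

for lvl, w, h, cols, rows, n in pyramid_layout(40000, 30000):
    print(f"level {lvl}: {w}x{h} px, {cols}x{rows} tiles ({n} total)")
```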

  8. Database Administrator

    Science.gov (United States)

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  9. Modeling Spatial Data within Object Relational-Databases

    Directory of Open Access Journals (Sweden)

    Iuliana BOTHA

    2011-03-01

    Full Text Available Spatial data refer to elements that help place a certain object in a certain area. These elements are latitude, longitude, points, geometric figures represented by points, etc. However, when translating these elements into data that can be stored in a computer, it all comes down to numbers. The interesting part that requires attention is how to store them in order to support fast and varied spatial queries. This is where the DBMS (Database Management System) that contains the database comes in. In this paper, we analyze and compare two object-relational DBMSs that work with spatial data: Oracle and PostgreSQL.

  10. Modeling of non-additive mixture properties using the Online CHEmical database and Modeling environment (OCHEM

    Directory of Open Access Journals (Sweden)

    Oprisiu Ioana

    2013-01-01

    Full Text Available Abstract The Online Chemical Modeling Environment (OCHEM, http://ochem.eu) is a web-based platform that provides tools for automation of typical steps necessary to create a predictive QSAR/QSPR model. The platform consists of two major subsystems: a database of experimental measurements and a modeling framework. So far, OCHEM has been limited to the processing of individual compounds. In this work, we extended OCHEM with a new ability to store and model properties of binary non-additive mixtures. The developed system is publicly accessible, meaning that any user on the Web can store new data for binary mixtures and develop models to predict their non-additive properties. The database already contains almost 10,000 data points for the density, bubble point, and azeotropic behavior of binary mixtures. For these data, we developed models for both qualitative (azeotrope/zeotrope) and quantitative endpoints (density and bubble points) using different learning methods and specially developed descriptors for mixtures. The prediction performance of the models was similar to or more accurate than results reported in previous studies. Thus, we have developed and made publicly available a powerful system for modeling mixtures of chemical compounds on the Web.

  11. Scalable Database Design of End-Game Model with Decoupled Countermeasure and Threat Information

    Science.gov (United States)

    2017-11-01

    the Army Modular Active Protection System (MAPS) program to provide end-to-end APS modeling and simulation capabilities. The SSES simulation features...research project of scalable database design was initiated in support of SSES modularization efforts with respect to 4 major software components... Acronyms: Iron Curtain; KE, kinetic energy; MAPS, Modular Active Protective System; OLE DB, object linking and embedding database; RDB, relational database; RPG.

  12. using stereochemistry models in teaching organic compounds

    African Journals Online (AJOL)

    Preferred Customer

    The purpose of the study was to find out the effect of stereochemistry models on students' ... consistent with the names given to organic compounds. Some of ... Considering class level, what is the performance of the students in naming organic.

  13. Organization of central database for implementation of ionizing radiation protection in the Republic of Croatia

    International Nuclear Information System (INIS)

    Kubelka, D.; Svilicic, N.

    2000-01-01

    The paper is intended to give an overview of the situation in the Republic of Croatia resulting from the passing of the new ionizing radiation protection law. Data collecting organization and records keeping structure will be highlighted in particular, as well as data exchange between individual services involved in ionizing radiation protection. The Radiation Protection Act has been prepared in compliance with the international standards and Croatian regulations governing the ionizing radiation protection field. Its enforcement shall probably commence in October 1999, when the necessary bylaws regulating in detail numerous specific and technical issues of particular importance for ionizing radiation protection implementation are expected to be adopted. Within the Croatian Government, the Ministry of Health is in charge of ionizing radiation protection. Such competence is traditional in our country and common throughout the world. This Ministry has authorized three institutions to carry out technical tasks related to radiation protection, such as radiation source inspections and personal dosimetry. Such distribution of work demands coordination of all involved institutions, control of their work and records keeping. The Croatian Radiation Protection Institute has been entrusted to coordinate work of these institutions, control their activities, and set up the central national registry of radiation sources and workers, as well as doses received by the staff during their work. Since the Croatian Radiation Protection Institute is a newly established institution, we could freely determine our operational framework. Due to its publicly accessible source code and wide base of users and developers, the best prospects for stability and long-term accessibility are offered by the Linux operating system. For the database development, Oracle RDBMS was used, partly because Oracle is a leading manufacturer of database management systems, and partly because our staff is very familiar

  15. Database principles programming performance

    CERN Document Server

    O'Neil, Patrick

    2014-01-01

    Database: Principles Programming Performance provides an introduction to the fundamental principles of database systems. This book focuses on database programming and the relationships between principles, programming, and performance. Organized into 10 chapters, this book begins with an overview of database design principles and presents a comprehensive introduction to the concepts used by a DBA. This text then provides grounding in many abstract concepts of the relational model. Other chapters introduce SQL, describing its capabilities and covering the statements and functions of the programmi

  16. The commercial use of segmentation and predictive modeling techniques for database marketing in the Netherlands

    NARCIS (Netherlands)

    Verhoef, PC; Spring, PN; Hoekstra, JC; Leeflang, PSH

    Although the application of segmentation and predictive modeling is an important topic in the database marketing (DBM) literature, no study has yet investigated the extent of adoption of these techniques. We present the results of a Dutch survey involving 228 database marketing companies. We find

  17. Tree-Structured Digital Organisms Model

    Science.gov (United States)

    Suzuki, Teruhiko; Nobesawa, Shiho; Tahara, Ikuo

    Tierra and Avida are well-known models of digital organisms. They describe a life process as a sequence of computation codes. A linear sequence model may not be the only way to describe a digital organism, though it is very simple for a computer-based model. Thus we propose a new digital organism model based on a tree structure, which is rather similar to genetic programming. In our model, a life process is a combination of various functions, as life in the real world is. This implies that our model can easily describe the hierarchical structure of life, and it can simulate evolutionary computation through the mutual interaction of functions. We verified through simulations that our model can be regarded as a digital organism model according to its definitions. Our model even succeeded in creating species such as viruses and parasites.
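
    As an illustration of the tree-structured representation (not the authors' implementation), the sketch below encodes an organism as a tree of elementary functions, evaluates it, and applies a random point mutation.

```python
# Toy tree-structured "digital organism": a tree of functions over one input.
import random
import operator

FUNCS = [("add", operator.add), ("mul", operator.mul), ("max", max)]
TERMINALS = ["x", 1.0, 2.0]

def random_tree(depth=3):
    if depth <= 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    name, fn = random.choice(FUNCS)
    return (name, fn, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(node, x):
    if node == "x":
        return x
    if isinstance(node, float):
        return node
    _, fn, left, right = node
    return fn(evaluate(left, x), evaluate(right, x))

def mutate(node, depth=3):
    """Point mutation: replace a random subtree with a fresh one."""
    if not isinstance(node, tuple) or random.random() < 0.2:
        return random_tree(depth)
    name, fn, left, right = node
    if random.random() < 0.5:
        return (name, fn, mutate(left, depth - 1), right)
    return (name, fn, left, mutate(right, depth - 1))

random.seed(4)
organism = random_tree()
print("f(2) =", evaluate(organism, 2.0))
print("mutant f(2) =", evaluate(mutate(organism), 2.0))
```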

  18. Designing the database for a reliability aware Model-Based System Engineering process

    International Nuclear Information System (INIS)

    Cressent, Robin; David, Pierre; Idasiak, Vincent; Kratz, Frederic

    2013-01-01

    This article outlines the need for a reliability database to implement model-based descriptions of component failure modes and dysfunctional behaviors. We detail the requirements such a database should honor and describe our own solution: the Dysfunctional Behavior Database (DBD). Through the description of its meta-model, the benefits of integrating the DBD into the system design process are highlighted. The main advantages depicted are the possibility to manage feedback knowledge at various granularity and semantic levels and to drastically ease the interactions between system engineering activities and reliability studies. The compliance of the DBD with other reliability databases such as FIDES is presented and illustrated. - Highlights: ► Model-Based System Engineering is increasingly used in industry. ► This results in a need for a reliability database able to deal with model-based descriptions of dysfunctional behavior. ► The Dysfunctional Behavior Database aims to fulfill that need. ► It helps with feedback management thanks to its structured meta-model. ► The DBD can profit from other reliability databases such as FIDES.

  19. Air Quality Modelling and the National Emission Database

    DEFF Research Database (Denmark)

    Jensen, S. S.

    The project focuses on development of institutional strengthening to be able to carry out national air emission inventories based on the CORINAIR methodology. The present report describes the link between emission inventories and air quality modelling to ensure that the new national air emission...... inventory is able to take into account the data requirements of air quality models...

  20. Modelling organic particles in the atmosphere

    International Nuclear Information System (INIS)

    Couvidat, Florian

    2012-01-01

Organic aerosol formation in the atmosphere is investigated via the development of a new model named H2O (Hydrophilic/Hydrophobic Organics). First, a parameterization is developed to take into account secondary organic aerosol formation from isoprene oxidation. It takes into account the effect of nitrogen oxides on organic aerosol formation and the hydrophilic properties of the aerosols. This parameterization is then implemented in H2O along with some other developments and the results of the model are compared to organic carbon measurements over Europe. Model performance is greatly improved by taking into account emissions of primary semi-volatile compounds, which can form secondary organic aerosols after oxidation or can condense when temperature decreases. If those emissions are not taken into account, a significant underestimation of organic aerosol concentrations occurs in winter. The formation of organic aerosols over an urban area was also studied by simulating organic aerosol concentrations over the Paris area during the summer campaign of Megapoli (July 2009). H2O gives satisfactory results over the Paris area, although a peak of organic aerosol concentrations from traffic, which does not appear in the measurements, appears in the model simulation during rush hours. It could be due to an underestimation of the volatility of organic aerosols. It is also possible that primary and secondary organic compounds do not mix well together and that primary semi-volatile compounds do not condense on an organic aerosol that is mostly secondary and highly oxidized. Finally, the impact of aqueous-phase chemistry was studied. The mechanism for the formation of secondary organic aerosol includes in-cloud oxidation of glyoxal, methylglyoxal, methacrolein and methylvinylketone, formation of methyltetrols in the aqueous phase of particles and cloud droplets, and the in-cloud aging of organic aerosols. The impact of wet deposition is also studied to better estimate the

  1. Liver Transplantation for Urea Cycle Disorders: Analysis of the United Network for Organ Sharing Database.

    Science.gov (United States)

    Yu, L; Rayhill, S C; Hsu, E K; Landis, C S

    2015-10-01

    Urea cycle disorders (UCD) are caused by rare inherited defects in the urea cycle enzymes leading to diminished ability to convert ammonia to urea in the liver. The resulting excess of circulating ammonia can lead to central nervous system toxicity and irreversible neurologic damage. Most cases are identified in children. However, UCDs can also be diagnosed in adulthood, and liver transplant is occasionally required. We examined the UNOS database to evaluate outcomes in adult and pediatric patients who underwent liver transplant as treatment for a UCD. We identified 265 pediatric and 13 adult patients who underwent liver transplant for a UCD between 1987 and 2010. The majority (68%) of these patients were transplanted before age 5 years. Ornithine transcarbamylase (OTC) deficiency was the most common UCD in both adults and children who underwent transplant. UCD patients who underwent liver transplant were younger, more likely to be male (67%), had lower pediatric end-stage liver disease/model for end-stage liver disease scores, and were more likely to be Caucasian or Asian compared with all other patients transplanted during the same time period. UCD patients did not have an increased utilization of living donor transplantation in this US cohort. Univariate and multivariate risk factor analyses were performed and did not reveal any significant factors that were predictive of post-transplant death or graft loss. Excellent outcomes were seen in both children and adults with UCDs who underwent transplant with overall 1-, 5-, and 10-year survivals of 93%, 89%, and 87%, respectively. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. Database specification for the Worldwide Port System (WPS) Regional Integrated Cargo Database (ICDB)

    Energy Technology Data Exchange (ETDEWEB)

    Faby, E.Z.; Fluker, J.; Hancock, B.R.; Grubb, J.W.; Russell, D.L. [Univ. of Tennessee, Knoxville, TN (United States); Loftis, J.P.; Shipe, P.C.; Truett, L.F. [Oak Ridge National Lab., TN (United States)

    1994-03-01

    This Database Specification for the Worldwide Port System (WPS) Regional Integrated Cargo Database (ICDB) describes the database organization and storage allocation, provides the detailed data model of the logical and physical designs, and provides information for the construction of parts of the database such as tables, data elements, and associated dictionaries and diagrams.

  3. Development and organization of scientific methodology and information databases for nuclear technology calculations

    International Nuclear Information System (INIS)

    Gritzay, O.; Kalchenko, O.

    2010-01-01

Full text: Scientific support of NPPs has to cover several important aspects of scientific and organizational activity, namely: (1) training of a group of highly skilled specialists to carry out nuclear data generation for engineering calculations, engineering calculations to ensure the safe operation of NPPs, and experimental-calculational support of fluence dosimetry at NPPs; (2) development of an up-to-date computer base equipped with the program packages necessary for nuclear data generation and engineering calculations; (3) maintenance of the updated libraries of evaluated nuclear data (ENDF), such as ENDF/B-VII (USA), JENDL-3.3 (Japan), JEFF-3.1 (Europe) and RUSFOND (Russia), and, as a result, generation of specialized multi-group nuclear data libraries for special-purpose engineering calculations. To reach these purposes, the Ukrainian Nuclear Data Center (UKRNDC) has been organized and developed over more than 10 years (since 1996). The capabilities of the UKRNDC are as follows: modern ENDF libraries, first of all the general-purpose libraries such as ENDF/B-7.0, -6.8, JEFF-3.1.1 and JENDL-3.3, which contain recommended, evaluated cross sections, spectra, angular distributions, fission product yields, photo-atomic and thermal scattering law data, with emphasis on neutron-induced reactions; codes for processing these data, updated to the latest versions of ENDF and other libraries, first of all the PREPRO 2007 package (updated March 17, 2007) and the NJOY package updated to versions NJOY-158 and NJOY-253 (in 2009), which can produce multi-group data for the needed spectrum of interacting particles (neutrons, protons, gammas) and temperatures; a computer base of several specialized server stations, such as ESCALA S120 (analogous to IBM 240 with a RISC 6000 processor) operating under OS UNIX (AIX version 5.1) and IBM PCs operating under Linux Red Hat 7.2; and a set of PCs joined in the UKRNDC network, operating mainly under OS Windows

  4. Solid waste projection model: Database user's guide (Version 1.0)

    International Nuclear Information System (INIS)

    Carr, F.; Stiles, D.

    1991-01-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC) specifically to address Hanford solid waste management issues. This document is one of a set of documents supporting the SWPM system and providing instructions in the use and maintenance of SWPM components. This manual contains instructions for preparing to use Version 1 of the SWPM database, for entering and maintaining data, and for performing routine database functions. This document supports only those operations which are specific to SWPM database menus and functions, and does not provide instructions in the use of Paradox, the database management system in which the SWPM database is established. 3 figs., 1 tab

  5. Solid Waste Projection Model: Database user's guide (Version 1.3)

    International Nuclear Information System (INIS)

    Blackburn, C.L.

    1991-11-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC) specifically to address Hanford solid waste management issues. This document is one of a set of documents supporting the SWPM system and providing instructions in the use and maintenance of SWPM components. This manual contains instructions for preparing to use Version 1.3 of the SWPM database, for entering and maintaining data, and for performing routine database functions. This document supports only those operations which are specific to SWPM database menus and functions and does not provide instruction in the use of Paradox, the database management system in which the SWPM database is established

  6. Table of Cluster and Organism Species Number - Gclust Server | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Gclust Server Table of Cluster and Organism Species Number Data detail Data name Table of Cluster and Organism...resentative sequence ID of cluster, its length, the number of sequences contained in the cluster, organism s...pecies, the number of sequences belonging to the cluster for each of 95 organism ...t Us Table of Cluster and Organism Species Number - Gclust Server | LSDB Archive ...

  7. Evaluated experimental database on critical heat flux in WWER FA models

    International Nuclear Information System (INIS)

    Artamonov, S.; Sergeev, V.; Volkov, S.

    2015-01-01

The paper presents the description of the evaluated experimental database on critical heat flux in WWER FA models of new designs. This database was developed on the basis of the experimental data obtained in the years 2009-2012. In the course of its development, the database was reviewed in terms of completeness of the information about the experiments and its compliance with the requirements of Rostekhnadzor regulatory documents. The description of the experimental FA model characteristics and experimental conditions was specified. Besides, the experimental data were statistically processed with the aim of rejecting incorrect ones, and the sets of experimental data on critical heat fluxes (CHF) were compared for different FA models. As a result, for the first time, the evaluated database on CHF in FA models of new designs was developed; it is complemented with analysis functions, and its main purpose is to be used in the process of development, verification and upgrading of calculation techniques. The developed database incorporates the data of 4183 experimental conditions obtained in 53 WWER FA models of various designs. Keywords: WWER reactor, fuel assembly, CHF, evaluated experimental data, database, statistical analysis. (author)

  8. An online database for informing ecological network models: http://kelpforest.ucsc.edu.

    Science.gov (United States)

    Beas-Luna, Rodrigo; Novak, Mark; Carr, Mark H; Tinker, Martin T; Black, August; Caselle, Jennifer E; Hoban, Michael; Malone, Dan; Iles, Alison

    2014-01-01

Ecological network models and analyses are recognized as valuable tools for understanding the dynamics and resiliency of ecosystems, and for informing ecosystem-based approaches to management. However, few databases exist that can provide the life history, demographic and species interaction information necessary to parameterize ecological network models. Faced with the difficulty of synthesizing the information required to construct models for kelp forest ecosystems along the West Coast of North America, we developed an online database (http://kelpforest.ucsc.edu/) to facilitate the collation and dissemination of such information. Many of the database's attributes are novel, yet the structure is applicable and adaptable to other ecosystem modeling efforts. Information for each taxonomic unit includes stage-specific life history, demography, and body-size allometries. Species interactions include trophic, competitive, facilitative, and parasitic forms. Each data entry is temporally and spatially explicit. The online data entry interface allows researchers anywhere to contribute and access information. Quality control is facilitated by attributing each entry to unique contributor identities and source citations. The database has proven useful as an archive of species and ecosystem-specific information in the development of several ecological network models, for informing management actions, and for education purposes (e.g., undergraduate and graduate training). To facilitate adaptation of the database by other researchers for other ecosystems, the code and technical details on how to customize this database and apply it to other ecosystems are freely available and located at the following link (https://github.com/kelpforest-cameo/databaseui).

  9. Completion of autobuilt protein models using a database of protein fragments

    International Nuclear Information System (INIS)

    Cowtan, Kevin

    2012-01-01

Two developments in the process of automated protein model building in the Buccaneer software are presented: the use of a database of protein fragments to improve model completeness and the assembly of disconnected chain fragments into complete molecules. A general-purpose library for protein fragments of arbitrary size is described, with a highly optimized search method allowing the use of a larger database than in previous work. The problem of assembling an autobuilt model into complete chains is then discussed; this involves the assembly of disconnected chain fragments into complete molecules and the use of the fragment database to improve model completeness. Assembly of fragments into molecules is a standard step in existing model-building software, but the methods have not received detailed discussion in the literature

  10. ExtraTrain: a database of Extragenic regions and Transcriptional information in prokaryotic organisms

    Science.gov (United States)

    Pareja, Eduardo; Pareja-Tobes, Pablo; Manrique, Marina; Pareja-Tobes, Eduardo; Bonal, Javier; Tobes, Raquel

    2006-01-01

    Background Transcriptional regulation processes are the principal mechanisms of adaptation in prokaryotes. In these processes, the regulatory proteins and the regulatory DNA signals located in extragenic regions are the key elements involved. As all extragenic spaces are putative regulatory regions, ExtraTrain covers all extragenic regions of available genomes and regulatory proteins from bacteria and archaea included in the UniProt database. Description ExtraTrain provides integrated and easily manageable information for 679816 extragenic regions and for the genes delimiting each of them. In addition ExtraTrain supplies a tool to explore extragenic regions, named Palinsight, oriented to detect and search palindromic patterns. This interactive visual tool is totally integrated in the database, allowing the search for regulatory signals in user defined sets of extragenic regions. The 26046 regulatory proteins included in ExtraTrain belong to the families AraC/XylS, ArsR, AsnC, Cold shock domain, CRP-FNR, DeoR, GntR, IclR, LacI, LuxR, LysR, MarR, MerR, NtrC/Fis, OmpR and TetR. The database follows the InterPro criteria to define these families. The information about regulators includes manually curated sets of references specifically associated to regulator entries. In order to achieve a sustainable and maintainable knowledge database ExtraTrain is a platform open to the contribution of knowledge by the scientific community providing a system for the incorporation of textual knowledge. Conclusion ExtraTrain is a new database for exploring Extragenic regions and Transcriptional information in bacteria and archaea. ExtraTrain database is available at . PMID:16539733

  11. Hydraulic fracture propagation modeling and data-based fracture identification

    Science.gov (United States)

    Zhou, Jing

    Successful shale gas and tight oil production is enabled by the engineering innovation of horizontal drilling and hydraulic fracturing. Hydraulically induced fractures will most likely deviate from the bi-wing planar pattern and generate complex fracture networks due to mechanical interactions and reservoir heterogeneity, both of which render the conventional fracture simulators insufficient to characterize the fractured reservoir. Moreover, in reservoirs with ultra-low permeability, the natural fractures are widely distributed, which will result in hydraulic fractures branching and merging at the interface and consequently lead to the creation of more complex fracture networks. Thus, developing a reliable hydraulic fracturing simulator, including both mechanical interaction and fluid flow, is critical in maximizing hydrocarbon recovery and optimizing fracture/well design and completion strategy in multistage horizontal wells. A novel fully coupled reservoir flow and geomechanics model based on the dual-lattice system is developed to simulate multiple nonplanar fractures' propagation in both homogeneous and heterogeneous reservoirs with or without pre-existing natural fractures. Initiation, growth, and coalescence of the microcracks will lead to the generation of macroscopic fractures, which is explicitly mimicked by failure and removal of bonds between particles from the discrete element network. This physics-based modeling approach leads to realistic fracture patterns without using the empirical rock failure and fracture propagation criteria required in conventional continuum methods. Based on this model, a sensitivity study is performed to investigate the effects of perforation spacing, in-situ stress anisotropy, rock properties (Young's modulus, Poisson's ratio, and compressive strength), fluid properties, and natural fracture properties on hydraulic fracture propagation. In addition, since reservoirs are buried thousands of feet below the surface, the

  12. The European fossil-fuelled power station database used in the SEI CASM model

    International Nuclear Information System (INIS)

    Bailey, P.

    1996-01-01

The database contains details of power stations in Europe that burn fossil fuels. All countries are covered from Ireland to the European region of Russia as far as the Urals. The following data are given for each station: location (country and EMEP square), capacity (net MWe and boiler size), year of commissioning, and fuels burnt. A listing of the database is included in the report. The database is primarily used for estimation of emissions and abatement costs of sulfur and nitrogen oxides in the SEI acid rain model CASM. 24 refs, tabs

  13. The European fossil-fuelled power station database used in the SEI CASM model

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, P. [comp.] [Stockholm Environment Inst. at York (United Kingdom)

    1996-06-01

The database contains details of power stations in Europe that burn fossil fuels. All countries are covered from Ireland to the European region of Russia as far as the Urals. The following data are given for each station: location (country and EMEP square), capacity (net MWe and boiler size), year of commissioning, and fuels burnt. A listing of the database is included in the report. The database is primarily used for estimation of emissions and abatement costs of sulfur and nitrogen oxides in the SEI acid rain model CASM. 24 refs, tabs

  14. Organic production in a dynamic CGE model

    DEFF Research Database (Denmark)

    Jacobsen, Lars Bo

    2004-01-01

Concerns about the impact of modern agriculture on the environment have in recent years led to an interest in supporting the development of organic farming. In addition to environmental benefits, the aim is to encourage the provision of other "multifunctional" properties of organic farming such as rural amenities and rural development that are spillover benefits additional to the supply of food. In this paper we further develop an existing dynamic general equilibrium model of the Danish economy to specifically incorporate organic farming. In the model and input-output data each primary ...... [When converting land used] for conventional production into land for organic production, a period of two years must pass before the land being transformed can be used for organic production. During that time, the land is counted as land of the organic industry, but it can only produce the conventional product. To handle this rule, we make......

  15. MINIZOO in de Benelux : Structure and use of a database of skin irritating organisms

    NARCIS (Netherlands)

    Bronswijk, van J.E.M.H.; Reichl, E.R.

    1986-01-01

MINIZOO database is structured within the standard software package SIRv2 (= Scientific Information Retrieval version 2). This flexible program is installed on the university mainframe (a CYBER 180). The program dBASE II, employed on a microcomputer (MICROSOL), can be used for part of data entry and

  16. MAPS: The Organization of a Spatial Database System Using Imagery, Terrain, and Map Data

    Science.gov (United States)

    1983-06-01

segments which share the same pixel position. Finally, in any large system, a logical partitioning of the database must be performed in order to avoid...

  17. CycleBase.org - a comprehensive multi-organism online database of cell-cycle experiments

    DEFF Research Database (Denmark)

    Gauthier, Nicholas Paul; Larsen, Malene Erup; Wernersson, Rasmus

    2007-01-01

    The past decade has seen the publication of a large number of cell-cycle microarray studies and many more are in the pipeline. However, data from these experiments are not easy to access, combine and evaluate. We have developed a centralized database with an easy-to-use interface, Cyclebase...

  18. Content-based organization of the information space in multi-database networks

    NARCIS (Netherlands)

    Papazoglou, M.; Milliner, S.

    1998-01-01

Abstract. Rapid growth in the volume of network-available data, together with the complexity, diversity and terminological fluctuations of different data sources, renders effective access to network-accessible information increasingly difficult to achieve. The situation is particularly cumbersome for users of multi-database systems who

  19. Database and modeling assessments of the CANDU 3, PIUS, ALMR, and MHTGR designs

    International Nuclear Information System (INIS)

    Carlson, D.E.; Meyer, R.O.

    1994-01-01

    As part of the research program to support the preapplication reviews of the CANDU 3, PIUS, ALMR, and MHTGR designs, the NRC has completed preliminary assessments of databases and modeling capabilities. To ensure full coverage of all four designs, a detailed assessment methodology was developed that follows the broad logic of the NRC's Code Scaling, Applicability, and Uncertainty (CSAU) methodology. This paper describes the methodology of the database assessments and presents examples of the assessment process using preliminary results for the ALMR design

  20. Cardiac Electromechanical Models: From Cell to Organ

    Directory of Open Access Journals (Sweden)

    Natalia A Trayanova

    2011-08-01

Full Text Available The heart is a multiphysics and multiscale system that has driven the development of the most sophisticated mathematical models at the frontiers of computational physiology and medicine. This review focuses on electromechanical (EM) models of the heart from the molecular level of myofilaments to anatomical models of the organ. Because of the coupling in terms of function and emergent behaviors at each level of biological hierarchy, separation of behaviors at a given scale is difficult. Here, a separation is drawn at the cell level so that the first half addresses subcellular/single-cell models and the second half addresses organ models. At the subcellular level, myofilament models represent actin-myosin interaction and Ca-based activation. Myofilament models and their refinements represent an overview of the development in the field. The discussion of specific models emphasizes the roles of cooperative mechanisms and sarcomere length dependence of contraction force, considered the cellular basis of the Frank-Starling law. A model of electrophysiology and Ca handling can be coupled to a myofilament model to produce an EM cell model, and representative examples are summarized to provide an overview of the progression of the field. The second half of the review covers organ-level models that require solution of the electrical component as a reaction-diffusion system and the mechanical component, in which active tension generated by the myocytes produces deformation of the organ as described by the equations of continuum mechanics. As outlined in the review, different organ-level models have chosen to use different ionic and myofilament models depending on the specific application; this choice has been largely dictated by compromises between model complexity and computational tractability. The review also addresses application areas of EM models such as cardiac resynchronization therapy and the role of mechano-electric coupling in arrhythmias and
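
To make the coupling described in this abstract concrete, a common (and deliberately simplified) organ-level formulation pairs a monodomain reaction-diffusion equation for the transmembrane potential with a quasi-static stress balance in which the myofilament model supplies active tension along the fiber direction. The notation below is generic and illustrative, not taken from any specific model in the review.

```latex
% Generic monodomain electrophysiology coupled to quasi-static finite-elasticity mechanics
% (illustrative notation; not from any particular model discussed in the review).
\begin{align}
  \chi \left( C_m \frac{\partial V_m}{\partial t} + I_{\mathrm{ion}}(V_m,\mathbf{u}) \right)
      &= \nabla \cdot \bigl( \boldsymbol{\sigma}_i \nabla V_m \bigr), \\
  \nabla \cdot \bigl( \mathbf{F}\,\mathbf{S} \bigr) = \mathbf{0}, \qquad
  \mathbf{S} &= \mathbf{S}_{\mathrm{passive}}
             + T_a\bigl([\mathrm{Ca}^{2+}]_i,\lambda\bigr)\, \mathbf{f}\otimes\mathbf{f}.
\end{align}
% V_m: transmembrane potential; u: gating/Ca state; chi: surface-to-volume ratio;
% sigma_i: conductivity tensor; F: deformation gradient; S: second Piola-Kirchhoff stress;
% T_a: active tension from the myofilament model along fiber direction f; lambda: fiber stretch.
```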

  1. Project-matrix models of marketing organization

    Directory of Open Access Journals (Sweden)

    Gutić Dragutin

    2009-01-01

Full Text Available Unlike the theory and practice of corporate organization, marketing organization has to this day not developed the numerous forms and contents available to it. It can safely be said that marketing organization in most of our companies, and in almost all its parts, noticeably lags behind corporate organization. Marketing managers have always been occupied with basic, narrow marketing activities such as sales growth, market analysis, market growth and market share, marketing research, introduction of new products, modification of products, promotion, distribution, etc. They have rarely found it necessary to pay more attention to other aspects of marketing management, for example marketing planning and marketing control, or marketing organization and leadership. This paper deals with aspects of project-matrix marketing organization management. Two-dimensional and multi-dimensional models are presented. Among the two-dimensional models, the following are analyzed: market management/product management; product management/management of product lifecycle phases on the market; customer management/marketing functions management; demand management/marketing functions management; and market position management/marketing functions management.

  2. Complex Systems and Self-organization Modelling

    CERN Document Server

    Bertelle, Cyrille; Kadri-Dahmani, Hakima

    2009-01-01

    The concern of this book is the use of emergent computing and self-organization modelling within various applications of complex systems. The authors focus their attention both on the innovative concepts and implementations in order to model self-organizations, but also on the relevant applicative domains in which they can be used efficiently. This book is the outcome of a workshop meeting within ESM 2006 (Eurosis), held in Toulouse, France in October 2006.

  3. Information structure design for databases a practical guide to data modelling

    CERN Document Server

    Mortimer, Andrew J

    2014-01-01

    Computer Weekly Professional Series: Information Structure Design for Databases: A Practical Guide to Data modeling focuses on practical data modeling covering business and information systems. The publication first offers information on data and information, business analysis, and entity relationship model basics. Discussions cover degree of relationship symbols, relationship rules, membership markers, types of information systems, data driven systems, cost and value of information, importance of data modeling, and quality of information. The book then takes a look at entity relationship mode

  4. Product Licenses Database Application

    CERN Document Server

    Tonkovikj, Petar

    2016-01-01

The goal of this project is to organize and centralize the data about software tools available to CERN employees, as well as provide a system that simplifies the license management process by providing information about the available licenses and their expiry dates. The project development process consists of two steps: modeling the products (software tools), product licenses, legal agreements and other data related to these entities in a relational database, and developing the front-end user interface so that the user can interact with the database. The result is an ASP.NET MVC web application with interactive views for displaying and managing the data in the underlying database.
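
To illustrate the kind of relational model the abstract describes, here is a minimal sketch using hypothetical table and column names (product, license, expires_on); SQLite stands in for the actual CERN database, and the expiry query mirrors the license-management use case mentioned above.

```python
# Minimal sketch of a product/license relational model (names are hypothetical,
# not taken from the CERN project); SQLite stands in for the real database.
import sqlite3
from datetime import date

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE product (
    id     INTEGER PRIMARY KEY,
    name   TEXT NOT NULL,
    vendor TEXT
);
CREATE TABLE license (
    id         INTEGER PRIMARY KEY,
    product_id INTEGER NOT NULL REFERENCES product(id),
    agreement  TEXT,
    seats      INTEGER,
    expires_on DATE NOT NULL
);
""")
con.execute("INSERT INTO product (id, name, vendor) VALUES (1, 'ExampleTool', 'ExampleVendor')")
con.execute("INSERT INTO license (product_id, agreement, seats, expires_on) "
            "VALUES (1, 'Site agreement', 100, '2016-12-31')")

# License-management view: licenses expiring within 90 days of a given date.
rows = con.execute("""
    SELECT p.name, l.seats, l.expires_on
    FROM license l JOIN product p ON p.id = l.product_id
    WHERE l.expires_on <= date(?, '+90 days')
    ORDER BY l.expires_on
""", (date(2016, 11, 1).isoformat(),)).fetchall()
print(rows)
```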

  5. Modeling Powered Aerodynamics for the Orion Launch Abort Vehicle Aerodynamic Database

    Science.gov (United States)

    Chan, David T.; Walker, Eric L.; Robinson, Philip E.; Wilson, Thomas M.

    2011-01-01

    Modeling the aerodynamics of the Orion Launch Abort Vehicle (LAV) has presented many technical challenges to the developers of the Orion aerodynamic database. During a launch abort event, the aerodynamic environment around the LAV is very complex as multiple solid rocket plumes interact with each other and the vehicle. It is further complicated by vehicle separation events such as between the LAV and the launch vehicle stack or between the launch abort tower and the crew module. The aerodynamic database for the LAV was developed mainly from wind tunnel tests involving powered jet simulations of the rocket exhaust plumes, supported by computational fluid dynamic simulations. However, limitations in both methods have made it difficult to properly capture the aerodynamics of the LAV in experimental and numerical simulations. These limitations have also influenced decisions regarding the modeling and structure of the aerodynamic database for the LAV and led to compromises and creative solutions. Two database modeling approaches are presented in this paper (incremental aerodynamics and total aerodynamics), with examples showing strengths and weaknesses of each approach. In addition, the unique problems presented to the database developers by the large data space required for modeling a launch abort event illustrate the complexities of working with multi-dimensional data.

  6. An Object-Relational Ifc Storage Model Based on Oracle Database

    Science.gov (United States)

    Li, Hang; Liu, Hua; Liu, Yong; Wang, Yuan

    2016-06-01

As building models become increasingly complicated, the level of collaboration across professionals attracts more attention in the architecture, engineering and construction (AEC) industry. To support this change, buildingSMART developed the Industry Foundation Classes (IFC) to facilitate interoperability between software platforms. However, IFC data are currently shared in the form of text files, which has drawbacks. In this paper, considering the object-based inheritance hierarchy of IFC and the storage features of different database management systems (DBMS), we propose a novel object-relational storage model that uses an Oracle database to store IFC data. Firstly, we establish the mapping rules between data types in the IFC specification and the Oracle database. Secondly, we design the IFC database according to the relationships among IFC entities. Thirdly, we parse the IFC file and extract the IFC data. And lastly, we store the IFC data into the corresponding tables in the IFC database. In our experiments, three different building models are selected to demonstrate the effectiveness of our storage model. The comparison of experimental statistics proves that IFC data are lossless during data exchange.
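
A rough sketch of the parse-and-store steps, under stated assumptions: SQLite stands in for Oracle, the IFC snippet is fabricated for illustration, and each STEP instance line of the form "#id=ENTITY(...);" is reduced to an (id, entity type, raw attributes) row rather than the full object-relational mapping proposed in the paper.

```python
# Simplified sketch of parsing IFC STEP lines and storing them relationally
# (sqlite3 as a stand-in for Oracle; snippet and table layout are illustrative only).
import re
import sqlite3

ifc_snippet = """
#1=IFCPROJECT('2O_gXcVsr3BufV9HvGr8zA',#2,'Sample project',$,$,$,$,(#7),#3);
#20=IFCWALL('1s3Qc7$qLBPuXy0wNQbfIk',#2,'Wall-001',$,$,#21,#22,$);
#30=IFCDOOR('0jf0kWdXj5IepVCy7ZkPFp',#2,'Door-001',$,$,#31,#32,$,2.1,0.9);
"""

# One STEP instance per line: "#<id>=<ENTITY>(<attribute list>);"
STEP_LINE = re.compile(r"#(\d+)\s*=\s*(\w+)\s*\((.*)\);")

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE ifc_instance (
                   step_id     INTEGER PRIMARY KEY,
                   entity_type TEXT NOT NULL,
                   attributes  TEXT)""")

for line in ifc_snippet.splitlines():
    m = STEP_LINE.match(line.strip())
    if m:
        step_id, entity, attrs = int(m.group(1)), m.group(2), m.group(3)
        con.execute("INSERT INTO ifc_instance VALUES (?, ?, ?)", (step_id, entity, attrs))

# Query back the stored building elements.
for row in con.execute("SELECT step_id, entity_type FROM ifc_instance ORDER BY step_id"):
    print(row)
```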

  7. A data model and database for high-resolution pathology analytical image informatics

    Directory of Open Access Journals (Sweden)

    Fusheng Wang

    2011-01-01

Full Text Available Background: The systematic analysis of imaged pathology specimens often results in a vast amount of morphological information at both the cellular and sub-cellular scales. While microscopy scanners and computerized analysis are capable of capturing and analyzing data rapidly, microscopy image data remain underutilized in research and clinical settings. One major obstacle which tends to reduce wider adoption of these new technologies throughout the clinical and scientific communities is the challenge of managing, querying, and integrating the vast amounts of data resulting from the analysis of large digital pathology datasets. This paper presents a data model, which addresses these challenges, and demonstrates its implementation in a relational database system. Context: This paper describes a data model, referred to as Pathology Analytic Imaging Standards (PAIS), and a database implementation, which are designed to support the data management and query requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines on whole-slide images and tissue microarrays (TMAs). Aims: (1) Development of a data model capable of efficiently representing and storing virtual slide related image, annotation, markup, and feature information. (2) Development of a database, based on the data model, capable of supporting queries for data retrieval based on analysis and image metadata, queries for comparison of results from different analyses, and spatial queries on segmented regions, features, and classified objects. Settings and Design: The work described in this paper is motivated by the challenges associated with characterization of micro-scale features for comparative and correlative analyses involving whole-slides tissue images and TMAs. Technologies for digitizing tissues have advanced significantly in the past decade. Slide scanners are capable of producing high-magnification, high-resolution images from whole

  8. A data model and database for high-resolution pathology analytical image informatics.

    Science.gov (United States)

    Wang, Fusheng; Kong, Jun; Cooper, Lee; Pan, Tony; Kurc, Tahsin; Chen, Wenjin; Sharma, Ashish; Niedermayr, Cristobal; Oh, Tae W; Brat, Daniel; Farris, Alton B; Foran, David J; Saltz, Joel

    2011-01-01

    The systematic analysis of imaged pathology specimens often results in a vast amount of morphological information at both the cellular and sub-cellular scales. While microscopy scanners and computerized analysis are capable of capturing and analyzing data rapidly, microscopy image data remain underutilized in research and clinical settings. One major obstacle which tends to reduce wider adoption of these new technologies throughout the clinical and scientific communities is the challenge of managing, querying, and integrating the vast amounts of data resulting from the analysis of large digital pathology datasets. This paper presents a data model, which addresses these challenges, and demonstrates its implementation in a relational database system. This paper describes a data model, referred to as Pathology Analytic Imaging Standards (PAIS), and a database implementation, which are designed to support the data management and query requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines on whole-slide images and tissue microarrays (TMAs). (1) Development of a data model capable of efficiently representing and storing virtual slide related image, annotation, markup, and feature information. (2) Development of a database, based on the data model, capable of supporting queries for data retrieval based on analysis and image metadata, queries for comparison of results from different analyses, and spatial queries on segmented regions, features, and classified objects. The work described in this paper is motivated by the challenges associated with characterization of micro-scale features for comparative and correlative analyses involving whole-slides tissue images and TMAs. Technologies for digitizing tissues have advanced significantly in the past decade. Slide scanners are capable of producing high-magnification, high-resolution images from whole slides and TMAs within several minutes. Hence, it is becoming

  9. High-Throughput Computational Screening of the Metal Organic Framework Database for CH4/H2 Separations.

    Science.gov (United States)

    Altintas, Cigdem; Erucar, Ilknur; Keskin, Seda

    2018-01-31

Metal organic frameworks (MOFs) have been considered as one of the most exciting porous materials discovered in the last decade. Large surface areas, high pore volumes, and tailorable pore sizes make MOFs highly promising in a variety of applications, mainly in gas separations. The number of MOFs has been increasing very rapidly, and experimental identification of materials exhibiting high gas separation potential is simply impractical. High-throughput computational screening studies in which thousands of MOFs are evaluated to identify the best candidates for target gas separation is crucial in directing experimental efforts to the most useful materials. In this work, we used molecular simulations to screen the most complete and recent collection of MOFs from the Cambridge Structural Database to unlock their CH4/H2 separation performances. This is the first study in the literature, which examines the potential of all existing MOFs for adsorption-based CH4/H2 separation. MOFs (4350) were ranked based on several adsorbent evaluation metrics including selectivity, working capacity, adsorbent performance score, sorbent selection parameter, and regenerability. A large number of MOFs were identified to have extraordinarily large CH4/H2 selectivities compared to traditional adsorbents such as zeolites and activated carbons. We examined the relations between structural properties of MOFs such as pore sizes, porosities, and surface areas and their selectivities. Correlations between the heat of adsorption, adsorbility, metal type of MOFs, and selectivities were also studied. On the basis of these relations, a simple mathematical model that can predict the CH4/H2 selectivity of MOFs was suggested, which will be very useful in guiding the design and development of new MOFs with extraordinarily high CH4/H2 separation performances.
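
For orientation, the adsorbent evaluation metrics listed above are commonly defined roughly as in the sketch below; the exact definitions used in the paper may differ, and the uptake values here are invented for illustration only.

```python
# Sketch of adsorbent evaluation metrics as they are commonly defined in the
# screening literature (not necessarily the paper's exact definitions).
def selectivity(q_ch4, q_h2, y_ch4=0.5, y_h2=0.5):
    """Adsorption selectivity S = (q_CH4/q_H2) / (y_CH4/y_H2)."""
    return (q_ch4 / q_h2) / (y_ch4 / y_h2)

def working_capacity(q_ads, q_des):
    """Uptake at adsorption pressure minus uptake at desorption pressure."""
    return q_ads - q_des

def regenerability(q_ads, q_des):
    """R% = working capacity / adsorption-pressure uptake * 100."""
    return 100.0 * (q_ads - q_des) / q_ads

def adsorbent_performance_score(sel, delta_q):
    """APS: selectivity multiplied by working capacity."""
    return sel * delta_q

# Hypothetical simulated CH4 uptakes (mol/kg) at adsorption/desorption pressure,
# plus the corresponding H2 uptake at adsorption pressure, for one MOF.
q_ch4_ads, q_ch4_des, q_h2_ads = 4.2, 0.9, 0.05

S = selectivity(q_ch4_ads, q_h2_ads)
dq = working_capacity(q_ch4_ads, q_ch4_des)
print(f"selectivity={S:.1f}, working capacity={dq:.2f} mol/kg, "
      f"R={regenerability(q_ch4_ads, q_ch4_des):.0f}%, APS={adsorbent_performance_score(S, dq):.1f}")
```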

  10. Query Monitoring and Analysis for Database Privacy - A Security Automata Model Approach.

    Science.gov (United States)

    Kumar, Anand; Ligatti, Jay; Tu, Yi-Cheng

    2015-11-01

Privacy and usage restriction issues are important when valuable data are exchanged or acquired by different organizations. Standard access control mechanisms either restrict or completely grant access to valuable data. On the other hand, data obfuscation limits the overall usability and may result in loss of total value. There are no standard policy enforcement mechanisms for data acquired through mutual and copyright agreements. In practice, many different types of policies can be enforced in protecting data privacy. Hence there is the need for a unified framework that encapsulates multiple suites of policies to protect the data. We present our vision of an architecture named security automata model (SAM) to enforce privacy-preserving policies and usage restrictions. SAM analyzes the input queries and their outputs to enforce various policies, liberating data owners from the burden of monitoring data access. SAM allows administrators to specify various policies and enforces them to monitor queries and control the data access. Our goal is to address the problems of data usage control and protection through privacy policies that can be defined, enforced, and integrated with the existing access control mechanisms using SAM. In this paper, we lay out the theoretical foundation of SAM, which is based on automata named Mandatory Result Automata. We also discuss the major challenges of implementing SAM in a real-world database environment as well as ideas to meet such challenges.
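
As a toy illustration of output-monitoring enforcement (not the paper's Mandatory Result Automata formalism), the sketch below suppresses sensitive rows once a per-user quota is exceeded; the sensitivity rule, quota, and data are all invented for the example.

```python
# Toy analogue of query-output monitoring: a stateful monitor filters query results
# according to a simple policy (illustrative only; not SAM's actual formalism).
from typing import Dict, List

class ResultMonitor:
    """Tracks sensitive rows released per user and suppresses output once a quota is hit."""

    def __init__(self, quota: int):
        self.quota = quota
        self.released: Dict[str, int] = {}   # user -> sensitive rows already released

    def filter_results(self, user: str, rows: List[dict]) -> List[dict]:
        allowed = []
        for row in rows:
            sensitive = row.get("salary") is not None   # hypothetical sensitivity rule
            if sensitive:
                if self.released.get(user, 0) >= self.quota:
                    continue                             # policy: suppress, don't abort
                self.released[user] = self.released.get(user, 0) + 1
            allowed.append(row)
        return allowed

monitor = ResultMonitor(quota=2)
query_output = [{"name": "a", "salary": 100}, {"name": "b", "salary": 120},
                {"name": "c", "salary": 90}, {"name": "d", "salary": None}]
print(monitor.filter_results("analyst", query_output))   # third sensitive row is suppressed
```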

  11. Some aspects of the file organization and retrieval strategy in large data-bases

    International Nuclear Information System (INIS)

    Arnaudov, D.D.; Govorun, N.N.

    1977-01-01

Methods of organizing a large information retrieval system are described, with special attention paid to file organization. An adaptive file structure is described in more detail. The method discussed makes it possible to organize large files in such a way that the response time of the system is minimized as the file grows. For the retrieval strategy, a method is proposed that uses the frequencies of descriptors and of descriptor pairs to forecast the expected number of relevant documents. Programs based on these methods are used in the information retrieval systems of JINR

  12. Modeling self-organization of novel organic materials

    Science.gov (United States)

    Sayar, Mehmet

    In this thesis, the structural organization of oligomeric multi-block molecules is analyzed by computational analysis of coarse-grained models. These molecules form nanostructures with different dimensionalities, and the nanostructured nature of these materials leads to novel structural properties at different length scales. Previously, a number of oligomeric triblock rodcoil molecules have been shown to self-organize into mushroom shaped noncentrosymmetric nanostructures. Interestingly, thin films of these molecules contain polar domains and a finite macroscopic polarization. However, the fully polarized state is not the equilibrium state. In the first chapter, by solving a model with dipolar and Ising-like short range interactions, we show that polar domains are stable in films composed of aggregates as opposed to isolated molecules. Unlike classical molecular systems, these nanoaggregates have large intralayer spacings (a ≈ 6 nm), leading to a reduction in the repulsive dipolar interactions that oppose polar order within layers. This enables the formation of a striped pattern with polar domains of alternating directions. The energies of the possible structures at zero temperature are computed exactly and results of Monte Carlo simulations are provided at non-zero temperatures. In the second chapter, the macroscopic polarization of such nanostructured films is analyzed in the presence of a short range surface interaction. The surface interaction leads to a periodic domain structure where the balance between the up and down domains is broken, and therefore films of finite thickness have a net macroscopic polarization. The polarization per unit volume is a function of film thickness and strength of the surface interaction. Finally, in chapter three, self-organization of organic molecules into a network of one dimensional objects is analyzed. Multi-block organic dendron rodcoil molecules were found to self-organize into supramolecular nanoribbons (threads) and
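
A generic energy function of the kind the abstract alludes to, written here in illustrative notation (not taken from the thesis), combines a short-range Ising-like coupling that favors uniform polar order with a long-range dipolar term that opposes it:

```latex
% Generic energy for out-of-plane "spins" (aggregate polarizations) with short-range
% Ising-like coupling and long-range dipolar repulsion; notation is illustrative only.
\begin{equation}
  E = -J \sum_{\langle i,j \rangle} s_i s_j
      + D \sum_{i<j} \frac{s_i s_j}{r_{ij}^{3}}, \qquad s_i = \pm 1 .
\end{equation}
% J > 0 favors uniform polar order; the dipolar term (D > 0) opposes it, and the large
% intralayer spacing of the nanoaggregates (r ~ 6 nm) weakens this penalty, allowing
% striped polar domains of alternating direction.
```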

  13. Database development and management

    CERN Document Server

    Chao, Lee

    2006-01-01

Introduction to Database Systems; Functions of a Database; Database Management System; Database Components; Database Development Process; Conceptual Design and Data Modeling; Introduction to Database Design Process; Understanding Business Process; Entity-Relationship Data Model; Representing Business Process with Entity-Relationship Model; Table Structure and Normalization; Introduction to Tables; Table Normalization; Transforming Data Models to Relational Databases; DBMS Selection; Enforcing Constraints; Creating Database for Business Process; Physical Design and Database

  14. Benchmarking the CEMDATA07 database to model chemical degradation of concrete using GEMS and PHREEQC

    International Nuclear Information System (INIS)

    Jacques, Diederik; Wang, Lian; Martens, Evelien; Mallants, Dirk

    2012-01-01

Thermodynamic equilibrium modelling of degradation of cement and concrete systems by chemically detrimental reactions such as carbonation, sulphate attack and decalcification or leaching processes requires a consistent thermodynamic database with the relevant aqueous species, cement minerals and hydrates. The recent and consistent database CEMDATA07 is used as the basis in the studies of the Belgian near-surface disposal concept being developed by ONDRAF/NIRAS. The database is consistent with the thermodynamic data in the Nagra/PSI Thermodynamic Database. When used with the GEMS thermodynamic code, thermodynamic modelling can be performed at temperatures different from the standard temperature of 25 °C. GEMS calculates thermodynamic equilibrium by minimizing the Gibbs free energy of the system. Alternatively, thermodynamic equilibrium can also be calculated by solving a nonlinear system of mass balance equations and mass action equations, as is done in PHREEQC. A PHREEQC database for cement systems at temperatures different from 25 °C is derived from the thermodynamic parameters and models from GEMS. A number of benchmark simulations using PHREEQC and GEM-Selektor were done to verify the implementation of the CEMDATA07 database in PHREEQC databases. Simulations address a series of reactions that are relevant to the assessment of long-term cement and concrete durability. Verification calculations were performed for different systems with increasing complexity: CaO-SiO2-CO2, CaO-Al2O3-SO3-CO2, and CaO-SiO2-Al2O3-Fe2O3-MgO-SO3-CO2. Three types of chemical degradation processes were simulated: (1) carbonation by adding CO2 to the bulk composition, (2) sulphate attack by adding SO3 to the bulk composition, and (3) decalcification/leaching by putting the cement solid phase sequentially in contact with pure water. An excellent agreement between the simulations with GEMS and PHREEQC was obtained
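
Schematically, the two solution strategies compared in this benchmark can be written as follows (generic notation, not the codes' internal formulations):

```latex
% Two equivalent formulations of aqueous-solid chemical equilibrium (schematic).
\begin{align}
  \text{GEMS (Gibbs energy minimization):} \quad
     & \min_{\{n_i\}} \; G = \sum_i n_i \mu_i
       \quad \text{subject to elemental mass balance } \mathbf{A}\mathbf{n} = \mathbf{b}, \\
  \text{PHREEQC (law of mass action):} \quad
     & K_r(T) = \prod_i a_i^{\,\nu_{ir}}
       \quad \text{for each reaction } r, \text{ together with mass-balance equations.}
\end{align}
% Both approaches yield the same equilibrium state when fed the same thermodynamic data,
% which is what the CEMDATA07 benchmark simulations verify.
```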

  15. Clinical Prediction Models for Cardiovascular Disease: The Tufts PACE CPM Database

    Science.gov (United States)

    Wessler, Benjamin S.; Lana Lai, YH; Kramer, Whitney; Cangelosi, Michael; Raman, Gowri; Lutz, Jennifer S.; Kent, David M.

    2015-01-01

Background Clinical prediction models (CPMs) estimate the probability of clinical outcomes and hold the potential to improve decision making and individualize care. For patients with cardiovascular disease (CVD), there are numerous CPMs available, though the extent of this literature is not well described. Methods and Results We conducted a systematic review for articles containing CPMs for CVD published between January 1990 and May 2012. CVD includes coronary heart disease (CHD), heart failure (HF), arrhythmias, stroke, venous thromboembolism (VTE) and peripheral vascular disease (PVD). We created a novel database and characterized CPMs based on the stage of development, population under study, performance, covariates, and predicted outcomes. There are 796 models included in this database. The number of CPMs published each year is increasing steadily over time. 717 (90%) are de novo CPMs, 21 (3%) are CPM recalibrations, and 58 (7%) are CPM adaptations. This database contains CPMs for 31 index conditions, including 215 CPMs for patients with CAD, 168 CPMs for population samples, and 79 models for patients with HF. There are 77 distinct index/outcome (I/O) pairings. Of the de novo models in this database, 450 (63%) report a c-statistic and 259 (36%) report some information on calibration. Conclusions There is an abundance of CPMs available for a wide assortment of CVD conditions, with substantial redundancy in the literature. The comparative performance of these models, the consistency of effects and risk estimates across models, and the actual and potential clinical impact of this body of literature are poorly understood. PMID:26152680

  16. Data-mining analysis of the global distribution of soil carbon in observational databases and Earth system models

    Science.gov (United States)

    Hashimoto, Shoji; Nanko, Kazuki; Ťupek, Boris; Lehtonen, Aleksi

    2017-03-01

Future climate change will dramatically change the carbon balance in the soil, and this change will affect the terrestrial carbon stock and the climate itself. Earth system models (ESMs) are used to understand the current climate and to project future climate conditions, but the SOC stocks simulated by ESMs and those of observational databases are not well correlated when the two are compared at fine grid scales. However, the specific key processes and factors, as well as the relationships among these factors that govern the SOC stock, remain unclear; the inclusion of such missing information would improve the agreement between modeled and observational data. In this study, we sought to identify the influential factors that govern global SOC distribution in observational databases, as well as those simulated by ESMs. We used a data-mining (machine-learning) scheme, boosted regression trees (BRT), to identify the factors affecting the SOC stock. We applied the BRT scheme to three observational databases and 15 ESM outputs from the fifth phase of the Coupled Model Intercomparison Project (CMIP5) and examined the effects of 13 variables/factors categorized into five groups (climate, soil property, topography, vegetation, and land-use history). Globally, the contributions of mean annual temperature, clay content, carbon-to-nitrogen (CN) ratio, wetland ratio, and land cover were high in observational databases, whereas the contributions of the mean annual temperature, land cover, and net primary productivity (NPP) were predominant in the SOC distribution in ESMs. A comparison of the influential factors at a global scale revealed that the most distinct differences between the SOCs from the observational databases and ESMs were the low clay content and CN ratio contributions, and the high NPP contribution in the ESMs. The results of this study will aid in identifying the causes of the current mismatches between observational SOC databases and ESM outputs
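
The sketch below shows what a BRT-style contribution analysis looks like in code, using scikit-learn's GradientBoostingRegressor as a stand-in for the BRT scheme; the predictors and the synthetic SOC response are invented so the example runs on its own, and are not the study's actual grids.

```python
# BRT-style factor-contribution analysis, with GradientBoostingRegressor standing in
# for the boosted regression trees used in the study (synthetic data for illustration).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 2000
# Hypothetical grid-cell predictors: mean annual temperature, clay content, C:N ratio,
# wetland fraction, NPP.
X = np.column_stack([
    rng.uniform(-15, 30, n),    # MAT (deg C)
    rng.uniform(0, 60, n),      # clay (%)
    rng.uniform(5, 40, n),      # C:N ratio
    rng.uniform(0, 1, n),       # wetland fraction
    rng.uniform(0, 1500, n),    # NPP (gC m-2 yr-1)
])
# Synthetic SOC response just to make the example runnable.
y = 80 - 1.5 * X[:, 0] + 0.4 * X[:, 1] + 1.2 * X[:, 2] + 60 * X[:, 3] + rng.normal(0, 5, n)

brt = GradientBoostingRegressor(n_estimators=500, learning_rate=0.01, max_depth=3)
brt.fit(X, y)

names = ["MAT", "clay", "C:N", "wetland", "NPP"]
for name, imp in sorted(zip(names, brt.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:8s} relative contribution: {imp:.2f}")
```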

  17. Estimating soil water-holding capacities by linking the Food and Agriculture Organization Soil map of the world with global pedon databases and continuous pedotransfer functions

    Science.gov (United States)

    Reynolds, C. A.; Jackson, T. J.; Rawls, W. J.

    2000-12-01

Spatial soil water-holding capacities were estimated for the Food and Agriculture Organization (FAO) digital Soil Map of the World (SMW) by employing continuous pedotransfer functions (PTFs) within global pedon databases and linking these results to the SMW. The procedure first estimated representative soil properties for the FAO soil units by statistical analyses and taxotransfer depth algorithms [Food and Agriculture Organization (FAO), 1996]. The representative soil properties estimated for two layers of depths (0-30 and 30-100 cm) included particle-size distribution, dominant soil texture, organic carbon content, coarse fragments, bulk density, and porosity. After representative soil properties for the FAO soil units were estimated, these values were substituted into three different pedotransfer function (PTF) models by Rawls et al. [1982], Saxton et al. [1986], and Batjes [1996a]. The Saxton PTF model was finally selected to calculate available water content because it only requires particle-size distribution data and its results closely agreed with those of the Rawls and Batjes PTF models, which use both particle-size distribution and organic matter data. Soil water-holding capacities were then estimated by multiplying the available water content by the soil layer thickness and integrating over an effective crop root depth of 1 m or less (where shallow impermeable layers were encountered) and over another soil depth layer of 2.5 m or less.
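
The final aggregation step described above can be sketched as follows; the pedotransfer function here is a deliberately crude placeholder (not the Saxton et al. equations), and the layer data are invented for illustration.

```python
# Sketch of the aggregation step: available water content (AWC) from a pedotransfer
# function, multiplied by layer thickness and summed over the effective root zone.
def awc_placeholder(sand_pct: float, clay_pct: float) -> float:
    """Illustrative texture-based AWC (cm water per cm soil); stands in for a real PTF."""
    # Loamy soils hold more plant-available water than very sandy or very clayey ones.
    return max(0.05, 0.20 - 0.0012 * sand_pct - 0.0008 * clay_pct)

def water_holding_capacity(layers, root_depth_cm: float = 100.0) -> float:
    """Sum AWC x thickness over soil layers down to the effective root depth (cm of water)."""
    total, depth = 0.0, 0.0
    for thickness_cm, sand, clay in layers:
        usable = min(thickness_cm, max(0.0, root_depth_cm - depth))
        total += awc_placeholder(sand, clay) * usable
        depth += thickness_cm
    return total

# Two layers mirroring the 0-30 cm and 30-100 cm depths used for the FAO soil units.
layers = [(30.0, 45.0, 20.0), (70.0, 50.0, 25.0)]   # (thickness cm, sand %, clay %)
print(f"Profile water-holding capacity: {water_holding_capacity(layers):.1f} cm")
```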

  18. High Energy Physics Model Database - HEPMDB - Towards decoding the underlying theory at the LHC

    International Nuclear Information System (INIS)

    Bondarenko, M.; Belyaev, A.; Basso, L.; Boos, E.; Bunichev, V.; Sekhar Chivukula, R.; Christensen, D.; Cox, S.; De Roeck, A.; Moretti, S.; Pukhov, A.; Sekmen, S.; Semenov, A.; Simmons, E.H.; Shepherd-Themistocleus, C.; Speckner, C.

    2012-01-01

    We present here the first stage of development of the High Energy Physics Model Data-Base (HEPMDB) which is a convenient centralized storage environment for HEP (High Energy Physics) models, and can accommodate, via web interface to the HPC cluster, the validation of models, evaluation of LHC predictions and event generation-simulation chain. The ultimate goal of HEPMDB is to perform an effective LHC data interpretation isolating the most successful theory for explaining LHC observations. (authors)

  19. Modeling and implementing a database on drugs into a hospital intranet.

    Science.gov (United States)

    François, M; Joubert, M; Fieschi, D; Fieschi, M

    1998-09-01

Our objective was to develop a drug information service by implementing a database on drugs in our university hospitals' information system. Thériaque is a database, maintained by a group of pharmacists and physicians, on all the drugs available in France. Before its implementation we modeled its content (chemical classes, active components, excipients, indications, contra-indications, side effects, and so on) according to an object-oriented method. Then we designed HTML pages whose layout reflects the structure of the classes of objects in the model. Fields in the pages are dynamically filled with the results of queries to a relational database in which the information on drugs is stored. This allowed a fast implementation and did not require porting a client application to the thousands of workstations on the network. The interface provides end-users with an easy-to-use and natural way to access information related to drugs in an internet environment.

  20. Hydrologic Derivatives for Modeling and Analysis—A new global high-resolution database

    Science.gov (United States)

    Verdin, Kristine L.

    2017-07-17

    The U.S. Geological Survey has developed a new global high-resolution hydrologic derivative database. Loosely modeled on the HYDRO1k database, this new database, entitled Hydrologic Derivatives for Modeling and Analysis, provides comprehensive and consistent global coverage of topographically derived raster layers (digital elevation model data, flow direction, flow accumulation, slope, and compound topographic index) and vector layers (streams and catchment boundaries). The coverage of the data is global, and the underlying digital elevation model is a hybrid of three datasets: HydroSHEDS (Hydrological data and maps based on SHuttle Elevation Derivatives at multiple Scales), GMTED2010 (Global Multi-resolution Terrain Elevation Data 2010), and the SRTM (Shuttle Radar Topography Mission). For most of the globe south of 60°N., the raster resolution of the data is 3 arc-seconds, corresponding to the resolution of the SRTM. For the areas north of 60°N., the resolution is 7.5 arc-seconds (the highest resolution of the GMTED2010 dataset) except for Greenland, where the resolution is 30 arc-seconds. The streams and catchments are attributed with Pfafstetter codes, based on a hierarchical numbering system, that carry important topological information. This database is appropriate for use in continental-scale modeling efforts. The work described in this report was conducted by the U.S. Geological Survey in cooperation with the National Aeronautics and Space Administration Goddard Space Flight Center.

  1. The conceptual model of organization social responsibility

    OpenAIRE

    LUO, Lan; WEI, Jingfu

    2014-01-01

With the development of research on CSR, people increasingly recognize that corporations should take responsibility. Should other organizations besides corporations not also take responsibilities beyond their own field? This paper puts forward the concept of organization social responsibility (OSR) on the basis of the concept of corporate social responsibility and other theories. Conceptual models are then built on this concept, introducing OSR from three angles: the types of organi...

  2. A virtual observatory for photoionized nebulae: the Mexican Million Models database (3MdB).

    Science.gov (United States)

    Morisset, C.; Delgado-Inglada, G.; Flores-Fajardo, N.

    2015-04-01

    Photoionization models obtained with numerical codes are widely used to study the physics of the interstellar medium (planetary nebulae, HII regions, etc.). Grids of models are computed to understand the effects of the different parameters used to describe the regions on the observables (mainly emission line intensities). Most of the time, only a small part of the computed results of such grids is published, and the results are sometimes hard to obtain in a user-friendly format. We present here the Mexican Million Models dataBase (3MdB), an effort to resolve both of these issues in the form of a database of photoionization models, easily accessible through the MySQL protocol and containing many useful outputs from the models, such as the intensities of 178 emission lines, the ionic fractions of all the ions, etc. Some examples of the use of the 3MdB are also presented.
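
    The MySQL access route mentioned above can be sketched with a few lines of client code. The host, credentials, database, table and column names below are placeholders; the real connection parameters and schema are documented by the 3MdB project and are not reproduced here.

      # Sketch of a client-side query against a photoionization-model database
      # exposed over the MySQL protocol. Host, credentials, database, table and
      # column names below are placeholders; consult the 3MdB documentation for
      # the real connection parameters and schema.
      import pymysql

      conn = pymysql.connect(host="example.host", user="reader",
                             password="secret", database="models_db")
      try:
          with conn.cursor() as cur:
              # e.g. select a line ratio for models in a chosen abundance range
              cur.execute(
                  "SELECT id, O_abund, line_5007, line_4363 "
                  "FROM model_results WHERE O_abund BETWEEN %s AND %s",
                  (-4.0, -3.0),
              )
              for model_id, o_ab, i5007, i4363 in cur.fetchall():
                  print(model_id, o_ab, i5007 / max(i4363, 1e-30))
      finally:
          conn.close()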

  3. Data model and relational database design for the New Jersey Water-Transfer Data System (NJWaTr)

    Science.gov (United States)

    Tessler, Steven

    2003-01-01

    The New Jersey Water-Transfer Data System (NJWaTr) is a database design for the storage and retrieval of water-use data. NJWaTr can manage data encompassing many facets of water use, including (1) the tracking of various types of water-use activities (withdrawals, returns, transfers, distributions, consumptive-use, wastewater collection, and treatment); (2) the storage of descriptions, classifications and locations of places and organizations involved in water-use activities; (3) the storage of details about measured or estimated volumes of water associated with water-use activities; and (4) the storage of information about data sources and water resources associated with water use. In NJWaTr, each water transfer occurs unidirectionally between two site objects, and the sites and conveyances form a water network. The core entities in the NJWaTr model are site, conveyance, transfer/volume, location, and owner. Other important entities include water resource (used for withdrawals and returns), data source, permit, and alias. Multiple water-exchange estimates based on different methods or data sources can be stored for individual transfers. Storage of user-defined details is accommodated for several of the main entities. Many tables contain classification terms to facilitate the detailed description of data items and can be used for routine or custom data summarization. NJWaTr accommodates single-user and aggregate-user water-use data, can be used for large or small water-network projects, and is available as a stand-alone Microsoft Access database. Data stored in the NJWaTr structure can be retrieved in user-defined combinations to serve visualization and analytical applications. Users can customize and extend the database, link it to other databases, or implement the design in other relational database applications.
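
    A minimal sketch of how the core entities named above (owner, location, site, conveyance, and transfer/volume) could be laid out relationally is given below. The column names are illustrative only and far simpler than the published NJWaTr design.

      # Minimal relational sketch of the core NJWaTr entities. Columns are
      # illustrative only; the published design is considerably richer.
      import sqlite3

      schema = """
      CREATE TABLE owner      (owner_id INTEGER PRIMARY KEY, name TEXT);
      CREATE TABLE location   (location_id INTEGER PRIMARY KEY, latitude REAL, longitude REAL);
      CREATE TABLE site       (site_id INTEGER PRIMARY KEY, name TEXT,
                               owner_id INTEGER REFERENCES owner(owner_id),
                               location_id INTEGER REFERENCES location(location_id));
      CREATE TABLE conveyance (conveyance_id INTEGER PRIMARY KEY,
                               from_site INTEGER REFERENCES site(site_id),
                               to_site   INTEGER REFERENCES site(site_id));
      -- each water transfer occurs unidirectionally between two sites
      CREATE TABLE transfer   (transfer_id INTEGER PRIMARY KEY,
                               conveyance_id INTEGER REFERENCES conveyance(conveyance_id),
                               period TEXT, volume_mgd REAL, method TEXT);
      """

      conn = sqlite3.connect(":memory:")
      conn.executescript(schema)
      # aggregate transferred volumes per receiving site (no rows inserted in this sketch)
      rows = conn.execute("""
          SELECT s.name, SUM(t.volume_mgd)
          FROM transfer t
          JOIN conveyance c ON c.conveyance_id = t.conveyance_id
          JOIN site s       ON s.site_id = c.to_site
          GROUP BY s.name
      """).fetchall()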

  4. Putting "Organizations" into an Organization Theory Course: A Hybrid CAO Model for Teaching Organization Theory

    Science.gov (United States)

    Hannah, David R.; Venkatachary, Ranga

    2010-01-01

    In this article, the authors present a retrospective analysis of an instructor's multiyear redesign of a course on organization theory into what is called a hybrid Classroom-as-Organization model. It is suggested that this new course design served to apprentice students to function in quasi-real organizational structures. The authors further argue…

  5. Environmental Education Organizations and Programs in Texas: Identifying Patterns through a Database and Survey Approach for Establishing Frameworks for Assessment and Progress

    Science.gov (United States)

    Lloyd-Strovas, Jenny D.; Arsuffi, Thomas L.

    2016-01-01

    We examined the diversity of environmental education (EE) in Texas, USA, by developing a framework to assess EE organizations and programs at a large scale: the Environmental Education Database of Organizations and Programs (EEDOP). This framework consisted of the following characteristics: organization/visitor demographics, pedagogy/curriculum,…

  6. Exposure Modeling Tools and Databases for Consideration for Relevance to the Amended TSCA (ISES)

    Science.gov (United States)

    The Agency’s Office of Research and Development (ORD) has a number of ongoing exposure modeling tools and databases. These efforts are anticipated to be useful in supporting ongoing implementation of the amended Toxic Substances Control Act (TSCA). Under ORD’s Chemic...

  7. Modelling of phase diagrams and thermodynamic properties using Calphad method – Development of thermodynamic databases

    Czech Academy of Sciences Publication Activity Database

    Kroupa, Aleš

    2013-01-01

    Roč. 66, JAN (2013), s. 3-13 ISSN 0927-0256 R&D Projects: GA MŠk(CZ) OC08053 Institutional support: RVO:68081723 Keywords : Calphad method * phase diagram modelling * thermodynamic database development Subject RIV: BJ - Thermodynamics Impact factor: 1.879, year: 2013

  8. Modeling of activation data in the BrainMapTM database: Detection of outliers

    DEFF Research Database (Denmark)

    Nielsen, Finn Årup; Hansen, Lars Kai

    2002-01-01

    models is identification of novelty, i.e., low probability database events. We rank the novelty of the outliers and investigate the cause for 21 of the most novel, finding several outliers that are entry and transcription errors or infrequent or non-conforming terminology. We briefly discuss the use...

  9. An object-oriented language-database integration model: The composition filters approach

    NARCIS (Netherlands)

    Aksit, Mehmet; Bergmans, Lodewijk; Vural, Sinan; Vural, S.

    1991-01-01

    This paper introduces a new model, based on so-called object-composition filters, that uniformly integrates database-like features into an object-oriented language. The focus is on providing persistent dynamic data structures, data sharing, transactions, multiple views and associative access,

  10. An Object-Oriented Language-Database Integration Model: The Composition-Filters Approach

    NARCIS (Netherlands)

    Aksit, Mehmet; Bergmans, Lodewijk; Vural, S.; Vural, Sinan; Lehrmann Madsen, O.

    1992-01-01

    This paper introduces a new model, based on so-called object-composition filters, that uniformly integrates database-like features into an object-oriented language. The focus is on providing persistent dynamic data structures, data sharing, transactions, multiple views and associative access,

  11. Database Description - RPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description - General information: database name RPD; alternative name Rice Proteome Database. Maintainer: Institute of Crop Science, National Agriculture and Food Research Organization (Setsuko Komatsu). Database classification: Proteomics Resources; Plant databases - Rice. Organism: Oryza sativa (Taxonomy ID: 4530). Database description: the Rice Proteome Database contains information on proteins … entered in the Rice Proteome Database, and is searchable by keyword.

  12. Gas Chromatography and Mass Spectrometry Measurements and Protocols for Database and Library Development Relating to Organic Species in Support of the Mars Science Laboratory

    Science.gov (United States)

    Misra, P.; Garcia, R.; Mahaffy, P. R.

    2010-04-01

    An organic contaminant database and library has been developed for use with the Sample Analysis at Mars (SAM) instrumentation utilizing laboratory-based Gas Chromatography-Mass Spectrometry measurements of pyrolyzed and baked material samples.

  13. PK/DB: database for pharmacokinetic properties and predictive in silico ADME models.

    Science.gov (United States)

    Moda, Tiago L; Torres, Leonardo G; Carrara, Alexandre E; Andricopulo, Adriano D

    2008-10-01

    The study of pharmacokinetic properties (PK) is of great importance in drug discovery and development. In the present work, PK/DB (a new freely available database for PK) was designed with the aim of creating robust databases for pharmacokinetic studies and in silico absorption, distribution, metabolism and excretion (ADME) prediction. Comprehensive, web-based and easy to access, PK/DB manages 1203 compounds which represent 2973 pharmacokinetic measurements, including five models for in silico ADME prediction (human intestinal absorption, human oral bioavailability, plasma protein binding, blood-brain barrier and water solubility). http://www.pkdb.ifsc.usp.br

  14. The LAILAPS search engine: a feature model for relevance ranking in life science databases.

    Science.gov (United States)

    Lange, Matthias; Spies, Karl; Colmsee, Christian; Flemming, Steffen; Klapperstück, Matthias; Scholz, Uwe

    2010-03-25

    Efficient and effective information retrieval in the life sciences is one of the most pressing challenges in bioinformatics. The incredible growth of life science databases into a vast network of interconnected information systems is to the same extent a big challenge and a great chance for life science research. The knowledge found on the Web, in particular in life-science databases, is a valuable major resource. In order to bring it to the scientist's desktop, it is essential to have well-performing search engines. Neither the response time nor the number of results is the decisive factor; for millions of query results, the most crucial factor is the relevance ranking. In this paper, we present a feature model for relevance ranking in life science databases and its implementation in the LAILAPS search engine. Motivated by the observation of user behaviour during the inspection of search engine results, we condensed a set of 9 relevance-discriminating features. These features are intuitively used by scientists, who briefly screen database entries for potential relevance. The features are both sufficient to estimate the potential relevance and efficiently quantifiable. The derivation of a relevance prediction function that computes the relevance from these features constitutes a regression problem. To solve this problem, we used artificial neural networks that were trained with a reference set of relevant database entries for 19 protein queries. Supporting a flexible text index and a simple data import format, these concepts are implemented in the LAILAPS search engine. It can easily be used both as a search engine for comprehensive integrated life science databases and for small in-house project databases. LAILAPS is publicly available for SWISSPROT data at http://lailaps.ipk-gatersleben.de.
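
    The ranking idea, a fixed-length feature vector per database entry mapped to a relevance score by a small neural network, can be sketched as follows. The nine features and the training targets are synthetic stand-ins, and scikit-learn's MLPRegressor stands in for the artificial neural network described in the abstract.

      # Sketch: learn a relevance score from a fixed-length feature vector
      # describing a database entry. The nine features and the training
      # relevances are synthetic stand-ins, not LAILAPS data.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      X = rng.random((200, 9))                 # 9 relevance-discriminating features
      y = 0.6 * X[:, 0] + 0.3 * X[:, 3] + 0.1 * rng.random(200)  # toy relevance

      model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
      model.fit(X, y)

      candidates = rng.random((5, 9))          # feature vectors of query hits
      ranking = np.argsort(model.predict(candidates))[::-1]
      print("entries ranked by predicted relevance:", ranking)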

  15. BioQ: tracing experimental origins in public genomic databases using a novel data provenance model.

    Science.gov (United States)

    Saccone, Scott F; Quan, Jiaxi; Jones, Peter L

    2012-04-15

    Public genomic databases, which are often used to guide genetic studies of human disease, are now being applied to genomic medicine through in silico integrative genomics. These databases, however, often lack tools for systematically determining the experimental origins of the data. We introduce a new data provenance model that we have implemented in a public web application, BioQ, for assessing the reliability of the data by systematically tracing its experimental origins to the original subjects and biologics. BioQ allows investigators to both visualize data provenance as well as explore individual elements of experimental process flow using precise tools for detailed data exploration and documentation. It includes a number of human genetic variation databases such as the HapMap and 1000 Genomes projects. BioQ is freely available to the public at http://bioq.saclab.net.

  16. Effective modelling of percolation at the landscape scale using data-based approaches

    Science.gov (United States)

    Selle, Benny; Lischeid, Gunnar; Huwe, Bernd

    2008-06-01

    Process-based models have been extensively applied to assess the impact of land-use change on water quantity and quality at landscape scales. However, the routine application of those models suffers from large computational efforts, lack of transparency and the requirement of many input parameters. Data-based models such as Feed-Forward Multilayer Perceptrons (MLP) and Classification and Regression Trees (CART) may be used as effective models, i.e. simple approximations of complex process-based models. These data-based approaches can subsequently be applied for scenario analysis and as a transparent management tool, provided the climatic boundary conditions and the basic model assumptions of the process-based models do not change dramatically. In this study, we apply MLP, CART and Multiple Linear Regression (LR) to model the spatially distributed and spatially aggregated percolation in soils using weather, groundwater and soil data. The percolation data are obtained via numerical experiments with Hydrus1D. Thus, the complex process-based model is approximated using simpler data-based approaches. The MLP model explains most of the percolation variance in time and space without using any soil information. This reflects the effective dimensionality of the process-based model and suggests that percolation in the study area may be modelled much more simply than with Hydrus1D. The CART model shows that soil properties play a negligible role for percolation under wet climatic conditions. However, they become more important if the conditions turn drier. The LR method does not yield satisfactory predictions for the spatially distributed percolation; however, the spatially aggregated percolation is well approximated. This may indicate that the soils behave more simply (i.e. more linearly) when percolation dynamics are upscaled.
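
    The following sketch illustrates the general idea of fitting data-based surrogates to the output of a process-based model. The "percolation" values are generated from a toy formula standing in for Hydrus1D output; a regression tree (CART) and a linear model are then fitted to them.

      # Sketch of replacing an expensive process-based model with data-based
      # approximations: a regression tree (CART) and a linear model are fitted
      # to synthetic "percolation" values standing in for Hydrus1D output.
      import numpy as np
      from sklearn.tree import DecisionTreeRegressor
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(1)
      rain = rng.uniform(300, 1200, 500)        # mm/yr, toy weather input
      clay = rng.uniform(0.05, 0.5, 500)        # toy soil property
      percolation = np.maximum(rain * (1 - clay) - 250, 0) + rng.normal(0, 20, 500)

      X = np.column_stack([rain, clay])
      cart = DecisionTreeRegressor(max_depth=4).fit(X, percolation)
      lr = LinearRegression().fit(X, percolation)

      print("CART R^2:", round(cart.score(X, percolation), 3))
      print("LR   R^2:", round(lr.score(X, percolation), 3))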

  17. Predicting 30-day Hospital Readmission with Publicly Available Administrative Database. A Conditional Logistic Regression Modeling Approach.

    Science.gov (United States)

    Zhu, K; Lou, Z; Zhou, J; Ballester, N; Kong, N; Parikh, P

    2015-01-01

    This article is part of the Focus Theme of Methods of Information in Medicine on "Big Data and Analytics in Healthcare". Hospital readmissions raise healthcare costs and cause significant distress to providers and patients. It is, therefore, of great interest to healthcare organizations to predict which patients are at risk of being readmitted to their hospitals. However, current logistic regression based risk prediction models have limited prediction power when applied to hospital administrative data. Meanwhile, although decision trees and random forests have been applied, they tend to be too complex for hospital practitioners to understand. Our objective was to explore the use of conditional logistic regression to increase prediction accuracy. We analyzed an HCUP statewide inpatient discharge record dataset, which includes patient demographics, clinical and care utilization data from California. We extracted records of heart failure Medicare beneficiaries who had inpatient experience during an 11-month period. We corrected the data imbalance issue with under-sampling. In our study, we first applied standard logistic regression and decision tree to obtain influential variables and derive practically meaningful decision rules. We then stratified the original data set accordingly and applied logistic regression on each data stratum. We further explored the effect of interacting variables in the logistic regression modeling. We conducted cross validation to assess the overall prediction performance of conditional logistic regression (CLR) and compared it with standard classification models. The developed CLR models outperformed several standard classification models (e.g., straightforward logistic regression, stepwise logistic regression, random forest, support vector machine). For example, the best CLR model improved the classification accuracy by nearly 20% over the straightforward logistic regression model. Furthermore, the developed CLR models tend to achieve better sensitivity of
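
    A toy sketch of the stratify-then-fit idea described above is shown below: records are split by a simple rule and a separate logistic regression is fitted within each stratum. The data and the stratifying rule are synthetic, not HCUP records.

      # Toy sketch of the stratify-then-fit idea: split the records with a simple
      # rule, then fit a separate logistic regression in each stratum.
      # Data and the stratifying rule are synthetic, not HCUP records.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(2)
      n = 1000
      age = rng.integers(65, 95, n)
      prior_admits = rng.poisson(1.5, n)
      p = 1 / (1 + np.exp(-(-2.5 + 0.03 * (age - 65) + 0.5 * prior_admits)))
      readmitted = rng.random(n) < p
      X = np.column_stack([age, prior_admits])

      models = {}
      for label, mask in {"few_priors": prior_admits < 2,
                          "many_priors": prior_admits >= 2}.items():
          models[label] = LogisticRegression().fit(X[mask], readmitted[mask])
          print(label, "accuracy:",
                round(models[label].score(X[mask], readmitted[mask]), 3))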

  18. Safety Cultural Competency Modeling in Nuclear Organizations

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sa Kil; Oh, Yeon Ju; Luo, Meiling; Lee, Yong Hee [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    The nuclear safety cultural competency model should be supplemented through a bottom-up approach such as behavioral event interviews. The developed model is nevertheless meaningful for determining what should be addressed to enhance the safety cultural competency of nuclear organizations. More details of the development process, results, and applications will be introduced later. Organizational culture includes safety culture in terms of its organizational characteristics.

  19. A two term model of the confinement in ELMy H-modes using the global confinement and pedestal databases

    International Nuclear Information System (INIS)

    2003-01-01

    Two different physical models of the H-mode pedestal are tested against the joint pedestal-core database. These models are then combined with models for the core and shown to give a good fit to the ELMy H-mode database. Predictions are made for the next step tokamaks ITER and FIRE. (author)

  20. 3D DIGITAL MODEL DATABASE APPLIED TO CONSERVATION AND RESEARCH OF WOODEN CONSTRUCTION IN CHINA

    Directory of Open Access Journals (Sweden)

    Y. Zheng

    2013-07-01

    Full Text Available Protected by the Tai-Hang Mountains, Shanxi Province, located in north central China, is a highly prosperous, densely populated valley and considered to be one of the cradles of Chinese civilization. Its continuous habitation and rich culture have given rise to a large number of temple complexes and pavilions. Among these structures, 153 in the southern Shanxi area can be dated from as early as the Tang dynasty (618-907 C.E.) to the end of the Yuan dynasty (1279-1368 C.E.). The buildings are the best-preserved examples of wooden Chinese architecture in existence, exemplifying historic building technology and displaying highly intricate architectural decoration and detailing. They have survived war, earthquakes, and, in the last hundred years, neglect. In 2005, a decade-long conservation project was initiated by the State Administration of Cultural Heritage of China (SACH) to conserve and document these important buildings. The conservation process requires stabilization, conservation of important features, and, where necessary, partial dismantlement in order to replace unsound structural elements. The project team of CHCC has developed a practical recording system that creates a record of all building components prior to and during the conservation process. We are now trying to establish a comprehensive database that includes all 153 of these early buildings, through which we can easily enter, browse, and index information on the wooden constructions, down to component details. The database can help us carry out comparative studies of these wooden structures and provide important support for the continued conservation of these heritage buildings. For some of the most important wooden structures, we have established three-dimensional models. By connecting the database with the 3D digital models based on ArcGIS, we have developed a 3D Digital Model Database for these cherished buildings. The 3D Digital Model Database helps us set up an integrated

  1. 3D Digital Model Database Applied to Conservation and Research of Wooden Construction in China

    Science.gov (United States)

    Zheng, Y.

    2013-07-01

    Protected by the Tai-Hang Mountains, Shanxi Province, located in north central China, is a highly prosperous, densely populated valley and considered to be one of the cradles of Chinese civilization. Its continuous habitation and rich culture have given rise to a large number of temple complexes and pavilions. Among these structures, 153 in the southern Shanxi area can be dated from as early as the Tang dynasty (618-907 C.E.) to the end of the Yuan dynasty (1279-1368 C.E.). The buildings are the best-preserved examples of wooden Chinese architecture in existence, exemplifying historic building technology and displaying highly intricate architectural decoration and detailing. They have survived war, earthquakes, and, in the last hundred years, neglect. In 2005, a decade-long conservation project was initiated by the State Administration of Cultural Heritage of China (SACH) to conserve and document these important buildings. The conservation process requires stabilization, conservation of important features, and, where necessary, partial dismantlement in order to replace unsound structural elements. The project team of CHCC has developed a practical recording system that creates a record of all building components prior to and during the conservation process. We are now trying to establish a comprehensive database that includes all 153 of these early buildings, through which we can easily enter, browse, and index information on the wooden constructions, down to component details. The database can help us carry out comparative studies of these wooden structures and provide important support for the continued conservation of these heritage buildings. For some of the most important wooden structures, we have established three-dimensional models. By connecting the database with the 3D digital models based on ArcGIS, we have developed a 3D Digital Model Database for these cherished buildings. The 3D Digital Model Database helps us set up an integrated information inventory

  2. A STRATEGIC MANAGEMENT MODEL FOR SERVICE ORGANIZATIONS

    OpenAIRE

    Andreea ZAMFIR

    2013-01-01

    This paper provides a knowledge-based model for the strategic management of services, with a view to emphasising an approach to gaining competitive advantage through knowledge, people and networking. The long-term evolution of the service organization is associated with the way in which strategic management is practised.

  3. Using the Cambridge structure database of organic and organometallic compounds in structure biology

    Czech Academy of Sciences Publication Activity Database

    Hašek, Jindřich

    2010-01-01

    Roč. 17, 1a (2010), b24-b26 ISSN 1211-5894. [Discussions in Structural Molecular Biology /8./. Nové Hrady, 18.03.2010-20.03.2010] R&D Projects: GA AV ČR IAA500500701; GA ČR GA305/07/1073 Institutional research plan: CEZ:AV0Z40500505 Keywords : organic chemistry * Cambridge Structure Data base * molecular structure Subject RIV: CD - Macromolecular Chemistry http://xray.cz/ms/bul2010-1a/friday2.pdf

  4. The EDEN-IW ontology model for sharing knowledge and water quality data between heterogenous databases

    DEFF Research Database (Denmark)

    Stjernholm, M.; Poslad, S.; Zuo, L.

    2004-01-01

    The Environmental Data Exchange Network for Inland Water (EDEN-IW) project's main aim is to develop a system for making disparate and heterogeneous databases of Inland Water quality more accessible to users. The core technology is based upon a combination of: ontological model to represent...... a Semantic Web based data model for IW; software agents as an infrastructure to share and reason about the IW semantic data model and XML to make the information accessible to Web portals and mainstream Web services. This presentation focuses on the Semantic Web or Ontological model. Currently, we have...

  5. Emergent organization in a model market

    Science.gov (United States)

    Yadav, Avinash Chand; Manchanda, Kaustubh; Ramaswamy, Ramakrishna

    2017-09-01

    We study the collective behaviour of interacting agents in a simple model of market economics that was originally introduced by Nørrelykke and Bak. A general theoretical framework for interacting traders on an arbitrary network is presented, with the interaction consisting of buying (namely consumption) and selling (namely production) of commodities. Extremal dynamics is introduced by having the agent with least profit in the market readjust prices, causing the market to self-organize. In addition to examining this model market on regular lattices in two-dimensions, we also study the cases of random complex networks both with and without community structures. Fluctuations in an activity signal exhibit properties that are characteristic of avalanches observed in models of self-organized criticality, and these can be described by power-law distributions when the system is in the critical state.

  6. A scalable database model for multiparametric time series: a volcano observatory case study

    Science.gov (United States)

    Montalto, Placido; Aliotta, Marco; Cassisi, Carmelo; Prestifilippo, Michele; Cannata, Andrea

    2014-05-01

    The variables collected by a sensor network constitute a heterogeneous data source that needs to be properly organized in order to be used in research and geophysical monitoring. With the term time series we refer to a set of observations of a given phenomenon acquired sequentially in time. When the time intervals are equally spaced, one speaks of the period or sampling frequency. Our work describes in detail a possible methodology for the storage and management of time series using a specific data structure. We designed a framework, hereinafter called TSDSystem (Time Series Database System), in order to acquire time series from different data sources and standardize them within a relational database. This standardization provides the ability to perform operations, such as querying and visualization, on many measures by synchronizing them on a common time scale. The proposed architecture follows a multiple-layer paradigm (Loaders layer, Database layer and Business Logic layer). Each layer is specialized in performing particular operations for the reorganization and archiving of data from different sources such as ASCII, Excel, ODBC (Open DataBase Connectivity), and files accessible from the Internet (web pages, XML). In particular, the Loaders layer performs a security check of the working status of each running software component through a heartbeat system, in order to automate the discovery of acquisition issues and other warning conditions. Although our system has to manage huge amounts of data, performance is guaranteed by a smart table-partitioning strategy that keeps the percentage of data stored in each database table balanced. TSDSystem also contains modules for the visualization of acquired data, which provide the possibility to query different time series over a specified time range, or to follow the real-time signal acquisition, according to a data access policy for the users.
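
    The standardization step described above, putting heterogeneous signals into one relational layout so they can be queried on a common time scale, might look roughly as follows. Table and column names are illustrative, not the actual TSDSystem schema.

      # Minimal sketch of standardizing heterogeneous time series in one
      # relational layout so that different signals can be queried on a common
      # time scale. Table and column names are illustrative only.
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
      CREATE TABLE signal (signal_id INTEGER PRIMARY KEY, name TEXT, unit TEXT);
      CREATE TABLE sample (signal_id INTEGER REFERENCES signal(signal_id),
                           t TEXT,            -- ISO-8601 timestamp, common time scale
                           value REAL);
      CREATE INDEX idx_sample ON sample(signal_id, t);
      """)
      conn.execute("INSERT INTO signal VALUES (1, 'tremor_rms', 'um/s')")
      conn.execute("INSERT INTO signal VALUES (2, 'SO2_flux', 't/d')")
      conn.executemany("INSERT INTO sample VALUES (?, ?, ?)",
                       [(1, "2014-05-01T00:00:00", 3.2),
                        (2, "2014-05-01T00:00:00", 900.0)])

      # query several series over the same time range
      rows = conn.execute("""
          SELECT s.name, m.t, m.value FROM sample m JOIN signal s USING (signal_id)
          WHERE m.t BETWEEN '2014-05-01T00:00:00' AND '2014-05-02T00:00:00'
          ORDER BY m.t, s.name
      """).fetchall()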

  7. WholeCellSimDB: a hybrid relational/HDF database for whole-cell model predictions.

    Science.gov (United States)

    Karr, Jonathan R; Phillips, Nolan C; Covert, Markus W

    2014-01-01

    Mechanistic 'whole-cell' models are needed to develop a complete understanding of cell physiology. However, extracting biological insights from whole-cell models requires running and analyzing large numbers of simulations. We developed WholeCellSimDB, a database for organizing whole-cell simulations. WholeCellSimDB was designed to enable researchers to search simulation metadata to identify simulations for further analysis, and quickly slice and aggregate simulation results data. In addition, WholeCellSimDB enables users to share simulations with the broader research community. The database uses a hybrid relational/hierarchical data format architecture to efficiently store and retrieve both simulation setup metadata and results data. WholeCellSimDB provides a graphical Web-based interface to search, browse, plot and export simulations; a JavaScript Object Notation (JSON) Web service to retrieve data for Web-based visualizations; a command-line interface to deposit simulations; and a Python API to retrieve data for advanced analysis. Overall, we believe WholeCellSimDB will help researchers use whole-cell models to advance basic biological science and bioengineering. Database URL: http://www.wholecellsimdb.org; source code repository: http://github.com/CovertLab/WholeCellSimDB. © The Author(s) 2014. Published by Oxford University Press.
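
    The hybrid storage idea can be sketched as follows: searchable simulation metadata goes into a relational table, while bulk results live in an HDF5 file keyed by simulation identifier. The file layout and table are hypothetical, not the actual WholeCellSimDB format.

      # Sketch of hybrid storage: searchable simulation metadata in a relational
      # table, bulk results in an HDF5 file. Layout is hypothetical, not the
      # actual WholeCellSimDB format.
      import sqlite3
      import numpy as np
      import h5py

      meta = sqlite3.connect("simulations.sqlite")
      meta.execute("""CREATE TABLE IF NOT EXISTS simulation
                      (sim_id INTEGER PRIMARY KEY, model TEXT, batch TEXT, length_s REAL)""")
      meta.execute("INSERT INTO simulation VALUES (1, 'wholecell-toy', 'batch-01', 3600.0)")
      meta.commit()

      with h5py.File("simulation_1.h5", "w") as f:
          f.create_dataset("states/Mass/cell", data=np.linspace(1.0, 2.0, 3600))

      # later: find simulations by metadata, then slice results from HDF5
      sim_id, = meta.execute("SELECT sim_id FROM simulation WHERE batch = 'batch-01'").fetchone()
      with h5py.File(f"simulation_{sim_id}.h5", "r") as f:
          first_minute = f["states/Mass/cell"][:60]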

  8. Integrated modelling of two xenobiotic organic compounds

    DEFF Research Database (Denmark)

    Lindblom, Erik Ulfson; Gernaey, K.V.; Henze, Mogens

    2006-01-01

    This paper presents a dynamic mathematical model that describes the fate and transport of two selected xenobiotic organic compounds (XOCs) in a simplified representation of an integrated urban wastewater system. A simulation study, where the xenobiotics bisphenol A and pyrene are used as reference compounds, is carried out. Sorption and specific biological degradation processes are integrated with standardised water process models to model the fate of both compounds. Simulated mass flows of the two compounds during one dry weather day and one wet weather day are compared for realistic influent flow rate and concentration profiles. The wet weather day induces resuspension of stored sediments, which increases the pollutant load on the downstream system. The potential of the model to elucidate important phenomena related to origin and fate of the model compounds is demonstrated....

  9. Epidemiology of Occupational Accidents in Iran Based on Social Security Organization Database

    Science.gov (United States)

    Mehrdad, Ramin; Seifmanesh, Shahdokht; Chavoshi, Farzaneh; Aminian, Omid; Izadi, Nazanin

    2014-01-01

    Background: Today, occupational accidents are one of the most important problems in the industrial world. Due to the lack of an appropriate system for registration and reporting, there are no accurate statistics on occupational accidents all over the world, especially in developing countries. Objectives: The aim of this study is the epidemiological assessment of occupational accidents in Iran. Materials and Methods: Information on available occupational accidents in the Social Security Organization was extracted from accident reporting and registration forms. In this cross-sectional study, gender, age, economic activity, type of accident and injured body part in 22158 registered accidents during 2008 were described. Results: The occupational accident rate was 253 per 100,000 workers in 2008. 98.2% of injured workers were men. The mean age of injured workers was 32.07 ± 9.12 years. The highest percentage belonged to the age group of 25-34 years old. In our study, most of the accidents occurred in the basic metals industry, the electrical and non-electrical machines industry and the construction industry. Falling from height and crush injury were the most prevalent accidents. Upper and lower extremities were the most commonly injured body parts. Conclusion: Due to the high rate of accidents in the metal and construction industries, engineering controls, the use of appropriate protective equipment and worker safety training seem necessary. PMID:24719699

  10. Epidemiology of occupational accidents in iran based on social security organization database.

    Science.gov (United States)

    Mehrdad, Ramin; Seifmanesh, Shahdokht; Chavoshi, Farzaneh; Aminian, Omid; Izadi, Nazanin

    2014-01-01

    Today, occupational accidents are one of the most important problems in the industrial world. Due to the lack of an appropriate system for registration and reporting, there are no accurate statistics on occupational accidents all over the world, especially in developing countries. The aim of this study is the epidemiological assessment of occupational accidents in Iran. Information on available occupational accidents in the Social Security Organization was extracted from accident reporting and registration forms. In this cross-sectional study, gender, age, economic activity, type of accident and injured body part in 22158 registered accidents during 2008 were described. The occupational accident rate was 253 per 100,000 workers in 2008. 98.2% of injured workers were men. The mean age of injured workers was 32.07 ± 9.12 years. The highest percentage belonged to the age group of 25-34 years old. In our study, most of the accidents occurred in the basic metals industry, the electrical and non-electrical machines industry and the construction industry. Falling from height and crush injury were the most prevalent accidents. Upper and lower extremities were the most commonly injured body parts. Due to the high rate of accidents in the metal and construction industries, engineering controls, the use of appropriate protective equipment and worker safety training seem necessary.

  11. Database Description - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description - General information: database name Arabidopsis Phenome Database. Maintainer: BioResource Center (Hiroshi Masuya). Database classification: Plant databases - Arabidopsis thaliana. Organism: Arabidopsis thaliana (Taxonomy ID: 3702). Database description: the Arabidopsis thaliana phenome … their effective application. We developed the new Arabidopsis Phenome Database integrating two novel databases … useful materials for their experimental research; the other, the "Database of Curated Plant Phenome", focusing …

  12. S-World: A high resolution global soil database for simulation modelling (Invited)

    Science.gov (United States)

    Stoorvogel, J. J.

    2013-12-01

    There is an increasing call for high resolution soil information at the global level. A good example of such a call is the Global Gridded Crop Model Intercomparison carried out within AgMIP. While local studies can make use of surveying techniques to collect additional data, this is practically impossible at the global level. It is therefore important to rely on legacy data like the Harmonized World Soil Database. Several efforts do exist that aim at the development of global gridded soil property databases. These estimates of the variation of soil properties can be used to assess e.g., global soil carbon stocks. However, they do not allow for simulation runs with e.g., crop growth simulation models as these models require a description of the entire pedon rather than a few soil properties. This study provides the required quantitative description of pedons at a 1 km resolution for simulation modelling. It uses the Harmonized World Soil Database (HWSD) for the spatial distribution of soil types, the ISRIC-WISE soil profile database to derive information on soil properties per soil type, and a range of co-variables on topography, climate, and land cover to further disaggregate the available data. The methodology aims to take stock of these available data. The soil database is developed in five main steps. Step 1: All 148 soil types are ordered on the basis of their expected topographic position using e.g., drainage, salinization, and pedogenesis. Using the topographic ordering and combining the HWSD with a digital elevation model allows for the spatial disaggregation of the composite soil units. This results in a new soil map with homogeneous soil units. Step 2: The ranges of major soil properties for the topsoil and subsoil of each of the 148 soil types are derived from the ISRIC-WISE soil profile database. Step 3: A model of soil formation is developed that focuses on the basic conceptual question where we are within the range of a particular soil property

  13. The relevance of the IFPE Database to the modelling of WWER-type fuel behaviour

    International Nuclear Information System (INIS)

    Killeen, J.; Sartori, E.

    2006-01-01

    The aim of the International Fuel Performance Experimental Database (IFPE Database) is to provide, in the public domain, a comprehensive and well-qualified database on zircaloy-clad UO 2 fuel for model development and code validation. The data encompass both normal and off-normal operation and include prototypic commercial irradiations as well as experiments performed in Material Testing Reactors. To date, the Database contains over 800 individual cases, providing data on fuel centreline temperatures, dimensional changes and FGR either from in-pile pressure measurements or PIE techniques, including puncturing, Electron Probe Micro Analysis (EPMA) and X-ray Fluorescence (XRF) measurements. This work in assembling and disseminating the Database is carried out in close co-operation and co-ordination between OECD/NEA and the IAEA. The majority of data sets are dedicated to fuel behaviour under LWR irradiation, and every effort has been made to obtain data representative of BWR, PWR and WWER conditions. In each case, the data set contains information on the pre-characterisation of the fuel, cladding and fuel rod geometry, the irradiation history presented in as much detail as the source documents allow, and finally any in-pile or PIE measurements that were made. The purpose of this paper is to highlight data that are relevant specifically to WWER application. To this end, the NEA and IAEA have been successful in obtaining appropriate data for both WWER-440 and WWER-1000-type reactors. These are: 1) Twelve (12) rods from the Finnish-Russian co-operative SOFIT programme; 2) Kola-3 WWER-440 irradiation; 3) MIR ramp tests on Kola-3 rods; 4) Zaporozskaya WWER-1000 irradiation; 5) Novovoronezh WWER-1000 irradiation. Before reviewing these data sets and their usefulness, the paper touches briefly on recent, more novel additions to the Database and on progress made in the use of the Database for the current IAEA FUMEX II Project. Finally, the paper describes the Computer

  14. Implementation of the Multidimensional Modeling Concepts into Object-Relational Databases

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available A key to survival in the business world is being able to analyze, plan and react to changing business conditions as fast as possible. With multidimensional models, managers can explore information at different levels of granularity, and decision makers at all levels can quickly respond to changes in the business climate, the ultimate goal of business intelligence. This paper focuses on the implementation of multidimensional concepts into object-relational databases.
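
    A common way to realize such multidimensional concepts is a star schema: a fact table surrounded by dimension tables that can be queried at different levels of granularity. The sketch below uses plain SQL through sqlite3 purely for illustration; the paper itself targets object-relational features.

      # Minimal star-schema sketch: a fact table plus dimension tables, queried
      # at different levels of granularity. Plain SQL is used here only for
      # illustration of the multidimensional idea.
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
      CREATE TABLE dim_time    (time_id INTEGER PRIMARY KEY, day TEXT, month TEXT, year INTEGER);
      CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT, category TEXT);
      CREATE TABLE fact_sales  (time_id INTEGER REFERENCES dim_time(time_id),
                                product_id INTEGER REFERENCES dim_product(product_id),
                                amount REAL);
      """)
      conn.executemany("INSERT INTO dim_time VALUES (?,?,?,?)",
                       [(1, "2007-01-05", "2007-01", 2007), (2, "2007-02-10", "2007-02", 2007)])
      conn.executemany("INSERT INTO dim_product VALUES (?,?,?)",
                       [(1, "widget", "hardware"), (2, "gizmo", "hardware")])
      conn.executemany("INSERT INTO fact_sales VALUES (?,?,?)",
                       [(1, 1, 120.0), (1, 2, 80.0), (2, 1, 60.0)])

      # roll up from day to month for each product category
      rows = conn.execute("""
          SELECT t.month, p.category, SUM(f.amount)
          FROM fact_sales f
          JOIN dim_time t    ON t.time_id = f.time_id
          JOIN dim_product p ON p.product_id = f.product_id
          GROUP BY t.month, p.category
      """).fetchall()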

  15. Candidate gene database and transcript map for peach, a model species for fruit trees.

    Science.gov (United States)

    Horn, Renate; Lecouls, Anne-Claire; Callahan, Ann; Dandekar, Abhaya; Garay, Lilibeth; McCord, Per; Howad, Werner; Chan, Helen; Verde, Ignazio; Main, Doreen; Jung, Sook; Georgi, Laura; Forrest, Sam; Mook, Jennifer; Zhebentyayeva, Tatyana; Yu, Yeisoo; Kim, Hye Ran; Jesudurai, Christopher; Sosinski, Bryon; Arús, Pere; Baird, Vance; Parfitt, Dan; Reighard, Gregory; Scorza, Ralph; Tomkins, Jeffrey; Wing, Rod; Abbott, Albert Glenn

    2005-05-01

    Peach (Prunus persica) is a model species for the Rosaceae, which includes a number of economically important fruit tree species. To develop an extensive Prunus expressed sequence tag (EST) database for identifying and cloning the genes important to fruit and tree development, we generated 9,984 high-quality ESTs from a peach cDNA library of developing fruit mesocarp. After assembly and annotation, a putative peach unigene set consisting of 3,842 ESTs was defined. Gene ontology (GO) classification was assigned based on the annotation of the single "best hit" match against the Swiss-Prot database. No significant homology could be found in the GenBank nr databases for 24.3% of the sequences. Using core markers from the general Prunus genetic map, we anchored bacterial artificial chromosome (BAC) clones on the genetic map, thereby providing a framework for the construction of a physical and transcript map. A transcript map was developed by hybridizing 1,236 ESTs from the putative peach unigene set and an additional 68 peach cDNA clones against the peach BAC library. Hybridizing ESTs to genetically anchored BACs immediately localized 11.2% of the ESTs on the genetic map. ESTs showed a clustering of expressed genes in defined regions of the linkage groups. [The data were built into a regularly updated Genome Database for Rosaceae (GDR), available at (http://www.genome.clemson.edu/gdr/).].

  16. Cluster based on sequence comparison of homologous proteins of 95 organism species - Gclust Server | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Gclust Server - Data name: Cluster based on sequence comparison of homologous proteins of 95 organism species.

  17. Relational database hybrid model, of high performance and storage capacity for nuclear engineering applications

    International Nuclear Information System (INIS)

    Gomes Neto, Jose

    2008-01-01

    The objective of this work is to present the relational database named FALCAO. It was created and implemented to support the storage of the monitored variables in the IEA-R1 research reactor, located at the Instituto de Pesquisas Energeticas e Nucleares, IPEN/CNEN-SP. The logical data model and its direct influence on the integrity of the provided information are carefully considered. The concepts and steps of normalization and denormalization, including the entities and relations involved in the logical model, are presented. The effects of the model rules on the acquisition, loading and availability of the final information are also presented from a performance standpoint, since the acquisition process loads and provides large amounts of information at short intervals of time. The SACD application, through its functionalities, presents the information stored in the FALCAO database in a practical and optimized form. The implementation of the FALCAO database was successful and its existence leads to a considerably favorable situation. It is now essential to the routine of the researchers involved, not only due to the substantial improvement of the process but also to the reliability associated with it. (author)

  18. Amino acid sequences of predicted proteins and their annotation for 95 organism species. - Gclust Server | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Gclust Server - Data name: Amino acid sequences of predicted proteins and their annotation for 95 organism species. DOI: 10.18908/lsdba.nbdc00464-001. Description of data contents: amino acid sequences of predicted proteins …

  19. Chess databases as a research vehicle in psychology: Modeling large data.

    Science.gov (United States)

    Vaci, Nemanja; Bilalić, Merim

    2017-08-01

    The game of chess has often been used for psychological investigations, particularly in cognitive science. The clear-cut rules and well-defined environment of chess provide a model for investigations of basic cognitive processes, such as perception, memory, and problem solving, while the precise rating system for the measurement of skill has enabled investigations of individual differences and expertise-related effects. In the present study, we focus on another appealing feature of chess-namely, the large archive databases associated with the game. The German national chess database presented in this study represents a fruitful ground for the investigation of multiple longitudinal research questions, since it collects the data of over 130,000 players and spans over 25 years. The German chess database collects the data of all players, including hobby players, and all tournaments played. This results in a rich and complete collection of the skill, age, and activity of the whole population of chess players in Germany. The database therefore complements the commonly used expertise approach in cognitive science by opening up new possibilities for the investigation of multiple factors that underlie expertise and skill acquisition. Since large datasets are not common in psychology, their introduction also raises the question of optimal and efficient statistical analysis. We offer the database for download and illustrate how it can be used by providing concrete examples and a step-by-step tutorial using different statistical analyses on a range of topics, including skill development over the lifetime, birth cohort effects, effects of activity and inactivity on skill, and gender differences.
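
    One of the longitudinal analyses mentioned above, tracing average rating across age, can be illustrated with a few lines of code. The records below are synthetic stand-ins for the German chess database.

      # Toy sketch of one analysis mentioned in the text: tracing average rating
      # by age band. The records are synthetic stand-ins for the German database.
      from collections import defaultdict

      records = [  # (player_id, age_at_rating, rating)
          (1, 20, 1650), (1, 30, 1810), (1, 45, 1760),
          (2, 25, 2100), (2, 35, 2180), (2, 55, 2050),
          (3, 18, 1400), (3, 28, 1520),
      ]

      by_age_band = defaultdict(list)
      for _, age, rating in records:
          by_age_band[10 * (age // 10)].append(rating)

      for band in sorted(by_age_band):
          ratings = by_age_band[band]
          print(f"age {band}-{band + 9}: mean rating {sum(ratings) / len(ratings):.0f}")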

  20. Database Description - RMOS | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description - General information: database name RMOS. Contact: Shoshi Kikuchi (… Research Unit). Database classification: Plant databases - Rice Microarray Data and other Gene Expression Databases. Organism: Oryza sativa (Taxonomy ID: 4530). Database description: the Ric… Referenced databases: Rice Expression Database (RED), Rice full-length cDNA Database (KOME), Rice Genome Integrated Map Database (INE), Rice Mutant Panel Database (Tos17), Rice Genome Annotation Database.

  1. Very fast road database verification using textured 3D city models obtained from airborne imagery

    Science.gov (United States)

    Bulatov, Dimitri; Ziems, Marcel; Rottensteiner, Franz; Pohl, Melanie

    2014-10-01

    Road databases are known to be an important part of any geodata infrastructure, e.g. as the basis for urban planning or emergency services. Updating road databases for crisis events must be performed quickly and with the highest possible degree of automation. We present a semi-automatic algorithm for road verification using textured 3D city models, starting from aerial or even UAV-images. This algorithm contains two processes, which exchange input and output, but basically run independently from each other. These processes are textured urban terrain reconstruction and road verification. The first process contains a dense photogrammetric reconstruction of 3D geometry of the scene using depth maps. The second process is our core procedure, since it contains various methods for road verification. Each method represents a unique road model and a specific strategy, and thus is able to deal with a specific type of roads. Each method is designed to provide two probability distributions, where the first describes the state of a road object (correct, incorrect), and the second describes the state of its underlying road model (applicable, not applicable). Based on the Dempster-Shafer Theory, both distributions are mapped to a single distribution that refers to three states: correct, incorrect, and unknown. With respect to the interaction of both processes, the normalized elevation map and the digital orthophoto generated during 3D reconstruction are the necessary input - together with initial road database entries - for the road verification process. If the entries of the database are too obsolete or not available at all, sensor data evaluation enables classification of the road pixels of the elevation map followed by road map extraction by means of vectorization and filtering of the geometrically and topologically inconsistent objects. Depending on the time issue and availability of a geo-database for buildings, the urban terrain reconstruction procedure has semantic models
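
    The Dempster-Shafer combination step mentioned above can be illustrated with a small example: two basic belief assignments over the frame {correct, incorrect} are combined with Dempster's rule, and the mass left on the whole frame is read as "unknown". The input masses are arbitrary example values, not results from the paper.

      # Small illustration of Dempster's rule of combination over the frame
      # {correct, incorrect}; mass left on the whole frame plays the role of
      # "unknown". Input masses are arbitrary example values.
      from itertools import product

      THETA = frozenset({"correct", "incorrect"})

      def combine(m1, m2):
          combined, conflict = {}, 0.0
          for (a, wa), (b, wb) in product(m1.items(), m2.items()):
              inter = a & b
              if inter:
                  combined[inter] = combined.get(inter, 0.0) + wa * wb
              else:
                  conflict += wa * wb
          # renormalize by the non-conflicting mass (Dempster's rule)
          return {s: w / (1.0 - conflict) for s, w in combined.items()}

      # two independent pieces of evidence about the same road hypothesis
      m_state = {frozenset({"correct"}): 0.6, frozenset({"incorrect"}): 0.1, THETA: 0.3}
      m_model = {frozenset({"correct"}): 0.2, THETA: 0.8}
      print(combine(m_state, m_model))   # mass on THETA is interpreted as "unknown"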

  2. Technical report on implementation of reactor internal 3D modeling and visual database system

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yeun Seung; Eom, Young Sam; Lee, Suk Hee; Ryu, Seung Hyun [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1996-06-01

    This report describes a prototype of a reactor internals 3D modeling and VDB system for NSSS design quality improvement. For improving NSSS design quality, several integrated computer-aided engineering systems of nations with developed nuclear industries, such as Mitsubishi's NUWINGS (Japan), AECL's CANDID (Canada) and Duke Power's PASCE (USA), were studied. On the basis of these studies, the strategy for an NSSS design improvement system was extracted and the detailed work scope was implemented as follows: 3D modeling of the reactor internals was implemented using a parametric solid modeler, a prototype system for design document computerization and a database was suggested, and walk-through simulation integrated with 3D modeling and VDB was accomplished. The major effects of the NSSS design quality improvement system using 3D modeling and VDB are plant design optimization by simulation, improved reliability through a single design database system, and engineering cost reduction through improved productivity and efficiency. For applying the VDB to the full scope of NSSS system design, 3D models of the reactor coolant system and of the nuclear fuel assembly and fuel rod are attached as an appendix. 2 tabs., 31 figs., 7 refs. (Author)

  3. Technical report on implementation of reactor internal 3D modeling and visual database system

    International Nuclear Information System (INIS)

    Kim, Yeun Seung; Eom, Young Sam; Lee, Suk Hee; Ryu, Seung Hyun

    1996-06-01

    This report describes a prototype of a reactor internals 3D modeling and VDB system for NSSS design quality improvement. For improving NSSS design quality, several integrated computer-aided engineering systems of nations with developed nuclear industries, such as Mitsubishi's NUWINGS (Japan), AECL's CANDID (Canada) and Duke Power's PASCE (USA), were studied. On the basis of these studies, the strategy for an NSSS design improvement system was extracted and the detailed work scope was implemented as follows: 3D modeling of the reactor internals was implemented using a parametric solid modeler, a prototype system for design document computerization and a database was suggested, and walk-through simulation integrated with 3D modeling and VDB was accomplished. The major effects of the NSSS design quality improvement system using 3D modeling and VDB are plant design optimization by simulation, improved reliability through a single design database system, and engineering cost reduction through improved productivity and efficiency. For applying the VDB to the full scope of NSSS system design, 3D models of the reactor coolant system and of the nuclear fuel assembly and fuel rod are attached as an appendix. 2 tabs., 31 figs., 7 refs. (Author)

  4. A Bayesian model for anomaly detection in SQL databases for security systems

    NARCIS (Netherlands)

    Drugan, M.M.

    2017-01-01

    We focus on automatic anomaly detection in SQL databases for security systems. Many logs of database systems, here the Townhall database, contain detailed information about users, like the SQL queries and the response of the database. A database is a list of log instances, where each log instance is
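
    A toy sketch of the general idea, scoring SQL log entries by their probability under a per-user model of simple query features and flagging low-probability entries, is given below. It illustrates the flavour of such anomaly detection, not the paper's Bayesian model.

      # Toy sketch of flagging low-probability SQL log entries: build a per-user
      # frequency model over simple query features (command, table) with Laplace
      # smoothing and flag queries whose likelihood falls below a threshold.
      # This illustrates the general idea, not the paper's Bayesian model.
      import math
      from collections import Counter

      history = {  # per-user past queries as (command, table) features
          "alice": [("SELECT", "citizens"), ("SELECT", "permits"), ("SELECT", "citizens")],
      }

      def log_likelihood(user, feature, history, vocab_size=50, alpha=1.0):
          counts = Counter(history.get(user, []))
          total = sum(counts.values())
          return math.log((counts[feature] + alpha) / (total + alpha * vocab_size))

      for query in [("SELECT", "citizens"), ("DROP", "citizens")]:
          score = log_likelihood("alice", query, history)
          flag = "ANOMALY" if score < math.log(0.02) else "ok"
          print(query, round(score, 2), flag)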

  5. Database Description - RED | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description - General information: database name RED; alternative name Rice Expression Database. Maintainer: … Genome Research Unit (Shoshi Kikuchi). Database classification: Plant databases - Rice; Microarray, Gene Expression. Organism: Oryza sativa (Taxonomy ID: 4530). Reference article: "Rice Expression Database: the gateway to rice functional genomics", Trends in Plant Science (2002) Dec 7(12):563-564.

  6. Database Description - PLACE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description - General information: database name PLACE; alternative name "A Database …". Maintainer: National Institute of Agrobiological Sciences, Kannondai, Tsukuba, Ibaraki 305-8602, Japan. Database classification: Plant databases. Organism: Tracheophyta (Taxonomy ID: 58023). Reference: (1999), Vol. 27, No. 1: 297-300. Need for user registration: Not available.

  7. Modeling Secondary Organic Aerosol Formation From Emissions of Combustion Sources

    Science.gov (United States)

    Jathar, Shantanu Hemant

    Atmospheric aerosols exert a large influence on the Earth's climate and cause adverse public health effects, reduced visibility and material degradation. Secondary organic aerosol (SOA), defined as the aerosol mass arising from the oxidation products of gas-phase organic species, accounts for a significant fraction of the submicron atmospheric aerosol mass. Yet, there are large uncertainties surrounding the sources, atmospheric evolution and properties of SOA. This thesis combines laboratory experiments, extensive data analysis and global modeling to investigate the contribution of semi-volatile and intermediate volatility organic compounds (SVOC and IVOC) from combustion sources to SOA formation. The goals are to quantify the contribution of these emissions to ambient PM and to evaluate and improve models to simulate its formation. To create a database for model development and evaluation, a series of smog chamber experiments were conducted on evaporated fuels, which served as surrogates for real-world combustion emissions. Diesel formed the most SOA, followed by conventional jet fuel / jet fuel derived from natural gas, gasoline and jet fuel derived from coal. The variability in SOA formation from actual combustion emissions can be partially explained by the composition of the fuel. Several models were developed and tested along with existing models using SOA data from smog chamber experiments conducted using evaporated fuels (this work: gasoline, Fischer-Tropsch fuels, jet fuel, diesels) and published data on dilute combustion emissions (aircraft, on- and off-road gasoline, on- and off-road diesel, wood burning, biomass burning). For all of the SOA data, existing models under-predicted SOA formation if SVOC/IVOC were not included. For the evaporated fuel experiments, when SVOC/IVOC were included, predictions using the existing SOA model were brought to within a factor of two of measurements with minor adjustments to model parameterizations. Further, a volatility

  8. Modelling the behaviour of organic degradation products

    International Nuclear Information System (INIS)

    Cross, J.E.; Ewart, F.T.; Greenfield, B.F.

    1989-03-01

    Results are presented from recent studies at Harwell which show that the degradation products which are formed when certain organic waste materials are exposed to the alkaline conditions typical of a cementitious environment, can enhance the solubility of plutonium, even at pH values as high as 12, by significant factors. Characterisation of the degradation products has been undertaken but the solubility enhancement does not appear to be related to the concentration of any of the major organic species that have been identified in the solutions. While it has not been possible to identify by analysis the organic ligand responsible for the increased solubility of plutonium, the behaviour of D-Saccharic acid does approach the behaviour of the degradation products. The PHREEQE code has been used to simulate the solubility of plutonium in the presence of D-Saccharic acid and other model degradation products, in order to explain the solubility enhancement. The extrapolation of the experimental conditions to the repository is the major objective, but in this work the ability of a model to predict the behaviour of plutonium over a range of experimental conditions has been tested. (author)

  9. Modeling and Design of Capacitive Micromachined Ultrasonic Transducers Based-on Database Optimization

    International Nuclear Information System (INIS)

    Chang, M W; Gwo, T J; Deng, T M; Chang, H C

    2006-01-01

    A Capacitive Micromachined Ultrasonic Transducer (CMUT) simulation database, based on electromechanical coupling theory, has been fully developed for versatile capacitive microtransducer design and analysis. Both arithmetic and graphic configurations are used to find optimal parameters based on serial coupling simulations. The key modeling parameters identified can effectively improve the microtransducer's characteristics and reliability. This method could be used to reduce design time and fabrication cost, eliminating trial-and-error procedures. Various microtransducers, with optimized characteristics, can be developed economically using the developed database. A simulation to design an ultrasonic microtransducer is completed as a worked example. The dependence of the output response on membrane geometry and vibration displacement is demonstrated. The electromechanical coupling effects, mechanical impedance and frequency response are also taken into consideration for optimal microstructures. The microdevice parameters with the best output signal response are predicted, and microfabrication processing constraints and realities are also taken into consideration

  10. Experiment Databases

    Science.gov (United States)

    Vanschoren, Joaquin; Blockeel, Hendrik

    Next to running machine learning algorithms based on inductive queries, much can be learned by immediately querying the combined results of many prior studies. Indeed, all around the globe, thousands of machine learning experiments are being executed on a daily basis, generating a constant stream of empirical information on machine learning techniques. While the information contained in these experiments might have many uses beyond their original intent, results are typically described very concisely in papers and discarded afterwards. If we properly store and organize these results in central databases, they can be immediately reused for further analysis, thus boosting future research. In this chapter, we propose the use of experiment databases: databases designed to collect all the necessary details of these experiments, and to intelligently organize them in online repositories to enable fast and thorough analysis of a myriad of collected results. They constitute an additional, queriable source of empirical meta-data based on principled descriptions of algorithm executions, without reimplementing the algorithms in an inductive database. As such, they engender a very dynamic, collaborative approach to experimentation, in which experiments can be freely shared, linked together, and immediately reused by researchers all over the world. They can be set up for personal use, to share results within a lab or to create open, community-wide repositories. Here, we provide a high-level overview of their design, and use an existing experiment database to answer various interesting research questions about machine learning algorithms and to verify a number of recent studies.
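
    To make the idea concrete, here is a minimal sketch, under an invented schema, of the kind of meta-level query an experiment database supports: store algorithm runs, then ask which algorithm achieved the best result on each dataset. The table, columns and figures are placeholders, not the chapter's actual repository.

```python
# Toy experiment database: each row records one algorithm run. The meta-level
# query asks for the best recorded accuracy per dataset. Schema and numbers
# are invented for illustration only.

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE runs (algorithm TEXT, dataset TEXT, params TEXT, accuracy REAL)")
con.executemany("INSERT INTO runs VALUES (?, ?, ?, ?)", [
    ("C4.5",         "iris",   "default",   0.94),
    ("RandomForest", "iris",   "trees=100", 0.96),
    ("C4.5",         "credit", "default",   0.71),
    ("RandomForest", "credit", "trees=100", 0.78),
])

# SQLite returns the bare 'algorithm' column from the row holding MAX(accuracy).
for dataset, algorithm, acc in con.execute(
        "SELECT dataset, algorithm, MAX(accuracy) FROM runs GROUP BY dataset"):
    print(f"{dataset}: best = {algorithm} ({acc:.2f})")
```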

  11. Development of fauna, micro flora and aquatic organisms database at the vicinity of Gamma Green House in Malaysian Nuclear Agency

    International Nuclear Information System (INIS)

    Nur Humaira Lau Abdullah; Mohd Zaidan Kandar; Phua Choo Kwai Hoe

    2012-01-01

    The biodiversity database of non-human biota, consisting of flora, fauna, aquatic organisms and micro flora in the vicinity of the Gamma Greenhouse (GGH) in the Malaysian Nuclear Agency, is under development. In 2011, a workshop on biodiversity and on sampling of flora and fauna was conducted by local experts in BAB to provide the necessary knowledge to all those involved in this study. Since then, several field surveys have been successfully carried out covering terrestrial and aquatic ecosystems in order to observe species distribution patterns and to collect non-human biota samples. The surveys were conducted according to standard survey procedures, and the samples collected were preserved and identified using appropriate techniques. In this paper, the work on fauna, micro flora and aquatic organisms is presented. The fauna and micro flora specimens are kept in the Biodiversity Laboratory in Block 44. Based on those field surveys, several species of terrestrial vertebrate and invertebrate organisms were recorded. A diverse group of mushrooms was found at the study site. The presence of several aquatic zooplankton (for example Cyclops and Nauplius), phytoplankton and bacteria (for example Klebsiella sp., Enterobacter sp. and others) in the nearby pond indicated that the pond ecosystem is in good condition. Through this study, a preliminary biodiversity list of fauna in the vicinity of the nuclear facility, GGH, has been developed, and the work will continue toward complete baseline data development. In addition, many principles and methodologies used in ecological surveys have been learnt and applied, but the skills involved still need to be refined through workshops, collaboration and consultation with local experts. Thus far, several agencies have been approached for collaboration and consultation, such as Institut Perikanan Malaysia, UKM, UPM and UMT. (author)

  12. Transposing an active fault database into a seismic hazard fault model for nuclear facilities. Pt. 1. Building a database of potentially active faults (BDFA) for metropolitan France

    Energy Technology Data Exchange (ETDEWEB)

    Jomard, Herve; Cushing, Edward Marc; Baize, Stephane; Chartier, Thomas [IRSN - Institute of Radiological Protection and Nuclear Safety, Fontenay-aux-Roses (France); Palumbo, Luigi; David, Claire [Neodyme, Joue les Tours (France)

    2017-07-01

    The French Institute for Radiation Protection and Nuclear Safety (IRSN), with the support of the Ministry of Environment, compiled a database (BDFA) to define and characterize known potentially active faults of metropolitan France. The general structure of BDFA is presented in this paper. BDFA reports to date 136 faults and represents a first step toward the implementation of seismic source models that would be used for both deterministic and probabilistic seismic hazard calculations. A robustness index was introduced, highlighting that less than 15% of the database is controlled by reasonably complete data sets. An example of transposing BDFA into a fault source model for PSHA (probabilistic seismic hazard analysis) calculation is presented for the Upper Rhine Graben (eastern France) and exploited in the companion paper (Chartier et al., 2017, hereafter Part 2) in order to illustrate ongoing challenges for probabilistic fault-based seismic hazard calculations.

  13. Performance of a TV white space database with different terrain resolutions and propagation models

    Directory of Open Access Journals (Sweden)

    A. M. Fanan

    2017-11-01

    Full Text Available Cognitive Radio has now become a realistic option for the solution of the spectrum scarcity problem in wireless communication. TV channels (the primary user) can be protected from secondary-user interference by accurate prediction of TV White Spaces (TVWS) by using appropriate propagation modelling. In this paper we address two related aspects of channel occupancy prediction for cognitive radio. Firstly we investigate the best combination of empirical propagation model and spatial resolution of terrain data for predicting TVWS by examining the performance of three propagation models (Extended-Hata, Davidson-Hata and Egli) in the TV band 470 to 790 MHz, along with terrain data resolutions of 1000, 100 and 30 m, when compared with a comprehensive set of propagation measurements taken in randomly-selected locations around Hull, UK. Secondly we describe how such models can be integrated into a database-driven tool for cognitive radio channel selection within the TVWS environment.
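
    As a pointer to what such empirical propagation modelling involves, the sketch below implements the Egli median path-loss formula in its commonly quoted form (frequency in MHz, distance in km, antenna heights in m); it is an illustrative stand-in, not the paper's calibrated implementation.

```python
# Commonly quoted form of the Egli median path-loss model. Treat the
# coefficients as indicative; this is not the paper's tuned implementation.

import math

def egli_path_loss_db(f_mhz: float, d_km: float, hb_m: float, hm_m: float) -> float:
    """Median path loss (dB): f in MHz, d in km, base/mobile antenna heights in m."""
    if hm_m <= 10.0:
        mobile_term = 76.3 - 10.0 * math.log10(hm_m)
    else:
        mobile_term = 85.9 - 20.0 * math.log10(hm_m)
    return (20.0 * math.log10(f_mhz)
            + 40.0 * math.log10(d_km)
            - 20.0 * math.log10(hb_m)
            + mobile_term)

# Example: a UHF TV channel at 600 MHz over a 10 km path.
print(round(egli_path_loss_db(600.0, 10.0, hb_m=30.0, hm_m=1.5), 1), "dB")
```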

  14. The Fluka Linebuilder and Element Database: Tools for Building Complex Models of Accelerators Beam Lines

    CERN Document Server

    Mereghetti, A; Cerutti, F; Versaci, R; Vlachoudis, V

    2012-01-01

    Extended FLUKA models of accelerator beam lines can be extremely complex: heavy to manipulate, poorly versatile and prone to mismatched positioning. We developed a framework capable of creating the FLUKA model of an arbitrary portion of a given accelerator, starting from the optics configuration and a few other information provided by the user. The framework includes a builder (LineBuilder), an element database and a series of configuration and analysis scripts. The LineBuilder is a Python program aimed at dynamically assembling complex FLUKA models of accelerator beam lines: positions, magnetic fields and scorings are automatically set up, and geometry details such as apertures of collimators, tilting and misalignment of elements, beam pipes and tunnel geometries can be entered at user’s will. The element database (FEDB) is a collection of detailed FLUKA geometry models of machine elements. This framework has been widely used for recent LHC and SPS beam-machine interaction studies at CERN, and led to a dra...

  15. Emissions databases for polycyclic aromatic compounds in the Canadian Athabasca oil sands region - development using current knowledge and evaluation with passive sampling and air dispersion modelling data

    Science.gov (United States)

    Qiu, Xin; Cheng, Irene; Yang, Fuquan; Horb, Erin; Zhang, Leiming; Harner, Tom

    2018-03-01

    Two speciated and spatially resolved emissions databases for polycyclic aromatic compounds (PACs) in the Athabasca oil sands region (AOSR) were developed. The first database was derived from volatile organic compound (VOC) emissions data provided by the Cumulative Environmental Management Association (CEMA) and the second database was derived from additional data collected within the Joint Canada-Alberta Oil Sands Monitoring (JOSM) program. CALPUFF modelling results for atmospheric polycyclic aromatic hydrocarbons (PAHs), alkylated PAHs, and dibenzothiophenes (DBTs), obtained using each of the emissions databases, are presented and compared with measurements from a passive air monitoring network. The JOSM-derived emissions resulted in better model-measurement agreement in the total PAH concentrations and for most PAH species concentrations compared to results using CEMA-derived emissions. At local sites near oil sands mines, the percent error of the model compared to observations decreased from 30 % using the CEMA-derived emissions to 17 % using the JOSM-derived emissions. The improvement at local sites was likely attributed to the inclusion of updated tailings pond emissions estimated from JOSM activities. In either the CEMA-derived or JOSM-derived emissions scenario, the model underestimated PAH concentrations by a factor of 3 at remote locations. Potential reasons for the disagreement include forest fire emissions, re-emissions of previously deposited PAHs, and long-range transport not considered in the model. Alkylated PAH and DBT concentrations were also significantly underestimated. The CALPUFF model is expected to predict higher concentrations because of the limited chemistry and deposition modelling. Thus the model underestimation of PACs is likely due to gaps in the emissions database for these compounds and uncertainties in the methodology for estimating the emissions. Future work is required that focuses on improving the PAC emissions estimation and

  16. Virtuous organization: A structural equation modeling approach

    Directory of Open Access Journals (Sweden)

    Majid Zamahani

    2013-02-01

    Full Text Available For years, the idea of virtue was unfavorable among researchers and virtues were traditionally considered as culture-specific, relativistic and they were supposed to be associated with social conservatism, religious or moral dogmatism, and scientific irrelevance. Virtue and virtuousness have been recently considered seriously among organizational researchers. The proposed study of this paper examines the relationships between leadership, organizational culture, human resource, structure and processes, care for community and virtuous organization. Structural equation modeling is employed to investigate the effects of each variable on other components. The data used in this study consists of questionnaire responses from employees in Payam e Noor University in Yazd province. A total of 250 questionnaires were sent out and a total of 211 valid responses were received. Our results have revealed that all the five variables have positive and significant impacts on virtuous organization. Among the five variables, organizational culture has the most direct impact (0.80) and human resource has the most total impact (0.844) on virtuous organization.

  17. An expression database for roots of the model legume Medicago truncatula under salt stress.

    Science.gov (United States)

    Li, Daofeng; Su, Zhen; Dong, Jiangli; Wang, Tao

    2009-11-11

    Medicago truncatula is a model legume whose genome is currently being sequenced by an international consortium. Abiotic stresses such as salt stress limit plant growth and crop productivity, including those of legumes. We anticipate that studies on M. truncatula will shed light on other economically important legumes across the world. Here, we report the development of a database called MtED that contains gene expression profiles of the roots of M. truncatula based on time-course salt stress experiments using the Affymetrix Medicago GeneChip. Our hope is that MtED will provide information to assist in improving abiotic stress resistance in legumes. The results of our microarray experiment with roots of M. truncatula under 180 mM sodium chloride were deposited in the MtED database. Additionally, sequence and annotation information regarding microarray probe sets were included. MtED provides functional category analysis based on Gene and GeneBins Ontology, and other Web-based tools for querying and retrieving query results, browsing pathways and transcription factor families, showing metabolic maps, and comparing and visualizing expression profiles. Utilities like mapping probe sets to genome of M. truncatula and In-Silico PCR were implemented by BLAT software suite, which were also available through MtED database. MtED was built in the PHP script language and as a MySQL relational database system on a Linux server. It has an integrated Web interface, which facilitates ready examination and interpretation of the results of microarray experiments. It is intended to help in selecting gene markers to improve abiotic stress resistance in legumes. MtED is available at http://bioinformatics.cau.edu.cn/MtED/.

  18. An expression database for roots of the model legume Medicago truncatula under salt stress

    Directory of Open Access Journals (Sweden)

    Dong Jiangli

    2009-11-01

    Full Text Available Abstract Background Medicago truncatula is a model legume whose genome is currently being sequenced by an international consortium. Abiotic stresses such as salt stress limit plant growth and crop productivity, including those of legumes. We anticipate that studies on M. truncatula will shed light on other economically important legumes across the world. Here, we report the development of a database called MtED that contains gene expression profiles of the roots of M. truncatula based on time-course salt stress experiments using the Affymetrix Medicago GeneChip. Our hope is that MtED will provide information to assist in improving abiotic stress resistance in legumes. Description The results of our microarray experiment with roots of M. truncatula under 180 mM sodium chloride were deposited in the MtED database. Additionally, sequence and annotation information regarding microarray probe sets were included. MtED provides functional category analysis based on Gene and GeneBins Ontology, and other Web-based tools for querying and retrieving query results, browsing pathways and transcription factor families, showing metabolic maps, and comparing and visualizing expression profiles. Utilities like mapping probe sets to genome of M. truncatula and In-Silico PCR were implemented by BLAT software suite, which were also available through MtED database. Conclusion MtED was built in the PHP script language and as a MySQL relational database system on a Linux server. It has an integrated Web interface, which facilitates ready examination and interpretation of the results of microarray experiments. It is intended to help in selecting gene markers to improve abiotic stress resistance in legumes. MtED is available at http://bioinformatics.cau.edu.cn/MtED/.

  19. Analysis of isotropic turbulence using a public database and the Web service model, and applications to study subgrid models

    Science.gov (United States)

    Meneveau, Charles; Yang, Yunke; Perlman, Eric; Wan, Minpin; Burns, Randal; Szalay, Alex; Chen, Shiyi; Eyink, Gregory

    2008-11-01

    A public database system archiving a direct numerical simulation (DNS) data set of isotropic, forced turbulence is used for studying basic turbulence dynamics. The data set consists of the DNS output on 1024-cubed spatial points and 1024 time-samples spanning about one large-scale turn-over timescale. This complete space-time history of turbulence is accessible to users remotely through an interface that is based on the Web-services model (see http://turbulence.pha.jhu.edu). Users may write and execute analysis programs on their host computers, while the programs make subroutine-like calls that request desired parts of the data over the network. The architecture of the database is briefly explained, as are some of the new functions such as Lagrangian particle tracking and spatial box-filtering. These tools are used to evaluate and compare subgrid stresses and models.
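
    The "subroutine-like call over the network" pattern described above can be sketched as follows; the endpoint, parameter names and token are hypothetical placeholders illustrating the idea, not the turbulence database's real Web-services API.

```python
# Schematic client for a remote turbulence database. The URL, parameter names
# and token are hypothetical placeholders, not the actual service interface.

import json
import urllib.request

ENDPOINT = "https://example.org/turbulence/getvelocity"   # hypothetical
TOKEN = "demo-token"                                       # hypothetical

def get_velocity(points, time):
    """Request velocity vectors at (x, y, z) points for a given time."""
    payload = json.dumps({"token": TOKEN, "time": time, "points": points}).encode()
    req = urllib.request.Request(ENDPOINT, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["velocity"]   # list of [u, v, w]

def box_filter(values):
    """Crude box filter: average the sampled vectors component-wise."""
    n = len(values)
    return [sum(v[i] for v in values) / n for i in range(3)]

if __name__ == "__main__":
    pts = [[0.1, 0.2, 0.3], [0.1, 0.2, 0.35], [0.1, 0.25, 0.3]]
    vel = get_velocity(pts, time=0.364)   # network call to the (hypothetical) service
    print("filtered velocity:", box_filter(vel))
```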

  20. Dynamic Model for Hydro-Turbine Generator Units Based on a Database Method for Guide Bearings

    Directory of Open Access Journals (Sweden)

    Yong Xu

    2013-01-01

    Full Text Available A suitable dynamic model of rotor system is of great significance not only for supplying knowledge of the fault mechanism, but also for assisting in machine health monitoring research. Many techniques have been developed for properly modeling the radial vibration of large hydro-turbine generator units. However, an applicable dynamic model has not yet been reported in literature due to the complexity of the boundary conditions and exciting forces. In this paper, a finite element (FE rotor dynamic model of radial vibration taking account of operating conditions is proposed. A brief and practical database method is employed to model the guide bearing. Taking advantage of the method, rotating speed and bearing clearance can be considered in the model. A novel algorithm, which can take account of both transient and steady-state analysis, is proposed to solve the model. Dynamic response for rotor model of 125 MW hydro-turbine generator units in Gezhouba Power Station is simulated. Field data from Optimal Maintenance Information System for Hydro power plants (HOMIS are analyzed compared with the simulation. Results illustrate the application value of the model in providing knowledge of the fault mechanism and in failure diagnosis.

  1. Conceptual Model Formalization in a Semantic Interoperability Service Framework: Transforming Relational Database Schemas to OWL.

    Science.gov (United States)

    Bravo, Carlos; Suarez, Carlos; González, Carolina; López, Diego; Blobel, Bernd

    2014-01-01

    Healthcare information is distributed through multiple heterogeneous and autonomous systems. Access to, and sharing of, distributed information sources are a challenging task. To contribute to meeting this challenge, this paper presents a formal, complete and semi-automatic transformation service from Relational Databases to Web Ontology Language. The proposed service makes use of an algorithm that can transform several data models from different domains, mainly by deploying inheritance rules. The paper emphasizes the relevance of integrating the proposed approach into an ontology-based interoperability service to achieve semantic interoperability.
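
    A toy illustration of the general table-to-ontology mapping (table to owl:Class, column to owl:DatatypeProperty, foreign key to owl:ObjectProperty) is sketched below; the rules and the small hospital schema are invented simplifications, not the paper's semi-automatic transformation service.

```python
# Toy mapping of relational metadata to OWL in Turtle syntax. The rules and
# schema are generic simplifications, not the paper's transformation service.

TABLES = {
    "patient":   {"columns": ["id", "name", "birth_date"], "fks": {}},
    "encounter": {"columns": ["id", "date"], "fks": {"patient_id": "patient"}},
}

PREFIXES = ("@prefix ex: <http://example.org/hospital#> .\n"
            "@prefix owl: <http://www.w3.org/2002/07/owl#> .\n"
            "@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .\n")

def to_owl(tables):
    lines = [PREFIXES]
    for table, meta in tables.items():
        lines.append(f"ex:{table.capitalize()} a owl:Class .")
        for col in meta["columns"]:
            lines.append(f"ex:{table}_{col} a owl:DatatypeProperty ; "
                         f"rdfs:domain ex:{table.capitalize()} .")
        for fk_col, target in meta["fks"].items():
            lines.append(f"ex:has_{target} a owl:ObjectProperty ; "
                         f"rdfs:domain ex:{table.capitalize()} ; "
                         f"rdfs:range ex:{target.capitalize()} .")
        lines.append("")
    return "\n".join(lines)

print(to_owl(TABLES))
```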

  2. Object-Oriented Database for Managing Building Modeling Components and Metadata: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Long, N.; Fleming, K.; Brackney, L.

    2011-12-01

    Building simulation enables users to explore and evaluate multiple building designs. When tools for optimization, parametrics, and uncertainty analysis are combined with analysis engines, the sheer number of discrete simulation datasets makes it difficult to keep track of the inputs. The integrity of the input data is critical to designers, engineers, and researchers for code compliance, validation, and building commissioning long after the simulations are finished. This paper discusses an application that stores inputs needed for building energy modeling in a searchable, indexable, flexible, and scalable database to help address the problem of managing simulation input data.

  3. A Model-driven Role-based Access Control for SQL Databases

    Directory of Open Access Journals (Sweden)

    Raimundas Matulevičius

    2015-07-01

    Full Text Available Nowadays security has become an important aspect in information systems engineering. A mainstream method for information system security is Role-based Access Control (RBAC), which restricts system access to authorised users. While the benefits of RBAC are widely acknowledged, the implementation and administration of RBAC policies remains a human intensive activity, typically postponed until the implementation and maintenance phases of system development. This deferred security engineering approach makes it difficult for security requirements to be accurately captured and for the system’s implementation to be kept aligned with these requirements as the system evolves. In this paper we propose a model-driven approach to manage SQL database access under the RBAC paradigm. The starting point of the approach is an RBAC model captured in SecureUML. This model is automatically translated to Oracle Database views and instead-of trigger code, which implements the security constraints. The approach has been fully instrumented as a prototype and its effectiveness has been validated by means of a case study.
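
    The target pattern described above, access control enforced through views and instead-of triggers, can be roughly sketched as follows. The sketch only generates SQL text from a small hard-coded role/permission description; the role model, table names and the app_security.current_role() helper are invented for illustration and do not reproduce the paper's SecureUML-to-Oracle transformation.

```python
# Rough sketch of "RBAC as views + instead-of triggers": generate SQL text
# from a tiny role model. Role names, table names and app_security.current_role()
# are invented placeholders, not the paper's generated code.

PERMISSIONS = {
    # role     : (table,    allowed actions)
    "clerk"    : ("orders", {"SELECT"}),
    "manager"  : ("orders", {"SELECT", "UPDATE"}),
}

def secured_view_ddl(table, roles_with_select):
    in_list = ", ".join(f"'{r.upper()}'" for r in sorted(roles_with_select))
    return (f"CREATE OR REPLACE VIEW {table}_v AS\n"
            f"  SELECT * FROM {table}\n"
            f"  WHERE app_security.current_role() IN ({in_list});")

def update_trigger_ddl(table, roles_with_update):
    in_list = ", ".join(f"'{r.upper()}'" for r in sorted(roles_with_update))
    return (f"CREATE OR REPLACE TRIGGER {table}_v_upd\n"
            f"INSTEAD OF UPDATE ON {table}_v FOR EACH ROW\n"
            f"BEGIN\n"
            f"  IF app_security.current_role() NOT IN ({in_list}) THEN\n"
            f"    RAISE_APPLICATION_ERROR(-20001, 'update not permitted for role');\n"
            f"  END IF;\n"
            f"  UPDATE {table} SET /* ... column assignments ... */ id = :NEW.id\n"
            f"  WHERE id = :OLD.id;\n"
            f"END;")

select_roles = {r for r, (_, acts) in PERMISSIONS.items() if "SELECT" in acts}
update_roles = {r for r, (_, acts) in PERMISSIONS.items() if "UPDATE" in acts}
print(secured_view_ddl("orders", select_roles))
print(update_trigger_ddl("orders", update_roles))
```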

  4. A database model for evaluating material accountability safeguards effectiveness against protracted theft

    International Nuclear Information System (INIS)

    Sicherman, A.; Fortney, D.S.; Patenaude, C.J.

    1993-07-01

    DOE Material Control and Accountability Order 5633.3A requires that facilities handling special nuclear material evaluate their effectiveness against protracted theft (repeated thefts of small quantities of material, typically occurring over an extended time frame, to accumulate a goal quantity). Because a protracted theft attempt can extend over time, material accountability-like (MA) safeguards may help detect a protracted theft attempt in progress. Inventory anomalies and material not being in its authorized location when requested for processing are examples of MA detection mechanisms. Crediting such detection in evaluations, however, requires taking into account potential insider subversion of MA safeguards. In this paper, the authors describe a database model for evaluating MA safeguards effectiveness against protracted theft that addresses potential subversion. The model includes a detailed yet practical structure for characterizing various types of MA activities, lists of potential insider MA defeat methods and access/authority related to MA activities, and an initial implementation of built-in MA detection probabilities. This database model, implemented in the new Protracted Insider module of ASSESS (Analytic System and Software for Evaluating Safeguards and Security), helps facilitate the systematic collection of relevant information about MA activity steps and helps 'standardize' MA safeguards evaluations.

  5. Emissions databases for polycyclic aromatic compounds in the Canadian Athabasca oil sands region – development using current knowledge and evaluation with passive sampling and air dispersion modelling data

    Directory of Open Access Journals (Sweden)

    X. Qiu

    2018-03-01

    Full Text Available Two speciated and spatially resolved emissions databases for polycyclic aromatic compounds (PACs) in the Athabasca oil sands region (AOSR) were developed. The first database was derived from volatile organic compound (VOC) emissions data provided by the Cumulative Environmental Management Association (CEMA) and the second database was derived from additional data collected within the Joint Canada–Alberta Oil Sands Monitoring (JOSM) program. CALPUFF modelling results for atmospheric polycyclic aromatic hydrocarbons (PAHs), alkylated PAHs, and dibenzothiophenes (DBTs), obtained using each of the emissions databases, are presented and compared with measurements from a passive air monitoring network. The JOSM-derived emissions resulted in better model–measurement agreement in the total PAH concentrations and for most PAH species concentrations compared to results using CEMA-derived emissions. At local sites near oil sands mines, the percent error of the model compared to observations decreased from 30 % using the CEMA-derived emissions to 17 % using the JOSM-derived emissions. The improvement at local sites was likely attributed to the inclusion of updated tailings pond emissions estimated from JOSM activities. In either the CEMA-derived or JOSM-derived emissions scenario, the model underestimated PAH concentrations by a factor of 3 at remote locations. Potential reasons for the disagreement include forest fire emissions, re-emissions of previously deposited PAHs, and long-range transport not considered in the model. Alkylated PAH and DBT concentrations were also significantly underestimated. The CALPUFF model is expected to predict higher concentrations because of the limited chemistry and deposition modelling. Thus the model underestimation of PACs is likely due to gaps in the emissions database for these compounds and uncertainties in the methodology for estimating the emissions. Future work is required that focuses on improving the PAC

  6. A parallel model for SQL astronomical databases based on solid state storage. Application to the Gaia Archive PostgreSQL database

    Science.gov (United States)

    González-Núñez, J.; Gutiérrez-Sánchez, R.; Salgado, J.; Segovia, J. C.; Merín, B.; Aguado-Agelet, F.

    2017-07-01

    Query planning and optimisation algorithms in most popular relational databases were developed at a time when hard disk drives were the only storage technology available. The advent of devices with higher parallel random access capacity, such as solid state disks, opens up the way for intra-machine parallel computing over large datasets. We describe a two-phase parallel model for the implementation of heavy analytical processes in single-instance PostgreSQL astronomical databases. This model is applied to two frequent astronomical problems, density maps and crossmatch computation with Quad Tree Cube (Q3C) indexes. They are implemented as part of the relational database infrastructure for the Gaia Archive and their performance is assessed. An improvement of a factor of 28.40 in comparison to sequential execution is observed in the reference implementation for a histogram computation. Speedup ratios of 3.7 and 4.0 are attained for the reference positional crossmatches considered. We observe large performance enhancements over sequential execution for both CPU and disk access intensive computations, suggesting these methods might be useful with the growing data volumes in Astronomy.
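
    The two-phase idea, partition the data, run partial aggregations in parallel, then merge, can be illustrated with an in-memory stand-in. The sketch below uses Python's multiprocessing pool on synthetic magnitudes rather than PostgreSQL workers, so it mirrors the strategy only, not the Gaia Archive implementation.

```python
# In-memory stand-in for the two-phase parallel aggregation strategy:
# phase 1 computes partial histograms on data partitions in parallel,
# phase 2 merges them. Synthetic data, not the Gaia Archive PostgreSQL setup.

from multiprocessing import Pool
import random

BINS = [(8 + 0.5 * i, 8.5 + 0.5 * i) for i in range(20)]   # magnitude bins 8..18

def partial_histogram(chunk):
    counts = [0] * len(BINS)
    for mag in chunk:
        for i, (lo, hi) in enumerate(BINS):
            if lo <= mag < hi:
                counts[i] += 1
                break
    return counts

def merge(partials):
    return [sum(col) for col in zip(*partials)]

if __name__ == "__main__":
    random.seed(0)
    magnitudes = [random.gauss(14.0, 2.0) for _ in range(200_000)]
    n_workers = 4
    size = len(magnitudes) // n_workers
    chunks = [magnitudes[i * size:(i + 1) * size] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        partials = pool.map(partial_histogram, chunks)   # phase 1, in parallel
    histogram = merge(partials)                          # phase 2, sequential merge
    print(histogram)
```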

  7. Prediction model of potential hepatocarcinogenicity of rat hepatocarcinogens using a large-scale toxicogenomics database

    International Nuclear Information System (INIS)

    Uehara, Takeki; Minowa, Yohsuke; Morikawa, Yuji; Kondo, Chiaki; Maruyama, Toshiyuki; Kato, Ikuo; Nakatsu, Noriyuki; Igarashi, Yoshinobu; Ono, Atsushi; Hayashi, Hitomi; Mitsumori, Kunitoshi; Yamada, Hiroshi; Ohno, Yasuo; Urushidani, Tetsuro

    2011-01-01

    The present study was performed to develop a robust gene-based prediction model for early assessment of potential hepatocarcinogenicity of chemicals in rats by using our toxicogenomics database, TG-GATEs (Genomics-Assisted Toxicity Evaluation System developed by the Toxicogenomics Project in Japan). The positive training set consisted of high- or middle-dose groups that received 6 different non-genotoxic hepatocarcinogens during a 28-day period. The negative training set consisted of high- or middle-dose groups of 54 non-carcinogens. Support vector machine combined with wrapper-type gene selection algorithms was used for modeling. Consequently, our best classifier yielded prediction accuracies for hepatocarcinogenicity of 99% sensitivity and 97% specificity in the training data set, and false positive prediction was almost completely eliminated. Pathway analysis of feature genes revealed that the mitogen-activated protein kinase p38- and phosphatidylinositol-3-kinase-centered interactome and the v-myc myelocytomatosis viral oncogene homolog-centered interactome were the 2 most significant networks. The usefulness and robustness of our predictor were further confirmed in an independent validation data set obtained from the public database. Interestingly, similar positive predictions were obtained in several genotoxic hepatocarcinogens as well as non-genotoxic hepatocarcinogens. These results indicate that the expression profiles of our newly selected candidate biomarker genes might be common characteristics in the early stage of carcinogenesis for both genotoxic and non-genotoxic carcinogens in the rat liver. Our toxicogenomic model might be useful for the prospective screening of hepatocarcinogenicity of compounds and prioritization of compounds for carcinogenicity testing. - Highlights: →We developed a toxicogenomic model to predict hepatocarcinogenicity of chemicals. →The optimized model consisting of 9 probes had 99% sensitivity and 97% specificity.
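
    For readers unfamiliar with the modeling approach (a support vector machine combined with wrapper-type gene selection), here is a generic scikit-learn sketch on synthetic data; it illustrates the class of method only and does not use the TG-GATEs data or reproduce the reported 99 % sensitivity and 97 % specificity.

```python
# Generic illustration of "SVM + wrapper-type feature (gene) selection" on
# synthetic data. Not the TG-GATEs data; will not reproduce the reported
# performance. It only demonstrates the family of methods described above.

from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, LinearSVC

# Synthetic "expression" matrix: 120 treatment groups x 500 probe sets.
X, y = make_classification(n_samples=120, n_features=500, n_informative=15,
                           n_redundant=10, random_state=0)

model = Pipeline([
    ("scale",  StandardScaler()),
    ("select", RFE(LinearSVC(C=0.1, dual=False, max_iter=5000),
                   n_features_to_select=9)),      # keep a small gene signature
    ("svm",    SVC(kernel="linear", C=1.0)),
])

scores = cross_val_score(model, X, y, cv=5)
print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```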

  8. An open source web interface for linking models to infrastructure system databases

    Science.gov (United States)

    Knox, S.; Mohamed, K.; Harou, J. J.; Rheinheimer, D. E.; Medellin-Azuara, J.; Meier, P.; Tilmant, A.; Rosenberg, D. E.

    2016-12-01

    Models of networked engineered resource systems such as water or energy systems are often built collaboratively with developers from different domains working at different locations. These models can be linked to large scale real world databases, and they are constantly being improved and extended. As the development and application of these models becomes more sophisticated, and the computing power required for simulations and/or optimisations increases, so has the need for online services and tools which enable the efficient development and deployment of these models. Hydra Platform is an open source, web-based data management system, which allows modellers of network-based models to remotely store network topology and associated data in a generalised manner, allowing it to serve multiple disciplines. Hydra Platform uses a web API using JSON to allow external programs (referred to as `Apps') to interact with its stored networks and perform actions such as importing data, running models, or exporting the networks to different formats. Hydra Platform supports multiple users accessing the same network and has a suite of functions for managing users and data. We present ongoing development in Hydra Platform, the Hydra Web User Interface, through which users can collaboratively manage network data and models in a web browser. The web interface allows multiple users to graphically access, edit and share their networks, run apps and view results. Through apps, which are located on the server, the web interface can give users access to external data sources and models without the need to install or configure any software. This also ensures model results can be reproduced by removing platform or version dependence. Managing data and deploying models via the web interface provides a way for multiple modellers to collaboratively manage data, deploy and monitor model runs and analyse results.
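
    The "App talks to the platform over a JSON web API" pattern can be sketched as follows; the base URL, function name and payload fields are illustrative placeholders rather than Hydra Platform's documented interface.

```python
# Schematic of an "App" calling a web data-management service over JSON.
# The URL, request shape and field names are illustrative placeholders and
# are not Hydra Platform's documented API.

import json
import urllib.request

BASE_URL = "https://example.org/hydra/json"     # hypothetical deployment

def call(function, **kwargs):
    """POST a JSON request of the form {function: {args}} and return the reply."""
    payload = json.dumps({function: kwargs}).encode()
    req = urllib.request.Request(BASE_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    # Add a small water network with one reservoir, one demand node and a link.
    network = call("add_network",
                   name="demo basin",
                   nodes=[{"name": "reservoir"}, {"name": "city"}],
                   links=[{"name": "main canal", "node_1": "reservoir", "node_2": "city"}])
    print("created network id:", network.get("network_id"))
```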

  9. Database Description - RMG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description. Database name: RMG. Contact: National Institute of Agrobiological Sciences, Ibaraki 305-8602, Japan. Database classification: Nucleotide Sequence Databases. Organism: Oryza sativa Japonica Group (Taxonomy ID: 39947). Reference journal: Mol Genet Genomics (2002) 268: 434–445. Need for user registration: Not available.

  10. Database Description - KOME | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description. Database name: KOME. Contact: Shoshi Kikuchi, Plant Genome Research Unit, National Institute of Agrobiological Sciences. Database classification: Plant databases - Rice. Organism: Oryza sativa (Taxonomy ID: 4530). Database description: information about approximately ... Reference journal: PLoS One. 2007 Nov 28; 2(11):e1235 (Hayashizaki Y, Kikuchi S.). Related databases: Rice mutant panel database (Tos17); A Database of Plant Cis-acting Regulatory ...

  11. Biomine: predicting links between biological entities using network models of heterogeneous databases

    Directory of Open Access Journals (Sweden)

    Eronen Lauri

    2012-06-01

    Full Text Available Abstract Background Biological databases contain large amounts of data concerning the functions and associations of genes and proteins. Integration of data from several such databases into a single repository can aid the discovery of previously unknown connections spanning multiple types of relationships and databases. Results Biomine is a system that integrates cross-references from several biological databases into a graph model with multiple types of edges, such as protein interactions, gene-disease associations and gene ontology annotations. Edges are weighted based on their type, reliability, and informativeness. We present Biomine and evaluate its performance in link prediction, where the goal is to predict pairs of nodes that will be connected in the future, based on current data. In particular, we formulate protein interaction prediction and disease gene prioritization tasks as instances of link prediction. The predictions are based on a proximity measure computed on the integrated graph. We consider and experiment with several such measures, and perform a parameter optimization procedure where different edge types are weighted to optimize link prediction accuracy. We also propose a novel method for disease-gene prioritization, defined as finding a subset of candidate genes that cluster together in the graph. We experimentally evaluate Biomine by predicting future annotations in the source databases and prioritizing lists of putative disease genes. Conclusions The experimental results show that Biomine has strong potential for predicting links when a set of selected candidate links is available. The predictions obtained using the entire Biomine dataset are shown to clearly outperform ones obtained using any single source of data alone, when different types of links are suitably weighted. In the gene prioritization task, an established reference set of disease-associated genes is useful, but the results show that under favorable
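
    One simple proximity measure of the kind discussed above is the probability of the best path, i.e. the product of edge reliabilities along the most reliable path between two nodes; the sketch below computes it with networkx on a toy graph. The graph and weights are invented, and Biomine's actual measures are more elaborate.

```python
# Toy "best path probability" proximity in a weighted heterogeneous graph:
# maximise the product of edge probabilities by minimising the sum of
# -log(probability). Invented graph; Biomine's own measures are richer.

import math
import networkx as nx

G = nx.Graph()
edges = [
    ("GeneA", "ProteinA", 0.9),     # codes-for
    ("ProteinA", "ProteinB", 0.7),  # interacts-with
    ("ProteinB", "GeneB", 0.9),     # coded-by
    ("GeneB", "DiseaseX", 0.6),     # associated-with
    ("GeneA", "GO:0006915", 0.8),   # annotated-with
]
for u, v, p in edges:
    G.add_edge(u, v, prob=p, cost=-math.log(p))

path = nx.shortest_path(G, "GeneA", "DiseaseX", weight="cost")
best_prob = math.exp(-nx.shortest_path_length(G, "GeneA", "DiseaseX", weight="cost"))
print(path, "proximity =", round(best_prob, 3))
```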

  12. NoSQL databases

    OpenAIRE

    Mrozek, Jakub

    2012-01-01

    This thesis deals with database systems referred to as NoSQL databases. In the second chapter, I explain basic terms and the theory of database systems. A short explanation is dedicated to database systems based on the relational data model and the SQL standardized query language. Chapter Three explains the concept and history of the NoSQL databases, and also presents database models, major features and the use of NoSQL databases in comparison with traditional database systems. In the fourth ...

  13. The Mouse Tumor Biology Database: A Comprehensive Resource for Mouse Models of Human Cancer.

    Science.gov (United States)

    Krupke, Debra M; Begley, Dale A; Sundberg, John P; Richardson, Joel E; Neuhauser, Steven B; Bult, Carol J

    2017-11-01

    Research using laboratory mice has led to fundamental insights into the molecular genetic processes that govern cancer initiation, progression, and treatment response. Although thousands of scientific articles have been published about mouse models of human cancer, collating information and data for a specific model is hampered by the fact that many authors do not adhere to existing annotation standards when describing models. The interpretation of experimental results in mouse models can also be confounded when researchers do not factor in the effect of genetic background on tumor biology. The Mouse Tumor Biology (MTB) database is an expertly curated, comprehensive compendium of mouse models of human cancer. Through the enforcement of nomenclature and related annotation standards, MTB supports aggregation of data about a cancer model from diverse sources and assessment of how genetic background of a mouse strain influences the biological properties of a specific tumor type and model utility. Cancer Res; 77(21); e67-70. ©2017 AACR . ©2017 American Association for Cancer Research.

  14. Web application and database modeling of traffic impact analysis using Google Maps

    Science.gov (United States)

    Yulianto, Budi; Setiono

    2017-06-01

    Traffic impact analysis (TIA) is a traffic study that aims at identifying the impact of traffic generated by development or a change in land use. In addition to identifying the traffic impact, TIA also includes mitigation measures to minimize the arising traffic impact. TIA has become increasingly important since it was defined in the act as one of the requirements for a Building Permit proposal. The act encourages a number of TIA studies in various cities in Indonesia, including Surakarta. For that reason, it is necessary to study the development of TIA by adopting the concept of Transportation Impact Control (TIC) in the implementation of the TIA standard document and multimodal modeling. This includes standardization of TIA technical guidelines, databases and inspection, by providing TIA checklists, monitoring and evaluation. The research was undertaken by collecting historical junction data, modeling the data as a relational database, and building a web user interface with Google Maps libraries for CRUD (Create, Read, Update and Delete) operations on the TIA data. The result is a system that provides information to help make existing TIA documents more transparent, reliable and credible.

  15. The use of extracorporeal membrane oxygenation in blunt thoracic trauma: A study of the Extracorporeal Life Support Organization database.

    Science.gov (United States)

    Jacobs, Jordan V; Hooft, Nicole M; Robinson, Brenton R; Todd, Emily; Bremner, Ross M; Petersen, Scott R; Smith, Michael A

    2015-12-01

    Reports documenting the use of extracorporeal membrane oxygenation (ECMO) after blunt thoracic trauma are scarce. We used a large, multicenter database to examine outcomes when ECMO was used in treating patients with blunt thoracic trauma. We performed a retrospective analysis of ECMO patients in the Extracorporeal Life Support Organization database between 1998 and 2014. The diagnostic code for blunt pulmonary contusion (861.21, DRG International Classification of Diseases-9th Rev.) was used to identify patients treated with ECMO after blunt thoracic trauma. Variations of pre-ECMO respiratory support were also evaluated. The primary outcome was survival to discharge; the secondary outcome was hemorrhagic complication associated with ECMO. Eighty-five patients met inclusion criteria. The mean ± SEM age of the cohort was 28.9 ± 1.1 years; 71 (83.5%) were male. The mean ± SEM pre-ECMO PaO2/FIO2 ratio was 59.7 ± 3.5, and the mean ± SEM pre-ECMO length of ventilation was 94.7 ± 13.2 hours. Pre-ECMO support included inhaled nitric oxide (15 patients, 17.6%), high-frequency oscillation (10, 11.8%), and vasopressor agents (57, 67.1%). The mean ± SEM duration of ECMO was 207.4 ± 23.8 hours, and 63 patients (74.1%) were treated with venovenous ECMO. Thirty-two patients (37.6%) underwent invasive procedures before ECMO, and 12 patients (14.1%) underwent invasive procedures while on ECMO. Hemorrhagic complications occurred in 25 cases (29.4%), including 12 patients (14.1%) with surgical site bleeding and 16 (18.8%) with cannula site bleeding (6 patients had both). The rate of survival to discharge was 74.1%. Multivariate analysis showed that shorter duration of ECMO and the use of venovenous ECMO predicted survival. Outcomes after the use of ECMO in blunt thoracic trauma can be favorable. Some trauma patients are appropriate candidates for this therapy. Further study may discern which subpopulations of trauma patients will benefit most from ECMO. Therapeutic

  16. Assessment of tropospheric delay mapping function models in Egypt: Using PTD database model

    Science.gov (United States)

    Abdelfatah, M. A.; Mousa, Ashraf E.; El-Fiky, Gamal S.

    2018-06-01

    For space geodetic measurements, estimates of tropospheric delays are highly correlated with site coordinates and receiver clock biases. Thus, it is important to use the most accurate models for the tropospheric delay to reduce errors in the estimates of the other parameters. Both the zenith delay value and the mapping function should be assigned correctly to reduce such errors. Several mapping function models can treat the troposphere slant delay. The recent models have not been evaluated for local Egyptian climate conditions, so an assessment of these models is needed to choose the most suitable one. The goal of this paper is to test which global mapping functions are most consistent with precise troposphere delay (PTD) mapping functions. The PTD model is derived from radiosonde data using ray tracing and is considered in this paper as the true value. The PTD mapping functions were compared with three recent total mapping function models and another three separate dry and wet mapping function models. The results indicate that the models agree closely up to a zenith angle of 80°. The Saastamoinen and 1/cos z models lag in accuracy, the Niell model performs better than the VMF model, and the Black and Eisner model performs well. The results also indicate that the geometric range error has an insignificant effect on slant delay and that the azimuth anti-symmetric fluctuation is about 1%.
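
    As a quick numerical illustration of two of the simpler mapping functions mentioned above, the sketch below compares the plain 1/cos(z) (= 1/sin(E)) factor with the Black and Eisner form m(E) = 1.001 / sqrt(0.002001 + sin(E)^2), as commonly quoted in the literature. It is illustrative only; the paper's assessment uses ray-traced PTD values as the reference.

```python
# Compare two simple tropospheric mapping functions at several elevations.
# Coefficients are the commonly quoted Black and Eisner values; this is a
# didactic comparison, not the paper's evaluation against PTD.

import math

def simple_cosecant(elev_deg):
    return 1.0 / math.sin(math.radians(elev_deg))

def black_eisner(elev_deg):
    s = math.sin(math.radians(elev_deg))
    return 1.001 / math.sqrt(0.002001 + s * s)

for elev in (90, 30, 15, 10, 5):
    print(f"E={elev:2d} deg  1/sin(E)={simple_cosecant(elev):7.3f}  "
          f"Black-Eisner={black_eisner(elev):7.3f}")
```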

  17. Subject and authorship of records related to the Organization for Tropical Studies (OTS) in BINABITROP, a comprehensive database about Costa Rican biology.

    Science.gov (United States)

    Monge-Nájera, Julián; Nielsen-Muñoz, Vanessa; Azofeifa-Mora, Ana Beatriz

    2013-06-01

    BINABITROP is a bibliographical database of more than 38000 records about the ecosystems and organisms of Costa Rica. In contrast with commercial databases, such as Web of Knowledge and Scopus, which exclude most of the scientific journals published in tropical countries, BINABITROP is a comprehensive record of knowledge on the tropical ecosystems and organisms of Costa Rica. We analyzed its contents in three sites (La Selva, Palo Verde and Las Cruces) and recorded scientific field, taxonomic group and authorship. We found that most records dealt with ecology and systematics, and that most authors published only one article in the study period (1963-2011). Most research was published in four journals: Biotropica, Revista de Biología Tropical/ International Journal of Tropical Biology and Conservation, Zootaxa and Brenesia. This may be the first study of such a comprehensive database for any case of tropical biology literature.

  18. Subject and authorship of records related to the Organization for Tropical Studies (OTS in BINABITROP, a comprehensive database about Costa Rican biology

    Directory of Open Access Journals (Sweden)

    Julián Monge-Nájera

    2013-06-01

    Full Text Available BINABITROP is a bibliographical database of more than 38 000 records about the ecosystems and organisms of Costa Rica. In contrast with commercial databases, such as Web of Knowledge and Scopus, which exclude most of the scientific journals published in tropical countries, BINABITROP is a comprehensive record of knowledge on the tropical ecosystems and organisms of Costa Rica. We analyzed its contents in three sites (La Selva, Palo Verde and Las Cruces) and recorded scientific field, taxonomic group and authorship. We found that most records dealt with ecology and systematics, and that most authors published only one article in the study period (1963-2011). Most research was published in four journals: Biotropica, Revista de Biología Tropical/ International Journal of Tropical Biology and Conservation, Zootaxa and Brenesia. This may be the first study of such a comprehensive database for any case of tropical biology literature.

  19. An Organization's Extended (Soft) Competencies Model

    Science.gov (United States)

    Rosas, João; Macedo, Patrícia; Camarinha-Matos, Luis M.

    One of the steps usually undertaken in partnership formation is the assessment of organizations' competencies. Competencies of a functional or technical nature, which provide specific outcomes, can be considered hard competencies. Yet the very act of collaboration has its own specific requirements, for which the involved organizations must be able to exercise other types of competencies that affect their own performance and the partnership's success. These competencies are more behavioral in nature and can be termed soft competencies. This research aims at addressing the effects of the soft competencies on the performance of the hard ones. An extended competencies model is thus proposed, allowing the construction of adjusted competency profiles, in which the competency levels are adjusted dynamically according to the requirements of collaboration opportunities.

  20. Modeling photocurrent transients in organic solar cells

    International Nuclear Information System (INIS)

    Hwang, I; Greenham, N C

    2008-01-01

    We investigate the transient photocurrents of organic photovoltaic devices in response to a sharp turn-on of illumination, by numerical modeling of the drift-diffusion equations. We show that the photocurrent turn-on dynamics are determined not only by the transport dynamics of free charges, but also by the time required for the population of geminate charge pairs to reach its steady-state value. The dissociation probability of a geminate charge pair is found to be a key parameter in determining the device performance, not only by controlling the efficiency at low intensities, but also in determining the fate of charge pairs formed by bimolecular recombination at high intensities. Bimolecular recombination is shown to reduce the turn-on time at high intensities, since the typical distance traveled by a charge pair is reduced.
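
    A minimal kinetic picture of the turn-on behaviour described above can be written as a single rate equation for the geminate-pair population, dN/dt = G - (k_d + k_f) N, with the photocurrent taken as proportional to the dissociation flux k_d N. The sketch below integrates it numerically; it is a deliberately reduced stand-in for the full drift-diffusion model in the paper, and all rate constants are invented.

```python
# Reduced kinetic stand-in for photocurrent turn-on: geminate pairs are
# generated at rate G once illumination starts, and either dissociate (k_d)
# or decay geminately (k_f). Photocurrent ~ k_d * N. Parameter values are
# invented; the paper solves full drift-diffusion equations instead.

import numpy as np
from scipy.integrate import solve_ivp

G   = 1.0e21   # pair generation rate (m^-3 s^-1), illumination on at t = 0
k_d = 2.0e6    # dissociation rate constant (s^-1)
k_f = 8.0e6    # geminate decay rate constant (s^-1)

def rhs(t, y):
    (n_pair,) = y
    return [G - (k_d + k_f) * n_pair]

t_end = 2.0e-6
sol = solve_ivp(rhs, (0.0, t_end), [0.0], dense_output=True, max_step=t_end / 2000)

t = np.linspace(0.0, t_end, 6)
n = sol.sol(t)[0]
photocurrent = k_d * n                    # arbitrary units (per unit volume)
steady = k_d * G / (k_d + k_f)            # steady-state dissociation flux
for ti, j in zip(t, photocurrent):
    print(f"t = {ti:8.2e} s   J/J_ss = {j / steady:5.3f}")
```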

  1. Computational modeling of Metal-Organic Frameworks

    Science.gov (United States)

    Sung, Jeffrey Chuen-Fai

    In this work, the metal-organic frameworks MIL-53(Cr), DMOF-2,3-NH2Cl, DMOF-2,5-NH2Cl, and HKUST-1 were modeled using molecular mechanics and electronic structure. The effect of electronic polarization on the adsorption of water in MIL-53(Cr) was studied using molecular dynamics simulations of water-loaded MIL-53 systems with both polarizable and non-polarizable force fields. Molecular dynamics simulations of the full systems and DFT calculations on representative framework clusters were utilized to study the difference in nitrogen adsorption between DMOF-2,3-NH2Cl and DMOF-2,5-NH2Cl. Finally, the control of proton conduction in HKUST-1 by complexation of molecules to the Cu open metal site was investigated using the MS-EVB methodology.

  2. The UCSC Genome Browser Database: 2008 update

    DEFF Research Database (Denmark)

    Karolchik, D; Kuhn, R M; Baertsch, R

    2007-01-01

    The University of California, Santa Cruz, Genome Browser Database (GBD) provides integrated sequence and annotation data for a large collection of vertebrate and model organism genomes. Seventeen new assemblies have been added to the database in the past year, for a total coverage of 19 vertebrat...

  3. Database Description - SAHG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description. Database name: SAHG. Contact address: Chie Motono, Tel: +81-3-3599-8067. Database classification: Structure Databases; Protein properties. Organism: Homo sapiens (Taxonomy ID: 9606). Database maintenance site: The Molecular Profiling Research Center for D... Need for user registration: Not available.

  4. A Multiquantum State-to-State Model for the Fundamental States of Air: The Stellar Database

    Science.gov (United States)

    Lino da Silva, M.; Lopez, B.; Guerra, V.; Loureiro, J.

    2012-12-01

    We present a detailed database of vibrationally specific heavy-impact multiquantum rates for transitions between the fundamental states of neutral air species (N2, O2, NO, N and O). The most up-to-date datasets for atom-diatom collisions are first selected from the literature, scaled to accurate vibrational level manifolds obtained using realistic intramolecular potentials, and extrapolated to high temperatures when necessary. For diatom-diatom collisions, vibrationally specific rates are produced using the Forced Harmonic Oscillator theory. An adequate manifold of vibrational levels is obtained from an accurate intermolecular potential, and available intermolecular potentials are approximated by a simplified isotropic Morse potential, or otherwise assumed through scaling of similar potentials. The database's state-specific rates are valid over a large temperature range, from low to very high temperatures, making it suitable for applications such as the modeling of high-enthalpy plasma sources or atmospheric entry. As experimentally determined state-specific rates are scarce, especially at high temperatures, emphasis has rather been put on verifying that the obtained rates are physically consistent and that they scale within the bounds of equilibrium rates available in the literature. The STELLAR database provides a complete and adequate set of heavy-impact rates for vibrational excitation, exchange, dissociation and recombination, which can then be coupled to more detailed datasets for the simulation of physical-chemical processes in high-temperature plasmas. An application to the dissociation and exchange processes occurring behind a hypersonic shock wave is also presented in this work.

  5. Modeling livestock population structure: a geospatial database for Ontario swine farms.

    Science.gov (United States)

    Khan, Salah Uddin; O'Sullivan, Terri L; Poljak, Zvonimir; Alsop, Janet; Greer, Amy L

    2018-01-30

    Infectious diseases in farmed animals have economic, social, and health consequences. Foreign animal diseases (FAD) of swine are of significant concern. Mathematical and simulation models are often used to simulate FAD outbreaks and best practices for control. However, simulation outcomes are sensitive to the population structure used. Within Canada, access to individual swine farm population data with which to parameterize models is a challenge because of privacy concerns. Our objective was to develop a methodology to model the farmed swine population in Ontario, Canada that could represent the existing population structure and improve the efficacy of simulation models. We developed a swine population model based on the factors such as facilities supporting farm infrastructure, land availability, zoning and local regulations, and natural geographic barriers that could affect swine farming in Ontario. Assigned farm locations were equal to the swine farm density described in the 2011 Canadian Census of Agriculture. Farms were then randomly assigned to farm types proportional to the existing swine herd types. We compared the swine population models with a known database of swine farm locations in Ontario and found that the modeled population was representative of farm locations with a high accuracy (AUC: 0.91, Standard deviation: 0.02) suggesting that our algorithm generated a reasonable approximation of farm locations in Ontario. In the absence of a readily accessible dataset providing details of the relative locations of swine farms in Ontario, development of a model livestock population that captures key characteristics of the true population structure while protecting privacy concerns is an important methodological advancement. This methodology will be useful for individuals interested in modeling the spread of pathogens between farms across a landscape and using these models to evaluate disease control strategies.

  6. Model for Railway Infrastructure Management Organization

    Directory of Open Access Journals (Sweden)

    Gordan Stojić

    2012-03-01

    Full Text Available The provision of appropriate quality rail services plays an important role in terms of railway infrastructure: the quality of infrastructure maintenance, regulation of railway traffic, line capacity, speed, safety, train station organization, the allowable line load and other infrastructure parameters. The analysis of experiences in transforming railway systems points to the conclusion that there is no unique solution in terms of the choice of institutional rail infrastructure management modes, although more than nineteen years have passed since the beginning of the implementation of Directive 91/440/EEC. Depending on the approach to the process of restructuring the national railway company, the adopted regulations and the caution in their implementation, the existence or absence of a clearly defined transport strategy, and the willingness to liberalize the transport market, there are several different ways of institutional management of railway infrastructure. A hybrid model for the selection of modes of institutional rail infrastructure management was developed based on the theory of artificial intelligence, the theory of fuzzy sets and the theory of multicriteria optimization. KEY WORDS: management, railway infrastructure, organizational structure, hybrid model

  7. Overall models and experimental database for UO2 and MOX fuel increasing performance

    International Nuclear Information System (INIS)

    Bernard, L.C.; Blanpain, P.

    2001-01-01

    The Framatome steady-state fission gas release database includes more than 290 fuel rods irradiated in commercial and experimental reactors with rod average burnups up to 67 GWd/tM. The transient database includes close to 60 fuel rods with burnups up to 62 GWd/tM. The hold time for these rods ranged from several minutes to many hours and the linear heat generation rates ranged from 30 kW/m to 50 kW/m. The quality of the fission gas release model is state-of-the-art, as the uncertainty of the model is comparable to that of other code models. Framatome is also greatly concerned with MOX fuel performance and modeling, given that, since 1997, more than 1500 MOX fuel assemblies have been delivered to French and foreign PWRs. The paper focuses on the significant data acquired through surveillance and analytical programs used for the validation and improvement of the MOX fuel modeling. (author)

  8. A distributed atomic physics database and modeling system for plasma spectroscopy

    International Nuclear Information System (INIS)

    Nash, J.K.; Liedahl, D.; Chen, M.H.; Iglesias, C.A.; Lee, R.W.; Salter, J.M.

    1995-08-01

    We are undertaking to develop a set of computational capabilities which will facilitate the access, manipulation, and understanding of atomic data in calculations of x-ray spectral modeling. In this limited description we emphasize the objectives of this work, the design philosophy, and aspects of the atomic database, as a more complete description of this work is available. The project is referred to as the Plasma Spectroscopy Initiative; the computing environment is called PSI, or the 'PSI shell', since the primary interface resembles a UNIX shell window. The working group consists of researchers in the fields of x-ray plasma spectroscopy, atomic physics, plasma diagnostics, line shape theory, astrophysics, and computer science. To date, our focus has been to develop the software foundations, including the atomic physics database, and to apply the existing capabilities to a range of working problems. These problems have been chosen in part to exercise the overall design and implementation of the shell. For a successful implementation, the final design must have great flexibility, since our goal is not simply to satisfy our own interests but to provide a tool of general use to the community

  9. Project-matrix models of marketing organization

    OpenAIRE

    Gutić Dragutin; Rudelj Siniša

    2009-01-01

    Unlike the theory and practice of corporate organization, marketing organization has to this day not developed the many forms and contents available to it. It is fair to estimate that marketing organization in most of our companies, and in almost all of its parts, lags noticeably behind corporate organization. Marketing managers have always been occupied with basic, narrow marketing activities such as: sales growth, market analysis, market growth and market share, marketing research, introdu...

  10. Studying Oogenesis in a Non-model Organism Using Transcriptomics: Assembling, Annotating, and Analyzing Your Data.

    Science.gov (United States)

    Carter, Jean-Michel; Gibbs, Melanie; Breuker, Casper J

    2016-01-01

    This chapter provides a guide to processing and analyzing RNA-Seq data in a non-model organism. This approach was implemented for studying oogenesis in the Speckled Wood Butterfly Pararge aegeria. We focus in particular on how to perform a more informative primary annotation of your non-model organism by implementing our multi-BLAST annotation strategy. We also provide a general guide to other essential steps in the next-generation sequencing analysis workflow. Before undertaking these methods, we recommend you familiarize yourself with command line usage and fundamental concepts of database handling. Most of the operations in the primary annotation pipeline can be performed in Galaxy (or equivalent standalone versions of the tools) and through the use of common database operations (e.g. to remove duplicates) but other equivalent programs and/or custom scripts can be implemented for further automation.
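    As a purely illustrative sketch of the kind of "remove duplicates" database operation mentioned above (not the chapter's pipeline), the following Python snippet keeps only the best-scoring hit per transcript from a tabular BLAST output (-outfmt 6); the file names and the assumption that the bit score sits in the twelfth column follow the standard tabular format.

```python
import csv

# Keep the best hit (highest bit score) per query sequence from a tabular
# BLAST output (-outfmt 6). File names are placeholders for this sketch.
best_hits = {}
with open("transcripts_vs_nr.blast.tsv", newline="") as fh:
    for row in csv.reader(fh, delimiter="\t"):
        query, subject, bitscore = row[0], row[1], float(row[11])
        if query not in best_hits or bitscore > best_hits[query][1]:
            best_hits[query] = (subject, bitscore)

with open("best_hits.tsv", "w", newline="") as out:
    writer = csv.writer(out, delimiter="\t")
    for query, (subject, bitscore) in sorted(best_hits.items()):
        writer.writerow([query, subject, bitscore])
```

    In practice the same filtering can be performed inside Galaxy; the point here is only the deduplication logic.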

  11. Logical database design principles

    CERN Document Server

    Garmany, John; Clark, Terry

    2005-01-01

    INTRODUCTION TO LOGICAL DATABASE DESIGN: Understanding a Database; Database Architectures; Relational Databases; Creating the Database; System Development Life Cycle (SDLC); Systems Planning: Assessment and Feasibility; System Analysis: Requirements; System Analysis: Requirements Checklist; Models Tracking and Schedules; Design Modeling; Functional Decomposition Diagram; Data Flow Diagrams; Data Dictionary; Logical Structures and Decision Trees; System Design: Logical. SYSTEM DESIGN AND IMPLEMENTATION: The ER Approach; Entities and Entity Types; Attribute Domains; Attributes; Set-Valued Attributes; Weak Entities; Constraint

  12. Fiscal 1998 research report. Construction model project of the human sensory database; 1998 nendo ningen kankaku database kochiku model jigyo seika hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-03-01

    This report summarizes the fiscal 1998 research results on construction of the human sensory database. The human sensory database for evaluating working environments was constructed on the basis of measurements of human sensory data (stress and fatigue) from 400 examinees in the field (transport settings, control rooms and offices) and in a laboratory. Using the newly developed standard measurement protocol for evaluating summer clothing (shirt, slacks and underwear), a database was constructed comprising the evaluation experiment results and the comparative experiment results on the physiological and sensory data of aged and young people. The database features easy retrieval of the various information concerned, corresponding to the requirements of tasks and purposes of use. To evaluate the large volume of data with substantial time variation that is read out for each scene according to the purpose of use, a data detection support technique was adopted that focuses on physical and psychological variable phases and on mind and body events. The meaning of each reaction and a hint for the necessary measures are shown for every phase and event. (NEDO)

  13. Volcanogenic Massive Sulfide Deposits of the World - Database and Grade and Tonnage Models

    Science.gov (United States)

    Mosier, Dan L.; Berger, Vladimir I.; Singer, Donald A.

    2009-01-01

    Grade and tonnage models are useful in quantitative mineral-resource assessments. The models and database presented in this report are an update of earlier publications about volcanogenic massive sulfide (VMS) deposits. These VMS deposits include what were formerly classified as kuroko, Cyprus, and Besshi deposits. The update was necessary because of new information about some deposits; changes in information for some deposits, such as grades, tonnages, or ages; revised locations of some deposits; and reclassification of subtypes. In this report we have added new VMS deposits and removed a few incorrectly classified deposits. This global compilation of VMS deposits contains 1,090 deposits; however, it was not our intent to include every known deposit in the world. The data were recently used for mineral-deposit density models (Mosier and others, 2007; Singer, 2008). In this paper, 867 deposits were used to construct revised grade and tonnage models. Our new models are based on a reclassification of deposits according to host lithologies: Felsic, Bimodal-Mafic, and Mafic volcanogenic massive sulfide deposits. Mineral-deposit models are important in exploration planning and quantitative resource assessments for two reasons: (1) grades and tonnages among deposit types vary significantly, and (2) deposits of different types occur in distinct geologic settings that can be identified from geologic maps. Mineral-deposit models combine the diverse geoscience information on geology, mineral occurrences, geophysics, and geochemistry used in resource assessments and mineral exploration. Globally based deposit models allow recognition of important features and demonstrate how common different features are. Well-designed deposit models allow geologists to deduce possible mineral-deposit types in a given geologic environment and economists to determine the possible economic viability of these resources. Thus, mineral-deposit models play a central role in presenting geoscience

  14. ADANS database specification

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-01-16

    The purpose of the Air Mobility Command (AMC) Deployment Analysis System (ADANS) Database Specification (DS) is to describe the database organization and storage allocation and to provide the detailed data model of the physical design and information necessary for the construction of the parts of the database (e.g., tables, indexes, rules, defaults). The DS includes entity relationship diagrams, table and field definitions, reports on other database objects, and a description of the ADANS data dictionary. ADANS is the automated system used by Headquarters AMC and the Tanker Airlift Control Center (TACC) for airlift planning and scheduling of peacetime and contingency operations as well as for deliberate planning. ADANS also supports planning and scheduling of Air Refueling Events by the TACC and the unit-level tanker schedulers. ADANS receives input in the form of movement requirements and air refueling requests. It provides a suite of tools for planners to manipulate these requirements/requests against mobility assets and to develop, analyze, and distribute schedules. Analysis tools are provided for assessing the products of the scheduling subsystems, and editing capabilities support the refinement of schedules. A reporting capability provides formatted screen, print, and/or file outputs of various standard reports. An interface subsystem handles message traffic to and from external systems. The database is an integral part of the functionality summarized above.

  15. Combining a weed traits database with a population dynamics model predicts shifts in weed communities.

    Science.gov (United States)

    Storkey, J; Holst, N; Bøjer, O Q; Bigongiali, F; Bocci, G; Colbach, N; Dorner, Z; Riemens, M M; Sartorato, I; Sønderskov, M; Verschwele, A

    2015-04-01

    A functional approach to predicting shifts in weed floras in response to management or environmental change requires the combination of data on weed traits with analytical frameworks that capture the filtering effect of selection pressures on traits. A weed traits database (WTDB) was designed, populated and analysed, initially using data for 19 common European weeds, to begin to consolidate trait data in a single repository. The initial choice of traits was driven by the requirements of empirical models of weed population dynamics to identify correlations between traits and model parameters. These relationships were used to build a generic model, operating at the level of functional traits, to simulate the impact of increasing herbicide and fertiliser use on virtual weeds along gradients of seed weight and maximum height. The model generated 'fitness contours' (defined as population growth rates) within this trait space in different scenarios, onto which two sets of weed species, defined as common or declining in the UK, were mapped. The effect of increasing inputs on the weed flora was successfully simulated; 77% of common species were predicted to have stable or increasing populations under high fertiliser and herbicide use, in contrast with only 29% of the species that have declined. Future development of the WTDB will aim to increase the number of species covered, incorporate a wider range of traits and analyse intraspecific variability under contrasting management and environments.
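    To make the idea of "fitness contours" over a trait space concrete, here is a minimal sketch in which a purely hypothetical fitness function (standing in for the trait-parameter relationships estimated from the WTDB) is evaluated on a grid of seed weight and maximum height; none of the numbers below come from the study.

```python
import numpy as np

def population_growth_rate(seed_weight_mg, max_height_cm, herbicide=0.5, fertiliser=0.5):
    """Hypothetical fitness surface: population growth rate as a function of two
    traits under given management intensities (0..1). Not the published model."""
    survival = np.exp(-herbicide * (1.0 - seed_weight_mg / (seed_weight_mg + 1.0)))
    competition = max_height_cm / (max_height_cm + 50.0) * (0.5 + fertiliser)
    fecundity = 100.0 / (1.0 + seed_weight_mg)
    return survival * competition * fecundity

# Evaluate the surface on a grid of trait values ("fitness contours").
seed_weights = np.linspace(0.1, 10.0, 50)   # mg
heights = np.linspace(10.0, 150.0, 50)      # cm
W, H = np.meshgrid(seed_weights, heights)
fitness = population_growth_rate(W, H, herbicide=0.8, fertiliser=0.8)
print("max growth rate on grid:", round(float(fitness.max()), 2))
```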

  16. Engineering the object-relation database model in O-Raid

    Science.gov (United States)

    Dewan, Prasun; Vikram, Ashish; Bhargava, Bharat

    1989-01-01

    Raid is a distributed database system based on the relational model. O-Raid is an extension of the Raid system and will support complex data objects. The design of O-Raid is evolutionary and retains all features of relational database systems together with those of a general-purpose object-oriented programming language. O-Raid has several novel properties. Objects, classes, and inheritance are supported together with a predicate-based relational query language. O-Raid objects are compatible with C++ objects and may be read and manipulated by a C++ program without any 'impedance mismatch'. Relations and columns within relations may themselves be treated as objects with associated variables and methods. Relations may contain heterogeneous objects, that is, objects of more than one class in a certain column, which can individually evolve by being reclassified. Special facilities are provided to reduce the data search in a relation containing complex objects.

  17. HTO transfer from contaminated surfaces to the atmosphere: a database for model validation

    International Nuclear Information System (INIS)

    Davis, P.A.; Amiro, B.D.; Workman, W.J.G.; Corbett, B.J.

    1996-12-01

    This report comprises a detailed database that can be used to validate models of the emission of tritiated water vapour (HTO) from natural contaminated surfaces to the atmosphere. The data were collected in 1992 July during an intensive field study based on the flux-gradient method of micrometeorology. The measurements were made over a wetland area at the Chalk River Laboratories, and over a grassed field near the Pickering Nuclear Generating Station. The study sites, the sampling protocols and the analytical techniques are described in detail, and the measured fluxes are presented. The report also contains a detailed listing of HTO concentrations in air at two heights, HTO concentrations in the source compartments (soil, surface water and vegetation), supporting meteorological data, and various vegetation and soil properties. The uncertainties in all of the measured data are estimated. (author). 15 refs., 23 tabs., 9 figs

  18. MiDAS 2.0: an ecosystem-specific taxonomy and online database for the organisms of wastewater treatment systems expanded for anaerobic digester groups

    DEFF Research Database (Denmark)

    McIlroy, Simon Jon; Kirkegaard, Rasmus Hansen; McIlroy, Bianca

    2017-01-01

    of the anaerobic digester systems fed primary sludge and surplus activated sludge. The updated database includes descriptions of the abundant genus-level taxa in influent wastewater, activated sludge and anaerobic digesters. Abundance information is also included to allow assessment of the role of emigration...... taxonomy endeavours to provide a genus-level classification for abundant phylotypes and the online field guide links this identity to published information regarding their ecology, function and distribution. This article describes the expansion of the database resources to cover the organisms...

  19. Database Description - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us SKIP Stemcell Database Database Description General information of database Database name SKIP Stemcell Database...rsity Journal Search: Contact address http://www.skip.med.keio.ac.jp/en/contact/ Database classification Human Genes and Diseases Dat...abase classification Stemcell Article Organism Taxonomy Name: Homo sapiens Taxonomy ID: 9606 Database...ks: Original website information Database maintenance site Center for Medical Genetics, School of medicine, ...lable Web services Not available URL of Web services - Need for user registration Not available About This Database Database

  20. The Time Is Right to Focus on Model Organism Metabolomes

    Directory of Open Access Journals (Sweden)

    Arthur S. Edison

    2016-02-01

    Full Text Available Model organisms are an essential component of biological and biomedical research that can be used to study specific biological processes. These organisms are in part selected for facile experimental study. However, just as importantly, intensive study of a small number of model organisms yields important synergies as discoveries in one area of science for a given organism shed light on biological processes in other areas, even for other organisms. Furthermore, the extensive knowledge bases compiled for each model organism enable systems-level understandings of these species, which enhance the overall biological and biomedical knowledge for all organisms, including humans. Building upon extensive genomics research, we argue that the time is now right to focus intensively on model organism metabolomes. We propose a grand challenge for metabolomics studies of model organisms: to identify and map all metabolites onto metabolic pathways, to develop quantitative metabolic models for model organisms, and to relate organism metabolic pathways within the context of evolutionary metabolomics, i.e., phylometabolomics. These efforts should focus on a series of established model organisms in microbial, animal and plant research.

  1. Carbonatites of the World, Explored Deposits of Nb and REE - Database and Grade and Tonnage Models

    Science.gov (United States)

    Berger, Vladimir I.; Singer, Donald A.; Orris, Greta J.

    2009-01-01

    This report is based on published tonnage and grade data on 58 Nb- and rare-earth-element (REE)-bearing carbonatite deposits that are mostly well explored and are partially mined or contain resources of these elements. The deposits represent only a part of the 527 known carbonatites around the world, but they are characterized by reliable quantitative data on ore tonnages and grades of niobium and REE. Grade and tonnage models are an important component of mineral resource assessments. Carbonatites represent one of the main natural sources of niobium and rare-earth elements, whose economic importance grows consistently. The purpose of this report is to update earlier publications. New information about known deposits, as well as data on new deposits published during the last decade, are incorporated in the present paper. The compiled database (appendix 1) contains 60 explored Nb- and REE-bearing carbonatite deposits; the resources of 55 of these deposits are taken from publications. In the present updated grade-tonnage model we have added 24 deposits compared with the previous model of Singer (1998). The resources of most deposits are residuum ores in the upper part of carbonatite bodies. Mineral-deposit models are important in exploration planning and quantitative resource assessments for two reasons: (1) grades and tonnages among deposit types vary significantly, and (2) deposits of different types are present in distinct geologic settings that can be identified from geologic maps. Mineral-deposit models combine the diverse geoscience information on geology, mineral occurrences, geophysics, and geochemistry used in resource assessments and mineral exploration. Globally based deposit models allow recognition of important features and demonstrate how common different features are. Well-designed deposit models allow geologists to deduce possible mineral-deposit types in a given geologic environment, and the grade and tonnage models allow economists to

  2. Models of care and organization of services.

    Science.gov (United States)

    Markova, Alina; Xiong, Michael; Lester, Jenna; Burnside, Nancy J

    2012-01-01

    This article examines the overall organization of services and delivery of health care in the United States. Health maintenance organization, fee-for-service, preferred provider organizations, and the Veterans Health Administration are discussed, with a focus on structure, outcomes, and areas for improvement. An overview of wait times, malpractice, telemedicine, and the growing population of physician extenders in dermatology is also provided. Copyright © 2012 Elsevier Inc. All rights reserved.

  3. Saccharomyces genome database informs human biology

    OpenAIRE

    Skrzypek, Marek S; Nash, Robert S; Wong, Edith D; MacPherson, Kevin A; Hellerstedt, Sage T; Engel, Stacia R; Karra, Kalpana; Weng, Shuai; Sheppard, Travis K; Binkley, Gail; Simison, Matt; Miyasato, Stuart R; Cherry, J Michael

    2017-01-01

    Abstract The Saccharomyces Genome Database (SGD; http://www.yeastgenome.org) is an expertly curated database of literature-derived functional information for the model organism budding yeast, Saccharomyces cerevisiae. SGD constantly strives to synergize new types of experimental data and bioinformatics predictions with existing data, and to organize them into a comprehensive and up-to-date information resource. The primary mission of SGD is to facilitate research into the biology of yeast and...

  4. Towards Global QSAR Model Building for Acute Toxicity: Munro Database Case Study

    Directory of Open Access Journals (Sweden)

    Swapnil Chavan

    2014-10-01

    Full Text Available A series of 436 Munro database chemicals were studied with respect to their corresponding experimental LD50 values to investigate the possibility of establishing a global QSAR model for acute toxicity. Dragon molecular descriptors were used for the QSAR model development, and genetic algorithms were used to select the descriptors best correlated with the toxicity data. Toxicity values were discretized into qualitative classes on the basis of the Globally Harmonized Scheme: the 436 chemicals were divided into 3 classes based on their experimental LD50 values: highly toxic, intermediately toxic and low to non-toxic. The k-nearest neighbor (k-NN) classification method was calibrated on 25 molecular descriptors and gave a non-error rate (NER) equal to 0.66 and 0.57 for the internal and external prediction sets, respectively. Even if the classification performance is not optimal, the subsequent analysis of the selected descriptors and their relationship with toxicity levels constitutes a step towards the development of a global QSAR model for acute toxicity.
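    As an illustrative sketch of the classification step described above (not the authors' exact workflow), a k-NN classifier can be calibrated on pre-computed descriptors as follows; the descriptor matrix and class labels are random placeholders standing in for the 25 selected Dragon descriptors and the GHS-based classes.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import balanced_accuracy_score

# Placeholder data: 436 chemicals x 25 descriptors, three toxicity classes
# (0 = highly toxic, 1 = intermediate, 2 = low/non-toxic).
rng = np.random.default_rng(0)
X = rng.normal(size=(436, 25))
y = rng.integers(0, 3, size=436)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
model.fit(X_train, y_train)

# Non-error rate (NER) taken here as the average of the per-class sensitivities.
print("NER:", round(balanced_accuracy_score(y_test, model.predict(X_test)), 2))
```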

  5. Research of Cadastral Data Modelling and Database Updating Based on Spatio-temporal Process

    Directory of Open Access Journals (Sweden)

    ZHANG Feng

    2016-02-01

    Full Text Available The core of modern cadastre management is to keep the cadastral database current, topologically consistent and complete. This paper analyses the changes of the various cadastral objects, and the links between them, during the update process. Combining object-oriented modelling techniques with the expression of spatio-temporal object evolution, the paper proposes a cadastral data updating model based on the spatio-temporal process that follows the way people think about change. Change rules based on the spatio-temporal topological relations of evolving cadastral spatio-temporal objects are drafted, and cascade updating and history trace-back of cadastral features, land use and buildings are realized. The model is implemented in the cadastral management system ReGIS. Cascade changes are triggered either by a direct driving force or by perceived external events. The system records the evolution process of spatio-temporal objects to facilitate the reconstruction of history, change tracking, and the analysis and forecasting of future changes.

  6. An empirical modeling tool and glass property database in development of US-DOE radioactive waste glasses

    International Nuclear Information System (INIS)

    Muller, I.; Gan, H.

    1997-01-01

    An integrated glass database has been developed at the Vitreous State Laboratory of Catholic University of America. The major objective of this tool was to support glass formulation using the MAWS approach (Minimum Additives Waste Stabilization). An empirical modeling capability, based on the properties of over 1000 glasses in the database, was also developed to help formulate glasses from waste streams under multiple user-imposed constraints. The use of this modeling capability, the performance of resulting models in predicting properties of waste glasses, and the correlation of simple structural theories to glass properties are the subjects of this paper. (authors)

  7. Organization model and formalized description of nuclear enterprise information system

    International Nuclear Information System (INIS)

    Yuan Feng; Song Yafeng; Li Xudong

    2012-01-01

    The organization model is one of the most important models of a Nuclear Enterprise Information System (NEIS). A scientific and reasonable organization model is a prerequisite for the robustness and extensibility of an NEIS, and is also the foundation for the integration of heterogeneous systems. Firstly, the paper describes the conceptual model of the NEIS using an ontology chart, which provides a consistent semantic framework for the organization. Then it discusses the relations between the concepts in detail. Finally, it gives a formalized description of the organization model of the NEIS based on a six-tuple array. (authors)

  8. Defining new criteria for selection of cell-based intestinal models using publicly available databases

    Directory of Open Access Journals (Sweden)

    Christensen Jon

    2012-06-01

    Full Text Available Abstract Background The criteria for choosing relevant cell lines among a vast panel of available intestinal-derived lines exhibiting a wide range of functional properties are still ill-defined. The objective of this study was, therefore, to establish objective criteria for choosing relevant cell lines to assess their appropriateness as tumor models as well as for drug absorption studies. Results We made use of publicly available expression signatures and cell based functional assays to delineate differences between various intestinal colon carcinoma cell lines and normal intestinal epithelium. We have compared a panel of intestinal cell lines with patient-derived normal and tumor epithelium and classified them according to traits relating to oncogenic pathway activity, epithelial-mesenchymal transition (EMT and stemness, migratory properties, proliferative activity, transporter expression profiles and chemosensitivity. For example, SW480 represent an EMT-high, migratory phenotype and scored highest in terms of signatures associated to worse overall survival and higher risk of recurrence based on patient derived databases. On the other hand, differentiated HT29 and T84 cells showed gene expression patterns closest to tumor bulk derived cells. Regarding drug absorption, we confirmed that differentiated Caco-2 cells are the model of choice for active uptake studies in the small intestine. Regarding chemosensitivity we were unable to confirm a recently proposed association of chemo-resistance with EMT traits. However, a novel signature was identified through mining of NCI60 GI50 values that allowed to rank the panel of intestinal cell lines according to their drug responsiveness to commonly used chemotherapeutics. Conclusions This study presents a straightforward strategy to exploit publicly available gene expression data to guide the choice of cell-based models. While this approach does not overcome the major limitations of such models

  9. Transport and Environment Database System (TRENDS): Maritime Air Pollutant Emission Modelling

    DEFF Research Database (Denmark)

    Georgakaki, Aliki; Coffey, Robert; Lock, Grahm

    2005-01-01

    This paper reports the development of the maritime module within the framework of the Transport and Environment Database System (TRENDS) project. A detailed database has been constructed for the calculation of energy consumption and air pollutant emissions. Based on an in-house database...... changes from findings reported in Methodologies for Estimating air pollutant Emissions from Transport (MEET). The database operates on statistical data provided by Eurostat, which describe vessel and freight movements from and towards EU 15 major ports. Data are at port to Maritime Coastal Area (MCA...... with a view to this purpose, are mentioned. Examples of the results obtained by the database are presented. These include detailed air pollutant emission calculations for bulk carriers entering the port of Helsinki, as an example of the database operation, and aggregate results for different types...

  10. MARRVEL: Integration of Human and Model Organism Genetic Resources to Facilitate Functional Annotation of the Human Genome.

    Science.gov (United States)

    Wang, Julia; Al-Ouran, Rami; Hu, Yanhui; Kim, Seon-Young; Wan, Ying-Wooi; Wangler, Michael F; Yamamoto, Shinya; Chao, Hsiao-Tuan; Comjean, Aram; Mohr, Stephanie E; Perrimon, Norbert; Liu, Zhandong; Bellen, Hugo J

    2017-06-01

    One major challenge encountered with interpreting human genetic variants is the limited understanding of the functional impact of genetic alterations on biological processes. Furthermore, there remains an unmet demand for an efficient survey of the wealth of information on human homologs in model organisms across numerous databases. To efficiently assess the large volume of publicly available information, it is important to provide a concise summary of the most relevant information in a rapid user-friendly format. To this end, we created MARRVEL (model organism aggregated resources for rare variant exploration). MARRVEL is a publicly available website that integrates information from six human genetic databases and seven model organism databases. For any given variant or gene, MARRVEL displays information from OMIM, ExAC, ClinVar, Geno2MP, DGV, and DECIPHER. Importantly, it curates model organism-specific databases to concurrently display a concise summary regarding the human gene homologs in budding and fission yeast, worm, fly, fish, mouse, and rat on a single webpage. Experiment-based information on tissue expression, protein subcellular localization, biological process, and molecular function for the human gene and homologs in the seven model organisms are arranged into a concise output. Hence, rather than visiting multiple separate databases for variant and gene analysis, users can obtain important information by searching once through MARRVEL. Altogether, MARRVEL dramatically improves efficiency and accessibility to data collection and facilitates analysis of human genes and variants by cross-disciplinary integration of 18 million records available in public databases to facilitate clinical diagnosis and basic research. Copyright © 2017 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  11. 3D Bioprinting of Tissue/Organ Models.

    Science.gov (United States)

    Pati, Falguni; Gantelius, Jesper; Svahn, Helene Andersson

    2016-04-04

    In vitro tissue/organ models are useful platforms that can facilitate systematic, repetitive, and quantitative investigations of drugs/chemicals. The primary objective when developing tissue/organ models is to reproduce physiologically relevant functions that typically require complex culture systems. Bioprinting offers exciting prospects for constructing 3D tissue/organ models, as it enables the reproducible, automated production of complex living tissues. Bioprinted tissues/organs may prove useful for screening novel compounds or predicting toxicity, as the spatial and chemical complexity inherent to native tissues/organs can be recreated. In this Review, we highlight the importance of developing 3D in vitro tissue/organ models by 3D bioprinting techniques, characterization of these models for evaluating their resemblance to native tissue, and their application in the prioritization of lead candidates, toxicity testing, and as disease/tumor models. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. A Combined Gravity Compensation Method for INS Using the Simplified Gravity Model and Gravity Database.

    Science.gov (United States)

    Zhou, Xiao; Yang, Gongliu; Wang, Jing; Wen, Zeyang

    2018-05-14

    In recent decades, gravity compensation has become an important way to reduce the position error of an inertial navigation system (INS), especially for a high-precision INS, because of the extensive application of high-precision inertial sensors (accelerometers and gyros). This paper first derives the INS solution error in the presence of gravity disturbances and simulates the results. It then proposes a combined gravity compensation method using a simplified gravity model and a gravity database. The new combined method consists of two steps. Step 1 subtracts the normal gravity using a simplified gravity model. Step 2 first obtains the gravity disturbance along the trajectory of the carrier with the help of ELM training based on measured gravity data (provided by the Institute of Geodesy and Geophysics, Chinese Academy of Sciences), and then compensates it into the error equations of the INS, which account for the gravity disturbance, to further improve the navigation accuracy. The effectiveness and feasibility of this new gravity compensation method for the INS are verified through vehicle tests in two different regions: one in flat terrain with mild gravity variation and the other in complex terrain with strong gravity variation. During the 2-h vehicle tests, the positioning accuracy of the two tests improved by 20% and 38%, respectively, after the gravity was compensated by the proposed method.
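    A minimal sketch of Step 1, the subtraction of normal gravity with a simplified model, is shown below; the constants are the widely published WGS-84 Somigliana values, and the database-driven disturbance of Step 2 is represented only by a placeholder argument rather than the trained model described in the paper.

```python
import math

def normal_gravity(latitude_deg: float) -> float:
    """WGS-84 Somigliana normal gravity on the ellipsoid surface, in m/s^2."""
    gamma_e = 9.7803253359          # equatorial normal gravity (m/s^2)
    k = 0.00193185265241            # Somigliana constant
    e2 = 0.00669437999013           # first eccentricity squared
    s2 = math.sin(math.radians(latitude_deg)) ** 2
    return gamma_e * (1.0 + k * s2) / math.sqrt(1.0 - e2 * s2)

def compensated_gravity(latitude_deg: float, disturbance_mgal: float) -> float:
    """Step 1: simplified normal-gravity model; Step 2: add a disturbance value
    that, in the paper, would come from the gravity database / trained model."""
    return normal_gravity(latitude_deg) + disturbance_mgal * 1e-5  # 1 mGal = 1e-5 m/s^2

print(round(normal_gravity(45.0), 4))            # ~9.8062 m/s^2 at 45 degrees latitude
print(round(compensated_gravity(45.0, 30.0), 4)) # with a 30 mGal disturbance
```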

  13. A Relational Database Model for Managing Accelerator Control System Software at Jefferson Lab

    International Nuclear Information System (INIS)

    Sally Schaffner; Theodore Larrieu

    2001-01-01

    The operations software group at the Thomas Jefferson National Accelerator Facility faces a number of challenges common to facilities which manage a large body of software developed in-house. Developers include members of the software group, operators, hardware engineers and accelerator physicists. One management problem has been ensuring that all software has an identified owner who is still working at the lab. In some cases, locating source code for ''orphaned'' software has also proven to be difficult. Other challenges include ensuring that working versions of all operational software are available, testing changes to operational software without impacting operations, upgrading infrastructure software (OS, compilers, interpreters, commercial packages, share/freeware, etc.), ensuring that appropriate documentation is available and up to date, underutilization of code reuse, input/output file management, and determining what other software will break if a software package is upgraded. This paper will describe a relational database model which has been developed to track this type of information and make it available to managers and developers. The model also provides a foundation for developing productivity-enhancing tools for automated building, versioning, and installation of software. This work was supported by the U.S. DOE contract No. DE-AC05-84ER40150

  14. The Mouse Genome Database (MGD): facilitating mouse as a model for human biology and disease.

    Science.gov (United States)

    Eppig, Janan T; Blake, Judith A; Bult, Carol J; Kadin, James A; Richardson, Joel E

    2015-01-01

    The Mouse Genome Database (MGD, http://www.informatics.jax.org) serves the international biomedical research community as the central resource for integrated genomic, genetic and biological data on the laboratory mouse. To facilitate use of mouse as a model in translational studies, MGD maintains a core of high-quality curated data and integrates experimentally and computationally generated data sets. MGD maintains a unified catalog of genes and genome features, including functional RNAs, QTL and phenotypic loci. MGD curates and provides functional and phenotype annotations for mouse genes using the Gene Ontology and Mammalian Phenotype Ontology. MGD integrates phenotype data and associates mouse genotypes to human diseases, providing critical mouse-human relationships and access to repositories holding mouse models. MGD is the authoritative source of nomenclature for genes, genome features, alleles and strains following guidelines of the International Committee on Standardized Genetic Nomenclature for Mice. A new addition to MGD, the Human-Mouse: Disease Connection, allows users to explore gene-phenotype-disease relationships between human and mouse. MGD has also updated search paradigms for phenotypic allele attributes, incorporated incidental mutation data, added a module for display and exploration of genes and microRNA interactions and adopted the JBrowse genome browser. MGD resources are freely available to the scientific community. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  15. Topobathymetric elevation model development using a new methodology: Coastal National Elevation Database

    Science.gov (United States)

    Danielson, Jeffrey J.; Poppenga, Sandra K.; Brock, John C.; Evans, Gayla A.; Tyler, Dean; Gesch, Dean B.; Thatcher, Cindy A.; Barras, John

    2016-01-01

    During the coming decades, coastlines will respond to widely predicted sea-level rise, storm surge, and coastal inundation flooding from disastrous events. Because physical processes in coastal environments are controlled by the geomorphology of over-the-land topography and underwater bathymetry, many applications of geospatial data in coastal environments require detailed knowledge of the near-shore topography and bathymetry. In this paper, an updated methodology used by the U.S. Geological Survey Coastal National Elevation Database (CoNED) Applications Project is presented for developing coastal topobathymetric elevation models (TBDEMs) from multiple topographic data sources with adjacent intertidal topobathymetric and offshore bathymetric sources to generate seamlessly integrated TBDEMs. This repeatable, updatable, and logically consistent methodology assimilates topographic data (land elevation) and bathymetry (water depth) into a seamless coastal elevation model. Within the overarching framework, vertical datum transformations are standardized in a workflow that interweaves spatially consistent interpolation (gridding) techniques with a land/water boundary mask delineation approach. Output gridded raster TBDEMs are stacked into a file storage system of mosaic datasets within an Esri ArcGIS geodatabase for efficient updating while maintaining current and updated spatially referenced metadata. Topobathymetric data provide a required seamless elevation product for several science application studies, such as shoreline delineation, coastal inundation mapping, sediment transport, sea-level rise, storm surge models, and tsunami impact assessment. These detailed coastal elevation data are critical to depict regions prone to climate change impacts and are essential to planners and managers responsible for mitigating the associated risks and costs to both human communities and ecosystems. The CoNED methodology approach has been used to construct integrated TBDEM models
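    As a highly simplified sketch of the seamless-merging idea (not the CoNED workflow itself), topographic and bathymetric grids that already share a common vertical datum can be combined through a land/water mask; the toy arrays below are placeholders for real rasters.

```python
import numpy as np

# Toy grids; real TBDEMs use high-resolution rasters on a common vertical datum.
# Positive values are land elevations, negative values are water depths, NaN = no data.
topo = np.array([[2.0, 1.0, 0.5],
                 [1.5, 0.3, np.nan],
                 [np.nan, np.nan, np.nan]])
bathy = np.array([[np.nan, np.nan, np.nan],
                  [np.nan, -0.2, -1.0],
                  [-2.0, -3.5, -5.0]])

# Land/water mask: prefer topography where it exists, otherwise use bathymetry.
land_mask = ~np.isnan(topo)
tbdem = np.where(land_mask, topo, bathy)
print(tbdem)
```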

  16. Anatomical database generation for radiation transport modeling from computed tomography (CT) scan data

    International Nuclear Information System (INIS)

    Margle, S.M.; Tinnel, E.P.; Till, L.E.; Eckerman, K.F.; Durfee, R.C.

    1989-01-01

    Geometric models of the anatomy are used routinely in calculations of the radiation dose in organs and tissues of the body. Development of such models has been hampered by a lack of detailed anatomical information on children, and the models themselves have been limited to quadratic conic sections. This summary reviews the development of an image processing workstation used to extract anatomical information from routine diagnostic CT procedures. A standard IBM PC/AT microcomputer has been augmented with an automatically loading 9-track magnetic tape drive, an 8-bit 1024 x 1024 pixel graphics adapter/monitor/film recording package, a mouse/trackball assembly, dual 20 MB removable cartridge media, a 72 MB disk drive, and a printer. Software utilized by the workstation includes a Geographic Information System (modified for manipulation of CT images), CAD software, imaging software, and various modules to ease data transfer among the software packages. 5 refs., 3 figs

  17. Authentication in Virtual Organizations: A Reputation Based PKI Interconnection Model

    Science.gov (United States)

    Wazan, Ahmad Samer; Laborde, Romain; Barrere, Francois; Benzekri, Abdelmalek

    The authentication mechanism constitutes a central part of virtual organization work. PKI technology is used to provide authentication in each organization involved in the virtual organization. Different trust models have been proposed to interconnect the different PKIs in order to propagate trust between them. Because the existing trust models have many drawbacks, we propose a new trust model based on the reputation of PKIs.

  18. Modelling the fate of persistent organic pollutants in Europe: parameterisation of a gridded distribution model

    International Nuclear Information System (INIS)

    Prevedouros, Konstantinos; MacLeod, Matthew; Jones, Kevin C.; Sweetman, Andrew J.

    2004-01-01

    A regionally segmented multimedia fate model for the European continent is described together with an illustrative steady-state case study examining the fate of γ-HCH (lindane) based on 1998 emission data. The study builds on the regionally segmented BETR North America model structure and describes the regional segmentation and parameterisation for Europe. The European continent is described by a 5 deg. x 5 deg. grid, leading to 50 regions together with four perimetric boxes representing regions buffering the European environment. Each zone comprises seven compartments including; upper and lower atmosphere, soil, vegetation, fresh water and sediment and coastal water. Inter-region flows of air and water are described, exploiting information originating from GIS databases and other georeferenced data. The model is primarily designed to describe the fate of Persistent Organic Pollutants (POPs) within the European environment by examining chemical partitioning and degradation in each region, and inter-region transport either under steady-state conditions or fully dynamically. A test case scenario is presented which examines the fate of estimated spatially resolved atmospheric emissions of lindane throughout Europe within the lower atmosphere and surface soil compartments. In accordance with the predominant wind direction in Europe, the model predicts high concentrations close to the major sources as well as towards Central and Northeast regions. Elevated soil concentrations in Scandinavian soils provide further evidence of the potential of increased scavenging by forests and subsequent accumulation by organic-rich terrestrial surfaces. Initial model predictions have revealed a factor of 5-10 underestimation of lindane concentrations in the atmosphere. This is explained by an underestimation of source strength and/or an underestimation of European background levels. The model presented can further be used to predict deposition fluxes and chemical inventories, and it
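    The following is a generic, minimal sketch of a steady-state multimedia box model of the kind described (not the BETR/European parameterisation): compartment inventories are obtained by solving a linear balance between emissions and first-order loss and transfer rates, with purely illustrative rate constants.

```python
import numpy as np

# Compartments: 0 = lower air, 1 = soil, 2 = fresh water (illustrative only).
# K[i, j] is the first-order transfer rate (1/h) from compartment j to i;
# the diagonal holds total losses (degradation + advection + transfers out).
K = np.array([
    [-0.20,  0.001,  0.002],   # air: fast advective loss, slow re-volatilisation in
    [ 0.05, -0.010,  0.000],   # soil: deposition in, slow degradation
    [ 0.02,  0.000, -0.030],   # water: deposition in, degradation/outflow
])
E = np.array([100.0, 0.0, 0.0])   # emission rate into air (kg/h)

# Steady state: K @ m + E = 0  ->  m = solve(K, -E)
masses = np.linalg.solve(K, -E)
for name, m in zip(["air", "soil", "water"], masses):
    print(f"{name:5s} {m:10.1f} kg")
```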

  19. Investigation of an artificial intelligence technology--Model trees. Novel applications for an immediate release tablet formulation database.

    Science.gov (United States)

    Shao, Q; Rowe, R C; York, P

    2007-06-01

    This study has investigated an artificial intelligence technology - model trees - as a modelling tool applied to an immediate release tablet formulation database. The modelling performance was compared with artificial neural networks, which are well established and widely applied in pharmaceutical product formulation. The predictive ability of the generated models was validated on unseen data and judged by the correlation coefficient R². Output from the model tree analyses produced multivariate linear equations which predicted tablet tensile strength, disintegration time, and drug dissolution profiles of similar quality to neural network models. However, additional and valuable knowledge hidden in the formulation database was extracted from these equations. It is concluded that, as a transparent technology, model trees are useful tools for formulators.
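    Since no mainstream Python library provides M5-style model trees directly, the sketch below merely emulates the idea for illustration: a shallow decision tree partitions a placeholder formulation space and a multivariate linear equation is fitted within each leaf. The data and settings are not from the study.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.uniform(size=(300, 4))   # placeholder formulation variables
y = 5 * X[:, 0] + np.where(X[:, 1] > 0.5, 3 * X[:, 2], -2 * X[:, 3]) + rng.normal(0, 0.1, 300)

# Emulated model tree: a shallow tree partitions the space, then an ordinary
# linear model (a multivariate linear equation) is fitted within each leaf.
tree = DecisionTreeRegressor(max_depth=2, min_samples_leaf=30).fit(X, y)
leaves = tree.apply(X)
leaf_models = {leaf: LinearRegression().fit(X[leaves == leaf], y[leaves == leaf])
               for leaf in np.unique(leaves)}

def predict(X_new):
    leaf_ids = tree.apply(X_new)
    return np.array([leaf_models[l].predict(x.reshape(1, -1))[0]
                     for l, x in zip(leaf_ids, X_new)])

pred = predict(X)
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print("R^2 on training data:", round(r2, 3))
```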

  20. A Database Query Processing Model in Peer-To-Peer Network ...

    African Journals Online (AJOL)

    Peer-to-peer databases are becoming more prevalent on the internet for sharing and distributing applications, documents, files, and other digital media. The problem of answering large-scale ad hoc analysis queries, for example aggregation queries, on these databases poses unique challenges. This paper presents an ...

  1. EPAUS9R - An Energy Systems Database for use with the Market Allocation (MARKAL) Model

    Science.gov (United States)

    EPA’s MARKAL energy system databases estimate future-year technology dispersals and associated emissions. These databases are valuable tools for exploring a variety of future scenarios for the U.S. energy-production systems that can impact climate change c

  2. Development of Pipeline Database and CAD Model for Selection of Core Security Zone in Nuclear Power Plants

    International Nuclear Information System (INIS)

    Jung, Woo Sik; Choi, Seong Soo

    2010-06-01

    The goals of this report are (1) to develop a piping database for safety class 1 and 2 piping in Ulchin Units 3 and 4 in order to identify vital areas, (2) to develop a CAD model for vital area visualization, and (3) to realize a 3D program for a virtual-reality view of the vital areas. We have performed a piping segmentation and an accident consequence analysis and developed a piping database. We have also developed a CAD model for the primary auxiliary building, containment building, secondary auxiliary building, and turbine building

  3. Spatio-Semantic Comparison of Large 3d City Models in Citygml Using a Graph Database

    Science.gov (United States)

    Nguyen, S. H.; Yao, Z.; Kolbe, T. H.

    2017-10-01

    A city may have multiple CityGML documents recorded at different times or surveyed by different users. To analyse the city's evolution over a given period of time, as well as to update or edit the city model without negating modifications made by other users, it is of utmost importance to first compare, detect and locate spatio-semantic changes between CityGML datasets. This is however difficult due to the fact that CityGML elements belong to a complex hierarchical structure containing multi-level deep associations, which can basically be considered as a graph. Moreover, CityGML allows multiple syntactic ways to define an object leading to syntactic ambiguities in the exchange format. Furthermore, CityGML is capable of including not only 3D urban objects' graphical appearances but also their semantic properties. Since to date, no known algorithm is capable of detecting spatio-semantic changes in CityGML documents, a frequent approach is to replace the older models completely with the newer ones, which not only costs computational resources, but also loses track of collaborative and chronological changes. Thus, this research proposes an approach capable of comparing two arbitrarily large-sized CityGML documents on both semantic and geometric level. Detected deviations are then attached to their respective sources and can easily be retrieved on demand. As a result, updating a 3D city model using this approach is much more efficient as only real changes are committed. To achieve this, the research employs a graph database as the main data structure for storing and processing CityGML datasets in three major steps: mapping, matching and updating. The mapping process transforms input CityGML documents into respective graph representations. The matching process compares these graphs and attaches edit operations on the fly. Found changes can then be executed using the Web Feature Service (WFS), the standard interface for updating geographical features across the web.
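    A toy sketch of the map/match steps follows: two city models represented as attribute graphs (plain dictionaries standing in for the graph database) are compared and only the detected edit operations are reported. This is a schematic of the approach, not the authors' implementation, and the building attributes are invented.

```python
# Each city model is "mapped" to a graph: node id -> attribute dict.
old_model = {
    "bldg_1": {"height": 12.0, "roofType": "flat"},
    "bldg_2": {"height": 8.5, "roofType": "gabled"},
}
new_model = {
    "bldg_1": {"height": 12.0, "roofType": "flat"},
    "bldg_2": {"height": 9.0, "roofType": "gabled"},   # attribute changed
    "bldg_3": {"height": 6.0, "roofType": "flat"},     # new building
}

def match(old, new):
    """Return the edit operations needed to turn `old` into `new`."""
    ops = []
    for node in new.keys() - old.keys():
        ops.append(("insert", node, new[node]))
    for node in old.keys() - new.keys():
        ops.append(("delete", node))
    for node in old.keys() & new.keys():
        changed = {k: (old[node].get(k), v) for k, v in new[node].items()
                   if old[node].get(k) != v}
        if changed:
            ops.append(("update", node, changed))
    return ops

for op in match(old_model, new_model):
    print(op)   # only real changes would be committed, e.g. via WFS
```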

  4. The Neotoma Paleoecology Database

    Science.gov (United States)

    Grimm, E. C.; Ashworth, A. C.; Barnosky, A. D.; Betancourt, J. L.; Bills, B.; Booth, R.; Blois, J.; Charles, D. F.; Graham, R. W.; Goring, S. J.; Hausmann, S.; Smith, A. J.; Williams, J. W.; Buckland, P.

    2015-12-01

    The Neotoma Paleoecology Database (www.neotomadb.org) is a multiproxy, open-access, relational database that includes fossil data for the past 5 million years (the late Neogene and Quaternary Periods). Modern distributional data for various organisms are also being made available for calibration and paleoecological analyses. The project is a collaborative effort among individuals from more than 20 institutions worldwide, including domain scientists representing a spectrum of Pliocene-Quaternary fossil data types, as well as experts in information technology. Working groups are active for diatoms, insects, ostracodes, pollen and plant macroscopic remains, testate amoebae, rodent middens, vertebrates, age models, geochemistry and taphonomy. Groups are also active in developing online tools for data analyses and for developing modules for teaching at different levels. A key design concept of NeotomaDB is that stewards for various data types are able to remotely upload and manage data. Cooperatives for different kinds of paleo data, or from different regions, can appoint their own stewards. Over the past year, much progress has been made on development of the steward software-interface that will enable this capability. The steward interface uses web services that provide access to the database. More generally, these web services enable remote programmatic access to the database, which both desktop and web applications can use and which provide real-time access to the most current data. Use of these services can alleviate the need to download the entire database, which can be out-of-date as soon as new data are entered. In general, the Neotoma web services deliver data either from an entire table or from the results of a view. Upon request, new web services can be quickly generated. Future developments will likely expand the spatial and temporal dimensions of the database. NeotomaDB is open to receiving new datasets and stewards from the global Quaternary community

  5. MODELING OF MANAGEMENT PROCESSES IN AN ORGANIZATION

    Directory of Open Access Journals (Sweden)

    Stefan Iovan

    2016-05-01

    Full Text Available When driving any major change within an organization, strategy and execution are intrinsic to a project's success. Nevertheless, closing the gap between strategy and execution remains a challenge for many organizations [1]. Companies tend to focus more on execution than on strategy for quick results, instead of taking the time needed to understand the parts that make up the whole, so that the right execution plan can be put in place to deliver the best outcomes. A large part of this is understanding that business operations don't fit neatly within the traditional organizational hierarchy. Business processes are often messy, collaborative efforts that cross teams, departments and systems, making them difficult to manage within a hierarchical structure [2]. Business process management (BPM) fills this gap by redefining an organization according to its end-to-end processes, so opportunities for improvement can be identified and processes streamlined for growth, revenue and transformation. This white paper provides guidelines on what to consider when using business process applications to address BPM initiatives, and the unique capabilities that software systems provide which can help ensure both the project's success and the success of the organization as a whole. This applies to the majority of medium and small businesses, big companies and even some governmental organizations [2].

  6. Sediment-hosted gold deposits of the world: database and grade and tonnage models

    Science.gov (United States)

    Berger, Vladimir I.; Mosier, Dan L.; Bliss, James D.; Moring, Barry C.

    2014-01-01

    All sediment-hosted gold deposits (as a single population) share one characteristic—they all have disseminated micron-sized invisible gold in sedimentary rocks. Sediment-hosted gold deposits are recognized in the Great Basin province of the western United States and in China, along with a few recognized deposits in Indonesia, Iran, and Malaysia. Three new grade and tonnage models for sediment-hosted gold deposits are presented in this paper: (1) a general sediment-hosted gold type model, (2) a Carlin subtype model, and (3) a Chinese subtype model. These models are based on grade and tonnage data from a database compilation of 118 sediment-hosted gold deposits out of a total of 123 global deposits. The new general grade and tonnage model for sediment-hosted gold deposits (n=118) has a median tonnage of 5.7 million metric tonnes (Mt) and a gold grade of 2.9 grams per tonne (g/t). This new grade and tonnage model is remarkable in that the estimated parameters of the resulting grade and tonnage distributions are comparable to those of the previous model of Mosier and others (1992). A notable change is in the reporting of silver in more than 10 percent of deposits; moreover, the previous model had not considered deposits in China. From this general grade and tonnage model, two significantly different subtypes of sediment-hosted gold deposits are differentiated: Carlin and Chinese. The Carlin subtype includes 88 deposits in the western United States, Indonesia, Iran, and Malaysia, with median tonnage and grade of 7.1 Mt and 2.0 g/t Au, respectively. The silver grade is 0.78 g/t Ag for the 10th percentile of deposits. The Chinese subtype represents 30 deposits in China, with a median tonnage of 3.9 Mt and a median grade of 4.6 g/t Au. Important differences are recognized in the mineralogy and alteration of the two sediment-hosted gold subtypes, such as increased sulfide minerals in the Chinese subtype and the dominance of decalcification alteration in the Carlin subtype. We therefore
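    Grade and tonnage models of this kind are usually summarised by percentiles of the (approximately lognormal) tonnage and grade distributions; the sketch below computes such percentiles from made-up deposit values, purely to illustrate the convention, and does not use the report's data.

```python
import numpy as np

# Made-up tonnages (Mt) and gold grades (g/t) standing in for a compiled deposit list.
tonnage_mt = np.array([0.8, 1.5, 2.2, 3.9, 5.7, 7.1, 12.0, 25.0, 60.0, 140.0])
grade_gpt = np.array([1.1, 1.6, 2.0, 2.4, 2.9, 3.3, 4.0, 4.6, 6.5, 9.8])

for name, values in [("tonnage (Mt)", tonnage_mt), ("Au grade (g/t)", grade_gpt)]:
    # In grade-tonnage models the "90th percentile" is the value exceeded by
    # 90 percent of deposits, i.e. the 10th percentile of the raw values.
    p90, median, p10 = np.percentile(values, [10, 50, 90])
    print(f"{name:15s} 90th={p90:7.2f}  median={median:7.2f}  10th={p10:7.2f}")
```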

  7. Self-Organizing Map Models of Language Acquisition

    Directory of Open Access Journals (Sweden)

    Ping eLi

    2013-11-01

    Full Text Available Connectionist models have had a profound impact on theories of language. While most early models were inspired by the classic PDP architecture, recent models of language have explored various other types of models, including self-organizing models for language acquisition. In this paper we aim to provide a review of the latter type of model, and highlight a number of simulation experiments that we have conducted based on these models. We show that self-organizing connectionist models can provide significant insights into long-standing debates in both monolingual and bilingual language development.
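    For readers unfamiliar with the mechanism, a minimal self-organizing map training loop is sketched below (it is not any of the reviewed language-acquisition models): each input pulls its best-matching unit and that unit's grid neighbours towards it, so nearby units come to represent similar inputs. The input vectors are random placeholders for, e.g., word-form representations.

```python
import numpy as np

rng = np.random.default_rng(0)
grid_w, grid_h, dim = 10, 10, 16          # 10x10 map of 16-dimensional units
weights = rng.uniform(size=(grid_w * grid_h, dim))
coords = np.array([(i, j) for i in range(grid_w) for j in range(grid_h)], dtype=float)
data = rng.uniform(size=(500, dim))       # placeholder input vectors

for t in range(2000):
    lr = 0.5 * (1 - t / 2000)             # decaying learning rate
    sigma = 3.0 * (1 - t / 2000) + 0.5    # decaying neighbourhood radius
    x = data[rng.integers(len(data))]
    bmu = np.argmin(np.sum((weights - x) ** 2, axis=1))      # best-matching unit
    dist2 = np.sum((coords - coords[bmu]) ** 2, axis=1)      # grid distances to the BMU
    h = np.exp(-dist2 / (2 * sigma ** 2))                    # neighbourhood function
    weights += lr * h[:, None] * (x - weights)               # pull units towards the input

quantisation_error = float(np.mean(np.min(np.linalg.norm(data[:, None] - weights, axis=2), axis=1)))
print("quantisation error:", round(quantisation_error, 4))
```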

  8. A Modeling methodology for NoSQL Key-Value databases

    Directory of Open Access Journals (Sweden)

    Gerardo ROSSEL

    2017-08-01

    Full Text Available In recent years, there has been increasing interest in the field of non-relational databases. However, far too little attention has been paid to design methodology. Key-value data stores are an important component of a class of non-relational technologies that are grouped under the name of NoSQL databases. The aim of this paper is to propose a design methodology for this type of database that overcomes the limitations of the traditional techniques. The proposed methodology leads to a clean design that also allows for better data management and consistency.
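    As a small illustrative sketch of one common key-value design idea (not the paper's methodology), the snippet below encodes the access path into composite keys so that related items can be fetched by key prefix; a plain dict stands in for the key-value store, and all names are invented.

```python
# A plain dict stands in for a key-value store; real stores differ in API but
# share the key-design problem illustrated here.
store = {}

def put(entity, entity_id, attribute, value):
    # Composite key encodes the access path: entity#id#attribute
    store[f"{entity}#{entity_id}#{attribute}"] = value

def get_entity(entity, entity_id):
    prefix = f"{entity}#{entity_id}#"
    return {k.split("#", 2)[2]: v for k, v in store.items() if k.startswith(prefix)}

put("customer", "42", "name", "Ada")
put("customer", "42", "city", "Buenos Aires")
put("order", "7", "customer_id", "42")

print(get_entity("customer", "42"))   # {'name': 'Ada', 'city': 'Buenos Aires'}
```

    Designing keys around the queries the application must answer, rather than around normalised entities, is the central departure from traditional relational design that such methodologies address.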

  9. A Database for Propagation Models and Conversion to C++ Programming Language

    Science.gov (United States)

    Kantak, Anil V.; Angkasa, Krisjani; Rucker, James

    1996-01-01

    The telecommunications system design engineer generally needs a quantification of the effects of the propagation medium (a definition of the propagation channel) to design an optimal communications system. To obtain the definition of the channel, the systems engineer generally has a few choices. A search of the relevant publications, such as the IEEE Transactions, CCIR reports, the NASA propagation handbook, etc., may be conducted to find the desired channel values. This method may require an excessive amount of time and effort on the systems engineer's part, and there is a possibility that the search may not even yield the needed results. To help researchers and systems engineers, it was recommended by the conference participants of NASA Propagation Experimenters (NAPEX) XV (London, Ontario, Canada, June 28 and 29, 1991) that software be produced containing propagation models and the necessary prediction methods for most propagation phenomena. Moreover, the software should be flexible enough for the user to make slight changes to the models without expending a substantial effort in programming. In the past few years, software was produced to fit these requirements as well as could be done. The software was distributed to all NAPEX participants for evaluation and use, and the participants' reactions, suggestions, etc., were gathered and used to improve subsequent releases of the software. The existing database program is implemented in Microsoft Excel and works fine within the guidelines of that environment; however, recently there have been some questions about the robustness and survivability of the Excel software in the ever-changing (hopefully improving) world of software packages.

  10. Database Description - RPSD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name RPSD Alternative nam...e Rice Protein Structure Database DOI 10.18908/lsdba.nbdc00749-000 Creator Creator Name: Toshimasa Yamazaki ... Ibaraki 305-8602, Japan National Institute of Agrobiological Sciences Toshimasa Yamazaki E-mail : Databas...e classification Structure Databases - Protein structure Organism Taxonomy Name: Or...or name(s): Journal: External Links: Original website information Database maintenance site National Institu

  11. [Establishment of the database of the 3D facial models for the plastic surgery based on network].

    Science.gov (United States)

    Liu, Zhe; Zhang, Hai-Lin; Zhang, Zheng-Guo; Qiao, Qun

    2008-07-01

    To collect three-dimensional (3D) facial data of 30 patients with facial deformities using a 3D scanner and to establish a professional database based on the Internet, which can be helpful for clinical intervention. The primitive point data of the facial topography were collected with the 3D scanner. The 3D point cloud was then edited with reverse engineering software to reconstruct the 3D model of the face. The database system was divided into three parts: basic information, disease information and surgery information. The programming language of the web system is Java. The linkages between the tables of the database are reliable, and query operations and data mining are convenient. Users can visit the database via the Internet and use the image analysis system to observe the 3D facial models interactively. In this paper we present a database and a web system adapted to plastic surgery of the human face. It can be used both in the clinic and in basic research.

  12. Development and validation of a facial expression database based on the dimensional and categorical model of emotions.

    Science.gov (United States)

    Fujimura, Tomomi; Umemura, Hiroyuki

    2018-01-15

    The present study describes the development and validation of a facial expression database comprising five different horizontal face angles in dynamic and static presentations. The database includes twelve expression types portrayed by eight Japanese models. The database was inspired by the dimensional and categorical model of emotions; the twelve types are: surprise, fear, sadness, anger with open mouth, anger with closed mouth, disgust with open mouth, disgust with closed mouth, excitement, happiness, relaxation, sleepiness, and neutral (static only). The expressions were validated using emotion classification and Affect Grid rating tasks [Russell, Weiss, & Mendelsohn, 1989. Affect Grid: A single-item scale of pleasure and arousal. Journal of Personality and Social Psychology, 57(3), 493-502]. The results indicate that most of the expressions were recognised as the intended emotions and could systematically represent affective valence and arousal. Furthermore, face angle and facial motion information influenced emotion classification and valence and arousal ratings. Our database will be available online at the following URL: https://www.dh.aist.go.jp/database/face2017/.

  13. The initiative on Model Organism Proteomes (iMOP) Session

    DEFF Research Database (Denmark)

    Schrimpf, Sabine P; Mering, Christian von; Bendixen, Emøke

    2012-01-01

    iMOP – the Initiative on Model Organism Proteomes – was accepted as a new HUPO initiative at the Ninth HUPO meeting in Sydney in 2010. A goal of iMOP is to integrate research groups working on a great diversity of species into a model organism community. At the Tenth HUPO meeting in Geneva...

  14. Modeling the Explicit Chemistry of Anthropogenic and Biogenic Organic Aerosols

    Energy Technology Data Exchange (ETDEWEB)

    Madronich, Sasha [Univ. Corporation for Atmospheric Research, Boulder, CO (United States)

    2015-12-09

    The atmospheric burden of Secondary Organic Aerosols (SOA) remains one of the most important yet uncertain aspects of the radiative forcing of climate. This grant focused on improving our quantitative understanding of SOA formation and evolution, by developing, applying, and improving a highly detailed model of atmospheric organic chemistry, the Generation of Explicit Chemistry and Kinetics of Organics in the Atmosphere (GECKO-A) model. Eleven (11) publications have resulted from this grant.

  15. Database and Library Development of Organic Species using Gas Chromatography and Mass Spectral Measurements in Support of the Mars Science Laboratory

    Science.gov (United States)

    Garcia, Raul; Mahaffy, Paul; Misra, Prabhakar

    2010-02-01

    Our work involves the development of an organic contaminants database that will allow us to determine which compounds are found here on Earth and would be inadvertently detected in the Mars soil and gaseous samples as impurities. It will be used for the Sample Analysis at Mars (SAM) instrumentation analysis in the Mars Science Laboratory (MSL) rover scheduled for launch in 2011. In order to develop a comprehensive target database, we utilize the NIST Mass Spectral Library, Automated Mass Spectral Deconvolution and Identification System (AMDIS) and Ion Fingerprint Deconvolution (IFD) software to analyze the GC-MS data. We have analyzed data from commercial samples, such as paints and polymers that have not been incorporated into the rover, and are now analyzing actual data from pyrolysis on the rover. We have successfully developed an initial target compound database that will aid SAM in determining whether the components being analyzed come from Mars or are contaminants from either the rover itself or the Earth environment, and we are continuing to make improvements and add data to the target contaminants database.

  16. National Solar Radiation Database (NSRDB) SolarAnywhere 10 km Model Output for 1989 to 2009

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The National Solar Radiation Database (NSRDB) was produced by the National Renewable Energy Laboratory under the U.S. Department of Energy's Office of Energy...

  17. Consolidated Human Activity Database (CHAD) for use in human exposure and health studies and predictive models

    Science.gov (United States)

    EPA scientists have compiled detailed data on human behavior from 22 separate exposure and time-use studies into CHAD. The database includes more than 54,000 individual study days of detailed human behavior.

  18. CODASC : a database for the validation of street canyon dispersion models

    OpenAIRE

    Gromke, C.B.

    2013-01-01

    CODASC stands for Concentration Data of Street Canyons (CODASC 2008, www.codasc.de). It is a database which provides traffic pollutant concentrations in urban street canyons obtained from wind-tunnel dispersion experiments. CODASC comprises concentration data of street canyons with different aspect ratios subjected to various wind directions and also for street canyons with tree-avenues. The database includes concentration data of tree-avenue configurations of different tree arrangement, tree...

  19. New model for distributed multimedia databases and its application to networking of museums

    Science.gov (United States)

    Kuroda, Kazuhide; Komatsu, Naohisa; Komiya, Kazumi; Ikeda, Hiroaki

    1998-02-01

    This paper proposes a new distributed multimedia database system in which databases storing MPEG-2 videos and/or super-high-definition images are connected through B-ISDNs, and describes an example of networking museums on the basis of the proposed database system. The proposed system introduces the new concept of a 'retrieval manager', which functions as an intelligent controller so that the user can recognize a set of image databases as one logical database. A user terminal issues a content retrieval request to the retrieval manager located nearest to that terminal on the network. The retrieved contents are then sent through the B-ISDNs directly to the user terminal from the server that stores the designated contents. In this case, the designated logical database dynamically generates the best combination of retrieval parameters, such as the data transfer path, on the basis of the current state of the system. The generated retrieval parameters are then used to select the most suitable data transfer path on the network. The best combination of these parameters is therefore well suited to the distributed multimedia database system.

  20. Impact of Socioeconomic Status on Patients Supported With a Left Ventricular Assist Device: An Analysis of the UNOS Database (United Network for Organ Sharing).

    Science.gov (United States)

    Clerkin, Kevin J; Garan, Arthur Reshad; Wayda, Brian; Givens, Raymond C; Yuzefpolskaya, Melana; Nakagawa, Shunichi; Takeda, Koji; Takayama, Hiroo; Naka, Yoshifumi; Mancini, Donna M; Colombo, Paolo C; Topkara, Veli K

    2016-10-01

    Low socioeconomic status (SES) is a known risk factor for heart failure, mortality among those with heart failure, and poor post heart transplant (HT) outcomes. This study sought to determine whether SES is associated with decreased waitlist survival while on left ventricular assist device (LVADs) support and after HT. A total of 3361 adult patients bridged to primary HT with an LVAD between May 2004 and April 2014 were identified in the UNOS database (United Network for Organ Sharing). SES was measured using the Agency for Healthcare Research and Quality SES index using data from the 2014 American Community Survey. In the study cohort, SES did not have an association with the combined end point of death or delisting on LVAD support (P=0.30). In a cause-specific unadjusted model, those in the top (hazard ratio, 1.55; 95% confidence interval, 1.14-2.11; P=0.005) and second greatest SES quartile (hazard ratio 1.50; 95% confidence interval, 1.10-2.04; P=0.01) had an increased risk of death on device support compared with the lowest SES quartile. Adjusting for clinical risk factors mitigated the increased risk. There was no association between SES and complications. Post-HT survival, both crude and adjusted, was decreased for patients in the lowest quartile of SES index compared with all other SES quartiles. Freedom from waitlist death or delisting was not affected by SES. Patients with a higher SES had an increased unadjusted risk of waitlist mortality during LVAD support, which was mitigated by adjusting for increased comorbid conditions. Low SES was associated with worse post-HT outcomes. Further study is needed to confirm and understand a differential effect of SES on post-transplant outcomes that was not seen during LVAD support before HT. © 2016 American Heart Association, Inc.

  1. Saccharomyces cerevisiae as a model organism: a comparative study.

    Directory of Open Access Journals (Sweden)

    Hiren Karathia

    Full Text Available BACKGROUND: Model organisms are used for research because they provide a framework on which to develop and optimize methods that facilitate and standardize analysis. Such organisms should be representative of the living beings for which they are to serve as proxy. However, in practice, a model organism is often selected ad hoc, and without considering its representativeness, because a systematic and rational method to include this consideration in the selection process is still lacking. METHODOLOGY/PRINCIPAL FINDINGS: In this work we propose such a method and apply it in a pilot study of strengths and limitations of Saccharomyces cerevisiae as a model organism. The method relies on the functional classification of proteins into different biological pathways and processes and on full proteome comparisons between the putative model organism and other organisms for which we would like to extrapolate results. Here we compare S. cerevisiae to 704 other organisms from various phyla. For each organism, our results identify the pathways and processes for which S. cerevisiae is predicted to be a good model to extrapolate from. We find that animals in general and Homo sapiens in particular are some of the non-fungal organisms for which S. cerevisiae is likely to be a good model in which to study a significant fraction of common biological processes. We validate our approach by correctly predicting which organisms are phenotypically more distant from S. cerevisiae with respect to several different biological processes. CONCLUSIONS/SIGNIFICANCE: The method we propose could be used to choose appropriate substitute model organisms for the study of biological processes in other species that are harder to study. For example, one could identify appropriate models to study either pathologies in humans or specific biological processes in species with a long development time, such as plants.

  2. Database Description - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description. General information of database. Database name: Trypanosomes Database. Creator: ... Institute of Genetics, Research Organization of Information and Systems, Yata 1111, Mishima, Shizuoka 411-8540, Japan. E-mail: ... Taxonomy Name: Trypanosoma, Taxonomy ID: 5690; Taxonomy Name: Homo sapiens, Taxonomy ID: 9606. Database description: The... Article title: Author name(s): Journal: External Links: Original website information. Database maintenance site: ... External links: PDB (Protein Data Bank), KEGG PATHWAY Database, DrugPort. Entry list: Available. Query search: Available. Web services: ...

  3. Data Model and Relational Database Design for Highway Runoff Water-Quality Metadata

    Science.gov (United States)

    Granato, Gregory E.; Tessler, Steven

    2001-01-01

    A National highway and urban runoff water-quality metadatabase was developed by the U.S. Geological Survey in cooperation with the Federal Highway Administration as part of the National Highway Runoff Water-Quality Data and Methodology Synthesis (NDAMS). The database was designed to catalog available literature and to document results of the synthesis in a format that would facilitate current and future research on highway and urban runoff. This report documents the design and implementation of the NDAMS relational database, which was designed to provide a catalog of available information and the results of an assessment of the available data. All the citations and the metadata collected during the review process are presented in a stratified metadatabase that contains citations for relevant publications, abstracts (or previa), and report-review metadata for a sample of selected reports that document results of runoff quality investigations. The database is referred to as a metadatabase because it contains information about available data sets rather than a record of the original data. The database contains the metadata needed to evaluate and characterize how valid, current, complete, comparable, and technically defensible published and available information may be when evaluated for application to the different data-quality objectives as defined by decision makers. This database is a relational database, in that all information is ultimately linked to a given citation in the catalog of available reports. The main database file contains 86 tables consisting of 29 data tables, 11 association tables, and 46 domain tables. The data tables all link to a particular citation, and each data table is focused on one aspect of the information collected in the literature search and the evaluation of available information. This database is implemented in the Microsoft (MS) Access database software because it is widely used within and outside of government and is familiar to many

  4. The System Dynamics Model for Development of Organic Agriculture

    Science.gov (United States)

    Rozman, Črtomir; Škraba, Andrej; Kljajić, Miroljub; Pažek, Karmen; Bavec, Martina; Bavec, Franci

    2008-10-01

    Organic agriculture is the most environmentally valuable agricultural system and has strategic importance at the national level that goes beyond the interests of the agricultural sector. In this paper we address the development of an organic farming simulation model based on system dynamics (SD) methodology. The model incorporates the relevant variables that affect the development of organic farming. A group decision support system (GDSS) was used to identify the most relevant variables for construction of the causal loop diagram and further model development. The model seeks answers to strategic questions related to the level of organically utilized area, levels of production and crop selection in a long-term dynamic context, and will be used to simulate different policy scenarios for organic farming and their impact on economic and environmental parameters of organic production at an aggregate level.

  5. Ethnographic analysis of traumatic brain injury patients in the national Model Systems database.

    Science.gov (United States)

    Burnett, Derek M; Kolakowsky-Hayner, Stephanie A; Slater, Dan; Stringer, Anthony; Bushnik, Tamara; Zafonte, Ross; Cifu, David X

    2003-02-01

    To compare demographics, injury characteristics, therapy service and intensity, and outcome in minority versus nonminority patients with traumatic brain injury (TBI). Retrospective analysis. Twenty medical centers. Two thousand twenty patients (men, n=1,518; women, n=502; nonminority, n=1,168; minority, n=852) with TBI enrolled in the Traumatic Brain Injury Model Systems database. Not applicable. Age, gender, marital status, education, employment status, injury severity (based on Glasgow Coma Scale [GCS] admission score, length of posttraumatic amnesia, duration of unconsciousness), intensity (hours) of therapy rendered, rehabilitation length of stay (LOS), rehabilitation charges, discharge disposition, postinjury employment status, FIM instrument change scores, and FIM efficiency scores. Independent sample t tests were used to analyze continuous variables; chi-square analyses were used to evaluate categorical data. Overall, minorities were found to be mostly young men who were single, unemployed, and less well educated, with a longer work week if employed when injured. Motor vehicle crashes (MVCs) predominated as the cause of injury for both groups; however, minorities were more likely to sustain injury from acts of violence and auto-versus-pedestrian crashes. Minorities also had higher GCS scores on admission and shorter LOS. Rehabilitation services: Significant differences were found in the types and intensity of rehabilitation services provided; these included physical therapy, occupational therapy, and speech-language pathology, but not psychology. Minority patients who sustain TBI generally tend to be young men with less social responsibility. Although MVCs predominate as the primary etiology, acts of violence and auto-versus-pedestrian incidents are more common in the minority population. Minorities tend to have higher GCS scores at admission. Also, the type and intensity of rehabilitation services provided differed significantly for the various

  6. System and method employing a self-organizing map load feature database to identify electric load types of different electric loads

    Science.gov (United States)

    Lu, Bin; Harley, Ronald G.; Du, Liang; Yang, Yi; Sharma, Santosh K.; Zambare, Prachi; Madane, Mayura A.

    2014-06-17

    A method identifies electric load types of a plurality of different electric loads. The method includes providing a self-organizing map load feature database of a plurality of different electric load types and a plurality of neurons, each of the load types corresponding to a number of the neurons; employing a weight vector for each of the neurons; sensing a voltage signal and a current signal for each of the loads; determining a load feature vector including at least four different load features from the sensed voltage signal and the sensed current signal for a corresponding one of the loads; and identifying by a processor one of the load types by relating the load feature vector to the neurons of the database by identifying the weight vector of one of the neurons corresponding to the one of the load types that is a minimal distance to the load feature vector.
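
    The classification step described above reduces to a nearest-neuron lookup: the measured load feature vector is compared against the weight vectors of the trained self-organizing map, and the load type of the closest neuron is returned. A minimal sketch of that lookup is given below; the feature values, map size, and load-type labels are hypothetical, and only the minimal-distance matching mirrors the method summarized in the abstract.

    ```python
    import numpy as np

    # Hypothetical trained SOM: each neuron has a weight vector (one entry per load
    # feature) and a label giving the electric load type it represents.
    neuron_weights = np.array([
        [0.9, 0.1, 0.3, 0.7],   # e.g. resistive load
        [0.2, 0.8, 0.6, 0.1],   # e.g. motor load
        [0.5, 0.5, 0.9, 0.4],   # e.g. power-electronic load
    ])
    neuron_labels = ["resistive", "motor", "power_electronic"]

    def identify_load_type(load_feature_vector):
        """Return the load type of the neuron whose weight vector has the minimal
        Euclidean distance to the measured load feature vector."""
        distances = np.linalg.norm(neuron_weights - load_feature_vector, axis=1)
        return neuron_labels[int(np.argmin(distances))]

    # Feature vector derived from the sensed voltage/current signals (illustrative values).
    features = np.array([0.85, 0.15, 0.35, 0.65])
    print(identify_load_type(features))  # -> "resistive"
    ```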

  7. Interpenetrating metal-organic and inorganic 3D networks: a computer-aided systematic investigation. Part II [1]. Analysis of the Inorganic Crystal Structure Database (ICSD)

    International Nuclear Information System (INIS)

    Baburin, I.A.; Blatov, V.A.; Carlucci, L.; Ciani, G.; Proserpio, D.M.

    2005-01-01

    Interpenetration in metal-organic and inorganic networks has been investigated by a systematic analysis of the crystallographic structural databases. We have used a version of TOPOS (a package for multipurpose crystallochemical analysis) adapted for searching for interpenetration and based on the concept of Voronoi-Dirichlet polyhedra and on the representation of a crystal structure as a reduced finite graph. In this paper, we report comprehensive lists of interpenetrating inorganic 3D structures from the Inorganic Crystal Structure Database (ICSD), inclusive of 144 Collection Codes for equivalent interpenetrating nets, analyzed on the basis of their topologies. Distinct Classes, corresponding to the different modes in which individual identical motifs can interpenetrate, have been attributed to the entangled structures. Interpenetrating nets of different nature as well as interpenetrating H-bonded nets were also examined

  8. A linear solvation energy relationship model of organic chemical partitioning to dissolved organic carbon.

    Science.gov (United States)

    Kipka, Undine; Di Toro, Dominic M

    2011-09-01

    Predicting the association of contaminants with both particulate and dissolved organic matter is critical in determining the fate and bioavailability of chemicals in environmental risk assessment. To date, the association of a contaminant to particulate organic matter is considered in many multimedia transport models, but the effect of dissolved organic matter is typically ignored due to a lack of either reliable models or experimental data. The partition coefficient to dissolved organic carbon (K(DOC)) may be used to estimate the fraction of a contaminant that is associated with dissolved organic matter. Models relating K(DOC) to the octanol-water partition coefficient (K(OW)) have not been successful for many types of dissolved organic carbon in the environment. Instead, linear solvation energy relationships are proposed to model the association of chemicals with dissolved organic matter. However, more chemically diverse K(DOC) data are needed to produce a more robust model. For humic acid dissolved organic carbon, the linear solvation energy relationship predicts log K(DOC) with a root mean square error of 0.43. Copyright © 2011 SETAC.
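
    The abstract does not reproduce the fitted equation, but linear solvation energy relationships of this kind are commonly written in the Abraham poly-parameter form, in which fitted coefficients multiply solute descriptors. A generic version (the coefficients c, e, s, a, b, v are regression constants, not the values from the cited study) is:

    ```latex
    % Generic Abraham-type linear solvation energy relationship (not study-specific)
    \log K_{\mathrm{DOC}} = c + eE + sS + aA + bB + vV
    ```

    Here E, S, A, B and V are the Abraham solute descriptors (excess molar refraction, dipolarity/polarizability, hydrogen-bond acidity, hydrogen-bond basicity, and McGowan volume).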

  9. JREM: An Approach for Formalising Models in the Requirements Phase with JSON and NoSQL Databases

    OpenAIRE

    Aitana Alonso-Nogueira; Helia Estévez-Fernández; Isaías García

    2017-01-01

    This paper presents an approach to reducing some of the current flaws in the requirements phase of the software development process. It takes the software requirements of an application, builds a conceptual model of them and formalizes it in JSON documents. This formal model is stored in a document-oriented NoSQL database, MongoDB, because of its advantages in flexibility and efficiency. In addition, this paper underlines the contributions of the detailed approach a...
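
    As a concrete illustration of the idea, a single requirement can be serialized as a JSON document and stored in a MongoDB collection. The field names and database/collection names below are hypothetical, chosen only to show the document-oriented formalization the abstract describes; the sketch assumes a local MongoDB server is running.

    ```python
    from pymongo import MongoClient

    # Hypothetical software requirement expressed as a JSON-like document.
    requirement = {
        "id": "REQ-001",
        "title": "User login",
        "type": "functional",
        "description": "The system shall allow registered users to authenticate.",
        "actors": ["user"],
        "priority": "high",
    }

    # Store the formal model in a document-oriented NoSQL database (MongoDB).
    client = MongoClient("mongodb://localhost:27017")  # assumes a local MongoDB instance
    collection = client["jrem_demo"]["requirements"]
    collection.insert_one(requirement)

    # Retrieve all high-priority functional requirements.
    for doc in collection.find({"type": "functional", "priority": "high"}):
        print(doc["id"], doc["title"])
    ```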

  10. Creating a model to detect dairy cattle farms with poor welfare using a national database.

    Science.gov (United States)

    Krug, C; Haskell, M J; Nunes, T; Stilwell, G

    2015-12-01

    The objective of this study was to determine whether dairy farms with poor cow welfare could be identified using a national database for bovine identification and registration that monitors cattle deaths and movements. The welfare of dairy cattle was assessed using the Welfare Quality(®) protocol (WQ) on 24 Portuguese dairy farms and on 1930 animals. Five farms were classified as having poor welfare and the other 19 were classified as having good welfare. Fourteen million records from the national cattle database were analysed to identify potential welfare indicators for dairy farms. Fifteen potential national welfare indicators were calculated based on that database, and the link between the results of the WQ evaluation and the national cattle database was made using the identification code of each farm. Of the potential national welfare indicators, only two differed significantly between farms with good welfare and poor welfare: 'proportion of on-farm deaths' and 'calving-to-calving interval'. A tree-based model built on these two variables was able to correctly identify 70% and 79% of the farms classified as having poor and good welfare, respectively. The national cattle database analysis could be useful in helping official veterinary services to detect farms that have poor welfare and also to determine which welfare indicators are poor on each particular farm. Copyright © 2015 Elsevier B.V. All rights reserved.
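
    The final model described above is a tree-based classifier built on just two national-database indicators. A sketch of how such a model could be fitted and applied is shown below; the numeric values are invented placeholders, and scikit-learn's DecisionTreeClassifier stands in for whatever software the authors actually used.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    # Illustrative farm-level indicators (values are invented, not from the study):
    # column 0 = proportion of on-farm deaths, column 1 = calving-to-calving interval (days)
    X = np.array([
        [0.02, 390], [0.03, 400], [0.08, 460], [0.01, 385],
        [0.09, 470], [0.02, 395], [0.07, 450], [0.10, 480],
    ])
    y = np.array([0, 0, 1, 0, 1, 0, 1, 1])  # 0 = good welfare, 1 = poor welfare

    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

    # Flag a new farm from its national-database indicators.
    print(tree.predict([[0.06, 455]]))  # -> [1], i.e. likely poor welfare
    ```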

  11. Modeling the influence of organic acids on soil weathering

    Science.gov (United States)

    Lawrence, Corey R.; Harden, Jennifer W.; Maher, Kate

    2014-01-01

    Biological inputs and organic matter cycling have long been regarded as important factors in the physical and chemical development of soils. In particular, the extent to which low molecular weight organic acids, such as oxalate, influence geochemical reactions has been widely studied. Although the effects of organic acids are diverse, there is strong evidence that organic acids accelerate the dissolution of some minerals. However, the influence of organic acids at the field-scale and over the timescales of soil development has not been evaluated in detail. In this study, a reactive-transport model of soil chemical weathering and pedogenic development was used to quantify the extent to which organic acid cycling controls mineral dissolution rates and long-term patterns of chemical weathering. Specifically, oxalic acid was added to simulations of soil development to investigate a well-studied chronosequence of soils near Santa Cruz, CA. The model formulation includes organic acid input, transport, decomposition, organic-metal aqueous complexation and mineral surface complexation in various combinations. Results suggest that although organic acid reactions accelerate mineral dissolution rates near the soil surface, the net response is an overall decrease in chemical weathering. Model results demonstrate the importance of organic acid input concentrations, fluid flow, decomposition and secondary mineral precipitation rates on the evolution of mineral weathering fronts. In particular, model soil profile evolution is sensitive to kaolinite precipitation and oxalate decomposition rates. The soil profile-scale modeling presented here provides insights into the influence of organic carbon cycling on soil weathering and pedogenesis and supports the need for further field-scale measurements of the flux and speciation of reactive organic compounds.

  12. A prediction model-based algorithm for computer-assisted database screening of adverse drug reactions in the Netherlands.

    Science.gov (United States)

    Scholl, Joep H G; van Hunsel, Florence P A M; Hak, Eelko; van Puijenbroek, Eugène P

    2018-02-01

    The statistical screening of pharmacovigilance databases containing spontaneously reported adverse drug reactions (ADRs) is mainly based on disproportionality analysis. The aim of this study was to improve the efficiency of full database screening using a prediction model-based approach. A logistic regression-based prediction model containing 5 candidate predictors was developed and internally validated using the Summary of Product Characteristics as the gold standard for the outcome. All drug-ADR associations, with the exception of those related to vaccines, with a minimum of 3 reports formed the training data for the model. Performance was based on the area under the receiver operating characteristic curve (AUC). Results were compared with the current method of database screening based on the number of previously analyzed associations. A total of 25 026 unique drug-ADR associations formed the training data for the model. The final model contained all 5 candidate predictors (number of reports, disproportionality, reports from healthcare professionals, reports from marketing authorization holders, Naranjo score). The AUC for the full model was 0.740 (95% CI; 0.734-0.747). The internal validity was good based on the calibration curve and bootstrapping analysis (AUC after bootstrapping = 0.739). Compared with the old method, the AUC increased from 0.649 to 0.740, and the proportion of potential signals increased by approximately 50% (from 12.3% to 19.4%). A prediction model-based approach can be a useful tool to create priority-based listings for signal detection in databases consisting of spontaneous ADRs. © 2017 The Authors. Pharmacoepidemiology & Drug Safety Published by John Wiley & Sons Ltd.
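
    A minimal sketch of the kind of prediction-model approach the abstract describes, a logistic regression on five candidate predictors evaluated by the area under the ROC curve, is given below. The predictor names follow the abstract, but the data are random placeholders and the code is not the authors' implementation.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 1000

    # Candidate predictors per drug-ADR association (random placeholder values):
    # number of reports, disproportionality measure, reports from healthcare
    # professionals, reports from marketing authorization holders, Naranjo score.
    X = rng.normal(size=(n, 5))
    # Gold-standard outcome: 1 if the ADR is listed in the SmPC, else 0 (simulated here).
    y = (X @ np.array([0.8, 1.2, 0.5, -0.3, 0.6]) + rng.normal(size=n) > 0).astype(int)

    model = LogisticRegression().fit(X, y)
    auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
    print(f"apparent AUC = {auc:.3f}")  # internal validation would add bootstrapping
    ```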

  13. Database Description - DGBY | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description. General information of database. Database name: DGBY. Alternative name: Database... TEL: +81-29-838-8066. E-mail: ... Database classification: Microarray Data and other Gene Expression Databases. Organism Taxonomy Name: Saccharomyces cerevisiae, Taxonomy ID: 4932. Database description: ... (so-called phenomics). We uploaded these data to this website, which is designated DGBY (Database for Gene expres...). Reference: ...ma J, Ando A, Takagi H. Journal: Yeast. 2008 Mar;25(3):179-90. External Links: Original website information: Database ...

  14. Transport and Environment Database System (TRENDS): Maritime Air Pollutant Emission Modelling

    DEFF Research Database (Denmark)

    Georgakaki, Aliki; Coffey, R. A.; Lock, G.

    2003-01-01

    This paper reports the development of the maritime module within the framework of the TRENDS project. A detailed database has been constructed, which includes all stages of the energy consumption and air pollutant emission calculations. The technical assumptions and factors incorporated in the database are described ... the difficulties encountered, since the statistical data collection was not undertaken with a view to this purpose, are mentioned. Examples of the results obtained by the database are presented. These range from detailed air pollutant emission results per port and vessel type to aggregate results for different types of movements ... short sea or deep-sea shipping. Key Words: Air Pollution, Maritime Transport, Air Pollutant Emissions

  15. Implementation of dragon-I database system based on B/S model

    International Nuclear Information System (INIS)

    Jiang Wei; Lai Qinggui; Chen Nan; Gao Feng

    2010-01-01

    A B/S (browser/server) architecture is utilized in the database system of 'Dragon-I'. The dynamic web software is designed with ASP.NET technology and is divided into three main tiers: a user interface tier, a business logic tier and a data access tier. The accelerator status data and the data generated during experiments are managed with the SQL Server DBMS, and the database is accessed using ADO.NET. The facility status, control parameters and test waveforms can be queried by experiment number and experiment time. The requirements for storage, management, browsing, querying and offline analysis are fully met by this database system based on the B/S architecture. (authors)

  16. Daphnia as an Emerging Epigenetic Model Organism

    Directory of Open Access Journals (Sweden)

    Kami D. M. Harris

    2012-01-01

    Full Text Available Daphnia offer a variety of benefits for the study of epigenetics. Daphnia’s parthenogenetic life cycle allows the study of epigenetic effects in the absence of confounding genetic differences. Sex determination and sexual reproduction are epigenetically determined as are several other well-studied alternate phenotypes that arise in response to environmental stressors. Additionally, there is a large body of ecological literature available, recently complemented by the genome sequence of one species and transgenic technology. DNA methylation has been shown to be altered in response to toxicants and heavy metals, although investigation of other epigenetic mechanisms is only beginning. More thorough studies on DNA methylation as well as investigation of histone modifications and RNAi in sex determination and predator-induced defenses using this ecologically and evolutionarily important organism will contribute to our understanding of epigenetics.

  17. Nematodes: Model Organisms in High School Biology

    Science.gov (United States)

    Bliss, TJ; Anderson, Margery; Dillman, Adler; Yourick, Debra; Jett, Marti; Adams, Byron J.; Russell, RevaBeth

    2007-01-01

    In a collaborative effort between university researchers and high school science teachers, an inquiry-based laboratory module was designed using two species of insecticidal nematodes to help students apply scientific inquiry and elements of thoughtful experimental design. The learning experience and model are described in this article. (Contains 4…

  18. Database Optimizing Services

    Directory of Open Access Journals (Sweden)

    Adrian GHENCEA

    2010-12-01

    Full Text Available Almost every organization has a database at its centre. The database supports different activities, whether production, sales and marketing or internal operations. Every day, a database is accessed for help with strategic decisions. Meeting such needs therefore requires high-quality security and availability. Those needs can be met using a DBMS (Database Management System), which is, in fact, the software behind a database. Technically speaking, it is software that uses a standard method for cataloguing, recovering, and running different data queries. A DBMS manages the input data, organizes it, and provides ways for its users or other programs to modify or extract the data. Managing the database is an operation that requires periodic updates, optimization and monitoring.

  19. A geospatial database model for the management of remote sensing datasets at multiple spectral, spatial, and temporal scales

    Science.gov (United States)

    Ifimov, Gabriela; Pigeau, Grace; Arroyo-Mora, J. Pablo; Soffer, Raymond; Leblanc, George

    2017-10-01

    In this study the development and implementation of a geospatial database model for the management of multiscale datasets encompassing airborne imagery and associated metadata is presented. To develop the multi-source geospatial database we have used a Relational Database Management System (RDBMS) on a Structured Query Language (SQL) server, which was then integrated into ArcGIS and implemented as a geodatabase. The acquired datasets were compiled, standardized, and integrated into the RDBMS, where logical associations between different types of information were linked (e.g. location, date, and instrument). Airborne data, at different processing levels (digital numbers through geocorrected reflectance), were implemented in the geospatial database where the datasets are linked spatially and temporally. An example dataset consisting of airborne hyperspectral imagery, collected for inter- and intra-annual vegetation characterization and detection of potential hydrocarbon seepage events over pipeline areas, is presented. Our work provides a model for the management of airborne imagery, which is a challenging aspect of data management in remote sensing, especially when large volumes of data are collected.
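
    The core of such a design is a set of relational tables in which each image record is linked to its acquisition metadata (location, date, instrument) and processing level. A minimal relational sketch is given below, using SQLite in place of the SQL Server/ArcGIS geodatabase stack named above; the table and column names are hypothetical.

    ```python
    import sqlite3

    # Stand-in for the RDBMS described in the abstract (SQLite instead of SQL Server).
    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE instrument (
        instrument_id INTEGER PRIMARY KEY,
        name          TEXT NOT NULL
    );
    CREATE TABLE flight (
        flight_id     INTEGER PRIMARY KEY,
        instrument_id INTEGER REFERENCES instrument(instrument_id),
        acquired_on   TEXT NOT NULL,        -- acquisition date
        site_name     TEXT NOT NULL         -- survey location
    );
    CREATE TABLE image (
        image_id         INTEGER PRIMARY KEY,
        flight_id        INTEGER REFERENCES flight(flight_id),
        processing_level TEXT NOT NULL,     -- e.g. digital numbers, geocorrected reflectance
        file_path        TEXT NOT NULL
    );
    """)

    # Query: all geocorrected reflectance images for one site, ordered by date.
    rows = con.execute("""
        SELECT image.file_path, flight.acquired_on, instrument.name
        FROM image
        JOIN flight ON image.flight_id = flight.flight_id
        JOIN instrument ON flight.instrument_id = instrument.instrument_id
        WHERE flight.site_name = ? AND image.processing_level = 'geocorrected reflectance'
        ORDER BY flight.acquired_on
    """, ("pipeline_corridor_A",)).fetchall()
    ```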

  20. Accessing and using chemical databases

    DEFF Research Database (Denmark)

    Nikolov, Nikolai Georgiev; Pavlov, Todor; Niemelä, Jay Russell

    2013-01-01

    Computer-based representation of chemicals makes it possible to organize data in chemical databases: collections of chemical structures and associated properties. Databases are widely used wherever efficient processing of chemical information is needed, including search, storage, retrieval......, and dissemination. Structure and functionality of chemical databases are considered. The typical kinds of information found in a chemical database are considered: identification, structural, and associated data. Functionality of chemical databases is presented, with examples of search and access types. More details...... are included about the OASIS database and platform and the Danish (Q)SAR Database online. Various types of chemical database resources are discussed, together with a list of examples....

  1. Supply Chain Initiatives Database

    Energy Technology Data Exchange (ETDEWEB)

    None

    2012-11-01

    The Supply Chain Initiatives Database (SCID) presents innovative approaches to engaging industrial suppliers in efforts to save energy, increase productivity and improve environmental performance. This comprehensive and freely-accessible database was developed by the Institute for Industrial Productivity (IIP). IIP acknowledges Ecofys for their valuable contributions. The database contains case studies searchable according to the types of activities buyers are undertaking to motivate suppliers, target sector, organization leading the initiative, and program or partnership linkages.

  2. Satisfaction with life after burn: A Burn Model System National Database Study.

    Science.gov (United States)

    Goverman, J; Mathews, K; Nadler, D; Henderson, E; McMullen, K; Herndon, D; Meyer, W; Fauerbach, J A; Wiechman, S; Carrougher, G; Ryan, C M; Schneider, J C

    2016-08-01

    While mortality rates after burn are low, physical and psychosocial impairments are common. Clinical research is focusing on reducing morbidity and optimizing quality of life. This study examines self-reported Satisfaction With Life Scale scores in a longitudinal, multicenter cohort of survivors of major burns. Risk factors associated with Satisfaction With Life Scale scores are identified. Data from the National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR) Burn Model System (BMS) database for burn survivors greater than 9 years of age, from 1994 to 2014, were analyzed. Demographic and medical data were collected on each subject. The primary outcome measures were the individual items and total Satisfaction With Life Scale (SWLS) scores at time of hospital discharge (pre-burn recall period) and 6, 12, and 24 months after burn. The SWLS is a validated 5-item instrument with items rated on a 1-7 Likert scale. The differences in scores over time were determined and scores for burn survivors were also compared to a non-burn, healthy population. Step-wise regression analysis was performed to determine predictors of SWLS scores at different time intervals. The SWLS was completed at time of discharge (1129 patients), 6 months after burn (1231 patients), 12 months after burn (1123 patients), and 24 months after burn (959 patients). There were no statistically significant differences between these groups in terms of medical or injury demographics. The majority of the population was Caucasian (62.9%) and male (72.6%), with a mean TBSA burned of 22.3%. Mean total SWLS scores for burn survivors were unchanged and significantly below that of a non-burn population at all examined time points after burn. Although the mean SWLS score was unchanged over time, a large number of subjects demonstrated improvement or decrement of at least one SWLS category. Gender, TBSA burned, LOS, and school status were associated with SWLS scores at 6 months

  3. Combining a weed traits database with a population dynamics model predicts shifts in weed communities

    DEFF Research Database (Denmark)

    Storkey, Jonathan; Holst, Niels; Bøjer, Ole Mission

    2015-01-01

    A functional approach to predicting shifts in weed floras in response to management or environmental change requires the combination of data on weed traits with analytical frameworks that capture the filtering effect of selection pressures on traits. A weed traits database (WTDB) was designed, po...

  4. CODASC : a database for the validation of street canyon dispersion models

    NARCIS (Netherlands)

    Gromke, C.B.

    2013-01-01

    CODASC stands for Concentration Data of Street Canyons (CODASC 2008, www.codasc.de). It is a database which provides traffic pollutant concentrations in urban street canyons obtained from wind-tunnel dispersion experiments. CODASC comprises concentration data of street canyons with different aspect

  5. Biogas composition and engine performance, including database and biogas property model

    NARCIS (Netherlands)

    Bruijstens, A.J.; Beuman, W.P.H.; Molen, M. van der; Rijke, J. de; Cloudt, R.P.M.; Kadijk, G.; Camp, O.M.G.C. op den; Bleuanus, W.A.J.

    2008-01-01

    In order to enable an evaluation of the current biogas quality situation in the EU, results are presented in a biogas database. Furthermore, the key gas parameter Sonic Bievo Index (influence on open-loop A/F ratio) is defined, and other key gas parameters like the Methane Number (knock resistance)

  6. An automatic analyzer for sports video databases using visual cues and real-world modeling

    NARCIS (Netherlands)

    Han, Jungong; Farin, D.S.; With, de P.H.N.; Lao, Weilun

    2006-01-01

    With the advent of hard-disk video recording, video databases gradually emerge for consumer applications. The large capacity of disks requires the need for fast storage and retrieval functions. We propose a semantic analyzer for sports video, which is able to automatically extract and analyze key

  7. MiDAS 2.0: an ecosystem-specific taxonomy and online database for the organisms of wastewater treatment systems expanded for anaerobic digester groups.

    Science.gov (United States)

    McIlroy, Simon Jon; Kirkegaard, Rasmus Hansen; McIlroy, Bianca; Nierychlo, Marta; Kristensen, Jannie Munk; Karst, Søren Michael; Albertsen, Mads; Nielsen, Per Halkjær

    2017-01-01

    Wastewater is increasingly viewed as a resource, with anaerobic digester technology being routinely implemented for biogas production. Characterising the microbial communities involved in wastewater treatment facilities and their anaerobic digesters is considered key to their optimal design and operation. Amplicon sequencing of the 16S rRNA gene allows high-throughput monitoring of these systems. The MiDAS field guide is a public resource providing amplicon sequencing protocols and an ecosystem-specific taxonomic database optimized for use with wastewater treatment facility samples. The curated taxonomy endeavours to provide a genus-level-classification for abundant phylotypes and the online field guide links this identity to published information regarding their ecology, function and distribution. This article describes the expansion of the database resources to cover the organisms of the anaerobic digester systems fed primary sludge and surplus activated sludge. The updated database includes descriptions of the abundant genus-level-taxa in influent wastewater, activated sludge and anaerobic digesters. Abundance information is also included to allow assessment of the role of emigration in the ecology of each phylotype. MiDAS is intended as a collaborative resource for the progression of research into the ecology of wastewater treatment, by providing a public repository for knowledge that is accessible to all interested in these biotechnologically important systems. http://www.midasfieldguide.org. © The Author(s) 2017. Published by Oxford University Press.

  8. Self-organized quantum rings : Physical characterization and theoretical modeling

    NARCIS (Netherlands)

    Fomin, V.M.; Gladilin, V.N.; Devreese, J.T.; Koenraad, P.M.; Fomin, V.M.

    2014-01-01

    An adequate modeling of the self-organized quantum rings is possible only on the basis of the modern characterization of those nanostructures. We discuss an atomic-scale analysis of the indium distribution of self-organized InGaAs quantum rings (QRs). The analysis of the shape, size and composition

  9. Resilient organizations: matrix model and service line management.

    Science.gov (United States)

    Westphal, Judith A

    2005-09-01

    Resilient organizations modify structures to meet the demands of the marketplace. The author describes a structure that enables multihospital organizations to innovate and rapidly adapt to changes. Service line management within a matrix model is an evolving organizational structure for complex systems in which nurses are pivotal members.

  10. (Tropical) soil organic matter modelling: problems and prospects

    NARCIS (Netherlands)

    Keulen, van H.

    2001-01-01

    Soil organic matter plays an important role in many physical, chemical and biological processes. However, the quantitative relations between the mineral and organic components of the soil and the relations with the vegetation are poorly understood. In such situations, the use of models is an

  11. Investigating ecological speciation in non-model organisms

    DEFF Research Database (Denmark)

    Foote, Andrew David

    2012-01-01

    Background: Studies of ecological speciation tend to focus on a few model biological systems. In contrast, few studies on non-model organisms have been able to infer ecological speciation as the underlying mechanism of evolutionary divergence. Questions: What are the pitfalls in studying ecological...... speciation in non-model organisms that lead to this bias? What alternative approaches might redress the balance? Organism: Genetically differentiated types of the killer whale (Orcinus orca) exhibiting differences in prey preference, habitat use, morphology, and behaviour. Methods: Review of the literature...... on killer whale evolutionary ecology in search of any difficulty in demonstrating causal links between variation in phenotype, ecology, and reproductive isolation in this non-model organism. Results: At present, we do not have enough evidence to conclude that adaptive phenotype traits linked to ecological...

  12. Modelling the self-organization and collapse of complex networks

    Indian Academy of Sciences (India)

    Modelling the self-organization and collapse of complex networks. Sanjay Jain, Department of Physics and Astrophysics, University of Delhi; Jawaharlal Nehru Centre for Advanced Scientific Research, Bangalore; Santa Fe Institute, Santa Fe, New Mexico.

  13. Development of Pipeline Database and CAD Model for Selection of Core Security Zone in Nuclear Power Plants

    International Nuclear Information System (INIS)

    Choi, Seong Soo; Kwon, Tae Gyun; Baek, Hun Hyun; Kwon, Min Jin

    2008-07-01

    The objective of the project is to develop a pipeline database which can be used for the selection of core security zones considering the safety significance of pipes, and to develop a CAD model for 3-dimensional visualization of core security zones, for the purpose of minimizing damage and loss, enforcing security and protection of important facilities, and improving plant design in preparation for emergency situations such as physical terrorism in nuclear power plants. In this study, the pipeline database is developed for selection of core security zones considering the safety significance of safety class 1 and 2 pipes. The database includes the information on 'pipe-room information-surrogate component' mapping, initiating events which may occur and accident mitigation functions which may be damaged by the pipe failure, and the drawing information related to 2,270 pipe segments of 30 systems. For the 3-dimensional visualization of core security zones, the CAD models of the containment building and the auxiliary building are developed using the 3-D MAX tool, and a demo program which can visualize the DirectX model converted from the 3-D MAX model is also developed. In addition to this, the coordinate information of all the buildings and their rooms is generated using the AutoCAD tool in order to be used as an input for 3-dimensional browsing of the VIP program

  14. Self-organizing map models of language acquisition

    Science.gov (United States)

    Li, Ping; Zhao, Xiaowei

    2013-01-01

    Connectionist models have had a profound impact on theories of language. While most early models were inspired by the classic parallel distributed processing architecture, recent models of language have explored various other types of models, including self-organizing models for language acquisition. In this paper, we aim at providing a review of the latter type of models, and highlight a number of simulation experiments that we have conducted based on these models. We show that self-organizing connectionist models can provide significant insights into long-standing debates in both monolingual and bilingual language development. We suggest future directions in which these models can be extended, to better connect with behavioral and neural data, and to make clear predictions in testing relevant psycholinguistic theories. PMID:24312061

  15. Immediate Dissemination of Student Discoveries to a Model Organism Database Enhances Classroom-Based Research Experiences

    Science.gov (United States)

    Wiley, Emily A.; Stover, Nicholas A.

    2014-01-01

    Use of inquiry-based research modules in the classroom has soared over recent years, largely in response to national calls for teaching that provides experience with scientific processes and methodologies. To increase the visibility of in-class studies among interested researchers and to strengthen their impact on student learning, we have…

  16. Labour Quality Model for Organic Farming Food Chains

    OpenAIRE

    Gassner, B.; Freyer, B.; Leitner, H.

    2008-01-01

    The debate on labour quality in science is controversial as well as in the organic agriculture community. Therefore, we reviewed literature on different labour quality models and definitions, and had key informant interviews on labour quality issues with stakeholders in a regional oriented organic agriculture bread food chain. We developed a labour quality model with nine quality categories and discussed linkages to labour satisfaction, ethical values and IFOAM principles.

  17. Uncertainty assessment of a polygon database of soil organic carbon for greenhouse gas reporting in Canada’s Arctic and sub-arctic

    Directory of Open Access Journals (Sweden)

    M.F. Hossain

    2014-08-01

    Full Text Available Canada’s Arctic and sub-arctic comprise 46% of Canada’s landmass and contain 45% of the total soil organic carbon (SOC). Pronounced climate warming and increasing human disturbances could induce the release of this SOC to the atmosphere as greenhouse gases. Canada is committed to estimating and reporting the greenhouse gas emissions and removals induced by land use change in the Arctic and sub-arctic. To assess the uncertainty of the estimate, we compiled a site-measured SOC database for Canada’s north, and used it to compare with a polygon database that will be used for estimating SOC for the UNFCCC reporting. In 10 polygons where 3 or more measured sites were well located in each polygon, the site-averaged SOC content agreed with the polygon data within ±33% for the top 30 cm and within ±50% for the top 1 m soil. If we directly compared the SOC of the 382 measured sites with the polygon mean SOC, there was poor agreement: the relative error was less than 50% at 40% of the sites, and less than 100% at 68% of the sites. The relative errors were more than 400% at 10% of the sites. These comparisons indicate that the polygon database is too coarse to represent the SOC conditions for individual sites. The difference is close to the uncertainty range for reporting. The spatial database could be improved by relating site and polygon SOC data with more easily observable surface features that can be identified and derived from remote sensing imagery.

  18. International workshop of the Confinement Database and Modelling Expert Group in collaboration with the Edge and Pedestal Physics Expert Group

    International Nuclear Information System (INIS)

    Cordey, J.; Kardaun, O.

    2001-01-01

    A Workshop of the Confinement Database and Modelling Expert Group (EG) was held on 2-6 April at the Plasma Physics Research Center of Lausanne (CRPP), Switzerland. Presentations were given on the present status of the plasma pedestal (temperature and energy) scalings from an empirical and theoretical perspective. An integrated approach to modelling tokamaks incorporating core transport, edge pedestal and SOL, together with a model for ELMs, was presented by the JCT. New experimental data on global H-mode confinement were discussed, and presentations on L-H threshold power were made

  19. Database Description - Open TG-GATEs Pathological Image Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description. General information of database. Database name: Open TG-GATEs Pathological Image Database. Alternative name: -. DOI: 10.18908/lsdba.nbdc00954-0... ... Biomedical Innovation, 7-6-8 Saito-asagi, Ibaraki-city, Osaka 567-0085, Japan. TEL: 81-72-641-9826. Email: ... Database classification: Toxicogenomics Database. Organism Taxonomy Name: Rattus norvegi... Article title: Author name(s): Journal: External Links: Original website information: Database ...

  20. Populating a Control Point Database: A cooperative effort between the USGS, Grand Canyon Monitoring and Research Center and the Grand Canyon Youth Organization

    Science.gov (United States)

    Brown, K. M.; Fritzinger, C.; Wharton, E.

    2004-12-01

    The Grand Canyon Monitoring and Research Center measures the effects of Glen Canyon Dam operations on the resources along the Colorado River from Glen Canyon Dam to Lake Mead in support of the Grand Canyon Adaptive Management Program. Control points are integral for geo-referencing the myriad of data collected in the Grand Canyon including aerial photography, topographic and bathymetric data used for classification and change-detection analysis of physical, biologic and cultural resources. The survey department has compiled a list of 870 control points installed by various organizations needing to establish a consistent reference for data collected at field sites along the 240-mile stretch of Colorado River in the Grand Canyon. This list is the foundation for the Control Point Database established primarily for researchers, to locate control points and independently geo-reference collected field data. The database has the potential to be a valuable mapping tool for assisting researchers to easily locate a control point and reduce the occurrence of unknowingly installing new control points within close proximity of an existing control point. The database is missing photographs and accurate site description information. Current site descriptions do not accurately define the location of the point but refer to the project that used the point, or some other interesting fact associated with the point. The Grand Canyon Monitoring and Research Center (GCMRC) resolved this problem by turning the data collection effort into an educational exercise for the participants of the Grand Canyon Youth organization. Grand Canyon Youth is a non-profit organization providing experiential education for middle and high school aged youth. GCMRC and the Grand Canyon Youth formed a partnership where GCMRC provided the logistical support, equipment, and training to conduct the field work, and the Grand Canyon Youth provided the time and personnel to complete the field work. Two data

  1. The STRING database in 2011

    DEFF Research Database (Denmark)

    Szklarczyk, Damian; Franceschini, Andrea; Kuhn, Michael

    2011-01-01

    present an update on the online database resource Search Tool for the Retrieval of Interacting Genes (STRING); it provides uniquely comprehensive coverage and ease of access to both experimental as well as predicted interaction information. Interactions in STRING are provided with a confidence score...... models, extensive data updates and strongly improved connectivity and integration with third-party resources. Version 9.0 of STRING covers more than 1100 completely sequenced organisms; the resource can be reached at http://string-db.org....

  2. Application of Large-Scale Database-Based Online Modeling to Plant State Long-Term Estimation

    Science.gov (United States)

    Ogawa, Masatoshi; Ogai, Harutoshi

    Recently, attention has been drawn to local modeling techniques based on a new idea called “Just-In-Time (JIT) modeling”. To apply JIT modeling online to a large database, “Large-scale database-based Online Modeling (LOM)” has been proposed. LOM is a technique that makes the retrieval of neighboring data more efficient by using both “stepwise selection” and quantization. In order to predict the long-term state of the plant without using future data of manipulated variables, an Extended Sequential Prediction method of LOM (ESP-LOM) has been proposed. In this paper, the LOM and the ESP-LOM are introduced.
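
    The essence of JIT modeling, which LOM scales to large databases, is to defer model building until a query arrives: retrieve the stored samples nearest to the current query, fit a small local model on just those neighbours, and predict. A compact sketch is given below (plain k-nearest-neighbour retrieval plus a local least-squares fit; the quantization and stepwise-selection speedups of LOM are omitted, and the plant data are random placeholders).

    ```python
    import numpy as np

    def jit_predict(X_db, y_db, x_query, k=20):
        """Just-In-Time prediction: fit a local linear model on the k stored
        samples nearest to the query and evaluate it at the query point."""
        dists = np.linalg.norm(X_db - x_query, axis=1)
        idx = np.argsort(dists)[:k]                      # neighbour retrieval
        X_loc = np.hstack([X_db[idx], np.ones((k, 1))])  # local design matrix with intercept
        coef, *_ = np.linalg.lstsq(X_loc, y_db[idx], rcond=None)
        return np.append(x_query, 1.0) @ coef

    # Placeholder "plant history" database: input variables and a measured state variable.
    rng = np.random.default_rng(1)
    X_db = rng.uniform(size=(5000, 3))
    y_db = np.sin(X_db[:, 0]) + 0.5 * X_db[:, 1] - X_db[:, 2] + rng.normal(0, 0.01, 5000)

    print(jit_predict(X_db, y_db, np.array([0.4, 0.2, 0.7])))
    ```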

  3. Knowledge Loss: A Defensive Model In Nuclear Research Organization Memory

    International Nuclear Information System (INIS)

    Mohamad Safuan Bin Sulaiman; Muhd Noor Muhd Yunus

    2013-01-01

    Knowledge is an essential part of a research-based organization. It should be properly managed to ensure that any pitfalls of knowledge retention due to knowledge loss, both tacit and explicit, are mitigated. An audit of the knowledge entities that exist in the organization is important to identify the size of the critical knowledge. It is very much related to how many know-what, know-how and know-why experts exist in the organization. This study conceptually proposes a defensive model for Nuclear Malaysia's organization memory and the application of Knowledge Loss Risk Assessment (KLRA) as an important tool for critical knowledge identification. (author)

  4. NEW MODEL FOR QUANTIFICATION OF ICT DEPENDABLE ORGANIZATIONS RESILIENCE

    Directory of Open Access Journals (Sweden)

    Zora Arsovski

    2011-03-01

    Full Text Available Today's business environment demands highly reliable organizations in every segment in order to be competitive on the global market. Besides that, the ICT sector is becoming irreplaceable in many fields of business, from communication to complex systems for process control and production. To fulfill those requirements and to develop further, many organizations worldwide are implementing a business paradigm called organizational resilience. Although resilience is a well-known term in many scientific fields, it is not well studied due to its complex nature. This paper deals with developing a new model for the assessment and quantification of the resilience of ICT-dependent organizations.

  5. Relational databases

    CERN Document Server

    Bell, D A

    1986-01-01

    Relational Databases explores the major advances in relational databases and provides a balanced analysis of the state of the art in relational databases. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration. The

  6. Livestock Anaerobic Digester Database

    Science.gov (United States)

    The Anaerobic Digester Database provides basic information about anaerobic digesters on livestock farms in the United States, organized in Excel spreadsheets. It includes projects that are under construction, operating, or shut down.

  7. A model to accumulate fractionated dose in a deforming organ

    International Nuclear Information System (INIS)

    Yan Di; Jaffray, D.A.; Wong, J.W.

    1999-01-01

    Purpose: Measurements of internal organ motion have demonstrated that daily organ deformation exists throughout the course of radiation treatment. However, a method of constructing the resultant dose delivered to the organ volume remains a difficult challenge. In this study, a model to quantify internal organ motion and a method to construct a cumulative dose in a deforming organ are introduced. Methods and Materials: A biomechanical model of an elastic body is used to quantify patient organ motion in the process of radiation therapy. Intertreatment displacements of volume elements in an organ of interest are calculated by applying a finite element method with boundary conditions obtained from multiple daily computed tomography (CT) measurements. Therefore, by also incorporating measurements of daily setup error, the daily dose delivered to a deforming organ can be accumulated by tracking the position of volume elements in the organ. Furthermore, the distribution of patient-specific organ motion is also predicted during the early phase of treatment delivery using the daily measurements, and the cumulative dose distribution in the organ can then be estimated. This dose distribution is updated whenever a new measurement becomes available, and used to reoptimize the ongoing treatment. Results: An integrated process to accumulate dosage in a daily deforming organ was implemented. In this process, intertreatment organ motion and setup error were systematically quantified and incorporated into the calculation of the cumulative dose. An example of rectal wall motion in a prostate treatment was used to test the model. The displacements of volume elements in the rectal wall, as well as the resultant doses, were calculated. Conclusion: This study is intended to provide a systematic framework for incorporating daily patient-specific organ motion and setup error in the reconstruction of the cumulative dose distribution in an organ of interest. The realistic dose
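    The bookkeeping at the heart of such a dose-accumulation scheme (follow each volume element through its daily displacement and sum the dose sampled at the displaced positions) can be sketched as follows. The displacement fields and dose lookups are assumed to be supplied externally, for example by a finite-element model driven by daily CT contours; all names and toy data below are hypothetical.

```python
import numpy as np

def accumulate_dose(ref_points, displacement_fields, dose_lookups):
    """Accumulate fractionated dose on the volume elements of a deforming organ.

    ref_points : (n, 3) reference positions of the organ's volume elements
    displacement_fields : one callable per fraction; f(p) returns the (n, 3)
        displacements of the elements for that day (assumed to come from an
        external deformable model, e.g. a finite-element solution)
    dose_lookups : one callable per fraction; g(q) returns the (n,) dose
        delivered at positions q during that fraction
    """
    total = np.zeros(len(ref_points))
    for move, dose_at in zip(displacement_fields, dose_lookups):
        daily_pos = ref_points + move(ref_points)  # track each element that day
        total += dose_at(daily_pos)                # sample the daily dose there
    return total                                   # cumulative dose per element

# Toy usage: 2 fractions, rigid shifts along z, and a linear dose gradient in z.
pts = np.array([[0.0, 0.0, z] for z in np.linspace(0.0, 5.0, 6)])
shifts = [lambda p: np.tile([0.0, 0.0, 0.3], (len(p), 1)),
          lambda p: np.tile([0.0, 0.0, -0.2], (len(p), 1))]
doses = [lambda q: 1.0 + 0.1 * q[:, 2]] * 2
print(accumulate_dose(pts, shifts, doses))
```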

  8. Charge carrier relaxation model in disordered organic semiconductors

    International Nuclear Information System (INIS)

    Lu, Nianduan; Li, Ling; Sun, Pengxiao; Liu, Ming

    2013-01-01

    Charge carrier relaxation phenomena in disordered organic semiconductors have been demonstrated and investigated theoretically. An analytical model describing charge carrier relaxation is proposed based on pure hopping transport theory. The relations between the relaxation phenomena and material disorder, electric field and temperature are discussed in detail. The calculated results reveal that increasing the electric field and temperature promotes the relaxation effect in disordered organic semiconductors, while increasing material disorder weakens the relaxation. With appropriate parameters, the proposed model explains the stretched-exponential law well. The calculation shows good agreement with experimental data for organic semiconductors.
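    The stretched-exponential (Kohlrausch) law mentioned above, phi(t) = exp[-(t/tau)^beta], is straightforward to fit to a relaxation transient. The snippet below is a generic curve-fitting sketch on synthetic data, not the authors' analytical hopping model.

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exponential(t, tau, beta):
    """Kohlrausch stretched-exponential relaxation, phi(t) = exp[-(t/tau)**beta]."""
    return np.exp(-(t / tau) ** beta)

# Synthetic relaxation transient standing in for a measured carrier-relaxation curve.
t = np.logspace(-1, 3, 60)                       # time, arbitrary units
phi = stretched_exponential(t, tau=25.0, beta=0.6)
phi_noisy = phi + 0.01 * np.random.default_rng(1).normal(size=t.size)

# Fit tau and beta; beta < 1 indicates dispersive, disorder-broadened relaxation.
(tau_fit, beta_fit), _ = curve_fit(stretched_exponential, t, phi_noisy,
                                   p0=(10.0, 0.5), bounds=(1e-6, [1e4, 1.0]))
print(f"tau = {tau_fit:.1f}, beta = {beta_fit:.2f}")
```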

  9. Modelling a critical infrastructure-driven spatial database for proactive disaster management: A developing country context

    Directory of Open Access Journals (Sweden)

    David O. Baloye

    2016-04-01

    Full Text Available The understanding and institutionalisation of the seamless link between urban critical infrastructure and disaster management has greatly helped the developed world to establish effective disaster management processes. However, this link is conspicuously missing in developing countries, where disaster management has been more reactive than proactive. The consequence of this is typified in poor response time and uncoordinated ways in which disasters and emergency situations are handled. As is the case with many Nigerian cities, the challenges of urban development in the city of Abeokuta have limited the effectiveness of disaster and emergency first responders and managers. Using geospatial techniques, the study attempted to design and deploy a spatial database running a web-based information system to track the characteristics and distribution of critical infrastructure for effective use during disaster and emergencies, with the purpose of proactively improving disaster and emergency management processes in Abeokuta. Keywords: Disaster Management; Emergency; Critical Infrastructure; Geospatial Database; Developing Countries; Nigeria

  10. OCL2Trigger: Deriving active mechanisms for relational databases using Model-Driven Architecture

    OpenAIRE

    Al-Jumaily, Harith T.; Cuadra, Dolores; Martínez, Paloma

    2008-01-01

    16 pages, 10 figures.-- Issue title: "Best papers from the 2007 Australian Software Engineering Conference (ASWEC 2007), Melbourne, Australia, April 10-13, 2007, Australian Software Engineering Conference 2007". Transforming integrity constraints into active rules or triggers for verifying database consistency produces a serious and complex problem related to real time behaviour that must be considered for any implementation. Our main contribution to this work is to provide a complete appr...

  11. Modelling the fate of organic micropollutants in stormwater ponds

    DEFF Research Database (Denmark)

    Vezzaro, Luca; Eriksson, Eva; Ledin, Anna

    2011-01-01

    Urban water managers need to estimate the potential removal of organic micropollutants (MP) in stormwater treatment systems to support MP pollution control strategies. This study documents how the potential removal of organic MP in stormwater treatment systems can be quantified by using multimedia models. The fate of four different MP in a stormwater retention pond was simulated by applying two steady-state multimedia fate models (EPI Suite and SimpleBox) commonly applied in chemical risk assessment and a dynamic multimedia fate model (Stormwater Treatment Unit Model for Micro Pollutants — STUMP). The four simulated organic stormwater MP (iodopropynyl butylcarbamate — IPBC, benzene, glyphosate and pyrene) were selected according to their different urban sources and environmental fate. This ensures that the results can be extended to other relevant stormwater pollutants. All three models use...

  12. The MEXICO project (Model Experiments in Controlled Conditions): The database and first results of data processing and interpretation

    International Nuclear Information System (INIS)

    Snel, H; Schepers, J G; Montgomerie, B

    2007-01-01

    The MEXICO (Model Experiments in Controlled Conditions) project was an FP5 project, partly financed by the European Commission. The main objective was to create a database of detailed aerodynamic and load measurements on a wind turbine model, in a large and high-quality wind tunnel, to be used for model validation and improvement. Here, model stands both for the extended BEM modelling used in state-of-the-art design and certification software, and for CFD modelling of the rotor and near-wake flow. For this purpose a three-bladed 4.5 m diameter wind tunnel model was built and instrumented. The wind tunnel experiments were carried out in the open section (9.5 × 9.5 m²) of the Large Scale Facility of the DNW (German-Netherlands) during a six-day campaign in December 2006. The conditions for measurements cover three operational tip speed ratios, many blade pitch angles, three yaw misalignment angles and a small number of unsteady cases in the form of pitch ramps and rotor speed ramps. One of the most important features of the measurement program was the flow field mapping with stereo PIV techniques. Overall the measurement campaign was very successful. The paper describes the now existing database and discusses a number of highlights from early data processing and interpretation. It should be stressed that all results are first results; no tunnel correction has been performed so far, nor has the necessary checking of data quality

  13. RA radiological characterization database application

    International Nuclear Information System (INIS)

    Steljic, M.M.; Ljubenov, V.Lj. (E-mail address of corresponding author: milijanas@vin.bg.ac.yu)

    2005-01-01

    Radiological characterization of the RA research reactor is one of the main activities in the first two years of the reactor decommissioning project. The raw characterization data from direct measurements or laboratory analyses (defined within the existing sampling and measurement programme) have to be interpreted, organized and summarized in order to prepare the final characterization survey report. This report should be made so that the radiological condition of the entire site is completely and accurately shown, with the radiological condition of the components clearly depicted. This paper presents an electronic database application, designed as a serviceable and efficient tool for characterization data storage, review and analysis, as well as for report generation. A relational database model was designed, and the application was made using Microsoft Access 2002 (SP1), a 32-bit RDBMS for desktop and client/server database applications running under Windows XP. (author)

  14. Modelling the fate of oxidisable organic contaminants in groundwater

    DEFF Research Database (Denmark)

    Barry, D.A.; Prommer, H.; Miller, C.T.

    2002-01-01

    Subsurface contamination by organic chemicals is a pervasive environmental problem, susceptible to remediation by natural or enhanced attenuation approaches or more highly engineered methods such as pump-and-treat, amongst others. Such remediation approaches, along with risk assessment... The modelling framework is illustrated by pertinent examples, showing the degradation of dissolved organics by microbial activity limited by the availability of nutrients or electron acceptors (i.e., changing redox states), as well as concomitant secondary reactions. Two field-scale modelling examples are discussed, the Vejen landfill (Denmark) and an example where metal contamination is remediated by redox changes wrought by injection of a dissolved organic compound. A summary is provided of current and likely future challenges to modelling of oxidisable organics in the subsurface. (C) 2002 Elsevier Science...

  15. Drosophila melanogaster as a model organism to study nanotoxicity.

    Science.gov (United States)

    Ong, Cynthia; Yung, Lin-Yue Lanry; Cai, Yu; Bay, Boon-Huat; Baeg, Gyeong-Hun

    2015-05-01

    Drosophila melanogaster has been used as an in vivo model organism for the study of genetics and development for more than 100 years. Recently, the fruit fly Drosophila was also developed as an in vivo model organism for toxicology studies, in particular in the field of nanotoxicity. The incorporation of nanomaterials into consumer and biomedical products is a cause for concern, as nanomaterials are often associated with toxicity in many in vitro studies. In vivo animal studies of the toxicity of nanomaterials with rodents and other mammals are, however, limited due to high operational cost and ethical objections. Hence, Drosophila, a genetically tractable organism with distinct developmental stages and a short life cycle, serves as an ideal organism to study nanomaterial-mediated toxicity. This review discusses the basic biology of Drosophila, the toxicity of nanomaterials, as well as how the Drosophila model can be used to study the toxicity of various types of nanomaterials.

  16. Machine learning for toxicity characterization of organic chemical emissions using USEtox database: Learning the structure of the input space.

    Science.gov (United States)

    Marvuglia, Antonino; Kanevski, Mikhail; Benetto, Enrico

    2015-10-01

    Toxicity characterization of chemical emissions in Life Cycle Assessment (LCA) is a complex task which usually proceeds via multimedia (fate, exposure and effect) models attached to models of dose-response relationships to assess the effects on target. Different models and approaches do exist, but all require a vast amount of data on the properties of the chemical compounds being assessed, which are hard to collect or are hardly publicly available (especially for thousands of less common or newly developed chemicals), therefore hampering the assessment in LCA in practice. An example is USEtox, a consensual model for the characterization of human toxicity and freshwater ecotoxicity. This paper places itself in a line of research aiming to provide a methodology to reduce the number of input parameters necessary to run multimedia fate models, focusing in particular on the application of the USEtox toxicity model. By focusing on USEtox, two main goals are pursued in this paper: 1) performing an extensive exploratory analysis (using dimensionality reduction techniques) of the input space constituted by the substance-specific properties, with the aim of detecting particular patterns in the data manifold and estimating the dimension of the subspace in which the data manifold actually lies; and 2) exploring the application of a set of linear models, based on partial least squares (PLS) regression, as well as a nonlinear model (general regression neural network, GRNN), in the search for an automatic selection strategy of the most informative variables according to the modelled output (USEtox factor). After extensive analysis, the intrinsic dimension of the input manifold has been identified as between three and four. The variables selected as most informative may vary according to the output modelled and the model used, but for the toxicity factors modelled in this paper the input variables selected as most informative are coherent with prior expectations based on scientific knowledge.
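    A hedged sketch of the kind of workflow described above, a low-dimensional PLS regression used to rank substance properties by how informative they are for a toxicity factor, is given below. The synthetic data, component count and coefficient-magnitude ranking are illustrative stand-ins, not the USEtox inputs or the authors' exact selection strategy.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(42)

# Stand-in for the input space: n substances x d physico-chemical properties
# and one modelled toxicity factor (all values synthetic).
n, d = 300, 12
X = rng.normal(size=(n, d))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.5 * X[:, 7] + 0.1 * rng.normal(size=n)

# Low-dimensional PLS model (3 latent components, in line with an intrinsic
# dimension of three to four).
pls = PLSRegression(n_components=3)
pls.fit(X, y)

# Rank variables by the magnitude of their PLS regression coefficients as a
# crude proxy for "most informative" inputs (VIP scores are a common alternative).
importance = np.abs(pls.coef_).ravel()
ranking = np.argsort(importance)[::-1]
print("most informative variables (indices):", ranking[:4])
```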

  17. Biofuel Database

    Science.gov (United States)

    Biofuel Database (Web, free access)   This database brings together structural, biological, and thermodynamic data for enzymes that are either in current use or are being considered for use in the production of biofuels.

  18. Community Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This excel spreadsheet is the result of merging at the port level of several of the in-house fisheries databases in combination with other demographic databases such...

  19. Microsoft Access Small Business Solutions State-of-the-Art Database Models for Sales, Marketing, Customer Management, and More Key Business Activities

    CERN Document Server

    Hennig, Teresa; Linson, Larry; Purvis, Leigh; Spaulding, Brent

    2010-01-01

    Database models developed by a team of leading Microsoft Access MVPs that provide ready-to-use solutions for sales, marketing, customer management and other key business activities for most small businesses. As the most popular relational database in the world, Microsoft Access is widely used by small business owners. This book responds to the growing need for resources that help business managers and end users design and build effective Access database solutions for specific business functions. Coverage includes: Elements of a Microsoft Access Database; Relational Data Model; Dealing with C

  20. Database Description - GETDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description. General information of database: Database name: GETDB. Alternative name: Gal4 Enhancer Trap Insertion Database. DOI: 10.18908/lsdba.nbdc00236-000. Creator Name: Shigeo Haya... (Chuo-ku, Kobe 650-0047; Tel: +81-78-306-3185; FAX: +81-78-306-3183; E-mail: ). Database classification: Expression..., Invertebrate genome database. Organism Taxonomy Name: Drosophila melanogaster; Taxonomy ID: 7227. Database des... Original website information: Database maintenance site: Drosophila Genetic Resource

  1. Mutant mice: experimental organisms as materialised models in biomedicine.

    Science.gov (United States)

    Huber, Lara; Keuck, Lara K

    2013-09-01

    Animal models have received particular attention as key examples of material models. In this paper, we argue that the specificities of establishing animal models-acknowledging their status as living beings and as epistemological tools-necessitate a more complex account of animal models as materialised models. This becomes particularly evident in animal-based models of diseases that only occur in humans: in these cases, the representational relation between animal model and human patient needs to be generated and validated. The first part of this paper presents an account of how disease-specific animal models are established by drawing on the example of transgenic mice models for Alzheimer's disease. We will introduce an account of validation that involves a three-fold process including (1) from human being to experimental organism; (2) from experimental organism to animal model; and (3) from animal model to human patient. This process draws upon clinical relevance as much as scientific practices and results in disease-specific, yet incomplete, animal models. The second part of this paper argues that the incompleteness of models can be described in terms of multi-level abstractions. We qualify this notion by pointing to different experimental techniques and targets of modelling, which give rise to a plurality of models for a specific disease. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. Spatial arrangement of organic compounds on a model mineral surface: implications for soil organic matter stabilization.

    Science.gov (United States)

    Petridis, Loukas; Ambaye, Haile; Jagadamma, Sindhu; Kilbey, S Michael; Lokitz, Bradley S; Lauter, Valeria; Mayes, Melanie A

    2014-01-01

    The complexity of the mineral-organic carbon interface may influence the extent of stabilization of organic carbon compounds in soils, which is important for global climate futures. The nanoscale structure of a model interface was examined here by depositing films of organic carbon compounds of contrasting chemical character, hydrophilic glucose and amphiphilic stearic acid, onto a soil mineral analogue (Al2O3). Neutron reflectometry, a technique which provides depth-sensitive insight into the organization of the thin films, indicates that glucose molecules reside in a layer between Al2O3 and stearic acid, a result that was verified by water contact angle measurements. Molecular dynamics simulations reveal the thermodynamic driving force behind glucose partitioning on the mineral interface: The entropic penalty of confining the less mobile glucose on the mineral surface is lower than for stearic acid. The fundamental information obtained here helps rationalize how complex arrangements of organic carbon on soil mineral surfaces may arise.

  3. A Framework for Formal Modeling and Analysis of Organizations

    NARCIS (Netherlands)

    Jonker, C.M.; Sharpanskykh, O.; Treur, J.; Yolum, P.

    2007-01-01

    A new formal, role-based framework for modeling and analyzing both real-world and artificial organizations is introduced. It exploits static and dynamic properties of the organizational model and includes the (frequently ignored) environment. The transition is described from a generic framework of

  4. Healing models for organizations: description, measurement, and outcomes.

    Science.gov (United States)

    Malloch, K

    2000-01-01

    Healthcare leaders are continually searching for ways to improve their ability to provide optimal healthcare services, be financially viable, and retain quality caregivers, often feeling like such goals are impossible to achieve in today's intensely competitive environment. Many healthcare leaders intuitively recognize the need for more humanistic models and the probable connection with positive patient outcomes and financial success but are hesitant to make significant changes in their organizations because of the lack of model descriptions or documented recognition of the clinical and financial advantages of humanistic models. This article describes a study that was developed in response to the increasing work in humanistic or healing environment models and the need for validation of the advantages of such models. The healthy organization model, a framework for healthcare organizations that incorporates humanistic healing values within the traditional structure, is presented as a result of the study. This model addresses the importance of optimal clinical services, financial performance, and staff satisfaction. The five research-based organizational components that form the framework are described, and key indicators of organizational effectiveness over a five-year period are presented. The resulting empirical data are strongly supportive of the healing model and reflect positive outcomes for the organization.

  5. PROCARB: A Database of Known and Modelled Carbohydrate-Binding Protein Structures with Sequence-Based Prediction Tools

    Directory of Open Access Journals (Sweden)

    Adeel Malik

    2010-01-01

    Full Text Available Understanding of the three-dimensional structures of proteins that interact with carbohydrates covalently (glycoproteins) as well as noncovalently (protein-carbohydrate complexes) is essential to many biological processes and plays a significant role in normal and disease-associated functions. It is important to have a central repository of knowledge available about these protein-carbohydrate complexes as well as preprocessed data of predicted structures. This can be significantly enhanced by de novo tools that can predict carbohydrate-binding sites for proteins in the absence of an experimentally known binding-site structure. PROCARB is an open-access database comprising three independently working components, namely, (i) the Core PROCARB module, consisting of three-dimensional structures of protein-carbohydrate complexes taken from the Protein Data Bank (PDB), (ii) the Homology Models module, consisting of manually developed three-dimensional models of N-linked and O-linked glycoproteins of unknown three-dimensional structure, and (iii) the CBS-Pred prediction module, consisting of web servers to predict carbohydrate-binding sites using a single sequence or a server-generated PSSM. Several precomputed structural and functional properties of complexes are also included in the database for quick analysis. In particular, information about function, secondary structure, solvent accessibility, hydrogen bonds, literature references, and so forth, is included. In addition, each protein in the database is mapped to UniProt, Pfam, PDB, and so forth.

  6. Uncertainty Modeling for Database Design using Intuitionistic and Rough Set Theory

    Science.gov (United States)

    2009-01-01

    Definition. An intuitionistic rough relation R is a subset of the cross product P(D1) × P(D2) × ··· × P(Dm) × Dμ × Dν. For a specific relation, R... that aj ∈ dij for all j. The interpretation space is the cross product D1 × D2 × ··· × Dm × Dμ × Dν, but is limited for a given relation R to the set...

  7. Fecal indicator organism modeling and microbial source tracking in environmental waters: Chapter 3.4.6

    Science.gov (United States)

    Nevers, Meredith; Byappanahalli, Muruleedhara; Phanikumar, Mantha S.; Whitman, Richard L.

    2016-01-01

    Mathematical models have been widely applied to surface waters to estimate rates of settling, resuspension, flow, dispersion, and advection in order to calculate movement of particles that influence water quality. Of particular interest are the movement, survival, and persistence of microbial pathogens or their surrogates, which may contaminate recreational water, drinking water, or shellfish. Most models devoted to microbial water quality have been focused on fecal indicator organisms (FIO), which act as a surrogate for pathogens and viruses. Process-based modeling and statistical modeling have been used to track contamination events to source and to predict future events. The use of these two types of models require different levels of expertise and input; process-based models rely on theoretical physical constructs to explain present conditions and biological distribution while data-based, statistical models use extant paired data to do the same. The selection of the appropriate model and interpretation of results is critical to proper use of these tools in microbial source tracking. Integration of the modeling approaches could provide insight for tracking and predicting contamination events in real time. A review of modeling efforts reveals that process-based modeling has great promise for microbial source tracking efforts; further, combining the understanding of physical processes influencing FIO contamination developed with process-based models and molecular characterization of the population by gene-based (i.e., biological) or chemical markers may be an effective approach for locating sources and remediating contamination in order to protect human health better.

  8. The GED4GEM project: development of a Global Exposure Database for the Global Earthquake Model initiative

    Science.gov (United States)

    Gamba, P.; Cavalca, D.; Jaiswal, K.S.; Huyck, C.; Crowley, H.

    2012-01-01

    In order to quantify earthquake risk of any selected region or a country of the world within the Global Earthquake Model (GEM) framework (www.globalquakemodel.org/), a systematic compilation of building inventory and population exposure is indispensable. Through the consortium of leading institutions and by engaging the domain-experts from multiple countries, the GED4GEM project has been working towards the development of a first comprehensive publicly available Global Exposure Database (GED). This geospatial exposure database will eventually facilitate global earthquake risk and loss estimation through GEM’s OpenQuake platform. This paper provides an overview of the GED concepts, aims, datasets, and inference methodology, as well as the current implementation scheme, status and way forward.

  9. Database Description - KAIKOcDNA | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available KAIKOcDNA Database Description. General information of database: Database name: KAIKOcDNA. Alter... National Institute of Agrobiological Sciences, Akiya Jouraku (E-mail: ). Database classification: Nucleotide Sequence Databases. Organism Taxonomy Name: Bombyx mori; Taxonomy ID: 7091. Database des... Journal: G3 (Bethesda) / 2013, Sep / vol.9. External Links: Original website information; Database maintenance si... available. URL of Web services: -. Need for user registration: Not available. About This Database: Database

  10. A self-organized criticality model for plasma transport

    International Nuclear Information System (INIS)

    Carreras, B.A.; Newman, D.; Lynch, V.E.

    1996-01-01

    Many models of natural phenomena manifest the basic hypothesis of self-organized criticality (SOC). The SOC concept brings together the self-similarity on space and time scales that is common to many of these phenomena. The application of the SOC modelling concept to the plasma dynamics near marginal stability opens new possibilities of understanding issues such as Bohm scaling, profile consistency, broad band fluctuation spectra with universal characteristics and fast time scales. A model realization of self-organized criticality for plasma transport in a magnetic confinement device is presented. The model is based on subcritical resistive pressure-gradient-driven turbulence. Three-dimensional nonlinear calculations based on this model show the existence of transport under subcritical conditions. This model that includes fluctuation dynamics leads to results very similar to the running sandpile paradigm
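    The running-sandpile paradigm referred to above can be illustrated with a minimal one-dimensional cellular model: a slow random drive builds up local gradients, and sites exceeding a critical gradient relax by moving grains toward an open boundary, producing avalanches on all scales. All parameters below are illustrative; this is not the resistive pressure-gradient-driven turbulence model of the paper.

```python
import numpy as np

def running_sandpile(n_cells=200, steps=5000, z_crit=8, n_flip=3, p_rain=0.1, seed=0):
    """Minimal 1-D running sandpile: random 'rain' slowly builds local gradients,
    and cells exceeding a critical gradient relax by moving grains toward an
    open boundary. Returns the avalanche size recorded at every time step."""
    rng = np.random.default_rng(seed)
    h = np.zeros(n_cells)
    avalanche_sizes = np.zeros(steps, dtype=int)
    for t in range(steps):
        h[rng.random(n_cells) < p_rain] += 1                 # slow random drive
        h[-1] = 0                                            # grains leave at the edge
        unstable = np.where(h[:-1] - h[1:] > z_crit)[0]
        while unstable.size:                                 # relaxation (avalanche)
            h[unstable] -= n_flip
            h[unstable + 1] += n_flip
            h[-1] = 0
            avalanche_sizes[t] += unstable.size
            unstable = np.where(h[:-1] - h[1:] > z_crit)[0]
    return avalanche_sizes

sizes = running_sandpile()
print("fraction of steps with avalanches:", (sizes > 0).mean())
print("mean avalanche size:", sizes[sizes > 0].mean())
```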

  11. An Ising model for metal-organic frameworks

    Science.gov (United States)

    Höft, Nicolas; Horbach, Jürgen; Martín-Mayor, Victor; Seoane, Beatriz

    2017-08-01

    We present a three-dimensional Ising model where lines of equal spins are frozen such that they form an ordered framework structure. The frame spins impose an external field on the rest of the spins (active spins). We demonstrate that this "porous Ising model" can be seen as a minimal model for condensation transitions of gas molecules in metal-organic frameworks. Using Monte Carlo simulation techniques, we compare the phase behavior of a porous Ising model with that of a particle-based model for the condensation of methane (CH4) in the isoreticular metal-organic framework IRMOF-16. For both models, we find a line of first-order phase transitions that end in a critical point. We show that the critical behavior in both cases belongs to the 3D Ising universality class, in contrast to other phase transitions in confinement such as capillary condensation.
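    A minimal sketch of the "porous Ising model" idea, a 3D Ising lattice in which a sublattice of frame spins is frozen while the remaining active spins are updated with Metropolis Monte Carlo, is given below. Lattice size, temperature, field and the choice of frozen lines are illustrative assumptions, not the parameters matched to IRMOF-16.

```python
import numpy as np

def porous_ising_metropolis(L=10, beta=0.3, h=0.0, sweeps=200, seed=1):
    """Metropolis sweeps for a 3D Ising model in which 'frame' spins (here the
    lattice lines with x % 5 == 0 and y % 5 == 0) are frozen at +1, acting as
    an ordered framework that imposes a field on the remaining active spins."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(L, L, L))
    frame = np.zeros((L, L, L), dtype=bool)
    frame[::5, ::5, :] = True            # frozen lines of equal spins
    s[frame] = 1

    def neighbour_sum(x, y, z):
        return (s[(x + 1) % L, y, z] + s[(x - 1) % L, y, z] +
                s[x, (y + 1) % L, z] + s[x, (y - 1) % L, z] +
                s[x, y, (z + 1) % L] + s[x, y, (z - 1) % L])

    for _ in range(sweeps):
        for _ in range(L ** 3):
            x, y, z = rng.integers(0, L, size=3)
            if frame[x, y, z]:
                continue                  # frame spins never flip
            dE = 2 * s[x, y, z] * (neighbour_sum(x, y, z) + h)
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                s[x, y, z] *= -1
    return s[~frame].mean()               # magnetisation of the active spins

print(porous_ising_metropolis())
```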

  12. Regional Persistent Organic Pollutants' Environmental Impact Assessment and Control Model

    Directory of Open Access Journals (Sweden)

    Jurgis Staniskis

    2008-10-01

    Full Text Available The sources of formation, environmental distribution and fate of persistent organic pollutants (POPs) are increasingly seen as topics to be addressed and solved at the global scale. Therefore, there are already two international agreements concerning persistent organic pollutants: the 1998 Protocol to the 1979 Convention on Long-Range Transboundary Air Pollution on Persistent Organic Pollutants (Aarhus Protocol) and the Stockholm Convention on Persistent Organic Pollutants. For the assessment of environmental pollution by POPs, for risk assessment, and for the evaluation of new pollutants as potential candidates for inclusion in the POPs lists of the Stockholm Convention and/or the Aarhus Protocol, a set of different models has been developed or is under development. Multimedia models help describe and understand environmental processes leading to global contamination through POPs and the actual risk to the environment and human health. However, there is a lack of tools based on a systematic and integrated approach to the difficulties of POPs management in the region.

  13. Modelization of tritium transfer into the organic compartments of algae

    International Nuclear Information System (INIS)

    Bonotto, S.; Gerber, G.B.; Arapis, G.; Kirchmann, R.

    1982-01-01

    Uptake of tritium oxide and its conversion into organic tritium was studied in four different types of algae with widely varying size and growth characteristics (Acetabularia acetabulum, Boergesenia forbesii, two strains of Chlamydomonas and Dunaliella bioculata). Water in the cell and the vacuoles equilibrates rapidly with external tritiated water. Tritium is actively incorporated into organically bound form as the organisms grow. During the stationary phase, incorporation of tritium is slow. There exists a discrimination against the incorporation of tritium into organically bound form. A model has been elaborated taking into account these different factors. It appears that the transfer of organic tritium by algae growing near the sites of release would be significant only for actively growing algae. Algae growing slowly may, however, be useful as cumulative indicators of discontinuous tritium release. (author)

  14. MODELLING CONSUMERS' DEMAND FOR ORGANIC FOOD PRODUCTS: THE SWEDISH EXPERIENCE

    Directory of Open Access Journals (Sweden)

    Manuchehr Irandoust

    2016-07-01

    Full Text Available This paper attempts to examine a few factors characterizing consumer preferences and behavior towards organic food products in the south of Sweden using a proportional odds model, which captures the natural ordering of the dependent variables and any inherent nonlinearities. The findings show that a consumer's choice of organic food depends on the perceived benefits of organic food (environment, health, and quality) and on the consumer's perception of and attitudes towards the labelling system, message framing, and local origin. In addition, high willingness to pay and income level increase the probability of buying organic food, while cultural differences and socio-demographic characteristics have no effect on consumer behaviour and attitudes towards organic food products. Policy implications are offered.

  15. Modeling cadmium in the feed chain and cattle organs

    OpenAIRE

    Fels-Klerx, van der, H.J.; Romkens, P.F.A.M.; Franz, E.; Raamsdonk, van, L.W.D.

    2011-01-01

    The objectives of this study were to estimate cadmium contamination levels in different scenarios related to soil characteristics and assumptions regarding cadmium accumulation in the animal tissues, using quantitative supply chain modeling. The model takes into account soil cadmium levels, soil pH, soil-to-plant transfer, animal consumption patterns, and transfer into animal organs (liver and kidneys). The model was applied to cattle up to the age of six years which were fed roughage (maize ...

  16. Lotka-Volterra competition models for sessile organisms.

    Science.gov (United States)

    Spencer, Matthew; Tanner, Jason E

    2008-04-01

    Markov models are widely used to describe the dynamics of communities of sessile organisms, because they are easily fitted to field data and provide a rich set of analytical tools. In typical ecological applications, at any point in time, each point in space is in one of a finite set of states (e.g., species, empty space). The models aim to describe the probabilities of transitions between states. In most Markov models for communities, these transition probabilities are assumed to be independent of state abundances. This assumption is often suspected to be false and is rarely justified explicitly. Here, we start with simple assumptions about the interactions among sessile organisms and derive a model in which transition probabilities depend on the abundance of destination states. This model is formulated in continuous time and is equivalent to a Lotka-Volterra competition model. We fit this model and a variety of alternatives in which transition probabilities do not depend on state abundances to a long-term coral reef data set. The Lotka-Volterra model describes the data much better than all models we consider other than a saturated model (a model with a separate parameter for each transition at each time interval, which by definition fits the data perfectly). Our approach provides a basis for further development of stochastic models of sessile communities, and many of the methods we use are relevant to other types of community. We discuss possible extensions to spatially explicit models.
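    The continuous-time Lotka-Volterra competition dynamics referred to above can be sketched for a toy community of two sessile species competing for space, with recruitment onto free substrate, pairwise overgrowth and mortality. The rate values are illustrative; this is not the fitted coral-reef model of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two sessile "species" competing for space; the state x holds cover fractions.
r = np.array([0.8, 0.5])        # colonisation rates of free space
m = np.array([0.05, 0.02])      # mortality, returning cover to free space
C = np.array([[0.0, 0.4],       # C[i, j]: rate at which species i overgrows j
              [0.1, 0.0]])

def lv_sessile(t, x):
    """Continuous-time Lotka-Volterra competition for space: recruitment onto
    free substrate, pairwise overgrowth, and mortality."""
    free = max(1.0 - x.sum(), 0.0)
    recruit = r * x * free
    overgrowth = x * (C @ x) - x * (C.T @ x)   # gains minus losses by overgrowth
    return recruit + overgrowth - m * x

sol = solve_ivp(lv_sessile, (0.0, 200.0), [0.05, 0.05])
print("long-run cover fractions:", sol.y[:, -1])
```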

  17. Modeling Temperature Dependent Singlet Exciton Dynamics in Multilayered Organic Nanofibers

    DEFF Research Database (Denmark)

    de Sousa, Leonardo Evaristo; de Oliveira Neto, Pedro Henrique; Kjelstrup-Hansen, Jakob

    2018-01-01

    Organic nanofibers have shown potential for application in optoelectronic devices because of the tunability of their optical properties. These properties are influenced by the electronic structure of the molecules that compose the nanofibers, but also by the behavior of the excitons generated...... dynamics in multilayered organic nanofibers. By simulating absorption and emission spectra, the possible Förster transitions are identified. Then, a Kinetic Monte Carlo (KMC) model is employed in combination with a genetic algorithm to theoretically reproduce time resolved photoluminescence measurements...
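    A minimal kinetic Monte Carlo sketch of the basic ingredient of such simulations, an exciton performing Förster-type hops between chromophores until it decays radiatively, is given below. The geometry, Förster radius and rates are illustrative assumptions, not the multilayered-nanofiber model itself.

```python
import numpy as np

def kmc_exciton_walk(positions, r_forster=3.0, k_rad=1.0, start=0, seed=2):
    """Kinetic Monte Carlo walk of a single singlet exciton: at each step it
    either hops to another chromophore with a Foerster rate k = k_rad*(R0/r)**6
    or decays radiatively with rate k_rad. Returns (lifetime, final site)."""
    rng = np.random.default_rng(seed)
    site, t = start, 0.0
    while True:
        d = np.linalg.norm(positions - positions[site], axis=1)
        d[site] = np.inf                          # no self-transfer
        k_hop = k_rad * (r_forster / d) ** 6      # Foerster hopping rates
        rates = np.append(k_hop, k_rad)           # last channel = radiative decay
        k_tot = rates.sum()
        t += rng.exponential(1.0 / k_tot)         # waiting time for the next event
        choice = rng.choice(len(rates), p=rates / k_tot)
        if choice == len(rates) - 1:
            return t, site                        # exciton decays (photon emitted)
        site = choice                             # exciton hops to the chosen site

# Toy "nanofibre": 50 chromophores on a line, 1 length unit apart.
pos = np.column_stack([np.arange(50.0), np.zeros(50), np.zeros(50)])
print(kmc_exciton_walk(pos))
```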

  18. Electrochemical model of the polyaniline based organic memristive device

    International Nuclear Information System (INIS)

    Demin, V. A.; Erokhin, V. V.; Kashkarov, P. K.; Kovalchuk, M. V.

    2014-01-01

    The electrochemical organic memristive device with a polyaniline active layer is a stand-alone device designed and realized to reproduce some synapse properties in innovative electronic circuits, including neuromorphic networks capable of learning. In this work, a new theoretical model of the polyaniline memristive device is presented. The developed model of organic memristive functioning is based on a detailed consideration of the possible electrochemical processes occurring in the active zone of this device. The calculations demonstrate not only a qualitative explanation of the characteristics observed in experiment but also quantitative agreement with the measured current values. It is shown how the memristive device could behave at zero potential difference relative to the reference electrode. This improved model can establish a basis for the design and prediction of properties of more complicated circuits and systems (including stochastic ones) based on organic memristive devices.

  19. A model-independent view of the mature organization

    Energy Technology Data Exchange (ETDEWEB)

    Hanna, M.; Langston, D.

    1996-12-31

    Over the last 10 years, industry has been dealing with the issues of process and organizational maturity. This focus on process is driven by the success that manufacturing organizations have had implementing the management principles of W. Edwards Deming and Joseph M. Juran. The organizational-maturity focus is driven by organizations striving to be ISO 9000 compliant or to achieve a specific level on one of the maturity models. Unfortunately, each of the models takes a specific view into what is a very broad arena. That is to say, each model addresses only a specific subset of the characteristics of maturity. This paper attempts to extend beyond these specific views to answer a general question: what is a mature organization, and what is its relationship to quantitative management and statistical process control?

  20. Molecular analysis of the replication program in unicellular model organisms.

    Science.gov (United States)

    Raghuraman, M K; Brewer, Bonita J

    2010-01-01

    Eukaryotes have long been reported to show temporal programs of replication, different portions of the genome being replicated at different times in S phase, with the added possibility of developmentally regulated changes in this pattern depending on species and cell type. Unicellular model organisms, primarily the budding yeast Saccharomyces cerevisiae, have been central to our current understanding of the mechanisms underlying the regulation of replication origins and the temporal program of replication in particular. But what exactly is a temporal program of replication, and how might it arise? In this article, we explore this question, drawing again on the wealth of experimental information in unicellular model organisms.

  1. GPCR-SSFE: A comprehensive database of G-protein-coupled receptor template predictions and homology models

    Directory of Open Access Journals (Sweden)

    Kreuchwig Annika

    2011-05-01

    Full Text Available Abstract Background G protein-coupled receptors (GPCRs) transduce a wide variety of extracellular signals to within the cell and therefore have a key role in regulating cell activity and physiological function. GPCR malfunction is responsible for a wide range of diseases including cancer, diabetes and hyperthyroidism and a large proportion of drugs on the market target these receptors. The three dimensional structure of GPCRs is important for elucidating the molecular mechanisms underlying these diseases and for performing structure-based drug design. Although structural data are restricted to only a handful of GPCRs, homology models can be used as a proxy for those receptors not having crystal structures. However, many researchers working on GPCRs are not experienced homology modellers and are therefore unable to benefit from the information that can be gleaned from such three-dimensional models. Here, we present a comprehensive database called the GPCR-SSFE, which provides initial homology models of the transmembrane helices for a large variety of family A GPCRs. Description Extending on our previous theoretical work, we have developed an automated pipeline for GPCR homology modelling and applied it to a large set of family A GPCR sequences. Our pipeline is a fragment-based approach that exploits available family A crystal structures. The GPCR-SSFE database stores the template predictions, sequence alignments, identified sequence and structure motifs and homology models for 5025 family A GPCRs. Users are able to browse the GPCR dataset according to their pharmacological classification or search for results using a UniProt entry name. It is also possible for a user to submit a GPCR sequence that is not contained in the database for analysis and homology model building. The models can be viewed using a Jmol applet and are also available for download along with the alignments. Conclusions The data provided by GPCR-SSFE are useful for investigating

  2. GPCR-SSFE: a comprehensive database of G-protein-coupled receptor template predictions and homology models.

    Science.gov (United States)

    Worth, Catherine L; Kreuchwig, Annika; Kleinau, Gunnar; Krause, Gerd

    2011-05-23

    G protein-coupled receptors (GPCRs) transduce a wide variety of extracellular signals to within the cell and therefore have a key role in regulating cell activity and physiological function. GPCR malfunction is responsible for a wide range of diseases including cancer, diabetes and hyperthyroidism and a large proportion of drugs on the market target these receptors. The three dimensional structure of GPCRs is important for elucidating the molecular mechanisms underlying these diseases and for performing structure-based drug design. Although structural data are restricted to only a handful of GPCRs, homology models can be used as a proxy for those receptors not having crystal structures. However, many researchers working on GPCRs are not experienced homology modellers and are therefore unable to benefit from the information that can be gleaned from such three-dimensional models. Here, we present a comprehensive database called the GPCR-SSFE, which provides initial homology models of the transmembrane helices for a large variety of family A GPCRs. Extending on our previous theoretical work, we have developed an automated pipeline for GPCR homology modelling and applied it to a large set of family A GPCR sequences. Our pipeline is a fragment-based approach that exploits available family A crystal structures. The GPCR-SSFE database stores the template predictions, sequence alignments, identified sequence and structure motifs and homology models for 5025 family A GPCRs. Users are able to browse the GPCR dataset according to their pharmacological classification or search for results using a UniProt entry name. It is also possible for a user to submit a GPCR sequence that is not contained in the database for analysis and homology model building. The models can be viewed using a Jmol applet and are also available for download along with the alignments. The data provided by GPCR-SSFE are useful for investigating general and detailed sequence-structure-function relationships

  3. The YH database: the first Asian diploid genome database

    DEFF Research Database (Denmark)

    Li, Guoqing; Ma, Lijia; Song, Chao

    2009-01-01

    genome consensus. The YH database is currently one of the three personal genome databases, organizing the original data and analysis results in a user-friendly interface, as an endeavor to achieve the fundamental goals of establishing personalized medicine. The database is available at http://yh.genomics.org.cn.

  4. The power of an ontology-driven developmental toxicity database for data mining and computational modeling

    Science.gov (United States)

    Modeling of developmental toxicology presents a significant challenge to computational toxicology due to endpoint complexity and lack of data coverage. These challenges largely account for the relatively few modeling successes using the structure–activity relationship (SAR) parad...

  5. Subject and authorship of records related to the Organization for Tropical Studies (OTS in BINABITROP, a comprehensive database about Costa Rican biology

    Directory of Open Access Journals (Sweden)

    Julián Monge-Nájera

    2013-06-01

    Full Text Available BINABITROP is a bibliographical database of more than 38 000 records about the ecosystems and organisms of Costa Rica. In contrast with commercial databases, such as Web of Knowledge and Scopus, which exclude most of the scientific journals published in tropical countries, BINABITROP is a comprehensive record of knowledge on the tropical ecosystems and organisms of Costa Rica. We analyzed its contents for three sites (La Selva, Palo Verde and Las Cruces) and recorded scientific field, taxonomic group and authorship. We found that most records dealt with ecology and systematics, and that most authors published only one article in the study period (1963-2011). Most research was published in four journals: Biotropica, Revista de Biología Tropical / International Journal of Tropical Biology and Conservation, Zootaxa and Brenesia. This may be the first study of such a comprehensive database for any case of tropical biology literature.

  6. The UCSC Genome Browser Database: update 2006

    DEFF Research Database (Denmark)

    Hinrichs, A S; Karolchik, D; Baertsch, R

    2006-01-01

    The University of California Santa Cruz Genome Browser Database (GBD) contains sequence and annotation data for the genomes of about a dozen vertebrate species and several major model organisms. Genome annotations typically include assembly data, sequence composition, genes and gene predictions, ...

  7. Identification of fire modeling issues based on an analysis of real events from the OECD FIRE database

    Energy Technology Data Exchange (ETDEWEB)

    Hermann, Dominik [Swiss Federal Nuclear Safety Inspectorate ENSI, Brugg (Switzerland)

    2017-03-15

    Precursor analysis is widely used in the nuclear industry to judge the significance of events relevant to safety. However, in the case of events that may damage equipment through effects that are not ordinary functional dependencies, the analysis may not always fully appreciate the potential for further evolution of the event. For fires, which are one class of such events, this paper discusses modelling challenges that need to be overcome when performing a probabilistic precursor analysis. The events analyzed are selected from the Organisation for Economic Co-operation and Development (OECD) Fire Incidents Records Exchange (FIRE) Database.

  8. Self-organized Criticality Model for Ocean Internal Waves

    International Nuclear Information System (INIS)

    Wang Gang; Hou Yijun; Lin Min; Qiao Fangli

    2009-01-01

    In this paper, we present a simple spring-block model for ocean internal waves based on self-organized criticality (SOC). The oscillations of the water blocks in the model display power-law behavior with an exponent of -2 in the frequency domain, which is similar to the current and sea water temperature spectra in the actual ocean and to the universal Garrett and Munk deep ocean internal wave model [Geophysical Fluid Dynamics 2 (1972) 225; J. Geophys. Res. 80 (1975) 291]. The influence of the ratio of the driving force to the spring coefficient on the SOC behavior of the model is also discussed. (general)

  9. Device model investigation of bilayer organic light emitting diodes

    International Nuclear Information System (INIS)

    Crone, B. K.; Davids, P. S.; Campbell, I. H.; Smith, D. L.

    2000-01-01

    Organic materials that have desirable luminescence properties, such as a favorable emission spectrum and high luminescence efficiency, are not necessarily suitable for single layer organic light-emitting diodes (LEDs) because the material may have unequal carrier mobilities or contact limited injection properties. As a result, single layer LEDs made from such organic materials are inefficient. In this article, we present device model calculations of single layer and bilayer organic LED characteristics that demonstrate the improvements in device performance that can occur in bilayer devices. We first consider an organic material where the mobilities of the electrons and holes are significantly different. The role of the bilayer structure in this case is to move the recombination away from the electrode that injects the low mobility carrier. We then consider an organic material with equal electron and hole mobilities but where it is not possible to make a good contact for one carrier type, say electrons. The role of a bilayer structure in this case is to prevent the holes from traversing the device without recombining. In both cases, single layer device limitations can be overcome by employing a two organic layer structure. The results are discussed using the calculated spatial variation of the carrier densities, electric field, and recombination rate density in the structures. (c) 2000 American Institute of Physics

  10. Green Algae as Model Organisms for Biological Fluid Dynamics

    Science.gov (United States)

    Goldstein, Raymond E.

    2015-01-01

    In the past decade, the volvocine green algae, spanning from the unicellular Chlamydomonas to multicellular Volvox, have emerged as model organisms for a number of problems in biological fluid dynamics. These include flagellar propulsion, nutrient uptake by swimming organisms, hydrodynamic interactions mediated by walls, collective dynamics and transport within suspensions of microswimmers, the mechanism of phototaxis, and the stochastic dynamics of flagellar synchronization. Green algae are well suited to the study of such problems because of their range of sizes (from 10 μm to several millimeters), their geometric regularity, the ease with which they can be cultured, and the availability of many mutants that allow for connections between molecular details and organism-level behavior. This review summarizes these recent developments and highlights promising future directions in the study of biological fluid dynamics, especially in the context of evolutionary biology, that can take advantage of these remarkable organisms.

  11. There Is No Simple Model of the Plasma Membrane Organization

    Science.gov (United States)

    Bernardino de la Serna, Jorge; Schütz, Gerhard J.; Eggeling, Christian; Cebecauer, Marek

    2016-01-01

    Ever since technologies enabled the characterization of eukaryotic plasma membranes, heterogeneities in the distributions of their constituents have been observed. Over the years this led to the proposal of various models describing the plasma membrane organization, such as lipid shells, picket-and-fences, lipid rafts, or protein islands, as addressed in numerous publications and reviews. Instead of emphasizing one model, in this review we give a brief overview of current models and highlight how current experimental work in one way or another does not support the existence of a single overarching model. Instead, we highlight the vast variety of membrane properties and components, their influences and impacts. We believe that highlighting such controversial discoveries will stimulate unbiased research on plasma membrane organization and functionality, leading to a better understanding of this essential cellular structure. PMID:27747212

  12. A global database of seismically and non-seismically triggered landslides for 2D/3D numerical modeling

    Science.gov (United States)

    Domej, Gisela; Bourdeau, Céline; Lenti, Luca; Pluta, Kacper

    2017-04-01

    Landsliding is a common phenomenon worldwide. Every year, landslides ranging in size from very small to enormous all too often cause loss of life and disastrous damage to infrastructure, property and the environment. One main reason for more frequent catastrophes is the growth of the Earth's population, which entails extending urbanization into areas at risk. Landslides are triggered by a variety and combination of causes, among which water and seismic activity appear to have the most serious consequences. In this regard, seismic shaking is of particular interest, since topographic elevation as well as the landslide mass itself can trap waves and hence amplify incoming surface waves - a phenomenon known as "site effects". Research on landsliding due to seismic and non-seismic activity is extensive, and a broad spectrum of methods for modeling slope deformation is available. These methods range from pseudo-static and rigid-block-based models to numerical models. The majority are limited to 2D modeling, since more sophisticated approaches in 3D are still under development or calibration. However, the effect of lateral confinement as well as the mechanical properties of the adjacent bedrock might be of great importance, because they may enhance the focusing of trapped waves in the landslide mass. A database was created to study 3D landslide geometries. It currently contains 277 distinct seismically and non-seismically triggered landslides spread all around the globe, whose rupture bodies were measured in all available detail. A specific methodology was developed to maintain predefined standards, to keep the bias as low as possible and to set up a query tool to explore the database. Besides geometry, additional information such as location, date, triggering factors, material, sliding mechanisms, event chronology, consequences and related literature, among other things, is stored for every case. The aim of the database is to enable

  13. A vertically resolved, global, gap-free ozone database for assessing or constraining global climate model simulations

    Directory of Open Access Journals (Sweden)

    G. E. Bodeker

    2013-02-01

    Full Text Available High vertical resolution ozone measurements from eight different satellite-based instruments have been merged with data from the global ozonesonde network to calculate monthly mean ozone values in 5° latitude zones. These "Tier 0" ozone number densities and ozone mixing ratios are provided on 70 altitude levels (1 to 70 km) and on 70 pressure levels spaced ~ 1 km apart (878.4 hPa to 0.046 hPa). The Tier 0 data are sparse and do not cover the entire globe or altitude range. To provide a gap-free database, a least squares regression model is fitted to the Tier 0 data and then evaluated globally. The regression model fit coefficients are expanded in Legendre polynomials to account for latitudinal structure, and in Fourier series to account for seasonality. Regression model fit coefficient patterns, which are two-dimensional fields indexed by latitude and month of the year, from the N-th vertical level serve as an initial guess for the fit at the (N+1)-th vertical level. The initial guess field for the first fit level (20 km/58.2 hPa) was derived by applying the regression model to total column ozone fields. Perturbations away from the initial guess are captured through the Legendre and Fourier expansions. By applying a single fit at each level, and by allowing the regression fits to change only slightly from one level to the next, the regression is less sensitive to measurement anomalies at individual stations or to individual satellite-based instruments. Particular attention is paid to ensuring that the low ozone abundances in the polar regions are captured. By summing different combinations of contributions from different regression model basis functions, four different "Tier 1" databases have been compiled for different intended uses. This database is suitable for assessing ozone fields from chemistry-climate model simulations or for providing the ozone boundary conditions for global climate model simulations that do not
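
    The level-by-level fitting described above can be pictured as an ordinary least-squares fit whose basis functions are Legendre polynomials in latitude combined with Fourier harmonics in month. The snippet below is only a much-simplified illustration of that idea; the number of basis terms, the variable names and the direct use of NumPy's lstsq are assumptions, and the actual regression model contains additional basis functions and the level-to-level constraint described in the abstract.

        import numpy as np
        from numpy.polynomial import legendre

        def design_matrix(lat_deg, month, n_leg=4, n_harm=2):
            """Legendre-in-latitude times Fourier-in-month basis (illustrative only)."""
            x = np.sin(np.deg2rad(lat_deg))            # map latitude into [-1, 1]
            phase = 2.0 * np.pi * (month - 0.5) / 12.0
            cols = []
            for k in range(n_leg):
                coef = np.zeros(k + 1)
                coef[k] = 1.0
                p_k = legendre.legval(x, coef)         # k-th Legendre polynomial
                cols.append(p_k)                       # annual-mean term
                for m in range(1, n_harm + 1):         # seasonal harmonics
                    cols.append(p_k * np.cos(m * phase))
                    cols.append(p_k * np.sin(m * phase))
            return np.column_stack(cols)

        def fit_level(lat_deg, month, ozone):
            """Least-squares fit at one vertical level; in the published method the
            coefficients from level N also seed, and constrain, the fit at level N+1."""
            A = design_matrix(lat_deg, month)
            beta, *_ = np.linalg.lstsq(A, ozone, rcond=None)
            return beta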

  14. An Ontology for Modeling Complex Inter-relational Organizations

    Science.gov (United States)

    Wautelet, Yves; Neysen, Nicolas; Kolp, Manuel

    This paper presents an ontology for organizational modeling through multiple complementary aspects. The primary goal of the ontology is to provide an adequate set of related concepts for studying complex organizations involved in many relationships at the same time. In this paper, we define complex organizations as networked organizations involved in a market eco-system that are playing several roles simultaneously. In such a context, traditional approaches focus on the macro analytic level of transactions; this is supplemented here with a micro analytic study of the actors' rationale. First, the paper reviews the enterprise ontology literature to position our proposal and expose its contributions and limitations. The ontology is then brought to an advanced level of formalization: a meta-model in the form of a UML class diagram gives an overview of the ontology concepts and their relationships, which are formally defined. Finally, the paper presents the case study on which the ontology has been validated.

  15. Database Performance Analysis in Terms of Query Optimization and Relational Data Model Design on DAS and RAID

    OpenAIRE

    Lubis, Juanda Hakim

    2015-01-01

    The amount of data stored on magnetic disks (floppy disks, hard disks, etc.) increases by 100% each year for each department of each company, so an effort to keep a database system optimal is needed. Designing a database is the initial step when creating a system with optimal database performance. However, just designing the database is not enough to increase the performance of the database. One of the ways is to increase the speed of data transactions by increasing...

  16. A model of virtual organization for corporate visibility and ...

    African Journals Online (AJOL)

    This paper considers the numerous existing studies in business and Information and Communication Technology (ICT) and examines a theoretical framework for value creation in a virtual world. Following a proposed model, a new strategic paradigm is created for corporate value, and virtual organizations (VO) apply the use of ...

  17. Modeling of the transient mobility in disordered organic semiconductors

    NARCIS (Netherlands)

    Germs, W.C.; Van der Holst, J.M.M.; Van Mensfoort, S.L.M.; Bobbert, P.A.; Coehoorn, R.

    2011-01-01

    In non-steady-state experiments, the electrical response of devices based on disordered organic semiconductors often shows a large transient contribution due to relaxation of the out-of-equilibrium charge-carrier distribution. We have developed a model describing this process, based only on the

  18. An Integrated Model for Effective Knowledge Management in Chinese Organizations

    Science.gov (United States)

    An, Xiaomi; Deng, Hepu; Wang, Yiwen; Chao, Lemen

    2013-01-01

    Purpose: The purpose of this paper is to provide organizations in the Chinese cultural context with a conceptual model for an integrated adoption of existing knowledge management (KM) methods and to improve the effectiveness of their KM activities. Design/methodology/approaches: A comparative analysis is conducted between China and the western…

  19. Waste Reduction Model (WARM) Resources for Small Businesses and Organizations

    Science.gov (United States)

    This page provides a brief overview of how EPA’s Waste Reduction Model (WARM) can be used by small businesses and organizations. The page includes a brief summary of uses of WARM for the audience and links to other resources.

  20. SOMPROF: A vertically explicit soil organic matter model

    NARCIS (Netherlands)

    Braakhekke, M.C.; Beer, M.; Hoosbeek, M.R.; Kruijt, B.; Kabat, P.

    2011-01-01

    Most current soil organic matter (SOM) models represent the soil as a bulk without specification of the vertical distribution of SOM in the soil profile. However, the vertical SOM profile may be of great importance for soil carbon cycling, both on short (hours to years) time scale, due to

  1. Modeling growth of specific spoilage organisms in tilapia ...

    African Journals Online (AJOL)

    Tilapia is an important aquaculture fish, but severe spoilage of tilapia is a major concern for global aquaculture. The spoilage is mostly caused by specific spoilage organisms (SSO). Therefore, it is very important to use microbial models to predict the growth of SSO in tilapia. This study first verified Pseudomonas and Vibrio ...

  2. There Is No Simple Model of the Plasma Membrane Organization

    Czech Academy of Sciences Publication Activity Database

    de la Serna, J. B.; Schütz, G.; Eggeling, Ch.; Cebecauer, Marek

    2016-01-01

    Vol. 4, SEP 2016 (2016), 106. ISSN 2296-634X. R&D Projects: GA ČR GA15-06989S. Institutional support: RVO:61388955. Keywords: plasma membrane * membrane organization models * heterogeneous distribution. Subject RIV: CF - Physical; Theoretical Chemistry

  3. Cleanup of a HLW nuclear fuel-reprocessing center using 3-D database modeling technology

    International Nuclear Information System (INIS)

    Sauer, R.C.

    1992-01-01

    A significant challenge in decommissioning any large nuclear facility is how to solidify the large volume of residual high-level radioactive waste (HLW) without structurally interfering with the existing equipment and piping used at the original facility, and without rework due to interferences that were not identified during the design process. This problem is further compounded when the nuclear facility to be decommissioned is a 35-year-old nuclear fuel reprocessing center designed to recover usable uranium and plutonium. Facilities of this vintage usually lack full documentation of design changes made over the years and, as a result, crud traps or pockets of high-level contamination may not be fully identified. Any miscalculation in the construction or modification sequences could complicate the overall dismantling and decontamination of the facility. This paper reports that development of a 3-dimensional (3-D) computer database tool was considered critical in defining the most complex portions of this one-of-a-kind vitrification facility.

  4. YMDB: the Yeast Metabolome Database

    Science.gov (United States)

    Jewison, Timothy; Knox, Craig; Neveu, Vanessa; Djoumbou, Yannick; Guo, An Chi; Lee, Jacqueline; Liu, Philip; Mandal, Rupasri; Krishnamurthy, Ram; Sinelnikov, Igor; Wilson, Michael; Wishart, David S.

    2012-01-01

    The Yeast Metabolome Database (YMDB, http://www.ymdb.ca) is a richly annotated ‘metabolomic’ database containing detailed information about the metabolome of Saccharomyces cerevisiae. Modeled closely after the Human Metabolome Database, the YMDB contains >2000 metabolites with links to 995 different genes/proteins, including enzymes and transporters. The information in YMDB has been gathered from hundreds of books, journal articles and electronic databases. In addition to its comprehensive literature-derived data, the YMDB also contains an extensive collection of experimental intracellular and extracellular metabolite concentration data compiled from detailed Mass Spectrometry (MS) and Nuclear Magnetic Resonance (NMR) metabolomic analyses performed in our lab. This is further supplemented with thousands of NMR and MS spectra collected on pure, reference yeast metabolites. Each metabolite entry in the YMDB contains an average of 80 separate data fields including comprehensive compound description, names and synonyms, structural information, physico-chemical data, reference NMR and MS spectra, intracellular/extracellular concentrations, growth conditions and substrates, pathway information, enzyme data, gene/protein sequence data, as well as numerous hyperlinks to images, references and other public databases. Extensive searching, relational querying and data browsing tools are also provided that support text, chemical structure, spectral, molecular weight and gene/protein sequence queries. Because of S. cerevisiae's importance as a model organism for biologists and as a biofactory for industry, we believe this kind of database could have considerable appeal not only to metabolomics researchers, but also to yeast biologists, systems biologists, the industrial fermentation industry, as well as the beer, wine and spirit industry. PMID:22064855

  5. Benchmarking density functional tight binding models for barrier heights and reaction energetics of organic molecules.

    Science.gov (United States)

    Gruden, Maja; Andjeklović, Ljubica; Jissy, Akkarapattiakal Kuriappan; Stepanović, Stepan; Zlatar, Matija; Cui, Qiang; Elstner, Marcus

    2017-09-30

    Density Functional Tight Binding (DFTB) models are two to three orders of magnitude faster than ab initio and Density Functional Theory (DFT) methods and therefore are particularly attractive in applications to large molecules and condensed phase systems. To establish the applicability of DFTB models to general chemical reactions, we conduct benchmark calculations for barrier heights and reaction energetics of organic molecules using existing databases and several new ones compiled in this study. Structures for the transition states and stable species have been fully optimized at the DFTB level, making it possible to characterize the reliability of DFTB models in a more thorough fashion compared to conducting single point energy calculations as done in previous benchmark studies. The encouraging results for the diverse sets of reactions studied here suggest that DFTB models, especially the most recent third-order version (DFTB3/3OB augmented with dispersion correction), in most cases provide satisfactory description of organic chemical reactions with accuracy almost comparable to popular DFT methods with large basis sets, although larger errors are also seen for certain cases. Therefore, DFTB models can be effective for mechanistic analysis (e.g., transition state search) of large (bio)molecules, especially when coupled with single point energy calculations at higher levels of theory. © 2017 Wiley Periodicals, Inc.

  6. Targeted Therapy Database (TTD): a model to match patient's molecular profile with current knowledge on cancer biology.

    Science.gov (United States)

    Mocellin, Simone; Shrager, Jeff; Scolyer, Richard; Pasquali, Sandro; Verdi, Daunia; Marincola, Francesco M; Briarava, Marta; Gobbel, Randy; Rossi, Carlo; Nitti, Donato

    2010-08-10

    The efficacy of current anticancer treatments is far from satisfactory and many patients still die of their disease. A general agreement exists on the urgency of developing molecularly targeted therapies, although their implementation in the clinical setting is in its infancy. In fact, despite the wealth of preclinical studies addressing these issues, the difficulty of testing each targeted therapy hypothesis in the clinical arena represents an intrinsic obstacle. As a consequence, we are witnessing a paradoxical situation where most hypotheses about the molecular and cellular biology of cancer remain clinically untested and therefore do not translate into a therapeutic benefit for patients. Our objective is to present a computational method aimed at comprehensively exploiting the scientific knowledge in order to foster the development of personalized cancer treatment by matching the patient's molecular profile with the available evidence on targeted therapy. To this aim we focused on melanoma, an increasingly diagnosed malignancy for which the need for novel therapeutic approaches is paradigmatic since no effective treatment is available in the advanced setting. Relevant data were manually extracted from peer-reviewed full-text original articles describing any type of anti-melanoma targeted therapy tested in any type of experimental or clinical model. For this purpose, Medline, Embase, Cancerlit and the Cochrane databases were searched. We created a manually annotated database (Targeted Therapy Database, TTD) where the relevant data are gathered in a formal representation that can be computationally analyzed. Dedicated algorithms were set up for the identification of the prevalent therapeutic hypotheses based on the available evidence and for ranking treatments based on the molecular profile of individual patients. In this essay we describe the principles and computational algorithms of an original method developed to fully exploit the available knowledge on cancer biology with the
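
    The matching and ranking step described above can be pictured as scoring each candidate therapy by the overlap between its evidence-linked molecular targets and the alterations found in a patient's profile. The sketch below is purely illustrative; the record structure, weights and function names are assumptions, not the TTD algorithms.

        from typing import Dict, List, Tuple

        def rank_treatments(patient_alterations: Dict[str, str],
                            evidence: List[dict]) -> List[Tuple[str, float]]:
            """Rank therapies by weighted support from (hypothetical) evidence records.

            Each evidence record is assumed to look like:
              {"therapy": "drug X", "target": "BRAF", "alteration": "V600E", "weight": 2.0}
            where weight encodes the strength of the experimental or clinical model.
            """
            scores: Dict[str, float] = {}
            for rec in evidence:
                if patient_alterations.get(rec["target"]) == rec["alteration"]:
                    scores[rec["therapy"]] = scores.get(rec["therapy"], 0.0) + rec["weight"]
            return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

        # Example: a melanoma profile carrying BRAF V600E.
        print(rank_treatments({"BRAF": "V600E"},
                              [{"therapy": "BRAF inhibitor", "target": "BRAF",
                                "alteration": "V600E", "weight": 2.0}]))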

  7. Lamination of organic solar cells and organic light emitting devices: Models and experiments

    International Nuclear Information System (INIS)

    Oyewole, O. K.; Yu, D.; Du, J.; Asare, J.; Fashina, A.; Anye, V. C.; Zebaze Kana, M. G.; Soboyejo, W. O.

    2015-01-01

    In this paper, a combined experimental, computational, and analytical approach is used to provide new insights into the lamination of organic solar cells and light emitting devices at macro- and micro-scales. First, the effects of applied lamination force (on contact between the laminated layers) are studied. The crack driving forces associated with the interfacial cracks (at the bi-material interfaces) are estimated along with the critical interfacial crack driving forces associated with the separation of thin films, after layer transfer. The conditions for successful lamination are predicted using a combination of experiments and computational models. Guidelines are developed for the lamination of low-cost organic electronic structures

  8. Federal databases

    International Nuclear Information System (INIS)

    Welch, M.J.; Welles, B.W.

    1988-01-01

    Accident statistics on all modes of transportation are available as risk assessment analytical tools through several federal agencies. This paper reports on the examination of the accident databases by personal contact with the federal staff responsible for administration of the database programs. This activity, sponsored by the Department of Energy through Sandia National Laboratories, is an overview of the national accident data on highway, rail, air, and marine shipping. For each mode, the definition or reporting requirements of an accident are determined and the method of entering the accident data into the database is established. Availability of the database to others, ease of access, costs, and who to contact were prime questions to each of the database program managers. Additionally, how the agency uses the accident data was of major interest

  9. Mycobacteriophage genome database.

    Science.gov (United States)

    Joseph, Jerrine; Rajendran, Vasanthi; Hassan, Sameer; Kumar, Vanaja

    2011-01-01

    Mycobacteriophage genome database (MGDB) is an exclusive repository of the 64 completely sequenced mycobacteriophages with annotated information. It is a comprehensive compilation of the various gene parameters captured from several databases pooled together to empower mycobacteriophage researchers. The MGDB (Version No.1.0) comprises of 6086 genes from 64 mycobacteriophages classified into 72 families based on ACLAME database. Manual curation was aided by information available from public databases which was enriched further by analysis. Its web interface allows browsing as well as querying the classification. The main objective is to collect and organize the complexity inherent to mycobacteriophage protein classification in a rational way. The other objective is to browse the existing and new genomes and describe their functional annotation. The database is available for free at http://mpgdb.ibioinformatics.org/mpgdb.php.

  10. A regression model for zircaloy cladding in-reactor creepdown: Database, development, and assessment

    International Nuclear Information System (INIS)

    Shah, V.N.; Tolli, J.E.; Lanning, D.

    1987-01-01

    The paper presents a cladding deformation model developed to analyze cladding creepdown during steady state operation in a PWR and a BWR. This model accounts for variation in the zircaloy cladding heat treatments - cold worked and stress relieved material typically used in a PWR and fully recrystallized material typically used in a BWR. This model calculates cladding creepdown as a function of hoop stress, fast neutron flux, exposure time, and temperature. The paper also presents a comparison between cladding creep calculations by the creepdown model and corresponding test results from the KWU/CE program, ORNL HOBBIE experiments, and the EPRI/Westinghouse Engineering cooperative project. The comparisons show that the creepdown model calculates cladding creep strains reasonably well. (orig./HP)
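
    Regression creep correlations with the dependencies listed above are often written in a thermally activated power-law form; the expression below is a generic illustration of such a form (with assumed symbols), not the correlation developed in this paper:

        \[ \dot{\varepsilon}_{\theta} = A \, \sigma_{\theta}^{\,n} \, \phi^{\,m} \, t^{\,p} \, \exp\!\left(-\frac{Q}{R\,T}\right) \]

    where σ_θ is the hoop stress, φ the fast neutron flux, t the exposure time, T the cladding temperature, Q an activation energy, R the gas constant, and A, n, m and p fitted constants.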

  11. Using digital databases to create geologic maps for the 21st century : a GIS model for geologic, environmental, cultural and transportation data from southern Rhode Island

    Science.gov (United States)

    2002-05-01

    Knowledge of surface and subsurface geology is fundamental to the planning and development of new or modified transportation systems. Toward this end, we have compiled a model GIS database consisting of important geologic, cartographic, environment...

  12. PAMDB: a comprehensive Pseudomonas aeruginosa metabolome database.

    Science.gov (United States)

    Huang, Weiliang; Brewer, Luke K; Jones, Jace W; Nguyen, Angela T; Marcu, Ana; Wishart, David S; Oglesby-Sherrouse, Amanda G; Kane, Maureen A; Wilks, Angela

    2018-01-04

    The Pseudomonas aeruginosa Metabolome Database (PAMDB, http://pseudomonas.umaryland.edu) is a searchable, richly annotated metabolite database specific to P. aeruginosa. P. aeruginosa is a soil organism and significant opportunistic pathogen that adapts to its environment through a versatile energy metabolism network. Furthermore, P. aeruginosa is a model organism for the study of biofilm formation, quorum sensing, and bioremediation processes, each of which is dependent on unique pathways and metabolites. The PAMDB is modelled on the Escherichia coli (ECMDB), yeast (YMDB) and human (HMDB) metabolome databases and contains >4370 metabolites and 938 pathways with links to over 1260 genes and proteins. The database information was compiled from electronic databases, journal articles and mass spectrometry (MS) metabolomic data obtained in our laboratories. For each metabolite entered, we provide detailed compound descriptions, names and synonyms, structural and physiochemical information, nuclear magnetic resonance (NMR) and MS spectra, enzymes and pathway information, as well as gene and protein sequences. The database allows extensive searching via chemical names, structure and molecular weight, together with gene, protein and pathway relationships. The PAMDB and its future iterations will provide a valuable resource to biologists, natural product chemists and clinicians in identifying active compounds, potential biomarkers and clinical diagnostics. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  13. A new model validation database for evaluating AERMOD, NRPB R91 and ADMS using krypton-85 data from BNFL Sellafield

    International Nuclear Information System (INIS)

    Hill, R.; Taylor, J.; Lowles, I.; Emmerson, K.; Parker, T.

    2004-01-01

    The emission of krypton-85 (85Kr) from nuclear fuel reprocessing operations provides a classical passive tracer for the study of atmospheric dispersion. This is because of the persistence of this radioisotope in the atmosphere, due to its long radioactive half-life and inert chemistry, and the low background levels that result from the limited number of anthropogenic sources globally. The BNFL Sellafield site in Cumbria (UK) is one of the most significant point sources of 85Kr in the northern hemisphere, with 85Kr being discharged from two stacks on the site, MAGNOX and THORP. Field experiments have been conducted since October 1996 using a cryogenic distillation technique (Janssens et al., 1986) to quantify the ground level concentration of 85Kr. This paper reports on the construction of a model validation database to allow evaluation of regulatory atmospheric dispersion models using the measured 85Kr concentrations as a tracer. The results of the database for local and regional scale dispersion are presented. (orig.)

  14. Causal biological network database: a comprehensive platform of causal biological network models focused on the pulmonary and vascular systems.

    Science.gov (United States)

    Boué, Stéphanie; Talikka, Marja; Westra, Jurjen Willem; Hayes, William; Di Fabio, Anselmo; Park, Jennifer; Schlage, Walter K; Sewer, Alain; Fields, Brett; Ansari, Sam; Martin, Florian; Veljkovic, Emilija; Kenney, Renee; Peitsch, Manuel C; Hoeng, Julia

    2015-01-01

    With the wealth of publications and data available, powerful and transparent computational approaches are required to represent measured data and scientific knowledge in a computable and searchable format. We developed a set of biological network models, scripted in the Biological Expression Language, that reflect causal signaling pathways across a wide range of biological processes, including cell fate, cell stress, cell proliferation, inflammation, tissue repair and angiogenesis in the pulmonary and cardiovascular context. This comprehensive collection of networks is now freely available to the scientific community in a centralized web-based repository, the Causal Biological Network database, which is composed of over 120 manually curated and well annotated biological network models and can be accessed at http://causalbionet.com. The website accesses a MongoDB, which stores all versions of the networks as JSON objects and allows users to search for genes, proteins, biological processes, small molecules and keywords in the network descriptions to retrieve biological networks of interest. The content of the networks can be visualized and browsed. Nodes and edges can be filtered and all supporting evidence for the edges can be browsed and is linked to the original articles in PubMed. Moreover, networks may be downloaded for further visualization and evaluation. Database URL: http://causalbionet.com © The Author(s) 2015. Published by Oxford University Press.
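
    Since the repository is described as a MongoDB of JSON network objects searchable by gene, protein, process and keyword, a query against such a store might look like the sketch below. The collection and field names, the connection URL and the use of pymongo are assumptions for illustration, not the actual CBN interface.

        from pymongo import MongoClient

        # Connect to a hypothetical local mirror of the network store.
        client = MongoClient("mongodb://localhost:27017")
        networks = client["cbn"]["networks"]          # database and collection names are assumed

        # Find networks whose description mentions a keyword of interest,
        # returning only the fields needed to list them.
        for doc in networks.find(
            {"description": {"$regex": "angiogenesis", "$options": "i"}},
            {"name": 1, "version": 1, "nodes": 1, "_id": 0},
        ):
            print(doc["name"], doc.get("version"), len(doc.get("nodes", [])))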

  15. Sustainable Organic Farming For Environmental Health A Social Development Model

    Directory of Open Access Journals (Sweden)

    Ijun Rijwan Susanto

    2015-05-01

    Full Text Available In this study the researcher attempted (1) to understand the basic features of organic farming in the Paguyuban Pasundan Cianjur; (2) to describe and understand how the stakeholders were able to internalize the challenges of organic farming in their lived experiences in the community; (3) to describe and understand how the stakeholders were able to internalize and apply the values and benefits of organic farming in support of environmental health in their lived experiences in the community; (4) to describe and understand how the stakeholders articulate their ideas regarding a model of sustainable organic farming; and (5) to formulate a policy recommendation for organic farming. The researcher employed triangulation, which provides breadth and depth to an investigation and offers researchers a more accurate picture of the phenomenon. In the implementation of triangulation, the researcher conducted several interviews until saturation was reached. After completion of the interviews, the results were written up, compiled and shown to the participants so that every statement could be checked by every participant. In addition, the researcher also checked the relevant documents and made direct observations in the field. The participants of this study were the stakeholders, namely (1) the leader of the Paguyuban Pasundan Organic Farmers Cianjur (PPOFC); (2) members of the Paguyuban Pasundan Organic Farmers Cianjur; (3) an NGO leader; (4) government officials of agriculture; (5) organic food businesses; and (6) consumers of organic food. Generally, the findings of the study revealed the following: (1) PPOFC began to see the impact of modern agriculture in fertility problems due to soil contaminated by residues of agricultural chemicals such as chemical fertilizers and pesticides, and therefore wants to restore soil fertility through environmentally friendly farming practices; (2) regarding the challenges of organic farming in their lived experiences in the community, farmers did not

  16. On the influence of the exposure model on organ doses

    International Nuclear Information System (INIS)

    Drexler, G.; Eckerl, H.

    1988-01-01

    Based on the design characteristics of the MIRD-V phantom, two sex-specific adult phantoms, ADAM and EVA, were introduced especially for the calculation of organ doses resulting from external irradiation. Although the body characteristics of all the phantoms are in good agreement with those of the reference man and woman, they have some disadvantages related to the location and shape of organs and the form of the whole body. To overcome these disadvantages and to obtain more realistic phantoms, a technique based on computer tomographic data (voxel phantoms) was developed. This technique allows any physical phantom or real body to be converted into computer files. The improvements are of special importance with regard to the skeleton, because a better modeling of the bone surfaces and separation of hard bone and bone marrow can be achieved. For photon irradiation, the sensitivity of the organ doses and the effective dose equivalent to the choice of model is important for operational radiation protection

  17. Modeling of secondary organic aerosol yields from laboratory chamber data

    Directory of Open Access Journals (Sweden)

    M. N. Chan

    2009-08-01

    Full Text Available Laboratory chamber data serve as the basis for constraining models of secondary organic aerosol (SOA) formation. Current models fall into three categories: empirical two-product (Odum), product-specific, and volatility basis set. The product-specific and volatility basis set models are applied here to represent laboratory data on the ozonolysis of α-pinene under dry, dark, and low-NOx conditions in the presence of ammonium sulfate seed aerosol. Using five major identified products, the model is fit to the chamber data. From the optimal fitting, SOA oxygen-to-carbon (O/C) and hydrogen-to-carbon (H/C) ratios are modeled. The discrepancy between measured H/C ratios and those based on the oxidation products used in the model fitting suggests the potential importance of particle-phase reactions. Data fitting is also carried out using the volatility basis set, wherein oxidation products are parsed into volatility bins. The product-specific model is most likely hindered by lack of explicit inclusion of particle-phase accretion compounds. While prospects for identification of the majority of SOA products for major volatile organic compound (VOC) classes remain promising, for the near future empirical product or volatility basis set models remain the approaches of choice.
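
    For reference, the empirical two-product (Odum) description mentioned above expresses the SOA yield Y as a function of the absorbing organic aerosol mass concentration M_o through two surrogate products with mass-based stoichiometric coefficients α_i and partitioning coefficients K_om,i. A standard statement of that relation (generic, not quoted from this paper) is:

        \[ Y = M_o \sum_{i=1}^{2} \frac{\alpha_i\, K_{\mathrm{om},i}}{1 + K_{\mathrm{om},i}\, M_o} \]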

  18. Multilevel security for relational databases

    CERN Document Server

    Faragallah, Osama S; El-Samie, Fathi E Abd

    2014-01-01

    Concepts of Database Security: Database Concepts; Relational Database Security Concepts; Access Control in Relational Databases (Discretionary Access Control, Mandatory Access Control, Role-Based Access Control); Work Objectives; Book Organization. Basic Concept of Multilevel Database Security: Introduction; Multilevel Database Relations; Polyinstantiation (Invisible Polyinstantiation, Visible Polyinstantiation, Types of Polyinstantiation); Architectural Consideration

  19. A database of wavefront measurements for laser system modeling, optical component development and fabrication process qualification

    International Nuclear Information System (INIS)

    Wolfe, C.R.; Lawson, J.K.; Aikens, D.M.; English, R.E.

    1995-01-01

    In the second half of the 1990's, LLNL and others anticipate designing and beginning construction of the National Ignition Facility (NIF). The NIF will be capable of producing the world's first laboratory-scale fusion ignition and burn reaction by imploding a small target. The NIF will utilize approximately 192 simultaneous laser beams for this purpose. The laser will be capable of producing a shaped energy pulse of at least 1.8 million joules (MJ) with peak power of at least 500 trillion watts (TW). In total, the facility will require more than 7,000 large optical components. The performance of a high power laser of this kind can be seriously degraded by the presence of low amplitude, periodic modulations in the surface and transmitted wavefronts of the optics used. At high peak power, these phase modulations can convert into large intensity modulations by non-linear optical processes. This in turn can lead to loss in energy on target via many well-known mechanisms. In some cases laser damage to the optics downstream of the source of the phase modulation can occur. The database described here contains wavefront phase maps of early prototype optical components for the NIF. It has only recently become possible to map the wavefront of these large aperture components with high spatial resolution. Modern large-aperture static-fringe and phase-shifting interferometers equipped with large-area solid-state detectors have made this possible. In a series of measurements with these instruments, wide spatial bandwidth can be detected in the wavefront

  20. Accounting for microbial habitats in modeling soil organic matter dynamics

    Science.gov (United States)

    Chenu, Claire; Garnier, Patricia; Nunan, Naoise; Pot, Valérie; Raynaud, Xavier; Vieublé, Laure; Otten, Wilfred; Falconer, Ruth; Monga, Olivier

    2017-04-01

    The extreme heterogeneity of soil constituents, architecture and inhabitants at the microscopic scale is increasingly recognized. Microbial communities exist and are active in a complex 3-D physical framework of mineral and organic particles defining pores of various sizes that are more or less inter-connected. This results in a frequent spatial disconnection between soil carbon, energy sources and the decomposer organisms, and in a variety of microhabitats that are more or less suitable for microbial growth and activity. However, current biogeochemical models account for C dynamics at the macroscale (cm, m) and consider time- and spatially averaged relationships between microbial activity and soil characteristics. Different modelling approaches have attempted to account for this microscale heterogeneity, based either on considering aggregates as surrogates for microbial habitats, or pores. Innovative modelling approaches are based on an explicit representation of soil structure at the fine scale, i.e. at µm to mm scales: pore architecture and its saturation with water, and the localization of organic resources and of microorganisms. Three recent models are presented here that describe the heterotrophic activity of either bacteria or fungi and are based upon different strategies to represent the complex soil pore system (Mosaic, LBios and µFun). These models make it possible to rank the factors controlling microbial activity in the soil's heterogeneous architecture. The present limits of these approaches and their challenges are discussed, regarding the extensive information required on soils at the microscale and the need to up-scale microbial functioning from the pore to the core scale.

  1. [Biomechanical modeling of pelvic organ mobility: towards personalized medicine].

    Science.gov (United States)

    Cosson, Michel; Rubod, Chrystèle; Vallet, Alexandra; Witz, Jean-François; Brieu, Mathias

    2011-11-01

    Female pelvic mobility is crucial for urinary, bowel and sexual function and for vaginal delivery. This mobility is ensured by a complex organ suspension system composed of ligaments, fascia and muscles. Impaired pelvic mobility affects one in three women of all ages and can be incapacitating. Surgical management has a high failure rate, largely owing to poor knowledge of the organ support system, including the barely discernible ligamentous system. We propose a 3D digital model of the pelvic cavity based on MRI images and quantitative tools, designed to locate the pelvic ligaments. We thus obtain a coherent anatomical and functional model which can be used to analyze pelvic pathophysiology. This work represents a first step towards creating a tool for localizing and characterizing the source of pelvic imbalance. We examine possible future applications of this model, in terms of personalized therapy and prevention.

  2. Clear-sky classification procedures and models using a world-wide data-base

    International Nuclear Information System (INIS)

    Younes, S.; Muneer, T.

    2007-01-01

    Clear-sky data need to be extracted from all-sky measured solar-irradiance datasets, often by using algorithms that rely on other measured meteorological parameters. Current procedures for clear-sky data extraction have been examined and compared with each other to determine their reliability and location dependency. New clear-sky determination algorithms are proposed that are based on a combination of clearness index, diffuse ratio, cloud cover and Linke's turbidity limits. Various researchers have proposed clear-sky irradiance models that rely on synoptic parameters; four of these models (MRM, PRM, YRM and REST2) have been compared for six worldwide locations. Based on a previously developed comprehensive accuracy scoring method, the models MRM, REST2 and YRM were found to be of satisfactory performance, in decreasing order. The so-called Page radiation model (PRM) was found to underestimate solar radiation, even though local turbidity data were provided for its operation
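
    The two screening quantities named above are conventionally defined as ratios of measured to reference irradiance. In generic notation (the symbols are assumed here, not taken from this record):

        \[ k_t = \frac{G_h}{G_{0h}}, \qquad k = \frac{G_d}{G_h} \]

    where k_t is the clearness index, k the diffuse ratio, G_h the measured global horizontal irradiance, G_0h the corresponding extraterrestrial horizontal irradiance and G_d the diffuse horizontal irradiance.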

  3. Brief Report: Rheumatoid Arthritis as the Underlying Cause of Death in Thirty-One Countries, 1987-2011: Trend Analysis of World Health Organization Mortality Database.

    Science.gov (United States)

    Kiadaliri, Aliasghar A; Felson, David T; Neogi, Tuhina; Englund, Martin

    2017-08-01

    To examine trends in rheumatoid arthritis (RA) as an underlying cause of death (UCD) in 31 countries across the world from 1987 to 2011. Data on mortality and population were collected from the World Health Organization mortality database and from the United Nations Population Prospects database. Age-standardized mortality rates (ASMRs) were calculated by means of direct standardization. We applied joinpoint regression analysis to identify trends. Between-country disparities were examined using between-country variance and the Gini coefficient. Due to low numbers of deaths, we smoothed the ASMRs using a 3-year moving average. Changes in the number of RA deaths between 1987 and 2011 were decomposed using 2 counterfactual scenarios. The absolute number of deaths with RA registered as the UCD decreased from 9,281 (0.12% of all-cause deaths) in 1987 to 8,428 (0.09% of all-cause deaths) in 2011. The mean ASMR decreased from 7.1 per million person-years in 1987-1989 to 3.7 per million person-years in 2009-2011 (a 48.2% reduction). A reduction of ≥25% in the ASMR occurred in 21 countries, while a corresponding increase was observed in 3 countries. There was a persistent reduction in RA mortality, and on average the ASMR declined by 3.0% per year. The absolute and relative between-country disparities decreased during the study period. The rates of mortality attributable to RA have declined globally. However, we observed substantial between-country disparities in RA mortality, although these disparities decreased over time. Population aging combined with a decline in RA mortality may lead to an increase in the economic burden of disease that should be taken into consideration in policy-making. © 2017, American College of Rheumatology.
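
    Direct standardization, as used for the ASMRs above, weights the observed age-specific death rates by a fixed standard population. A generic form of that calculation (notation assumed, not taken from the article) is:

        \[ \mathrm{ASMR} = \frac{\sum_i w_i \,(d_i / n_i)}{\sum_i w_i} \]

    where d_i and n_i are the deaths and person-years in age group i of the study population and w_i is the size of age group i in the chosen standard population.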

  4. Soil moisture modelling of a SMOS pixel: interest of using the PERSIANN database over the Valencia Anchor Station

    Directory of Open Access Journals (Sweden)

    S. Juglea

    2010-08-01

    Full Text Available In the framework of Soil Moisture and Ocean Salinity (SMOS) Calibration/Validation (Cal/Val) activities, this study addresses the use of the PERSIANN-CCS database in hydrological applications to accurately simulate a whole SMOS pixel by representing the spatial and temporal heterogeneity of the soil moisture fields over a wide area (50×50 km2). The study focuses on the Valencia Anchor Station (VAS) experimental site, in Spain, which is one of the main SMOS Cal/Val sites in Europe.

    A faithful representation of the soil moisture distribution at SMOS pixel scale (50×50 km2) requires an accurate estimation of the amount and temporal/spatial distribution of precipitation. To quantify the gain of using the comprehensive PERSIANN database instead of sparsely distributed rain gauge measurements, comparisons between in situ observations and satellite rainfall data are done both at point and areal scale. An overestimation of the satellite rainfall amounts is observed in most of the cases (about 66%), but the precipitation occurrences are in general retrieved (about 67%).

    To simulate the high variability in space and time of surface soil moisture, a Soil Vegetation Atmosphere Transfer (SVAT) model, ISBA (Interactions between Soil, Biosphere and Atmosphere), is used. The benefit of using satellite rainfall estimates, as well as the influence that precipitation events can have on the modelling of the water content in the soil, is shown by a comparison between different soil moisture data. Point-scale and spatialized simulations using rain gauge observations or the PERSIANN-CCS database, as well as ground measurements, are used. It is shown that good agreement is reached during most of the year, with the precipitation differences having less impact upon the simulated soil moisture. The behaviour of simulated surface soil moisture at SMOS scale is verified by the use of remote sensing data from the Advanced

  5. Exclusive data-based modeling of neutron-nuclear reactions below 20 MeV

    Science.gov (United States)

    Savin, Dmitry; Kosov, Mikhail

    2017-09-01

    We are developing the CHIPS-TPT physics library for exclusive simulation of neutron-nuclear reactions below 20 MeV. Exclusive modeling reproduces each separate scattering and thus requires conservation of energy, momentum and quantum numbers in each reaction. Inclusive modeling reproduces only selected values while averaging over the others and imposes no such constraints. Exclusive modeling therefore allows additional quantities, such as secondary particle correlations and gamma-line broadening, to be simulated, and avoids artificial fluctuations. CHIPS-TPT is based on the CHIPS library formerly included in Geant4, which follows the exclusive approach, and extends it to incident neutrons with energies below 20 MeV. The NeutronHP model for neutrons below 20 MeV included in Geant4 follows the inclusive approach, like the well-known MCNP code. Unfortunately, the available data in this energy region are mostly presented in ENDF-6 format and are semi-inclusive. Imposing additional constraints on secondary particles complicates modeling but also makes it possible to detect inconsistencies in the input data and to avoid errors that may remain unnoticed in inclusive modeling.

  6. Evaluation of sorption distribution coefficient of Cs onto granite using sorption data collected in sorption database and sorption model

    International Nuclear Information System (INIS)

    Nagasaki, S.

    2013-01-01

    Based on the sorption distribution coefficients (Kd) of Cs onto granite collected from the JAERI Sorption Database (SDB), the parameters for a two-site model without the triple-layer structure were optimized. Comparing the experimentally measured Kd values of Cs onto Mizunami granite carried out by JAEA with the Kd values predicted by the model, the effect of the ionic strength on the Kd values of Cs onto granite was evaluated. It was found that Kd values could be determined using the content of biotite in granite at a sodium concentration ([Na]) of 1 x 10^-2 to 5 x 10^-1 mol/dm^3. It was suggested that in high ionic strength solutions, the sorption of Cs onto other minerals such as microcline should also be taken into account. (author)
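
    As a reminder of the quantity being modelled, the sorption distribution coefficient is conventionally defined as the equilibrium ratio of the sorbed to the dissolved concentration (a generic definition, not specific to this study):

        \[ K_d = \frac{S}{C} \quad \left[\mathrm{dm^3\,kg^{-1}}\right] \]

    where S is the amount of Cs sorbed per unit mass of granite (mol/kg) and C is the equilibrium aqueous Cs concentration (mol/dm^3).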

  7. Evaluation of sorption distribution coefficient of Cs onto granite using sorption data collected in sorption database and sorption model

    Energy Technology Data Exchange (ETDEWEB)

    Nagasaki, S., E-mail: nagasas@mcmaster.ca [McMaster Univ., Hamilton, Ontario (Canada)

    2013-07-01

    Based on the sorption distribution coefficients (Kd) of Cs onto granite collected from the JAERI Sorption Database (SDB), the parameters for a two-site model without the triple-layer structure were optimized. Comparing the experimentally measured Kd values of Cs onto Mizunami granite carried out by JAEA with the Kd values predicted by the model, the effect of the ionic strength on the Kd values of Cs onto granite was evaluated. It was found that Kd values could be determined using the content of biotite in granite at a sodium concentration ([Na]) of 1 x 10^-2 to 5 x 10^-1 mol/dm^3. It was suggested that in high ionic strength solutions, the sorption of Cs onto other minerals such as microcline should also be taken into account. (author)

  8. Sediment-Hosted Zinc-Lead Deposits of the World - Database and Grade and Tonnage Models

    Science.gov (United States)

    Singer, Donald A.; Berger, Vladimir I.; Moring, Barry C.

    2009-01-01

    This report provides information on sediment-hosted zinc-lead mineral deposits based on the geologic settings that are observed on regional geologic maps. The foundation of mineral-deposit models is information about known deposits. The purpose of this publication is to make this kind of information available in digital form for sediment-hosted zinc-lead deposits. Mineral-deposit models are important in exploration planning and quantitative resource assessments: Grades and tonnages among deposit types are significantly different, and many types occur in different geologic settings that can be identified from geologic maps. Mineral-deposit models are the keystone in combining the diverse geoscience information on geology, mineral occurrences, geophysics, and geochemistry used in resource assessments and mineral exploration. Too few thoroughly explored mineral deposits are available in most local areas for reliable identification of the important geoscience variables, or for robust estimation of undiscovered deposits - thus, we need mineral-deposit models. Globally based deposit models allow recognition of important features because the global models demonstrate how common different features are. Well-designed and -constructed deposit models allow geologists to know from observed geologic environments the possible mineral-deposit types that might exist, and allow economists to determine the possible economic viability of these resources in the region. Thus, mineral-deposit models play the central role in transforming geoscience information to a form useful to policy makers. This publication contains a computer file of information on sediment-hosted zinc-lead deposits from around the world. It also presents new grade and tonnage models for nine types of these deposits and a file allowing locations of all deposits to be plotted in Google Earth. The data are presented in FileMaker Pro, Excel and text files to make the information available to as many as possible. The

  9. Predicting long-term organic carbon dynamics in organically amended soils using the CQESTR model

    Energy Technology Data Exchange (ETDEWEB)

    Plaza, Cesar; Polo, Alfredo [Consejo Superior de Investigaciones Cientificas, Madrid (Spain). Inst. de Ciencias Agrarias; Gollany, Hero T. [Columbia Plateau Conservation Research Center, Pendleton, OR (United States). USDA-ARS; Baldoni, Guido; Ciavatta, Claudio [Bologna Univ. (Italy). Dept. of Agroenvironmental Sciences and Technologies

    2012-04-15

    Purpose: The CQESTR model is a process-based C model recently developed to simulate soil organic matter (SOM) dynamics and uses readily available or easily measurable input parameters. The current version of CQESTR (v. 2.0) has been validated successfully with a number of datasets from agricultural sites in North America but still needs to be tested in other geographic areas and soil types under diverse organic management systems. Materials and methods: We evaluated the predictive performance of CQESTR to simulate long-term (34 years) soil organic C (SOC) changes in a SOM-depleted European soil either unamended or amended with solid manure, liquid manure, or crop residue. Results and discussion: Measured SOC levels declined over the study period in the unamended soil, remained constant in the soil amended with crop residues, and tended to increase in the soils amended with manure, especially with solid manure. Linear regression analysis of measured SOC contents and CQESTR predictions resulted in a correlation coefficient of 0.626 (P < 0.001) and a slope and an intercept not significantly different from 1 and 0, respectively (95% confidence level). The mean squared deviation and root mean square error were relatively small. Simulated values fell within the 95% confidence interval of the measured SOC, and predicted errors were mainly associated with data scattering. Conclusions: The CQESTR model was shown to predict, with a reasonable degree of accuracy, the organic C dynamics in the soils examined. The CQESTR performance, however, could be improved by adding an additional parameter to differentiate between pre-decomposed organic amendments with varying degrees of stability. (orig.)

  10. Unidirectional Replication in Heterogeneous Databases

    OpenAIRE

    Hendro Nindito; Evaristus Didik Madyatmadja; Albert Verasius Dian Sano

    2013-01-01

    The use of diverse database technologies in enterprises today cannot be avoided. Thus, technology is needed to generate information in real time. The purpose of this research is to discuss a database replication technology that can be applied in heterogeneous database environments. In this study we replicate from a Windows-based MS SQL Server database to a Linux-based Oracle database as the target. The research method used is prototyping, in which development can be done quickly and testing of working models of the...

  11. Database Replication

    CERN Document Server

    Kemme, Bettina

    2010-01-01

    Database replication is widely used for fault-tolerance, scalability and performance. The failure of one database replica does not stop the system from working, as available replicas can take over the tasks of the failed replica. Scalability can be achieved by distributing the load across all replicas, and adding new replicas should the load increase. Finally, database replication can provide fast local access, even for geographically distributed clients, if data copies are located close to them. Despite its advantages, replication is not a straightforward technique to apply, and

  12. Topsoil organic carbon content of Europe, a new map based on a generalised additive model

    Science.gov (United States)

    de Brogniez, Delphine; Ballabio, Cristiano; Stevens, Antoine; Jones, Robert J. A.; Montanarella, Luca; van Wesemael, Bas

    2014-05-01

    There is an increasing demand for up-to-date, spatially continuous organic carbon (OC) data for global environmental and climatic modeling. Whilst the current map of topsoil organic carbon content for Europe (Jones et al., 2005) was produced by applying expert-knowledge-based pedo-transfer rules on large soil mapping units, the aim of this study was to replace it by applying digital soil mapping techniques to the first European harmonised geo-referenced topsoil (0-20 cm) database, which arises from the LUCAS (land use/cover area frame statistical survey) survey. A generalized additive model (GAM) was calibrated on 85% of the dataset (ca. 17 000 soil samples) and a backward stepwise approach selected slope, land cover, temperature, net primary productivity, latitude and longitude as environmental covariates (500 m resolution). The validation of the model (applied on 15% of the dataset) gave an R2 of 0.27. We observed that most organic soils were under-predicted by the model and that soils of Scandinavia were also poorly predicted. The model showed an RMSE of 42 g kg-1 for mineral soils and of 287 g kg-1 for organic soils. The map of predicted OC content showed the lowest values in Mediterranean countries and in croplands across Europe, whereas the highest OC contents were predicted in wetlands, woodlands and in mountainous areas. The map of standard error of the OC model predictions showed high values in northern latitudes, wetlands, moors and heathlands, whereas low uncertainty was mostly found in croplands. A comparison of our results with the map of Jones et al. (2005) showed a general agreement on the prediction of mineral soils' OC content, most probably because the models use some common covariates, namely land cover and temperature. Our model however failed to predict values of OC content greater than 200 g kg-1, which we explain by the imposed unimodal distribution of our model, whose mean is tilted towards the majority of soils, which are mineral. Finally, average
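
    A calibration of this kind can be sketched with an off-the-shelf GAM library. The snippet below is only an illustrative outline: the pygam library, the input file, the column layout and the smoothing choices are assumptions, not the authors' actual workflow.

        import numpy as np
        from pygam import LinearGAM, s, f

        # Hypothetical table: columns 0-4 are continuous covariates (slope, temperature,
        # net primary productivity, latitude, longitude), column 5 is a land-cover class
        # code, and column 6 is the measured topsoil OC content (g/kg).
        data = np.loadtxt("lucas_topsoil_covariates.csv", delimiter=",", skiprows=1)
        X, y = data[:, :6], data[:, 6]

        # Smooth terms for the continuous covariates, a factor term for land cover.
        gam = LinearGAM(s(0) + s(1) + s(2) + s(3) + s(4) + f(5)).gridsearch(X, y)

        pred = gam.predict(X)
        rmse = np.sqrt(np.mean((y - pred) ** 2))
        print(f"in-sample RMSE: {rmse:.1f} g/kg")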

  13. Refactoring databases evolutionary database design

    CERN Document Server

    Ambler, Scott W

    2006-01-01

    Refactoring has proven its value in a wide range of development projects–helping software professionals improve system designs, maintainability, extensibility, and performance. Now, for the first time, leading agile methodologist Scott Ambler and renowned consultant Pramodkumar Sadalage introduce powerful refactoring techniques specifically designed for database systems. Ambler and Sadalage demonstrate how small changes to table structures, data, stored procedures, and triggers can significantly enhance virtually any database design–without changing semantics. You’ll learn how to evolve database schemas in step with source code–and become far more effective in projects relying on iterative, agile methodologies. This comprehensive guide and reference helps you overcome the practical obstacles to refactoring real-world databases by covering every fundamental concept underlying database refactoring. Using start-to-finish examples, the authors walk you through refactoring simple standalone databas...

  14. Organized versus self-organized criticality in the abelian sandpile model

    OpenAIRE

    Fey-den Boer, AC Anne; Redig, FHJ Frank

    2005-01-01

    We define stabilizability of an infinite volume height configuration and of a probability measure on height configurations. We show that for high enough densities, a probability measure cannot be stabilized. We also show that in some sense the thermodynamic limit of the uniform measures on the recurrent configurations of the abelian sandpile model (ASM) is a maximal element of the set of stabilizable measures. In that sense the self-organized critical behavior of the ASM can be understood in ...
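
    For readers unfamiliar with the abelian sandpile model (ASM) referred to above, the finite-volume toppling dynamics can be sketched in a few lines. This is a generic illustration of the model itself, not of the paper's infinite-volume stabilizability results.

        import numpy as np

        def stabilize(heights, threshold=4):
            """Topple an ASM configuration on a finite grid with open boundaries until stable.

            A site with height >= threshold loses `threshold` grains and sends one grain to
            each nearest neighbour; grains toppled over the edge leave the system. The final
            stable configuration does not depend on the toppling order (abelian property).
            """
            h = np.array(heights, dtype=int)
            while True:
                unstable = np.argwhere(h >= threshold)
                if unstable.size == 0:
                    return h
                for i, j in unstable:
                    h[i, j] -= threshold
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h.shape[0] and 0 <= nj < h.shape[1]:
                            h[ni, nj] += 1

        # Example: a single tall column relaxes to a stable pattern.
        print(stabilize([[0, 0, 0], [0, 8, 0], [0, 0, 0]]))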

  15. IT Business Value Model for Information Intensive Organizations

    Directory of Open Access Journals (Sweden)

    Antonio Carlos Gastaud Maçada

    2012-01-01

    Full Text Available Many studies have highlighted the capacity Information Technology (IT) has for generating value for organizations. Investments in IT made by organizations have increased each year. Therefore, the purpose of the present study is to analyze the IT Business Value for Information Intensive Organizations (IIO, e.g. banks, insurance companies and securities brokers). The research method consisted of a survey that used and combined the models from Weill and Broadbent (1998) and Gregor, Martin, Fernandez, Stern and Vitale (2006). Data was gathered using an adapted instrument containing 5 dimensions (Strategic, Informational, Transactional, Transformational and Infra-structure) with 27 items. The instrument was refined by employing statistical techniques such as Exploratory and Confirmatory Factorial Analysis through Structural Equations (first and second order Model Measurement). The final model is composed of four factors related to IT Business Value: Strategic, Informational, Transactional and Transformational, arranged in 15 items. The dimension Infra-structure was excluded during the model refinement process because it was discovered during interviews that managers were unable to perceive it as a distinct dimension of IT Business Value.

  16. Scalability of Sustainable Business Models in Hybrid Organizations

    Directory of Open Access Journals (Sweden)

    Adam Jabłoński

    2016-02-01

    Full Text Available The dynamics of change in modern business create new mechanisms for company management to pursue and achieve high performance. Performance maintained over a long period of time becomes a source of business continuity for companies. The ontological entity enabling the adoption of such assumptions is a business model that is able to generate results in every possible market situation and, moreover, has the feature of permanent adaptability. A feature that describes the adaptability of a business model is its scalability. As a factor that ensures more, and more efficient, work as the number of components increases, scalability can be applied to the concept of business models as the company's ability to maintain similar or higher performance through it. Ensuring the company's performance in the long term helps to build a so-called sustainable business model, which often balances the objectives of stakeholders and shareholders and is shaped by the implemented principles of value-based management and corporate social responsibility. This perception of business paves the way for building hybrid organizations that integrate business activities with pro-social ones. Combining the approach typical of hybrid organizations with the design and implementation of sustainable business models according to the scalability criterion seems interesting from a cognitive point of view. Today, hybrid organizations are great spaces for building effective and efficient mechanisms for dialogue between business and society. This requires the appropriate business model. The purpose of the paper is to present the conceptualization and operationalization of the scalability of sustainable business models that determine the performance of a hybrid organization in the network environment. The paper presents the original concept of applying scalability in sustainable business models with detailed

  17. Virtual OPACs versus Union Database: Two Models of Union Catalogue Provision.

    Science.gov (United States)

    Cousins, Shirley

    1999-01-01

    Considers some of the major technical and organizational issues involved in virtual-catalog production, contrasting them with the traditional union catalog approach exemplified by COPAC, an online public-access catalog composed of academic libraries in the United Kingdom. Suggests a method of integrating these two models of the union catalog.…

  18. Global spatiotemporal distribution of soil respiration modeled using a global database

    Science.gov (United States)

    Hashimoto, S.; Carvalhais, N.; Ito, A.; Migliavacca, M.; Nishina, K.; Reichstein, M.

    2015-07-01

    The flux of carbon dioxide from the soil to the atmosphere (soil respiration) is one of the major fluxes in the global carbon cycle. At present, the accumulated field observation data cover a wide range of geographical locations and climate conditions. However, there are still large uncertainties in the magnitude and spatiotemporal variation of global soil respiration. Using a global soil respiration data set, we developed a climate-driven model of soil respiration by modifying and updating Raich's model, and the global spatiotemporal distribution of soil respiration was examined using this model. The model was applied at a spatial resolution of 0.5° and a monthly time step. Soil respiration was divided into the heterotrophic and autotrophic components of respiration using an empirical model. The estimated mean annual global soil respiration was 91 Pg C yr-1 (between 1965 and 2012; Monte Carlo 95 % confidence interval: 87-95 Pg C yr-1) and increased at the rate of 0.09 Pg C yr-2. The contribution of soil respiration from boreal regions to the total increase in global soil respiration was on the same order of magnitude as that of tropical and temperate regions, despite a lower absolute magnitude of soil respiration in boreal regions. The estimated annual global heterotrophic respiration and global autotrophic respiration were 51 and 40 Pg C yr-1, respectively. The global soil respiration responded to the increase in air temperature at the rate of 3.3 Pg C yr-1 °C-1, and Q10 = 1.4. Our study scaled up observed soil respiration values from field measurements to estimate global soil respiration and provide a data-oriented estimate of global soil respiration. The estimates are based on a semi-empirical model parameterized with over one thousand data points. Our analysis indicates that the climate controls on soil respiration may translate into an increasing trend in global soil respiration and our analysis emphasizes the relevance of the soil carbon flux from soil to
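
    The record reports a temperature sensitivity of about 3.3 Pg C yr-1 °C-1 together with Q10 = 1.4, which is the behavior of a simple exponential (Q10-type) temperature response. The sketch below is a minimal illustration of such a response, not the modified Raich model used in the study; the reference respiration value and the warming offsets are placeholder assumptions.

```python
# Minimal sketch of a Q10-type temperature response for soil respiration.
# This is NOT the modified Raich model of the study; it only illustrates how
# a Q10 of ~1.4 translates a temperature change into a change in respired
# carbon. Baseline values are placeholders.

def q10_respiration(r_ref, t, t_ref=10.0, q10=1.4):
    """Respiration at temperature t (deg C), given respiration r_ref at t_ref."""
    return r_ref * q10 ** ((t - t_ref) / 10.0)

if __name__ == "__main__":
    r_ref = 91.0  # assumed global soil respiration (Pg C yr-1) at the reference temperature
    for dt in (0.0, 0.5, 1.0):  # hypothetical warming offsets in deg C
        r = q10_respiration(r_ref, 10.0 + dt)
        print(f"warming {dt:+.1f} C -> {r:.1f} Pg C yr-1 ({r - r_ref:+.2f})")
```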

  19. RDD Databases

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database was established to oversee documents issued in support of fishery research activities including experimental fishing permits (EFP), letters of...

  20. Snowstorm Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Snowstorm Database is a collection of over 500 snowstorms dating back to 1900 and updated operationally. Only storms having large areas of heavy snowfall (10-20...

  1. Dealer Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The dealer reporting databases contain the primary data reported by federally permitted seafood dealers in the northeast. Electronic reporting was implemented May 1,...

  2. A taxonomy of nursing care organization models in hospitals.

    Science.gov (United States)

    Dubois, Carl-Ardy; D'Amour, Danielle; Tchouaket, Eric; Rivard, Michèle; Clarke, Sean; Blais, Régis

    2012-08-28

    Over the last decades, converging forces in hospital care, including cost-containment policies, rising healthcare demands and nursing shortages, have driven the search for new operational models of nursing care delivery that maximize the use of available nursing resources while ensuring safe, high-quality care. Little is known, however, about the distinctive features of these emergent nursing care models. This article contributes to filling this gap by presenting a theoretically and empirically grounded taxonomy of nursing care organization models in the context of acute care units in Quebec and comparing their distinctive features. This study was based on a survey of 22 medical units in 11 acute care facilities in Quebec. Data collection methods included questionnaire, interviews, focus groups and administrative data census. The analytical procedures consisted of first generating unit profiles based on qualitative and quantitative data collected at the unit level, then applying hierarchical cluster analysis to the units' profile data. The study identified four models of nursing care organization: two professional models that draw mainly on registered nurses as professionals to deliver nursing services and reflect stronger support to nurses' professional practice, and two functional models that draw more significantly on licensed practical nurses (LPNs) and assistive staff (orderlies) to deliver nursing services and are characterized by registered nurses' perceptions that the practice environment is less supportive of their professional work. This study showed that medical units in acute care hospitals exhibit diverse staff mixes, patterns of skill use, work environment design, and support for innovation. The four models reflect not only distinct approaches to dealing with the numerous constraints in the nursing care environment, but also different degrees of approximations to an "ideal" nursing professional practice model described by some leaders in the

  3. A taxonomy of nursing care organization models in hospitals

    Science.gov (United States)

    2012-01-01

    Background Over the last decades, converging forces in hospital care, including cost-containment policies, rising healthcare demands and nursing shortages, have driven the search for new operational models of nursing care delivery that maximize the use of available nursing resources while ensuring safe, high-quality care. Little is known, however, about the distinctive features of these emergent nursing care models. This article contributes to filling this gap by presenting a theoretically and empirically grounded taxonomy of nursing care organization models in the context of acute care units in Quebec and comparing their distinctive features. Methods This study was based on a survey of 22 medical units in 11 acute care facilities in Quebec. Data collection methods included questionnaire, interviews, focus groups and administrative data census. The analytical procedures consisted of first generating unit profiles based on qualitative and quantitative data collected at the unit level, then applying hierarchical cluster analysis to the units’ profile data. Results The study identified four models of nursing care organization: two professional models that draw mainly on registered nurses as professionals to deliver nursing services and reflect stronger support to nurses’ professional practice, and two functional models that draw more significantly on licensed practical nurses (LPNs) and assistive staff (orderlies) to deliver nursing services and are characterized by registered nurses’ perceptions that the practice environment is less supportive of their professional work. Conclusions This study showed that medical units in acute care hospitals exhibit diverse staff mixes, patterns of skill use, work environment design, and support for innovation. The four models reflect not only distinct approaches to dealing with the numerous constraints in the nursing care environment, but also different degrees of approximations to an “ideal” nursing professional practice

  4. National database

    DEFF Research Database (Denmark)

    Kristensen, Helen Grundtvig; Stjernø, Henrik

    1995-01-01

    Article about a national database for nursing research established at Dansk Institut for Sundheds- og Sygeplejeforskning (the Danish Institute for Health and Nursing Research). The aim of the database is to gather knowledge about research and development activities within nursing.

  5. Design of a Network Model on a Non-Spatial Database Engine for Electricity Distribution Network Maneuvering with PL SQL

    Directory of Open Access Journals (Sweden)

    I Made Sukarsa

    2009-06-01

    Full Text Available Many GIS applications have now been developed on top of non-spatial DBMS (Database Management System) engines, so that they can support client-server data presentation models and handle large volumes of data. One of these has been developed to handle electricity network data. In reality, DBMS engines are not yet equipped with the capability to perform network analyses such as network maneuvering, which is the basis for developing various other applications. Therefore, a network model needs to be developed for electricity network maneuvering with all its peculiarities. Through several stages of research, a network model has been developed that can be used to handle network maneuvering. This model was built with attention to integration with the existing system, minimizing changes to existing applications. Choosing an implementation based on PL SQL (Programmable Language Structured Query Language) provides various advantages, including system performance. The model has been tested for outage simulation and for calculating changes in the network loading structure, and it can be extended to power system analyses such as losses, load flow and so on, so that the GIS application will ultimately be able to substitute for and overcome the weaknesses of the power system analysis applications widely used today, such as EDSA (Electrical Design System Analysis).

  6. Absence of respiratory inflammatory reaction of elemental sulfur using the California Pesticide Illness Database and a mouse model.

    Science.gov (United States)

    Lee, Kiyoung; Smith, Jodi L; Last, Jerold A

    2005-01-01

    Elemental sulfur, a natural substance, is used as a fungicide. Elemental sulfur is the most heavily used agricultural chemical in California. In 2003, annual sulfur usage in California was about 34% of the total weight of pesticide active ingredient used in production agriculture. Even though sulfur is mostly used in dust form, the respiratory health effects of elemental sulfur are not well documented. The purpose of this paper is to address the possible respiratory effect of elemental sulfur using the California Pesticide Illness Database and laboratory experiments with mice. We analyzed the California Pesticide Illness Database between 1991 and 2001. Among 127 reports of definite, probable, and possible illness involving sulfur, 21 cases (16%) were identified as respiratory related. A mouse model was used to examine whether there was an inflammatory or fibrotic response to elemental sulfur. Dust solutions were injected intratracheally into ovalbumin sensitized mice and lung damage was evaluated. Lung inflammatory response was analyzed via total lavage cell counts and differentials, and airway collagen content was analyzed histologically and biochemically. No significant differences from controls were seen in animals exposed to sulfur particles. The findings suggest that acute exposure of elemental sulfur itself may not cause an inflammatory reaction. However, further studies are needed to understand the possible health effects of chronic sulfur exposure and environmental weathering of sulfur dust.

  7. Construction and analysis of a human hepatotoxicity database suitable for QSAR modeling using post-market safety data

    International Nuclear Information System (INIS)

    Zhu, Xiao; Kruhlak, Naomi L.

    2014-01-01

    Abstract: Drug-induced liver injury (DILI) is one of the most common drug-induced adverse events (AEs) leading to life-threatening conditions such as acute liver failure. It has also been recognized as the single most common cause of safety-related post-market withdrawals or warnings. Efforts to develop new predictive methods to assess the likelihood of a drug being a hepatotoxicant have been challenging due to the complexity and idiosyncrasy of clinical manifestations of DILI. The FDA adverse event reporting system (AERS) contains post-market data that depict the morbidity of AEs. Here, we developed a scalable approach to construct a hepatotoxicity database using post-market data for the purpose of quantitative structure–activity relationship (QSAR) modeling. A set of 2029 unique and modelable drug entities with 13,555 drug-AE combinations was extracted from the AERS database using 37 hepatotoxicity-related query preferred terms (PTs). In order to determine the optimal classification scheme to partition positive from negative drugs, a manually-curated DILI calibration set composed of 105 negatives and 177 positives was developed based on the published literature. The final classification scheme combines hepatotoxicity-related PT data with supporting information that optimize the predictive performance across the calibration set. Data for other toxicological endpoints related to liver injury such as liver enzyme abnormalities, cholestasis, and bile duct disorders, were also extracted and classified. Collectively, these datasets can be used to generate a battery of QSAR models that assess a drug's potential to cause DILI.

  8. A data-based model to locate mass movements triggered by seismic events in Sichuan, China.

    Science.gov (United States)

    de Souza, Fabio Teodoro

    2014-01-01

    Earthquakes affect the entire world and have catastrophic consequences. On May 12, 2008, an earthquake of magnitude 7.9 on the Richter scale occurred in the Wenchuan area of Sichuan province in China. This event, together with subsequent aftershocks, caused many avalanches, landslides, debris flows, collapses, and quake lakes and induced numerous unstable slopes. This work proposes a methodology that uses a data mining approach and geographic information systems to predict these mass movements based on their association with the main and aftershock epicenters, geologic faults, riverbeds, and topography. A dataset comprising 3,883 mass movements is analyzed, and some models to predict the location of these mass movements are developed. These predictive models could be used by the Chinese authorities as an important tool for identifying risk areas and rescuing survivors during similar events in the future.
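
    The record names a data-mining approach over GIS-derived predictors (distance to epicenters, faults, riverbeds, and topography) without specifying the algorithm. The sketch below, a generic illustration rather than the study's method, trains a decision tree on synthetic data using hypothetical feature names of that kind.

```python
# Illustrative only: a tree-based classifier on GIS-style predictors for
# mass-movement occurrence. Feature names and the synthetic data are
# placeholders, not the study's actual dataset or chosen algorithm.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 2000
# Hypothetical predictors (km, km, km, degrees)
X = np.column_stack([
    rng.uniform(0, 100, n),   # distance to nearest epicenter
    rng.uniform(0, 30, n),    # distance to nearest fault
    rng.uniform(0, 10, n),    # distance to nearest riverbed
    rng.uniform(0, 60, n),    # slope angle
])
# Synthetic rule: movements are more likely near epicenters/faults and on steep slopes
p = 1 / (1 + np.exp(0.05 * X[:, 0] + 0.1 * X[:, 1] - 0.08 * X[:, 3]))
y = rng.random(n) < p

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```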

  9. Sulfur Emissions, Abatement Technologies and Related Costs for Europe in the RAINS Model Database

    OpenAIRE

    Cofala, J.; Syri, S.

    1998-01-01

    This paper describes the part of the Regional Pollution Information and Simulation (RAINS) model dealing with the potential and costs controlling emissions of sulfur dioxide. The paper describes the selected aggregation level of the emission generating activities and reviews the major options for controlling SO2 emissions. An algorithm for estimating emission control costs is presented. The cost calculation distinguishes 'general'(i.e., valid for all countries) and 'country-specific' paramete...

  10. Designing Predictive Models for Beta-Lactam Allergy Using the Drug Allergy and Hypersensitivity Database.

    Science.gov (United States)

    Chiriac, Anca Mirela; Wang, Youna; Schrijvers, Rik; Bousquet, Philippe Jean; Mura, Thibault; Molinari, Nicolas; Demoly, Pascal

    Beta-lactam antibiotics represent the main cause of allergic reactions to drugs, inducing both immediate and nonimmediate allergies. The diagnosis is well established, usually based on skin tests and drug provocation tests, but cumbersome. To design predictive models for the diagnosis of beta-lactam allergy, based on the clinical history of patients with suspicions of allergic reactions to beta-lactams. The study included a retrospective phase, in which records of patients explored for a suspicion of beta-lactam allergy (in the Allergy Unit of the University Hospital of Montpellier between September 1996 and September 2012) were used to construct predictive models based on a logistic regression and decision tree method; a prospective phase, in which we performed an external validation of the chosen models in patients with suspicion of beta-lactam allergy recruited from 3 allergy centers (Montpellier, Nîmes, Narbonne) between March and November 2013. Data related to clinical history and allergy evaluation results were retrieved and analyzed. The retrospective and prospective phases included 1991 and 200 patients, respectively, with a different prevalence of confirmed beta-lactam allergy (23.6% vs 31%, P = .02). For the logistic regression method, performances of the models were similar in both samples: sensitivity was 51% (vs 60%), specificity 75% (vs 80%), positive predictive value 40% (vs 57%), and negative predictive value 83% (vs 82%). The decision tree method reached a sensitivity of 29.5% (vs 43.5%), specificity of 96.4% (vs 94.9%), positive predictive value of 71.6% (vs 79.4%), and negative predictive value of 81.6% (vs 81.3%). Two different independent methods using clinical history predictors were unable to accurately predict beta-lactam allergy and replace a conventional allergy evaluation for suspected beta-lactam allergy. Copyright © 2017 American Academy of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.
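
    The reported sensitivity, specificity, positive predictive value and negative predictive value all follow from a 2x2 confusion matrix of model prediction against confirmed allergy status. The helper below is a generic sketch of that arithmetic, not the authors' code; the example counts are invented.

```python
# Generic computation of sensitivity, specificity, PPV and NPV from a
# 2x2 confusion matrix. The counts below are illustrative, not the study's data.

def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

if __name__ == "__main__":
    # Hypothetical counts for a cohort screened by a clinical-history model
    metrics = diagnostic_metrics(tp=30, fp=45, fn=29, tn=96)
    for name, value in metrics.items():
        print(f"{name}: {value:.2f}")
```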

  11. Model checking software for phylogenetic trees using distribution and database methods

    Directory of Open Access Journals (Sweden)

    Requeno José Ignacio

    2013-12-01

    Full Text Available Model checking, a generic and formal paradigm stemming from computer science based on temporal logics, has been proposed for the study of biological properties that emerge from the labeling of the states defined over the phylogenetic tree. This strategy allows us to use generic software tools already present in the industry. However, the performance of traditional model checking is penalized when scaling the system for large phylogenies. To this end, two strategies are presented here. The first one consists of partitioning the phylogenetic tree into a set of subgraphs each one representing a subproblem to be verified so as to speed up the computation time and distribute the memory consumption. The second strategy is based on uncoupling the information associated to each state of the phylogenetic tree (mainly, the DNA sequence and exporting it to an external tool for the management of large information systems. The integration of all these approaches outperforms the results of monolithic model checking and helps us to execute the verification of properties in a real phylogenetic tree.

  12. Model of the Dynamic Construction Process of Texts and Scaling Laws of Words Organization in Language Systems.

    Science.gov (United States)

    Li, Shan; Lin, Ruokuang; Bian, Chunhua; Ma, Qianli D Y; Ivanov, Plamen Ch

    2016-01-01

    Scaling laws characterize diverse complex systems in a broad range of fields, including physics, biology, finance, and social science. The human language is another example of a complex system of words organization. Studies on written texts have shown that scaling laws characterize the occurrence frequency of words, words rank, and the growth of distinct words with increasing text length. However, these studies have mainly concentrated on the western linguistic systems, and the laws that govern the lexical organization, structure and dynamics of the Chinese language remain not well understood. Here we study a database of Chinese and English language books. We report that three distinct scaling laws characterize words organization in the Chinese language. We find that these scaling laws have different exponents and crossover behaviors compared to English texts, indicating different words organization and dynamics of words in the process of text growth. We propose a stochastic feedback model of words organization and text growth, which successfully accounts for the empirically observed scaling laws with their corresponding scaling exponents and characteristic crossover regimes. Further, by varying key model parameters, we reproduce differences in the organization and scaling laws of words between the Chinese and English language. We also identify functional relationships between model parameters and the empirically observed scaling exponents, thus providing new insights into the words organization and growth dynamics in the Chinese and English language.
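
    The stochastic feedback model itself is not given in the record, but the empirical quantities it targets, rank-frequency (Zipf-like) and vocabulary-growth (Heaps-like) curves, can be measured from any token stream. The sketch below is a generic measurement illustration on a toy text, not the authors' analysis pipeline.

```python
# Measure two classic word-organization curves from a token stream:
# rank-frequency (Zipf-like) and vocabulary growth with text length
# (Heaps-like). The toy text is a placeholder for a real book corpus.
from collections import Counter

def rank_frequency(tokens):
    """Return (rank, count) pairs sorted by decreasing frequency."""
    counts = Counter(tokens)
    return list(enumerate(sorted(counts.values(), reverse=True), start=1))

def vocabulary_growth(tokens, step=1000):
    """Number of distinct words seen after every `step` tokens."""
    seen, growth = set(), []
    for i, tok in enumerate(tokens, start=1):
        seen.add(tok)
        if i % step == 0:
            growth.append((i, len(seen)))
    return growth

if __name__ == "__main__":
    text = ("the quick brown fox jumps over the lazy dog " * 500).split()
    print(rank_frequency(text)[:5])            # top-5 ranks and their counts
    print(vocabulary_growth(text, step=1000)[:3])
```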

  13. Model of the Dynamic Construction Process of Texts and Scaling Laws of Words Organization in Language Systems.

    Directory of Open Access Journals (Sweden)

    Shan Li

    Full Text Available Scaling laws characterize diverse complex systems in a broad range of fields, including physics, biology, finance, and social science. The human language is another example of a complex system of words organization. Studies on written texts have shown that scaling laws characterize the occurrence frequency of words, words rank, and the growth of distinct words with increasing text length. However, these studies have mainly concentrated on the western linguistic systems, and the laws that govern the lexical organization, structure and dynamics of the Chinese language remain not well understood. Here we study a database of Chinese and English language books. We report that three distinct scaling laws characterize words organization in the Chinese language. We find that these scaling laws have different exponents and crossover behaviors compared to English texts, indicating different words organization and dynamics of words in the process of text growth. We propose a stochastic feedback model of words organization and text growth, which successfully accounts for the empirically observed scaling laws with their corresponding scaling exponents and characteristic crossover regimes. Further, by varying key model parameters, we reproduce differences in the organization and scaling laws of words between the Chinese and English language. We also identify functional relationships between model parameters and the empirically observed scaling exponents, thus providing new insights into the words organization and growth dynamics in the Chinese and English language.

  14. The geothermal energy potential in Denmark - updating the database and new structural and thermal models

    Science.gov (United States)

    Nielsen, Lars Henrik; Sparre Andersen, Morten; Balling, Niels; Boldreel, Lars Ole; Fuchs, Sven; Leth Hjuler, Morten; Kristensen, Lars; Mathiesen, Anders; Olivarius, Mette; Weibel, Rikke

    2017-04-01

    Knowledge of structural, hydraulic and thermal conditions of the subsurface is fundamental for the planning and use of hydrothermal energy. In the framework of a project under the Danish Research program 'Sustainable Energy and Environment' funded by the 'Danish Agency for Science, Technology and Innovation', fundamental geological and geophysical information of importance for the utilization of geothermal energy in Denmark was compiled, analyzed and re-interpreted. A 3D geological model was constructed and used as the structural basis for the development of a national subsurface temperature model. In that frame, all available reflection seismic data were interpreted, quality controlled and integrated to improve the regional structural understanding. The analyses and interpretation of available relevant data (i.e. old and new seismic profiles, core and well-log data, literature data) and a new time-depth conversion allowed a consistent correlation of seismic surfaces for the whole of Denmark and across tectonic features. On this basis, new topologically consistent depth and thickness maps for 16 geological units from the top pre-Zechstein to the surface were drawn. A new 3D structural geological model was developed with special emphasis on potential geothermal reservoirs. The interpretation of petrophysical data (core data and well-logs) allows the hydraulic and thermal properties of potential geothermal reservoirs to be evaluated and a parameterized numerical 3D conductive subsurface temperature model to be developed. Reservoir properties and quality were estimated by integrating petrography and diagenesis studies with porosity-permeability data. Detailed interpretation of the reservoir quality of the geological formations was made by estimating net reservoir sandstone thickness based on well-log analysis, determination of mineralogy including sediment provenance analysis, and burial history data. New local surface heat-flow values (range: 64-84 mW/m2) were determined for the Danish

  15. Molecular analysis of the replication program in unicellular model organisms

    OpenAIRE

    Raghuraman, M. K.; Brewer, Bonita J.

    2010-01-01

    Eukaryotes have long been reported to show temporal programs of replication, different portions of the genome being replicated at different times in S phase, with the added possibility of developmentally regulated changes in this pattern depending on species and cell type. Unicellular model organisms, primarily the budding yeast Saccharomyces cerevisiae, have been central to our current understanding of the mechanisms underlying the regulation of replication origins and the temporal program o...

  16. Understanding rare disease pathogenesis: a grand challenge for model organisms.

    Science.gov (United States)

    Hieter, Philip; Boycott, Kym M

    2014-10-01

    In this commentary, Philip Hieter and Kym Boycott discuss the importance of model organisms for understanding pathogenesis of rare human genetic diseases, and highlight the work of Brooks et al., "Dysfunction of 60S ribosomal protein L10 (RPL10) disrupts neurodevelopment and causes X-linked microcephaly in humans," published in this issue of GENETICS. Copyright © 2014 by the Genetics Society of America.

  17. Categorical database generalization in GIS

    NARCIS (Netherlands)

    Liu, Y.

    2002-01-01

    Key words: Categorical database, categorical database generalization, Formal data structure, constraints, transformation unit, classification hierarchy, aggregation hierarchy, semantic similarity, data model,

  18. Quasi-dynamic model for an organic Rankine cycle

    International Nuclear Information System (INIS)

    Bamgbopa, Musbaudeen O.; Uzgoren, Eray

    2013-01-01

    Highlights: • Study presents a simplified transient modeling approach for an ORC under variable heat input. • The ORC model is presented as a synthesis of its models of its sub-components. • The model is compared to benchmark numerical simulations and experimental data at different stages. - Abstract: When considering solar based thermal energy input to an organic Rankine cycle (ORC), intermittent nature of the heat input does not only adversely affect the power output but also it may prevent ORC to operate under steady state conditions. In order to identify reliability and efficiency of such systems, this paper presents a simplified transient modeling approach for an ORC operating under variable heat input. The approach considers that response of the system to heat input variations is mainly dictated by the evaporator. Consequently, overall system is assembled using dynamic models for the heat exchangers (evaporator and condenser) and static models of the pump and the expander. In addition, pressure drop within heat exchangers is neglected. The model is compared to benchmark numerical and experimental data showing that the underlying assumptions are reasonable for cases where thermal input varies in time. Furthermore, the model is studied on another configuration and mass flow rates of both the working fluid and hot water and hot water’s inlet temperature to the ORC unit are shown to have direct influence on the system’s response

  19. Sordaria macrospora, a model organism to study fungal cellular development.

    Science.gov (United States)

    Engh, Ines; Nowrousian, Minou; Kück, Ulrich

    2010-12-01

    During the development of multicellular eukaryotes, the processes of cellular growth and organogenesis are tightly coordinated. Since the 1940s, filamentous fungi have served as genetic model organisms to decipher basic mechanisms underlying eukaryotic cell differentiation. Here, we focus on Sordaria macrospora, a homothallic ascomycete and important model organism for developmental biology. During its sexual life cycle, S. macrospora forms three-dimensional fruiting bodies, a complex process involving the formation of different cell types. S. macrospora can be used for genetic, biochemical and cellular experimental approaches since diverse tools, including fluorescence microscopy, a marker recycling system and gene libraries, are available. Moreover, the genome of S. macrospora has been sequenced and allows functional genomics analyses. Over the past years, our group has generated and analysed a number of developmental mutants which has greatly enhanced our fundamental understanding about fungal morphogenesis. In addition, our recent research activities have established a link between developmental proteins and conserved signalling cascades, ultimately leading to a regulatory network controlling differentiation processes in a eukaryotic model organism. This review summarizes the results of our recent findings, thus advancing current knowledge of the general principles and paradigms underpinning eukaryotic cell differentiation and development. Copyright © 2010 Elsevier GmbH. All rights reserved.

  20. Aging, neurogenesis, and caloric restriction in different model organisms.

    Science.gov (United States)

    Arslan-Ergul, Ayca; Ozdemir, A Tugrul; Adams, Michelle M

    2013-08-01

    Brain aging is a multifactorial process that is occurring across multiple cognitive domains. A significant complaint that occurs in the elderly is a decrement in learning and memory ability. Both rodents and zebrafish exhibit a similar problem with memory during aging. The neurobiological changes that underlie this cognitive decline are complex and undoubtedly influenced by many factors. Alterations in the birth of new neurons and neuron turnover may contribute to age-related cognitive problems. Caloric restriction is the only non-genetic intervention that reliably increases life span and healthspan across multiple organisms although the molecular mechanisms are not well-understood. Recently the zebrafish has become a popular model organism for understanding the neurobiological consequences but to date very little work has been performed. Similarly, few studies have examined the effects of dietary restriction in zebrafish. Here we review the literature related to memory decline, neurogenesis, and caloric restriction across model organisms and suggest that zebrafish has the potential to be an important animal model for understanding the complex interactions between age, neurobiological changes in the brain, and dietary regimens or their mimetics as interventions.

  1. Turbulence and Self-Organization Modeling Astrophysical Objects

    CERN Document Server

    Marov, Mikhail Ya

    2013-01-01

    This book focuses on the development of continuum models of natural turbulent media. It provides a theoretical approach to the solutions of different problems related to the formation, structure and evolution of astrophysical and geophysical objects. A stochastic modeling approach is used in the mathematical treatment of these problems, which reflects self-organization processes in open dissipative systems. The authors also consider examples of ordering for various objects in space throughout their evolutionary processes. This volume is aimed at graduate students and researchers in the fields of mechanics, astrophysics, geophysics, planetary and space science.

  2. Investigation of realistic PET simulations incorporating tumor patient's specificity using anthropomorphic models: Creation of an oncology database

    Energy Technology Data Exchange (ETDEWEB)

    Papadimitroulas, Panagiotis; Efthimiou, Nikos; Nikiforidis, George C.; Kagadis, George C. [Department of Medical Physics, School of Medicine, University of Patras, Rion, GR 265 04 (Greece); Loudos, George [Department of Biomedical Engineering, Technological Educational Institute of Athens, Ag. Spyridonos Street, Egaleo GR 122 10, Athens (Greece); Le Maitre, Amandine; Hatt, Mathieu; Tixier, Florent; Visvikis, Dimitris [Medical Information Processing Laboratory (LaTIM), National Institute of Health and Medical Research (INSERM), 29609 Brest (France)

    2013-11-15

    Purpose: The GATE Monte Carlo simulation toolkit is used for the implementation of realistic PET simulations incorporating tumor heterogeneous activity distributions. The reconstructed patient images include noise from the acquisition process, imaging system's performance restrictions and have limited spatial resolution. For those reasons, the measured intensity cannot be simply introduced in GATE simulations, to reproduce clinical data. Investigation of the heterogeneity distribution within tumors applying partial volume correction (PVC) algorithms was assessed. The purpose of the present study was to create a simulated oncology database based on clinical data with realistic intratumor uptake heterogeneity properties.Methods: PET/CT data of seven oncology patients were used in order to create a realistic tumor database investigating the heterogeneity activity distribution of the simulated tumors. The anthropomorphic models (NURBS based cardiac torso and Zubal phantoms) were adapted to the CT data of each patient, and the activity distribution was extracted from the respective PET data. The patient-specific models were simulated with the Monte Carlo Geant4 application for tomography emission (GATE) in three different levels for each case: (a) using homogeneous activity within the tumor, (b) using heterogeneous activity distribution in every voxel within the tumor as it was extracted from the PET image, and (c) using heterogeneous activity distribution corresponding to the clinical image following PVC. The three different types of simulated data in each case were reconstructed with two iterations and filtered with a 3D Gaussian postfilter, in order to simulate the intratumor heterogeneous uptake. Heterogeneity in all generated images was quantified using textural feature derived parameters in 3D according to the ground truth of the simulation, and compared to clinical measurements. Finally, profiles were plotted in central slices of the tumors, across lines

  3. Investigation of realistic PET simulations incorporating tumor patient's specificity using anthropomorphic models: Creation of an oncology database

    International Nuclear Information System (INIS)

    Papadimitroulas, Panagiotis; Efthimiou, Nikos; Nikiforidis, George C.; Kagadis, George C.; Loudos, George; Le Maitre, Amandine; Hatt, Mathieu; Tixier, Florent; Visvikis, Dimitris

    2013-01-01

    Purpose: The GATE Monte Carlo simulation toolkit is used for the implementation of realistic PET simulations incorporating tumor heterogeneous activity distributions. The reconstructed patient images include noise from the acquisition process, imaging system's performance restrictions and have limited spatial resolution. For those reasons, the measured intensity cannot be simply introduced in GATE simulations, to reproduce clinical data. Investigation of the heterogeneity distribution within tumors applying partial volume correction (PVC) algorithms was assessed. The purpose of the present study was to create a simulated oncology database based on clinical data with realistic intratumor uptake heterogeneity properties.Methods: PET/CT data of seven oncology patients were used in order to create a realistic tumor database investigating the heterogeneity activity distribution of the simulated tumors. The anthropomorphic models (NURBS based cardiac torso and Zubal phantoms) were adapted to the CT data of each patient, and the activity distribution was extracted from the respective PET data. The patient-specific models were simulated with the Monte Carlo Geant4 application for tomography emission (GATE) in three different levels for each case: (a) using homogeneous activity within the tumor, (b) using heterogeneous activity distribution in every voxel within the tumor as it was extracted from the PET image, and (c) using heterogeneous activity distribution corresponding to the clinical image following PVC. The three different types of simulated data in each case were reconstructed with two iterations and filtered with a 3D Gaussian postfilter, in order to simulate the intratumor heterogeneous uptake. Heterogeneity in all generated images was quantified using textural feature derived parameters in 3D according to the ground truth of the simulation, and compared to clinical measurements. Finally, profiles were plotted in central slices of the tumors, across lines with

  4. VizieR Online Data Catalog: Lowell Photometric Database asteroid models (Durech+, 2016)

    Science.gov (United States)

    Durech, J.; Hanus, J.; Oszkiewicz, D.; Vanco, R.

    2016-01-01

    List of new asteroid models. For each asteroid, there is one or two pole directions in the ecliptic coordinates, the sidereal rotation period, rotation period from LCDB and its quality code (if available), the minimum and maximum lightcurve amplitude, the number of data points, and the method which was used to derive the unique rotation period. The accuracy of the sidereal rotation period is of the order of the last decimal place given. Asteroids marked with asterisk were independently confirmed by Hanus et al. (2016A&A...586A.108H). (2 data files).

  5. Conceptual hierarchical modeling to describe wetland plant community organization

    Science.gov (United States)

    Little, A.M.; Guntenspergen, G.R.; Allen, T.F.H.

    2010-01-01

    Using multivariate analysis, we created a hierarchical modeling process that describes how differently-scaled environmental factors interact to affect wetland-scale plant community organization in a system of small, isolated wetlands on Mount Desert Island, Maine. We followed the procedure: 1) delineate wetland groups using cluster analysis, 2) identify differently scaled environmental gradients using non-metric multidimensional scaling, 3) order gradient hierarchical levels according to spatiotemporal scale of fluctuation, and 4) assemble hierarchical model using group relationships with ordination axes and post-hoc tests of environmental differences. Using this process, we determined 1) large wetland size and poor surface water chemistry led to the development of shrub fen wetland vegetation, 2) Sphagnum and water chemistry differences affected fen vs. marsh / sedge meadows status within small wetlands, and 3) small-scale hydrologic differences explained transitions between forested vs. non-forested and marsh vs. sedge meadow vegetation. This hierarchical modeling process can help explain how upper level contextual processes constrain biotic community response to lower-level environmental changes. It creates models with more nuanced spatiotemporal complexity than classification and regression tree procedures. Using this process, wetland scientists will be able to generate more generalizable theories of plant community organization, and useful management models. © Society of Wetland Scientists 2009.
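
    Steps 1 and 2 of the procedure (delineating wetland groups by cluster analysis, then identifying gradients with non-metric multidimensional scaling) can be illustrated with standard tools. The sketch below runs SciPy hierarchical clustering and scikit-learn's non-metric MDS on synthetic site-by-variable data; it illustrates the generic workflow, not the authors' analysis.

```python
# Steps 1-2 of the hierarchical modeling procedure, illustrated on synthetic
# data: (1) cluster sites into groups, (2) ordinate them with non-metric
# multidimensional scaling (NMDS). Not the authors' code or data.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
# 30 hypothetical wetlands x 6 standardized environmental variables
sites = rng.normal(size=(30, 6))

# Step 1: delineate wetland groups with hierarchical clustering
dist = pdist(sites, metric="euclidean")
groups = fcluster(linkage(dist, method="ward"), t=4, criterion="maxclust")

# Step 2: non-metric MDS ordination of the same distance matrix
nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed", random_state=1)
scores = nmds.fit_transform(squareform(dist))

for g in sorted(set(groups)):
    centroid = scores[groups == g].mean(axis=0)
    print(f"group {g}: n={np.sum(groups == g)}, ordination centroid={centroid.round(2)}")
```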

  6. Modeling of the hERG K+ Channel Blockage Using Online Chemical Database and Modeling Environment (OCHEM).

    Science.gov (United States)

    Li, Xiao; Zhang, Yuan; Li, Huanhuan; Zhao, Yong

    2017-12-01

    Human ether-a-go-go related gene (hERG) K+ channel plays an important role in cardiac action potential. Blockage of hERG channel may result in long QT syndrome (LQTS), even cause sudden cardiac death. Many drugs have been withdrawn from the market because of the serious hERG-related cardiotoxicity. Therefore, it is quite essential to estimate the chemical blockage of hERG in the early stage of drug discovery. In this study, a diverse set of 3721 compounds with hERG inhibition data was assembled from literature. Then, we make full use of the Online Chemical Modeling Environment (OCHEM), which supplies rich machine learning methods and descriptor sets, to build a series of classification models for hERG blockage. We also generated two consensus models based on the top-performing individual models. The consensus models performed much better than the individual models both on 5-fold cross validation and external validation. Especially, consensus model II yielded the prediction accuracy of 89.5 % and MCC of 0.670 on external validation. This result indicated that the predictive power of consensus model II should be stronger than most of the previously reported models. The 17 top-performing individual models and the consensus models and the data sets used for model development are available at https://ochem.eu/article/103592. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
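
    The record evaluates models by accuracy and Matthews correlation coefficient (MCC) and builds consensus models from top-performing individual classifiers. The sketch below is a generic illustration of majority-vote consensus plus accuracy/MCC computation on invented predictions; it is not the OCHEM workflow.

```python
# Generic majority-vote consensus of binary hERG-blocker predictions and the
# accuracy / Matthews correlation coefficient (MCC) of the result.
# The vote arrays are made up; this is not the OCHEM pipeline.
import math

def consensus(votes_per_model):
    """Majority vote across models; votes_per_model is a list of 0/1 lists."""
    n_models = len(votes_per_model)
    return [int(sum(v) * 2 >= n_models) for v in zip(*votes_per_model)]

def accuracy_and_mcc(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    acc = (tp + tn) / len(y_true)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) or 1.0
    mcc = (tp * tn - fp * fn) / denom
    return acc, mcc

if __name__ == "__main__":
    y_true = [1, 1, 0, 0, 1, 0, 1, 0]
    model_votes = [[1, 1, 0, 1, 1, 0, 0, 0],
                   [1, 0, 0, 0, 1, 0, 1, 1],
                   [1, 1, 1, 0, 1, 0, 1, 1]]
    y_pred = consensus(model_votes)
    print(accuracy_and_mcc(y_true, y_pred))
```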

  7. Silkworm: A Promising Model Organism in Life Science.

    Science.gov (United States)

    Meng, Xu; Zhu, Feifei; Chen, Keping

    2017-09-01

    As an important economic insect, silkworm Bombyx mori (L.) (Lepidoptera: Bombycidae) has numerous advantages in life science, such as low breeding cost, large progeny size, short generation time, and clear genetic background. Additionally, there are rich genetic resources associated with silkworms. The completion of the silkworm genome has further accelerated it to be a modern model organism in life science. Genomic studies showed that some silkworm genes are highly homologous to certain genes related to human hereditary disease and, therefore, are a candidate model for studying human disease. In this article, we provided a review of silkworm as an important model in various research areas, including human disease, screening of antimicrobial agents, environmental safety monitoring, and antitumor studies. In addition, the application potentiality of silkworm model in life sciences was discussed. © The Author 2017. Published by Oxford University Press on behalf of Entomological Society of America.

  8. Data-based Modeling of the Dynamical Inner Magnetosphere During Strong Geomagnetic Storms

    Science.gov (United States)

    Tsyganenko, N.; Sitnov, M.

    2004-12-01

    This work builds on and extends our previous effort [Tsyganenko et al., 2003] to develop a dynamical model of the storm-time geomagnetic field in the inner magnetosphere, using space magnetometer data taken during 37 major events in 1996-2000 and concurrent observations of the solar wind and IMF. The essence of the approach is to derive from the data the temporal variation of all major current systems contributing to the geomagnetic field during the entire storm cycle, using a simple model of their growth and decay. Each principal source of the external magnetic field (magnetopause, cross-tail current sheet, axisymmetric and partial ring currents, Birkeland currents) is controlled by a separate driving variable that includes a combination of geoeffective parameters in the form N^λ V^β B_s^γ, where N, V, and Bs are the solar wind density, speed, and the magnitude of the southward component of the IMF, respectively. Each source was also assumed to have an individual relaxation timescale and residual quiet-time strength, so that its partial contribution to the total field was calculated for any moment as a time integral, taking into account the entire history of the external driving of the magnetosphere during each storm. In addition, the magnitudes of the principal field sources were assumed to saturate during extremely large storms with abnormally strong external driving. All the parameters of the model field sources, including their magnitudes, geometrical characteristics, solar wind/IMF driving functions, decay timescales, and saturation thresholds were treated as free variables, to be derived from the data by least squares. The relaxation timescales of the individual magnetospheric field sources were found to differ largely between each other, from as large as ~30 hours for the symmetrical ring current to only ~50 min for the region 1 Birkeland current. The total magnitudes of the currents were also found to dramatically vary in the course of major storms
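
    The driving variable quoted in the abstract, and the relaxation-type time integral it describes in words, can be written compactly. The first expression restates the record's driving function; the second is one plausible way to formalize "a time integral with an individual relaxation timescale and residual quiet-time strength", offered as an assumption rather than the authors' exact expression.

```latex
% Driving variable for each external field source, as stated in the record:
\[
  S(t) \;=\; N(t)^{\lambda}\, V(t)^{\beta}\, B_s(t)^{\gamma}
\]
% One plausible formalization (an assumption) of "a time integral taking into
% account the entire history of external driving", with relaxation timescale T
% and residual quiet-time strength W_q for a given source amplitude W:
\[
  W(t) \;=\; W_q \;+\; \frac{1}{T}\int_{-\infty}^{t} S(\tau)\, e^{-(t-\tau)/T}\, \mathrm{d}\tau
\]
```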

  9. Uncertainty in geochemical modelling of CO2 and calcite dissolution in NaCl solutions due to different modelling codes and thermodynamic databases

    International Nuclear Information System (INIS)

    Haase, Christoph; Dethlefsen, Frank; Ebert, Markus; Dahmke, Andreas

    2013-01-01

    Highlights: • CO2 and calcite dissolution is calculated. • The codes PHREEQC, Geochemist’s Workbench, EQ3/6, and FactSage are used. • Comparison with Duan and Li (2008) shows lowest deviation using phreeqc.dat and wateq4f.dat. • Using Pitzer databases does not improve accurate calculations. • Uncertainty in dissolved CO2 is largest using the geochemical models. - Abstract: A prognosis of the geochemical effects of CO2 storage induced by the injection of CO2 into geologic reservoirs or by CO2 leakage into the overlaying formations can be performed by numerical modelling (non-invasive) and field experiments. Until now the research has been focused on the geochemical processes of the CO2 reacting with the minerals of the storage formation, which mostly consists of quartzitic sandstones. Regarding the safety assessment the reactions between the CO2 and the overlaying formations in the case of a CO2 leakage are of equal importance as the reactions in the storage formation. In particular, limestone formations can react very sensitively to CO2 intrusion. The thermodynamic parameters necessary to model these reactions are not determined explicitly through experiments at the total range of temperature and pressure conditions and are thus extrapolated by the simulation code. The differences in the calculated results lead to different calcite and CO2 solubilities and can influence the safety issues. This uncertainty study is performed by comparing the computed results, applying the geochemical modelling software codes The Geochemist’s Workbench, EQ3/6, PHREEQC and FactSage/ChemApp and their thermodynamic databases. The input parameters (1) total concentration of the solution, (2) temperature and (3) fugacity are varied within typical values for CO2 reservoirs, overlaying formations and close-to-surface aquifers. The most sensitive input parameter in the system H2O–CO2–NaCl–CaCO3 for the calculated range of dissolved calcite and CO2 is the

  10. Targeted Therapy Database (TTD): a model to match patient's molecular profile with current knowledge on cancer biology.

    Directory of Open Access Journals (Sweden)

    Simone Mocellin

    Full Text Available BACKGROUND: The efficacy of current anticancer treatments is far from satisfactory and many patients still die of their disease. A general agreement exists on the urgency of developing molecularly targeted therapies, although their implementation in the clinical setting is in its infancy. In fact, despite the wealth of preclinical studies addressing these issues, the difficulty of testing each targeted therapy hypothesis in the clinical arena represents an intrinsic obstacle. As a consequence, we are witnessing a paradoxical situation where most hypotheses about the molecular and cellular biology of cancer remain clinically untested and therefore do not translate into a therapeutic benefit for patients. OBJECTIVE: To present a computational method aimed at comprehensively exploiting the scientific knowledge in order to foster the development of personalized cancer treatment by matching the patient's molecular profile with the available evidence on targeted therapy. METHODS: To this aim we focused on melanoma, an increasingly diagnosed malignancy for which the need for novel therapeutic approaches is paradigmatic since no effective treatment is available in the advanced setting. Relevant data were manually extracted from peer-reviewed full-text original articles describing any type of anti-melanoma targeted therapy tested in any type of experimental or clinical model. To this purpose, Medline, Embase, Cancerlit and the Cochrane databases were searched. RESULTS AND CONCLUSIONS: We created a manually annotated database (Targeted Therapy Database, TTD) where the relevant data are gathered in a formal representation that can be computationally analyzed. Dedicated algorithms were set up for the identification of the prevalent therapeutic hypotheses based on the available evidence and for ranking treatments based on the molecular profile of individual patients. In this essay we describe the principles and computational algorithms of an original method

  11. Review and assessment of the database and numerical modeling for turbine heat transfer

    Science.gov (United States)

    Gladden, H. J.; Simoneau, R. J.

    1989-01-01

    The objectives of the NASA Hot Section Technology (HOST) Turbine Heat Transfer subproject were to obtain a better understanding of the physics of the aerothermodynamic phenomena and to assess and improve the analytical methods used to predict the flow and heat transfer in high-temperature gas turbines. At the time the HOST project was initiated, an across-the-board improvement in turbine design technology was needed. A building-block approach was utilized and the research ranged from the study of fundamental phenomena and modeling to experiments in simulated real engine environments. Experimental research accounted for approximately 75 percent of the funding while the analytical efforts were approximately 25 percent. A healthy government/industry/university partnership, with industry providing almost half of the research, was created to advance the turbine heat transfer design technology base.

  12. [Geothermal system temperature-depth database and model for data analysis]. 5. quarterly technical progress report

    Energy Technology Data Exchange (ETDEWEB)

    Blackwell, D.D.

    1998-04-25

    During this first quarter of the second year of the contract activity has involved several different tasks. The author has continued to work on three tasks most intensively during this quarter: the task of implementing the data base for geothermal system temperature-depth, the maintenance of the WWW site with the heat flow and gradient data base, and finally the development of a modeling capability for analysis of the geothermal system exploration data. The author has completed the task of developing a data base template for geothermal system temperature-depth data that can be used in conjunction with the regional data base that he had already developed and is now implementing it. Progress is described.

  13. An Instructional Development Model for Global Organizations: The GOaL Model.

    Science.gov (United States)

    Hara, Noriko; Schwen, Thomas M.

    1999-01-01

    Presents an instructional development model, GOaL (Global Organization Localization), for use by global organizations. Topics include gaps in language, culture, and needs; decentralized processes; collaborative efforts; predetermined content; multiple perspectives; needs negotiation; learning within context; just-in-time training; and bilingual…

  14. Danish Urogynaecological Database

    DEFF Research Database (Denmark)

    Hansen, Ulla Darling; Gradel, Kim Oren; Larsen, Michael Due

    2016-01-01

    The Danish Urogynaecological Database is established in order to ensure high quality of treatment for patients undergoing urogynecological surgery. The database contains details of all women in Denmark undergoing incontinence surgery or pelvic organ prolapse surgery, amounting to ~5,200 procedures, including complications if relevant, implants used if relevant, and 3-6-month postoperative recording of symptoms, if any. A set of clinical quality indicators is being maintained by the steering committee for the database and is published in an annual report which also contains extensive descriptive statistics. The database has a completeness of over 90% of all urogynecological surgeries performed in Denmark. Some of the main variables have been validated using medical records as gold standard. The positive predictive value was above 90%. The data are used as a quality monitoring tool by the hospitals and in a number

  15. Enabling Semantic Queries Against the Spatial Database

    Directory of Open Access Journals (Sweden)

    PENG, X.

    2012-02-01

    Full Text Available The spatial database based upon the object-relational database management system (ORDBMS) has the merits of a clear data model, good operability and high query efficiency. That is why it has been widely used in spatial data organization and management. However, it cannot express the semantic relationships among geospatial objects, making the query results difficult to meet the user's requirements well. Therefore, this paper represents an attempt to combine the Semantic Web technology with the spatial database so as to make up for the traditional database's disadvantages. In this way, on the one hand, users can take advantage of ORDBMS to store and manage spatial data; on the other hand, if the spatial database is released in the form of the Semantic Web, users can describe a query more concisely with a cognitive pattern similar to that of daily life. As a consequence, this methodology makes the benefits of both the Semantic Web and the object-relational database (ORDB) available. The paper discusses systematically the semantically enriched spatial database's architecture, key technologies and implementation. Subsequently, we demonstrate the function of spatial semantic queries via a practical prototype system. The query results indicate that the method used in this study is feasible.
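
    As a generic illustration of the kind of semantic query the record has in mind (not the prototype system it describes), the sketch below loads a tiny RDF description of spatial features with rdflib and answers a SPARQL question about adjacency; the namespace, classes and data are hypothetical.

```python
# Generic illustration: querying RDF-published spatial features with SPARQL
# via rdflib. The namespace, classes and data are hypothetical and do not
# reflect the prototype system described in the record.
from rdflib import Graph

TTL = """
@prefix geo: <http://example.org/geo#> .
geo:lake1    a geo:Lake ;    geo:adjacentTo geo:wetland1 ; geo:name "North Lake" .
geo:wetland1 a geo:Wetland ; geo:name "Birch Wetland" .
"""

QUERY = """
PREFIX geo: <http://example.org/geo#>
SELECT ?lakeName ?wetlandName WHERE {
    ?lake a geo:Lake ; geo:name ?lakeName ; geo:adjacentTo ?wetland .
    ?wetland a geo:Wetland ; geo:name ?wetlandName .
}
"""

g = Graph()
g.parse(data=TTL, format="turtle")
for row in g.query(QUERY):
    print(f"{row.lakeName} is adjacent to {row.wetlandName}")
```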

  16. The LHCb configuration database

    CERN Document Server

    Abadie, L; Van Herwijnen, Eric; Jacobsson, R; Jost, B; Neufeld, N

    2005-01-01

    The aim of the LHCb configuration database is to store information about all the controllable devices of the detector. The experiment's control system (that uses PVSS) will configure, start up and monitor the detector from the information in the configuration database. The database will contain devices with their properties, connectivity and hierarchy. The ability to store and rapidly retrieve huge amounts of data, and the navigability between devices are important requirements. We have collected use cases to ensure the completeness of the design. Using the entity relationship modelling technique we describe the use cases as classes with attributes and links. We designed the schema for the tables using relational diagrams. This methodology has been applied to the TFC (switches) and DAQ system. Other parts of the detector will follow later. The database has been implemented using Oracle to benefit from central CERN database support. The project also foresees the creation of tools to populate, maintain, and co...

  17. Rainfall-induced fecal indicator organisms transport from manured fields: model sensitivity analysis.

    Science.gov (United States)

    Martinez, Gonzalo; Pachepsky, Yakov A; Whelan, Gene; Yakirevich, Alexander M; Guber, Andrey; Gish, Timothy J

    2014-02-01

    Microbial quality of surface waters attracts attention due to food- and waterborne disease outbreaks. Fecal indicator organisms (FIOs) are commonly used for the microbial pollution level evaluation. Models predicting the fate and transport of FIOs are required to design and evaluate best management practices that reduce the microbial pollution in ecosystems and water sources and thus help to predict the risk of food and waterborne diseases. In this study we performed a sensitivity analysis for the KINEROS/STWIR model developed to predict the FIOs transport out of manured fields to other fields and water bodies in order to identify input variables that control the transport uncertainty. The distributions of model input parameters were set to encompass values found from three-year experiments at the USDA-ARS OPE3 experimental site in Beltsville and publicly available information. Sobol' indices and complementary regression trees were used to perform the global sensitivity analysis of the model and to explore the interactions between model input parameters on the proportion of FIO removed from fields. Regression trees provided a useful visualization of the differences in sensitivity of the model output in different parts of the input variable domain. Environmental controls such as soil saturation, rainfall duration and rainfall intensity had the largest influence in the model behavior, whereas soil and manure properties ranked lower. The field length had only moderate effect on the model output sensitivity to the model inputs. Among the manure-related properties the parameter determining the shape of the FIO release kinetic curve had the largest influence on the removal of FIOs from the fields. That underscored the need to better characterize the FIO release kinetics. Since the most sensitive model inputs are available in soil and weather databases or can be obtained using soil water models, results indicate the opportunity of obtaining large-scale estimates of FIO
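
    The record uses Sobol' indices as the global sensitivity measure. The sketch below estimates first-order Sobol' indices by plain Monte Carlo (the Saltelli pick-and-freeze estimator) for an invented FIO-removal response surface; the toy model, parameter names and ranges are assumptions, not KINEROS/STWIR.

```python
# Crude Monte Carlo estimate of first-order Sobol' indices for a toy model of
# FIO removal. The toy response function and parameter ranges are invented for
# illustration; this is not the KINEROS/STWIR analysis.
import numpy as np

rng = np.random.default_rng(42)
N = 20000
names = ["rain_intensity", "rain_duration", "soil_saturation", "release_shape"]
lo = np.array([1.0, 0.1, 0.1, 0.5])
hi = np.array([50.0, 5.0, 1.0, 3.0])

def toy_model(x):
    """Fraction of FIOs removed from the field (invented response surface)."""
    rain, dur, sat, shape = x.T
    runoff = sat * (1.0 - np.exp(-rain * dur / 30.0))   # more runoff when wet and rainy
    return 1.0 - np.exp(-runoff * shape)                # release-kinetics shape matters

A = lo + (hi - lo) * rng.random((N, 4))
B = lo + (hi - lo) * rng.random((N, 4))
fA, fB = toy_model(A), toy_model(B)
varY = np.var(np.concatenate([fA, fB]))

for i, name in enumerate(names):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                                  # "pick and freeze" column i
    fABi = toy_model(ABi)
    Si = np.mean(fB * (fABi - fA)) / varY                # Saltelli-style estimator
    print(f"first-order Sobol' index for {name}: {Si:.2f}")
```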

  18. Examples of New Models Applied in Selected Simulation Systems with Respect to Database

    Directory of Open Access Journals (Sweden)

    Ignaszak Z.

    2013-03-01

    Full Text Available The damage-tolerance approach is gaining acceptance in the design procedures for cast parts, which creates new challenges and expectations for the continued development of process virtualization in the mechanical engineering industry. Virtualization is increasingly applied at the stage of product design and optimization of materials technologies, and design and process engineers expect ever greater practical effectiveness from the simulation systems and their newly proposed upgrade modules. The goal is to obtain simulation tools that give the most realistic possible prognosis of the casting structure, including indication, with the highest possible probability, of the places in the casting that are endangered by shrinkage and gas porosity formation. This 3D map of discontinuities and structure, transformed into local mechanical characteristics, is used to calculate local stresses and safety factors. The requirements of damage tolerance and a new approach to evaluating the quality of such a prognosis must be defined. The paper discusses these validation problems for new models/modules used to predict shrinkage and gas porosity, including the chosen structure parameters, using the AlSi7 alloy as an example.
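
    A schematic illustration of the last step described above, under the assumption (not taken from the paper) that local porosity degrades local strength linearly: the predicted porosity map is converted into local mechanical characteristics and then into local safety factors. All numbers and the knock-down relation are illustrative.

        import numpy as np

        # Hypothetical 3D maps on the casting mesh: predicted porosity fraction and computed local stress (MPa).
        porosity = np.random.default_rng(1).uniform(0.0, 0.08, size=(20, 20, 20))
        local_stress = np.full((20, 20, 20), 60.0)

        # Assumed material data for an AlSi7-type alloy and an assumed linear knock-down of strength with porosity;
        # a validated simulation module would use its own calibrated structure-to-property relation instead.
        sound_strength = 230.0      # MPa, illustrative value
        knock_down = 8.0            # strength loss per unit porosity fraction, illustrative

        local_strength = sound_strength * (1.0 - knock_down * porosity)
        safety_factor = local_strength / local_stress

        print("minimum local safety factor:", safety_factor.min())
        print("cells below SF = 1.5:", int((safety_factor < 1.5).sum()))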

  19. Examples of New Models Applied in Selected Simulation Systems with Respect to Database

    Directory of Open Access Journals (Sweden)

    Z. Ignaszak

    2013-01-01

    Full Text Available The damage-tolerance approach is gaining acceptance in the design procedures for cast parts, which creates new challenges and expectations for the continued development of process virtualization in the mechanical engineering industry. Virtualization is increasingly applied at the stage of product design and optimization of materials technologies, and design and process engineers expect ever greater practical effectiveness from the simulation systems and their newly proposed upgrade modules. The goal is to obtain simulation tools that give the most realistic possible prognosis of the casting structure, including indication, with the highest possible probability, of the places in the casting that are endangered by shrinkage and gas porosity formation. This 3D map of discontinuities and structure, transformed into local mechanical characteristics, is used to calculate local stresses and safety factors. The requirements of damage tolerance and a new approach to evaluating the quality of such a prognosis must be defined. The paper discusses these validation problems for new models/modules used to predict shrinkage and gas porosity, including the chosen structure parameters, using the AlSi7 alloy as an example.

  20. Dryout modeling in support of the organic tank safety project

    International Nuclear Information System (INIS)

    Simmons, C.S.

    1998-08-01

    This work was performed for the Organic Tank Safety Project to evaluate the moisture condition of the waste surface in organic-nitrate bearing tanks that are classified as conditionally safe because sufficient water is present. The report describes a simplified modeling procedure used to predict the future moisture content of waste after it has been subjected to dryout caused by water vapor loss through passive ventilation. Dryout occurs as moisture evaporates from the waste into the headspace and then exits the tank through ventilation. The water vapor concentration within the waste and the headspace is determined by the vapor-liquid equilibrium, which depends on the waste's moisture content and temperature. This equilibrium has been measured experimentally for a variety of waste samples and is described by a curve called the water vapor partial pressure isotherm. This curve describes the lowering of the partial pressure of water vapor in equilibrium with the waste, relative to pure water, due to the waste's chemical composition and hygroscopic nature. Saltcake and sludge are described by two distinct calculations that emphasize the particular physical behavior of each; a simple, steady-state model is devised for each type to obtain the approximate drying behavior. The report shows the application of the model to Tanks AX-102, C-104, and U-105.
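
    A minimal sketch of a dryout calculation of the kind described above, with assumed numbers throughout (ventilation rate, headspace volume, initial water inventory, isotherm shape): water loss is driven by the difference between the equilibrium vapor concentration over the waste, reduced according to the partial pressure isotherm, and the vapor concentration of the incoming ventilation air. The isotherm function and all parameter values are assumptions, not data from the report.

        import numpy as np

        # Assumed, illustrative parameters (not tank-specific values from the report).
        ventilation = 0.5          # headspace air exchanges per day, assumed
        headspace_vol = 1500.0     # m^3, assumed
        waste_water = 4.0e5        # kg of water initially in the surface waste layer, assumed
        c_sat = 0.023              # kg/m^3, saturated water-vapor concentration near 25 C
        c_ambient = 0.008          # kg/m^3, vapor concentration of incoming ventilation air, assumed

        def relative_humidity(moisture_fraction):
            # Stand-in for the measured water vapor partial pressure isotherm:
            # the equilibrium RH over the waste drops as the waste dries out (this shape is an assumption).
            return moisture_fraction / (moisture_fraction + 0.05)

        dt = 1.0                    # day
        moisture = 0.35             # initial water mass fraction of the surface waste, assumed
        waste_dry_mass = waste_water * (1 - moisture) / moisture

        for day in range(0, 3650, int(dt)):
            c_eq = c_sat * relative_humidity(moisture)                     # vapor concentration over the waste
            loss = ventilation * headspace_vol * (c_eq - c_ambient) * dt   # kg of water lost this step
            waste_water = max(waste_water - loss, 0.0)
            moisture = waste_water / (waste_water + waste_dry_mass)

        print("moisture fraction after 10 years:", round(moisture, 3))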