WorldWideScience

Sample records for integrates numerous database

  1. Generalized Database Management System Support for Numeric Database Environments.

    Science.gov (United States)

    Dominick, Wayne D.; Weathers, Peggy G.

    1982-01-01

    This overview of potential for utilizing database management systems (DBMS) within numeric database environments highlights: (1) major features, functions, and characteristics of DBMS; (2) applicability to numeric database environment needs and user needs; (3) current applications of DBMS technology; and (4) research-oriented and…

  2. 數據資料庫 Numeric Databases

    Directory of Open Access Journals (Sweden)

    Mei-ling Wang Chen

    1989-03-01

Full Text Available In 1979, the International Communication Bureau of the R.O.C. connected to several U.S. information service centers through the international telecommunication network. Since then, Dialog, ORBIT, and BRS have been introduced into this country. However, users are mostly interested in bibliographic databases and seldom know about non-bibliographic or numeric databases. This article mainly describes numeric databases: their definition and characteristics, comparison with bibliographic databases, their producers, service systems and users, data elements, a brief introduction by subject, their problems and future, the library's role, and the present state of use in the R.O.C.

  3. Integrated spent nuclear fuel database system

    International Nuclear Information System (INIS)

    Henline, S.P.; Klingler, K.G.; Schierman, B.H.

    1994-01-01

The Distributed Information Systems Software Unit at the Idaho National Engineering Laboratory has designed and developed an Integrated Spent Nuclear Fuel Database System (ISNFDS), which maintains a computerized inventory of all US Department of Energy (DOE) spent nuclear fuel (SNF). Commercial SNF is not included in the ISNFDS unless it is owned or stored by DOE. The ISNFDS is an integrated, single data source containing accurate, traceable, and consistent data; it provides extensive data for each fuel, extensive facility data for every facility, and numerous data reports and queries.

  4. Extending Database Integration Technology

    National Research Council Canada - National Science Library

    Buneman, Peter

    1999-01-01

    Formal approaches to the semantics of databases and database languages can have immediate and practical consequences in extending database integration technologies to include a vastly greater range...

  5. A Database Integrity Pattern Language

    Directory of Open Access Journals (Sweden)

    Octavian Paul ROTARU

    2004-08-01

Full Text Available Patterns and pattern languages are ways to capture experience and make it reusable for others, and to describe best practices and good designs. Patterns are solutions to recurrent problems. This paper addresses database integrity problems from a pattern perspective. Even though the number of vendors of database management systems is quite high, the number of available solutions to integrity problems is limited; they all learned from past experience, applying the same solutions over and over again. The solutions to integrity threats applied in database management systems (DBMS) can be formalized as a pattern language. Constraints, transactions, locks, etc., are recurrent solutions to integrity threats and should therefore be treated accordingly, as patterns.
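
    To make the pattern idea concrete, here is a minimal Python/sqlite3 sketch of two of the recurring solutions the abstract names, a declarative constraint and a transaction; the schema and values are invented for illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE account (
        id      INTEGER PRIMARY KEY,
        balance INTEGER NOT NULL CHECK (balance >= 0)  -- constraint pattern
    )""")
con.execute("INSERT INTO account VALUES (1, 100), (2, 50)")
con.commit()

# Transaction pattern: the transfer commits as a whole or not at all.
try:
    with con:  # opens a transaction, rolls back on exception
        con.execute("UPDATE account SET balance = balance - 200 WHERE id = 1")
        con.execute("UPDATE account SET balance = balance + 200 WHERE id = 2")
except sqlite3.IntegrityError:
    pass  # the CHECK constraint fired; no partial update survives

print(con.execute("SELECT id, balance FROM account").fetchall())
# -> [(1, 100), (2, 50)]: integrity preserved
```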

  6. Deep Time Data Infrastructure: Integrating Our Current Geologic and Biologic Databases

    Science.gov (United States)

    Kolankowski, S. M.; Fox, P. A.; Ma, X.; Prabhu, A.

    2016-12-01

As our knowledge of Earth's geologic and mineralogical history grows, we require more efficient methods of sharing immense amounts of data. Databases across numerous disciplines have been utilized to offer extensive information on very specific epochs of Earth's history up to its current state, e.g., the fossil record, rock composition, and proteins. These databases could be a powerful force in identifying previously unseen correlations, such as relationships between minerals and proteins. Creating a unifying site that provides a portal to these databases will aid our ability as a collaborative scientific community to utilize our findings more effectively. The Deep-Time Data Infrastructure (DTDI) is currently being defined as part of a larger effort to accomplish this goal. DTDI will not be a new database but an integration of existing resources. Current geologic and related databases were identified, documentation of their schemas was established, and the work will be presented as a stage-by-stage progression. Through conceptual modeling focused on variables from their combined records, we will determine the best way to integrate these databases using common factors. The Deep-Time Data Infrastructure will allow geoscientists to bridge gaps in data and further our understanding of Earth's history.

  7. Distortion-Free Watermarking Approach for Relational Database Integrity Checking

    Directory of Open Access Journals (Sweden)

    Lancine Camara

    2014-01-01

Full Text Available Nowadays, the Internet is becoming a common way of accessing databases. Such data are exposed to various types of attack aimed at confusing ownership proofs or defeating content protection. In this paper, we propose a new approach based on fragile zero watermarking for the authentication of numeric relational data. Contrary to some previous database watermarking techniques, which introduce distortions into the original database and may not preserve data usability constraints, our approach simply generates the watermark from the original database. First, the adopted method partitions the database relation into independent square matrix groups. Then, group-based watermarks are securely generated and registered with a trusted third party. Integrity verification is performed by computing the determinant and the diagonal's minors for each group. As a result, tampering can be localized down to the attribute-group level. Theoretical and experimental results demonstrate that the proposed technique is resilient against tuple insertion, tuple deletion, and attribute-value modification attacks. Furthermore, comparison with recent related efforts shows that our scheme performs better in detecting multifaceted attacks.
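
    The paper itself is the authority on the full scheme; the toy numpy sketch below only illustrates the group-wise idea described in the abstract (partition the numeric relation into square matrices, register their determinants and diagonal minors, recompute them to localize tampering). Group size, the registration step, and the data are simplified assumptions.

```python
import numpy as np

def group_signatures(relation, k=3):
    """Partition rows of a numeric relation into k x k matrices and return
    (determinant, diagonal minors) per group, as in a fragile
    zero-watermarking scheme (simplified; no trusted third party here)."""
    n = (relation.shape[0] // k) * k
    groups = relation[:n].reshape(-1, k, k)
    sigs = []
    for g in groups:
        minors = [np.linalg.det(np.delete(np.delete(g, i, 0), i, 1))
                  for i in range(k)]
        sigs.append((np.linalg.det(g), minors))
    return sigs

rng = np.random.default_rng(0)
data = rng.integers(1, 100, size=(9, 3)).astype(float)
registered = group_signatures(data)   # would be registered with a third party
data[4, 1] += 1.0                     # tamper with one attribute value
suspect = group_signatures(data)
for i, (a, b) in enumerate(zip(registered, suspect)):
    if not np.allclose(a[0], b[0]) or not np.allclose(a[1], b[1]):
        print(f"tampering localized to group {i}")   # -> group 1
```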

  8. Quality controls in integrative approaches to detect errors and inconsistencies in biological databases

    Directory of Open Access Journals (Sweden)

    Ghisalberti Giorgio

    2010-12-01

Full Text Available Numerous biomolecular data are available, but they are scattered across many databases and only some of them are curated by experts. Most available data are computationally derived and include errors and inconsistencies. Effective use of available data to derive new knowledge hence requires data integration and quality improvement. Many approaches for data integration have been proposed. Data warehousing seems to be the most adequate when comprehensive analysis of integrated data is required, which also makes it the most suitable for implementing comprehensive quality controls on integrated data. We previously developed GFINDer (http://www.bioinformatics.polimi.it/GFINDer/), a web system that supports scientists in effectively using available information. It allows comprehensive statistical analysis and mining of functional and phenotypic annotations of gene lists, such as those identified by high-throughput biomolecular experiments. The GFINDer backend is composed of a multi-organism genomic and proteomic data warehouse (GPDW). Within the GPDW, several controlled terminologies and ontologies, which describe gene and gene product related biomolecular processes, functions and phenotypes, are imported and integrated, together with their associations with genes and proteins of several organisms. To ease keeping the GPDW updated and to ensure the best possible quality of the data integrated in subsequent updates of the data warehouse, we developed several automatic procedures. Within them, we implemented numerous data quality control techniques to test the integrated data for a variety of possible errors and inconsistencies. Among other features, the implemented controls check data structure and completeness, ontological data consistency, ID format and evolution, unexpected data quantification values, and consistency of data from single and multiple sources. We use the implemented controls to analyze the quality of data available from several

  9. Numerical approach to one-loop integrals

    International Nuclear Information System (INIS)

    Fujimoto, Junpei; Shimizu, Yoshimitsu; Kato, Kiyoshi; Oyanagi, Yoshio.

    1992-01-01

Two numerical methods are proposed for the calculation of one-loop scalar integrals. In the first method, the singularity is cancelled by symmetrization of the integrand and the integration is done by a Monte Carlo method. In the second, after transforming the integrand into a standard form, the integral is reduced to a regular numerical integral. These methods provide practical tools to evaluate one-loop Feynman diagrams with the desired numerical accuracy. They are extended to integrals with numerators, and the treatment of the one-loop virtual correction to the cross section is also presented. (author)
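
    As a hedged illustration of what numerically evaluating a one-loop scalar integral involves (not the authors' programs), the sketch below integrates the Feynman-parameter form of the scalar two-point (bubble) function below threshold, where the integrand is regular and ordinary quadrature suffices.

```python
from scipy.integrate import quad

def bubble(s, m2):
    """One-loop scalar two-point function (up to overall normalization)
    in its Feynman-parameter form. Below threshold, s < 4*m2, the
    denominator stays positive and no i*epsilon prescription is needed."""
    return quad(lambda x: 1.0 / (m2 - x * (1.0 - x) * s), 0.0, 1.0)[0]

print(bubble(s=1.0, m2=1.0))   # finite, regular integrand
```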

  10. Integration of Biodiversity Databases in Taiwan and Linkage to Global Databases

    Directory of Open Access Journals (Sweden)

    Kwang-Tsao Shao

    2007-03-01

Full Text Available The biodiversity databases in Taiwan were dispersed across various institutions and colleges, each holding a limited amount of data, until 2001. The Natural Resources and Ecology GIS Database sponsored by the Council of Agriculture, which is part of the National Geographic Information System planned by the Ministry of Interior, was the best established biodiversity database in Taiwan. This database, however, mainly collected distribution data of terrestrial animals and plants within the Taiwan area. In 2001, GBIF was formed, and Taiwan joined as an Associate Participant, starting the establishment and integration of animal and plant species databases so that TaiBIF could co-operate with GBIF. Information on the Catalog of Life, specimens, and alien species was integrated using the Darwin Core standard. These metadata standards allowed the biodiversity information of Taiwan to connect with global databases.

  11. Integrating Variances into an Analytical Database

    Science.gov (United States)

    Sanchez, Carlos

    2010-01-01

For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that could handle many stored forms and make them easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance had been bypassed many times already, so the requirement may not really be needed and should instead be changed to allow the variance's conditions permanently. This project was not restricted to the design and development of the database system; I also worked on exporting the data from the database to a different format (e.g., Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part that contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.

  12. Ontology based heterogeneous materials database integration and semantic query

    Science.gov (United States)

    Zhao, Shuai; Qian, Quan

    2017-10-01

Materials digital data, high-throughput experiments and high-throughput computations are regarded as the three key pillars of materials genome initiatives. With the fast growth of materials data, the integration and sharing of data are very urgent, which has gradually become a hot topic in materials informatics. Due to the lack of semantic description, it is difficult to integrate data deeply at the semantic level when adopting conventional heterogeneous database integration approaches such as federated databases or data warehouses. In this paper, a semantic integration method is proposed that creates a semantic ontology by extracting the database schema semi-automatically. Other heterogeneous databases are integrated into the ontology by means of relational algebra and the rooted graph. Based on the integrated ontology, semantic queries can be run using SPARQL. In the experiments, two well-known first-principles computation databases, OQMD and Materials Project, are used as the integration targets, which shows the availability and effectiveness of our method.
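
    The paper's own pipeline is not reproduced here; the rdflib sketch below only shows the end state it describes, facts from two sources mapped into one ontology graph and queried with SPARQL. The namespace and property names are invented.

```python
from rdflib import Graph, Literal, Namespace, RDF

MAT = Namespace("http://example.org/materials#")   # hypothetical ontology
g = Graph()

# Facts mapped in from two hypothetical source databases.
for name, gap, source in [("GaAs", 1.42, "db_A"), ("Si", 1.12, "db_B")]:
    m = MAT[name]
    g.add((m, RDF.type, MAT.Material))
    g.add((m, MAT.bandGapEv, Literal(gap)))
    g.add((m, MAT.fromSource, Literal(source)))

# Semantic query across both sources through the shared ontology.
q = """
PREFIX mat: <http://example.org/materials#>
SELECT ?m ?gap WHERE {
    ?m a mat:Material ; mat:bandGapEv ?gap .
    FILTER (?gap > 1.2)
}"""
for row in g.query(q):
    print(row.m, row.gap)    # -> only GaAs
```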

  13. Network and Database Security: Regulatory Compliance, Network, and Database Security - A Unified Process and Goal

    OpenAIRE

    Errol A. Blake

    2007-01-01

Database security has evolved; data security professionals have developed numerous techniques and approaches to assure data confidentiality, integrity, and availability. This paper will show that Traditional Database Security, which has focused primarily on creating user accounts and managing user privileges to database objects, is not enough to protect data confidentiality, integrity, and availability. This paper is a compilation of different journals, articles and classroom discussions ...

  14. [A web-based integrated clinical database for laryngeal cancer].

    Science.gov (United States)

    E, Qimin; Liu, Jialin; Li, Yong; Liang, Chuanyu

    2014-08-01

To establish an integrated database for laryngeal cancer and to provide an information platform for clinical and fundamental research on laryngeal cancer, meeting the needs of both clinical and scientific use. Under the guidance of clinical experts, we constructed a web-based integrated clinical database for laryngeal carcinoma on the basis of clinical data standards, Apache+PHP+MySQL technology, laryngeal cancer specialist characteristics, and tumor genetic information. A web-based integrated clinical database for laryngeal carcinoma was developed. This database has a user-friendly interface, and data can be entered and queried conveniently. In addition, the system utilizes clinical data standards and exchanges information with existing electronic medical record systems to avoid information silos. Furthermore, the database forms are integrated with laryngeal cancer specialist characteristics and tumor genetic information. The database has comprehensive specialist information and strong expandability, is technically feasible, and conforms to the clinical characteristics of the laryngeal cancer specialty. By using clinical data standards and handling clinical data in a structured way, the database can better meet the needs of scientific research and facilitate information exchange, and the information collected about tumor patients is highly informative. In addition, users can access and manipulate the database conveniently and swiftly over the Internet.

  15. Database specification for the Worldwide Port System (WPS) Regional Integrated Cargo Database (ICDB)

    Energy Technology Data Exchange (ETDEWEB)

Faby, E.Z.; Fluker, J.; Hancock, B.R.; Grubb, J.W.; Russell, D.L. [Univ. of Tennessee, Knoxville, TN (United States)]; Loftis, J.P.; Shipe, P.C.; Truett, L.F. [Oak Ridge National Lab., TN (United States)]

    1994-03-01

    This Database Specification for the Worldwide Port System (WPS) Regional Integrated Cargo Database (ICDB) describes the database organization and storage allocation, provides the detailed data model of the logical and physical designs, and provides information for the construction of parts of the database such as tables, data elements, and associated dictionaries and diagrams.

  16. Emission & Generation Resource Integrated Database (eGRID)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Emissions & Generation Resource Integrated Database (eGRID) is an integrated source of data on environmental characteristics of electric power generation....

  17. Integr8: enhanced inter-operability of European molecular biology databases.

    Science.gov (United States)

    Kersey, P J; Morris, L; Hermjakob, H; Apweiler, R

    2003-01-01

The increasing production of molecular biology data in the post-genomic era, and the proliferation of databases that store it, require the development of an integrative layer in database services to facilitate the synthesis of related information. The solution of this problem is made more difficult by the absence of universal identifiers for biological entities, and the breadth and variety of available data. Integr8 was modelled using UML (Unified Modelling Language). Integr8 is being implemented as an n-tier system using a modern object-oriented programming language (Java). An object-relational mapping tool, OJB, is being used to specify the interface between the upper layers and an underlying relational database. The European Bioinformatics Institute is launching the Integr8 project. Integr8 will be an automatically populated database in which we will maintain stable identifiers for biological entities, describe their relationships with each other (in accordance with the central dogma of biology), and store equivalences between identified entities in the source databases. Only core data will be stored in Integr8, with web links to the source databases providing further information. Integr8 will provide the integrative layer of the next generation of bioinformatics services from the EBI. Web-based interfaces will be developed to offer gene-centric views of the integrated data, presenting (where known) the links between genome, proteome and phenotype.

  18. Nuclear integrated database and design advancement system

    International Nuclear Information System (INIS)

    Ha, Jae Joo; Jeong, Kwang Sub; Kim, Seung Hwan; Choi, Sun Young.

    1997-01-01

The objective of NuIDEAS is to computerize design processes through an integrated database, eliminating the current work style of delivering hardcopy documents and drawings. The major research contents of NuIDEAS are the advancement of design processes by computerization, the establishment of a design database, and 3-dimensional visualization of design data. KSNP (Korea Standard Nuclear Power Plant) is the target of the legacy database and the 3-dimensional model, so that they can be utilized in the next plant design. In the first year, the blueprint of NuIDEAS was proposed, and its prototype was developed by applying rapidly advancing computer technology. The major results of the first year of research were to establish the architecture of the integrated database ensuring data consistency, and to build the design database of the reactor coolant system and heavy components. Also, various software tools were developed to search, share and utilize the data through networks, detailed 3-dimensional CAD models of nuclear fuel and heavy components were constructed, and walk-through simulation using the models was developed. This report contains the major additions and modifications to the object-oriented database and associated programs, using methods and JavaScript. (author). 36 refs., 1 tab., 32 figs

  19. Optimal database locks for efficient integrity checking

    DEFF Research Database (Denmark)

    Martinenghi, Davide

    2004-01-01

In concurrent database systems, correctness of update transactions refers to the equivalent effects of the execution schedule and some serial schedule over the same set of transactions. Integrity constraints add further semantic requirements to the correctness of the database states reached upon the execution of update transactions. Several methods for efficient integrity checking and enforcing exist. We show in this paper how to apply one such method to automatically extend update transactions with locks and simplified consistency tests on the locked entities. All schedules produced in this way...
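
    As a schematic stand-in (not the paper's method, and note that SQLite locks at database rather than entity granularity), the following sketch shows an update transaction extended with a lock and a simplified consistency test that checks only the row actually touched.

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)  # manual transactions
con.execute("CREATE TABLE emp (id INTEGER PRIMARY KEY, salary INTEGER)")
con.execute("INSERT INTO emp VALUES (1, 900), (2, 1100)")

def raise_salary(emp_id, delta, cap=2000):
    """Extend the update with a lock and a *simplified* integrity test:
    instead of re-checking 'all salaries <= cap' over the whole table,
    only the row touched by the update is tested."""
    con.execute("BEGIN IMMEDIATE")          # take a write lock up front
    try:
        con.execute("UPDATE emp SET salary = salary + ? WHERE id = ?",
                    (delta, emp_id))
        (new_salary,) = con.execute(
            "SELECT salary FROM emp WHERE id = ?", (emp_id,)).fetchone()
        if new_salary > cap:                # simplified incremental check
            raise ValueError("integrity violation")
        con.execute("COMMIT")
    except Exception:
        con.execute("ROLLBACK")
        raise

raise_salary(1, 200)          # ok: 900 -> 1100
try:
    raise_salary(2, 1500)     # would exceed the cap -> rolled back
except ValueError:
    pass
print(con.execute("SELECT * FROM emp ORDER BY id").fetchall())
# -> [(1, 1100), (2, 1100)]
```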

  20. Loopedia, a database for loop integrals

    Science.gov (United States)

    Bogner, C.; Borowka, S.; Hahn, T.; Heinrich, G.; Jones, S. P.; Kerner, M.; von Manteuffel, A.; Michel, M.; Panzer, E.; Papara, V.

    2018-04-01

Loopedia is a new database at loopedia.org for information on Feynman integrals, intended to provide both bibliographic information as well as results made available by the community. Its bibliometry is complementary to that of INSPIRE or arXiv in the sense that it admits searching for integrals by graph-theoretical objects, e.g. their topology.

  1. Functional integration of automated system databases by means of artificial intelligence

    Science.gov (United States)

    Dubovoi, Volodymyr M.; Nikitenko, Olena D.; Kalimoldayev, Maksat; Kotyra, Andrzej; Gromaszek, Konrad; Iskakova, Aigul

    2017-08-01

The paper presents approaches for the functional integration of automated system databases by means of artificial intelligence. The peculiarities of using such databases in systems with a fuzzy implementation of functions are analyzed, and requirements for the normalization of such databases are defined. The question of data equivalence under uncertainty, and of collisions arising when databases are functionally integrated, is considered, and a model to reveal their possible occurrence is devised. The paper also presents a method for evaluating the normalization of the integrated databases.

  2. Development of an integrated database management system to evaluate integrity of flawed components of nuclear power plant

    International Nuclear Information System (INIS)

    Mun, H. L.; Choi, S. N.; Jang, K. S.; Hong, S. Y.; Choi, J. B.; Kim, Y. J.

    2001-01-01

The objective of this paper is to develop an NPP-IDBMS (Integrated DataBase Management System for Nuclear Power Plants) for evaluating the integrity of components of nuclear power plants using a relational data model. This paper describes the relational data model, structure, and development strategy for the proposed NPP-IDBMS. The NPP-IDBMS consists of database, database management system, and interface parts. The database part consists of plant, shape, operating condition, material properties, and stress databases, which are required for the integrity evaluation of each component in nuclear power plants. For the development of the stress database, extensive finite element analyses were performed for various components considering operational transients. The developed NPP-IDBMS will provide an efficient and accurate way to evaluate the integrity of flawed components.

  3. Integrated Space Asset Management Database and Modeling

    Science.gov (United States)

    MacLeod, Todd; Gagliano, Larry; Percy, Thomas; Mason, Shane

    2015-01-01

Effective Space Asset Management is one key to addressing the ever-growing issue of space congestion. It is imperative that agencies around the world have access to data regarding the numerous active assets and pieces of space junk currently tracked in orbit around the Earth. At the center of this issue is the effective management of data of many types related to orbiting objects. As the population of tracked objects grows, so too should the data management structure used to catalog technical specifications, orbital information, and metadata related to those populations. Marshall Space Flight Center's Space Asset Management Database (SAM-D) was implemented in order to effectively catalog a broad set of data related to known objects in space by ingesting information from a variety of databases and processing those data into useful technical information. Using the universal NORAD number as a unique identifier, the SAM-D processes two-line element data into orbital characteristics and cross-references this technical data with metadata related to functional status, country of ownership, and application category. The SAM-D began as an Excel spreadsheet and was later upgraded to an Access database. While SAM-D performs its task very well, it is limited by its current platform and is not available outside of the local user base. Further, while modeling and simulation can be powerful tools to exploit the information contained in SAM-D, the current system does not allow proper integration options for combining the data with both legacy and new M&S tools. This paper provides a summary of SAM-D development efforts to date and outlines a proposed data management infrastructure that extends SAM-D to support the larger data sets to be generated. A service-oriented architecture model using an information sharing platform named SIMON will allow it to easily expand to incorporate new capabilities, including advanced analytics, M&S tools, fusion techniques and user interface for
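
    SAM-D's actual processing is not public here; the small sketch below shows the kind of two-line-element derivation the abstract mentions, turning the mean-motion field into basic orbital characteristics. The sample mean motion is illustrative.

```python
import math

MU_EARTH = 398600.4418          # Earth's gravitational parameter, km^3/s^2

def orbit_from_mean_motion(mean_motion_rev_per_day):
    """Derive basic orbital characteristics from the mean-motion field of
    a two-line element set (the kind of processing described above)."""
    period_s = 86400.0 / mean_motion_rev_per_day
    n = 2.0 * math.pi / period_s                  # rad/s
    semi_major_km = (MU_EARTH / n**2) ** (1.0 / 3.0)
    return period_s, semi_major_km

# ISS-like mean motion of ~15.5 rev/day:
period, a = orbit_from_mean_motion(15.5)
print(f"period ≈ {period/60:.1f} min, semi-major axis ≈ {a:.0f} km")
# -> roughly 92.9 min and ~6790 km
```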

  4. Construction of an ortholog database using the semantic web technology for integrative analysis of genomic data.

    Science.gov (United States)

    Chiba, Hirokazu; Nishide, Hiroyo; Uchiyama, Ikuo

    2015-01-01

    Recently, various types of biological data, including genomic sequences, have been rapidly accumulating. To discover biological knowledge from such growing heterogeneous data, a flexible framework for data integration is necessary. Ortholog information is a central resource for interlinking corresponding genes among different organisms, and the Semantic Web provides a key technology for the flexible integration of heterogeneous data. We have constructed an ortholog database using the Semantic Web technology, aiming at the integration of numerous genomic data and various types of biological information. To formalize the structure of the ortholog information in the Semantic Web, we have constructed the Ortholog Ontology (OrthO). While the OrthO is a compact ontology for general use, it is designed to be extended to the description of database-specific concepts. On the basis of OrthO, we described the ortholog information from our Microbial Genome Database for Comparative Analysis (MBGD) in the form of Resource Description Framework (RDF) and made it available through the SPARQL endpoint, which accepts arbitrary queries specified by users. In this framework based on the OrthO, the biological data of different organisms can be integrated using the ortholog information as a hub. Besides, the ortholog information from different data sources can be compared with each other using the OrthO as a shared ontology. Here we show some examples demonstrating that the ortholog information described in RDF can be used to link various biological data such as taxonomy information and Gene Ontology. Thus, the ortholog database using the Semantic Web technology can contribute to biological knowledge discovery through integrative data analysis.
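
    A client of such a SPARQL endpoint might look like the following SPARQLWrapper sketch; the endpoint URL and the orth: vocabulary are placeholders, not the real MBGD/OrthO names.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Endpoint URL and property names are hypothetical; they only
# illustrate the access pattern for an ortholog SPARQL endpoint.
sparql = SPARQLWrapper("http://example.org/sparql")
sparql.setQuery("""
    PREFIX orth: <http://example.org/ortho#>
    SELECT ?group ?gene WHERE {
        ?group a orth:OrthologGroup ;
               orth:member ?gene .
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
for b in results["results"]["bindings"]:
    print(b["group"]["value"], b["gene"]["value"])
```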

  5. Automatic numerical integration methods for Feynman integrals through 3-loop

    International Nuclear Information System (INIS)

    De Doncker, E; Olagbemi, O; Yuasa, F; Ishikawa, T; Kato, K

    2015-01-01

We give numerical integration results for Feynman loop diagrams through 3-loop such as those covered by Laporta [1]. The methods are based on automatic adaptive integration, using iterated integration and extrapolation with programs from the QUADPACK package, or multivariate techniques from the ParInt package. The DQAGS algorithm from QUADPACK accommodates boundary singularities of fairly general types. ParInt is a package for multivariate integration layered over MPI (Message Passing Interface), which runs on clusters and incorporates advanced parallel/distributed techniques such as load balancing among processes that may be distributed over a network of nodes. Results are included for 3-loop self-energy diagrams without IR (infra-red) or UV (ultra-violet) singularities. A procedure based on iterated integration and extrapolation yields a novel method of numerical regularization for integrals with UV terms, and is applied to a set of 2-loop self-energy diagrams with UV singularities. (paper)
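
    The following sketch (not the authors' code) mimics the two ingredients named above on a harmless example: iterated one-dimensional quadrature, and extrapolation to the limit of a regulator parameter going to zero, here done with a simple polynomial fit rather than the epsilon algorithm.

```python
import numpy as np
from scipy.integrate import quad

def inner(y, eps):
    return quad(lambda x: (x * y) ** eps / (1.0 + x * y), 0.0, 1.0)[0]

def iterated(eps):
    """Iterated (one-dimension-at-a-time) integration, QUADPACK-style."""
    return quad(lambda y: inner(y, eps), 0.0, 1.0)[0]

# Evaluate at a sequence of regulator values eps -> 0 and extrapolate
# to the limit with an interpolating polynomial fit.
eps = np.array([0.4, 0.2, 0.1, 0.05, 0.025])
vals = np.array([iterated(e) for e in eps])
limit = np.polyval(np.polyfit(eps, vals, deg=len(eps) - 1), 0.0)
print(limit, np.pi**2 / 12)   # extrapolated value vs exact pi^2/12
```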

  6. A database of immunoglobulins with integrated tools: DIGIT.

    KAUST Repository

    Chailyan, Anna; Tramontano, Anna; Marcatili, Paolo

    2011-01-01

    The DIGIT (Database of ImmunoGlobulins with Integrated Tools) database (http://biocomputing.it/digit) is an integrated resource storing sequences of annotated immunoglobulin variable domains and enriched with tools for searching and analyzing them. The annotations in the database include information on the type of antigen, the respective germline sequences and on pairing information between light and heavy chains. Other annotations, such as the identification of the complementarity determining regions, assignment of their structural class and identification of mutations with respect to the germline, are computed on the fly and can also be obtained for user-submitted sequences. The system allows customized BLAST searches and automatic building of 3D models of the domains to be performed.

  7. A database of immunoglobulins with integrated tools: DIGIT.

    KAUST Repository

    Chailyan, Anna

    2011-11-10

    The DIGIT (Database of ImmunoGlobulins with Integrated Tools) database (http://biocomputing.it/digit) is an integrated resource storing sequences of annotated immunoglobulin variable domains and enriched with tools for searching and analyzing them. The annotations in the database include information on the type of antigen, the respective germline sequences and on pairing information between light and heavy chains. Other annotations, such as the identification of the complementarity determining regions, assignment of their structural class and identification of mutations with respect to the germline, are computed on the fly and can also be obtained for user-submitted sequences. The system allows customized BLAST searches and automatic building of 3D models of the domains to be performed.

  8. Numerical integration of asymptotic solutions of ordinary differential equations

    Science.gov (United States)

    Thurston, Gaylen A.

    1989-01-01

    Classical asymptotic analysis of ordinary differential equations derives approximate solutions that are numerically stable. However, the analysis also leads to tedious expansions in powers of the relevant parameter for a particular problem. The expansions are replaced with integrals that can be evaluated by numerical integration. The resulting numerical solutions retain the linear independence that is the main advantage of asymptotic solutions. Examples, including the Falkner-Skan equation from laminar boundary layer theory, illustrate the method of asymptotic analysis with numerical integration.

  9. A numerical method for resonance integral calculations

    International Nuclear Information System (INIS)

    Tanbay, Tayfun; Ozgener, Bilge

    2013-01-01

    A numerical method has been proposed for resonance integral calculations and a cubic fit based on least squares approximation to compute the optimum Bell factor is given. The numerical method is based on the discretization of the neutron slowing down equation. The scattering integral is approximated by taking into account the location of the upper limit in energy domain. The accuracy of the method has been tested by performing computations of resonance integrals for uranium dioxide isolated rods and comparing the results with empirical values. (orig.)
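
    The paper's discretization is not reproduced here; as a minimal numerical illustration of a resonance integral, the sketch below evaluates I = ∫ σ(E) dE/E for a single-level Breit-Wigner resonance with rough, illustrative parameters.

```python
import numpy as np
from scipy.integrate import trapezoid

# Roughly U-238 6.67 eV resonance values, used only to make the
# quadrature concrete (illustrative, not evaluated nuclear data).
E0, gamma, sigma0 = 6.67, 0.025, 2.2e4     # eV, eV, barns

def sigma(E):
    """Single-level Breit-Wigner capture cross section."""
    return sigma0 * (gamma / 2.0) ** 2 / ((E - E0) ** 2 + (gamma / 2.0) ** 2)

E = np.linspace(E0 - 50 * gamma, E0 + 50 * gamma, 20001)
I = trapezoid(sigma(E) / E, E)             # I = ∫ sigma(E) dE / E
print(f"resonance integral ≈ {I:.1f} barns")
```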

  10. High speed numerical integration algorithm using FPGA | Razak ...

    African Journals Online (AJOL)

Conventionally, numerical integration algorithms are executed in software and are time-consuming to accomplish. Field Programmable Gate Arrays (FPGAs) can be used as a much faster, very efficient and reliable alternative for implementing numerical integration algorithms. This paper proposes a hardware implementation of four ...

  11. On Simplification of Database Integrity Constraints

    DEFF Research Database (Denmark)

    Christiansen, Henning; Martinenghi, Davide

    2006-01-01

    Without proper simplification techniques, database integrity checking can be prohibitively time consuming. Several methods have been developed for producing simplified incremental checks for each update but none until now of sufficient quality and generality for providing a true practical impact,...

  12. Numerical method of singular problems on singular integrals

    International Nuclear Information System (INIS)

    Zhao Huaiguo; Mou Zongze

    1992-02-01

As the first part of numerical research on singular problems, a numerical method is proposed for singular integrals. It is shown that the procedure is quite powerful for physics calculations with singularities, such as the plasma dispersion function. Useful quadrature formulas for some classes of singular integrals are derived. In general, integrals with more complex singularities can be dealt with easily by this method.
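
    One standard device in this area (not necessarily the paper's own quadrature) is to remove a principal-value singularity analytically before quadrature, as in this sketch:

```python
import numpy as np
from scipy.integrate import quad

def pv_integral(f, a):
    """Principal value of ∫_{-1}^{1} f(x)/(x-a) dx for -1 < a < 1,
    computed by subtracting the singularity analytically:
    PV ∫ f/(x-a) = ∫ (f(x)-f(a))/(x-a) dx + f(a)·ln((1-a)/(1+a))."""
    fa = f(a)
    regular = quad(lambda x: (f(x) - fa) / (x - a), -1.0, 1.0,
                   points=[a])[0]          # regularized integrand
    return regular + fa * np.log((1.0 - a) / (1.0 + a))

print(pv_integral(np.exp, 0.3))
```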

  13. Tight-coupling of groundwater flow and transport modelling engines with spatial databases and GIS technology: a new approach integrating Feflow and ArcGIS

    Directory of Open Access Journals (Sweden)

    Ezio Crestaz

    2012-09-01

Full Text Available Implementation of groundwater flow and transport numerical models is generally a challenging, time-consuming and financially demanding task, in the charge of specialized modelers and consulting firms. At a later stage, within clearly stated limits of applicability, these models are often expected to be made available to less knowledgeable personnel to support the design and running of predictive simulations within more familiar environments than specialized simulation systems. GIS systems coupled with spatial databases appear to be ideal candidates to address this problem, due to their much wider diffusion and available expertise. This paper discusses the issue from a tight-coupling architecture perspective, aimed at the integration of spatial databases, GIS and numerical simulation engines, addressing both observed and computed data management, retrieval and spatio-temporal analysis issues. Observed data can be migrated to the central database repository and then used to set up transient simulation conditions in the background, at run time, while limiting the additional complexity and integrity-failure risks, such as data duplication, of data transfer through proprietary file formats. Similarly, simulation scenarios can be set up in a familiar GIS system and stored in the spatial database for later reference. As the numerical engine is tightly coupled with the GIS, simulations can be run within the environment and the results themselves saved to the database. Further tasks, such as spatio-temporal analysis (e.g. for post-calibration auditing), cartography production and geovisualization, can then be addressed using traditional GIS tools. Benefits of such an approach include more effective data management practices, integration and availability of modeling facilities in a familiar environment, and streamlined spatial analysis processes and geovisualization for the non-modeler community. Major drawbacks include limited 3D and time-dependent support in

  14. Integrating heterogeneous databases in clustered medic care environments using object-oriented technology

    Science.gov (United States)

    Thakore, Arun K.; Sauer, Frank

    1994-05-01

    The organization of modern medical care environments into disease-related clusters, such as a cancer center, a diabetes clinic, etc., has the side-effect of introducing multiple heterogeneous databases, often containing similar information, within the same organization. This heterogeneity fosters incompatibility and prevents the effective sharing of data amongst applications at different sites. Although integration of heterogeneous databases is now feasible, in the medical arena this is often an ad hoc process, not founded on proven database technology or formal methods. In this paper we illustrate the use of a high-level object- oriented semantic association method to model information found in different databases into an integrated conceptual global model that integrates the databases. We provide examples from the medical domain to illustrate an integration approach resulting in a consistent global view, without attacking the autonomy of the underlying databases.

  15. An Integrative Theory of Numerical Development

    Science.gov (United States)

    Siegler, Robert; Lortie-Forgues, Hugues

    2014-01-01

    Understanding of numerical development is growing rapidly, but the volume and diversity of findings can make it difficult to perceive any coherence in the process. The integrative theory of numerical development posits that a coherent theme is present, however--progressive broadening of the set of numbers whose magnitudes can be accurately…

  16. Heterogeneous Biomedical Database Integration Using a Hybrid Strategy: A p53 Cancer Research Database

    Directory of Open Access Journals (Sweden)

    Vadim Y. Bichutskiy

    2006-01-01

Full Text Available Complex problems in life science research give rise to multidisciplinary collaboration, and hence, to the need for heterogeneous database integration. The tumor suppressor p53 is mutated in close to 50% of human cancers, and a small drug-like molecule with the ability to restore native function to cancerous p53 mutants is a long-held medical goal of cancer treatment. The Cancer Research DataBase (CRDB) was designed in support of a project to find such small molecules. As a cancer informatics project, the CRDB involved small molecule data, computational docking results, functional assays, and protein structure data. As an example of the hybrid strategy for data integration, it combined the mediation and data warehousing approaches. This paper uses the CRDB to illustrate the hybrid strategy as a viable approach to heterogeneous data integration in biomedicine, and provides a design method for those considering similar systems. More efficient data sharing implies increased productivity and, hopefully, improved chances of success in cancer research. (Code and database schemas are freely downloadable at http://www.igb.uci.edu/research/research.html.)

  17. A delta-rule model of numerical and non-numerical order processing.

    Science.gov (United States)

    Verguts, Tom; Van Opstal, Filip

    2014-06-01

    Numerical and non-numerical order processing share empirical characteristics (distance effect and semantic congruity), but there are also important differences (in size effect and end effect). At the same time, models and theories of numerical and non-numerical order processing developed largely separately. Currently, we combine insights from 2 earlier models to integrate them in a common framework. We argue that the same learning principle underlies numerical and non-numerical orders, but that environmental features determine the empirical differences. Implications for current theories on order processing are pointed out. PsycINFO Database Record (c) 2014 APA, all rights reserved.
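
    The published model is richer than this; the toy numpy sketch below only shows the delta-rule principle applied to learning ordinal position codes, with invented items and feedback.

```python
import numpy as np

# A toy delta-rule learner for ordinal position: each item is a one-hot
# input, and the network learns a scalar "position" weight from feedback.
# This is only a schematic stand-in for the model described above.
rng = np.random.default_rng(1)
n_items = 5
w = np.zeros(n_items)                  # learned position codes
alpha = 0.1                            # learning rate

for _ in range(2000):
    i = rng.integers(n_items)
    target = i / (n_items - 1)         # environmental ordinal feedback
    w[i] += alpha * (target - w[i])    # delta rule: w += a*(t - y)*x

print(np.round(w, 2))                  # ≈ [0.   0.25 0.5  0.75 1.  ]
# Nearby items acquire similar codes, which yields a distance effect:
# discriminating items 1 vs 2 is harder than items 1 vs 4.
```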

  18. KaBOB: ontology-based semantic integration of biomedical databases.

    Science.gov (United States)

    Livingston, Kevin M; Bada, Michael; Baumgartner, William A; Hunter, Lawrence E

    2015-04-23

    The ability to query many independent biological databases using a common ontology-based semantic model would facilitate deeper integration and more effective utilization of these diverse and rapidly growing resources. Despite ongoing work moving toward shared data formats and linked identifiers, significant problems persist in semantic data integration in order to establish shared identity and shared meaning across heterogeneous biomedical data sources. We present five processes for semantic data integration that, when applied collectively, solve seven key problems. These processes include making explicit the differences between biomedical concepts and database records, aggregating sets of identifiers denoting the same biomedical concepts across data sources, and using declaratively represented forward-chaining rules to take information that is variably represented in source databases and integrating it into a consistent biomedical representation. We demonstrate these processes and solutions by presenting KaBOB (the Knowledge Base Of Biomedicine), a knowledge base of semantically integrated data from 18 prominent biomedical databases using common representations grounded in Open Biomedical Ontologies. An instance of KaBOB with data about humans and seven major model organisms can be built using on the order of 500 million RDF triples. All source code for building KaBOB is available under an open-source license. KaBOB is an integrated knowledge base of biomedical data representationally based in prominent, actively maintained Open Biomedical Ontologies, thus enabling queries of the underlying data in terms of biomedical concepts (e.g., genes and gene products, interactions and processes) rather than features of source-specific data schemas or file formats. KaBOB resolves many of the issues that routinely plague biomedical researchers intending to work with data from multiple data sources and provides a platform for ongoing data integration and development and for

  19. Network and Database Security: Regulatory Compliance, Network, and Database Security - A Unified Process and Goal

    Directory of Open Access Journals (Sweden)

    Errol A. Blake

    2007-12-01

Full Text Available Database security has evolved; data security professionals have developed numerous techniques and approaches to assure data confidentiality, integrity, and availability. This paper will show that Traditional Database Security, which has focused primarily on creating user accounts and managing user privileges to database objects, is not enough to protect data confidentiality, integrity, and availability. This paper, a compilation of different journals, articles and classroom discussions, will focus on unifying the process of securing data or information whether it is in use, in storage or being transmitted. Promoting a change in database curriculum development trends may also play a role in helping secure databases. This paper takes the approach that a conscientious effort to unify the database security process, which includes the database management system (DBMS) selection process, following regulatory compliance, analyzing and learning from the mistakes of others, implementing networking security technologies, and securing the database, may prevent database breaches.

  20. A Numerical Study of Quantization-Based Integrators

    Directory of Open Access Journals (Sweden)

    Barros Fernando

    2014-01-01

    Full Text Available Adaptive step size solvers are nowadays considered fundamental to achieve efficient ODE integration. While, traditionally, ODE solvers have been designed based on discrete time machines, new approaches based on discrete event systems have been proposed. Quantization provides an efficient integration technique based on signal threshold crossing, leading to independent and modular solvers communicating through discrete events. These solvers can benefit from the large body of knowledge on discrete event simulation techniques, like parallelization, to obtain efficient numerical integration. In this paper we introduce new solvers based on quantization and adaptive sampling techniques. Preliminary numerical results comparing these solvers are presented.
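
    As a minimal sketch of the quantization idea (first-order QSS on dx/dt = -x, without the hysteresis a production solver would add):

```python
import math

def qss1(x0=1.0, quantum=0.01, t_end=5.0):
    """First-order quantized-state integration of dx/dt = -x: instead of
    stepping in time, advance to the instant where the state crosses the
    next quantization level (schematic, no hysteresis)."""
    t, x = 0.0, x0
    q = x0                       # quantized state seen by the derivative
    events = [(t, x)]
    while t < t_end:
        dx = -q                  # derivative held constant between events
        if dx == 0.0:
            break
        dt = quantum / abs(dx)   # time to move exactly one quantum
        t += dt
        x += dx * dt             # x changes by +/- one quantum
        q = x
        events.append((t, x))
    return events

t, x = qss1()[-1]
print(x, math.exp(-t))           # QSS1 tracks e^{-t} within ~one quantum
```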

  1. Numerical methods for engine-airframe integration

    International Nuclear Information System (INIS)

    Murthy, S.N.B.; Paynter, G.C.

    1986-01-01

    Various papers on numerical methods for engine-airframe integration are presented. The individual topics considered include: scientific computing environment for the 1980s, overview of prediction of complex turbulent flows, numerical solutions of the compressible Navier-Stokes equations, elements of computational engine/airframe integrations, computational requirements for efficient engine installation, application of CAE and CFD techniques to complete tactical missile design, CFD applications to engine/airframe integration, and application of a second-generation low-order panel methods to powerplant installation studies. Also addressed are: three-dimensional flow analysis of turboprop inlet and nacelle configurations, application of computational methods to the design of large turbofan engine nacelles, comparison of full potential and Euler solution algorithms for aeropropulsive flow field computations, subsonic/transonic, supersonic nozzle flows and nozzle integration, subsonic/transonic prediction capabilities for nozzle/afterbody configurations, three-dimensional viscous design methodology of supersonic inlet systems for advanced technology aircraft, and a user's technology assessment

  2. Cuba: Multidimensional numerical integration library

    Science.gov (United States)

    Hahn, Thomas

    2016-08-01

The Cuba library offers four independent routines for multidimensional numerical integration: Vegas, Suave, Divonne, and Cuhre. The four algorithms work by very different methods; all can integrate vector integrands and have very similar Fortran, C/C++, and Mathematica interfaces. Their invocation is very similar, making it easy to cross-check by substituting one method by another. For further safeguarding, the output is supplemented by a chi-square probability which quantifies the reliability of the error estimate.
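
    Cuba's own algorithms are far more sophisticated; the sketch below only imitates its reporting style, combining independent batch estimates and attaching a chi-square probability that flags an unreliable error estimate.

```python
import numpy as np
from scipy import stats

def mc_integrate(f, dim, n=100_000, batches=10, seed=0):
    """Plain Monte Carlo on the unit hypercube with Cuba-style reporting:
    estimate, error, and a chi-square probability (schematic; not the
    actual Cuba algorithms)."""
    rng = np.random.default_rng(seed)
    ests, errs = [], []
    for _ in range(batches):
        y = f(rng.random((n // batches, dim)))
        ests.append(y.mean())
        errs.append(y.std(ddof=1) / np.sqrt(y.size))
    ests, errs = np.array(ests), np.array(errs)
    w = 1.0 / errs**2
    est = (w * ests).sum() / w.sum()        # inverse-variance combination
    err = w.sum() ** -0.5
    chi2 = ((ests - est) ** 2 * w).sum()    # agreement between batches
    prob = stats.chi2.sf(chi2, df=batches - 1)  # small prob = suspect error
    return est, err, prob

f = lambda x: np.prod(np.sin(np.pi * x), axis=1)   # exact: (2/pi)**dim
print(mc_integrate(f, dim=3), (2 / np.pi) ** 3)
```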

  3. SINBAD: Shielding integral benchmark archive and database

    International Nuclear Information System (INIS)

    Hunter, H.T.; Ingersoll, D.T.; Roussin, R.W.

    1996-01-01

    SINBAD is a new electronic database developed to store a variety of radiation shielding benchmark data so that users can easily retrieve and incorporate the data into their calculations. SINBAD is an excellent data source for users who require the quality assurance necessary in developing cross-section libraries or radiation transport codes. The future needs of the scientific community are best served by the electronic database format of SINBAD and its user-friendly interface, combined with its data accuracy and integrity

  4. Loop integration results using numerical extrapolation for a non-scalar integral

    International Nuclear Information System (INIS)

    Doncker, E. de; Shimizu, Y.; Fujimoto, J.; Yuasa, F.; Kaugars, K.; Cucos, L.; Van Voorst, J.

    2004-01-01

    Loop integration results have been obtained using numerical integration and extrapolation. An extrapolation to the limit is performed with respect to a parameter in the integrand which tends to zero. Results are given for a non-scalar four-point diagram. Extensions to accommodate loop integration by existing integration packages are also discussed. These include: using previously generated partitions of the domain and roundoff error guards

  5. Using XML technology for the ontology-based semantic integration of life science databases.

    Science.gov (United States)

    Philippi, Stephan; Köhler, Jacob

    2004-06-01

    Several hundred internet accessible life science databases with constantly growing contents and varying areas of specialization are publicly available via the internet. Database integration, consequently, is a fundamental prerequisite to be able to answer complex biological questions. Due to the presence of syntactic, schematic, and semantic heterogeneities, large scale database integration at present takes considerable efforts. As there is a growing apprehension of extensible markup language (XML) as a means for data exchange in the life sciences, this article focuses on the impact of XML technology on database integration in this area. In detail, a general architecture for ontology-driven data integration based on XML technology is introduced, which overcomes some of the traditional problems in this area. As a proof of concept, a prototypical implementation of this architecture based on a native XML database and an expert system shell is described for the realization of a real world integration scenario.

  6. Numerical evaluation of tensor Feynman integrals in Euclidean kinematics

    Energy Technology Data Exchange (ETDEWEB)

Gluza, J.; Kajda, K. [Silesia Univ., Katowice (Poland). Inst. of Physics]; Riemann, T.; Yundin, V. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)]

    2010-10-15

For the investigation of higher order Feynman integrals, potentially with tensor structure, it is highly desirable to have numerical methods and automated tools for dedicated, but sufficiently 'simple' numerical approaches. We elaborate two algorithms for this purpose which may be applied in the Euclidean kinematical region and in d=4-2ε dimensions. One method uses Mellin-Barnes representations for the Feynman parameter representation of multi-loop Feynman integrals with arbitrary tensor rank. Our Mathematica package AMBRE has been extended for that purpose, and together with the packages MB (M. Czakon) or MBresolve (A. V. Smirnov and V. A. Smirnov) one may perform automatically a numerical evaluation of planar tensor Feynman integrals. Alternatively, one may apply sector decomposition to planar and non-planar multi-loop ε-expanded Feynman integrals with arbitrary tensor rank. We automatized the preparations of Feynman integrals for an immediate application of the package sectordecomposition (C. Bogner and S. Weinzierl) so that one has to give only a proper definition of propagators and numerators. The efficiency of the two implementations, based on Mellin-Barnes representations and sector decompositions, is compared. The computational packages are publicly available. (orig.)

  7. Data integration for plant genomics--exemplars from the integration of Arabidopsis thaliana databases.

    Science.gov (United States)

Lysenko, Artem; Hindle, Matthew Morritt; Taubert, Jan; Saqi, Mansoor; Rawlings, Christopher John

    2009-11-01

    The development of a systems based approach to problems in plant sciences requires integration of existing information resources. However, the available information is currently often incomplete and dispersed across many sources and the syntactic and semantic heterogeneity of the data is a challenge for integration. In this article, we discuss strategies for data integration and we use a graph based integration method (Ondex) to illustrate some of these challenges with reference to two example problems concerning integration of (i) metabolic pathway and (ii) protein interaction data for Arabidopsis thaliana. We quantify the degree of overlap for three commonly used pathway and protein interaction information sources. For pathways, we find that the AraCyc database contains the widest coverage of enzyme reactions and for protein interactions we find that the IntAct database provides the largest unique contribution to the integrated dataset. For both examples, however, we observe a relatively small amount of data common to all three sources. Analysis and visual exploration of the integrated networks was used to identify a number of practical issues relating to the interpretation of these datasets. We demonstrate the utility of these approaches to the analysis of groups of coexpressed genes from an individual microarray experiment, in the context of pathway information and for the combination of coexpression data with an integrated protein interaction network.

  8. On the applicability of schema integration techniques to database interoperation

    NARCIS (Netherlands)

    Vermeer, Mark W.W.; Apers, Peter M.G.

    1996-01-01

    We discuss the applicability of schema integration techniques developed for tightly-coupled database interoperation to interoperation of databases stemming from different modelling contexts. We illustrate that in such an environment, it is typically quite difficult to infer the real-world semantics

  9. Numerical time integration for air pollution models

    NARCIS (Netherlands)

    J.G. Verwer (Jan); W. Hundsdorfer (Willem); J.G. Blom (Joke)

    1998-01-01

Due to the large number of chemical species and the three space dimensions, off-the-shelf stiff ODE integrators are not feasible for the numerical time integration of stiff systems of advection-diffusion-reaction equations ∂c/∂t + ∇·(uc) = ∇·(K∇c) + R(c).
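
    A method-of-lines sketch of such a stiff advection-diffusion-reaction system, handed to an implicit (BDF) integrator in line with the abstract's theme; the grid, coefficients, and periodic boundary treatment are invented.

```python
import numpy as np
from scipy.integrate import solve_ivp

# 1-D advection-diffusion-reaction, dc/dt + d(uc)/dx = K d2c/dx2 - k_r c,
# discretized in space (method of lines) and integrated with BDF.
N, L, u, K, k_r = 200, 1.0, 1.0, 1e-3, 50.0   # stiff linear reaction term
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]

def rhs(t, c):
    adv = -u * (c - np.roll(c, 1)) / dx                       # upwind, periodic
    dif = K * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
    return adv + dif - k_r * c

c0 = np.exp(-200.0 * (x - 0.3) ** 2)          # initial concentration pulse
sol = solve_ivp(rhs, (0.0, 0.5), c0, method="BDF", rtol=1e-6, atol=1e-9)
print(sol.status, sol.y[:, -1].max())
```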

  10. Design of Integrated Database on Mobile Information System: A Study of Yogyakarta Smart City App

    Science.gov (United States)

    Nurnawati, E. K.; Ermawati, E.

    2018-02-01

An integration database is a database which acts as the data store for multiple applications, and thus integrates data across these applications (in contrast to an application database). An integration database needs a schema that takes all its client applications into account. The benefit of such a schema is that sharing data among applications does not require an extra layer of integration services on the applications; any changes to data made in a single application are made available to all applications at database commit time, thus keeping the applications' data better synchronized. This study aims to design and build an integrated database that can be used by various applications on a mobile device platform for a smart city system. The built database can be used by various applications, whether together or separately. The design and development of the database emphasize flexibility, security, and completeness of the attributes that can be shared by the various applications to be built. The method used in this study is to choose an appropriate logical database structure (patterns of data) and to build the relational database model. The resulting design was tested with prototype apps, and system performance was analyzed with test data. The integrated database can be utilized by both admins and users in an integral and comprehensive platform. This system can help admins, managers, and operators manage the application easily and efficiently. This Android-based app is built on a dynamic client-server model where data are extracted from an external MySQL database, so if data change in the database, the data in the Android applications also change. The app assists users in searching for information related to Yogyakarta (as a smart city), especially culture, government, hotels, and transportation.

  11. Numerical solution of boundary-integral equations for molecular electrostatics.

    Science.gov (United States)

    Bardhan, Jaydeep P

    2009-03-07

    Numerous molecular processes, such as ion permeation through channel proteins, are governed by relatively small changes in energetics. As a result, theoretical investigations of these processes require accurate numerical methods. In the present paper, we evaluate the accuracy of two approaches to simulating boundary-integral equations for continuum models of the electrostatics of solvation. The analysis emphasizes boundary-element method simulations of the integral-equation formulation known as the apparent-surface-charge (ASC) method or polarizable-continuum model (PCM). In many numerical implementations of the ASC/PCM model, one forces the integral equation to be satisfied exactly at a set of discrete points on the boundary. We demonstrate in this paper that this approach to discretization, known as point collocation, is significantly less accurate than an alternative approach known as qualocation. Furthermore, the qualocation method offers this improvement in accuracy without increasing simulation time. Numerical examples demonstrate that electrostatic part of the solvation free energy, when calculated using the collocation and qualocation methods, can differ significantly; for a polypeptide, the answers can differ by as much as 10 kcal/mol (approximately 4% of the total electrostatic contribution to solvation). The applicability of the qualocation discretization to other integral-equation formulations is also discussed, and two equivalences between integral-equation methods are derived.

  12. Comparison of direct numerical simulation databases of turbulent channel flow at $Re_{\tau}$ = 180

    NARCIS (Netherlands)

    Vreman, A.W.; Kuerten, J.G.M.

    2014-01-01

    Direct numerical simulation (DNS) databases are compared to assess the accuracy and reproducibility of standard and non-standard turbulence statistics of incompressible plane channel flow at $Re_{\tau}$ = 180. Two fundamentally different DNS codes are shown to produce maximum relative deviations below 0.2%.

  13. Construction of an integrated database to support genomic sequence analysis

    Energy Technology Data Exchange (ETDEWEB)

    Gilbert, W.; Overbeek, R.

    1994-11-01

    The central goal of this project is to develop an integrated database to support comparative analysis of genomes including DNA sequence data, protein sequence data, gene expression data and metabolism data. In developing the logic-based system GenoBase, a broader integration of available data was achieved due to assistance from collaborators. Current goals are to easily include new forms of data as they become available and to easily navigate through the ensemble of objects described within the database. This report comments on progress made in these areas.

  14. Numerical integration subprogrammes in Fortran II-D

    Energy Technology Data Exchange (ETDEWEB)

    Fry, C. R.

    1966-12-15

    This note briefly describes some integration subprogrammes written in FORTRAN II-D for the IBM 1620-II at CARDE. Those presented are two Newton-Cotes formulae, Chebyshev polynomial summation, Filon's, Nordsieck's, and optimum Runge-Kutta and predictor-corrector methods. A few miscellaneous numerical integration procedures are also mentioned, covering statistical functions, oscillating integrands and functions occurring in electrical engineering.
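
    The note's routines predate modern libraries; as a present-day counterpart, here is a composite Simpson rule, one of the closed Newton-Cotes formulae it describes, sketched in Python rather than FORTRAN II-D.

        import numpy as np

        # Composite Simpson rule -- a closed Newton-Cotes formula of the kind
        # the note describes.
        def simpson(f, a, b, n=100):
            """Integrate f over [a, b] with n (even) subintervals."""
            if n % 2:
                n += 1                            # Simpson needs an even count
            x = np.linspace(a, b, n + 1)
            y = f(x)
            h = (b - a) / n
            return h / 3.0 * (y[0] + y[-1] + 4.0 * y[1:-1:2].sum() + 2.0 * y[2:-1:2].sum())

        print(simpson(np.sin, 0.0, np.pi))        # ~2.0, the exact value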

  15. Numerical databases in marine biology

    Digital Repository Service at National Institute of Oceanography (India)

    Sarupria, J.S.; Bhargava, R.M.S.

  16. MitBASE : a comprehensive and integrated mitochondrial DNA database. The present status

    NARCIS (Netherlands)

    Attimonelli, M.; Altamura, N.; Benne, R.; Brennicke, A.; Cooper, J. M.; D'Elia, D.; Montalvo, A.; Pinto, B.; de Robertis, M.; Golik, P.; Knoop, V.; Lanave, C.; Lazowska, J.; Licciulli, F.; Malladi, B. S.; Memeo, F.; Monnerot, M.; Pasimeni, R.; Pilbout, S.; Schapira, A. H.; Sloof, P.; Saccone, C.

    2000-01-01

    MitBASE is an integrated and comprehensive database of mitochondrial DNA data which collects, under a single interface, databases for Plant, Vertebrate, Invertebrate, Human, Protist and Fungal mtDNA and a Pilot database on nuclear genes involved in mitochondrial biogenesis in Saccharomyces cerevisiae.

  17. Building an integrated neurodegenerative disease database at an academic health center.

    Science.gov (United States)

    Xie, Sharon X; Baek, Young; Grossman, Murray; Arnold, Steven E; Karlawish, Jason; Siderowf, Andrew; Hurtig, Howard; Elman, Lauren; McCluskey, Leo; Van Deerlin, Vivianna; Lee, Virginia M-Y; Trojanowski, John Q

    2011-07-01

    It is becoming increasingly important to study common and distinct etiologies, clinical and pathological features, and mechanisms related to neurodegenerative diseases such as Alzheimer's disease, Parkinson's disease, amyotrophic lateral sclerosis, and frontotemporal lobar degeneration. These comparative studies rely on powerful database tools to quickly generate data sets that match diverse and complementary criteria set by the investigators. In this article, we present a novel integrated neurodegenerative disease (INDD) database, which was developed at the University of Pennsylvania (Penn) with the help of a consortium of Penn investigators. Because the work of these investigators focuses on Alzheimer's disease, Parkinson's disease, amyotrophic lateral sclerosis, and frontotemporal lobar degeneration, we were able to develop an INDD database covering these major neurodegenerative disorders. We used Microsoft SQL Server as a platform, with built-in "backwards" functionality to provide Access as a frontend client to interface with the database. We used the PHP Hypertext Preprocessor to create the "frontend" web interface and then used a master lookup table to integrate the individual neurodegenerative disease databases. We also present methods of data entry, database security, database backups, and database audit trails for this INDD database. Using the INDD database, we compared the results of a biomarker study with those obtained using an alternative approach that queried the individual databases separately. We have demonstrated that the Penn INDD database has the ability to query multiple database tables from a single console with high accuracy and reliability. The INDD database provides a powerful tool for generating data sets in comparative studies on several neurodegenerative diseases. Copyright © 2011 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.
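
    The master-lookup integration described above can be illustrated with a small sketch; the schema below is hypothetical (SQLite standing in for the Microsoft SQL Server platform, with invented table and column names), intended only to show how a single query can span several disease-specific tables.

        import sqlite3

        # Hypothetical miniature of the master-lookup idea: one lookup table
        # maps a global subject key to the per-disease tables, so one query
        # can span all of them from a single console.
        db = sqlite3.connect(":memory:")
        db.executescript("""
        CREATE TABLE master_lookup (global_id INTEGER, disease TEXT, local_id INTEGER);
        CREATE TABLE ad_cases (local_id INTEGER, csf_abeta REAL);
        CREATE TABLE pd_cases (local_id INTEGER, updrs_score REAL);
        INSERT INTO master_lookup VALUES (1, 'AD', 10), (2, 'PD', 20);
        INSERT INTO ad_cases VALUES (10, 412.0);
        INSERT INTO pd_cases VALUES (20, 33.0);
        """)
        rows = db.execute("""
        SELECT m.global_id, m.disease, a.csf_abeta, p.updrs_score
        FROM master_lookup m
        LEFT JOIN ad_cases a ON m.disease = 'AD' AND m.local_id = a.local_id
        LEFT JOIN pd_cases p ON m.disease = 'PD' AND m.local_id = p.local_id
        """).fetchall()
        print(rows)   # one console, multiple disease tables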

  18. Autism genetic database (AGD): a comprehensive database including autism susceptibility gene-CNVs integrated with known noncoding RNAs and fragile sites

    Directory of Open Access Journals (Sweden)

    Talebizadeh Zohreh

    2009-09-01

    Background Autism is a highly heritable complex neurodevelopmental disorder; identifying its genetic basis has therefore been challenging. To date, numerous susceptibility genes and chromosomal abnormalities have been reported in association with autism, but most discoveries either fail to be replicated or account for a small effect. Thus, in most cases the underlying causative genetic mechanisms are not fully understood. In the present work, the Autism Genetic Database (AGD) was developed as a literature-driven, web-based, and easy-to-access database designed with the aim of creating a comprehensive repository for all currently reported genes and genomic copy number variations (CNVs) associated with autism, in order to further facilitate the assessment of these autism susceptibility genetic factors. Description AGD is a relational database that organizes data resulting from exhaustive literature searches for reported susceptibility genes and CNVs associated with autism. Furthermore, genomic information about human fragile sites and noncoding RNAs was also downloaded and parsed from miRBase, snoRNA-LBME-db, piRNABank, and the MIT/ICBP siRNA database. A web client genome browser enables viewing of the features, while a web client query tool provides access to more specific information for the features. When applicable, links to external databases including GenBank, PubMed, miRBase, snoRNA-LBME-db, piRNABank, and the MIT siRNA database are provided. Conclusion AGD comprises a comprehensive list of susceptibility genes and copy number variations reported to date in association with autism, as well as all known human noncoding RNA genes and fragile sites. Such a unique and inclusive autism genetic database will facilitate the evaluation of autism susceptibility factors in relation to known human noncoding RNAs and fragile sites and their impact on human disease. As a result, this new autism database offers a valuable tool for the research community.

  19. Integration of Oracle and Hadoop: Hybrid Databases Affordable at Scale

    Science.gov (United States)

    Canali, L.; Baranowski, Z.; Kothuri, P.

    2017-10-01

    This work reports on the activities aimed at integrating Oracle and Hadoop technologies for the use cases of CERN database services, and in particular on the development of solutions for offloading data and queries from Oracle databases into Hadoop-based systems. The goal and interest of this investigation is to increase the scalability and optimize the cost/performance footprint of some of our largest Oracle databases. These concepts have been applied, among others, to build offline copies of the CERN accelerator controls and logging databases. The tested solution allows reports to be run on the controls data offloaded into Hadoop without affecting the critical production database, providing both performance benefits and cost reduction for the underlying infrastructure. Other use cases discussed include building hybrid database solutions with Oracle and Hadoop, offering the combined advantages of a mature relational database system with a scalable analytics engine.
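
    A hedged sketch of the offloading pattern (not the CERN implementation): read an Oracle table over JDBC into Spark and persist it on Hadoop, so reports run against the offline copy. The connection string, table name, and credentials below are placeholders, and the Oracle JDBC driver must be on the Spark classpath.

        from pyspark.sql import SparkSession

        spark = SparkSession.builder.appName("oracle-offload").getOrCreate()

        # Read the production table over JDBC (all options are placeholders).
        controls = (spark.read.format("jdbc")
                    .option("url", "jdbc:oracle:thin:@//dbhost:1521/SERVICE")
                    .option("dbtable", "LOGGING.MEASUREMENTS")
                    .option("user", "reporter")
                    .option("password", "***")
                    .load())

        # Persist the offline copy on Hadoop; reports then query the copy,
        # leaving the critical production database alone.
        controls.write.mode("overwrite").parquet("hdfs:///offload/measurements")
        spark.read.parquet("hdfs:///offload/measurements") \
             .groupBy("VARIABLE_ID").count().show()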

  20. Integrating spatial and numerical structure in mathematical patterning

    Science.gov (United States)

    Ni’mah, K.; Purwanto; Irawan, E. B.; Hidayanto, E.

    2018-03-01

    This paper reports a study monitoring the integration of spatial and numerical structure in the mathematical patterning skills of 30 seventh-grade junior high school students. The purpose of this research is to clarify the processes by which learners construct new knowledge in mathematical patterning. Findings indicate that: (1) some students were unable to organize either the spatial or the numerical structure, (2) some were able to organize the spatial structure while their numerical structure was still incorrect, (3) some were able to organize the numerical structure while their spatial structure was still incorrect, and (4) some were able to organize both the spatial and the numerical structure.

   1. Comparison of direct numerical simulation databases of turbulent channel flow at $Re_{\tau}$ = 180

    NARCIS (Netherlands)

    Vreman, A.W.; Kuerten, Johannes G.M.

    2014-01-01

    Direct numerical simulation (DNS) databases are compared to assess the accuracy and reproducibility of standard and non-standard turbulence statistics of incompressible plane channel flow at $Re_{\tau}$ = 180. Two fundamentally different DNS codes are shown to produce maximum relative deviations below 0.2%.

  2. Symbolic-Numeric Integration of the Dynamical Cosserat Equations

    KAUST Repository

    Lyakhov, Dmitry A.

    2017-08-29

    We devise a symbolic-numeric approach to the integration of the dynamical part of the Cosserat equations, a system of nonlinear partial differential equations describing the mechanical behavior of slender structures, like fibers and rods. This is based on our previous results on the construction of a closed form general solution to the kinematic part of the Cosserat system. Our approach combines methods of numerical exponential integration and symbolic integration of the intermediate system of nonlinear ordinary differential equations describing the dynamics of one of the arbitrary vector-functions in the general solution of the kinematic part in terms of the module of the twist vector-function. We present an experimental comparison with the well-established generalized α-method illustrating the computational efficiency of our approach for problems in structural mechanics.

  3. Symbolic-Numeric Integration of the Dynamical Cosserat Equations

    KAUST Repository

    Lyakhov, Dmitry A.; Gerdt, Vladimir P.; Weber, Andreas G.; Michels, Dominik L.

    2017-01-01

    We devise a symbolic-numeric approach to the integration of the dynamical part of the Cosserat equations, a system of nonlinear partial differential equations describing the mechanical behavior of slender structures, like fibers and rods. This is based on our previous results on the construction of a closed form general solution to the kinematic part of the Cosserat system. Our approach combines methods of numerical exponential integration and symbolic integration of the intermediate system of nonlinear ordinary differential equations describing the dynamics of one of the arbitrary vector-functions in the general solution of the kinematic part in terms of the module of the twist vector-function. We present an experimental comparison with the well-established generalized α-method illustrating the computational efficiency of our approach for problems in structural mechanics.

  4. Numerical integration of massive two-loop Mellin-Barnes integrals in Minkowskian regions

    International Nuclear Information System (INIS)

    Dubovyk, Ievgen

    2016-07-01

    Mellin-Barnes (MB) techniques applied to integrals emerging in particle physics perturbative calculations are summarized. New versions of the AMBRE packages, which construct planar and nonplanar MB representations, are briefly discussed. The numerical package MBnumerics.m is presented for the first time, which is able to calculate multidimensional MB integrals in Minkowskian regions with high precision. Examples are given for massive vertex integrals which include threshold effects and several scale parameters.

  5. Numerical integration of massive two-loop Mellin-Barnes integrals in Minkowskian regions

    Energy Technology Data Exchange (ETDEWEB)

    Dubovyk, Ievgen [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Gluza, Janusz [Uniwersytet Slaski, Katowice (Poland). Inst. Fizyki; Riemann, Tord [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Uniwersytet Slaski, Katowice (Poland). Inst. Fizyki; Usovitsch, Johann [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2016-07-15

    Mellin-Barnes (MB) techniques applied to integrals emerging in particle physics perturbative calculations are summarized. New versions of the AMBRE packages, which construct planar and nonplanar MB representations, are briefly discussed. The numerical package MBnumerics.m is presented for the first time, which is able to calculate multidimensional MB integrals in Minkowskian regions with high precision. Examples are given for massive vertex integrals which include threshold effects and several scale parameters.

  6. Case studies in the numerical solution of oscillatory integrals

    International Nuclear Information System (INIS)

    Adam, G.

    1992-06-01

    The numerical solution of 53,249 test integrals belonging to nine parametric classes was attempted with two computer codes: EAQWOM (Adam and Nobile, IMA Journ. Numer. Anal. (1991) 11, 271-296) and DO1ANF (Mark 13, 1988) from the NAG library software. For the considered test integrals, EAQWOM was found to be superior to DO1ANF as concerns robustness, reliability, and friendly user information in case of failure. (author). 9 refs, 3 tabs
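
    To see why oscillatory integrands need the special treatment codes like EAQWOM provide, compare a naive adaptive call with SciPy's weighted QUADPACK rule, which treats the cos(ωx) factor analytically; the integrand below is illustrative.

        import numpy as np
        from scipy.integrate import quad

        omega = 200.0
        f = lambda x: 1.0 / (1.0 + x * x)

        # Naive call: the integrand oscillates ~300 times over [0, 10], so
        # QUADPACK may warn and return a poor estimate.
        naive, _ = quad(lambda x: f(x) * np.cos(omega * x), 0.0, 10.0, limit=200)

        # Weighted call: integrates f(x) * cos(omega * x) with the oscillatory
        # factor handled analytically.
        oscil, _ = quad(f, 0.0, 10.0, weight='cos', wvar=omega)
        print(naive, oscil)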

  7. INE: a rice genome database with an integrated map view.

    Science.gov (United States)

    Sakata, K; Antonio, B A; Mukai, Y; Nagasaki, H; Sakai, Y; Makino, K; Sasaki, T

    2000-01-01

    The Rice Genome Research Program (RGP) launched large-scale rice genome sequencing in 1998, aimed at decoding all genetic information in rice. A new genome database called INE (INtegrated rice genome Explorer) has been developed in order to integrate all the genomic information that has been accumulated so far and to correlate these data with the genome sequence. A web interface based on a Java applet provides rapid viewing capability in the database. The first operational version of the database has been completed, which includes a genetic map, a physical map using YAC (Yeast Artificial Chromosome) clones and PAC (P1-derived Artificial Chromosome) contigs. These maps are displayed graphically so that the positional relationships among the mapped markers on each chromosome can be easily resolved. INE incorporates the sequences and annotations of the PAC contigs. A section on low-quality information ensures that all submitted sequence data comply with the standard for accuracy. As a repository of the rice genome sequence, INE will also serve as a common database for all sequence data obtained by collaborating members of the International Rice Genome Sequencing Project (IRGSP). The database can be accessed at http://www.dna.affrc.go.jp:82/giot/INE.html or its mirror site at http://www.staff.or.jp/giot/INE.html

   8. An integrated web medicinal materials DNA database: MMDBD (Medicinal Materials DNA Barcode Database)

    Directory of Open Access Journals (Sweden)

    But Paul

    2010-06-01

    Background Thousands of plants and animals possess pharmacological properties, and there is increased interest in using these materials for therapy and health maintenance. The efficacy of such applications depends critically on the use of genuine materials. From time to time, life-threatening poisonings occur because a toxic adulterant or substitute is administered. DNA barcoding provides a definitive means of authentication and of conducting molecular systematics studies. Owing to the reduced cost of DNA authentication, the volume of DNA barcodes produced for medicinal materials is on the rise, necessitating the development of an integrated DNA database. Description We have developed an integrated DNA barcode multimedia information platform, the Medicinal Materials DNA Barcode Database (MMDBD), for data retrieval and similarity search. MMDBD contains over 1000 species of medicinal materials listed in the Chinese Pharmacopoeia and the American Herbal Pharmacopoeia. MMDBD also contains useful information about the medicinal materials, including resources, adulterant information, medicinal parts, photographs, the primers used for obtaining the barcodes, and key references. MMDBD can be accessed at http://www.cuhk.edu.hk/icm/mmdbd.htm. Conclusions This work provides a centralized medicinal materials DNA barcode database and bioinformatics tools for data storage, analysis and exchange, promoting the identification of medicinal materials. MMDBD has the largest collection of DNA barcodes of medicinal materials and is a useful resource for researchers in conservation, systematics, forensics and the herbal industry.

  9. A perspective for biomedical data integration: Design of databases for flow cytometry

    Directory of Open Access Journals (Sweden)

    Lakoumentas John

    2008-02-01

    Background The integration of biomedical information is essential for tackling medical problems. We describe a data model in the domain of flow cytometry (FC) allowing for massive management, analysis and integration with other laboratory and clinical information. The paper is concerned with the proper translation of the Flow Cytometry Standard (FCS) into a relational database schema, in a way that facilitates end users either doing research on FC or studying specific cases of patients who have undergone FC analysis. Results The proposed database schema provides integration of data originating from diverse acquisition settings, organized in a way that allows syntactically simple queries that provide results significantly faster than the conventional implementations of the FCS standard. The proposed schema can potentially achieve up to 8 orders of magnitude reduction in query complexity and up to 2 orders of magnitude reduction in response time for data originating from flow cytometers that record 256 colours. This is mainly achieved by managing to maintain an almost constant number of data-mining procedures regardless of the size and complexity of the stored information. Conclusion It is evident that using single-file data storage standards for the design of databases without any structural transformations significantly limits the flexibility of databases. Analysis of the requirements of a specific domain for integration and massive data processing can provide the necessary schema modifications that will unlock the additional functionality of a relational database.

  10. High-integrity databases for helicopter operations

    Science.gov (United States)

    Pschierer, Christian; Schiefele, Jens; Lüthy, Juerg

    2009-05-01

    Helicopter Emergency Medical Service (HEMS) missions impose a high workload on pilots due to short preparation times, operations in low-level flight, and landings in unknown areas. The research project PILAS, a cooperation between Eurocopter, Diehl Avionics, DLR, EADS, Euro Telematik, ESG, Jeppesen, and the Universities of Darmstadt and Munich, funded by the German government, approached this problem by researching a pilot assistance system which supports the pilots during all phases of flight. The databases required for the specified helicopter missions include different types of topological and cultural data for graphical display on the SVS system, AMDB data for operations at airports and helipads, and navigation data for IFR segments. The most critical databases for the PILAS system, however, are highly accurate terrain and obstacle data. While RTCA DO-276 specifies high accuracies and integrities only for the areas around airports, HEMS helicopters typically operate outside of these controlled areas and thus require highly reliable terrain and obstacle data for their designated response areas. This data has been generated by a LIDAR scan of the specified test region. Obstacles have been extracted into a vector format. This paper includes a short overview of the complete PILAS system and then focuses on the generation of the required high-quality databases.

  11. Integrating numerical computation into the undergraduate education physics curriculum using spreadsheet excel

    Science.gov (United States)

    Fauzi, Ahmad

    2017-11-01

    Numerical computation has many pedagogical advantages: it develops analytical and problem-solving skills, helps students learn through visualization, and enhances physics education. Unfortunately, numerical computation is not taught to undergraduate physics education students in Indonesia. Incorporating numerical computation into the undergraduate physics education curriculum presents many challenges: the curriculum is dense, which makes it difficult to add a new numerical computation course, and most students have no programming experience. In this research, we used a case study to review how to integrate numerical computation into the undergraduate physics education curriculum. The participants were 54 fourth-semester students of the physics education department. We concluded that numerical computation can be integrated into the undergraduate physics education curriculum using Excel spreadsheets combined with another course. The results of this research complement studies on how to integrate numerical computation into learning physics using Excel spreadsheets.
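
    The row-by-row update students build in a spreadsheet (one row per time step) can also be written out explicitly; below is a minimal sketch, with illustrative parameters, of explicit Euler for free fall with linear drag.

        # Explicit Euler for dv/dt = g - k*v; each loop iteration corresponds
        # to one spreadsheet row. Parameters are illustrative.
        g, k, dt = 9.8, 0.2, 0.05
        t, v = 0.0, 0.0
        for _ in range(1000):          # one row per time step
            v += dt * (g - k * v)      # v_{n+1} = v_n + dt * (g - k * v_n)
            t += dt
        print(t, v)                    # v approaches the terminal value g/k = 49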

  12. CTDB: An Integrated Chickpea Transcriptome Database for Functional and Applied Genomics.

    Directory of Open Access Journals (Sweden)

    Mohit Verma

    Chickpea is an important grain legume used as a rich source of protein in the human diet. The narrow genetic diversity and limited availability of genomic resources are the major constraints on implementing breeding strategies and biotechnological interventions for genetic enhancement of chickpea. We developed an integrated Chickpea Transcriptome Database (CTDB), which provides a comprehensive web interface for visualization and easy retrieval of transcriptome data in chickpea. The database features many tools for similarity search, functional annotation (putative function, PFAM domain and gene ontology search) and comparative gene expression analysis. The current release of CTDB (v2.0) hosts transcriptome datasets with high quality functional annotation from cultivated (desi and kabuli types) and wild chickpea. A catalog of transcription factor families and their expression profiles in chickpea are available in the database. The gene expression data have been integrated to study the expression profiles of chickpea transcripts in major tissues/organs and various stages of flower development. Utilities such as similarity search, ortholog identification and comparative gene expression have also been implemented in the database to facilitate comparative genomic studies among different legumes and Arabidopsis. Furthermore, the CTDB represents a resource for the discovery of functional molecular markers (microsatellites and single nucleotide polymorphisms) between different chickpea types. We anticipate that the integrated information content of this database will accelerate functional and applied genomic research for the improvement of chickpea. The CTDB web service is freely available at http://nipgr.res.in/ctdb.html.

  13. Recovery Act: An Integrated Experimental and Numerical Study: Developing a Reaction Transport Model that Couples Chemical Reactions of Mineral Dissolution/Precipitation with Spatial and Temporal Flow Variations.

    Energy Technology Data Exchange (ETDEWEB)

    Saar, Martin O. [ETH Zurich (Switzerland); Univ. of Minnesota, Minneapolis, MN (United States); Seyfried, Jr., William E. [Univ. of Minnesota, Minneapolis, MN (United States); Longmire, Ellen K. [Univ. of Minnesota, Minneapolis, MN (United States)

    2016-06-24

    A total of 12 publications and 23 abstracts were produced as a result of this study. In particular, the compilation of a thermodynamic database utilizing consistent, current thermodynamic data is a major step toward accurately modeling multi-phase fluid interactions with solids. Existing databases designed for aqueous fluids did not mesh well with existing solid-phase databases, and the addition of a second liquid phase (CO2) magnifies the inconsistencies between aqueous and solid thermodynamic databases. Overall, the combination of high-temperature and high-pressure lab studies (Task 1), using a purpose-built apparatus, and solid characterization (Task 2), using XRCT and more developed technologies, allowed observation of dissolution and precipitation processes under CO2 reservoir conditions. These observations were combined with results from PIV experiments on multi-phase fluids (Task 3) in typical flow-path geometries. The results of Tasks 1, 2, and 3 were compiled and integrated into numerical models utilizing Lattice-Boltzmann simulations (Task 4) to realistically model the physical processes and were ultimately folded into the TOUGH2 code for reservoir-scale modeling (Task 5). Compilation of the thermodynamic database assisted comparisons with the PIV experiments (Task 3) and greatly improved the Lattice-Boltzmann (Task 4) and TOUGH2 simulations (Task 5). The PIV experiments (Task 3) and experimental apparatus (Task 1) identified problem areas in the TOUGHREACT code. Additional lab experiments and coding work have been integrated into an improved numerical modeling code.

  14. Multi-symplectic integrators: numerical schemes for Hamiltonian PDEs that conserve symplecticity

    Science.gov (United States)

    Bridges, Thomas J.; Reich, Sebastian

    2001-06-01

    The symplectic numerical integration of finite-dimensional Hamiltonian systems is a well established subject and has led to a deeper understanding of existing methods as well as to the development of new very efficient and accurate schemes, e.g., for rigid body, constrained, and molecular dynamics. The numerical integration of infinite-dimensional Hamiltonian systems or Hamiltonian PDEs is much less explored. In this Letter, we suggest a new theoretical framework for generalizing symplectic numerical integrators for ODEs to Hamiltonian PDEs in R^2: time plus one space dimension. The central idea is that symplecticity for Hamiltonian PDEs is directional: the symplectic structure of the PDE is decomposed into distinct components representing space and time independently. In this setting PDE integrators can be constructed by concatenating uni-directional ODE symplectic integrators. This suggests a natural definition of multi-symplectic integrator as a discretization that conserves a discrete version of the conservation of symplecticity for Hamiltonian PDEs. We show that this approach leads to a general framework for geometric numerical schemes for Hamiltonian PDEs, which have remarkable energy and momentum conservation properties. Generalizations, including development of higher-order methods, application to the Euler equations in fluid mechanics, application to perturbed systems, and extension to more than one space dimension are also discussed.
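
    As a reminder of the ODE building block that such PDE schemes concatenate direction by direction, here is a minimal Störmer-Verlet (symplectic) integrator for a one-degree-of-freedom Hamiltonian; the harmonic-oscillator example is illustrative, not taken from the Letter.

        import numpy as np

        # Stoermer-Verlet, the classic symplectic integrator, applied to the 1D
        # harmonic oscillator H(q, p) = p**2/2 + q**2/2.
        def verlet(q, p, dt, steps, dVdq=lambda q: q):
            for _ in range(steps):
                p -= 0.5 * dt * dVdq(q)           # half kick
                q += dt * p                       # drift
                p -= 0.5 * dt * dVdq(q)           # half kick
            return q, p

        q, p = verlet(1.0, 0.0, dt=0.1, steps=10_000)
        # The energy error stays bounded (no secular drift) -- the hallmark of
        # symplectic integration that the multi-symplectic framework carries
        # over to Hamiltonian PDEs.
        print("energy error:", 0.5 * (p * p + q * q) - 0.5)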

  15. KAIKObase: An integrated silkworm genome database and data mining tool

    Directory of Open Access Journals (Sweden)

    Nagaraju Javaregowda

    2009-10-01

    Background The silkworm, Bombyx mori, is one of the most economically important insects in many developing countries owing to its large-scale cultivation for silk production. With the development of genomic and biotechnological tools, B. mori has also become an important bioreactor for production of various recombinant proteins of biomedical interest. In 2004, two genome sequencing projects for B. mori were reported independently by Chinese and Japanese teams; however, the datasets were insufficient for building long genomic scaffolds, which are essential for unambiguous annotation of the genome. Now, both datasets have been merged and assembled through a joint collaboration between the two groups. Description Integration of the two data sets of silkworm whole-genome-shotgun sequencing by the Japanese and Chinese groups, together with newly obtained fosmid- and BAC-end sequences, produced the best continuity (~3.7 Mb in N50 scaffold size) among the sequenced insect genomes and provided a high degree of nucleotide coverage (88%) of all 28 chromosomes. In addition, a physical map of BAC contigs constructed by fingerprinting BAC clones and a SNP linkage map constructed using BAC-end sequences were available. In parallel, proteomic data from two-dimensional polyacrylamide gel electrophoresis in various tissues and developmental stages were compiled into a silkworm proteome database. Finally, a Bombyx trap database was constructed for documenting insertion positions and expression data of transposon insertion lines. Conclusion For efficient usage of genome information for functional studies, genomic sequences, physical and genetic map information and EST data were compiled into KAIKObase, an integrated silkworm genome database which consists of 4 map viewers, a gene viewer, and sequence, keyword and position search systems to display results and data at the level of nucleotide sequence, gene, scaffold and chromosome. Integration of the

  16. Integration of numerical analysis tools for automated numerical optimization of a transportation package design

    International Nuclear Information System (INIS)

    Witkowski, W.R.; Eldred, M.S.; Harding, D.C.

    1994-01-01

    The use of state-of-the-art numerical analysis tools to determine the optimal design of a radioactive material (RAM) transportation container is investigated. The design of a RAM package's components involves a complex coupling of structural, thermal, and radioactive shielding analyses. The final design must adhere to very strict design constraints. The current technique used by cask designers is uncoupled and involves designing each component separately with respect to its driving constraint. With the use of numerical optimization schemes, the complex couplings can be considered directly, and the performance of the integrated package can be maximized with respect to the analysis conditions. This can lead to more efficient package designs. Thermal and structural accident conditions are analyzed in the shape optimization of a simplified cask design. In this paper, details of the integration of numerical analysis tools, development of a process model, nonsmoothness difficulties with the optimization of the cask, and preliminary results are discussed
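
    The flavor of replacing the uncoupled per-constraint design loop with a single constrained optimization can be sketched as follows; the objective and constraint surrogates are invented stand-ins, not the paper's cask models.

        from scipy.optimize import minimize

        # Choose two wall thicknesses (t[0], t[1]) minimizing package mass
        # subject to simplified "structural" and "thermal" margins expressed
        # as inequality constraints. All numbers are illustrative.
        mass = lambda t: 4.0 * t[0] + 7.0 * t[1]
        cons = [
            {"type": "ineq", "fun": lambda t: t[0] * t[1] - 0.5},        # stress margin >= 0
            {"type": "ineq", "fun": lambda t: t[0] + 2.0 * t[1] - 2.0},  # thermal margin >= 0
        ]
        res = minimize(mass, x0=[1.0, 1.0], bounds=[(0.1, 5.0), (0.1, 5.0)],
                       constraints=cons)   # SLSQP treats the coupled constraints directly
        print(res.x, res.fun)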

  17. Integrated olfactory receptor and microarray gene expression databases

    Directory of Open Access Journals (Sweden)

    Crasto Chiquito J

    2007-06-01

    Background Gene expression patterns of olfactory receptors (ORs) are an important component of the signal encoding mechanism in the olfactory system, since they determine the interactions between odorant ligands and sensory neurons. We have developed the Olfactory Receptor Microarray Database (ORMD) to house OR gene expression data. ORMD is integrated with the Olfactory Receptor Database (ORDB), which is a key repository of OR gene information. Both databases aim to aid experimental research related to olfaction. Description ORMD is a web-accessible database that provides a secure data repository for OR microarray experiments. It contains both publicly available and private data; accessing the latter requires authenticated login. ORMD is designed to allow users not only to deposit gene expression data but also to manage their projects/experiments. For example, contributors can choose whether to make their datasets public. For each experiment, users can download the raw data files and view and export the gene expression data. For each OR gene probed in a microarray experiment, a hyperlink to that gene in ORDB provides access to genomic and proteomic information related to the corresponding olfactory receptor. Individual ORs archived in ORDB are also linked to ORMD, allowing users access to the related microarray gene expression data. Conclusion ORMD serves as a data repository and project management system. It facilitates the study of microarray experiments of gene expression in the olfactory system. In conjunction with ORDB, ORMD integrates gene expression data with the genomic and functional data of ORs, and is thus a useful resource for both olfactory researchers and the public.

  18. An integrated numerical protection system (SPIN)

    International Nuclear Information System (INIS)

    Savornin, J.L.; Bouchet, J.M.; Furet, J.L.; Jover, P.; Sala, A.

    1978-01-01

    Developments in technology have now made it possible to perform more sophisticated protection functions which follow more closely the physical phenomena to be monitored. For this reason the Commissariat a l'energie atomique, Merlin-Gerin, Cerci and Framatome have embarked on the joint development of an Integrated Numerical Protection System (SPIN) which will fulfil this objective and will improve the safety and availability of power stations. The system described involves the use of programmed numerical techniques and a structure based on multiprocessors. The architecture has a redundancy of four. Throughout the development of the project the validity of the studies was confirmed by experiments. A first numerical model of a protection function was tested in the laboratory and is now in operation in a power station. A set of models was then introduced for checking the main components of the equipment finally chosen prior to building and testing a prototype. (author)

  19. Database modeling to integrate macrobenthos data in Spatial Data Infrastructure

    Directory of Open Access Journals (Sweden)

    José Alberto Quintanilha

    2012-08-01

    Coastal zones are complex areas that include marine and terrestrial environments. Besides their huge environmental wealth, they also attract humans because they provide food, recreation, business and transportation, among other benefits. Some of the difficulties in managing these areas are related to their complexity, the diversity of interests involved, and the absence of standards for collecting and sharing data with the scientific community, public agencies and others. Organizing, standardizing and sharing this information through a Web Atlas is essential to support planning and decision making. The construction of a spatial database integrating environmental data for use in a Spatial Data Infrastructure (SDI) is illustrated here by a bioindicator of sediment quality. The models show the phases required to build the Macrobenthos spatial database, using the Santos Metropolitan Region as a reference. It is concluded that, when working with environmental data, structuring the knowledge in a conceptual model is essential for its subsequent integration into the SDI. During the modeling process it was noticed that methodological issues related to the collection process may obstruct or prejudice the integration of data from different studies of the same area. The development of a database model, as presented in this study, can be used as a reference for further research with similar goals.

  20. Integrated database for rapid mass movements in Norway

    Directory of Open Access Journals (Sweden)

    C. Jaedicke

    2009-03-01

    Rapid gravitational slope mass movements include all kinds of short-term relocation of geological material, snow or ice. Traditionally, information about such events is collected separately in different databases covering selected geographical regions and types of movement. In Norway the terrain is susceptible to all types of rapid gravitational slope mass movements, ranging from single rocks hitting roads and houses to large snow avalanches and rock slides where entire mountainsides collapse into fjords, creating flood waves and endangering large areas. In addition, quick clay slides occur in desalinated marine sediments in South Eastern and Mid Norway. For the authorities and inhabitants of endangered areas, the type of threat is of minor importance, and mitigation measures have to consider several types of rapid mass movements simultaneously.

    An integrated national database for all types of rapid mass movements built around individual events has been established. Only three data entries are mandatory: time, location and type of movement. The remaining optional parameters enable recording of detailed information about the terrain, materials involved and damages caused. Pictures, movies and other documentation can be uploaded into the database. A web-based graphical user interface has been developed allowing new events to be entered, as well as editing and querying for all events. An integration of the database into a GIS system is currently under development.

    Datasets from various national sources, like the road authorities and the Geological Survey of Norway, were imported into the database. Today, the database contains 33 000 rapid mass movement events from the last five hundred years, covering the entire country. A first analysis of the data shows that the most frequent types of recorded rapid mass movement are rock slides and snow avalanches, followed by debris slides in third place. Most events are recorded in the steep fjord

  1. A Support Database System for Integrated System Health Management (ISHM)

    Science.gov (United States)

    Schmalzel, John; Figueroa, Jorge F.; Turowski, Mark; Morris, John

    2007-01-01

    The development, deployment, operation and maintenance of Integrated Systems Health Management (ISHM) applications require the storage and processing of tremendous amounts of low-level data. This data must be shared in a secure and cost-effective manner between developers, and processed within several heterogeneous architectures. Modern database technology allows this data to be organized efficiently, while ensuring the integrity and security of the data. The extensibility and interoperability of the current database technologies also allows for the creation of an associated support database system. A support database system provides additional capabilities by building applications on top of the database structure. These applications can then be used to support the various technologies in an ISHM architecture. This presentation and paper propose a detailed structure and application description for a support database system, called the Health Assessment Database System (HADS). The HADS provides a shared context for organizing and distributing data as well as a definition of the applications that provide the required data-driven support to ISHM. This approach provides another powerful tool for ISHM developers, while also enabling novel functionality. This functionality includes: automated firmware updating and deployment, algorithm development assistance and electronic datasheet generation. The architecture for the HADS has been developed as part of the ISHM toolset at Stennis Space Center for rocket engine testing. A detailed implementation has begun for the Methane Thruster Testbed Project (MTTP) in order to assist in developing health assessment and anomaly detection algorithms for ISHM. The structure of this implementation is shown in Figure 1. The database structure consists of three primary components: the system hierarchy model, the historical data archive and the firmware codebase. The system hierarchy model replicates the physical relationships between

  2. ChlamyCyc: an integrative systems biology database and web-portal for Chlamydomonas reinhardtii.

    Science.gov (United States)

    May, Patrick; Christian, Jan-Ole; Kempa, Stefan; Walther, Dirk

    2009-05-04

    The unicellular green alga Chlamydomonas reinhardtii is an important eukaryotic model organism for the study of photosynthesis and plant growth. In the era of modern high-throughput technologies there is an imperative need to integrate large-scale data sets from high-throughput experimental techniques using computational methods and database resources to provide comprehensive information about the molecular and cellular organization of a single organism. In the framework of the German Systems Biology initiative GoFORSYS, a pathway database and web-portal for Chlamydomonas (ChlamyCyc) was established, which currently features about 250 metabolic pathways with associated genes, enzymes, and compound information. ChlamyCyc was assembled using an integrative approach combining the recently published genome sequence, bioinformatics methods, and experimental data from metabolomics and proteomics experiments. We analyzed and integrated a combination of primary and secondary database resources, such as existing genome annotations from JGI, EST collections, orthology information, and MapMan classification. ChlamyCyc provides a curated and integrated systems biology repository that will enable and assist in systematic studies of fundamental cellular processes in Chlamydomonas. The ChlamyCyc database and web-portal is freely available under http://chlamycyc.mpimp-golm.mpg.de.

  3. Analysis of thermal-plastic response of shells of revolution by numerical integration

    International Nuclear Information System (INIS)

    Leonard, J.W.

    1975-01-01

    An economical technique for the numerical analysis of the elasto-plastic behaviour of shells of revolution would be of considerable value in the nuclear reactor industry. A numerical method based on the numerical integration of the governing shell equations has been shown, for elastic cases, to be more efficient than the finite element method when applied to shells of revolution. In the numerical integration method, the governing differential equations of motion are converted into a set of initial-value problems. Each initial-value problem is integrated numerically between meridional boundary points and recombined so as to satisfy boundary conditions. For large-deflection elasto-plastic behaviour, the equations are nonlinear and, hence, are recombined in an iterative manner using the Newton-Raphson procedure. Suppression techniques are incorporated in order to eliminate extraneous solutions within the numerical integration procedure. The Reissner-Meissner shell theory for shells of revolution is adopted to account for large deflection and higher-order rotation effects. The computer modelling of the equations is quite general in that specific shell segment geometries, e.g. cylindrical, spherical, toroidal, conical segments, and any combinations thereof, can be handled easily. (Auth.)
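
    The conversion of the boundary-value problem into initial-value problems integrated across the meridian, with iteration on the unknown initial data, is a shooting method. A toy version follows (with a bracketing root solver standing in for the paper's Newton-Raphson iteration), for the model problem y'' = -y, y(0) = 0, y(π/2) = 1.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import brentq

        # Integrate the initial-value problem across the interval and iterate
        # on the unknown slope s = y'(0) until the far boundary condition holds.
        def boundary_miss(s):
            sol = solve_ivp(lambda t, y: [y[1], -y[0]], (0.0, np.pi / 2.0),
                            [0.0, s], rtol=1e-10, atol=1e-12)
            return sol.y[0, -1] - 1.0             # residual at the far boundary

        s_star = brentq(boundary_miss, 0.0, 2.0)  # root of the boundary residual
        print("recovered y'(0):", s_star)         # exact answer is 1.0 (y = sin t)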

  4. Integrated Strategic Tracking and Recruiting Database (iSTAR) Data Inventory

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Integrated Strategic Tracking and Recruiting Database (iSTAR) Data Inventory contains measured and modeled partnership and contact data. It is comprised of basic...

  5. Computing the demagnetizing tensor for finite difference micromagnetic simulations via numerical integration

    International Nuclear Information System (INIS)

    Chernyshenko, Dmitri; Fangohr, Hans

    2015-01-01

    In the finite difference method, which is commonly used in computational micromagnetics, the demagnetizing field is usually computed as a convolution of the magnetization vector field with the demagnetizing tensor that describes the magnetostatic field of a cuboidal cell with constant magnetization. An analytical expression for the demagnetizing tensor is available; however, at distances far from the cuboidal cell, the numerical evaluation of the analytical expression can be very inaccurate. Due to this large-distance inaccuracy, numerical packages such as OOMMF compute the demagnetizing tensor using the explicit formula at distances close to the originating cell, but at distances far from the originating cell a formula based on an asymptotic expansion has to be used. In this work, we describe a method to calculate the demagnetizing field by numerical evaluation of the multidimensional integral in the demagnetizing tensor terms using a sparse grid integration scheme. This method improves the accuracy of computation at intermediate distances from the origin. We compute and report the accuracy of (i) the numerical evaluation of the exact tensor expression, which is best for short distances, (ii) the asymptotic expansion best suited for large distances, and (iii) the new method based on numerical integration, which is superior to methods (i) and (ii) for intermediate distances. For all three methods, we show the measurements of accuracy and execution time as a function of distance, for calculations using single precision (4-byte) and double precision (8-byte) floating point arithmetic. We make recommendations for the choice of scheme order and integrating coefficients for the numerical integration method (iii). - Highlights: • We study the accuracy of demagnetization in finite difference micromagnetics. • We introduce a new sparse integration method to compute the tensor more accurately. • The Newell, sparse integration and asymptotic methods are compared for all ranges
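
    A hedged sketch of the regime the paper targets: evaluate a cell-averaged 1/r kernel (a simplified stand-in for a demagnetizing-tensor entry, using a plain tensor-product Gauss-Legendre rule rather than the paper's sparse grid) and compare it with the far-field point approximation 1/R.

        import numpy as np

        nodes, weights = np.polynomial.legendre.leggauss(8)    # rule on [-1, 1]
        x = 0.5 * (nodes + 1.0)                                # map to [0, 1]
        w = 0.5 * weights

        def cell_average_inv_r(R):
            # Average of 1/r over a unit source cell whose centre lies a
            # distance R from the evaluation point, by 3D quadrature.
            X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
            W = w[:, None, None] * w[None, :, None] * w[None, None, :]
            r = np.sqrt((R + X - 0.5) ** 2 + (Y - 0.5) ** 2 + (Z - 0.5) ** 2)
            return (W / r).sum()

        for R in (1.5, 4.0, 20.0):                             # cell-centre distances
            print(R, cell_average_inv_r(R), 1.0 / R)           # agreement improves with R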

  6. An Integrated Enterprise Accelerator Database for the SLC Control System

    International Nuclear Information System (INIS)

    2002-01-01

    Since its inception in the early 1980s, the SLC Control System has been driven by a highly structured, memory-resident real-time database. While efficient, its rigid structure and file-based sources make it difficult to maintain and to extract relevant information. The goal of transforming the sources for this database into relational form is to enable it to be part of a Control System Enterprise Database: an integrated central repository for SLC accelerator device and Control System data with links to other associated databases. We have taken the concepts developed for the NLC Enterprise Database and used them to create and load a relational model of the online SLC Control System database. This database contains data and structure to allow querying and reporting on beamline devices, their associations and parameters. In the future this will be extended to allow generation of EPICS and SLC database files, setup of applications, and links to other databases such as accelerator maintenance, archive data, financial and personnel records, cabling information, documentation etc. The database is implemented using Oracle 8i. In the short term it will be updated daily in batch from the online SLC database. In the longer term, it will serve as the primary source for Control System static data, an R and D platform for the NLC, and contribute to SLC Control System operations.

   7. GDR (Genome Database for Rosaceae): integrated web resources for Rosaceae genomics and genetics research

    Directory of Open Access Journals (Sweden)

    Ficklin Stephen

    2004-09-01

    Background Peach is being developed as a model organism for Rosaceae, an economically important family that includes fruits and ornamental plants such as apple, pear, strawberry, cherry, almond and rose. The genomics and genetics data of peach can play a significant role in gene discovery and in the genetic understanding of related species. The effective utilization of these peach resources, however, requires the development of an integrated and centralized database with associated analysis tools. Description The Genome Database for Rosaceae (GDR) is a curated and integrated web-based relational database. GDR contains comprehensive data of the genetically anchored peach physical map, an annotated peach EST database, Rosaceae maps and markers and all publicly available Rosaceae sequences. Annotations of ESTs include contig assembly, putative function, simple sequence repeats, and anchored position to the peach physical map where applicable. Our integrated map viewer provides a graphical interface to the genetic, transcriptome and physical mapping information. ESTs, BACs and markers can be queried by various categories, and the search result pages are linked to the integrated map viewer or to the WebFPC physical map sites. In addition to browsing and querying the database, users can compare their sequences with the annotated GDR sequences via a dedicated sequence similarity server running either the BLAST or FASTA algorithm. To demonstrate the utility of the integrated and fully annotated database and analysis tools, we describe a case study where we anchored Rosaceae sequences to the peach physical and genetic map by sequence similarity. Conclusions The GDR has been initiated to meet a major deficiency in Rosaceae genomics and genetics research, namely a centralized web database and bioinformatics tools for data storage, analysis and exchange. GDR can be accessed at http://www.genome.clemson.edu/gdr/.

  8. GDR (Genome Database for Rosaceae): integrated web resources for Rosaceae genomics and genetics research.

    Science.gov (United States)

    Jung, Sook; Jesudurai, Christopher; Staton, Margaret; Du, Zhidian; Ficklin, Stephen; Cho, Ilhyung; Abbott, Albert; Tomkins, Jeffrey; Main, Dorrie

    2004-09-09

    Peach is being developed as a model organism for Rosaceae, an economically important family that includes fruits and ornamental plants such as apple, pear, strawberry, cherry, almond and rose. The genomics and genetics data of peach can play a significant role in the gene discovery and the genetic understanding of related species. The effective utilization of these peach resources, however, requires the development of an integrated and centralized database with associated analysis tools. The Genome Database for Rosaceae (GDR) is a curated and integrated web-based relational database. GDR contains comprehensive data of the genetically anchored peach physical map, an annotated peach EST database, Rosaceae maps and markers and all publicly available Rosaceae sequences. Annotations of ESTs include contig assembly, putative function, simple sequence repeats, and anchored position to the peach physical map where applicable. Our integrated map viewer provides graphical interface to the genetic, transcriptome and physical mapping information. ESTs, BACs and markers can be queried by various categories and the search result sites are linked to the integrated map viewer or to the WebFPC physical map sites. In addition to browsing and querying the database, users can compare their sequences with the annotated GDR sequences via a dedicated sequence similarity server running either the BLAST or FASTA algorithm. To demonstrate the utility of the integrated and fully annotated database and analysis tools, we describe a case study where we anchored Rosaceae sequences to the peach physical and genetic map by sequence similarity. The GDR has been initiated to meet the major deficiency in Rosaceae genomics and genetics research, namely a centralized web database and bioinformatics tools for data storage, analysis and exchange. GDR can be accessed at http://www.genome.clemson.edu/gdr/.

  9. Database of episode-integrated solar energetic proton fluences

    Science.gov (United States)

    Robinson, Zachary D.; Adams, James H.; Xapsos, Michael A.; Stauffer, Craig A.

    2018-04-01

    A new database of proton episode-integrated fluences is described. This database contains data from two different instruments on multiple satellites: instruments on the Interplanetary Monitoring Platform-8 (IMP8) and on the Geostationary Operational Environmental Satellites (GOES) series. A method for normalizing one data set to the other is presented, creating a seamless database spanning 1973 to 2016. A discussion of some of the characteristics that episodes exhibit is presented, including episode duration and number of peaks. As an example of what can be understood about episodes, the July 4, 2012 episode is examined in detail. The coronal mass ejections and solar flares that caused many of the fluctuations of the proton flux seen at Earth are associated with peaks in the proton flux during this episode. The reasoning for each choice is laid out to provide a reference for how CME and solar flare associations are made.

  10. Database of episode-integrated solar energetic proton fluences

    Directory of Open Access Journals (Sweden)

    Robinson Zachary D.

    2018-01-01

    A new database of proton episode-integrated fluences is described. This database contains data from two different instruments on multiple satellites: instruments on the Interplanetary Monitoring Platform-8 (IMP8) and on the Geostationary Operational Environmental Satellites (GOES) series. A method for normalizing one data set to the other is presented, creating a seamless database spanning 1973 to 2016. A discussion of some of the characteristics that episodes exhibit is presented, including episode duration and number of peaks. As an example of what can be understood about episodes, the July 4, 2012 episode is examined in detail. The coronal mass ejections and solar flares that caused many of the fluctuations of the proton flux seen at Earth are associated with peaks in the proton flux during this episode. The reasoning for each choice is laid out to provide a reference for how CME and solar flare associations are made.

  11. Applying recursive numerical integration techniques for solving high dimensional integrals

    International Nuclear Information System (INIS)

    Ammon, Andreas; Genz, Alan; Hartung, Tobias; Jansen, Karl; Volmer, Julia; Leoevey, Hernan

    2016-11-01

    The error scaling for Markov-Chain Monte Carlo techniques (MCMC) with N samples behaves like 1/√(N). This scaling makes it often very time intensive to reduce the error of computed observables, in particular for applications in lattice QCD. It is therefore highly desirable to have alternative methods at hand which show an improved error scaling. One candidate for such an alternative integration technique is the method of recursive numerical integration (RNI). The basic idea of this method is to use an efficient low-dimensional quadrature rule (usually of Gaussian type) and apply it iteratively to integrate over high-dimensional observables and Boltzmann weights. We present the application of such an algorithm to the topological rotor and the anharmonic oscillator and compare the error scaling to MCMC results. In particular, we demonstrate that the RNI technique shows an error scaling in the number of integration points m that is at least exponential.
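
    A minimal sketch of the RNI idea for a nearest-neighbour Boltzmann weight (an illustrative Gaussian coupling, not the topological-rotor action): a 1D Gauss rule turns each factor into a transfer matrix, and the d-dimensional integral becomes a matrix power, so the cost no longer grows exponentially with d.

        import numpy as np

        # Transfer-matrix form of recursive numerical integration: for a weight
        # coupling only neighbours on a periodic chain of length d,
        #   Z = int ... int prod_i w(x_i, x_{i+1}) dx_1 ... dx_d,
        # the iterated 1D quadrature reduces Z to trace(T^d).
        nodes, weights = np.polynomial.legendre.leggauss(32)   # rule on [-1, 1]
        w = lambda x, y: np.exp(-0.5 * (x - y) ** 2)           # illustrative coupling

        s = np.sqrt(weights)
        T = s[:, None] * w(nodes[:, None], nodes[None, :]) * s[None, :]
        Z = np.trace(np.linalg.matrix_power(T, 100))           # d = 100 dimensions
        print(Z)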

  12. Applying recursive numerical integration techniques for solving high dimensional integrals

    Energy Technology Data Exchange (ETDEWEB)

    Ammon, Andreas [IVU Traffic Technologies AG, Berlin (Germany); Genz, Alan [Washington State Univ., Pullman, WA (United States). Dept. of Mathematics; Hartung, Tobias [King' s College, London (United Kingdom). Dept. of Mathematics; Jansen, Karl; Volmer, Julia [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Leoevey, Hernan [Humboldt Univ. Berlin (Germany). Inst. fuer Mathematik

    2016-11-15

    The error scaling for Markov-Chain Monte Carlo techniques (MCMC) with N samples behaves like 1/√(N). This scaling makes it often very time intensive to reduce the error of computed observables, in particular for applications in lattice QCD. It is therefore highly desirable to have alternative methods at hand which show an improved error scaling. One candidate for such an alternative integration technique is the method of recursive numerical integration (RNI). The basic idea of this method is to use an efficient low-dimensional quadrature rule (usually of Gaussian type) and apply it iteratively to integrate over high-dimensional observables and Boltzmann weights. We present the application of such an algorithm to the topological rotor and the anharmonic oscillator and compare the error scaling to MCMC results. In particular, we demonstrate that the RNI technique shows an error scaling in the number of integration points m that is at least exponential.

  13. LmSmdB: an integrated database for metabolic and gene regulatory network in Leishmania major and Schistosoma mansoni

    Directory of Open Access Journals (Sweden)

    Priyanka Patel

    2016-03-01

    Full Text Available A database that integrates all the information required for biological processing is essential and should be stored on one platform. We have attempted to create one such integrated database that can be a one-stop shop for the essential features required to fetch valuable results. LmSmdB (L. major and S. mansoni database) is an integrated database that accounts for the biological networks and regulatory pathways computationally determined by integrating the knowledge of the genome sequences of the mentioned organisms. It is the first database of its kind that presents, together with the network design, the simulated behavior of the network products. This database intends to create a comprehensive canopy for the regulation of lipid metabolism reactions in the parasite by integrating the transcription factors, regulatory genes and the protein products controlled by the transcription factors, hence describing the operation of the metabolism at the genetic level. Keywords: L.major, S.mansoni, Regulatory networks, Transcription factors, Database

  14. ChlamyCyc: an integrative systems biology database and web-portal for Chlamydomonas reinhardtii

    Directory of Open Access Journals (Sweden)

    Kempa Stefan

    2009-05-01

    Full Text Available Abstract Background The unicellular green alga Chlamydomonas reinhardtii is an important eukaryotic model organism for the study of photosynthesis and plant growth. In the era of modern high-throughput technologies there is an imperative need to integrate large-scale data sets from high-throughput experimental techniques using computational methods and database resources to provide comprehensive information about the molecular and cellular organization of a single organism. Results In the framework of the German Systems Biology initiative GoFORSYS, a pathway database and web-portal for Chlamydomonas (ChlamyCyc was established, which currently features about 250 metabolic pathways with associated genes, enzymes, and compound information. ChlamyCyc was assembled using an integrative approach combining the recently published genome sequence, bioinformatics methods, and experimental data from metabolomics and proteomics experiments. We analyzed and integrated a combination of primary and secondary database resources, such as existing genome annotations from JGI, EST collections, orthology information, and MapMan classification. Conclusion ChlamyCyc provides a curated and integrated systems biology repository that will enable and assist in systematic studies of fundamental cellular processes in Chlamydomonas. The ChlamyCyc database and web-portal is freely available under http://chlamycyc.mpimp-golm.mpg.de.

  15. Application of symplectic integrator to numerical fluid analysis

    International Nuclear Information System (INIS)

    Tanaka, Nobuatsu

    2000-01-01

    This paper focuses on the application of the symplectic integrator to numerical fluid analysis. For this purpose, we introduce Hamiltonian particle dynamics to simulate fluid behavior. The method is based on both the Hamiltonian formulation of a system and particle methods, and is therefore called Hamiltonian Particle Dynamics (HPD). In this paper, an example of HPD applications, namely the behavior of incompressible inviscid fluid, is solved. In order to improve the spatial accuracy of HPD, it is combined with CIVA, a highly accurate interpolation method, but the combined method suffers from the problem that the invariants of the system are not conserved in long-time computations. To solve this problem, symplectic time integrators are introduced and their effectiveness is confirmed by numerical analyses. (author)
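
    To see why a symplectic time integrator preserves invariants over long runs, one can compare a non-symplectic scheme with the symplectic leapfrog on a toy Hamiltonian. This is a minimal sketch, not the paper's HPD/CIVA scheme; the system is the harmonic oscillator with H = (p^2 + q^2)/2.

        # Explicit Euler (non-symplectic) vs. leapfrog (symplectic) on H = (p^2+q^2)/2.
        def euler(q, p, dt):
            return q + dt * p, p - dt * q

        def leapfrog(q, p, dt):
            p = p - 0.5 * dt * q      # half kick
            q = q + dt * p            # drift
            p = p - 0.5 * dt * q      # half kick
            return q, p

        dt, n = 0.1, 10000
        for step in (euler, leapfrog):
            q, p = 1.0, 0.0
            for _ in range(n):
                q, p = step(q, p, dt)
            print(step.__name__, abs(0.5 * (q * q + p * p) - 0.5))   # energy drift

    The Euler energy error grows without bound, while the leapfrog error stays bounded for arbitrarily long runs; this is the property exploited by symplectic integrators.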

  16. SIRSALE: integrated video database management tools

    Science.gov (United States)

    Brunie, Lionel; Favory, Loic; Gelas, J. P.; Lefevre, Laurent; Mostefaoui, Ahmed; Nait-Abdesselam, F.

    2002-07-01

    Video databases became an active field of research during the last decade. The main objective in such systems is to provide users with capabilities to search, access and play back distributed stored video data in the same way as they do for traditional distributed databases. Hence, such systems need to deal with hard issues: (a) video documents generate huge volumes of data and are time sensitive (streams must be delivered at a specific bitrate), and (b) the content of video data is very hard to extract automatically and needs to be annotated manually. To cope with these issues, many approaches have been proposed in the literature, including data models, query languages, video indexing etc. In this paper, we present SIRSALE: a set of video database management tools that allow users to manipulate video documents and streams stored in large distributed repositories. All the proposed tools are based on generic models that can be customized for specific applications using ad-hoc adaptation modules. More precisely, SIRSALE allows users to: (a) browse video documents by structure (sequences, scenes, shots) and (b) query the video database content by using a graphical tool adapted to the nature of the target video documents. This paper also presents an annotation interface which allows archivists to describe the content of video documents. All these tools are coupled to a video player integrating remote VCR functionalities and are based on active network technology. We also present how dedicated active services allow optimized transport of video streams (with Tamanoir active nodes). We then describe experiments using SIRSALE on an archive of news videos and soccer matches. The system has been demonstrated to professionals with positive feedback. Finally, we discuss open issues and present some perspectives.

  17. Distributed Access View Integrated Database (DAVID) system

    Science.gov (United States)

    Jacobs, Barry E.

    1991-01-01

    The Distributed Access View Integrated Database (DAVID) System, which was adopted by the Astrophysics Division for their Astrophysics Data System, is a solution to the system heterogeneity problem. The heterogeneous components of the Astrophysics problem are outlined. The Library and Library Consortium levels of the DAVID approach are described. The 'books' and 'kits' level is discussed. The Universal Object Typer Management System level is described. The relation of the DAVID project to the Small Business Innovative Research (SBIR) program is explained.

  18. Integrating the DLD dosimetry system into the Almaraz NPP Corporative Database

    International Nuclear Information System (INIS)

    Gonzalez Crego, E.; Martin Lopez-Suevos, C.

    1996-01-01

    The article discusses the experience acquired during the integration of a new MGP Instruments DLD Dosimetry System into the Almaraz NPP corporative database and general communications network, following a client-server philosophy and taking into account the computer standards of the Plant. The most important results obtained are: integration of DLD dosimetry information into corporative databases, permitting the use of new applications; sharing of existing personnel information with the DLD dosimetry application, thereby avoiding the redundant work of introducing data and improving the quality of the information; easier maintenance, both software and hardware, of the DLD system; maximum exploitation, from the computing point of view, of the initial investment; and adaptation of the application to the applicable legislation. (Author)

  19. Numerical computation of molecular integrals via optimized (vectorized) FORTRAN code

    International Nuclear Information System (INIS)

    Scott, T.C.; Grant, I.P.; Saunders, V.R.

    1997-01-01

    The calculation of molecular properties based on quantum mechanics is an area of fundamental research whose horizons have always been determined by the power of state-of-the-art computers. A computational bottleneck is the numerical calculation of the required molecular integrals to sufficient precision. Herein, we present a method for the rapid numerical evaluation of molecular integrals using optimized FORTRAN code generated by Maple. The method is based on the exploitation of common intermediates and the optimization can be adjusted to both serial and vectorized computations. (orig.)
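
    The core idea, extracting common intermediates from a symbolic expression and emitting compiled code, can be sketched with SymPy in place of Maple. The expression below is an arbitrary illustration, not one of the paper's molecular integrals.

        # Common-subexpression elimination plus Fortran code generation with SymPy.
        from sympy import symbols, exp, sqrt, cse, fcode

        r1, r2, a = symbols('r1 r2 a')
        expr = exp(-a * sqrt(r1**2 + r2**2)) / sqrt(r1**2 + r2**2)

        subexprs, (reduced,) = cse(expr)    # shared intermediates, reduced expression
        for name, sub in subexprs:
            print(fcode(sub, assign_to=str(name), source_format='free'))
        print(fcode(reduced, assign_to='result', source_format='free'))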

  20. Integration of curated databases to identify genotype-phenotype associations

    Directory of Open Access Journals (Sweden)

    Li Jianrong

    2006-10-01

    Full Text Available Abstract Background The ability to rapidly characterize an unknown microorganism is critical in both responding to infectious disease and biodefense. To do this, we need some way of anticipating an organism's phenotype based on the molecules encoded by its genome. However, the link between molecular composition (i.e. genotype) and phenotype for microbes is not obvious. While there have been several studies that address this challenge, none have yet proposed a large-scale method integrating curated biological information. Here we utilize a systematic approach to discover genotype-phenotype associations that combines phenotypic information from a biomedical informatics database, GIDEON, with the molecular information contained in the National Center for Biotechnology Information's Clusters of Orthologous Groups database (NCBI COGs). Results Integrating the information in the two databases, we are able to correlate the presence or absence of a given protein in a microbe with its phenotype as measured by certain morphological characteristics or survival in a particular growth medium. With a 0.8 correlation score threshold, 66% of the associations found were confirmed by the literature, and at a 0.9 correlation threshold, 86% were positively verified. Conclusion Our results suggest possible phenotypic manifestations for proteins biochemically associated with sugar metabolism and electron transport. Moreover, we believe our approach can be extended to linking pathogenic phenotypes with functionally related proteins.

  1. Integrated Controlling System and Unified Database for High Throughput Protein Crystallography Experiments

    International Nuclear Information System (INIS)

    Gaponov, Yu.A.; Igarashi, N.; Hiraki, M.; Sasajima, K.; Matsugaki, N.; Suzuki, M.; Kosuge, T.; Wakatsuki, S.

    2004-01-01

    An integrated controlling system and a unified database for high throughput protein crystallography experiments have been developed. Main features of protein crystallography experiments (purification, crystallization, crystal harvesting, data collection, data processing) were integrated into the software under development. All information necessary to perform protein crystallography experiments is stored (except raw X-ray data, which are stored on a central data server) in a MySQL relational database. The database contains four mutually linked hierarchical trees describing protein crystals, data collection of protein crystals and experimental data processing. A database editor was designed and developed. The editor supports basic database functions to view, create, modify and delete user records in the database. Two search engines were realized: direct search of necessary information in the database and object-oriented search. The system is based on TCP/IP secure UNIX sockets with four predefined sending and receiving behaviors, which support communications between all connected servers and clients with remote control functions (creating and modifying data for experimental conditions, data acquisition, viewing experimental data, and performing data processing). Two secure login schemes were designed and developed: a direct method (using the developed Linux clients with secure connection) and an indirect method (using a secure SSL connection with secure X11 support from any operating system with X-terminal and SSH support). A part of the system has been implemented on a new MAD beamline, NW12, at the Photon Factory Advanced Ring for general user experiments.

  2. Numerical treatments for solving nonlinear mixed integral equation

    Directory of Open Access Journals (Sweden)

    M.A. Abdou

    2016-12-01

    Full Text Available We consider a mixed type of nonlinear integral equation (MNLIE) of the second kind in the space C[0,T]×L2(Ω), T<1. The Volterra integral terms (VITs) are considered in time with continuous kernels, while the Fredholm integral term (FIT) is considered in position with a singular general kernel. Using the quadratic method and the separation of variables method, we obtain a nonlinear system of Fredholm integral equations (NLSFIEs) with singular kernel. A Toeplitz matrix method, in each case, is then used to obtain a nonlinear algebraic system. Numerical results are calculated when the kernels take a logarithmic form or Carleman function. Moreover, the error estimates, in each case, are then computed.

  3. CTDB: An Integrated Chickpea Transcriptome Database for Functional and Applied Genomics

    OpenAIRE

    Verma, Mohit; Kumar, Vinay; Patel, Ravi K.; Garg, Rohini; Jain, Mukesh

    2015-01-01

    Chickpea is an important grain legume used as a rich source of protein in human diet. The narrow genetic diversity and limited availability of genomic resources are the major constraints in implementing breeding strategies and biotechnological interventions for genetic enhancement of chickpea. We developed an integrated Chickpea Transcriptome Database (CTDB), which provides the comprehensive web interface for visualization and easy retrieval of transcriptome data in chickpea. The database fea...

  4. Direct Calculation of Permeability by High-Accurate Finite Difference and Numerical Integration Methods

    KAUST Repository

    Wang, Yi

    2016-07-21

    Velocity of fluid flow in underground porous media is 6~12 orders of magnitude lower than that in pipelines. If numerical errors are not carefully controlled in this kind of simulation, high distortion of the final results may occur [1-4]. To meet the high accuracy demands of fluid flow simulations in porous media, traditional finite difference methods and numerical integration methods are discussed and corresponding high-accuracy methods are developed. When applied to the direct calculation of full-tensor permeability for underground flow, the high-accuracy finite difference method is confirmed to have a numerical error as low as 10^-5% while the high-accuracy numerical integration method has a numerical error around 0%. Thus, the approach combining the high-accuracy finite difference and numerical integration methods is a reliable way to efficiently determine the characteristics of a general full-tensor permeability field, such as maximum and minimum permeability components, principal direction and anisotropy ratio. Copyright © Global-Science Press 2016.

  5. Analysis of thermal-plastic response of shells of revolution by numerical integration

    International Nuclear Information System (INIS)

    Leonard, J.W.

    1975-01-01

    A numerical method based instead on the numerical integration of the governing shell equations has been shown, for elastic cases, to be more efficient than the finite element method when applied to shells of revolution. In the numerical integration method, the governing differential equations of motion are converted into a set of initial-value problems. Each initial-value problem is integrated numerically between meridional boundary points and the solutions are recombined so as to satisfy the boundary conditions. For large-deflection elasto-plastic behavior, the equations are nonlinear and, hence, are recombined in an iterative manner using the Newton-Raphson procedure. Suppression techniques are incorporated in order to eliminate extraneous solutions within the numerical integration procedure. The Reissner-Meissner shell theory for shells of revolution is adopted to account for large deflections and higher-order rotation effects. The computer modelling of the equations is quite general in that specific shell segment geometries, e.g. cylindrical, spherical, toroidal and conical segments, and any combinations thereof, can be handled easily. The elasto-plastic constitutive relations adopted are in accordance with currently recommended constitutive equations for inelastic design analysis of FFTF components. The von Mises yield criterion and associated flow rule are used and the kinematic hardening law is followed. Examples are considered in which stainless steels common to LMFBR applications are used
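
    The strategy described above, recasting a boundary-value problem as initial-value problems and recombining them by Newton-Raphson iteration, can be sketched on a toy two-point problem. The ODE y'' = -y + x with y(0)=0, y(1)=1 is illustrative; a secant update plays the role of the Newton-Raphson correction on the unknown initial slope.

        # Shooting method: integrate IVPs and iterate on the initial slope.
        from scipy.integrate import solve_ivp

        def shoot(s):
            """Integrate the IVP with initial slope s; return the boundary miss."""
            sol = solve_ivp(lambda x, y: [y[1], -y[0] + x], (0.0, 1.0), [0.0, s],
                            rtol=1e-10, atol=1e-12)
            return sol.y[0, -1] - 1.0       # residual at the far boundary

        s0, s1 = 0.0, 2.0
        for _ in range(20):                 # secant iteration on the residual
            r0, r1 = shoot(s0), shoot(s1)
            s0, s1 = s1, s1 - r1 * (s1 - s0) / (r1 - r0)
            if abs(shoot(s1)) < 1e-9:
                break
        print("initial slope:", s1)         # exact solution is y = x, slope 1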

  6. Integration of the ATLAS tag database with data management and analysis components

    Energy Technology Data Exchange (ETDEWEB)

    Cranshaw, J; Malon, D [Argonne National Laboratory, Argonne, IL 60439 (United States); Doyle, A T; Kenyon, M J; McGlone, H; Nicholson, C [Department of Physics and Astronomy, University of Glasgow, Glasgow, G12 8QQ, Scotland (United Kingdom)], E-mail: c.nicholson@physics.gla.ac.uk

    2008-07-15

    The ATLAS Tag Database is an event-level metadata system, designed to allow efficient identification and selection of interesting events for user analysis. By making first-level cuts using queries on a relational database, the size of an analysis input sample could be greatly reduced and thus the time taken for the analysis reduced. Deployment of such a Tag database is underway, but to be most useful it needs to be integrated with the distributed data management (DDM) and distributed analysis (DA) components. This means addressing the issue that the DDM system at ATLAS groups files into datasets for scalability and usability, whereas the Tag Database points to events in files. It also means setting up a system which could prepare a list of input events and use both the DDM and DA systems to run a set of jobs. The ATLAS Tag Navigator Tool (TNT) has been developed to address these issues in an integrated way and provide a tool that the average physicist can use. Here, the current status of this work is presented and areas of future work are highlighted.
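
    The first-level cut described here is, in essence, a relational query over event-level metadata that returns pointers back into the file catalog. The sketch below uses a hypothetical TAG-style schema and cut variables, not the actual ATLAS TAG schema.

        # Event preselection against a TAG-style table (hypothetical schema).
        import sqlite3

        con = sqlite3.connect(':memory:')
        con.execute("""CREATE TABLE tag
                       (run INTEGER, event INTEGER, file_guid TEXT,
                        n_jets INTEGER, missing_et REAL)""")
        con.executemany("INSERT INTO tag VALUES (?,?,?,?,?)",
                        [(1, 1, 'f0', 4, 55.0), (1, 2, 'f0', 1, 10.0),
                         (2, 7, 'f1', 3, 80.0)])

        # Only events passing the cut are handed to the analysis jobs.
        rows = con.execute("""SELECT run, event, file_guid FROM tag
                              WHERE n_jets >= 3 AND missing_et > 40""").fetchall()
        print(rows)     # reduced input sample, with back-pointers to files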

  7. Integrating the Allen Brain Institute Cell Types Database into Automated Neuroscience Workflow.

    Science.gov (United States)

    Stockton, David B; Santamaria, Fidel

    2017-10-01

    We developed software tools to download, extract features, and organize the Cell Types Database from the Allen Brain Institute (ABI) in order to integrate its whole cell patch clamp characterization data into the automated modeling/data analysis cycle. To expand the potential user base we employed both Python and MATLAB. The basic set of tools downloads selected raw data and extracts cell, sweep, and spike features, using ABI's feature extraction code. To facilitate data manipulation we added a tool to build a local specialized database of raw data plus extracted features. Finally, to maximize automation, we extended our NeuroManager workflow automation suite to include these tools plus a separate investigation database. The extended suite allows the user to integrate ABI experimental and modeling data into an automated workflow deployed on heterogeneous computer infrastructures, from local servers, to high performance computing environments, to the cloud. Since our approach is focused on workflow procedures our tools can be modified to interact with the increasing number of neuroscience databases being developed to cover all scales and properties of the nervous system.
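
    A minimal sketch of the kind of access these tools wrap, assuming the allensdk package is installed and that CellTypesCache behaves as in recent allensdk releases (method names may differ across versions):

        # Downloads/caches ABI Cell Types metadata, features and raw sweeps.
        from allensdk.core.cell_types_cache import CellTypesCache

        ctc = CellTypesCache(manifest_file='cell_types/manifest.json')
        cells = ctc.get_cells()                # cell metadata (cached locally)
        features = ctc.get_ephys_features()    # precomputed sweep/spike features
        print(len(cells), 'cells;', len(features), 'feature records')

        # Raw whole-cell patch-clamp traces for one cell, e.g. for local analysis:
        data_set = ctc.get_ephys_data(cells[0]['id'])
        sweep = data_set.get_sweep(data_set.get_sweep_numbers()[0])
        print(sweep['sampling_rate'], sweep['response'][:5])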

  8. Integration of the ATLAS tag database with data management and analysis components

    International Nuclear Information System (INIS)

    Cranshaw, J; Malon, D; Doyle, A T; Kenyon, M J; McGlone, H; Nicholson, C

    2008-01-01

    The ATLAS Tag Database is an event-level metadata system, designed to allow efficient identification and selection of interesting events for user analysis. By making first-level cuts using queries on a relational database, the size of an analysis input sample could be greatly reduced and thus the time taken for the analysis reduced. Deployment of such a Tag database is underway, but to be most useful it needs to be integrated with the distributed data management (DDM) and distributed analysis (DA) components. This means addressing the issue that the DDM system at ATLAS groups files into datasets for scalability and usability, whereas the Tag Database points to events in files. It also means setting up a system which could prepare a list of input events and use both the DDM and DA systems to run a set of jobs. The ATLAS Tag Navigator Tool (TNT) has been developed to address these issues in an integrated way and provide a tool that the average physicist can use. Here, the current status of this work is presented and areas of future work are highlighted

  9. Numerical integration for ab initio many-electron self energy calculations within the GW approximation

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Fang, E-mail: fliu@lsec.cc.ac.cn [School of Statistics and Mathematics, Central University of Finance and Economics, Beijing 100081 (China); Lin, Lin, E-mail: linlin@math.berkeley.edu [Department of Mathematics, University of California, Berkeley, CA 94720 (United States); Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Vigil-Fowler, Derek, E-mail: vigil@berkeley.edu [Department of Physics, University of California, Berkeley, CA 94720 (United States); Materials Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Lischner, Johannes, E-mail: jlischner597@gmail.com [Department of Physics, University of California, Berkeley, CA 94720 (United States); Materials Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Kemper, Alexander F., E-mail: afkemper@lbl.gov [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Sharifzadeh, Sahar, E-mail: ssharifz@bu.edu [Department of Electrical and Computer Engineering and Division of Materials Science and Engineering, Boston University, Boston, MA 02215 (United States); Jornada, Felipe H. da, E-mail: jornada@berkeley.edu [Department of Physics, University of California, Berkeley, CA 94720 (United States); Materials Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Deslippe, Jack, E-mail: jdeslippe@lbl.gov [NERSC, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Yang, Chao, E-mail: cyang@lbl.gov [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); and others

    2015-04-01

    We present a numerical integration scheme for evaluating the convolution of a Green's function with a screened Coulomb potential on the real axis in the GW approximation of the self energy. Our scheme takes the zero broadening limit in Green's function first, replaces the numerator of the integrand with a piecewise polynomial approximation, and performs principal value integration on subintervals analytically. We give the error bound of our numerical integration scheme and show by numerical examples that it is more reliable and accurate than the standard quadrature rules such as the composite trapezoidal rule. We also discuss the benefit of using different self energy expressions to perform the numerical convolution at different frequencies.
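
    The singularity-handling idea can be sketched as follows: the regular part of the integrand is integrated numerically while the singular part is integrated analytically. The paper uses piecewise polynomial approximations on subintervals; the illustrative version below simply subtracts the value at the pole, which regularizes a principal-value integral of the form PV ∫ f(x)/(x-x0) dx.

        # Principal value of \int_a^b f(x)/(x-x0) dx: numeric smooth part +
        # analytic log term (a crude stand-in for the paper's piecewise scheme).
        import numpy as np

        def pv_integral(f, a, b, x0, n=2000):
            x = np.linspace(a, b, n + 1)
            h = (b - a) / n
            fx0 = f(x0)
            g = (f(x) - fx0) / np.where(x == x0, 1.0, x - x0)    # regularized part
            g = np.where(x == x0, 0.0, g)                        # crude value at the pole
            smooth = h * (g[0] / 2 + g[1:-1].sum() + g[-1] / 2)  # trapezoidal rule
            return smooth + fx0 * np.log((b - x0) / (x0 - a))    # analytic PV part

        print(pv_integral(np.cos, -1.0, 2.0, 0.3))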

  10. Techniques to Access Databases and Integrate Data for Hydrologic Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.; Coleman, Andre M.; Ward, Duane L.; Droppo, James G.; Meyer, Philip D.; Dorow, Kevin E.; Taira, Randal Y.

    2009-06-17

    This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and

  11. Techniques to Access Databases and Integrate Data for Hydrologic Modeling

    International Nuclear Information System (INIS)

    Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.; Coleman, Andre M.; Ward, Duane L.; Droppo, James G.; Meyer, Philip D.; Dorow, Kevin E.; Taira, Randal Y.

    2009-01-01

    This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and

  12. GDR (Genome Database for Rosaceae): integrated web-database for Rosaceae genomics and genetics data.

    Science.gov (United States)

    Jung, Sook; Staton, Margaret; Lee, Taein; Blenda, Anna; Svancara, Randall; Abbott, Albert; Main, Dorrie

    2008-01-01

    The Genome Database for Rosaceae (GDR) is a central repository of curated and integrated genetics and genomics data of Rosaceae, an economically important family which includes apple, cherry, peach, pear, raspberry, rose and strawberry. GDR contains annotated databases of all publicly available Rosaceae ESTs, the genetically anchored peach physical map, Rosaceae genetic maps and comprehensively annotated markers and traits. The ESTs are assembled to produce unigene sets of each genus and the entire Rosaceae. Other annotations include putative function, microsatellites, open reading frames, single nucleotide polymorphisms, gene ontology terms and anchored map position where applicable. Most of the published Rosaceae genetic maps can be viewed and compared through CMap, the comparative map viewer. The peach physical map can be viewed using WebFPC/WebChrom, and also through our integrated GDR map viewer, which serves as a portal to the combined genetic, transcriptome and physical mapping information. ESTs, BACs, markers and traits can be queried by various categories and the search result sites are linked to the mapping visualization tools. GDR also provides online analysis tools such as a batch BLAST/FASTA server for the GDR datasets, a sequence assembly server and microsatellite and primer detection tools. GDR is available at http://www.rosaceae.org.

  13. PGSB/MIPS PlantsDB Database Framework for the Integration and Analysis of Plant Genome Data.

    Science.gov (United States)

    Spannagl, Manuel; Nussbaumer, Thomas; Bader, Kai; Gundlach, Heidrun; Mayer, Klaus F X

    2017-01-01

    Plant Genome and Systems Biology (PGSB), formerly Munich Institute for Protein Sequences (MIPS) PlantsDB, is a database framework for the integration and analysis of plant genome data, developed and maintained for more than a decade now. Major components of that framework are genome databases and analysis resources focusing on individual (reference) genomes providing flexible and intuitive access to data. Another main focus is the integration of genomes from both model and crop plants to form a scaffold for comparative genomics, assisted by specialized tools such as the CrowsNest viewer to explore conserved gene order (synteny). Data exchange and integrated search functionality with/over many plant genome databases is provided within the transPLANT project.

  14. Implementation of a revised numerical integration technique into QAD

    International Nuclear Information System (INIS)

    De Gangi, N.L.

    1983-01-01

    A technique for numerical integration through a uniform volume source is developed. It is applied to gamma radiation transport shielding problems. The method is based on performing a numerical angular and ray point-kernel integration and is incorporated into the QAD-CG computer code (i.e. QAD-UE). Several test problems are analyzed with this technique. Convergence properties of the method are analyzed. Gamma dose rates from a large tank and post-LOCA dose rates inside a containment building are evaluated. Results are consistent with data from other methods. The new technique provides several advantages. User setup requirements for large volume source problems are reduced relative to standard point-kernel requirements. Calculational efficiency is improved. An order of magnitude improvement is seen with a test problem
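
    The underlying point-kernel integration can be sketched directly: the response at a detector is the quadrature sum of attenuated point-source contributions over the volume source. Geometry, source strength and attenuation coefficient below are illustrative, and the buildup factor is taken as 1 for brevity.

        # Midpoint-rule point-kernel integration over a uniform box source.
        import numpy as np

        mu, s_v = 0.06, 1.0                      # attenuation (1/cm) and source density
        det = np.array([200.0, 0.0, 0.0])        # detector location (cm)

        n = 40                                   # cells per axis; refine to check convergence
        edges = np.linspace(-50.0, 50.0, n + 1)  # 100 cm cube centred at the origin
        c = 0.5 * (edges[:-1] + edges[1:])
        dv = (edges[1] - edges[0]) ** 3
        xx, yy, zz = np.meshgrid(c, c, c, indexing='ij')
        r = np.sqrt((xx - det[0])**2 + (yy - det[1])**2 + (zz - det[2])**2)
        flux = np.sum(s_v * np.exp(-mu * r) / (4.0 * np.pi * r**2)) * dv
        print(flux)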

  15. Numerical Time Integration Methods for a Point Absorber Wave Energy Converter

    DEFF Research Database (Denmark)

    Zurkinden, Andrew Stephen; Kramer, Morten

    2012-01-01

    on a discretization of the convolution integral. The calculation of the convolution integral is performed at each time step regardless of the chosen numerical scheme. In the second model the convolution integral is replaced by a system of linear ordinary differential equations. The formulation of the state...
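
    The contrast between the two models, direct discretization of the convolution versus a state-space replacement, can be sketched on a toy exponential kernel (an illustrative choice, not the converter's actual radiation-force kernel): for k(t) = b e^(-a t), the convolution ∫ k(t-s) v(s) ds equals the solution of the single ODE z' = -a z + b v.

        # Convolution integral vs. equivalent state-space ODE, toy kernel.
        import numpy as np

        a, b, dt, n = 2.0, 1.5, 0.001, 5000
        t = np.arange(n) * dt
        v = np.sin(3.0 * t)                  # prescribed velocity history

        # (1) direct discretization of the convolution at the final time
        conv = np.sum(b * np.exp(-a * (t[-1] - t)) * v) * dt

        # (2) state-space replacement, integrated with explicit Euler
        z = 0.0
        for k in range(n):
            z += dt * (-a * z + b * v[k])
        print(conv, z)                       # the two agree as dt -> 0

    The state-space form avoids the O(n) convolution sum at every time step, which is precisely the motivation for replacing the convolution integral by a system of linear ordinary differential equations.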

  16. CyanOmics: an integrated database of omics for the model cyanobacterium Synechococcus sp. PCC 7002.

    Science.gov (United States)

    Yang, Yaohua; Feng, Jie; Li, Tao; Ge, Feng; Zhao, Jindong

    2015-01-01

    Cyanobacteria are an important group of organisms that carry out oxygenic photosynthesis and play vital roles in both the carbon and nitrogen cycles of the Earth. The annotated genome of Synechococcus sp. PCC 7002, as an ideal model cyanobacterium, is available. A series of transcriptomic and proteomic studies of Synechococcus sp. PCC 7002 cells grown under different conditions have been reported. However, no database of such integrated omics studies has been constructed. Here we present CyanOmics, a database based on the results of Synechococcus sp. PCC 7002 omics studies. CyanOmics comprises one genomic dataset, 29 transcriptomic datasets and one proteomic dataset and should prove useful for systematic and comprehensive analysis of all those data. Powerful browsing and searching tools are integrated to help users directly access information of interest with enhanced visualization of the analytical results. Furthermore, Blast is included for sequence-based similarity searching and Cluster 3.0, as well as the R hclust function is provided for cluster analyses, to increase CyanOmics's usefulness. To the best of our knowledge, it is the first integrated omics analysis database for cyanobacteria. This database should further understanding of the transcriptional patterns, and proteomic profiling of Synechococcus sp. PCC 7002 and other cyanobacteria. Additionally, the entire database framework is applicable to any sequenced prokaryotic genome and could be applied to other integrated omics analysis projects. Database URL: http://lag.ihb.ac.cn/cyanomics. © The Author(s) 2015. Published by Oxford University Press.

  17. Canonical algorithms for numerical integration of charged particle motion equations

    Science.gov (United States)

    Efimov, I. N.; Morozov, E. A.; Morozova, A. R.

    2017-02-01

    A technique for numerically integrating the equation of charged particle motion in a magnetic field is considered. It is based on canonical transformations of the phase space in Hamiltonian mechanics. The canonical transformations make the integration process stable against counting-error accumulation. The integration algorithms contain the minimum possible amount of arithmetic and can be used to design accelerators and devices of electron and ion optics.

  18. Numerical solution of integral equations, describing mass spectrum of vector mesons

    International Nuclear Information System (INIS)

    Zhidkov, E.P.; Nikonov, E.G.; Sidorov, A.V.; Skachkov, N.B.; Khoromskij, B.N.

    1988-01-01

    The description of the numerical algorithm for solving the quasipotential integral equation in momentum space is presented. The results of numerical computations of the vector meson mass spectrum and the leptonic decay width are given in comparison with the experimental data

  19. How to integrate divergent integrals: a pure numerical approach to complex loop calculations

    International Nuclear Information System (INIS)

    Caravaglios, F.

    2000-01-01

    Loop calculations involve the evaluation of divergent integrals. Usually [G. 't Hooft, M. Veltman, Nucl. Phys. B 44 (1972) 189] one computes them in a number of dimensions different from four, where the integral is convergent, and then one performs the analytical continuation and considers the Laurent expansion in powers of ε=n-4. In this paper we discuss a method to extract directly all coefficients of this expansion by means of concrete and well defined integrals in a five-dimensional space. We bypass the formal and symbolic procedure of analytic continuation; instead we can numerically compute the integrals to extract directly both the coefficient of the pole 1/ε and the finite part

  20. Toward an interactive article: integrating journals and biological databases

    Directory of Open Access Journals (Sweden)

    Marygold Steven J

    2011-05-01

    Full Text Available Abstract Background Journal articles and databases are two major modes of communication in the biological sciences, and thus integrating these critical resources is of urgent importance to increase the pace of discovery. Projects focused on bridging the gap between journals and databases have been on the rise over the last five years and have resulted in the development of automated tools that can recognize entities within a document and link those entities to a relevant database. Unfortunately, automated tools cannot resolve ambiguities that arise from one term being used to signify entities that are quite distinct from one another. Instead, resolving these ambiguities requires some manual oversight. Finding the right balance between the speed and portability of automation and the accuracy and flexibility of manual effort is a crucial goal for making text markup a successful venture. Results We have established a journal article mark-up pipeline that links GENETICS journal articles and the model organism database (MOD) WormBase. This pipeline uses a lexicon built with entities from the database as a first step. The entity markup pipeline results in links from over nine classes of objects including genes, proteins, alleles, phenotypes and anatomical terms. New entities and ambiguities are discovered and resolved by a database curator through a manual quality control (QC) step, along with help from authors via a web form that is provided to them by the journal. New entities discovered through this pipeline are immediately sent to an appropriate curator at the database. Ambiguous entities that do not automatically resolve to one link are resolved by hand, ensuring an accurate link. This pipeline has been extended to other databases, namely the Saccharomyces Genome Database (SGD) and FlyBase, and has been implemented in marking up a paper with links to multiple databases. Conclusions Our semi-automated pipeline hyperlinks articles published in GENETICS to

  1. Databases

    Digital Repository Service at National Institute of Oceanography (India)

    Kunte, P.D.

    Information on bibliographic as well as numeric/textual databases relevant to coastal geomorphology has been included in a tabular form. Databases cover a broad spectrum of related subjects like coastal environment and population aspects, coastline...

  2. Dynamically Integrating OSM Data into a Borderland Database

    Directory of Open Access Journals (Sweden)

    Xiaoguang Zhou

    2015-09-01

    Full Text Available Spatial data are fundamental for borderland analyses of geography, natural resources, demography, politics, economy, and culture. As the spatial data used in borderland research usually cover the borderland regions of several neighboring countries, it is difficult for any one research institution or government to collect them. Volunteered Geographic Information (VGI) is a highly successful method for acquiring timely and detailed global spatial data at a very low cost. Therefore, VGI is a reasonable source of borderland spatial data. OpenStreetMap (OSM) is known as the most successful VGI resource. However, OSM's data model is far different from the traditional geographic information model. Thus, the OSM data must be converted into the scientist's customized data model. Because the real world changes rapidly, the converted data must be updated incrementally. Therefore, this paper presents a method used to dynamically integrate OSM data into a borderland database. In this method, a basic transformation rule base is formed by comparing the OSM Map Features description document and the destination model definitions. Using the basic rules, the main features can be automatically converted to the destination model. A human-computer interactive model transformation and a rule/automatic-remember mechanism are developed to transfer the unusual features that cannot be transferred by the basic rules to the target model and to remember the reusable rules automatically. To keep the borderland database current, the global OsmChange daily diff file is used to extract the change-only information for the research region. To extract the changed objects in the region under study, the relationship between the changed object and the research region is analyzed, considering the evolution of the involved objects. In addition, five rules are determined to select the objects and integrate the changed objects with multiple versions over time. The objects
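
    The OsmChange daily diff mentioned above groups changed elements under create/modify/delete actions; consuming it can be sketched as below. The file name is hypothetical, and the bounding-box filter is only a crude first stand-in for the paper's object-region relationship analysis.

        # Parse an OsmChange (.osc) diff and bucket elements by action.
        import xml.etree.ElementTree as ET

        changes = {'create': [], 'modify': [], 'delete': []}
        for action in ET.parse('daily.osc').getroot():   # <create>/<modify>/<delete>
            for elem in action:                          # <node>/<way>/<relation>
                changes[action.tag].append((elem.tag, elem.get('id'),
                                            elem.get('version')))

        # Crude first filter for nodes: keep only changes inside the study region.
        def in_region(node, bbox=(20.0, 40.0, 75.0, 135.0)):   # (s, n, w, e), illustrative
            lat, lon = float(node.get('lat', 0)), float(node.get('lon', 0))
            return bbox[0] <= lat <= bbox[1] and bbox[2] <= lon <= bbox[3]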

  3. Numerical evaluation of integrals containing a spherical Bessel function by product integration

    International Nuclear Information System (INIS)

    Lehman, D.R.; Parke, W.C.; Maximon, L.C.

    1981-01-01

    A method is developed for numerical evaluation of integrals with k-integration range from 0 to infinity that contain a spherical Bessel function j_l(kr) explicitly. The required quadrature weights are easily calculated and the rate of convergence is rapid: only a relatively small number of quadrature points is needed for an accurate evaluation, even when r is large. The quadrature rule is obtained by the method of product integration. With the abscissas chosen to be those of Clenshaw-Curtis and the Chebyshev polynomials as the interpolating polynomials, quadrature weights are obtained that depend on the spherical Bessel function. An inhomogeneous recurrence relation is derived from which the weights can be calculated without accumulation of roundoff error. The procedure is summarized as an easily implementable algorithm. Questions of convergence are discussed and the rate of convergence demonstrated for several test integrals. Alternative procedures are given for generating the integration weights and an error analysis of the method is presented
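
    For comparison, the target class of integrals can be evaluated with plain panel-wise Gauss-Legendre quadrature, as sketched below; the point of the paper's product-integration weights is that they stay accurate with far fewer points, especially for large r where the integrand oscillates rapidly. The test case has a closed form: ∫_0^∞ e^(-k) j_0(kr) dk = arctan(r)/r.

        # Naive panelwise quadrature of \int_0^inf f(k) j_l(kr) dk (truncated at kmax).
        import numpy as np
        from scipy.special import spherical_jn

        def bessel_integral(f, l, r, kmax=60.0, panels=200, m=8):
            t, w = np.polynomial.legendre.leggauss(m)
            edges = np.linspace(0.0, kmax, panels + 1)
            total = 0.0
            for a, b in zip(edges[:-1], edges[1:]):      # one Gauss rule per panel
                k = 0.5 * (b - a) * t + 0.5 * (a + b)
                total += 0.5 * (b - a) * np.sum(w * f(k) * spherical_jn(l, k * r))
            return total

        r = 5.0
        print(bessel_integral(lambda k: np.exp(-k), 0, r), np.arctan(r) / r)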

  4. TRENDS: The aeronautical post-test database management system

    Science.gov (United States)

    Bjorkman, W. S.; Bondi, M. J.

    1990-01-01

    TRENDS, an engineering-test database operating system developed by NASA to support rotorcraft flight tests, is described. Capabilities and characteristics of the system are presented, with examples of its use in recalling and analyzing rotorcraft flight-test data from a TRENDS database. The importance of system user-friendliness in gaining users' acceptance is stressed, as is the importance of integrating supporting narrative data with numerical data in engineering-test databases. Considerations relevant to the creation and maintenance of flight-test databases are discussed and TRENDS' solutions to database management problems are described. Requirements, constraints, and other considerations which led to the system's configuration are discussed and some of the lessons learned during TRENDS' development are presented. Potential applications of TRENDS to a wide range of aeronautical and other engineering tests are identified.

  5. Integrity Checking and Maintenance with Active Rules in XML Databases

    DEFF Research Database (Denmark)

    Christiansen, Henning; Rekouts, Maria

    2007-01-01

    While specification languages for integrity constraints for XML data have been considered in the literature, actual technologies and methodologies for checking and maintaining integrity are still in their infancy. Triggers, or active rules, which are widely used in previous technologies for the p...... updates, the method indicates trigger conditions and correctness criteria to be met by the trigger code supplied by a developer or possibly automatic methods. We show examples developed in the Sedna XML database system which provides a running implementation of XML triggers....

  6. Database Description - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information: Arabidopsis Phenome Database. Creator: BioResource Center, Hiroshi Masuya. Database classification: Plant databases - Arabidopsis thaliana. Organism: Arabidopsis thaliana (Taxonomy ID: 3702). Database description: The Arabidopsis thaliana phenome ... their effective application. We developed the new Arabidopsis Phenome Database integrating two novel databases ... useful materials for their experimental research. The other, the “Database of Curated Plant Phenome”, focusing

  7. Monograph - The Numerical Integration of Ordinary Differential Equations.

    Science.gov (United States)

    Hull, T. E.

    The materials presented in this monograph are intended to be included in a course on ordinary differential equations at the upper division level in a college mathematics program. These materials provide an introduction to the numerical integration of ordinary differential equations, and they can be used to supplement a regular text on this…

  8. Preserving Simplecticity in the Numerical Integration of Linear Beam Optics

    Energy Technology Data Exchange (ETDEWEB)

    Allen, Christopher K. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-07-01

    Presented are mathematical tools and methods for the development of numerical integration techniques that preserve the symplectic condition inherent to mechanics. The intended audience is beam physicists with backgrounds in numerical modeling and simulation, with particular attention to beam optics applications. The paper focuses on Lie methods that are inherently symplectic regardless of the integration accuracy order. Section 2 provides the mathematical tools used in the sequel and necessary for the reader to extend the covered techniques. Section 3 places those tools in the context of charged-particle beam optics; in particular, linear beam optics is presented in terms of a Lie algebraic matrix representation. Section 4 presents numerical stepping techniques with particular emphasis on a third-order leapfrog method. Section 5 discusses the modeling of field imperfections with particular attention to the fringe fields of quadrupole focusing magnets. The direct computation of a third-order transfer matrix for a fringe field is shown.

  9. Numerical evaluation of two-center integrals over Slater type orbitals

    Energy Technology Data Exchange (ETDEWEB)

    Kurt, S. A., E-mail: slaykurt@gmail.com [Department of Physics, Natural Sciences Institute, Ondokuz Mayıs University, 55139, Samsun (Turkey); Yükçü, N., E-mail: nyukcu@gmail.com [Department of Energy Systems Engineering, Faculty of Technology, Adıyaman University, 02040, Adıyaman (Turkey)

    2016-03-25

    Slater Type Orbitals (STOs), which are one type of exponential type orbitals (ETOs), are usually used as basis functions in multicenter molecular integrals to better understand the physical and chemical properties of matter. In this work, we develop algorithms for two-center overlap and two-center two-electron hybrid and Coulomb integrals, which are calculated with the help of a translation method for STOs and some auxiliary functions given by V. Magnasco's group. We use the Mathematica programming language to produce algorithms for these calculations. Numerical results for some quantum numbers are presented in tables. Finally, we compare our numerical results with other known literature results, and further details of the evaluation method are discussed.

  10. Numerical evaluation of two-center integrals over Slater type orbitals

    International Nuclear Information System (INIS)

    Kurt, S. A.; Yükçü, N.

    2016-01-01

    Slater Type Orbitals (STOs), which are one type of exponential type orbitals (ETOs), are usually used as basis functions in multicenter molecular integrals to better understand the physical and chemical properties of matter. In this work, we develop algorithms for two-center overlap and two-center two-electron hybrid and Coulomb integrals, which are calculated with the help of a translation method for STOs and some auxiliary functions given by V. Magnasco's group. We use the Mathematica programming language to produce algorithms for these calculations. Numerical results for some quantum numbers are presented in tables. Finally, we compare our numerical results with other known literature results, and further details of the evaluation method are discussed.

  11. Numerical method for solving linear Fredholm fuzzy integral equations of the second kind

    Energy Technology Data Exchange (ETDEWEB)

    Abbasbandy, S. [Department of Mathematics, Imam Khomeini International University, P.O. Box 288, Ghazvin 34194 (Iran, Islamic Republic of)]. E-mail: saeid@abbasbandy.com; Babolian, E. [Faculty of Mathematical Sciences and Computer Engineering, Teacher Training University, Tehran 15618 (Iran, Islamic Republic of); Alavi, M. [Department of Mathematics, Arak Branch, Islamic Azad University, Arak 38135 (Iran, Islamic Republic of)

    2007-01-15

    In this paper we use the parametric form of a fuzzy number and convert a linear fuzzy Fredholm integral equation into two linear systems of integral equations of the second kind in the crisp case. One can then use a numerical method such as the Nystrom method to find an approximate solution of the system and hence obtain an approximation for the fuzzy solution of the linear fuzzy Fredholm integral equation of the second kind. The proposed method is illustrated by solving some numerical examples.
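
    The crisp Nystrom step referred to above can be sketched as follows: collocating u(x) = f(x) + lam * ∫_0^1 K(x,t) u(t) dt at Gauss nodes turns the integral equation into the linear system (I - lam*K*W) u = f. The kernel and right-hand side are illustrative; the exact solution here is u(x) = 1.5 x.

        # Nystrom method for a Fredholm integral equation of the second kind.
        import numpy as np

        m, lam = 16, 1.0
        K = lambda x, t: x * t                 # illustrative degenerate kernel
        f = lambda x: x

        t, w = np.polynomial.legendre.leggauss(m)
        x = 0.5 * (t + 1.0)                    # Gauss nodes mapped to [0,1]
        w = 0.5 * w

        A = np.eye(m) - lam * K(x[:, None], x[None, :]) * w[None, :]
        u = np.linalg.solve(A, f(x))           # solution values at the nodes
        print(np.max(np.abs(u - 1.5 * x)))     # ~machine precision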

  12. An object-oriented language-database integration model: The composition filters approach

    NARCIS (Netherlands)

    Aksit, Mehmet; Bergmans, Lodewijk; Vural, Sinan; Vural, S.

    1991-01-01

    This paper introduces a new model, based on so-called object-composition filters, that uniformly integrates database-like features into an object-oriented language. The focus is on providing persistent dynamic data structures, data sharing, transactions, multiple views and associative access,

  13. An Object-Oriented Language-Database Integration Model: The Composition-Filters Approach

    NARCIS (Netherlands)

    Aksit, Mehmet; Bergmans, Lodewijk; Vural, S.; Vural, Sinan; Lehrmann Madsen, O.

    1992-01-01

    This paper introduces a new model, based on so-called object-composition filters, that uniformly integrates database-like features into an object-oriented language. The focus is on providing persistent dynamic data structures, data sharing, transactions, multiple views and associative access,

  14. Integration of first-principles methods and crystallographic database searches for new ferroelectrics: Strategies and explorations

    International Nuclear Information System (INIS)

    Bennett, Joseph W.; Rabe, Karin M.

    2012-01-01

    In this concept paper, the development of strategies for the integration of first-principles methods with crystallographic database mining for the discovery and design of novel ferroelectric materials is discussed, drawing on the results and experience derived from exploratory investigations on three different systems: (1) the double perovskite Sr(Sb1/2Mn1/2)O3 as a candidate semiconducting ferroelectric; (2) polar derivatives of schafarzikite MSb2O4; and (3) ferroelectric semiconductors with formula M2P2(S,Se)6. A variety of avenues for further research and investigation are suggested, including automated structure type classification, low-symmetry improper ferroelectrics, and high-throughput first-principles searches for additional representatives of structural families with desirable functional properties. Graphical abstract: Integration of first-principles methods with crystallographic database mining, for the discovery and design of novel ferroelectric materials, could potentially lead to new classes of multifunctional materials. Highlights: Integration of first-principles methods and database mining. Minor structural families with desirable functional properties. Survey of polar entries in the Inorganic Crystal Structure Database.

  15. Data Integration for Spatio-Temporal Patterns of Gene Expression of Zebrafish development: the GEMS database

    Directory of Open Access Journals (Sweden)

    Belmamoune Mounia

    2008-06-01

    Full Text Available The Gene Expression Management System (GEMS) is a database system for patterns of gene expression. These patterns result from systematic whole-mount fluorescent in situ hybridization studies on zebrafish embryos. GEMS is an integrative platform that addresses one of the important challenges of developmental biology: how to integrate genetic data that underpin morphological changes during embryogenesis. Our motivation to build this system was the need to organize and compare multiple patterns of gene expression at the tissue level. Integration with other developmental and biomolecular databases will further support our understanding of development. The GEMS operates in concert with a database containing a digital atlas of the zebrafish embryo; this digital atlas of zebrafish development was conceived prior to the expansion of the GEMS. The atlas contains 3D volume models of canonical stages of zebrafish development in which each volume model element is annotated with an anatomical term. These terms are extracted from a formal anatomical ontology, i.e. the Developmental Anatomy Ontology of Zebrafish (DAOZ). In the GEMS, anatomical terms from this ontology, together with terms from the Gene Ontology (GO), are also used to annotate patterns of gene expression, in this manner providing mechanisms for integration and retrieval. The annotations are the glue for integration of patterns of gene expression in GEMS as well as in other biomolecular databases. On the one hand, zebrafish anatomy terminology allows gene expression data within GEMS to be integrated with phenotypical data in the 3D atlas of zebrafish development. On the other hand, GO terms extend GEMS expression pattern integration to a wide range of bioinformatics resources.

  16. dbPAF: an integrative database of protein phosphorylation in animals and fungi.

    Science.gov (United States)

    Ullah, Shahid; Lin, Shaofeng; Xu, Yang; Deng, Wankun; Ma, Lili; Zhang, Ying; Liu, Zexian; Xue, Yu

    2016-03-24

    Protein phosphorylation is one of the most important post-translational modifications (PTMs) and regulates a broad spectrum of biological processes. Recent progress in phosphoproteomic identification has generated a flood of phosphorylation sites, while the integration of these sites is an urgent need. In this work, we developed a curated database, dbPAF, containing known phosphorylation sites in H. sapiens, M. musculus, R. norvegicus, D. melanogaster, C. elegans, S. pombe and S. cerevisiae. From the scientific literature and public databases, we collected and integrated a total of 54,148 phosphoproteins with 483,001 phosphorylation sites. Multiple options were provided for accessing the data, while original references and other annotations are also present for each phosphoprotein. Based on the new data set, we computationally detected significantly over-represented sequence motifs around phosphorylation sites, predicted potential kinases that are responsible for the modification of collected phospho-sites, and evolutionarily analyzed phosphorylation conservation states across different species. Besides being largely consistent with previous reports, our results also propose new features of phospho-regulation. Taken together, our database can be useful for further analyses of protein phosphorylation in human and other model organisms. The dbPAF database was implemented in PHP + MySQL and is freely available at http://dbpaf.biocuckoo.org.

  17. Different nonideality relationships, different databases and their effects on modeling precipitation from concentrated solutions using numerical speciation codes

    Energy Technology Data Exchange (ETDEWEB)

    Brown, L.F.; Ebinger, M.H.

    1996-08-01

    Four simple precipitation problems are solved to examine the use of numerical equilibrium codes. The study emphasizes concentrated solutions, assumes both ideal and nonideal solutions, and employs different databases and different activity-coefficient relationships. The study uses the EQ3/6 numerical speciation codes. The results show satisfactory material balances and agreement between solubility products calculated from free-energy relationships and those calculated from concentrations and activity coefficients. Precipitates show slightly higher solubilities when the solutions are regarded as nonideal than when considered ideal, agreeing with theory. When a substance may precipitate from a solution dilute in the precipitating substance, a code may or may not predict precipitation, depending on the database or activity-coefficient relationship used. In a problem involving a two-component precipitation, there are only small differences in the precipitate mass and composition between the ideal and nonideal solution calculations. Analysis of this result indicates that this may be a frequent occurrence. An analytical approach is derived for judging whether this phenomenon will occur in any real or postulated precipitation situation. The discussion looks at applications of this approach. In the solutes remaining after the precipitations, there seems to be little consistency in the calculated concentrations and activity coefficients. They do not appear to depend in any coherent manner on the database or activity-coefficient relationship used. These results reinforce warnings in the literature about perfunctory or mechanical use of numerical speciation codes.
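
    The effect of the nonideality relationship is easy to see numerically: different activity-coefficient models diverge as ionic strength grows, shifting computed ion activity products relative to solubility products. The sketch below compares the Guntelberg form of the extended Debye-Huckel equation with the Davies equation; it illustrates the general point, not the specific relationships available in EQ3/6. A = 0.51 is the usual 25 C coefficient.

        # Single-ion activity coefficients: Guntelberg vs. Davies.
        import numpy as np

        A = 0.51
        def guntelberg(z, I):
            return 10 ** (-A * z**2 * np.sqrt(I) / (1 + np.sqrt(I)))

        def davies(z, I):
            return 10 ** (-A * z**2 * (np.sqrt(I) / (1 + np.sqrt(I)) - 0.3 * I))

        for I in (0.001, 0.1, 1.0):            # ionic strength, mol/kg
            print(I, guntelberg(2, I), davies(2, I))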

  18. An XML-Based Networking Method for Connecting Distributed Anthropometric Databases

    Directory of Open Access Journals (Sweden)

    H Cheng

    2007-03-01

    Full Text Available Anthropometric data are used by numerous types of organizations for health evaluation, ergonomics, apparel sizing, fitness training, and many other applications. Data have been collected and stored in electronic databases since at least the 1940s. These databases are owned by many organizations around the world. In addition, the anthropometric studies stored in these databases often employ different standards, terminology, procedures, or measurement sets. To promote the use and sharing of these databases, the World Engineering Anthropometry Resources (WEAR) group was formed and tasked with the integration and publishing of member resources. It is easy to see that organizing worldwide anthropometric data into a single database architecture could be a daunting and expensive undertaking. The challenges of WEAR integration lie mainly in the areas of distributed and disparate data, different standards and formats, independent memberships, and limited development resources. Fortunately, XML schema and web services provide an alternative method for networking databases, referred to as the Loosely Coupled WEAR Integration. A standard XML schema can be defined and used as a type of Rosetta stone to translate the anthropometric data into a universal format, and a web services system can be set up to link the databases to one another. In this way, the originators of the data can keep their data locally along with their own data management system and user interface, but their data can be searched and accessed as part of the larger data network, and even combined with the data of others. This paper will identify requirements for WEAR integration, review XML as the universal format, review different integration approaches, and propose a hybrid web services/data mart solution.
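
    A minimal sketch of the "Rosetta stone" translation step: a local record in a hypothetical schema is mapped onto an assumed universal exchange vocabulary. All tag names and the terminology map below are invented for illustration; they are not the actual WEAR schema.

        import xml.etree.ElementTree as ET

        # Hypothetical local record; tag names are invented, not the WEAR schema.
        local_xml = """<record>
          <subjectID>S-0042</subjectID>
          <measure name="stature" units="mm">1742</measure>
        </record>"""

        # Map local terminology onto a shared (assumed) exchange vocabulary.
        TERM_MAP = {"stature": "BodyHeight"}

        def to_exchange_format(xml_text):
            src = ET.fromstring(xml_text)
            out = ET.Element("wearMeasurement")   # assumed universal root element
            ET.SubElement(out, "subject").text = src.findtext("subjectID")
            m = src.find("measure")
            meas = ET.SubElement(out, "measurement",
                                 type=TERM_MAP.get(m.get("name"), m.get("name")),
                                 units=m.get("units"))
            meas.text = m.text
            return ET.tostring(out, encoding="unicode")

        print(to_exchange_format(local_xml))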

  19. PharmDB-K: Integrated Bio-Pharmacological Network Database for Traditional Korean Medicine.

    Directory of Open Access Journals (Sweden)

    Ji-Hyun Lee

    Full Text Available Despite the growing attention given to Traditional Medicine (TM) worldwide, there is no well-known, publicly available, integrated bio-pharmacological Traditional Korean Medicine (TKM) database for researchers in drug discovery. In this study, we have constructed PharmDB-K, which offers comprehensive information relating to TKM-associated drugs (compound, disease indication, and protein relationships). To explore the underlying molecular interactions of TKM, we integrated fourteen different databases, six Pharmacopoeias, and literature, established a massive bio-pharmacological network for TKM, and experimentally validated some cases predicted from the PharmDB-K analyses. Currently, PharmDB-K contains information about 262 TKMs, 7,815 drugs, 3,721 diseases, 32,373 proteins, and 1,887 side effects. One of the unique sets of information in PharmDB-K is the 400 indicator compounds used for standardization of herbal medicine. Furthermore, we are operating PharmDB-K via phExplorer (a network visualization software) and BioMart (a data federation framework) for convenient search and analysis of the TKM network. Database URL: http://pharmdb-k.org, http://biomart.i-pharm.org.

  20. Methods for enhancing numerical integration

    International Nuclear Information System (INIS)

    Doncker, Elise de

    2003-01-01

    We give a survey of common strategies for numerical integration (adaptive, Monte Carlo, quasi-Monte Carlo), and attempt to delineate their realm of applicability. The inherent accuracy and error bounds for basic integration methods are given via such measures as the degree of precision of cubature rules, the index of a family of lattice rules, and the discrepancy of uniformly distributed point sets. Strategies incorporating these basic methods often use paradigms to reduce the error by, e.g., increasing the number of points in the domain or decreasing the mesh size, locally or uniformly. For these processes the order of convergence of the strategy is determined by the asymptotic behavior of the error, and may be too slow in practice for the type of problem at hand. For certain problem classes we may be able to improve the effectiveness of the method or strategy by such techniques as transformations, absorbing a difficult part of the integrand into a weight function, suitable partitioning of the domain, and extrapolation or convergence acceleration. Situations warranting the use of these techniques (possibly in an 'automated' way) are described and illustrated by sample applications.
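
    As a concrete illustration of the order-of-convergence and convergence-acceleration ideas above, this sketch measures the O(h^2) error decay of the composite trapezoidal rule and applies one Richardson extrapolation step; the smooth integrand is an arbitrary example.

        import math

        def trapezoid(f, a, b, n):
            """Composite trapezoidal rule with n subintervals."""
            h = (b - a) / n
            return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n))
                        + 0.5 * f(b))

        f, exact = math.sin, 1.0 - math.cos(1.0)  # integral of sin on [0, 1]

        prev_err = None
        for n in (8, 16, 32, 64):
            err = abs(trapezoid(f, 0.0, 1.0, n) - exact)
            rate = "" if prev_err is None else f"  ratio = {prev_err / err:.2f}"
            print(f"n = {n:3d}  error = {err:.3e}{rate}")  # ratio -> 4, i.e. O(h^2)
            prev_err = err

        # One Richardson extrapolation step eliminates the O(h^2) term, leaving O(h^4).
        t1, t2 = trapezoid(f, 0, 1, 32), trapezoid(f, 0, 1, 64)
        print(f"extrapolated error = {abs((4 * t2 - t1) / 3 - exact):.3e}")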

  1. Med-records: an ADD database of AAEC medical records since 1966

    International Nuclear Information System (INIS)

    Barry, J.M.; Pollard, J.P.; Tucker, A.D.

    1986-08-01

    Since its inception in 1958 most of the staff of the AAEC Research Establishment at Lucas Heights have had annual medical examinations. Medical information accrued since 1966 has been collected as an ADD database to allow ad hoc enquiries to be made against the data. Details are given of the database schema and numerous support routines ranging from the integrity checking of input data to analysis and plotting of the summary results

  2. DPTEdb, an integrative database of transposable elements in dioecious plants.

    Science.gov (United States)

    Li, Shu-Fen; Zhang, Guo-Jun; Zhang, Xue-Jin; Yuan, Jin-Hong; Deng, Chuan-Liang; Gu, Lian-Feng; Gao, Wu-Jun

    2016-01-01

    Dioecious plants usually harbor 'young' sex chromosomes, providing an opportunity to study the early stages of sex chromosome evolution. Transposable elements (TEs) are mobile DNA elements frequently found in plants and are suggested to play important roles in plant sex chromosome evolution. The genomes of several dioecious plants have been sequenced, offering an opportunity to annotate and mine the TE data. However, comprehensive and unified annotation of TEs in these dioecious plants is still lacking. In this study, we constructed a dioecious plant transposable element database (DPTEdb). DPTEdb is a specific, comprehensive and unified relational database and web interface. We used a combination of de novo, structure-based and homology-based approaches to identify TEs from the genome assemblies of previously published data, as well as our own. The database currently integrates eight dioecious plant species and a total of 31 340 TEs along with classification information. DPTEdb provides user-friendly web interfaces to browse, search and download the TE sequences in the database. Users can also use tools, including BLAST, GetORF, HMMER, Cut sequence and JBrowse, to analyze TE data. Given the role of TEs in plant sex chromosome evolution, the database will contribute to the investigation of TEs in structural, functional and evolutionary dynamics of the genome of dioecious plants. In addition, the database will supplement the research of sex diversification and sex chromosome evolution of dioecious plants.Database URL: http://genedenovoweb.ticp.net:81/DPTEdb/index.php. © The Author(s) 2016. Published by Oxford University Press.

  3. Numerical calculations in elementary quantum mechanics using Feynman path integrals

    International Nuclear Information System (INIS)

    Scher, G.; Smith, M.; Baranger, M.

    1980-01-01

    We show that it is possible to do numerical calculations in elementary quantum mechanics using Feynman path integrals. Our method involves discretizing both time and space, and summing paths through matrix multiplication. We give numerical results for various one-dimensional potentials. The calculations of energy levels and wavefunctions take approximately 100 times longer than with standard methods, but there are other problems for which such an approach should be more efficient
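
    A minimal sketch of the technique this record describes: discretize space, represent the short-time propagator as a matrix, and sum over paths by repeated matrix multiplication. Here the imaginary-time version is used so the ground-state energy follows from a trace ratio; the grid extent, time step and harmonic potential are illustrative choices (units with hbar = m = 1), not the paper's setup.

        import numpy as np

        N, L, eps = 401, 10.0, 0.1                 # grid points, box size, time step
        x = np.linspace(-L / 2, L / 2, N)
        dx = x[1] - x[0]
        V = 0.5 * x**2                             # harmonic oscillator potential

        # Short-(imaginary-)time propagator as a matrix over the spatial grid.
        xi, xj = np.meshgrid(x, x, indexing="ij")
        T = np.sqrt(1.0 / (2.0 * np.pi * eps)) * np.exp(
            -(xi - xj) ** 2 / (2.0 * eps)
            - eps * 0.5 * (V[:, None] + V[None, :])) * dx

        # Summing paths = repeated matrix multiplication; the trace ratio then
        # isolates the lowest eigenvalue exp(-eps * E0) as the slice count grows.
        G = T.copy()
        for _ in range(200):
            G = G @ T
        E0 = -np.log(np.trace(G @ T) / np.trace(G)) / eps
        print(f"ground-state energy estimate: {E0:.4f} (exact harmonic value: 0.5)")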

  4. Numerical evaluation of path-integral solutions to Fokker-Planck equations. II. Restricted stochastic processes

    International Nuclear Information System (INIS)

    Wehner, M.F.

    1983-01-01

    A path-integral solution is derived for processes described by nonlinear Fokker-Planck equations together with externally imposed boundary conditions. This path-integral solution is written in the form of a path sum for small time steps and contains, in addition to the conventional volume integral, a surface integral which incorporates the boundary conditions. A previously developed numerical method, based on a histogram representation of the probability distribution, is extended to a trapezoidal representation. This improved numerical approach is combined with the present path-integral formalism for restricted processes and is shown to give accurate results. 35 refs., 5 figs.

  5. Free and constrained symplectic integrators for numerical general relativity

    International Nuclear Information System (INIS)

    Richter, Ronny; Lubich, Christian

    2008-01-01

    We consider symplectic time integrators in numerical general relativity and discuss both free and constrained evolution schemes. For free evolution of ADM-like equations we propose the use of the Stoermer-Verlet method, a standard symplectic integrator which here is explicit in the computationally expensive curvature terms. For the constrained evolution we give a formulation of the evolution equations that enforces the momentum constraints in a holonomically constrained Hamiltonian system and turns the Hamilton constraint function from a weak to a strong invariant of the system. This formulation permits the use of the constraint-preserving symplectic RATTLE integrator, a constrained version of the Stoermer-Verlet method. The behavior of the methods is illustrated on two effectively (1+1)-dimensional versions of Einstein's equations, which allow us to investigate a perturbed Minkowski problem and the Schwarzschild spacetime. We compare symplectic and non-symplectic integrators for free evolution, showing very different numerical behavior for nearly-conserved quantities in the perturbed Minkowski problem. Further we compare free and constrained evolution, demonstrating in our examples that enforcing the momentum constraints can turn an unstable free evolution into a stable constrained evolution. This is demonstrated in the stabilization of a perturbed Minkowski problem with Dirac gauge, and in the suppression of the propagation of boundary instabilities into the interior of the domain in Schwarzschild spacetime
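
    For readers unfamiliar with the Stoermer-Verlet scheme used above, here is a generic sketch for a separable Hamiltonian H = p^2/2 + V(q) on a pendulum-style test problem (not the ADM equations of the record); the bounded long-time energy error is the signature symplectic behavior.

        import numpy as np

        def stoermer_verlet(grad_V, q, p, h, steps):
            """Symplectic Stoermer-Verlet (leapfrog) for H = p^2/2 + V(q)."""
            traj = []
            for _ in range(steps):
                p = p - 0.5 * h * grad_V(q)   # half kick
                q = q + h * p                 # drift
                p = p - 0.5 * h * grad_V(q)   # half kick
                traj.append((q, p))
            return np.array(traj)

        # Pendulum test problem, V(q) = -cos(q): energy error stays bounded
        # over long times instead of drifting secularly.
        traj = stoermer_verlet(np.sin, q=1.0, p=0.0, h=0.1, steps=10000)
        E = 0.5 * traj[:, 1] ** 2 - np.cos(traj[:, 0])
        print(f"energy drift over 10000 steps: {E.max() - E.min():.2e}")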

  6. Advanced Numerical Integration Techniques for HighFidelity SDE Spacecraft Simulation

    Data.gov (United States)

    National Aeronautics and Space Administration — Classic numerical integration techniques, such as the ones at the heart of several NASA GSFC analysis tools, are known to work well for deterministic differential...

  7. Integration of a clinical trial database with a PACS

    International Nuclear Information System (INIS)

    Van Herk, M

    2014-01-01

    Many clinical trials use Electronic Case Report Forms (ECRF), e.g., from OpenClinica. Trial data is augmented if DICOM scans, dose cubes, etc. from the Picture Archiving and Communication System (PACS) are included for data mining. Unfortunately, there is as yet no structured way to collect DICOM objects in trial databases. In this paper, we obtain a tight integration of ECRF and PACS using open source software. Methods: DICOM identifiers for selected images/series/studies are stored in associated ECRF events (e.g., baseline) as follows: 1) JavaScript added to OpenClinica communicates using HTML with a gateway server inside the hospital's firewall; 2) On this gateway, an open source DICOM server runs scripts to query and select the data, returning anonymized identifiers; 3) The scripts then collect, anonymize, zip and transmit selected data to a central trial server; 4) There the data is stored in a DICOM archive which allows authorized ECRF users to view and download the anonymous images associated with each event. Results: All integration scripts are open source. The PACS administrator configures the anonymization script and decides to use the gateway in passive (receiving) mode or in an active mode going out to the PACS to gather data. Our ECRF-centric approach supports automatic data mining by iterating over the cases in the ECRF database, providing the identifiers to load images and the clinical data to correlate with image analysis results. Conclusions: Using open source software and web technology, a tight integration has been achieved between PACS and ECRF.

  8. Microwave Breast Imaging System Prototype with Integrated Numerical Characterization

    Directory of Open Access Journals (Sweden)

    Mark Haynes

    2012-01-01

    Full Text Available The increasing number of experimental microwave breast imaging systems and the need to properly model them have motivated our development of an integrated numerical characterization technique. We use Ansoft HFSS and a formalism we developed previously to numerically characterize an S-parameter-based breast imaging system and link it to an inverse scattering algorithm. We show successful reconstructions of simple test objects using synthetic and experimental data. We demonstrate the sensitivity of image reconstructions to knowledge of the background dielectric properties and show the limits of the current model.

  9. Integration of TGS and CTEN assays using the CTENFIT analysis and databasing program

    International Nuclear Information System (INIS)

    Estep, R.

    2000-01-01

    The CTENFIT program, written for Windows 9x/NT in C++, performs databasing and analysis of combined thermal/epithermal neutron (CTEN) passive and active neutron assay data and integrates that with isotopics results and gamma-ray data from methods such as tomographic gamma scanning (TGS). The binary database is reflected in a companion Excel database that allows extensive customization via Visual Basic for Applications macros. Automated analysis options make the analysis of the data transparent to the assay system operator. Various record browsers and information displays simplify record-keeping tasks.

  10. Note on the numerical calculation of the Fermi-Dirac integrals

    International Nuclear Information System (INIS)

    Graef, H.; Pabst, M.

    1977-11-01

    Expansions of the Fermi-Dirac integrals Fsub(α)(x) are developed, suitable for numerical computation. Only integrals of integer or half-integer order are treated, and expansion coefficients are tabulated for Fsub(1)(x), ..., Fsub(9)(x) and Fsub(-1/2)(x), ..., Fsub(7/2)(x). Maximal relative errors vary with the function and interval considered, but are less than 3 x 10^-6. (orig.)
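
    For reference, the Fermi-Dirac integral of order α treated in this record is commonly defined (in one widespread normalization; some authors omit the Gamma factor) as

        F_\alpha(x) = \frac{1}{\Gamma(\alpha+1)} \int_0^\infty
                      \frac{t^\alpha}{1 + e^{\,t-x}} \, dt,
        \qquad F_\alpha(x) \to e^x \quad (x \to -\infty).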

  11. Using ontology databases for scalable query answering, inconsistency detection, and data integration

    Science.gov (United States)

    Dou, Dejing

    2011-01-01

    An ontology database is a basic relational database management system that models an ontology plus its instances. To reason over the transitive closure of instances in the subsumption hierarchy, for example, an ontology database can either unfold views at query time or propagate assertions using triggers at load time. In this paper, we use existing benchmarks to evaluate our method—using triggers—and we demonstrate that by forward computing inferences, we not only improve query time, but the improvement appears to cost only more space (not time). However, we go on to show that the true penalties were simply opaque to the benchmark, i.e., the benchmark inadequately captures load-time costs. We have applied our methods to two case studies in biomedicine, using ontologies and data from genetics and neuroscience to illustrate two important applications: first, ontology databases answer ontology-based queries effectively; second, using triggers, ontology databases detect instance-based inconsistencies—something not possible using views. Finally, we demonstrate how to extend our methods to perform data integration across multiple, distributed ontology databases. PMID:22163378
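
    A toy version of the trigger-based forward propagation contrasted above with view unfolding, using SQLite from Python. The schema, class names and the recursive-trigger pragma are illustrative, not the paper's actual implementation.

        import sqlite3

        db = sqlite3.connect(":memory:")
        # Needed so the trigger re-fires on its own inserts and materializes the
        # full transitive closure (SQLite disables recursive triggers by default).
        db.execute("PRAGMA recursive_triggers = ON")
        db.executescript("""
        CREATE TABLE is_a (sub TEXT, super TEXT);       -- ontology: A is_a B
        CREATE TABLE instance_of (ind TEXT, cls TEXT);  -- asserted + inferred facts

        -- Forward-propagate at load time: when an instance is asserted, assert it
        -- for every superclass too, so queries need no view unfolding later.
        CREATE TRIGGER propagate AFTER INSERT ON instance_of
        BEGIN
          INSERT INTO instance_of
            SELECT NEW.ind, super FROM is_a WHERE sub = NEW.cls;
        END;
        """)
        db.executemany("INSERT INTO is_a VALUES (?,?)",
                       [("Neuron", "Cell"), ("Cell", "AnatomicalEntity")])
        db.execute("INSERT INTO instance_of VALUES ('n1', 'Neuron')")

        # The closure was materialized at load time; the query is a plain scan.
        print(db.execute("SELECT cls FROM instance_of WHERE ind='n1'").fetchall())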

  12. Critical assessment of human metabolic pathway databases: a stepping stone for future integration

    Directory of Open Access Journals (Sweden)

    Stobbe Miranda D

    2011-10-01

    Full Text Available Background: Multiple pathway databases are available that describe the human metabolic network and have proven their usefulness in many applications, ranging from the analysis and interpretation of high-throughput data to their use as a reference repository. However, so far the various human metabolic networks described by these databases have not been systematically compared and contrasted, nor has the extent to which they differ been quantified. For a researcher using these databases for particular analyses of human metabolism, it is crucial to know the extent of the differences in content and their underlying causes. Moreover, the outcomes of such a comparison are important for ongoing integration efforts. Results: We compared the genes, EC numbers and reactions of five frequently used human metabolic pathway databases. The overlap is surprisingly low, especially on reaction level, where the databases agree on 3% of the 6968 reactions they have combined. Even for the well-established tricarboxylic acid cycle the databases agree on only 5 out of the 30 reactions in total. We identified the main causes for the lack of overlap. Importantly, the databases are partly complementary. Other explanations include the number of steps a conversion is described in and the number of possible alternative substrates listed. Missing metabolite identifiers and ambiguous names for metabolites also affect the comparison. Conclusions: Our results show that each of the five networks compared provides us with a valuable piece of the puzzle of the complete reconstruction of the human metabolic network. To enable integration of the networks, next to a need for standardizing the metabolite names and identifiers, the conceptual differences between the databases should be resolved. Considerable manual intervention is required to reach the ultimate goal of a unified and biologically accurate model for studying the systems biology of human metabolism. Our comparison
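
    The set-overlap computation behind numbers like "the databases agree on 3% of their combined reactions" reduces to intersections over normalized reaction sets. A toy sketch with invented reaction tuples:

        # Reactions represented as normalized (substrate, product) tuples;
        # the contents of the three "databases" are invented for illustration.
        dbs = {
            "A": {("glucose", "g6p"), ("g6p", "f6p"), ("citrate", "isocitrate")},
            "B": {("glucose", "g6p"), ("citrate", "isocitrate"), ("fum", "mal")},
            "C": {("glucose", "g6p"), ("akg", "succinyl-coa")},
        }
        union = set().union(*dbs.values())
        shared_by_all = set.intersection(*dbs.values())
        print(f"union: {len(union)} reactions; agreed by all: {len(shared_by_all)} "
              f"({100 * len(shared_by_all) / len(union):.0f}%)")
        for name, rx in dbs.items():
            print(name, "covers", f"{100 * len(rx) / len(union):.0f}% of the union")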

  13. Integrating stations from the North America Gravity Database into a local GPS-based land gravity survey

    Science.gov (United States)

    Shoberg, Thomas G.; Stoddard, Paul R.

    2013-01-01

    The ability to augment local gravity surveys with additional gravity stations from easily accessible national databases can greatly increase the areal coverage and spatial resolution of a survey. It is, however, necessary to integrate such data seamlessly with the local survey. One challenge to overcome in integrating data from national databases is that these data are typically of unknown quality. This study presents a procedure for the evaluation and seamless integration of gravity data of unknown quality from a national database with data from a local Global Positioning System (GPS)-based survey. The starting components include the latitude, longitude, elevation and observed gravity at each station location. Interpolated surfaces of the complete Bouguer anomaly are used as a means of quality control and comparison. The result is an integrated dataset of varying quality with many stations having GPS accuracy and other reliable stations of unknown origin, yielding a wider coverage and greater spatial resolution than either survey alone.
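
    The Bouguer anomaly surfaces used above for quality control are derived from exactly the four stored station components (latitude, longitude, elevation, observed gravity). A hedged sketch of the simple Bouguer anomaly (no terrain correction, so "simple" rather than "complete"); the constants are the standard free-air and slab values, while the station values are invented.

        import math

        def normal_gravity_mgal(lat_deg):
            """GRS80 international gravity formula (closed approximation), in mGal."""
            s = math.sin(math.radians(lat_deg))
            s2 = math.sin(math.radians(2 * lat_deg))
            return 978032.7 * (1 + 0.0053024 * s * s - 0.0000058 * s2 * s2)

        def simple_bouguer_anomaly(g_obs_mgal, lat_deg, elev_m, density=2.67):
            """Observed gravity -> simple Bouguer anomaly (no terrain correction).
            0.3086 mGal/m free-air gradient; 0.04193*rho mGal/m Bouguer slab
            (rho in g/cm^3)."""
            free_air = 0.3086 * elev_m
            slab = 0.04193 * density * elev_m
            return g_obs_mgal - normal_gravity_mgal(lat_deg) + free_air - slab

        # Station values below are invented for illustration.
        print(f"{simple_bouguer_anomaly(980300.0, 41.9, 250.0):.2f} mGal")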

  14. Experimental research and numerical simulation on flow resistance of integrated valve

    International Nuclear Information System (INIS)

    Cai Wei; Bo Hanliang; Qin Benke

    2008-01-01

    The flow resistance of the integrated valve is one of the key parameters for the design of the control rod hydraulic drive system (CRHDS). Experimental research on the improved new integrated valve was performed, and key data such as the pressure difference, volume flow, resistance coefficient and flow coefficient of each flow channel were obtained. With the computational fluid dynamics software CFX, numerical simulations were performed to analyze the effect of the Reynolds number (Re) on the flow resistance. On the basis of the experimental and numerical results, empirical fitting formulas for the resistance coefficient were obtained, which provide experimental and theoretical foundations for the optimized design and theoretical analysis of the CRHDS. (authors)

  15. IPAD: the Integrated Pathway Analysis Database for Systematic Enrichment Analysis.

    Science.gov (United States)

    Zhang, Fan; Drabier, Renee

    2012-01-01

    Next-Generation Sequencing (NGS) technologies and Genome-Wide Association Studies (GWAS) generate millions of reads and hundreds of datasets, and there is an urgent need for a better way to accurately interpret and distill such large amounts of data. Extensive pathway and network analysis allows for the discovery of highly significant pathways from a set of disease vs. healthy samples in NGS and GWAS data. Knowledge of activation of these processes will lead to elucidation of the complex biological pathways affected by drug treatment, to patient stratification studies of new and existing drug treatments, and to understanding the underlying anti-cancer drug effects. There are approximately 141 biological human pathway resources as of Jan 2012 according to the Pathguide database. However, most currently available resources do not contain disease, drug or organ specificity information such as disease-pathway, drug-pathway, and organ-pathway associations. Systematically integrating pathway, disease, drug and organ specificity together becomes increasingly crucial for understanding the interrelationships between signaling, metabolic and regulatory pathways, drug action, disease susceptibility, and organ specificity from high-throughput omics data (genomics, transcriptomics, proteomics and metabolomics). We designed the Integrated Pathway Analysis Database for Systematic Enrichment Analysis (IPAD, http://bioinfo.hsc.unt.edu/ipad), defining inter-association between pathway, disease, drug and organ specificity, based on six criteria: 1) comprehensive pathway coverage; 2) gene/protein to pathway/disease/drug/organ association; 3) inter-association between pathway, disease, drug, and organ; 4) multiple and quantitative measurement of enrichment and inter-association; 5) assessment of enrichment and inter-association analysis with the context of the existing biological knowledge and a "gold standard" constructed from reputable and reliable sources; and 6) cross-linking of

  16. KALIMER database development (database configuration and design methodology)

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Kwon, Young Min; Lee, Young Bum; Chang, Won Pyo; Hahn, Do Hee

    2001-10-01

    KALIMER Database is an advanced database to support integrated management for Liquid Metal Reactor Design Technology Development using Web applications. The KALIMER design database consists of a Results Database, Inter-Office Communication (IOC), a 3D CAD database, a Team Cooperation system, and Reserved Documents. The Results Database holds the research results from phase II of Liquid Metal Reactor Design Technology Development under the mid-term and long-term nuclear R and D program. IOC is a linkage control system between subprojects to share and integrate the research results for KALIMER. The 3D CAD database is a schematic design overview for KALIMER. The Team Cooperation system informs team members of research cooperation and meetings. Finally, KALIMER Reserved Documents is developed to manage collected data and several documents since project accomplishment. This report describes the features of the hardware and software and the database design methodology for KALIMER.

  17. An integrated algorithm for hypersonic fluid-thermal-structural numerical simulation

    Science.gov (United States)

    Li, Jia-Wei; Wang, Jiang-Feng

    2018-05-01

    In this paper, an integrated fluid-thermal-structural method based on the finite volume method is presented. A unified system of integral equations is developed as the governing equations for the physical processes of aero-heating and structural heat transfer. The whole physical field is discretized using an upwind finite volume method. To demonstrate its capability, a numerical simulation of Mach 6.47 flow over a stainless steel cylinder shows good agreement with measured values, and the method dynamically simulates the coupled physical processes. Thus, the integrated algorithm proves to be efficient and reliable.

  18. Numerical Integration of the Transport Equation For Infinite Homogeneous Media

    Energy Technology Data Exchange (ETDEWEB)

    Haakansson, Rune

    1962-01-15

    The transport equation for neutrons in infinite homogeneous media is solved by direct numerical integration, taking account of anisotropy and inelastic scattering. The integration has been performed by means of the trapezoidal rule, with energy intervals of constant length on the lethargy scale. The machine used is a Ferranti Mercury computer. Results are given for water, heavy water, an aluminium-water mixture, and an iron-aluminium-water mixture.

  19. Development of integrated parameter database for risk assessment at the Rokkasho Reprocessing Plant

    International Nuclear Information System (INIS)

    Tamauchi, Yoshikazu

    2011-01-01

    Developing a parameter database for Probabilistic Safety Assessment (PSA) is important for applying risk information to plant operation and maintenance activities, because transparency, consistency, and traceability of parameters are needed to explain the adequacy of the evaluation to third parties. For such applications, equipment reliability data, human error rates, and the five factors of the 'five-factor formula' for estimating the amount of radioactive material discharged (the source term) are key inputs. As part of the infrastructure development for risk information application, we developed an integrated parameter database, 'R-POD' (Rokkasho reprocessing Plant Omnibus parameter Database), on a trial basis for the PSA of the Rokkasho Reprocessing Plant. This database consists primarily of the following three parts: 1) an equipment reliability database, 2) a five-factor formula database, and 3) a human reliability database. The underpinning for explaining the validity of the risk assessment can be improved by developing this database. Furthermore, the database is an important tool for the application of risk information, because it provides updated data by incorporating the accumulated operating experience of the Rokkasho reprocessing plant. (author)
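
    The 'five-factor formula' for the source term mentioned above is conventionally written ST = MAR x DR x ARF x RF x LPF, as in facility safety analysis practice (the record does not spell the factors out, so this expansion is an assumption, and the factor values below are invented):

        def source_term(mar_g, dr, arf, rf, lpf):
            """MAR: material at risk (g); DR: damage ratio; ARF: airborne release
            fraction; RF: respirable fraction; LPF: leak path factor."""
            return mar_g * dr * arf * rf * lpf

        # Invented factor values, purely for illustration of the arithmetic.
        print(f"{source_term(1.0e3, 0.1, 1e-3, 0.5, 1e-2):.3e} g respirable release")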

  20. The Center for Integrated Molecular Brain Imaging (Cimbi) database

    DEFF Research Database (Denmark)

    Knudsen, Gitte M.; Jensen, Peter S.; Erritzoe, David

    2016-01-01

    We here describe a multimodality neuroimaging database containing data from healthy volunteers and patients, acquired within the Lundbeck Foundation Center for Integrated Molecular Brain Imaging (Cimbi) in Copenhagen, Denmark. The data is of particular relevance for neurobiological research questions rela... The biobank currently contains blood and, in some instances, saliva samples from about 500 healthy volunteers and 300 patients with, e.g., major depression, dementia, substance abuse, obesity, and impulsive aggression. Data continue to be added to the Cimbi database and biobank.

  1. Semantic-JSON: a lightweight web service interface for Semantic Web contents integrating multiple life science databases.

    Science.gov (United States)

    Kobayashi, Norio; Ishii, Manabu; Takahashi, Satoshi; Mochizuki, Yoshiki; Matsushima, Akihiro; Toyoda, Tetsuro

    2011-07-01

    Global cloud frameworks for bioinformatics research databases become huge and heterogeneous; solutions face various diametric challenges comprising cross-integration, retrieval, security and openness. To address this, as of March 2011 organizations including RIKEN published 192 mammalian, plant and protein life sciences databases having 8.2 million data records, integrated as Linked Open or Private Data (LOD/LPD) using SciNetS.org, the Scientists' Networking System. The huge quantity of linked data this database integration framework covers is based on the Semantic Web, where researchers collaborate by managing metadata across public and private databases in a secured data space. This outstripped the data query capacity of existing interface tools like SPARQL. Actual research also requires specialized tools for data analysis using raw original data. To solve these challenges, in December 2009 we developed the lightweight Semantic-JSON interface to access each fragment of linked and raw life sciences data securely under the control of programming languages popularly used by bioinformaticians such as Perl and Ruby. Researchers successfully used the interface across 28 million semantic relationships for biological applications including genome design, sequence processing, inference over phenotype databases, full-text search indexing and human-readable contents like ontology and LOD tree viewers. Semantic-JSON services of SciNetS.org are provided at http://semanticjson.org.

  2. Numerical Treatment of Fixed Point Applied to the Nonlinear Fredholm Integral Equation

    Directory of Open Access Journals (Sweden)

    Berenguer MI

    2009-01-01

    Full Text Available The authors present a method of numerical approximation of the fixed point of an operator, specifically the integral operator associated with a nonlinear Fredholm integral equation, that strongly uses the properties of a classical Schauder basis in the Banach space.
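
    The record's Schauder-basis construction is not reproduced here, but the underlying fixed-point idea can be sketched: Picard iteration on x(t) = f(t) + lam * Integral_0^1 K(t,s) g(x(s)) ds with trapezoidal quadrature. The kernel, data and lam below are invented and chosen small enough that the integral operator is a contraction.

        import numpy as np

        n = 201
        t = np.linspace(0.0, 1.0, n)
        # Trapezoidal quadrature weights on the grid.
        w = np.full(n, t[1] - t[0])
        w[0] = w[-1] = 0.5 * (t[1] - t[0])

        K = np.exp(-np.abs(t[:, None] - t[None, :]))   # kernel K(t, s), invented
        f = np.sin(np.pi * t)                          # data term, invented
        lam, g = 0.2, np.tanh                          # contraction-friendly choices

        x = np.zeros(n)
        for k in range(100):
            x_new = f + lam * K @ (w * g(x))           # one Picard step
            if np.max(np.abs(x_new - x)) < 1e-12:
                break
            x = x_new
        print(f"converged in {k + 1} iterations; x(0.5) = {x[n // 2]:.6f}")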

  3. Numerical integration of some new unified plasticity-creep formulations

    International Nuclear Information System (INIS)

    Krieg, R.D.

    1977-01-01

    The unified formulations seem to lead to very non-linear systems of equations which are very well behaved in some regions and very stiff in other regions, where the word 'stiff' is used in the mathematical sense. Simple conventional methods of integrating incremental constitutive equations are observed to be totally inadequate. A method of numerically integrating the equations is presented. Automatic step size determination based on accuracy and stability is a necessary expense. In the region where accuracy is the limiting condition the equations can be integrated directly. A forward Euler predictor with a trapezoidal corrector is used in the paper. In the region where stability is the limiting condition, direct integration methods become inefficient and an implicit integrator which is suited to stiff equations must be used. A backward Euler method is used in the paper. It is implemented with a Picard iteration method in which a Newton method is used to predict the inelastic strain rate and speed convergence in a Newton-Raphson manner. This allows an analytic expression for the Jacobian to be used, where a full Newton-Raphson would require a numerical approximation to the Jacobian. The starting procedure for the iteration is an adaptation of time independent plasticity ideas. Because of the inherent capability of the unified plasticity-creep formulations, it is felt that these theories will become accepted in the metallurgical community. Structural analysts will then be required to incorporate these formulations and must be prepared to face the difficult implementation inherent in these models. This paper is an attempt to shed some light on the difficulties and expenses involved.
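
    The stiffness issue described above can be seen on a scalar test equation: an explicit (forward-Euler-type) step is stable only below a step-size threshold, while backward Euler remains stable at any step size. The test problem and step sizes are invented, not the constitutive equations of the paper.

        import numpy as np

        # Stiff test problem y' = lam*(y - cos t) with lam = -50. An explicit step
        # is stable only for |1 + h*lam| <= 1, i.e. h <= 0.04 here.
        lam = -50.0
        rhs = lambda t, y: lam * (y - np.cos(t))

        def explicit_euler(h, T=2.0):
            t, y = 0.0, 1.0
            while t < T:
                y += h * rhs(t, y)
                t += h
            return y

        def backward_euler(h, T=2.0):
            t, y = 0.0, 1.0
            while t < T:
                t += h
                # Implicit step solved in closed form (the problem is linear in y).
                y = (y - h * lam * np.cos(t)) / (1.0 - h * lam)
            return y

        # Quasi-steady solution is approximately cos(T) for large |lam|.
        for h in (0.1, 0.02):
            print(f"h={h}: explicit={explicit_euler(h):.3e}  "
                  f"implicit={backward_euler(h):.5f}  ~exact={np.cos(2.0):.5f}")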

  4. KALIMER database development

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

    KALIMER database is an advanced database to support integrated management for liquid metal reactor design technology development using Web applications. The KALIMER design database is composed of a results database, Inter-Office Communication (IOC), a 3D CAD database, and a reserved documents database. The results database holds the research results from all phases of liquid metal reactor design technology development under the mid-term and long-term nuclear R and D program. IOC is a linkage control system between subprojects to share and integrate the research results for KALIMER. The 3D CAD database is a schematic overview of the KALIMER design structure. The reserved documents database is developed to manage several documents and reports since project accomplishment.

  5. KALIMER database development

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

    KALIMER database is an advanced database to support integrated management for liquid metal reactor design technology development using Web applications. The KALIMER design database is composed of a results database, Inter-Office Communication (IOC), a 3D CAD database, and a reserved documents database. The results database holds the research results from all phases of liquid metal reactor design technology development under the mid-term and long-term nuclear R and D program. IOC is a linkage control system between subprojects to share and integrate the research results for KALIMER. The 3D CAD database is a schematic overview of the KALIMER design structure. The reserved documents database is developed to manage several documents and reports since project accomplishment.

  6. DENdb: database of integrated human enhancers

    KAUST Repository

    Ashoor, Haitham

    2015-09-05

    Enhancers are cis-acting DNA regulatory regions that play a key role in distal control of transcriptional activities. Identification of enhancers, coupled with a comprehensive functional analysis of their properties, could improve our understanding of complex gene transcription mechanisms and gene regulation processes in general. We developed DENdb, a centralized on-line repository of predicted enhancers derived from multiple human cell-lines. DENdb integrates enhancers predicted by five different methods generating an enriched catalogue of putative enhancers for each of the analysed cell-lines. DENdb provides information about the overlap of enhancers with DNase I hypersensitive regions, ChIP-seq regions of a number of transcription factors and transcription factor binding motifs, means to explore enhancer interactions with DNA using several chromatin interaction assays and enhancer neighbouring genes. DENdb is designed as a relational database that facilitates fast and efficient searching, browsing and visualization of information.

  7. DENdb: database of integrated human enhancers

    KAUST Repository

    Ashoor, Haitham; Kleftogiannis, Dimitrios A.; Radovanovic, Aleksandar; Bajic, Vladimir B.

    2015-01-01

    Enhancers are cis-acting DNA regulatory regions that play a key role in distal control of transcriptional activities. Identification of enhancers, coupled with a comprehensive functional analysis of their properties, could improve our understanding of complex gene transcription mechanisms and gene regulation processes in general. We developed DENdb, a centralized on-line repository of predicted enhancers derived from multiple human cell-lines. DENdb integrates enhancers predicted by five different methods generating an enriched catalogue of putative enhancers for each of the analysed cell-lines. DENdb provides information about the overlap of enhancers with DNase I hypersensitive regions, ChIP-seq regions of a number of transcription factors and transcription factor binding motifs, means to explore enhancer interactions with DNA using several chromatin interaction assays and enhancer neighbouring genes. DENdb is designed as a relational database that facilitates fast and efficient searching, browsing and visualization of information.

  8. Visualization of numerically simulated aerodynamic flow fields

    International Nuclear Information System (INIS)

    Hian, Q.L.; Damodaran, M.

    1991-01-01

    The focus of this paper is to describe the development and application of an interactive, integrated software package to visualize numerically simulated aerodynamic flow fields, so as to enable the practitioner of computational fluid dynamics to diagnose the numerical simulation and to elucidate essential flow physics from the simulation. The input to the software is the numerical database produced by a supercomputer, typically consisting of flow variables and computational grid geometry. This flow visualization system (FVS), written in the C language, is targeted at Personal IRIS workstations. In order to demonstrate the various visualization modules, the paper also describes the application of this software to visualize two- and three-dimensional flow fields past aerodynamic configurations which have been numerically simulated on the NEC-SXIA supercomputer. 6 refs

  9. Integrated data acquisition, storage, retrieval and processing using the COMPASS DataBase (CDB)

    Energy Technology Data Exchange (ETDEWEB)

    Urban, J., E-mail: urban@ipp.cas.cz [Institute of Plasma Physics AS CR, v.v.i., Za Slovankou 3, 182 00 Praha 8 (Czech Republic); Pipek, J.; Hron, M. [Institute of Plasma Physics AS CR, v.v.i., Za Slovankou 3, 182 00 Praha 8 (Czech Republic); Janky, F.; Papřok, R.; Peterka, M. [Institute of Plasma Physics AS CR, v.v.i., Za Slovankou 3, 182 00 Praha 8 (Czech Republic); Department of Surface and Plasma Science, Faculty of Mathematics and Physics, Charles University in Prague, V Holešovičkách 2, 180 00 Praha 8 (Czech Republic); Duarte, A.S. [Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade Técnica de Lisboa, 1049-001 Lisboa (Portugal)

    2014-05-15

    Highlights: • CDB is used as a new data storage solution for the COMPASS tokamak. • The software is light weight, open, fast and easily extensible and scalable. • CDB seamlessly integrates with any data acquisition system. • Rich metadata are stored for physics signals. • Data can be processed automatically, based on dependence rules. - Abstract: We present a complex data handling system for the COMPASS tokamak, operated by IPP ASCR Prague, Czech Republic [1]. The system, called CDB (COMPASS DataBase), integrates different data sources as an assortment of data acquisition hardware and software from different vendors is used. Based on widely available open source technologies wherever possible, CDB is vendor and platform independent and it can be easily scaled and distributed. The data is directly stored and retrieved using a standard NAS (Network Attached Storage), hence independent of the particular technology; the description of the data (the metadata) is recorded in a relational database. Database structure is general and enables the inclusion of multi-dimensional data signals in multiple revisions (no data is overwritten). This design is inherently distributed as the work is off-loaded to the clients. Both NAS and database can be implemented and optimized for fast local access as well as secure remote access. CDB is implemented in Python language; bindings for Java, C/C++, IDL and Matlab are provided. Independent data acquisitions systems as well as nodes managed by FireSignal [2] are all integrated using CDB. An automated data post-processing server is a part of CDB. Based on dependency rules, the server executes, in parallel if possible, prescribed post-processing tasks.

  10. An information integration system for structured documents, Web, and databases

    OpenAIRE

    Morishima, Atsuyuki

    1998-01-01

    Rapid advance in computer network technology has changed the style of computer utilization. Distributed computing resources over world-wide computer networks are available from our local computers. They include powerful computers and a variety of information sources. This change is raising more advanced requirements. Integration of distributed information sources is one such requirement. In addition to conventional databases, structured documents have been widely used, and have increasing...

  11. Integrated optical circuits for numerical computation

    Science.gov (United States)

    Verber, C. M.; Kenan, R. P.

    1983-01-01

    The development of integrated optical circuits (IOC) for numerical-computation applications is reviewed, with a focus on the use of systolic architectures. The basic architecture criteria for optical processors are shown to be the same as those proposed by Kung (1982) for VLSI design, and the advantages of IOCs over bulk techniques are indicated. The operation and fabrication of electrooptic grating structures are outlined, and the application of IOCs of this type to an existing 32-bit, 32-Mbit/sec digital correlator, a proposed matrix multiplier, and a proposed pipeline processor for polynomial evaluation is discussed. The problems arising from the inherent nonlinearity of electrooptic gratings are considered. Diagrams and drawings of the application concepts are provided.

  12. Integrated database for identifying candidate genes for Aspergillus flavus resistance in maize.

    Science.gov (United States)

    Kelley, Rowena Y; Gresham, Cathy; Harper, Jonathan; Bridges, Susan M; Warburton, Marilyn L; Hawkins, Leigh K; Pechanova, Olga; Peethambaran, Bela; Pechan, Tibor; Luthe, Dawn S; Mylroie, J E; Ankala, Arunkanth; Ozkan, Seval; Henry, W B; Williams, W P

    2010-10-07

    Aspergillus flavus Link:Fr, an opportunistic fungus that produces aflatoxin, is pathogenic to maize and other oilseed crops. Aflatoxin is a potent carcinogen, and its presence markedly reduces the value of grain. Understanding and enhancing host resistance to A. flavus infection and/or subsequent aflatoxin accumulation is generally considered an efficient means of reducing grain losses to aflatoxin. Different proteomic, genomic and genetic studies of maize (Zea mays L.) have generated large data sets with the goal of identifying genes responsible for conferring resistance to A. flavus, or aflatoxin. In order to maximize the usage of different data sets in new studies, including association mapping, we have constructed a relational database with web interface integrating the results of gene expression, proteomic (both gel-based and shotgun), Quantitative Trait Loci (QTL) genetic mapping studies, and sequence data from the literature to facilitate selection of candidate genes for continued investigation. The Corn Fungal Resistance Associated Sequences Database (CFRAS-DB) (http://agbase.msstate.edu/) was created with the main goal of identifying genes important to aflatoxin resistance. CFRAS-DB is implemented using MySQL as the relational database management system running on a Linux server, using an Apache web server, and Perl CGI scripts as the web interface. The database and the associated web-based interface allow researchers to examine many lines of evidence (e.g. microarray, proteomics, QTL studies, SNP data) to assess the potential role of a gene or group of genes in the response of different maize lines to A. flavus infection and subsequent production of aflatoxin by the fungus. CFRAS-DB provides the first opportunity to integrate data pertaining to the problem of A. flavus and aflatoxin resistance in maize in one resource and to support queries across different datasets. The web-based interface gives researchers different query options for mining the database

  13. Brassica database (BRAD) version 2.0: integrating and mining Brassicaceae species genomic resources.

    Science.gov (United States)

    Wang, Xiaobo; Wu, Jian; Liang, Jianli; Cheng, Feng; Wang, Xiaowu

    2015-01-01

    The Brassica database (BRAD) was built initially to assist users in applying Brassica rapa and Arabidopsis thaliana genomic data efficiently to their research. However, many Brassicaceae genomes have been sequenced and released since its construction. These genomes are rich resources for comparative genomics, gene annotation and functional evolutionary studies of Brassica crops. Therefore, we have updated BRAD to version 2.0 (V2.0). In BRAD V2.0, 11 more Brassicaceae genomes have been integrated into the database, namely those of Arabidopsis lyrata, Aethionema arabicum, Brassica oleracea, Brassica napus, Camelina sativa, Capsella rubella, Leavenworthia alabamica, Sisymbrium irio and three extremophiles Schrenkiella parvula, Thellungiella halophila and Thellungiella salsuginea. BRAD V2.0 provides plots of syntenic genomic fragments between pairs of Brassicaceae species, from the level of chromosomes to genomic blocks. The Generic Synteny Browser (GBrowse_syn), a module of the Genome Browser (GBrowse), is used to show syntenic relationships between multiple genomes. Search functions for retrieving syntenic and non-syntenic orthologs, as well as their annotation and sequences, are also provided. Furthermore, genome and annotation information have been imported into GBrowse so that all functional elements can be visualized in one frame. We plan to continually update BRAD by integrating more Brassicaceae genomes into the database. Database URL: http://brassicadb.org/brad/. © The Author(s) 2015. Published by Oxford University Press.

  14. VaProS: a database-integration approach for protein/genome information retrieval

    KAUST Repository

    Gojobori, Takashi; Ikeo, Kazuho; Katayama, Yukie; Kawabata, Takeshi; Kinjo, Akira R.; Kinoshita, Kengo; Kwon, Yeondae; Migita, Ohsuke; Mizutani, Hisashi; Muraoka, Masafumi; Nagata, Koji; Omori, Satoshi; Sugawara, Hideaki; Yamada, Daichi; Yura, Kei

    2016-01-01

    Life science research now heavily relies on all sorts of databases for genome sequences, transcription, protein three-dimensional (3D) structures, protein–protein interactions, phenotypes and so forth. The knowledge accumulated by all the omics research is so vast that a computer-aided search of data is now a prerequisite for starting a new study. In addition, a combinatory search throughout these databases has a chance to extract new ideas and new hypotheses that can be examined by wet-lab experiments. By virtually integrating the related databases on the Internet, we have built a new web application that facilitates life science researchers for retrieving experts’ knowledge stored in the databases and for building a new hypothesis of the research target. This web application, named VaProS, puts stress on the interconnection between the functional information of genome sequences and protein 3D structures, such as structural effect of the gene mutation. In this manuscript, we present the notion of VaProS, the databases and tools that can be accessed without any knowledge of database locations and data formats, and the power of search exemplified in quest of the molecular mechanisms of lysosomal storage disease. VaProS can be freely accessed at http://p4d-info.nig.ac.jp/vapros/.

  15. VaProS: a database-integration approach for protein/genome information retrieval

    KAUST Repository

    Gojobori, Takashi

    2016-12-24

    Life science research now heavily relies on all sorts of databases for genome sequences, transcription, protein three-dimensional (3D) structures, protein–protein interactions, phenotypes and so forth. The knowledge accumulated by all the omics research is so vast that a computer-aided search of data is now a prerequisite for starting a new study. In addition, a combinatory search throughout these databases has a chance to extract new ideas and new hypotheses that can be examined by wet-lab experiments. By virtually integrating the related databases on the Internet, we have built a new web application that facilitates life science researchers for retrieving experts’ knowledge stored in the databases and for building a new hypothesis of the research target. This web application, named VaProS, puts stress on the interconnection between the functional information of genome sequences and protein 3D structures, such as structural effect of the gene mutation. In this manuscript, we present the notion of VaProS, the databases and tools that can be accessed without any knowledge of database locations and data formats, and the power of search exemplified in quest of the molecular mechanisms of lysosomal storage disease. VaProS can be freely accessed at http://p4d-info.nig.ac.jp/vapros/.

  16. Modelling of multidimensional quantum systems by the numerical functional integration

    International Nuclear Information System (INIS)

    Lobanov, Yu.Yu.; Zhidkov, E.P.

    1990-01-01

    The use of numerical functional integration for the description of multidimensional systems in quantum and statistical physics is considered. For multiple functional integrals with respect to Gaussian measures on complete separable metric spaces, new approximation formulas that are exact on a class of polynomial functionals of a given total degree are constructed. The use of the formulas is demonstrated on the example of computing the Green function and the ground-state energy in the multidimensional Calogero model. 15 refs.; 2 tabs.

  17. An integral equation-based numerical solver for Taylor states in toroidal geometries

    Science.gov (United States)

    O'Neil, Michael; Cerfon, Antoine J.

    2018-04-01

    We present an algorithm for the numerical calculation of Taylor states in toroidal and toroidal-shell geometries using an analytical framework developed for the solution to the time-harmonic Maxwell equations. Taylor states are a special case of what are known as Beltrami fields, or linear force-free fields. The scheme of this work relies on the generalized Debye source representation of Maxwell fields and an integral representation of Beltrami fields which immediately yields a well-conditioned second-kind integral equation. This integral equation has a unique solution whenever the Beltrami parameter λ is not a member of a discrete, countable set of resonances which physically correspond to spontaneous symmetry breaking. Several numerical examples relevant to magnetohydrodynamic equilibria calculations are provided. Lastly, our approach easily generalizes to arbitrary geometries, both bounded and unbounded, and of varying genus.
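
    For context, the defining relations referenced above: a Taylor state is a linear force-free (Beltrami) field, so the current is everywhere parallel to the field and the Lorentz force vanishes:

        \nabla \times \mathbf{B} = \lambda \mathbf{B}, \qquad
        \nabla \cdot \mathbf{B} = 0, \qquad
        \mathbf{J} \times \mathbf{B}
          = \frac{1}{\mu_0} (\nabla \times \mathbf{B}) \times \mathbf{B}
          = \mathbf{0}.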

  18. IntPath--an integrated pathway gene relationship database for model organisms and important pathogens.

    Science.gov (United States)

    Zhou, Hufeng; Jin, Jingjing; Zhang, Haojun; Yi, Bo; Wozniak, Michal; Wong, Limsoon

    2012-01-01

    Pathway data are important for understanding the relationship between genes, proteins and many other molecules in living organisms. Pathway gene relationships are crucial information for guidance, prediction, reference and assessment in biochemistry, computational biology, and medicine. Many well-established databases--e.g., KEGG, WikiPathways, and BioCyc--are dedicated to collecting pathway data for public access. However, the effectiveness of these databases is hindered by issues such as incompatible data formats, inconsistent molecular representations, inconsistent molecular relationship representations, inconsistent referrals to pathway names, and incomplete data coverage in individual databases. In this paper, we overcome these issues through extraction, normalization and integration of pathway data from several major public databases (KEGG, WikiPathways, BioCyc, etc). We build a database that not only hosts our integrated pathway gene relationship data for public access but also maintains the necessary updates in the long run. This public repository is named IntPath (Integrated Pathway gene relationship database for model organisms and important pathogens). Four organisms--S. cerevisiae, M. tuberculosis H37Rv, H. Sapiens and M. musculus--are included in this version (V2.0) of IntPath. IntPath uses the "full unification" approach to ensure no deletion and no introduced noise in this process. Therefore, IntPath contains much richer pathway-gene and pathway-gene pair relationships and a much larger number of non-redundant genes and gene pairs than any of the single-source databases. The gene relationships of each gene (measured by average node degree) per pathway are significantly richer. The gene relationships in each pathway (measured by average number of gene pairs per pathway) are also considerably richer in the integrated pathways. Moderate manual curation is involved to get rid of errors and noise from source data (e.g., the gene ID errors in WikiPathways and
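
    A toy sketch of the "full unification" merge described above: pathway-gene pairs from several sources are normalized (unifying referrals to pathway names) and united without deleting anything. All source contents and normalization rules below are invented for illustration.

        # Pathway-gene pairs per source; contents are invented for illustration.
        sources = {
            "KEGG":         {("Glycolysis", "HK1"), ("Glycolysis", "PFKM")},
            "WikiPathways": {("glycolysis", "HK1"), ("Glycolysis ", "ALDOA")},
            "BioCyc":       {("GLYCOLYSIS", "PFKM"), ("TCA cycle", "CS")},
        }

        def norm(pathway, gene):
            # Unify referrals to pathway names and gene symbols.
            return pathway.strip().lower(), gene.upper()

        unified = {norm(p, g) for pairs in sources.values() for (p, g) in pairs}
        print(f"{sum(len(v) for v in sources.values())} source pairs -> "
              f"{len(unified)} non-redundant pairs")
        for p, g in sorted(unified):
            print(p, g)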

  19. Numerical Integration Techniques for Curved-Element Discretizations of Molecule–Solvent Interfaces

    Science.gov (United States)

    Bardhan, Jaydeep P.; Altman, Michael D.; Willis, David J.; Lippow, Shaun M.; Tidor, Bruce; White, Jacob K.

    2012-01-01

    Surface formulations of biophysical modeling problems offer attractive theoretical and computational properties. Numerical simulations based on these formulations usually begin with discretization of the surface under consideration; often, the surface is curved, possessing complicated structure and possibly singularities. Numerical simulations commonly are based on approximate, rather than exact, discretizations of these surfaces. To assess the strength of the dependence of simulation accuracy on the fidelity of surface representation, we have developed methods to model several important surface formulations using exact surface discretizations. Following and refining Zauhar's work (J. Comp.-Aid. Mol. Des. 9:149-159, 1995), we define two classes of curved elements that can exactly discretize the van der Waals, solvent-accessible, and solvent-excluded (molecular) surfaces. We then present numerical integration techniques that can accurately evaluate nonsingular and singular integrals over these curved surfaces. After validating the exactness of the surface discretizations and demonstrating the correctness of the presented integration methods, we present a set of calculations that compare the accuracy of approximate, planar-triangle-based discretizations and exact, curved-element-based simulations of surface-generalized-Born (sGB), surface-continuum van der Waals (scvdW), and boundary-element method (BEM) electrostatics problems. Results demonstrate that continuum electrostatic calculations with BEM using curved elements, piecewise-constant basis functions, and centroid collocation are nearly ten times more accurate than planar-triangle BEM for basis sets of comparable size. The sGB and scvdW calculations give exceptional accuracy even for the coarsest obtainable discretized surfaces. The extra accuracy is attributed to the exact representation of the solute–solvent interface; in contrast, commonly used planar-triangle discretizations can only offer improved

  20. Structure and representation of data elements on factual database - SIST activity in Japan

    International Nuclear Information System (INIS)

    Nakamoto, H.; Onodera, N.

    1990-05-01

    A factual database has a variety of forms and types of data structure that produce various kinds of records composed of a great number of data items, which differ from file to file. Second, a factual database needs higher specialization in its preparation, particularly in content analysis, and users wish to process downloaded data further for analysis, diagnosis, simulation, projection, design, linguistic processing and so on. A meaningful quantitative datum can be divided into consistent sub-elements. In addition to this fine structure of data elements, the representation of data elements is also very important for integrating factual data into public files. In this paper we discuss problems and thoughts about the structure and representation of data elements contained in numerical information on a practical basis. The guideline discussed here is being drafted under sponsorship of the Government and is being implemented to build a database of space experiments. The guideline covers the expression, unification, notification and handling of data for numerical information in machine-readable form, such as numerical values, numerical formulas, graphics, semi-quantitative values, significant figures, ranged data, accuracy and precision, conversion of units, error information and so on. (author)

  1. Structure and representation of data elements on factual database - SIST activity in Japan

    Energy Technology Data Exchange (ETDEWEB)

    Nakamoto, H [Integrated Researches for Information Science, Tokyo (Japan); Onodera, N [Japan Information Center of Science and Technology, Tokyo (Japan)

    1990-05-01

    A factual database has a variety of forms and types of data structure that produce various kinds of records composed of a great number of data items, which differ from file to file. Second, a factual database needs higher specialization in its preparation, particularly in content analysis, and users wish to process downloaded data further for analysis, diagnosis, simulation, projection, design, linguistic processing and so on. A meaningful quantitative datum can be divided into consistent sub-elements. In addition to this fine structure of data elements, the representation of data elements is also very important for integrating factual data into public files. In this paper we discuss problems and thoughts about the structure and representation of data elements contained in numerical information on a practical basis. The guideline discussed here is being drafted under sponsorship of the Government and is being implemented to build a database of space experiments. The guideline covers the expression, unification, notification and handling of data for numerical information in machine-readable form, such as numerical values, numerical formulas, graphics, semi-quantitative values, significant figures, ranged data, accuracy and precision, conversion of units, error information and so on. (author).

  2. Digital database architecture and delineation methodology for deriving drainage basins, and a comparison of digitally and non-digitally derived numeric drainage areas

    Science.gov (United States)

    Dupree, Jean A.; Crowfoot, Richard M.

    2012-01-01

    The drainage basin is a fundamental hydrologic entity used for studies of surface-water resources and during planning of water-related projects. Numeric drainage areas published by the U.S. Geological Survey water science centers in Annual Water Data Reports and on the National Water Information System (NWIS) Web site are still primarily derived from hard-copy sources and by manual delineation of polygonal basin areas on paper topographic map sheets. To expedite numeric drainage area determinations, the Colorado Water Science Center developed a digital database structure and a delineation methodology based on the hydrologic unit boundaries in the National Watershed Boundary Dataset. This report describes the digital database architecture and delineation methodology and also presents the results of a comparison of the numeric drainage areas derived using this digital methodology with those derived using traditional, non-digital methods. (Please see report for full Abstract)

  3. An integrated photogrammetric and spatial database management system for producing fully structured data using aerial and remote sensing images.

    Science.gov (United States)

    Ahmadi, Farshid Farnood; Ebadi, Hamid

    2009-01-01

    3D spatial data acquired from aerial and remote sensing images by photogrammetric techniques is one of the most accurate and economical data sources for GIS, map production, and spatial data updating. However, there are still many problems concerning the storage, structuring and appropriate management of spatial data obtained using these techniques. Given the capabilities of spatial database management systems (SDBMSs), direct integration of photogrammetric systems and spatial database management systems can save time and cost in producing and updating digital maps. This integration is accomplished by replacing digital maps with a single spatial database. Applying spatial databases overcomes the problem of managing spatial and attribute data in a coupled approach; this management approach is one of the main obstacles in GISs to using the map products of photogrammetric workstations. By means of these integrated systems, it also becomes possible to provide structured spatial data, based on OGC (Open GIS Consortium) standards and topological relations between different feature classes, during the feature digitizing process. In this paper, the integration of photogrammetric systems and SDBMSs is evaluated. Then, different levels of integration are described. Finally, the design, implementation and testing of a software package called Integrated Photogrammetric and Oracle Spatial Systems (IPOSS) is presented.

  4. An Integrated Photogrammetric and Spatial Database Management System for Producing Fully Structured Data Using Aerial and Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Farshid Farnood Ahmadi

    2009-03-01

    Full Text Available 3D spatial data acquired from aerial and remote sensing images by photogrammetric techniques is one of the most accurate and economical data sources for GIS, map production, and spatial data updating. However, there are still many problems concerning the storage, structuring and appropriate management of spatial data obtained using these techniques. Given the capabilities of spatial database management systems (SDBMSs), direct integration of photogrammetric systems and spatial database management systems can save time and cost in producing and updating digital maps. This integration is accomplished by replacing digital maps with a single spatial database. Applying spatial databases overcomes the problem of managing spatial and attribute data in a coupled approach; this management approach is one of the main obstacles in GISs to using the map products of photogrammetric workstations. By means of these integrated systems, it also becomes possible to provide structured spatial data, based on OGC (Open GIS Consortium) standards and topological relations between different feature classes, during the feature digitizing process. In this paper, the integration of photogrammetric systems and SDBMSs is evaluated. Then, different levels of integration are described. Finally, the design, implementation and testing of a software package called Integrated Photogrammetric and Oracle Spatial Systems (IPOSS) is presented.
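
    The paper's central design move, replacing separate map files plus attribute tables with a single spatial database, can be sketched with nothing more than SQLite from the standard library. The table layout and the WKT geometry encoding below are our own illustration, not the IPOSS schema.

        import sqlite3

        # One store for both geometry and attributes, instead of the coupled
        # approach (graphics file here, attribute table there).
        con = sqlite3.connect(":memory:")
        con.execute("""
            CREATE TABLE building (
                id       INTEGER PRIMARY KEY,
                name     TEXT,
                height_m REAL,
                geom_wkt TEXT      -- geometry as OGC Well-Known Text
            )
        """)
        con.execute(
            "INSERT INTO building (name, height_m, geom_wkt) VALUES (?, ?, ?)",
            ("Station", 12.5, "POLYGON((0 0, 20 0, 20 30, 0 30, 0 0))"),
        )

        # Attribute query and geometry retrieval hit the same record.
        for name, h, wkt in con.execute(
                "SELECT name, height_m, geom_wkt FROM building WHERE height_m > 10"):
            print(name, h, wkt)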

  5. OECD/NEA data bank scientific and integral experiments databases in support of knowledge preservation and transfer

    International Nuclear Information System (INIS)

    Sartori, E.; Kodeli, I.; Mompean, F.J.; Briggs, J.B.; Gado, J.; Hasegawa, A.; D'hondt, P.; Wiesenack, W.; Zaetta, A.

    2004-01-01

    The OECD/Nuclear Energy Data Bank was established by its member countries as an institution to allow effective sharing of knowledge and its basic underlying information and data in key areas of nuclear science and technology. The activities as regards preserving and transferring knowledge consist of the: 1) Acquisition of basic nuclear data, computer codes and experimental system data needed over a wide range of nuclear and radiation applications; 2) Independent verification and validation of these data using quality assurance methods, adding value through international benchmark exercises, workshops and meetings and by issuing relevant reports with conclusions and recommendations, as well as by organising training courses to ensure their qualified and competent use; 3) Dissemination of the different products to authorised establishments in member countries and collecting and integrating user feedback. Of particular importance has been the establishment of basic and integral experiments databases and the methodology developed with the aim of knowledge preservation and transfer. Databases established thus far include: 1) IRPhE - International Reactor Physics Experimental Benchmarks Evaluations, 2) SINBAD - a radiation shielding experiments database (nuclear reactors, fusion neutronics and accelerators), 3) IFPE - International Fuel Performance Benchmark Experiments Database, 4) TDB - The Thermochemical Database Project, 5) ICSBEP - International Criticality Safety Benchmark Evaluation Project, 6) CCVM - CSNI Code Validation Matrix of Thermal-hydraulic Codes for LWR LOCA and Transients. This paper will concentrate on knowledge preservation and transfer concepts and methods related to some of the integral experiments and the TDB. (author)

  6. MAGIC Database and Interfaces: An Integrated Package for Gene Discovery and Expression

    Directory of Open Access Journals (Sweden)

    Lee H. Pratt

    2006-03-01

    Full Text Available The rapidly increasing rate at which biological data is being produced requires a corresponding growth in relational databases and associated tools that can help laboratories contend with that data. With this need in mind, we describe here a Modular Approach to a Genomic, Integrated and Comprehensive (MAGIC) Database. This Oracle 9i database derives from an initial focus in our laboratory on gene discovery via production and analysis of expressed sequence tags (ESTs), and subsequently on gene expression as assessed by both EST clustering and microarrays. The MAGIC Gene Discovery portion of the database focuses on information derived from DNA sequences and on its biological relevance. In addition to MAGIC SEQ-LIMS, which is designed to support activities in the laboratory, it contains several additional subschemas. The latter include MAGIC Admin for database administration, MAGIC Sequence for sequence processing as well as sequence and clone attributes, MAGIC Cluster for the results of EST clustering, MAGIC Polymorphism in support of microsatellite and single-nucleotide-polymorphism discovery, and MAGIC Annotation for electronic annotation by BLAST and BLAT. The MAGIC Microarray portion is a MIAME-compliant database with two components at present. These are MAGIC Array-LIMS, which makes possible remote entry of all information into the database, and MAGIC Array Analysis, which provides data mining and visualization. Because all aspects of interaction with the MAGIC Database are via a web browser, it is ideally suited not only for individual research laboratories but also for core facilities that serve clients at any distance.

  7. Database Description - RMOS | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available Database Description - General information: Database name: RMOS. Contact: Shoshi Kikuchi (research unit). Database classification: Plant databases - Rice Microarray Data and other Gene Expression Databases. Organism: Oryza sativa (Taxonomy ID: 4530). Referenced databases: Rice Expression Database (RED), Rice full-length cDNA Database (KOME), Rice Genome Integrated Map Database (INE), Rice Mutant Panel Database (Tos17), Rice Genome Annotation Database.

  8. An Integrated Molecular Database on Indian Insects.

    Science.gov (United States)

    Pratheepa, Maria; Venkatesan, Thiruvengadam; Gracy, Gandhi; Jalali, Sushil Kumar; Rangheswaran, Rajagopal; Antony, Jomin Cruz; Rai, Anil

    2018-01-01

    The MOlecular Database on Indian Insects (MODII) is an online database linking several resources: Insect Pest Info, the Insect Barcode Information System (IBIn), insect whole-genome sequences, other genomic resources of the National Bureau of Agricultural Insect Resources (NBAIR), whole-genome sequencing of honey bee viruses, an insecticide resistance gene database, and genomic tools. The database was developed with a holistic approach to collecting phenomic and genomic information on agriculturally important insects. This insect resource database is freely available online at http://cib.res.in/.

  9. FY1995 transduction method and CAD database systems for integrated design; 1995 nendo transduction ho to CAD database togo sekkei shien system

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    The Transduction method, developed by the research coordinator and Prof. Muroga, is one of the most popular methods for designing large-scale integrated circuits, and is thus used by major design-tool companies in the USA and Japan. The major objective of the research is to improve the method's capability and to exploit its reusable property by combining it with CAD databases. Major results of the project are as follows. (1) Improvement of the Transduction method: efficiency, capability and the maximum circuit size are improved; the error compensation method is also improved. (2) Applications to new logic elements: the Transduction method is modified to cope with wired logic and FPGAs. (3) CAD databases: one of the major advantages of the Transduction method is the 'reusability' of already designed circuits, which makes it well suited to combination with CAD databases; we designed CAD databases suitable for cooperative design using the Transduction method. (4) Program development: programs were developed for Windows95 and prepared for distribution. (NEDO)

  10. FY1995 transduction method and CAD database systems for integrated design; 1995 nendo transduction ho to CAD database togo sekkei shien system

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    The Transduction method, developed by the research coordinator and Prof. Muroga, is one of the most popular methods for designing large-scale integrated circuits, and is thus used by major design-tool companies in the USA and Japan. The major objective of the research is to improve the method's capability and to exploit its reusable property by combining it with CAD databases. Major results of the project are as follows. (1) Improvement of the Transduction method: efficiency, capability and the maximum circuit size are improved; the error compensation method is also improved. (2) Applications to new logic elements: the Transduction method is modified to cope with wired logic and FPGAs. (3) CAD databases: one of the major advantages of the Transduction method is the 'reusability' of already designed circuits, which makes it well suited to combination with CAD databases; we designed CAD databases suitable for cooperative design using the Transduction method. (4) Program development: programs were developed for Windows95 and prepared for distribution. (NEDO)

  11. Practical integrated simulation systems for coupled numerical simulations in parallel

    Energy Technology Data Exchange (ETDEWEB)

    Hazama, Osamu; Guo, Zhihong [Japan Atomic Energy Research Inst., Centre for Promotion of Computational Science and Engineering, Tokyo (Japan)

    2003-07-01

    In order for numerical simulations to reflect 'real-world' phenomena and occurrences, the incorporation of multidisciplinary and multi-physics simulations considering various physical models and factors is becoming essential. However, there still exist many obstacles to such simulations. For example, it is still difficult in many instances to develop satisfactory software packages which allow for such coupled simulations, and such simulations require more computational resources. A precise multi-physics simulation today requires parallel processing, which again complicates the process. Under the international cooperative efforts between CCSE/JAERI and Fraunhofer SCAI, a German institute, a library called MpCCI (Mesh-based Parallel Code Coupling Interface) has been implemented together with a library called STAMPI to couple two existing codes into an 'integrated numerical simulation system' intended for meta-computing environments. (authors)
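
    The essence of such code coupling is a time-stepping loop in which independently written solvers exchange interface data every step. The toy below is a generic partitioned-coupling sketch in plain Python, not the MpCCI or STAMPI API: two one-dimensional heat-conduction 'codes' share a single interface temperature.

        import numpy as np

        def step(u, left, right, alpha=0.4):
            """One explicit diffusion step on a 1D segment with Dirichlet ends."""
            v = u.copy()
            v[1:-1] = u[1:-1] + alpha * (u[2:] - 2 * u[1:-1] + u[:-2])
            v[0], v[-1] = left, right
            return v

        # Two separately owned subdomains coupled at a shared interface.
        a = np.full(11, 100.0)   # hot code A
        b = np.full(11, 0.0)     # cold code B

        for _ in range(200):
            interface = 0.5 * (a[-2] + b[1])   # data exchange each coupling step
            a = step(a, 100.0, interface)
            b = step(b, interface, 0.0)

        print(f"interface temperature ~ {a[-1]:.2f}")

    In a real coupled run each solver would live in its own process (possibly its own machine), and the interface exchange would go through a coupling library rather than shared arrays; the loop structure is unchanged.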

  12. Global search tool for the Advanced Photon Source Integrated Relational Model of Installed Systems (IRMIS) database

    International Nuclear Information System (INIS)

    Quock, D.E.R.; Cianciarulo, M.B.

    2007-01-01

    The Integrated Relational Model of Installed Systems (IRMIS) is a relational database tool that has been implemented at the Advanced Photon Source to maintain an updated account of approximately 600 control system software applications, 400,000 process variables, and 30,000 control system hardware components. To effectively display this large amount of control system information to operators and engineers, IRMIS was initially built with nine Web-based viewers: Applications Organizing Index, IOC, PLC, Component Type, Installed Components, Network, Controls Spares, Process Variables, and Cables. However, since each viewer is designed to provide details from only one major category of the control system, the necessity for a one-stop global search tool for the entire database became apparent. The user requirements for extremely fast database search time and ease of navigation through search results led to the choice of Asynchronous JavaScript and XML (AJAX) technology in the implementation of the IRMIS global search tool. Unique features of the global search tool include a two-tier level of displayed search results, and a database data integrity validation and reporting mechanism.

  13. CPLA 1.0: an integrated database of protein lysine acetylation.

    Science.gov (United States)

    Liu, Zexian; Cao, Jun; Gao, Xinjiao; Zhou, Yanhong; Wen, Longping; Yang, Xiangjiao; Yao, Xuebiao; Ren, Jian; Xue, Yu

    2011-01-01

    As a reversible post-translational modification (PTM) discovered decades ago, protein lysine acetylation was long known for its regulation of transcription through the modification of histones. Recent studies discovered that lysine acetylation targets a broad range of substrates and plays an essential role in cellular metabolic regulation in particular. Although acetylation is comparable in scope with other major PTMs such as phosphorylation, an integrated resource still remained to be developed. In this work, we present the compendium of protein lysine acetylation (CPLA) database for lysine-acetylated substrates with their sites. From the scientific literature, we manually collected 7151 experimentally identified acetylation sites in 3311 targets. We statistically studied the regulatory roles of lysine acetylation by analyzing the Gene Ontology (GO) and InterPro annotations. Combined with protein-protein interaction information, we systematically discovered a potential human lysine acetylation network (HLAN) among histone acetyltransferases (HATs), substrates and histone deacetylases (HDACs). In particular, there are 1862 triplet relationships of HAT-substrate-HDAC retrieved from the HLAN, at least 13 of which were previously experimentally verified. The online service of the CPLA database was implemented in PHP + MySQL + JavaScript, while the local packages were developed in JAVA 1.5 (J2SE 5.0). The CPLA database is freely available for all users at: http://cpla.biocuckoo.org.

  14. pySecDec: A toolbox for the numerical evaluation of multi-scale integrals

    Science.gov (United States)

    Borowka, S.; Heinrich, G.; Jahn, S.; Jones, S. P.; Kerner, M.; Schlenk, J.; Zirke, T.

    2018-01-01

    We present pySecDec, a new version of the program SecDec, which performs the factorization of dimensionally regulated poles in parametric integrals and the subsequent numerical evaluation of the finite coefficients. The algebraic part of the program is now written in the form of Python modules, which allow very flexible usage. The optimization of the C++ code, generated using FORM, has been improved, leading to faster numerical convergence. The new version also creates a library of the integrand functions, so that it can be linked to user-specific codes for the evaluation of matrix elements in a way similar to analytic integral libraries.
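
    The underlying idea, factorizing a dimensionally regulated parametric integral into an explicit pole times a coefficient plus a finite, numerically integrable remainder, fits in a few lines. The sketch below is a generic end-point subtraction, not pySecDec's interface: for I(eps) = integral_0^1 x^(-1+eps) f(x) dx, subtracting f(0) isolates the 1/eps pole and leaves a finite integrand.

        from scipy.integrate import quad
        import numpy as np

        def f(x):
            return np.exp(x)

        # I(eps) = int_0^1 x**(-1+eps) * f(x) dx develops a 1/eps pole from x -> 0.
        # Subtracting f(0) factorizes it: I(eps) = f(0)/eps + finite(eps).
        def finite_part(eps=0.0):
            val, _ = quad(lambda x: x**(-1.0 + eps) * (f(x) - f(0.0)), 0.0, 1.0)
            return val

        print(f"pole coefficient: {f(0.0)} / eps")
        print(f"finite part at eps=0: {finite_part():.10f}")   # Ein(1) ~ 1.3179021515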

  15. A semantic data dictionary method for database schema integration in CIESIN

    Science.gov (United States)

    Hinds, N.; Huang, Y.; Ravishankar, C.

    1993-08-01

    CIESIN (Consortium for International Earth Science Information Network) is funded by NASA to investigate the technology necessary to integrate and facilitate the interdisciplinary use of Global Change information. A clear element of this mission is providing a link between the various global change data sets, in particular between the physical sciences and the human (social) sciences. The typical scientist using the CIESIN system will want to know how phenomena in an outside field affect his/her work. For example, a medical researcher might ask: how does air quality affect emphysema? This and many similar questions will require sophisticated semantic data integration. The researcher who raised the question may be familiar with medical data sets containing emphysema occurrences, but may know little, if anything, about the existence or location of air-quality data. It is easy to envision a system which would allow that investigator to locate and perform a 'join' on two data sets, one containing emphysema cases and the other containing air-quality levels. No such system exists today. One major obstacle to providing such a system is heterogeneity, which falls into two broad categories. 'Database system' heterogeneity involves differences in data models and packages. 'Data semantic' heterogeneity involves differences in terminology between disciplines, which translates into data semantic issues and varying levels of data refinement, from raw to summary. Our work investigates a global data dictionary mechanism to facilitate a merged data service. Specifically, we propose using a semantic tree during schema definition to aid in locating and integrating heterogeneous databases.

  16. A numerical integration approach suitable for simulating PWR dynamics using a microcomputer system

    International Nuclear Information System (INIS)

    Zhiwei, L.; Kerlin, T.W.

    1983-01-01

    It is attractive to use microcomputer systems to simulate nuclear power plant dynamics for the purposes of teaching and/or control system design. An analysis and feasibility comparison of existing numerical integration methods has been made. The criteria for choosing the integration step size with various numerical integration methods, including the matrix exponential method, are derived. In order to speed up the simulation, an approach is presented using the Newton recursion calculus, which avoids convergence limitations in choosing the integration step size; accuracy considerations then set the step-size limit. The advantages of this method have been demonstrated through a case study using a CBM model 8032 microcomputer to simulate a reduced-order linear PWR model under various perturbations. It has been proven theoretically and practically that the Runge-Kutta method and the Adams-Moulton method are not feasible. The matrix exponential method is good in accuracy and fairly good in speed. The Newton recursion method can save 3/4 to 4/5 of the computing time compared to the matrix exponential method, with reasonable accuracy. This method can be extended to deal with nonlinear nuclear power plant models and higher-order models as well.
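
    For a linear state-space model x' = Ax + Bu with piecewise-constant input, the matrix exponential method advances the state exactly over each step, so accuracy of the input representation, not stability, limits the step size. A minimal sketch with SciPy; the two-state system is an arbitrary stand-in, not the paper's reduced-order PWR model.

        import numpy as np
        from scipy.linalg import expm

        # x' = A x + B u, with u held constant over each step of length h.
        A = np.array([[-1.0, 0.5],
                      [0.2, -0.3]])
        B = np.array([[1.0],
                      [0.0]])
        h = 0.1

        # Exact one-step transition for constant u: x+ = Ad x + Bd u,
        # with Ad = e^(A h) and Bd = A^-1 (Ad - I) B.
        Ad = expm(A * h)
        Bd = np.linalg.solve(A, (Ad - np.eye(2)) @ B)

        x = np.zeros((2, 1))
        u = np.array([[1.0]])          # step perturbation
        for k in range(100):
            x = Ad @ x + Bd @ u

        print("state after 10 s:", x.ravel())
        # The steady state for constant u solves A x = -B u:
        print("steady state:    ", np.linalg.solve(A, -B @ u).ravel())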

  17. Structural Health Monitoring of Tall Buildings with Numerical Integrator and Convex-Concave Hull Classification

    Directory of Open Access Journals (Sweden)

    Suresh Thenozhi

    2012-01-01

    Full Text Available An important objective of health monitoring systems for tall buildings is to diagnose the state of the building and to evaluate its possible damage. In this paper, we use our prototype to evaluate our data-mining approach for fault monitoring. Offset cancellation and high-pass filtering techniques are combined effectively to solve common problems in the numerical integration of acceleration signals in real-time applications. The integration accuracy is improved compared with other numerical integrators. We then introduce a novel method for support vector machine (SVM) classification, called the convex-concave hull. We use the Jarvis march method to determine the concave (non-convex) hull for the inseparable points. Finally, the vertices of the convex-concave hull are used for SVM training.
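
    The signal-processing chain described (offset cancellation plus high-pass filtering before each numerical integration) can be sketched with standard SciPy tools; the cutoff frequency and the synthetic signal are illustrative, not the prototype's parameters.

        import numpy as np
        from scipy.signal import butter, sosfiltfilt, detrend
        from scipy.integrate import cumulative_trapezoid

        fs = 200.0                       # sampling rate [Hz]
        t = np.arange(0, 20, 1 / fs)
        # Synthetic acceleration: 1 Hz sway plus a DC sensor offset and noise.
        acc = 0.5 * np.sin(2 * np.pi * t) + 0.05 + 0.01 * np.random.randn(t.size)

        sos = butter(4, 0.2, btype="highpass", fs=fs, output="sos")

        def integrate_once(sig):
            """Offset cancellation + high-pass filtering, then trapezoidal integration."""
            sig = detrend(sig)                  # cancel mean and linear drift
            sig = sosfiltfilt(sos, sig)         # suppress remaining low-frequency bias
            return cumulative_trapezoid(sig, t, initial=0.0)

        vel = integrate_once(acc)               # acceleration -> velocity
        disp = integrate_once(vel)              # velocity -> displacement
        print(f"peak displacement ~ {np.max(np.abs(disp)):.4f} m")

    Without the detrend and high-pass steps, the 0.05 m/s^2 offset alone would grow quadratically into a spurious displacement of several meters over the 20 s window.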

  18. Quadrature theory the theory of numerical integration on a compact interval

    CERN Document Server

    Brass, Helmut

    2011-01-01

    Every book on numerical analysis covers methods for the approximate calculation of definite integrals. The authors of this book provide a complementary treatment of the topic by presenting a coherent theory of quadrature methods that encompasses many deep and elegant results as well as a large number of interesting (solved and open) problems. The inclusion of the word "theory" in the title highlights the authors' emphasis on analytical questions, such as the existence and structure of quadrature methods and selection criteria based on strict error bounds for quadrature rules. Systematic analyses of this kind rely on certain properties of the integrand, called "co-observations," which form the central organizing principle for the authors' theory and distinguish their book from other texts on numerical integration. A wide variety of co-observations are examined, as a detailed understanding of these is useful for solving problems in practical contexts. Quadrature theory is often viewed as a branch of numerical analysis.

  19. GeNNet: an integrated platform for unifying scientific workflows and graph databases for transcriptome data analysis

    Directory of Open Access Journals (Sweden)

    Raquel L. Costa

    2017-07-01

    Full Text Available There are many steps in analyzing transcriptome data, from the acquisition of raw data to the selection of a subset of representative genes that explain a scientific hypothesis. The data produced can be represented as networks of interactions among genes, and these may additionally be integrated with other biological databases, such as protein-protein interactions, transcription factors and gene annotation. However, the results of these analyses remain fragmented, imposing difficulties either for posterior inspection of results or for meta-analysis by the incorporation of new related data. Integrating databases and tools into scientific workflows, orchestrating their execution, and managing the resulting data and its respective metadata are challenging tasks. Additionally, a great amount of effort is equally required to run in-silico experiments to structure and compose the information as needed for analysis. Different programs may need to be applied and different files are produced during the experiment cycle. In this context, the availability of a platform supporting experiment execution is paramount. We present GeNNet, an integrated transcriptome analysis platform that unifies scientific workflows with graph databases for selecting relevant genes according to the evaluated biological systems. It includes GeNNet-Wf, a scientific workflow that pre-loads biological data, pre-processes raw microarray data and conducts a series of analyses including normalization, differential expression inference, clusterization and gene set enrichment analysis. A user-friendly web interface, GeNNet-Web, allows for setting parameters, executing, and visualizing the results of GeNNet-Wf executions. To demonstrate the features of GeNNet, we performed case studies with data retrieved from GEO, particularly using a single-factor experiment in different analysis scenarios. As a result, we obtained differentially expressed genes whose biological functions were then analyzed.

  20. Automated granularity to integrate digital information: the "Antarctic Treaty Searchable Database" case study

    Directory of Open Access Journals (Sweden)

    Paul Arthur Berkman

    2006-06-01

    Full Text Available Access to information is necessary, but not sufficient, in our digital era. The challenge is to objectively integrate digital resources based on user-defined objectives for the purpose of discovering information relationships that facilitate interpretation and decision making. The Antarctic Treaty Searchable Database (http://aspire.nvi.net), which is in its sixth edition, provides an example of digital integration based on the automated generation of information granules that can be dynamically combined to reveal objective relationships within and between digital information resources. This case study further demonstrates that automated granularity and dynamic integration can be accomplished simply by utilizing the inherent structure of the digital information resources. Such information integration is relevant to library and archival programs that require long-term preservation of authentic digital resources.

  1. Conservation properties of numerical integration methods for systems of ordinary differential equations

    Science.gov (United States)

    Rosenbaum, J. S.

    1976-01-01

    If a system of ordinary differential equations represents a property-conserving system whose conserved quantity can be expressed linearly (e.g., conservation of mass), it is desirable that the numerical integration method used conserve the same quantity. It is shown that both linear multistep methods and Runge-Kutta methods are 'conservative', and that Newton-type methods used to solve the implicit equations preserve the inherent conservation of the numerical method. It is further shown that a method used by several authors is not conservative.
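
    The property is easy to observe numerically: for a compartment model x' = Ax in which each column of A sums to zero, the total mass sum(x) is a linear invariant, and a Runge-Kutta step preserves it to rounding error. A minimal sketch; the three-compartment matrix is an arbitrary example, not from the paper.

        import numpy as np

        # Mass exchange between 3 compartments; each column of A sums to zero,
        # so d/dt (sum of x) = 0 exactly: conservation of mass.
        A = np.array([[-0.5, 0.2, 0.0],
                      [0.5, -0.7, 0.3],
                      [0.0, 0.5, -0.3]])

        def rk4_step(x, h):
            k1 = A @ x
            k2 = A @ (x + 0.5 * h * k1)
            k3 = A @ (x + 0.5 * h * k2)
            k4 = A @ (x + h * k3)
            return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

        x = np.array([1.0, 0.0, 0.0])
        for _ in range(1000):
            x = rk4_step(x, 0.01)

        # The linear invariant is conserved even though x itself has changed.
        print("total mass:", x.sum())   # stays 1.0 up to rounding

    The reason is structural: since the row vector of ones annihilates A, every stage derivative k_i also sums to zero, so any method whose update is a linear combination of stage derivatives preserves the invariant exactly.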

  2. Numerical simulation and experimental research of the integrated high-power LED radiator

    Science.gov (United States)

    Xiang, J. H.; Zhang, C. L.; Gan, Z. J.; Zhou, C.; Chen, C. G.; Chen, S.

    2017-01-01

    Thermal management has become an urgent problem to be solved as the power of the LED (light emitting diode) chip increases and its integration improves. In order to eliminate the contact resistance of the radiator, this paper presents an integrated high-power LED radiator based on phase-change heat transfer, which realizes a seamless connection between the vapor chamber and the cooling fins. The radiator was optimized by combining numerical simulation and experimental research. The effects of the chamber diameter and the fin parameters on heat dissipation performance were analyzed, and the numerical simulation results were compared with experimentally measured values. The results showed that fin thickness, fin number, fin height and chamber diameter were, in that order of importance, the factors affecting radiator performance.

  3. Document control system as an integral part of RA documentation database application

    International Nuclear Information System (INIS)

    Steljic, M.M.; Ljubenov, V.Lj. (E-mail address of corresponding author: milijanas@vin.bg.ac.yu)

    2005-01-01

    The decision on the final shutdown of the RA research reactor at the Vinca Institute was made in 2002, and the preparations for its decommissioning have therefore begun. All activities are supervised by the International Atomic Energy Agency (IAEA), which also provides technical and expert support. This paper describes the document control system, which is an integral part of the existing RA documentation database application. (author)

  4. Reactor core materials research and integrated material database establishment

    International Nuclear Information System (INIS)

    Ryu, Woo Seog; Jang, J. S.; Kim, D. W.

    2002-03-01

    Two main research areas were covered in this project: one is to establish an integrated database of nuclear materials, and the other is to study the behavior of reactor core materials, which are usually under the most severe conditions in operating plants. During stage I of the project (the three years from 1999), the in- and out-of-reactor properties of stainless steel, the major structural material for the core structures of the PWR (Pressurized Water Reactor), were evaluated, and a specification for nuclear-grade material was established. Damaged core components from domestic power plants, e.g. the orifice of the CVCS and the support pin of the CRGT, were investigated and the causes of damage were revealed. To acquire materials more resistant to nuclear environments, development of alternative alloys was also conducted. For the establishment of the integrated DB, a task force team was set up, including the director of the nuclear materials technology team, project leaders, and relevant members from each project. The DB is now open to the public through the Internet.

  5. Data Integration: The Relational Logic Approach

    CERN Document Server

    Genesereth, Michael

    2010-01-01

    Data integration is a critical problem in our increasingly interconnected but inevitably heterogeneous world. There are numerous data sources available in organizational databases and on public information systems like the World Wide Web. Not surprisingly, the sources often use different vocabularies and different data structures, being created, as they are, by different people, at different times, for different purposes. The goal of data integration is to provide programmatic and human users with integrated access to multiple, heterogeneous data sources, giving each user the illusion of a single, coherent database.

  6. Integrating Environmental and Human Health Databases in the Great Lakes Basin: Themes, Challenges and Future Directions

    Directory of Open Access Journals (Sweden)

    Kate L. Bassil

    2015-03-01

    Full Text Available Many government, academic and research institutions collect environmental data that are relevant to understanding the relationship between environmental exposures and human health. Integrating these data with health outcome data presents new challenges that are important to consider to improve our effective use of environmental health information. Our objective was to identify the common themes related to the integration of environmental and health data, and to suggest ways to address the challenges and make progress toward more effective use of data already collected, to further our understanding of environmental health associations in the Great Lakes region. Environmental and human health databases were identified and reviewed using literature searches and a series of one-on-one and group expert consultations. The databases identified were predominantly environmental stressor databases, with fewer found for health outcomes and human exposure. Nine themes or factors that impact integration were identified, including data availability, accessibility, harmonization, stakeholder collaboration, policy and strategic alignment, resource adequacy, environmental health indicators, and data exchange networks. The use and cost effectiveness of data currently collected could be improved by strategic changes to data collection and access systems that provide better opportunities to identify and study environmental exposures potentially impacting human health.

  7. A purely Lagrangian method for the numerical integration of Fokker-Planck equations

    International Nuclear Information System (INIS)

    Combis, P.; Fronteau, J.

    1986-01-01

    A new numerical approach to Fokker-Planck equations is presented, in which the integration grid moves according to the solution of a differential system. The method is purely Lagrangian, the mean effect of the diffusion being inserted into the differential system itself

  8. Numerical counting ratemeter with variable time constant and integrated circuits

    International Nuclear Information System (INIS)

    Kaiser, J.; Fuan, J.

    1967-01-01

    We present here the prototype of a numerical counting ratemeter, which is a special version of a variable time-constant frequency meter (1). The originality of this work lies in the fact that the change of time constant is carried out automatically. Since the criterion for this change is the accuracy of the reported result, the integration time is varied as a function of the frequency. For the prototype described in this report, the time constant varies from 1 s to 1 ms for frequencies in the range 10 Hz to 10 MHz. The prototype is built entirely of Motorola MECL-type integrated circuits and is thus contained in two relatively small boxes. (authors) [fr]
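
    The accuracy criterion behind the variable time constant follows from Poisson counting statistics: to reach a relative precision p one needs about N = 1/p^2 counts, so the integration time scales as T = N/f. A sketch of the adaptation rule; the target precision is our own assumption, while the clamping bounds mirror the prototype's 1 ms to 1 s range.

        def integration_time(freq_hz, rel_precision=0.01, t_min=1e-3, t_max=1.0):
            """Counting time needed so that sqrt(N)/N <= rel_precision at rate
            freq_hz, clamped to the ratemeter's hardware range."""
            n_required = 1.0 / rel_precision**2     # Poisson: sigma/N = 1/sqrt(N)
            t = n_required / freq_hz
            return min(max(t, t_min), t_max)

        for f in (10.0, 1e3, 1e5, 1e7):
            print(f"{f:>10.0f} Hz -> integrate for {integration_time(f) * 1e3:8.3f} ms")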

  9. Numerical simulation of a lattice polymer model at its integrable point

    International Nuclear Information System (INIS)

    Bedini, A; Owczarek, A L; Prellberg, T

    2013-01-01

    We revisit an integrable lattice model of polymer collapse using numerical simulations. This model was first studied by Blöte and Nienhuis (1989 J. Phys. A: Math. Gen. 22 1415) and it describes polymers with some attraction, thus providing a model for the polymer collapse transition. At a particular set of Boltzmann weights the model is integrable, and the exponents ν = 12/23 ≈ 0.522 and γ = 53/46 ≈ 1.152 have been computed via identification of the scaling dimensions x_t = 1/12 and x_h = -5/48. We directly investigate the polymer scaling exponents via Monte Carlo simulations using the pruned-enriched Rosenbluth method (PERM). By simulating this polymer model for walks up to length 4096 we find ν = 0.576(6) and γ = 1.045(5), which are clearly different from the predicted values. Our estimate for the exponent ν is compatible with the known θ-point value of 4/7 and in agreement with the very recent numerical evaluation by Foster and Pinettes (2012 J. Phys. A: Math. Theor. 45 505003). (paper)
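
    The flavor of such a simulation can be conveyed with plain Rosenbluth sampling, the ancestor of the pruned-enriched (PERM) algorithm used in the paper: walks grow step by step, choosing only among unvisited neighbors and carrying a weight that corrects the sampling bias. A minimal sketch for athermal self-avoiding walks on the square lattice (no attraction weights, and far shorter walks than the study's length 4096).

        import random

        def rosenbluth_walk(n):
            """Grow one self-avoiding walk of n steps; return (weight, R^2).
            weight = product of available-neighbor counts (0 if trapped)."""
            pos = (0, 0)
            visited = {pos}
            weight = 1.0
            for _ in range(n):
                options = [(pos[0] + dx, pos[1] + dy)
                           for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                           if (pos[0] + dx, pos[1] + dy) not in visited]
                if not options:
                    return 0.0, 0.0          # trapped: dead configuration
                weight *= len(options)
                pos = random.choice(options)
                visited.add(pos)
            return weight, pos[0] ** 2 + pos[1] ** 2

        # Weighted average of R^2 over many grown walks estimates <R^2> ~ n^(2 nu).
        n, samples = 40, 20000
        num = den = 0.0
        for _ in range(samples):
            w, r2 = rosenbluth_walk(n)
            num += w * r2
            den += w
        print(f"<R^2> for n={n}: {num / den:.1f}")
        print(f"SAW scaling n^(3/2) = {n ** 1.5:.1f} (up to a non-universal amplitude)")

    PERM adds pruning of low-weight walks and cloning of high-weight ones, which is what makes lengths of order 4096 reachable; the growth and weighting logic is the same.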

  10. Development of the Lymphoma Enterprise Architecture Database: a caBIG Silver level compliant system.

    Science.gov (United States)

    Huang, Taoying; Shenoy, Pareen J; Sinha, Rajni; Graiser, Michael; Bumpers, Kevin W; Flowers, Christopher R

    2009-04-03

    Lymphomas are the fifth most common cancer in United States with numerous histological subtypes. Integrating existing clinical information on lymphoma patients provides a platform for understanding biological variability in presentation and treatment response and aids development of novel therapies. We developed a cancer Biomedical Informatics Grid (caBIG) Silver level compliant lymphoma database, called the Lymphoma Enterprise Architecture Data-system (LEAD), which integrates the pathology, pharmacy, laboratory, cancer registry, clinical trials, and clinical data from institutional databases. We utilized the Cancer Common Ontological Representation Environment Software Development Kit (caCORE SDK) provided by National Cancer Institute's Center for Bioinformatics to establish the LEAD platform for data management. The caCORE SDK generated system utilizes an n-tier architecture with open Application Programming Interfaces, controlled vocabularies, and registered metadata to achieve semantic integration across multiple cancer databases. We demonstrated that the data elements and structures within LEAD could be used to manage clinical research data from phase 1 clinical trials, cohort studies, and registry data from the Surveillance Epidemiology and End Results database. This work provides a clear example of how semantic technologies from caBIG can be applied to support a wide range of clinical and research tasks, and integrate data from disparate systems into a single architecture. This illustrates the central importance of caBIG to the management of clinical and biological data.

  11. MiCroKit 3.0: an integrated database of midbody, centrosome and kinetochore.

    Science.gov (United States)

    Ren, Jian; Liu, Zexian; Gao, Xinjiao; Jin, Changjiang; Ye, Mingliang; Zou, Hanfa; Wen, Longping; Zhang, Zhaolei; Xue, Yu; Yao, Xuebiao

    2010-01-01

    During cell division/mitosis, a specific subset of proteins is spatially and temporally assembled into protein super complexes in three distinct regions, i.e. centrosome/spindle pole, kinetochore/centromere and midbody/cleavage furrow/phragmoplast/bud neck, and modulates cell division process faithfully. Although many experimental efforts have been carried out to investigate the characteristics of these proteins, no integrated database was available. Here, we present the MiCroKit database (http://microkit.biocuckoo.org) of proteins that localize in midbody, centrosome and/or kinetochore. We collected into the MiCroKit database experimentally verified microkit proteins from the scientific literature that have unambiguous supportive evidence for subcellular localization under fluorescent microscope. The current version of MiCroKit 3.0 provides detailed information for 1489 microkit proteins from seven model organisms, including Saccharomyces cerevisiae, Schizasaccharomyces pombe, Caenorhabditis elegans, Drosophila melanogaster, Xenopus laevis, Mus musculus and Homo sapiens. Moreover, the orthologous information was provided for these microkit proteins, and could be a useful resource for further experimental identification. The online service of MiCroKit database was implemented in PHP + MySQL + JavaScript, while the local packages were developed in JAVA 1.5 (J2SE 5.0).

  12. Application of Numerical Integration and Data Fusion in Unit Vector Method

    Science.gov (United States)

    Zhang, J.

    2012-01-01

    The Unit Vector Method (UVM) is a series of orbit determination methods designed by Purple Mountain Observatory (PMO) that have been applied extensively. It obtains the condition equations for different kinds of data by projecting the basic equation onto different unit vectors, and it is well suited to weighting different kinds of data. High-precision data can thus play a major role in orbit determination, and the accuracy of orbit determination is improved markedly. The improved UVM (PUVM2) extended the UVM from initial orbit determination to orbit improvement, and unified initial orbit determination and orbit improvement dynamically; precision and efficiency were improved further. In this thesis, further research has been done based on the UVM. Firstly, with the improvement of observation methods and techniques, the types and precision of the observational data have improved substantially, demanding a corresponding improvement in the precision of orbit determination. Analytical perturbation theory cannot meet this requirement, so numerical integration of the perturbations has been introduced into the UVM. The accuracy of the dynamical model now matches the accuracy of the real data, and the condition equations of the UVM are modified accordingly; the accuracy of orbit determination is improved further. Secondly, a data fusion method has been introduced into the UVM. The convergence mechanism and the defects of the weighting strategy in the original UVM have been clarified and resolved: the calculation of the approximate state transition matrix is simplified, and the weighting strategy has been improved for data of different dimensions and different precision. Results of orbit determination from simulated and real data show that the work of this thesis is effective: (1) after numerical integration is introduced into the UVM, the accuracy of orbit determination is improved markedly, and the method is suited to high-accuracy data.
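
    The ingredient that replaces analytical perturbation theory is ordinary numerical propagation of the equations of motion. A minimal sketch, RK4 on the unperturbed two-body problem in normalized units; real orbit improvement adds perturbation accelerations and a state transition matrix on top of exactly this loop.

        import numpy as np

        MU = 1.0  # gravitational parameter in normalized units

        def accel(r):
            """Two-body acceleration; perturbations (J2, drag, ...) would be added here."""
            return -MU * r / np.linalg.norm(r) ** 3

        def deriv(y):
            r, v = y[:3], y[3:]
            return np.concatenate([v, accel(r)])

        def rk4(y, h):
            k1 = deriv(y)
            k2 = deriv(y + 0.5 * h * k1)
            k3 = deriv(y + 0.5 * h * k2)
            k4 = deriv(y + h * k3)
            return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

        # Circular orbit of radius 1: the period is 2*pi in these units.
        y = np.array([1.0, 0.0, 0.0, 0.0, 1.0, 0.0])
        h, steps = 2 * np.pi / 2000, 2000
        for _ in range(steps):
            y = rk4(y, h)

        print("position error after one period:",
              np.linalg.norm(y[:3] - [1.0, 0.0, 0.0]))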

  13. Exponential Convergence for Numerical Solution of Integral Equations Using Radial Basis Functions

    Directory of Open Access Journals (Sweden)

    Zakieh Avazzadeh

    2014-01-01

    Full Text Available We solve several types of Urysohn integral equations using radial basis functions, including linear and nonlinear Fredholm, Volterra, and mixed Volterra-Fredholm integral equations. Our main aim is to investigate the rate of convergence obtained when solving these equations with radial basis functions, whose norm-based (radial) structure makes them well suited to approximation in higher dimensions. The use of this method often leads to ill-conditioned systems, so we propose an algorithm to improve the results. Numerical results show that this method achieves exponential convergence for integral equations, as has already been confirmed for partial and ordinary differential equations.
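
    A minimal sketch of the approach for a linear Fredholm equation of the second kind, u(x) - integral_0^1 K(x,t) u(t) dt = g(x): expand u in Gaussian RBFs, discretize the integral operator with a quadrature rule, and collocate at the centers. The kernel and right-hand side are illustrative choices with known exact solution u(x) = x, not the paper's test problems.

        import numpy as np

        # Solve u(x) - int_0^1 K(x,t) u(t) dt = g(x) with K(x,t) = x*t.
        # For g(x) = 2x/3 the exact solution is u(x) = x.
        K = lambda x, t: x * t
        g = lambda x: 2.0 * x / 3.0

        centers = np.linspace(0.0, 1.0, 15)     # RBF centers = collocation points
        eps = 8.0                               # shape parameter: accuracy vs conditioning
        phi = lambda x, c: np.exp(-(eps * (x - c)) ** 2)

        # Composite trapezoidal rule for the integral operator.
        tq = np.linspace(0.0, 1.0, 201)
        wq = np.full(tq.size, tq[1] - tq[0]); wq[0] *= 0.5; wq[-1] *= 0.5

        # Collocation matrix: A[i, j] = phi_j(x_i) - int K(x_i, t) phi_j(t) dt.
        X, C = np.meshgrid(centers, centers, indexing="ij")
        A = phi(X, C)
        for j, c in enumerate(centers):
            A[:, j] -= np.array([np.sum(wq * K(x, tq) * phi(tq, c)) for x in centers])

        coef = np.linalg.solve(A, g(centers))

        u = lambda x: sum(cj * phi(x, c) for cj, c in zip(coef, centers))
        xs = np.linspace(0.0, 1.0, 7)
        print("max error:", np.max(np.abs(u(xs) - xs)))

    The conditioning issue the abstract mentions shows up directly here: flattening the basis (smaller eps) improves approximation power but drives the collocation matrix toward singularity, which is why a stabilization algorithm is needed in practice.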

  14. Analysis and databasing software for integrated tomographic gamma scanner (TGS) and passive-active neutron (PAN) assay systems

    International Nuclear Information System (INIS)

    Estep, R.J.; Melton, S.G.; Buenafe, C.

    2000-01-01

    The CTEN-FIT program, written in C++ for Windows 9x/NT, performs databasing and analysis of combined thermal/epithermal neutron (CTEN) passive and active neutron assay data and integrates it with isotopics results and gamma-ray data from methods such as tomographic gamma scanning (TGS). The binary database is mirrored in a companion Excel database that allows extensive customization via Visual Basic for Applications macros. Automated analysis options make the analysis of the data transparent to the assay system operator. Various record browsers and information displays simplify record-keeping tasks.

  15. Current situation and future usage of anticancer drug databases.

    Science.gov (United States)

    Wang, Hongzhi; Yin, Yuanyuan; Wang, Peiqi; Xiong, Chenyu; Huang, Lingyu; Li, Sijia; Li, Xinyi; Fu, Leilei

    2016-07-01

    Cancer is a deadly disease with increasing incidence and mortality rates, affecting the quality of life of millions of people each year. The past 15 years have witnessed the rapid development of targeted therapy for cancer treatment, with numerous anticancer drugs, drug targets and related gene mutations identified. The demand for better anticancer drugs and the advances in database technologies have propelled the development of databases related to anticancer drugs. These databases provide systematic collections of integrative information, either on anticancer drugs in general or on a specific type of anticancer drug, each with its own emphasis on aspects such as drug-target interactions, the relationship between mutations in drug targets and drug resistance/sensitivity, drug-drug interactions, natural products with anticancer activity, anticancer peptides, synthetic lethality pairs and histone deacetylase inhibitors. We present a holistic view of the current situation and future usage of databases related to anticancer drugs and further discuss their strengths and weaknesses, in the hope of facilitating the discovery of new anticancer drugs with better clinical outcomes.

  16. Development of SRS.php, a Simple Object Access Protocol-based library for data acquisition from integrated biological databases.

    Science.gov (United States)

    Barbosa-Silva, A; Pafilis, E; Ortega, J M; Schneider, R

    2007-12-11

    Data integration has become an important task for biological database providers. The current model for data exchange among different sources simplifies the manner in which distinct information is accessed by users. The evolution of data representation from HTML to XML enabled programs, instead of humans, to interact with biological databases. We present here SRS.php, a PHP library that can interact with the Sequence Retrieval System (SRS) data integration platform. The library has been written using SOAP definitions and permits programmatic communication with SRS through web services. Interactions are performed by invoking the methods described in the WSDL and exchanging XML messages. The functions currently available in the library have been built to access specific data stored in any of 90 different databases (such as UNIPROT, KEGG and GO) using the same query syntax. Including the described functions in PHP scripts enables them to act as web-service clients to the SRS server. The functions permit one to query the whole content of any SRS database, to list specific records in these databases, to get specific fields from the records, and to link any record between any pair of linked databases. The case study presented exemplifies the use of the library to retrieve records from a Plant Defense Mechanisms database. The Plant Defense Mechanisms database is currently being developed, and SRS.php is proposed as the data-acquisition layer for the warehousing tasks related to its setup and maintenance.
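
    On the consuming side, the same WSDL-driven pattern works from any SOAP-capable language. A hedged sketch in Python using the zeep library; the WSDL URL and the commented operation name are placeholders for whatever an actual SRS installation publishes, not a documented SRS API.

        from zeep import Client

        # Hypothetical WSDL location -- substitute the one your SRS server publishes.
        WSDL_URL = "http://srs.example.org/soap?wsdl"

        client = Client(WSDL_URL)
        client.wsdl.dump()   # list the operations the server actually offers

        # Operation and argument names below are placeholders, not a documented API:
        # records = client.service.getEntries(database="UNIPROT", query="P53_HUMAN")
        # print(records)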

  17. Some applications of perturbation theory to numerical integration methods for the Schroedinger equation

    International Nuclear Information System (INIS)

    Killingbeck, J.

    1979-01-01

    By using the methods of perturbation theory it is possible to construct simple formulae for the numerical integration of the Schroedinger equation, and also to calculate expectation values solely by means of simple eigenvalue calculations. (Auth.)

  18. An XCT image database system

    International Nuclear Information System (INIS)

    Komori, Masaru; Minato, Kotaro; Koide, Harutoshi; Hirakawa, Akina; Nakano, Yoshihisa; Itoh, Harumi; Torizuka, Kanji; Yamasaki, Tetsuo; Kuwahara, Michiyoshi.

    1984-01-01

    In this paper, the expansion of an X-ray CT (XCT) examination history database into an XCT image database is discussed. The XCT examination history database has been constructed and used for daily examination and investigation in our hospital. This database consists of alphanumeric information (locations, diagnoses and so on) for more than 15,000 cases; for some of them, we add tree-structured image data, a format flexible enough to accommodate various types of image data. This database system is written in the MUMPS database manipulation language. (author)

  19. Development of the Lymphoma Enterprise Architecture Database: A caBIG™ Silver Level Compliant System

    Directory of Open Access Journals (Sweden)

    Taoying Huang

    2009-01-01

    Full Text Available Lymphomas are the fifth most common cancer in the United States, with numerous histological subtypes. Integrating existing clinical information on lymphoma patients provides a platform for understanding biological variability in presentation and treatment response and aids development of novel therapies. We developed a cancer Biomedical Informatics Grid™ (caBIG™) Silver level compliant lymphoma database, called the Lymphoma Enterprise Architecture Data-system™ (LEAD™), which integrates the pathology, pharmacy, laboratory, cancer registry, clinical trials, and clinical data from institutional databases. We utilized the Cancer Common Ontological Representation Environment Software Development Kit (caCORE SDK) provided by the National Cancer Institute's Center for Bioinformatics to establish the LEAD™ platform for data management. The caCORE SDK-generated system utilizes an n-tier architecture with open Application Programming Interfaces, controlled vocabularies, and registered metadata to achieve semantic integration across multiple cancer databases. We demonstrated that the data elements and structures within LEAD™ could be used to manage clinical research data from phase 1 clinical trials, cohort studies, and registry data from the Surveillance Epidemiology and End Results database. This work provides a clear example of how semantic technologies from caBIG™ can be applied to support a wide range of clinical and research tasks, and integrate data from disparate systems into a single architecture. This illustrates the central importance of caBIG™ to the management of clinical and biological data.

  20. Development of the Lymphoma Enterprise Architecture Database: A caBIG™ Silver Level Compliant System

    Directory of Open Access Journals (Sweden)

    Taoying Huang

    2009-04-01

    Full Text Available Lymphomas are the fifth most common cancer in the United States, with numerous histological subtypes. Integrating existing clinical information on lymphoma patients provides a platform for understanding biological variability in presentation and treatment response and aids development of novel therapies. We developed a cancer Biomedical Informatics Grid™ (caBIG™) Silver level compliant lymphoma database, called the Lymphoma Enterprise Architecture Data-system™ (LEAD™), which integrates the pathology, pharmacy, laboratory, cancer registry, clinical trials, and clinical data from institutional databases. We utilized the Cancer Common Ontological Representation Environment Software Development Kit (caCORE SDK) provided by the National Cancer Institute's Center for Bioinformatics to establish the LEAD™ platform for data management. The caCORE SDK-generated system utilizes an n-tier architecture with open Application Programming Interfaces, controlled vocabularies, and registered metadata to achieve semantic integration across multiple cancer databases. We demonstrated that the data elements and structures within LEAD™ could be used to manage clinical research data from phase 1 clinical trials, cohort studies, and registry data from the Surveillance Epidemiology and End Results database. This work provides a clear example of how semantic technologies from caBIG™ can be applied to support a wide range of clinical and research tasks, and integrate data from disparate systems into a single architecture. This illustrates the central importance of caBIG™ to the management of clinical and biological data.

  1. Development of the Lymphoma Enterprise Architecture Database: A caBIG™ Silver Level Compliant System

    Science.gov (United States)

    Huang, Taoying; Shenoy, Pareen J.; Sinha, Rajni; Graiser, Michael; Bumpers, Kevin W.; Flowers, Christopher R.

    2009-01-01

    Lymphomas are the fifth most common cancer in the United States, with numerous histological subtypes. Integrating existing clinical information on lymphoma patients provides a platform for understanding biological variability in presentation and treatment response and aids development of novel therapies. We developed a cancer Biomedical Informatics Grid™ (caBIG™) Silver level compliant lymphoma database, called the Lymphoma Enterprise Architecture Data-system™ (LEAD™), which integrates the pathology, pharmacy, laboratory, cancer registry, clinical trials, and clinical data from institutional databases. We utilized the Cancer Common Ontological Representation Environment Software Development Kit (caCORE SDK) provided by the National Cancer Institute's Center for Bioinformatics to establish the LEAD™ platform for data management. The caCORE SDK-generated system utilizes an n-tier architecture with open Application Programming Interfaces, controlled vocabularies, and registered metadata to achieve semantic integration across multiple cancer databases. We demonstrated that the data elements and structures within LEAD™ could be used to manage clinical research data from phase 1 clinical trials, cohort studies, and registry data from the Surveillance Epidemiology and End Results database. This work provides a clear example of how semantic technologies from caBIG™ can be applied to support a wide range of clinical and research tasks, and integrate data from disparate systems into a single architecture. This illustrates the central importance of caBIG™ to the management of clinical and biological data. PMID:19492074

  2. Numerical Algorithms for Acoustic Integrals - The Devil is in the Details

    Science.gov (United States)

    Brentner, Kenneth S.

    1996-01-01

    The accurate prediction of the aeroacoustic field generated by aerospace vehicles or nonaerospace machinery is necessary for designers to control and reduce source noise. Powerful computational aeroacoustic methods, based on various acoustic analogies (primarily the Lighthill acoustic analogy) and Kirchhoff methods, have been developed for prediction of noise from complicated sources, such as rotating blades. Both methods ultimately predict the noise through a numerical evaluation of an integral formulation. In this paper, we consider three generic acoustic formulations and several numerical algorithms that have been used to compute the solutions to these formulations. Algorithms for retarded-time formulations are the most efficient and robust, but they are difficult to implement for supersonic-source motion. Collapsing-sphere and emission-surface formulations are good alternatives when supersonic-source motion is present, but the numerical implementations of these formulations are more computationally demanding. New algorithms - which utilize solution adaptation to provide a specified error level - are needed.
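
    At the heart of every retarded-time algorithm sits a scalar root-solve: for a source with trajectory y(tau), the emission time tau satisfies tau = t - |x - y(tau)|/c. A minimal sketch using fixed-point iteration, which converges for subsonic source motion; the trajectory is an arbitrary example, not one of the paper's rotor cases.

        import numpy as np

        c = 340.0  # speed of sound [m/s]

        def source_pos(tau):
            """Arbitrary subsonic source trajectory (circular motion, Mach ~ 0.29)."""
            r, omega = 10.0, 10.0
            return np.array([r * np.cos(omega * tau), r * np.sin(omega * tau), 0.0])

        def retarded_time(x, t, tol=1e-12):
            """Solve tau = t - |x - y(tau)|/c by fixed-point iteration.
            For subsonic motion the map is a contraction, so this converges."""
            tau = t
            for _ in range(100):
                tau_new = t - np.linalg.norm(x - source_pos(tau)) / c
                if abs(tau_new - tau) < tol:
                    return tau_new
                tau = tau_new
            return tau

        observer = np.array([100.0, 0.0, 0.0])
        tau = retarded_time(observer, t=1.0)
        print(f"emission time {tau:.6f} s for reception at t = 1.0 s")

    This also makes the supersonic difficulty noted in the abstract concrete: once the source Mach number exceeds one, the fixed-point map is no longer a contraction and the equation can have multiple roots, which is why collapsing-sphere and emission-surface formulations become attractive there.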

  3. Database Description - tRNADB-CE | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available Database Description - General information: Database name: tRNADB-CE. License: CC BY-SA. Background and funding: MEXT Integrated Database Project. References: Nucleic Acids Res. 2009 Jan;37(Database issue):D163-8; tRNADB-CE 2011: tRNA gene database curated manually by experts.

  4. Implementation of numerical integration schemes for the simulation of magnetic SMA constitutive response

    International Nuclear Information System (INIS)

    Kiefer, B; Bartel, T; Menzel, A

    2012-01-01

    Several constitutive models for magnetic shape memory alloys (MSMAs) have been proposed in the literature. However, the implementation of numerical integration schemes, which allow the prediction of constitutive response for general loading cases and ultimately the incorporation of MSMA response into numerical solution algorithms for fully coupled magneto-mechanical boundary value problems, has received only very limited attention. In this work, we establish two algorithmic implementations of the internal variable model for MSMAs proposed in (Kiefer and Lagoudas 2005 Phil. Mag. Spec. Issue: Recent Adv. Theor. Mech. 85 4289-329, Kiefer and Lagoudas 2009 J. Intell. Mater. Syst. 20 143-70), where we restrict our attention to pure martensitic variant reorientation to limit complexity. The first updating scheme is based on the numerical integration of the reorientation strain evolution equation and represents a classical predictor-corrector-type general return-mapping algorithm. In the second approach, the inequality-constrained optimization problem associated with internal variable evolution is converted into an unconstrained problem via Fischer-Burmeister complementarity functions and then iteratively solved in standard Newton-Raphson format. Simulations are verified by comparison to closed-form solutions for experimentally relevant loading cases. (paper)
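
    The predictor-corrector structure of such a return-mapping update is easiest to see in a scalar analog. The sketch below integrates a generic rate-independent evolution law with a yield-type threshold; 1D elastoplasticity with linear hardening stands in for variant reorientation, and none of the MSMA model's constants appear.

        def return_map(strain, ep_old, E=100.0, H=10.0, y0=1.0):
            """One elastic-predictor / inelastic-corrector step for 1D linear
            hardening; returns (stress, updated internal variable)."""
            # Predictor: freeze the internal variable, try a purely elastic step.
            stress_trial = E * (strain - ep_old)
            f_trial = abs(stress_trial) - (y0 + H * abs(ep_old))
            if f_trial <= 0.0:
                return stress_trial, ep_old          # trial state admissible
            # Corrector: return to the threshold surface (closed form in 1D).
            dgamma = f_trial / (E + H)
            sign = 1.0 if stress_trial > 0.0 else -1.0
            ep_new = ep_old + dgamma * sign
            return E * (strain - ep_new), ep_new

        # Drive the material through a strain ramp and watch the threshold activate.
        ep = 0.0
        for k in range(6):
            strain = 0.005 * k
            stress, ep = return_map(strain, ep)
            print(f"strain {strain:.3f}  stress {stress:6.3f}  internal var {ep:.5f}")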

  5. Development of an Integrated Natural Barrier Database System for Site Evaluation of a Deep Geologic Repository in Korea - 13527

    International Nuclear Information System (INIS)

    Jung, Haeryong; Lee, Eunyong; Jeong, YiYeong; Lee, Jeong-Hwan

    2013-01-01

    Korea Radioactive-waste Management Corporation (KRMC), established in 2009, has started a new project to collect information on the long-term stability of deep geological environments on the Korean Peninsula. The information has been built up in the integrated natural barrier database system available on the web (www.deepgeodisposal.kr). The database system also includes socially and economically important information, such as land use, mining areas, natural conservation areas, population density, and industrial complexes, because some of this information is used as exclusionary criteria during the site selection process for a deep geological repository for safe and secure containment and isolation of spent nuclear fuel and other long-lived radioactive waste in Korea. Although the official site selection process has not yet started in Korea, it is believed that the integrated natural barrier database system and the socio-economic database will be effectively utilized to narrow down the number of sites where future investigation is most promising and to enhance public acceptance by providing readily available relevant scientific information on deep geological environments in Korea. (authors)

  6. Pentaho data integration beginner's guide

    CERN Document Server

    Roldán, María Carina

    2013-01-01

    This book focuses on teaching you by example. The book walks you through every aspect of Pentaho Data Integration, giving systematic instructions in a friendly style, allowing you to learn in front of your computer, playing with the tool. The extensive use of drawings and screenshots makes the process of learning Pentaho Data Integration easy. Throughout the book, numerous tips and helpful hints are provided that you will not find anywhere else. This book is a must-have for software developers, database administrators, IT students, and everyone involved or interested in developing ETL solutions,

  7. Distributed Database Semantic Integration of Wireless Sensor Network to Access the Environmental Monitoring System

    Directory of Open Access Journals (Sweden)

    Ubaidillah Umar

    2018-06-01

    Full Text Available A wireless sensor network (WSN works continuously to gather information from sensors that generate large volumes of data to be handled and processed by applications. Current efforts in sensor networks focus more on networking and development services for a variety of applications and less on processing and integrating data from heterogeneous sensors. There is an increased need for information to become shareable across different sensors, database platforms, and applications that are not easily implemented in traditional database systems. To solve the issue of these large amounts of data from different servers and database platforms (including sensor data, a semantic sensor web service platform is needed to enable a machine to extract meaningful information from the sensor’s raw data. This additionally helps to minimize and simplify data processing and to deduce new information from existing data. This paper implements a semantic web data platform (SWDP to manage the distribution of data sensors based on the semantic database system. SWDP uses sensors for temperature, humidity, carbon monoxide, carbon dioxide, luminosity, and noise. The system uses the Sesame semantic web database for data processing and a WSN to distribute, minimize, and simplify information processing. The sensor nodes are distributed in different places to collect sensor data. The SWDP generates context information in the form of a resource description framework. The experiment results demonstrate that the SWDP is more efficient than the traditional database system in terms of memory usage and processing time.
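
    For illustration, the sketch below stores a single sensor observation as RDF triples and retrieves it with a SPARQL query, using the Python rdflib library as a stand-in for the Sesame store used in the paper. The namespace and property names are hypothetical.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

SENSOR = Namespace("http://example.org/sensor#")  # hypothetical namespace

g = Graph()
g.bind("sensor", SENSOR)

# One temperature observation expressed as RDF triples.
obs = SENSOR["obs001"]
g.add((obs, RDF.type, SENSOR.Observation))
g.add((obs, SENSOR.measuredProperty, SENSOR.Temperature))
g.add((obs, SENSOR.hasValue, Literal(28.4, datatype=XSD.float)))
g.add((obs, SENSOR.observedAt, Literal("2018-06-01T10:15:00", datatype=XSD.dateTime)))

# SPARQL query: retrieve all temperature readings.
q = """
SELECT ?obs ?value WHERE {
    ?obs a sensor:Observation ;
         sensor:measuredProperty sensor:Temperature ;
         sensor:hasValue ?value .
}
"""
for row in g.query(q, initNs={"sensor": SENSOR}):
    print(row.obs, row.value)
```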

  8. High-Performance Secure Database Access Technologies for HEP Grids

    Energy Technology Data Exchange (ETDEWEB)

    Matthew Vranicar; John Weicher

    2006-04-17

    The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist's computer used for analysis. Very few efforts are on-going in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, who states that "Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc, for numerous applications." There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing. We believe that an innovative database architecture where the

  9. High-Performance Secure Database Access Technologies for HEP Grids

    International Nuclear Information System (INIS)

    Vranicar, Matthew; Weicher, John

    2006-01-01

    The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist's computer used for analysis. Very few efforts are on-going in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, who states that 'Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc, for numerous applications'. There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing. We believe that an innovative database architecture where the secure

  10. ChlamyCyc: an integrative systems biology database and web-portal for Chlamydomonas reinhardtii

    OpenAIRE

    May, P.; Christian, J.O.; Kempa, S.; Walther, D.

    2009-01-01

    Abstract Background The unicellular green alga Chlamydomonas reinhardtii is an important eukaryotic model organism for the study of photosynthesis and plant growth. In the era of modern high-throughput technologies there is an imperative need to integrate large-scale data sets from high-throughput experimental techniques using computational methods and database resources to provide comprehensive information about the molecular and cellular organization of a single organism. Results In the fra...

  11. A global database of seismically and non-seismically triggered landslides for 2D/3D numerical modeling

    Science.gov (United States)

    Domej, Gisela; Bourdeau, Céline; Lenti, Luca; Pluta, Kacper

    2017-04-01

    Landsliding is a worldwide common phenomenon. Every year, and ranging in size from very small to enormous, landslides all too often cause loss of life and disastrous damage to infrastructure, property and the environment. One main reason for more frequent catastrophes is the growth of the population on the Earth, which entails extending urbanization to areas at risk. Landslides are triggered by a variety and combination of causes, among which the roles of water and seismic activity appear to have the most serious consequences. In this regard, seismic shaking is of particular interest since topographic elevation as well as the landslide mass itself can trap waves and hence amplify incoming surface waves - a phenomenon known as "site effects". Research on the topic of landsliding due to seismic and non-seismic activity is extensive, and a broad spectrum of methods for modeling slope deformation is available. Those methods range from pseudo-static and rigid-block based models to numerical models. The majority is limited to 2D modeling since more sophisticated approaches in 3D are still under development or calibration. However, the effect of lateral confinement as well as the mechanical properties of the adjacent bedrock might be of great importance because they may enhance the focusing of trapped waves in the landslide mass. A database was created to study 3D landslide geometries. It currently contains 277 distinct seismically and non-seismically triggered landslides spread all around the globe whose rupture bodies were measured in all available detail. Therefore a specific methodology was developed to maintain predefined standards, to keep the bias as low as possible and to set up a query tool to explore the database. Besides geometry, additional information such as location, date, triggering factors, material, sliding mechanisms, event chronology, consequences and related literature, among other things, is stored for every case. The aim of the database is to enable

  12. Integrated numerical platforms for environmental dose assessments of large tritium inventory facilities

    International Nuclear Information System (INIS)

    Castro, P.; Ardao, J.; Velarde, M.; Sedano, L.; Xiberta, J.

    2013-01-01

    In connection with a prospective new scenario of large-inventory tritium facilities (KATRIN at TLK, CANDUs, ITER, EAST, and others to come), the dosimetric limits prescribed by ICRP-60 for tritium committed doses are under discussion; in parallel, the highly conservative assessments need to be surmounted by increasing the refinement of dosimetric assessments in many respects. Precise Lagrangian computations of dosimetric cloud evolution after standardized (normal/incidental/SBO) tritium cloud emissions are today numerically open to the perfect match of real-time meteorological data and pattern data at diverse scales for prompt/early and chronic tritium dose assessments. Trends toward integrated numerical platforms for environmental dose assessments of large-inventory tritium facilities are under development.

  13. The NCBI BioSystems database.

    Science.gov (United States)

    Geer, Lewis Y; Marchler-Bauer, Aron; Geer, Renata C; Han, Lianyi; He, Jane; He, Siqian; Liu, Chunlei; Shi, Wenyao; Bryant, Stephen H

    2010-01-01

    The NCBI BioSystems database, found at http://www.ncbi.nlm.nih.gov/biosystems/, centralizes and cross-links existing biological systems databases, increasing their utility and target audience by integrating their pathways and systems into NCBI resources. This integration allows users of NCBI's Entrez databases to quickly categorize proteins, genes and small molecules by metabolic pathway, disease state or other BioSystem type, without requiring time-consuming inference of biological relationships from the literature or multiple experimental datasets.

  14. A Comprehensive Database and Analysis Framework To Incorporate Multiscale Data Types and Enable Integrated Analysis of Bioactive Polyphenols.

    Science.gov (United States)

    Ho, Lap; Cheng, Haoxiang; Wang, Jun; Simon, James E; Wu, Qingli; Zhao, Danyue; Carry, Eileen; Ferruzzi, Mario G; Faith, Jeremiah; Valcarcel, Breanna; Hao, Ke; Pasinetti, Giulio M

    2018-03-05

    The development of a given botanical preparation for eventual clinical application requires extensive, detailed characterizations of the chemical composition, as well as the biological availability, biological activity, and safety profiles of the botanical. These issues are typically addressed using diverse experimental protocols and model systems. Based on this consideration, in this study we established a comprehensive database and analysis framework for the collection, collation, and integrative analysis of diverse, multiscale data sets. Using this framework, we conducted an integrative analysis of heterogeneous data from in vivo and in vitro investigation of a complex bioactive dietary polyphenol-rich preparation (BDPP) and built an integrated network linking data sets generated from this multitude of diverse experimental paradigms. We established a comprehensive database and analysis framework as well as a systematic and logical means to catalogue and collate the diverse array of information gathered, which is securely stored and added to in a standardized manner to enable fast query. We demonstrated the utility of the database in (1) a statistical ranking scheme to prioritize response to treatments and (2) in-depth reconstruction of functionality studies. By examination of these data sets, the system allows analytical querying of heterogeneous data and the access of information related to interactions, mechanism of actions, functions, etc., which ultimately provide a global overview of complex biological responses. Collectively, we present an integrative analysis framework that leads to novel insights on the biological activities of a complex botanical such as BDPP that is based on data-driven characterizations of interactions between BDPP-derived phenolic metabolites and their mechanisms of action, as well as synergism and/or potential cancellation of biological functions. Our integrative analytical approach provides novel means for a systematic integrative

  15. Numerical simulation of liquid film flow on revolution surfaces with momentum integral method

    International Nuclear Information System (INIS)

    Bottoni Maurizio

    2005-01-01

    The momentum integral method is applied in the frame of safety analysis of pressure water reactors under hypothetical loss of coolant accident (LOCA) conditions to simulate numerically film condensation, rewetting and vaporization on the inner surface of pressure water reactor containment. From the conservation equations of mass and momentum of a liquid film arising from condensation of steam upon the inner of the containment during a LOCA in a pressure water reactor plant, an integro-differential equation is derived, referring to an arbitrary axisymmetric surface of revolution. This equation describes the velocity distribution of the liquid film along a meridian of a surface of revolution. From the integro-differential equation and ordinary differential equation of first order for the film velocity is derived and integrated numerically. From the velocity distribution the film thickness distribution is obtained. The solution of the enthalpy equation for the liquid film yields the temperature distribution on the inner surface of the containment. (authors)
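
    A minimal sketch of the final step described above, integrating a first-order ODE for the film velocity along a meridian and recovering the film thickness from mass conservation, is given below. The right-hand side and all parameter values are illustrative stand-ins, not the equation derived in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative first-order ODE for the film velocity u(s) along a meridian,
# du/ds = (g_t(s) - tau(u, s)) / u  -- a stand-in, not the paper's equation.
rho, nu, gamma_flow = 1000.0, 1.0e-6, 0.05   # density, kinematic viscosity, mass flow per width

def rhs(s, u):
    g_tangential = 9.81 * np.sin(np.pi * s)     # gravity component along the meridian
    h = gamma_flow / (rho * u[0])               # film thickness from mass conservation
    tau_wall = 3.0 * nu * u[0] / h              # laminar wall-shear estimate
    return [(g_tangential - tau_wall) / u[0]]

sol = solve_ivp(rhs, (0.01, 1.0), [0.1], dense_output=True)
s = np.linspace(0.01, 1.0, 5)
u = sol.sol(s)[0]
print("velocity along meridian:", u)
print("film thickness:", gamma_flow / (rho * u))
```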

  16. Numerical Procedure to Forecast the Tsunami Parameters from a Database of Pre-Simulated Seismic Unit Sources

    Science.gov (United States)

    Jiménez, César; Carbonel, Carlos; Rojas, Joel

    2018-04-01

    We have implemented a numerical procedure to forecast the parameters of a tsunami, such as the arrival time of the front of the first wave and the maximum wave height at real and virtual tidal stations along the Peruvian coast. For this purpose, a database of pre-computed synthetic tsunami waveforms (or Green functions) was obtained from numerical simulation of seismic unit sources (dimension: 50 × 50 km²) for subduction zones from southern Chile to northern Mexico. A bathymetry resolution of 30 arc-sec (approximately 927 m) was used. The resulting tsunami waveform is obtained from the superposition of the synthetic waveforms corresponding to the seismic unit sources contained within the tsunami source geometry. The numerical procedure was applied to the Chilean tsunami of April 1, 2014. The results show a very good correlation for stations with wave amplitude greater than 1 m: for the Arica tide station an error (from the maximum height of the observed and simulated waveforms) of 3.5% was obtained, for the Callao station the error was 12%, and the largest error, 53.5%, occurred at Chimbote. However, due to the low amplitude of the Chimbote wave (<1 m), this overestimated error is not important for evacuation purposes. The aim of the present research is tsunami early warning, where speed is required rather than accuracy, so the results should be taken as preliminary.
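
    The core of the forecasting step, superposing precomputed unit-source waveforms weighted by the slip assigned to each unit source, reduces to a matrix-vector product, as sketched below with placeholder data.

```python
import numpy as np

# Sketch: the station waveform is a weighted sum of precomputed unit-source
# Green functions; the weights come from the slip assigned to each unit source.
# green[i, t] holds the synthetic waveform of unit source i at one tide station.
rng = np.random.default_rng(0)
n_sources, n_samples = 4, 3600
green = rng.normal(0.0, 0.01, size=(n_sources, n_samples))   # placeholder data

slip = np.array([1.8, 2.3, 0.0, 0.7])    # slip (m) on each unit source, hypothetical

waveform = slip @ green                   # linear superposition
arrival_idx = np.argmax(np.abs(waveform) > 0.01)   # first-wave threshold, illustrative
print("max height:", np.abs(waveform).max(), "arrival sample:", arrival_idx)
```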

  17. An Interoperable Cartographic Database

    OpenAIRE

    Slobodanka Ključanin; Zdravko Galić

    2007-01-01

    The concept of producing a prototype of an interoperable cartographic database is explored in this paper, including the possibilities of integration of different geospatial data into the database management system and their visualization on the Internet. The implementation includes vectorization of the concept of a single map page, creation of the cartographic database in an object-relation database, spatial analysis, definition and visualization of the database content in the form of a map on the Internet.

  18. Numerical integration of some new unified plasticity-creep formulations

    International Nuclear Information System (INIS)

    Krieg, R.D.

    1977-01-01

    The usual constitutive description of metals at high temperature treats creep as a phenomenon which must be added to time-independent phenomena. A new approach is now being advocated by some people, principally metallurgists, who treat the inelastic strain as a unified quantity incapable of being separated into time-dependent and time-independent parts. This paper examines the behavior of the differential formulations reported in the literature together with one proposed by the author. These formulations are capable of representing primary and secondary creep, cyclic hardening to a stable cyclic stress-strain loop, conventional plasticity behavior, and a Bauschinger effect which may be creep-induced and discernible at either fast or slow loading rates. The new unified formulations lead to very nonlinear systems of equations which are very well behaved in some regions and very stiff in others, where the word 'stiff' is used in the mathematical sense. Simple conventional methods of integrating incremental constitutive equations are observed to be totally inadequate. A method of numerically integrating the equations is presented. (Auth.)
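
    The stiffness issue is the reason implicit integration is needed. The sketch below applies backward Euler with a scalar Newton solve to a toy unified viscoplastic law with a sinh flow rule; the constitutive form and all parameters are illustrative, not one of the formulations examined in the paper.

```python
import numpy as np

# Backward-Euler update for a toy unified viscoplastic law
#   deps_in/dt = A * sinh(B * sigma),  sigma = E * (eps_tot - eps_in).
# Implicit treatment is essential: explicit schemes need tiny steps where
# the sinh term makes the system stiff. Values below are illustrative.
E, A, B = 200e3, 1e-7, 0.01        # MPa, 1/s, 1/MPa

def step(eps_in, eps_tot, dt, tol=1e-12):
    # Newton on r(x) = x - eps_in - dt*A*sinh(B*E*(eps_tot - x)).
    x = eps_in
    for _ in range(50):
        sigma = E * (eps_tot - x)
        r = x - eps_in - dt * A * np.sinh(B * sigma)
        dr = 1.0 + dt * A * B * E * np.cosh(B * sigma)
        dx = -r / dr
        x += dx
        if abs(dx) < tol:
            break
    return x

eps_in, rate, dt = 0.0, 1e-4, 1.0   # strain-controlled loading at 1e-4 /s
for n in range(1, 101):
    eps_in = step(eps_in, rate * dt * n, dt)
print("stress after 100 s:", E * (rate * dt * 100 - eps_in), "MPa")
```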

  19. Numerical evaluation of Feynman loop integrals by reduction to tree graphs

    International Nuclear Information System (INIS)

    Kleinschmidt, T.

    2007-12-01

    We present a method for the numerical evaluation of loop integrals, based on the Feynman Tree Theorem. This states that loop graphs can be expressed as a sum of tree graphs with additional external on-shell particles. The original loop integral is replaced by a phase space integration over the additional particles. In cross section calculations and for event generation, this phase space can be sampled simultaneously with the phase space of the original external particles. Since very sophisticated matrix element generators for tree graph amplitudes exist and phase space integrations are generically well understood, this method is suited for a future implementation in a fully automated Monte Carlo event generator. A scheme for renormalization and regularization is presented. We show the construction of subtraction graphs which cancel ultraviolet divergences and present a method to cancel internal on-shell singularities. Real emission graphs can be naturally included in the phase space integral of the additional on-shell particles to cancel infrared divergences. As a proof of concept, we apply this method to NLO Bhabha scattering in QED. Cross sections are calculated and are in agreement with results from conventional methods. We also construct a Monte Carlo event generator and present results. (orig.)

  20. Numerical evaluation of Feynman loop integrals by reduction to tree graphs

    Energy Technology Data Exchange (ETDEWEB)

    Kleinschmidt, T.

    2007-12-15

    We present a method for the numerical evaluation of loop integrals, based on the Feynman Tree Theorem. This states that loop graphs can be expressed as a sum of tree graphs with additional external on-shell particles. The original loop integral is replaced by a phase space integration over the additional particles. In cross section calculations and for event generation, this phase space can be sampled simultaneously with the phase space of the original external particles. Since very sophisticated matrix element generators for tree graph amplitudes exist and phase space integrations are generically well understood, this method is suited for a future implementation in a fully automated Monte Carlo event generator. A scheme for renormalization and regularization is presented. We show the construction of subtraction graphs which cancel ultraviolet divergences and present a method to cancel internal on-shell singularities. Real emission graphs can be naturally included in the phase space integral of the additional on-shell particles to cancel infrared divergences. As a proof of concept, we apply this method to NLO Bhabha scattering in QED. Cross sections are calculated and are in agreement with results from conventional methods. We also construct a Monte Carlo event generator and present results. (orig.)

  1. Bending of Euler-Bernoulli nanobeams based on the strain-driven and stress-driven nonlocal integral models: a numerical approach

    Science.gov (United States)

    Oskouie, M. Faraji; Ansari, R.; Rouhi, H.

    2018-04-01

    Eringen's nonlocal elasticity theory is extensively employed for the analysis of nanostructures because it is able to capture nanoscale effects. Previous studies have revealed that using the differential form of the strain-driven version of this theory leads to paradoxical results in some cases, such as the bending analysis of cantilevers, and recourse must be made to the integral version. In this article, a novel numerical approach is developed for the bending analysis of Euler-Bernoulli nanobeams in the context of strain- and stress-driven integral nonlocal models. This numerical approach solves the problem directly, bypassing the difficulties related to converting the integral governing equation into a differential equation. First, the governing equation is derived based on both the strain-driven and stress-driven nonlocal models by means of the minimum total potential energy. In each case, the governing equation is obtained in both strong and weak forms. To solve the derived equations numerically, matrix differential and integral operators are constructed based upon the finite difference technique and the trapezoidal integration rule. It is shown that the proposed numerical approach can be efficiently applied to the strain-driven nonlocal model with the aim of resolving the mentioned paradoxes. It is also able to solve the problem based on the strain-driven model without the inconsistencies of applying this model that are reported in the literature.
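
    The matrix operators named in the abstract can be assembled directly. The sketch below builds a finite-difference second-derivative matrix and a trapezoidal-rule matrix for a nonlocal integral operator with the common exponential attenuation kernel; the grid, kernel and length-scale parameter are illustrative choices, not the paper's discretization.

```python
import numpy as np

n, L = 101, 1.0
x = np.linspace(0.0, L, n)
h = x[1] - x[0]

# Second-difference matrix D2 ~ d^2/dx^2 (interior rows; boundary rows
# would need one-sided stencils or boundary conditions in a real solver).
D2 = (np.diag(np.full(n - 1, 1.0), -1)
      - 2.0 * np.eye(n)
      + np.diag(np.full(n - 1, 1.0), 1)) / h**2

# Trapezoidal-rule matrix: row i approximates the nonlocal convolution
# integral of K(x_i, s) f(s) ds with an exponential attenuation kernel.
kappa = 0.1                                         # length-scale, illustrative
K = np.exp(-np.abs(x[:, None] - x[None, :]) / kappa) / (2.0 * kappa)
W = np.full(n, h); W[0] = W[-1] = h / 2.0           # trapezoidal weights
A = K * W[None, :]                                  # integral operator as a matrix

f = np.sin(np.pi * x)
print("nonlocal average of f at midpoint:", (A @ f)[n // 2])
```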

  2. An Interoperable Cartographic Database

    Directory of Open Access Journals (Sweden)

    Slobodanka Ključanin

    2007-05-01

    Full Text Available The concept of producing a prototype of interoperable cartographic database is explored in this paper, including the possibilities of integration of different geospatial data into the database management system and their visualization on the Internet. The implementation includes vectorization of the concept of a single map page, creation of the cartographic database in an object-relation database, spatial analysis, definition and visualization of the database content in the form of a map on the Internet. 

  3. On the numerical evaluation of algebro-geometric solutions to integrable equations

    International Nuclear Information System (INIS)

    Kalla, C; Klein, C

    2012-01-01

    Physically meaningful periodic solutions to certain integrable partial differential equations are given in terms of multi-dimensional theta functions associated with real Riemann surfaces. Typical analytical problems in the numerical evaluation of these solutions are studied. In the case of hyperelliptic surfaces efficient algorithms exist even for almost degenerate surfaces. This allows the numerical study of solitonic limits. For general real Riemann surfaces, the choice of a homology basis adapted to the anti-holomorphic involution is important for a convenient formulation of the solutions and smoothness conditions. Since existing algorithms for algebraic curves produce a homology basis not related to automorphisms of the curve, we study symplectic transformations to an adapted basis and give explicit formulae for M-curves. As examples we discuss solutions of the Davey–Stewartson and the multi-component nonlinear Schrödinger equations

  4. BioWarehouse: a bioinformatics database warehouse toolkit.

    Science.gov (United States)

    Lee, Thomas J; Pouliot, Yannick; Wagner, Valerie; Gupta, Priyanka; Stringer-Calvert, David W J; Tenenbaum, Jessica D; Karp, Peter D

    2006-03-23

    This article addresses the problem of interoperation of heterogeneous bioinformatics databases. We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. BioWarehouse embodies significant progress on the database integration problem for bioinformatics.
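
    The kind of multi-database SQL query described, such as finding enzyme activities with no sequenced protein, can be sketched on a toy schema. The tables below are hypothetical simplifications, not the actual BioWarehouse schema; SQLite is used only to keep the example self-contained.

```python
import sqlite3

# Hypothetical, much-simplified warehouse schema -- not the actual
# BioWarehouse DDL -- illustrating the kind of cross-dataset SQL query
# the warehousing approach enables (enzyme activities lacking sequences).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE enzyme_activity (ec_number TEXT PRIMARY KEY, name TEXT);
CREATE TABLE protein (id INTEGER PRIMARY KEY, ec_number TEXT, has_sequence INTEGER);
INSERT INTO enzyme_activity VALUES ('1.1.1.1', 'alcohol dehydrogenase'),
                                   ('4.2.1.17', 'enoyl-CoA hydratase');
INSERT INTO protein VALUES (1, '1.1.1.1', 1);
""")

# EC numbers with no sequenced protein in any loaded source database.
rows = con.execute("""
SELECT a.ec_number, a.name
FROM enzyme_activity a
LEFT JOIN protein p ON p.ec_number = a.ec_number AND p.has_sequence = 1
WHERE p.id IS NULL
""").fetchall()
print(rows)   # -> [('4.2.1.17', 'enoyl-CoA hydratase')]
```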

  5. Numerical Simulation and Experimental Validation of an Integrated Sleeve-Wedge Anchorage for CFRP Rods

    DEFF Research Database (Denmark)

    Schmidt, Jacob Wittrup; Smith, Scott T.; Täljsten, Björn

    2011-01-01

    Recently, an integrated sleeve-wedge anchorage has been successfully developed specifically for CFRP rods. This paper in turn presents a numerical simulation of the newly developed anchorage using ABAQUS. The three-dimensional finite element (FE) model, which considers material non-linearity, uses

  6. Integration of an Evidence Base into a Probabilistic Risk Assessment Model. The Integrated Medical Model Database: An Organized Evidence Base for Assessing In-Flight Crew Health Risk and System Design

    Science.gov (United States)

    Saile, Lynn; Lopez, Vilma; Bickham, Grandin; FreiredeCarvalho, Mary; Kerstman, Eric; Byrne, Vicky; Butler, Douglas; Myers, Jerry; Walton, Marlei

    2011-01-01

    This slide presentation reviews the Integrated Medical Model (IMM) database, an organized evidence base for assessing in-flight crew health risk. It is a relational database accessible to many users. The database quantifies the model inputs by ranking each entry according to the highest value of the data, the Level of Evidence (LOE), together with a Quality of Evidence (QOE) score that provides an assessment of the evidence base for each medical condition. The IMM evidence base has already been able to provide invaluable information for designers, and for other uses.

  7. Bio-optical data integration based on a 4 D database system approach

    Science.gov (United States)

    Imai, N. N.; Shimabukuro, M. H.; Carmo, A. F. C.; Alcantara, E. H.; Rodrigues, T. W. P.; Watanabe, F. S. Y.

    2015-04-01

    Bio-optical characterization of water bodies requires spatio-temporal data about Inherent Optical Properties and Apparent Optical Properties, which allow the comprehension of the underwater light field and support the development of models for monitoring water quality. Measurements are taken to represent optical properties along a column of water, and the spectral data must then be related to depth. However, the spatial positions of measurement may differ since collecting instruments vary, the records may not refer to the same wavelengths, and distinct instruments store data in different formats. A data integration approach is needed to make these large, multi-source data sets suitable for analysis; it then becomes possible to evaluate semi-empirical models, even automatically, preceded by preliminary quality-control tasks. In this work a solution for the stated scenario is presented, based on a spatial (geographic) database approach with the adoption of an object-relational Database Management System (DBMS), due to its ability to represent all data collected in the field in conjunction with data obtained by laboratory analysis and Remote Sensing images taken at the time of field data collection. This data integration approach leads to a 4D representation, since its coordinate system includes 3D spatial coordinates - planimetric and depth - and the time when each datum was taken. The PostgreSQL DBMS, extended by the PostGIS module, was adopted to provide the ability to manage spatial/geospatial data. A prototype was developed that provides the main tools an analyst needs to prepare the data sets for analysis.
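
    A hypothetical sketch of such a 4D table on PostgreSQL/PostGIS, combining a 3D position (point plus depth) and the acquisition time with each spectral reading, is shown below. All table and column names are illustrative, not the authors' schema.

```python
# Hypothetical DDL for a "4D" measurement table on PostgreSQL/PostGIS:
# 3D spatial coordinates (point + depth) plus acquisition time, with the
# spectral reading stored per wavelength. Names are illustrative only.
ddl = """
CREATE TABLE radiometric_measurement (
    id          BIGSERIAL PRIMARY KEY,
    station     geometry(PointZ, 4326),   -- lon, lat, depth (PostGIS)
    acquired_at TIMESTAMPTZ NOT NULL,     -- the 4th dimension
    wavelength  REAL NOT NULL,            -- nm
    value       DOUBLE PRECISION NOT NULL,
    instrument  TEXT NOT NULL             -- normalizes multi-sensor formats
);
CREATE INDEX ON radiometric_measurement USING GIST (station);
"""
print(ddl)
```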

  8. KALIMER design database development and operation manual

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Hahn, Do Hee; Lee, Yong Bum; Chang, Won Pyo

    2000-12-01

    The KALIMER Design Database is developed to support integrated management of Liquid Metal Reactor Design Technology Development using web applications. The KALIMER Design Database consists of the Results Database, Inter-Office Communication (IOC), the 3D CAD Database, the Team Cooperation System, and Reserved Documents. The Results Database is a research results database for mid-term and long-term nuclear R and D. IOC is a linkage control system between sub-projects for sharing and integrating the research results for KALIMER. The 3D CAD Database is a schematic design overview for KALIMER. The Team Cooperation System informs team members of research cooperation and meetings. Finally, KALIMER Reserved Documents is developed to manage collected data and several documents since project accomplishment.

  9. KALIMER design database development and operation manual

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Kwan Seong; Hahn, Do Hee; Lee, Yong Bum; Chang, Won Pyo

    2000-12-01

    The KALIMER Design Database is developed to support integrated management of Liquid Metal Reactor Design Technology Development using web applications. The KALIMER Design Database consists of the Results Database, Inter-Office Communication (IOC), the 3D CAD Database, the Team Cooperation System, and Reserved Documents. The Results Database is a research results database for mid-term and long-term nuclear R and D. IOC is a linkage control system between sub-projects for sharing and integrating the research results for KALIMER. The 3D CAD Database is a schematic design overview for KALIMER. The Team Cooperation System informs team members of research cooperation and meetings. Finally, KALIMER Reserved Documents is developed to manage collected data and several documents since project accomplishment.

  10. Fundamental aspects of the integration of seismic monitoring with numerical modelling.

    CSIR Research Space (South Africa)

    Mendecki, AJ

    2001-06-01

    Full Text Available of the physical state of the rock-mass. It must be equipped with the capability of converting the parameters of a real seismic event into a corresponding model-compatible input in the form of an additional loading on the rock-mass. It must allow for an unambiguous identification and quantification of "seismic events" among the model-generated data. Structure of an integrated numerical model: the functionality interrelations between the different components of a software package designed to implement...

  11. A difference quotient-numerical integration method for solving radiative transfer problems

    International Nuclear Information System (INIS)

    Ding Peizhu

    1992-01-01

    A difference quotient-numerical integration method is adopted to solve radiative transfer problems in an anisotropic scattering slab medium. With this method, the radiative transfer problem is reduced to a system of linear algebraic equations whose coefficient matrix is a band matrix, so the method is very simple to evaluate on a computer, its formulae are easy to derive, and it is easy for experimentalists to master. An example is evaluated, and it is shown that the method is precise.
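
    The practical payoff of a banded coefficient matrix is that the system can be solved in time proportional to the number of unknowns rather than its cube. A minimal sketch with SciPy's banded solver on a tridiagonal system follows.

```python
import numpy as np
from scipy.linalg import solve_banded

# A banded coefficient matrix can be solved in O(n * bandwidth^2) instead of
# O(n^3) -- the property the difference-quotient discretization exploits.
n = 6
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)

# Banded storage for solve_banded: row 0 = superdiagonal, row 1 = diagonal,
# row 2 = subdiagonal (one sub- and one superdiagonal here).
ab = np.zeros((3, n))
ab[0, 1:] = off      # superdiagonal
ab[1, :] = main      # main diagonal
ab[2, :-1] = off     # subdiagonal

b = np.ones(n)
x = solve_banded((1, 1), ab, b)
print(x)
```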

  12. Integrating protein structures and precomputed genealogies in the Magnum database: Examples with cellular retinoid binding proteins

    Directory of Open Access Journals (Sweden)

    Bradley Michael E

    2006-02-01

    Full Text Available Abstract Background When accurate models for the divergent evolution of protein sequences are integrated with complementary biological information, such as folded protein structures, analyses of the combined data often lead to new hypotheses about molecular physiology. This represents an excellent example of how bioinformatics can be used to guide experimental research. However, progress in this direction has been slowed by the lack of a publicly available resource suitable for general use. Results The precomputed Magnum database offers a solution to this problem for ca. 1,800 full-length protein families with at least one crystal structure. The Magnum deliverables include (1) multiple sequence alignments, (2) mapping of alignment sites to crystal structure sites, (3) phylogenetic trees, (4) inferred ancestral sequences at internal tree nodes, and (5) amino acid replacements along tree branches. Comprehensive evaluations revealed that the automated procedures used to construct Magnum produced accurate models of how proteins divergently evolve, or genealogies, and correctly integrated these with the structural data. To demonstrate Magnum's capabilities, we asked for amino acid replacements requiring three nucleotide substitutions, located at internal protein structure sites, and occurring on short phylogenetic tree branches. In the cellular retinoid binding protein family a site that potentially modulates ligand binding affinity was discovered. Recruitment of cellular retinol binding protein to function as a lens crystallin in the diurnal gecko afforded another opportunity to showcase the predictive value of a browsable database containing branch replacement patterns integrated with protein structures. Conclusion We integrated two areas of protein science, evolution and structure, on a large scale and created a precomputed database, known as Magnum, which is the first freely available resource of its kind. Magnum provides evolutionary and structural

  13. Semi-empirical γ-ray peak efficiency determination including self-absorption correction based on numerical integration

    International Nuclear Information System (INIS)

    Noguchi, M.; Takeda, K.; Higuchi, H.

    1981-01-01

    A method of γ-ray efficiency determination for extended (plane or bulk) samples based on numerical integration of point source efficiency is studied. The proposed method is widely applicable to samples of various shapes and materials. The geometrical factor in the peak efficiency can easily be corrected for by simply changing the integration region, and γ-ray self-absorption is also corrected by the absorption coefficients for the sample matrix. (author)
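
    The approach can be sketched as a double numerical integration of a point-source efficiency over a cylindrical sample, weighted by the self-absorption factor. The efficiency function, attenuation coefficient and geometry below are illustrative placeholders, not calibrated detector data.

```python
import numpy as np
from scipy.integrate import trapezoid

def eps_point(r, z, d0=50.0):
    # Illustrative inverse-square point-source efficiency; effective detector
    # interaction point sits a distance d0 (mm) below the sample bottom.
    return 0.05 * d0**2 / ((d0 + z)**2 + r**2)

mu = 0.02            # linear attenuation coefficient of the sample matrix, 1/mm
R, T = 30.0, 10.0    # sample radius and thickness, mm

r = np.linspace(0.0, R, 200)
z = np.linspace(0.0, T, 100)
rr, zz = np.meshgrid(r, z)

# Volume average of eps_point * self-absorption over the cylinder
# (the factor rr comes from the cylindrical area element r dr dz).
num = trapezoid(trapezoid(eps_point(rr, zz) * np.exp(-mu * zz) * rr, r, axis=1), z)
den = trapezoid(trapezoid(rr, r, axis=1), z)
print("extended-sample peak efficiency:", num / den)
```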

  14. Formulations by surface integral equations for numerical simulation of non-destructive testing by eddy currents

    International Nuclear Information System (INIS)

    Vigneron, Audrey

    2015-01-01

    The thesis addresses the numerical simulation of non-destructive testing (NDT) using eddy currents, and more precisely the computation of the electromagnetic fields induced by a transmitter sensor in a healthy part. This calculation is the first step in the modeling of a complete inspection process in the CIVA software platform developed at CEA LIST. Currently, the models integrated in CIVA are restricted to canonical (modal computation) or axially symmetric geometries. The need for more diverse and complex configurations requires the introduction of new numerical modeling tools. In practice the sensor may be composed of elements with different shapes and physical properties. The inspected parts are conductive and may contain dielectric or magnetic elements. Due to the cohabitation of different materials in one configuration, different regimes (static, quasi-static or dynamic) may coexist. Under the assumption of linear, isotropic and piecewise homogeneous material properties, the surface integral equation (SIE) approach reduces a volume-based problem to an equivalent surface-based problem. However, the usual SIE formulations for Maxwell's problem generally suffer from numerical noise in asymptotic situations, and especially at low frequencies. The objective of this study is to determine a version that is stable for the range of physical parameters typical of eddy-current NDT applications. In this context, a block-iterative scheme based on a physical decomposition is proposed for the computation of primary fields. This scheme is accurate and well conditioned. An asymptotic study of the integral Maxwell problem at low frequencies is also performed, establishing the eddy-current integral problem as an asymptotic case of the corresponding Maxwell problem. (author) [fr]

  15. BioWarehouse: a bioinformatics database warehouse toolkit

    Directory of Open Access Journals (Sweden)

    Stringer-Calvert David WJ

    2006-03-01

    Full Text Available Abstract Background This article addresses the problem of interoperation of heterogeneous bioinformatics databases. Results We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. Conclusion BioWarehouse embodies significant progress on the

  16. Numerical Modelling of Mechanical Integrity of the Copper-Cast Iron Canister. A Literature Review

    International Nuclear Information System (INIS)

    Lanru Jing

    2004-04-01

    This review article summarizes the research work on the numerical modelling of the mechanical integrity of the composite copper-cast iron canisters for the final disposal of Swedish nuclear waste, conducted by SKB and SKI since 1992. The objective of the review is to evaluate the outstanding issues existing today regarding the basic design concepts and premises, and the fundamental processes, properties and parameters considered for the functions and requirements of canisters under the conditions of a deep geological repository. The focus is placed on the adequacy of the numerical modelling approaches adopted with regard to the overall mechanical integrity of the canisters, especially the initial state of canisters regarding defects and the consequences of their evolution under the external and internal loading mechanisms adopted in the design premises. The emphasis is on the stress-strain behaviour and failure/strength, with creep and plasticity involved. Corrosion, although one of the major concerns in the field of canister safety, was not included.

  17. Whistleblowing: An integrative literature review of data-based studies involving nurses.

    Science.gov (United States)

    Jackson, Debra; Hickman, Louise D; Hutchinson, Marie; Andrew, Sharon; Smith, James; Potgieter, Ingrid; Cleary, Michelle; Peters, Kath

    2014-01-01

    Abstract Aim: To summarise and critique the research literature about whistleblowing and nurses. Whistleblowing is identified as a crucial issue in the maintenance of healthcare standards, and nurses are frequently involved in whistleblowing events. Despite the importance of this issue, to our knowledge an evaluation of this body of the data-based literature has not been undertaken. An integrative literature review approach was used to summarise and critique the research literature. A comprehensive search of five databases, including Medline, CINAHL, PubMed and Health Science: Nursing/Academic Edition, as well as Google, was conducted using terms including 'Whistleblow*' and 'nurs*'. In addition, relevant journals were examined, as well as the reference lists of retrieved papers. Papers published during the years 2007-2013 were selected for inclusion. Fifteen papers were identified, capturing data from nurses in seven countries. The findings in this review demonstrate a growing body of research calling for the nursing profession at large to engage and respond appropriately to issues involving suboptimal patient care or organisational wrongdoing. Nursing plays a key role in maintaining practice standards and in reporting care that is unacceptable, although the repercussions for nurses who raise concerns are insupportable. Overall, whistleblowing and how it influences the individual, their family, work colleagues, nursing practice and policy requires further national and international research attention.

  18. Numerical Study of Two-Dimensional Volterra Integral Equations by RDTM and Comparison with DTM

    Directory of Open Access Journals (Sweden)

    Reza Abazari

    2013-01-01

    Full Text Available The two-dimensional Volterra integral equations are solved using a more recent semi-analytic method, the reduced differential transform method (the so-called RDTM), and compared with the differential transform method (DTM). The concepts of the DTM and RDTM are briefly explained, and their application to the two-dimensional Volterra integral equations is studied. The results obtained by the DTM and RDTM are compared with the exact solution. As an important result, it is shown that the RDTM results are more accurate than those obtained by the DTM applied to the same Volterra integral equations. The numerical results reveal that the RDTM is very effective, convenient, and quite accurate for this kind of nonlinear integral equation. It is predicted that the RDTM will find wide applicability in the engineering sciences.

  19. Integrated numerical modeling of a laser gun injector

    International Nuclear Information System (INIS)

    Liu, H.; Benson, S.; Bisognano, J.; Liger, P.; Neil, G.; Neuffer, D.; Sinclair, C.; Yunn, B.

    1993-06-01

    CEBAF is planning to incorporate a laser gun injector into the linac front end as a high-charge cw source for a high-power free electron laser and nuclear physics. This injector consists of a DC laser gun, a buncher, a cryounit and a chicane. The performance of the injector is predicted based on integrated numerical modeling using POISSON, SUPERFISH and PARMELA. The point-by-point method incorporated into PARMELA by McDonald is chosen for space charge treatment. The concept of "conditioning for final bunching" is employed to vary several crucial parameters of the system for achieving highest peak current while maintaining low emittance and low energy spread. Extensive parameter variation studies show that the design will perform beyond the specifications for FEL operations aimed at industrial applications and fundamental scientific research. The calculation also shows that the injector will perform as an extremely bright cw electron source

  20. Integrated Storage and Management of Vector and Raster Data Based on Oracle Database

    Directory of Open Access Journals (Sweden)

    WU Zheng

    2017-05-01

    Full Text Available At present, there are many problems in the storage and management of multi-source heterogeneous spatial data, such as the difficulty of transfer, the lack of unified storage, and low efficiency. By combining relational database and spatial data engine technology, this paper proposes an approach for the integrated storage and management of vector and raster data on the basis of Oracle. The approach first establishes an integrated storage model for vector and raster data and optimizes the retrieval mechanism, then designs a framework for seamless data transfer, and finally realizes the unified storage and efficient management of multi-source heterogeneous data. A comparison of experimental results with ArcSDE, an internationally leading spatial data engine, shows that the proposed approach has higher data transfer performance and better query retrieval efficiency.

  1. The Future of Asset Management for Human Space Exploration: Supply Classification and an Integrated Database

    Science.gov (United States)

    Shull, Sarah A.; Gralla, Erica L.; deWeck, Olivier L.; Shishko, Robert

    2006-01-01

    One of the major logistical challenges in human space exploration is asset management. This paper presents observations on the practice of asset management in support of human space flight to date and discusses a functional-based supply classification and a framework for an integrated database that could be used to improve asset management and logistics for human missions to the Moon, Mars and beyond.

  2. Implementing Families of Implicit Chebyshev Methods with Exact Coefficients for the Numerical Integration of First- and Second-Order Differential Equations

    National Research Council Canada - National Science Library

    Mitchell, Jason

    2002-01-01

    A method is presented for the generation of exact numerical coefficients found in two families of implicit Chebyshev methods for the numerical integration of first- and second-order ordinary differential equations...

  3. TOMATOMICS: A Web Database for Integrated Omics Information in Tomato

    KAUST Repository

    Kudo, Toru; Kobayashi, Masaaki; Terashima, Shin; Katayama, Minami; Ozaki, Soichi; Kanno, Maasa; Saito, Misa; Yokoyama, Koji; Ohyanagi, Hajime; Aoki, Koh; Kubo, Yasutaka; Yano, Kentaro

    2016-01-01

    Solanum lycopersicum (tomato) is an important agronomic crop and a major model fruit-producing plant. To facilitate basic and applied research, comprehensive experimental resources and omics information on tomato are available following their development. Mutant lines and cDNA clones from a dwarf cultivar, Micro-Tom, are two of these genetic resources. Large-scale sequencing data for ESTs and full-length cDNAs from Micro-Tom continue to be gathered. In conjunction with information on the reference genome sequence of another cultivar, Heinz 1706, the Micro-Tom experimental resources have facilitated comprehensive functional analyses. To enhance the efficiency of acquiring omics information for tomato biology, we have integrated the information on the Micro-Tom experimental resources and the Heinz 1706 genome sequence. We have also inferred gene structure by comparison of sequences between the genome of Heinz 1706 and the transcriptome, which are comprised of Micro-Tom full-length cDNAs and Heinz 1706 RNA-seq data stored in the KaFTom and Sequence Read Archive databases. In order to provide large-scale omics information with streamlined connectivity we have developed and maintain a web database TOMATOMICS (http://bioinf.mind.meiji.ac.jp/tomatomics/). In TOMATOMICS, access to the information on the cDNA clone resources, full-length mRNA sequences, gene structures, expression profiles and functional annotations of genes is available through search functions and the genome browser, which has an intuitive graphical interface.

  4. TOMATOMICS: A Web Database for Integrated Omics Information in Tomato

    KAUST Repository

    Kudo, Toru

    2016-11-29

    Solanum lycopersicum (tomato) is an important agronomic crop and a major model fruit-producing plant. To facilitate basic and applied research, comprehensive experimental resources and omics information on tomato are available following their development. Mutant lines and cDNA clones from a dwarf cultivar, Micro-Tom, are two of these genetic resources. Large-scale sequencing data for ESTs and full-length cDNAs from Micro-Tom continue to be gathered. In conjunction with information on the reference genome sequence of another cultivar, Heinz 1706, the Micro-Tom experimental resources have facilitated comprehensive functional analyses. To enhance the efficiency of acquiring omics information for tomato biology, we have integrated the information on the Micro-Tom experimental resources and the Heinz 1706 genome sequence. We have also inferred gene structure by comparison of sequences between the genome of Heinz 1706 and the transcriptome, which are comprised of Micro-Tom full-length cDNAs and Heinz 1706 RNA-seq data stored in the KaFTom and Sequence Read Archive databases. In order to provide large-scale omics information with streamlined connectivity we have developed and maintain a web database TOMATOMICS (http://bioinf.mind.meiji.ac.jp/tomatomics/). In TOMATOMICS, access to the information on the cDNA clone resources, full-length mRNA sequences, gene structures, expression profiles and functional annotations of genes is available through search functions and the genome browser, which has an intuitive graphical interface.

  5. Human Ageing Genomic Resources: Integrated databases and tools for the biology and genetics of ageing

    Science.gov (United States)

    Tacutu, Robi; Craig, Thomas; Budovsky, Arie; Wuttke, Daniel; Lehmann, Gilad; Taranukha, Dmitri; Costa, Joana; Fraifeld, Vadim E.; de Magalhães, João Pedro

    2013-01-01

    The Human Ageing Genomic Resources (HAGR, http://genomics.senescence.info) is a freely available online collection of research databases and tools for the biology and genetics of ageing. HAGR features now several databases with high-quality manually curated data: (i) GenAge, a database of genes associated with ageing in humans and model organisms; (ii) AnAge, an extensive collection of longevity records and complementary traits for >4000 vertebrate species; and (iii) GenDR, a newly incorporated database, containing both gene mutations that interfere with dietary restriction-mediated lifespan extension and consistent gene expression changes induced by dietary restriction. Since its creation about 10 years ago, major efforts have been undertaken to maintain the quality of data in HAGR, while further continuing to develop, improve and extend it. This article briefly describes the content of HAGR and details the major updates since its previous publications, in terms of both structure and content. The completely redesigned interface, more intuitive and more integrative of HAGR resources, is also presented. Altogether, we hope that through its improvements, the current version of HAGR will continue to provide users with the most comprehensive and accessible resources available today in the field of biogerontology. PMID:23193293

  6. A development and integration of the concentration database for relative method, k0 method and absolute method in instrumental neutron activation analysis using Microsoft Access

    International Nuclear Information System (INIS)

    Hoh Siew Sin

    2012-01-01

    Instrumental Neutron Activation Analysis (INAA) is often used at the National University of Malaysia, especially by students of the Nuclear Science Programme, to determine and calculate the concentration of an element in a sample. The lack of a database service causes users to take a longer time to calculate the concentration of an element in a sample, because they depend on costly software developed by foreign researchers. To overcome this problem, a study has been carried out to build INAA database software. The objective of this study is to build database software that helps users of INAA with the Relative Method and the Absolute Method for calculating the element concentration in a sample, using Microsoft Excel 2010 and Microsoft Access 2010. The study also integrates k0 data, k0-Concent and k0-Westcott to execute and complete the system. After the integration, a study was conducted to test the effectiveness of the database software by comparing the concentrations from experiments with those in the database. The triple bare monitors Zr-Au and Cr-Mo-Au were used in Abs-INAA as monitors to determine the thermal-to-epithermal neutron flux ratio (f). The calculations involved in determining the concentration use the net peak area (Np), the measurement time (tm), the irradiation time (tirr), the k-factor (k), the thermal-to-epithermal neutron flux ratio (f), the epithermal neutron flux distribution parameter (α), and the detection efficiency (εp). For the Com-INAA database, the reference material IAEA-375 Soil was used to calculate the concentration of elements in the sample; CRMs and SRMs are also used in this database. After the INAA database integration, a verification process was carried out to examine the effectiveness of Abs-INAA by comparing the sample concentrations in the database with the experiment. The concentration values from the INAA database software showed high accuracy and precision. ICC
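
    The relative-method arithmetic that such software automates can be condensed to a few lines: the sample concentration follows from the ratio of decay-corrected specific count rates of the sample and a co-irradiated standard. All numbers below are illustrative.

```python
import numpy as np

# Relative-method sketch: compare the sample against a co-irradiated standard
# of known concentration; decay correction refers both counts to the end of
# irradiation. All values are illustrative placeholders.
def decay_corr(half_life, t_decay, t_meas):
    lam = np.log(2.0) / half_life
    # Correction for decay before and during the measurement.
    return np.exp(lam * t_decay) * lam * t_meas / (1.0 - np.exp(-lam * t_meas))

Np_sam, m_sam = 15000.0, 0.250      # net peak area, sample mass (g)
Np_std, m_std = 42000.0, 0.100      # net peak area, standard mass (g)
C_std = 50.0                        # standard concentration (mg/kg)
hl, td_sam, td_std, tm = 2.58 * 3600, 7200.0, 3600.0, 1800.0   # times in s

A_sam = Np_sam / m_sam * decay_corr(hl, td_sam, tm)
A_std = Np_std / m_std * decay_corr(hl, td_std, tm)
print("C_sample =", C_std * A_sam / A_std, "mg/kg")
```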

  7. A Generalized Technique in Numerical Integration

    Science.gov (United States)

    Safouhi, Hassan

    2018-02-01

    Integration by parts is one of the most popular techniques in the analysis of integrals and is one of the simplest methods to generate asymptotic expansions of integral representations. The product of the technique is usually a divergent series formed from evaluating boundary terms; however, sometimes the remaining integral is also evaluated. Due to the successive differentiation and anti-differentiation required to form the series or the remaining integral, the technique is difficult to apply to problems more complicated than the simplest. In this contribution, we explore a generalized and formalized integration by parts to create equivalent representations to some challenging integrals. As a demonstrative archetype, we examine Bessel integrals, Fresnel integrals and Airy functions.
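
    As a minimal worked illustration of the mechanism described above, consider the exponential integral, a textbook case rather than one of the Bessel/Fresnel/Airy archetypes treated in the paper. Repeated integration by parts peels off boundary terms and leaves an explicit remainder:

        \int_x^\infty \frac{e^{-t}}{t}\,dt
          = \frac{e^{-x}}{x} - \int_x^\infty \frac{e^{-t}}{t^2}\,dt
          = e^{-x}\sum_{k=1}^{n} \frac{(-1)^{k-1}(k-1)!}{x^k}
            + (-1)^n\, n! \int_x^\infty \frac{e^{-t}}{t^{n+1}}\,dt .

    The factorially growing coefficients make the series divergent for every fixed x, yet truncating near n ≈ x minimizes the remainder; taming exactly this behaviour is what a generalized, formalized integration by parts is after.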

  8. Database and applications security integrating information security and data management

    CERN Document Server

    Thuraisingham, Bhavani

    2005-01-01

    This is the first book to provide an in-depth coverage of all the developments, issues and challenges in secure databases and applications. It provides directions for data and application security, including securing emerging applications such as bioinformatics, stream information processing and peer-to-peer computing. Divided into eight sections, each of which focuses on a key concept of secure databases and applications, this book deals with all aspects of technology, including secure relational databases, inference problems, secure object databases, secure distributed databases and emerging

  9. LandIT Database

    DEFF Research Database (Denmark)

    Iftikhar, Nadeem; Pedersen, Torben Bach

    2010-01-01

    and reporting purposes. This paper presents the LandIT database, which is a result of the LandIT project, an industrial collaboration that developed technologies for communication and data integration between farming devices and systems. The LandIT database in principle is based...... on the ISOBUS standard; however the standard is extended with additional requirements, such as gradual data aggregation and flexible exchange of farming data. This paper describes the conceptual and logical schemas of the proposed database based on a real-life farming case study....

  10. Phase-field model and its numerical solution for coring and microstructure evolution studies in alloys

    Science.gov (United States)

    Turchi, Patrice E. A.; Fattebert, Jean-Luc; Dorr, Milo R.; Wickett, Michael E.; Belak, James F.

    2011-03-01

    We describe an algorithm for the numerical solution of a phase-field model (PFM) of microstructure evolution in alloys using physical parameters from thermodynamic (CALPHAD) and kinetic databases. The coupled system of PFM equations includes a local order parameter, a quaternion representation of local crystal orientation and a species composition parameter. Time evolution of microstructures and alloy composition is obtained using an implicit time integration of the system. Physical parameters in databases can be obtained either through experiment or first-principles calculations. Application to coring studies and microstructure evolution of Au-Ni will be presented. Prepared by LLNL under Contract DE-AC52-07NA27344
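
    The coupled PFM system above (order parameter, quaternion orientation and composition) is far beyond a short sketch, but its core numerical ingredient, implicit time integration of a stiff phase-field equation, can be illustrated on a 1D Allen-Cahn model. Everything below (parameters, discretization, the Jacobian-free Newton-Krylov solver) is an illustrative assumption, not a detail of the LLNL code:

        import numpy as np
        from scipy.optimize import newton_krylov

        # 1D Allen-Cahn: phi_t = M * (eps2 * phi_xx - W'(phi)),  W = (1 - phi^2)^2 / 4
        N, L = 200, 1.0
        dx = L / N
        dt, M, eps2 = 1e-3, 1.0, 1e-3      # illustrative parameters
        x = np.linspace(0.0, L, N)
        phi = np.tanh((x - 0.5) / 0.05)    # initial diffuse interface at the domain centre

        def laplacian(u):
            # second-order central differences with zero-flux (Neumann) boundaries
            lap = np.empty_like(u)
            lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
            lap[0] = 2.0 * (u[1] - u[0]) / dx**2
            lap[-1] = 2.0 * (u[-2] - u[-1]) / dx**2
            return lap

        def residual(phi_new):
            # backward-Euler residual: both stiff terms are taken at the new time level
            return phi_new - phi - dt * M * (eps2 * laplacian(phi_new) + phi_new - phi_new**3)

        for step in range(100):
            phi = newton_krylov(residual, phi)   # Jacobian-free Newton-Krylov solve per step

    Treating the interface terms implicitly removes the severe step-size restriction they would impose on an explicit scheme, which is the motivation for the implicit integration mentioned in the record.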

  11. Numerical simulations of inertial confinement fusion hohlraum with LARED-integration code

    International Nuclear Information System (INIS)

    Li Jinghong; Li Shuanggui; Zhai Chuanlei

    2011-01-01

    In the target design of the Inertial Confinement Fusion (ICF) program, it is common practice to apply radiation hydrodynamics codes to study the key physical processes of ICF, such as hohlraum physics, radiation drive symmetry and capsule implosion physics in the radiation-drive approach. Recently, much effort has been devoted to developing our 2D integrated simulation capability for laser fusion with a variety of optional physical models and numerical methods. In order to effectively integrate the existing codes and to facilitate the development of new ones, we are developing an object-oriented structured-mesh parallel code-supporting infrastructure, called JASMIN. Based on the two-dimensional three-temperature hohlraum physics code LARED-H and the two-dimensional multi-group radiative transfer code LARED-R, we have developed a new-generation two-dimensional laser fusion code under the JASMIN infrastructure, which enables us to simulate the whole process of laser fusion from the laser beams' entrance into the hohlraum to the end of implosion. In this paper, we give a brief description of this code, named LARED-Integration, focusing on its physical models, and present some simulation results for the hohlraum. (author)

  12. Integrating query of relational and textual data in clinical databases: a case study.

    Science.gov (United States)

    Fisk, John M; Mutalik, Pradeep; Levin, Forrest W; Erdos, Joseph; Taylor, Caroline; Nadkarni, Prakash

    2003-01-01

    The authors designed and implemented a clinical data mart composed of an integrated information retrieval (IR) and relational database management system (RDBMS). Using commodity software, which supports interactive, attribute-centric text and relational searches, the mart houses 2.8 million documents that span a five-year period and supports basic IR features such as Boolean searches, stemming, and proximity and fuzzy searching. Results are relevance-ranked using either "total documents per patient" or "report type weighting." Non-curated medical text has a significant degree of malformation with respect to spelling and punctuation, which creates difficulties for text indexing and searching. Presently, the IR facilities of RDBMS packages lack the features necessary to handle such malformed text adequately. A robust IR+RDBMS system can be developed, but it requires integrating RDBMSs with third-party IR software. RDBMS vendors need to make their IR offerings more accessible to non-programmers.
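
    The commodity IR+RDBMS software used in the case study is not named in the abstract; purely as a generic sketch of the same idea, a single query mixing a Boolean, relevance-ranked text predicate with a relational attribute predicate, the following uses SQLite's FTS5 full-text extension from Python's standard library on a hypothetical reports table:

        import sqlite3

        con = sqlite3.connect(":memory:")
        # one virtual table: 'body' is full-text indexed, the attributes are stored unindexed
        con.execute("CREATE VIRTUAL TABLE reports USING fts5(patient UNINDEXED, rtype UNINDEXED, body)")
        con.executemany("INSERT INTO reports VALUES (?, ?, ?)", [
            ("p1", "radiology", "right lower lobe pneumonia, likely bacterial"),
            ("p2", "radiology", "viral pneumonia suspected"),
            ("p3", "discharge", "pneumonia resolved after antibiotics"),
        ])

        # Boolean IR predicate (MATCH) combined with a relational attribute predicate,
        # relevance-ranked by BM25: the analogue of an attribute-centric text search.
        rows = con.execute(
            "SELECT patient, body FROM reports "
            "WHERE reports MATCH 'pneumonia NOT viral' AND rtype = 'radiology' "
            "ORDER BY bm25(reports)"
        ).fetchall()
        print(rows)   # -> [('p1', 'right lower lobe pneumonia, likely bacterial')]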

  13. Experimental and numerical analyses on thermal performance of different typologies of PCMs integrated in the roof space

    DEFF Research Database (Denmark)

    Elarga, Hagar; Fantucci, Stefano; Serra, Valentina

    2017-01-01

    portions: one, the bare roof, representing the reference case without PCMs; the other two integrating two PCM typologies with different melting/solidification temperature ranges. A numerical model was furthermore developed implementing the equivalent capacitance numerical method to describe the substance...... peak load between 13% and 59% depending on the PCM typology, highlighting that to reach the expected performance the proper PCM type should be carefully selected....

  14. Integrating pattern mining in relational databases

    NARCIS (Netherlands)

    Calders, T.; Goethals, B.; Prado, A.; Fürnkranz, J.; Scheffer, T.; Spiliopoulou, M.

    2006-01-01

    Almost a decade ago, Imielinski and Mannila introduced the notion of Inductive Databases to manage KDD applications just as DBMSs successfully manage business applications. The goal is to follow one of the key DBMS paradigms: building optimizing compilers for ad hoc queries. During the past decade,

  15. Development of Integrated Simulation System for Helical Plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Nakamura, Y.; Yokoyama, M.; Nakajima, N.; Fukuyama, A.; Watanabe, K. Y.; Funaba, H.; Suzuki, Y.; Murakami, S.; Ida, K.; Sakakibara, S.; Yamada, H.

    2005-07-01

    Recent progress in computers (parallel/vector-parallel machines and PC clusters, for example) and in numerical codes for helical plasmas, such as three-dimensional MHD equilibrium codes, combined with the development of plasma diagnostics techniques, enables detailed theoretical analyses of individual experimental observations. Analysing experimental data from the viewpoint of integrated physics is now recognized as an important issue for a global understanding of confinement physics. In addition, there are international movements towards integrated numerical simulation studies. One is a set of proposals for integrated modeling of burning tokamak plasmas, motivated by the ITER activity; integrated numerical simulation will help in drawing up new experimental plans, especially for burning plasma experiments. Another is international collaboration on the confinement database and on neoclassical transport in helical plasmas/stellarators. These backgrounds motivate us to start developing an integrated simulation system with a modular structure and user-friendly interfaces. The system, based on hierarchical and multi-scale (time and space) modeling, will also be a platform for theoreticians to test their own models, such as turbulent transport models. In this paper, we show the strategy for developing the integrated simulation system and the present status of the development. In particular, we discuss the modeling of the time evolution of the plasma net current profile, which is equivalent to the time evolution of the rotational transform profile, on the resistive time scale. (Author)

  16. Experimental and Numerical Investigations in Shallow Cut Grinding by Workpiece Integrated Infrared Thermopile Array.

    Science.gov (United States)

    Reimers, Marcel; Lang, Walter; Dumstorff, Gerrit

    2017-09-30

    The purpose of our study is to investigate the heat distribution and the occurring temperatures during grinding. To this end, we carried out both experimental and numerical investigations. In the first part, we present the integration of an infrared thermopile array in a steel workpiece. Experiments are done by acquiring data from the thermopile array during grinding of a groove in a workpiece made of steel. In the second part, we present numerical investigations of the grinding process to further understand its thermal characteristics. Finally, we conclude our work. Increasing the feed speed leads to two things: higher heat flux densities in the workpiece and higher temperature gradients in the material.

  17. DBGC: A Database of Human Gastric Cancer

    Science.gov (United States)

    Wang, Chao; Zhang, Jun; Cai, Mingdeng; Zhu, Zhenggang; Gu, Wenjie; Yu, Yingyan; Zhang, Xiaoyan

    2015-01-01

    The Database of Human Gastric Cancer (DBGC) is a comprehensive database that integrates various human gastric cancer-related data resources. Human gastric cancer-related transcriptomics projects, proteomics projects, mutations, biomarkers and drug-sensitive genes from different sources were collected and unified in this database. Moreover, epidemiological statistics of gastric cancer patients in China and clinicopathological information annotated with gastric cancer cases were also integrated into the DBGC. We believe that this database will greatly facilitate research regarding human gastric cancer in many fields. DBGC is freely available at http://bminfor.tongji.edu.cn/dbgc/index.do PMID:26566288

  18. Brassica ASTRA: an integrated database for Brassica genomic research.

    Science.gov (United States)

    Love, Christopher G; Robinson, Andrew J; Lim, Geraldine A C; Hopkins, Clare J; Batley, Jacqueline; Barker, Gary; Spangenberg, German C; Edwards, David

    2005-01-01

    Brassica ASTRA is a public database for genomic information on Brassica species. The database incorporates expressed sequences with Swiss-Prot and GenBank comparative sequence annotation as well as secondary Gene Ontology (GO) annotation derived from the comparison with Arabidopsis TAIR GO annotations. Simple sequence repeat molecular markers are identified within resident sequences and mapped onto the closely related Arabidopsis genome sequence. Bacterial artificial chromosome (BAC) end sequences derived from the Multinational Brassica Genome Project are also mapped onto the Arabidopsis genome sequence enabling users to identify candidate Brassica BACs corresponding to syntenic regions of Arabidopsis. This information is maintained in a MySQL database with a web interface providing the primary means of interrogation. The database is accessible at http://hornbill.cspp.latrobe.edu.au.

  19. KRILLBASE: a circumpolar database of Antarctic krill and salp numerical densities, 1926-2016

    Science.gov (United States)

    Atkinson, Angus; Hill, Simeon L.; Pakhomov, Evgeny A.; Siegel, Volker; Anadon, Ricardo; Chiba, Sanae; Daly, Kendra L.; Downie, Rod; Fielding, Sophie; Fretwell, Peter; Gerrish, Laura; Hosie, Graham W.; Jessopp, Mark J.; Kawaguchi, So; Krafft, Bjørn A.; Loeb, Valerie; Nishikawa, Jun; Peat, Helen J.; Reiss, Christian S.; Ross, Robin M.; Quetin, Langdon B.; Schmidt, Katrin; Steinberg, Deborah K.; Subramaniam, Roshni C.; Tarling, Geraint A.; Ward, Peter

    2017-03-01

    Antarctic krill (Euphausia superba) and salps are major macroplankton contributors to Southern Ocean food webs, and krill are also fished commercially. Managing this fishery sustainably, against a backdrop of rapid regional climate change, requires information on distribution and time trends. Many data on the abundance of both taxa have been obtained from net sampling surveys since 1926, but much of this is stored in national archives, sometimes only in notebooks. In order to make these important data accessible we have collated available abundance data (numerical density, no. m⁻²) of postlarval E. superba and of individual salps (multiple species, whether occurring singly or in chains). These were combined into a central database, KRILLBASE, together with environmental information, standardisation and metadata. The aim is to provide a temporal-spatial data resource to support a variety of research, such as biogeochemistry, autecology, higher-predator foraging and food web modelling, in addition to fisheries management and conservation. Previous versions of KRILLBASE have led to a series of papers since 2004 which illustrate some of the potential uses of this database. With increasing numbers of requests for these data, we here provide an updated version of KRILLBASE that contains data from 15 194 net hauls, including 12 758 with krill abundance data and 9726 with salp abundance data. These data were collected by 10 nations and span 56 seasons in two epochs (1926-1939 and 1976-2016). Here, we illustrate the seasonal, inter-annual, regional and depth coverage of sampling, and provide both circumpolar- and regional-scale distribution maps. Krill abundance data have been standardised to accommodate variation in sampling methods, and we present these as well as the raw data. Information is provided on how to screen, interpret and use KRILLBASE to reduce artefacts in interpretation, with contact points for the main data providers. The DOI for the published data set is doi:10

  20. Data integration for European marine biodiversity research: creating a database on benthos and plankton to study large-scale patterns and long-term changes

    NARCIS (Netherlands)

    Vandepitte, L.; Vanhoorne, B.; Kraberg, A.; Anisimova, N.; Antoniadou, C.; Araújo, R.; Bartsch, I.; Beker, B.; Benedetti-Cecchi, L.; Bertocci, I.; Cochrane, S.J.; Cooper, K.; Craeymeersch, J.A.; Christou, E.; Crisp, D.J.; Dahle, S.; de Boissier, M.; De Kluijver, M.; Denisenko, S.; De Vito, D.; Duineveld, G.; Escaravage, V.L.; Fleischer, D.; Fraschetti, S.; Giangrande, A.; Heip, C.H.R.; Hummel, H.; Janas, U.; Karez, R.; Kedra, M.; Kingston, P.; Kuhlenkamp, R.; Libes, M.; Martens, P.; Mees, J.; Mieszkowska, N.; Mudrak, S.; Munda, I.; Orfanidis, S.; Orlando-Bonaca, M.; Palerud, R.; Rachor, E.; Reichert, K.; Rumohr, H.; Schiedek, D.; Schubert, P.; Sistermans, W.C.H.; Sousa Pinto, I.S.; Southward, A.J.; Terlizzi, A.; Tsiaga, E.; Van Beusekom, J.E.E.; Vanden Berghe, E.; Warzocha, J.; Wasmund, N.; Weslawski, J.M.; Widdicombe, C.; Wlodarska-Kowalczuk, M.; Zettler, M.L.

    2010-01-01

    The general aim of setting up a central database on benthos and plankton was to integrate long-, medium- and short-term datasets on marine biodiversity. Such a database makes it possible to analyse species assemblages and their changes on spatial and temporal scales across Europe. Data collation

  1. BIOSPIDA: A Relational Database Translator for NCBI.

    Science.gov (United States)

    Hagen, Matthew S; Lee, Eva K

    2010-11-13

    As the volume and availability of biological databases continue to grow, it has become increasingly difficult for research scientists to identify all relevant information for biological entities of interest. Details of nucleotide sequences, gene expression, molecular interactions, and three-dimensional structures are maintained across many different databases. Retrieving all necessary information requires an integrated system that can query multiple databases with minimized overhead. This paper introduces a universal parser and relational schema translator that can be utilized for all NCBI databases in Abstract Syntax Notation (ASN.1). The data models for OMIM, Entrez-Gene, Pubmed, MMDB and GenBank have been successfully converted into relational databases, and all are easily linkable, helping to answer complex biological questions. These tools enable research scientists to locally integrate databases from NCBI without significant workload or development time.

  2. MSblender: A probabilistic approach for integrating peptide identifications from multiple database search engines.

    Science.gov (United States)

    Kwon, Taejoon; Choi, Hyungwon; Vogel, Christine; Nesvizhskii, Alexey I; Marcotte, Edward M

    2011-07-01

    Shotgun proteomics using mass spectrometry is a powerful method for protein identification but suffers limited sensitivity in complex samples. Integrating peptide identifications from multiple database search engines is a promising strategy to increase the number of peptide identifications and reduce the volume of unassigned tandem mass spectra. Existing methods pool statistical significance scores such as p-values or posterior probabilities of peptide-spectrum matches (PSMs) from multiple search engines after high scoring peptides have been assigned to spectra, but these methods lack reliable control of identification error rates as data are integrated from different search engines. We developed a statistically coherent method for integrative analysis, termed MSblender. MSblender converts raw search scores from search engines into a probability score for every possible PSM and properly accounts for the correlation between search scores. The method reliably estimates false discovery rates and identifies more PSMs than any single search engine at the same false discovery rate. Increased identifications increment spectral counts for most proteins and allow quantification of proteins that would not have been quantified by individual search engines. We also demonstrate that enhanced quantification contributes to improve sensitivity in differential expression analyses.
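
    MSblender's correlation-aware probability model is not reproduced here, but the decision step the abstract alludes to, turning per-PSM correctness probabilities into an FDR-controlled accept list, can be sketched in a few lines. The mean-posterior-error-probability estimator below is a standard convention; the input array is an assumed stand-in for whatever combined scores the tool emits:

        import numpy as np

        def psms_at_fdr(prob_correct, alpha=0.01):
            # Accept the longest confidence-ranked prefix whose estimated FDR stays
            # at or below alpha; the estimate is the mean posterior error
            # probability (1 - p) over the accepted list.
            order = np.argsort(prob_correct)[::-1]          # most confident PSMs first
            pep = 1.0 - np.asarray(prob_correct)[order]
            fdr = np.cumsum(pep) / np.arange(1, pep.size + 1)
            k = np.searchsorted(fdr, alpha, side="right")   # fdr is non-decreasing here
            return order[:k]

        probs = np.array([0.99, 0.97, 0.90, 0.40])
        print(psms_at_fdr(probs, alpha=0.05))   # -> [0 1 2]; FDR of top 3 ~ (0.01+0.03+0.10)/3 = 0.047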

  3. EVpedia: an integrated database of high-throughput data for systemic analyses of extracellular vesicles

    Directory of Open Access Journals (Sweden)

    Dae-Kyum Kim

    2013-03-01

    Secretion of extracellular vesicles is a general cellular activity that spans the range from simple unicellular organisms (e.g. archaea, Gram-positive and Gram-negative bacteria) to complex multicellular ones, suggesting that this extracellular vesicle-mediated communication is evolutionarily conserved. Extracellular vesicles are spherical bilayered proteolipids with a mean diameter of 20–1,000 nm, which are known to contain various bioactive molecules including proteins, lipids, and nucleic acids. Here, we present EVpedia, an integrated database of high-throughput datasets from prokaryotic and eukaryotic extracellular vesicles. EVpedia provides high-throughput datasets of vesicular components (proteins, mRNAs, miRNAs, and lipids) present on prokaryotic, non-mammalian eukaryotic, and mammalian extracellular vesicles. In addition, EVpedia also provides an array of tools, such as search and browse of vesicular components, Gene Ontology enrichment analysis, network analysis of vesicular proteins and mRNAs, and comparison of vesicular datasets by ortholog identification. Moreover, publications on extracellular vesicle studies are listed in the database. This free web-based database (http://evpedia.info) might serve as a fundamental repository to stimulate the advancement of extracellular vesicle studies and to elucidate the novel functions of these complex extracellular organelles.

  4. Database Security: A Historical Perspective

    OpenAIRE

    Lesov, Paul

    2010-01-01

    The importance of security in database research has greatly increased over the years as most of the critical functionality of business and military enterprises became digitized. A database is an integral part of any information system, and databases often hold sensitive data. The security of the data depends on physical security, OS security and DBMS security. Database security can be compromised by obtaining sensitive data, changing data or degrading the availability of the database. Over the last 30 ye...

  5. Prediction methods and databases within chemoinformatics

    DEFF Research Database (Denmark)

    Jónsdóttir, Svava Osk; Jørgensen, Flemming Steen; Brunak, Søren

    2005-01-01

    MOTIVATION: To gather information about available databases and chemoinformatics methods for prediction of properties relevant to the drug discovery and optimization process. RESULTS: We present an overview of the most important databases with 2-dimensional and 3-dimensional structural information...... about drugs and drug candidates, and of databases with relevant properties. Access to experimental data and numerical methods for selecting and utilizing these data is crucial for developing accurate predictive in silico models. Many interesting predictive methods for classifying the suitability...

  6. Some considerations on displacement assumed finite elements with the reduced numerical integration technique

    International Nuclear Information System (INIS)

    Takeda, H.; Isha, H.

    1981-01-01

    The paper is concerned with displacement-assumed finite elements applying the reduced numerical integration technique in structural problems. The first part is a general consideration of the technique. Its purpose is to examine a variational interpretation of the finite element displacement formulation with the reduced integration technique in structural problems. The formulation is critically studied from the standpoint of the natural stiffness approach. It is shown that these types of elements are equivalent to a certain type of displacement and stress assumed mixed elements. The rank deficiency of the stiffness matrix of these elements is interpreted as a problem in the transformation from the natural system to a Cartesian system. It is shown that a variational basis of the equivalent mixed formulation is closely related to the Hellinger-Reissner functional. It is shown that for simple elements, e.g. bilinear quadrilateral plane stress and plate bending, there are corresponding mixed elements derived from the functional. For relatively complex types of these elements, it is shown that they are equivalent to localized mixed elements derived from the Hellinger-Reissner functional. In the second part, typical finite elements with the reduced integration technique are studied to demonstrate this equivalence. A bilinear displacement and rotation assumed shear beam element, a bilinear displacement assumed quadrilateral plane stress element and a bilinear deflection and rotation assumed quadrilateral plate bending element are examined to present equivalent mixed elements. Not only theoretical considerations but also numerical studies are presented to demonstrate the effectiveness of these elements in practical analysis. (orig.)
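
    The rank deficiency discussed above is easy to check numerically. The sketch below (our own verification, not taken from the paper) assembles the plane-stress stiffness of a single bilinear quadrilateral on the reference square with full 2x2 and reduced one-point Gauss quadrature and compares matrix ranks:

        import numpy as np

        E, nu = 1.0, 0.3
        D = E / (1 - nu**2) * np.array([[1, nu, 0],
                                        [nu, 1, 0],
                                        [0, 0, (1 - nu) / 2]])   # plane-stress constitutive matrix
        xi_n = np.array([-1, 1, 1, -1])    # corner coordinates of the reference square
        eta_n = np.array([-1, -1, 1, 1])

        def B(xi, eta):
            # strain-displacement matrix of the bilinear quad; the element coincides
            # with the reference square, so the Jacobian is the identity
            dNdx = xi_n * (1 + eta * eta_n) / 4.0
            dNdy = eta_n * (1 + xi * xi_n) / 4.0
            Bm = np.zeros((3, 8))
            Bm[0, 0::2] = dNdx
            Bm[1, 1::2] = dNdy
            Bm[2, 0::2] = dNdy
            Bm[2, 1::2] = dNdx
            return Bm

        def stiffness(points, weights):
            K = np.zeros((8, 8))
            for (xi, eta), w in zip(points, weights):
                Bm = B(xi, eta)
                K += w * Bm.T @ D @ Bm
            return K

        gp = 1 / np.sqrt(3)
        full = stiffness([(s, t) for s in (-gp, gp) for t in (-gp, gp)], [1, 1, 1, 1])
        reduced = stiffness([(0.0, 0.0)], [4.0])
        print(np.linalg.matrix_rank(full), np.linalg.matrix_rank(reduced))   # 5 3

    Full integration leaves only the three rigid-body modes singular (rank 5 of 8), whereas the one-point rule drops the rank to 3, exposing the two spurious zero-energy ("hourglass") modes that the mixed-element interpretation above explains.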

  7. On numerical Bessel transformation

    International Nuclear Information System (INIS)

    Sommer, B.; Zabolitzky, J.G.

    1979-01-01

    The authors present a computer program to calculate three-dimensional Fourier or Bessel transforms and definite integrals with Bessel functions. Numerical integration of integrands containing Bessel functions occurs in many physical problems, e.g. the electromagnetic form factor of nuclei and all transitions involving multipole expansions at high momenta. Filon's integration rule is extended to spherical Bessel functions. The numerical error is of the order of the Simpson error term of the function which has to be transformed. Thus one gets a stable integral even at large arguments of the transformed function. (Auth.)
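
    The Filon-type program itself is not reproduced here, but the task it addresses is easy to state in code. This naive sketch evaluates the spherical Bessel (j0) transform of a Gaussian by ordinary adaptive quadrature and checks it against the closed form F(q) = (sqrt(pi)/4) exp(-q^2/4); for large q the increasingly oscillatory integrand defeats plain quadrature, which is precisely why Filon-type rules, which integrate the oscillatory factor analytically, remain stable at large arguments:

        import numpy as np
        from scipy.integrate import quad
        from scipy.special import spherical_jn

        def bessel_transform(f, q, rmax=10.0):
            # F(q) = int_0^inf f(r) j0(q r) r^2 dr, truncated where f is negligible
            return quad(lambda r: f(r) * spherical_jn(0, q * r) * r**2, 0.0, rmax, limit=200)[0]

        f = lambda r: np.exp(-r**2)
        for q in (0.5, 2.0, 8.0):
            exact = np.sqrt(np.pi) / 4.0 * np.exp(-q**2 / 4.0)
            print(q, bessel_transform(f, q), exact)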

  8. On randomized algorithms for numerical solution of applied Fredholm integral equations of the second kind

    Science.gov (United States)

    Voytishek, Anton V.; Shipilov, Nikolay M.

    2017-11-01

    In this paper, a systematization of numerical (computer-implemented) randomized functional algorithms for approximating the solution of a Fredholm integral equation of the second kind is carried out. Three types of such algorithms are distinguished: the projection, the mesh and the projection-mesh methods. The possibilities of using these algorithms for the solution of practically important problems are investigated in detail. The disadvantage of the mesh algorithms, related to the necessity of calculating values of the kernels of the integral equations at fixed points, is identified. In practice, these kernels have integrable singularities, and calculation of their values is impossible. Thus, for applied problems related to solving a Fredholm integral equation of the second kind, it is expedient to use not the mesh but the projection and projection-mesh randomized algorithms.
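
    As a concrete instance of the class of algorithms surveyed, here is a minimal von Neumann-Ulam sketch (our illustrative choice of kernel and test problem, not taken from the paper) that estimates the Neumann series of a Fredholm equation of the second kind by random walks with a uniform transition density:

        import numpy as np

        rng = np.random.default_rng(0)

        def fredholm_mc(f, K, lam, x, n_walks=100_000, depth=20):
            # Monte Carlo estimator for u(x) = f(x) + lam * int_0^1 K(x,s) u(s) ds,
            # valid when the Neumann series converges (roughly lam * ||K|| < 1)
            total = 0.0
            for _ in range(n_walks):
                s_prev, w, score = x, 1.0, f(x)
                for _ in range(depth):
                    s = rng.random()             # uniform transition density on (0, 1)
                    w *= lam * K(s_prev, s)      # importance weight of this series term
                    score += w * f(s)
                    s_prev = s
                total += score
            return total / n_walks

        # test: K(x,s) = x*s, lam = 1, f(x) = 2x/3 has exact solution u(x) = x
        print(fredholm_mc(lambda x: 2 * x / 3, lambda x, s: x * s, 1.0, 0.5))   # ~0.5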

  9. Sull'Integrazione delle Strutture Numeriche nella Scuola dell'Obbligo (Integrating Numerical Structures in Mandatory School).

    Science.gov (United States)

    Bonotto, C.

    1995-01-01

    Attempted to verify knowledge regarding decimal and rational numbers in children ages 10-14. Discusses how pupils can receive and assimilate extensions of the number system from natural numbers to decimals and fractions and later can integrate this extension into a single and coherent numerical structure. (Author/MKR)

  10. Advances in Integrated Vehicle Thermal Management and Numerical Simulation

    Directory of Open Access Journals (Sweden)

    Yan Wang

    2017-10-01

    With the increasing demands for vehicle dynamic performance, economy, safety and comfort, and with ever stricter laws concerning energy conservation and emissions, vehicle power systems are becoming much more complex. To pursue high efficiency and light weight in automobile design, the power system and its vehicle integrated thermal management (VITM) system have attracted widespread attention as major components of modern vehicle technology. Regarding the internal combustion engine vehicle (ICEV), its integrated thermal management (ITM) mainly contains internal combustion engine (ICE) cooling, turbo-charged cooling, exhaust gas recirculation (EGR) cooling, lubrication cooling and air conditioning (AC) or heat pump (HP). As for electric vehicles (EVs), the ITM mainly includes battery cooling/preheating, electric machine (EM) cooling and AC or HP. With rational, effective and comprehensive control over the mentioned dynamic devices and thermal components, the modern VITM can realize collaborative optimization of multiple thermodynamic processes from the aspect of system integration. Furthermore, computer-aided calculation and numerical simulation have become significant design methods, especially for complex VITM: 1D programming can correlate multiple thermal components, and 3D simulation supports structured, modularized design. Additionally, co-simulations can virtualize various thermo-hydraulic behaviors under vehicle transient operational conditions. This article reviews relevant research work and current advances in the ever-broadening field of modern vehicle thermal management (VTM). Based on systematic summaries of the design methods and applications of ITM, future tasks and proposals are presented. This article aims to promote innovation of ITM, strengthen precise control and performance-prediction ability, and furthermore to enhance the level of research and development (R&D).

  11. Beginning database design from novice to professional

    CERN Document Server

    Churcher, Clare

    2012-01-01

    Beginning Database Design, Second Edition provides short, easy-to-read explanations of how to get database design right the first time. This book offers numerous examples to help you avoid the many pitfalls that entrap new and not-so-new database designers. Through the help of use cases and class diagrams modeled in the UML, you'll learn to discover and represent the details and scope of any design problem you choose to attack. Database design is not an exact science. Many are surprised to find that problems with their databases are caused by poor design rather than by difficulties in using th

  12. JICST Factual Database(2)

    Science.gov (United States)

    Araki, Keisuke

    A computer program that builds atom-bond connection tables from nomenclature has been developed. Chemical substances, with their nomenclature and various trivial names or experimental code numbers, are input. The chemical structures in the database are stored stereospecifically and can be searched and displayed according to stereochemistry. Source data come from the laws and regulations of Japan, the RTECS of the US, and so on. The database plays a central role within the integrated fact database service of JICST and makes interrelational retrieval possible.

  13. Issues in Big-Data Database Systems

    Science.gov (United States)

    2014-06-01

    that big data will not be manageable using conventional relational database technology, and it is true that alternative paradigms, such as NoSQL systems and search engines, have much to offer...scale well, and because integration with external data sources is so difficult. NoSQL systems are more open to this integration, and provide excellent

  14. Experimental and Numerical Investigations in Shallow Cut Grinding by Workpiece Integrated Infrared Thermopile Array

    Directory of Open Access Journals (Sweden)

    Marcel Reimers

    2017-09-01

    The purpose of our study is to investigate the heat distribution and the occurring temperatures during grinding. To this end, we carried out both experimental and numerical investigations. In the first part, we present the integration of an infrared thermopile array in a steel workpiece. Experiments are done by acquiring data from the thermopile array during grinding of a groove in a workpiece made of steel. In the second part, we present numerical investigations of the grinding process to further understand its thermal characteristics. Finally, we conclude our work. Increasing the feed speed leads to two things: higher heat flux densities in the workpiece and higher temperature gradients in the material.

  15. CyanOmics: an integrated database of omics for the model cyanobacterium Synechococcus sp. PCC 7002

    OpenAIRE

    Yang, Yaohua; Feng, Jie; Li, Tao; Ge, Feng; Zhao, Jindong

    2015-01-01

    Cyanobacteria are an important group of organisms that carry out oxygenic photosynthesis and play vital roles in both the carbon and nitrogen cycles of the Earth. The annotated genome of Synechococcus sp. PCC 7002, as an ideal model cyanobacterium, is available. A series of transcriptomic and proteomic studies of Synechococcus sp. PCC 7002 cells grown under different conditions have been reported. However, no database of such integrated omics studies has been constructed. Here we present Cyan...

  16. Numerical Development

    Science.gov (United States)

    Siegler, Robert S.; Braithwaite, David W.

    2016-01-01

    In this review, we attempt to integrate two crucial aspects of numerical development: learning the magnitudes of individual numbers and learning arithmetic. Numerical magnitude development involves gaining increasingly precise knowledge of increasing ranges and types of numbers: from non-symbolic to small symbolic numbers, from smaller to larger…

  17. Planning the future of JPL's management and administrative support systems around an integrated database

    Science.gov (United States)

    Ebersole, M. M.

    1983-01-01

    JPL's management and administrative support systems have been developed piecemeal and without consistency in design approach over the past twenty years. These systems are now proving to be inadequate to support effective management of tasks and administration of the Laboratory. New approaches are needed. Modern database management technology has the potential to provide the foundation for more effective administrative tools for JPL managers and administrators. Plans for upgrading JPL's management and administrative systems over a six-year period, evolving around the development of an integrated management and administrative database, are discussed.

  18. Object-oriented modeling and design of database federations

    NARCIS (Netherlands)

    Balsters, H.

    2003-01-01

    We describe a logical architecture and a general semantic framework for precise specification of so-called database federations. A database federation provides for tight coupling of a collection of heterogeneous component databases into a global integrated system. Our approach to database federation

  19. RECOGNITION AND VERIFICATION OF TOUCHING HANDWRITTEN NUMERALS

    NARCIS (Netherlands)

    Zhou, J.; Kryzak, A.; Suen, C.Y.

    2004-01-01

    In the field of financial document processing, recognition of touching handwritten numerals has been limited by the lack of good benchmarking databases and the low reliability of algorithms. This paper addresses the efforts toward solving these two problems. Two databases, IRIS-Bell'98 and TNIST, are

  20. Computation of Green function of the Schroedinger-like partial differential equations by the numerical functional integration

    International Nuclear Information System (INIS)

    Lobanov, Yu.Yu.; Shahbagian, R.R.; Zhidkov, E.P.

    1991-01-01

    A new method for the numerical solution of the boundary problem for Schroedinger-like partial differential equations in R^n is elaborated. The method is based on the representation of the multidimensional Green function in the form of a multiple functional integral and on the use of approximation formulas which are constructed for such integrals. The convergence of the approximations to the exact value is proved, and the remainder of the formulas is estimated. The method reduces the initial differential problem to quadratures. 16 refs.; 7 tabs
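
    In a standard form (assuming a Hamiltonian H = -1/2 Δ + V(x) on R^n; the paper's own measure and approximation formulas are not reproduced here), the representation underlying this approach is the Feynman-Kac formula for the Euclidean evolution kernel:

        \langle x\,|\,e^{-tH}\,|\,y\rangle
          = \int_{\{\xi:\ \xi(0)=y,\ \xi(t)=x\}}
            \exp\!\Big(-\int_0^t V(\xi(\tau))\,d\tau\Big)\, d\mu_W(\xi),

    where μ_W is the conditional Wiener measure on paths from y to x in time t; approximation formulas exact on a class of functionals then reduce the functional integral to finite-dimensional quadratures.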

  1. Calculations of the electromechanical transient processes using implicit methods of numerical integration

    Energy Technology Data Exchange (ETDEWEB)

    Pogosyan, T A

    1983-01-01

    The article is dedicated to the solution of systems of differential equations which describe the transient processes in an electric power system (EES) by implicit methods of numerical integration. The distinguishing feature of the implicit methods (the backward Euler method and the trapezoidal method) is their absolute stability and, consequently, the relatively small accumulation of errors at each step of integration. Therefore, they are very convenient for solving problems of electric power engineering, where the transient processes are described by a stiff system of differential equations. The stiffness is associated with the range of values of the time constants considered. The advantages of the implicit methods over explicit ones are shown in a specific example (calculation of the dynamic stability of the simplest electric power system), along with the field of use of the implicit methods and the expedience of their use in power engineering problems.
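
    A scalar stiff analogue (not the power-system equations themselves) shows why both implicit schemes tolerate steps far beyond the stability limit of explicit Euler; because the test problem is linear, each implicit update is solved in closed form:

        import numpy as np

        # stiff test problem y' = A*(y - cos t) - sin t with A = -1000; exact solution y = cos t
        A = -1000.0
        g = lambda t: -A * np.cos(t) - np.sin(t)   # rewritten as y' = A*y + g(t)

        dt, T = 0.05, 2.0     # step is 25x larger than the explicit-Euler stability limit 2/|A|
        n = int(T / dt)
        t = np.linspace(0.0, T, n + 1)
        be = np.empty(n + 1); tr = np.empty(n + 1)
        be[0] = tr[0] = 1.0
        for k in range(n):
            be[k+1] = (be[k] + dt * g(t[k+1])) / (1 - dt * A)                  # backward Euler
            tr[k+1] = ((1 + dt*A/2) * tr[k]
                       + dt/2 * (g(t[k]) + g(t[k+1]))) / (1 - dt*A/2)          # trapezoidal rule
        print(abs(be[-1] - np.cos(T)))   # O(dt) error, but stable
        print(abs(tr[-1] - np.cos(T)))   # O(dt^2) error, also stable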

  2. MEGADOCK-Web: an integrated database of high-throughput structure-based protein-protein interaction predictions.

    Science.gov (United States)

    Hayashi, Takanori; Matsuzaki, Yuri; Yanagisawa, Keisuke; Ohue, Masahito; Akiyama, Yutaka

    2018-05-08

    Protein-protein interactions (PPIs) play several roles in living cells, and computational PPI prediction is a major focus of many researchers. The three-dimensional (3D) structure and binding surface are important for the design of PPI inhibitors. Therefore, rigid body protein-protein docking calculations for two protein structures are expected to allow elucidation of PPIs different from known complexes in terms of 3D structures, because known PPI information is not explicitly required. We have developed rapid PPI prediction software based on protein-protein docking, called MEGADOCK. In order to fully utilize the benefits of computational PPI predictions, it is necessary to construct a comprehensive database to gather prediction results and their predicted 3D complex structures and to make them easily accessible. Although several databases exist that provide predicted PPIs, the previous databases do not contain a sufficient number of entries for the purpose of discovering novel PPIs. In this study, we constructed an integrated database of MEGADOCK PPI predictions, named MEGADOCK-Web. MEGADOCK-Web provides more than 10 times as many PPI predictions as previous databases and enables users to conduct PPI predictions that cannot be found in conventional PPI prediction databases. In MEGADOCK-Web, there are 7528 protein chains and 28,331,628 predicted PPIs from all possible combinations of those proteins. Each protein structure is annotated with PDB ID, chain ID, UniProt AC, related KEGG pathway IDs, and known PPI pairs. Additionally, MEGADOCK-Web provides four powerful functions: 1) searching precalculated PPI predictions, 2) providing annotations for each predicted protein pair with an experimentally known PPI, 3) visualizing candidates that may interact with the query protein on biochemical pathways, and 4) visualizing predicted complex structures through a 3D molecular viewer. MEGADOCK-Web provides a huge amount of comprehensive PPI predictions based on

  3. Physical database design using Oracle

    CERN Document Server

    Burleson, Donald K

    2004-01-01

    INTRODUCTION TO ORACLE PHYSICAL DESIGN: Preface; Relational Databases and Physical Design; Systems Analysis and Physical Database Design; Introduction to Logical Database Design; Entity/Relation Modeling; Bridging between Logical and Physical Models; Physical Design Requirements Validation. PHYSICAL ENTITY DESIGN FOR ORACLE: Data Relationships and Physical Design; Massive De-Normalization: STAR Schema Design; Designing Class Hierarchies; Materialized Views and De-Normalization; Referential Integrity; Conclusion. ORACLE HARDWARE DESIGN: Planning the Server Environment; Designing the Network Infrastructure for Oracle; Oracle Netw

  4. A guide to Internet atomic databases for hot plasmas

    International Nuclear Information System (INIS)

    Ralchenko, Yuri

    2006-01-01

    Internet atomic databases are nowadays considered to be the primary tool for dissemination of atomic data. We present here a review of numerical and bibliographic databases of importance for diagnostics of hot plasmas. Special attention is given to new and emerging trends, such as online calculation of various atomic parameters. The recently updated NIST databases are presented in detail

  5. A guide to Internet atomic databases for hot plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Ralchenko, Yuri [National Institute of Standards and Technology, Gaithersburg, MD 20899-8422 (United States)]. E-mail: yuri.ralchenko@nist.gov

    2006-05-15

    Internet atomic databases are nowadays considered to be the primary tool for dissemination of atomic data. We present here a review of numerical and bibliographic databases of importance for diagnostics of hot plasmas. Special attention is given to new and emerging trends, such as online calculation of various atomic parameters. The recently updated NIST databases are presented in detail.

  6. Facilitating quality control for spectra assignments of small organic molecules: nmrshiftdb2--a free in-house NMR database with integrated LIMS for academic service laboratories.

    Science.gov (United States)

    Kuhn, Stefan; Schlörer, Nils E

    2015-08-01

    nmrshiftdb2 supports, with its laboratory information management system, the integration of electronic lab administration and management into academic NMR facilities. It also offers the setup of a local database, while full access to nmrshiftdb2's World Wide Web database is granted. This freely available system allows, on the one hand, the submission of orders for measurement, transfers recorded data automatically or manually, and enables download of spectra via web interface, as well as integrated access to the prediction, search, and assignment tools of the NMR database for lab users. On the other hand, for the staff and lab administration, the flow of all orders can be supervised; administrative tools also include user and hardware management, a statistics functionality for accounting purposes, and a 'QuickCheck' function for assignment control, to facilitate quality control of assignments submitted to the (local) database. The laboratory information management system and database are based on a web interface as front end and are therefore independent of the operating system in use. Copyright © 2015 John Wiley & Sons, Ltd.

  7. Numerical Solution of Nonlinear Volterra Integral Equations System Using Simpson’s 3/8 Rule

    Directory of Open Access Journals (Sweden)

    Adem Kılıçman

    2012-01-01

    Full Text Available The Simpson’s 3/8 rule is used to solve the nonlinear Volterra integral equations system. Using this rule the system is converted to a nonlinear block system and then by solving this nonlinear system we find approximate solution of nonlinear Volterra integral equations system. One of the advantages of the proposed method is its simplicity in application. Further, we investigate the convergence of the proposed method and it is shown that its convergence is of order O(h4. Numerical examples are given to show abilities of the proposed method for solving linear as well as nonlinear systems. Our results show that the proposed method is simple and effective.

  8. Integrated Tsunami Database: simulation and identification of seismic tsunami sources, 3D visualization and post-disaster assessment on the shore

    Science.gov (United States)

    Krivorot'ko, Olga; Kabanikhin, Sergey; Marinin, Igor; Karas, Adel; Khidasheli, David

    2013-04-01

    One of the most important problems of tsunami investigation is the problem of seismic tsunami source reconstruction. The non-profit organization WAPMERR (http://wapmerr.org) has provided a historical database of alleged tsunami sources around the world, obtained with the help of information about seaquakes. WAPMERR also has a database of observations of tsunami waves in coastal areas. The main idea of the presentation consists in determining the tsunami source parameters using seismic data and observations of the tsunami waves on the shore, and in the expansion and refinement of the database of presupposed tsunami sources for operative and accurate prediction of hazards and assessment of risks and consequences. We also present 3D visualization of real-time tsunami wave propagation and loss assessment, characterizing the nature of the building stock in cities at risk, and monitoring by satellite images using the modern GIS technology ITRIS (Integrated Tsunami Research and Information System) developed by WAPMERR and Informap Ltd. Special scientific plug-in components are embedded in a specially developed GIS-type graphic shell for easy data retrieval, visualization and processing. The most suitable physical models for the simulation of tsunamis are based on shallow water equations. We consider the initial-boundary value problem in Ω := {(x,y) ∈ R² : x ∈ (0,L_x), y ∈ (0,L_y)}, L_x, L_y > 0, for the well-known linear shallow water equations in the Cartesian coordinate system, written in terms of the liquid flow components in dimensional form. Here η(x,y,t) defines the free water surface vertical displacement, i.e. the amplitude of the tsunami wave, and q(x,y) is the initial amplitude of the tsunami wave. The lateral boundary is assumed to be a non-reflecting boundary of the domain, that is, it allows the free passage of the propagating waves. Assume that the free surface oscillation data at points (x_m, y_m) are given as measured output data from tsunami records: f_m(t) := η(x_m, y_m, t), (x_m
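
    A one-dimensional analogue of this linear shallow-water system (constant depth, forward-backward stepping on a staggered grid; all parameter values are illustrative, not WAPMERR's) shows the basic propagation computation behind such simulations:

        import numpy as np

        # 1D linear shallow water: eta_t = -d * u_x,  u_t = -g * eta_x, constant depth d
        g, d = 9.81, 1000.0              # gravity, illustrative ocean depth [m]
        L, N = 4.0e5, 400                # 400 km domain, 1 km spacing
        dx = L / N
        c = np.sqrt(g * d)               # long-wave speed, ~99 m/s
        dt = 0.5 * dx / c                # CFL-limited time step

        x = np.linspace(0.0, L, N)
        eta = np.exp(-((x - L / 2) / 2.0e4) ** 2)   # initial displacement q(x): Gaussian hump
        u = np.zeros(N)

        for _ in range(300):
            # update velocities from the current surface, then the surface from the new velocities
            u[:-1] -= dt * g * (eta[1:] - eta[:-1]) / dx
            eta[1:] -= dt * d * (u[1:] - u[:-1]) / dx

        print(eta.max())   # the hump has split into two outgoing waves of roughly half amplitude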

  9. Integration of published information into a resistance-associated mutation database for Mycobacterium tuberculosis.

    Science.gov (United States)

    Salamon, Hugh; Yamaguchi, Ken D; Cirillo, Daniela M; Miotto, Paolo; Schito, Marco; Posey, James; Starks, Angela M; Niemann, Stefan; Alland, David; Hanna, Debra; Aviles, Enrique; Perkins, Mark D; Dolinger, David L

    2015-04-01

    Tuberculosis remains a major global public health challenge. Although incidence is decreasing, the proportion of drug-resistant cases is increasing. Technical and operational complexities prevent Mycobacterium tuberculosis drug susceptibility phenotyping in the vast majority of new and retreatment cases. The advent of molecular technologies provides an opportunity to obtain results rapidly as compared to phenotypic culture. However, correlations between genetic mutations and resistance to multiple drugs have not been systematically evaluated. Molecular testing of M. tuberculosis sampled from a typical patient continues to provide a partial picture of drug resistance. A database of phenotypic and genotypic testing results, especially where prospectively collected, could document statistically significant associations and may reveal new, predictive molecular patterns. We examine the feasibility of integrating existing molecular and phenotypic drug susceptibility data to identify associations observed across multiple studies and demonstrate potential for well-integrated M. tuberculosis mutation data to reveal actionable findings. © The Author 2014. Published by Oxford University Press on behalf of the Infectious Diseases Society of America. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  10. Integration of gel-based and gel-free proteomic data for functional analysis of proteins through Soybean Proteome Database

    KAUST Repository

    Komatsu, Setsuko

    2017-05-10

    The Soybean Proteome Database (SPD) stores data on soybean proteins obtained with gel-based and gel-free proteomic techniques. The database was constructed to provide information on proteins for functional analyses. The majority of the data is focused on soybean (Glycine max ‘Enrei’). The growth and yield of soybean are strongly affected by environmental stresses such as flooding. The database was originally constructed using data on soybean proteins separated by two-dimensional polyacrylamide gel electrophoresis, which is a gel-based proteomic technique. Since 2015, the database has been expanded to incorporate data obtained by label-free mass spectrometry-based quantitative proteomics, which is a gel-free proteomic technique. Here, the portions of the database consisting of gel-free proteomic data are described. The gel-free proteomic database contains 39,212 proteins identified in 63 sample sets, such as temporal and organ-specific samples of soybean plants grown under flooding stress or non-stressed conditions. In addition, data on organellar proteins identified in mitochondria, nuclei, and endoplasmic reticulum are stored. Furthermore, the database integrates multiple omics data such as genomics, transcriptomics, metabolomics, and proteomics. The SPD database is accessible at http://proteome.dc.affrc.go.jp/Soybean/. Biological significance: The Soybean Proteome Database stores data obtained from both gel-based and gel-free proteomic techniques. The gel-free proteomic database comprises 39,212 proteins identified in 63 sample sets, such as different organs of soybean plants grown under flooding stress or non-stressed conditions in a time-dependent manner. In addition, organellar proteins identified in mitochondria, nuclei, and endoplasmic reticulum are stored in the gel-free proteomics database. A total of 44,704 proteins, including 5490 proteins identified using a gel-based proteomic technique, are stored in the SPD. It accounts for approximately 80% of all

  11. Integration of gel-based and gel-free proteomic data for functional analysis of proteins through Soybean Proteome Database.

    Science.gov (United States)

    Komatsu, Setsuko; Wang, Xin; Yin, Xiaojian; Nanjo, Yohei; Ohyanagi, Hajime; Sakata, Katsumi

    2017-06-23

    The Soybean Proteome Database (SPD) stores data on soybean proteins obtained with gel-based and gel-free proteomic techniques. The database was constructed to provide information on proteins for functional analyses. The majority of the data is focused on soybean (Glycine max 'Enrei'). The growth and yield of soybean are strongly affected by environmental stresses such as flooding. The database was originally constructed using data on soybean proteins separated by two-dimensional polyacrylamide gel electrophoresis, which is a gel-based proteomic technique. Since 2015, the database has been expanded to incorporate data obtained by label-free mass spectrometry-based quantitative proteomics, which is a gel-free proteomic technique. Here, the portions of the database consisting of gel-free proteomic data are described. The gel-free proteomic database contains 39,212 proteins identified in 63 sample sets, such as temporal and organ-specific samples of soybean plants grown under flooding stress or non-stressed conditions. In addition, data on organellar proteins identified in mitochondria, nuclei, and endoplasmic reticulum are stored. Furthermore, the database integrates multiple omics data such as genomics, transcriptomics, metabolomics, and proteomics. The SPD database is accessible at http://proteome.dc.affrc.go.jp/Soybean/. The Soybean Proteome Database stores data obtained from both gel-based and gel-free proteomic techniques. The gel-free proteomic database comprises 39,212 proteins identified in 63 sample sets, such as different organs of soybean plants grown under flooding stress or non-stressed conditions in a time-dependent manner. In addition, organellar proteins identified in mitochondria, nuclei, and endoplasmic reticulum are stored in the gel-free proteomics database. A total of 44,704 proteins, including 5490 proteins identified using a gel-based proteomic technique, are stored in the SPD. It accounts for approximately 80% of all predicted proteins from

  12. Integration of gel-based and gel-free proteomic data for functional analysis of proteins through Soybean Proteome Database

    KAUST Repository

    Komatsu, Setsuko; Wang, Xin; Yin, Xiaojian; Nanjo, Yohei; Ohyanagi, Hajime; Sakata, Katsumi

    2017-01-01

    The Soybean Proteome Database (SPD) stores data on soybean proteins obtained with gel-based and gel-free proteomic techniques. The database was constructed to provide information on proteins for functional analyses. The majority of the data is focused on soybean (Glycine max ‘Enrei’). The growth and yield of soybean are strongly affected by environmental stresses such as flooding. The database was originally constructed using data on soybean proteins separated by two-dimensional polyacrylamide gel electrophoresis, which is a gel-based proteomic technique. Since 2015, the database has been expanded to incorporate data obtained by label-free mass spectrometry-based quantitative proteomics, which is a gel-free proteomic technique. Here, the portions of the database consisting of gel-free proteomic data are described. The gel-free proteomic database contains 39,212 proteins identified in 63 sample sets, such as temporal and organ-specific samples of soybean plants grown under flooding stress or non-stressed conditions. In addition, data on organellar proteins identified in mitochondria, nuclei, and endoplasmic reticulum are stored. Furthermore, the database integrates multiple omics data such as genomics, transcriptomics, metabolomics, and proteomics. The SPD database is accessible at http://proteome.dc.affrc.go.jp/Soybean/. Biological significance: The Soybean Proteome Database stores data obtained from both gel-based and gel-free proteomic techniques. The gel-free proteomic database comprises 39,212 proteins identified in 63 sample sets, such as different organs of soybean plants grown under flooding stress or non-stressed conditions in a time-dependent manner. In addition, organellar proteins identified in mitochondria, nuclei, and endoplasmic reticulum are stored in the gel-free proteomics database. A total of 44,704 proteins, including 5490 proteins identified using a gel-based proteomic technique, are stored in the SPD. It accounts for approximately 80% of all

  13. RODOS database adapter

    International Nuclear Information System (INIS)

    Xie Gang

    1995-11-01

    Integrated data management is an essential aspect of many automatic information systems such as RODOS, a real-time on-line decision support system for nuclear emergency management. In particular, the application software must provide access management to different commercial database systems. This report presents the tools necessary for adapting embedded SQL applications to both HP-ALLBASE/SQL and CA-Ingres/SQL databases. The design of the database adapter and the concept of the RODOS embedded SQL syntax are discussed, considering some of the most important features of SQL functions and the identification of significant differences between SQL implementations. Finally, the software developed as well as the administrator's and installation guides are described. (orig.) [de

  14. Numerical functional integration method for studying the properties of the physical vacuum

    International Nuclear Information System (INIS)

    Lobanov, Yu.Yu.

    1998-01-01

    A new approach to investigating the physical vacuum in quantum theories, including its nonperturbative topological structure, is discussed. This approach is based on the representation of the matrix element of the evolution operator in Euclidean metrics in the form of a functional integral with a certain measure in the corresponding space, and on the use of approximation formulas which we constructed for this kind of integral. No preliminary discretization of space and time is required, and no simplifying assumptions like the semiclassical approximation, collective excitations, introduction of ''short-time'' propagators, etc. are necessary in this approach. The method allows the use of preferable deterministic algorithms instead of the traditional stochastic techniques. It has been proven that our approach has important advantages over the other known methods, including higher efficiency of computations. Examples of application of the method to the numerical study of some potential nuclear models and to the computation of the topological susceptibility and the θ-vacua energy are presented. (author)

  15. BOKASUN: A fast and precise numerical program to calculate the Master Integrals of the two-loop sunrise diagrams

    Science.gov (United States)

    Caffo, Michele; Czyż, Henryk; Gunia, Michał; Remiddi, Ettore

    2009-03-01

    We present the program BOKASUN for fast and precise evaluation of the Master Integrals of the two-loop self-mass sunrise diagram for arbitrary values of the internal masses and the external four-momentum. We use a combination of two methods: a Bernoulli accelerated series expansion and a Runge-Kutta numerical solution of a system of linear differential equations. Program summaryProgram title: BOKASUN Catalogue identifier: AECG_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AECG_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 9404 No. of bytes in distributed program, including test data, etc.: 104 123 Distribution format: tar.gz Programming language: FORTRAN77 Computer: Any computer with a Fortran compiler accepting FORTRAN77 standard. Tested on various PC's with LINUX Operating system: LINUX RAM: 120 kbytes Classification: 4.4 Nature of problem: Any integral arising in the evaluation of the two-loop sunrise Feynman diagram can be expressed in terms of a given set of Master Integrals, which should be calculated numerically. The program provides a fast and precise evaluation method of the Master Integrals for arbitrary (but not vanishing) masses and arbitrary value of the external momentum. Solution method: The integrals depend on three internal masses and the external momentum squared p. The method is a combination of an accelerated expansion in 1/p in its (pretty large!) region of fast convergence and of a Runge-Kutta numerical solution of a system of linear differential equations. Running time: To obtain 4 Master Integrals on PC with 2 GHz processor it takes 3 μs for series expansion with pre-calculated coefficients, 80 μs for series expansion without pre-calculated coefficients, from a few seconds up to a few minutes for Runge-Kutta method (depending

  16. Managing Consistency Anomalies in Distributed Integrated Databases with Relaxed ACID Properties

    DEFF Research Database (Denmark)

    Frank, Lars; Ulslev Pedersen, Rasmus

    2014-01-01

    In central databases the consistency of data is normally implemented by using the ACID (Atomicity, Consistency, Isolation and Durability) properties of a DBMS (Data Base Management System). This is not possible if distributed and/or mobile databases are involved and the availability of data also...... has to be optimized. Therefore, we will in this paper use so-called relaxed ACID properties across different locations. The objective of designing relaxed ACID properties across different database locations is that the users can trust the data they use even if the distributed database temporarily...... is inconsistent. It is also important that disconnected locations can operate in a meaningful way in so-called disconnected mode. A database is DBMS consistent if its data complies with the consistency rules of the DBMS's metadata. If the database is DBMS consistent both when a transaction starts and when it has...

  17. A coordination language for databases

    DEFF Research Database (Denmark)

    Li, Ximeng; Wu, Xi; Lluch Lafuente, Alberto

    2017-01-01

    We present a coordination language for the modeling of distributed database applications. The language, baptized Klaim-DB, borrows the concepts of localities and nets of the coordination language Klaim but re-incarnates the tuple spaces of Klaim as databases. It provides high-level abstractions...... and primitives for the access and manipulation of structured data, with integrity and atomicity considerations. We present the formal semantics of Klaim-DB and develop a type system that avoids potential runtime errors such as certain evaluation errors and mismatches of data format in tables, which are monitored...... in the semantics. The use of the language is illustrated in a scenario where the sales from different branches of a chain of department stores are aggregated from their local databases. Raising the abstraction level and encapsulating integrity checks in the language primitives have benefited the modeling task...

  18. High-accuracy numerical integration of charged particle motion – with application to ponderomotive force

    International Nuclear Information System (INIS)

    Furukawa, Masaru; Ohkawa, Yushiro; Matsuyama, Akinobu

    2016-01-01

    A high-accuracy numerical integration algorithm for charged particle motion is developed. The algorithm is based on Hamiltonian mechanics and operator decomposition. The algorithm is made time-reversal symmetric, and its order of accuracy can be increased to any order by using a recurrence formula. One of its advantages is that it is an explicit method. An effective way to decompose the time evolution operator is examined; the Poisson tensor is decomposed and non-canonical variables are adopted. The algorithm is extended to the case of time-dependent fields by introducing an extended phase space. Numerical tests showing the performance of the algorithm are presented. One is pure cyclotron motion over a long time period, and the other is a charged particle motion in a rapidly oscillating field. (author)
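
    As a sketch of the general technique, operator decomposition with a time-reversal-symmetric (Strang) composition, consider the Lorentz-force motion dx/dt = v, dv/dt = (q/m) v x B in a constant magnetic field, where both sub-flows (a free drift and a velocity rotation) can be solved exactly. This illustrates the idea only, not the authors' algorithm; the constant-field setting is an assumption of the example.

        import numpy as np

        def rotate(v, axis, angle):
            """Rotate vector v about 'axis' by 'angle' (Rodrigues' formula)."""
            a = axis / np.linalg.norm(axis)
            return (v * np.cos(angle) + np.cross(a, v) * np.sin(angle)
                    + a * np.dot(a, v) * (1.0 - np.cos(angle)))

        def strang_step(x, v, B, qm, dt):
            """Half drift, exact velocity rotation, half drift: second-order
            accurate and time-reversal symmetric by construction."""
            x = x + 0.5 * dt * v
            v = rotate(v, B, -qm * np.linalg.norm(B) * dt)  # exact cyclotron rotation
            x = x + 0.5 * dt * v
            return x, v

        # Pure cyclotron motion test: the speed (hence the energy) is preserved.
        x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
        B, qm = np.array([0.0, 0.0, 1.0]), 1.0
        for _ in range(1000):
            x, v = strang_step(x, v, B, qm, 0.05)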

  19. Database automation of accelerator operation

    International Nuclear Information System (INIS)

    Casstevens, B.J.; Ludemann, C.A.

    1983-01-01

    Database management techniques are applied to automating the setup of operating parameters of a heavy-ion accelerator used in nuclear physics experiments. Data files consist of ion-beam attributes, the interconnection assignments of the numerous power supplies and magnetic elements that steer the ions' path through the system, the data values that represent the electrical currents supplied by the power supplies, as well as the positions of motors and the status of mechanical actuators. The database is relational and permits searching on ranges of any subset of the ion-beam attributes. A file selected from the database is used by the control software to replicate the ion-beam conditions by adjusting the physical elements in a continuous manner.
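
    As an illustration only, the following sketch shows the kind of range search described above; the schema, table and column names are hypothetical, not those of the actual control system, and SQLite stands in for the relational engine.

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("""CREATE TABLE beam_setup (
            id INTEGER PRIMARY KEY,
            ion TEXT, charge_state INTEGER,
            energy_mev REAL, intensity_na REAL)""")
        con.execute("INSERT INTO beam_setup VALUES (1, 'O-16', 8, 100.0, 50.0)")

        # Search on ranges of any subset of the ion-beam attributes:
        rows = con.execute(
            """SELECT * FROM beam_setup
               WHERE ion = ? AND energy_mev BETWEEN ? AND ?""",
            ("O-16", 90.0, 110.0)).fetchall()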

  20. Study of developing a database of energy statistics

    Energy Technology Data Exchange (ETDEWEB)

    Park, T.S. [Korea Energy Economics Institute, Euiwang (Korea, Republic of)

    1997-08-01

    An integrated energy database should be prepared in advance for managing energy statistics comprehensively. However, since considerable manpower and budget are required to develop an integrated energy database, it is difficult to establish one within a short period of time. Therefore, as the first stage of work on the energy database, this study aims to draw up methods for analyzing existing statistical data lists and consolidating insufficient data, and at the same time to analyze the general concepts and the data structure of the database. I also studied the data content and items of energy databases in operation at international energy-related organizations such as the IEA and APEC and in Japan and the USA as overseas cases, as well as the domestic state of energy databases and the hardware operating systems of the Japanese databases. I analyzed the production system of Korean energy databases, discussed the KEDB system, which is representative of total energy databases, and present design concepts for new energy databases. In addition, by analyzing the Korean energy statistical data and comparing them with the system of OECD/IEA, I present the directions for establishing future Korean energy databases and their contents, the data that should be collected as supply and demand statistics, and the establishment of a data collection organization. 26 refs., 15 figs., 11 tabs.

  1. Study of relational nuclear databases and online services

    International Nuclear Information System (INIS)

    Fan Tieshuan; Guo Zhiyu; Liu Wenlong; Ye Weiguo; Feng Yuqing; Song Xiangxiang; Huang Gang; Hong Yingjue; Liu Tinjin; Chen Jinxiang; Tang Guoyou; Shi Zhaoming; Liu Chi; Chen Jiaer; Huang Xiaolong

    2004-01-01

    A relational nuclear database management and web-based services software system has been developed. Its objective is to allow users to access numerical and graphical representations of nuclear data and to easily reconstruct nuclear data in original standardized formats from the relational databases. It presents 9 relational nuclear libraries: 5 ENDF-format neutron reaction databases (BROND, CENDL, ENDF, JEF and JENDL), the ENSDF database, the EXFOR database, the IAEA Photonuclear Data Library and the charged particle reaction data from the FENDL database. The computer programs providing support for database management and data retrievals are based on the Linux implementation of PHP and the MySQL software, and are platform-independent. The first version of this software was officially released in September 2001

  2. Database Translator (DATALATOR) for Integrated Exploitation

    Science.gov (United States)

    2010-10-31

    via the Internet to Fortune 1000 clients including Mercedes Benz, Procter & Gamble, and HP. I look forward to hearing of your successful proposal and working with you to build a successful business. Sincerely, ...testing the DATALATOR experimental prototype (IRL 4) designed to demonstrate its core functions based on Next Generation Software technology. The...sources, but is not directly dependent on the platform such as database technology or data formats. In other words, there is a clear air gap between

  3. LSDB Archive - KEGG MEDICUS | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available KEGG MEDICUS Database Description. General information of database: Database name: KEGG MEDICUS. Organism Taxonomy Name: Human; Taxonomy ID: 9606. Database description: KEGG MEDICUS is an integrated database in which the package inserts of all marketed drugs in Japan and the USA are integrated with the KEGG DRUG and KEGG DISEASE databases.

  4. An object-oriented framework for managing cooperating legacy databases

    NARCIS (Netherlands)

    Balsters, H; de Brock, EO

    2003-01-01

    We describe a general semantic framework for precise specification of so-called database federations. A database federation provides for tight coupling of a collection of heterogeneous legacy databases into a global integrated system. Our approach to database federation is based on the UML/OCL data

  5. Database citation in full text biomedical articles.

    Science.gov (United States)

    Kafkas, Şenay; Kim, Jee-Hyub; McEntyre, Johanna R

    2013-01-01

    Molecular biology and literature databases represent essential infrastructure for life science research. Effective integration of these data resources requires that there are structured cross-references at the level of individual articles and biological records. Here, we describe the current patterns of how database entries are cited in research articles, based on analysis of the full text Open Access articles available from Europe PMC. Focusing on citation of entries in the European Nucleotide Archive (ENA), UniProt and the Protein Data Bank in Europe (PDBe), we demonstrate that text mining doubles the number of structured annotations of database record citations supplied in journal articles by publishers. Many thousands of new literature-database relationships are found by text mining, since these relationships are also not present in the set of articles cited by database records. We recommend that structured annotation of database records in articles is extended to other databases, such as ArrayExpress and Pfam, entries from which are also cited widely in the literature. The very high precision and high throughput of this text-mining pipeline make this activity possible both accurately and at low cost, which will allow the development of new integrated data services.
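
    A toy sketch of the underlying idea, mining database-record citations from article text with regular expressions, is given below. Real pipelines are far more elaborate; the two patterns are rough approximations written for this example, not the ones used by Europe PMC.

        import re

        patterns = {
            "PDBe":    re.compile(r"\b[1-9][a-z0-9]{3}\b", re.IGNORECASE),  # e.g. 1TIM
            "UniProt": re.compile(r"\b[OPQ][0-9][A-Z0-9]{3}[0-9]\b"),       # e.g. P12345
        }

        text = "The structure 1TIM and the protein P12345 were analysed."
        for db, pattern in patterns.items():
            for match in pattern.finditer(text):
                print(db, match.group())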

  6. Search across Different Media: Numeric Data Sets and Text Files

    Directory of Open Access Journals (Sweden)

    Michael Buckland

    2006-12-01

    Full Text Available Digital technology encourages the hope of searching across and between different media forms (text, sound, image, numeric data). Topic searches are described in two different media, text files and socioeconomic numeric databases, and also for transverse searching, whereby retrieved text is used to find topically related numeric data and vice versa. Direct transverse searching across different media is impossible. Descriptive metadata provide enabling infrastructure, but usually require mappings between different vocabularies and a search-term recommender system. Statistical association techniques and natural-language processing can help. Searches in socioeconomic numeric databases ordinarily require that place and time be specified.

  7. Numerical investigation of premixed combustion in a porous burner with integrated heat exchanger

    Energy Technology Data Exchange (ETDEWEB)

    Farzaneh, Meisam; Shafiey, Mohammad; Shams, Mehrzad [K.N. Toosi University of Technology, Department of Mechanical Engineering, Tehran (Iran, Islamic Republic of); Ebrahimi, Reza [K.N. Toosi University of Technology, Department of Aerospace Engineering, Tehran (Iran, Islamic Republic of)

    2012-07-15

    In this paper, we perform a numerical analysis of a two-dimensional axisymmetric problem arising in premixed combustion in a porous burner with an integrated heat exchanger. The physical domain consists of two zones, a porous zone and a heat exchanger zone. The two-dimensional Navier-Stokes equations, the gas and solid energy equations, and the chemical species transport equations are solved, and heat release is described by a multistep kinetics mechanism. The solid matrix is modeled as a gray medium, and the finite volume method is used to solve the radiative transfer equation to calculate the local radiation source/sink in the solid phase energy equation. Special attention is given to modeling heat transfer between the hot gas and the heat exchanger tube; thus, the corresponding terms are added to the energy equations of the flow and the solid matrix. Gas and solid temperature profiles and species mole fractions on the burner centerline, predicted 2D temperature fields, species concentrations and streamlines are presented. Calculated results for temperature profiles are compared to experimental data. It is shown that there is good agreement between the numerical solutions and the experimental data, and it is concluded that the developed numerical program is an excellent tool to investigate combustion in porous burners. (orig.)

  8. Self-adaptive numerical integrator for analytic functions

    International Nuclear Information System (INIS)

    Garribba, S.; Quartapelle, L.; Reina, G.

    1978-01-01

    A new adaptive algorithm for the integration of analytical functions is presented. The algorithm processes the integration interval by generating local subintervals whose length is controlled through a feedback loop. The control is obtained by means of a relation derived on an analytical basis and valid for an arbitrary integration rule: two different estimates of an integral are used to compute the interval length necessary to obtain an integral estimate with accuracy within the assigned error bounds. The implied method for local generation of subintervals and an effective assumption of error partition among subintervals give rise to an adaptive algorithm provided with a highly accurate and very efficient integration procedure. The particular algorithm obtained by choosing the 6-point Gauss-Legendre integration rule is considered and extensive comparisons are made with other outstanding integration algorithms
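
    A rough sketch of the feedback idea follows: two estimates of the integral over a trial subinterval yield an error estimate, which controls the length of the next subinterval, with the error budget partitioned among subintervals in proportion to their length. The 6-point Gauss-Legendre rule is used as in the paper, but the simple doubling/halving control law below is a generic stand-in for the paper's analytical relation.

        import numpy as np

        nodes, weights = np.polynomial.legendre.leggauss(6)

        def gauss(f, a, b):
            """6-point Gauss-Legendre estimate of the integral of f over [a, b]."""
            x = 0.5 * (b - a) * nodes + 0.5 * (a + b)
            return 0.5 * (b - a) * np.dot(weights, f(x))

        def adaptive(f, a, b, tol=1e-10):
            total, x, h = 0.0, a, b - a
            while x < b:
                h = min(h, b - x)
                coarse = gauss(f, x, x + h)
                fine = gauss(f, x, x + h / 2) + gauss(f, x + h / 2, x + h)
                if abs(fine - coarse) < tol * h / (b - a):  # accept, grow step
                    total, x, h = total + fine, x + h, 2 * h
                else:                                       # reject, shrink step
                    h /= 2
            return total

        print(adaptive(np.sin, 0.0, np.pi))  # ~2.0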

  9. European Vegetation Archive (EVA): an integrated database of European vegetation plots

    DEFF Research Database (Denmark)

    Chytrý, M; Hennekens, S M; Jiménez-Alfaro, B

    2015-01-01

    The European Vegetation Archive (EVA) is a centralized database of European vegetation plots developed by the IAVS Working Group European Vegetation Survey. It has been in development since 2012 and first made available for use in research projects in 2014. It stores copies of national and regional... vegetation-plot databases on a single software platform. Data storage in EVA does not affect on-going independent development of the contributing databases, which remain the property of the data contributors. EVA uses a prototype of the database management software TURBOVEG 3 developed for joint management... ...data source for large-scale analyses of European vegetation diversity both for fundamental research and nature conservation applications. Updated information on EVA is available online at http://euroveg.org/eva-database.

  10. A database and tool, IM Browser, for exploring and integrating emerging gene and protein interaction data for Drosophila

    Directory of Open Access Journals (Sweden)

    Parrish Jodi R

    2006-04-01

    Full Text Available Abstract Background Biological processes are mediated by networks of interacting genes and proteins. Efforts to map and understand these networks are resulting in the proliferation of interaction data derived from both experimental and computational techniques for a number of organisms. The volume of this data combined with the variety of specific forms it can take has created a need for comprehensive databases that include all of the available data sets, and for exploration tools to facilitate data integration and analysis. One powerful paradigm for the navigation and analysis of interaction data is an interaction graph or map that represents proteins or genes as nodes linked by interactions. Several programs have been developed for graphical representation and analysis of interaction data, yet there remains a need for alternative programs that can provide casual users with rapid easy access to many existing and emerging data sets. Description Here we describe a comprehensive database of Drosophila gene and protein interactions collected from a variety of sources, including low and high throughput screens, genetic interactions, and computational predictions. We also present a program for exploring multiple interaction data sets and for combining data from different sources. The program, referred to as the Interaction Map (IM Browser, is a web-based application for searching and visualizing interaction data stored in a relational database system. Use of the application requires no downloads and minimal user configuration or training, thereby enabling rapid initial access to interaction data. IM Browser was designed to readily accommodate and integrate new types of interaction data as it becomes available. Moreover, all information associated with interaction measurements or predictions and the genes or proteins involved are accessible to the user. This allows combined searches and analyses based on either common or technique-specific attributes

  11. Use of Graph Database for the Integration of Heterogeneous Biological Data.

    Science.gov (United States)

    Yoon, Byoung-Ha; Kim, Seon-Kyu; Kim, Seon-Young

    2017-03-01

    Understanding complex relationships among heterogeneous biological data is one of the fundamental goals in biology. In most cases, diverse biological data are stored in relational databases, such as MySQL and Oracle, which store data in multiple tables and then infer relationships by multiple-join statements. Recently, a new type of database, called the graph-based database, was developed to natively represent various kinds of complex relationships, and it is widely used among computer science communities and IT industries. Here, we demonstrate the feasibility of using a graph-based database for complex biological relationships by comparing the performance between MySQL and Neo4j, one of the most widely used graph databases. We collected various biological data (protein-protein interaction, drug-target, gene-disease, etc.) from several existing sources, removed duplicate and redundant data, and finally constructed a graph database containing 114,550 nodes and 82,674,321 relationships. When we tested the query execution performance of MySQL versus Neo4j, we found that Neo4j outperformed MySQL in all cases. While Neo4j exhibited a very fast response for various queries, MySQL exhibited latent or unfinished responses for complex queries with multiple-join statements. These results show that using graph-based databases, such as Neo4j, is an efficient way to store complex biological relationships. Moreover, querying a graph database in diverse ways has the potential to reveal novel relationships among heterogeneous biological data.
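
    To make the contrast concrete, here is the same two-hop question phrased both ways; the schema, labels and property names below are invented for illustration and are not those of the study.

        # A multiple-join SQL query against a relational layout ...
        sql = """
        SELECT DISTINCT d.name
        FROM drug_target dt
        JOIN gene_disease gd ON dt.target_gene = gd.gene
        JOIN disease d       ON gd.disease_id  = d.id
        WHERE dt.drug = 'imatinib';
        """

        # ... and the equivalent Neo4j Cypher pattern, stated as a graph traversal.
        cypher = """
        MATCH (:Drug {name: 'imatinib'})-[:TARGETS]->(:Gene)
              -[:ASSOCIATED_WITH]->(d:Disease)
        RETURN DISTINCT d.name;
        """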

  12. An inductive database system based on virtual mining views

    NARCIS (Netherlands)

    Blockeel, H.; Calders, T.G.K.; Fromont, É.; Goethals, B.; Prado, A.; Robardet, C.

    2012-01-01

    Inductive databases integrate database querying with database mining. In this article, we present an inductive database system that does not rely on a new data mining query language, but on plain SQL. We propose an intuitive and elegant framework based on virtual mining views, which are relational

  13. Neutron metrology file NMF-90. An integrated database for performing neutron spectrum adjustment calculations

    International Nuclear Information System (INIS)

    Kocherov, N.P.

    1996-01-01

    The Neutron Metrology File NMF-90 is an integrated database for performing neutron spectrum adjustment (unfolding) calculations. It contains 4 different adjustment codes, the dosimetry reaction cross-section library IRDF-90/NMF-G with covariance files, 6 input data sets for reactor benchmark neutron fields and a number of utility codes for processing and plotting the input and output data. The package consists of 9 PC HD diskettes and manuals for the codes. It is distributed by the Nuclear Data Section of the IAEA on request free of charge. About 10 MB of disk space is needed to install and run a typical reactor neutron dosimetry unfolding problem. (author). 8 refs
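
    For orientation only, the core of the adjustment problem can be caricatured as a least-squares solve: measured reaction rates r = R phi relate the dosimetry response matrix R to the group spectrum phi. The sketch below is not one of the package's four codes, and all numbers are invented.

        import numpy as np

        R = np.array([[0.80, 0.30, 0.05],   # illustrative 3-reaction, 3-group
                      [0.10, 0.70, 0.20],   # response matrix (made-up values)
                      [0.02, 0.20, 0.90]])
        r = np.array([1.9, 1.6, 1.5])       # measured reaction rates (made up)

        phi, *_ = np.linalg.lstsq(R, r, rcond=None)
        print(phi)                          # adjusted group fluxes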

  14. Characterisation of large catastrophic landslides using an integrated field, remote sensing and numerical modelling approach

    OpenAIRE

    Wolter, Andrea Elaine

    2014-01-01

    I apply a forensic, multidisciplinary approach that integrates engineering geology field investigations, engineering geomorphology mapping, long-range terrestrial photogrammetry, and a numerical modelling toolbox to two large rock slope failures to study their causes, initiation, kinematics, and dynamics. I demonstrate the significance of endogenic and exogenic processes, both separately and in concert, in contributing to landscape evolution and conditioning slopes for failure, and use geomor...

  15. A framework for organizing cancer-related variations from existing databases, publications and NGS data using a High-performance Integrated Virtual Environment (HIVE).

    Science.gov (United States)

    Wu, Tsung-Jung; Shamsaddini, Amirhossein; Pan, Yang; Smith, Krista; Crichton, Daniel J; Simonyan, Vahan; Mazumder, Raja

    2014-01-01

    Years of sequence feature curation by UniProtKB/Swiss-Prot, PIR-PSD, NCBI-CDD, RefSeq and other database biocurators have led to a rich repository of information on functional sites of genes and proteins. This information, along with variation-related annotation, can be used to scan human short sequence reads from next-generation sequencing (NGS) pipelines for the presence of non-synonymous single-nucleotide variations (nsSNVs) that affect functional sites. This and similar workflows are becoming more important because thousands of NGS data sets are being made available through projects such as The Cancer Genome Atlas (TCGA), and researchers want to evaluate their biomarkers in genomic data. BioMuta, an integrated sequence feature database, provides a framework for automated and manual curation and integration of cancer-related sequence features so that they can be used in NGS analysis pipelines. Sequence feature information in BioMuta is collected from the Catalogue of Somatic Mutations in Cancer (COSMIC), ClinVar, UniProtKB and through biocuration of information available from publications. Additionally, nsSNVs identified through automated analysis of NGS data from TCGA are also included in the database. Because of the petabytes of data and information present in NGS primary repositories, a platform, HIVE (High-performance Integrated Virtual Environment), for storing, analyzing, computing and curating NGS data and associated metadata has been developed. Using HIVE, 31 979 nsSNVs were identified in TCGA-derived NGS data from breast cancer patients. All variations identified through this process are stored in a Curated Short Read archive, and the nsSNVs from the tumor samples are included in BioMuta. Currently, BioMuta has 26 cancer types with 13 896 small-scale and 308 986 large-scale study-derived variations. Integration of variation data allows identification of novel or common nsSNVs that can be prioritized in validation studies. Database URL: BioMuta: http

  16. Numerical Asymptotic Solutions Of Differential Equations

    Science.gov (United States)

    Thurston, Gaylen A.

    1992-01-01

    Numerical algorithms derived and compared with classical analytical methods. In method, expansions replaced with integrals evaluated numerically. Resulting numerical solutions retain linear independence, main advantage of asymptotic solutions.

  17. Evaluation of wave power by integrating numerical models and measures at the Port of Civitavecchia

    International Nuclear Information System (INIS)

    Paladini de Mendoza, Francesco; Bonamano, Simone; Carli, Filippo Maria; Marcelli, Marco; Danelli, Andrea; Peviani, Maximo Aurelio; Burgio, Calogero

    2015-01-01

    An assessment of the available wave power at regional and local scale was carried out. Two hot spots of higher wave power level were identified and characterized along the coastline of the northern Latium Region, near the 'Torre Valdaliga' power plant and in proximity of Civitavecchia’s breakwater, where the presence of a harbour and an electric power plant allows wave energy exploitation. The evaluation process was implemented through measurements and through numerical model assessment and validation. The integration of wave gauge measurements with numerical simulations made it possible to estimate the wave power over the extended nearshore area. A downscaling process made it possible to proceed from the regional to the local scale, providing increased resolution thanks to a highly detailed bathymetry.

  18. Numerical Modeling of Pressurization of Cryogenic Propellant Tank for Integrated Vehicle Fluid System

    Science.gov (United States)

    Majumdar, Alok K.; LeClair, Andre C.; Hedayat, Ali

    2016-01-01

    This paper presents a numerical model of pressurization of a cryogenic propellant tank for the Integrated Vehicle Fluid (IVF) system using the Generalized Fluid System Simulation Program (GFSSP). The IVF propulsion system, being developed by United Launch Alliance, uses boiloff propellants to drive thrusters for the reaction control system as well as to run internal combustion engines to develop power and drive compressors to pressurize propellant tanks. NASA Marshall Space Flight Center (MSFC) has been running tests to verify the functioning of the IVF system using a flight tank. GFSSP, a finite volume based flow network analysis software developed at MSFC, has been used to develop an integrated model of the tank and the pressurization system. This paper presents an iterative algorithm for converging the interface boundary conditions between different component models of a large system model. The model results have been compared with test data.

  19. Mixing-to-eruption timescales: an integrated model combining numerical simulations and high-temperature experiments with natural melts

    Science.gov (United States)

    Montagna, Chiara; Perugini, Diego; De Campos, Christina; Longo, Antonella; Dingwell, Donald Bruce; Papale, Paolo

    2015-04-01

    Arrival of magma from depth into shallow reservoirs and the associated mixing processes have been documented as possible triggers of explosive eruptions. Quantifying the time from the beginning of mixing to eruption is of fundamental importance in volcanology in order to place constraints on the possible onset of a new eruption. Here we integrate numerical simulations and high-temperature experiments performed with natural melts in an attempt to identify the mixing-to-eruption timescales. We performed two-dimensional numerical simulations of the arrival of gas-rich magmas into shallow reservoirs. We solve the fluid dynamics for the two interacting magmas, evaluating the space-time evolution of the physical properties of the mixture. Convection and mingling develop quickly in the chamber and feeding conduit/dyke. Over time scales of hours, the magmas in the reservoir appear to have mingled throughout, and convective patterns become harder to identify. High-temperature magma mixing experiments have been performed using a centrifuge, with basaltic and phonolitic melts from Campi Flegrei (Italy) as initial end-members. Concentration Variance Decay (CVD), an inevitable consequence of magma mixing, is exponential with time. The rate of CVD is a powerful new geochronometer for the time from mixing to eruption/quenching. The mingling-to-eruption times of three explosive volcanic eruptions from Campi Flegrei (Italy) yield durations on the order of tens of minutes. These results are in perfect agreement with the numerical simulations, which suggest a maximum mixing time of a few hours to obtain a hybrid mixture. We show that the integration of numerical simulations and high-temperature experiments can provide unprecedented results about mixing processes in volcanic systems. The combined application of numerical simulations and the CVD geochronometer to the eruptive products of active volcanoes could be decisive for the preparation of hazard mitigation during volcanic unrest.
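
    Schematically (with notation assumed here rather than taken from the paper), if the concentration variance decays exponentially at a rate k calibrated in the experiments, the time elapsed between mixing and quenching follows by inverting the decay law:

        \sigma^2(t) = \sigma^2_0 \, e^{-k t}
        \qquad\Longrightarrow\qquad
        t = \frac{1}{k} \, \ln\frac{\sigma^2_0}{\sigma^2(t)}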

  20. Pancreatic Expression database: a generic model for the organization, integration and mining of complex cancer datasets

    Directory of Open Access Journals (Sweden)

    Lemoine Nicholas R

    2007-11-01

    Full Text Available Abstract Background Pancreatic cancer is the 5th leading cause of cancer death in both males and females. In recent years, a wealth of gene and protein expression studies have been published, broadening our understanding of pancreatic cancer biology. Due to the explosive growth in publicly available data from multiple different sources, it is becoming increasingly difficult for individual researchers to integrate these into their current research programmes. The Pancreatic Expression database, a generic web-based system, aims to close this gap by providing the research community with an open access tool, not only to mine currently available pancreatic cancer data sets but also to include their own data in the database. Description Currently, the database holds 32 datasets comprising 7636 gene expression measurements extracted from 20 different published gene or protein expression studies from various pancreatic cancer types, pancreatic precursor lesions (PanINs) and chronic pancreatitis. The pancreatic data are stored in a data management system based on the BioMart technology alongside the human genome gene and protein annotations, sequence, homologue, SNP and antibody data. Interrogation of the database can be achieved both through a web-based query interface and through web services, using combined criteria from pancreatic data (disease stages, regulation, differential expression, expression, platform technology, publication) and/or public data (antibodies, genomic region, gene-related accessions, ontology, expression patterns, multi-species comparisons, protein data, SNPs). Thus, our database enables connections between otherwise disparate data sources and allows relatively simple navigation between all data types and annotations. Conclusion The database structure and content provide a powerful and high-speed data-mining tool for cancer research. It can be used for target discovery i.e. of biomarkers from body fluids, identification and analysis

  1. InterAction Database (IADB)

    Science.gov (United States)

    The InterAction Database includes demographic and prescription information for more than 500,000 patients in the northern and middle Netherlands and has been integrated with other systems to enhance data collection and analysis.

  2. Database Systems - Present and Future

    Directory of Open Access Journals (Sweden)

    2009-01-01

    Full Text Available Database systems nowadays play an increasingly important role in the knowledge-based society, in which computers have penetrated all fields of activity and the Internet tends to develop worldwide. In the current informatics context, the development of applications with databases is the work of specialists. Using databases, accessing a database from various applications, and some related concepts have become accessible to all categories of IT users. This paper aims to summarize the curricular area regarding fundamental database systems issues, which are necessary in order to train specialists in economic informatics higher education. Database systems integrate and interfere with several informatics technologies and are therefore more difficult to understand and use. Thus, students should already know a set of minimal, mandatory concepts and their practical implementation: computer systems, programming techniques, programming languages, data structures. The article also presents the actual trends in the evolution of database systems, in the context of economic informatics.

  3. The bovine QTL viewer: a web accessible database of bovine Quantitative Trait Loci

    Directory of Open Access Journals (Sweden)

    Xavier Suresh R

    2006-06-01

    Full Text Available Abstract Background Many important agricultural traits such as weight gain, milk fat content and intramuscular fat (marbling in cattle are quantitative traits. Most of the information on these traits has not previously been integrated into a genomic context. Without such integration application of these data to agricultural enterprises will remain slow and inefficient. Our goal was to populate a genomic database with data mined from the bovine quantitative trait literature and to make these data available in a genomic context to researchers via a user friendly query interface. Description The QTL (Quantitative Trait Locus data and related information for bovine QTL are gathered from published work and from existing databases. An integrated database schema was designed and the database (MySQL populated with the gathered data. The bovine QTL Viewer was developed for the integration of QTL data available for cattle. The tool consists of an integrated database of bovine QTL and the QTL viewer to display QTL and their chromosomal position. Conclusion We present a web accessible, integrated database of bovine (dairy and beef cattle QTL for use by animal geneticists. The viewer and database are of general applicability to any livestock species for which there are public QTL data. The viewer can be accessed at http://bovineqtl.tamu.edu.

  4. Design and implementation of typical target image database system

    International Nuclear Information System (INIS)

    Qin Kai; Zhao Yingjun

    2010-01-01

    It is necessary to provide essential background data and thematic data in a timely manner for image processing and application. In fact, an application is a procedure that integrates and analyzes different kinds of data. In this paper, the authors describe an image database system which classifies, stores, manages and analyzes databases of different types, such as image databases, vector databases, spatial databases and spatial target characteristics databases, together with its design and structure. (authors)

  5. Numeric databases on the kinetics of transient species in solution

    International Nuclear Information System (INIS)

    Helman, W.P.; Hug, G.L.; Carmichael, Ian; Ross, A.B.

    1988-01-01

    A description is given of data compilations on the kinetics of transient species in solution. In particular information is available for the reactions of radicals in aqueous solution and for excited states such as singlet molecular oxygen and those of metal complexes in solution. Methods for compilation and use of the information in computer-readable form are also described. Emphasis is placed on making the database available for online searching. (author)

  6. The STRING database in 2017

    DEFF Research Database (Denmark)

    Szklarczyk, Damian; Morris, John H; Cook, Helen

    2017-01-01

    A system-wide understanding of cellular function requires knowledge of all functional interactions between the expressed proteins. The STRING database aims to collect and integrate this information, by consolidating known and predicted protein-protein association data for a large number... of organisms. The associations in STRING include direct (physical) interactions, as well as indirect (functional) interactions, as long as both are specific and biologically meaningful. Apart from collecting and reassessing available experimental data on protein-protein interactions, and importing known... pathways and protein complexes from curated databases, interaction predictions are derived from the following sources: (i) systematic co-expression analysis, (ii) detection of shared selective signals across genomes, (iii) automated text-mining of the scientific literature and (iv) computational transfer...

  7. The CUTLASS database facilities

    International Nuclear Information System (INIS)

    Jervis, P.; Rutter, P.

    1988-09-01

    The enhancement of the CUTLASS database management system to provide improved facilities for data handling is seen as a prerequisite to its effective use for future power station data processing and control applications. This particularly applies to the larger projects such as AGR data processing system refurbishments, and the data processing systems required for the new Coal Fired Reference Design stations. In anticipation of the need for improved data handling facilities in CUTLASS, the CEGB established a User Sub-Group in the early 1980's to define the database facilities required by users. Following the endorsement of the resulting specification and a detailed design study, the database facilities have been implemented as an integral part of the CUTLASS system. This paper provides an introduction to the range of CUTLASS Database facilities, and emphasises the role of Database as the central facility around which future Kit 1 and (particularly) Kit 6 CUTLASS based data processing and control systems will be designed and implemented. (author)

  8. 1.15 - Structural Chemogenomics Databases to Navigate Protein–Ligand Interaction Space

    NARCIS (Netherlands)

    Kanev, G.K.; Kooistra, A.J.; de Esch, I.J.P.; de Graaf, C.

    2017-01-01

    Structural chemogenomics databases allow the integration and exploration of heterogeneous genomic, structural, chemical, and pharmacological data in order to extract useful information that is applicable for the discovery of new protein targets and biologically active molecules. Integrated databases

  9. Databases for INDUS-1 and INDUS-2

    International Nuclear Information System (INIS)

    Merh, Bhavna N.; Fatnani, Pravin

    2003-01-01

    The databases for Indus are relational databases designed to store various categories of data related to the accelerator. The data archiving and retrieving system in Indus is based on a client/server model. A general-purpose commercial database is used to store parameters and equipment data for the whole machine. The database manages configuration, on-line and historical databases. On-line and off-line applications distributed in several systems can store and retrieve the data from the database over the network. This paper describes the structure of the databases for Indus-1 and Indus-2 and their integration within the software architecture. The data analysis, design, resulting data-schema and implementation issues are discussed. (author)

  10. A new relational database structure and online interface for the HITRAN database

    International Nuclear Information System (INIS)

    Hill, Christian; Gordon, Iouli E.; Rothman, Laurence S.; Tennyson, Jonathan

    2013-01-01

    A new format for the HITRAN database is proposed. By storing the line-transition data in a number of linked tables described by a relational database schema, it is possible to overcome the limitations of the existing format, which have become increasingly apparent over the last few years as new and more varied data are being used by radiative-transfer models. Although the database in the new format can be searched using the well-established Structured Query Language (SQL), a web service, HITRANonline, has been deployed to allow users to make most common queries of the database using a graphical user interface in a web page. The advantages of the relational form of the database to ensuring data integrity and consistency are explored, and the compatibility of the online interface with the emerging standards of the Virtual Atomic and Molecular Data Centre (VAMDC) project is discussed. In particular, the ability to access HITRAN data using a standard query language from other websites, command line tools and from within computer programs is described. -- Highlights: • A new, interactive version of the HITRAN database is presented. • The data is stored in a structured fashion in a relational database. • The new HITRANonline interface offers increased functionality and easier error correction
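
    To make the idea of linked tables concrete, here is a toy normalized layout for line-transition data; the table and column names are invented for illustration and do not reproduce the actual HITRANonline schema.

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
        CREATE TABLE molecule (
            id   INTEGER PRIMARY KEY,
            name TEXT NOT NULL);
        CREATE TABLE isotopologue (
            id          INTEGER PRIMARY KEY,
            molecule_id INTEGER NOT NULL REFERENCES molecule(id),
            abundance   REAL);
        CREATE TABLE transition (
            id              INTEGER PRIMARY KEY,
            isotopologue_id INTEGER NOT NULL REFERENCES isotopologue(id),
            nu              REAL,  -- wavenumber
            intensity       REAL,  -- line intensity
            gamma_air       REAL); -- air-broadened half-width
        CREATE INDEX idx_transition_nu ON transition(nu);
        """)

        # The kind of query a web interface could issue on the user's behalf:
        q = """SELECT t.nu, t.intensity FROM transition t
               JOIN isotopologue i ON t.isotopologue_id = i.id
               JOIN molecule m     ON i.molecule_id = m.id
               WHERE m.name = 'H2O' AND t.nu BETWEEN 1500.0 AND 1600.0"""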

  11. Numerical integration methods and layout improvements in the context of dynamic RNA visualization.

    Science.gov (United States)

    Shabash, Boris; Wiese, Kay C

    2017-05-30

    RNA visualization software tools have traditionally presented a static visualization of RNA molecules with limited ability for users to interact with the resulting image once it is complete. Only a few tools allowed for dynamic structures. One such tool is jViz.RNA. Currently, jViz.RNA employs a unique method for the creation of the RNA molecule layout by mapping the RNA nucleotides into vertexes in a graph, which we call the detailed graph, and then utilizes a Newtonian mechanics inspired system of forces to calculate a layout for the RNA molecule. The work presented here focuses on improvements to jViz.RNA that allow the drawing of RNA secondary structures according to common drawing conventions, as well as dramatic run-time performance improvements. This is done first by presenting an alternative method for mapping the RNA molecule into a graph, which we call the compressed graph, and then employing advanced numerical integration methods for the compressed graph representation. Comparing the compressed graph and detailed graph implementations, we find that the compressed graph produces results more consistent with RNA drawing conventions. However, we also find that employing the compressed graph method requires a more sophisticated initial layout to produce visualizations that would require minimal user interference. Comparing the two numerical integration methods demonstrates the higher stability of the Backward Euler method, and its resulting ability to handle much larger time steps, a high priority feature for any software which entails user interaction. The work in this manuscript presents the preferred use of compressed graphs to detailed ones, as well as the advantages of employing the Backward Euler method over the Forward Euler method. These improvements produce more stable as well as visually aesthetic representations of the RNA secondary structures. The results presented demonstrate that both the compressed graph representation, as well as the Backward
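
    The stability gap between the two integrators can be seen on the scalar stiff test problem y' = -lam*y, a toy stand-in (not jViz.RNA's force model): forward Euler diverges once lam*dt exceeds 2, while backward Euler decays for any step size, which is why the implicit method tolerates the large time steps that interactive layout demands.

        lam, dt, steps = 50.0, 0.1, 20    # lam*dt = 5: well past the explicit limit
        yf = yb = 1.0
        for _ in range(steps):
            yf = yf + dt * (-lam * yf)    # forward (explicit) Euler: factor 1 - lam*dt = -4
            yb = yb / (1.0 + lam * dt)    # backward (implicit) Euler, solved exactly
        print(yf, yb)                     # yf has blown up; yb has decayed smoothly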

  12. Numerical analysis

    CERN Document Server

    Rao, G Shanker

    2006-01-01

    About the Book: This book provides an introduction to Numerical Analysis for the students of Mathematics and Engineering. The book is designed in accordance with the common core syllabus of Numerical Analysis of Universities of Andhra Pradesh and also the syllabus prescribed in most of the Indian Universities. Salient features: Approximate and Numerical Solutions of Algebraic and Transcendental Equation Interpolation of Functions Numerical Differentiation and Integration and Numerical Solution of Ordinary Differential Equations The last three chapters deal with Curve Fitting, Eigen Values and Eigen Vectors of a Matrix and Regression Analysis. Each chapter is supplemented with a number of worked-out examples as well as number of problems to be solved by the students. This would help in the better understanding of the subject. Contents: Errors Solution of Algebraic and Transcendental Equations Finite Differences Interpolation with Equal Intervals Interpolation with Unequal Int...

  13. Introduction to numerical analysis

    CERN Document Server

    Hildebrand, F B

    1987-01-01

    Well-known, respected introduction, updated to integrate concepts and procedures associated with computers. Computation, approximation, interpolation, numerical differentiation and integration, smoothing of data, other topics in lucid presentation. Includes 150 additional problems in this edition. Bibliography.

  14. A new relational database structure and online interface for the HITRAN database

    Science.gov (United States)

    Hill, Christian; Gordon, Iouli E.; Rothman, Laurence S.; Tennyson, Jonathan

    2013-11-01

    A new format for the HITRAN database is proposed. By storing the line-transition data in a number of linked tables described by a relational database schema, it is possible to overcome the limitations of the existing format, which have become increasingly apparent over the last few years as new and more varied data are being used by radiative-transfer models. Although the database in the new format can be searched using the well-established Structured Query Language (SQL), a web service, HITRANonline, has been deployed to allow users to make most common queries of the database using a graphical user interface in a web page. The advantages of the relational form of the database to ensuring data integrity and consistency are explored, and the compatibility of the online interface with the emerging standards of the Virtual Atomic and Molecular Data Centre (VAMDC) project is discussed. In particular, the ability to access HITRAN data using a standard query language from other websites, command line tools and from within computer programs is described.

  15. Klaim-DB: A Modeling Language for Distributed Database Applications

    DEFF Research Database (Denmark)

    Wu, Xi; Li, Ximeng; Lluch Lafuente, Alberto

    2015-01-01

    We present the modelling language Klaim-DB for distributed database applications. Klaim-DB borrows the distributed nets of the coordination language Klaim but essentially re-incarnates the tuple spaces of Klaim as databases, and provides high-level language abstractions for the access... and manipulation of structured data, with integrity and atomicity considerations. We present the formal semantics of Klaim-DB and illustrate the use of the language in a scenario where the sales from different branches of a chain of department stores are aggregated from their local databases. It can be seen... that raising the abstraction level and encapsulating integrity checks (concerning the schema of tables, etc.) in the language primitives for database operations benefit the modelling task considerably.

  16. Numerical path integral solution to strong Coulomb correlation in one dimensional Hooke's atom

    Science.gov (United States)

    Ruokosenmäki, Ilkka; Gholizade, Hossein; Kylänpää, Ilkka; Rantala, Tapio T.

    2017-01-01

    We present a new approach based on real time domain Feynman path integrals (RTPI) for electronic structure calculations and quantum dynamics, which includes correlations between particles exactly but within the numerical accuracy. We demonstrate that incoherent propagation by keeping the wave function real is a novel method for finding and simulation of the ground state, similar to Diffusion Monte Carlo (DMC) method, but introducing new useful tools lacking in DMC. We use 1D Hooke's atom, a two-electron system with very strong correlation, as our test case, which we solve with incoherent RTPI (iRTPI) and compare against DMC. This system provides an excellent test case due to exact solutions for some confinements and because in 1D the Coulomb singularity is stronger than in two or three dimensional space. The use of Monte Carlo grid is shown to be efficient for which we determine useful numerical parameters. Furthermore, we discuss another novel approach achieved by combining the strengths of iRTPI and DMC. We also show usefulness of the perturbation theory for analytical approximates in case of strong confinements.

  17. An integrated database on ticks and tick-borne zoonoses in the tropics and subtropics with special reference to developing and emerging countries.

    Science.gov (United States)

    Vesco, Umberto; Knap, Nataša; Labruna, Marcelo B; Avšič-Županc, Tatjana; Estrada-Peña, Agustín; Guglielmone, Alberto A; Bechara, Gervasio H; Gueye, Arona; Lakos, Andras; Grindatto, Anna; Conte, Valeria; De Meneghi, Daniele

    2011-05-01

    Tick-borne zoonoses (TBZ) are emerging diseases worldwide. A large amount of information (e.g. case reports, results of epidemiological surveillance, etc.) is dispersed through various reference sources (ISI and non-ISI journals, conference proceedings, technical reports, etc.). An integrated database, derived from the ICTTD-3 project (http://www.icttd.nl), was developed in order to gather TBZ records in the (sub-)tropics, collected both by the authors and by collaborators worldwide. A dedicated website (http://www.tickbornezoonoses.org) was created to promote collaboration and circulate information. Data collected are made freely available to researchers for analysis by spatial methods, integrating mapped ecological factors for predicting TBZ risk. The authors present the assembly process of the TBZ database: the compilation of an updated list of TBZ relevant to the (sub-)tropics, the database design and its structure, the method of bibliographic search, and the assessment of the spatial precision of geo-referenced records. At the time of writing, 725 records extracted from 337 publications related to 59 countries in the (sub-)tropics have been entered in the database. TBZ distribution maps were also produced. Imported cases have also been accounted for. The most important datasets with geo-referenced records were those on Spotted Fever Group rickettsiosis in Latin America and Crimean-Congo Haemorrhagic Fever in Africa. The authors stress the need for international collaboration in data collection to update and improve the database. Supervision of the data entered remains necessary. Means to foster collaboration are discussed. The paper is also intended to describe the challenges encountered in assembling spatial data from various sources and to help develop similar data collections.

  18. Numerical implementation of the loop-tree duality method

    Energy Technology Data Exchange (ETDEWEB)

    Buchta, Sebastian; Rodrigo, German [Universitat de Valencia-Consejo Superior de Investigaciones Cientificas, Parc Cientific, Instituto de Fisica Corpuscular, Valencia (Spain); Chachamis, Grigorios [Universidad Autonoma de Madrid, Instituto de Fisica Teorica UAM/CSIC, Madrid (Spain); Draggiotis, Petros [Institute of Nuclear and Particle Physics, NCSR ' ' Demokritos' ' , Agia Paraskevi (Greece)

    2017-05-15

    We present a first numerical implementation of the loop-tree duality (LTD) method for the direct numerical computation of multi-leg one-loop Feynman integrals. We discuss in detail the singular structure of the dual integrands and define a suitable contour deformation in the loop three-momentum space to carry out the numerical integration. Then we apply the LTD method to the computation of ultraviolet and infrared finite integrals, and we present explicit results for scalar and tensor integrals with up to eight external legs (octagons). The LTD method features an excellent performance independently of the number of external legs. (orig.)

  19. The UCSC Genome Browser Database: 2008 update

    DEFF Research Database (Denmark)

    Karolchik, D; Kuhn, R M; Baertsch, R

    2007-01-01

    The University of California, Santa Cruz, Genome Browser Database (GBD) provides integrated sequence and annotation data for a large collection of vertebrate and model organism genomes. Seventeen new assemblies have been added to the database in the past year, for a total coverage of 19 vertebrat...

  20. Mining Views : database views for data mining

    NARCIS (Netherlands)

    Blockeel, H.; Calders, T.; Fromont, É.; Goethals, B.; Prado, A.; Nijssen, S.; De Raedt, L.

    2007-01-01

    We propose a relational database model towards the integration of data mining into relational database systems, based on the so called virtual mining views. We show that several types of patterns and models over the data, such as itemsets, association rules, decision trees and clusterings, can be

  1. Numerical simulation of laser resonators

    International Nuclear Information System (INIS)

    Yoo, J. G.; Jeong, Y. U.; Lee, B. C.; Rhee, Y. J.; Cho, S. O.

    2004-01-01

    We developed numerical simulation packages for laser resonators on the basis of a pair of integral equations. Two numerical schemes, a matrix formalism and an iterative method, were programmed for finding numerical solutions to the pair of integral equations. The iterative method was tried by Fox and Li, but it was not applicable for high Fresnel numbers since the numerical errors involved propagate and accumulate uncontrollably. In this paper, we implement the matrix method to extend the computational limit further. A great number of case studies are carried out with various configurations of stable and unstable resonators to compute diffraction losses, phase shifts, intensity distributions and phases of the radiation fields on mirrors. Our results presented in this paper show not only a good agreement with the results previously obtained by Fox and Li, but also the legitimacy of our numerical procedures for high Fresnel numbers.

  2. Numerical solution of Boltzmann's equation

    International Nuclear Information System (INIS)

    Sod, G.A.

    1976-04-01

    The numerical solution of Boltzmann's equation is considered for a gas model consisting of rigid spheres by means of Hilbert's expansion. If only the first two terms of the expansion are retained, Boltzmann's equation reduces to the Boltzmann-Hilbert integral equation. Successive terms in the Hilbert expansion are obtained by solving the same integral equation with a different source term. The Boltzmann-Hilbert integral equation is solved by a new very fast numerical method. The success of the method rests upon the simultaneous use of four judiciously chosen expansions; Hilbert's expansion for the distribution function, another expansion of the distribution function in terms of Hermite polynomials, the expansion of the kernel in terms of the eigenvalues and eigenfunctions of the Hilbert operator, and an expansion involved in solving a system of linear equations through a singular value decomposition. The numerical method is applied to the study of the shock structure in one space dimension. Numerical results are presented for Mach numbers of 1.1 and 1.6. 94 refs, 7 tables, 1 fig

  3. Integral and differential methods for the numerical solution of 2-D field problems in high energy physics magnets and electrical machines

    International Nuclear Information System (INIS)

    Hannalla, A.Y.; Simkin, J.; Trowbridge, C.W.

    1979-10-01

    Numerical calculations of electromagnetic fields have been performed by solving integral or differential equations. Integral methods are ideally suited to open boundary problems and on the other hand the geometric complexity of electrical machines makes differential methods more attractive. In this paper both integral and differential equation methods are reviewed, and the limitations of the methods are highlighted, in an attempt to show how to select the best method for a particular problem. (author)

  4. The Sequenced Angiosperm Genomes and Genome Databases.

    Science.gov (United States)

    Chen, Fei; Dong, Wei; Zhang, Jiawei; Guo, Xinyue; Chen, Junhao; Wang, Zhengjia; Lin, Zhenguo; Tang, Haibao; Zhang, Liangsheng

    2018-01-01

    Angiosperms, the flowering plants, provide resources essential for human life, such as food, energy, oxygen, and materials. They have also shaped the evolution of humans, of animals, and of the planet itself. Despite the numerous advances in genome reports and sequencing technologies, no review has covered all the released angiosperm genomes and the genome databases available for data sharing. Based on the rapid advances and innovations in database reconstruction in the last few years, here we provide a comprehensive review of three major types of angiosperm genome databases: databases for a single species, for a specific angiosperm clade, and for multiple angiosperm species. The scope, tools, and data of each type of database and their features are concisely discussed. Genome databases for a single species or a clade of species are especially popular with specific groups of researchers, while a timely updated comprehensive database is more powerful for addressing major scientific mysteries at the genome scale. Considering the low coverage of flowering plants in any available database, we propose the construction of a comprehensive database to facilitate large-scale comparative studies of angiosperm genomes and to promote collaborative studies of important questions in plant biology.

  5. Numerical Modeling of an Integrated Vehicle Fluids System Loop for Pressurizing a Cryogenic Tank

    Science.gov (United States)

    LeClair, A. C.; Hedayat, A.; Majumdar, A. K.

    2017-01-01

    This paper presents a numerical model of the pressurization loop of the Integrated Vehicle Fluids (IVF) system using the Generalized Fluid System Simulation Program (GFSSP). The IVF propulsion system, being developed by United Launch Alliance to reduce system weight and enhance reliability, uses boiloff propellants to drive thrusters for the reaction control system as well as to run internal combustion engines to develop power and drive compressors to pressurize propellant tanks. NASA Marshall Space Flight Center (MSFC) conducted tests to verify the functioning of the IVF system using a flight-like tank. GFSSP, finite-volume-based flow network analysis software developed at MSFC, has been used to support the test program. This paper presents simulations of three different test series, a comparison of numerical predictions with test data, and a novel method of presenting the data in dimensionless form. The paper also presents a methodology for implementing a compressor map in a system-level code.

  6. Numerical Integration of a Class of Singularly Perturbed Delay Differential Equations with Small Shift

    Directory of Open Access Journals (Sweden)

    Gemechis File

    2012-01-01

    We present a numerical integration method to solve a class of singularly perturbed delay differential equations with small shift. First, the second-order singularly perturbed delay differential equation is replaced by an asymptotically equivalent first-order delay differential equation. Then, Simpson's rule and linear interpolation are employed to obtain a three-term recurrence relation, which is solved easily by a discrete invariant imbedding algorithm. The method is demonstrated on several linear and nonlinear model examples for various values of the delay and perturbation parameters.
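
    For reference, the quadrature ingredient used here, the composite Simpson's rule, takes only a few lines (a generic integrand stands in; the paper's three-term recurrence and imbedding algorithm are not reproduced):

```python
import numpy as np

def simpson(f, a, b, n):
    """Composite Simpson's rule on [a, b] with n subintervals (n even)."""
    if n % 2:
        raise ValueError("n must be even")
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    # Simpson weights: 1, 4, 2, 4, ..., 2, 4, 1
    return h / 3 * (y[0] + y[-1] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum())

# Fourth-order accuracy on a smooth integrand: error shrinks like h**4.
print(simpson(np.sin, 0.0, np.pi, 16))  # ~2.0
```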

  7. Mining Views : database views for data mining

    NARCIS (Netherlands)

    Blockeel, H.; Calders, T.; Fromont, É.; Goethals, B.; Prado, A.

    2008-01-01

    We present a system towards the integration of data mining into relational databases. To this end, a relational database model is proposed, based on the so-called virtual mining views. We show that several types of patterns and models over the data, such as itemsets, association rules and decision trees, can be represented and queried through these views.

  8. A Sustainable Spacecraft Component Database Solution, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Numerous spacecraft component databases have been developed to support NASA, DoD, and contractor design centers and design tools. Despite the clear utility of...

  9. A dedicated database system for handling multi-level data in systems biology.

    Science.gov (United States)

    Pornputtapong, Natapol; Wanichthanarak, Kwanjeera; Nilsson, Avlant; Nookaew, Intawat; Nielsen, Jens

    2014-01-01

    Advances in high-throughput technologies have enabled the extensive generation of multi-level omics data. These data are crucial for systems biology research, but they are complex, heterogeneous, highly dynamic, incomplete and distributed among public databases. This leads to difficulties in data accessibility and often results in errors when data are merged and integrated from varied resources. Therefore, the integration and management of systems biological data remain very challenging. To overcome this, we designed and developed a dedicated database system that addresses the vital issues in data management and thereby facilitates data integration, modeling and analysis in systems biology within a single database. In addition, a yeast data repository was implemented as an integrated database environment operated by this database system. Two applications were implemented to demonstrate the extensibility and utilization of the system. Both illustrate how the user can access the database via the web query function and implemented scripts. These scripts are specific to two sample cases: 1) detecting the pheromone pathway in protein interaction networks; and 2) finding metabolic reactions regulated by Snf1 kinase. In this study we present the design of a database system which offers an extensible environment to efficiently capture the majority of biological entities and relations encountered in systems biology. Critical functions and control processes were designed and implemented to ensure consistent, efficient, secure and reliable transactions. The two sample cases on the integrated yeast data clearly demonstrate the value of a single database environment for systems biology research.

  10. Integrated data management for RODOS

    International Nuclear Information System (INIS)

    Abramowicz, K.; Koschel, A.; Rafat, M.; Wendelgass, R.

    1995-12-01

    The report presents the results of a feasibility study on integrated data organisation and management in RODOS, the real-time on-line decision support system for off-site nuclear emergency management. The conceptual design of the functional components of the integrated data management is described, taking account of the software components and the operating environment of the RODOS system. In particular, the schema architecture of a database integration manager for accessing and updating a multi-database system is discussed in detail under a variety of database management aspects. Furthermore, the structural design of both a simple knowledge database and a real-time database is described. Finally, some short comments on the benefits and disadvantages of the proposed concept of data integration in RODOS are given. (orig.) [de]

  11. Materials data through a bibliographic database INIS

    International Nuclear Information System (INIS)

    Yamamoto, Akira; Itabashi, Keizo; Nakajima, Hidemitsu

    1992-01-01

    INIS (International Nuclear Information System) is a bibliographic database produced in collaboration between the IAEA and its member countries, holding 1,500,000 records as of 1991. Although a bibliographic database does not provide numerical data itself, specific materials information can be obtained through retrieval specifying materials, properties, conditions, measuring methods, etc. In addition, 'data flagging' facilitates finding records that contain data. INIS also has a clearing-house function that provides original documents of limited distribution: hard copies of technical reports and other non-conventional literature are available. An efficient use of the INIS database for materials data is presented using an on-line terminal. (author)

  12. A Lie-admissible method of integration of Fokker-Planck equations with non-linear coefficients (exact and numerical solutions)

    International Nuclear Information System (INIS)

    Fronteau, J.; Combis, P.

    1984-08-01

    A Lagrangian method is introduced for the integration of non-linear Fokker-Planck equations. Examples of exact solutions obtained in this way are given, and also the explicit scheme used for the computation of numerical solutions. The method is, in addition, shown to be of a Lie-admissible type

  13. Respiratory cancer database: An open access database of respiratory cancer gene and miRNA.

    Science.gov (United States)

    Choubey, Jyotsna; Choudhari, Jyoti Kant; Patel, Ashish; Verma, Mukesh Kumar

    2017-01-01

    The Respiratory cancer database (RespCanDB) is a genomic and proteomic database of cancers of the respiratory organs. It also includes information on medicinal plants used for the treatment of various respiratory cancers, with the structures of their active constituents, as well as pharmacological and chemical information on drugs associated with various respiratory cancers. Data in RespCanDB have been manually collected from published research articles and from other databases. Data have been integrated using MySQL, a relational database management system, which manages all data in the back-end and provides commands to retrieve and store the data in the database. The web interface of the database has been built in ASP. RespCanDB is expected to contribute to the scientific community's understanding of respiratory cancer biology, as well as to the development of new ways of diagnosing and treating respiratory cancer. Currently, the database contains oncogenomic information on lung cancer, laryngeal cancer, and nasopharyngeal cancer. Data for other cancers, such as oral and tracheal cancers, will be added in the near future. The URL of RespCanDB is http://ridb.subdic-bioinformatics-nitrr.in/.

  14. Databases in Cloud - Solutions for Developing Renewable Energy Informatics Systems

    Directory of Open Access Journals (Sweden)

    Adela BARA

    2017-08-01

    The paper presents the data model of a decision support prototype developed for generation monitoring, forecasting and advanced analysis in the renewable energy field. The solutions considered for developing this system include databases in the cloud, XML integration, spatial data representation and multidimensional modeling. This material shows the advantages of cloud databases and spatial data representation and their implementation in Oracle Database 12c. It also contains a data integration part and a multidimensional analysis. The presentation of output data is made using dashboards.

  15. The Mars Climate Database (MCD version 5.2)

    Science.gov (United States)

    Millour, E.; Forget, F.; Spiga, A.; Navarro, T.; Madeleine, J.-B.; Montabone, L.; Pottier, A.; Lefevre, F.; Montmessin, F.; Chaufray, J.-Y.; Lopez-Valverde, M. A.; Gonzalez-Galindo, F.; Lewis, S. R.; Read, P. L.; Huot, J.-P.; Desjean, M.-C.; MCD/GCM development Team

    2015-10-01

    The Mars Climate Database (MCD) is a database of meteorological fields derived from General Circulation Model (GCM) numerical simulations of the Martian atmosphere and validated using available observational data. The MCD includes complementary post-processing schemes such as high spatial resolution interpolation of environmental data and means of reconstructing the variability thereof. We have just completed (March 2015) the generation of a new version of the MCD, MCD version 5.2.

  16. Numerical analysis

    CERN Document Server

    Brezinski, C

    2012-01-01

    Numerical analysis has witnessed many significant developments in the 20th century. This book brings together 16 papers dealing with historical developments, survey papers and papers on recent trends in selected areas of numerical analysis, such as: approximation and interpolation, solution of linear systems and eigenvalue problems, iterative methods, quadrature rules, and solution of ordinary differential, partial differential and integral equations. The papers are reprinted from the 7-volume 'Numerical Analysis 2000' project of the Journal of Computational and Applied Mathematics.

  17. A Reaction Database for Small Molecule Pharmaceutical Processes Integrated with Process Information

    Directory of Open Access Journals (Sweden)

    Emmanouil Papadakis

    2017-10-01

    This article describes the development of a reaction database aimed at collecting data for multiphase reactions involved in small-molecule pharmaceutical processes, with a search engine to retrieve the data needed in investigations of reaction-separation schemes, such as the role of organic solvents in improving reaction performance. The focus of this reaction database is to provide a data-rich environment with process information available to assist the early-stage synthesis of pharmaceutical products. The database is structured in terms of: reaction classification and reaction types; compounds participating in the reaction; organic solvents used and their function; information on single-step and multistep reactions; target products; reaction conditions; and reaction data. Information on reactor scale-up, on the separation, and other relevant information for each reaction, together with a reference, are also available in the database. Additionally, the information retrieved from the database can be evaluated in terms of sustainability using well-known 'green' metrics published in the scientific literature. The application of the database is illustrated through the synthesis of ibuprofen, for which data on different reaction pathways have been retrieved from the database and compared using 'green' chemistry metrics.

  18. Trajectory errors of different numerical integration schemes diagnosed with the MPTRAC advection module driven by ECMWF operational analyses

    Science.gov (United States)

    Rößler, Thomas; Stein, Olaf; Heng, Yi; Baumeister, Paul; Hoffmann, Lars

    2018-02-01

    The accuracy of trajectory calculations performed by Lagrangian particle dispersion models (LPDMs) depends on various factors. The optimization of numerical integration schemes used to solve the trajectory equation helps to maximize the computational efficiency of large-scale LPDM simulations. We analyzed global truncation errors of six explicit integration schemes of the Runge-Kutta family, which we implemented in the Massive-Parallel Trajectory Calculations (MPTRAC) advection module. The simulations were driven by wind fields from operational analyses and forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF) at T1279L137 spatial resolution and 3 h temporal sampling. We defined separate test cases for 15 distinct regions of the atmosphere, covering the polar regions, the midlatitudes, and the tropics in the free troposphere, in the upper troposphere and lower stratosphere (UT/LS) region, and in the middle stratosphere. In total, more than 5000 different transport simulations were performed, covering the months of January, April, July, and October for the years 2014 and 2015. We quantified the accuracy of the trajectories by calculating transport deviations with respect to reference simulations using a fourth-order Runge-Kutta integration scheme with a sufficiently fine time step. Transport deviations were assessed with respect to error limits based on turbulent diffusion. Independent of the numerical scheme, the global truncation errors vary significantly between the different regions. Horizontal transport deviations in the stratosphere are typically an order of magnitude smaller compared with the free troposphere. We found that the truncation errors of the six numerical schemes fall into three distinct groups, which mostly depend on the numerical order of the scheme. Schemes of the same order differ little in accuracy, but some methods need less computational time, which gives them an advantage in efficiency. The selection of the integration scheme is therefore a trade-off between accuracy and computational cost.
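
    The finding that truncation error groups by scheme order can be illustrated with a toy comparison of explicit Runge-Kutta steps on a solid-body-rotation wind field (an illustrative stand-in, not MPTRAC code):

```python
import numpy as np

def rk_step(f, t, y, h, order):
    """One explicit Runge-Kutta step: order 1 (Euler), 2 (midpoint) or 4."""
    if order == 1:
        return y + h * f(t, y)
    if order == 2:
        return y + h * f(t + h / 2, y + h / 2 * f(t, y))
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Solid-body rotation as a stand-in wind field; the exact trajectory
# starting at (1, 0) is (cos t, sin t).
f = lambda t, y: np.array([-y[1], y[0]])
y_exact = np.array([np.cos(1.0), np.sin(1.0)])

n = 100  # fixed number of steps to t = 1
for order in (1, 2, 4):
    y = np.array([1.0, 0.0])
    for i in range(n):
        y = rk_step(f, i / n, y, 1.0 / n, order)
    # Global truncation error drops sharply with the order of the scheme.
    print(order, np.linalg.norm(y - y_exact))
```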

  19. Composite use of numerical groundwater flow modeling and geoinformatics techniques for monitoring Indus Basin aquifer, Pakistan.

    Science.gov (United States)

    Ahmad, Zulfiqar; Ashraf, Arshad; Fryar, Alan; Akhter, Gulraiz

    2011-02-01

    The integration of the Geographic Information System (GIS) with groundwater modeling and satellite remote sensing capabilities has provided an efficient way of analyzing and monitoring groundwater behavior and its associated land conditions. A three-dimensional finite element model (Feflow) has been used for regional groundwater flow modeling of the Upper Chaj Doab in the Indus Basin, Pakistan. The approach of using GIS techniques to partially fulfill the data requirements and define the parameters of existing hydrologic models was adopted. The numerical groundwater flow model is developed to configure the groundwater equipotential surface and hydraulic head gradient and to estimate the groundwater budget of the aquifer. GIS is used for spatial database development and for integration with remote sensing and numerical groundwater flow modeling capabilities. The thematic layers of soils, land use, hydrology, infrastructure, and climate were developed using GIS. The ArcView GIS software is used as an additional tool to develop supportive data for the numerical groundwater flow modeling and for the integration and presentation of image processing and modeling results. The groundwater flow model was calibrated to simulate future changes in piezometric heads over the period 2006 to 2020. Different scenarios were developed to study the impact of extreme climatic conditions (drought/flood) and variable groundwater abstraction on the regional groundwater system. The model results indicated a significant water-table response to external factors. The developed model provides an effective tool for evaluating management options for monitoring future groundwater development in the study area.

  20. Professional iOS database application programming

    CERN Document Server

    Alessi, Patrick

    2013-01-01

    Updated and revised coverage that includes the latest versions of iOS and Xcode. Whether you're a novice or an experienced developer, you will want to dive into this updated resource on database application programming for the iPhone and iPad. Packed with more than 50 percent new and revised material - including completely rebuilt code, screenshots, and full coverage of new features pertaining to database programming and enterprise integration in iOS 6 - this must-have book continues the precedent set by the previous edition by helping thousands of developers master database application programming.

  1. Concurrency control in distributed database systems

    CERN Document Server

    Cellary, W; Gelenbe, E

    1989-01-01

    Distributed Database Systems (DDBS) may be defined as integrated database systems composed of autonomous local databases, geographically distributed and interconnected by a computer network. The purpose of this monograph is to present DDBS concurrency control algorithms and their related performance issues. The most recent results have been taken into consideration. A detailed analysis and selection of these results has been made so as to include those which will promote applications and progress in the field. The application of the methods and algorithms presented is not limited to DDBSs.

  2. The ESID Online Database network.

    Science.gov (United States)

    Guzman, D; Veit, D; Knerr, V; Kindle, G; Gathmann, B; Eades-Perner, A M; Grimbacher, B

    2007-03-01

    Primary immunodeficiencies (PIDs) belong to the group of rare diseases. The European Society for Immunodeficiencies (ESID) is establishing an innovative European patient and research database network for the continuous long-term documentation of patients, in order to improve the diagnosis, classification, prognosis and therapy of PIDs. The ESID Online Database is a web-based system aimed at data storage, data entry, reporting and the import of pre-existing data sources in an enterprise business-to-business integration (B2B). The online database is based on the Java 2 Platform, Enterprise Edition (J2EE), with high-standard security features that comply with data protection laws and the demands of a modern research platform. The ESID Online Database is accessible via the official website (http://www.esid.org/). Supplementary data are available at Bioinformatics online.

  3. Transaction management with integrity checking

    DEFF Research Database (Denmark)

    Martinenghi, Davide; Christiansen, Henning

    2005-01-01

    Database integrity constraints, understood as logical conditions that must hold for any database state, are not fully supported by current database technology. It is typically up to the database designer and application programmer to enforce integrity via triggers or tests at the application level. In concurrent database systems, besides the traditional correctness criterion, the execution schedule must ensure that the different transactions can overlap in time without destroying the consistency requirements tested by other, concurrent transactions.

  4. Numerical integration of electromagnetic cascade equations, discussion of results for air, copper, iron, and lead

    International Nuclear Information System (INIS)

    Adler, A.; Fuchs, B.; Thielheim, K.O.

    1977-01-01

    The longitudinal development of electromagnetic cascades in air, copper, iron, and lead is studied on the basis of results derived recently by numerical integration of the cascade equations, applying rather accurate expressions for the cross-sections involved in the interactions of high-energy electrons, positrons, and photons in electromagnetic cascades. Special attention is given to the scaling properties of transition curves. It is demonstrated that good scaling may be achieved by means of the depth of maximum cascade development. (author)

  5. Follicle Online: an integrated database of follicle assembly, development and ovulation.

    Science.gov (United States)

    Hua, Juan; Xu, Bo; Yang, Yifan; Ban, Rongjun; Iqbal, Furhan; Cooke, Howard J; Zhang, Yuanwei; Shi, Qinghua

    2015-01-01

    Folliculogenesis is an important part of ovarian function, as it provides the oocytes for female reproductive life. Characterizing the genes/proteins involved in folliculogenesis is fundamental for understanding the mechanisms associated with this biological function and for treating the diseases associated with folliculogenesis. A large number of genes/proteins associated with folliculogenesis have been identified in different species. However, no dedicated public resource is currently available for folliculogenesis-related genes/proteins that are validated by experiments. Here we report a database, 'Follicle Online', that provides an experimentally validated gene/protein map of folliculogenesis in a number of species. Follicle Online is a web-based database system for storing and retrieving folliculogenesis-related experimental data. It provides detailed information for 580 genes/proteins (from 23 model organisms, including Homo sapiens, Mus musculus, Rattus norvegicus, Mesocricetus auratus, Bos taurus, Drosophila and Xenopus laevis) that have been reported to be involved in folliculogenesis, POF (premature ovarian failure) and PCOS (polycystic ovary syndrome). The literature was manually curated from more than 43,000 published articles (up to 1 March 2014). The Follicle Online database is implemented in PHP + MySQL + JavaScript, and this user-friendly web application provides access to the stored data. In summary, we have developed a centralized database that provides users with comprehensive information about the genes/proteins involved in folliculogenesis. This database can be accessed freely and all the stored data can be viewed without any registration. Database URL: http://mcg.ustc.edu.cn/sdap1/follicle/index.php © The Author(s) 2015. Published by Oxford University Press.

  6. The PEP-II project-wide database

    International Nuclear Information System (INIS)

    Chan, A.; Calish, S.; Crane, G.; MacGregor, I.; Meyer, S.; Wong, J.

    1995-05-01

    The PEP-II Project Database is a tool for monitoring the technical and documentation aspects of the construction of this accelerator. It holds the PEP-II design specifications and fabrication and installation data in one integrated system. Key pieces of the database include the machine parameter list, magnet and vacuum fabrication data, CAD drawings, publications and documentation, survey and alignment data, and property control. The database can be extended to contain information required for the operations phase of the accelerator and detector. Features such as viewing CAD drawing graphics from the database will be implemented in the future. This central Oracle database on a UNIX server is built using Oracle CASE tools. Users at the three collaborating laboratories (SLAC, LBL, LLNL) can access the data remotely, using various desktop computer platforms and graphical interfaces.

  7. Low-temperature baseboard heaters with integrated air supply - An analytical and numerical investigation

    Energy Technology Data Exchange (ETDEWEB)

    Ploskic, Adnan; Holmberg, Sture [Fluid and Climate Technology, School of Architecture and Built Environment, KTH, Marinens vaeg 30, SE-13640 Handen, Stockholm (Sweden)

    2011-01-15

    The functioning of a hydronic baseboard heating system with integrated air supply was analyzed. The aim was to investigate the thermal performance of the system when cold outdoor (ventilation) airflow was forced through the baseboard heater. The performance of the system was evaluated for different ventilation rates at typical outdoor temperatures during the Swedish winter season. Three different analytical models and Computational Fluid Dynamics (CFD) were used to predict the temperature rise of the airflow inside the baseboard heater. Good agreement between numerical (CFD) and analytical calculations was obtained. Calculations showed that it was fully possible to pre-heat the incoming airflow to the indoor temperature and to cover transmission losses using a 45 °C supply water flow. The analytical calculations also showed that the airflow per supply opening in the baseboard heater needed to be limited to 7.0 l/s due to pressure losses inside the channel. At this ventilation rate, the integrated system with one air supply gave about 2.1 times more heat output than a conventional baseboard heating system. CFD simulations also showed that the integrated system was capable of countering the downdraught created by 2.0 m high glazed areas in a cold outdoor environment. Draught discomfort in the case of the conventional system was slightly above the recommended upper limit, but heat distribution across the whole analyzed office space was uniform for both heating systems. It was concluded that low-temperature baseboard heating systems with integrated air supply can meet international comfort requirements and lead to energy savings in cold climates. (author)

  8. Coordinate Systems Integration for Craniofacial Database from Multimodal Devices

    Directory of Open Access Journals (Sweden)

    Deni Suwardhi

    2005-05-01

    This study presents a data registration method for craniofacial spatial data of different modalities. The data consist of three-dimensional (3D) vector and raster data models and are stored in an object-relational database. The data capture devices are a laser scanner, CT (computed tomography) and CR (close-range photogrammetry). The objective of the registration is to transform the data from various coordinate systems into a single 3D Cartesian coordinate system. The standard error of the registration obtained from the multimodal imaging devices using a 3D affine transformation is in the range of 1-2 mm. This study is a step towards storing the craniofacial spatial data in one reference system in the database.
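
    The core of such a registration step, a least-squares estimate of a 3D affine transformation from corresponding points, can be sketched as follows (synthetic points, not the study's craniofacial data):

```python
import numpy as np

def fit_affine_3d(src, dst):
    """Least-squares 3D affine transform mapping src onto dst.

    src, dst: (N, 3) arrays of corresponding points, N >= 4.
    Returns a 3x4 matrix [A | t] with dst ~ src @ A.T + t.
    """
    X = np.hstack([src, np.ones((src.shape[0], 1))])  # homogeneous coords
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)       # solve X @ M ~ dst
    return M.T

# Synthetic check: recover a known rotation plus translation.
rng = np.random.default_rng(0)
src = rng.normal(size=(10, 3))
A_true = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
M = fit_affine_3d(src, src @ A_true.T + t_true)
print(np.allclose(M[:, :3], A_true), np.allclose(M[:, 3], t_true))
```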

  9. A numerical integration-based yield estimation method for integrated circuits

    International Nuclear Information System (INIS)

    Liang Tao; Jia Xinzhang

    2011-01-01

    A novel integration-based yield estimation method is developed for yield optimization of integrated circuits. This method tries to integrate the joint probability density function on the acceptability region directly. To achieve this goal, the simulated performance data of unknown distribution should be converted to follow a multivariate normal distribution by using Box-Cox transformation (BCT). In order to reduce the estimation variances of the model parameters of the density function, orthogonal array-based modified Latin hypercube sampling (OA-MLHS) is presented to generate samples in the disturbance space during simulations. The principle of variance reduction of model parameters estimation through OA-MLHS together with BCT is also discussed. Two yield estimation examples, a fourth-order OTA-C filter and a three-dimensional (3D) quadratic function are used for comparison of our method with Monte Carlo based methods including Latin hypercube sampling and importance sampling under several combinations of sample sizes and yield values. Extensive simulations show that our method is superior to other methods with respect to accuracy and efficiency under all of the given cases. Therefore, our method is more suitable for parametric yield optimization. (semiconductor integrated circuits)
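
    A stripped-down version of the sampling ingredient is plain Latin hypercube sampling driving a Monte Carlo yield estimate over a toy acceptability region; the paper's orthogonal-array modification and Box-Cox transformation are omitted:

```python
import numpy as np
from scipy.stats import norm

def latin_hypercube(n, dim, rng):
    """n Latin hypercube samples in [0, 1)^dim: one point per stratum,
    with the strata shuffled independently in each dimension."""
    perms = np.stack([rng.permutation(n) for _ in range(dim)], axis=1)
    return (perms + rng.random((n, dim))) / n

rng = np.random.default_rng(1)
# Map unit-cube samples to two disturbance parameters ~ N(0, 1).
x = norm.ppf(latin_hypercube(10000, 2, rng))
# Toy acceptability region: the circuit "passes" if both |parameters| < 1.5.
yield_est = np.mean(np.all(np.abs(x) < 1.5, axis=1))
print(yield_est)  # close to (2 * norm.cdf(1.5) - 1) ** 2 ~ 0.75
```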

  11. Development and implementation of a custom integrated database with dashboards to assist with hematopathology specimen triage and traffic

    Directory of Open Access Journals (Sweden)

    Elizabeth M Azzato

    2014-01-01

    Background: At some institutions, including ours, bone marrow aspirate specimen triage is complex, with hematopathology triage decisions that need to be communicated to downstream ancillary testing laboratories and many specimen aliquot transfers that are handled outside of the laboratory information system (LIS). We developed a custom integrated database with dashboards to facilitate and streamline this workflow. Methods: We developed user-specific dashboards that allow entry of specimen information by technologists in the hematology laboratory, have custom scripting to present relevant information for the hematopathology service and ancillary laboratories, and allow communication of triage decisions from the hematopathology service to other laboratories. These dashboards are web-accessible on the local intranet and accessible from behind the hospital firewall on a computer or tablet. Secure user access and group rights ensure that relevant users can edit or access appropriate records. Results: After database and dashboard design, two-stage beta testing and user education were performed, with the first stage focusing on technologist specimen entry and the second on downstream users. Commonly encountered issues and user functionality requests were resolved with database and dashboard redesign. Final implementation occurred within 6 months of the initial design; users report improved triage efficiency and a reduced need for interlaboratory communications. Conclusions: We successfully developed and implemented a custom database with dashboards that facilitates and streamlines our hematopathology bone marrow aspirate triage. This provides an example of a possible solution to specimen communications and traffic that are outside the purview of a standard LIS.

  12. Zdeněk Kopal: Numerical Analyst

    Science.gov (United States)

    Křížek, M.

    2015-07-01

    We give a brief overview of Zdeněk Kopal's life, his activities in the Czech Astronomical Society, his collaboration with Vladimír Vand, and his studies at Charles University, Cambridge, Harvard, and MIT. Then we survey Kopal's professional life. He published 26 monographs and 20 conference proceedings. We will concentrate on Kopal's extensive monograph Numerical Analysis (1955, 1961) that is widely accepted to be the first comprehensive textbook on numerical methods. It describes, for instance, methods for polynomial interpolation, numerical differentiation and integration, numerical solution of ordinary differential equations with initial or boundary conditions, and numerical solution of integral and integro-differential equations. Special emphasis will be laid on error analysis. Kopal himself applied numerical methods to celestial mechanics, in particular to the N-body problem. He also used Fourier analysis to investigate light curves of close binaries to discover their properties. This is, in fact, a problem from mathematical analysis.

  13. Databases and coordinated research projects at the IAEA on atomic processes in plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Braams, Bastiaan J.; Chung, Hyun-Kyung [Nuclear Data Section, NAPC Division, International Atomic Energy Agency P. O. Box 100, Vienna International Centre, AT-1400 Vienna (Austria)

    2012-05-25

    The Atomic and Molecular Data Unit at the IAEA works with a network of national data centres to encourage and coordinate production and dissemination of fundamental data for atomic, molecular and plasma-material interaction (A+M/PMI) processes that are relevant to the realization of fusion energy. The Unit maintains numerical and bibliographical databases and has started a Wiki-style knowledge base. The Unit also contributes to A+M database interface standards and provides a search engine that offers a common interface to multiple numerical A+M/PMI databases. Coordinated Research Projects (CRPs) bring together fusion energy researchers and atomic, molecular and surface physicists for joint work towards the development of new data and new methods. The databases and current CRPs on A+M/PMI processes are briefly described here.

  16. An integrated numerical and physical modeling system for an enhanced in situ bioremediation process

    International Nuclear Information System (INIS)

    Huang, Y.F.; Huang, G.H.; Wang, G.Q.; Lin, Q.G.; Chakma, A.

    2006-01-01

    Groundwater contamination due to releases of petroleum products is a major environmental concern in many urban districts and industrial zones. Over the past few years, a few studies have addressed in situ bioremediation processes coupled with contaminant transport in two- or three-dimensional domains. However, they concentrated on natural attenuation processes for petroleum contaminants or on enhanced in situ bioremediation processes in laboratory columns. In this study, an integrated numerical and physical modeling system is developed for simulating an enhanced in situ biodegradation (EISB) process coupled with three-dimensional multiphase multicomponent flow and transport simulation in a multi-dimensional pilot-scale physical model. The designed pilot-scale physical model is effective in tackling natural attenuation and EISB processes for site remediation. The simulation results demonstrate that the developed system is effective in modeling the EISB process and can thus be used for investigating the effects of various uncertainties. - An integrated modeling system was developed to enhance in situ bioremediation processes

  17. Electronic database of arterial aneurysms

    Directory of Open Access Journals (Sweden)

    Fabiano Luiz Erzinger

    2014-12-01

    Background: The creation of an electronic database facilitates the storage of information and streamlines the exchange of data, making the exchange of knowledge easier for future research. Objective: To construct an electronic database containing comprehensive and up-to-date clinical and surgical data on the most common arterial aneurysms, to help advance scientific research. Methods: The most important specialist textbooks and articles found in journals and on internet databases were reviewed in order to define the basic structure of the protocol. Data were computerized using the SINPE© system for integrated electronic protocols and tested in a pilot study. Results: The data entered into the system were first used to create a Master protocol, organized into a structure of top-level directories covering a large proportion of the content on vascular diseases, as follows: patient history; physical examination; supplementary tests and examinations; diagnosis; treatment; and clinical course. By selecting items from the Master protocol, Specific protocols were then created for the 22 arterial sites most often involved by aneurysms. The program provides a method for collecting data on patients, including clinical characteristics (patient history and physical examination), supplementary tests and examinations, treatments received and follow-up care after treatment. Any information of interest on these patients that is contained in the protocol can then be used to query the database and select data for studies. Conclusions: It proved possible to construct a database of clinical and surgical data on the arterial aneurysms of greatest interest and, by adapting the data to specific software, the database was integrated into the SINPE© system, thereby providing a standardized method for collecting data on these patients and tools for retrieving this information in an organized manner for use in scientific studies.

  18. Customer database for Watrec Oy

    OpenAIRE

    Melnichikhina, Ekaterina

    2016-01-01

    This thesis describes a development project for Watrec Oy, a Finnish company specializing in waste-to-energy solutions. Customer Relationship Management (CRM) strategies are now being applied within the company, and the customer database is the first, trial step towards a CRM strategy at Watrec Oy. The reasons for the database project lie in the lack of clear customer data. The main objectives are: - To integrate the customer and project data; - To improve the level of sales and marketing.

  19. Integrated Numerical Experiments (INEX) and the Free-Electron Laser Physical Process Code (FELPPC)

    International Nuclear Information System (INIS)

    Thode, L.E.; Chan, K.C.D.; Schmitt, M.J.; McKee, J.; Ostic, J.; Elliott, C.J.; McVey, B.D.

    1990-01-01

    The strong coupling of subsystem elements, such as the accelerator, wiggler, and optics, greatly complicates the understanding and design of a free electron laser (FEL), even at the conceptual level. To address the strongly coupled character of the FEL, the concept of an Integrated Numerical Experiment (INEX) was proposed. Unique features of the INEX approach are consistency and numerical equivalence of experimental diagnostics. The equivalent numerical diagnostics mitigate the major problem of misinterpretation that often occurs when theoretical and experimental data are compared. The INEX approach has been applied to a large number of accelerator and FEL experiments. Overall, the agreement between INEX and the experiments is very good. Despite the success of INEX, the approach is difficult to apply to trade-off and initial design studies because of the significant manpower and computational requirements. On the other hand, INEX provides a base from which realistic accelerator, wiggler, and optics models can be developed. The Free Electron Laser Physical Process Code (FELPPC) includes models developed from INEX, provides coupling between the subsystem models, and incorporates application models relevant to a specific trade-off or design study. In other words, FELPPC solves the complete physical process model using realistic physics and technology constraints. Because FELPPC provides a detailed design, a good estimate of the FEL mass, cost, and size can be made from a piece-part count of the FEL. FELPPC requires significant accelerator and FEL expertise to operate. The code can calculate complex FEL configurations including multiple accelerator and wiggler combinations.

  20. Development of IAEA nuclear reaction databases and services

    Energy Technology Data Exchange (ETDEWEB)

    Zerkin, V.; Trkov, A. [International Atomic Energy Agency, Dept. of Nuclear Sciences and Applications, Vienna (Austria)

    2008-07-01

    From mid-2004 onwards, the major nuclear reaction databases (EXFOR, CINDA and ENDF) and services (Web and CD-ROM retrieval systems and specialized applications) have been functioning within a modern computing environment as multi-platform software, working under several operating systems with relational databases. Subsequent work at the IAEA has focused on three areas of development: revision and extension of the contents of the databases; extension and improvement of the functionality and integrity of the retrieval systems; and development of software for database maintenance and system deployment. (authors)

  1. FCDD: A Database for Fruit Crops Diseases.

    Science.gov (United States)

    Chauhan, Rupal; Jasrai, Yogesh; Pandya, Himanshu; Chaudhari, Suman; Samota, Chand Mal

    2014-01-01

    Building the Fruit Crops Diseases Database (FCDD) required a number of biotechnology and bioinformatics tools. The FCDD is a unique bioinformatics resource that compiles detailed information on 162 fruit crop diseases, covering disease type, causal organism, images, symptoms and control. The FCDD also contains 171 phytochemicals from 25 fruits, their 2D images and their 20 possible sequences. This information has been manually extracted and verified from numerous sources, including other electronic databases, textbooks and scientific journals. FCDD is fully searchable and supports extensive text search. The main focus of the FCDD is on providing the available information on fruit crop diseases, which will help in the discovery of potential drugs from one of the common bioresources: fruits. The database was developed using MySQL. The database interface is developed in PHP, HTML and Java. FCDD is freely available at http://www.fruitcropsdd.com/

  2. Large scale access tests and online interfaces to ATLAS conditions databases

    International Nuclear Information System (INIS)

    Amorim, A; Lopes, L; Pereira, P; Simoes, J; Soloviev, I; Burckhart, D; Schmitt, J V D; Caprini, M; Kolos, S

    2008-01-01

    The access of the ATLAS Trigger and Data Acquisition (TDAQ) system to the ATLAS Conditions Databases sets strong reliability and performance requirements on the database storage and access infrastructures. Several applications were developed to support the integration of Conditions database access with the online services in TDAQ, including the interface to the Information Services (IS) and to the TDAQ Configuration Databases. The information storage requirements were the motivation for the ONline ASynchronous Interface to COOL (ONASIC) from the Information Service (IS) to the LCG/COOL databases. ONASIC avoids possible backpressure from the Online Database servers by managing a local cache. In parallel, OKS2COOL was developed to store Configuration Databases in an Offline Database with a history record. The DBStressor application was developed to test and stress the access to the Conditions database using the LCG/COOL interface while operating in an integrated way as a TDAQ application. The performance scaling of simultaneous Conditions database read accesses was studied in the context of the ATLAS High Level Trigger large computing farms. A large set of tests was performed involving up to 1000 computing nodes that simultaneously accessed the LCG central database server infrastructure at CERN.
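
    The caching idea behind ONASIC, serving repeated reads locally so that the database server sees only an occasional upstream request, can be sketched generically (this is an illustration of the principle, not ONASIC's actual implementation):

```python
import time

class ReadThroughCache:
    """Serve repeated conditions-data reads locally; hit the database
    server only on a miss or after the entry expires."""

    def __init__(self, fetch, ttl=60.0):
        self.fetch = fetch    # slow upstream read: key -> value
        self.ttl = ttl        # seconds a cached entry stays valid
        self._store = {}      # key -> (value, timestamp)

    def get(self, key):
        hit = self._store.get(key)
        if hit is not None and time.monotonic() - hit[1] < self.ttl:
            return hit[0]                  # served locally, no server load
        value = self.fetch(key)            # one upstream read per window
        self._store[key] = (value, time.monotonic())
        return value

# 1000 clients asking for the same calibration record -> one round trip.
calls = []
cache = ReadThroughCache(fetch=lambda k: calls.append(k) or {"gain": 1.02})
for _ in range(1000):
    cache.get("pixel/calib/run42")
print(len(calls))  # 1
```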

  3. Secure Distributed Databases Using Cryptography

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2006-01-01

    Computational encryption is used intensively by different database management systems to ensure the privacy and integrity of information that is physically stored in files. The information is also sent over networks and replicated on different distributed systems. A satisfactory level of security is achieved if the rows and columns of tables are encrypted independently of the table or computer that holds the data. It is also very important that SQL (Structured Query Language) query requests and responses be encrypted over the network connection between the client and the database server. All these techniques and methods must be implemented by database administrators, designers and developers as part of a consistent security policy.
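
    The idea of encrypting field values independently of the table or machine that stores them can be sketched with the cryptography package's Fernet recipe (authenticated symmetric encryption); the table and column here are hypothetical:

```python
import sqlite3
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice held by a key service, not the DB
f = Fernet(key)

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE patients (id INTEGER PRIMARY KEY, name BLOB)")

# Encrypt the sensitive value before it ever reaches the storage layer;
# Fernet also authenticates the ciphertext, protecting integrity.
con.execute("INSERT INTO patients (name) VALUES (?)",
            (f.encrypt("Alice".encode()),))

# The database and the network only ever see ciphertext.
(cipher,) = con.execute("SELECT name FROM patients WHERE id = 1").fetchone()
print(f.decrypt(cipher).decode())  # Alice
```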

  4. Real Time Baseball Database

    Science.gov (United States)

    Fukue, Yasuhiro

    The author describes the system outline, features and operation of the "Nikkan Sports Realtime Baseball Database", which was developed and operated by Nikkan Sports Shimbun, K.K. The system enables numerical data from professional baseball games to be entered as the games proceed, with real-time, just-in-time updating. Besides serving as a supporting tool for preparing newspapers, it is also available to broadcasting media and to general users through NTT Dial Q2 and other services.

  5. ADANS database specification

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-01-16

    The purpose of the Air Mobility Command (AMC) Deployment Analysis System (ADANS) Database Specification (DS) is to describe the database organization and storage allocation and to provide the detailed data model of the physical design and information necessary for the construction of the parts of the database (e.g., tables, indexes, rules, defaults). The DS includes entity relationship diagrams, table and field definitions, reports on other database objects, and a description of the ADANS data dictionary. ADANS is the automated system used by Headquarters AMC and the Tanker Airlift Control Center (TACC) for airlift planning and scheduling of peacetime and contingency operations as well as for deliberate planning. ADANS also supports planning and scheduling of Air Refueling Events by the TACC and the unit-level tanker schedulers. ADANS receives input in the form of movement requirements and air refueling requests. It provides a suite of tools for planners to manipulate these requirements/requests against mobility assets and to develop, analyze, and distribute schedules. Analysis tools are provided for assessing the products of the scheduling subsystems, and editing capabilities support the refinement of schedules. A reporting capability provides formatted screen, print, and/or file outputs of various standard reports. An interface subsystem handles message traffic to and from external systems. The database is an integral part of the functionality summarized above.

  6. Solutions for medical databases optimal exploitation.

    Science.gov (United States)

    Branescu, I; Purcarea, V L; Dobrescu, R

    2014-03-15

    The paper discusses methods for applying OLAP techniques to multidimensional databases that leverage the existing performance-enhancing technique known as practical pre-aggregation, making this technique relevant to a much wider range of medical applications, as logistic support for data warehousing techniques. The transformations have low computational complexity and may be implemented using standard relational database technology. The paper also describes how to integrate the transformed hierarchies into current OLAP systems, transparently to the user, and proposes a flexible "multimodel" federated system for extending OLAP querying to external object databases.
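
    The pre-aggregation idea, materializing a mid-level aggregate once so that coarser roll-ups re-aggregate a small summary table rather than the fact table, can be sketched with a hypothetical schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE visits (hospital TEXT, ward TEXT, year INTEGER, cost REAL);
INSERT INTO visits VALUES
  ('A', 'cardiology', 2013, 100.0),
  ('A', 'oncology',   2013, 250.0),
  ('B', 'cardiology', 2014, 180.0);

-- Pre-aggregation: materialize the (hospital, year) level once.
CREATE TABLE visits_agg AS
  SELECT hospital, year, SUM(cost) AS cost, COUNT(*) AS n
  FROM visits GROUP BY hospital, year;
""")

# Coarser roll-ups then re-aggregate the small summary table instead of
# scanning the fact table.
for row in con.execute(
        "SELECT hospital, SUM(cost) FROM visits_agg GROUP BY hospital"):
    print(row)  # ('A', 350.0) then ('B', 180.0)
```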

  7. Simplification of integrity constraints for data integration

    DEFF Research Database (Denmark)

    Christiansen, Henning; Martinenghi, Davide

    2004-01-01

    Integrity constraints can often be assumed to hold in data integration, because either the global database is known to be consistent or suitable actions have been taken to provide consistent views. The present work generalizes simplification techniques for integrity checking in traditional databases to the combined case. Knowledge of local consistency is employed where it is available.

  8. BOKASUN: a fast and precise numerical program to calculate the Master Integrals of the two-loop sunrise diagrams

    OpenAIRE

    Caffo, Michele; Czyz, Henryk; Gunia, Michal; Remiddi, Ettore

    2008-01-01

    We present the program BOKASUN for fast and precise evaluation of the Master Integrals of the two-loop self-mass sunrise diagram for arbitrary values of the internal masses and the external four-momentum. We use a combination of two methods: a Bernoulli accelerated series expansion and a Runge-Kutta numerical solution of a system of linear differential equations.

  9. SmallSat Database

    Science.gov (United States)

    Petropulos, Dolores; Bittner, David; Murawski, Robert; Golden, Bert

    2015-01-01

    SmallSats have unrealized potential in both private industry and the federal government. Currently over 70 companies, 50 universities and 17 governmental agencies are involved in SmallSat research and development. In 1994, the U.S. Army Missile and Defense mapped the moon using smallSat imagery. Since then, smart phones have introduced this imagery to the people of the world as diverse industries watched this trend. The deployment cost of smallSats is also greatly reduced compared to traditional satellites, because multiple units can be deployed in a single mission. Imaging payloads have become more sophisticated, smaller and lighter. In addition, the growth of small technology obtained from private industries has led to the more widespread use of smallSats. Benefits include greater revisit rates in imagery, significantly lower costs, the ability to update technology more frequently and decreased vulnerability to enemy attacks. The popularity of smallSats shows a changing mentality in this fast-paced world of tomorrow. What impact has this created on the NASA communication networks, now and in future years? In this project, we are developing the SmallSat Relational Database, which can support a simulation of smallSats within the NASA SCaN Compatibility Environment for Networks and Integrated Communications (SCENIC) Modeling and Simulation Lab. The NASA Space Communications and Networks (SCaN) Program can use this modeling to project required network support needs in the next 10 to 15 years. The SmallSat Relational Database can model smallSats just as the other SCaN databases model the more traditional larger satellites, with a few exceptions. One is that the SmallSat database is designed to be built to order. The SmallSat database holds various hardware configurations that can be used to model a smallSat. It will require significant effort to develop, as the research material can only be populated by hand to obtain the unique data.

  10. Using Bitmap Indexing Technology for Combined Numerical and Text Queries

    Energy Technology Data Exchange (ETDEWEB)

    Stockinger, Kurt; Cieslewicz, John; Wu, Kesheng; Rotem, Doron; Shoshani, Arie

    2006-10-16

    In this paper, we describe a strategy of using compressed bitmap indices to speed up queries on both numerical data and text documents. By using an efficient compression algorithm, these compressed bitmap indices are compact even for indices with millions of distinct terms. Moreover, bitmap indices can be used very efficiently to answer Boolean queries over text documents involving multiple query terms. Existing inverted indices for text searches are usually inefficient for corpora with a very large number of terms as well as for queries involving a large number of hits. We demonstrate that our compressed bitmap index technology overcomes both of those shortcomings. In a performance comparison against a commonly used database system, our indices answer queries 30 times faster on average. To provide full SQL support, we integrated our indexing software, called FastBit, with MonetDB. The integrated system MonetDB/FastBit provides not only efficient searches on a single table as FastBit does, but also answers join queries efficiently. Furthermore, MonetDB/FastBit also provides a very efficient retrieval mechanism for result records.
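
    The core idea, one bitmap per term with conjunctive queries reduced to bitwise AND, fits in a few lines (FastBit's compression and the SQL integration are omitted):

```python
# One bitmap per term, one bit per document; a conjunctive query is a
# bitwise AND of the query terms' bitmaps (compression omitted here).
docs = ["numeric database index",
        "text search engine",
        "bitmap index for text search"]

index = {}
for doc_id, text in enumerate(docs):
    for term in set(text.split()):
        index[term] = index.get(term, 0) | (1 << doc_id)

def query_and(*terms):
    """Return the ids of documents containing all the given terms."""
    bits = -1  # all ones
    for t in terms:
        bits &= index.get(t, 0)
    return [i for i in range(len(docs)) if bits >> i & 1]

print(query_and("text", "search"))   # [1, 2]
print(query_and("bitmap", "index"))  # [2]
```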

  11. A variable timestep generalized Runge-Kutta method for the numerical integration of the space-time diffusion equations

    International Nuclear Information System (INIS)

    Aviles, B.N.; Sutton, T.M.; Kelly, D.J. III.

    1991-09-01

    A generalized Runge-Kutta method has been employed in the numerical integration of the stiff space-time diffusion equations. The method is fourth-order accurate, using an embedded third-order solution to arrive at an estimate of the truncation error for automatic timestep control. The efficiency of the Runge-Kutta method is enhanced by a block-factorization technique that exploits the sparse structure of the matrix system resulting from the space and energy discretized form of the time-dependent neutron diffusion equations. Preliminary numerical evaluation using a one-dimensional finite difference code shows the sparse matrix implementation of the generalized Runge-Kutta method to be highly accurate and efficient when compared to an optimized iterative theta method. 12 refs., 5 figs., 4 tabs
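
    The embedded-pair mechanism can be illustrated with a lower-order analogue: a second-order Heun step with an embedded first-order solution supplying the error estimate (the paper's scheme is fourth-order with an embedded third-order solution, plus a sparse block factorization):

```python
import numpy as np

def adaptive_heun(f, t, y, t_end, tol=1e-6, h=1e-2):
    """Integrate y' = f(t, y) with a 2nd-order Heun step; an embedded
    1st-order (Euler) solution supplies the local error estimate that
    drives the automatic timestep control."""
    while t < t_end:
        h = min(h, t_end - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_low = y + h * k1              # embedded lower-order solution
        y_high = y + h / 2 * (k1 + k2)  # 2nd-order solution
        err = np.max(np.abs(y_high - y_low))
        if err <= tol:                  # accept the step
            t, y = t + h, y_high
        # Shrink or grow h toward the error target (safety factor 0.9).
        h *= min(4.0, max(0.1, 0.9 * (tol / max(err, 1e-16)) ** 0.5))
    return y

# Moderately stiff test problem: y' = -50 (y - cos t).
f = lambda t, y: -50.0 * (y - np.cos(t))
print(adaptive_heun(f, 0.0, np.array([0.0]), 1.0))
```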

  12. DEVELOPING FLEXIBLE APPLICATIONS WITH XML AND DATABASE INTEGRATION

    Directory of Open Access Journals (Sweden)

    Hale AS

    2004-04-01

    In recent years the most popular subject in the information systems area has been Enterprise Application Integration (EAI). It can be defined as the process of forming a standard connection between the different systems of an organization's information system environment. Mergers, acquisitions and partnerships between corporations are major reasons for the popularity of Enterprise Application Integration, and the main purpose is to solve the application integration problems that arise while the similar systems of such corporations continue working together for some time. With the help of XML technology, it is possible to find solutions to the problems of application integration either within a corporation or between corporations.

  13. Nuclear models, experiments and data libraries needed for numerical simulation of accelerator-driven system

    International Nuclear Information System (INIS)

    Bauge, E.; Bersillon, O.

    2000-01-01

    This paper presents the transparencies of the talk concerning the nuclear models, experiments and data libraries needed for numerical simulation of Accelerator-Driven Systems. The first part, concerning the nuclear models, defines the spallation process, the corresponding models (intra-nuclear cascade, statistical model, Fermi breakup, fission, transport, decay and macroscopic aspects) and the code systems. The second part, devoted to the experiments, presents the angular measurements, the integral measurements, the residual nuclei and the energy deposition. In the last part, dealing with the data libraries, the author details the fundamental quantities such as the reaction cross-section, the low energy transport databases and the decay libraries. (A.L.B.)

  14. A Generative Approach for Building Database Federations

    Directory of Open Access Journals (Sweden)

    Uwe Hohenstein

    1999-11-01

    A comprehensive, specification-based approach for building database federations is introduced that supports integrated, ODMG 2.0-conforming access to heterogeneous data sources seamlessly from C++. The approach is centered around several generators. A first set of generators produces ODMG adapters for local sources in order to homogenize them. Each adapter represents an ODMG view and supports ODMG manipulation and querying. The adapters can be plugged into a federation framework. Another generator produces a homogeneous and uniform view by putting an ODMG-conforming federation layer on top of the adapters. Input to these generators are schema specifications. Schemata are defined in corresponding specification languages. There are languages to homogenize relational and object-oriented databases, as well as ordinary file systems. Any specification defines an ODMG schema and relates it to an existing data source. An integration language is then used to integrate the schemata and to build system-spanning federated views thereupon. The generative nature provides flexibility with respect to schema modification of component databases. Any time a schema changes, only the specification has to be adapted; new adapters are generated automatically.

  15. Legume and Lotus japonicus Databases

    DEFF Research Database (Denmark)

    Hirakawa, Hideki; Mun, Terry; Sato, Shusei

    2014-01-01

    Since the genome sequence of Lotus japonicus, a model plant of family Fabaceae, was determined in 2008 (Sato et al. 2008), the genomes of other members of the Fabaceae family, soybean (Glycine max) (Schmutz et al. 2010) and Medicago truncatula (Young et al. 2011), have been sequenced. In this section, we introduce representative, publicly accessible online resources related to plant materials, integrated databases containing legume genome information, and databases for genome sequence and derived marker information of legume species including L. japonicus...

  16. System/subsystem specifications for the Worldwide Port System (WPS) Regional Integrated Cargo Database (ICDB)

    Energy Technology Data Exchange (ETDEWEB)

    Rollow, J.P.; Shipe, P.C.; Truett, L.F. [Oak Ridge National Lab., TN (United States); Faby, E.Z.; Fluker, J.; Grubb, J.; Hancock, B.R. [Univ. of Tennessee, Knoxville, TN (United States); Ferguson, R.A. [Science Applications International Corp., Oak Ridge, TN (United States)

    1995-11-20

    A system is being developed by the Military Traffic Management Command (MTMC) to provide data integration and worldwide management and tracking of surface cargo movements. The Integrated Cargo Database (ICDB) will be a data repository for the WPS terminal-level system, will be a primary source of queries and cargo traffic reports, will receive data from and provide data to other MTMC and non-MTMC systems, will provide capabilities for processing Advance Transportation Control and Movement Documents (ATCMDs), and will process and distribute manifests. This System/Subsystem Specifications for the Worldwide Port System Regional ICDB documents the system/subsystem functions, provides details of the system/subsystem analysis in order to provide a communication link between developers and operational personnel, and identifies interfaces with other systems and subsystems. It must be noted that this report is being produced near the end of the initial development phase of ICDB, while formal software testing is being done. Following the initial implementation of the ICDB system, maintenance contractors will be in charge of making changes and enhancing software modules. Formal testing and user reviews may indicate the need for additional software units or changes to existing ones. This report describes the software units that are components of this ICDB system as of August 1995.

  17. Nuclear data processing using a database management system

    International Nuclear Information System (INIS)

    Castilla, V.; Gonzalez, L.

    1991-01-01

    A database management system that permits the design of relational models was used to create an integrated database with experimental and evaluated nuclear data. A system that reduces the time and cost of processing was created for computers of type EC or compatibles. A set of programs for the conversion from calculated nuclear data output format to EXFOR format was developed. A dictionary to perform retrospective searches in the ENDF database was also created

  18. Numerical Evaluation of the "Dual-Kernel Counter-flow" Matric Convolution Integral that Arises in Discrete/Continuous (D/C) Control Theory

    Science.gov (United States)

    Nixon, Douglas D.

    2009-01-01

    Discrete/Continuous (D/C) control theory is a new generalized theory of discrete-time control that expands the concept of conventional (exact) discrete-time control to create a framework for design and implementation of discrete-time control systems that include a continuous-time command function generator, so that actuator commands need not be constant between control decisions but can be more generally defined and implemented as functions that vary with time across the sample period. Because the plant/control system construct contains two linear subsystems arranged in tandem, a novel dual-kernel counter-flow convolution integral appears in the formulation. As part of the D/C system design and implementation process, numerical evaluation of that integral over the sample period is required. Three fundamentally different evaluation methods and associated algorithms are derived for the constant-coefficient case. Numerical results are matched against three available examples that have closed-form solutions.
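
    To make the "dual-kernel counter-flow" structure concrete: one kernel's argument runs forward across the sample period while the other runs backward. The sketch below numerically evaluates a generic integral of that shape; the matrices and the exact integrand are illustrative stand-ins, not the paper's formulation or one of its three algorithms.

```python
import numpy as np
from scipy.linalg import expm

# Hedged sketch: numerically evaluate a dual-kernel counter-flow convolution
# integral over one sample period T,
#     M(T) = integral_0^T  expm(A1*(T - tau)) @ B @ expm(A2*tau)  d(tau),
# where one kernel runs forward in tau and the other backward ("counter-flow").

def counterflow_integral(A1, B, A2, T, n=200):
    taus = np.linspace(0.0, T, n + 1)
    vals = [expm(A1 * (T - t)) @ B @ expm(A2 * t) for t in taus]
    h = T / n
    # Composite trapezoidal rule: half weights at the two endpoints.
    return h * (0.5 * vals[0] + 0.5 * vals[-1] + sum(vals[1:-1]))

A1 = np.array([[0.0, 1.0], [-4.0, -0.4]])   # a lightly damped oscillator
A2 = np.array([[-1.0, 0.0], [0.0, -2.0]])   # a stable diagonal subsystem
B = np.eye(2)
print(counterflow_integral(A1, B, A2, T=0.1))
```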

  19. Application of variational principles and adjoint integrating factors for constructing numerical GFD models

    Science.gov (United States)

    Penenko, Vladimir; Tsvetova, Elena; Penenko, Alexey

    2015-04-01

    The proposed method is demonstrated on hydrothermodynamics and atmospheric chemistry models [1,2]. Extending existing methods for constructing numerical schemes that possess the property of total approximation for the operators of multiscale process models, we have developed a new variational technique based on the concept of adjoint integrating factors. The technique is as follows. First, a basic functional of the variational principle (the integral identity that unites the model equations, initial and boundary conditions) is transformed using Lagrange's identity and the second Green's formula. As a result, the action of the operators of the main problem in the space of state functions is transferred to the adjoint operators defined in the space of sufficiently smooth adjoint functions. By the choice of adjoint functions, the order of the derivatives becomes lower by one than in the original equations. We obtain a set of new balance relationships that take into account the sources and boundary conditions. Next, we introduce the decomposition of the model domain into a set of finite volumes. For multi-dimensional non-stationary problems, this technique is applied in the framework of the variational principle and schemes of decomposition and splitting on the set of physical processes for each coordinate direction successively at each time step. For each direction within the finite volume, the analytical solutions of one-dimensional homogeneous adjoint equations are constructed. In this case, the solutions of adjoint equations serve as integrating factors. The results are hybrid discrete-analytical schemes. They have the properties of stability, approximation and unconditional monotonicity for convection-diffusion operators. These schemes are discrete in time and analytic in the spatial variables. They are exact in case of piecewise-constant coefficients within the finite volume and along the coordinate lines of the grid area in each
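
    As background for the phrase "adjoint integrating factors" (the following identity is classical and not taken from the abstract): in the scalar case an integrating factor turns a first-order equation into an exact derivative that can be integrated analytically within each cell, which is the property the hybrid discrete-analytical schemes exploit along each coordinate direction.

```latex
% Classical integrating-factor identity (background sketch, scalar case):
% for  u'(t) + p(t)\,u(t) = q(t),  let  \mu(t) = \exp\!\Big(\int_{t_0}^{t} p(s)\,ds\Big).
% Then
%   \frac{d}{dt}\bigl(\mu(t)\,u(t)\bigr) = \mu(t)\,q(t),
% so the solution follows from a single quadrature (note \mu(t_0) = 1):
%   u(t) = \frac{1}{\mu(t)}\Bigl(u(t_0) + \int_{t_0}^{t}\mu(s)\,q(s)\,ds\Bigr).
```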

  20. Integrating experimental and numerical methods for a scenario-based quantitative assessment of subsurface energy storage options

    Science.gov (United States)

    Kabuth, Alina; Dahmke, Andreas; Hagrey, Said Attia al; Berta, Márton; Dörr, Cordula; Koproch, Nicolas; Köber, Ralf; Köhn, Daniel; Nolde, Michael; Tilmann Pfeiffer, Wolf; Popp, Steffi; Schwanebeck, Malte; Bauer, Sebastian

    2016-04-01

    Within the framework of the transition to renewable energy sources ("Energiewende"), the German government defined the target of producing 60 % of the final energy consumption from renewable energy sources by the year 2050. However, renewable energies are subject to natural fluctuations. Energy storage can help to buffer the resulting time shifts between production and demand. Subsurface geological structures provide large potential capacities for energy stored in the form of heat or gas on daily to seasonal time scales. In order to explore this potential sustainably, the possible induced effects of energy storage operations have to be quantified for both specified normal operation and events of failure. The ANGUS+ project therefore integrates experimental laboratory studies with numerical approaches to assess subsurface energy storage scenarios and monitoring methods. Subsurface storage options for gas, i.e. hydrogen, synthetic methane and compressed air in salt caverns or porous structures, as well as subsurface heat storage are investigated with respect to site prerequisites, storage dimensions, induced effects, monitoring methods and integration into spatial planning schemes. The conceptual interdisciplinary approach of the ANGUS+ project towards the integration of subsurface energy storage into a sustainable subsurface planning scheme is presented here, and this approach is then demonstrated using the examples of two selected energy storage options: Firstly, the option of seasonal heat storage in a shallow aquifer is presented. Coupled thermal and hydraulic processes induced by periodic heat injection and extraction were simulated in the open-source numerical modelling package OpenGeoSys. Situations of specified normal operation as well as cases of failure in operational storage with leaking heat transfer fluid are considered. Bench-scale experiments provided parameterisations of temperature dependent changes in shallow groundwater hydrogeochemistry. As a

  1. Databases for marine biologists and biotechnologists: The state-of-the art and prospects

    Digital Repository Service at National Institute of Oceanography (India)

    Chavan, V.S.

    Only 1% of the presently available 5000 database titles are relevant to marine biology and biotechnology. Nearly 60% of these are bibliographic in nature. There are almost no textual and numeric databases, which are the prime need of researchers...

  2. [Research and development of medical case database: a novel medical case information system integrating with biospecimen management].

    Science.gov (United States)

    Pan, Shiyang; Mu, Yuan; Wang, Hong; Wang, Tong; Huang, Peijun; Ma, Jianfeng; Jiang, Li; Zhang, Jie; Gu, Bing; Yi, Lujiang

    2010-04-01

    To meet the need to manage medical case information and biospecimens simultaneously, we developed a novel medical case information system integrated with biospecimen management. The database, established with MS SQL Server 2000, covered basic information, clinical diagnosis, imaging diagnosis, pathological diagnosis and clinical treatment of patients; physicochemical properties, inventory management and laboratory analysis of biospecimens; and user logs and data maintenance. The client application, developed with Visual C++ 6.0 and based on a Client/Server model, was used to implement medical case and biospecimen management. The system can perform input, browsing, querying and summarization of cases and related biospecimen information, and can automatically synthesize case records from the database. It supports management not only of long-term follow-up of individuals, but also of grouped cases organized according to the aim of a research project. The system can improve the efficiency and quality of clinical research when biospecimens are used in a coordinated manner. It realizes synthesized and dynamic management of medical cases and biospecimens, and may be considered a new management platform.

  3. On the hydrodynamics of archer fish jumping out of the water: Integrating experiments with numerical simulations

    Science.gov (United States)

    Sotiropoulos, Fotis; Angelidis, Dionysios; Mendelson, Leah; Techet, Alexandra

    2017-11-01

    Evolution has enabled fish to develop a range of thrust producing mechanisms to allow skillful movement and give them the ability to catch prey or avoid danger. Several experimental and numerical studies have been performed to investigate how complex maneuvers are executed and develop bioinspired strategies for aquatic robot design. We will discuss recent numerical advances toward the development of a computational framework for performing turbulent, two-phase flow, fluid-structure-interaction (FSI) simulations to investigate the dynamics of aquatic jumpers. We will also discuss the integration of such numerics with high-speed imaging and particle image velocimetry data to reconstruct anatomic fish models and prescribe realistic kinematics of fish motion. The capabilities of our method will be illustrated by applying it to simulate the motion of a small scale archer fish jumping out of the water to capture prey. We will discuss the rich vortex dynamics emerging during the hovering, rapid upward and gliding phases. The simulations will elucidate the thrust production mechanisms by the movement of the pectoral and anal fins and we will show that the fins significantly contribute to the rapid acceleration.

  4. Uses and limitations of registry and academic databases.

    Science.gov (United States)

    Williams, William G

    2010-01-01

    A database is simply a structured collection of information. A clinical database may be a Registry (a limited amount of data for every patient undergoing heart surgery) or Academic (an organized and extensive dataset of an inception cohort of carefully selected subset of patients). A registry and an academic database have different purposes and cost. The data to be collected for a database is defined by its purpose and the output reports required for achieving that purpose. A Registry's purpose is to ensure quality care, an Academic Database, to discover new knowledge through research. A database is only as good as the data it contains. Database personnel must be exceptionally committed and supported by clinical faculty. A system to routinely validate and verify data integrity is essential to ensure database utility. Frequent use of the database improves its accuracy. For congenital heart surgeons, routine use of a Registry Database is an essential component of clinical practice. Copyright (c) 2010 Elsevier Inc. All rights reserved.

  5. Usability of some databases for information services in Czechoslovak nuclear programme

    International Nuclear Information System (INIS)

    Kakos, A.

    1988-01-01

    The contents of the databases Chemical Abstracts Search, World Patent Index, Excerpta Medica, Inspec and Compendex were compared with INIS, with regard to the possible completion of INIS searches with searches in these other databases. On the basis of test searches made in all of these databases on selected topics falling under the INIS scope, concrete cases were identified in which INIS searches should be supplemented with data from some of the other databases. The content analysis method is described with regard to the concrete search topics, and the areas of overlap between these databases and INIS are given. Numerical results are given. (J.B.). 2 tabs

  6. High Quality Data for Grid Integration Studies

    Energy Technology Data Exchange (ETDEWEB)

    Clifton, Andrew; Draxl, Caroline; Sengupta, Manajit; Hodge, Bri-Mathias

    2017-01-22

    As variable renewable power penetration levels increase in power systems worldwide, renewable integration studies are crucial to ensure continued economic and reliable operation of the power grid. The existing electric grid infrastructure in the US in particular poses significant limitations on wind power expansion. In this presentation we will shed light on requirements for grid integration studies as far as wind and solar energy are concerned. Because wind and solar plants are strongly impacted by weather, high-resolution and high-quality weather data are required to drive power system simulations. Future data sets will have to push the limits of numerical weather prediction to yield these high-resolution data sets, and wind data will have to be time-synchronized with solar data. Current wind and solar integration data sets are presented. The Wind Integration National Dataset (WIND) Toolkit is the largest and most complete grid integration data set publicly available to date. A meteorological data set, wind power production time series, and simulated forecasts created using the Weather Research and Forecasting Model run on a 2-km grid over the continental United States at a 5-min resolution are now publicly available for more than 126,000 land-based and offshore wind power production sites. The National Solar Radiation Database (NSRDB) is a similarly high temporal- and spatial-resolution database of 18 years of solar resource data for North America and India. The need for high-resolution weather data pushes modeling towards finer scales and closer synchronization. We also present how we anticipate such datasets developing in the future, their benefits, and the challenges with using and disseminating such large amounts of data.
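
    As a small illustration of the time-synchronization requirement mentioned above, the sketch below aligns a 5-minute wind series with an hourly solar series on one common index. This is a generic pandas pattern with synthetic data, not the WIND Toolkit or NSRDB production pipeline.

```python
import numpy as np
import pandas as pd

# Sketch of the kind of time-synchronization grid studies need: bringing wind
# and solar production series onto one common 5-minute index before feeding a
# power-system simulation. The series and column names here are synthetic.

wind = pd.Series(np.random.rand(12 * 24), name="wind_mw",
                 index=pd.date_range("2017-01-01", periods=12 * 24, freq="5min"))
solar_hourly = pd.Series(np.random.rand(24), name="solar_mw",
                         index=pd.date_range("2017-01-01", periods=24, freq="h"))

# Upsample the hourly solar data to 5 minutes (time interpolation), then align
# both series on the shared index; rows without both values are dropped.
solar = solar_hourly.resample("5min").interpolate("time")
combined = pd.concat([wind, solar], axis=1).dropna()
print(combined.head())
```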

  7. MIPS Arabidopsis thaliana Database (MAtDB): an integrated biological knowledge resource based on the first complete plant genome

    Science.gov (United States)

    Schoof, Heiko; Zaccaria, Paolo; Gundlach, Heidrun; Lemcke, Kai; Rudd, Stephen; Kolesov, Grigory; Arnold, Roland; Mewes, H. W.; Mayer, Klaus F. X.

    2002-01-01

    Arabidopsis thaliana is the first plant for which the complete genome has been sequenced and published. Annotation of complex eukaryotic genomes requires more than the assignment of genetic elements to the sequence. Besides completing the list of genes, we need to discover their cellular roles, their regulation and their interactions in order to understand the workings of the whole plant. The MIPS Arabidopsis thaliana Database (MAtDB; http://mips.gsf.de/proj/thal/db) started out as a repository for genome sequence data in the European Scientists Sequencing Arabidopsis (ESSA) project and the Arabidopsis Genome Initiative. Our aim is to transform MAtDB into an integrated biological knowledge resource by integrating diverse data, tools, query and visualization capabilities and by creating a comprehensive resource for Arabidopsis as a reference model for other species, including crop plants. PMID:11752263

  8. Direct numerical solution of the Ornstein-Zernike integral equation and spatial distribution of water around hydrophobic molecules

    Science.gov (United States)

    Ikeguchi, Mitsunori; Doi, Junta

    1995-09-01

    The Ornstein-Zernike integral equation (OZ equation) has been used to evaluate the distribution function of solvents around solutes, but its numerical solution is difficult for molecules with a complicated shape. This paper proposes a numerical method to directly solve the OZ equation by introducing a 3D lattice. The method employs none of the approximations that the reference interaction site model (RISM) equation employs. The method enables one to obtain the spatial distribution of spherical solvents around solutes with an arbitrary shape. Numerical accuracy is sufficient when the grid spacing is less than 0.5 Å for solvent water. The spatial water distribution around a propane molecule is demonstrated as an example of a nonspherical hydrophobic molecule using iso-value surfaces. The water model proposed by Pratt and Chandler is used. The distribution agrees with the molecular dynamics simulation. The distribution increases offshore molecular concavities. The spatial distribution of water around 5α-cholest-2-ene (C27H46) is visualized using computer graphics techniques and a similar trend is observed.
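
    The FFT-friendly structure of the OZ equation on a 3D lattice can be sketched with a textbook Picard iteration under the Percus-Yevick closure. The paper uses the Pratt-Chandler water model and its own scheme; the potential, grid and parameters below are illustrative only.

```python
import numpy as np

# Schematic direct solution of the Ornstein-Zernike equation on a 3D lattice
# with the Percus-Yevick closure and damped Picard (fixed-point) iteration.
# This is a generic textbook scheme, not the authors' exact algorithm.

N, dr, rho, beta = 32, 0.25, 0.2, 1.0       # grid points/axis, spacing, density
dv = dr**3                                   # volume element for FFT scaling

# Signed lattice coordinates (periodic, FFT ordering) and radial distance.
x = np.fft.fftfreq(N, d=1.0 / (N * dr))
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
r = np.sqrt(X**2 + Y**2 + Z**2)

u = np.minimum(4.0 * (1.0 / np.maximum(r, 1e-6))**12, 50.0)  # capped repulsion
f = np.exp(-beta * u) - 1.0                  # Mayer function

c = f.copy()                                 # initial guess: direct correlation
for _ in range(200):
    c_hat = np.fft.fftn(c) * dv
    gamma_hat = rho * c_hat**2 / (1.0 - rho * c_hat)  # OZ in Fourier space
    gamma = np.real(np.fft.ifftn(gamma_hat)) / dv     # gamma = h - c
    c_new = f * (1.0 + gamma)                # Percus-Yevick closure
    if np.max(np.abs(c_new - c)) < 1e-8:
        break
    c = 0.7 * c + 0.3 * c_new                # damped update for stability

g = 1.0 + gamma + c                          # pair distribution g(r) = h(r) + 1
```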

  9. TRENDS: A flight test relational database user's guide and reference manual

    Science.gov (United States)

    Bondi, M. J.; Bjorkman, W. S.; Cross, J. L.

    1994-01-01

    This report is designed to be a user's guide and reference manual for users intending to access rotorcraft test data via TRENDS, the relational database system which was developed as a tool for the aeronautical engineer with no programming background. This report has been written to assist novice and experienced TRENDS users. TRENDS is a complete system for retrieving, searching, and analyzing both numerical and narrative data, and for displaying time history and statistical data in graphical and numerical formats. This manual provides a 'guided tour' and a 'user's guide' for the new and intermediate-skilled users. Examples of the use of each menu item within TRENDS are provided in the Menu Reference section of the manual, including full coverage of TIMEHIST, one of the key tools. This manual is written around the XV-15 Tilt Rotor database, but does include an appendix on the UH-60 Blackhawk database. This user's guide and reference manual establishes a referable source for the research community and augments NASA TM-101025, TRENDS: The Aeronautical Post-Test Database Management System, Jan. 1990, written by the same authors.

  10. Introduction to precise numerical methods

    CERN Document Server

    Aberth, Oliver

    2007-01-01

    Precise numerical analysis may be defined as the study of computer methods for solving mathematical problems either exactly or to prescribed accuracy. This book explains how precise numerical analysis is constructed. The book also provides exercises which illustrate points from the text and references for the methods presented. All disc-based content for this title is now available on the Web. Highlights include clearer, simpler descriptions and explanations of the various numerical methods, and two new types of numerical problems: accurately solving partial differential equations with the included software, and computing line integrals in the complex plane.

  11. 77 FR 71089 - Pilot Loading of Aeronautical Database Updates

    Science.gov (United States)

    2012-11-29

    ...) card, rather than in resident memory. The database update was accomplished by removing the SD card with... frequency distance measuring equipment (DME), and any updates that affect system operating software--that... developed with attention to data integrity. Current technology uses databases which are developed in...

  12. The NAGRA/PSI thermochemical database: new developments

    International Nuclear Information System (INIS)

    Hummel, W.; Berner, U.; Thoenen, T.; Pearson, F.J.Jr.

    2000-01-01

    The development of a high quality thermochemical database for performance assessment is a scientifically fascinating and demanding task, and is not simply a matter of collecting and recording numbers. The final product can be visualised as a complex building with different storeys representing different levels of complexity. The present status report illustrates the various building blocks which we believe are integral to such a database structure. (authors)

  13. The NAGRA/PSI thermochemical database: new developments

    Energy Technology Data Exchange (ETDEWEB)

    Hummel, W.; Berner, U.; Thoenen, T. [Paul Scherrer Inst. (PSI), Villigen (Switzerland); Pearson, F.J.Jr. [Ground-Water Geochemistry, New Bern, NC (United States)

    2000-07-01

    The development of a high quality thermochemical database for performance assessment is a scientifically fascinating and demanding task, and is not simply a matter of collecting and recording numbers. The final product can be visualised as a complex building with different storeys representing different levels of complexity. The present status report illustrates the various building blocks which we believe are integral to such a database structure. (authors)

  14. Database Vs Data Warehouse

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Data warehouse technology includes a set of concepts and methods that offer users useful information for decision making. The necessity to build a data warehouse arises from the need to improve the quality of information in the organization. The data coming from different sources, having a variety of forms, both structured and unstructured, are filtered according to business rules and integrated into a single large data collection. Using informatics solutions, managers have understood that the data stored in operational systems, including databases, are an informational gold mine that must be exploited. Data warehouses have been developed to answer the increasing demands for complex analysis, which could not be properly achieved with operational databases. The present paper emphasizes some of the criteria that information application developers can use in order to choose between a database solution and a data warehouse one.

  15. Development of a PSA information database system

    International Nuclear Information System (INIS)

    Kim, Seung Hwan

    2005-01-01

    The need for a PSA information database to support performing a PSA has been growing rapidly. For example, performing a PSA requires a lot of data to analyze, to evaluate the risk, to trace the process of obtaining results and to verify the results. A PSA information database is a system that stores all PSA-related information in a database and file system, with cross links to jump to the physical documents whenever they are needed. Korea Atomic Energy Research Institute is developing a PSA information database system, AIMS (Advanced Information Management System for PSA). The objective is to integrate and computerize all the distributed information of a PSA into one system and to enhance the accessibility to PSA information for all PSA-related activities. This paper describes how we implemented such a database-centered application from the viewpoint of two areas: database design and data (document) services

  16. Database Aspects of Location-Based Services

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard

    2004-01-01

    Adopting a data management perspective on location-based services, this chapter explores central challenges to data management posed by location-based services. Because service users typically travel in, and are constrained to, transportation infrastructures, such structures must be represented in the databases underlying high-quality services. Several integrated representations - which capture different aspects of the same infrastructure - are needed. Further, all other content that can be related to geographical space must be integrated with the infrastructure representations. The chapter describes the general concepts underlying one approach to data modeling for location-based services. The chapter also covers techniques that are needed to keep a database for location-based services up to date with the reality it models. As part of this, caching is touched upon briefly. The notion of linear referencing...
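
    The closing mention of linear referencing deserves a concrete picture: locations are stored as a measure (a distance along a known road geometry) rather than as raw coordinates. A minimal sketch with a hypothetical polyline:

```python
import math

# Minimal sketch of linear referencing: convert a measure along a polyline
# (e.g., "7 km along road 42") back into a coordinate. The road geometry
# below is hypothetical.

def locate(polyline, measure):
    """Return the (x, y) point at `measure` units along `polyline`."""
    for (x1, y1), (x2, y2) in zip(polyline, polyline[1:]):
        seg = math.hypot(x2 - x1, y2 - y1)
        if measure <= seg:                      # point lies on this segment
            t = measure / seg
            return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
        measure -= seg                          # move on to the next segment
    return polyline[-1]                         # past the end: clamp

road = [(0.0, 0.0), (3.0, 4.0), (3.0, 10.0)]    # two segments: lengths 5 and 6
print(locate(road, 7.0))                         # (3.0, 6.0)
```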

  17. VariVis: a visualisation toolkit for variation databases

    Directory of Open Access Journals (Sweden)

    Smith Timothy D

    2008-04-01

    Background: With the completion of the Human Genome Project and recent advancements in mutation detection technologies, the volume of data available on genetic variations has risen considerably. These data are stored in online variation databases and provide important clues to the cause of diseases and potential side effects or resistance to drugs. However, the data presentation techniques employed by most of these databases make them difficult to use and understand. Results: Here we present a visualisation toolkit that can be employed by online variation databases to generate graphical models of gene sequence with corresponding variations and their consequences. The VariVis software package can run on any web server capable of executing Perl CGI scripts and can interface with numerous Database Management Systems and "flat-file" data files. VariVis produces two easily understandable graphical depictions of any gene sequence and matches these with variant data. While developed with the goal of improving the utility of human variation databases, the VariVis package can be used in any variation database to enhance utilisation of, and access to, critical information.

  18. Some Considerations about Modern Database Machines

    Directory of Open Access Journals (Sweden)

    Manole VELICANU

    2010-01-01

    Optimizing the two computing resources of any computing system, time and space, has always been one of the priority objectives of any database. A current and effective solution in this respect is the database machine. Optimizing computer applications by means of database machines has been a steady preoccupation of researchers since the late seventies. Several information technologies have revolutionized the present information framework. Out of these, the ones which have brought a major contribution to the optimization of databases are: efficient handling of large volumes of data (Data Warehouse, Data Mining, OLAP - On-Line Analytical Processing), the improvement of DBMS (Database Management Systems) facilities through the integration of the new technologies, and the dramatic increase in computing power and the efficient use of it (computer networks, massively parallel computing, Grid Computing and so on). All these information technologies, and others, have favored the resumption of research on database machines and the achievement in the last few years of some very good practical results regarding the optimization of computing resources.

  19. PostGIS-Based Heterogeneous Sensor Database Framework for the Sensor Observation Service

    Directory of Open Access Journals (Sweden)

    Ikechukwu Maduako

    2012-10-01

    Environmental monitoring and management systems in most cases deal with models and spatial analytics that involve the integration of in-situ and remote sensor observations. In-situ sensor observations and those gathered by remote sensors are usually provided by different databases and services in real-time dynamic services such as the Geo-Web Services. Thus, data have to be pulled from different databases and transferred over the network before they are fused and processed on the service middleware. This process imposes massive, unnecessary communication and processing load on the service: large rasters are downloaded from flat-file raster data sources each time a request is made, and heavy integration and geo-processing work falls on the service middleware when it could be better leveraged at the database level. In this paper, we propose and present a heterogeneous sensor database framework for integration, geo-processing and spatial analysis of remote and in-situ sensor observations at the database level, and show how this can be integrated into the Sensor Observation Service (SOS) to reduce communication and workload on the geospatial web services and make query requests from the user end far more flexible.
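
    A hedged sketch of what "processing at the database level" can look like in practice: sampling a remote-sensing raster at in-situ station locations inside PostGIS, so only the fused result rows leave the database. All table, column and connection names below are hypothetical, and this is a generic PostGIS pattern rather than the authors' exact framework.

```python
import psycopg2

# Push the sensor-data fusion into the database: sample a satellite raster at
# each in-situ station instead of downloading the whole raster to the service
# middleware. Schema and connection details are invented for illustration.

SQL = """
SELECT s.station_id,
       s.observed_temp,
       ST_Value(r.rast, s.geom) AS satellite_temp   -- raster sampled in-DB
FROM   insitu_observations AS s
JOIN   satellite_rasters   AS r
       ON ST_Intersects(r.rast, s.geom)             -- only the covering tile
WHERE  s.obs_time = %s;
"""

with psycopg2.connect("dbname=sensors user=sos") as conn:
    with conn.cursor() as cur:
        cur.execute(SQL, ("2012-10-01 12:00:00",))
        for station, in_situ, remote in cur.fetchall():
            print(station, in_situ, remote)
```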

  20. The Single- and Multichannel Audio Recordings Database (SMARD)

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Jensen, Jesper Rindom; Jensen, Søren Holdt

    2014-01-01

    A new single- and multichannel audio recordings database (SMARD) is presented in this paper. The database contains recordings from a box-shaped listening room for various loudspeaker and array types. The recordings were made for 48 different configurations of three different loudspeakers and four different microphone arrays. In each configuration, 20 different audio segments were played and recorded, ranging from simple artificial sounds to polyphonic music. SMARD can be used for testing algorithms developed for numerous applications, and we give examples of source localisation results.

  1. International Nuclear Safety Center (INSC) database

    International Nuclear Information System (INIS)

    Sofu, T.; Ley, H.; Turski, R.B.

    1997-01-01

    As an integral part of DOE's International Nuclear Safety Center (INSC) at Argonne National Laboratory, the INSC Database has been established to provide an interactively accessible information resource for the world's nuclear facilities and to promote free and open exchange of nuclear safety information among nations. The INSC Database is a comprehensive resource database aimed at a scope and level of detail suitable for safety analysis and risk evaluation for the world's nuclear power plants and facilities. It also provides an electronic forum for international collaborative safety research for the Department of Energy and its international partners. The database is intended to provide plant design information, material properties, computational tools, and results of safety analysis. Initial emphasis in data gathering is given to Soviet-designed reactors in Russia, the former Soviet Union, and Eastern Europe. The implementation is performed under the Oracle database management system, and the World Wide Web is used to serve as the access path for remote users. An interface between the Oracle database and the Web server is established through a custom designed Web-Oracle gateway which is used mainly to perform queries on the stored data in the database tables

  2. An accurate real-time model of maglev planar motor based on compound Simpson numerical integration

    Science.gov (United States)

    Kou, Baoquan; Xing, Feng; Zhang, Lu; Zhou, Yiheng; Liu, Jiaqi

    2017-05-01

    To realize the high-speed and precise control of the maglev planar motor, a more accurate real-time electromagnetic model, which considers the influence of the coil corners, is proposed in this paper. Three coordinate systems, for the stator, the mover and the corner coil, are established. The coil is divided into two segments, the straight coil segment and the corner coil segment, in order to obtain a complete electromagnetic model. When only the first harmonic of the flux density distribution of the Halbach magnet array is taken into account, the integration can be carried out over the two segments according to the Lorentz force law. The force and torque formulas of the straight coil segment can be derived directly from the Newton-Leibniz formula; however, this is not applicable to the corner coil segment. Therefore, the compound Simpson numerical integration method is proposed in this paper to handle the corner segment. Validated by simulation and experiment, the proposed model has high accuracy and can easily be put to practical use.
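
    The compound (composite) Simpson rule named above is standard quadrature and easy to state; the integrand below is a stand-in for the Lorentz force density along the quarter-circle corner arc, not the paper's actual motor model.

```python
import math

# Composite ("compound") Simpson's rule: integrate f over [a, b] using an
# even number of uniform subintervals, with the classic 1-4-2-...-4-1 weights.

def composite_simpson(f, a, b, n=100):
    n += n % 2                        # force an even subinterval count
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + i * h) for i in range(1, n, 2))   # odd nodes
    s += 2.0 * sum(f(a + i * h) for i in range(2, n, 2))   # even interior nodes
    return s * h / 3.0

# Stand-in integrand over a quarter-circle corner (angle 0..pi/2): a first
# spatial harmonic of a Halbach-array flux density seen along the arc.
force = composite_simpson(lambda th: math.cos(3.0 * math.sin(th)), 0.0, math.pi / 2)
print(force)
```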

  3. Towards a Component Based Model for Database Systems

    Directory of Open Access Journals (Sweden)

    Octavian Paul ROTARU

    2004-02-01

    Due to their effectiveness in the design and development of software applications and due to their recognized advantages in terms of reusability, Component-Based Software Engineering (CBSE) concepts have been arousing a great deal of interest in recent years. This paper presents and extends a component-based approach to object-oriented database systems (OODB) introduced by us in [1] and [2]. Components are proposed as a new abstraction level for database systems: logical partitions of the schema. In this context, the scope is introduced as an escalated property for transactions. Components are studied from the integrity, consistency, and concurrency control perspective. The main benefits of our proposed component model for OODB are the reusability of the database design, including the access statistics required for a proper query optimization, and a smooth information exchange. The integration of crosscutting concerns into the component database model using aspect-oriented techniques is also discussed. One of the main goals is to define a method for the assessment of component composition capabilities. These capabilities are restricted by the component’s interface and measured in terms of adaptability, degree of compose-ability and acceptability level. The above-mentioned metrics are extended from database components to generic software components. This paper extends and consolidates into one common view the ideas previously presented by us in [1, 2, 3]. [1] Octavian Paul Rotaru, Marian Dobre, Component Aspects in Object Oriented Databases, Proceedings of the International Conference on Software Engineering Research and Practice (SERP’04), Volume II, ISBN 1-932415-29-7, pages 719-725, Las Vegas, NV, USA, June 2004. [2] Octavian Paul Rotaru, Marian Dobre, Mircea Petrescu, Integrity and Consistency Aspects in Component-Oriented Databases, Proceedings of the International Symposium on Innovation in Information and Communication Technology (ISIICT

  4. Extracting meronymy relations from domain-specific, textual corporate databases

    NARCIS (Netherlands)

    Ittoo, R.A.; Bouma, G.; Maruster, L.; Wortmann, J.C.; Hopfe, C.J.; Rezgui, Y.; Métais, E.; Preece, A.; Li, H.

    2010-01-01

    Various techniques for learning meronymy relationships from open-domain corpora exist. However, extracting meronymy relationships from domain-specific, textual corporate databases has been overlooked, despite numerous application opportunities particularly in domains like product development and/or

  5. RAACFDb: Rheumatoid arthritis ayurvedic classical formulations database.

    Science.gov (United States)

    Mohamed Thoufic Ali, A M; Agrawal, Aakash; Sajitha Lulu, S; Mohana Priya, A; Vino, S

    2017-02-02

    In recent years, the treatment of rheumatoid arthritis (RA) has undergone remarkable changes in all therapeutic modes. The current focus of clinical research is to identify new directions for better treatment options for RA. Recent ethnopharmacological investigations revealed that traditional herbal remedies are the most preferred modality of complementary and alternative medicine (CAM). However, several ayurvedic modes of treatment and formulations for RA are not much studied or documented in the Indian traditional system of medicine. This directed us to develop an integrated database, RAACFDb (acronym: Rheumatoid Arthritis Ayurvedic Classical Formulations Database), by consolidating data from the repository of Vedic Samhita - The Ayurveda, to make the available formulation information easy to retrieve. Literature data was gathered using several search engines and from ayurvedic practitioners for loading information into the database. In order to represent the collected information about classical ayurvedic formulations, an integrated database was constructed and implemented on a MySQL and PHP back-end. The database describes all the ayurvedic classical formulations for the treatment of rheumatoid arthritis, including composition, usage, plant parts used, active ingredients present in the composition and their structures. The prime objective is to locate ayurvedic formulations proven to be successful and highly effective among patients, with reduced side effects. The database (freely available at www.beta.vit.ac.in/raacfdb/index.html) hopefully enables easy access for clinical researchers and students to discover novel leads with reduced side effects. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  6. Development of database systems for safety of repositories for disposal of radioactive wastes

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yeong Hun; Han, Jeong Sang; Shin, Hyeon Jun; Ham, Sang Won; Kim, Hye Seong [Yonsei Univ., Seoul (Korea, Republic of)

    1999-03-15

    In this study, a GSIS is developed to maximize the effectiveness of the database system. For this purpose, spatial relations are established among the data from various fields that are held in the database, which was developed for the site selection and management of a repository for radioactive waste disposal. By constructing an integration system that can link attribute and spatial data, it is possible to evaluate the safety of a repository effectively and economically. The suitability of integrating the database and GSIS is examined by constructing the database in a test district where the site characteristics are similar to those of a repository for radioactive waste disposal.

  7. An integrative clinical database and diagnostics platform for biomarker identification and analysis in ion mobility spectra of human exhaled air

    DEFF Research Database (Denmark)

    Schneider, Till; Hauschild, Anne-Christin; Baumbach, Jörg Ingo

    2013-01-01

    ... to have a clear understanding of the detailed composition of human breath. Therefore, in addition to the clinical studies, there is a need for a flexible and comprehensive centralized data repository, which is capable of gathering all kinds of related information. Moreover, there is a demand for automated data integration and semi-automated data analysis, in particular with regard to the rapid data accumulation emerging from the high-throughput nature of the MCC/IMS technology. Here, we present a comprehensive database application and analysis platform, which combines metabolic maps with heterogeneous biomedical data in a well-structured manner. The design of the database is based on a hybrid of the entity-attribute-value (EAV) model and the EAV-CR, which incorporates the concepts of classes and relationships. Additionally it offers an intuitive user interface that provides easy and quick access ...
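
    The entity-attribute-value (EAV) design mentioned above stores heterogeneous facts in one narrow table instead of one wide table per record type, which is why new attributes need no schema change. A minimal sketch with hypothetical table and attribute names:

```python
import sqlite3

# Minimal sketch of the entity-attribute-value (EAV) pattern: heterogeneous
# measurements live in one narrow table. This is a generic illustration, not
# the platform's actual schema; all names are hypothetical.

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE entity (id INTEGER PRIMARY KEY, kind TEXT);
CREATE TABLE eav (
    entity_id INTEGER REFERENCES entity(id),
    attribute TEXT,
    value     TEXT
);
""")

# One breath sample with arbitrary attributes; adding a new peak or clinical
# annotation later requires no schema change, only more rows.
db.execute("INSERT INTO entity VALUES (1, 'breath_sample')")
db.executemany("INSERT INTO eav VALUES (?, ?, ?)", [
    (1, "patient_age", "54"),
    (1, "diagnosis", "COPD"),
    (1, "peak_2-propanol_intensity", "0.83"),
])

for row in db.execute("SELECT attribute, value FROM eav WHERE entity_id = 1"):
    print(row)
```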

  8. CyanoEXpress: A web database for exploration and visualisation of the integrated transcriptome of cyanobacterium Synechocystis sp. PCC6803.

    Science.gov (United States)

    Hernandez-Prieto, Miguel A; Futschik, Matthias E

    2012-01-01

    Synechocystis sp. PCC6803 is one of the best studied cyanobacteria and an important model organism for our understanding of photosynthesis. The early availability of its complete genome sequence initiated numerous transcriptome studies, which have generated a wealth of expression data. Analysis of the accumulated data can be a powerful tool to study transcription in a comprehensive manner and to reveal underlying regulatory mechanisms, as well as to annotate genes whose functions are yet unknown. However, use of divergent microarray platforms, as well as distributed data storage make meta-analyses of Synechocystis expression data highly challenging, especially for researchers with limited bioinformatic expertise and resources. To facilitate utilisation of the accumulated expression data for a wider research community, we have developed CyanoEXpress, a web database for interactive exploration and visualisation of transcriptional response patterns in Synechocystis. CyanoEXpress currently comprises expression data for 3073 genes and 178 environmental and genetic perturbations obtained in 31 independent studies. At present, CyanoEXpress constitutes the most comprehensive collection of expression data available for Synechocystis and can be freely accessed. The database is available for free at http://cyanoexpress.sysbiolab.eu.

  9. Linking the Taiwan Fish Database to the Global Database

    Directory of Open Access Journals (Sweden)

    Kwang-Tsao Shao

    2007-03-01

    Full Text Available Under the support of the National Digital Archive Program (NDAP, basic species information about most Taiwanese fishes, including their morphology, ecology, distribution, specimens with photos, and literatures have been compiled into the "Fish Database of Taiwan" (http://fishdb.sinica.edu.tw. We expect that the all Taiwanese fish species databank (RSD, with 2800+ species, and the digital "Fish Fauna of Taiwan" will be completed in 2007. Underwater ecological photos and video images for all 2,800+ fishes are quite difficult to achieve but will be collected continuously in the future. In the last year of NDAP, we have successfully integrated all fish specimen data deposited at 7 different institutes in Taiwan as well as their collection maps on the Google Map and Google Earth. Further, the database also provides the pronunciation of Latin scientific names and transliteration of Chinese common names by referring to the Romanization system for all Taiwanese fishes (2,902 species in 292 families so far. The Taiwanese fish species checklist with Chinese common/vernacular names and specimen data has been updated periodically and provided to the global FishBase as well as the Global Biodiversity Information Facility (GBIF through the national portal of the Taiwan Biodiversity Information Facility (TaiBIF. Thus, Taiwanese fish data can be queried and browsed on the WWW. For contributing to the "Barcode of Life" and "All Fishes" international projects, alcohol-preserved specimens of more than 1,800 species and cryobanking tissues of 800 species have been accumulated at RCBAS in the past two years. Through this close collaboration between local and global databases, "The Fish Database of Taiwan" now attracts more than 250,000 visitors and achieves 5 million hits per month. We believe that this local database is becoming an important resource for education, research, conservation, and sustainable use of fish in Taiwan.

  10. ATLAS database application enhancements using Oracle 11g

    CERN Document Server

    Dimitrov, G; The ATLAS collaboration; Blaszczyk, M; Sorokoletov, R

    2012-01-01

    The ATLAS experiment at LHC relies on databases for detector online data-taking, storage and retrieval of configurations, calibrations and alignments, post data-taking analysis, file management over the grid, job submission and management, and condition data replication to remote sites. The Oracle Relational Database Management System (RDBMS) has been addressing the ATLAS database requirements to a great extent for many years. Ten database clusters are currently deployed for the needs of the different applications, divided into production, integration and standby databases. The data volume, complexity and demands from the users are increasing steadily with time. Nowadays more than 20 TB of data are stored in the ATLAS production Oracle databases at CERN (not including the index overhead), but the most impressive number is the 260 hosted database schemas (in the most common case each schema is related to a dedicated client application with its own requirements). At the beginning of 2012 all ATLAS databases at CERN have...

  11. YMDB: the Yeast Metabolome Database

    Science.gov (United States)

    Jewison, Timothy; Knox, Craig; Neveu, Vanessa; Djoumbou, Yannick; Guo, An Chi; Lee, Jacqueline; Liu, Philip; Mandal, Rupasri; Krishnamurthy, Ram; Sinelnikov, Igor; Wilson, Michael; Wishart, David S.

    2012-01-01

    The Yeast Metabolome Database (YMDB, http://www.ymdb.ca) is a richly annotated ‘metabolomic’ database containing detailed information about the metabolome of Saccharomyces cerevisiae. Modeled closely after the Human Metabolome Database, the YMDB contains >2000 metabolites with links to 995 different genes/proteins, including enzymes and transporters. The information in YMDB has been gathered from hundreds of books, journal articles and electronic databases. In addition to its comprehensive literature-derived data, the YMDB also contains an extensive collection of experimental intracellular and extracellular metabolite concentration data compiled from detailed Mass Spectrometry (MS) and Nuclear Magnetic Resonance (NMR) metabolomic analyses performed in our lab. This is further supplemented with thousands of NMR and MS spectra collected on pure, reference yeast metabolites. Each metabolite entry in the YMDB contains an average of 80 separate data fields including comprehensive compound description, names and synonyms, structural information, physico-chemical data, reference NMR and MS spectra, intracellular/extracellular concentrations, growth conditions and substrates, pathway information, enzyme data, gene/protein sequence data, as well as numerous hyperlinks to images, references and other public databases. Extensive searching, relational querying and data browsing tools are also provided that support text, chemical structure, spectral, molecular weight and gene/protein sequence queries. Because of S. cerevisiae's importance as a model organism for biologists and as a biofactory for industry, we believe this kind of database could have considerable appeal not only to metabolomics researchers, but also to yeast biologists, systems biologists, the industrial fermentation industry, as well as the beer, wine and spirit industry. PMID:22064855

  12. An Integrated Numerical Model for the Design of Coastal Protection Structures

    Directory of Open Access Journals (Sweden)

    Theophanis V. Karambas

    2017-10-01

    In the present work, an integrated coastal engineering numerical model is presented. The model simulates the linear wave propagation, wave-induced circulation, and sediment transport and bed morphology evolution. It consists of three main modules: WAVE_L, WICIR, and SEDTR. The nearshore wave transformation module WAVE_L (WAVE_Linear) is based on the hyperbolic-type mild slope equation and is valid for a compound linear wave field near coastal structures where the waves are subjected to the combined effects of shoaling, refraction, diffraction, reflection (total and partial), and breaking. Radiation stress components (calculated from WAVE_L) drive the depth-averaged circulation module WICIR (Wave Induced CIRculation) for the description of the nearshore wave-induced currents. Sediment transport and bed morphology evolution in the nearshore, surf, and swash zone are simulated by the SEDTR (SEDiment TRansport) module. The model is tested against experimental data to study the effect of representative coastal protection structures and is applied to a real case study of a coastal engineering project in North Greece, producing accurate and consistent results for a versatile range of layouts.

  13. Experience using a distributed object oriented database for a DAQ system

    International Nuclear Information System (INIS)

    Bee, C.P.; Eshghi, S.; Jones, R.

    1996-01-01

    To configure the RD13 data acquisition system, we need many parameters which describe the various hardware and software components. Such information has been defined using an entity-relation model and stored in a commercial memory-resident database. During the last year, Itasca, an object oriented database management system (OODB), was chosen as a replacement database system. We have ported the existing databases (hardware and software configurations, run parameters etc.) to Itasca and integrated it with the run control system. We believe that it is possible to use an OODB in real-time environments such as DAQ systems. In this paper, we present our experience and impressions: why we wanted to change from an entity-relational approach, some useful features of Itasca, and the issues we met during this project, including integration of the database into an existing distributed environment and factors which influence performance. (author)

  14. Scale out databases for CERN use cases

    International Nuclear Information System (INIS)

    Baranowski, Zbigniew; Grzybek, Maciej; Canali, Luca; Garcia, Daniel Lanza; Surdy, Kacper

    2015-01-01

    Data generation rates are expected to grow very fast for some database workloads going into LHC run 2 and beyond. In particular this is expected for data coming from controls, logging and monitoring systems. Storing, administering and accessing big data sets in a relational database system can quickly become a very hard technical challenge, as the size of the active data set and the number of concurrent users increase. Scale-out database technologies are a rapidly developing set of solutions for deploying and managing very large data warehouses on commodity hardware and with open source software. In this paper we will describe the architecture and tests on database systems based on Hadoop and the Cloudera Impala engine. We will discuss the results of our tests, including tests of data loading and integration with existing data sources and in particular with relational databases. We will report on query performance tests done with various data sets of interest at CERN, notably data from the accelerator log database. (paper)

  15. ODG: Omics database generator - a tool for generating, querying, and analyzing multi-omics comparative databases to facilitate biological understanding.

    Science.gov (United States)

    Guhlin, Joseph; Silverstein, Kevin A T; Zhou, Peng; Tiffin, Peter; Young, Nevin D

    2017-08-10

    Rapid generation of omics data in recent years has resulted in vast amounts of disconnected datasets without systemic integration and knowledge building, while individual groups have made customized, annotated datasets available on the web with few ways to link them to in-lab datasets. With so many research groups generating their own data, the ability to relate it to the larger genomic and comparative genomic context is becoming increasingly crucial to make full use of the data. The Omics Database Generator (ODG) allows users to create customized databases that utilize published genomics data integrated with experimental data, and which can be queried using a flexible graph database. When provided with omics and experimental data, ODG will create a comparative, multi-dimensional graph database. ODG can import definitions and annotations from other sources such as InterProScan, the Gene Ontology, ENZYME, UniPathway, and others. This annotation data can be especially useful for studying new or understudied species for which transcripts have only been predicted, and can rapidly give additional layers of annotation to predicted genes. In better studied species, ODG can perform syntenic annotation translations or rapidly identify characteristics of a set of genes or nucleotide locations, such as hits from an association study. ODG provides a web-based user interface for configuring the data import and for querying the database. Queries can also be run from the command line, and the database can be queried directly through programming language hooks available for most languages. ODG supports most common genomic formats as well as a generic, easy-to-use tab-separated value format for user-provided annotations. ODG is a user-friendly database generation and query tool that adapts to the supplied data to produce a comparative genomic database or multi-layered annotation database. ODG provides rapid comparative genomic annotation and is therefore particularly useful for non-model or

  16. Upgrade of laser and electron beam welding database

    CERN Document Server

    Furman, Magdalena

    2014-01-01

    The main purpose of this project was to fix existing issues and update the existing database holding parameters of laser-beam and electron-beam welding machines. Moreover, the database had to be extended to hold the data for the new machines that arrived recently at the workshop. As a solution, the database was migrated to the Oracle framework, and a new user interface (using APEX) was designed and implemented with integration with the CERN web services (EDMS, Phonebook, JMT, CDD and EDH).

  17. Data Cleaning and Semantic Improvement in Biological Databases

    Directory of Open Access Journals (Sweden)

    Apiletti Daniele

    2006-12-01

    Full Text Available Public genomic and proteomic databases can be affected by a variety of errors. These errors may involve either the description or the meaning of data (namely, syntactic or semantic errors). We focus our analysis on the detection of semantic errors, in order to verify the accuracy of the stored information. In particular, we address the issue of data constraints and functional dependencies among attributes in a given relational database. Constraints and dependencies express semantics among attributes in a database schema, and knowledge of them may be exploited to improve data quality and integration in database design, and to perform query optimization and dimensional reduction.
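
    A functional dependency like those the entry analyzes can be checked directly against table data. The following Python sketch is a minimal illustration under invented column names and records, not the detection algorithm of the paper: it reports left-hand-side value groups that map to more than one right-hand-side value.

    ```python
    from collections import defaultdict

    def fd_violations(rows, lhs, rhs):
        """Return LHS value groups that map to conflicting RHS values,
        i.e. witnesses that the functional dependency lhs -> rhs fails."""
        seen = {}                      # lhs tuple -> first rhs tuple observed
        violations = defaultdict(set)
        for row in rows:
            key = tuple(row[a] for a in lhs)
            val = tuple(row[a] for a in rhs)
            if key in seen and seen[key] != val:
                violations[key].update({seen[key], val})
            else:
                seen.setdefault(key, val)
        return dict(violations)

    # Toy records: 'gene' should determine 'organism'; the second row
    # breaks that dependency, flagging a possible semantic error.
    records = [
        {"acc": "A1", "gene": "TP53", "organism": "H. sapiens"},
        {"acc": "A2", "gene": "TP53", "organism": "M. musculus"},
    ]
    print(fd_violations(records, lhs=["gene"], rhs=["organism"]))
    ```

    An empty result means the dependency holds on the data and can then be exploited for the query optimization and dimensional reduction mentioned above.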

  18. Documentation of databases in the Wilmar Planning tool

    International Nuclear Information System (INIS)

    Kiviluoma, J.; Meibom, P.

    2006-01-01

    The Wilmar Planning tool consists of a number of databases and models as shown in Figure 1. This report documents the design of the following subparts of the Wilmar Planning tool: 1. The Scenario database holding the scenario trees generated from the Scenario Tree Creation model. 2. The Input database holding input data to the Joint Market model and the Long-term model apart from the scenario trees. 3. The output database containing the results of a Joint Market model run. The Wilmar Planning Tool is developed in the project Wind Power Integration in Liberalised Electricity Markets (WILMAR) supported by EU (contract ENK5-CT-2002-00663). (LN)

  19. MIPS PlantsDB: a database framework for comparative plant genome research.

    Science.gov (United States)

    Nussbaumer, Thomas; Martis, Mihaela M; Roessner, Stephan K; Pfeifer, Matthias; Bader, Kai C; Sharma, Sapna; Gundlach, Heidrun; Spannagl, Manuel

    2013-01-01

    The rapidly increasing amount of plant genome (sequence) data enables powerful comparative analyses and integrative approaches and also requires structured and comprehensive information resources. Databases are needed for both model and crop plant organisms, and both intuitive search/browse views and comparative genomics tools should communicate the data to researchers and help them interpret it. MIPS PlantsDB (http://mips.helmholtz-muenchen.de/plant/genomes.jsp) was initially described in NAR in 2007 [Spannagl,M., Noubibou,O., Haase,D., Yang,L., Gundlach,H., Hindemitt, T., Klee,K., Haberer,G., Schoof,H. and Mayer,K.F. (2007) MIPSPlantsDB-plant database resource for integrative and comparative plant genome research. Nucleic Acids Res., 35, D834-D840] and was set up from the start to provide data and information resources for individual plant species as well as a framework for integrative and comparative plant genome research. PlantsDB comprises database instances for tomato, Medicago, Arabidopsis, Brachypodium, Sorghum, maize, rice, barley and wheat. Building on that, state-of-the-art comparative genomics tools such as CrowsNest are integrated to visualize and investigate syntenic relationships between monocot genomes. Results from novel genome analysis strategies targeting the complex and repetitive genomes of triticeae species (wheat and barley) are provided and cross-linked with model species. The MIPS Repeat Element Database (mips-REdat) and Catalog (mips-REcat) as well as tight connections to other databases, e.g. via web services, are further important components of PlantsDB.

  20. Cross: an OWL wrapper for reasoning on relational databases

    NARCIS (Netherlands)

    Champin, P.A.; Houben, G.J.P.M.; Thiran, Ph.; Parent, C.; Schewe, K.D.; Storey, V.C.; Thalheim, B.

    2007-01-01

    One of the challenges of the Semantic Web is to integrate the huge amount of information already available on the standard Web, usually stored in relational databases. In this paper, we propose a formalization of a logic model of relational databases, and a transformation of that model into OWL, a

  1. Development of reliability databases and the particular requirements of probabilistic risk analyses

    International Nuclear Information System (INIS)

    Meslin, T.

    1989-01-01

    Nuclear utilities have an increasing need to develop reliability databases for their operating experience. The purposes of these databases are often multiple, covering both equipment maintenance aspects and probabilistic risk analyses. EDF has therefore been developing experience feedback databases, including the Reliability Data Recording System (SRDF) and the Event File, as well as the history of numerous operating documents. Furthermore, since the end of 1985, EDF has been preparing a probabilistic safety analysis applied to one 1,300 MWe unit, for which a large amount of data of French origin is necessary. This data concerns both component reliability parameters and initiating event frequencies. The study has thus been an opportunity to try out the performance of the databases in a specific application, as well as to conduct in-depth audits of a number of nuclear sites in order to validate numerous results. Computer-aided data collection is also on trial in a number of plants. After describing the EDF operating experience feedback files, we discuss the particular requirements of probabilistic risk analyses and the resources implemented by EDF to satisfy them. (author). 5 refs

  2. SSC lattice database and graphical interface

    International Nuclear Information System (INIS)

    Trahern, C.G.; Zhou, J.

    1991-11-01

    When completed, the Superconducting Super Collider will be the world's largest accelerator complex. In order to build this system on schedule, the use of database technologies will be essential. In this paper we discuss one of the database efforts underway at the SSC, the lattice database. The SSC lattice database provides a centralized source for the design of each major component of the accelerator complex. This includes the two collider rings, the High Energy Booster, Medium Energy Booster, Low Energy Booster, and the LINAC, as well as transfer and test beam lines. These designs have been created using a menagerie of programs such as SYNCH, DIMAD, MAD, TRANSPORT, MAGIC, TRACE3D and TEAPOT. However, once a design has been completed, it is entered into a uniform database schema in the database system. In this paper we discuss the reasons for creating the lattice database and its implementation via the commercial database system SYBASE. Each lattice in the lattice database is composed of a set of tables whose data structure can describe any of the SSC accelerator lattices. In order to allow the user community access to the databases, a programmatic interface known as dbsf (for database to several formats) has been written. Dbsf creates ASCII input files appropriate to the above-mentioned accelerator design programs. In addition, it has a binary dataset output using the Self Describing Standard data discipline provided with the Integrated Scientific Tool Kit software tools. Finally, we discuss the graphical interfaces to the lattice database. The primary interface, known as OZ, is a simulation environment as well as a database browser.

  3. Integrated application of the database for airborne geophysical survey achievement information

    International Nuclear Information System (INIS)

    Ji Zengxian; Zhang Junwei

    2006-01-01

    The paper briefly introduces the database of information on airborne geophysical survey achievements. This database was developed on the Microsoft Windows platform using Visual C++ 6.0 and MapGIS. It is an information management system for airborne geophysical survey achievements, with complete functions for graphic display, graphic cutting and output, data query, printing of documents and reports, database maintenance, etc. All information on airborne geophysical survey achievements in the nuclear industry from 1972 to 2003 is embedded in it. Based on the regional geological map and the Meso-Cenozoic basin map, detailed statistical information on each airborne survey area and on each airborne radioactive anomalous point and high-field point can be presented visually, in combination with geological or basin research results. The successful development of this system provides a good base and platform for the management of archives and data of airborne geophysical survey achievements in the nuclear industry. (authors)

  4. Integrated remote sensing and visualization (IRSV) system for transportation infrastructure operations and management, phase two, volume 4 : web-based bridge information database--visualization analytics and distributed sensing.

    Science.gov (United States)

    2012-03-01

    This report introduces the design and implementation of a Web-based bridge information visual analytics system. This project integrates Internet, multiple databases, remote sensing, and other visualization technologies. The result combines a GIS ...

  5. HCVpro: Hepatitis C virus protein interaction database

    KAUST Repository

    Kwofie, Samuel K.

    2011-12-01

    It is essential to catalog characterized hepatitis C virus (HCV) protein-protein interaction (PPI) data and the associated plethora of vital functional information to augment the search for therapies, vaccines and diagnostic biomarkers. In furtherance of these goals, we have developed the hepatitis C virus protein interaction database (HCVpro) by integrating manually verified hepatitis C virus-virus and virus-human protein interactions curated from literature and databases. HCVpro is a comprehensive and integrated HCV-specific knowledgebase housing consolidated information on PPIs, functional genomics and molecular data obtained from a variety of virus databases (VirHostNet, VirusMint, HCVdb and euHCVdb), and from BIND and other relevant biology repositories. HCVpro is further populated with information on hepatocellular carcinoma (HCC) related genes that are mapped onto their encoded cellular proteins. Incorporated proteins have been mapped onto Gene Ontologies, canonical pathways, Online Mendelian Inheritance in Man (OMIM) and extensively cross-referenced to other essential annotations. The database is enriched with exhaustive reviews on structure and functions of HCV proteins, current state of drug and vaccine development and links to recommended journal articles. Users can query the database using specific protein identifiers (IDs), chromosomal locations of a gene, interaction detection methods, indexed PubMed sources as well as HCVpro, BIND and VirusMint IDs. The use of HCVpro is free and the resource can be accessed via http://apps.sanbi.ac.za/hcvpro/ or http://cbrc.kaust.edu.sa/hcvpro/. © 2011 Elsevier B.V.

  6. Application of material databases for improved reliability of reactor pressure vessels

    International Nuclear Information System (INIS)

    Griesbach, T.J.; Server, W.L.; Beaudoin, B.F.; Burgos, B.N.

    1994-01-01

    A vital part of a reactor vessel Life Cycle Management program is an accurate characterization of the vessel material properties. Uncertainties in vessel material properties, or the use of bounding values, may result in unnecessary conservatisms in vessel integrity calculations. These conservatisms may be eliminated through a better understanding of the material properties in reactor vessels, in both the unirradiated and irradiated conditions. Reactor vessel material databases are available for quantifying the chemistry and Charpy shift behavior of individual heats of reactor vessel materials. Application of the databases for vessels with embrittlement concerns has proven to be an effective embrittlement management tool. This paper presents details of database development and applications which demonstrate the value of using material databases for improving material chemistry estimates and for maximizing the data from integrated material surveillance programs.

  7. Human Variome Project Quality Assessment Criteria for Variation Databases.

    Science.gov (United States)

    Vihinen, Mauno; Hancock, John M; Maglott, Donna R; Landrum, Melissa J; Schaafsma, Gerard C P; Taschner, Peter

    2016-06-01

    Numerous databases containing information about DNA, RNA, and protein variations are available. Gene-specific variant databases (locus-specific variation databases, LSDBs) are typically curated and maintained for single genes or groups of genes for a certain disease(s). These databases are widely considered the most reliable information source for a particular gene/protein/disease, but it should also be made clear that they may have widely varying contents, infrastructure, and quality. Quality is very important to evaluate because these databases may affect health decision-making, research, and clinical practice. The Human Variome Project (HVP) established a Working Group for Variant Database Quality Assessment. The basic principle was to develop a simple system that nevertheless provides a good overview of the quality of a database. The resulting HVP quality evaluation criteria are divided into four main components: data quality, technical quality, accessibility, and timeliness. This report elaborates on the developed quality criteria and how implementation of the quality scheme can be achieved. Examples are provided for the current status of the quality items in two different databases: BTKbase, an LSDB, and ClinVar, a central archive of submissions about variants and their clinical significance. © 2016 WILEY PERIODICALS, INC.

  8. Physics analysis database for the DIII-D tokamak

    International Nuclear Information System (INIS)

    Schissel, D.P.; Bramson, G.; DeBoo, J.C.

    1986-01-01

    The authors report on a centralized database for handling reduced data for physics analysis implemented for the DIII-D tokamak. Each database record corresponds to a specific snapshot in time for a selected discharge. Features of the database environment include automatic updating, data integrity checks, and data traceability. Reduced data from each diagnostic comprises a dedicated data bank (a subset of the database) with quality assurance provided by a physicist. These data banks will be used to create profile banks which will be input to a transport code to create a transport bank. Access to the database is initially through FORTRAN programs. One user interface, PLOTN, is a command driven program to select and display data subsets. Another user interface, PROF, compares and displays profiles. The database is implemented on a Digital Equipment Corporation VAX 8600 running VMS

  9. Exploration of a Vision for Actor Database Systems

    DEFF Research Database (Denmark)

    Shah, Vivek

    ... of these services. Existing popular approaches to building these services either use an in-memory database system or an actor runtime. We observe that these approaches have complementary strengths and weaknesses. In this dissertation, we propose the integration of actor programming models in database systems. ... In doing so, we lay down a vision for a new class of systems called actor database systems. To explore this vision, this dissertation crystallizes the notion of an actor database system by defining its feature set in light of current application and hardware trends. In order to explore the viability ... of the outlined vision, a new programming model named Reactors has been designed to enrich classic relational database programming models with logical actor programming constructs. To support the reactor programming model, a high-performance in-memory multi-core OLTP database system named REACTDB has been built ...

  10. Parallel database search and prime factorization with magnonic holographic memory devices

    Energy Technology Data Exchange (ETDEWEB)

    Khitun, Alexander [Electrical and Computer Engineering Department, University of California - Riverside, Riverside, California 92521 (United States)

    2015-12-28

    In this work, we describe the capabilities of Magnonic Holographic Memory (MHM) for parallel database search and prime factorization. MHM is a type of holographic device which utilizes spin waves for data transfer and processing. Its operation is based on the correlation between the phases and the amplitudes of the input spin waves and the output inductive voltage. The input of MHM is provided by a phased array of spin-wave generating elements, allowing the production of phase patterns of arbitrary form. The latter makes it possible to code logic states into the phases of propagating waves and exploit wave superposition for parallel data processing. We present the results of numerical modeling illustrating parallel database search and prime factorization. The results of the numerical simulations on database search are in agreement with the available experimental data. The use of classical wave interference may result in a significant speedup over conventional digital logic circuits in special-task data processing (e.g., √n in database search). Potentially, magnonic holographic devices can be implemented as complementary logic units to digital processors. Physical limitations and technological constraints of the spin wave approach are also discussed.

  11. Parallel database search and prime factorization with magnonic holographic memory devices

    Science.gov (United States)

    Khitun, Alexander

    2015-12-01

    In this work, we describe the capabilities of Magnonic Holographic Memory (MHM) for parallel database search and prime factorization. MHM is a type of holographic device which utilizes spin waves for data transfer and processing. Its operation is based on the correlation between the phases and the amplitudes of the input spin waves and the output inductive voltage. The input of MHM is provided by a phased array of spin-wave generating elements, allowing the production of phase patterns of arbitrary form. The latter makes it possible to code logic states into the phases of propagating waves and exploit wave superposition for parallel data processing. We present the results of numerical modeling illustrating parallel database search and prime factorization. The results of the numerical simulations on database search are in agreement with the available experimental data. The use of classical wave interference may result in a significant speedup over conventional digital logic circuits in special-task data processing (e.g., √n in database search). Potentially, magnonic holographic devices can be implemented as complementary logic units to digital processors. Physical limitations and technological constraints of the spin wave approach are also discussed.

  12. Parallel database search and prime factorization with magnonic holographic memory devices

    International Nuclear Information System (INIS)

    Khitun, Alexander

    2015-01-01

    In this work, we describe the capabilities of Magnonic Holographic Memory (MHM) for parallel database search and prime factorization. MHM is a type of holographic device which utilizes spin waves for data transfer and processing. Its operation is based on the correlation between the phases and the amplitudes of the input spin waves and the output inductive voltage. The input of MHM is provided by a phased array of spin-wave generating elements, allowing the production of phase patterns of arbitrary form. The latter makes it possible to code logic states into the phases of propagating waves and exploit wave superposition for parallel data processing. We present the results of numerical modeling illustrating parallel database search and prime factorization. The results of the numerical simulations on database search are in agreement with the available experimental data. The use of classical wave interference may result in a significant speedup over conventional digital logic circuits in special-task data processing (e.g., √n in database search). Potentially, magnonic holographic devices can be implemented as complementary logic units to digital processors. Physical limitations and technological constraints of the spin wave approach are also discussed.

  13. Development of Integrated PSA Database and Application Technology

    Energy Technology Data Exchange (ETDEWEB)

    Han, Sang Hoon; Park, Jin Hee; Kim, Seung Hwan; Choi, Sun Yeong; Jung, Woo Sik; Jeong, Kwang Sub; Ha Jae Joo; Yang, Joon Eon; Min Kyung Ran; Kim, Tae Woon

    2005-04-15

    The purpose of this project is to develop 1) a reliability database framework, 2) a methodology for reactor trip and abnormal event analysis, and 3) a prototype PSA information DB system. We already have part of the reactor trip and component reliability data. In this study, we extend the collection of data up to 2002. We construct a pilot reliability database for common cause failure and piping failure data. A reactor trip or a component failure may have an impact on the safety of a nuclear power plant. We perform a precursor analysis for such events that occurred in the KSNP and develop a procedure for the precursor analysis. A risk monitor provides a means to trace the changes in risk following changes in the plant configuration. We develop a methodology incorporating the model of the secondary system related to the reactor trip into the risk monitor model. We develop a prototype PSA information system for the UCN 3 and 4 PSA models, into which information for the PSA, such as PSA reports, analysis reports, thermal-hydraulic analysis results, system notebooks, and so on, is entered. We develop a unique coherent BDD method to quantify a fault tree, as well as FTREX, the fastest fault tree quantification engine. We also develop quantification software for a full PSA model and a one-top model.
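
    For readers unfamiliar with fault tree quantification, the following Python toy evaluates the top-event probability of a small coherent fault tree with independent basic events. It is only a stand-in for what a BDD-based engine such as FTREX does at industrial scale; the gate structure and failure probabilities are invented.

    ```python
    def prob(gate, p):
        """Probability of a coherent AND/OR gate tree; leaves are event names."""
        kind, children = gate
        probs = [prob(c, p) if isinstance(c, tuple) else p[c] for c in children]
        if kind == "AND":                      # independent events: multiply
            out = 1.0
            for q in probs:
                out *= q
            return out
        out = 1.0                              # OR: 1 - prod(1 - q)
        for q in probs:
            out *= 1.0 - q
        return 1.0 - out

    p = {"pump_a": 1e-3, "pump_b": 1e-3, "valve": 5e-4}   # invented probabilities
    top = ("OR", [("AND", ["pump_a", "pump_b"]), "valve"])
    print(f"top event probability = {prob(top, p):.3e}")  # ~5.010e-04
    ```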

  14. Efficient Integrity Checking for Databases with Recursive Views

    DEFF Research Database (Denmark)

    Martinenghi, Davide; Christiansen, Henning

    2005-01-01

    Efficient and incremental maintenance of integrity constraints involving recursive views is a difficult issue that has received some attention in the past years, but for which no widely accepted solution exists yet. In this paper a technique is proposed for compiling such integrity constraints in ... approaches have not achieved comparable optimization with the same level of generality.

  15. A kinetics database and scripts for PHREEQC

    Science.gov (United States)

    Hu, B.; Zhang, Y.; Teng, Y.; Zhu, C.

    2017-12-01

    Kinetics of geochemical reactions is increasingly used in numerical models to simulate coupled flow, mass transport, and chemical reactions. However, the kinetic data are scattered in the literature, and assembling a kinetic dataset for a modeling project is an intimidating task for most. In order to facilitate the application of kinetics in geochemical modeling, we assembled kinetic parameters into a database for the geochemical simulation program PHREEQC (version 3.0). Kinetic data were collected from the literature. Our database includes kinetic data for over 70 minerals. The rate equations are also programmed into scripts in the Basic language. Using the new kinetic database, we simulated reaction paths during the albite dissolution process using various rate equations from the literature. The simulations with three different rate equations gave different reaction paths at different time scales. Another application involves a coupled reactive transport model simulating the advancement of an acid plume at an acid mine drainage site associated with the Bear Creek Uranium tailings pond. Geochemical reactions involving calcite, gypsum, and illite were simulated with PHREEQC using the new kinetic database. The simulation results successfully demonstrated the utility of the new kinetic database.
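
    Rate laws of the kind collected in such a database commonly take a transition-state-theory form, roughly dc/dt = k*A*(1 - Ω) for the dissolved concentration. As a hedged illustration of how one such entry behaves when integrated, the Python sketch below uses scipy; every constant is invented for the sketch and is not a value from the PHREEQC database.

    ```python
    from scipy.integrate import solve_ivp

    k = 1e-10      # rate constant, mol/(m^2 s) -- invented
    A = 100.0      # reactive surface area, m^2 per kg water -- invented
    c_eq = 1e-3    # equilibrium dissolved concentration, mol/kgw -- invented

    def rate(t, c):
        omega = c[0] / c_eq                       # crude saturation ratio
        return [k * A * max(0.0, 1.0 - omega)]    # dissolution slows near equilibrium

    sol = solve_ivp(rate, (0.0, 3.0e7), [0.0], rtol=1e-8)
    print(f"dissolved after ~1 year: {sol.y[0, -1]:.3e} mol/kgw")
    ```

    Swapping in a different rate equation, as the authors did for albite, amounts to replacing the body of `rate`, which is why different published rate laws can produce visibly different reaction paths.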

  16. Database Description - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    General information: database name, Trypanosomes Database; maintained by the National Institute of Genetics, Research Organization of Information and Systems (Yata 1111, Mishima, Shizuoka 411-8540, Japan). Covered taxonomy: Trypanosoma (Taxonomy ID 5690) and Homo sapiens (Taxonomy ID 9606). Cross-linked resources include PDB (Protein Data Bank), the KEGG PATHWAY Database, and DrugPort; an entry list and query search are available.

  17. Integrated Space Asset Management Database and Modeling

    Science.gov (United States)

    Gagliano, L.; MacLeod, T.; Mason, S.; Percy, T.; Prescott, J.

    The Space Asset Management Database (SAM-D) was implemented in order to effectively track known objects in space by ingesting information from a variety of databases and performing calculations to determine the expected position of an object at a specified time. While SAM-D performs this task very well, it is limited by technology and is not available outside of the local user base. Modeling and simulation can be powerful tools to exploit the information contained in SAM-D. However, the current system does not allow proper integration options for combining the data with both legacy and new M&S tools. A more capable data management infrastructure would extend SAM-D to support the larger data sets to be generated by the COI. A service-oriented architecture model will allow it to easily expand to incorporate new capabilities, including advanced analytics, M&S tools, fusion techniques, and user interfaces for visualization. Based on a web-centric approach, the entire COI will be able to access the data and related analytics. In addition, tight control of the information sharing policy will increase confidence in the system, which would encourage industry partners to provide commercial data. SIMON is a Government-off-the-Shelf information sharing platform in use throughout the DoD and DHS information sharing and situation awareness communities. SIMON provides fine-grained control to data owners, allowing them to determine exactly how and when their data is shared. SIMON supports a micro-service approach to system development, meaning M&S and analytic services can be easily built or adapted. It is uniquely positioned to fill this need as an information-sharing platform with a proven track record of successful situational awareness system deployments. Combined with the integration of new and legacy M&S tools, a SIMON-based architecture will provide a robust SA environment for the NASA SA COI that can be extended and expanded indefinitely.

  18. Source-rock maturation characteristics of symmetric and asymmetric grabens inferred from integrated analogue and numerical modeling: The southern Viking Graben (North Sea)

    NARCIS (Netherlands)

    Corver, M.P.; Doust, H.; van Wees, J.D.A.M.; Cloetingh, S.A.P.L.

    2011-01-01

    We present the results of an integrated analogue and numerical modeling study with a focus on structural, stratigraphic and thermal differences between symmetric and asymmetric grabens. These models enable fault interpretation and subsidence analyses in studies of active rifting and graben

  19. Database and Related Activities in Japan

    International Nuclear Information System (INIS)

    Murakami, Izumi; Kato, Daiji; Kato, Masatoshi; Sakaue, Hiroyuki A.; Kato, Takako; Ding, Xiaobin; Morita, Shigeru; Kitajima, Masashi; Koike, Fumihiro; Nakamura, Nobuyuki; Sakamoto, Naoki; Sasaki, Akira; Skobelev, Igor; Tsuchida, Hidetsugu; Ulantsev, Artemiy; Watanabe, Tetsuya; Yamamoto, Norimasa

    2011-01-01

    We have constructed and made available atomic and molecular (AM) numerical databases on collision processes, such as electron-impact excitation and ionization, recombination, and charge transfer of atoms and molecules, relevant for plasma physics, fusion research, astrophysics, applied plasma science, and other related areas. The retrievable data are freely accessible via the internet. We also work on atomic data evaluation and on constructing collisional-radiative models for spectroscopic plasma diagnostics. Recently we have worked on Fe ions and W ions theoretically and experimentally. The atomic data and collisional-radiative models for these ions are examined and applied to laboratory plasmas. A visible M1 transition of the W26+ ion is identified at 389.41 nm by EBIT experiments and theoretical calculations. We have small non-retrievable databases in addition to our main database. Recently we evaluated photo-absorption cross sections for 9 atoms and 23 molecules, and we present them as a new database. We established a new association, the "Forum of Atomic and Molecular Data and Their Applications", to exchange information among AM data producers, data providers, and data users in Japan, and we hope this will help to encourage AM data activities in Japan.

  20. Advanced technologies for scalable ATLAS conditions database access on the grid

    CERN Document Server

    Basset, R; Dimitrov, G; Girone, M; Hawkings, R; Nevski, P; Valassi, A; Vaniachine, A; Viegas, F; Walker, R; Wong, A

    2010-01-01

    During massive data reprocessing operations an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data processing jobs running on the Grid. By simulating realistic workflows, ATLAS database scalability tests provided feedback for Conditions DB software optimization and allowed precise determination of the required distributed database resources. In distributed data processing one must take into account the chaotic nature of Grid computing, characterized by peak loads which can be much higher than average access rates. To validate database performance at peak loads, we tested database scalability at very high concurrent job rates. This was achieved through coordinated database stress tests performed in a series of ATLAS reprocessing exercises at the Tier-1 sites. The goal of the database stress tests is to detect scalability limits of the hardware deployed at the Tier-1 sites, so that server overload conditions can be safely avoided in a production environment. Our analysi...

  1. The Problem with the Delta Cost Project Database

    Science.gov (United States)

    Jaquette, Ozan; Parra, Edna

    2016-01-01

    The Integrated Postsecondary Education Data System (IPEDS) collects data on Title IV institutions. The Delta Cost Project (DCP) integrated data from multiple IPEDS survey components into a public-use longitudinal dataset. The DCP Database was the basis for dozens of journal articles and a series of influential policy reports. Unfortunately, a flaw in…

  2. Directory of Factual and Numeric Databases of Relevance to Aerospace and Defence R and D (Répertoire de Bases de Données Factuelles ou Numériques d'Intérêt pour la R et D).

    Science.gov (United States)

    1992-07-01

    [Scanned directory excerpt; organization listings recovered:] Institute of Materials Science, Faculty of Sciences, Universidad Autónoma de Madrid (Instituto de Ciencia de Materiales, Facultad de Ciencias, Universidad Autónoma de Madrid); Construcciones Aeronáuticas SA (CASA), point of contact J. Pascual, Laboratory, Aeropuerto de San Pablo, 41007 Sevilla.

  3. Distributed Database Access in the LHC Computing Grid with CORAL

    CERN Document Server

    Molnár, Z; Düllmann, D; Giacomo, G; Kalkhof, A; Valassi, A; CERN. Geneva. IT Department

    2009-01-01

    The CORAL package is the LCG Persistency Framework foundation for accessing relational databases. From the start, CORAL has been designed to facilitate the deployment of the LHC experiment database applications in a distributed computing environment. In particular we cover: improvements to database service scalability through client connection management; platform-independent, multi-tier scalable database access through connection multiplexing and caching; and a secure authentication and authorisation scheme integrated with existing grid services. We summarize the deployment experience from several experiment productions using the distributed database infrastructure, which is now available in LCG. Finally, we present perspectives for future developments in this area.

  4. A numerical study on the structural integrity of self-anchored cable-stayed suspension bridges

    Directory of Open Access Journals (Sweden)

    Paolo Lonetti

    2016-10-01

    Full Text Available A generalized numerical model for predicting the structural integrity of self-anchored cable-stayed suspension bridges, considering both geometric and material nonlinearities, is proposed. The bridge is modeled by means of a 3D finite element approach based on a refined displacement-type finite element approximation, in which geometric nonlinearities are assumed in all components of the structure. Moreover, nonlinearities produced by inelastic material behavior and second-order displacement effects are considered for girder and pylon elements, combining the gradual yielding theory with the CRC tangent modulus concept. In addition, for the elements of the suspension system, i.e. stays, hangers and the main cable, a finite plasticity theory is adopted to fully evaluate both geometric and material nonlinearities. In this framework, the influence of geometric and material nonlinearities on the bridge collapse behavior is investigated by means of a comparative study, which identifies the effects that the several sources of nonlinearity involved in the bridge components produce on the ultimate bridge behavior. Results are developed with the purpose of numerically evaluating the influence of the material and geometric characteristics of self-anchored cable-stayed suspension bridges, also in comparison with conventional bridges based on cable-stayed or suspension schemes.

  5. MetaboSearch: tool for mass-based metabolite identification using multiple databases.

    Directory of Open Access Journals (Sweden)

    Bin Zhou

    Full Text Available Searching metabolites against databases according to their masses is often the first step in metabolite identification for a mass spectrometry-based untargeted metabolomics study. Major metabolite databases include the Human Metabolome DataBase (HMDB), the Madison Metabolomics Consortium Database (MMCD), Metlin, and LIPID MAPS. Since each of these databases covers only a fraction of the metabolome, integration of the search results from these databases is expected to yield more comprehensive coverage. However, the manual combination of multiple search results is generally difficult when identification of hundreds of metabolites is desired. We have implemented a web-based software tool that enables simultaneous mass-based search against the four major databases and the integration of the results. In addition, more complete chemical identifier information for the metabolites is retrieved by cross-referencing multiple databases. The search results are merged based on IUPAC International Chemical Identifier (InChI) keys. Besides a simple list of m/z values, the software can accept ion annotation information as input for enhanced metabolite identification. The performance of the software is demonstrated on mass spectrometry data acquired in both positive and negative ionization modes. Compared with search results from individual databases, MetaboSearch provides better coverage of the metabolome and more complete chemical identifier information. The software tool is available at http://omics.georgetown.edu/MetaboSearch.html.
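
    The core of such a merge is a tolerance search per database followed by grouping on InChIKey. The Python sketch below is a hedged illustration with two invented mini-databases, not the MetaboSearch implementation.

    ```python
    PPM = 10.0  # mass tolerance in parts per million -- illustrative choice

    def within_ppm(observed, exact, ppm=PPM):
        return abs(observed - exact) / exact * 1e6 <= ppm

    def search(mz, db):
        return [hit for hit in db if within_ppm(mz, hit["mass"])]

    # Stand-ins for HMDB / Metlin records (name, monoisotopic mass, InChIKey).
    hmdb   = [{"name": "glucose",   "mass": 180.0634, "inchikey": "WQZGKKKJIJFFOK-GASJEMHNSA-N"}]
    metlin = [{"name": "D-glucose", "mass": 180.0634, "inchikey": "WQZGKKKJIJFFOK-GASJEMHNSA-N"}]

    merged = {}
    for db in (hmdb, metlin):
        for hit in search(180.063, db):
            merged.setdefault(hit["inchikey"], []).append(hit["name"])
    print(merged)   # both names collapse onto a single InChIKey entry
    ```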

  6. Database Description - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    General information: database name, SKIP Stemcell Database; maintained by the Center for Medical Genetics, School of Medicine, Keio University. Database classification: human genes and diseases; stem cells. Covered taxonomy: Homo sapiens (Taxonomy ID 9606). Contact address: http://www.skip.med.keio.ac.jp/en/contact/. Web services and user registration are listed as not available.

  7. Ultra-Structure database design methodology for managing systems biology data and analyses

    Directory of Open Access Journals (Sweden)

    Hemminger Bradley M

    2009-08-01

    Full Text Available Abstract Background Modern, high-throughput biological experiments generate copious, heterogeneous, interconnected data sets. Research is dynamic, with frequently changing protocols, techniques, instruments, and file formats. Because of these factors, systems designed to manage and integrate modern biological data sets often end up as large, unwieldy databases that become difficult to maintain or evolve. The novel rule-based approach of the Ultra-Structure design methodology presents a potential solution to this problem. By representing both data and processes as formal rules within a database, an Ultra-Structure system constitutes a flexible framework that enables users to explicitly store domain knowledge in both a machine- and human-readable form. End users themselves can change the system's capabilities without programmer intervention, simply by altering database contents; no computer code or schemas need be modified. This provides flexibility in adapting to change, and allows integration of disparate, heterogeneous data sets within a small core set of database tables, facilitating joint analysis and visualization without becoming unwieldy. Here, we examine the application of Ultra-Structure to our ongoing research program for the integration of large proteomic and genomic data sets (proteogenomic mapping). Results We transitioned our proteogenomic mapping information system from a traditional entity-relationship design to one based on Ultra-Structure. Our system integrates tandem mass spectrum data, genomic annotation sets, and spectrum/peptide mappings, all within a small, general framework implemented within a standard relational database system. General software procedures driven by user-modifiable rules can perform tasks such as logical deduction and location-based computations. The system is not tied specifically to proteogenomic research, but is rather designed to accommodate virtually any kind of biological research. Conclusion We find
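
    The "processes as rules" idea can be miniaturized: behavior lives in table rows, so end users change the system by editing rows rather than code. The Python sketch below invents a two-rule table and a tiny interpreter purely to convey the flavor; it is not the Ultra-Structure schema.

    ```python
    # Rule table: (attribute, operator, value, action name). In a real system
    # these rows would live in a relational table, editable by end users.
    RULES = [
        ("species", "==", "H. sapiens", "map_to_human_genome"),
        ("score",   ">=", 0.9,          "accept_peptide"),
    ]

    OPS = {"==": lambda a, b: a == b, ">=": lambda a, b: a >= b}
    ACTIONS = {
        "map_to_human_genome": lambda rec: print(f"mapping {rec['id']}"),
        "accept_peptide":      lambda rec: print(f"accepting {rec['id']}"),
    }

    def apply_rules(record):
        for attr, op, value, action in RULES:
            if attr in record and OPS[op](record[attr], value):
                ACTIONS[action](record)

    apply_rules({"id": "pep42", "species": "H. sapiens", "score": 0.95})
    ```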

  8. Comprehensive T-Matrix Reference Database: A 2007-2009 Update

    Science.gov (United States)

    Mishchenko, Michael I.; Zakharova, Nadia T.; Videen, Gorden; Khlebtsov, Nikolai G.; Wriedt, Thomas

    2010-01-01

    The T-matrix method is among the most versatile, efficient, and widely used theoretical techniques for the numerically exact computation of electromagnetic scattering by homogeneous and composite particles, clusters of particles, discrete random media, and particles in the vicinity of an interface separating two half-spaces with different refractive indices. This paper presents an update to the comprehensive database of T-matrix publications compiled by us previously and includes the publications that appeared since 2007. It also lists several earlier publications not included in the original database.

  9. Design of a Multi Dimensional Database for the Archimed DataWarehouse.

    Science.gov (United States)

    Bréant, Claudine; Thurler, Gérald; Borst, François; Geissbuhler, Antoine

    2005-01-01

    The Archimed data warehouse project started in 1993 at the Geneva University Hospital. It has progressively integrated seven data marts (or domains of activity) archiving medical data such as Admission/Discharge/Transfer (ADT) data, laboratory results, radiology exams, diagnoses, and procedure codes. The objective of the Archimed data warehouse is to facilitate access to an integrated and coherent view of patient medical data in order to support analytical activities such as medical statistics, clinical studies, retrieval of similar cases, and data mining processes. This paper discusses three principal design aspects related to the conception of the data warehouse database: 1) the granularity of the database, which refers to the level of detail or summarization of the data; 2) the database model and architecture, describing how data are presented to end users and how new data are integrated; 3) the life cycle of the database, in order to ensure long-term scalability of the environment. Both the organization of patient medical data using a standardized elementary fact representation and the use of the multidimensional model have proved to be powerful design tools to integrate data coming from the multiple heterogeneous database systems that are part of the transactional Hospital Information System (HIS). Concurrently, building the data warehouse in an incremental way has helped to control the evolution of the data content. These three design aspects bring clarity and performance regarding data access. They also provide long-term scalability to the system and resilience to further changes that may occur in source systems feeding the data warehouse.
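
    The "elementary fact" and multidimensional ideas reduce, in the smallest case, to a star schema: one fine-grained fact table keyed to dimension tables. The sqlite sketch below is a hedged miniature with invented table and column names, not Archimed's actual schema.

    ```python
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE dim_patient (patient_id INTEGER PRIMARY KEY, sex TEXT);
        CREATE TABLE dim_test    (test_id    INTEGER PRIMARY KEY, name TEXT);
        -- fact table at the finest granularity: one row per measurement
        CREATE TABLE fact_lab (
            patient_id INTEGER REFERENCES dim_patient,
            test_id    INTEGER REFERENCES dim_test,
            taken_at   TEXT,
            value      REAL
        );
        INSERT INTO dim_patient VALUES (1, 'F');
        INSERT INTO dim_test    VALUES (10, 'glucose');
        INSERT INTO fact_lab    VALUES (1, 10, '2005-01-01', 5.4);
    """)
    row = db.execute("""
        SELECT p.sex, t.name, AVG(f.value)
        FROM fact_lab f
        JOIN dim_patient p USING (patient_id)
        JOIN dim_test    t USING (test_id)
        GROUP BY p.sex, t.name
    """).fetchone()
    print(row)   # ('F', 'glucose', 5.4)
    ```

    Keeping facts at the finest granularity is what lets later analyses summarize along whichever dimensions they need.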

  10. New methods for the numerical integration of ordinary differential equations and their application to the equations of motion of spacecraft

    Science.gov (United States)

    Banyukevich, A.; Ziolkovski, K.

    1975-01-01

    A number of hybrid methods for solving Cauchy problems are described on the basis of an evaluation of advantages of single and multiple-point numerical integration methods. The selection criterion is the principle of minimizing computer time. The methods discussed include the Nordsieck method, the Bulirsch-Stoer extrapolation method, and the method of recursive Taylor-Steffensen power series.
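
    Of the methods named, the extrapolation idea is the easiest to sketch compactly: take one macro step with Gragg's modified midpoint rule at two substep counts and cancel the leading error term, which is even in the substep size. The Python sketch below is a minimal single-level version of the Bulirsch-Stoer idea, not the production algorithm (which adapts both order and step size).

    ```python
    import math

    def midpoint_step(f, t, y, H, n):
        """Gragg's modified midpoint rule over one macro step H with n substeps."""
        h = H / n
        z0, z1 = y, y + h * f(t, y)
        for m in range(1, n):
            z0, z1 = z1, z0 + 2.0 * h * f(t + m * h, z1)
        return 0.5 * (z0 + z1 + h * f(t + H, z1))

    def extrapolated_step(f, t, y, H):
        """Richardson extrapolation: the error is even in h, so weights 4/3, -1/3."""
        return (4.0 * midpoint_step(f, t, y, H, 4) - midpoint_step(f, t, y, H, 2)) / 3.0

    # Test on y' = -y, y(0) = 1: ten macro steps vs. exp(-1).
    f = lambda t, y: -y
    y, t, H = 1.0, 0.0, 0.1
    for _ in range(10):
        y, t = extrapolated_step(f, t, y, H), t + H
    print(y, math.exp(-1.0))
    ```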

  11. JDD, Inc. Database

    Science.gov (United States)

    Miller, David A., Jr.

    2004-01-01

    JDD Inc. is a maintenance and custodial contracting company whose mission is to provide its clients in the private and government sectors "quality construction, construction management and cleaning services in the most efficient and cost effective manners" (JDD, Inc. Mission Statement). This company provides facilities support for Fort Riley in Fort Riley, Kansas and the NASA John H. Glenn Research Center at Lewis Field here in Cleveland, Ohio. JDD, Inc. is owned and operated by James Vaughn, who started as a painter at NASA Glenn and has been working here for the past seventeen years. This summer I worked under Devan Anderson, who is the safety manager for JDD, Inc. in the Logistics and Technical Information Division at Glenn Research Center. The LTID provides all transportation, secretarial, and security needs as well as contract management of these various services for the center. As a safety manager, my mentor provides Occupational Safety and Health Administration (OSHA) compliance to all JDD, Inc. employees and handles all other issues involving job safety (Environmental Protection Agency issues, workers compensation, and safety and health training). My summer assignment was not considered "groundbreaking research" like that of many other past summer interns, but it is just as important and beneficial to JDD, Inc. I initially created a database using the Microsoft Excel program to classify and categorize data pertaining to the numerous safety training certification courses instructed by our safety manager during the course of the fiscal year. This early portion of the database consisted only of data from the training certification courses (training field index, employees who were present at these training courses, and who was absent). Once I completed this phase of the database, I decided to expand the database and add as many dimensions to it as possible. Throughout the last seven weeks, I have been compiling more data from day-to-day operations and been adding the

  12. An accurate real-time model of maglev planar motor based on compound Simpson numerical integration

    Directory of Open Access Journals (Sweden)

    Baoquan Kou

    2017-05-01

    Full Text Available To realize high-speed and precise control of a maglev planar motor, a more accurate real-time electromagnetic model, which considers the influence of the coil corners, is proposed in this paper. Three coordinate systems, for the stator, the mover, and the corner coil, are established. The coil is divided into two segments, the straight coil segment and the corner coil segment, in order to obtain a complete electromagnetic model. When only the first harmonic of the flux density distribution of the Halbach magnet array is taken into account, the integration can be carried out over the two segments according to the Lorentz force law. The force and torque formulas for the straight coil segment can be derived directly from the Newton-Leibniz formula; however, this is not applicable to the corner coil segment. Therefore, the compound Simpson numerical integration method is proposed in this paper to solve the corner segment. Validated by simulation and experiment, the proposed model has high accuracy and can easily be applied in practice.
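
    The compound (composite) Simpson rule mentioned above is standard: split the interval into an even number of subintervals and weight the endpoint, odd, and even nodes by 1, 4, and 2. A minimal Python sketch follows; the sine integrand merely stands in for the Lorentz-force integrand over the corner segment, which the abstract does not spell out.

    ```python
    import math

    def composite_simpson(f, a, b, n):
        """Composite Simpson rule on n subintervals (n must be even)."""
        if n % 2:
            raise ValueError("n must be even")
        h = (b - a) / n
        s = f(a) + f(b)
        s += 4.0 * sum(f(a + i * h) for i in range(1, n, 2))
        s += 2.0 * sum(f(a + i * h) for i in range(2, n, 2))
        return s * h / 3.0

    # Integral of sin on [0, pi] is exactly 2; 16 subintervals already do well.
    print(composite_simpson(math.sin, 0.0, math.pi, 16))
    ```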

  13. Sorption databases for increasing confidence in performance assessment - 16053

    International Nuclear Information System (INIS)

    Richter, Anke; Brendler, Vinzenz; Nebelung, Cordula; Payne, Timothy E.; Brasser, Thomas

    2009-01-01

    ... requires that all mineral constituents of the solid phase are characterized. Another issue is the large number of required parameters combined with time-consuming iterations. Addressing both approaches, we present two sorption databases, developed mainly by or under the participation of the Forschungszentrum Dresden-Rossendorf (FZD). Both databases are implemented as relational databases, assist in the identification of critical data gaps and the evaluation of existing parameter sets, provide web-based data search and analyses, and permit the comparison of SCM predictions with Kd values. RES³T (Rossendorf Expert System for Surface and Sorption Thermodynamics) is a digitized thermodynamic sorption database (see www.fzd.de/db/RES3T.login) and is free of charge. It is mineral-specific and can therefore also be used for additive models of more complex solid phases. ISDA (Integrated Sorption Database System) connects SCM with the Kd concept but focuses on conventional Kd. The integrated datasets are accessible through a unified user interface. An application case, Kd values in Performance Assessment, is given. (authors)
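
    For orientation, the conventional Kd is just the equilibrium ratio of sorbed to dissolved concentration; from a batch experiment it is commonly computed as Kd = (C0 - Ceq)*V / (Ceq*m). A small Python sketch with invented numbers:

    ```python
    # Batch sorption experiment, all values invented for illustration.
    C0, Ceq = 1.0e-6, 2.0e-7   # initial / equilibrium solution conc., mol/L
    V, m = 0.05, 0.001         # solution volume (L), solid mass (kg)

    Kd = (C0 - Ceq) * V / (Ceq * m)
    print(f"Kd = {Kd:.0f} L/kg")   # 200 L/kg
    ```

    A single Kd hides the mineral-specific mechanisms that SCM resolves, which is exactly the gap the two databases bridge from opposite directions.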

  14. Numerical investigation on exterior conformal mappings with application to airfoils

    International Nuclear Information System (INIS)

    Mohamad Rashidi Md Razali; Hu Laey Nee

    2000-01-01

    A numerical method is described for computing a conformal map from an exterior region onto the exterior of the unit disk. The numerical method is based on a boundary integral equation which is similar to the Kerzman-Stein integral equation for interior mapping. Some examples show that numerical results of high accuracy can be obtained provided that the boundaries are smooth. This numerical method has been applied to the mapping of airfoils. However, since the parametric representation of an airfoil is not known, a cubic spline interpolation method has been used. Some numerical examples with satisfactory results have been obtained for symmetrical and cambered airfoils. (Author)
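
    When a boundary is known only at discrete points, as with airfoil section data, a periodic cubic spline supplies the smooth parametric representation the integral equation method needs. The Python sketch below uses scipy, with an ellipse standing in for real airfoil coordinates.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    # Sample a closed "airfoil-like" boundary (here: a thin ellipse).
    theta = np.linspace(0.0, 2.0 * np.pi, 33)
    pts = np.column_stack([np.cos(theta), 0.12 * np.sin(theta)])
    pts[-1] = pts[0]                      # periodic spline needs exact closure

    s = np.linspace(0.0, 1.0, len(pts))   # arbitrary parameter along the boundary
    boundary = CubicSpline(s, pts, bc_type="periodic")
    print(boundary(0.25))                 # a smooth point on the curve, ~(0, 0.12)
    ```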

  15. Carbon Dioxide Dispersion in the Combustion Integrated Rack Simulated Numerically

    Science.gov (United States)

    Wu, Ming-Shin; Ruff, Gary A.

    2004-01-01

    When discharged into an International Space Station (ISS) payload rack, a carbon dioxide (CO2) portable fire extinguisher (PFE) must extinguish a fire by decreasing the oxygen in the rack by 50 percent within 60 sec. The length of time needed for this oxygen reduction throughout the rack and the length of time that the CO2 concentration remains high enough to prevent the fire from reigniting are important when determining the effectiveness of the response and post-fire procedures. Furthermore, in the absence of gravity, the local flow velocity can make the difference between a fire that spreads rapidly and one that self-extinguishes after ignition. A numerical simulation of the discharge of CO2 from a PFE into the Combustion Integrated Rack (CIR) in microgravity was performed to obtain the local velocity and CO2 concentration. The complicated flow field around the PFE nozzle exits was modeled by sources of equivalent mass and momentum flux at a location downstream of the nozzle. The time for the concentration of CO2 to reach a level that would extinguish a fire anywhere in the rack was determined using the Fire Dynamics Simulator (FDS), a computational fluid dynamics code developed by the National Institute of Standards and Technology specifically to evaluate the development of a fire and smoke transport. The simulation shows that CO2, as well as any smoke and combustion gases produced by a fire, would be discharged into the ISS cabin through the resource utility panel at the bottom of the rack. These simulations will be validated by comparing the results with velocity and CO2 concentration measurements obtained during the fire suppression system verification tests conducted on the CIR in March 2003. Once these numerical simulations are validated, portions of the ISS labs and living areas will be modeled to determine the local flow conditions before, during, and after a fire event. These simulations can yield specific information about how long it takes for smoke and

  16. Numerical Feynman integrals with physically inspired interpolation: Faster convergence and significant reduction of computational cost

    Directory of Open Access Journals (Sweden)

    Nikesh S. Dattani

    2012-03-01

    Full Text Available One of the most successful methods for calculating reduced density operator dynamics in open quantum systems, one that can give numerically exact results, uses Feynman integrals. However, when simulating the dynamics for a given amount of time, the number of time steps that can realistically be used with this method is always limited; therefore one often obtains an approximation of the reduced density operator at a sparse grid of points in time. Instead of relying only on ad hoc interpolation methods (such as splines) to estimate the system density operator in between these points, I propose a method that uses physical information to assist with this interpolation. This method is tested on a physically significant system, on which its use allows important qualitative features of the density operator dynamics to be captured with as little as two time steps in the Feynman integral. This method allows for an enormous reduction in the amount of memory and CPU time required for approximating density operator dynamics within a desired accuracy. Since this method does not change the way the Feynman integral itself is calculated, the value of the density operator approximation at the points in time used to discretize the Feynman integral will be the same whether or not this method is used, but its approximation in between these points in time is considerably improved by this method. A list of ways in which this proposed method can be further improved is presented in the last section of the article.

  17. ITERATION-FREE NUMERICAL METHOD FOR MODELING ELECTROMECHANICAL PROCESSES IN ASYNCHRONOUS ENGINES

    Directory of Open Access Journals (Sweden)

    D. G. Patalakh

    2018-02-01

    Full Text Available Purpose. Development of a method for calculating electromagnetic and electromechanical transients in asynchronous engines without iterations. Methodology. Numerical methods for the integration of ordinary differential equations; programming. Findings. Since the system of equations describing the dynamics of an asynchronous engine contains products of rotor and stator currents, as well as products of the rotor speed and currents, the system is nonlinear. The numerical solution of nonlinear differential equations normally requires an iteration process at every integration step. A time-consuming and poorly converging iteration process can slow down the calculation considerably. An improvement of the numerical method that removes the iteration process is proposed; as a result, the modeling time is reduced. The improved numerical method is applied to the integration of the differential equations describing the dynamics of an asynchronous engine. Originality. An improvement of a numerical method is proposed that allows the numerical integration of differential equations containing products of state variables while avoiding an iteration process at every integration step, thereby shortening the modeling time. Practical value. On the basis of the proposed methodology, a universal and fast program for modeling electromechanical processes in asynchronous engines can be developed.
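
    The structural point, that products of state variables force an iteration only when they are treated implicitly, can be seen with any explicit scheme. The Python sketch below integrates an invented bilinear system whose current*speed products mimic the structure (not the physics) of the machine equations; because the scheme is explicit, each step simply evaluates the products and no iteration is needed. This illustrates the problem setting, not the authors' method.

    ```python
    import numpy as np

    def rk4_step(f, t, x, h):
        """Classical explicit RK4: nonlinear products are evaluated, never
        solved for, so no per-step iteration is required."""
        k1 = f(t, x)
        k2 = f(t + h / 2, x + h / 2 * k1)
        k3 = f(t + h / 2, x + h / 2 * k2)
        k4 = f(t + h, x + h * k3)
        return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    # Invented bilinear system with current*speed products.
    def f(t, x):
        i1, i2, w = x
        return np.array([
            -2.0 * i1 + 0.5 * i2 * w + 1.0,   # stator-like current equation
            -3.0 * i2 - 0.5 * i1 * w,         # rotor-like current equation
            0.8 * i1 * i2 - 0.1 * w,          # torque / speed equation
        ])

    x, t, h = np.zeros(3), 0.0, 1e-3
    for _ in range(5000):
        x, t = rk4_step(f, t, x, h), t + h
    print(x)   # settles toward a steady operating point
    ```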

  18. DAVID Knowledgebase: a gene-centered database integrating heterogeneous gene annotation resources to facilitate high-throughput gene functional analysis

    Directory of Open Access Journals (Sweden)

    Baseler Michael W

    2007-11-01

    Full Text Available Abstract Background Due to the complex and distributed nature of biological research, our current biological knowledge is spread over many redundant annotation databases maintained by many independent groups. Analysts usually need to visit many of these bioinformatics databases in order to integrate comprehensive annotation information for their genes, which becomes one of the bottlenecks, particularly for the analytic task associated with a large gene list. Thus, a highly centralized and ready-to-use gene-annotation knowledgebase is in demand for high-throughput gene functional analysis. Description The DAVID Knowledgebase is built around the DAVID Gene Concept, a single-linkage method to agglomerate tens of millions of gene/protein identifiers from a variety of public genomic resources into DAVID gene clusters. The grouping of such identifiers improves the cross-reference capability, particularly across NCBI and UniProt systems, enabling more than 40 publicly available functional annotation sources to be comprehensively integrated and centralized by the DAVID gene clusters. The simple, pair-wise, text-format files which make up the DAVID Knowledgebase are freely downloadable for various data analysis uses. In addition, a well-organized web interface allows users to query different types of heterogeneous annotations in a high-throughput manner. Conclusion The DAVID Knowledgebase is designed to facilitate high-throughput gene functional analysis. For a given gene list, it not only provides quick access to a wide range of heterogeneous annotation data in a centralized location, but also enriches the level of biological information for an individual gene. Moreover, the entire DAVID Knowledgebase is freely downloadable or searchable at http://david.abcc.ncifcrf.gov/knowledgebase/.
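
    Single-linkage agglomeration over identifier cross-references is what a union-find structure computes: any shared cross-reference merges two identifiers into one cluster. The Python sketch below is a hedged illustration with invented identifier pairs, not the DAVID pipeline itself.

    ```python
    from collections import defaultdict

    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Invented cross-reference pairs between identifier systems.
    xrefs = [("ENSG0001", "P04637"), ("P04637", "NM_000546"),
             ("ENSG0002", "Q9Y6K9")]
    for a, b in xrefs:
        union(a, b)

    clusters = defaultdict(list)
    for ident in parent:
        clusters[find(ident)].append(ident)
    print(list(clusters.values()))   # two single-linkage "gene clusters"
    ```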

  19. Scale out databases for CERN use cases

    CERN Document Server

    Baranowski, Zbigniew; Canali, Luca; Garcia, Daniel Lanza; Surdy, Kacper

    2015-01-01

    Data generation rates are expected to grow very fast for some database workloads going into LHC run 2 and beyond. In particular this is expected for data coming from controls, logging and monitoring systems. Storing, administering and accessing big data sets in a relational database system can quickly become a very hard technical challenge, as the size of the active data set and the number of concurrent users increase. Scale-out database technologies are a rapidly developing set of solutions for deploying and managing very large data warehouses on commodity hardware and with open source software. In this paper we will describe the architecture and tests on database systems based on Hadoop and the Cloudera Impala engine. We will discuss the results of our tests, including tests of data loading and integration with existing data sources and in particular with relational databases. We will report on query performance tests done with various data sets of interest at CERN, notably data from the accelerator log dat...

  20. Principles of data integration

    CERN Document Server

    Doan, AnHai; Ives, Zachary

    2012-01-01

    How do you approach answering queries when your data is stored in multiple databases that were designed independently by different people? This is the first comprehensive book on data integration, written by three of the most respected experts in the field. The book provides an extensive introduction to the theory and concepts underlying today's data integration techniques, with detailed instruction for their application, using concrete examples throughout to explain the concepts. Data integration is the problem of answering queries that span multiple data sources (e.g., databases, web