WorldWideScience

Sample records for integrated database providing

  1. Extending Database Integration Technology

    National Research Council Canada - National Science Library

    Buneman, Peter

    1999-01-01

    Formal approaches to the semantics of databases and database languages can have immediate and practical consequences in extending database integration technologies to include a vastly greater range...

  2. SPAN: A Network Providing Integrated, End-to-End, Sensor-to-Database Solutions for Environmental Sciences

    Science.gov (United States)

    Benzel, T.; Cho, Y. H.; Deschon, A.; Gullapalli, S.; Silva, F.

    2009-12-01

In recent years, advances in sensor network technology have shown great promise to revolutionize environmental data collection. Still, widespread adoption of these systems by domain experts has been lacking, and they have remained the purview of the engineers who design them. While many data logging options currently exist for basic data collection in the field, scientists are often required to visit the deployment sites to retrieve their data and manually import it into spreadsheets. Some advanced commercial software systems do allow scientists to collect data remotely, but most of these systems only allow point-to-point access and require proprietary hardware. Furthermore, these commercial solutions preclude the use of sensors from other manufacturers or integration with internet-based database repositories and compute engines. Therefore, scientists must often download and manually reformat their data before uploading it to the repositories if they wish to share it. We present an open-source, low-cost, extensible, turnkey solution called the Sensor Processing and Acquisition Network (SPAN), which provides a robust and flexible sensor network service. At the deployment site, SPAN leverages low-power generic embedded processors to integrate a variety of commercially available sensor hardware into the network of environmental observation systems. By bringing intelligence close to the sensed phenomena, we can remotely control configuration and re-use, establish rules to trigger sensor activity, manage power requirements, and control the two-way flow of sensed data as well as control information to the sensors. Key features of our design include (1) a hardware-agnostic architecture: our solutions are compatible with several programmable platforms, sensor systems, communication devices and protocols; (2) information standardization: our system supports several popular communication protocols and data formats; and (3) extensible data support: our

  3. On Simplification of Database Integrity Constraints

    DEFF Research Database (Denmark)

    Christiansen, Henning; Martinenghi, Davide

    2006-01-01

Without proper simplification techniques, database integrity checking can be prohibitively time consuming. Several methods have been developed for producing simplified incremental checks for each update, but none until now of sufficient quality and generality to provide a true practical impact...

  4. A Database Integrity Pattern Language

    Directory of Open Access Journals (Sweden)

    Octavian Paul ROTARU

    2004-08-01

Full Text Available Patterns and pattern languages are ways to capture experience and make it reusable for others, and to describe best practices and good designs. Patterns are solutions to recurrent problems. This paper addresses database integrity problems from a pattern perspective. Even though the number of vendors of database management systems is quite high, the number of available solutions to integrity problems is limited; they all learned from past experience, applying the same solutions over and over again. The solutions to integrity threats applied in database management systems (DBMS) can be formalized as a pattern language. Constraints, transactions, locks, etc., are recurrent solutions to integrity threats and therefore should be treated accordingly, as patterns.

  5. Loopedia, a database for loop integrals

    Science.gov (United States)

    Bogner, C.; Borowka, S.; Hahn, T.; Heinrich, G.; Jones, S. P.; Kerner, M.; von Manteuffel, A.; Michel, M.; Panzer, E.; Papara, V.

    2018-04-01

Loopedia is a new database at loopedia.org for information on Feynman integrals, intended to provide both bibliographic information and results made available by the community. Its bibliometry is complementary to that of INSPIRE or arXiv in the sense that it admits searching for integrals by graph-theoretical objects, e.g. their topology.
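The topology-based lookup described above can be sketched with a toy invariant-keyed index. This is only an illustration with invented names: Loopedia's actual matching is not specified in the abstract, and the key used below is a coarse graph invariant (the sorted degree sequence), so colliding entries would still need a refinement step in practice.

```python
from collections import Counter

def topology_key(edges):
    """Coarse graph-invariant key for a Feynman-like multigraph:
    the sorted vertex-degree multiset. Isomorphic graphs always share
    this key; distinct graphs may collide, so it is an index bucket,
    not a full canonical form (real tools use canonical labeling)."""
    deg = Counter()
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return tuple(sorted(deg.values()))

# Tiny in-memory "database" keyed by the topology invariant.
db = {}
def add_integral(name, edges):
    db.setdefault(topology_key(edges), []).append(name)

def find_candidates(edges):
    return db.get(topology_key(edges), [])

# One-loop bubble: two vertices joined by two propagators.
add_integral("bubble", [(1, 2), (1, 2)])
# One-loop triangle.
add_integral("triangle", [(1, 2), (2, 3), (3, 1)])

# A relabeled triangle still finds the stored entry.
assert find_candidates([(7, 8), (8, 9), (9, 7)]) == ["triangle"]
```

Because the key is invariant under relabeling of vertices, the same diagram drawn with different vertex numbers resolves to the same bucket.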

  6. Database specification for the Worldwide Port System (WPS) Regional Integrated Cargo Database (ICDB)

    Energy Technology Data Exchange (ETDEWEB)

    Faby, E.Z.; Fluker, J.; Hancock, B.R.; Grubb, J.W.; Russell, D.L. [Univ. of Tennessee, Knoxville, TN (United States); Loftis, J.P.; Shipe, P.C.; Truett, L.F. [Oak Ridge National Lab., TN (United States)

    1994-03-01

    This Database Specification for the Worldwide Port System (WPS) Regional Integrated Cargo Database (ICDB) describes the database organization and storage allocation, provides the detailed data model of the logical and physical designs, and provides information for the construction of parts of the database such as tables, data elements, and associated dictionaries and diagrams.

  7. Integrated spent nuclear fuel database system

    International Nuclear Information System (INIS)

    Henline, S.P.; Klingler, K.G.; Schierman, B.H.

    1994-01-01

The Distributed Information Systems Software Unit at the Idaho National Engineering Laboratory has designed and developed an Integrated Spent Nuclear Fuel Database System (ISNFDS), which maintains a computerized inventory of all US Department of Energy (DOE) spent nuclear fuel (SNF). Commercial SNF is not included in the ISNFDS unless it is owned or stored by DOE. The ISNFDS is an integrated, single data source containing accurate, traceable, and consistent data; it provides extensive data for each fuel, extensive facility data for every facility, and numerous data reports and queries.

  8. SIRSALE: integrated video database management tools

    Science.gov (United States)

    Brunie, Lionel; Favory, Loic; Gelas, J. P.; Lefevre, Laurent; Mostefaoui, Ahmed; Nait-Abdesselam, F.

    2002-07-01

Video databases became an active field of research during the last decade. The main objective of such systems is to provide users with capabilities to search, access and play back distributed stored video data as easily as they do with traditional distributed databases. Hence, such systems need to deal with hard issues: (a) video documents generate huge volumes of data and are time sensitive (streams must be delivered at a specific bitrate), and (b) the content of video data is very hard to extract automatically and needs to be annotated by humans. To cope with these issues, many approaches have been proposed in the literature, including data models, query languages, video indexing, etc. In this paper, we present SIRSALE: a set of video database management tools that allow users to manipulate video documents and streams stored in large distributed repositories. All the proposed tools are based on generic models that can be customized for specific applications using ad hoc adaptation modules. More precisely, SIRSALE allows users to: (a) browse video documents by structure (sequences, scenes, shots) and (b) query the video database content using a graphical tool adapted to the nature of the target video documents. This paper also presents an annotation interface which allows archivists to describe the content of video documents. All these tools are coupled to a video player integrating remote VCR functionalities and are based on active network technology. We present how dedicated active services allow optimized transport of video streams (with Tamanoir active nodes). We then describe experiments using SIRSALE on an archive of news videos and soccer matches. The system has been demonstrated to professionals with positive feedback. Finally, we discuss open issues and present some perspectives.
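Browsing by structure suggests a simple hierarchical document model. The sketch below is hypothetical (all class and field names are invented, the sequence level is collapsed for brevity, and SIRSALE's generic models are certainly richer), but it shows the basic shape of structure-based browsing combined with annotation queries:

```python
from dataclasses import dataclass, field

@dataclass
class Shot:
    start: float                       # seconds into the document
    end: float
    annotations: set = field(default_factory=set)

@dataclass
class Scene:
    shots: list = field(default_factory=list)

@dataclass
class VideoDocument:
    title: str
    scenes: list = field(default_factory=list)

    def find_shots(self, tag):
        """Return (scene_index, shot) pairs whose annotations match."""
        return [(i, sh) for i, sc in enumerate(self.scenes)
                for sh in sc.shots if tag in sh.annotations]

doc = VideoDocument("news-2002-07-01", [
    Scene([Shot(0, 12, {"anchor"}), Shot(12, 40, {"goal", "replay"})]),
    Scene([Shot(40, 55, {"interview"})]),
])
hits = doc.find_shots("goal")
assert [(i, sh.start) for i, sh in hits] == [(0, 12)]
```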

  9. [A web-based integrated clinical database for laryngeal cancer].

    Science.gov (United States)

    E, Qimin; Liu, Jialin; Li, Yong; Liang, Chuanyu

    2014-08-01

To establish an integrated database for laryngeal cancer and to provide an information platform for clinical and fundamental research on laryngeal cancer, meeting the needs of both clinical and scientific use. Under the guidance of clinical experts, we constructed a web-based integrated clinical database for laryngeal carcinoma on the basis of clinical data standards, Apache+PHP+MySQL technology, laryngeal cancer specialist characteristics and tumor genetic information. A web-based integrated clinical database for laryngeal carcinoma was developed. This database has a user-friendly interface, and data can be entered and queried conveniently. In addition, the system uses clinical data standards and exchanges information with the existing electronic medical record system to avoid information silos. Furthermore, the database forms integrate laryngeal cancer specialist characteristics and tumor genetic information. The web-based integrated clinical database for laryngeal carcinoma has comprehensive specialist information, strong expandability and high technical feasibility, and it conforms to the clinical characteristics of the laryngeal cancer specialty. By using clinical data standards and structured handling of clinical data, the database can better meet the needs of scientific research and facilitate information exchange, and the information collected about tumor patients is highly informative. In addition, users can access and manipulate the database conveniently and swiftly over the Internet.

  10. DENdb: database of integrated human enhancers

    KAUST Repository

    Ashoor, Haitham

    2015-09-05

    Enhancers are cis-acting DNA regulatory regions that play a key role in distal control of transcriptional activities. Identification of enhancers, coupled with a comprehensive functional analysis of their properties, could improve our understanding of complex gene transcription mechanisms and gene regulation processes in general. We developed DENdb, a centralized on-line repository of predicted enhancers derived from multiple human cell-lines. DENdb integrates enhancers predicted by five different methods generating an enriched catalogue of putative enhancers for each of the analysed cell-lines. DENdb provides information about the overlap of enhancers with DNase I hypersensitive regions, ChIP-seq regions of a number of transcription factors and transcription factor binding motifs, means to explore enhancer interactions with DNA using several chromatin interaction assays and enhancer neighbouring genes. DENdb is designed as a relational database that facilitates fast and efficient searching, browsing and visualization of information.
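The relational design described, with predicted enhancers cross-referenced against DNase I hypersensitive regions, can be approximated in a few lines of SQL. The table and column names below are invented for illustration; DENdb's actual schema is not specified in the abstract.

```python
import sqlite3

# Hypothetical minimal schema in the spirit of DENdb: predicted
# enhancers per cell line and method, plus DNase I hypersensitive
# regions, joined by genomic-interval overlap.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE enhancer (id INTEGER PRIMARY KEY, cell_line TEXT,
                       method TEXT, chrom TEXT, start INT, stop INT);
CREATE TABLE dnase_region (chrom TEXT, start INT, stop INT);
""")
con.executemany("INSERT INTO enhancer VALUES (?,?,?,?,?,?)", [
    (1, "K562", "method_A", "chr1", 100, 500),
    (2, "K562", "method_B", "chr1", 2000, 2400),
])
con.execute("INSERT INTO dnase_region VALUES ('chr1', 300, 700)")

# Enhancers overlapping at least one DNase region
# (two intervals overlap iff each starts before the other ends).
rows = con.execute("""
    SELECT DISTINCT e.id FROM enhancer e
    JOIN dnase_region d ON e.chrom = d.chrom
     AND e.start < d.stop AND d.start < e.stop
""").fetchall()
assert rows == [(1,)]
```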

  11. DENdb: database of integrated human enhancers

    KAUST Repository

    Ashoor, Haitham; Kleftogiannis, Dimitrios A.; Radovanovic, Aleksandar; Bajic, Vladimir B.

    2015-01-01

    Enhancers are cis-acting DNA regulatory regions that play a key role in distal control of transcriptional activities. Identification of enhancers, coupled with a comprehensive functional analysis of their properties, could improve our understanding of complex gene transcription mechanisms and gene regulation processes in general. We developed DENdb, a centralized on-line repository of predicted enhancers derived from multiple human cell-lines. DENdb integrates enhancers predicted by five different methods generating an enriched catalogue of putative enhancers for each of the analysed cell-lines. DENdb provides information about the overlap of enhancers with DNase I hypersensitive regions, ChIP-seq regions of a number of transcription factors and transcription factor binding motifs, means to explore enhancer interactions with DNA using several chromatin interaction assays and enhancer neighbouring genes. DENdb is designed as a relational database that facilitates fast and efficient searching, browsing and visualization of information.

  12. Integrated Space Asset Management Database and Modeling

    Science.gov (United States)

    MacLeod, Todd; Gagliano, Larry; Percy, Thomas; Mason, Shane

    2015-01-01

Effective space asset management is one key to addressing the ever-growing issue of space congestion. It is imperative that agencies around the world have access to data regarding the numerous active assets and pieces of space junk currently tracked in orbit around the Earth. At the center of this issue is the effective management of many types of data related to orbiting objects. As the population of tracked objects grows, so too should the data management structure used to catalog technical specifications, orbital information, and metadata related to those populations. Marshall Space Flight Center's Space Asset Management Database (SAM-D) was implemented in order to effectively catalog a broad set of data related to known objects in space by ingesting information from a variety of databases and processing that data into useful technical information. Using the universal NORAD number as a unique identifier, SAM-D processes two-line element data into orbital characteristics and cross-references this technical data with metadata related to functional status, country of ownership, and application category. SAM-D began as an Excel spreadsheet and was later upgraded to an Access database. While SAM-D performs its task very well, it is limited by its current platform and is not available outside of the local user base. Further, while modeling and simulation can be powerful tools to exploit the information contained in SAM-D, the current system does not allow proper integration options for combining the data with both legacy and new M&S tools. This paper provides a summary of SAM-D development efforts to date and outlines a proposed data management infrastructure that extends SAM-D to support the larger data sets to be generated. A service-oriented architecture model using an information sharing platform named SIMON will allow it to easily expand to incorporate new capabilities, including advanced analytics, M&S tools, fusion techniques and user interface for
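As a concrete illustration of the two-line-element processing step mentioned above, the sketch below derives the orbital period and semi-major axis from the mean-motion field of a standard TLE. This is textbook orbital mechanics, not SAM-D's actual code, and the sample line uses illustrative ISS-like values.

```python
import math

MU_EARTH = 398600.4418  # Earth's gravitational parameter, km^3/s^2

def orbit_from_tle_line2(line2):
    """Parse mean motion from TLE line 2 (columns 53-63) and derive
    the orbital period and semi-major axis via Kepler's third law."""
    mean_motion = float(line2[52:63])          # revolutions per day
    period_min = 1440.0 / mean_motion          # minutes per revolution
    n_rad_s = mean_motion * 2 * math.pi / 86400.0
    semi_major_km = (MU_EARTH / n_rad_s**2) ** (1.0 / 3.0)
    return period_min, semi_major_km

# Example ISS-like line 2 (illustrative values).
l2 = "2 25544  51.6416 247.4627 0006703 130.5360 325.0288 15.72125391563537"
period, a = orbit_from_tle_line2(l2)
assert 90 < period < 93        # roughly 91.6 minutes
assert 6700 < a < 6800         # roughly 6730 km
```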

  13. SINBAD: Shielding integral benchmark archive and database

    International Nuclear Information System (INIS)

    Hunter, H.T.; Ingersoll, D.T.; Roussin, R.W.

    1996-01-01

SINBAD is a new electronic database developed to store a variety of radiation shielding benchmark data so that users can easily retrieve and incorporate the data into their calculations. SINBAD is an excellent data source for users who require the quality assurance necessary in developing cross-section libraries or radiation transport codes. The future needs of the scientific community are best served by the electronic database format of SINBAD and its user-friendly interface, combined with its data accuracy and integrity.

  14. Optimal database locks for efficient integrity checking

    DEFF Research Database (Denmark)

    Martinenghi, Davide

    2004-01-01

    In concurrent database systems, correctness of update transactions refers to the equivalent effects of the execution schedule and some serial schedule over the same set of transactions. Integrity constraints add further semantic requirements to the correctness of the database states reached upon...... the execution of update transactions. Several methods for efficient integrity checking and enforcing exist. We show in this paper how to apply one such method to automatically extend update transactions with locks and simplified consistency tests on the locked entities. All schedules produced in this way...
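The idea of pairing locks with simplified consistency tests can be illustrated with a toy uniqueness constraint: rather than re-checking the global constraint after each update, the transaction locks and tests only the entities the update can affect. This sketch illustrates the general idea only, not the paper's algorithm, and all names are invented.

```python
import threading

# Global state of the toy "database": a uniqueness constraint on emails.
emails = set()
locks = {}                     # one lock per key actually touched
locks_guard = threading.Lock()

def insert_user(email):
    with locks_guard:          # obtain (or create) the lock for this key
        key_lock = locks.setdefault(email, threading.Lock())
    with key_lock:
        # Simplified incremental test: one membership check on the
        # locked entity, not a full scan of the database state.
        if email in emails:
            raise ValueError("integrity violation: duplicate email")
        emails.add(email)

insert_user("a@example.org")
try:
    insert_user("a@example.org")   # second insert must be rejected
    rejected = False
except ValueError:
    rejected = True
assert rejected and emails == {"a@example.org"}
```

Locking only the affected key keeps concurrent inserts of unrelated emails from serializing on a single table-wide lock.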

  15. Integration of Oracle and Hadoop: Hybrid Databases Affordable at Scale

    Science.gov (United States)

    Canali, L.; Baranowski, Z.; Kothuri, P.

    2017-10-01

    This work reports on the activities aimed at integrating Oracle and Hadoop technologies for the use cases of CERN database services and in particular on the development of solutions for offloading data and queries from Oracle databases into Hadoop-based systems. The goal and interest of this investigation is to increase the scalability and optimize the cost/performance footprint for some of our largest Oracle databases. These concepts have been applied, among others, to build offline copies of CERN accelerator controls and logging databases. The tested solution allows to run reports on the controls data offloaded in Hadoop without affecting the critical production database, providing both performance benefits and cost reduction for the underlying infrastructure. Other use cases discussed include building hybrid database solutions with Oracle and Hadoop, offering the combined advantages of a mature relational database system with a scalable analytics engine.

  16. Nuclear integrated database and design advancement system

    International Nuclear Information System (INIS)

    Ha, Jae Joo; Jeong, Kwang Sub; Kim, Seung Hwan; Choi, Sun Young.

    1997-01-01

The objective of NuIDEAS is to computerize design processes through an integrated database, eliminating the current work style of delivering hardcopy documents and drawings. The major research contents of NuIDEAS are the advancement of design processes by computerization, the establishment of a design database, and three-dimensional visualization of design data. KSNP (Korea Standard Nuclear Power Plant) is the target of the legacy database and the three-dimensional model, so that they can be utilized in the next plant design. In the first year, the blueprint of NuIDEAS was proposed, and its prototype was developed by applying rapidly evolving computer technology. The major results of the first year of research were to establish the architecture of the integrated database ensuring data consistency, and to build a design database of the reactor coolant system and heavy components. Various software tools were also developed to search, share and utilize the data through networks; detailed three-dimensional CAD models of nuclear fuel and heavy components were constructed; and walk-through simulations using the models were developed. This report contains the major additions and modifications to the object-oriented database and associated programs, using methods and JavaScript. (author). 36 refs., 1 tab., 32 figs.

  17. Distributed Access View Integrated Database (DAVID) system

    Science.gov (United States)

    Jacobs, Barry E.

    1991-01-01

The Distributed Access View Integrated Database (DAVID) System, which was adopted by the Astrophysics Division for their Astrophysics Data System, is a solution to the system heterogeneity problem. The heterogeneous components of the Astrophysics problem are outlined. The Library and Library Consortium levels of the DAVID approach are described. The 'books' and 'kits' level is discussed. The Universal Object Typer Management System level is described. The relation of the DAVID project to the Small Business Innovative Research (SBIR) program is explained.

  18. High-integrity databases for helicopter operations

    Science.gov (United States)

    Pschierer, Christian; Schiefele, Jens; Lüthy, Juerg

    2009-05-01

Helicopter Emergency Medical Service (HEMS) missions impose a high workload on pilots due to short preparation times, operations in low-level flight, and landings in unknown areas. The research project PILAS, a cooperation between Eurocopter, Diehl Avionics, DLR, EADS, Euro Telematik, ESG, Jeppesen, and the Universities of Darmstadt and Munich, funded by the German government, approached this problem by researching a pilot assistance system which supports the pilots during all phases of flight. The databases required for the specified helicopter missions include different types of topological and cultural data for graphical display on the SVS system, AMDB data for operations at airports and helipads, and navigation data for IFR segments. The most critical databases for the PILAS system, however, are highly accurate terrain and obstacle data. While RTCA DO-276 specifies high accuracies and integrities only for the areas around airports, HEMS helicopters typically operate outside of these controlled areas and thus require highly reliable terrain and obstacle data for their designated response areas. These data have been generated by a LIDAR scan of the specified test region. Obstacles have been extracted into a vector format. This paper includes a short overview of the complete PILAS system and then focuses on the generation of the required high-quality databases.

  19. Integrated Space Asset Management Database and Modeling

    Science.gov (United States)

    Gagliano, L.; MacLeod, T.; Mason, S.; Percy, T.; Prescott, J.

The Space Asset Management Database (SAM-D) was implemented in order to effectively track known objects in space by ingesting information from a variety of databases and performing calculations to determine the expected position of an object at a specified time. While SAM-D performs this task very well, it is limited by technology and is not available outside of the local user base. Modeling and simulation can be powerful tools to exploit the information contained in SAM-D. However, the current system does not allow proper integration options for combining the data with both legacy and new M&S tools. A more capable data management infrastructure would extend SAM-D to support the larger data sets to be generated by the COI. A service-oriented architecture model will allow it to easily expand to incorporate new capabilities, including advanced analytics, M&S tools, fusion techniques and user interfaces for visualization. Based on a web-centric approach, the entire COI will be able to access the data and related analytics. In addition, tight control of the information sharing policy will increase confidence in the system, which would encourage industry partners to provide commercial data. SIMON is a Government-off-the-Shelf information sharing platform in use throughout DoD and DHS information sharing and situational awareness communities. SIMON provides fine-grained control to data owners, allowing them to determine exactly how and when their data is shared. SIMON supports a micro-service approach to system development, meaning M&S and analytic services can be easily built or adapted. It is uniquely positioned to fill this need as an information-sharing platform with a proven track record of successful situational awareness system deployments. Combined with the integration of new and legacy M&S tools, a SIMON-based architecture will provide a robust SA environment for the NASA SA COI that can be extended and expanded indefinitely.

  20. Federated or cached searches: providing expected performance from multiple invasive species databases

    Science.gov (United States)

    Graham, Jim; Jarnevich, Catherine S.; Simpson, Annie; Newman, Gregory J.; Stohlgren, Thomas J.

    2011-01-01

Invasive species are a universal global problem, but the information to identify them, manage them, and prevent invasions is stored around the globe in a variety of formats. The Global Invasive Species Information Network is a consortium of organizations working toward providing seamless access to these disparate databases via the Internet. A distributed network of databases can be created using the Internet and a standard web service protocol. There are two options to provide this integration. First, federated searches have been proposed to allow users to search "deep" web documents such as databases for invasive species. A second method is to create a cache of data from the databases for searching. We compare these two methods, and show that federated searches will not provide the performance and flexibility required by users, and that a central cache of the data is required to improve performance.
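The performance trade-off can be made concrete with a toy model: a federated search pays a per-source round trip on every query, while a cached search answers from a local index built ahead of time. The source names and latency below are invented for illustration.

```python
import time

SOURCES = {
    "db_a": {"kudzu", "zebra mussel"},
    "db_b": {"cane toad", "kudzu"},
}
LATENCY = 0.01   # pretend network delay per remote source, in seconds

def federated_search(term):
    """Query every remote source at search time (one round trip each)."""
    hits = []
    for name, records in SOURCES.items():
        time.sleep(LATENCY)              # simulated network round trip
        if term in records:
            hits.append(name)
    return sorted(hits)

# Build the cache once, ahead of queries: term -> list of sources.
CACHE = {}
for name, records in SOURCES.items():
    for term in records:
        CACHE.setdefault(term, []).append(name)

def cached_search(term):
    """Answer from the local index; no network traffic per query."""
    return sorted(CACHE.get(term, []))

assert federated_search("kudzu") == cached_search("kudzu") == ["db_a", "db_b"]
```

Both methods return the same results; the difference is that the federated version's latency grows with the number of sources, which is the performance argument the paper makes for a central cache.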

  1. A Support Database System for Integrated System Health Management (ISHM)

    Science.gov (United States)

    Schmalzel, John; Figueroa, Jorge F.; Turowski, Mark; Morris, John

    2007-01-01

    The development, deployment, operation and maintenance of Integrated Systems Health Management (ISHM) applications require the storage and processing of tremendous amounts of low-level data. This data must be shared in a secure and cost-effective manner between developers, and processed within several heterogeneous architectures. Modern database technology allows this data to be organized efficiently, while ensuring the integrity and security of the data. The extensibility and interoperability of the current database technologies also allows for the creation of an associated support database system. A support database system provides additional capabilities by building applications on top of the database structure. These applications can then be used to support the various technologies in an ISHM architecture. This presentation and paper propose a detailed structure and application description for a support database system, called the Health Assessment Database System (HADS). The HADS provides a shared context for organizing and distributing data as well as a definition of the applications that provide the required data-driven support to ISHM. This approach provides another powerful tool for ISHM developers, while also enabling novel functionality. This functionality includes: automated firmware updating and deployment, algorithm development assistance and electronic datasheet generation. The architecture for the HADS has been developed as part of the ISHM toolset at Stennis Space Center for rocket engine testing. A detailed implementation has begun for the Methane Thruster Testbed Project (MTTP) in order to assist in developing health assessment and anomaly detection algorithms for ISHM. The structure of this implementation is shown in Figure 1. The database structure consists of three primary components: the system hierarchy model, the historical data archive and the firmware codebase. The system hierarchy model replicates the physical relationships between

  2. Providing Availability, Performance, and Scalability By Using Cloud Database

    OpenAIRE

Prof. Dr. Alaa Hussein Al-Hamami; Rafal Adeeb Al-Khashab

    2014-01-01

With the development of the internet, new techniques and concepts have attracted the attention of all internet users, especially in the development of information technology; one such concept is the cloud. Cloud computing includes different components, of which the cloud database has become an important one. A cloud database is a distributed database that delivers computing as a service, or in the form of a virtual machine image instead of a product, via the internet; its advantage is that the database can...

  3. An Integrated Molecular Database on Indian Insects.

    Science.gov (United States)

    Pratheepa, Maria; Venkatesan, Thiruvengadam; Gracy, Gandhi; Jalali, Sushil Kumar; Rangheswaran, Rajagopal; Antony, Jomin Cruz; Rai, Anil

    2018-01-01

MOlecular Database on Indian Insects (MODII) is an online database linking several databases, such as Insect Pest Info, the Insect Barcode Information System (IBIn), insect whole-genome sequences, other genomic resources of the National Bureau of Agricultural Insect Resources (NBAIR), whole-genome sequencing of honey bee viruses, an insecticide resistance gene database, and genomic tools. This database was developed with a holistic approach to collecting phenomic and genomic information about agriculturally important insects. This insect resource database is available online for free at http://cib.res.in/.

  4. INE: a rice genome database with an integrated map view.

    Science.gov (United States)

    Sakata, K; Antonio, B A; Mukai, Y; Nagasaki, H; Sakai, Y; Makino, K; Sasaki, T

    2000-01-01

The Rice Genome Research Program (RGP) launched large-scale rice genome sequencing in 1998, aimed at decoding all genetic information in rice. A new genome database called INE (INtegrated rice genome Explorer) has been developed in order to integrate all the genomic information accumulated so far and to correlate these data with the genome sequence. A web interface based on a Java applet provides rapid viewing capability in the database. The first operational version of the database has been completed, which includes a genetic map, a physical map using YAC (Yeast Artificial Chromosome) clones, and PAC (P1-derived Artificial Chromosome) contigs. These maps are displayed graphically so that the positional relationships among the mapped markers on each chromosome can be easily resolved. INE incorporates the sequences and annotations of the PAC contigs. A section on low-quality information ensures that all submitted sequence data comply with the standard for accuracy. As a repository of the rice genome sequence, INE will also serve as a common database for all sequence data obtained by collaborating members of the International Rice Genome Sequencing Project (IRGSP). The database can be accessed at http://www.dna.affrc.go.jp:82/giot/INE.html or its mirror site at http://www.staff.or.jp/giot/INE.html

  5. Database of episode-integrated solar energetic proton fluences

    Science.gov (United States)

    Robinson, Zachary D.; Adams, James H.; Xapsos, Michael A.; Stauffer, Craig A.

    2018-04-01

A new database of proton episode-integrated fluences is described. This database contains data from two different instruments on multiple satellites. The data are from instruments on the Interplanetary Monitoring Platform-8 (IMP8) and the Geostationary Operational Environmental Satellites (GOES) series. A method to normalize one set of data to another is presented to create a seamless database spanning 1973 to 2016. A discussion of some of the characteristics that episodes exhibit is presented, including episode duration and number of peaks. As an example of what can be understood about episodes, the July 4, 2012 episode is examined in detail. The coronal mass ejections and solar flares that caused many of the fluctuations of the proton flux seen at Earth are associated with peaks in the proton flux during this episode. The reasoning for each choice is laid out to provide a reference for how CME and solar flare associations are made.
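One simple way to normalize one instrument's data to another, sketched below, is to scale by the median flux ratio over an interval both instruments observed. The paper's actual normalization procedure may differ, and the numbers here are invented.

```python
# Fluxes from two instruments over a shared observation interval.
a_overlap = [10.0, 20.0, 40.0]     # instrument A, at shared timestamps
b_overlap = [ 5.0, 10.0, 20.0]     # instrument B, same timestamps

# Median of the pointwise ratios is robust to a few outliers.
ratios = sorted(x / y for x, y in zip(a_overlap, b_overlap))
scale = ratios[len(ratios) // 2]

# Apply the scale to an era where only instrument B was observing,
# bringing it onto instrument A's scale for a seamless record.
b_only = [8.0, 16.0]
b_normalized = [scale * v for v in b_only]
assert scale == 2.0 and b_normalized == [16.0, 32.0]
```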

  6. Database of episode-integrated solar energetic proton fluences

    Directory of Open Access Journals (Sweden)

    Robinson Zachary D.

    2018-01-01

Full Text Available A new database of proton episode-integrated fluences is described. This database contains data from two different instruments on multiple satellites. The data are from instruments on the Interplanetary Monitoring Platform-8 (IMP8) and the Geostationary Operational Environmental Satellites (GOES) series. A method to normalize one set of data to another is presented to create a seamless database spanning 1973 to 2016. A discussion of some of the characteristics that episodes exhibit is presented, including episode duration and number of peaks. As an example of what can be understood about episodes, the July 4, 2012 episode is examined in detail. The coronal mass ejections and solar flares that caused many of the fluctuations of the proton flux seen at Earth are associated with peaks in the proton flux during this episode. The reasoning for each choice is laid out to provide a reference for how CME and solar flare associations are made.

  7. Integrity Checking and Maintenance with Active Rules in XML Databases

    DEFF Research Database (Denmark)

    Christiansen, Henning; Rekouts, Maria

    2007-01-01

    While specification languages for integrity constraints for XML data have been considered in the literature, actual technologies and methodologies for checking and maintaining integrity are still in their infancy. Triggers, or active rules, which are widely used in previous technologies for the p...... updates, the method indicates trigger conditions and correctness criteria to be met by the trigger code supplied by a developer or possibly automatic methods. We show examples developed in the Sedna XML database system which provides a running implementation of XML triggers....

  8. Toward an interactive article: integrating journals and biological databases

    Directory of Open Access Journals (Sweden)

    Marygold Steven J

    2011-05-01

    Full Text Available Abstract Background Journal articles and databases are two major modes of communication in the biological sciences, and thus integrating these critical resources is of urgent importance to increase the pace of discovery. Projects focused on bridging the gap between journals and databases have been on the rise over the last five years and have resulted in the development of automated tools that can recognize entities within a document and link those entities to a relevant database. Unfortunately, automated tools cannot resolve ambiguities that arise when one term is used to signify entities that are quite distinct from one another. Instead, resolving these ambiguities requires some manual oversight. Finding the right balance between the speed and portability of automation and the accuracy and flexibility of manual effort is a crucial goal to making text markup a successful venture. Results We have established a journal article mark-up pipeline that links GENETICS journal articles to the model organism database (MOD) WormBase. This pipeline uses a lexicon built with entities from the database as a first step. The entity markup pipeline results in links from over nine classes of objects, including genes, proteins, alleles, phenotypes and anatomical terms. New entities and ambiguities are discovered and resolved by a database curator through a manual quality control (QC) step, along with help from authors via a web form that is provided to them by the journal. New entities discovered through this pipeline are immediately sent to an appropriate curator at the database. Ambiguous entities that do not automatically resolve to one link are resolved by hand, ensuring an accurate link. This pipeline has been extended to other databases, namely the Saccharomyces Genome Database (SGD) and FlyBase, and has been implemented in marking up a paper with links to multiple databases.
Conclusions Our semi-automated pipeline hyperlinks articles published in GENETICS to
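
The lexicon-first markup step described above can be sketched in a few lines. The lexicon entries and identifier strings below are invented for illustration and are not WormBase's actual URL scheme; the point is only the split between unambiguous terms (linked automatically) and ambiguous ones (deferred to manual QC):

```python
import re

# Hypothetical mini-lexicon built from database entities; a real pipeline
# would generate this from MOD exports. Identifiers are placeholders.
LEXICON = {
    "unc-22": ["wormbase:gene/unc-22"],
    "daf-16": ["wormbase:gene/daf-16"],
    "clk":    ["wormbase:gene/clk-1", "flybase:gene/Clk"],  # ambiguous across MODs
}

def mark_up(text):
    """Link each lexicon term found in text; flag ambiguous terms for curation."""
    linked, ambiguous = [], []
    for term, targets in LEXICON.items():
        if re.search(r"\b%s\b" % re.escape(term), text):
            if len(targets) == 1:
                linked.append((term, targets[0]))
            else:
                ambiguous.append(term)  # resolved by hand in the QC step
    return linked, ambiguous

links, todo = mark_up("Mutations in unc-22 and clk alter the phenotype.")
```

Ambiguous terms fall through to the curator, mirroring the manual quality-control step of the pipeline.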

  9. Ontology based heterogeneous materials database integration and semantic query

    Science.gov (United States)

    Zhao, Shuai; Qian, Quan

    2017-10-01

    Materials digital data, high-throughput experiments and high-throughput computations are regarded as the three key pillars of materials genome initiatives. With the fast growth of materials data, the integration and sharing of data have become urgent, and this has gradually become a hot topic in materials informatics. Due to the lack of semantic description, it is difficult to integrate data deeply at the semantic level when adopting conventional heterogeneous database integration approaches such as federated databases or data warehouses. In this paper, a semantic integration method is proposed that creates a semantic ontology by extracting the database schema semi-automatically. Other heterogeneous databases are integrated into the ontology by means of relational algebra and rooted graphs. Based on the integrated ontology, semantic queries can be issued using SPARQL. In the experiments, two well-known first-principles computation databases, OQMD and the Materials Project, are used as integration targets, demonstrating the availability and effectiveness of the method.
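
The kind of semantic query run over the integrated ontology can be illustrated with a toy in-memory triple store. The prefixes and property names below are assumptions, not the paper's schema, and a real system would use an RDF engine with SPARQL rather than this hand-rolled matcher:

```python
# Illustrative triples as if extracted from two materials databases and
# linked in one ontology (identifiers and values are made up).
TRIPLES = [
    ("oqmd:Fe2O3",  "prop:bandGap", 2.0),
    ("mp:mp-0001",  "prop:bandGap", 2.2),
    ("oqmd:Fe2O3",  "owl:sameAs",   "mp:mp-0001"),  # cross-database link
]

def query(pattern):
    """Match an (s, p, o) pattern; None plays the role of a SPARQL variable."""
    s, p, o = pattern
    return [t for t in TRIPLES
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Analogous to: SELECT ?s ?o WHERE { ?s prop:bandGap ?o }
band_gaps = query((None, "prop:bandGap", None))
```

Because both databases contribute triples to one graph, a single pattern query spans them, which is the benefit the semantic-level integration is after.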

  10. DPTEdb, an integrative database of transposable elements in dioecious plants.

    Science.gov (United States)

    Li, Shu-Fen; Zhang, Guo-Jun; Zhang, Xue-Jin; Yuan, Jin-Hong; Deng, Chuan-Liang; Gu, Lian-Feng; Gao, Wu-Jun

    2016-01-01

    Dioecious plants usually harbor 'young' sex chromosomes, providing an opportunity to study the early stages of sex chromosome evolution. Transposable elements (TEs) are mobile DNA elements frequently found in plants and are suggested to play important roles in plant sex chromosome evolution. The genomes of several dioecious plants have been sequenced, offering an opportunity to annotate and mine the TE data. However, comprehensive and unified annotation of TEs in these dioecious plants is still lacking. In this study, we constructed a dioecious plant transposable element database (DPTEdb). DPTEdb is a specific, comprehensive and unified relational database and web interface. We used a combination of de novo, structure-based and homology-based approaches to identify TEs from the genome assemblies of previously published data, as well as our own. The database currently integrates eight dioecious plant species and a total of 31 340 TEs along with classification information. DPTEdb provides user-friendly web interfaces to browse, search and download the TE sequences in the database. Users can also use tools, including BLAST, GetORF, HMMER, Cut sequence and JBrowse, to analyze TE data. Given the role of TEs in plant sex chromosome evolution, the database will contribute to the investigation of TEs in structural, functional and evolutionary dynamics of the genome of dioecious plants. In addition, the database will supplement the research of sex diversification and sex chromosome evolution of dioecious plants. Database URL: http://genedenovoweb.ticp.net:81/DPTEdb/index.php. © The Author(s) 2016. Published by Oxford University Press.

  11. Integrating Variances into an Analytical Database

    Science.gov (United States)

    Sanchez, Carlos

    2010-01-01

    For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make it easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance has been bypassed many times already and so the requirement may not really be needed, but rather should be changed to allow the variance's conditions permanently. This project did not only restrict itself to the design and development of the database system, but also worked on exporting the data from the database to a different format (e.g. Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part that contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.

  12. Heterogeneous Biomedical Database Integration Using a Hybrid Strategy: A p53 Cancer Research Database

    Directory of Open Access Journals (Sweden)

    Vadim Y. Bichutskiy

    2006-01-01

    Full Text Available Complex problems in life science research give rise to multidisciplinary collaboration, and hence to the need for heterogeneous database integration. The tumor suppressor p53 is mutated in close to 50% of human cancers, and a small drug-like molecule with the ability to restore native function to cancerous p53 mutants is a long-held medical goal of cancer treatment. The Cancer Research DataBase (CRDB) was designed in support of a project to find such small molecules. As a cancer informatics project, the CRDB involved small molecule data, computational docking results, functional assays, and protein structure data. As an example of the hybrid strategy for data integration, it combined the mediation and data warehousing approaches. This paper uses the CRDB to illustrate the hybrid strategy as a viable approach to heterogeneous data integration in biomedicine, and provides a design method for those considering similar systems. More efficient data sharing implies increased productivity and, hopefully, improved chances of success in cancer research. (Code and database schemas are freely downloadable, http://www.igb.uci.edu/research/research.html.)

  13. Emission & Generation Resource Integrated Database (eGRID)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Emissions & Generation Resource Integrated Database (eGRID) is an integrated source of data on environmental characteristics of electric power generation....

  14. Database modeling to integrate macrobenthos data in Spatial Data Infrastructure

    Directory of Open Access Journals (Sweden)

    José Alberto Quintanilha

    2012-08-01

    Full Text Available Coastal zones are complex areas that include marine and terrestrial environments. Besides their huge environmental wealth, they also attract humans because they provide food, recreation, business and transportation, among others. Some of the difficulties in managing these areas relate to their complexity, the diversity of interests involved, and the absence of standards for collecting and sharing data with the scientific community, public agencies and other stakeholders. Organizing, standardizing and sharing this information through a Web Atlas is essential to support planning and decision making. The construction of a spatial database integrating environmental data, to be used in a Spatial Data Infrastructure (SDI), is illustrated by a bioindicator of sediment quality. The models show the phases required to build a Macrobenthos spatial database, using the Santos Metropolitan Region as a reference. It is concluded that, when working with environmental data, structuring knowledge in a conceptual model is essential for its subsequent integration into the SDI. During the modeling process it was noticed that methodological issues related to the collection process may hinder the integration of data from different studies of the same area. The database model developed in this study can be used as a reference for further research with similar goals.

  15. An integrated web medicinal materials DNA database: MMDBD (Medicinal Materials DNA Barcode Database

    Directory of Open Access Journals (Sweden)

    But Paul

    2010-06-01

    Full Text Available Abstract Background Thousands of plants and animals possess pharmacological properties, and there is increasing interest in using these materials for therapy and health maintenance. Efficacy of such applications critically depends on the use of genuine materials; from time to time, life-threatening poisoning occurs because a toxic adulterant or substitute is administered. DNA barcoding provides a definitive means of authentication and of conducting molecular systematics studies. Owing to the reduced cost of DNA authentication, the volume of DNA barcodes produced for medicinal materials is on the rise, necessitating the development of an integrated DNA database. Description We have developed an integrated DNA barcode multimedia information platform, the Medicinal Materials DNA Barcode Database (MMDBD), for data retrieval and similarity search. MMDBD contains over 1000 species of medicinal materials listed in the Chinese Pharmacopoeia and American Herbal Pharmacopoeia. MMDBD also contains useful information on the medicinal materials, including resources, adulterant information, medicinal parts, photographs, primers used for obtaining the barcodes, and key references. MMDBD can be accessed at http://www.cuhk.edu.hk/icm/mmdbd.htm. Conclusions This work provides a centralized medicinal materials DNA barcode database and bioinformatics tools for data storage, analysis and exchange, promoting the identification of medicinal materials. MMDBD has the largest collection of DNA barcodes of medicinal materials and is a useful resource for researchers in conservation, systematic study, forensics and the herbal industry.

  16. Integrated olfactory receptor and microarray gene expression databases

    Directory of Open Access Journals (Sweden)

    Crasto Chiquito J

    2007-06-01

    Full Text Available Abstract Background Gene expression patterns of olfactory receptors (ORs) are an important component of the signal encoding mechanism in the olfactory system, since they determine the interactions between odorant ligands and sensory neurons. We have developed the Olfactory Receptor Microarray Database (ORMD) to house OR gene expression data. ORMD is integrated with the Olfactory Receptor Database (ORDB), which is a key repository of OR gene information. Both databases aim to aid experimental research related to olfaction. Description ORMD is a Web-accessible database that provides a secure data repository for OR microarray experiments. It contains both publicly available and private data; accessing the latter requires authenticated login. ORMD is designed to allow users not only to deposit gene expression data but also to manage their projects/experiments. For example, contributors can choose whether to make their datasets public. For each experiment, users can download the raw data files and view and export the gene expression data. For each OR gene probed in a microarray experiment, a hyperlink to that gene in ORDB provides access to genomic and proteomic information related to the corresponding olfactory receptor. Individual ORs archived in ORDB are also linked to ORMD, allowing users access to the related microarray gene expression data. Conclusion ORMD serves as a data repository and project management system. It facilitates the study of microarray experiments of gene expression in the olfactory system. In conjunction with ORDB, ORMD integrates gene expression data with the genomic and functional data of ORs, and is thus a useful resource for both olfactory researchers and the public.

  17. Development of an integrated database management system to evaluate integrity of flawed components of nuclear power plant

    International Nuclear Information System (INIS)

    Mun, H. L.; Choi, S. N.; Jang, K. S.; Hong, S. Y.; Choi, J. B.; Kim, Y. J.

    2001-01-01

    The objective of this paper is to develop an NPP-IDBMS (Integrated DataBase Management System for Nuclear Power Plants) for evaluating the integrity of components of nuclear power plants using a relational data model. This paper describes the relational data model, structure and development strategy for the proposed NPP-IDBMS. The NPP-IDBMS consists of a database, a database management system and an interface part. The database part consists of plant, shape, operating condition, material properties and stress databases, which are required for the integrity evaluation of each component in nuclear power plants. For the development of the stress database, an extensive finite element analysis was performed for various components considering operational transients. The developed NPP-IDBMS will provide an efficient and accurate way to evaluate the integrity of flawed components

  18. Integrating pattern mining in relational databases

    NARCIS (Netherlands)

    Calders, T.; Goethals, B.; Prado, A.; Fürnkranz, J.; Scheffer, T.; Spiliopoulou, M.

    2006-01-01

    Almost a decade ago, Imielinski and Mannila introduced the notion of Inductive Databases to manage KDD applications just as DBMSs successfully manage business applications. The goal is to follow one of the key DBMS paradigms: building optimizing compilers for ad hoc queries. During the past decade,

  19. KAIKObase: An integrated silkworm genome database and data mining tool

    Directory of Open Access Journals (Sweden)

    Nagaraju Javaregowda

    2009-10-01

    Full Text Available Abstract Background The silkworm, Bombyx mori, is one of the most economically important insects in many developing countries owing to its large-scale cultivation for silk production. With the development of genomic and biotechnological tools, B. mori has also become an important bioreactor for production of various recombinant proteins of biomedical interest. In 2004, two genome sequencing projects for B. mori were reported independently by Chinese and Japanese teams; however, the datasets were insufficient for building long genomic scaffolds, which are essential for unambiguous annotation of the genome. Now, both datasets have been merged and assembled through a joint collaboration between the two groups. Description Integration of the two datasets of silkworm whole-genome-shotgun sequencing by the Japanese and Chinese groups, together with newly obtained fosmid- and BAC-end sequences, produced the best continuity (~3.7 Mb in N50 scaffold size) among the sequenced insect genomes and provided a high degree of nucleotide coverage (88%) of all 28 chromosomes. In addition, a physical map of BAC contigs constructed by fingerprinting BAC clones and a SNP linkage map constructed using BAC-end sequences were available. In parallel, proteomic data from two-dimensional polyacrylamide gel electrophoresis in various tissues and developmental stages were compiled into a silkworm proteome database. Finally, a Bombyx trap database was constructed for documenting insertion positions and expression data of transposon insertion lines. Conclusion For efficient usage of genome information for functional studies, genomic sequences, physical and genetic map information and EST data were compiled into KAIKObase, an integrated silkworm genome database which consists of 4 map viewers, a gene viewer, and sequence, keyword and position search systems to display results and data at the level of nucleotide sequence, gene, scaffold and chromosome. Integration of the

  20. Integration of Biodiversity Databases in Taiwan and Linkage to Global Databases

    Directory of Open Access Journals (Sweden)

    Kwang-Tsao Shao

    2007-03-01

    Full Text Available The biodiversity databases in Taiwan were dispersed across various institutions and colleges, each with a limited amount of data, until 2001. The Natural Resources and Ecology GIS Database sponsored by the Council of Agriculture, which is part of the National Geographic Information System planned by the Ministry of Interior, was the best established biodiversity database in Taiwan. This database, however, mainly collected distribution data of terrestrial animals and plants within the Taiwan area. In 2001, GBIF was formed, and Taiwan joined as an Associate Participant, starting the establishment and integration of animal and plant species databases so that TaiBIF could cooperate with GBIF. Information on the Catalog of Life, specimens, and alien species was integrated using the Darwin Core metadata standard, which allowed the biodiversity information of Taiwan to connect with global databases.

  1. Database Translator (DATALATOR) for Integrated Exploitation

    Science.gov (United States)

    2010-10-31

    via the Internet to Fortune 1000 clients including Mercedes-Benz, Procter & Gamble, and HP. I look forward to hearing of your successful proposal and working with you to build a successful business. Sincerely, ...testing the DATALATOR experimental prototype (IRL 4) designed to demonstrate its core functions based on Next Generation Software technology. The ...sources, but is not directly dependent on the platform, such as database technology or data formats. In other words, there is a clear air gap between

  2. Integrating heterogeneous databases in clustered medical care environments using object-oriented technology

    Science.gov (United States)

    Thakore, Arun K.; Sauer, Frank

    1994-05-01

    The organization of modern medical care environments into disease-related clusters, such as a cancer center, a diabetes clinic, etc., has the side-effect of introducing multiple heterogeneous databases, often containing similar information, within the same organization. This heterogeneity fosters incompatibility and prevents the effective sharing of data amongst applications at different sites. Although integration of heterogeneous databases is now feasible, in the medical arena this is often an ad hoc process, not founded on proven database technology or formal methods. In this paper we illustrate the use of a high-level object-oriented semantic association method to model information found in different databases into an integrated conceptual global model. We provide examples from the medical domain to illustrate an integration approach resulting in a consistent global view, without attacking the autonomy of the underlying databases.

  3. On the applicability of schema integration techniques to database interoperation

    NARCIS (Netherlands)

    Vermeer, Mark W.W.; Apers, Peter M.G.

    1996-01-01

    We discuss the applicability of schema integration techniques developed for tightly-coupled database interoperation to interoperation of databases stemming from different modelling contexts. We illustrate that in such an environment, it is typically quite difficult to infer the real-world semantics

  4. Data integration for plant genomics--exemplars from the integration of Arabidopsis thaliana databases.

    Science.gov (United States)

    Lysenko, Artem; Hindle, Matthew Morritt; Taubert, Jan; Saqi, Mansoor; Rawlings, Christopher John

    2009-11-01

    The development of a systems-based approach to problems in plant sciences requires integration of existing information resources. However, the available information is currently often incomplete and dispersed across many sources, and the syntactic and semantic heterogeneity of the data is a challenge for integration. In this article, we discuss strategies for data integration and we use a graph-based integration method (Ondex) to illustrate some of these challenges with reference to two example problems concerning integration of (i) metabolic pathway and (ii) protein interaction data for Arabidopsis thaliana. We quantify the degree of overlap for three commonly used pathway and protein interaction information sources. For pathways, we find that the AraCyc database contains the widest coverage of enzyme reactions, and for protein interactions we find that the IntAct database provides the largest unique contribution to the integrated dataset. For both examples, however, we observe a relatively small amount of data common to all three sources. Analysis and visual exploration of the integrated networks was used to identify a number of practical issues relating to the interpretation of these datasets. We demonstrate the utility of these approaches to the analysis of groups of coexpressed genes from an individual microarray experiment, in the context of pathway information and for the combination of coexpression data with an integrated protein interaction network.
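
The overlap quantification described here reduces to set operations over shared identifiers once the sources are mapped into one integrated graph. A minimal sketch with invented reaction identifiers (only AraCyc is named in the abstract; the other two source names below are placeholders):

```python
# Toy identifier sets standing in for three pathway sources after
# integration; "SourceB" and "SourceC" are hypothetical names.
sources = {
    "AraCyc":  {"R1", "R2", "R3", "R4"},
    "SourceB": {"R2", "R3", "R5"},
    "SourceC": {"R3", "R6"},
}

# Entries shared by all three sources (the small common core observed).
common = set.intersection(*sources.values())

# Entries contributed uniquely by each source.
unique = {name: ids - set().union(*(other for n, other in sources.items()
                                    if n != name))
          for name, ids in sources.items()}
```

The same two numbers, the all-way intersection and each source's unique contribution, are what the abstract reports for pathways and protein interactions.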

  5. GDR (Genome Database for Rosaceae): integrated web-database for Rosaceae genomics and genetics data.

    Science.gov (United States)

    Jung, Sook; Staton, Margaret; Lee, Taein; Blenda, Anna; Svancara, Randall; Abbott, Albert; Main, Dorrie

    2008-01-01

    The Genome Database for Rosaceae (GDR) is a central repository of curated and integrated genetics and genomics data of Rosaceae, an economically important family which includes apple, cherry, peach, pear, raspberry, rose and strawberry. GDR contains annotated databases of all publicly available Rosaceae ESTs, the genetically anchored peach physical map, Rosaceae genetic maps and comprehensively annotated markers and traits. The ESTs are assembled to produce unigene sets of each genus and the entire Rosaceae. Other annotations include putative function, microsatellites, open reading frames, single nucleotide polymorphisms, gene ontology terms and anchored map position where applicable. Most of the published Rosaceae genetic maps can be viewed and compared through CMap, the comparative map viewer. The peach physical map can be viewed using WebFPC/WebChrom, and also through our integrated GDR map viewer, which serves as a portal to the combined genetic, transcriptome and physical mapping information. ESTs, BACs, markers and traits can be queried by various categories and the search result sites are linked to the mapping visualization tools. GDR also provides online analysis tools such as a batch BLAST/FASTA server for the GDR datasets, a sequence assembly server and microsatellite and primer detection tools. GDR is available at http://www.rosaceae.org.

  6. INTEGRATED INFORMATION SYSTEM ARCHITECTURE PROVIDING BEHAVIORAL FEATURE

    Directory of Open Access Journals (Sweden)

    Vladimir N. Shvedenko

    2016-11-01

    Full Text Available The paper deals with the creation of an integrated information system architecture capable of supporting management decisions using behavioral features, and considers the architecture of an information decision support system for production system management. The behavioral feature given to the information system ensures extraction and processing of information and management decision-making, with both automated and automatic modes of the decision-making subsystem permitted. Practical implementation of an information system with behavior is based on a service-oriented architecture: a set of independent services in the information system provides data from its subsystems, or data processing by a separate application, under the chosen variant of settling the problematic situation. For the creation of an integrated information system with behavior we propose an architecture comprising the following subsystems: a data bus, a subsystem for interaction with the integrated applications based on metadata, a business process management subsystem, a subsystem for analyzing the current state of the enterprise and making management decisions, and a behavior training subsystem. For each problematic situation a separate logical-layer service is created in the Unified Service Bus that handles problematic situations. This architecture reduces the information complexity of the system because, with a constant number of system elements, the number of links decreases, since each layer provides a communication center of responsibility for the resource with the services of the corresponding applications. If a similar problematic situation occurs again, its resolution is retrieved automatically from the problem situation metamodel repository together with the business process metamodel for its settlement. During business process execution, commands are issued to the corresponding centers of responsibility to settle the problematic situation.

  7. Integration of functions in logic database systems

    NARCIS (Netherlands)

    Lambrichts, E.; Nees, P.; Paredaens, J.; Peelman, P.; Tanca, L.

    1990-01-01

    We extend Datalog, a logic programming language for rule-based systems, by respectively integrating types, negation and functions. This extension of Datalog is called MilAnt. Furthermore, MilAnt consistency is defined as a stronger form of consistency for functions. It is known that consistency for

  8. Database and applications security integrating information security and data management

    CERN Document Server

    Thuraisingham, Bhavani

    2005-01-01

    This is the first book to provide an in-depth coverage of all the developments, issues and challenges in secure databases and applications. It provides directions for data and application security, including securing emerging applications such as bioinformatics, stream information processing and peer-to-peer computing. Divided into eight sections, each of which focuses on a key concept of secure databases and applications, this book deals with all aspects of technology, including secure relational databases, inference problems, secure object databases, secure distributed databases and emerging

  9. An Integrated Enterprise Accelerator Database for the SLC Control System

    International Nuclear Information System (INIS)

    2002-01-01

    Since its inception in the early 1980s, the SLC Control System has been driven by a highly structured memory-resident real-time database. While efficient, its rigid structure and file-based sources make it difficult to maintain and extract relevant information. The goal of transforming the sources for this database into a relational form is to enable it to be part of a Control System Enterprise Database that is an integrated central repository for SLC accelerator device and Control System data, with links to other associated databases. We have taken the concepts developed for the NLC Enterprise Database and used them to create and load a relational model of the online SLC Control System database. This database contains data and structure to allow querying and reporting on beamline devices, their associations and parameters. In the future this will be extended to allow generation of EPICS and SLC database files, setup of applications and links to other databases such as accelerator maintenance, archive data, financial and personnel records, cabling information, documentation, etc. The database is implemented using Oracle 8i. In the short term it will be updated daily in batch from the online SLC database. In the longer term, it will serve as the primary source for Control System static data, an R and D platform for the NLC, and contribute to SLC Control System operations

  10. Integr8: enhanced inter-operability of European molecular biology databases.

    Science.gov (United States)

    Kersey, P J; Morris, L; Hermjakob, H; Apweiler, R

    2003-01-01

    The increasing production of molecular biology data in the post-genomic era, and the proliferation of databases that store it, require the development of an integrative layer in database services to facilitate the synthesis of related information. The solution of this problem is made more difficult by the absence of universal identifiers for biological entities, and the breadth and variety of available data. Integr8 was modelled using UML (Unified Modelling Language). Integr8 is being implemented as an n-tier system using a modern object-oriented programming language (Java). An object-relational mapping tool, OJB, is being used to specify the interface between the upper layers and an underlying relational database. The European Bioinformatics Institute is launching the Integr8 project. Integr8 will be an automatically populated database in which we will maintain stable identifiers for biological entities, describe their relationships with each other (in accordance with the central dogma of biology), and store equivalences between identified entities in the source databases. Only core data will be stored in Integr8, with web links to the source databases providing further information. Integr8 will provide the integrative layer of the next generation of bioinformatics services from the EBI. Web-based interfaces will be developed to offer gene-centric views of the integrated data, presenting (where known) the links between genome, proteome and phenotype.
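
Maintaining equivalences between entities identified in different source databases, as Integr8 describes, is essentially an equivalence-class problem. A hedged union-find sketch follows; the identifier strings name real p53/TP53 records but the class and its internals are an illustration, not Integr8's implementation:

```python
class Equivalences:
    """Union-find over database identifiers: link() records that two
    identifiers denote the same biological entity; find() returns a
    stable representative for the whole equivalence class."""

    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def link(self, a, b):
        self.parent[self.find(a)] = self.find(b)

eq = Equivalences()
eq.link("UniProt:P04637", "Ensembl:ENSG00000141510")    # p53 protein <-> TP53 gene
eq.link("Ensembl:ENSG00000141510", "RefSeq:NM_000546")  # gene <-> transcript record
same = eq.find("UniProt:P04637") == eq.find("RefSeq:NM_000546")
```

Storing only the equivalence classes plus web links back to the sources matches the "core data only" design the abstract describes.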

  11. Integration of a clinical trial database with a PACS

    International Nuclear Information System (INIS)

    Van Herk, M

    2014-01-01

    Many clinical trials use Electronic Case Report Forms (ECRF), e.g., from OpenClinica. Trial data are augmented if DICOM scans, dose cubes, etc. from the Picture Archiving and Communication System (PACS) are included for data mining. Unfortunately, there is as yet no structured way to collect DICOM objects in trial databases. In this paper, we obtain a tight integration of ECRF and PACS using open source software. Methods: DICOM identifiers for selected images/series/studies are stored in associated ECRF events (e.g., baseline) as follows: 1) JavaScript added to OpenClinica communicates over HTTP with a gateway server inside the hospital's firewall; 2) on this gateway, an open source DICOM server runs scripts to query and select the data, returning anonymized identifiers; 3) the scripts then collect, anonymize, zip and transmit the selected data to a central trial server; 4) here the data are stored in a DICOM archive which allows authorized ECRF users to view and download the anonymized images associated with each event. Results: All integration scripts are open source. The PACS administrator configures the anonymization script and decides whether to use the gateway in passive (receiving) mode or in an active mode that goes out to the PACS to gather data. Our ECRF-centric approach supports automatic data mining by iterating over the cases in the ECRF database, providing the identifiers to load images and the clinical data to correlate with image analysis results. Conclusions: Using open source software and web technology, a tight integration has been achieved between PACS and ECRF.
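
The ECRF-centric data-mining loop described above can be sketched as follows; all class and field names here are hypothetical stand-ins (the real system stores the identifiers in OpenClinica events and fetches images through a DICOM gateway):

```python
# Sketch of the ECRF-centric linkage: anonymized DICOM identifiers are
# stored with each ECRF event, and data mining iterates over events to
# pair clinical data with the referenced imaging studies.
from dataclasses import dataclass, field

@dataclass
class EcrfEvent:
    name: str                    # e.g. "baseline"
    clinical: dict               # clinical data recorded on the form
    dicom_uids: list = field(default_factory=list)  # anonymized study UIDs

def mine(events):
    """Yield (study UID, clinical data) pairs so image analysis results
    can be correlated with the clinical variables of the same case."""
    for ev in events:
        for uid in ev.dicom_uids:
            yield uid, ev.clinical

# Hypothetical event with a made-up (anonymized) study UID.
events = [EcrfEvent("baseline", {"age": 61}, ["1.2.840.0001"])]
print(list(mine(events)))  # [('1.2.840.0001', {'age': 61})]
```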

  12. TOMATOMICS: A Web Database for Integrated Omics Information in Tomato

    KAUST Repository

    Kudo, Toru; Kobayashi, Masaaki; Terashima, Shin; Katayama, Minami; Ozaki, Soichi; Kanno, Maasa; Saito, Misa; Yokoyama, Koji; Ohyanagi, Hajime; Aoki, Koh; Kubo, Yasutaka; Yano, Kentaro

    2016-01-01

    Solanum lycopersicum (tomato) is an important agronomic crop and a major model fruit-producing plant. To facilitate basic and applied research, comprehensive experimental resources and omics information on tomato have been developed and made available. Mutant lines and cDNA clones from a dwarf cultivar, Micro-Tom, are two of these genetic resources. Large-scale sequencing data for ESTs and full-length cDNAs from Micro-Tom continue to be gathered. In conjunction with information on the reference genome sequence of another cultivar, Heinz 1706, the Micro-Tom experimental resources have facilitated comprehensive functional analyses. To enhance the efficiency of acquiring omics information for tomato biology, we have integrated the information on the Micro-Tom experimental resources and the Heinz 1706 genome sequence. We have also inferred gene structure by comparing the Heinz 1706 genome against the transcriptome, which comprises Micro-Tom full-length cDNAs and Heinz 1706 RNA-seq data stored in the KaFTom and Sequence Read Archive databases. To provide this large-scale omics information with streamlined connectivity we have developed and maintain the web database TOMATOMICS (http://bioinf.mind.meiji.ac.jp/tomatomics/). In TOMATOMICS, access to information on the cDNA clone resources, full-length mRNA sequences, gene structures, expression profiles and functional annotations of genes is available through search functions and a genome browser with an intuitive graphical interface.

  14. Construction of an integrated database to support genomic sequence analysis

    Energy Technology Data Exchange (ETDEWEB)

    Gilbert, W.; Overbeek, R.

    1994-11-01

    The central goal of this project is to develop an integrated database to support comparative analysis of genomes including DNA sequence data, protein sequence data, gene expression data and metabolism data. In developing the logic-based system GenoBase, a broader integration of available data was achieved due to assistance from collaborators. Current goals are to easily include new forms of data as they become available and to easily navigate through the ensemble of objects described within the database. This report comments on progress made in these areas.

  15. Databases

    Directory of Open Access Journals (Sweden)

    Nick Ryan

    2004-01-01

    Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.

  16. Current status of system development to provide databases of nuclides migration

    International Nuclear Information System (INIS)

    Sasamoto, Hiroshi; Yoshida, Yasushi; Isogai, Takeshi; Suyama, Tadahiro; Shibata, Masahiro; Yui, Mikazu; Jintoku, Takashi

    2005-01-01

    JNC has developed databases of nuclides migration for the safety assessment of high-level radioactive waste (HLW) repositories; they were used in the second progress report to demonstrate the technical reliability of the HLW geological disposal system in Japan. The technical level and applicability of the databases have been highly evaluated, even overseas. To provide the databases broadly over the world and to promote their use, we have done the following: 1) developed tools to convert the database format from the geochemical code PHREEQE to PHREEQC, GWB and EQ3/6, and 2) set up a web site (http://migrationdb.jnc.go.jp) which enables the public to access the databases. As a result, the number of database users has significantly increased. Additionally, a number of useful comments from the users can be applied to the modification and/or update of the databases. (author)

  17. Building an integrated neurodegenerative disease database at an academic health center.

    Science.gov (United States)

    Xie, Sharon X; Baek, Young; Grossman, Murray; Arnold, Steven E; Karlawish, Jason; Siderowf, Andrew; Hurtig, Howard; Elman, Lauren; McCluskey, Leo; Van Deerlin, Vivianna; Lee, Virginia M-Y; Trojanowski, John Q

    2011-07-01

    It is becoming increasingly important to study common and distinct etiologies, clinical and pathological features, and mechanisms related to neurodegenerative diseases such as Alzheimer's disease, Parkinson's disease, amyotrophic lateral sclerosis, and frontotemporal lobar degeneration. These comparative studies rely on powerful database tools to quickly generate data sets that match the diverse and complementary criteria set by the investigators. In this article, we present a novel integrated neurodegenerative disease (INDD) database, which was developed at the University of Pennsylvania (Penn) with the help of a consortium of Penn investigators. Because the work of these investigators is based on Alzheimer's disease, Parkinson's disease, amyotrophic lateral sclerosis, and frontotemporal lobar degeneration, we were able to achieve the goal of developing an INDD database for these major neurodegenerative disorders. We used Microsoft SQL Server as the platform, with built-in "backwards" functionality to provide Access as a frontend client to interface with the database. We used the PHP Hypertext Preprocessor to create the web frontend and then used a master lookup table to integrate the individual neurodegenerative disease databases. We also present methods of data entry, database security, database backups, and database audit trails for this INDD database. Using the INDD database, we compared the results of a biomarker study with those obtained by querying the individual databases separately. We have demonstrated that the Penn INDD database has the ability to query multiple database tables from a single console with high accuracy and reliability. The INDD database provides a powerful tool for generating data sets in comparative studies on several neurodegenerative diseases. Copyright © 2011 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.

  18. A database of immunoglobulins with integrated tools: DIGIT.

    KAUST Repository

    Chailyan, Anna; Tramontano, Anna; Marcatili, Paolo

    2011-01-01

    The DIGIT (Database of ImmunoGlobulins with Integrated Tools) database (http://biocomputing.it/digit) is an integrated resource storing sequences of annotated immunoglobulin variable domains and enriched with tools for searching and analyzing them. The annotations in the database include information on the type of antigen, the respective germline sequences and on pairing information between light and heavy chains. Other annotations, such as the identification of the complementarity determining regions, assignment of their structural class and identification of mutations with respect to the germline, are computed on the fly and can also be obtained for user-submitted sequences. The system allows customized BLAST searches and automatic building of 3D models of the domains to be performed.

  20. Determinants of patient loyalty to healthcare providers: An integrative review.

    Science.gov (United States)

    Zhou, Wei-Jiao; Wan, Qiao-Qin; Liu, Cong-Ying; Feng, Xiao-Lin; Shang, Shao-Mei

    2017-08-01

    Patient loyalty is key to business success for healthcare providers and also for patient health outcomes. This study aims to identify determinants influencing patient loyalty to healthcare providers and propose an integrative conceptual model of the influencing factors. PubMed, CINAHL, OVID, ProQuest and Elsevier Science Direct databases were searched. Publications about determinants of patient loyalty to health providers were screened, and 13 articles were included. Date of publication, location of the research, sample details, objectives and findings/conclusions were extracted for 13 articles. Thirteen studies explored eight determinants: satisfaction, quality, value, hospital brand image, trust, commitment, organizational citizenship behavior and customer complaints. The integrated conceptual model comprising all the determinants demonstrated the significant positive direct impact of quality on satisfaction and value, satisfaction on trust and commitment, trust on commitment and loyalty, and brand image on quality and loyalty. This review identifies and models the determinants of patient loyalty to healthcare providers. Further studies are needed to explore the influence of trust, commitment, and switching barriers on patient loyalty. © The Author 2017. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  1. USDA food and nutrient databases provide the infrastructure for food and nutrition research, policy, and practice.

    Science.gov (United States)

    Ahuja, Jaspreet K C; Moshfegh, Alanna J; Holden, Joanne M; Harris, Ellen

    2013-02-01

    The USDA food and nutrient databases provide the basic infrastructure for food and nutrition research, nutrition monitoring, policy, and dietary practice. They have had a long history that goes back to 1892 and are unique, as they are the only databases available in the public domain that perform these functions. There are 4 major food and nutrient databases released by the Beltsville Human Nutrition Research Center (BHNRC), part of the USDA's Agricultural Research Service. These include the USDA National Nutrient Database for Standard Reference, the Dietary Supplement Ingredient Database, the Food and Nutrient Database for Dietary Studies, and the USDA Food Patterns Equivalents Database. The users of the databases are diverse and include federal agencies, the food industry, health professionals, restaurants, software application developers, academia and research organizations, international organizations, and foreign governments, among others. Many of these users have partnered with BHNRC to leverage funds and/or scientific expertise to work toward common goals. The use of the databases has increased tremendously in the past few years, especially the breadth of uses. These new uses of the data are bound to increase with the increased availability of technology and public health emphasis on diet-related measures such as sodium and energy reduction. Hence, continued improvement of the databases is important, so that they can better address these challenges and provide reliable and accurate data.

  2. Brassica ASTRA: an integrated database for Brassica genomic research.

    Science.gov (United States)

    Love, Christopher G; Robinson, Andrew J; Lim, Geraldine A C; Hopkins, Clare J; Batley, Jacqueline; Barker, Gary; Spangenberg, German C; Edwards, David

    2005-01-01

    Brassica ASTRA is a public database for genomic information on Brassica species. The database incorporates expressed sequences with Swiss-Prot and GenBank comparative sequence annotation, as well as secondary Gene Ontology (GO) annotation derived from comparison with Arabidopsis TAIR GO annotations. Simple sequence repeat molecular markers are identified within resident sequences and mapped onto the closely related Arabidopsis genome sequence. Bacterial artificial chromosome (BAC) end sequences derived from the Multinational Brassica Genome Project are also mapped onto the Arabidopsis genome sequence, enabling users to identify candidate Brassica BACs corresponding to syntenic regions of Arabidopsis. This information is maintained in a MySQL database, with a web interface providing the primary means of interrogation. The database is accessible at http://hornbill.cspp.latrobe.edu.au.
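
The simple sequence repeat (SSR) identification step mentioned above can be illustrated with a small regex-based scan; the motif-length and repeat-count thresholds here are assumptions for the example, not the pipeline's actual settings:

```python
import re

# Illustrative SSR finder: report motifs of 2-6 bp repeated at least
# min_repeats times in tandem. Thresholds are example values only.
def find_ssrs(seq, min_unit=2, max_unit=6, min_repeats=4):
    pattern = re.compile(
        r"((?P<unit>[ACGT]{%d,%d}?)(?P=unit){%d,})"
        % (min_unit, max_unit, min_repeats - 1)
    )
    # Each hit: (start position, repeat unit, full repeat tract).
    return [(m.start(), m.group("unit"), m.group(1))
            for m in pattern.finditer(seq)]

print(find_ssrs("TTGCAGAGAGAGAGAGCCT"))  # [(4, 'AG', 'AGAGAGAGAGAG')]
```

In a marker pipeline, each reported tract would then be checked for suitable flanking sequence before primer design.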

  3. The Center for Integrated Molecular Brain Imaging (Cimbi) database

    DEFF Research Database (Denmark)

    Knudsen, Gitte M.; Jensen, Peter S.; Erritzoe, David

    2016-01-01

    We here describe a multimodality neuroimaging database containing data from healthy volunteers and patients, acquired within the Lundbeck Foundation Center for Integrated Molecular Brain Imaging (Cimbi) in Copenhagen, Denmark. The data is of particular relevance for neurobiological research questions rela...... currently contains blood and in some instances saliva samples from about 500 healthy volunteers and 300 patients with e.g., major depression, dementia, substance abuse, obesity, and impulsive aggression. Data continue to be added to the Cimbi database and biobank.

  4. An information integration system for structured documents, Web, and databases

    OpenAIRE

    Morishima, Atsuyuki

    1998-01-01

    Rapid advance in computer network technology has changed the style of computer utilization. Distributed computing resources over world-wide computer networks are available from our local computers. They include powerful computers and a variety of information sources. This change is raising more advanced requirements. Integration of distributed information sources is one of such requirements. In addition to conventional databases, structured documents have been widely used, and have increasing...

  5. CTDB: An Integrated Chickpea Transcriptome Database for Functional and Applied Genomics

    OpenAIRE

    Verma, Mohit; Kumar, Vinay; Patel, Ravi K.; Garg, Rohini; Jain, Mukesh

    2015-01-01

    Chickpea is an important grain legume used as a rich source of protein in human diet. The narrow genetic diversity and limited availability of genomic resources are the major constraints in implementing breeding strategies and biotechnological interventions for genetic enhancement of chickpea. We developed an integrated Chickpea Transcriptome Database (CTDB), which provides the comprehensive web interface for visualization and easy retrieval of transcriptome data in chickpea. The database fea...

  6. Distortion-Free Watermarking Approach for Relational Database Integrity Checking

    Directory of Open Access Journals (Sweden)

    Lancine Camara

    2014-01-01

    Nowadays, the internet is becoming a common way of accessing databases. Such data are exposed to various types of attacks that aim to confuse ownership proofing or content protection. In this paper, we propose a new approach based on fragile zero watermarking for the authentication of numeric relational data. Contrary to some previous database watermarking techniques, which introduce distortions into the original database and may not preserve the data usability constraints, our approach simply generates the watermark from the original database. First, the adopted method partitions the database relation into independent square matrix groups. Then, group-based watermarks are securely generated and registered with a trusted third party. Integrity verification is performed by computing the determinant and the minor of the diagonal for each group. As a result, tampering can be localized down to the attribute group level. Theoretical and experimental results demonstrate that the proposed technique is resilient against tuple insertion, tuple deletion, and attribute value modification attacks. Furthermore, comparison with a recent related effort shows that our scheme performs better in detecting multifaceted attacks.
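
The determinant-based integrity check described above can be sketched as follows; the grouping, the choice of minor, and the data are simplifications of the published scheme, shown only to illustrate why any value change in a group perturbs the registered watermark:

```python
from fractions import Fraction

def det(matrix):
    """Exact determinant via fraction-based Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in matrix]
    n, sign, result = len(m), 1, Fraction(1)
    for col in range(n):
        pivot = next((r for r in range(col, n) if m[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:
            m[col], m[pivot] = m[pivot], m[col]
            sign = -sign
        result *= m[col][col]
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            for c in range(col, n):
                m[r][c] -= factor * m[col][c]
    return sign * result

def group_watermark(group):
    """Watermark for one square matrix group: its determinant plus the
    minor obtained by deleting the last row and column (a diagonal minor)."""
    minor = [row[:-1] for row in group[:-1]]
    return det(group), det(minor)

original = [[1, 2], [3, 5]]
registered = group_watermark(original)   # stored with a trusted third party
tampered = [[1, 2], [3, 6]]              # one attribute value modified
print(group_watermark(tampered) == registered)  # False: tampering localized to this group
```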

  7. Integrated database for rapid mass movements in Norway

    Directory of Open Access Journals (Sweden)

    C. Jaedicke

    2009-03-01

    Rapid gravitational slope mass movements include all kinds of short-term relocation of geological material, snow or ice. Traditionally, information about such events is collected separately in different databases covering selected geographical regions and types of movement. In Norway the terrain is susceptible to all types of rapid gravitational slope mass movements, ranging from single rocks hitting roads and houses to large snow avalanches and rock slides where entire mountainsides collapse into fjords, creating flood waves and endangering large areas. In addition, quick clay slides occur in desalinated marine sediments in south-eastern and mid Norway. For the authorities and inhabitants of endangered areas, the type of threat is of minor importance, and mitigation measures have to consider several types of rapid mass movements simultaneously.

    An integrated national database for all types of rapid mass movements built around individual events has been established. Only three data entries are mandatory: time, location and type of movement. The remaining optional parameters enable recording of detailed information about the terrain, materials involved and damages caused. Pictures, movies and other documentation can be uploaded into the database. A web-based graphical user interface has been developed allowing new events to be entered, as well as editing and querying for all events. An integration of the database into a GIS system is currently under development.

    Datasets from various national sources like the road authorities and the Geological Survey of Norway were imported into the database. Today, the database contains 33 000 rapid mass movement events from the last five hundred years covering the entire country. A first analysis of the data shows that the most frequent type of recorded rapid mass movement is rock slides and snow avalanches followed by debris slides in third place. Most events are recorded in the steep fjord

  8. A perspective for biomedical data integration: Design of databases for flow cytometry

    Directory of Open Access Journals (Sweden)

    Lakoumentas John

    2008-02-01

    Background: The integration of biomedical information is essential for tackling medical problems. We describe a data model in the domain of flow cytometry (FC) allowing for massive management, analysis and integration with other laboratory and clinical information. The paper is concerned with the proper translation of the Flow Cytometry Standard (FCS) into a relational database schema, in a way that assists end users in either doing research on FC or studying specific cases of patients who have undergone FC analysis. Results: The proposed database schema provides integration of data originating from diverse acquisition settings, organized in a way that allows syntactically simple queries that provide results significantly faster than conventional implementations of the FCS standard. The proposed schema can potentially achieve up to 8 orders of magnitude reduction in query complexity and up to 2 orders of magnitude reduction in response time for data originating from flow cytometers that record 256 colours. This is mainly achieved by maintaining an almost constant number of data-mining procedures regardless of the size and complexity of the stored information. Conclusion: It is evident that using single-file data storage standards for the design of databases without any structural transformations significantly limits the flexibility of databases. Analysis of the requirements of a specific domain for integration and massive data processing can provide the necessary schema modifications that will unlock the additional functionality of a relational database.
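
The kind of structural transformation argued for above can be illustrated with a toy schema (the published schema differs in detail): instead of treating each FCS file as an opaque unit, events are unpivoted into one row per (event, channel), so queries stay syntactically simple regardless of how many colours the cytometer records:

```python
import sqlite3

# Toy long-format schema for flow cytometry events: one row per
# (sample, event, channel) measurement. Illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE measurement (
    sample_id INTEGER, event_id INTEGER, channel TEXT, value REAL)""")

# Two events from a hypothetical 3-colour acquisition.
rows = [(1, 1, "FSC", 512.0), (1, 1, "SSC", 301.0), (1, 1, "FL1", 88.0),
        (1, 2, "FSC", 498.0), (1, 2, "SSC", 310.0), (1, 2, "FL1", 91.0)]
conn.executemany("INSERT INTO measurement VALUES (?, ?, ?, ?)", rows)

# A syntactically simple query: mean FL1 intensity per sample. The same
# query shape works whether the instrument records 3 or 256 channels.
mean_fl1 = conn.execute("""
    SELECT AVG(value) FROM measurement
    WHERE channel = 'FL1' GROUP BY sample_id
""").fetchone()[0]
print(mean_fl1)  # 89.5
```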

  9. CyanOmics: an integrated database of omics for the model cyanobacterium Synechococcus sp. PCC 7002.

    Science.gov (United States)

    Yang, Yaohua; Feng, Jie; Li, Tao; Ge, Feng; Zhao, Jindong

    2015-01-01

    Cyanobacteria are an important group of organisms that carry out oxygenic photosynthesis and play vital roles in both the carbon and nitrogen cycles of the Earth. The annotated genome of Synechococcus sp. PCC 7002, as an ideal model cyanobacterium, is available. A series of transcriptomic and proteomic studies of Synechococcus sp. PCC 7002 cells grown under different conditions have been reported. However, no database of such integrated omics studies has been constructed. Here we present CyanOmics, a database based on the results of Synechococcus sp. PCC 7002 omics studies. CyanOmics comprises one genomic dataset, 29 transcriptomic datasets and one proteomic dataset and should prove useful for systematic and comprehensive analysis of all those data. Powerful browsing and searching tools are integrated to help users directly access information of interest with enhanced visualization of the analytical results. Furthermore, Blast is included for sequence-based similarity searching and Cluster 3.0, as well as the R hclust function is provided for cluster analyses, to increase CyanOmics's usefulness. To the best of our knowledge, it is the first integrated omics analysis database for cyanobacteria. This database should further understanding of the transcriptional patterns, and proteomic profiling of Synechococcus sp. PCC 7002 and other cyanobacteria. Additionally, the entire database framework is applicable to any sequenced prokaryotic genome and could be applied to other integrated omics analysis projects. Database URL: http://lag.ihb.ac.cn/cyanomics. © The Author(s) 2015. Published by Oxford University Press.

  10. Techniques to Access Databases and Integrate Data for Hydrologic Modeling

    International Nuclear Information System (INIS)

    Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.; Coleman, Andre M.; Ward, Duane L.; Droppo, James G.; Meyer, Philip D.; Dorow, Kevin E.; Taira, Randal Y.

    2009-01-01

    This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and

  12. Integration of curated databases to identify genotype-phenotype associations

    Directory of Open Access Journals (Sweden)

    Li Jianrong

    2006-10-01

    Background: The ability to rapidly characterize an unknown microorganism is critical in both responding to infectious disease and biodefense. To do this, we need some way of anticipating an organism's phenotype based on the molecules encoded by its genome. However, the link between molecular composition (i.e., genotype) and phenotype for microbes is not obvious. While there have been several studies that address this challenge, none have yet proposed a large-scale method integrating curated biological information. Here we utilize a systematic approach to discover genotype-phenotype associations that combines phenotypic information from a biomedical informatics database, GIDEON, with the molecular information contained in the National Center for Biotechnology Information's Clusters of Orthologous Groups database (NCBI COGs). Results: Integrating the information in the two databases, we are able to correlate the presence or absence of a given protein in a microbe with its phenotype as measured by certain morphological characteristics or survival in a particular growth medium. With a 0.8 correlation score threshold, 66% of the associations found were confirmed by the literature, and at a 0.9 correlation threshold, 86% were positively verified. Conclusion: Our results suggest possible phenotypic manifestations for proteins biochemically associated with sugar metabolism and electron transport. Moreover, we believe our approach can be extended to link pathogenic phenotypes with functionally related proteins.
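
The correlation step described above can be sketched as follows; the phi coefficient (Pearson r on binary data) and the toy presence/absence profiles are illustrative assumptions, not the paper's exact scoring method:

```python
from math import sqrt

def phi(x, y):
    """Phi coefficient between two binary vectors (Pearson r on 0/1 data)."""
    n = len(x)
    n11 = sum(a and b for a, b in zip(x, y))
    n1_, n_1 = sum(x), sum(y)
    num = n * n11 - n1_ * n_1
    den = sqrt(n1_ * (n - n1_) * n_1 * (n - n_1))
    return num / den if den else 0.0

# Phenotype across six microbes (e.g. survival in a given growth medium)
# and presence/absence of two hypothetical protein families (COGs).
phenotype = [1, 1, 1, 0, 0, 0]
cogs = {"COG0001": [1, 1, 1, 0, 0, 0],   # perfectly associated
        "COG0002": [1, 0, 1, 0, 1, 0]}   # uninformative

# Keep associations above the 0.8 score threshold used in the study.
hits = {cog: phi(presence, phenotype)
        for cog, presence in cogs.items()
        if abs(phi(presence, phenotype)) >= 0.8}
print(hits)  # {'COG0001': 1.0}
```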

  13. Development of integrated parameter database for risk assessment at the Rokkasho Reprocessing Plant

    International Nuclear Information System (INIS)

    Tamauchi, Yoshikazu

    2011-01-01

    A study to develop a parameter database for Probabilistic Safety Assessment (PSA) is important for applying risk information to plant operation and maintenance activities, because the transparency, consistency, and traceability of parameters are needed to explain the adequacy of the evaluation to third parties. For such applications, equipment reliability data, human error rates, and the five factors of the 'five-factor formula' for estimating the amount of radioactive material discharged (the source term) are key inputs. As part of the infrastructure development for risk information application, we developed an integrated parameter database, 'R-POD' (Rokkasho reprocessing Plant Omnibus parameter Database), on a trial basis for the PSA of the Rokkasho Reprocessing Plant. This database consists primarily of the following three parts: 1) an equipment reliability database, 2) a five-factor formula database, and 3) a human reliability database. The underpinning for explaining the validity of the risk assessment can be improved by developing this database. Furthermore, this database is an important tool for the application of risk information, because it provides updated data by incorporating the accumulated operating experience of the Rokkasho reprocessing plant. (author)

  14. An object-oriented language-database integration model: The composition filters approach

    NARCIS (Netherlands)

    Aksit, Mehmet; Bergmans, Lodewijk; Vural, Sinan; Vural, S.

    1991-01-01

    This paper introduces a new model, based on so-called object-composition filters, that uniformly integrates database-like features into an object-oriented language. The focus is on providing persistent dynamic data structures, data sharing, transactions, multiple views and associative access,

  15. Document control system as an integral part of RA documentation database application

    International Nuclear Information System (INIS)

    Steljic, M.M.; Ljubenov, V.Lj. (E-mail address of corresponding author: milijanas@vin.bg.ac.yu)

    2005-01-01

    The decision to permanently shut down the RA research reactor at the Vinca Institute was made in 2002, and preparations for its decommissioning have therefore begun. All activities are supervised by the International Atomic Energy Agency (IAEA), which also provides technical and expert support. This paper describes the document control system, which is an integral part of the existing RA documentation database. (author)

  16. An Object-Oriented Language-Database Integration Model: The Composition-Filters Approach

    NARCIS (Netherlands)

    Aksit, Mehmet; Bergmans, Lodewijk; Vural, S.; Vural, Sinan; Lehrmann Madsen, O.

    1992-01-01

    This paper introduces a new model, based on so-called object-composition filters, that uniformly integrates database-like features into an object-oriented language. The focus is on providing persistent dynamic data structures, data sharing, transactions, multiple views and associative access,

  17. GDR (Genome Database for Rosaceae: integrated web resources for Rosaceae genomics and genetics research

    Directory of Open Access Journals (Sweden)

    Ficklin Stephen

    2004-09-01

    Full Text Available Abstract Background Peach is being developed as a model organism for Rosaceae, an economically important family that includes fruits and ornamental plants such as apple, pear, strawberry, cherry, almond and rose. The genomics and genetics data of peach can play a significant role in gene discovery and the genetic understanding of related species. The effective utilization of these peach resources, however, requires the development of an integrated and centralized database with associated analysis tools. Description The Genome Database for Rosaceae (GDR) is a curated and integrated web-based relational database. GDR contains comprehensive data of the genetically anchored peach physical map, an annotated peach EST database, Rosaceae maps and markers and all publicly available Rosaceae sequences. Annotations of ESTs include contig assembly, putative function, simple sequence repeats, and anchored position to the peach physical map where applicable. Our integrated map viewer provides a graphical interface to the genetic, transcriptome and physical mapping information. ESTs, BACs and markers can be queried by various categories and the search result sites are linked to the integrated map viewer or to the WebFPC physical map sites. In addition to browsing and querying the database, users can compare their sequences with the annotated GDR sequences via a dedicated sequence similarity server running either the BLAST or FASTA algorithm. To demonstrate the utility of the integrated and fully annotated database and analysis tools, we describe a case study where we anchored Rosaceae sequences to the peach physical and genetic map by sequence similarity. Conclusions The GDR has been initiated to meet the major deficiency in Rosaceae genomics and genetics research, namely a centralized web database and bioinformatics tools for data storage, analysis and exchange. GDR can be accessed at http://www.genome.clemson.edu/gdr/.

  18. GDR (Genome Database for Rosaceae): integrated web resources for Rosaceae genomics and genetics research.

    Science.gov (United States)

    Jung, Sook; Jesudurai, Christopher; Staton, Margaret; Du, Zhidian; Ficklin, Stephen; Cho, Ilhyung; Abbott, Albert; Tomkins, Jeffrey; Main, Dorrie

    2004-09-09

    Peach is being developed as a model organism for Rosaceae, an economically important family that includes fruits and ornamental plants such as apple, pear, strawberry, cherry, almond and rose. The genomics and genetics data of peach can play a significant role in gene discovery and the genetic understanding of related species. The effective utilization of these peach resources, however, requires the development of an integrated and centralized database with associated analysis tools. The Genome Database for Rosaceae (GDR) is a curated and integrated web-based relational database. GDR contains comprehensive data of the genetically anchored peach physical map, an annotated peach EST database, Rosaceae maps and markers and all publicly available Rosaceae sequences. Annotations of ESTs include contig assembly, putative function, simple sequence repeats, and anchored position to the peach physical map where applicable. Our integrated map viewer provides a graphical interface to the genetic, transcriptome and physical mapping information. ESTs, BACs and markers can be queried by various categories and the search result sites are linked to the integrated map viewer or to the WebFPC physical map sites. In addition to browsing and querying the database, users can compare their sequences with the annotated GDR sequences via a dedicated sequence similarity server running either the BLAST or FASTA algorithm. To demonstrate the utility of the integrated and fully annotated database and analysis tools, we describe a case study where we anchored Rosaceae sequences to the peach physical and genetic map by sequence similarity. The GDR has been initiated to meet the major deficiency in Rosaceae genomics and genetics research, namely a centralized web database and bioinformatics tools for data storage, analysis and exchange. GDR can be accessed at http://www.genome.clemson.edu/gdr/.

  19. IPAD: the Integrated Pathway Analysis Database for Systematic Enrichment Analysis.

    Science.gov (United States)

    Zhang, Fan; Drabier, Renee

    2012-01-01

    Next-Generation Sequencing (NGS) technologies and Genome-Wide Association Studies (GWAS) generate millions of reads and hundreds of datasets, and there is an urgent need for a better way to accurately interpret and distill such large amounts of data. Extensive pathway and network analysis allow for the discovery of highly significant pathways from a set of disease vs. healthy samples in the NGS and GWAS. Knowledge of activation of these processes will lead to elucidation of the complex biological pathways affected by drug treatment, to patient stratification studies of new and existing drug treatments, and to understanding the underlying anti-cancer drug effects. There are approximately 141 biological human pathway resources as of Jan 2012 according to the Pathguide database. However, most currently available resources do not contain disease, drug or organ specificity information such as disease-pathway, drug-pathway, and organ-pathway associations. Systematically integrating pathway, disease, drug and organ specificity together becomes increasingly crucial for understanding the interrelationships between signaling, metabolic and regulatory pathway, drug action, disease susceptibility, and organ specificity from high-throughput omics data (genomics, transcriptomics, proteomics and metabolomics). We designed the Integrated Pathway Analysis Database for Systematic Enrichment Analysis (IPAD, http://bioinfo.hsc.unt.edu/ipad), defining inter-association between pathway, disease, drug and organ specificity, based on six criteria: 1) comprehensive pathway coverage; 2) gene/protein to pathway/disease/drug/organ association; 3) inter-association between pathway, disease, drug, and organ; 4) multiple and quantitative measurement of enrichment and inter-association; 5) assessment of enrichment and inter-association analysis with the context of the existing biological knowledge and a "gold standard" constructed from reputable and reliable sources; and 6) cross-linking of

  20. ChlamyCyc: an integrative systems biology database and web-portal for Chlamydomonas reinhardtii

    Directory of Open Access Journals (Sweden)

    Kempa Stefan

    2009-05-01

    Full Text Available Abstract Background The unicellular green alga Chlamydomonas reinhardtii is an important eukaryotic model organism for the study of photosynthesis and plant growth. In the era of modern high-throughput technologies there is an imperative need to integrate large-scale data sets from high-throughput experimental techniques using computational methods and database resources to provide comprehensive information about the molecular and cellular organization of a single organism. Results In the framework of the German Systems Biology initiative GoFORSYS, a pathway database and web-portal for Chlamydomonas (ChlamyCyc) was established, which currently features about 250 metabolic pathways with associated genes, enzymes, and compound information. ChlamyCyc was assembled using an integrative approach combining the recently published genome sequence, bioinformatics methods, and experimental data from metabolomics and proteomics experiments. We analyzed and integrated a combination of primary and secondary database resources, such as existing genome annotations from JGI, EST collections, orthology information, and MapMan classification. Conclusion ChlamyCyc provides a curated and integrated systems biology repository that will enable and assist in systematic studies of fundamental cellular processes in Chlamydomonas. The ChlamyCyc database and web-portal is freely available under http://chlamycyc.mpimp-golm.mpg.de.

  1. ChlamyCyc: an integrative systems biology database and web-portal for Chlamydomonas reinhardtii.

    Science.gov (United States)

    May, Patrick; Christian, Jan-Ole; Kempa, Stefan; Walther, Dirk

    2009-05-04

    The unicellular green alga Chlamydomonas reinhardtii is an important eukaryotic model organism for the study of photosynthesis and plant growth. In the era of modern high-throughput technologies there is an imperative need to integrate large-scale data sets from high-throughput experimental techniques using computational methods and database resources to provide comprehensive information about the molecular and cellular organization of a single organism. In the framework of the German Systems Biology initiative GoFORSYS, a pathway database and web-portal for Chlamydomonas (ChlamyCyc) was established, which currently features about 250 metabolic pathways with associated genes, enzymes, and compound information. ChlamyCyc was assembled using an integrative approach combining the recently published genome sequence, bioinformatics methods, and experimental data from metabolomics and proteomics experiments. We analyzed and integrated a combination of primary and secondary database resources, such as existing genome annotations from JGI, EST collections, orthology information, and MapMan classification. ChlamyCyc provides a curated and integrated systems biology repository that will enable and assist in systematic studies of fundamental cellular processes in Chlamydomonas. The ChlamyCyc database and web-portal is freely available under http://chlamycyc.mpimp-golm.mpg.de.

  2. Dynamically Integrating OSM Data into a Borderland Database

    Directory of Open Access Journals (Sweden)

    Xiaoguang Zhou

    2015-09-01

    Full Text Available Spatial data are fundamental for borderland analyses of geography, natural resources, demography, politics, economy, and culture. As the spatial data used in borderland research usually cover the borderland regions of several neighboring countries, it is difficult for anyone research institution of government to collect them. Volunteered Geographic Information (VGI is a highly successful method for acquiring timely and detailed global spatial data at a very low cost. Therefore, VGI is a reasonable source of borderland spatial data. OpenStreetMap (OSM is known as the most successful VGI resource. However, OSM's data model is far different from the traditional geographic information model. Thus, the OSM data must be converted in the scientist’s customized data model. Because the real world changes rapidly, the converted data must be updated incrementally. Therefore, this paper presents a method used to dynamically integrate OSM data into the borderland database. In this method, a basic transformation rule base is formed by comparing the OSM Map Feature description document and the destination model definitions. Using the basic rules, the main features can be automatically converted to the destination model. A human-computer interaction model transformation and a rule/automatic-remember mechanism are developed to interactively transfer the unusual features that cannot be transferred by the basic rules to the target model and to remember the reusable rules automatically. To keep the borderland database current, the global OsmChange daily diff file is used to extract the change-only information for the research region. To extract the changed objects in the region under study, the relationship between the changed object and the research region is analyzed considering the evolution of the involved objects. In addition, five rules are determined to select the objects and integrate the changed objects with multi-versions over time. The objects

  3. Deep Time Data Infrastructure: Integrating Our Current Geologic and Biologic Databases

    Science.gov (United States)

    Kolankowski, S. M.; Fox, P. A.; Ma, X.; Prabhu, A.

    2016-12-01

    As our knowledge of Earth's geologic and mineralogical history grows, we require more efficient methods of sharing immense amounts of data. Databases across numerous disciplines have been utilized to offer extensive information on very specific epochs of Earth's history up to its current state, e.g. fossil records, rock composition, proteins, etc. These databases could be a powerful force in identifying previously unseen correlations such as relationships between minerals and proteins. Creating a unifying site that provides a portal to these databases will aid our ability as a collaborative scientific community to utilize our findings more effectively. The Deep-Time Data Infrastructure (DTDI) is currently being defined as part of a larger effort to accomplish this goal. DTDI will not be a new database, but an integration of existing resources. Current geologic and related databases were identified, and documentation of their schemas was established and will be presented as a stage-by-stage progression. Through conceptual modeling focused on variables from their combined records, we will determine the best way to integrate these databases using common factors. The Deep-Time Data Infrastructure will allow geoscientists to bridge gaps in data and further our understanding of our Earth's history.

  4. CTDB: An Integrated Chickpea Transcriptome Database for Functional and Applied Genomics.

    Directory of Open Access Journals (Sweden)

    Mohit Verma

    Full Text Available Chickpea is an important grain legume used as a rich source of protein in the human diet. The narrow genetic diversity and limited availability of genomic resources are the major constraints in implementing breeding strategies and biotechnological interventions for genetic enhancement of chickpea. We developed an integrated Chickpea Transcriptome Database (CTDB), which provides a comprehensive web interface for visualization and easy retrieval of transcriptome data in chickpea. The database features many tools for similarity search, functional annotation (putative function, PFAM domain and gene ontology search) and comparative gene expression analysis. The current release of CTDB (v2.0) hosts transcriptome datasets with high quality functional annotation from cultivated (desi and kabuli) types and wild chickpea. A catalog of transcription factor families and their expression profiles in chickpea are available in the database. The gene expression data have been integrated to study the expression profiles of chickpea transcripts in major tissues/organs and various stages of flower development. The utilities, such as similarity search, ortholog identification and comparative gene expression have also been implemented in the database to facilitate comparative genomic studies among different legumes and Arabidopsis. Furthermore, the CTDB represents a resource for the discovery of functional molecular markers (microsatellites and single nucleotide polymorphisms) between different chickpea types. We anticipate that the integrated information content of this database will accelerate functional and applied genomic research for the improvement of chickpea. The CTDB web service is freely available at http://nipgr.res.in/ctdb.html.

  5. Critical assessment of human metabolic pathway databases: a stepping stone for future integration

    Directory of Open Access Journals (Sweden)

    Stobbe Miranda D

    2011-10-01

    Full Text Available Abstract Background Multiple pathway databases are available that describe the human metabolic network and have proven their usefulness in many applications, ranging from the analysis and interpretation of high-throughput data to their use as a reference repository. However, so far the various human metabolic networks described by these databases have not been systematically compared and contrasted, nor has the extent to which they differ been quantified. For a researcher using these databases for particular analyses of human metabolism, it is crucial to know the extent of the differences in content and their underlying causes. Moreover, the outcomes of such a comparison are important for ongoing integration efforts. Results We compared the genes, EC numbers and reactions of five frequently used human metabolic pathway databases. The overlap is surprisingly low, especially on reaction level, where the databases agree on 3% of the 6968 reactions they have combined. Even for the well-established tricarboxylic acid cycle the databases agree on only 5 out of the 30 reactions in total. We identified the main causes for the lack of overlap. Importantly, the databases are partly complementary. Other explanations include the number of steps a conversion is described in and the number of possible alternative substrates listed. Missing metabolite identifiers and ambiguous names for metabolites also affect the comparison. Conclusions Our results show that each of the five networks compared provides us with a valuable piece of the puzzle of the complete reconstruction of the human metabolic network. To enable integration of the networks, next to a need for standardizing the metabolite names and identifiers, the conceptual differences between the databases should be resolved. Considerable manual intervention is required to reach the ultimate goal of a unified and biologically accurate model for studying the systems biology of human metabolism. 
Our comparison
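
    The overlap figures quoted above amount to comparing the intersection of the databases' reaction sets against their union. A toy sketch with made-up reaction identifiers:

```python
# Reaction sets per pathway database (identifiers are invented)
dbs = {
    "A": {"r1", "r2", "r3"},
    "B": {"r1", "r2", "r4"},
    "C": {"r1", "r5"},
}

union = set().union(*dbs.values())          # all reactions, combined
shared = set.intersection(*dbs.values())    # reactions every database lists
agreement = len(shared) / len(union)
print("%d of %d reactions shared (%.0f%%)" % (len(shared), len(union), 100 * agreement))
# → 1 of 5 reactions shared (20%)
```

    The hard part in practice, as the abstract notes, is deciding when two reactions are "the same" despite differing step granularity, alternative substrates, and ambiguous metabolite names; the set arithmetic itself is trivial.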

  6. KaBOB: ontology-based semantic integration of biomedical databases.

    Science.gov (United States)

    Livingston, Kevin M; Bada, Michael; Baumgartner, William A; Hunter, Lawrence E

    2015-04-23

    The ability to query many independent biological databases using a common ontology-based semantic model would facilitate deeper integration and more effective utilization of these diverse and rapidly growing resources. Despite ongoing work moving toward shared data formats and linked identifiers, significant problems persist in semantic data integration in order to establish shared identity and shared meaning across heterogeneous biomedical data sources. We present five processes for semantic data integration that, when applied collectively, solve seven key problems. These processes include making explicit the differences between biomedical concepts and database records, aggregating sets of identifiers denoting the same biomedical concepts across data sources, and using declaratively represented forward-chaining rules to take information that is variably represented in source databases and integrating it into a consistent biomedical representation. We demonstrate these processes and solutions by presenting KaBOB (the Knowledge Base Of Biomedicine), a knowledge base of semantically integrated data from 18 prominent biomedical databases using common representations grounded in Open Biomedical Ontologies. An instance of KaBOB with data about humans and seven major model organisms can be built using on the order of 500 million RDF triples. All source code for building KaBOB is available under an open-source license. KaBOB is an integrated knowledge base of biomedical data representationally based in prominent, actively maintained Open Biomedical Ontologies, thus enabling queries of the underlying data in terms of biomedical concepts (e.g., genes and gene products, interactions and processes) rather than features of source-specific data schemas or file formats. 
KaBOB resolves many of the issues that routinely plague biomedical researchers intending to work with data from multiple data sources and provides a platform for ongoing data integration and development and for
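
    One KaBOB-style integration step described above, aggregating identifiers from different databases that denote the same biomedical concept, can be sketched with union-find over asserted cross-reference pairs. The identifiers below are illustrative examples, not drawn from the paper.

```python
# Union-find with path halving over identifier cross-references
parent = {}

def find(x):
    """Representative identifier of x's concept."""
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def union(a, b):
    """Record that a and b denote the same concept."""
    parent[find(a)] = find(b)

# Cross-references asserted by source databases (hypothetical examples)
for a, b in [("UniProt:P04637", "HGNC:11998"), ("HGNC:11998", "NCBIGene:7157")]:
    union(a, b)

# All three identifiers now resolve to a single concept representative
roots = {find(i) for i in ("UniProt:P04637", "HGNC:11998", "NCBIGene:7157")}
```

    In KaBOB proper this aggregation is expressed over RDF with ontology-grounded representations; the sketch only shows the shared-identity bookkeeping.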

  7. IMG: the integrated microbial genomes database and comparative analysis system

    Science.gov (United States)

    Markowitz, Victor M.; Chen, I-Min A.; Palaniappan, Krishna; Chu, Ken; Szeto, Ernest; Grechkin, Yuri; Ratner, Anna; Jacob, Biju; Huang, Jinghua; Williams, Peter; Huntemann, Marcel; Anderson, Iain; Mavromatis, Konstantinos; Ivanova, Natalia N.; Kyrpides, Nikos C.

    2012-01-01

    The Integrated Microbial Genomes (IMG) system serves as a community resource for comparative analysis of publicly available genomes in a comprehensive integrated context. IMG integrates publicly available draft and complete genomes from all three domains of life with a large number of plasmids and viruses. IMG provides tools and viewers for analyzing and reviewing the annotations of genes and genomes in a comparative context. IMG's data content and analytical capabilities have been continuously extended through regular updates since its first release in March 2005. IMG is available at http://img.jgi.doe.gov. Companion IMG systems provide support for expert review of genome annotations (IMG/ER: http://img.jgi.doe.gov/er), teaching courses and training in microbial genome analysis (IMG/EDU: http://img.jgi.doe.gov/edu) and analysis of genomes related to the Human Microbiome Project (IMG/HMP: http://www.hmpdacc-resources.org/img_hmp). PMID:22194640

  8. Reactor core materials research and integrated material database establishment

    International Nuclear Information System (INIS)

    Ryu, Woo Seog; Jang, J. S.; Kim, D. W.

    2002-03-01

    Mainly two research areas were covered in this project. One is to establish the integrated database of nuclear materials, and the other is to study the behavior of reactor core materials, which are usually under the most severe conditions in operating plants. During stage I of the project (three years beginning in 1999), the in-reactor and out-of-reactor properties of stainless steel, the major structural material for the core structures of PWRs (Pressurized Water Reactors), were evaluated, and a specification for nuclear-grade material was established. Damaged core components from domestic power plants, e.g. the orifice of the CVCS and the support pin of the CRGT, were investigated and the causes of the damage were identified. To obtain materials more resistant to nuclear environments, development of alternative alloys was also conducted. For the integrated DB establishment, a task force team was set up, including the director of the nuclear materials technology team, project leaders, and relevant members from each project. The DB is now open to the public through the Internet

  9. Data Integration for Spatio-Temporal Patterns of Gene Expression of Zebrafish development: the GEMS database

    Directory of Open Access Journals (Sweden)

    Belmamoune Mounia

    2008-06-01

    Full Text Available The Gene Expression Management System (GEMS) is a database system for patterns of gene expression. These patterns result from systematic whole-mount fluorescent in situ hybridization studies on zebrafish embryos. GEMS is an integrative platform that addresses one of the important challenges of developmental biology: how to integrate genetic data that underpin morphological changes during embryogenesis. Our motivation for building this system was the need to organize and compare multiple patterns of gene expression at the tissue level. Integration with other developmental and biomolecular databases will further support our understanding of development. GEMS operates in concert with a database containing a digital atlas of the zebrafish embryo; this digital atlas of zebrafish development was conceived prior to the expansion of GEMS. The atlas contains 3D volume models of canonical stages of zebrafish development in which each volume model element is annotated with an anatomical term. These terms are extracted from a formal anatomical ontology, i.e. the Developmental Anatomy Ontology of Zebrafish (DAOZ). In GEMS, anatomical terms from this ontology together with terms from the Gene Ontology (GO) are also used to annotate patterns of gene expression, in this manner providing mechanisms for integration and retrieval. The annotations are the glue for integration of patterns of gene expression in GEMS as well as in other biomolecular databases. On the one hand, zebrafish anatomy terminology allows gene expression data within GEMS to be integrated with phenotypical data in the 3D atlas of zebrafish development. On the other hand, GO terms extend GEMS expression pattern integration to a wide range of bioinformatics resources.
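
    The "annotations as glue" idea described above can be sketched as a join on shared ontology terms between expression patterns and atlas volume elements. All term identifiers, gene names, and element names below are hypothetical.

```python
# Expression patterns annotated with anatomical ontology terms (made up)
expression = {
    "shha": {"DAOZ:floor_plate"},
    "pax2a": {"DAOZ:optic_stalk"},
}

# Atlas volume elements annotated with the same ontology's terms (made up)
atlas = {
    "DAOZ:floor_plate": "volume_elem_17",
    "DAOZ:optic_stalk": "volume_elem_42",
}

def locate(gene):
    """Atlas volume elements whose anatomical term annotates the gene's expression."""
    return sorted(atlas[t] for t in expression.get(gene, ()) if t in atlas)
```

    The same join pattern, keyed on GO terms instead of anatomical terms, is what would link GEMS patterns outward to other bioinformatics resources.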

  10. ChlamyCyc: an integrative systems biology database and web-portal for Chlamydomonas reinhardtii

    OpenAIRE

    May, P.; Christian, J.O.; Kempa, S.; Walther, D.

    2009-01-01

    Abstract Background The unicellular green alga Chlamydomonas reinhardtii is an important eukaryotic model organism for the study of photosynthesis and plant growth. In the era of modern high-throughput technologies there is an imperative need to integrate large-scale data sets from high-throughput experimental techniques using computational methods and database resources to provide comprehensive information about the molecular and cellular organization of a single organism. Results In the fra...

  11. Development of Integrated PSA Database and Application Technology

    Energy Technology Data Exchange (ETDEWEB)

    Han, Sang Hoon; Park, Jin Hee; Kim, Seung Hwan; Choi, Sun Yeong; Jung, Woo Sik; Jeong, Kwang Sub; Ha Jae Joo; Yang, Joon Eon; Min Kyung Ran; Kim, Tae Woon

    2005-04-15

    The purpose of this project is to develop 1) a reliability database framework, 2) a methodology for reactor trip and abnormal event analysis, and 3) a prototype PSA information DB system. We already have a part of the reactor trip and component reliability data; in this study, we extend the collection of data up to 2002. We construct a pilot reliability database for common cause failure and piping failure data. A reactor trip or a component failure may have an impact on the safety of a nuclear power plant. We perform precursor analysis for such events that occurred in the KSNP and develop a procedure for the precursor analysis. A risk monitor provides a means to trace the changes in risk following changes in the plant configuration. We develop a methodology incorporating the model of the secondary system related to reactor trip into the risk monitor model. We develop a prototype PSA information system for the UCN 3 and 4 PSA models, into which information for the PSA is entered, such as PSA reports, analysis reports, thermal-hydraulic analysis results, system notebooks, and so on. We develop a unique coherent BDD method to quantify a fault tree and the fastest fault tree quantification engine, FTREX. We develop quantification software for a full PSA model and a one-top model.
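
    Fault-tree quantification of the kind mentioned above can be illustrated with a toy recursive evaluation over AND/OR gates of independent basic events; the actual FTREX engine uses coherent binary decision diagrams (BDDs), which this sketch does not attempt. The tree and probabilities are invented.

```python
def prob(node, p):
    """Probability of a fault-tree node, given basic-event probabilities p.

    A node is either a basic-event name (str) or a tuple
    ("AND"/"OR", child, child, ...), with children assumed independent.
    """
    if isinstance(node, str):
        return p[node]
    op, children = node[0], node[1:]
    probs = [prob(c, p) for c in children]
    if op == "AND":
        out = 1.0
        for q in probs:
            out *= q
        return out
    # OR of independent events: 1 - prod(1 - q_i)
    out = 1.0
    for q in probs:
        out *= 1.0 - q
    return 1.0 - out

tree = ("OR", ("AND", "pump_fails", "valve_fails"), "power_loss")
p = {"pump_fails": 1e-2, "valve_fails": 1e-2, "power_loss": 1e-4}
top = prob(tree, p)  # roughly 2e-4 for these numbers
```

    Real PSA models share basic events across gates, which is exactly why naive tree recursion fails and BDD-based engines like FTREX are needed.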

  12. Brassica database (BRAD) version 2.0: integrating and mining Brassicaceae species genomic resources.

    Science.gov (United States)

    Wang, Xiaobo; Wu, Jian; Liang, Jianli; Cheng, Feng; Wang, Xiaowu

    2015-01-01

    The Brassica database (BRAD) was built initially to assist users in applying Brassica rapa and Arabidopsis thaliana genomic data efficiently in their research. However, many Brassicaceae genomes have been sequenced and released since its construction. These genomes are rich resources for comparative genomics, gene annotation and functional evolutionary studies of Brassica crops. Therefore, we have updated BRAD to version 2.0 (V2.0). In BRAD V2.0, 11 more Brassicaceae genomes have been integrated into the database, namely those of Arabidopsis lyrata, Aethionema arabicum, Brassica oleracea, Brassica napus, Camelina sativa, Capsella rubella, Leavenworthia alabamica, Sisymbrium irio and three extremophiles, Schrenkiella parvula, Thellungiella halophila and Thellungiella salsuginea. BRAD V2.0 provides plots of syntenic genomic fragments between pairs of Brassicaceae species, from the level of chromosomes to genomic blocks. The Generic Synteny Browser (GBrowse_syn), a module of the Genome Browser (GBrowse), is used to show syntenic relationships between multiple genomes. Search functions for retrieving syntenic and non-syntenic orthologs, as well as their annotation and sequences, are also provided. Furthermore, genome and annotation information have been imported into GBrowse so that all functional elements can be visualized in one frame. We plan to continually update BRAD by integrating more Brassicaceae genomes into the database. Database URL: http://brassicadb.org/brad/. © The Author(s) 2015. Published by Oxford University Press.

  13. Databases

    Digital Repository Service at National Institute of Oceanography (India)

    Kunte, P.D.

    Information on bibliographic as well as numeric/textual databases relevant to coastal geomorphology has been included in a tabular form. Databases cover a broad spectrum of related subjects like coastal environment and population aspects, coastline...

  14. Functional integration of automated system databases by means of artificial intelligence

    Science.gov (United States)

    Dubovoi, Volodymyr M.; Nikitenko, Olena D.; Kalimoldayev, Maksat; Kotyra, Andrzej; Gromaszek, Konrad; Iskakova, Aigul

    2017-08-01

    The paper presents approaches for the functional integration of automated system databases by means of artificial intelligence. The peculiarities of exploiting databases in systems that use fuzzy implementations of functions were analyzed. Requirements for the normalization of such databases were defined. The question of data equivalence under uncertainty, and of collisions arising during the functional integration of databases, is considered, and a model to reveal their possible occurrence is devised. The paper also presents an evaluation method for the normalization of the integrated database.

  15. MAGIC Database and Interfaces: An Integrated Package for Gene Discovery and Expression

    Directory of Open Access Journals (Sweden)

    Lee H. Pratt

    2006-03-01

    The rapidly increasing rate at which biological data is being produced requires a corresponding growth in relational databases and associated tools that can help laboratories contend with that data. With this need in mind, we describe here a Modular Approach to a Genomic, Integrated and Comprehensive (MAGIC) Database. This Oracle 9i database derives from an initial focus in our laboratory on gene discovery via production and analysis of expressed sequence tags (ESTs), and subsequently on gene expression as assessed by both EST clustering and microarrays. The MAGIC Gene Discovery portion of the database focuses on information derived from DNA sequences and on its biological relevance. In addition to MAGIC SEQ-LIMS, which is designed to support activities in the laboratory, it contains several additional subschemas. The latter include MAGIC Admin for database administration, MAGIC Sequence for sequence processing as well as sequence and clone attributes, MAGIC Cluster for the results of EST clustering, MAGIC Polymorphism in support of microsatellite and single-nucleotide-polymorphism discovery, and MAGIC Annotation for electronic annotation by BLAST and BLAT. The MAGIC Microarray portion is a MIAME-compliant database with two components at present. These are MAGIC Array-LIMS, which makes possible remote entry of all information into the database, and MAGIC Array Analysis, which provides data mining and visualization. Because all aspects of interaction with the MAGIC Database are via a web browser, it is ideally suited not only for individual research laboratories but also for core facilities that serve clients at any distance.

  16. Integration between manufacturers and third party logistics providers?

    DEFF Research Database (Denmark)

    Mortensen, Ole; Lemoine, Olga W.

    2008-01-01

    Purpose - The purpose of this study is to analyse the extent of the integration between manufacturers and third party logistics (TPL) providers at present and how the integration is expected to develop in the near future. The focus is on studying what tasks are part of the cooperation, what...... of the eight business processes studied. Further integration in the same processes is expected, based on ICT tools and with a focus on cost. ICT competences are primarily seen as a qualifier not a differentiator. Because the future TPL industry is expected to be characterised by more standardised services...

  17. Construction of an ortholog database using the semantic web technology for integrative analysis of genomic data.

    Science.gov (United States)

    Chiba, Hirokazu; Nishide, Hiroyo; Uchiyama, Ikuo

    2015-01-01

    Recently, various types of biological data, including genomic sequences, have been rapidly accumulating. To discover biological knowledge from such growing heterogeneous data, a flexible framework for data integration is necessary. Ortholog information is a central resource for interlinking corresponding genes among different organisms, and the Semantic Web provides a key technology for the flexible integration of heterogeneous data. We have constructed an ortholog database using the Semantic Web technology, aiming at the integration of numerous genomic data and various types of biological information. To formalize the structure of the ortholog information in the Semantic Web, we have constructed the Ortholog Ontology (OrthO). While the OrthO is a compact ontology for general use, it is designed to be extended to the description of database-specific concepts. On the basis of OrthO, we described the ortholog information from our Microbial Genome Database for Comparative Analysis (MBGD) in the form of Resource Description Framework (RDF) and made it available through the SPARQL endpoint, which accepts arbitrary queries specified by users. In this framework based on the OrthO, the biological data of different organisms can be integrated using the ortholog information as a hub. In addition, the ortholog information from different data sources can be compared with each other using the OrthO as a shared ontology. Here we show some examples demonstrating that the ortholog information described in RDF can be used to link various biological data such as taxonomy information and Gene Ontology. Thus, the ortholog database using the Semantic Web technology can contribute to biological knowledge discovery through integrative data analysis.
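    As an illustration of the hub idea described above, the following sketch joins per-organism annotations through an ortholog table. All identifiers, groups and GO terms are invented for illustration and are not taken from MBGD; the real system uses RDF and a SPARQL endpoint rather than Python dictionaries.

```python
# Sketch: using an ortholog table as a hub to link per-organism annotations.
# All identifiers and annotations below are hypothetical, not from MBGD.

# Ortholog groups: group id -> {organism: gene id}
ortholog_groups = {
    "OG0001": {"E.coli": "b0002", "B.subtilis": "BSU22610"},
    "OG0002": {"E.coli": "b0720", "B.subtilis": "BSU29140"},
}

# Per-organism annotation sets, e.g. Gene Ontology terms.
go_terms = {
    "b0002": {"GO:0009088"},
    "BSU22610": {"GO:0009088", "GO:0009067"},
    "b0720": {"GO:0006099"},
    "BSU29140": {"GO:0006099"},
}

def linked_annotations(group_id):
    """Collect the annotations of all members of an ortholog group."""
    members = ortholog_groups[group_id].values()
    return set().union(*(go_terms.get(g, set()) for g in members))

# Annotations known for any member become reachable from every member.
print(sorted(linked_annotations("OG0001")))
```

    In the RDF version, the same join is a SPARQL query walking from a gene to its ortholog group and out to the other members' annotations.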

  18. Automated granularity to integrate digital information: the "Antarctic Treaty Searchable Database" case study

    Directory of Open Access Journals (Sweden)

    Paul Arthur Berkman

    2006-06-01

    Access to information is necessary, but not sufficient in our digital era. The challenge is to objectively integrate digital resources based on user-defined objectives for the purpose of discovering information relationships that facilitate interpretations and decision making. The Antarctic Treaty Searchable Database (http://aspire.nvi.net), which is in its sixth edition, provides an example of digital integration based on the automated generation of information granules that can be dynamically combined to reveal objective relationships within and between digital information resources. This case study further demonstrates that automated granularity and dynamic integration can be accomplished simply by utilizing the inherent structure of the digital information resources. Such information integration is relevant to library and archival programs that require long-term preservation of authentic digital resources.

  19. dbPAF: an integrative database of protein phosphorylation in animals and fungi.

    Science.gov (United States)

    Ullah, Shahid; Lin, Shaofeng; Xu, Yang; Deng, Wankun; Ma, Lili; Zhang, Ying; Liu, Zexian; Xue, Yu

    2016-03-24

    Protein phosphorylation is one of the most important post-translational modifications (PTMs) and regulates a broad spectrum of biological processes. Recent progress in phosphoproteomic identification has generated a flood of phosphorylation sites, and the integration of these sites is an urgent need. In this work, we developed a curated database, dbPAF, containing known phosphorylation sites in H. sapiens, M. musculus, R. norvegicus, D. melanogaster, C. elegans, S. pombe and S. cerevisiae. From the scientific literature and public databases, we collected and integrated a total of 54,148 phosphoproteins with 483,001 phosphorylation sites. Multiple options were provided for accessing the data, while original references and other annotations were also presented for each phosphoprotein. Based on the new data set, we computationally detected significantly over-represented sequence motifs around phosphorylation sites, predicted potential kinases responsible for the modification of the collected phospho-sites, and evolutionarily analyzed phosphorylation conservation states across different species. Besides being largely consistent with previous reports, our results also propose new features of phospho-regulation. Taken together, our database can be useful for further analyses of protein phosphorylation in human and other model organisms. The dbPAF database was implemented in PHP + MySQL and freely available at http://dbpaf.biocuckoo.org.
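    The motif-detection step mentioned above can be illustrated with a toy example: tallying which residues recur at fixed offsets from the phospho-site. The sequence windows below are invented, and dbPAF's actual analysis uses proper statistical over-representation tests rather than raw counts.

```python
from collections import Counter

# Sketch: counting residues around phosphorylation sites to spot
# over-represented motif positions. The 7-residue windows are invented;
# position 0 is the phosphorylated residue (lowercase s).
windows = ["RRAsPLE", "RKLsPGA", "RRVsPTS", "GPEsPKQ"]

def position_counts(windows, offset):
    """Residue frequencies at a given offset from the central site."""
    centre = len(windows[0]) // 2
    return Counter(w[centre + offset] for w in windows)

# Proline at +1 appears in every window here, the classic signature of
# proline-directed kinases such as CDKs and MAPKs.
print(position_counts(windows, 1))
```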

  20. Integration of the ATLAS tag database with data management and analysis components

    International Nuclear Information System (INIS)

    Cranshaw, J; Malon, D; Doyle, A T; Kenyon, M J; McGlone, H; Nicholson, C

    2008-01-01

    The ATLAS Tag Database is an event-level metadata system, designed to allow efficient identification and selection of interesting events for user analysis. By making first-level cuts using queries on a relational database, the size of an analysis input sample could be greatly reduced and thus the time taken for the analysis reduced. Deployment of such a Tag database is underway, but to be most useful it needs to be integrated with the distributed data management (DDM) and distributed analysis (DA) components. This means addressing the issue that the DDM system at ATLAS groups files into datasets for scalability and usability, whereas the Tag Database points to events in files. It also means setting up a system which could prepare a list of input events and use both the DDM and DA systems to run a set of jobs. The ATLAS Tag Navigator Tool (TNT) has been developed to address these issues in an integrated way and provide a tool that the average physicist can use. Here, the current status of this work is presented and areas of future work are highlighted

  1. Integration of the ATLAS tag database with data management and analysis components

    Energy Technology Data Exchange (ETDEWEB)

    Cranshaw, J; Malon, D [Argonne National Laboratory, Argonne, IL 60439 (United States); Doyle, A T; Kenyon, M J; McGlone, H; Nicholson, C [Department of Physics and Astronomy, University of Glasgow, Glasgow, G12 8QQ, Scotland (United Kingdom)], E-mail: c.nicholson@physics.gla.ac.uk

    2008-07-15

    The ATLAS Tag Database is an event-level metadata system, designed to allow efficient identification and selection of interesting events for user analysis. By making first-level cuts using queries on a relational database, the size of an analysis input sample could be greatly reduced and thus the time taken for the analysis reduced. Deployment of such a Tag database is underway, but to be most useful it needs to be integrated with the distributed data management (DDM) and distributed analysis (DA) components. This means addressing the issue that the DDM system at ATLAS groups files into datasets for scalability and usability, whereas the Tag Database points to events in files. It also means setting up a system which could prepare a list of input events and use both the DDM and DA systems to run a set of jobs. The ATLAS Tag Navigator Tool (TNT) has been developed to address these issues in an integrated way and provide a tool that the average physicist can use. Here, the current status of this work is presented and areas of future work are highlighted.

  2. Global search tool for the Advanced Photon Source Integrated Relational Model of Installed Systems (IRMIS) database

    International Nuclear Information System (INIS)

    Quock, D.E.R.; Cianciarulo, M.B.

    2007-01-01

    The Integrated Relational Model of Installed Systems (IRMIS) is a relational database tool that has been implemented at the Advanced Photon Source to maintain an updated account of approximately 600 control system software applications, 400,000 process variables, and 30,000 control system hardware components. To effectively display this large amount of control system information to operators and engineers, IRMIS was initially built with nine Web-based viewers: Applications Organizing Index, IOC, PLC, Component Type, Installed Components, Network, Controls Spares, Process Variables, and Cables. However, since each viewer is designed to provide details from only one major category of the control system, the necessity for a one-stop global search tool for the entire database became apparent. The user requirements for extremely fast database search time and ease of navigation through search results led to the choice of Asynchronous JavaScript and XML (AJAX) technology in the implementation of the IRMIS global search tool. Unique features of the global search tool include a two-tier level of displayed search results, and a database data integrity validation and reporting mechanism.

  3. Integrated database for identifying candidate genes for Aspergillus flavus resistance in maize.

    Science.gov (United States)

    Kelley, Rowena Y; Gresham, Cathy; Harper, Jonathan; Bridges, Susan M; Warburton, Marilyn L; Hawkins, Leigh K; Pechanova, Olga; Peethambaran, Bela; Pechan, Tibor; Luthe, Dawn S; Mylroie, J E; Ankala, Arunkanth; Ozkan, Seval; Henry, W B; Williams, W P

    2010-10-07

    Aspergillus flavus Link:Fr, an opportunistic fungus that produces aflatoxin, is pathogenic to maize and other oilseed crops. Aflatoxin is a potent carcinogen, and its presence markedly reduces the value of grain. Understanding and enhancing host resistance to A. flavus infection and/or subsequent aflatoxin accumulation is generally considered an efficient means of reducing grain losses to aflatoxin. Different proteomic, genomic and genetic studies of maize (Zea mays L.) have generated large data sets with the goal of identifying genes responsible for conferring resistance to A. flavus, or aflatoxin. In order to maximize the usage of different data sets in new studies, including association mapping, we have constructed a relational database with web interface integrating the results of gene expression, proteomic (both gel-based and shotgun), Quantitative Trait Loci (QTL) genetic mapping studies, and sequence data from the literature to facilitate selection of candidate genes for continued investigation. The Corn Fungal Resistance Associated Sequences Database (CFRAS-DB) (http://agbase.msstate.edu/) was created with the main goal of identifying genes important to aflatoxin resistance. CFRAS-DB is implemented using MySQL as the relational database management system running on a Linux server, using an Apache web server, and Perl CGI scripts as the web interface. The database and the associated web-based interface allow researchers to examine many lines of evidence (e.g. microarray, proteomics, QTL studies, SNP data) to assess the potential role of a gene or group of genes in the response of different maize lines to A. flavus infection and subsequent production of aflatoxin by the fungus. CFRAS-DB provides the first opportunity to integrate data pertaining to the problem of A. flavus and aflatoxin resistance in maize in one resource and to support queries across different datasets. 
The web-based interface gives researchers different query options for mining the database.
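    A minimal sketch of the kind of cross-dataset query such a database supports, finding genes backed by more than one line of evidence. SQLite stands in for the MySQL/Perl CGI stack, and the two-table schema and gene names are hypothetical.

```python
import sqlite3

# Sketch of a cross-dataset query in the spirit of CFRAS-DB: select genes
# with both microarray and QTL evidence. Schema and data are invented.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE microarray (gene TEXT, fold_change REAL);
CREATE TABLE qtl (gene TEXT, chromosome INTEGER);
INSERT INTO microarray VALUES ('glx1', 3.2), ('pr10', 1.1);
INSERT INTO qtl VALUES ('glx1', 4), ('zlp1', 7);
""")

rows = con.execute("""
    SELECT m.gene FROM microarray m
    JOIN qtl q ON q.gene = m.gene
    WHERE m.fold_change > 2.0
""").fetchall()
print(rows)  # only genes supported by both datasets survive the join
```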

  4. Improving Microbial Genome Annotations in an Integrated Database Context

    Science.gov (United States)

    Chen, I-Min A.; Markowitz, Victor M.; Chu, Ken; Anderson, Iain; Mavromatis, Konstantinos; Kyrpides, Nikos C.; Ivanova, Natalia N.

    2013-01-01

    Effective comparative analysis of microbial genomes requires a consistent and complete view of biological data. Consistency regards the biological coherence of annotations, while completeness regards the extent and coverage of functional characterization for genomes. We have developed tools that allow scientists to assess and improve the consistency and completeness of microbial genome annotations in the context of the Integrated Microbial Genomes (IMG) family of systems. All publicly available microbial genomes are characterized in IMG using different functional annotation and pathway resources, thus providing a comprehensive framework for identifying and resolving annotation discrepancies. A rule-based system for predicting phenotypes in IMG provides a powerful mechanism for validating functional annotations, whereby the phenotypic traits of an organism are inferred based on the presence of certain metabolic reactions and pathways and compared to experimentally observed phenotypes. The IMG family of systems is available at http://img.jgi.doe.gov/. PMID:23424620

  5. Improving microbial genome annotations in an integrated database context.

    Directory of Open Access Journals (Sweden)

    I-Min A Chen

    Effective comparative analysis of microbial genomes requires a consistent and complete view of biological data. Consistency regards the biological coherence of annotations, while completeness regards the extent and coverage of functional characterization for genomes. We have developed tools that allow scientists to assess and improve the consistency and completeness of microbial genome annotations in the context of the Integrated Microbial Genomes (IMG) family of systems. All publicly available microbial genomes are characterized in IMG using different functional annotation and pathway resources, thus providing a comprehensive framework for identifying and resolving annotation discrepancies. A rule-based system for predicting phenotypes in IMG provides a powerful mechanism for validating functional annotations, whereby the phenotypic traits of an organism are inferred based on the presence of certain metabolic reactions and pathways and compared to experimentally observed phenotypes. The IMG family of systems is available at http://img.jgi.doe.gov/.

  6. Global quantitative indices reflecting provider process-of-care: data-base derivation.

    Science.gov (United States)

    Moran, John L; Solomon, Patricia J

    2010-04-19

    Controversy has attended the relationship between risk-adjusted mortality and process-of-care. There would be advantage in the establishment, at the data-base level, of global quantitative indices subsuming the diversity of process-of-care. A retrospective, cohort study of patients identified in the Australian and New Zealand Intensive Care Society Adult Patient Database, 1993-2003, at the level of geographic and ICU-level descriptors (n = 35), for both hospital survivors and non-survivors. Process-of-care indices were established by analysis of: (i) the smoothed time-hazard curve of individual patient discharge and determined by pharmaco-kinetic methods as area under the hazard-curve (AUC), reflecting the integrated experience of the discharge process, and time-to-peak-hazard (TMAX, in days), reflecting the time to maximum rate of hospital discharge; and (ii) individual patient ability to optimize output (as length-of-stay) for recorded data-base physiological inputs; estimated as a technical production-efficiency (TE, scaled [0,(maximum)1]), via the econometric technique of stochastic frontier analysis. For each descriptor, multivariate correlation-relationships between indices and summed mortality probability were determined. The data-set consisted of 223129 patients from 99 ICUs with mean (SD) age and APACHE III score of 59.2(18.9) years and 52.7(30.6) respectively; 41.7% were female and 45.7% were mechanically ventilated within the first 24 hours post-admission. For survivors, AUC was maximal in rural and for-profit ICUs, whereas TMAX (>or= 7.8 days) and TE (>or= 0.74) were maximal in tertiary-ICUs. For non-survivors, AUC was maximal in tertiary-ICUs, but TMAX (>or= 4.2 days) and TE (>or= 0.69) were maximal in for-profit ICUs. Across descriptors, significant differences in indices were demonstrated (analysis-of-variance, P <= 0.0001). Total explained variance, for survivors (0.89) and non-survivors (0.89), was maximized by combinations of indices demonstrating a low correlation with
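    The two curve-based indices can be illustrated numerically. The hazard values below are invented, the AUC is approximated with the trapezoidal rule, and TMAX is simply the time of the maximum sampled hazard; the study itself derives both by pharmaco-kinetic methods from a smoothed curve.

```python
# Sketch of the two hazard-curve indices: area under the curve (AUC) and
# time-to-peak-hazard (TMAX). The discharge-hazard values are invented.
days   = [0, 1, 2, 3, 4, 5]
hazard = [0.02, 0.10, 0.25, 0.18, 0.08, 0.03]

def auc_trapezoid(x, y):
    """Trapezoidal area under a sampled curve."""
    return sum((x[i + 1] - x[i]) * (y[i] + y[i + 1]) / 2
               for i in range(len(x) - 1))

tmax = days[hazard.index(max(hazard))]  # day of fastest hospital discharge
print(round(auc_trapezoid(days, hazard), 3), tmax)
```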

  7. Global quantitative indices reflecting provider process-of-care: data-base derivation

    Directory of Open Access Journals (Sweden)

    Solomon Patricia J

    2010-04-01

    Background: Controversy has attended the relationship between risk-adjusted mortality and process-of-care. There would be advantage in the establishment, at the data-base level, of global quantitative indices subsuming the diversity of process-of-care. Methods: A retrospective, cohort study of patients identified in the Australian and New Zealand Intensive Care Society Adult Patient Database, 1993-2003, at the level of geographic and ICU-level descriptors (n = 35), for both hospital survivors and non-survivors. Process-of-care indices were established by analysis of: (i) the smoothed time-hazard curve of individual patient discharge and determined by pharmaco-kinetic methods as area under the hazard-curve (AUC), reflecting the integrated experience of the discharge process, and time-to-peak-hazard (TMAX, in days), reflecting the time to maximum rate of hospital discharge; and (ii) individual patient ability to optimize output (as length-of-stay) for recorded data-base physiological inputs; estimated as a technical production-efficiency (TE, scaled [0,(maximum)1]), via the econometric technique of stochastic frontier analysis. For each descriptor, multivariate correlation-relationships between indices and summed mortality probability were determined. Results: The data-set consisted of 223129 patients from 99 ICUs with mean (SD) age and APACHE III score of 59.2(18.9) years and 52.7(30.6) respectively; 41.7% were female and 45.7% were mechanically ventilated within the first 24 hours post-admission. For survivors, AUC was maximal in rural and for-profit ICUs, whereas TMAX (≥ 7.8 days) and TE (≥ 0.74) were maximal in tertiary-ICUs. For non-survivors, AUC was maximal in tertiary-ICUs, but TMAX (≥ 4.2 days) and TE (≥ 0.69) were maximal in for-profit ICUs. Across descriptors, significant differences in indices were demonstrated (analysis-of-variance, P ≤ 0.0001). Total explained variance, for survivors (0.89) and non-survivors (0.89), was maximized by

  8. Integrating Environmental and Human Health Databases in the Great Lakes Basin: Themes, Challenges and Future Directions

    Directory of Open Access Journals (Sweden)

    Kate L. Bassil

    2015-03-01

    Many government, academic and research institutions collect environmental data that are relevant to understanding the relationship between environmental exposures and human health. Integrating these data with health outcome data presents new challenges that are important to consider to improve our effective use of environmental health information. Our objective was to identify the common themes related to the integration of environmental and health data, and suggest ways to address the challenges and make progress toward more effective use of data already collected, to further our understanding of environmental health associations in the Great Lakes region. Environmental and human health databases were identified and reviewed using literature searches and a series of one-on-one and group expert consultations. Databases identified were predominantly environmental stressors databases, with fewer found for health outcomes and human exposure. Nine themes or factors that impact integration were identified: data availability, accessibility, harmonization, stakeholder collaboration, policy and strategic alignment, resource adequacy, environmental health indicators, and data exchange networks. The use and cost effectiveness of data currently collected could be improved by strategic changes to data collection and access systems to provide better opportunities to identify and study environmental exposures that may impact human health.

  9. Design of Integrated Database on Mobile Information System: A Study of Yogyakarta Smart City App

    Science.gov (United States)

    Nurnawati, E. K.; Ermawati, E.

    2018-02-01

    An integration database is a database that acts as the data store for multiple applications and thus integrates data across those applications (in contrast to an application database). An integration database needs a schema that takes all its client applications into account. The benefit of such a schema is that sharing data among applications does not require an extra layer of integration services on the applications. Any change to data made in a single application becomes available to all applications at the time of database commit, thus keeping the applications' use of the data better synchronized. This study aims to design and build an integrated database that can be used by various applications on a mobile-device-based system platform, following the smart city concept. The resulting database can be used by the various applications either together or separately. The design and development of the database emphasize flexibility, security, and completeness of the attributes shared by the applications to be built. The method used in this study is to choose an appropriate logical database structure (patterns of data) and to build the corresponding relational database model (database design). The resulting design was tested with several prototype apps, and system performance was analyzed with test data. The integrated database can be utilized by both the admin and the user in an integral and comprehensive platform. The system can help admins, managers, and operators manage the application easily and efficiently. The Android-based app is built on a dynamic client-server model in which data are extracted from an external MySQL database, so any change of data in the database is also reflected in the Android applications. The app assists users in searching for information related to Yogyakarta (as a smart city), especially on culture, government, hotels, and transportation.
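    The integration-database pattern described above (one schema, many client applications, changes visible at commit) can be sketched with SQLite; the table and its fields are hypothetical, not the actual Yogyakarta app schema.

```python
import os
import sqlite3
import tempfile

# Sketch of an integration database: two applications share one schema,
# so a committed change made by one is immediately visible to the other.
# The hotels table is invented for illustration.
path = os.path.join(tempfile.mkdtemp(), "city.db")

admin_app = sqlite3.connect(path)
admin_app.execute("CREATE TABLE hotels (name TEXT, district TEXT)")
admin_app.execute("INSERT INTO hotels VALUES ('Hotel A', 'Center')")
admin_app.commit()  # the change becomes visible to every client at commit

user_app = sqlite3.connect(path)  # a second application, same schema
print(user_app.execute("SELECT name, district FROM hotels").fetchall())
```

    The same visibility-at-commit behaviour is what the MySQL back end provides to the dynamic Android clients described in the abstract.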

  10. MitBASE : a comprehensive and integrated mitochondrial DNA database. The present status

    NARCIS (Netherlands)

    Attimonelli, M.; Altamura, N.; Benne, R.; Brennicke, A.; Cooper, J. M.; D'Elia, D.; Montalvo, A.; Pinto, B.; de Robertis, M.; Golik, P.; Knoop, V.; Lanave, C.; Lazowska, J.; Licciulli, F.; Malladi, B. S.; Memeo, F.; Monnerot, M.; Pasimeni, R.; Pilbout, S.; Schapira, A. H.; Sloof, P.; Saccone, C.

    2000-01-01

    MitBASE is an integrated and comprehensive database of mitochondrial DNA data which collects, under a single interface, databases for Plant, Vertebrate, Invertebrate, Human, Protist and Fungal mtDNA and a Pilot database on nuclear genes involved in mitochondrial biogenesis in Saccharomyces

  11. MiCroKit 3.0: an integrated database of midbody, centrosome and kinetochore.

    Science.gov (United States)

    Ren, Jian; Liu, Zexian; Gao, Xinjiao; Jin, Changjiang; Ye, Mingliang; Zou, Hanfa; Wen, Longping; Zhang, Zhaolei; Xue, Yu; Yao, Xuebiao

    2010-01-01

    During cell division/mitosis, a specific subset of proteins is spatially and temporally assembled into protein super complexes in three distinct regions, i.e. centrosome/spindle pole, kinetochore/centromere and midbody/cleavage furrow/phragmoplast/bud neck, and modulates the cell division process faithfully. Although many experimental efforts have been carried out to investigate the characteristics of these proteins, no integrated database was available. Here, we present the MiCroKit database (http://microkit.biocuckoo.org) of proteins that localize in midbody, centrosome and/or kinetochore. We collected into the MiCroKit database experimentally verified microkit proteins from the scientific literature that have unambiguous supportive evidence for subcellular localization under fluorescent microscope. The current version of MiCroKit 3.0 provides detailed information for 1489 microkit proteins from seven model organisms, including Saccharomyces cerevisiae, Schizosaccharomyces pombe, Caenorhabditis elegans, Drosophila melanogaster, Xenopus laevis, Mus musculus and Homo sapiens. Moreover, the orthologous information was provided for these microkit proteins, and could be a useful resource for further experimental identification. The online service of the MiCroKit database was implemented in PHP + MySQL + JavaScript, while the local packages were developed in JAVA 1.5 (J2SE 5.0).

  12. Construction, database integration, and application of an Oenothera EST library.

    Science.gov (United States)

    Mrácek, Jaroslav; Greiner, Stephan; Cho, Won Kyong; Rauwolf, Uwe; Braun, Martha; Umate, Pavan; Altstätter, Johannes; Stoppel, Rhea; Mlcochová, Lada; Silber, Martina V; Volz, Stefanie M; White, Sarah; Selmeier, Renate; Rudd, Stephen; Herrmann, Reinhold G; Meurer, Jörg

    2006-09-01

    Coevolution of cellular genetic compartments is a fundamental aspect of eukaryotic genome evolution that becomes apparent in serious developmental disturbances after interspecific organelle exchanges. The genus Oenothera represents a unique resource, at present the only one available, to study the role of the compartmentalized plant genome in diversification of populations and speciation processes. An integrated approach involving cDNA cloning, EST sequencing, and bioinformatic data mining was chosen using Oenothera elata with the genetic constitution nuclear genome AA with plastome type I. The Gene Ontology system grouped 1621 unique gene products into 17 different functional categories. Application of arrays generated from a selected fraction of ESTs revealed significantly differing expression profiles among closely related Oenothera species possessing the potential to generate fertile and incompatible plastid/nuclear hybrids (hybrid bleaching). Furthermore, the EST library provides a valuable source of PCR-based polymorphic molecular markers that are instrumental for genotyping and molecular mapping approaches.

  13. Provider integration and local market conditions: a contingency theory perspective.

    Science.gov (United States)

    Young, G J; Parker, V A; Charns, M P

    2001-01-01

    In recent years we have witnessed an expanding array of organizational arrangements for providing health care services in the U.S. These arrangements integrate previously independent providers at one or more points on the continuum of care. The presence of so many of these arrangements raises the question of whether certain types are more effective than are others to help providers adapt to their environment. This article discusses contingency theory as a conceptual lens for guiding empirical studies of the effectiveness of different types of arrangements.

  14. Using XML technology for the ontology-based semantic integration of life science databases.

    Science.gov (United States)

    Philippi, Stephan; Köhler, Jacob

    2004-06-01

    Several hundred internet-accessible life science databases with constantly growing contents and varying areas of specialization are publicly available via the internet. Database integration, consequently, is a fundamental prerequisite to be able to answer complex biological questions. Due to the presence of syntactic, schematic, and semantic heterogeneities, large-scale database integration at present takes considerable effort. As there is growing acceptance of the extensible markup language (XML) as a means for data exchange in the life sciences, this article focuses on the impact of XML technology on database integration in this area. In detail, a general architecture for ontology-driven data integration based on XML technology is introduced, which overcomes some of the traditional problems in this area. As a proof of concept, a prototypical implementation of this architecture based on a native XML database and an expert system shell is described for the realization of a real world integration scenario.
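    The ontology-driven mapping idea can be sketched as follows: records from heterogeneous sources, each with its own element names, are rewritten into one shared vocabulary. The two source formats and the mapping table are invented, and the prototype described in the abstract uses a native XML database and an expert system shell rather than plain Python.

```python
import xml.etree.ElementTree as ET

# Sketch: resolving schematic heterogeneity by mapping source-specific
# XML tags onto a shared (ontology-derived) vocabulary. The source
# formats and the mapping below are hypothetical.
source_a = ET.fromstring("<entry><geneName>trpA</geneName></entry>")
source_b = ET.fromstring("<record><symbol>trpB</symbol></record>")

# Ontology-style mapping: source-specific tag -> shared concept.
mapping = {"geneName": "gene", "symbol": "gene"}

def to_common(elem):
    """Rewrite a source record into the shared vocabulary."""
    out = ET.Element("gene_record")
    for child in elem:
        if child.tag in mapping:
            ET.SubElement(out, mapping[child.tag]).text = child.text
    return out

# Both records now share one schema and can be queried uniformly.
print(ET.tostring(to_common(source_a), encoding="unicode"))
print(ET.tostring(to_common(source_b), encoding="unicode"))
```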

  15. PGSB/MIPS PlantsDB Database Framework for the Integration and Analysis of Plant Genome Data.

    Science.gov (United States)

    Spannagl, Manuel; Nussbaumer, Thomas; Bader, Kai; Gundlach, Heidrun; Mayer, Klaus F X

    2017-01-01

    Plant Genome and Systems Biology (PGSB), formerly Munich Institute for Protein Sequences (MIPS) PlantsDB, is a database framework for the integration and analysis of plant genome data, developed and maintained for more than a decade now. Major components of that framework are genome databases and analysis resources focusing on individual (reference) genomes providing flexible and intuitive access to data. Another main focus is the integration of genomes from both model and crop plants to form a scaffold for comparative genomics, assisted by specialized tools such as the CrowsNest viewer to explore conserved gene order (synteny). Data exchange and integrated search functionality with/over many plant genome databases is provided within the transPLANT project.

  16. Integrated data acquisition, storage, retrieval and processing using the COMPASS DataBase (CDB)

    Energy Technology Data Exchange (ETDEWEB)

    Urban, J., E-mail: urban@ipp.cas.cz [Institute of Plasma Physics AS CR, v.v.i., Za Slovankou 3, 182 00 Praha 8 (Czech Republic); Pipek, J.; Hron, M. [Institute of Plasma Physics AS CR, v.v.i., Za Slovankou 3, 182 00 Praha 8 (Czech Republic); Janky, F.; Papřok, R.; Peterka, M. [Institute of Plasma Physics AS CR, v.v.i., Za Slovankou 3, 182 00 Praha 8 (Czech Republic); Department of Surface and Plasma Science, Faculty of Mathematics and Physics, Charles University in Prague, V Holešovičkách 2, 180 00 Praha 8 (Czech Republic); Duarte, A.S. [Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade Técnica de Lisboa, 1049-001 Lisboa (Portugal)

    2014-05-15

    Highlights: • CDB is used as a new data storage solution for the COMPASS tokamak. • The software is light weight, open, fast and easily extensible and scalable. • CDB seamlessly integrates with any data acquisition system. • Rich metadata are stored for physics signals. • Data can be processed automatically, based on dependence rules. - Abstract: We present a complex data handling system for the COMPASS tokamak, operated by IPP ASCR Prague, Czech Republic [1]. The system, called CDB (COMPASS DataBase), integrates different data sources as an assortment of data acquisition hardware and software from different vendors is used. Based on widely available open source technologies wherever possible, CDB is vendor and platform independent and it can be easily scaled and distributed. The data is directly stored and retrieved using a standard NAS (Network Attached Storage), hence independent of the particular technology; the description of the data (the metadata) is recorded in a relational database. Database structure is general and enables the inclusion of multi-dimensional data signals in multiple revisions (no data is overwritten). This design is inherently distributed as the work is off-loaded to the clients. Both NAS and database can be implemented and optimized for fast local access as well as secure remote access. CDB is implemented in Python language; bindings for Java, C/C++, IDL and Matlab are provided. Independent data acquisitions systems as well as nodes managed by FireSignal [2] are all integrated using CDB. An automated data post-processing server is a part of CDB. Based on dependency rules, the server executes, in parallel if possible, prescribed post-processing tasks.
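The storage split described above (raw signals as files on network storage, metadata in a relational database, multiple revisions with no overwriting) can be sketched compactly. This is not the CDB API; table and signal names are illustrative, with SQLite and a temporary directory standing in for the relational database and the NAS.

```python
import os
import sqlite3
import struct
import tempfile

nas = tempfile.mkdtemp()            # stand-in for the NAS mount
db = sqlite3.connect(":memory:")    # stand-in for the metadata database
db.execute("""CREATE TABLE signal (
    name TEXT, shot INTEGER, revision INTEGER, units TEXT, path TEXT,
    PRIMARY KEY (name, shot, revision))""")

def store(name, shot, units, samples, revision=1):
    # Each revision gets its own file; nothing is ever overwritten.
    path = os.path.join(nas, f"{name}_{shot}_r{revision}.bin")
    with open(path, "wb") as f:
        f.write(struct.pack(f"{len(samples)}d", *samples))
    db.execute("INSERT INTO signal VALUES (?,?,?,?,?)",
               (name, shot, revision, units, path))

def load(name, shot):
    # The metadata database resolves the latest revision to a file path.
    units, path = db.execute(
        "SELECT units, path FROM signal WHERE name=? AND shot=? "
        "ORDER BY revision DESC LIMIT 1", (name, shot)).fetchone()
    with open(path, "rb") as f:
        raw = f.read()
    return units, list(struct.unpack(f"{len(raw) // 8}d", raw))

store("ip", 4073, "A", [0.0, 1.5, 3.0])
units, data = load("ip", 4073)
print(units, data)  # A [0.0, 1.5, 3.0]
```

Because clients read the files directly, the heavy I/O is off-loaded from the database server, which is the design point the abstract highlights.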

  17. Human Ageing Genomic Resources: Integrated databases and tools for the biology and genetics of ageing

    Science.gov (United States)

    Tacutu, Robi; Craig, Thomas; Budovsky, Arie; Wuttke, Daniel; Lehmann, Gilad; Taranukha, Dmitri; Costa, Joana; Fraifeld, Vadim E.; de Magalhães, João Pedro

    2013-01-01

    The Human Ageing Genomic Resources (HAGR, http://genomics.senescence.info) is a freely available online collection of research databases and tools for the biology and genetics of ageing. HAGR now features several databases with high-quality, manually curated data: (i) GenAge, a database of genes associated with ageing in humans and model organisms; (ii) AnAge, an extensive collection of longevity records and complementary traits for >4000 vertebrate species; and (iii) GenDR, a newly incorporated database, containing both gene mutations that interfere with dietary restriction-mediated lifespan extension and consistent gene expression changes induced by dietary restriction. Since its creation about 10 years ago, major efforts have been undertaken to maintain the quality of data in HAGR, while further continuing to develop, improve and extend it. This article briefly describes the content of HAGR and details the major updates since its previous publications, in terms of both structure and content. The completely redesigned interface, more intuitive and more integrative of HAGR resources, is also presented. Altogether, we hope that through its improvements, the current version of HAGR will continue to provide users with the most comprehensive and accessible resources available today in the field of biogerontology. PMID:23193293

  18. Planning the future of JPL's management and administrative support systems around an integrated database

    Science.gov (United States)

    Ebersole, M. M.

    1983-01-01

    JPL's management and administrative support systems have been developed piecemeal, and without consistency in design approach, over the past twenty years. These systems are now proving inadequate to support effective management of tasks and administration of the Laboratory. New approaches are needed. Modern database management technology has the potential to provide the foundation for more effective administrative tools for JPL managers and administrators. Plans for upgrading JPL's management and administrative systems over a six-year period, revolving around the development of an integrated management and administrative database, are discussed.

  19. Integrating advanced practice providers into medical critical care teams.

    Science.gov (United States)

    McCarthy, Christine; O'Rourke, Nancy C; Madison, J Mark

    2013-03-01

    Because there is increasing demand for critical care providers in the United States, many medical ICUs for adults have begun to integrate nurse practitioners and physician assistants into their medical teams. Studies suggest that such advanced practice providers (APPs), when appropriately trained in acute care, can be highly effective in helping to deliver high-quality medical critical care and can be important elements of teams with multiple providers, including those with medical house staff. One aspect of building an integrated team is a practice model that features appropriate coding and billing of services by all providers. Therefore, it is important to understand an APP's scope of practice, when they are qualified for reimbursement, and how they may appropriately coordinate coding and billing with other team providers. In particular, understanding when and how to appropriately code for critical care services (Current Procedural Terminology [CPT] code 99291, critical care, evaluation and management of the critically ill or critically injured patient, first 30-74 min; CPT code 99292, critical care, each additional 30 min) and procedures is vital for creating a sustainable program. Because APPs will likely play a growing role in medical critical care units in the future, more studies are needed to compare different practice models and to determine the best way to deploy this talent in specific ICU settings.
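The time-based CPT rules quoted above (99291 for the first 30-74 minutes, one unit of 99292 per additional 30 minutes) can be expressed as a small function. This is a deliberately simplified sketch of the published time table; real critical care billing involves additional requirements (documentation, provider qualification, payer rules) not modeled here.

```python
def critical_care_codes(minutes):
    """Map aggregate critical care time (minutes) to CPT code counts.

    Simplified sketch: under 30 minutes, time-based critical care codes
    do not apply; 30-74 minutes reports CPT 99291 once; each additional
    30-minute block beyond 74 minutes adds one unit of CPT 99292.
    """
    if minutes < 30:
        return {}
    codes = {"99291": 1}
    if minutes >= 75:
        codes["99292"] = (minutes - 75) // 30 + 1
    return codes

print(critical_care_codes(45))   # {'99291': 1}
print(critical_care_codes(110))  # {'99291': 1, '99292': 2}
```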

  20. Integrating protein structures and precomputed genealogies in the Magnum database: Examples with cellular retinoid binding proteins

    Directory of Open Access Journals (Sweden)

    Bradley Michael E

    2006-02-01

    Full Text Available Abstract Background When accurate models for the divergent evolution of protein sequences are integrated with complementary biological information, such as folded protein structures, analyses of the combined data often lead to new hypotheses about molecular physiology. This represents an excellent example of how bioinformatics can be used to guide experimental research. However, progress in this direction has been slowed by the lack of a publicly available resource suitable for general use. Results The precomputed Magnum database offers a solution to this problem for ca. 1,800 full-length protein families with at least one crystal structure. The Magnum deliverables include (1) multiple sequence alignments, (2) mapping of alignment sites to crystal structure sites, (3) phylogenetic trees, (4) inferred ancestral sequences at internal tree nodes, and (5) amino acid replacements along tree branches. Comprehensive evaluations revealed that the automated procedures used to construct Magnum produced accurate models of how proteins divergently evolve, or genealogies, and correctly integrated these with the structural data. To demonstrate Magnum's capabilities, we asked for amino acid replacements requiring three nucleotide substitutions, located at internal protein structure sites, and occurring on short phylogenetic tree branches. In the cellular retinoid binding protein family a site that potentially modulates ligand binding affinity was discovered. Recruitment of cellular retinol binding protein to function as a lens crystallin in the diurnal gecko afforded another opportunity to showcase the predictive value of a browsable database containing branch replacement patterns integrated with protein structures. Conclusion We integrated two areas of protein science, evolution and structure, on a large scale and created a precomputed database, known as Magnum, which is the first freely available resource of its kind. Magnum provides evolutionary and structural
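The query highlighted above, amino acid replacements requiring three nucleotide substitutions, has a simple computational core: the minimum number of base changes needed to turn any codon of one amino acid into any codon of another. A sketch under the standard genetic code (this is the generic calculation, not Magnum's actual pipeline):

```python
from itertools import product

# Standard genetic code; codons ordered TTT, TTC, TTA, TTG, TCT, ...
# (bases cycling T, C, A, G, last position fastest). '*' marks stops.
BASES = "TCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {"".join(c): a for c, a in zip(product(BASES, repeat=3), AA)}

def min_substitutions(aa_from, aa_to):
    """Minimum nucleotide changes turning any codon of one amino acid
    into any codon of the other."""
    best = 3
    for c1, a1 in CODON_TABLE.items():
        if a1 != aa_from:
            continue
        for c2, a2 in CODON_TABLE.items():
            if a2 != aa_to:
                continue
            best = min(best, sum(x != y for x, y in zip(c1, c2)))
    return best

print(min_substitutions("F", "L"))  # 1  (e.g. TTT -> TTA)
print(min_substitutions("F", "K"))  # 3  (no codons closer than 3 changes)
```

Replacements scoring 3 are exactly the ones Magnum flags as requiring three substitutions along a branch.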

  1. Development of SRS.php, a Simple Object Access Protocol-based library for data acquisition from integrated biological databases.

    Science.gov (United States)

    Barbosa-Silva, A; Pafilis, E; Ortega, J M; Schneider, R

    2007-12-11

    Data integration has become an important task for biological database providers. The current model for data exchange among different sources simplifies the manner in which distinct information is accessed by users. The evolution of data representation from HTML to XML enabled programs, instead of humans, to interact with biological databases. We present here SRS.php, a PHP library that can interact with the data integration Sequence Retrieval System (SRS). The library has been written using SOAP definitions, and permits programmatic communication with SRS through web services. The interactions are made possible by invoking the methods described in the WSDL and exchanging XML messages. The current functions available in the library have been built to access specific data stored in any of the 90 different databases (such as UNIPROT, KEGG and GO) using the same query syntax format. The inclusion of the described functions in the source of scripts written in PHP enables them to act as web service clients to the SRS server. The functions permit one to query the whole content of any SRS database, to list specific records in these databases, to retrieve specific fields from the records, and to link any record between any pair of linked databases. The case study presented exemplifies the library's usage to retrieve information regarding registries of a Plant Defense Mechanisms database. The Plant Defense Mechanisms database is currently being developed, and the SRS.php library is proposed to enable the data acquisition for the further warehousing tasks related to its setup and maintenance.
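The WSDL-described, XML-message exchange above boils down to posting a SOAP envelope to the server. A minimal sketch in Python (the original library is PHP): the method name `getEntries`, its parameters, and the query string are all invented for illustration; a real client would take them from the WSDL published by the SRS server.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_request(database, query):
    """Build a SOAP 1.1 envelope for a hypothetical SRS query method."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, "getEntries")   # illustrative method name
    ET.SubElement(call, "database").text = database
    ET.SubElement(call, "query").text = query
    return ET.tostring(env, encoding="unicode")

request = build_request("UNIPROT", "[UNIPROT-ID:P53_HUMAN]")
# In a real client this envelope would be POSTed to the SRS endpoint;
# here we only round-trip it to check the parameters are carried.
parsed = ET.fromstring(request)
print(parsed.find(f"{{{SOAP_NS}}}Body/getEntries/database").text)  # UNIPROT
```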

  2. DEVELOPING FLEXIBLE APPLICATIONS WITH XML AND DATABASE INTEGRATION

    Directory of Open Access Journals (Sweden)

    Hale AS

    2004-04-01

    Full Text Available In recent years the most popular subject in the information systems area has been Enterprise Application Integration (EAI). It can be defined as the process of forming a standard connection between the different systems of an organization's information system environment. Corporate mergers and acquisitions are a major reason for the popularity of Enterprise Application Integration; the main purpose is to solve application integration problems so that the similar systems in such corporations can continue working together for some time. With the help of XML technology, it is possible to find solutions to the problems of application integration either within a corporation or between corporations.

  3. IntPath--an integrated pathway gene relationship database for model organisms and important pathogens.

    Science.gov (United States)

    Zhou, Hufeng; Jin, Jingjing; Zhang, Haojun; Yi, Bo; Wozniak, Michal; Wong, Limsoon

    2012-01-01

    Pathway data are important for understanding the relationship between genes, proteins and many other molecules in living organisms. Pathway gene relationships are crucial information for guidance, prediction, reference and assessment in biochemistry, computational biology, and medicine. Many well-established databases--e.g., KEGG, WikiPathways, and BioCyc--are dedicated to collecting pathway data for public access. However, the effectiveness of these databases is hindered by issues such as incompatible data formats, inconsistent molecular representations, inconsistent molecular relationship representations, inconsistent referrals to pathway names, and incomprehensive data from different databases. In this paper, we overcome these issues through extraction, normalization and integration of pathway data from several major public databases (KEGG, WikiPathways, BioCyc, etc.). We build a database that not only hosts our integrated pathway gene relationship data for public access but also maintains the necessary updates in the long run. This public repository is named IntPath (Integrated Pathway gene relationship database for model organisms and important pathogens). Four organisms--S. cerevisiae, M. tuberculosis H37Rv, H. sapiens and M. musculus--are included in this version (V2.0) of IntPath. IntPath uses the "full unification" approach to ensure no deletion and no introduced noise in this process. Therefore, IntPath contains much richer pathway-gene and pathway-gene pair relationships and a much larger number of non-redundant genes and gene pairs than any of the single-source databases. The gene relationships of each gene (measured by average node degree) per pathway are significantly richer. The gene relationships in each pathway (measured by average number of gene pairs per pathway) are also considerably richer in the integrated pathways. Moderate manual curation is involved to remove errors and noise from source data (e.g., the gene ID errors in WikiPathways and
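The "full unification" approach and the average node degree metric mentioned above can both be sketched in a few lines: unify by taking the set union of normalized gene pairs per pathway (nothing deleted, nothing invented), then measure each pathway's graph. The pathway and gene names below are illustrative toy data, not IntPath content.

```python
from collections import defaultdict

# Hypothetical extracts from two sources after normalization:
# each maps a pathway name to a set of undirected gene pairs.
kegg = {"Glycolysis": {("HK1", "GPI"), ("GPI", "PFKM")}}
wikipathways = {"Glycolysis": {("GPI", "PFKM"), ("PFKM", "ALDOA")}}

def full_unification(*sources):
    """Union gene-pair sets per pathway: no deletion, no introduced noise."""
    merged = defaultdict(set)
    for source in sources:
        for pathway, pairs in source.items():
            # Canonicalize pair order so (a, b) and (b, a) deduplicate.
            merged[pathway] |= {tuple(sorted(p)) for p in pairs}
    return merged

def average_node_degree(pairs):
    """Average degree of the genes in one pathway's interaction graph."""
    degree = defaultdict(int)
    for a, b in pairs:
        degree[a] += 1
        degree[b] += 1
    return sum(degree.values()) / len(degree)

merged = full_unification(kegg, wikipathways)
print(len(merged["Glycolysis"]))                 # 3 unique gene pairs
print(average_node_degree(merged["Glycolysis"]))  # 1.5
```

The union necessarily yields at least as many pairs as any single source, which is why the integrated pathways are "considerably richer" by both metrics.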

  4. Free access to INIS database provides a gateway to nuclear energy research results

    International Nuclear Information System (INIS)

    Tolonen, E.; Malmgren, M.

    2009-01-01

    Free access to the INIS database was opened to all Internet users around the world in May 2009. The article reviews the history of INIS (the International Nuclear Information System), the data acquisition process, the database content and search possibilities. INIS is focused on the worldwide literature on the peaceful uses of nuclear energy, and the database is produced in close collaboration with the IEA/ETDE World Energy Base (ETDEWEB), a database focusing on all aspects of energy. The Nuclear Science Abstracts database (NSA), which is a comprehensive collection of international nuclear science and technology literature for the period 1948 through 1976, is also briefly discussed in the article. In Finland, the recently formed Aalto University is responsible for collecting and disseminating information (literature) and for the preparation of input to the INIS and IEA/ETDE databases on the national level.

  5. Efficient Integrity Checking for Databases with Recursive Views

    DEFF Research Database (Denmark)

    Martinenghi, Davide; Christiansen, Henning

    2005-01-01

    Efficient and incremental maintenance of integrity constraints involving recursive views is a difficult issue that has received some attention in the past years, but for which no widely accepted solution exists yet. In this paper a technique is proposed for compiling such integrity constraints in...... approaches have not achieved comparable optimization with the same level of generality....

  6. Coordinate Systems Integration for Craniofacial Database from Multimodal Devices

    Directory of Open Access Journals (Sweden)

    Deni Suwardhi

    2005-05-01

    Full Text Available This study presents a data registration method for craniofacial spatial data of different modalities. The data consist of three-dimensional (3D) vector and raster data models and are stored in an object-relational database. The data capture devices are a laser scanner, CT (Computed Tomography) scan and CR (Close Range Photogrammetry). The objective of the registration is to transform the data from various coordinate systems into a single 3D Cartesian coordinate system. The standard error of the registration obtained from the multimodal imaging devices using a 3D affine transformation is in the range of 1-2 mm. This study is a step forward towards storing the craniofacial spatial data in one reference system in the database.
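The registration step above applies a 3D affine transform to bring each device's coordinates into the common Cartesian frame, and its quality is reported as a standard error in millimetres. A minimal sketch of the apply-and-evaluate part (the transform parameters and landmark coordinates are invented; in practice the parameters are estimated by least squares from landmark pairs):

```python
import math

def apply_affine(M, t, p):
    """Apply a 3D affine transform x' = M @ x + t to one point."""
    return tuple(sum(M[i][j] * p[j] for j in range(3)) + t[i]
                 for i in range(3))

def rms_error(transformed, reference):
    """Root-mean-square registration residual between point sets (mm)."""
    sq = [sum((a - b) ** 2 for a, b in zip(p, q))
          for p, q in zip(transformed, reference)]
    return math.sqrt(sum(sq) / len(sq))

# Illustrative transform: uniform scale 2 plus a translation.
M = [[2, 0, 0], [0, 2, 0], [0, 0, 2]]
t = (1.0, 0.0, -1.0)
scanner_pts = [(0, 0, 0), (1, 1, 1)]           # e.g. laser-scanner frame
ct_pts = [(1.0, 0.0, -1.0), (3.0, 2.0, 1.0)]   # e.g. CT frame

fitted = [apply_affine(M, t, p) for p in scanner_pts]
print(rms_error(fitted, ct_pts))  # 0.0 for this exact correspondence
```

With real landmarks the residual would not vanish; the 1-2 mm figure quoted in the abstract is exactly this kind of residual statistic.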

  7. EVpedia: an integrated database of high-throughput data for systemic analyses of extracellular vesicles

    Directory of Open Access Journals (Sweden)

    Dae-Kyum Kim

    2013-03-01

    Full Text Available Secretion of extracellular vesicles is a general cellular activity that spans the range from simple unicellular organisms (e.g. archaea, Gram-positive and Gram-negative bacteria) to complex multicellular ones, suggesting that this extracellular vesicle-mediated communication is evolutionarily conserved. Extracellular vesicles are spherical bilayered proteolipids with a mean diameter of 20–1,000 nm, which are known to contain various bioactive molecules including proteins, lipids, and nucleic acids. Here, we present EVpedia, which is an integrated database of high-throughput datasets from prokaryotic and eukaryotic extracellular vesicles. EVpedia provides high-throughput datasets of vesicular components (proteins, mRNAs, miRNAs, and lipids) present on prokaryotic, non-mammalian eukaryotic, and mammalian extracellular vesicles. In addition, EVpedia also provides an array of tools, such as the search and browse of vesicular components, Gene Ontology enrichment analysis, network analysis of vesicular proteins and mRNAs, and a comparison of vesicular datasets by ortholog identification. Moreover, publications on extracellular vesicle studies are listed in the database. This free web-based database of EVpedia (http://evpedia.info) might serve as a fundamental repository to stimulate the advancement of extracellular vesicle studies and to elucidate the novel functions of these complex extracellular organelles.

  8. M4FT-16LL080302052-Update to Thermodynamic Database Development and Sorption Database Integration

    Energy Technology Data Exchange (ETDEWEB)

    Zavarin, Mavrik [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Glenn T. Seaborg Inst.. Physical and Life Sciences; Wolery, T. J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Akima Infrastructure Services, LLC; Atkins-Duffin, C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Global Security

    2016-08-16

    This progress report (Level 4 Milestone Number M4FT-16LL080302052) summarizes research conducted at Lawrence Livermore National Laboratory (LLNL) within the Argillite Disposal R&D Work Package Number FT-16LL08030205. The focus of this research is the thermodynamic modeling of Engineered Barrier System (EBS) materials and properties and development of thermodynamic databases and models to evaluate the stability of EBS materials and their interactions with fluids at various physico-chemical conditions relevant to subsurface repository environments. The development and implementation of equilibrium thermodynamic models are intended to describe chemical and physical processes such as solubility, sorption, and diffusion.

  9. Pancreatic Expression database: a generic model for the organization, integration and mining of complex cancer datasets

    Directory of Open Access Journals (Sweden)

    Lemoine Nicholas R

    2007-11-01

    Full Text Available Abstract Background Pancreatic cancer is the 5th leading cause of cancer death in both males and females. In recent years, a wealth of gene and protein expression studies have been published, broadening our understanding of pancreatic cancer biology. Due to the explosive growth in publicly available data from multiple different sources, it is becoming increasingly difficult for individual researchers to integrate these into their current research programmes. The Pancreatic Expression database, a generic web-based system, aims to close this gap by providing the research community with an open-access tool, not only to mine currently available pancreatic cancer data sets but also to include their own data in the database. Description Currently, the database holds 32 datasets comprising 7636 gene expression measurements extracted from 20 different published gene or protein expression studies from various pancreatic cancer types, pancreatic precursor lesions (PanINs) and chronic pancreatitis. The pancreatic data are stored in a data management system based on the BioMart technology alongside the human genome gene and protein annotations, sequence, homologue, SNP and antibody data. Interrogation of the database can be achieved through both a web-based query interface and through web services, using combined criteria from pancreatic data (disease stages, regulation, differential expression, expression, platform technology, publication) and/or public data (antibodies, genomic region, gene-related accessions, ontology, expression patterns, multi-species comparisons, protein data, SNPs). Thus, our database enables connections between otherwise disparate data sources and allows relatively simple navigation between all data types and annotations. Conclusion The database structure and content provides a powerful and high-speed data-mining tool for cancer research. It can be used for target discovery, i.e. of biomarkers from body fluids, identification and analysis

  10. The IPE Database: providing information on plant design, core damage frequency and containment performance

    International Nuclear Information System (INIS)

    Lehner, J.R.; Lin, C.C.; Pratt, W.T.; Su, T.; Danziger, L.

    1996-01-01

    A database, called the IPE Database, has been developed that stores data obtained from the Individual Plant Examinations (IPEs) which licensees of nuclear power plants have conducted in response to the Nuclear Regulatory Commission's (NRC) Generic Letter GL88-20. The IPE Database is a collection of linked files which store information about plant design, core damage frequency (CDF), and containment performance in a uniform, structured way. The information contained in the various files is based on data contained in the IPE submittals. The information extracted from the submittals and entered into the IPE Database can be manipulated so that queries regarding individual or groups of plants can be answered using the IPE Database.

  11. An integrated photogrammetric and spatial database management system for producing fully structured data using aerial and remote sensing images.

    Science.gov (United States)

    Ahmadi, Farshid Farnood; Ebadi, Hamid

    2009-01-01

    3D spatial data acquired from aerial and remote sensing images by photogrammetric techniques is one of the most accurate and economical data sources for GIS, map production, and spatial data updating. However, there are still many problems concerning the storage, structuring and appropriate management of spatial data obtained using these techniques. Given the capabilities of spatial database management systems (SDBMSs), direct integration of photogrammetric and spatial database management systems can save the time and cost of producing and updating digital maps. This integration is accomplished by replacing digital maps with a single spatial database. Applying spatial databases overcomes the problem of managing spatial and attribute data in a coupled approach; this management approach is one of the main problems in GISs for using map products of photogrammetric workstations. Also, by means of these integrated systems, providing structured spatial data, based on OGC (Open GIS Consortium) standards and topological relations between different feature classes, is possible during the feature digitizing process. In this paper, the integration of photogrammetric systems and SDBMSs is evaluated. Then, different levels of integration are described. Finally, the design, implementation and testing of a software package called Integrated Photogrammetric and Oracle Spatial Systems (IPOSS) is presented.

  12. An Integrated Photogrammetric and Spatial Database Management System for Producing Fully Structured Data Using Aerial and Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Farshid Farnood Ahmadi

    2009-03-01

    Full Text Available 3D spatial data acquired from aerial and remote sensing images by photogrammetric techniques is one of the most accurate and economical data sources for GIS, map production, and spatial data updating. However, there are still many problems concerning the storage, structuring and appropriate management of spatial data obtained using these techniques. Given the capabilities of spatial database management systems (SDBMSs), direct integration of photogrammetric and spatial database management systems can save the time and cost of producing and updating digital maps. This integration is accomplished by replacing digital maps with a single spatial database. Applying spatial databases overcomes the problem of managing spatial and attribute data in a coupled approach; this management approach is one of the main problems in GISs for using map products of photogrammetric workstations. Also, by means of these integrated systems, providing structured spatial data, based on OGC (Open GIS Consortium) standards and topological relations between different feature classes, is possible during the feature digitizing process. In this paper, the integration of photogrammetric systems and SDBMSs is evaluated. Then, different levels of integration are described. Finally, the design, implementation and testing of a software package called Integrated Photogrammetric and Oracle Spatial Systems (IPOSS) is presented.

  13. Integrating Instrumental Data Provides the Full Science in 3D

    Science.gov (United States)

    Turrin, M.; Boghosian, A.; Bell, R. E.; Frearson, N.

    2017-12-01

    Looking at data sparks questions, discussion and insights. By integrating multiple data sets we deepen our understanding of how cryosphere processes operate. Field-collected data provide measurements from multiple instruments, supporting rapid insights. Icepod provides a platform focused on the integration of multiple instruments. Over the last three seasons, the ROSETTA-Ice project has deployed Icepod to comprehensively map the Ross Ice Shelf, Antarctica. This integrative data collection, along with new methods of data visualization, allows us to answer questions about ice shelf structure and evolution that arise during data processing and review. While data are vetted and archived in the field to confirm instruments are operating, upon return to the lab the data are again reviewed for accuracy before full analysis. A recent review of shallow ice radar data from the Beardmore Glacier, an outlet glacier into the Ross Ice Shelf, revealed an abrupt discontinuity in the ice surface. This sharp 8 m surface elevation drop was originally interpreted as a processing error. The data were reexamined, integrating the simultaneously collected shallow and deep ice radar with lidar data. All the data sources showed the surface discontinuity, confirming the abrupt 8 m drop in surface elevation. Examining high-resolution WorldView satellite imagery revealed a persistent source for these elevation drops. The satellite imagery showed that this tear in the ice surface was only one piece of a larger pattern of "chatter marks" in ice that flows at a rate of 300 m/yr. The markings are buried over a distance of 30 km, or after 100 years of travel down Beardmore Glacier towards the front of the Ross Ice Shelf. Using Icepod's lidar and cameras we map this chatter mark feature in 3D to reveal its full structure. We use digital elevation models from WorldView to map the other along-flow chatter marks. In order to investigate the relationship between these surface features and basal crevasses, the deep ice

  14. Integrated Strategic Tracking and Recruiting Database (iSTAR) Data Inventory

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Integrated Strategic Tracking and Recruiting Database (iSTAR) Data Inventory contains measured and modeled partnership and contact data. It comprises basic...

  15. Cost benefit analysis of power plant database integration

    International Nuclear Information System (INIS)

    Wilber, B.E.; Cimento, A.; Stuart, R.

    1988-01-01

    A cost-benefit analysis of plant-wide data integration allows utility management to evaluate integration and automation benefits from an economic perspective. With this evaluation, the utility can determine both the quantitative and qualitative savings that can be expected from data integration. The cost-benefit analysis is then a planning tool which helps the utility to develop a focused long-term implementation strategy that will yield significant near-term benefits. This paper presents a flexible cost-benefit analysis methodology which is both simple to use and yields accurate, verifiable results. Included in this paper are a list of parameters to consider, a procedure for performing the cost savings analysis, and samples of this procedure when applied to a utility. A case study is presented involving a specific utility where this procedure was applied. Their uses of the cost-benefit analysis are also described.
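Two standard figures of merit behind a cost-benefit calculation of this kind are simple payback and net present value. The sketch below uses invented numbers purely for illustration; the paper's own methodology, parameters and qualitative factors are not reproduced here.

```python
def payback_period(integration_cost, annual_savings):
    """Years until cumulative savings recover the integration cost
    (a deliberately simple, undiscounted figure of merit)."""
    return integration_cost / annual_savings

def npv(rate, cashflows):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cashflows))

# Illustrative numbers: year 0 integration cost, five years of savings.
flows = [-1_200_000] + [400_000] * 5
print(payback_period(1_200_000, 400_000))  # 3.0 years
print(npv(0.08, flows) > 0)                # True: integration pays off
```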

  16. SolveDB: Integrating Optimization Problem Solvers Into SQL Databases

    DEFF Research Database (Denmark)

    Siksnys, Laurynas; Pedersen, Torben Bach

    2016-01-01

    for optimization problems, (2) an extensible infrastructure for integrating different solvers, and (3) query optimization techniques to achieve the best execution performance and/or result quality. Extensive experiments with the PostgreSQL-based implementation show that SolveDB is a versatile tool offering much...

  17. Representing clinical communication knowledge through database management system integration.

    Science.gov (United States)

    Khairat, Saif; Craven, Catherine; Gong, Yang

    2012-01-01

    Clinical communication failures are considered the leading cause of medical errors [1]. The complexity of the clinical culture and the significant variance in training and education levels form a challenge to enhancing communication within the clinical team. In order to improve communication, a comprehensive understanding of the overall communication process in health care is required. In an attempt to further understand clinical communication, we conducted a thorough methodology literature review to identify strengths and limitations of previous approaches [2]. Our research proposes a new data collection method to study the clinical communication activities among Intensive Care Unit (ICU) clinical teams with a primary focus on the attending physician. In this paper, we present the first ICU communication instrument, and we introduce the use of a database management system to aid in discovering patterns and associations within our ICU communications data repository.

  18. System/subsystem specifications for the Worldwide Port System (WPS) Regional Integrated Cargo Database (ICDB)

    Energy Technology Data Exchange (ETDEWEB)

    Rollow, J.P.; Shipe, P.C.; Truett, L.F. [Oak Ridge National Lab., TN (United States); Faby, E.Z.; Fluker, J.; Grubb, J.; Hancock, B.R. [Univ. of Tennessee, Knoxville, TN (United States); Ferguson, R.A. [Science Applications International Corp., Oak Ridge, TN (United States)

    1995-11-20

    A system is being developed by the Military Traffic Management Command (MTMC) to provide data integration and worldwide management and tracking of surface cargo movements. The Integrated Cargo Database (ICDB) will be a data repository for the WPS terminal-level system, will be a primary source of queries and cargo traffic reports, will receive data from and provide data to other MTMC and non-MTMC systems, will provide capabilities for processing Advance Transportation Control and Movement Documents (ATCMDs), and will process and distribute manifests. This System/Subsystem Specifications for the Worldwide Port System Regional ICDB documents the system/subsystem functions, provides details of the system/subsystem analysis in order to provide a communication link between developers and operational personnel, and identifies interfaces with other systems and subsystems. It must be noted that this report is being produced near the end of the initial development phase of ICDB, while formal software testing is being done. Following the initial implementation of the ICDB system, maintenance contractors will be in charge of making changes and enhancing software modules. Formal testing and user reviews may indicate the need for additional software units or changes to existing ones. This report describes the software units that are components of this ICDB system as of August 1995.

  19. A semantic data dictionary method for database schema integration in CIESIN

    Science.gov (United States)

    Hinds, N.; Huang, Y.; Ravishankar, C.

    1993-08-01

    CIESIN (Consortium for International Earth Science Information Network) is funded by NASA to investigate the technology necessary to integrate and facilitate the interdisciplinary use of Global Change information. A clear part of this mission includes providing a link between the various global change data sets, in particular between the physical sciences and the human (social) sciences. The typical scientist using the CIESIN system will want to know how phenomena in an outside field affect his/her work. For example, a medical researcher might ask: how does air quality affect emphysema? This and many similar questions will require sophisticated semantic data integration. The researcher who raised the question may be familiar with medical data sets containing emphysema occurrences. But this same investigator may know little, if anything, about the existence or location of air-quality data. It is easy to envision a system which would allow that investigator to locate and perform a ``join'' on two data sets, one containing emphysema cases and the other containing air-quality levels. No such system exists today. One major obstacle to providing such a system is overcoming heterogeneity, which falls into two broad categories. ``Database system'' heterogeneity involves differences in data models and packages. ``Data semantic'' heterogeneity involves differences in terminology between disciplines, which translates into data semantic issues, and varying levels of data refinement, from raw to summary. Our work investigates a global data dictionary mechanism to facilitate a merged data service. Specifically, we propose using a semantic tree during schema definition to aid in locating and integrating heterogeneous databases.
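    The cross-disciplinary ``join'' envisioned above can be sketched with a toy relational example. All table and column names below are hypothetical, chosen purely to illustrate pairing emphysema cases with air-quality readings once both data sets share region and year keys:

```python
import sqlite3

# Toy illustration of the cross-disciplinary "join" described above.
# All table and column names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE emphysema_cases (region TEXT, year INTEGER, cases INTEGER);
    CREATE TABLE air_quality     (region TEXT, year INTEGER, pm10 REAL);
    INSERT INTO emphysema_cases VALUES ('north', 1990, 120), ('south', 1990, 45);
    INSERT INTO air_quality     VALUES ('north', 1990, 80.5), ('south', 1990, 32.1);
""")
# Pair each case count with the matching air-quality level.
rows = conn.execute("""
    SELECT e.region, e.cases, a.pm10
    FROM emphysema_cases e
    JOIN air_quality a ON a.region = e.region AND a.year = e.year
    ORDER BY e.region
""").fetchall()
print(rows)  # → [('north', 120, 80.5), ('south', 45, 32.1)]
```

    The hard part the abstract identifies is not the join itself but agreeing, via a semantic data dictionary, that the two disciplines' `region` and `year` columns denote the same things.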

  20. UNITE: a database providing web-based methods for the molecular identification of ectomycorrhizal fungi

    DEFF Research Database (Denmark)

    Köljalg, U.; Larsson, K.H.; Abarenkov, K.

    2005-01-01

    Identification of ectomycorrhizal (ECM) fungi is often achieved through comparisons of ribosomal DNA internal transcribed spacer (ITS) sequences with accessioned sequences deposited in public databases. A major problem encountered is that annotation of the sequences in these databases is not always....... At present UNITE contains 758 ITS sequences from 455 species and 67 genera of ECM fungi. •  UNITE can be searched by taxon name, via sequence similarity using blastn, and via phylogenetic sequence identification using galaxie. Following implementation, galaxie performs a phylogenetic analysis of the query...... sequence after alignment either to pre-existing generic alignments, or to matches retrieved from a blast search on the UNITE data. It should be noted that the current version of UNITE is dedicated to the reliable identification of ECM fungi. •  The UNITE database is accessible through the URL http://unite.zbi.ee...

  1. Semantic-JSON: a lightweight web service interface for Semantic Web contents integrating multiple life science databases.

    Science.gov (United States)

    Kobayashi, Norio; Ishii, Manabu; Takahashi, Satoshi; Mochizuki, Yoshiki; Matsushima, Akihiro; Toyoda, Tetsuro

    2011-07-01

    Global cloud frameworks for bioinformatics research databases are becoming huge and heterogeneous; solutions face diametric challenges of cross-integration, retrieval, security and openness. To address this, as of March 2011 organizations including RIKEN had published 192 mammalian, plant and protein life sciences databases holding 8.2 million data records, integrated as Linked Open or Private Data (LOD/LPD) using SciNetS.org, the Scientists' Networking System. The huge quantity of linked data covered by this database integration framework is based on the Semantic Web, where researchers collaborate by managing metadata across public and private databases in a secured data space. This outstripped the data query capacity of existing interface tools such as SPARQL. Actual research also requires specialized tools for data analysis using raw original data. To solve these challenges, in December 2009 we developed the lightweight Semantic-JSON interface to access each fragment of linked and raw life sciences data securely, under the control of programming languages popular among bioinformaticians such as Perl and Ruby. Researchers successfully used the interface across 28 million semantic relationships for biological applications including genome design, sequence processing, inference over phenotype databases, full-text search indexing and human-readable contents such as ontology and LOD tree viewers. Semantic-JSON services of SciNetS.org are provided at http://semanticjson.org.
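    A lightweight JSON interface of this kind is convenient precisely because scripting languages can consume it with their standard libraries. The fragment below is a sketch only: the record structure and field names are invented for illustration and do not reproduce the actual SciNetS.org schema, and the response is inlined rather than fetched over HTTP to keep the example self-contained:

```python
import json

# Hypothetical Semantic-JSON-style response fragment: one resource plus
# its outgoing semantic relationships. Field names are illustrative.
payload = """
{
  "id": "record:gene001",
  "label": "example gene",
  "links": [
    {"predicate": "expressed_in", "target": "record:tissue042"},
    {"predicate": "annotated_by", "target": "record:go0008150"}
  ]
}
"""
record = json.loads(payload)
# Follow each linked-data relationship from the record, as a Perl or
# Ruby script (the languages the paper names) would do equivalently.
targets = [link["target"] for link in record["links"]]
print(record["id"], "->", targets)
```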

  2. Provider perceptions of an integrated primary care quality improvement strategy: The PPAQ toolkit.

    Science.gov (United States)

    Beehler, Gregory P; Lilienthal, Kaitlin R

    2017-02-01

    The Primary Care Behavioral Health (PCBH) model of integrated primary care is challenging to implement with high fidelity. The Primary Care Behavioral Health Provider Adherence Questionnaire (PPAQ) was designed to assess provider adherence to essential model components and has recently been adapted into a quality improvement toolkit. The aim of this pilot project was to gather preliminary feedback on providers' perceptions of the acceptability and utility of the PPAQ toolkit for making beneficial practice changes. Twelve mental health providers working in Department of Veterans Affairs integrated primary care clinics participated in semistructured interviews to gather quantitative and qualitative data. Descriptive statistics and qualitative content analysis were used to analyze data. Providers identified several positive features of the PPAQ toolkit organization and structure that resulted in high ratings of acceptability, while also identifying several toolkit components in need of modification to improve usability. Toolkit content was considered highly representative of the PCBH model and therefore could be used as a diagnostic self-assessment of model adherence. The toolkit was considered to be high in applicability to providers regardless of their degree of prior professional preparation or current clinical setting. Additionally, providers identified several system-level contextual factors that could impact the usefulness of the toolkit. These findings suggest that frontline mental health providers working in PCBH settings may be receptive to using an adherence-focused toolkit for ongoing quality improvement. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  3. UNITE: a database providing web-based methods for the molecular identification of ectomycorrhizal fungi.

    Science.gov (United States)

    Kõljalg, Urmas; Larsson, Karl-Henrik; Abarenkov, Kessy; Nilsson, R Henrik; Alexander, Ian J; Eberhardt, Ursula; Erland, Susanne; Høiland, Klaus; Kjøller, Rasmus; Larsson, Ellen; Pennanen, Taina; Sen, Robin; Taylor, Andy F S; Tedersoo, Leho; Vrålstad, Trude; Ursing, Björn M

    2005-06-01

    Identification of ectomycorrhizal (ECM) fungi is often achieved through comparisons of ribosomal DNA internal transcribed spacer (ITS) sequences with accessioned sequences deposited in public databases. A major problem encountered is that annotation of the sequences in these databases is not always complete or trustworthy. In order to overcome this deficiency, we report on UNITE, an open-access database. UNITE comprises well annotated fungal ITS sequences from well defined herbarium specimens that include full herbarium reference identification data, collector/source and ecological data. At present UNITE contains 758 ITS sequences from 455 species and 67 genera of ECM fungi. UNITE can be searched by taxon name, via sequence similarity using blastn, and via phylogenetic sequence identification using galaxie. Following implementation, galaxie performs a phylogenetic analysis of the query sequence after alignment either to pre-existing generic alignments, or to matches retrieved from a blast search on the UNITE data. It should be noted that the current version of UNITE is dedicated to the reliable identification of ECM fungi. The UNITE database is accessible through the URL http://unite.zbi.ee

  4. Autism genetic database (AGD): a comprehensive database including autism susceptibility gene-CNVs integrated with known noncoding RNAs and fragile sites

    Directory of Open Access Journals (Sweden)

    Talebizadeh Zohreh

    2009-09-01

    Background: Autism is a highly heritable complex neurodevelopmental disorder; therefore, identifying its genetic basis has been challenging. To date, numerous susceptibility genes and chromosomal abnormalities have been reported in association with autism, but most discoveries either fail to be replicated or account for a small effect. Thus, in most cases the underlying causative genetic mechanisms are not fully understood. In the present work, the Autism Genetic Database (AGD) was developed as a literature-driven, web-based, and easy-to-access database designed with the aim of creating a comprehensive repository for all the currently reported genes and genomic copy number variations (CNVs) associated with autism, in order to further facilitate the assessment of these autism susceptibility genetic factors. Description: AGD is a relational database that organizes data resulting from exhaustive literature searches for reported susceptibility genes and CNVs associated with autism. Furthermore, genomic information about human fragile sites and noncoding RNAs was also downloaded and parsed from miRBase, snoRNA-LBME-db, piRNABank, and the MIT/ICBP siRNA database. A web client genome browser enables viewing of the features, while a web client query tool provides access to more specific information for the features. When applicable, links to external databases including GenBank, PubMed, miRBase, snoRNA-LBME-db, piRNABank, and the MIT siRNA database are provided. Conclusion: AGD comprises a comprehensive list of susceptibility genes and copy number variations reported to date in association with autism, as well as all known human noncoding RNA genes and fragile sites. Such a unique and inclusive autism genetic database will facilitate the evaluation of autism susceptibility factors in relation to known human noncoding RNAs and fragile sites, impacting on human diseases. As a result, this new autism database offers a valuable tool for the research

  5. Distribution Grid Integration Unit Cost Database | Solar Research | NREL

    Science.gov (United States)

    Cost data were gathered from sources including consulting companies, IEEE publications, inverter manufacturers, and PV developers. Some figures are provided only to give a sense of the order of magnitude: O&M and component lifetime data are sparse, as are data on the effects of PV on device O&M and lifetimes, and the cost of advanced inverters depends on whether advanced capability is specified.

  6. Integration of published information into a resistance-associated mutation database for Mycobacterium tuberculosis.

    Science.gov (United States)

    Salamon, Hugh; Yamaguchi, Ken D; Cirillo, Daniela M; Miotto, Paolo; Schito, Marco; Posey, James; Starks, Angela M; Niemann, Stefan; Alland, David; Hanna, Debra; Aviles, Enrique; Perkins, Mark D; Dolinger, David L

    2015-04-01

    Tuberculosis remains a major global public health challenge. Although incidence is decreasing, the proportion of drug-resistant cases is increasing. Technical and operational complexities prevent Mycobacterium tuberculosis drug susceptibility phenotyping in the vast majority of new and retreatment cases. The advent of molecular technologies provides an opportunity to obtain results rapidly as compared to phenotypic culture. However, correlations between genetic mutations and resistance to multiple drugs have not been systematically evaluated. Molecular testing of M. tuberculosis sampled from a typical patient continues to provide a partial picture of drug resistance. A database of phenotypic and genotypic testing results, especially where prospectively collected, could document statistically significant associations and may reveal new, predictive molecular patterns. We examine the feasibility of integrating existing molecular and phenotypic drug susceptibility data to identify associations observed across multiple studies and demonstrate potential for well-integrated M. tuberculosis mutation data to reveal actionable findings. © The Author 2014. Published by Oxford University Press on behalf of the Infectious Diseases Society of America. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
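    The statistically significant associations such a pooled database could document are, at their simplest, contingency-table measures linking a mutation's presence to phenotypic resistance. The counts below are invented purely for illustration, and the measures are computed directly rather than with a statistics package:

```python
# Hypothetical counts linking a resistance-associated mutation to
# phenotypic drug resistance, pooled across studies. The numbers are
# invented for illustration only.
#
#                      resistant  susceptible
# mutation present         a=90        b=10
# mutation absent          c=20        d=80
a, b, c, d = 90, 10, 20, 80

odds_ratio = (a * d) / (b * c)   # strength of genotype-phenotype association
sensitivity = a / (a + c)        # fraction of resistant isolates the mutation flags
specificity = d / (b + d)        # fraction of susceptible isolates without it
print(odds_ratio, round(sensitivity, 3), round(specificity, 3))
```

    Prospectively collected, well-integrated data would let such measures be estimated per mutation and per drug, which is exactly the kind of actionable finding the abstract anticipates.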

  7. Integration of TGS and CTEN assays using the CTENFIT analysis and databasing program

    International Nuclear Information System (INIS)

    Estep, R.

    2000-01-01

    The CTENFIT program, written for Windows 9x/NT in C++, performs databasing and analysis of combined thermal/epithermal neutron (CTEN) passive and active neutron assay data and integrates that with isotopics results and gamma-ray data from methods such as tomographic gamma scanning (TGS). The binary database is reflected in a companion Excel database that allows extensive customization via Visual Basic for Applications macros. Automated analysis options make the analysis of the data transparent to the assay system operator. Various record browsers and information displays simplify record-keeping tasks.

  8. Integrated Controlling System and Unified Database for High Throughput Protein Crystallography Experiments

    International Nuclear Information System (INIS)

    Gaponov, Yu.A.; Igarashi, N.; Hiraki, M.; Sasajima, K.; Matsugaki, N.; Suzuki, M.; Kosuge, T.; Wakatsuki, S.

    2004-01-01

    An integrated controlling system and a unified database for high throughput protein crystallography experiments have been developed. The main features of protein crystallography experiments (purification, crystallization, crystal harvesting, data collection, data processing) were integrated into the software under development. All information necessary to perform protein crystallography experiments is stored in a MySQL relational database (except raw X-ray data, which are stored on a central data server). The database contains four mutually linked hierarchical trees describing proteins, protein crystals, data collection, and experimental data processing. A database editor was designed and developed. The editor supports basic database functions to view, create, modify and delete user records in the database. Two search engines were realized: direct search of necessary information in the database and object-oriented search. The system is based on TCP/IP secure UNIX sockets with four predefined sending and receiving behaviors, which support communications between all connected servers and clients with remote control functions (creating and modifying data for experimental conditions, data acquisition, viewing experimental data, and performing data processing). Two secure login schemes were designed and developed: a direct method (using the developed Linux clients with secure connection) and an indirect method (using a secure SSL connection with secure X11 support from any operating system with X-terminal and SSH support). A part of the system has been implemented on a new MAD beamline, NW12, at the Photon Factory Advanced Ring for general user experiments.
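    The mutually linked hierarchical trees can be pictured as a chain of tables, each child referencing its parent. The schema below is a hypothetical sketch, using SQLite as a self-contained stand-in for the MySQL database the abstract describes; table and column names are illustrative, not the actual schema:

```python
import sqlite3

# Hypothetical sketch of linked hierarchical records: protein -> crystal
# -> data collection -> data processing. SQLite stands in for MySQL here.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE protein    (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE crystal    (id INTEGER PRIMARY KEY,
                             protein_id INTEGER REFERENCES protein(id),
                             form TEXT);
    CREATE TABLE collection (id INTEGER PRIMARY KEY,
                             crystal_id INTEGER REFERENCES crystal(id),
                             beamline TEXT);
    CREATE TABLE processing (id INTEGER PRIMARY KEY,
                             collection_id INTEGER REFERENCES collection(id),
                             resolution REAL);
    INSERT INTO protein    VALUES (1, 'lysozyme');
    INSERT INTO crystal    VALUES (1, 1, 'tetragonal');
    INSERT INTO collection VALUES (1, 1, 'NW12');
    INSERT INTO processing VALUES (1, 1, 1.8);
""")
# Walk the whole hierarchy for one experiment in a single query.
row = conn.execute("""
    SELECT p.name, c.form, d.beamline, x.resolution
    FROM protein p
    JOIN crystal c    ON c.protein_id    = p.id
    JOIN collection d ON d.crystal_id    = c.id
    JOIN processing x ON x.collection_id = d.id
""").fetchone()
print(row)  # → ('lysozyme', 'tetragonal', 'NW12', 1.8)
```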

  9. Development of an Integrated Natural Barrier Database System for Site Evaluation of a Deep Geologic Repository in Korea - 13527

    International Nuclear Information System (INIS)

    Jung, Haeryong; Lee, Eunyong; Jeong, YiYeong; Lee, Jeong-Hwan

    2013-01-01

    Korea Radioactive-waste Management Corporation (KRMC), established in 2009, has started a new project to collect information on the long-term stability of deep geological environments on the Korean Peninsula. The information has been built up in an integrated natural barrier database system available on the web (www.deepgeodisposal.kr). The database system also includes socially and economically important information, such as land use, mining areas, natural conservation areas, population density, and industrial complexes, because some of this information is used as exclusionary criteria during the site selection process for a deep geological repository for safe and secure containment and isolation of spent nuclear fuel and other long-lived radioactive waste in Korea. Although the official site selection process has not yet started in Korea, it is believed that the current integrated natural barrier and socio-economic database system will be effectively utilized to narrow down the number of sites where future investigation is most promising in the site selection process for a deep geological repository, and to enhance public acceptance by providing readily available, relevant scientific information on deep geological environments in Korea. (authors)

  10. Bio-optical data integration based on a 4 D database system approach

    Science.gov (United States)

    Imai, N. N.; Shimabukuro, M. H.; Carmo, A. F. C.; Alcantara, E. H.; Rodrigues, T. W. P.; Watanabe, F. S. Y.

    2015-04-01

    Bio-optical characterization of water bodies requires spatio-temporal data about Inherent Optical Properties and Apparent Optical Properties, which allow comprehension of the underwater light field and support the development of models for monitoring water quality. Measurements are taken to represent optical properties along a column of water, so the spectral data must be related to depth. However, the spatial positions of measurements may differ because collecting instruments vary, and the records may not refer to the same wavelengths. An additional difficulty is that distinct instruments store data in different formats. A data integration approach is needed to make these large, multi-source data sets suitable for analysis. Only then does it become possible to evaluate semi-empirical models, even automatically, preceded by preliminary quality-control tasks. This work presents a solution for the stated scenario based on a spatial (geographic) database approach, adopting an object-relational Database Management System (DBMS) for its ability to represent all data collected in the field, in conjunction with data obtained by laboratory analysis and Remote Sensing images taken at the time of field data collection. This data integration approach leads to a 4D representation, since its coordinate system includes 3D spatial coordinates (planimetric and depth) and the time when each datum was taken. The PostgreSQL DBMS, extended by the PostGIS module, was adopted to provide the ability to manage spatial/geospatial data. A prototype was developed with the main tools an analyst needs to prepare the data sets for analysis.
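    The 4D representation amounts to storing planimetric position, depth, and acquisition time alongside each spectral reading, so that a depth profile at a given wavelength falls out of a simple query. The sketch below uses SQLite only to keep the example self-contained (the paper's system uses PostgreSQL with PostGIS), and all column names and values are illustrative:

```python
import sqlite3

# Sketch of a 4D bio-optical record: x/y (planimetric), depth, and time,
# plus the spectral reading itself. Columns and values are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE radiometry (
        x REAL, y REAL,        -- planimetric coordinates
        depth_m REAL,          -- metres below the surface
        taken_at TEXT,         -- acquisition time (ISO 8601)
        wavelength_nm REAL,
        value REAL
    )
""")
conn.executemany(
    "INSERT INTO radiometry VALUES (?, ?, ?, ?, ?, ?)",
    [
        (530100.0, 7550200.0, 0.5, "2014-05-13T10:02:00", 443.0, 0.012),
        (530100.0, 7550200.0, 1.5, "2014-05-13T10:03:00", 443.0, 0.008),
        (530100.0, 7550200.0, 3.0, "2014-05-13T10:05:00", 443.0, 0.005),
    ],
)
# Relate spectral data to depth: the 443 nm profile down the water column.
profile = conn.execute("""
    SELECT depth_m, value FROM radiometry
    WHERE wavelength_nm = 443.0 ORDER BY depth_m
""").fetchall()
print(profile)  # → [(0.5, 0.012), (1.5, 0.008), (3.0, 0.005)]
```

    In the real system, PostGIS geometry types would carry the spatial coordinates, enabling spatial indexing and joins against remote-sensing footprints.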

  11. An Autonomic Framework for Integrating Security and Quality of Service Support in Databases

    Science.gov (United States)

    Alomari, Firas

    2013-01-01

    The back-end databases of multi-tiered applications are a major data security concern for enterprises. The abundance of these systems and the emergence of new and different threats require multiple and overlapping security mechanisms. Therefore, providing multiple and diverse database intrusion detection and prevention systems (IDPS) is a critical…

  12. Integration of gel-based and gel-free proteomic data for functional analysis of proteins through Soybean Proteome Database

    KAUST Repository

    Komatsu, Setsuko

    2017-05-10

    The Soybean Proteome Database (SPD) stores data on soybean proteins obtained with gel-based and gel-free proteomic techniques. The database was constructed to provide information on proteins for functional analyses. The majority of the data is focused on soybean (Glycine max ‘Enrei’). The growth and yield of soybean are strongly affected by environmental stresses such as flooding. The database was originally constructed using data on soybean proteins separated by two-dimensional polyacrylamide gel electrophoresis, which is a gel-based proteomic technique. Since 2015, the database has been expanded to incorporate data obtained by label-free mass spectrometry-based quantitative proteomics, which is a gel-free proteomic technique. Here, the portions of the database consisting of gel-free proteomic data are described. The gel-free proteomic database contains 39,212 proteins identified in 63 sample sets, such as temporal and organ-specific samples of soybean plants grown under flooding stress or non-stressed conditions. In addition, data on organellar proteins identified in mitochondria, nuclei, and endoplasmic reticulum are stored. Furthermore, the database integrates multiple omics data such as genomics, transcriptomics, metabolomics, and proteomics. The SPD database is accessible at http://proteome.dc.affrc.go.jp/Soybean/. Biological significance: The Soybean Proteome Database stores data obtained from both gel-based and gel-free proteomic techniques. The gel-free proteomic database comprises 39,212 proteins identified in 63 sample sets, such as different organs of soybean plants grown under flooding stress or non-stressed conditions in a time-dependent manner. In addition, organellar proteins identified in mitochondria, nuclei, and endoplasmic reticulum are stored in the gel-free proteomics database. A total of 44,704 proteins, including 5490 proteins identified using a gel-based proteomic technique, are stored in the SPD. It accounts for approximately 80% of all

  13. Integration of gel-based and gel-free proteomic data for functional analysis of proteins through Soybean Proteome Database.

    Science.gov (United States)

    Komatsu, Setsuko; Wang, Xin; Yin, Xiaojian; Nanjo, Yohei; Ohyanagi, Hajime; Sakata, Katsumi

    2017-06-23

    The Soybean Proteome Database (SPD) stores data on soybean proteins obtained with gel-based and gel-free proteomic techniques. The database was constructed to provide information on proteins for functional analyses. The majority of the data is focused on soybean (Glycine max 'Enrei'). The growth and yield of soybean are strongly affected by environmental stresses such as flooding. The database was originally constructed using data on soybean proteins separated by two-dimensional polyacrylamide gel electrophoresis, which is a gel-based proteomic technique. Since 2015, the database has been expanded to incorporate data obtained by label-free mass spectrometry-based quantitative proteomics, which is a gel-free proteomic technique. Here, the portions of the database consisting of gel-free proteomic data are described. The gel-free proteomic database contains 39,212 proteins identified in 63 sample sets, such as temporal and organ-specific samples of soybean plants grown under flooding stress or non-stressed conditions. In addition, data on organellar proteins identified in mitochondria, nuclei, and endoplasmic reticulum are stored. Furthermore, the database integrates multiple omics data such as genomics, transcriptomics, metabolomics, and proteomics. The SPD database is accessible at http://proteome.dc.affrc.go.jp/Soybean/. The Soybean Proteome Database stores data obtained from both gel-based and gel-free proteomic techniques. The gel-free proteomic database comprises 39,212 proteins identified in 63 sample sets, such as different organs of soybean plants grown under flooding stress or non-stressed conditions in a time-dependent manner. In addition, organellar proteins identified in mitochondria, nuclei, and endoplasmic reticulum are stored in the gel-free proteomics database. A total of 44,704 proteins, including 5490 proteins identified using a gel-based proteomic technique, are stored in the SPD. It accounts for approximately 80% of all predicted proteins from

  14. Integration of gel-based and gel-free proteomic data for functional analysis of proteins through Soybean Proteome Database

    KAUST Repository

    Komatsu, Setsuko; Wang, Xin; Yin, Xiaojian; Nanjo, Yohei; Ohyanagi, Hajime; Sakata, Katsumi

    2017-01-01

    The Soybean Proteome Database (SPD) stores data on soybean proteins obtained with gel-based and gel-free proteomic techniques. The database was constructed to provide information on proteins for functional analyses. The majority of the data is focused on soybean (Glycine max ‘Enrei’). The growth and yield of soybean are strongly affected by environmental stresses such as flooding. The database was originally constructed using data on soybean proteins separated by two-dimensional polyacrylamide gel electrophoresis, which is a gel-based proteomic technique. Since 2015, the database has been expanded to incorporate data obtained by label-free mass spectrometry-based quantitative proteomics, which is a gel-free proteomic technique. Here, the portions of the database consisting of gel-free proteomic data are described. The gel-free proteomic database contains 39,212 proteins identified in 63 sample sets, such as temporal and organ-specific samples of soybean plants grown under flooding stress or non-stressed conditions. In addition, data on organellar proteins identified in mitochondria, nuclei, and endoplasmic reticulum are stored. Furthermore, the database integrates multiple omics data such as genomics, transcriptomics, metabolomics, and proteomics. The SPD database is accessible at http://proteome.dc.affrc.go.jp/Soybean/. Biological significance: The Soybean Proteome Database stores data obtained from both gel-based and gel-free proteomic techniques. The gel-free proteomic database comprises 39,212 proteins identified in 63 sample sets, such as different organs of soybean plants grown under flooding stress or non-stressed conditions in a time-dependent manner. In addition, organellar proteins identified in mitochondria, nuclei, and endoplasmic reticulum are stored in the gel-free proteomics database. A total of 44,704 proteins, including 5490 proteins identified using a gel-based proteomic technique, are stored in the SPD. It accounts for approximately 80% of all

  15. LmSmdB: an integrated database for metabolic and gene regulatory network in Leishmania major and Schistosoma mansoni

    Directory of Open Access Journals (Sweden)

    Priyanka Patel

    2016-03-01

    A database that integrates all the information required for biological processing should ideally be stored on one platform. We have attempted to create one such integrated database that can be a one-stop shop for the essential features required to fetch valuable results. LmSmdB (L. major and S. mansoni database) is an integrated database that accounts for the biological networks and regulatory pathways computationally determined by integrating knowledge of the genome sequences of the mentioned organisms. It is the first database of its kind that, together with network design, shows the simulation pattern of the product. This database intends to create a comprehensive canopy for the regulation of lipid metabolism reactions in the parasite by integrating the transcription factors, regulatory genes, and the protein products controlled by the transcription factors, and hence operating the metabolism at the genetic level. Keywords: L. major, S. mansoni, Regulatory networks, Transcription factors, Database

  16. ViralORFeome: an integrated database to generate a versatile collection of viral ORFs.

    Science.gov (United States)

    Pellet, J; Tafforeau, L; Lucas-Hourani, M; Navratil, V; Meyniel, L; Achaz, G; Guironnet-Paquet, A; Aublin-Gex, A; Caignard, G; Cassonnet, P; Chaboud, A; Chantier, T; Deloire, A; Demeret, C; Le Breton, M; Neveu, G; Jacotot, L; Vaglio, P; Delmotte, S; Gautier, C; Combet, C; Deleage, G; Favre, M; Tangy, F; Jacob, Y; Andre, P; Lotteau, V; Rabourdin-Combe, C; Vidalain, P O

    2010-01-01

    Large collections of protein-encoding open reading frames (ORFs) established in a versatile recombination-based cloning system have been instrumental in studying protein functions in high-throughput assays. Such 'ORFeome' resources have been developed for several organisms, but in virology, plasmid collections covering a significant fraction of the virosphere are still needed. In this perspective, we present ViralORFeome 1.0 (http://www.viralorfeome.com), an open-access database and management system that provides an integrated set of bioinformatic tools to clone viral ORFs in the Gateway® system. ViralORFeome provides a convenient interface to navigate through virus genome sequences, to design ORF-specific cloning primers, to validate the sequence of generated constructs and to browse established collections of virus ORFs. Most importantly, ViralORFeome has been designed to manage all possible variants or mutants of a given ORF so that the cloning procedure can be applied to any emerging virus strain. A subset of plasmid constructs generated with the ViralORFeome platform has been tested with success for heterologous protein expression in different expression systems at proteome scale. ViralORFeome should provide our community with a framework to establish a large collection of virus ORF clones, an instrumental resource to determine functions, activities and binding partners of viral proteins.

  17. A Comprehensive Database and Analysis Framework To Incorporate Multiscale Data Types and Enable Integrated Analysis of Bioactive Polyphenols.

    Science.gov (United States)

    Ho, Lap; Cheng, Haoxiang; Wang, Jun; Simon, James E; Wu, Qingli; Zhao, Danyue; Carry, Eileen; Ferruzzi, Mario G; Faith, Jeremiah; Valcarcel, Breanna; Hao, Ke; Pasinetti, Giulio M

    2018-03-05

    The development of a given botanical preparation for eventual clinical application requires extensive, detailed characterizations of the chemical composition, as well as the biological availability, biological activity, and safety profiles of the botanical. These issues are typically addressed using diverse experimental protocols and model systems. Based on this consideration, in this study we established a comprehensive database and analysis framework for the collection, collation, and integrative analysis of diverse, multiscale data sets. Using this framework, we conducted an integrative analysis of heterogeneous data from in vivo and in vitro investigation of a complex bioactive dietary polyphenol-rich preparation (BDPP) and built an integrated network linking data sets generated from this multitude of diverse experimental paradigms. We established a comprehensive database and analysis framework as well as a systematic and logical means to catalogue and collate the diverse array of information gathered, which is securely stored and added to in a standardized manner to enable fast query. We demonstrated the utility of the database in (1) a statistical ranking scheme to prioritize responses to treatments and (2) in-depth reconstruction of functionality studies. By examination of these data sets, the system allows analytical querying of heterogeneous data and access to information related to interactions, mechanisms of action, functions, etc., which ultimately provides a global overview of complex biological responses. Collectively, we present an integrative analysis framework that leads to novel insights on the biological activities of a complex botanical such as BDPP, based on data-driven characterizations of interactions between BDPP-derived phenolic metabolites and their mechanisms of action, as well as synergism and/or potential cancellation of biological functions. Our integrative analytical approach provides novel means for a systematic integrative

  18. Health Providers' Counselling of Caregivers in the Integrated ...

    African Journals Online (AJOL)

    Results: Health providers performed well in assessing the child's problem (85%); listening (100%); use of simple language (95%); use of kind tone of voice (99%); showing interest in caregivers (99%); giving feeding ... Keywords: Child, preschool; infant; health-provider; caregiver; counselling; IMCI-counselling; Uganda

  19. Integration of an Evidence Base into a Probabilistic Risk Assessment Model. The Integrated Medical Model Database: An Organized Evidence Base for Assessing In-Flight Crew Health Risk and System Design

    Science.gov (United States)

    Saile, Lynn; Lopez, Vilma; Bickham, Grandin; FreiredeCarvalho, Mary; Kerstman, Eric; Byrne, Vicky; Butler, Douglas; Myers, Jerry; Walton, Marlei

    2011-01-01

    This slide presentation reviews the Integrated Medical Model (IMM) database, an organized evidence base for assessing in-flight crew health risk. The database is a relational database accessible to many users. It quantifies the model inputs through a ranking based on the highest value of the data, expressed as a Level of Evidence (LOE) and a Quality of Evidence (QOE) score, which together assess the evidence base for each medical condition. The IMM evidence base has already provided invaluable information for designers, among other uses.

  20. MIPS Arabidopsis thaliana Database (MAtDB): an integrated biological knowledge resource for plant genomics

    Science.gov (United States)

    Schoof, Heiko; Ernst, Rebecca; Nazarov, Vladimir; Pfeifer, Lukas; Mewes, Hans-Werner; Mayer, Klaus F. X.

    2004-01-01

    Arabidopsis thaliana is the most widely studied model plant. Functional genomics is intensively underway in many laboratories worldwide. Beyond the basic annotation of the primary sequence data, the annotated genetic elements of Arabidopsis must be linked to diverse biological data and higher order information such as metabolic or regulatory pathways. The MIPS Arabidopsis thaliana database MAtDB aims to provide a comprehensive resource for Arabidopsis as a genome model that serves as a primary reference for research in plants and is suitable for transfer of knowledge to other plants, especially crops. The genome sequence as a common backbone serves as a scaffold for the integration of data, while, in a complementary effort, these data are enhanced through the application of state-of-the-art bioinformatics tools. This information is visualized on a genome-wide and a gene-by-gene basis with access both for web users and applications. This report updates the information given in a previous report and provides an outlook on further developments. The MAtDB web interface can be accessed at http://mips.gsf.de/proj/thal/db. PMID:14681437

  1. Integrating alternative providers into managed care: a case study.

    Science.gov (United States)

    Broida, M

    1997-09-01

    Alternative medical techniques have become extremely popular, particularly in the western United States. Washington State recently enacted a law requiring certain health plans to include alternative providers on their physician panels. The author describes the efforts of one MCO to comply.

  2. Integration of first-principles methods and crystallographic database searches for new ferroelectrics: Strategies and explorations

    International Nuclear Information System (INIS)

    Bennett, Joseph W.; Rabe, Karin M.

    2012-01-01

    In this concept paper, the development of strategies for the integration of first-principles methods with crystallographic database mining for the discovery and design of novel ferroelectric materials is discussed, drawing on the results and experience derived from exploratory investigations on three different systems: (1) the double perovskite Sr(Sb1/2Mn1/2)O3 as a candidate semiconducting ferroelectric; (2) polar derivatives of schafarzikite MSb2O4; and (3) ferroelectric semiconductors with formula M2P2(S,Se)6. A variety of avenues for further research and investigation are suggested, including automated structure type classification, low-symmetry improper ferroelectrics, and high-throughput first-principles searches for additional representatives of structural families with desirable functional properties. - Graphical abstract: Integration of first-principles methods with crystallographic database mining, for the discovery and design of novel ferroelectric materials, could potentially lead to new classes of multifunctional materials. Highlights: ► Integration of first-principles methods and database mining. ► Minor structural families with desirable functional properties. ► Survey of polar entries in the Inorganic Crystal Structural Database.

  3. Distributed Database Semantic Integration of Wireless Sensor Network to Access the Environmental Monitoring System

    Directory of Open Access Journals (Sweden)

    Ubaidillah Umar

    2018-06-01

    Full Text Available A wireless sensor network (WSN) works continuously to gather information from sensors that generate large volumes of data to be handled and processed by applications. Current efforts in sensor networks focus more on networking and the development of services for a variety of applications and less on processing and integrating data from heterogeneous sensors. There is an increased need for information to become shareable across different sensors, database platforms, and applications, which is not easily implemented in traditional database systems. To solve the issue of these large amounts of data from different servers and database platforms (including sensor data), a semantic sensor web service platform is needed to enable a machine to extract meaningful information from the sensors' raw data. This additionally helps to minimize and simplify data processing and to deduce new information from existing data. This paper implements a semantic web data platform (SWDP) to manage the distribution of sensor data based on a semantic database system. SWDP uses sensors for temperature, humidity, carbon monoxide, carbon dioxide, luminosity, and noise. The system uses the Sesame semantic web database for data processing and a WSN to distribute, minimize, and simplify information processing. The sensor nodes are distributed in different places to collect sensor data. The SWDP generates context information in the form of a resource description framework. The experiment results demonstrate that the SWDP is more efficient than the traditional database system in terms of memory usage and processing time.
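
The core idea behind such a platform, sensor context stored as subject-predicate-object triples (the RDF data model) and queried by pattern matching, can be sketched with a toy in-memory triple store. This illustrates only the data model, not the SWDP itself (which uses the Sesame database); the vocabulary terms and node names below are invented:

```python
# A minimal triple store: each fact is a (subject, predicate, object) triple.
triples = set()

def add(s, p, o):
    triples.add((s, p, o))

def match(s=None, p=None, o=None):
    """Return all triples matching the pattern; None is a wildcard."""
    return [(ts, tp, to) for (ts, tp, to) in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

# Context information for one sensor reading, in RDF style.
add("reading42", "rdf:type", "ssn:Observation")
add("reading42", "ssn:observedProperty", "temperature")
add("reading42", "ssn:hasValue", "28.5")
add("reading42", "ssn:observedBy", "node3")

# "Which observations did node3 make?"
print(match(p="ssn:observedBy", o="node3"))
```

Because every fact has the same three-column shape, readings from heterogeneous sensors can be merged and queried uniformly, which is what makes the semantic layer attractive for WSN data integration.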

  4. Using ontology databases for scalable query answering, inconsistency detection, and data integration

    Science.gov (United States)

    Dou, Dejing

    2011-01-01

    An ontology database is a basic relational database management system that models an ontology plus its instances. To reason over the transitive closure of instances in the subsumption hierarchy, for example, an ontology database can either unfold views at query time or propagate assertions using triggers at load time. In this paper, we use existing benchmarks to evaluate our method—using triggers—and we demonstrate that by forward computing inferences, we not only improve query time, but the improvement appears to cost only more space (not time). However, we go on to show that the true penalties were simply opaque to the benchmark, i.e., the benchmark inadequately captures load-time costs. We have applied our methods to two case studies in biomedicine, using ontologies and data from genetics and neuroscience to illustrate two important applications: first, ontology databases answer ontology-based queries effectively; second, using triggers, ontology databases detect instance-based inconsistencies—something not possible using views. Finally, we demonstrate how to extend our methods to perform data integration across multiple, distributed ontology databases. PMID:22163378
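
The trigger-based forward computation described above can be sketched with SQLite: asserting an instance fires a trigger that also asserts the instance for every superclass, so subsumption queries need no view unfolding at query time. This is a minimal illustration of the technique, not the paper's implementation; the table and class names are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA recursive_triggers = ON")  # let inferences chain upward
conn.executescript("""
CREATE TABLE is_a (sub TEXT, sup TEXT);                     -- ontology
CREATE TABLE instance_of (id TEXT, cls TEXT,
                          PRIMARY KEY (id, cls));           -- instances
-- Forward-compute inference: asserting id:cls also asserts id:super(cls).
CREATE TRIGGER propagate AFTER INSERT ON instance_of
BEGIN
  INSERT OR IGNORE INTO instance_of (id, cls)
    SELECT NEW.id, sup FROM is_a WHERE sub = NEW.cls;
END;
""")
conn.executemany("INSERT INTO is_a VALUES (?, ?)",
                 [("neuron", "cell"), ("cell", "entity")])

# One load-time assertion...
conn.execute("INSERT INTO instance_of VALUES ('n1', 'neuron')")
# ...yields the full transitive closure, queryable with a plain SELECT.
rows = sorted(r[0] for r in conn.execute(
    "SELECT cls FROM instance_of WHERE id = 'n1'"))
print(rows)  # expect ['cell', 'entity', 'neuron']
```

The `INSERT OR IGNORE` plus the primary key stops the recursion once a fact is already known, which is also why the space cost the paper measures grows with the size of the closure.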

  5. VaProS: a database-integration approach for protein/genome information retrieval

    KAUST Repository

    Gojobori, Takashi; Ikeo, Kazuho; Katayama, Yukie; Kawabata, Takeshi; Kinjo, Akira R.; Kinoshita, Kengo; Kwon, Yeondae; Migita, Ohsuke; Mizutani, Hisashi; Muraoka, Masafumi; Nagata, Koji; Omori, Satoshi; Sugawara, Hideaki; Yamada, Daichi; Yura, Kei

    2016-01-01

    Life science research now heavily relies on all sorts of databases for genome sequences, transcription, protein three-dimensional (3D) structures, protein–protein interactions, phenotypes and so forth. The knowledge accumulated by all the omics research is so vast that a computer-aided search of data is now a prerequisite for starting a new study. In addition, a combinatory search throughout these databases has a chance to extract new ideas and new hypotheses that can be examined by wet-lab experiments. By virtually integrating the related databases on the Internet, we have built a new web application that facilitates life science researchers for retrieving experts’ knowledge stored in the databases and for building a new hypothesis of the research target. This web application, named VaProS, puts stress on the interconnection between the functional information of genome sequences and protein 3D structures, such as structural effect of the gene mutation. In this manuscript, we present the notion of VaProS, the databases and tools that can be accessed without any knowledge of database locations and data formats, and the power of search exemplified in quest of the molecular mechanisms of lysosomal storage disease. VaProS can be freely accessed at http://p4d-info.nig.ac.jp/vapros/.

  6. VaProS: a database-integration approach for protein/genome information retrieval

    KAUST Repository

    Gojobori, Takashi

    2016-12-24

    Life science research now heavily relies on all sorts of databases for genome sequences, transcription, protein three-dimensional (3D) structures, protein–protein interactions, phenotypes and so forth. The knowledge accumulated by all the omics research is so vast that a computer-aided search of data is now a prerequisite for starting a new study. In addition, a combinatory search throughout these databases has a chance to extract new ideas and new hypotheses that can be examined by wet-lab experiments. By virtually integrating the related databases on the Internet, we have built a new web application that facilitates life science researchers for retrieving experts’ knowledge stored in the databases and for building a new hypothesis of the research target. This web application, named VaProS, puts stress on the interconnection between the functional information of genome sequences and protein 3D structures, such as structural effect of the gene mutation. In this manuscript, we present the notion of VaProS, the databases and tools that can be accessed without any knowledge of database locations and data formats, and the power of search exemplified in quest of the molecular mechanisms of lysosomal storage disease. VaProS can be freely accessed at http://p4d-info.nig.ac.jp/vapros/.

  7. NOAA's Integrated Tsunami Database: Data for improved forecasts, warnings, research, and risk assessments

    Science.gov (United States)

    Stroker, Kelly; Dunbar, Paula; Mungov, George; Sweeney, Aaron; McCullough, Heather; Carignan, Kelly

    2015-04-01

    The National Oceanic and Atmospheric Administration (NOAA) has primary responsibility in the United States for tsunami forecasts, warnings, and research, and supports community resiliency. NOAA's National Geophysical Data Center (NGDC) and co-located World Data Service for Geophysics provide a unique collection of data enabling communities to ensure preparedness and resilience to tsunami hazards. Immediately following a damaging or fatal tsunami event there is a need for authoritative data and information. The NGDC Global Historical Tsunami Database (http://www.ngdc.noaa.gov/hazard/) includes all tsunami events, regardless of intensity, as well as earthquakes and volcanic eruptions that caused fatalities, moderate damage, or generated a tsunami. The long-term data from these events, including photographs of damage, provide clues to what might happen in the future. NGDC catalogs the information on global historical tsunamis and uses these data to produce qualitative tsunami hazard assessments at regional levels. In addition to the socioeconomic effects of a tsunami, NGDC also obtains water level data from the coasts and the deep-ocean at stations operated by the NOAA/NOS Center for Operational Oceanographic Products and Services, the NOAA Tsunami Warning Centers, and the National Data Buoy Center (NDBC) and produces research-quality data to isolate seismic waves (in the case of the deep-ocean sites) and the tsunami signal. These water-level data provide evidence of sea-level fluctuation and possible inundation events. NGDC is also building high-resolution digital elevation models (DEMs) to support real-time forecasts, implemented at 75 US coastal communities. After a damaging or fatal event NGDC begins to collect and integrate data and information from many organizations into the hazards databases. Sources of data include our NOAA partners, the U.S. 
Geological Survey, the UNESCO Intergovernmental Oceanographic Commission (IOC) and International Tsunami Information Center

  8. Quality controls in integrative approaches to detect errors and inconsistencies in biological databases

    Directory of Open Access Journals (Sweden)

    Ghisalberti Giorgio

    2010-12-01

    Full Text Available Numerous biomolecular data are available, but they are scattered across many databases and only some of them are curated by experts. Most available data are computationally derived and include errors and inconsistencies. Effective use of available data to derive new knowledge hence requires data integration and quality improvement. Many approaches for data integration have been proposed. Data warehousing seems to be the most adequate when comprehensive analysis of integrated data is required, which also makes it the most suitable for implementing comprehensive quality controls on integrated data. We previously developed GFINDer (http://www.bioinformatics.polimi.it/GFINDer/), a web system that supports scientists in effectively using available information. It allows comprehensive statistical analysis and mining of functional and phenotypic annotations of gene lists, such as those identified by high-throughput biomolecular experiments. The GFINDer backend is composed of a multi-organism genomic and proteomic data warehouse (GPDW). Within the GPDW, several controlled terminologies and ontologies, which describe gene and gene product related biomolecular processes, functions and phenotypes, are imported and integrated, together with their associations with genes and proteins of several organisms. To ease keeping the GPDW updated and to ensure the best possible quality of the data integrated in subsequent updates of the data warehouse, we developed several automatic procedures. Within them, we implemented numerous data quality control techniques to test the integrated data for a variety of possible errors and inconsistencies. Among other features, the implemented controls check data structure and completeness, ontological data consistency, ID format and evolution, unexpected data quantification values, and consistency of data from single and multiple sources. We use the implemented controls to analyze the quality of data available from several

  9. A Reaction Database for Small Molecule Pharmaceutical Processes Integrated with Process Information

    Directory of Open Access Journals (Sweden)

    Emmanouil Papadakis

    2017-10-01

    Full Text Available This article describes the development of a reaction database with the objective to collect data for multiphase reactions involved in small molecule pharmaceutical processes with a search engine to retrieve necessary data in investigations of reaction-separation schemes, such as the role of organic solvents in reaction performance improvement. The focus of this reaction database is to provide a data rich environment with process information available to assist during the early stage synthesis of pharmaceutical products. The database is structured in terms of reaction classification of reaction types; compounds participating in the reaction; use of organic solvents and their function; information for single step and multistep reactions; target products; reaction conditions and reaction data. Information for reactor scale-up together with information for the separation and other relevant information for each reaction and reference are also available in the database. Additionally, the retrieved information obtained from the database can be evaluated in terms of sustainability using well-known “green” metrics published in the scientific literature. The application of the database is illustrated through the synthesis of ibuprofen, for which data on different reaction pathways have been retrieved from the database and compared using “green” chemistry metrics.
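
Two of the "green" chemistry metrics commonly used for such route comparisons, atom economy and the E-factor, are simple to compute. The sketch below uses invented molecular weights and masses, not data from the database described above:

```python
def atom_economy(product_mw, reactant_mws):
    """Atom economy (%) = 100 * MW(product) / sum of stoichiometric
    reactant MWs; higher means fewer atoms end up as by-products."""
    return 100.0 * product_mw / sum(reactant_mws)

def e_factor(waste_mass_kg, product_mass_kg):
    """E-factor = kg of waste per kg of product (lower is greener)."""
    return waste_mass_kg / product_mass_kg

# Hypothetical numbers for two candidate routes to the same product.
route_a = atom_economy(206.3, [150.2, 102.1])
route_b = atom_economy(206.3, [150.2, 56.1])
print(round(route_a, 1), round(route_b, 1))  # 81.8 100.0
```

Ranking retrieved reaction pathways by metrics like these is exactly the kind of comparison the article illustrates for the ibuprofen synthesis.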

  10. A database and tool, IM Browser, for exploring and integrating emerging gene and protein interaction data for Drosophila

    Directory of Open Access Journals (Sweden)

    Parrish Jodi R

    2006-04-01

    Full Text Available Abstract Background Biological processes are mediated by networks of interacting genes and proteins. Efforts to map and understand these networks are resulting in the proliferation of interaction data derived from both experimental and computational techniques for a number of organisms. The volume of this data combined with the variety of specific forms it can take has created a need for comprehensive databases that include all of the available data sets, and for exploration tools to facilitate data integration and analysis. One powerful paradigm for the navigation and analysis of interaction data is an interaction graph or map that represents proteins or genes as nodes linked by interactions. Several programs have been developed for graphical representation and analysis of interaction data, yet there remains a need for alternative programs that can provide casual users with rapid, easy access to many existing and emerging data sets. Description Here we describe a comprehensive database of Drosophila gene and protein interactions collected from a variety of sources, including low and high throughput screens, genetic interactions, and computational predictions. We also present a program for exploring multiple interaction data sets and for combining data from different sources. The program, referred to as the Interaction Map (IM) Browser, is a web-based application for searching and visualizing interaction data stored in a relational database system. Use of the application requires no downloads and minimal user configuration or training, thereby enabling rapid initial access to interaction data. IM Browser was designed to readily accommodate and integrate new types of interaction data as it becomes available. Moreover, all information associated with interaction measurements or predictions and the genes or proteins involved is accessible to the user. This allows combined searches and analyses based on either common or technique-specific attributes.

  11. Integrating the Allen Brain Institute Cell Types Database into Automated Neuroscience Workflow.

    Science.gov (United States)

    Stockton, David B; Santamaria, Fidel

    2017-10-01

    We developed software tools to download, extract features, and organize the Cell Types Database from the Allen Brain Institute (ABI) in order to integrate its whole cell patch clamp characterization data into the automated modeling/data analysis cycle. To expand the potential user base we employed both Python and MATLAB. The basic set of tools downloads selected raw data and extracts cell, sweep, and spike features, using ABI's feature extraction code. To facilitate data manipulation we added a tool to build a local specialized database of raw data plus extracted features. Finally, to maximize automation, we extended our NeuroManager workflow automation suite to include these tools plus a separate investigation database. The extended suite allows the user to integrate ABI experimental and modeling data into an automated workflow deployed on heterogeneous computer infrastructures, from local servers, to high performance computing environments, to the cloud. Since our approach is focused on workflow procedures our tools can be modified to interact with the increasing number of neuroscience databases being developed to cover all scales and properties of the nervous system.
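
The extract-features-then-store-locally step can be sketched as follows. This is a toy stand-in (pure Python plus SQLite) for the ABI feature extraction code and the local specialized database the tools build; the spike times and table layout are invented:

```python
import sqlite3

def extract_sweep_features(spike_times):
    """Toy stand-in for the ABI feature extraction step: spike count
    and mean inter-spike interval (ISI) for one sweep."""
    isis = [b - a for a, b in zip(spike_times, spike_times[1:])]
    mean_isi = sum(isis) / len(isis) if isis else None
    return {"n_spikes": len(spike_times), "mean_isi": mean_isi}

# Local specialized database of raw data plus extracted features.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE sweep_features
              (cell_id TEXT, sweep INTEGER,
               n_spikes INTEGER, mean_isi REAL)""")

# Hypothetical raw data: spike times (s) per sweep for one cell.
raw = {("cell_001", 1): [0.10, 0.15, 0.21], ("cell_001", 2): [0.30]}
for (cell, sweep), spikes in raw.items():
    f = extract_sweep_features(spikes)
    db.execute("INSERT INTO sweep_features VALUES (?, ?, ?, ?)",
               (cell, sweep, f["n_spikes"], f["mean_isi"]))

row = db.execute("SELECT n_spikes, mean_isi FROM sweep_features "
                 "WHERE sweep = 1").fetchone()
print(row)
```

A local table like this is what lets an automated workflow query features repeatedly without re-downloading or re-extracting the raw electrophysiology traces.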

  12. Integrating the DLD dosimetry system into the Almaraz NPP Corporative Database

    International Nuclear Information System (INIS)

    Gonzalez Crego, E.; Martin Lopez-Suevos, C.

    1996-01-01

    The article discusses the experience acquired during the integration of a new MGP Instruments DLD Dosimetry System into the Almaraz NPP corporative database and general communications network, following a client-server philosophy and taking into account the computing standards of the plant. The most important results obtained are: integration of DLD dosimetry information into corporative databases, permitting the use of new applications; sharing of existing personnel information with the DLD dosimetry application, thereby avoiding the redundant work of introducing data and improving the quality of the information; easier maintenance, both software and hardware, of the DLD system; maximum exploitation, from the computing point of view, of the initial investment; and adaptation of the application to the applicable legislation. (Author)

  13. The Future of Asset Management for Human Space Exploration: Supply Classification and an Integrated Database

    Science.gov (United States)

    Shull, Sarah A.; Gralla, Erica L.; deWeck, Olivier L.; Shishko, Robert

    2006-01-01

    One of the major logistical challenges in human space exploration is asset management. This paper presents observations on the practice of asset management in support of human space flight to date and discusses a functional-based supply classification and a framework for an integrated database that could be used to improve asset management and logistics for human missions to the Moon, Mars and beyond.

  14. CyanOmics: an integrated database of omics for the model cyanobacterium Synechococcus sp. PCC 7002

    OpenAIRE

    Yang, Yaohua; Feng, Jie; Li, Tao; Ge, Feng; Zhao, Jindong

    2015-01-01

    Cyanobacteria are an important group of organisms that carry out oxygenic photosynthesis and play vital roles in both the carbon and nitrogen cycles of the Earth. The annotated genome of Synechococcus sp. PCC 7002, as an ideal model cyanobacterium, is available. A series of transcriptomic and proteomic studies of Synechococcus sp. PCC 7002 cells grown under different conditions have been reported. However, no database of such integrated omics studies has been constructed. Here we present Cyan...

  15. An integrative clinical database and diagnostics platform for biomarker identification and analysis in ion mobility spectra of human exhaled air

    DEFF Research Database (Denmark)

    Schneider, Till; Hauschild, Anne-Christin; Baumbach, Jörg Ingo

    2013-01-01

    … data integration and semi-automated data analysis, in particular with regard to the rapid data accumulation emerging from the high-throughput nature of the MCC/IMS technology. Here, we present a comprehensive database application and analysis platform, which combines metabolic maps with heterogeneous … biomedical data in a well-structured manner. The design of the database is based on a hybrid of the entity-attribute-value (EAV) model and the EAV-CR, which incorporates the concepts of classes and relationships. Additionally, it offers an intuitive user interface that provides easy and quick access … to have a clear understanding of the detailed composition of human breath. Therefore, in addition to the clinical studies, there is a need for a flexible and comprehensive centralized data repository, which is capable of gathering all kinds of related information. Moreover, there is a demand for automated …
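
The entity-attribute-value (EAV) layout named in the abstract can be sketched with SQLite: one narrow table holds one row per (entity, attribute, value), so new kinds of measurements need no schema change. This is an illustration of the general model, not the platform's actual schema; the entities and attributes below are invented:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE eav (entity TEXT, attribute TEXT, value TEXT)")

rows = [
    ("patient7", "diagnosis", "COPD"),
    ("patient7", "age", "63"),
    ("peak112", "retention_time_s", "12.4"),   # a hypothetical MCC/IMS peak
    ("peak112", "observed_in", "patient7"),
]
db.executemany("INSERT INTO eav VALUES (?, ?, ?)", rows)

# Pivot one entity's attributes back into an ordinary record.
rec = dict(db.execute(
    "SELECT attribute, value FROM eav WHERE entity = 'patient7'"))
print(rec)
```

The EAV-CR refinement the abstract mentions adds class and relationship tables on top of this, so that (for example) `patient7` and `peak112` can be typed and linked rather than being bare strings.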

  16. Follicle Online: an integrated database of follicle assembly, development and ovulation.

    Science.gov (United States)

    Hua, Juan; Xu, Bo; Yang, Yifan; Ban, Rongjun; Iqbal, Furhan; Cooke, Howard J; Zhang, Yuanwei; Shi, Qinghua

    2015-01-01

    Folliculogenesis is an important part of ovarian function as it provides the oocytes for female reproductive life. Characterizing genes/proteins involved in folliculogenesis is fundamental for understanding the mechanisms associated with this biological function and for treating the diseases associated with folliculogenesis. A large number of genes/proteins associated with folliculogenesis have been identified from different species. However, no dedicated public resource is currently available for folliculogenesis-related genes/proteins that are validated by experiments. Here, we are reporting a database 'Follicle Online' that provides the experimentally validated gene/protein map of folliculogenesis in a number of species. Follicle Online is a web-based database system for storing and retrieving folliculogenesis-related experimental data. It provides detailed information for 580 genes/proteins (from 23 model organisms, including Homo sapiens, Mus musculus, Rattus norvegicus, Mesocricetus auratus, Bos taurus, Drosophila and Xenopus laevis) that have been reported to be involved in folliculogenesis, POF (premature ovarian failure) and PCOS (polycystic ovary syndrome). The literature was manually curated from more than 43,000 published articles (up to 1 March 2014). The Follicle Online database is implemented in PHP + MySQL + JavaScript and this user-friendly web application provides access to the stored data. In summary, we have developed a centralized database that provides users with comprehensive information about genes/proteins involved in folliculogenesis. This database can be accessed freely and all the stored data can be viewed without any registration. Database URL: http://mcg.ustc.edu.cn/sdap1/follicle/index.php © The Author(s) 2015. Published by Oxford University Press.

  17. Development and implementation of a custom integrated database with dashboards to assist with hematopathology specimen triage and traffic

    Directory of Open Access Journals (Sweden)

    Elizabeth M Azzato

    2014-01-01

    Full Text Available Background: At some institutions, including ours, bone marrow aspirate specimen triage is complex, with hematopathology triage decisions that need to be communicated to downstream ancillary testing laboratories and many specimen aliquot transfers that are handled outside of the laboratory information system (LIS. We developed a custom integrated database with dashboards to facilitate and streamline this workflow. Methods: We developed user-specific dashboards that allow entry of specimen information by technologists in the hematology laboratory, have custom scripting to present relevant information for the hematopathology service and ancillary laboratories and allow communication of triage decisions from the hematopathology service to other laboratories. These dashboards are web-accessible on the local intranet and accessible from behind the hospital firewall on a computer or tablet. Secure user access and group rights ensure that relevant users can edit or access appropriate records. Results: After database and dashboard design, two-stage beta-testing and user education was performed, with the first focusing on technologist specimen entry and the second on downstream users. Commonly encountered issues and user functionality requests were resolved with database and dashboard redesign. Final implementation occurred within 6 months of initial design; users report improved triage efficiency and reduced need for interlaboratory communications. Conclusions: We successfully developed and implemented a custom database with dashboards that facilitates and streamlines our hematopathology bone marrow aspirate triage. This provides an example of a possible solution to specimen communications and traffic that are outside the purview of a standard LIS.

  18. Influenza research database: an integrated bioinformatics resource for influenza virus research

    Science.gov (United States)

    The Influenza Research Database (IRD) is a U.S. National Institute of Allergy and Infectious Diseases (NIAID)-sponsored Bioinformatics Resource Center dedicated to providing bioinformatics support for influenza virus research. IRD facilitates the research and development of vaccines, diagnostics, an...

  19. PharmDB-K: Integrated Bio-Pharmacological Network Database for Traditional Korean Medicine.

    Directory of Open Access Journals (Sweden)

    Ji-Hyun Lee

    Full Text Available Despite the growing attention given to Traditional Medicine (TM worldwide, there is no well-known, publicly available, integrated bio-pharmacological Traditional Korean Medicine (TKM database for researchers in drug discovery. In this study, we have constructed PharmDB-K, which offers comprehensive information relating to TKM-associated drugs (compound, disease indication, and protein relationships. To explore the underlying molecular interaction of TKM, we integrated fourteen different databases, six Pharmacopoeias, and literature, and established a massive bio-pharmacological network for TKM and experimentally validated some cases predicted from the PharmDB-K analyses. Currently, PharmDB-K contains information about 262 TKMs, 7,815 drugs, 3,721 diseases, 32,373 proteins, and 1,887 side effects. One of the unique sets of information in PharmDB-K includes 400 indicator compounds used for standardization of herbal medicine. Furthermore, we are operating PharmDB-K via phExplorer (a network visualization software and BioMart (a data federation framework for convenient search and analysis of the TKM network. Database URL: http://pharmdb-k.org, http://biomart.i-pharm.org.

  20. MEGADOCK-Web: an integrated database of high-throughput structure-based protein-protein interaction predictions.

    Science.gov (United States)

    Hayashi, Takanori; Matsuzaki, Yuri; Yanagisawa, Keisuke; Ohue, Masahito; Akiyama, Yutaka

    2018-05-08

    Protein-protein interactions (PPIs) play several roles in living cells, and computational PPI prediction is a major focus of many researchers. The three-dimensional (3D) structure and binding surface are important for the design of PPI inhibitors. Therefore, rigid-body protein-protein docking calculations on pairs of protein structures are expected to allow elucidation of PPIs that differ from known complexes in terms of 3D structure, because known PPI information is not explicitly required. We have developed rapid PPI prediction software based on protein-protein docking, called MEGADOCK. In order to fully utilize the benefits of computational PPI predictions, it is necessary to construct a comprehensive database that gathers prediction results and their predicted 3D complex structures and makes them easily accessible. Although several databases exist that provide predicted PPIs, the previous databases do not contain a sufficient number of entries for the purpose of discovering novel PPIs. In this study, we constructed an integrated database of MEGADOCK PPI predictions, named MEGADOCK-Web. MEGADOCK-Web provides more than 10 times the number of PPI predictions of previous databases and enables users to conduct PPI predictions that cannot be found in conventional PPI prediction databases. In MEGADOCK-Web, there are 7528 protein chains and 28,331,628 predicted PPIs from all possible combinations of those proteins. Each protein structure is annotated with PDB ID, chain ID, UniProt AC, related KEGG pathway IDs, and known PPI pairs. Additionally, MEGADOCK-Web provides four powerful functions: 1) searching precalculated PPI predictions, 2) providing annotations for each predicted protein pair with an experimentally known PPI, 3) visualizing candidates that may interact with the query protein on biochemical pathways, and 4) visualizing predicted complex structures through a 3D molecular viewer. MEGADOCK-Web provides a huge amount of comprehensive PPI predictions based on

  1. Integrated application of the database for airborne geophysical survey achievement information

    International Nuclear Information System (INIS)

    Ji Zengxian; Zhang Junwei

    2006-01-01

    The paper briefly introduces the database of airborne geophysical survey achievement information. The database was developed on the Microsoft Windows platform using Visual C++ 6.0 and MapGIS. It is an information management system for airborne geophysical survey achievements with complete functions for graphic display, graphic cutting and output, data query, printing of documents and reports, database maintenance, etc. All information on airborne geophysical survey achievements in the nuclear industry from 1972 to 2003 is included. Based on the regional geological map and the Meso-Cenozoic basin map, detailed statistical information for each airborne survey area and for each airborne radioactive anomalous point and high-field point can be presented visually in combination with geological or basin research results. The successful development of this system provides a sound basis and platform for managing the archives and data of airborne geophysical survey achievements in the nuclear industry. (authors)

  2. Patient's and health care provider's perspectives on music therapy in palliative care - an integrative review.

    Science.gov (United States)

    Schmid, W; Rosland, J H; von Hofacker, S; Hunskår, I; Bruvik, F

    2018-02-20

    The use of music as therapy in multidisciplinary end-of-life care dates back to the 1970s, and today music therapy (MT) is one of the most frequently used complementary therapies in in-patient palliative care in the US. However, existing research has investigated music therapy's potential impact mainly from one perspective, referring to either a quantitative or a qualitative paradigm. The aim of this review is to provide an overview of users' and providers' perspectives on music therapy in palliative care within one research article. A systematic literature search was conducted using several databases, supplemented with a hand-search of journals, covering November 1978 to December 2016. Inclusion criteria were: music therapy with adults in palliative care conducted by a certified music therapist. Both quantitative and qualitative studies in English, German or a Scandinavian language published in peer-reviewed journals were included. We aimed to identify and discuss the perspectives of both patients and health care providers on music therapy's impact in palliative care, to advance a comprehensive understanding of its effectiveness, benefits and limitations. We investigated themes mentioned by patients within qualitative studies, as well as commonly chosen outcome measures in quantitative research. A qualitative approach utilizing inductive content analysis was carried out to analyze and categorize the data. Twelve articles, reporting on nine quantitative and three qualitative research studies, were included. Seven of the nine quantitative studies investigated pain as an outcome. All of the included quantitative studies reported positive effects of music therapy. Patients themselves associated MT with the expression of positive as well as challenging emotions and with increased well-being. An overarching theme in both types of research is a psycho-physiological change through music therapy. Both quantitative and qualitative research showed positive changes in

  3. FY1995 transduction method and CAD database systems for integrated design; 1995 nendo transduction ho to CAD database togo sekkei shien system

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    The Transduction method, developed by the research coordinator and Prof. Muroga, is one of the most popular methods for designing large-scale integrated circuits, and is used by major design tool companies in the USA and Japan. The major objectives of the research are to improve its capability and to exploit its reusable property by combining it with CAD databases. The major results of the project are as follows: (1) Improvement of the Transduction method: efficiency, capability and the maximum circuit size were improved, and the error compensation method was also improved. (2) Applications to new logic elements: the Transduction method was modified to cope with wired logic and FPGAs. (3) CAD databases: one of the major advantages of the Transduction method is the 'reusability' of already designed circuits, making it suitable for combination with CAD databases; we designed CAD databases suitable for cooperative design using the Transduction method. (4) Program development: programs for Windows 95 were developed for distribution. (NEDO)

  5. Integrating family planning into HIV care in western Kenya: HIV care providers' perspectives and experiences one year following integration.

    Science.gov (United States)

    Newmann, Sara J; Zakaras, Jennifer M; Tao, Amy R; Onono, Maricianah; Bukusi, Elizabeth A; Cohen, Craig R; Steinfeld, Rachel; Grossman, Daniel

    2016-01-01

    With high rates of unintended pregnancy in sub-Saharan Africa, integration of family planning (FP) into HIV care is being explored as a strategy to reduce unmet need for contraception. The perspectives and experiences of healthcare providers are critical in order to create sustainable models of integrated care. This qualitative study offers insight into how HIV care providers view and experience the benefits and challenges of providing integrated FP/HIV services in Nyanza Province, Kenya. Sixteen individual interviews were conducted among healthcare workers at six public sector HIV care facilities one year after the implementation of integrated FP and HIV services. Data were transcribed and analyzed qualitatively using grounded theory methods and Atlas.ti. Providers reported a number of benefits of integrated services that they believed increased the uptake and continuation of contraceptive methods. They felt that integrated services enabled them to reach a larger number of female and male patients, and to serve patients more efficiently than non-integrated services. The availability of FP services in the same place as HIV care also eliminated the need for most referrals, which many providers saw as a barrier for patients seeking FP. Providers reported many challenges to providing integrated services, including the lack of space, time, and sufficient staff, inadequate training, and commodity shortages. Despite these challenges, the vast majority of providers were supportive of FP/HIV integration and found integrated services to be beneficial to HIV-infected patients. Providers' concerns relating to staffing, infrastructure, and training need to be addressed in order to create sustainable, cost-effective FP/HIV integrated service models.

  6. Neutron metrology file NMF-90. An integrated database for performing neutron spectrum adjustment calculations

    International Nuclear Information System (INIS)

    Kocherov, N.P.

    1996-01-01

    The Neutron Metrology File NMF-90 is an integrated database for performing neutron spectrum adjustment (unfolding) calculations. It contains 4 different adjustment codes, the dosimetry reaction cross-section library IRDF-90/NMF-G with covariance files, 6 input data sets for reactor benchmark neutron fields, and a number of utility codes for processing and plotting the input and output data. The package consists of 9 PC HD diskettes and manuals for the codes. It is distributed by the Nuclear Data Section of the IAEA on request, free of charge. About 10 MB of disk space is needed to install and run a typical reactor neutron dosimetry unfolding problem. (author). 8 refs

  7. GeNNet: an integrated platform for unifying scientific workflows and graph databases for transcriptome data analysis

    Directory of Open Access Journals (Sweden)

    Raquel L. Costa

    2017-07-01

    analyzed. The results are integrated into GeNNet-DB, a database about genes, clusters, experiments and their properties and relationships. The resulting graph database is explored with queries that demonstrate the expressiveness of this data model for reasoning about gene interaction networks. GeNNet is the first platform to integrate the analytical process of transcriptome data with graph databases. It provides a comprehensive set of tools that would otherwise be challenging for non-expert users to install and use. Developers can add new functionality to components of GeNNet. The derived data allow for testing previous hypotheses about an experiment and exploring new ones through the interactive graph database environment. The platform supports the analysis of human, rhesus monkey, mouse and rat data from Affymetrix platforms. GeNNet is available as an open source platform at https://github.com/raquele/GeNNet and can be retrieved as a software container with the command docker pull quelopes/gennet.
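
    GeNNet-DB is queried through a graph data model. As a rough stand-in for such a query, the pure-Python sketch below walks a toy gene interaction network (hypothetical gene symbols, not GeNNet output) to collect all genes within a fixed number of interaction hops of a query gene.

```python
# Breadth-first traversal over a toy gene interaction network, mimicking a
# "neighbours within k hops" graph-database query. Gene names are placeholders.
from collections import deque

def genes_within(graph, start, max_hops):
    """Return the set of genes reachable from `start` in <= max_hops edges."""
    seen, frontier = {start: 0}, deque([start])
    while frontier:
        g = frontier.popleft()
        if seen[g] == max_hops:
            continue  # do not expand past the hop limit
        for nb in graph.get(g, ()):
            if nb not in seen:
                seen[nb] = seen[g] + 1
                frontier.append(nb)
    return {g for g in seen if g != start}

network = {"TP53": ["MDM2", "BRCA1"], "MDM2": ["CDKN1A"], "BRCA1": [], "CDKN1A": []}
print(sorted(genes_within(network, "TP53", 2)))
# -> ['BRCA1', 'CDKN1A', 'MDM2']
```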

  8. Are Integrated Plan Providers Associated With Lower Premiums on the Health Insurance Marketplaces?

    Science.gov (United States)

    La Forgia, Ambar; Maeda, Jared Lane K; Banthin, Jessica S

    2018-04-01

    As the health insurance industry becomes more consolidated, hospitals and health systems have started to enter the insurance business. Insurers are also rapidly acquiring providers. Although these "vertically" integrated plan providers are small players in the insurance market, they are becoming more numerous. The health insurance marketplaces (HIMs) offer a unique setting to study integrated plan providers relative to other insurer types because the HIMs were designed to promote competition. In this descriptive study, the authors compared the premiums of the lowest priced silver plans of integrated plan providers with other insurer types on the 2015 and 2016 HIMs. Integrated plan providers were associated with modestly lower premiums relative to most other insurer types. This study provides early insights into premium competition on the HIMs. Examining integrated plan providers as a separate insurer type has important policy implications because they are a growing segment of the marketplaces and their pricing behavior may influence future premium trends.

  9. Integrating query of relational and textual data in clinical databases: a case study.

    Science.gov (United States)

    Fisk, John M; Mutalik, Pradeep; Levin, Forrest W; Erdos, Joseph; Taylor, Caroline; Nadkarni, Prakash

    2003-01-01

    The authors designed and implemented a clinical data mart composed of an integrated information retrieval (IR) and relational database management system (RDBMS). Using commodity software, which supports interactive, attribute-centric text and relational searches, the mart houses 2.8 million documents that span a five-year period and supports basic IR features such as Boolean searches, stemming, and proximity and fuzzy searching. Results are relevance-ranked using either "total documents per patient" or "report type weighting." Non-curated medical text has a significant degree of malformation with respect to spelling and punctuation, which creates difficulties for text indexing and searching. Presently, the IR facilities of RDBMS packages lack the features necessary to handle such malformed text adequately. A robust IR+RDBMS system can be developed, but it requires integrating RDBMSs with third-party IR software. RDBMS vendors need to make their IR offerings more accessible to non-programmers.
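
    The integration pattern the authors describe, attribute-centric relational predicates combined with text search, can be sketched with an in-memory SQLite table plus a toy inverted index. The table layout, report types and document text below are invented; a real IR system would add stemming, proximity and fuzzy matching and relevance ranking.

```python
# Sketch of an integrated IR + RDBMS query: a text hit ANDed with a
# relational predicate. All data here is fabricated for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE report (id INTEGER PRIMARY KEY, patient TEXT,"
             " rtype TEXT, body TEXT)")
docs = [
    (1, "pt01", "radiology", "chest xray shows clear lungs"),
    (2, "pt01", "discharge", "patient discharged after pneumonia treatment"),
    (3, "pt02", "radiology", "xray reveals possible pneumonia"),
]
conn.executemany("INSERT INTO report VALUES (?,?,?,?)", docs)

# IR side: a toy inverted index over report bodies.
index = {}
for rid, _, _, body in docs:
    for tok in body.split():
        index.setdefault(tok, set()).add(rid)

def search(term, rtype):
    """Documents matching the text term AND the relational report-type filter."""
    ids = index.get(term, set())
    rows = conn.execute("SELECT id, patient FROM report WHERE rtype = ?",
                        (rtype,)).fetchall()
    return [(rid, pt) for rid, pt in rows if rid in ids]

print(search("pneumonia", "radiology"))   # -> [(3, 'pt02')]
```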

  10. Integrated Storage and Management of Vector and Raster Data Based on Oracle Database

    Directory of Open Access Journals (Sweden)

    WU Zheng

    2017-05-01

    At present, the storage and management of multi-source heterogeneous spatial data face many problems, such as difficult data transfer, the lack of unified storage, and low efficiency. By combining a relational database with spatial data engine technology, this paper proposes an approach for the integrated storage and management of vector and raster data based on Oracle. The approach first establishes an integrated storage model for vector and raster data and optimizes the retrieval mechanism, then designs a framework for seamless data transfer, and finally realizes the unified storage and efficient management of multi-source heterogeneous data. Experimental comparison with ArcSDE, a leading spatial data engine, shows that the proposed approach achieves higher data transfer performance and better query and retrieval efficiency.

  11. CPLA 1.0: an integrated database of protein lysine acetylation.

    Science.gov (United States)

    Liu, Zexian; Cao, Jun; Gao, Xinjiao; Zhou, Yanhong; Wen, Longping; Yang, Xiangjiao; Yao, Xuebiao; Ren, Jian; Xue, Yu

    2011-01-01

    As a reversible post-translational modification (PTM) discovered decades ago, protein lysine acetylation was known for its regulation of transcription through the modification of histones. Recent studies discovered that lysine acetylation targets broad substrates and especially plays an essential role in cellular metabolic regulation. Although acetylation is comparable with other major PTMs such as phosphorylation, an integrated resource still remains to be developed. In this work, we present the compendium of protein lysine acetylation (CPLA) database for lysine acetylated substrates and their sites. From the scientific literature, we manually collected 7151 experimentally identified acetylation sites in 3311 targets. We statistically studied the regulatory roles of lysine acetylation by analyzing the Gene Ontology (GO) and InterPro annotations. Combined with protein-protein interaction information, we systematically discovered a potential human lysine acetylation network (HLAN) among histone acetyltransferases (HATs), substrates and histone deacetylases (HDACs). In particular, 1862 triplet relationships of HAT-substrate-HDAC were retrieved from the HLAN, at least 13 of which were previously experimentally verified. The online services of the CPLA database were implemented in PHP + MySQL + JavaScript, while the local packages were developed in JAVA 1.5 (J2SE 5.0). The CPLA database is freely available for all users at: http://cpla.biocuckoo.org.
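
    One way to picture the HAT-substrate-HDAC triplets described above is as the intersection of two enzyme-to-substrate maps. The sketch below uses placeholder enzyme and substrate names, not CPLA data.

```python
# Enumerate (HAT, substrate, HDAC) triplets: every substrate targeted by
# both a HAT and an HDAC yields one triplet. Names are hypothetical.

def triplets(hat_targets, hdac_targets):
    """Return sorted (HAT, substrate, HDAC) triplets over shared substrates."""
    out = []
    for hat, subs in hat_targets.items():
        for hdac, dsubs in hdac_targets.items():
            for s in subs & dsubs:
                out.append((hat, s, hdac))
    return sorted(out)

hats = {"HAT_A": {"p53", "H3"}, "HAT_B": {"H3"}}
hdacs = {"HDAC_1": {"p53"}, "HDAC_2": {"H3"}}
print(triplets(hats, hdacs))
# -> [('HAT_A', 'H3', 'HDAC_2'), ('HAT_A', 'p53', 'HDAC_1'), ('HAT_B', 'H3', 'HDAC_2')]
```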

  12. EchoBASE: an integrated post-genomic database for Escherichia coli.

    Science.gov (United States)

    Misra, Raju V; Horler, Richard S P; Reindl, Wolfgang; Goryanin, Igor I; Thomas, Gavin H

    2005-01-01

    EchoBASE (http://www.ecoli-york.org) is a relational database designed to contain and manipulate information from post-genomic experiments using the model bacterium Escherichia coli K-12. Its aim is to collate information from a wide range of sources to provide clues to the functions of the approximately 1500 gene products that have no confirmed cellular function. The database is built on an enhanced annotation of the updated genome sequence of strain MG1655 and the association of experimental data with the E.coli genes and their products. Experiments that can be held within EchoBASE include proteomics studies, microarray data, protein-protein interaction data, structural data and bioinformatics studies. EchoBASE also contains annotated information on 'orphan' enzyme activities from this microbe to aid characterization of the proteins that catalyse these elusive biochemical reactions.

  13. The Eukaryotic Pathogen Databases: a functional genomic resource integrating data from human and veterinary parasites.

    Science.gov (United States)

    Harb, Omar S; Roos, David S

    2015-01-01

    Over the past 20 years, advances in high-throughput biological techniques and the availability of computational resources, including fast Internet access, have resulted in an explosion of large genome-scale data sets ("big data"). While such data are readily available for download, personal use and analysis from a variety of repositories, such analysis often requires computational skills that are in short supply. As a result, a number of databases have emerged to provide scientists with online tools enabling the interrogation of data without the need for sophisticated computational skills beyond a basic familiarity with an Internet browser. This chapter focuses on the Eukaryotic Pathogen Databases (EuPathDB: http://eupathdb.org) Bioinformatic Resource Center (BRC) and illustrates some of the available tools and methods.

  14. MSblender: A probabilistic approach for integrating peptide identifications from multiple database search engines.

    Science.gov (United States)

    Kwon, Taejoon; Choi, Hyungwon; Vogel, Christine; Nesvizhskii, Alexey I; Marcotte, Edward M

    2011-07-01

    Shotgun proteomics using mass spectrometry is a powerful method for protein identification but suffers limited sensitivity in complex samples. Integrating peptide identifications from multiple database search engines is a promising strategy to increase the number of peptide identifications and reduce the volume of unassigned tandem mass spectra. Existing methods pool statistical significance scores such as p-values or posterior probabilities of peptide-spectrum matches (PSMs) from multiple search engines after high scoring peptides have been assigned to spectra, but these methods lack reliable control of identification error rates as data are integrated from different search engines. We developed a statistically coherent method for integrative analysis, termed MSblender. MSblender converts raw search scores from search engines into a probability score for every possible PSM and properly accounts for the correlation between search scores. The method reliably estimates false discovery rates and identifies more PSMs than any single search engine at the same false discovery rate. Increased identifications increment spectral counts for most proteins and allow quantification of proteins that would not have been quantified by individual search engines. We also demonstrate that enhanced quantification contributes to improve sensitivity in differential expression analyses.
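
    As a toy illustration of the integration idea (not MSblender's actual model, which fits a full probabilistic score distribution and accounts for inter-engine correlation), the sketch below averages per-engine scores for each peptide-spectrum match (PSM) and estimates a target-decoy FDR at a score threshold. All PSM labels and scores are fabricated.

```python
# Toy multi-engine integration: merge scores per PSM, then estimate the
# false discovery rate by counting decoy hits above a threshold.

def combined_scores(engine_scores):
    """Average the (pre-normalized) scores each engine assigns to a PSM."""
    merged = {}
    for engine in engine_scores:
        for psm, s in engine.items():
            merged.setdefault(psm, []).append(s)
    return {psm: sum(v) / len(v) for psm, v in merged.items()}

def fdr_at(scores, decoys, threshold):
    """Target-decoy FDR: decoys / targets among PSMs scoring >= threshold."""
    accepted = [p for p, s in scores.items() if s >= threshold]
    d = sum(1 for p in accepted if p in decoys)
    t = len(accepted) - d
    return d / max(t, 1)

e1 = {"psm1": 0.9, "psm2": 0.4, "psm3": 0.8}
e2 = {"psm1": 0.7, "psm2": 0.6, "psm4": 0.2}
scores = combined_scores([e1, e2])
print(round(scores["psm1"], 2))          # -> 0.8
print(fdr_at(scores, {"psm4"}, 0.5))     # -> 0.0
```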

  15. Provider accountability as a driving force towards physician–hospital integration: a systematic review

    Directory of Open Access Journals (Sweden)

    Jeroen Trybou

    2015-04-01

    Background: Hospitals and physicians lie at the heart of our health care delivery system. In general, physicians provide medical care and hospitals the resources to deliver health care. In the past two decades many countries have adopted reforms that increase provider financial risk bearing; making providers financially accountable for the care they deliver stimulates integrated care delivery. Purpose: To assess the evidence base supporting the relationship between provider financial risk bearing and physician–hospital integration, and to identify the different types of methods used to measure physician–hospital integration in order to evaluate the functional value of these integrative models. Results: Nine studies met the inclusion criteria. The evidence base is mixed and inconclusive. Our methodological analysis shows that previous studies have largely focused on the formal structures of physician–hospital arrangements as an indicator of physician–hospital integration. Conclusion: The link between provider financial risk bearing and physician–hospital integration can at this time be supported merely on the basis of theoretical insights from agency theory rather than empirical research. Physician–hospital integration measurement has concentrated on the prevalence of contracting vehicles that enable joint bargaining in a managed care environment, without realizing integration and cooperation between hospitals and physicians. We therefore argue that these studies fail to shed light accurately on the impact of risk shifting on the hospital–physician relationship.

  17. ANISEED 2017: extending the integrated ascidian database to the exploration and evolutionary comparison of genome-scale datasets.

    Science.gov (United States)

    Brozovic, Matija; Dantec, Christelle; Dardaillon, Justine; Dauga, Delphine; Faure, Emmanuel; Gineste, Mathieu; Louis, Alexandra; Naville, Magali; Nitta, Kazuhiro R; Piette, Jacques; Reeves, Wendy; Scornavacca, Céline; Simion, Paul; Vincentelli, Renaud; Bellec, Maelle; Aicha, Sameh Ben; Fagotto, Marie; Guéroult-Bellone, Marion; Haeussler, Maximilian; Jacox, Edwin; Lowe, Elijah K; Mendez, Mickael; Roberge, Alexis; Stolfi, Alberto; Yokomori, Rui; Brown, C Titus; Cambillau, Christian; Christiaen, Lionel; Delsuc, Frédéric; Douzery, Emmanuel; Dumollard, Rémi; Kusakabe, Takehiro; Nakai, Kenta; Nishida, Hiroki; Satou, Yutaka; Swalla, Billie; Veeman, Michael; Volff, Jean-Nicolas; Lemaire, Patrick

    2018-01-04

    ANISEED (www.aniseed.cnrs.fr) is the main model organism database for tunicates, the sister-group of vertebrates. This release gives access to annotated genomes, gene expression patterns, and anatomical descriptions for nine ascidian species. It provides increased integration with external molecular and taxonomy databases, better support for epigenomics datasets, in particular RNA-seq, ChIP-seq and SELEX-seq, and features novel interactive interfaces for existing and novel datatypes. In particular, the cross-species navigation and comparison is enhanced through a novel taxonomy section describing each represented species and through the implementation of interactive phylogenetic gene trees for 60% of tunicate genes. The gene expression section displays the results of RNA-seq experiments for the three major model species of solitary ascidians. Gene expression is controlled by the binding of transcription factors to cis-regulatory sequences. A high-resolution description of the DNA-binding specificity for 131 Ciona robusta (formerly C. intestinalis type A) transcription factors by SELEX-seq is provided and used to map candidate binding sites across the Ciona robusta and Phallusia mammillata genomes. Finally, use of a WashU Epigenome browser enhances genome navigation, while a Genomicus server was set up to explore microsynteny relationships within tunicates and with vertebrates, Amphioxus, echinoderms and hemichordates. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  18. Integrating stations from the North America Gravity Database into a local GPS-based land gravity survey

    Science.gov (United States)

    Shoberg, Thomas G.; Stoddard, Paul R.

    2013-01-01

    The ability to augment local gravity surveys with additional gravity stations from easily accessible national databases can greatly increase the areal coverage and spatial resolution of a survey. It is, however, necessary to integrate such data seamlessly with the local survey. One challenge to overcome in integrating data from national databases is that these data are typically of unknown quality. This study presents a procedure for the evaluation and seamless integration of gravity data of unknown quality from a national database with data from a local Global Positioning System (GPS)-based survey. The starting components include the latitude, longitude, elevation and observed gravity at each station location. Interpolated surfaces of the complete Bouguer anomaly are used as a means of quality control and comparison. The result is an integrated dataset of varying quality with many stations having GPS accuracy and other reliable stations of unknown origin, yielding a wider coverage and greater spatial resolution than either survey alone.
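
    The quality-control step, comparing a candidate station against a surface interpolated from the trusted GPS-surveyed stations and flagging large residuals, can be sketched with a simple inverse-distance-weighted estimate. Coordinates and anomaly values (in mGal) below are invented; the paper's actual procedure interpolates the complete Bouguer anomaly over the survey area.

```python
# Flag gravity stations of unknown quality by comparing their anomaly value
# against an inverse-distance-weighted prediction from trusted stations.

def idw_predict(known, x, y, power=2):
    """Inverse-distance-weighted estimate at (x, y) from (xi, yi, gi) stations."""
    num = den = 0.0
    for xi, yi, gi in known:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0:
            return gi  # exact coincidence with a trusted station
        w = 1.0 / d2 ** (power / 2)
        num += w * gi
        den += w
    return num / den

# Hypothetical trusted GPS-surveyed stations: (x, y, anomaly in mGal)
gps_stations = [(0, 0, -30.0), (1, 0, -31.0), (0, 1, -29.0), (1, 1, -30.0)]

def flag(candidate, tol=2.0):
    """True if the candidate's anomaly deviates from the prediction by > tol."""
    x, y, g = candidate
    return abs(g - idw_predict(gps_stations, x, y)) > tol

print(flag((0.5, 0.5, -30.1)))  # consistent with neighbours -> False
print(flag((0.5, 0.5, -40.0)))  # suspect outlier -> True
```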

  19. An Integrative Clinical Database and Diagnostics Platform for Biomarker Identification and Analysis in Ion Mobility Spectra of Human Exhaled Air

    Directory of Open Access Journals (Sweden)

    Schneider Till

    2013-06-01

    Over the last decade the evaluation of odors and vapors in human breath has gained more and more attention, particularly in the diagnostics of pulmonary diseases. Ion mobility spectrometry coupled with multi-capillary columns (MCC/IMS) is a well-known technology for detecting volatile organic compounds (VOCs) in air. It is a comparatively inexpensive, non-invasive, high-throughput method, which is able to handle the moisture that comes with human exhaled air, and allows for characterizing VOCs in very low concentrations. To identify discriminating compounds as biomarkers, it is necessary to have a clear understanding of the detailed composition of human breath. Therefore, in addition to the clinical studies, there is a need for a flexible and comprehensive centralized data repository, which is capable of gathering all kinds of related information. Moreover, there is a demand for automated data integration and semi-automated data analysis, in particular with regard to the rapid data accumulation emerging from the high-throughput nature of the MCC/IMS technology. Here, we present a comprehensive database application and analysis platform, which combines metabolic maps with heterogeneous biomedical data in a well-structured manner. The design of the database is based on a hybrid of the entity-attribute-value (EAV) model and the EAV/CR model, which incorporates the concepts of classes and relationships. Additionally it offers an intuitive user interface that provides easy and quick access to the platform's functionality: automated data integration and integrity validation, versioning and roll-back strategy, and data retrieval, as well as semi-automatic data mining and machine learning capabilities. The platform will support MCC/IMS-based biomarker identification and validation. The software, schemata, data sets and further information are publicly available at http://imsdb.mpi-inf.mpg.de.
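
    The entity-attribute-value pattern the platform builds on can be illustrated with three small SQLite tables; the table and attribute names below are invented, and the real schema layers EAV/CR classes and relationships on top.

```python
# Minimal entity-attribute-value (EAV) storage sketch: new kinds of
# measurements become new rows, not new columns. All data is fabricated.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE eav_entity    (id INTEGER PRIMARY KEY, kind TEXT);
CREATE TABLE eav_attribute (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE eav_value     (entity_id INTEGER REFERENCES eav_entity(id),
                            attr_id   INTEGER REFERENCES eav_attribute(id),
                            val       TEXT);
""")
db.execute("INSERT INTO eav_entity VALUES (1, 'breath_sample')")
db.executemany("INSERT INTO eav_attribute VALUES (?, ?)",
               [(1, "patient_age"), (2, "diagnosis")])
db.executemany("INSERT INTO eav_value VALUES (?, ?, ?)",
               [(1, 1, "54"), (1, 2, "COPD")])

# Reassemble one entity's attributes with a join.
rows = db.execute("""
    SELECT a.name, v.val
    FROM eav_value v JOIN eav_attribute a ON a.id = v.attr_id
    WHERE v.entity_id = 1 ORDER BY a.name
""").fetchall()
print(rows)   # -> [('diagnosis', 'COPD'), ('patient_age', '54')]
```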

  20. Whistleblowing: An integrative literature review of data-based studies involving nurses.

    Science.gov (United States)

    Jackson, Debra; Hickman, Louise D; Hutchinson, Marie; Andrew, Sharon; Smith, James; Potgieter, Ingrid; Cleary, Michelle; Peters, Kath

    2014-01-01

    Aim: To summarise and critique the research literature about whistleblowing and nurses. Whistleblowing is identified as a crucial issue in the maintenance of healthcare standards, and nurses are frequently involved in whistleblowing events. Despite the importance of this issue, to our knowledge an evaluation of this body of data-based literature has not been undertaken. An integrative literature review approach was used to summarise and critique the research literature. Five databases, including Medline, CINAHL, PubMed and Health Science: Nursing/Academic Edition, as well as Google, were searched using terms including 'whistleblow*' and 'nurs*'. In addition, relevant journals were examined, as well as the reference lists of retrieved papers. Papers published during the years 2007-2013 were selected for inclusion. Fifteen papers were identified, capturing data from nurses in seven countries. The findings in this review demonstrate a growing body of research calling for the nursing profession at large to engage with and respond appropriately to issues involving suboptimal patient care or organisational wrongdoing. Nursing plays a key role in maintaining practice standards and in reporting care that is unacceptable, although the repercussions for nurses who raise concerns can be insupportable. Overall, whistleblowing and how it influences the individual, their family, work colleagues, nursing practice and policy requires further national and international research attention.

  1. TransAtlasDB: an integrated database connecting expression data, metadata and variants

    Science.gov (United States)

    Adetunji, Modupeore O; Lamont, Susan J; Schmidt, Carl J

    2018-01-01

    High-throughput transcriptome sequencing (RNAseq) is the universally applied method for target-free transcript identification and gene expression quantification, generating huge amounts of data. Difficulty in accessing and interpreting such data can be a major impediment to postulating suitable hypotheses, so an innovative storage solution that addresses limitations such as hard disk storage requirements, efficiency and reproducibility is paramount. By offering a uniform data storage and retrieval mechanism, various data can be compared and easily investigated. We present a sophisticated system, TransAtlasDB, which incorporates a hybrid architecture of relational and NoSQL databases for fast and efficient storage, processing and querying of large datasets from transcript expression analysis, with corresponding metadata as well as gene-associated variants (such as SNPs) and their predicted gene effects. TransAtlasDB provides a data model for the accurate storage of the large amount of data derived from RNAseq analysis, as well as methods for interacting with the database, either via command-line data management workflows, written in Perl, with useful functionality that simplifies the storage and manipulation of the massive amounts of data generated by RNAseq analysis, or through the web interface. The database application is currently modeled to handle analysis data from agricultural species and will be expanded to include more species groups. Overall, TransAtlasDB aims to serve as an accessible repository for the large, complex results files derived from RNAseq gene expression profiling and variant analysis. Database URL: https://modupeore.github.io/TransAtlasDB/ PMID:29688361

  2. Plan–Provider Integration, Premiums, and Quality in the Medicare Advantage Market

    Science.gov (United States)

    Frakt, Austin B; Pizer, Steven D; Feldman, Roger

    2013-01-01

    Objective. To investigate how integration between Medicare Advantage plans and health care providers is related to plan premiums and quality ratings. Data Source. We used public data from the Centers for Medicare and Medicaid Services (CMS) and the Area Resource File and private data from one large insurer. Premiums and quality ratings are from 2009 CMS administrative files and some control variables are historical. Study Design. We estimated ordinary least-squares models for premiums and plan quality ratings, with state fixed effects and firm random effects. The key independent variable was an indicator of plan–provider integration. Data Collection. With the exception of Medigap premium data, all data were publicly available. We ascertained plan–provider integration through examination of plans’ websites and governance documents. Principal Findings. We found that integrated plan–providers charge higher premiums, controlling for quality. Such plans also have higher quality ratings. We found no evidence that integration is associated with more generous benefits. Conclusions. Current policy encourages plan–provider integration, although potential effects on health insurance products and markets are uncertain. Policy makers and regulators may want to closely monitor changes in premiums and quality after integration and consider whether quality improvement (if any) justifies premium increases (if they occur). PMID:23800017

  3. Plan-provider integration, premiums, and quality in the Medicare Advantage market.

    Science.gov (United States)

    Frakt, Austin B; Pizer, Steven D; Feldman, Roger

    2013-12-01

    To investigate how integration between Medicare Advantage plans and health care providers is related to plan premiums and quality ratings. We used public data from the Centers for Medicare and Medicaid Services (CMS) and the Area Resource File and private data from one large insurer. Premiums and quality ratings are from 2009 CMS administrative files and some control variables are historical. We estimated ordinary least-squares models for premiums and plan quality ratings, with state fixed effects and firm random effects. The key independent variable was an indicator of plan-provider integration. With the exception of Medigap premium data, all data were publicly available. We ascertained plan-provider integration through examination of plans' websites and governance documents. We found that integrated plan-providers charge higher premiums, controlling for quality. Such plans also have higher quality ratings. We found no evidence that integration is associated with more generous benefits. Current policy encourages plan-provider integration, although potential effects on health insurance products and markets are uncertain. Policy makers and regulators may want to closely monitor changes in premiums and quality after integration and consider whether quality improvement (if any) justifies premium increases (if they occur). © Health Research and Educational Trust.
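The study's estimation strategy (OLS with state fixed effects) can be approximated with the within transformation: demean the premium and the integration indicator inside each state, then regress the demeaned variables. A toy sketch with made-up numbers rather than CMS data, and omitting the firm random effects and controls of the actual model:

```python
from collections import defaultdict

# Within (fixed-effects) estimator: demean the outcome and the regressor
# inside each state, then run simple OLS on the demeaned data.
plans = [
    # (state, integrated with a provider?, monthly premium in $)
    ("MA", 1, 95.0), ("MA", 0, 80.0), ("MA", 0, 78.0),
    ("TX", 1, 70.0), ("TX", 1, 72.0), ("TX", 0, 55.0),
]

def within_estimator(rows):
    by_state = defaultdict(list)
    for state, x, y in rows:
        by_state[state].append((x, y))
    sxy = sxx = 0.0
    for group in by_state.values():
        mx = sum(x for x, _ in group) / len(group)
        my = sum(y for _, y in group) / len(group)
        for x, y in group:
            sxy += (x - mx) * (y - my)
            sxx += (x - mx) ** 2
    return sxy / sxx  # premium difference associated with integration

print(f"within-state premium gap: ${within_estimator(plans):.2f}")
```

This recovers the same slope as OLS with a full set of state dummies; the published model additionally includes firm random effects and historical controls.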

  4. Development and Exploration of a Regional Stormwater BMP Performance Database to Parameterize an Integrated Decision Support Tool (i-DST)

    Science.gov (United States)

    Bell, C.; Li, Y.; Lopez, E.; Hogue, T. S.

    2017-12-01

    Decision support tools that quantitatively estimate the cost and performance of infrastructure alternatives are valuable for urban planners. Such a tool is needed to aid in planning stormwater projects to meet diverse goals such as the regulation of stormwater runoff and its pollutants, minimization of economic costs, and maximization of environmental and social benefits in the communities served by the infrastructure. This work gives a brief overview of an integrated decision support tool, called i-DST, that is currently being developed to serve this need. This presentation focuses on the development of a default database for the i-DST that parameterizes water quality treatment efficiency of stormwater best management practices (BMPs) by region. Parameterizing the i-DST by region will allow the tool to perform accurate simulations in all parts of the United States. A national dataset of BMP performance is analyzed to determine which of a series of candidate regionalizations explains the most variance in the national dataset. The data used in the regionalization analysis comes from the International Stormwater BMP Database and data gleaned from an ongoing systematic review of peer-reviewed and gray literature. In addition to identifying a regionalization scheme for water quality performance parameters in the i-DST, our review process will also provide example methods and protocols for systematic reviews in the field of Earth Science.
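The core of a regionalization analysis like the one described, asking which candidate grouping explains the most variance in BMP performance, amounts to comparing the between-group share of total variance (the R-squared of a one-way ANOVA on the grouping). A minimal sketch with invented removal efficiencies and two hypothetical regionalizations:

```python
from collections import defaultdict

def variance_explained(values, groups):
    """Share of total variance captured by a grouping:
    between-group sum of squares divided by total sum of squares."""
    n = len(values)
    grand_mean = sum(values) / n
    total_ss = sum((v - grand_mean) ** 2 for v in values)
    by_group = defaultdict(list)
    for v, g in zip(values, groups):
        by_group[g].append(v)
    between_ss = sum(len(vs) * ((sum(vs) / len(vs)) - grand_mean) ** 2
                     for vs in by_group.values())
    return between_ss / total_ss if total_ss else 0.0

# Toy BMP pollutant-removal efficiencies and two candidate regionalizations.
removal = [0.80, 0.78, 0.55, 0.52, 0.81, 0.50]
by_climate = ["wet", "wet", "dry", "dry", "wet", "dry"]       # candidate A
by_coast   = ["east", "west", "east", "west", "east", "west"]  # candidate B

print(variance_explained(removal, by_climate))  # high: climate separates the data
print(variance_explained(removal, by_coast))    # low: coast does not
```

The regionalization with the highest explained-variance share is the one worth using to parameterize region-specific defaults.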

  5. Providing Databases for Different Indoor Positioning Technologies: Pros and Cons of Magnetic Field and Wi-Fi Based Positioning

    Directory of Open Access Journals (Sweden)

    Joaquín Torres-Sospedra

    2016-01-01

    Full Text Available Localization is one of the main pillars for indoor services. However, it is still very difficult for the mobile sensing community to compare state-of-the-art indoor positioning systems due to the scarcity of publicly available databases. To make fair and meaningful comparisons between indoor positioning systems, they must be evaluated in the same situation, or in the same sets of situations. In this paper, two databases are introduced for studying the performance of magnetic field and Wi-Fi fingerprinting based positioning systems in the same environment (i.e., indoor area). The “magnetic” database contains more than 40,000 discrete captures (270 continuous samples), whereas the “Wi-Fi” database contains 1,140 captures. The environment and both databases are fully detailed in this paper. A set of experiments is also presented in which two simple but effective baselines were developed to test the suitability of the databases. Finally, the pros and cons of both types of positioning techniques are discussed in detail.
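A "simple but effective baseline" for Wi-Fi fingerprinting, of the kind such databases are used to evaluate, is typically a k-nearest-neighbour search in signal space. The radio map, access-point names and coordinates below are invented for illustration:

```python
import math

# Radio map: (RSSI fingerprint per access point, known position in metres).
# Values are illustrative; real captures come from a survey of the site.
radio_map = [
    ({"AP1": -40, "AP2": -70, "AP3": -80}, (0.0, 0.0)),
    ({"AP1": -70, "AP2": -40, "AP3": -75}, (5.0, 0.0)),
    ({"AP1": -80, "AP2": -75, "AP3": -40}, (0.0, 5.0)),
]

MISSING = -100  # substitute for access points not heard in a capture

def distance(fp_a, fp_b):
    """Euclidean distance in signal space over the union of heard APs."""
    aps = set(fp_a) | set(fp_b)
    return math.sqrt(sum((fp_a.get(ap, MISSING) - fp_b.get(ap, MISSING)) ** 2
                         for ap in aps))

def locate(query, k=2):
    """k-NN estimate: average the positions of the k closest fingerprints."""
    nearest = sorted(radio_map, key=lambda entry: distance(query, entry[0]))[:k]
    xs = [pos[0] for _, pos in nearest]
    ys = [pos[1] for _, pos in nearest]
    return (sum(xs) / k, sum(ys) / k)

print(locate({"AP1": -45, "AP2": -65, "AP3": -78}, k=1))
```

A magnetic-field baseline works analogously, but matches sequences of field magnitudes rather than per-AP signal strengths.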

  6. Analysis and databasing software for integrated tomographic gamma scanner (TGS) and passive-active neutron (PAN) assay systems

    International Nuclear Information System (INIS)

    Estep, R.J.; Melton, S.G.; Buenafe, C.

    2000-01-01

    The CTEN-FIT program, written for Windows 9x/NT in C++, performs databasing and analysis of combined thermal/epithermal neutron (CTEN) passive and active neutron assay data and integrates it with isotopics results and gamma-ray data from methods such as tomographic gamma scanning (TGS). The binary database is reflected in a companion Excel database that allows extensive customization via Visual Basic for Applications macros. Automated analysis options make the analysis of the data transparent to the assay system operator. Various record browsers and information displays simplify record-keeping tasks.

  7. DAVID Knowledgebase: a gene-centered database integrating heterogeneous gene annotation resources to facilitate high-throughput gene functional analysis

    Directory of Open Access Journals (Sweden)

    Baseler Michael W

    2007-11-01

    Full Text Available Abstract Background Due to the complex and distributed nature of biological research, our current biological knowledge is spread over many redundant annotation databases maintained by many independent groups. Analysts usually need to visit many of these bioinformatics databases in order to integrate comprehensive annotation information for their genes, which becomes one of the bottlenecks, particularly for the analytic task associated with a large gene list. Thus, a highly centralized and ready-to-use gene-annotation knowledgebase is in demand for high throughput gene functional analysis. Description The DAVID Knowledgebase is built around the DAVID Gene Concept, a single-linkage method to agglomerate tens of millions of gene/protein identifiers from a variety of public genomic resources into DAVID gene clusters. The grouping of such identifiers improves the cross-reference capability, particularly across NCBI and UniProt systems, enabling more than 40 publicly available functional annotation sources to be comprehensively integrated and centralized by the DAVID gene clusters. The simple, pair-wise, text format files which make up the DAVID Knowledgebase are freely downloadable for various data analysis uses. In addition, a well organized web interface allows users to query different types of heterogeneous annotations in a high-throughput manner. Conclusion The DAVID Knowledgebase is designed to facilitate high throughput gene functional analysis. For a given gene list, it not only provides the quick accessibility to a wide range of heterogeneous annotation data in a centralized location, but also enriches the level of biological information for an individual gene. Moreover, the entire DAVID Knowledgebase is freely downloadable or searchable at http://david.abcc.ncifcrf.gov/knowledgebase/.
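Single-linkage agglomeration of identifiers, as in the DAVID Gene Concept, can be sketched with a union-find structure: any shared cross-reference merges two identifiers into the same cluster. The cross-reference pairs below are illustrative examples, not drawn from the DAVID Knowledgebase itself:

```python
from collections import defaultdict

# Single-linkage agglomeration: any shared cross-reference merges two
# identifiers into the same cluster, implemented as a union-find.
parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving keeps trees shallow
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

# Illustrative cross-reference pairs between accession systems.
xrefs = [
    ("ENTREZ:7157", "UNIPROT:P04637"),
    ("UNIPROT:P04637", "ENSEMBL:ENSG00000141510"),  # links all three transitively
    ("ENTREZ:1956", "UNIPROT:P00533"),
]
for a, b in xrefs:
    union(a, b)

# Collect the agglomerated clusters ("DAVID gene"-style groups).
clusters = defaultdict(set)
for ident in list(parent):
    clusters[find(ident)].add(ident)
for members in clusters.values():
    print(sorted(members))
```

Because the linkage is transitive, annotations attached to any member identifier can be propagated to the whole cluster, which is what improves cross-reference coverage across NCBI and UniProt systems.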

  8. Business process integration between European manufacturers and transport and logistics service providers

    DEFF Research Database (Denmark)

    Mortensen, Ole; Lemoine, W

    2005-01-01

    The goal of the Supply Chain Management process is to create value for customers, stakeholders and all supply chain members, through the integration of disparate processes like manufacturing flow management, customer service and order fulfillment. However, many firms fail in the path of achieving a total integration. This study illustrates, from an empirical point of view, the problems associated with SC integration among European firms operating in global/international markets. The focus is on the relationship between two echelons in the supply chain: manufacturers and their transport and logistics service providers (TLSPs). The paper examines (1) the characteristics of the collaborative partnerships established between manufacturers and their TLSPs; (2) to what extent manufacturers and their TLSPs have integrated SC business processes; (3) the IT used to support the SC cooperation and integration...

  9. Vertical integration of teaching in Australian general practice--a survey of regional training providers.

    Science.gov (United States)

    Stocks, Nigel P; Frank, Oliver; Linn, Andrew M; Anderson, Katrina; Meertens, Sarah

    2011-06-06

    To examine vertical integration of teaching and clinical training in general practice and describe practical examples being undertaken by Australian general practice regional training providers (RTPs). A qualitative study of all RTPs in Australia, mid 2010. All 17 RTPs in Australia responded. Eleven had developed some vertical integration initiatives. Several encouraged registrars to teach junior doctors and medical students, others encouraged general practitioner supervisors to run multilevel educational sessions, a few coordinated placements, linkages and support across their region. Three RTPs provided case studies of vertical integration. Many RTPs in Australia use vertical integration of teaching in their training programs. RTPs with close associations with universities and rural clinical schools seem to be leading these initiatives.

  10. gEVE: a genome-based endogenous viral element database provides comprehensive viral protein-coding sequences in mammalian genomes.

    Science.gov (United States)

    Nakagawa, So; Takahashi, Mahoko Ueda

    2016-01-01

    In mammals, approximately 10% of genome sequences correspond to endogenous viral elements (EVEs), which are derived from ancient viral infections of germ cells. Although most EVEs have been inactivated, some open reading frames (ORFs) of EVEs have acquired functions in their hosts. However, EVE ORFs usually remain unannotated in the genomes, and no databases are available for EVE ORFs. To investigate the function and evolution of EVEs in mammalian genomes, we developed EVE ORF databases for 20 genomes of 19 mammalian species. A total of 736,771 non-overlapping EVE ORFs were identified and archived in a database named gEVE (http://geve.med.u-tokai.ac.jp). The gEVE database provides nucleotide and amino acid sequences, genomic loci and functional annotations of EVE ORFs for all 20 genomes. In analyzing RNA-seq data with the gEVE database, we successfully identified expressed EVE genes, suggesting that the gEVE database facilitates genomic analyses of various mammalian species. Database URL: http://geve.med.u-tokai.ac.jp. © The Author(s) 2016. Published by Oxford University Press.

  11. OECD/NEA data bank scientific and integral experiments databases in support of knowledge preservation and transfer

    International Nuclear Information System (INIS)

    Sartori, E.; Kodeli, I.; Mompean, F.J.; Briggs, J.B.; Gado, J.; Hasegawa, A.; D'hondt, P.; Wiesenack, W.; Zaetta, A.

    2004-01-01

    The OECD/Nuclear Energy Data Bank was established by its member countries as an institution to allow effective sharing of knowledge and its basic underlying information and data in key areas of nuclear science and technology. The activities as regards preserving and transferring knowledge consist of the: 1) Acquisition of basic nuclear data, computer codes and experimental system data needed over a wide range of nuclear and radiation applications; 2) Independent verification and validation of these data using quality assurance methods, adding value through international benchmark exercises, workshops and meetings and by issuing relevant reports with conclusions and recommendations, as well as by organising training courses to ensure their qualified and competent use; 3) Dissemination of the different products to authorised establishments in member countries and collecting and integrating user feedback. Of particular importance has been the establishment of basic and integral experiments databases and the methodology developed with the aim of knowledge preservation and transfer. Databases established thus far include: 1) IRPhE - International Reactor Physics Experimental Benchmarks Evaluations, 2) SINBAD - a radiation shielding experiments database (nuclear reactors, fusion neutronics and accelerators), 3) IFPE - International Fuel Performance Benchmark Experiments Database, 4) TDB - The Thermochemical Database Project, 5) ICSBE - International Nuclear Criticality Safety Benchmark Evaluations, 6) CCVM - CSNI Code Validation Matrix of Thermal-hydraulic Codes for LWR LOCA and Transients. This paper will concentrate on knowledge preservation and transfer concepts and methods related to some of the integral experiments and TDB. (author)

  12. MannDB – A microbial database of automated protein sequence analyses and evidence integration for protein characterization

    Directory of Open Access Journals (Sweden)

    Kuczmarski Thomas A

    2006-10-01

    Full Text Available Abstract Background MannDB was created to meet a need for rapid, comprehensive automated protein sequence analyses to support selection of proteins suitable as targets for driving the development of reagents for pathogen or protein toxin detection. Because a large number of open-source tools were needed, it was necessary to produce a software system to scale the computations for whole-proteome analysis. Thus, we built a fully automated system for executing software tools and for storage, integration, and display of automated protein sequence analysis and annotation data. Description MannDB is a relational database that organizes data resulting from fully automated, high-throughput protein-sequence analyses using open-source tools. Types of analyses provided include predictions of cleavage, chemical properties, classification, features, functional assignment, post-translational modifications, motifs, antigenicity, and secondary structure. Proteomes (lists of hypothetical and known proteins) are downloaded and parsed from Genbank and then inserted into MannDB, and annotations from SwissProt are downloaded when identifiers are found in the Genbank entry or when identical sequences are identified. Currently 36 open-source tools are run against MannDB protein sequences either on local systems or by means of batch submission to external servers. In addition, BLAST against protein entries in MvirDB, our database of microbial virulence factors, is performed. A web client browser enables viewing of computational results and downloaded annotations, and a query tool enables structured and free-text search capabilities. When available, links to external databases, including MvirDB, are provided. MannDB contains whole-proteome analyses for at least one representative organism from each category of biological threat organism listed by APHIS, CDC, HHS, NIAID, USDA, USFDA, and WHO.
Conclusion MannDB comprises a large number of genomes and comprehensive protein

  13. The integrated evaluation of the macro environment of companies providing transport services

    Directory of Open Access Journals (Sweden)

    A. Žvirblis

    2008-09-01

    Full Text Available The article presents the main principles of the integrated evaluation of macro environment components and factors influencing the performance of transport companies as well as providing the validated quantitative evaluation models and results obtained in evaluating the macro environment of Lithuanian companies providing transport services. Since quantitative evaluation is growing in importance, the process of developing the principles and methods of business macro environment quantitative evaluation is becoming relevant from both theoretical and practical perspectives. The created methodology is based on the concept of macro environment as an integrated whole of components, formalization and the principle of three-stage quantitative evaluation. The methodology suggested involves the quantitative evaluation of primary factors and macro environment components as an integral dimension (expressed in points. On the basis of this principle, an integrated macro environment evaluation parameter is established as its level index. The methodology integrates the identification of significant factors, building scenarios, a primary analysis of factors, expert evaluation, the quantitative evaluation of macro environment components and their whole. The application of the multi-criteria Simple Additive Weighting (SAW method is validated. The integrated evaluation of the macro environment of Lithuanian freight transportation companies was conducted. As a result, the level indices of all components as well as the level index of macro environment considered as a whole of components were identified. The latter reflects the extent of deviation from an average level of a favourable macro environment. This is important for developing strategic marketing decisions and expanding a strategic area.
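The multi-criteria Simple Additive Weighting (SAW) step the article validates can be sketched as follows; the macro-environment components, weights and expert scores are invented for illustration and do not reproduce the study's data:

```python
# Simple Additive Weighting (SAW): normalise each criterion to [0, 1],
# then rank alternatives by the weighted sum of normalised scores.
weights = {"economic": 0.4, "political": 0.35, "technological": 0.25}

# Expert evaluations (in points) of each macro-environment component.
scores = {
    "Carrier A": {"economic": 8.0, "political": 6.0, "technological": 9.0},
    "Carrier B": {"economic": 6.0, "political": 9.0, "technological": 7.0},
}

def saw_rank(scores, weights):
    # Benefit-type normalisation: divide each score by the column maximum.
    col_max = {c: max(s[c] for s in scores.values()) for c in weights}
    totals = {alt: sum(weights[c] * s[c] / col_max[c] for c in weights)
              for alt, s in scores.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for alt, level in saw_rank(scores, weights):
    print(f"{alt}: level index {level:.3f}")
```

The weighted sum plays the role of the integrated "level index" described in the article: a single dimension aggregating the component evaluations.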

  14. TriMEDB: A database to integrate transcribed markers and facilitate genetic studies of the tribe Triticeae

    Directory of Open Access Journals (Sweden)

    Yoshida Takuhiro

    2008-06-01

    Full Text Available Abstract Background The recent rapid accumulation of sequence resources of various crop species ensures an improvement in the genetics approach, including quantitative trait loci (QTL analysis as well as the holistic population analysis and association mapping of natural variations. Because the tribe Triticeae includes important cereals such as wheat and barley, integration of information on the genetic markers in these crops should effectively accelerate map-based genetic studies on Triticeae species and lead to the discovery of key loci involved in plant productivity, which can contribute to sustainable food production. Therefore, informatics applications and a semantic knowledgebase of genome-wide markers are required for the integration of information on and further development of genetic markers in wheat and barley in order to advance conventional marker-assisted genetic analyses and population genomics of Triticeae species. Description The Triticeae mapped expressed sequence tag (EST database (TriMEDB provides information, along with various annotations, regarding mapped cDNA markers that are related to barley and their homologues in wheat. The current version of TriMEDB provides map-location data for barley and wheat ESTs that were retrieved from 3 published barley linkage maps (the barley single nucleotide polymorphism database of the Scottish Crop Research Institute, the barley transcript map of Leibniz Institute of Plant Genetics and Crop Plant Research, and HarvEST barley ver. 1.63 and 1 diploid wheat map. These data were imported to CMap to allow the visualization of the map positions of the ESTs and interrelationships of these ESTs with public gene models and representative cDNA sequences. The retrieved cDNA sequences corresponding to each EST marker were assigned to the rice genome to predict an exon-intron structure. 
Furthermore, to generate a unique set of EST markers in Triticeae plants among the public domain, 3472 markers were

  15. Managing Consistency Anomalies in Distributed Integrated Databases with Relaxed ACID Properties

    DEFF Research Database (Denmark)

    Frank, Lars; Ulslev Pedersen, Rasmus

    2014-01-01

    In central databases the consistency of data is normally implemented by using the ACID (Atomicity, Consistency, Isolation and Durability) properties of a DBMS (Data Base Management System). This is not possible if distributed and/or mobile databases are involved and the availability of data also has to be optimized. Therefore, we will in this paper use so-called relaxed ACID properties across different locations. The objective of designing relaxed ACID properties across different database locations is that the users can trust the data they use even if the distributed database temporarily is inconsistent. It is also important that disconnected locations can operate in a meaningful way in so-called disconnected mode. A database is DBMS consistent if its data complies with the consistency rules of the DBMS's metadata. If the database is DBMS consistent both when a transaction starts and when it has...

  16. Service-provider and utility task-leadership integration. Paper D

    Energy Technology Data Exchange (ETDEWEB)

    Bagshaw, S.; Van Tassell, D. [AP Services, Inc., Freeport, PA (United States)

    2011-07-01

    As nuclear power utilities strive to streamline their organizations while improving outage and refurbishment project performance, the necessity for effective relationships and interaction between utility and service-providers becomes paramount. Successful integration of the service-provider into the utility's environment is achievable and has been demonstrated. Early and extensive engagement in front-end planning, a single point of continuity, and the use of integrated execution teams are some of the critical elements for ensuring success. The paper discusses task-leadership integration at three levels: as a utility executive-level 'need-statement'; as a 'why is this important' discussion; and as a 'thoughtful tutorial' on its features and practice. (author)

  17. Service-provider and utility task-leadership integration. Paper D

    International Nuclear Information System (INIS)

    Bagshaw, S.; Van Tassell, D.

    2011-01-01

    As nuclear power utilities strive to streamline their organizations while improving outage and refurbishment project performance, the necessity for effective relationships and interaction between utility and service-providers becomes paramount. Successful integration of the service-provider into the utility's environment is achievable and has been demonstrated. Early and extensive engagement in front-end planning, a single point of continuity, and the use of integrated execution teams are some of the critical elements for ensuring success. The paper discusses task-leadership integration at three levels: as a utility executive-level 'need-statement'; as a 'why is this important' discussion; and as a 'thoughtful tutorial' on its features and practice. (author)

  18. The Planteome database: an integrated resource for reference ontologies, plant genomics and phenomics

    Science.gov (United States)

    Cooper, Laurel; Meier, Austin; Laporte, Marie-Angélique; Elser, Justin L; Mungall, Chris; Sinn, Brandon T; Cavaliere, Dario; Carbon, Seth; Dunn, Nathan A; Smith, Barry; Qu, Botong; Preece, Justin; Zhang, Eugene; Todorovic, Sinisa; Gkoutos, Georgios; Doonan, John H; Stevenson, Dennis W; Arnaud, Elizabeth

    2018-01-01

    Abstract The Planteome project (http://www.planteome.org) provides a suite of reference and species-specific ontologies for plants and annotations to genes and phenotypes. Ontologies serve as common standards for semantic integration of a large and growing corpus of plant genomics, phenomics and genetics data. The reference ontologies include the Plant Ontology, Plant Trait Ontology and the Plant Experimental Conditions Ontology developed by the Planteome project, along with the Gene Ontology, Chemical Entities of Biological Interest, Phenotype and Attribute Ontology, and others. The project also provides access to species-specific Crop Ontologies developed by various plant breeding and research communities from around the world. We provide integrated data on plant traits, phenotypes, and gene function and expression from 95 plant taxa, annotated with reference ontology terms. The Planteome project is developing a plant gene annotation platform, Planteome Noctua, to facilitate community engagement. All the Planteome ontologies are publicly available and are maintained at the Planteome GitHub site (https://github.com/Planteome) for sharing, tracking revisions and new requests. The annotated data are freely accessible from the ontology browser (http://browser.planteome.org/amigo) and our data repository. PMID:29186578

  19. Logistics Service Provider Selection through an Integrated Fuzzy Multicriteria Decision Making Approach

    OpenAIRE

    Gülşen Akman; Kasım Baynal

    2014-01-01

    Nowadays, the demand for third-party logistics providers has become an increasingly important issue for companies seeking to improve their customer service and to decrease logistics costs. This paper presents an integrated fuzzy approach for the evaluation and selection of 3rd party logistics service providers. This method consists of two techniques: (1) use fuzzy analytic hierarchy process to identify weights of evaluation criteria; (2) apply fuzzy technique for order preference by similarity to ideal so...

  20. General practice integration in Australia. Primary health services provider and consumer perceptions of barriers and solutions.

    Science.gov (United States)

    Appleby, N J; Dunt, D; Southern, D M; Young, D

    1999-08-01

    To identify practical examples of barriers and possible solutions to improve general practice integration with other health service providers. Twelve focus groups, including one conducted by teleconference, were held across Australia with GPs and non GP primary health service providers between May and September, 1996. Focus groups were embedded within concept mapping sessions, which were used to conceptually explore the meaning of integration in general practice. Data coding, organising and analysis were based on the techniques documented by Huberman and Miles. Barriers to integration were perceived to be principally due to the role and territory disputes between the different levels of government and their services, the manner in which the GP's role is currently defined, and the system of GP remuneration. Suggestions on ways to improve integration involved two types of strategies. The first involves initiatives implemented 'top down' through major government reform to service structures, including the expansion of the role of divisions of general practice, and structural changes to the GP remuneration systems. The second type of strategy suggested involves initiatives implemented from the 'bottom up' involving services such as hospitals (e.g. additional GP liaison positions) and the use of information technology to link services and share appropriate patient data. The findings support the need for further research and evaluation of initiatives aimed at achieving general practice integration at a systems level. There is little evidence to suggest which types of initiatives improve integration. However, general practice has been placed in the centre of the health care debate and is likely to remain central to the success of such initiatives. Clarification of the future role and authority of general practice will therefore be required if such integrative strategies are to be successful at a wider health system level.

  1. European Vegetation Archive (EVA): an integrated database of European vegetation plots

    DEFF Research Database (Denmark)

    Chytrý, M; Hennekens, S M; Jiménez-Alfaro, B

    2015-01-01

    The European Vegetation Archive (EVA) is a centralized database of European vegetation plots developed by the IAVS Working Group European Vegetation Survey. It has been in development since 2012 and first made available for use in research projects in 2014. It stores copies of national and regional vegetation-plot databases on a single software platform. Data storage in EVA does not affect on-going independent development of the contributing databases, which remain the property of the data contributors. EVA uses a prototype of the database management software TURBOVEG 3 developed for joint management... data source for large-scale analyses of European vegetation diversity both for fundamental research and nature conservation applications. Updated information on EVA is available online at http://euroveg.org/eva-database.

  2. SynechoNET: integrated protein-protein interaction database of a model cyanobacterium Synechocystis sp. PCC 6803

    OpenAIRE

    Kim, Woo-Yeon; Kang, Sungsoo; Kim, Byoung-Chul; Oh, Jeehyun; Cho, Seongwoong; Bhak, Jong; Choi, Jong-Soon

    2008-01-01

    Background Cyanobacteria are model organisms for studying photosynthesis, carbon and nitrogen assimilation, evolution of plant plastids, and adaptability to environmental stresses. Despite many studies on cyanobacteria, there is no web-based database of their regulatory and signaling protein-protein interaction networks to date. Description We report a database and website SynechoNET that provides predicted protein-protein interactions. SynechoNET shows cyanobacterial domain-domain interactio...

  3. IMPACT web portal: oncology database integrating molecular profiles with actionable therapeutics.

    Science.gov (United States)

    Hintzsche, Jennifer D; Yoo, Minjae; Kim, Jihye; Amato, Carol M; Robinson, William A; Tan, Aik Choon

    2018-04-20

    With the advancement of next generation sequencing technology, researchers are now able to identify important variants and structural changes in DNA and RNA in cancer patient samples. With this information, we can now correlate specific variants and/or structural changes with actionable therapeutics known to inhibit these variants. We introduce the creation of the IMPACT Web Portal, a new online resource that connects molecular profiles of tumors to approved drugs, investigational therapeutics and pharmacogenetics associated drugs. IMPACT Web Portal contains a total of 776 drugs connected to 1326 target genes and 435 target variants, fusions, and copy number alterations. The online IMPACT Web Portal allows users to search for various genetic alterations and connects them to three levels of actionable therapeutics. The results are categorized into 3 levels: Level 1 contains approved drugs separated into two groups; Level 1A contains approved drugs with variant-specific information, while Level 1B contains approved drugs with gene-level information. Level 2 contains drugs currently in oncology clinical trials. Level 3 provides pharmacogenetic associations between approved drugs and genes. IMPACT Web Portal allows sequencing data to be linked to actionable therapeutics for translational and drug repurposing research. The IMPACT Web Portal online resource allows users to match queried genes and variants to approved and investigational drugs. We envision that this resource will be a valuable database for personalized medicine and drug repurposing. IMPACT Web Portal is freely available for non-commercial use at http://tanlab.ucdenver.edu/IMPACT .

  4. MINDMAP: establishing an integrated database infrastructure for research in ageing, mental well-being, and the urban environment.

    Science.gov (United States)

    Beenackers, Mariëlle A; Doiron, Dany; Fortier, Isabel; Noordzij, J Mark; Reinhard, Erica; Courtin, Emilie; Bobak, Martin; Chaix, Basile; Costa, Giuseppe; Dapp, Ulrike; Diez Roux, Ana V; Huisman, Martijn; Grundy, Emily M; Krokstad, Steinar; Martikainen, Pekka; Raina, Parminder; Avendano, Mauricio; van Lenthe, Frank J

    2018-01-19

    Urbanization and ageing have important implications for public mental health and well-being. Cities pose major challenges for older citizens, but also offer opportunities to develop, test, and implement policies, services, infrastructure, and interventions that promote mental well-being. The MINDMAP project aims to identify the opportunities and challenges posed by urban environmental characteristics for the promotion and management of mental well-being and cognitive function of older individuals. MINDMAP aims to achieve its research objectives by bringing together longitudinal studies from 11 countries covering over 35 cities linked to databases of area-level environmental exposures and social and urban policy indicators. The infrastructure supporting integration of this data will allow multiple MINDMAP investigators to safely and remotely co-analyse individual-level and area-level data. Individual-level data is derived from baseline and follow-up measurements of ten participating cohort studies and provides information on mental well-being outcomes, sociodemographic variables, health behaviour characteristics, social factors, measures of frailty, physical function indicators, and chronic conditions, as well as blood-derived clinical biochemistry-based biomarkers and genetic biomarkers. Area-level information on physical environment characteristics (e.g. green spaces, transportation), socioeconomic and sociodemographic characteristics (e.g. neighbourhood income, residential segregation, residential density), and social environment characteristics (e.g. social cohesion, criminality) and national and urban social policies is derived from publicly available sources such as geoportals and administrative databases. The linkage, harmonization, and analysis of data from different sources are being carried out using piloted tools to optimize the validity of the research results and transparency of the methodology. MINDMAP is a novel research collaboration that is

  5. Multilingual access to full text databases; Acces multilingue aux bases de donnees en texte integral

    Energy Technology Data Exchange (ETDEWEB)

    Fluhr, C; Radwan, K [Institut National des Sciences et Techniques Nucleaires (INSTN), Centre d` Etudes de Saclay, 91 - Gif-sur-Yvette (France)

    1990-05-01

    Many full text databases are available in only one language, or they may contain documents in several languages. Even when users understand the language of the documents in the database, it is often easier for them to express their needs in their own language. For databases containing documents in several languages, it is simpler to formulate the query in a single language and retrieve documents in all of them. This paper presents the development and first experiments in multilingual search, applied to the French-English pair, for text data in the nuclear field, based on the SPIRIT system. After reviewing the general problems of querying full text databases in natural language, we present the methods used to reformulate queries and show how they can be expanded for multilingual search. First results on data in the nuclear field are presented (AFCEN norms and INIS abstracts). 4 refs.
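
The query-reformulation idea described above can be illustrated with a toy sketch: each query term is expanded with its translations before the full-text search runs, so one query retrieves documents in either language. The tiny English-French lexicon below is invented for the example and is not taken from the SPIRIT system.

```python
# Toy cross-language query expansion for full-text search. The bilingual
# lexicon is an invented placeholder; a real system would use a large
# domain dictionary and morphological normalization.
LEXICON = {
    "reactor": ["réacteur"],
    "fuel": ["combustible"],
    "waste": ["déchets"],
}

def expand_query(terms):
    """Return the original terms plus their translations, for searching a
    database that mixes documents in both languages."""
    expanded = []
    for term in terms:
        expanded.append(term)
        expanded.extend(LEXICON.get(term, []))  # terms without a translation pass through
    return expanded
```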

  6. An Integrated Database of Unit Training Performance: Description and Lessons Learned

    National Research Council Canada - National Science Library

    Leibrecht, Bruce

    1997-01-01

    The Army Research Institute (ARI) has developed a prototype relational database for processing and archiving unit performance data from home station, training area, simulation based, and Combat Training Center training exercises...

  7. 77 FR 33486 - Certain Integrated Circuit Packages Provided With Multiple Heat-Conducting Paths and Products...

    Science.gov (United States)

    2012-06-06

    ...Notice is hereby given that the U.S. International Trade Commission has received a complaint entitled Certain Integrated Circuit Packages Provided With Multiple Heat-Conducting Paths and Products Containing Same, DN 2899; the Commission is soliciting comments on any public interest issues raised by the complaint or complainant's filing under section 210.8(b) of the Commission's Rules of Practice and Procedure (19 CFR 210.8(b)).

  8. Use of Graph Database for the Integration of Heterogeneous Biological Data.

    Science.gov (United States)

    Yoon, Byoung-Ha; Kim, Seon-Kyu; Kim, Seon-Young

    2017-03-01

    Understanding complex relationships among heterogeneous biological data is one of the fundamental goals in biology. In most cases, diverse biological data are stored in relational databases, such as MySQL and Oracle, which store data in multiple tables and then infer relationships by multiple-join statements. Recently, a new type of database, called the graph-based database, was developed to natively represent various kinds of complex relationships, and it is widely used among computer science communities and IT industries. Here, we demonstrate the feasibility of using a graph-based database for complex biological relationships by comparing the performance between MySQL and Neo4j, one of the most widely used graph databases. We collected various biological data (protein-protein interaction, drug-target, gene-disease, etc.) from several existing sources, removed duplicate and redundant data, and finally constructed a graph database containing 114,550 nodes and 82,674,321 relationships. When we tested the query execution performance of MySQL versus Neo4j, we found that Neo4j outperformed MySQL in all cases. While Neo4j exhibited a very fast response for various queries, MySQL exhibited latent or unfinished responses for complex queries with multiple-join statements. These results show that using graph-based databases, such as Neo4j, is an efficient way to store complex biological relationships. Moreover, querying a graph database in diverse ways has the potential to reveal novel relationships among heterogeneous biological data.
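
The contrast the study draws can be made concrete with a toy example. In a relational store, even a two-hop question (which diseases might a drug be relevant to, via its target gene?) already requires a join, and each additional relationship type adds another. The schema and rows below are invented for illustration.

```python
# Toy relational model of heterogeneous biological relationships, showing
# the multiple-join pattern the study benchmarks against Neo4j.
# Schema and rows are invented for the example.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
    CREATE TABLE drug_target(drug TEXT, gene TEXT);
    CREATE TABLE gene_disease(gene TEXT, disease TEXT);
    INSERT INTO drug_target VALUES ('imatinib', 'ABL1');
    INSERT INTO gene_disease VALUES ('ABL1', 'CML');
""")

# Two hops (drug -> gene -> disease) already need one join; a deeper
# traversal needs one more join per relationship type, which is where
# native graph storage tends to pull ahead.
rows = cur.execute("""
    SELECT dt.drug, gd.disease
    FROM drug_target dt
    JOIN gene_disease gd ON dt.gene = gd.gene
""").fetchall()
```

For comparison, the equivalent traversal in Cypher is a single pattern match along the lines of `MATCH (d:Drug)-[:TARGETS]->(g:Gene)-[:ASSOCIATED_WITH]->(x:Disease) RETURN d, x`, with no hand-written joins.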

  9. DMPD: Signal integration between IFNgamma and TLR signalling pathways in macrophages. [Dynamic Macrophage Pathway CSML Database

    Lifescience Database Archive (English)

    Full Text Available 16920490 Signal integration between IFNgamma and TLR signalling pathways in macrophages. (.html) (.csml) PubmedID 16920490 Title: Signal integration between IFNgamma and TLR signalling pathways in macrophages.

  10. YPED: an integrated bioinformatics suite and database for mass spectrometry-based proteomics research.

    Science.gov (United States)

    Colangelo, Christopher M; Shifman, Mark; Cheung, Kei-Hoi; Stone, Kathryn L; Carriero, Nicholas J; Gulcicek, Erol E; Lam, TuKiet T; Wu, Terence; Bjornson, Robert D; Bruce, Can; Nairn, Angus C; Rinehart, Jesse; Miller, Perry L; Williams, Kenneth R

    2015-02-01

    We report a significantly enhanced bioinformatics suite and database for proteomics research called the Yale Protein Expression Database (YPED) that is used by investigators at more than 300 institutions worldwide. YPED meets the data management, archival, and analysis needs of high-throughput mass spectrometry-based proteomics research, ranging from a single laboratory, to a group of laboratories within and beyond an institution, to the entire proteomics community. The current version is a significant improvement over the first version in that it contains new modules for liquid chromatography-tandem mass spectrometry (LC-MS/MS) database search results, label and label-free quantitative proteomic analysis, and several scoring outputs for phosphopeptide site localization. In addition, we have added both peptide and protein comparative analysis tools to enable pairwise analysis of distinct peptides/proteins in each sample and of overlapping peptides/proteins between all samples in multiple datasets. We have also implemented a targeted proteomics module for automated multiple reaction monitoring (MRM)/selective reaction monitoring (SRM) assay development. We have linked YPED's database search results and both label-based and label-free fold-change analysis to the Skyline Panorama repository for online spectra visualization. In addition, we have built enhanced functionality to curate peptide identifications into an MS/MS peptide spectral library for all of our protein database search identification results. Copyright © 2015 The Authors. Production and hosting by Elsevier Ltd. All rights reserved.

  11. Relational databases

    CERN Document Server

    Bell, D A

    1986-01-01

    Relational Databases explores the major advances in relational databases and provides a balanced analysis of the state of the art in relational databases. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration. The

  12. 47 CFR 52.31 - Deployment of long-term database methods for number portability by CMRS providers.

    Science.gov (United States)

    2010-10-01

    ... require software but not hardware changes to provide portability (“Hardware Capable Switches”), within 60... queries, so that they can deliver calls from their networks to any party that has retained its number after switching from one telecommunications carrier to another. (c) [Reserved] (d) In the event a...

  13. Integrative Analyses of De Novo Mutations Provide Deeper Biological Insights into Autism Spectrum Disorder

    Directory of Open Access Journals (Sweden)

    Atsushi Takata

    2018-01-01

    Full Text Available Recent studies have established important roles of de novo mutations (DNMs) in autism spectrum disorders (ASDs). Here, we analyze DNMs in 262 ASD probands of Japanese origin and confirm the "de novo paradigm" of ASDs across ethnicities. Based on this consistency, we combine the lists of damaging DNMs in our and published ASD cohorts (total number of trios, 4,244) and perform integrative bioinformatics analyses. Besides replicating the findings of previous studies, our analyses highlight ATP-binding genes and fetal cerebellar/striatal circuits. Analysis of individual genes identified 61 genes enriched for damaging DNMs, including ten genes for which our dataset now contributes to statistical significance. Screening of compounds altering the expression of genes hit by damaging DNMs reveals a global downregulating effect of valproic acid, a known risk factor for ASDs, whereas cardiac glycosides upregulate these genes. Collectively, our integrative approach provides deeper biological and potential medical insights into ASDs.

  14. Data integration and knowledge discovery in biomedical databases. Reliable information from unreliable sources

    Directory of Open Access Journals (Sweden)

    A Mitnitski

    2003-01-01

    Full Text Available To better understand information about human health from databases, we analyzed three datasets collected for different purposes in Canada: a biomedical database of older adults, a large population survey across all adult ages, and vital statistics. Redundancy in the variables was established, and this led us to derive a generalized (macroscopic) state variable, a fitness/frailty index that reflects both individual and group health status. Evaluation of the relationship between fitness/frailty and the mortality rate revealed that the latter could be expressed in terms of variables generally available from any cross-sectional database. In practical terms, this means that the risk of mortality might readily be assessed from standard biomedical appraisals collected for other purposes.
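
A fitness/frailty index of the kind derived above is conventionally computed as the proportion of assessed health deficits an individual exhibits. The sketch below illustrates that calculation; the variable names and values are invented for the example.

```python
# Deficit-accumulation fitness/frailty index: fraction of measured health
# deficits present in an individual. Deficit names/values are illustrative.
def frailty_index(deficits):
    """deficits: dict mapping each assessed variable to 1 (present) or 0 (absent)."""
    if not deficits:
        raise ValueError("no deficits assessed")
    return sum(deficits.values()) / len(deficits)

person = {"hypertension": 1, "impaired_vision": 0, "mobility_limit": 1,
          "memory_complaint": 0, "diabetes": 0}
fi = frailty_index(person)  # 2 of 5 deficits present -> 0.4
```

In practice such an index is computed over dozens of variables, which is what makes it recoverable from "standard biomedical appraisals collected for other purposes".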

  15. An analysis of market development strategy of a point-of-sale solutions provider's market research database

    OpenAIRE

    Medina, Ahmed

    2007-01-01

    This paper is a strategic analysis of Vivonet Inc. and its restaurant performance-benchmarking tool ZATA. Vivonet is a Point of Sales (POS) systems provider for the hospitality and the retail industry. Its ZATA product captures POS and other related information from restaurants and allows the restaurants to compare their performance with restaurants in their market segment. With ZATA, Vivonet has the opportunity to extend beyond the POS systems segment and compete in the market research i...

  16. THE UNIFICATION OF THE CODE LISTS PROVIDED WITHIN THE DATA MODEL ORIGINATING FROM THE INSPIRE TECHNICAL GUIDELINES AND THE ONES PROVIDED FOR GESUT DATABASES IN THE CONTEXT OF POTENTIAL EXPLOITATION IN THE MINING INDUSTRY

    Directory of Open Access Journals (Sweden)

    Andrzej ZYGMUNIAK

    2016-07-01

    Full Text Available This study aims to expose the differences between the code list values provided in two data models. The first is obligatory for managing Geodesic Register of Utility Networks databases in Poland [9], and the second originates from the Technical Guidelines issued for the INSPIRE Directive. Since the latter is the basis for managing spatial databases among European parties, correlating these two data models eases harmonization and, in consequence, the exchange of spatial data. The study therefore presents possibilities for increasing compatibility between the code list values describing object attributes in both models. In practice, this could increase the competitiveness of entities managing or processing such databases and encourage greater involvement in scientific or research projects in the mining industry. Moreover, since utility networks located in mining areas are under particular protection, the ability to fit the data to their own needs will make it possible for mining plants to exchange spatial data more efficiently.

  17. Social Gerontology--Integrative and Territorial Aspects: A Citation Analysis of Subject Scatter and Database Coverage

    Science.gov (United States)

    Lasda Bergman, Elaine M.

    2011-01-01

    To determine the mix of resources used in social gerontology research, a citation analysis was conducted. A representative sample of citations was selected from three prominent gerontology journals and information was added to determine subject scatter and database coverage for the cited materials. Results indicate that a significant portion of…

  18. Integrating information systems : linking global business goals to local database applications

    NARCIS (Netherlands)

    Dignum, F.P.M.; Houben, G.J.P.M.

    1999-01-01

    This paper describes a new approach to design modern information systems that offer an integrated access to the data and knowledge that is available in local applications. By integrating the local data management activities into one transparent information distribution process, modern organizations

  19. An integrated conceptual framework for selecting reverse logistics providers in the presence of vagueness

    Science.gov (United States)

    Fırdolaş, Tugba; Önüt, Semih; Kongar, Elif

    2005-11-01

    In recent years, as organizations' attitudes toward sustainable development have evolved, environmental management has gained increasing interest among researchers in supply chain management. Given the long-term need to shift from a linear economy toward a circular economy, businesses should be motivated to embrace change brought about by consumers, government, competition, and ethical responsibility. To achieve its goals and objectives, a company must respond to increasing consumer demand for "green" products and implement environmentally responsible plans. Reverse logistics is an activity within organizations, delegated to the customer service function, in which customers return warrantied or defective products to their supplier. Reverse logistics can provide a competitive advantage and a significant return on investment, with an indirect effect on profitability. Many organizations hire third-party providers to implement reverse logistics programs designed to retain value by getting products back. Reverse logistics vendors play an important role in helping organizations close the loop for the products they offer. In this regard, the selection of third-party providers is increasingly becoming an area of reverse logistics concept and practice. This study aims to assist managers in determining which third-party logistics provider to collaborate with in the reverse logistics process, using an alternative approach based on an integrated model combining neural networks and fuzzy logic. An illustrative case study is discussed, and the best provider is identified through the solution of this model.
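
As a hedged illustration of the selection step (greatly simplified relative to the paper's combined neural-network/fuzzy-logic model), the sketch below scores each candidate provider on fuzzy criteria values in [0, 1] and picks the highest weighted score. Criteria, weights, and scores are all invented for the example.

```python
# Simplified weighted fuzzy scoring for third-party reverse-logistics
# provider selection. Criteria, weights, and membership values are
# invented placeholders, not the paper's actual model.
WEIGHTS = {"cost": 0.4, "quality": 0.35, "flexibility": 0.25}

def weighted_score(scores):
    """Aggregate a provider's fuzzy criterion values into one score."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

providers = {
    "A": {"cost": 0.8, "quality": 0.6, "flexibility": 0.7},
    "B": {"cost": 0.5, "quality": 0.9, "flexibility": 0.6},
}
best = max(providers, key=lambda p: weighted_score(providers[p]))
```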

  20. Provider and patient satisfaction with the integration of ambulatory and hospital EHR systems.

    Science.gov (United States)

    Meyerhoefer, Chad D; Sherer, Susan A; Deily, Mary E; Chou, Shin-Yi; Guo, Xiaohui; Chen, Jie; Sheinberg, Michael; Levick, Donald

    2018-05-16

    The installation of EHR systems can disrupt operations at clinical practice sites, but also lead to improvements in information availability. We examined how the installation of an ambulatory EHR at OB/GYN practices and its subsequent interface with an inpatient perinatal EHR affected providers' satisfaction with the transmission of clinical information and patients' ratings of their care experience. We collected data on provider satisfaction through 4 survey rounds during the phased implementation of the EHR. Data on patient satisfaction were drawn from Press Ganey surveys issued by the healthcare network through a standard process. Using multivariable models, we determined how provider satisfaction with information transmission and patient satisfaction with their care experience changed as the EHR system allowed greater information flow between OB/GYN practices and the hospital. Outpatient OB/GYN providers became more satisfied with their access to information from the inpatient perinatal triage unit once system capabilities included automatic data flow from triage back to the OB/GYN offices. Yet physicians were generally less satisfied with how the EHR affected their work processes than other clinical and non-clinical staff. Patient satisfaction dropped after initial EHR installation, and we find no evidence of increased satisfaction linked to system integration. Dissatisfaction of providers with an EHR system and difficulties incorporating EHR technology into patient care may negatively impact patient satisfaction. Care must be taken during EHR implementations to maintain good communication with patients while satisfying documentation requirements.

  1. Effect of a patient engagement tool on positive airway pressure adherence: analysis of a German healthcare provider database.

    Science.gov (United States)

    Woehrle, Holger; Arzt, Michael; Graml, Andrea; Fietze, Ingo; Young, Peter; Teschler, Helmut; Ficker, Joachim H

    2018-01-01

    This study investigated the addition of a real-time feedback patient engagement tool on positive airway pressure (PAP) adherence when added to a proactive telemedicine strategy. Data from a German healthcare provider (ResMed Healthcare Germany) were retrospectively analyzed. Patients who first started PAP therapy between 1 September 2009 and 30 April 2014, and were managed using telemedicine (AirView™; proactive care) or telemedicine + patient engagement tool (AirView™ + myAir™; patient engagement) were eligible. Patient demographics, therapy start date, sleep-disordered breathing indices, device usage hours, and therapy termination rate were obtained and compared between the two groups. The first 500 patients managed by telemedicine-guided care and a patient engagement tool were matched with 500 patients managed by telemedicine-guided care only. The proportion of nights with device usage ≥4 h was 77 ± 25% in the patient engagement group versus 63 ± 32% in the proactive care group (p < 0.001). Therapy termination occurred less often in the patient engagement group (p < 0.001). The apnea-hypopnea index was similar in the two groups, but leak was significantly lower in the patient engagement versus proactive care group (2.7 ± 4.0 vs 4.1 ± 5.3 L/min; p < 0.001). Addition of a patient engagement tool to telemonitoring-guided proactive care was associated with higher device usage and lower leak. This suggests that addition of an engagement tool may help improve PAP therapy adherence and reduce mask leak. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
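
The adherence measure reported above (proportion of nights with device usage of at least 4 hours) is straightforward to compute. The nightly usage values below are invented for illustration.

```python
# Compute the percentage of nights with PAP device usage at or above a
# threshold, the adherence metric compared between groups above.
# The nightly hours are invented example data.
def pct_nights_over_threshold(nightly_hours, threshold=4.0):
    nights = [h >= threshold for h in nightly_hours]
    return 100.0 * sum(nights) / len(nights)

usage = [6.5, 3.0, 7.2, 4.0, 0.0, 5.5, 4.8, 2.1]  # hours per night
adherence = pct_nights_over_threshold(usage)       # 5 of 8 nights -> 62.5%
```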

  2. Written records of historical tsunamis in the northeastern South China Sea – challenges associated with developing a new integrated database

    Directory of Open Access Journals (Sweden)

    A. Y. A. Lau

    2010-09-01

    Full Text Available Comprehensive analysis of 15 previously published regional databases incorporating more than 100 sources leads to a newly revised historical tsunami database for the northeastern (NE) region of the South China Sea (SCS), including Taiwan. The validity of each reported historical tsunami event listed in our database is assessed by comparing and contrasting the information and descriptions provided in the other databases. All earlier databases suffer from errors associated with inaccuracies in translation between different languages, calendars and location names. The new database contains 205 records of "events" reported to have occurred between AD 1076 and 2009. We identify and investigate 58 recorded tsunami events in the region. The validity of each event is based on the consistency and accuracy of the reports along with the relative number of individual records for that event. Of the 58 events, 23 are regarded as "valid" (confirmed) events, three are "probable" events and six are "possible". Eighteen events are considered "doubtful" and eight events "invalid". The most destructive tsunami of the 23 valid events occurred in 1867 and affected Keelung, northern Taiwan, killing at least 100 people. Inaccuracies in the historical record aside, this new database highlights the occurrence and geographical extent of several large tsunamis in the NE SCS region and allows an elementary statistical analysis of annual recurrence intervals. Based on historical records from 1951–2009, the probability of a tsunami (from any source) affecting the region in any given year is relatively high (33.4%). However, the likelihood of a tsunami that has a wave height >1 m, and/or causes fatalities and damage to infrastructure, occurring in the region in any given year is low (1–2%). This work indicates the need for further research using coastal stratigraphy and inundation modeling to help validate some of the historical accounts of tsunamis as well as adequately evaluate
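
The annual-probability figure quoted above can be reproduced with a simple recurrence sketch: if the 1951–2009 record (59 years) were generated by a Poisson process, the chance of at least one tsunami in a given year follows from the event rate. The count of 24 events used below is an assumption chosen for illustration, not a figure taken from the paper.

```python
# Back-of-envelope annual tsunami probability under a Poisson assumption.
# The event count (24) over the 59-year window is an illustrative guess.
import math

def annual_probability(n_events, n_years):
    rate = n_events / n_years        # events per year
    return 1.0 - math.exp(-rate)     # P(at least one event in a year)

p = annual_probability(24, 59)       # roughly 0.33, i.e. of the order quoted above
```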

  3. A reference methylome database and analysis pipeline to facilitate integrative and comparative epigenomics.

    Directory of Open Access Journals (Sweden)

    Qiang Song

    Full Text Available DNA methylation is implicated in a surprising diversity of regulatory and evolutionary processes and diseases in eukaryotes. The introduction of whole-genome bisulfite sequencing has enabled the study of DNA methylation at single-base resolution, revealing many new aspects of DNA methylation and highlighting the usefulness of methylome data in understanding a variety of genomic phenomena. As the number of publicly available whole-genome bisulfite sequencing studies reaches into the hundreds, reliable and convenient tools for comparing and analyzing methylomes become increasingly important. We present MethPipe, a pipeline for both low- and high-level methylome analysis, and MethBase, an accompanying database of annotated methylomes from the public domain. Together these resources enable researchers to extract interesting features from methylomes and compare them with those identified in public methylomes in our database.
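
The basic per-site quantity a methylome pipeline such as MethPipe derives from whole-genome bisulfite sequencing is the methylation level of a CpG: the methylated read count over total coverage at that site. A minimal sketch, with invented counts:

```python
# Per-CpG methylation level from bisulfite read counts: methylated reads
# divided by total coverage. The read counts are invented example data.
def methylation_level(methylated, unmethylated):
    total = methylated + unmethylated
    if total == 0:
        return None  # site not covered; level is undefined
    return methylated / total

level = methylation_level(18, 6)  # 18 of 24 reads methylated -> 0.75
```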

  4. International integral experiments databases in support of nuclear data and code validation

    International Nuclear Information System (INIS)

    Briggs, J. Blair; Gado, Janos; Hunter, Hamilton; Kodeli, Ivan; Salvatores, Massimo; Sartori, Enrico

    2002-01-01

    The OECD/NEA Nuclear Science Committee (NSC) has identified the need to establish international databases containing all the important experiments that are available for sharing among the specialists. The NSC has set up or sponsored specific activities to achieve this. The aim is to preserve them in an agreed standard format in computer accessible form, to use them for international activities involving validation of current and new calculational schemes including computer codes and nuclear data libraries, for assessing uncertainties, confidence bounds and safety margins, and to record measurement methods and techniques. The databases so far established or in preparation related to nuclear data validation cover the following areas: SINBAD - A Radiation Shielding Experiments database encompassing reactor shielding, fusion blanket neutronics, and accelerator shielding. ICSBEP - International Criticality Safety Benchmark Experiments Project Handbook, with more than 2500 critical configurations with different combination of materials and spectral indices. IRPhEP - International Reactor Physics Experimental Benchmarks Evaluation Project. The different projects are described in the following including results achieved, work in progress and planned. (author)

  5. What Patients and Providers Want to Know About Complementary and Integrative Health Therapies.

    Science.gov (United States)

    Taylor, Stephanie L; Giannitrapani, Karleen F; Yuan, Anita; Marshall, Nell

    2018-01-01

    We conducted a quality improvement project to determine (1) what information providers and patients most wanted to learn about complementary and integrative health (CIH) therapies and (2) in what format they wanted to receive this information. The overall aim was to develop educational materials to facilitate the CIH therapy decision-making processes. We used mixed methods to iteratively pilot test and revise provider and patient educational materials on yoga and meditation. We conducted semistructured interviews with 11 medical providers and held seven focus groups and used feedback forms with 52 outpatients. We iteratively developed and tested three versions of both provider and patient materials. Activities were conducted at four Veterans Administration medical facilities (two large medical centers and two outpatient clinics). Patients want educational materials with clearly stated basic information about: (1) what mindfulness and yoga are, (2) what a yoga/meditation class entails and how classes can be modified to suit different abilities, (3) key benefits to health and wellness, and (4) how to find classes at the hospital/clinic. Diverse media (videos, handouts, pocket guides) appealed to different Veterans. Videos should depict patients speaking to patients and demonstrating the CIH therapy. Written materials should be one to three pages with colors, and images and messages targeting a variety of patients. Providers wanted a concise (one-page) sheet in black and white font with no images listing the scientific evidence for CIH therapies from high-impact journals, organized by either type of CIH or health condition to use during patient encounters, and including practical information about how to refer patients. Providers and patients want to learn more about CIH therapies, but want the information in succinct, targeted formats. The information learned and materials developed in this study can be used by others to educate patients and providers on CIH

  6. Development of an Information Database for the Integrated Airline Management System (IAMS)

    Directory of Open Access Journals (Sweden)

    Bogdane Ruta

    2017-08-01

    Full Text Available Under present conditions, the activity of any enterprise can be represented as a combination of operational processes, each corresponding to a relevant airline management system. Combining two or more management systems yields an integrated management system. For the integrated management system to function effectively, an appropriate information system should be developed. This article proposes a model of such an information system.

  7. What does «integrative medicine» provide to daily scientific clinical care?

    Science.gov (United States)

    Bataller-Sifre, R; Bataller-Alberola, A

    2015-11-01

    Integrative medicine is an ambitious and noble-minded attempt to address the shortcomings of the current public health systems in our Western societies, which are constrained by the limited time available, especially in outpatient clinics. Integrative medicine also does not rule out useful therapies that have been tested over the centuries (from China, India, etc.) or certain resources that do not achieve the desired level of scientific credibility but offer some therapeutic support in specific cases (homeopathy, acupuncture, etc.), though these still require a scientific approach. Finally, botanical products (phytotherapy) constitute a wide range of possibilities on which universities can (and do) make progress by developing drug brands for these products through the scientific method and evidence-based medical criteria. This approach will help avoid the irrationality of the daily struggle between conventional scientific medicine (which we apply to the immense majority of patients) and the other diagnostic-therapeutic «guidelines» (natural medicine, alternative medicine, complementary medicine, patient-focused medicine and others). Copyright © 2015. Published by Elsevier España, S.L.U.

  8. An Integrative Database System of Agro-Ecology for the Black Soil Region of China

    Directory of Open Access Journals (Sweden)

    Cuiping Ge

    2007-12-01

    Full Text Available The comprehensive database system of the Northeast agro-ecology of black soil (CSDB_BL) is user-friendly software designed to store and manage large amounts of data on agriculture. The data was collected in an efficient and systematic way by long-term experiments and observations of black land and statistics information. It is based on the ORACLE database management system and the interface is written in PB language. The database has the following main facilities: (1) runs on Windows platforms; (2) facilitates data entry from *.dbf to ORACLE or creates ORACLE tables directly; (3) has a metadata facility that describes the methods used in the laboratory or in the observations; (4) data can be transferred to an expert system for simulation analysis and estimates made by Visual C++ and Visual Basic; (5) can be connected with GIS, so it is easy to analyze changes in land use; and (6) allows metadata and data entities to be shared on the internet. The following datasets are included in CSDB_BL: long-term experiments and observations of water, soil, climate, biology, and special research projects; a natural resource survey of Hailun County in the 1980s; images from remote sensing; graphs of vectors and grids; and statistics from the Northeast of China. CSDB_BL can be used in the research and evaluation of agricultural sustainability nationally, regionally, or locally. Also, it can be used as a tool to assist the government in planning for agricultural development. Expert systems connected with CSDB_BL can give farmers directions for farm planting management.

  9. A Reaction Database for Small Molecule Pharmaceutical Processes Integrated with Process Information

    DEFF Research Database (Denmark)

    Papadakis, Emmanouil; Anantpinijwatna, Amata; Woodley, John

    2017-01-01

    This article describes the development of a reaction database with the objective to collect data for multiphase reactions involved in small molecule pharmaceutical processes with a search engine to retrieve necessary data in investigations of reaction-separation schemes, such as the role of organic......; compounds participating in the reaction; use of organic solvents and their function; information for single step and multistep reactions; target products; reaction conditions and reaction data. Information for reactor scale-up together with information for the separation and other relevant information...

  10. Integrated Data Acquisition, Storage, Retrieval and Processing Using the COMPASS DataBase (CDB)

    Czech Academy of Sciences Publication Activity Database

    Urban, Jakub; Pipek, Jan; Hron, Martin; Janky, Filip; Papřok, Richard; Peterka, Matěj; Duarte, A.S.

    2014-01-01

    Roč. 89, č. 5 (2014), s. 712-716 ISSN 0920-3796. [Ninth IAEA TM on Control, Data Acquisition, and Remote Participation for Fusion Research. Hefei, 06.05.2013-10.05.2013] R&D Projects: GA ČR GP13-38121P; GA ČR GAP205/11/2470; GA MŠk(CZ) LM2011021 Institutional support: RVO:61389021 Keywords : tokamak * CODAC * database Subject RIV: BL - Plasma and Gas Discharge Physics Impact factor: 1.152, year: 2014 http://dx.doi.org/10.1016/j.fusengdes.2014.03.032

  11. Integration of Narrative Processing, Data Fusion, and Database Updating Techniques in an Automated System.

    Science.gov (United States)

    1981-10-29

    are implemented, respectively, in the files "W-Update," "W-combine" and "W-Copy," listed in the appendix. The appendix begins with a typescript of an...the typescript) and the copying process (steps 45 and 46) are shown as human actions in the typescript, but can be performed easily by a "master...for Natural Language, M. Marcus, MIT Press, 1980. I 29 APPENDIX: DATABASE UPDATING EXPERIMENT 30 CONTENTS Typescript of an experiment in Rosie

  12. Databases in welding engineering - definition and starting phase of the integrated welding engineering information system

    International Nuclear Information System (INIS)

    Barthelmess, H.; Queren, W.; Stracke, M.

    1989-01-01

    The structure and function of the Information Association for Welding Engineering, newly established by the Deutscher Verband fuer Schweisstechnik, are presented. Examined are: special literature on welding techniques - value and prospects; publicly accessible databases for information on welding techniques; the concept for the Information Association for Welding Engineering; the four phases of establishing the fact databases and expert systems of the Information Association for Welding Engineering; and the pilot project 'MVT-Database' (a hot crack database for data from modified Varestraint-Transvarestraint tests). (orig./MM) [de

  13. 77 FR 39735 - Certain Integrated Circuit Packages Provided With Multiple Heat-Conducting Paths and Products...

    Science.gov (United States)

    2012-07-05

    ...Notice is hereby given that a complaint was filed with the U.S. International Trade Commission on May 31, 2012, under section 337 of the Tariff Act of 1930, as amended, on behalf of Industrial Technology Research Institute of Taiwan and ITRI International of San Jose, California. The complaint alleges violations of section 337 based upon the importation into the United States, the sale for importation, and the sale within the United States after importation of certain integrated circuit packages provided with multiple heat-conducting paths and products containing same by reason of infringement of certain claims of U.S. Patent No. 5,710,459 ("the '459 patent"). The complaint further alleges that an industry in the United States exists as required by subsection (a)(2) of section 337. The complainants request that the Commission institute an investigation and, after the investigation, issue an exclusion order and cease and desist order.

  14. Do NHS walk-in centres in England provide a model of integrated care?

    Directory of Open Access Journals (Sweden)

    C. Salisbury

    2003-08-01

    Full Text Available Purpose: To undertake a comprehensive evaluation of NHS walk-in centres against criteria of improved access, quality, user satisfaction and efficiency. Context: Forty NHS walk-in centres have been opened in England, as part of the UK government's agenda to modernise the NHS. They are intended to improve access to primary care, provide high quality treatment at convenient times, and reduce inappropriate demand on other NHS providers. Care is provided by nurses rather than doctors, using computerised algorithms, and nurses use protocols to supply treatments previously only available from doctors. Data sources: Several linked studies were conducted using different sources of data and methodologies. These included routinely collected data, site visits, patient interviews, a survey of users of walk-in centres, a study using simulated patients to assess quality of care, analysis of consultation rates in NHS services near to walk-in centres, and audit of compliance with protocols. Conclusion & discussion: The findings illustrate many of the issues described in a recent WHO reflective paper on Integrated Care, including tensions between professional judgement and use of protocols, problems with incompatible IT systems, balancing users' demands and needs, the importance of understanding health professionals' roles and issues of technical versus allocative efficiency.

  15. An integrated Biophysical CGE model to provide Sustainable Development Goal insights

    Science.gov (United States)

    Sanchez, Marko; Cicowiez, Martin; Howells, Mark; Zepeda, Eduardo

    2016-04-01

    Future projected changes in the energy system will inevitably result in changes to the level of appropriation of environmental resources, particularly land and water, and this will have wider implications for environmental sustainability and may affect other sectors of the economy. An integrated climate, land, energy and water (CLEW) analysis will provide useful insights, particularly with regard to environmental sustainability. However, it will require adequate integration with other tools to detect economic impacts and broaden the scope for policy analysis. A computable general equilibrium (CGE) model is a well-suited tool to channel impacts, as detected in a CLEW analysis, onto all sectors of the economy, and to evaluate trade-offs and synergies, including those of possible policy responses. This paper will show an application of such integration in a single-country CGE model with the following key characteristics. Climate is partly exogenous (as proxied by temperature and rainfall) and partly endogenous (as proxied by emissions generated by different sectors) and has an impact on endogenous variables such as land productivity and labor productivity. Land is a factor of production used in agricultural and forestry activities, which can be of various types if land use alternatives (e.g., deforestation) are to be considered. Energy is an input to the production process of all economic sectors and a consumption good for households. Because it is possible to allow for substitution among different energy sources (e.g. renewable vs non-renewable) in the generation of electricity, the production process of energy products can consider the use of natural resources such as oil and water. Water, data permitting, can be considered as an input into the production process of agricultural sectors, which is particularly relevant in the case of irrigation. It can also be considered as a determinant of total factor productivity in hydro-power generation. The integration of a CLEW

  16. Integrating Behavioral Health into Pediatric Primary Care: Implications for Provider Time and Cost.

    Science.gov (United States)

    Gouge, Natasha; Polaha, Jodi; Rogers, Rachel; Harden, Amy

    2016-12-01

    Integrating a behavioral health consultant (BHC) into primary care is associated with improved patient outcomes, fewer medical visits, and increased provider satisfaction; however, few studies have evaluated the feasibility of this model from an operations perspective. Specifically, time and cost have been identified as barriers to implementation. Our study aimed to examine time spent, patient volume, and revenue generated during days when the on-site BHC was available compared with days when the consultant was not. Data were collected across a 10-day period when a BHC provided services and 10 days when she was not available. Data included time stamps of patient direct care; providers' direct reports of problems raised; and a review of medical and administrative records, including billing codes and reimbursement. This study took place in a rural, stand-alone private pediatric primary care practice. The participants were five pediatric primary care providers (PCPs; two doctors of medicine, one doctor of osteopathy, and two nurse practitioners) and two supervised doctoral students in psychology (BHCs). Pediatric patients (N = 668) and their parents also participated. On days when a BHC was present, medical providers spent 2 fewer minutes on average for every patient seen, saw 42% more patients, and collected $1142 more revenue than on days when no consultant was present. The time savings demonstrated on days when the consultant was available point to the efficiency and potential financial viability of this model. These results have important implications for the feasibility of hiring behavioral health professionals in a fee-for-service system. They have equally useful implications for the utility of moving to a bundled system of care in which collaborative practice is valued.

  17. The Role of a Provider-Sponsored Health Plan in Achieving Scale and Integration.

    Science.gov (United States)

    Johnson, Steven P

    2016-01-01

    In pursuit of two primary strategies-to become an integrated delivery network (IDN) on the local level and to achieve additional overall organizational scale to sustain operations-Health First, based in Rockledge, Florida, relies on the success of its provider-sponsored health plan (PSHP) as a critical asset. For Health First, the PSHP serves as an agent for holding and administering financial risk for the health of populations. In addition, we are learning that our PSHP is a critical asset in support of integrating the components of our care delivery system to manage that financial risk effectively, efficiently, and in a manner that creates a unified experience for the customer. Health First is challenged by continuing pressure on reimbursement, as well as by a substantial regulatory burden, as we work to optimize the environments and tools of care and population health management. Even with strong margins and a healthy balance sheet, we simply do not have the resources needed to bring an IDN robustly to life. However, we have discovered that our PSHP can be the vehicle that carries us to additional scale. Many health systems do not own or otherwise have access to a PSHP to hold and manage financial risk. Health First sought and found a not-for-profit health system with complementary goals and a strong brand to partner with, and we now provide private-label health plan products for that system using its strong name while operating the insurance functions under our license and with our capabilities.

  18. Integrated model for providing tactical emergency medicine support (TEMS): analysis of 120 tactical situations.

    Science.gov (United States)

    Vainionpää, T; Peräjoki, K; Hiltunen, T; Porthan, K; Taskinen, A; Boyd, J; Kuisma, M

    2012-02-01

    Various models for organising tactical emergency medicine support (TEMS) in law enforcement operations exist. In Helsinki, TEMS is organised as an integral part of emergency medical service (EMS) and applied in hostage, siege, bomb threat and crowd control situations and in other tactical situations after police request. Our aim was to analyse TEMS operations, patient profile, and the level of on-site care provided. We conducted a retrospective cohort study of TEMS operations in Helsinki from 2004 to 2009. Data were retrieved from EMS, hospital and dispatching centre files and from TEMS reports. One hundred twenty TEMS operations were analysed. Median time from dispatching to arrival on scene was 10 min [Interquartile Range (IQR) 7-14]. Median duration of operations was 41 min (IQR 19-63). Standby was the only activity in 72 operations, four patients were dead on arrival, 16 requests were called off en route and patient examination or care was needed in 28 operations. Twenty-eight patients (records retrieved) were alive on arrival and were classified as trauma (n = 12) or medical (n = 16). Of traumas, two sustained a gunshot wound, one sustained a penetrating abdominal wound, three sustained medium severity injuries and nine sustained minor injuries. There was neither on-scene nor in-hospital mortality among patients who were alive on arrival. The level of on-site care performed was basic life support in all cases. The results showed that TEMS integrated to daily EMS services including safe zone working only was a feasible, rapid and efficient way to provide medical support to law enforcement operations. © 2011 The Authors Acta Anaesthesiologica Scandinavica © 2011 The Acta Anaesthesiologica Scandinavica Foundation.

  19. Data integration for European marine biodiversity research: creating a database on benthos and plankton to study large-scale patterns and long-term changes

    NARCIS (Netherlands)

    Vandepitte, L.; Vanhoorne, B.; Kraberg, A.; Anisimova, N.; Antoniadou, C.; Araújo, R.; Bartsch, I.; Beker, B.; Benedetti-Cecchi, L.; Bertocci, I.; Cochrane, S.J.; Cooper, K.; Craeymeersch, J.A.; Christou, E.; Crisp, D.J.; Dahle, S.; de Boissier, M.; De Kluijver, M.; Denisenko, S.; De Vito, D.; Duineveld, G.; Escaravage, V.L.; Fleischer, D.; Fraschetti, S.; Giangrande, A.; Heip, C.H.R.; Hummel, H.; Janas, U.; Karez, R.; Kedra, M.; Kingston, P.; Kuhlenkamp, R.; Libes, M.; Martens, P.; Mees, J.; Mieszkowska, N.; Mudrak, S.; Munda, I.; Orfanidis, S.; Orlando-Bonaca, M.; Palerud, R.; Rachor, E.; Reichert, K.; Rumohr, H.; Schiedek, D.; Schubert, P.; Sistermans, W.C.H.; Sousa Pinto, I.S.; Southward, A.J.; Terlizzi, A.; Tsiaga, E.; Van Beusekom, J.E.E.; Vanden Berghe, E.; Warzocha, J.; Wasmund, N.; Weslawski, J.M.; Widdicombe, C.; Wlodarska-Kowalczuk, M.; Zettler, M.L.

    2010-01-01

    The general aim of setting up a central database on benthos and plankton was to integrate long-, medium- and short-term datasets on marine biodiversity. Such a database makes it possible to analyse species assemblages and their changes on spatial and temporal scales across Europe. Data collation

  20. An integrated data-analysis and database system for AMS {sup 14}C

    Energy Technology Data Exchange (ETDEWEB)

    Kjeldsen, Henrik, E-mail: kjeldsen@phys.au.d [AMS 14C Dating Centre, Department of Physics and Astronomy, Aarhus University, Aarhus (Denmark); Olsen, Jesper [Department of Earth Sciences, Aarhus University, Aarhus (Denmark); Heinemeier, Jan [AMS 14C Dating Centre, Department of Physics and Astronomy, Aarhus University, Aarhus (Denmark)

    2010-04-15

    AMSdata is the name of a combined database and data-analysis system for AMS {sup 14}C and stable-isotope work that has been developed at Aarhus University. The system (1) contains routines for data analysis of AMS and MS data, (2) allows a flexible and accurate description of sample extraction and pretreatment, also when samples are split into several fractions, and (3) keeps track of all measured, calculated and attributed data. The structure of the database is flexible and allows an unlimited number of measurement and pretreatment procedures. The AMS {sup 14}C data analysis routine is fairly advanced and flexible, and it can be easily optimized for different kinds of measuring processes. Technically, the system is based on a Microsoft SQL server and includes stored SQL procedures for the data analysis. Microsoft Office Access is used for the (graphical) user interface, and in addition Excel, Word and Origin are exploited for input and output of data, e.g. for plotting data during data analysis.
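    The pattern this record describes, analysis routines living inside the database engine rather than in client code, can be sketched in Python. SQLite has no stored procedures, so a SQL function registered on the connection stands in for one here; the simplified conventional radiocarbon-age formula (t = -8033 ln F) is standard, but the table and column names are invented:

```python
import math
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE runs (sample TEXT, ratio REAL)")
conn.executemany("INSERT INTO runs VALUES (?, ?)",
                 [("S1", 0.95), ("S2", 0.80)])

# Simplified conventional radiocarbon age from a fraction-modern ratio.
def f14c_to_age(ratio):
    return -8033.0 * math.log(ratio)

# Register the analysis routine inside the database layer, so it can be
# invoked from SQL much like a stored procedure on a full SQL server.
conn.create_function("f14c_to_age", 1, f14c_to_age)

for sample, age in conn.execute(
        "SELECT sample, ROUND(f14c_to_age(ratio)) FROM runs"):
    print(sample, age)
```

    On the Microsoft SQL Server backend the record describes, the same idea would be expressed as actual stored procedures called from the Access front-end.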

  1. An integrated data-analysis and database system for AMS 14C

    International Nuclear Information System (INIS)

    Kjeldsen, Henrik; Olsen, Jesper; Heinemeier, Jan

    2010-01-01

    AMSdata is the name of a combined database and data-analysis system for AMS 14C and stable-isotope work that has been developed at Aarhus University. The system (1) contains routines for data analysis of AMS and MS data, (2) allows a flexible and accurate description of sample extraction and pretreatment, also when samples are split into several fractions, and (3) keeps track of all measured, calculated and attributed data. The structure of the database is flexible and allows an unlimited number of measurement and pretreatment procedures. The AMS 14C data analysis routine is fairly advanced and flexible, and it can be easily optimized for different kinds of measuring processes. Technically, the system is based on a Microsoft SQL server and includes stored SQL procedures for the data analysis. Microsoft Office Access is used for the (graphical) user interface, and in addition Excel, Word and Origin are exploited for input and output of data, e.g. for plotting data during data analysis.

  2. MIPS Arabidopsis thaliana Database (MAtDB): an integrated biological knowledge resource based on the first complete plant genome

    Science.gov (United States)

    Schoof, Heiko; Zaccaria, Paolo; Gundlach, Heidrun; Lemcke, Kai; Rudd, Stephen; Kolesov, Grigory; Arnold, Roland; Mewes, H. W.; Mayer, Klaus F. X.

    2002-01-01

    Arabidopsis thaliana is the first plant for which the complete genome has been sequenced and published. Annotation of complex eukaryotic genomes requires more than the assignment of genetic elements to the sequence. Besides completing the list of genes, we need to discover their cellular roles, their regulation and their interactions in order to understand the workings of the whole plant. The MIPS Arabidopsis thaliana Database (MAtDB; http://mips.gsf.de/proj/thal/db) started out as a repository for genome sequence data in the European Scientists Sequencing Arabidopsis (ESSA) project and the Arabidopsis Genome Initiative. Our aim is to transform MAtDB into an integrated biological knowledge resource by integrating diverse data, tools, query and visualization capabilities and by creating a comprehensive resource for Arabidopsis as a reference model for other species, including crop plants. PMID:11752263

  3. A framework for organizing cancer-related variations from existing databases, publications and NGS data using a High-performance Integrated Virtual Environment (HIVE).

    Science.gov (United States)

    Wu, Tsung-Jung; Shamsaddini, Amirhossein; Pan, Yang; Smith, Krista; Crichton, Daniel J; Simonyan, Vahan; Mazumder, Raja

    2014-01-01

    Years of sequence feature curation by UniProtKB/Swiss-Prot, PIR-PSD, NCBI-CDD, RefSeq and other database biocurators has led to a rich repository of information on functional sites of genes and proteins. This information along with variation-related annotation can be used to scan human short sequence reads from next-generation sequencing (NGS) pipelines for presence of non-synonymous single-nucleotide variations (nsSNVs) that affect functional sites. This and similar workflows are becoming more important because thousands of NGS data sets are being made available through projects such as The Cancer Genome Atlas (TCGA), and researchers want to evaluate their biomarkers in genomic data. BioMuta, an integrated sequence feature database, provides a framework for automated and manual curation and integration of cancer-related sequence features so that they can be used in NGS analysis pipelines. Sequence feature information in BioMuta is collected from the Catalogue of Somatic Mutations in Cancer (COSMIC), ClinVar, UniProtKB and through biocuration of information available from publications. Additionally, nsSNVs identified through automated analysis of NGS data from TCGA are also included in the database. Because of the petabytes of data and information present in NGS primary repositories, a platform HIVE (High-performance Integrated Virtual Environment) for storing, analyzing, computing and curating NGS data and associated metadata has been developed. Using HIVE, 31 979 nsSNVs were identified in TCGA-derived NGS data from breast cancer patients. All variations identified through this process are stored in a Curated Short Read archive, and the nsSNVs from the tumor samples are included in BioMuta. Currently, BioMuta has 26 cancer types with 13 896 small-scale and 308 986 large-scale study-derived variations. Integration of variation data allows identifications of novel or common nsSNVs that can be prioritized in validation studies. Database URL: BioMuta: http
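    The core filtering step described above, scanning variant calls against curated functional-site annotations, can be sketched in a few lines of Python. The records, positions and field names below are invented for illustration; BioMuta's actual schema is far richer:

```python
# Hypothetical variant calls from an NGS pipeline (all values invented).
variants = [
    {"gene": "TP53", "pos": 175, "type": "nsSNV"},
    {"gene": "TP53", "pos": 300, "type": "synonymous"},
    {"gene": "BRCA1", "pos": 1699, "type": "nsSNV"},
]

# Curated functional-site positions, as might be drawn from sequence
# feature databases such as UniProtKB/Swiss-Prot.
functional_sites = {("TP53", 175), ("BRCA1", 1699)}

# Keep only non-synonymous SNVs that land on an annotated functional site.
hits = [v for v in variants
        if v["type"] == "nsSNV" and (v["gene"], v["pos"]) in functional_sites]
print(len(hits))
```

    A production pipeline would do this join at scale inside a platform like HIVE rather than in memory, but the logical operation is the same.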

  4. Integration of Health Coaching Concepts and Skills into Clinical Practice Among VHA Providers: A Qualitative Study.

    Science.gov (United States)

    Collins, David A; Thompson, Kirsten; Atwood, Katharine A; Abadi, Melissa H; Rychener, David L; Simmons, Leigh Ann

    2018-01-01

    Although studies of health coaching for behavior change in chronic disease prevention and management are increasing, to date no studies have reported on what concepts and skills providers integrate into their clinical practice following participation in health coaching courses. The purpose of this qualitative study was to assess Veterans Health Administration (VHA) providers' perceptions of the individual-level and system-level changes they observed after participating with colleagues in a 6-day Whole Health Coaching course held in 8 VHA medical centers nationwide. Data for this study were from the follow-up survey conducted with participants 2 to 3 months after completing the training. A total of 142 responses about individual-level changes and 99 responses about system-level changes were analyzed using content analysis. Eight primary themes emerged regarding individual changes, including increased emphasis on Veterans' values, increased use of listening and other specific health coaching skills in their clinical role, and adding health coaching to their clinical practice. Four primary themes emerged regarding system-level changes, including leadership support, increased staff awareness/support/learning and sharing, increased use of health coaching skills or tools within the facility, and organizational changes demonstrating a more engaged workforce, such as new work groups being formed or existing groups becoming more active. Findings suggest that VHA providers who participate in health coaching trainings do perceive positive changes within themselves and their organizations. Health coaching courses that emphasize patient-centered care and promote patient-provider partnerships likely have positive effects beyond the individual participants that can be used to promote desired organizational change.

  5. The human interactome knowledge base (hint-kb): An integrative human protein interaction database enriched with predicted protein–protein interaction scores using a novel hybrid technique

    KAUST Repository

    Theofilatos, Konstantinos A.

    2013-07-12

    Proteins are the functional components of many cellular processes and the identification of their physical protein–protein interactions (PPIs) is an area of mature academic research. Various databases have been developed containing information about experimentally and computationally detected human PPIs as well as their corresponding annotation data. However, these databases contain many false positive interactions, are partial and only a few of them incorporate data from various sources. To overcome these limitations, we have developed HINT-KB (http://biotools.ceid.upatras.gr/hint-kb/), a knowledge base that integrates data from various sources, provides a user-friendly interface for their retrieval, calculates a set of features of interest and computes a confidence score for every candidate protein interaction. This confidence score is essential for filtering the false positive interactions which are present in existing databases, predicting new protein interactions and measuring the frequency of each true protein interaction. For this reason, a novel machine learning hybrid methodology, called EvoKalMaModel (Evolutionary Kalman Mathematical Modelling), was used to achieve an accurate and interpretable scoring methodology. The experimental results indicated that the proposed scoring scheme outperforms existing computational methods for the prediction of PPIs.
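    A toy version of such a confidence score, combining several evidence features into one filterable number, might look as follows. The logistic form, the feature names and the weights are all invented for illustration; the actual EvoKalMaModel scoring is a far more elaborate hybrid method:

```python
import math

# Toy logistic combination of interaction evidence into a confidence score.
def confidence(features, weights, bias=-2.0):
    z = bias + sum(weights[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Invented evidence features and weights for one candidate interaction.
weights = {"coexpression": 1.5, "shared_go_terms": 2.0, "experiments": 3.0}
candidate = {"coexpression": 0.8, "shared_go_terms": 0.5, "experiments": 1.0}

score = confidence(candidate, weights)
print(score > 0.5)  # keep the interaction only if it clears a cutoff
```

    The point of any such score is exactly what the record describes: it lets false positives from the source databases be filtered out by thresholding.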

  6. Information for Child Care Providers about Pesticides/Integrated Pest Management

    Science.gov (United States)

    Learn about pesticides/integrated pest management, the health effects associated with exposure to pests and pesticides, and the steps that can be taken to use integrated pest management strategies in childcare facilities.

  7. Design and implementation of typical target image database system

    International Nuclear Information System (INIS)

    Qin Kai; Zhao Yingjun

    2010-01-01

    It is necessary to provide essential background data and thematic data in a timely manner for image processing and applications. In practice, an application is a procedure that integrates and analyzes different kinds of data. In this paper, the authors describe an image database system that classifies, stores, manages and analyzes databases of different types, such as image, vector, spatial, and spatial target characteristics databases, along with its design and structure. (authors)

  8. Disciplining Change, Displacing Frictions. Two Structural Dimensions of Digital Circulation Across Land Registry Database Integration

    NARCIS (Netherlands)

    Pelizza, Annalisa

    2016-01-01

    Data acquire meaning through circulation. Yet most approaches to high-quality data aim to flatten this stratification of meanings. In government, data quality is achieved through integrated systems of authentic registers that reduce multiple trajectories to a single, official one. These systems can

  9. Multimedia Bootcamp: a health sciences library provides basic training to promote faculty technology integration.

    Science.gov (United States)

    Ramsey, Ellen C

    2006-04-25

    Recent research has shown a backlash against the enthusiastic promotion of technological solutions as replacements for traditional educational content delivery. Many institutions, including the University of Virginia, have committed staff and resources to supporting state-of-the-art, showpiece educational technology projects. However, the Claude Moore Health Sciences Library has taken the approach of helping Health Sciences faculty be more comfortable using technology in incremental ways for instruction and research presentations. In July 2004, to raise awareness of self-service multimedia resources for instructional and professional development needs, the Library conducted a "Multimedia Bootcamp" for nine Health Sciences faculty and fellows. Case study. Program stewardship by a single Library faculty member contributed to the delivery of an integrated learning experience. The amount of time required to attend the sessions and complete homework was the maximum fellows had to devote to such pursuits. The benefit of introducing technology unfamiliar to most fellows allowed program instructors to start everyone at the same baseline while not appearing to pass judgment on the technology literacy skills of faculty. The combination of wrapping the program in the trappings of a fellowship and selecting fellows who could commit to a majority of scheduled sessions yielded strong commitment from participants as evidenced by high attendance and a 100% rate of assignment completion. Response rates to follow-up evaluation requests, as well as continued use of Media Studio resources and Library expertise for projects begun or conceived during Bootcamp, bode well for the long-term success of this program. An incremental approach to integrating technology with current practices in instruction and presentation provided a supportive yet energizing environment for Health Sciences faculty. Keys to this program were its faculty focus, traditional hands-on instruction, unrestricted

  10. An object-oriented framework for managing cooperating legacy databases

    NARCIS (Netherlands)

    Balsters, H; de Brock, EO

    2003-01-01

    We describe a general semantic framework for precise specification of so-called database federations. A database federation provides for tight coupling of a collection of heterogeneous legacy databases into a global integrated system. Our approach to database federation is based on the UML/OCL data

  11. Object-oriented modeling and design of database federations

    NARCIS (Netherlands)

    Balsters, H.

    2003-01-01

    We describe a logical architecture and a general semantic framework for precise specification of so-called database federations. A database federation provides for tight coupling of a collection of heterogeneous component databases into a global integrated system. Our approach to database federation
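    The tight-coupling idea behind these database federations, several autonomous component databases queried through one integrated global schema, can be illustrated with SQLite's ATTACH mechanism. The schemas and data below are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Attach a second, independent in-memory database as a component DB.
conn.execute("ATTACH DATABASE ':memory:' AS legacy_b")

# Two legacy components with heterogeneous local schemas (names invented).
conn.execute("CREATE TABLE main.customers (name TEXT, city TEXT)")
conn.execute("CREATE TABLE legacy_b.clients (full_name TEXT, town TEXT)")
conn.execute("INSERT INTO main.customers VALUES ('Ada', 'Eindhoven')")
conn.execute("INSERT INTO legacy_b.clients VALUES ('Bob', 'Groningen')")

# One integrated query reconciling both local schemas into a global view.
rows = conn.execute("""
    SELECT name, city FROM main.customers
    UNION ALL
    SELECT full_name, town FROM legacy_b.clients
""").fetchall()
print(len(rows))
```

    A real federation layer adds what this sketch omits: schema mediation rules, conflict resolution, and the kind of formal global-schema specification (e.g. in UML/OCL) that these papers address.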

  12. Database Security for an Integrated Solution to Automate Sales Processes in Banking

    OpenAIRE

    Alexandra Maria Ioana FLOREA

    2013-01-01

    In order to maintain a competitive edge in a very active banking market the implementation of a web-based solution to standardize, optimize and manage the flow of sales / pre-sales and generating new leads is requested by a company. This article presents the realization of a development framework for software interoperability in the banking financial institutions and an integrated solution for achieving sales process automation in banking. The paper focuses on presenting the requirements for ...

  13. Blockchain-based database to ensure data integrity in cloud computing environments

    OpenAIRE

    Gaetani, Edoardo; Aniello, Leonardo; Baldoni, Roberto; Lombardi, Federico; Margheri, Andrea; Sassone, Vladimiro

    2017-01-01

    Data is nowadays an invaluable resource, indeed it guides all business decisions in most of the computer-aided human activities. Threats to data integrity are thus of paramount relevance, as tampering with data may maliciously affect crucial business decisions. This issue is especially true in cloud computing environments, where data owners cannot control fundamental data aspects, like the physical storage of data and the control of its accesses. Blockchain has recently emerged as a fascinati...
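    The integrity mechanism at the heart of such blockchain-based databases is a hash chain: each record embeds the hash of its predecessor, so tampering with any stored record invalidates every later link. A minimal sketch (omitting consensus, replication and everything else a real system needs):

```python
import hashlib
import json

def block_hash(block):
    # Deterministic hash over the block's content and its back-link.
    body = json.dumps({"prev": block["prev"], "data": block["data"]},
                      sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest()

def make_block(prev_hash, payload):
    block = {"prev": prev_hash, "data": payload}
    block["hash"] = block_hash(block)
    return block

def verify(chain):
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False  # stored record no longer matches its own hash
        if i and block["prev"] != chain[i - 1]["hash"]:
            return False  # link to the previous record is broken
    return True

chain = [make_block("0" * 64, "genesis")]
for record in ["order #1", "order #2"]:
    chain.append(make_block(chain[-1]["hash"], record))

ok_before = verify(chain)    # untampered chain verifies
chain[1]["data"] = "forged"  # tamper with one stored record
ok_after = verify(chain)     # verification now fails
print(ok_before, ok_after)
```

    The record names and payloads are invented; the point is only that any modification to stored data becomes detectable by re-checking the chain.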

  14. Design Integration of Man-Machine Interface (MMI) Display Drawings and MMI Database

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yong Jun; Seo, Kwang Rak; Song, Jeong Woog; Kim, Dae Ho; Han, Jung A [KEPCO Engineering and Construction Co., Deajeon (Korea, Republic of)

    2016-10-15

    The conventional Main Control Room (MCR) was designed using hardwired controllers and analog indications mounted on control boards for control and acquisition of plant information. This is in contrast with the advanced MCR design, where Flat Panel Displays (FPDs) with soft controls and mimic displays are used. The advanced design needs MMI display drawings replacing the conventional control board layout drawings and component lists. The data is linked to related objects of the MMI displays. Compilation of the data into the DB is generally done manually, which tends to introduce errors and discrepancies. Also, updating and managing are difficult due to the huge number of entries in the DB, and updates must closely track changes in the associated drawings. Therefore, automating the DB update whenever a related drawing is updated would be quite beneficial. An attempt is made to develop a new method to integrate MMI display drawing design and DB management, which would significantly reduce errors and improve design quality. The design integration of the MMI display drawings and the MMI DB is explained briefly but concisely in this paper. The existing method involved individually and separately inputting design data for the MMI display drawings. This led to the potential problem of data discrepancies and errors, as well as an update time lag between related drawings and the DB, and it motivated the development of an integrated design process that automates the design data input activity.
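    The automation idea in this record, regenerating DB entries from the objects extracted out of a display drawing so the DB never lags behind the drawing, can be sketched as a simple diff-and-upsert. The tags and attributes below are invented for illustration:

```python
# Objects parsed from the current MMI display drawing (tags invented).
drawing_objects = {
    "PI-101": {"type": "indicator", "display": "RCS-01"},
    "HV-202": {"type": "soft_control", "display": "RCS-01"},
}

# Stale DB copy that was previously keyed in by hand.
db_rows = {
    "PI-101": {"type": "indicator", "display": "RCS-01"},
    "HV-203": {"type": "soft_control", "display": "RCS-01"},  # outdated tag
}

# Diff the drawing against the DB, then apply the changes.
added = set(drawing_objects) - set(db_rows)
removed = set(db_rows) - set(drawing_objects)
changed = {tag for tag in drawing_objects.keys() & db_rows.keys()
           if drawing_objects[tag] != db_rows[tag]}

for tag in removed:
    del db_rows[tag]
for tag in added | changed:
    db_rows[tag] = drawing_objects[tag]

print(db_rows == drawing_objects)  # DB now matches the drawing
```

    Driving the DB update from the drawing data in this way removes the manual double entry that the paper identifies as the source of discrepancies.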

  15. Database Security for an Integrated Solution to Automate Sales Processes in Banking

    Directory of Open Access Journals (Sweden)

    Alexandra Maria Ioana FLOREA

    2013-05-01

    Full Text Available In order to maintain a competitive edge in a very active banking market, companies request web-based solutions to standardize, optimize and manage the flow of sales and pre-sales and to generate new leads. This article presents the realization of a development framework for software interoperability in banking financial institutions and an integrated solution for automating the sales process in banking. The paper focuses on the requirements for security and confidentiality of stored data, and on the techniques and procedures identified to implement these requirements.

  16. Understanding the motivations of health-care providers in performing female genital mutilation: an integrative review of the literature.

    Science.gov (United States)

    Doucet, Marie-Hélène; Pallitto, Christina; Groleau, Danielle

    2017-03-23

    Female genital mutilation (FGM) is a traditional harmful practice that can cause severe physical and psychological damage to girls and women. Increasingly, trained health-care providers carry out the practice at the request of families. It is important to understand the motivations of providers in order to reduce the medicalization of FGM. This integrative review identifies, appraises and summarizes qualitative and quantitative literature exploring the factors that are associated with the medicalization of FGM and/or re-infibulation. Literature searches were conducted in PubMed, CINAHL and grey literature databases; the reference lists of identified studies were also hand-searched. The "CASP Qualitative Research Checklist" and the "STROBE Statement" were used to assess the methodological quality of the qualitative and quantitative studies respectively. A total of 354 articles were reviewed for inclusion. Fourteen studies, conducted in countries where FGM is largely practiced as well as in countries hosting migrants from these regions, were included. The main findings about the motivations of health-care providers to practice FGM were: (1) the belief that performing FGM would be less harmful for girls or women than the procedure being performed by a traditional practitioner (the so-called "harm reduction" perspective); (2) the belief that the practice was justified for cultural reasons; (3) the financial gains of performing the procedure; (4) responding to requests of the community or feeling pressured by the community to perform FGM. The main reasons given by health-care providers for not performing FGM were that they (1) are concerned about the risks that FGM can cause for girls' and women's health; (2) are preoccupied by the legal sanctions that might result from performing FGM; and (3) consider FGM to be a "bad practice". The findings of this review can inform public health program planners, policy makers and researchers to adapt or create strategies to end

  17. An Integral Model to Provide Reactive and Proactive Services in an Academic CSIRT Based on Business Intelligence

    Directory of Open Access Journals (Sweden)

    Walter Fuertes

    2017-11-01

    Full Text Available Cyber-attacks have increased in severity and complexity. This requires CERTs/CSIRTs to research and develop new security tools. Our study therefore focuses on the design of an integral model based on Business Intelligence (BI), which provides reactive and proactive services in a CSIRT in order to alert on and reduce any suspicious or malicious activity in information systems and data networks. To achieve this purpose, a solution was assembled that generates information stores compiled from continuous network transmissions from several internal and external sources of an organization. The solution includes a data warehouse that acts as a log correlator, formed from feeds with diverse formats. Furthermore, we analyzed attack-detection and port-scanning data obtained from sensors such as Snort and the Passive Vulnerability Scanner, stored in a database of system-generated logs. With these inputs, we designed and implemented BI systems using the phases of the Ralph Kimball methodology and ETL and OLAP processes. In addition, a software application was implemented using the SCRUM methodology, which linked the collected logs to the BI system for visualization in dynamic dashboards, with the purpose of generating early alerts and constructing complex queries through the user interface using object structures. The results demonstrate that this solution generates early warnings based on the criticality and sensitivity levels of malware and vulnerabilities, as well as efficient monitoring, increasing the level of security of member institutions.

  18. Quality of integrated chronic disease care in rural South Africa: user and provider perspectives.

    Science.gov (United States)

    Ameh, Soter; Klipstein-Grobusch, Kerstin; D'ambruoso, Lucia; Kahn, Kathleen; Tollman, Stephen M; Gómez-Olivé, Francesc Xavier

    2017-03-01

    The integrated chronic disease management (ICDM) model was introduced as a response to the dual burden of HIV/AIDS and non-communicable diseases (NCDs) in South Africa, one of the first such efforts by an African Ministry of Health. The aim of the ICDM model is to leverage HIV programme innovations to improve the quality of chronic disease care. There is a dearth of literature on the perspectives of healthcare providers and users on the quality of care in the novel ICDM model. This paper describes the viewpoints of operational managers and patients regarding quality of care in the ICDM model. In 2013, we conducted a case study of the seven PHC facilities in the rural Agincourt sub-district in northeast South Africa. Focus group discussions (n = 8) were used to obtain data from 56 purposively selected patients ≥18 years. In-depth interviews were conducted with operational managers of each facility and the sub-district health manager. Donabedian’s structure, process and outcome theory for service quality evaluation underpinned the conceptual framework in this study. Qualitative data were analysed, with MAXQDA 2 software, to identify 17 a priori dimensions of care and unanticipated themes that emerged during the analysis. The manager and patient narratives revealed inadequacies in structure (malfunctioning blood pressure machines and staff shortage); process (irregular prepacking of drugs); and outcome (long waiting times). There was discordance between managers and patients regarding reasons for long patient waiting times, which managers attributed to staff shortage and missed appointments, while patients ascribed to late arrival of managers at the clinics. Patients reported anti-hypertension drug stock-outs (structure); sub-optimal defaulter-tracing (process); and a rigid clinic appointment system (process). Emerging themes showed that patients reported HIV stigmatisation in the community due to defaulter-tracing activities of home-based carers, while

  19. Anaesthesiology as an integral part of Slovene partisan medical services provided during the second world war

    Directory of Open Access Journals (Sweden)

    Aleksander Manohin

    2006-01-01

    Full Text Available Background: The aim of this work was to describe the practice of anaesthesia in partisan military hospitals in Slovenia during the Second World War. The organisation of anaesthetic services delivered as an integral part of partisan medical care was unique in Europe and in the world. Healthcare givers exhibited a high level of professional knowledge as well as exceptional resourcefulness, adaptability, and willingness to cope with the physical and psychological demands of their work. Conclusions: During the Second World War, a number of healthcare facilities for treatment of wounded and severely ill soldiers, run by partisan forces, were established on the territory of Slovenia. The paper deals with the first and most important, the Slovene central military partisan hospital in Kočevski Rog, and the best-known, the Franja and Pavla Hospitals in the Primorska region (Franja was proposed for entry in UNESCO’s list of World Heritage Sites. The authors used a large body of written documentation, as well as the testimony provided by living witnesses of the war events. The main characteristics of partisan fighting were constant movement of troops and the absence of a hinterland. Therefore, it was not possible to apply the basic principle of war medical services, i.e. to evacuate wounded soldiers to the hinterland through graded units of care. No handbooks on the organization of partisan medical services were available at the time, and there were no hard and fast rules for action. Frequently, healthcare had to be provided before any arrangements for the management of wounded soldiers had been made. The apparently unsolvable problems had to be solved on the spot. The paper gives information not only on anaesthesia but also on the general conditions characteristic of that period. It is only in the light of this dramatically different situation that the role of anaesthetic services provided during the war can be understood correctly. The material is illustrated with more, mostly

  20. Using reefcheck monitoring database to develop the coral reef index of biological integrity

    DEFF Research Database (Denmark)

    Nguyen, Hai Yen T.; Pedersen, Ole; Ikejima, Kou

    2009-01-01

    The coral reef indices of biological integrity were constructed based on the Reef Check monitoring data. Seventy-six minimally disturbed sites and 72 maximally disturbed sites in shallow water, and 39 minimally disturbed sites and 37 maximally disturbed sites in deep water, were classified based on the high-end and low-end percentages and ratios of hard coral, dead coral and fleshy algae. A total of 52 candidate metrics was identified and compiled; eight and four metrics were finally selected to constitute the shallow- and deep-water coral reef indices respectively. The rating curve was applied ... (p < 0.05) and coral damaged by other factors -0.283 (p < 0.05). The coral reef indices responded sensitively to stressors and can be used as a coral reef biological monitoring tool.

  1. The effect of integration of hospitals and post-acute care providers on Medicare payment and patient outcomes.

    Science.gov (United States)

    Konetzka, R Tamara; Stuart, Elizabeth A; Werner, Rachel M

    2018-02-07

    In this paper we examine empirically the effect of integration on Medicare payment and rehospitalization. We use 2005-2013 data on Medicare beneficiaries receiving post-acute care (PAC) in the U.S. to examine integration between hospitals and the two most common post-acute care settings: skilled nursing facilities (SNFs) and home health agencies (HHAs), using two measures of integration: formal vertical integration, and informal integration representing preferential relationships between providers without formal ties. Our identification strategy is twofold. First, we use longitudinal models with a fixed effect for each hospital-PAC pair in a market to test how changes in integration impact patient outcomes. Second, we use an instrumental variable approach to account for patient selection into integrated providers. We find that vertical integration between hospitals and SNFs increases Medicare payments and reduces rehospitalization rates. However, vertical integration between hospitals and HHAs has little effect, nor does informal integration between hospitals and either PAC setting. Copyright © 2018 The Author(s). Published by Elsevier B.V. All rights reserved.

  2. [Research and development of medical case database: a novel medical case information system integrating with biospecimen management].

    Science.gov (United States)

    Pan, Shiyang; Mu, Yuan; Wang, Hong; Wang, Tong; Huang, Peijun; Ma, Jianfeng; Jiang, Li; Zhang, Jie; Gu, Bing; Yi, Lujiang

    2010-04-01

    To meet the needs of managing medical case information and biospecimens simultaneously, we developed a novel medical case information system integrated with biospecimen management. The database, established with MS SQL Server 2000, covered the basic information, clinical diagnosis, imaging diagnosis, pathological diagnosis and clinical treatment of patients; the physicochemical properties, inventory management and laboratory analysis of biospecimens; and user logs and data maintenance. The client application, developed with Visual C++ 6.0 on a Client/Server model, implements medical case and biospecimen management. The system supports input, browsing, querying and summarization of cases and related biospecimen information, and can automatically synthesize case records from the database. It supports management not only of long-term follow-up of individuals but also of grouped cases organized according to research aims. This system can improve the efficiency and quality of clinical research in which biospecimens are used. It realizes synthesized and dynamic management of medical cases and biospecimens, and may be considered a new management platform.
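The one-case-to-many-biospecimens structure the abstract describes can be sketched with a two-table relational schema. The original system used MS SQL Server 2000 with a Visual C++ client; the sketch below uses Python's built-in `sqlite3` purely for illustration, and all table and column names are invented rather than taken from the paper.

```python
import sqlite3

# Minimal sketch (not the authors' schema): one medical_case row can own
# many biospecimen rows, mirroring the joint case/specimen model.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE medical_case (
    case_id                INTEGER PRIMARY KEY,
    name                   TEXT NOT NULL,
    clinical_diagnosis     TEXT,
    pathological_diagnosis TEXT
);
CREATE TABLE biospecimen (
    specimen_id      INTEGER PRIMARY KEY,
    case_id          INTEGER NOT NULL REFERENCES medical_case(case_id),
    specimen_type    TEXT,      -- e.g. serum, tissue
    storage_location TEXT       -- inventory-management field
);
""")
conn.execute("INSERT INTO medical_case VALUES (1, 'Patient A', 'CML', 'BCR-ABL+')")
conn.execute("INSERT INTO biospecimen VALUES (10, 1, 'serum', 'freezer-3/rack-B')")

# A join like this is what lets case records and specimens be browsed together.
row = conn.execute("""
    SELECT m.name, b.specimen_type, b.storage_location
    FROM medical_case m JOIN biospecimen b ON b.case_id = m.case_id
""").fetchone()
```

Keying specimens to `case_id` is what allows both long-term follow-up of one patient and research-driven grouping of cases, as the abstract notes.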

  3. Database Description - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Arabidopsis Phenome Database Database Description General information of database Database n... BioResource Center Hiroshi Masuya Database classification Plant databases - Arabidopsis thaliana Organism T...axonomy Name: Arabidopsis thaliana Taxonomy ID: 3702 Database description The Arabidopsis thaliana phenome i...heir effective application. We developed the new Arabidopsis Phenome Database integrating two novel database...seful materials for their experimental research. The other, the “Database of Curated Plant Phenome” focusing

  4. Waterborne disease outbreak detection: an integrated approach using health administrative databases.

    Science.gov (United States)

    Coly, S; Vincent, N; Vaissiere, E; Charras-Garrido, M; Gallay, A; Ducrot, C; Mouly, D

    2017-08-01

    Hundreds of waterborne disease outbreaks (WBDO) of acute gastroenteritis (AGI) due to contaminated tap water are reported in developed countries each year. Such outbreaks are probably under-detected. The aim of our study was to develop an integrated approach to detect and study clusters of AGI in geographical areas with homogeneous exposure to drinking water. Data on the number of AGI cases are available at the municipality level, while exposure to tap water depends on drinking water networks (DWN). These two geographical units do not systematically overlap. This study proposed to develop an algorithm which would match the most relevant grouping of municipalities with a specific DWN, so that tap water exposure can be taken into account when investigating future disease outbreaks. A space-time detection method was applied to the grouping of municipalities. Seven hundred and fourteen new geographical areas (groupings of municipalities) were obtained, compared with the 1,310 municipalities and the 1,706 DWNs. Eleven potential WBDO were identified in these groupings of municipalities. For ten of them, additional environmental investigations identified at least one event that could have caused microbiological contamination of a DWN in the days preceding the occurrence of a reported WBDO.
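The matching idea above, grouping municipalities so that each group shares homogeneous tap-water exposure, can be sketched as merging municipalities connected through a common DWN. The union-find approach and the toy municipality/DWN pairs below are an illustration under that assumption, not the paper's actual algorithm.

```python
# Illustrative sketch: merge municipalities that share a drinking water
# network (DWN) into one exposure area, using a union-find over
# municipality-DWN links.

def group_by_dwn(links):
    """links: iterable of (municipality, dwn) pairs."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Prefix nodes so municipality and DWN identifiers never collide.
    for muni, dwn in links:
        union(("M", muni), ("D", dwn))

    groups = {}
    for muni, _ in links:
        groups.setdefault(find(("M", muni)), set()).add(muni)
    return sorted(sorted(g) for g in groups.values())

# A and B share DWN 1, B and C share DWN 2 -> one exposure area {A, B, C};
# D is served only by DWN 3 -> its own area.
areas = group_by_dwn([("A", 1), ("B", 1), ("B", 2), ("C", 2), ("D", 3)])
```

A space-time cluster detection method can then be run over these merged areas instead of raw municipalities, which is the paper's central point.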

  5. Multisource Data-Based Integrated Agricultural Drought Monitoring in the Huai River Basin, China

    Science.gov (United States)

    Sun, Peng; Zhang, Qiang; Wen, Qingzhi; Singh, Vijay P.; Shi, Peijun

    2017-10-01

    Drought monitoring is critical for early warning of drought hazard. This study attempted to develop an integrated remote sensing drought monitoring index (IRSDI) based on meteorological data for 2003-2013 from 40 meteorological stations and soil moisture data from 16 observatory stations, as well as Moderate Resolution Imaging Spectroradiometer data, a linear trend detection method, and the standardized precipitation evapotranspiration index. The objective was to investigate drought conditions across the Huai River basin in both space and time. Results indicate that (1) the proposed IRSDI monitors and describes drought conditions across the Huai River basin reasonably well in both space and time; (2) droughts and severe droughts occur most frequently during April-May and July-September. The northeastern and eastern parts of the Huai River basin are dominated by frequent droughts and intensified drought events. These regions are dominated by dry croplands, grasslands, and a highly dense population and are hence more sensitive to drought hazards; (3) intensified droughts are detected during almost all months except January, August, October, and December. In addition, significant intensification of droughts is discerned mainly in the eastern and western Huai River basin. The duration and extent of intensified drought events would be a challenge for water resources management in view of agricultural and other activities in these regions in a changing climate.
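The linear trend detection step mentioned above can be sketched as an ordinary least-squares slope over a monthly drought-index series: with an index where lower values mean drier conditions, a significantly negative slope marks intensifying drought. The series below is invented for illustration, not the study's IRSDI values.

```python
# Ordinary least-squares slope of a series against its time index.

def ols_slope(y):
    n = len(y)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(y) / n
    num = sum((x - mx) * (v - my) for x, v in zip(xs, y))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

series = [0.5, 0.3, 0.1, -0.2, -0.4, -0.7]   # steadily drying toy data
slope = ols_slope(series)                     # negative -> intensifying drought
```

In practice such a slope would be tested for significance (e.g. a t-test or Mann-Kendall test) before declaring a drought trend, as trend studies of this kind typically do.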

  6. Whether integrating refining and petrochemical business can provide opportunities for development of petrochemical industry in Serbia

    Directory of Open Access Journals (Sweden)

    Popović Zoran M.

    2016-01-01

    Full Text Available Since the beginning of the 1990s, both the petroleum industry and the petrochemical industry have operated in difficult circumstances. In particular, petroleum and petrochemical margins were squeezed during the global economic crisis of 2008-2009. At that time, global analysts began to investigate more intensively the benefits of refining-petrochemical integration as one possible solution. Shortly afterwards, more and more petroleum refineries and petrochemical manufacturers began to see their future in this kind of operational, managerial, marketing and commercial connection. This paper evaluates, in particular, the achieved level of integration of refining and petrochemical businesses in Central and South-Eastern Europe. Specifically, the paper identifies current capabilities and future opportunities for such integration between Serbian refining and petrochemical players. The viability of integration between possible actors and the benefits of each refining-petrochemical interface in Serbia depend on many factors, so each integrated system is unique and requires a serious prior cost-benefit analysis.

  7. Database Description - RMOS | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name RMOS Alternative nam...arch Unit Shoshi Kikuchi E-mail : Database classification Plant databases - Rice Microarray Data and other Gene Expression Database...s Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database description The Ric...19&lang=en Whole data download - Referenced database Rice Expression Database (RED) Rice full-length cDNA Database... (KOME) Rice Genome Integrated Map Database (INE) Rice Mutant Panel Database (Tos17) Rice Genome Annotation Database

  8. Quality of integrated chronic disease care in rural South Africa : user and provider perspectives

    NARCIS (Netherlands)

    Ameh, Soter; Klipstein-Grobusch, Kerstin; D'ambruoso, Lucia; Kahn, Kathleen; Tollman, Stephen M; Gómez-Olivé, Francesc Xavier

    2017-01-01

    The integrated chronic disease management (ICDM) model was introduced as a response to the dual burden of HIV/AIDS and non-communicable diseases (NCDs) in South Africa, one of the first of such efforts by an African Ministry of Health. The aim of the ICDM model is to leverage HIV programme

  9. Defining Intercloud Federation Framework for Multi-provider Cloud Services Integration

    NARCIS (Netherlands)

    Makkes, M.X.; Ngo, C.; Demchenko, Y.; Strijkers, R.J.; Meijer, R.J.; Laat, C. de

    2013-01-01

    This paper presents the on-going research to define the Intercloud Federation Framework (ICFF) which is a part of the general Intercloud Architecture Framework (ICAF) proposed by the authors. ICFF attempts to address the interoperability and integration issues in provisioning on-demand

  10. ADANS database specification

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-01-16

    The purpose of the Air Mobility Command (AMC) Deployment Analysis System (ADANS) Database Specification (DS) is to describe the database organization and storage allocation and to provide the detailed data model of the physical design and information necessary for the construction of the parts of the database (e.g., tables, indexes, rules, defaults). The DS includes entity relationship diagrams, table and field definitions, reports on other database objects, and a description of the ADANS data dictionary. ADANS is the automated system used by Headquarters AMC and the Tanker Airlift Control Center (TACC) for airlift planning and scheduling of peacetime and contingency operations as well as for deliberate planning. ADANS also supports planning and scheduling of Air Refueling Events by the TACC and the unit-level tanker schedulers. ADANS receives input in the form of movement requirements and air refueling requests. It provides a suite of tools for planners to manipulate these requirements/requests against mobility assets and to develop, analyze, and distribute schedules. Analysis tools are provided for assessing the products of the scheduling subsystems, and editing capabilities support the refinement of schedules. A reporting capability provides formatted screen, print, and/or file outputs of various standard reports. An interface subsystem handles message traffic to and from external systems. The database is an integral part of the functionality summarized above.
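The physical-design objects the DS enumerates (tables, indexes, rules, defaults) can be illustrated with a small DDL sketch. The schema below is invented for illustration using Python's built-in `sqlite3`; it is not ADANS's actual data model, and a "rule" is shown as a CHECK constraint, its closest modern equivalent.

```python
import sqlite3

# Hedged sketch of the object kinds a database specification describes:
# a table with a column default, a validation rule, and an index.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE movement_requirement (
    req_id      INTEGER PRIMARY KEY,
    origin      TEXT NOT NULL,
    destination TEXT NOT NULL,
    priority    INTEGER DEFAULT 3 CHECK (priority BETWEEN 1 AND 5)
);
CREATE INDEX idx_req_route ON movement_requirement (origin, destination);
""")

# Omitting `priority` exercises the column default.
conn.execute("INSERT INTO movement_requirement (req_id, origin, destination) "
             "VALUES (1, 'KDOV', 'ETAR')")
priority = conn.execute(
    "SELECT priority FROM movement_requirement WHERE req_id = 1").fetchone()[0]
```

A DS of the kind described would record exactly these definitions, plus entity-relationship diagrams and the data dictionary, for every table in the system.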

  11. Multi-Lab EV Smart Grid Integration Requirements Study. Providing Guidance on Technology Development and Demonstration

    Energy Technology Data Exchange (ETDEWEB)

    Markel, T. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Meintz, A. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Hardy, K. [Argonne National Lab. (ANL), Argonne, IL (United States); Chen, B. [Argonne National Lab. (ANL), Argonne, IL (United States); Bohn, T. [Argonne National Lab. (ANL), Argonne, IL (United States); Smart, J. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Scoffield, D. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Hovsapian, R. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Saxena, S. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); MacDonald, J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kiliccote, S. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kahl, K. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Pratt, R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-05-28

    The report begins with a discussion of the current state of the energy and transportation systems, followed by a summary of some VGI scenarios and opportunities. The current efforts to create foundational interface standards are detailed, and the requirements for enabling PEVs as a grid resource are presented. Existing technology demonstrations that include vehicle-to-grid functions are summarized. The report also includes a data-based discussion of the magnitude and variability of PEVs as a grid resource, followed by an overview of existing simulation tools that can be used to explore the expansion of VGI to larger grid functions that might offer system and customer value. The document concludes with a summary of the requirements and potential action items that would support greater adoption of VGI. This report is available at no cost from the National Renewable Energy Laboratory (NREL) at www.nrel.gov/publications.

  12. Do nurses provide a safe sleep environment for infants in the hospital setting? An integrative review.

    Science.gov (United States)

    Patton, Carla; Stiltner, Denise; Wright, Kelly Barnhardt; Kautz, Donald D

    2015-02-01

    Sudden infant death syndrome (SIDS) may be the most preventable cause of death for infants 0 to 6 months of age. The American Academy of Pediatrics (AAP) first published safe sleep recommendations for parents and healthcare professionals in 1992. In 1994, new guidelines were published and they became known as the "Back to Sleep" campaign. After this, a noticeable decline occurred in infant deaths from SIDS. However, this number seems to have plateaued with no continuing significant improvements in infant deaths. The objective of this review was to determine whether nurses provide a safe sleep environment for infants in the hospital setting. Research studies that dealt with nursing behaviors and nursing knowledge in the hospital setting were included in the review. A search was conducted of Google Scholar, CINAHL, PubMed, and Cochrane, using the key words "NICU," "newborn," "SIDS," "safe sleep environment," "nurse," "education," "supine sleep," "prone sleep," "safe sleep," "special care nursery," "hospital policy for safe sleep," "research," "premature," "knowledge," "practice," "health care professionals," and "parents." The review included research reports on nursing knowledge and behaviors as well as parental knowledge obtained through education and role modeling of nursing staff. Only research studies were included to ensure that our analysis was based on rigorous research-based findings. Several international studies were included because they mirrored findings noted in the United States. All studies were published between 1999 and 2012. Healthcare professionals and parents were included in the studies. They were primarily self-report surveys, designed to determine what nurses, other healthcare professionals, and parents knew or had been taught about SIDS. Integrative review. Thirteen of the 16 studies included in the review found that some nurses and some mothers continued to use nonsupine positioning. Four of the 16 studies discussed nursing knowledge and

  13. Updates on drug-target network; facilitating polypharmacology and data integration by growth of DrugBank database.

    Science.gov (United States)

    Barneh, Farnaz; Jafari, Mohieddin; Mirzaie, Mehdi

    2016-11-01

    Network pharmacology elucidates the relationship between drugs and targets. As the number of identified targets for each drug increases, the corresponding drug-target network (DTN) evolves from a mere reflection of pharmaceutical industry trends into a portrait of polypharmacology. The aim of this study was to evaluate the potential of the DrugBank database for advancing systems pharmacology. We constructed and analyzed the DTN from drug-target associations in the DrugBank 4.0 database. Our results showed that in the bipartite DTN, an increased ratio of identified targets per drug augmented the density and connectivity of drugs and targets and decreased the modular structure. To clarify the details of the network structure, the DTN was projected into two networks, namely a drug similarity network (DSN) and a target similarity network (TSN). In the DSN, various classes of Food and Drug Administration-approved drugs with distinct therapeutic categories were linked together based on shared targets. The projected TSN also showed complexity because of the promiscuity of the drugs. By including investigational drugs currently being tested in clinical trials, the networks manifested more connectivity and pictured the upcoming pharmacological space of the future years. Diverse biological processes and protein-protein interactions were manipulated by new drugs, which can extend possible target combinations. We conclude that network-based organization of DrugBank 4.0 data not only reveals the potential for repurposing existing drugs, but also allows generating novel predictions about drugs' off-targets, drug-drug interactions and their side effects. Our results also encourage further effort toward high-throughput identification of targets to build networks that can be integrated into disease networks. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
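The bipartite projection described here can be sketched directly: two drugs become linked in the DSN whenever they share at least one target (and, symmetrically, two targets link in the TSN when they share a drug). The drug-target pairs below are toy examples, not DrugBank data.

```python
# Project a bipartite drug-target mapping onto the drug side:
# an edge joins two drugs that share at least one target.

def project_drugs(drug_targets):
    """drug_targets: dict drug -> set of targets; returns the DSN edge set."""
    drugs = sorted(drug_targets)
    edges = set()
    for i, a in enumerate(drugs):
        for b in drugs[i + 1:]:
            if drug_targets[a] & drug_targets[b]:   # shared target
                edges.add((a, b))
    return edges

dtn = {"aspirin": {"PTGS1", "PTGS2"},
       "ibuprofen": {"PTGS1", "PTGS2"},
       "metformin": {"PRKAB1"}}
dsn_edges = project_drugs(dtn)   # aspirin-ibuprofen link via shared COX targets
```

Swapping the roles of drugs and targets in the same function yields the TSN projection; the density changes the authors report correspond to more shared targets producing more projected edges.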

  14. Information Interpretation Code For Providing Secure Data Integrity On Multi-Server Cloud Infrastructure

    OpenAIRE

    Sathiya Moorthy Srinivsan; Chandrasekar Chaillah

    2014-01-01

    Data security is one of the biggest concerns in cloud computing environments. Although the advantages of storing data in a cloud computing environment are considerable, a problem of data loss arises. CyberLiveApp (CLA) supports secure application development among multiple users, allowing cloud users to distinguish their view privileges when storing data. However, CyberLiveApp fails to integrate with certain multi-server cloud-based computing environments. En...

  15. Providing Power Supply to Other Use Cases Integrated in the System of Public Lighting

    OpenAIRE

    Perko, Jurica; Topić, Danijel; Šljivac, Damir

    2017-01-01

    Smart city is an attractive way of making the city more livable through intelligent solutions that are enabled by information and communication technology. Regarding the lighting system, it achieves the perfect balance between beautiful city ambience and preserving the darkness that makes cities more livable. As a smart city component, a public lighting system offers much more than light itself. Integration of other use cases has given a new dimension to the public lighting system in visual a...

  16. Integration of an OWL-DL knowledge base with an EHR prototype and providing customized information.

    Science.gov (United States)

    Jing, Xia; Kay, Stephen; Marley, Tom; Hardiker, Nicholas R

    2014-09-01

    When clinicians use electronic health record (EHR) systems, their ability to obtain general knowledge is often an important contribution to their ability to make more informed decisions. In this paper we describe a method by which an external, formal representation of clinical and molecular genetic knowledge can be integrated into an EHR such that customized knowledge can be delivered to clinicians in a context-appropriate manner. Web Ontology Language-Description Logic (OWL-DL) is a formal knowledge representation language that is widely used for creating, organizing and managing biomedical knowledge through the use of explicit definitions, consistent structure and a computer-processable format, particularly in biomedical fields. In this paper we describe: 1) integration of an OWL-DL knowledge base with a standards-based EHR prototype, 2) presentation of customized information from the knowledge base via the EHR interface, and 3) lessons learned via the process. The integration was achieved through a combination of manual and automatic methods. Our method has advantages for scaling up to and maintaining knowledge bases of any size, with the goal of assisting clinicians and other EHR users in making better informed health care decisions.
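The context-appropriate delivery idea can be reduced to its simplest form: a coded element of the EHR entry (here, a diagnosis code) selects matching entries from an external knowledge base. This is a deliberately simplified sketch; the codes and knowledge text are invented, and the actual integration used an OWL-DL ontology rather than a flat lookup table.

```python
# Hypothetical knowledge base keyed by diagnosis code (invented examples).
knowledge_base = {
    "E11": ["Type 2 diabetes: consider HbA1c monitoring.",
            "TCF7L2 variants are associated with disease risk."],
    "I10": ["Essential hypertension: lifestyle modification guidance."],
}

def knowledge_for_encounter(ehr_entry, kb):
    """Return knowledge items matching the entry's coded diagnosis."""
    return kb.get(ehr_entry["diagnosis_code"], [])

entry = {"patient_id": 42, "diagnosis_code": "E11"}
items = knowledge_for_encounter(entry, knowledge_base)
```

An ontology-backed version gains what a dict cannot offer: subsumption reasoning, so a query for a specific diagnosis can also retrieve knowledge attached to its parent concepts.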

  17. Role of Third Party Logistics Providers with Advanced it to Increase Customer Satisfaction in Supply Chain Integration

    OpenAIRE

    Zaryab Sheikh; Shafaq Rana

    2012-01-01

    The main area of change in organizational strategy is the extensive use of third-party logistics providers, who use advanced information technology tools and supply chain integration to enhance customer satisfaction. By outsourcing logistics operations, companies can focus on their core competencies and other important areas of the organization that cannot be outsourced. The analysis in this paper is conducted by discussing different concepts of supply chain integration, customer sati...

  18. A development and integration of the concentration database for relative method, k0 method and absolute method in instrumental neutron activation analysis using Microsoft Access

    International Nuclear Information System (INIS)

    Hoh Siew Sin

    2012-01-01

    Instrumental Neutron Activation Analysis (INAA) is often used to determine and calculate the concentration of an element in a sample at the National University of Malaysia, especially by students of the Nuclear Science Program. The lack of a database service leads users to take longer to calculate the concentration of an element in a sample, because they depend on software developed by foreign researchers, which is costly. To overcome this problem, a study has been carried out to build INAA database software. The objective of this study is to build database software that helps users of INAA with the Relative Method and the Absolute Method for calculating the element concentration in a sample, using Microsoft Excel 2010 and Microsoft Access 2010. The study also integrates k0 data, k0 Concent and k0-Westcott to execute and complete the system. After the integration, a study was conducted to test the effectiveness of the database software by comparing the concentrations from the experiments with those in the database. The Triple Bare Monitors Zr-Au and Cr-Mo-Au were used in Abs-INAA as monitors to determine the thermal to epithermal neutron flux ratio (f). Calculations involved in determining the concentration are the net peak area (Np), the measurement time (tm), the irradiation time (tirr), the k-factor (k), the thermal to epithermal neutron flux ratio (f), the epithermal neutron flux distribution parameter (α) and the detection efficiency (εp). For the Com-INAA database, the reference material IAEA-375 Soil was used to calculate the concentration of elements in the sample. CRMs and SRMs are also used in this database. After the INAA database integration, a verification process to examine the effectiveness of Abs-INAA was carried out by comparing the sample concentration in the database against the experiment. The experimental concentration values from the INAA database software showed high accuracy and precision. ICC
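    In its simplest form, the Relative Method mentioned above derives the sample concentration by comparing the sample's net peak area per unit mass against a co-irradiated standard of known concentration. The sketch below shows only that comparator ratio; it is an illustrative simplification that omits the decay, timing, and flux corrections (tm, tirr, f, α, εp) the abstract lists, and the function name is our own.

    ```python
    # Minimal sketch of the relative (comparator) method in INAA.
    # Np = net peak area; decay and flux corrections are deliberately omitted.

    def relative_concentration(np_sample, m_sample, np_std, m_std, c_std):
        """Element concentration in the sample, in the same units as c_std."""
        specific_sample = np_sample / m_sample   # counts per unit mass, sample
        specific_std = np_std / m_std            # counts per unit mass, standard
        return c_std * specific_sample / specific_std

    # Example: a standard containing 10 mg/kg gives 5000 counts/g;
    # the sample gives 2500 counts/g under identical conditions.
    print(relative_concentration(2500, 1.0, 5000, 1.0, 10.0))  # 5.0
    ```

    A spreadsheet or Access database, as described in the abstract, essentially automates this ratio (plus the corrections) across many peaks and samples.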

  19. Database Replication

    CERN Document Server

    Kemme, Bettina

    2010-01-01

    Database replication is widely used for fault-tolerance, scalability and performance. The failure of one database replica does not stop the system from working, as available replicas can take over the tasks of the failed replica. Scalability can be achieved by distributing the load across all replicas, and adding new replicas should the load increase. Finally, database replication can provide fast local access, even if clients are geographically distributed, if data copies are located close to them. Despite its advantages, replication is not a straightforward technique to apply, and
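    The three benefits named above (failover, load distribution, local reads) can be made concrete with a toy model. The sketch below is not any particular replication protocol, just an eager primary-copy illustration with invented class names; real systems must additionally handle partial propagation failures, ordering, and consensus.

    ```python
    # Toy primary-copy replication: writes propagate eagerly to all replicas,
    # reads can be served by a nearby replica, and a replica takes over if
    # the primary fails.

    class Replica:
        def __init__(self, name):
            self.name = name
            self.data = {}

        def apply(self, key, value):
            self.data[key] = value

    class ReplicatedDB:
        def __init__(self, names):
            self.replicas = [Replica(n) for n in names]
            self.primary = self.replicas[0]

        def write(self, key, value):
            # Eager replication: every replica applies the write before returning.
            for r in self.replicas:
                r.apply(key, value)

        def read(self, key, near=None):
            # Serve the read from a geographically close replica when given.
            r = near or self.replicas[-1]
            return r.data.get(key)

        def fail_primary(self):
            # Failover: drop the failed primary and promote a surviving replica.
            self.replicas.remove(self.primary)
            self.primary = self.replicas[0]

    db = ReplicatedDB(["eu", "us", "asia"])
    db.write("x", 1)
    db.fail_primary()
    print(db.read("x"))  # 1 — surviving replicas still serve the data
    ```

    The difficulty the abstract alludes to lies precisely in what this sketch glosses over: keeping the replicas consistent when writes, failures, and recoveries interleave.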

  20. Integrated sequence analysis pipeline provides one-stop solution for identifying disease-causing mutations.

    Science.gov (United States)

    Hu, Hao; Wienker, Thomas F; Musante, Luciana; Kalscheuer, Vera M; Kahrizi, Kimia; Najmabadi, Hossein; Ropers, H Hilger

    2014-12-01

    Next-generation sequencing has greatly accelerated the search for disease-causing defects, but even for experts the data analysis can be a major challenge. To facilitate the data processing in a clinical setting, we have developed a novel medical resequencing analysis pipeline (MERAP). MERAP assesses the quality of sequencing, and has optimized capacity for calling variants, including single-nucleotide variants, insertions and deletions, copy-number variation, and other structural variants. MERAP identifies polymorphic and known causal variants by filtering against public domain databases, and flags nonsynonymous and splice-site changes. MERAP uses a logistic model to estimate the causal likelihood of a given missense variant. MERAP considers the relevant information such as phenotype and interaction with known disease-causing genes. MERAP compares favorably with GATK, one of the widely used tools, because of its higher sensitivity for detecting indels, its easy installation, and its economical use of computational resources. Upon testing more than 1,200 individuals with mutations in known and novel disease genes, MERAP proved highly reliable, as illustrated here for five families with disease-causing variants. We believe that the clinical implementation of MERAP will expedite the diagnostic process of many disease-causing defects. © 2014 WILEY PERIODICALS, INC.
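    The abstract states that MERAP uses a logistic model to estimate the causal likelihood of a missense variant. The sketch below shows only the generic form of such a model; the feature names, weights, and function names are hypothetical placeholders, not MERAP's actual features or coefficients.

    ```python
    import math

    # Generic logistic scoring of a variant from numeric features.
    # Features and weights here are invented for illustration only.

    def logistic(z):
        """Standard logistic function, mapping any real z to (0, 1)."""
        return 1.0 / (1.0 + math.exp(-z))

    def causal_likelihood(features, weights, bias=0.0):
        """Weighted sum of features passed through the logistic function."""
        z = bias + sum(w * x for w, x in zip(weights, features))
        return logistic(z)

    # e.g. features: [conservation score, predicted damage score, allele rarity]
    score = causal_likelihood([0.9, 0.8, 0.95], [2.0, 1.5, 1.0])
    ```

    The output is a value between 0 and 1 that can be used to rank candidate missense variants for manual review.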

  1. Integrating Modeling and Monitoring to Provide Long-Term Control of Contaminants

    International Nuclear Information System (INIS)

    Fogwell, Th.

    2009-01-01

    An introduction is presented of the types of problems that exist for long-term control of radionuclides at DOE sites. A breakdown of the distributions at specific sites is given, together with the associated difficulties. A paradigm for remediation showing the integration of monitoring with modeling is presented. It is based on a feedback system that allows the monitoring to act as the principal sensors in a control system. Currently, very prescriptive monitoring programs are established with no mechanism for improving models or improving control of the contaminants. The resulting system can be optimized to improve performance. Optimizing monitoring automatically entails linking the monitoring with modeling: if monitoring designs were required to be more efficient, and thus to be optimized, then monitoring would automatically become linked to modeling. Records of decision could be written to accommodate revisions in monitoring as better modeling evolves. The technical pieces of the required paradigm are already available; they just need to be implemented and applied to solve the long-term control of the contaminants. An integration of the various parts of the system is presented. Each part is described, and examples are given. References are given to other projects which bring together similar elements in systems for the control of contaminants. Trends are given for the development of the technical features of a robust system. Examples of monitoring methods for specific sites are given. The examples are used to illustrate how such a system would work. Examples of technology needs are presented. Finally, other examples of integrated modeling-monitoring approaches are presented. (authors)

  2. Energy Consumption Database

    Science.gov (United States)

    The California Energy Commission has created this on-line database for informal reporting. The database also provides easy downloading of energy consumption data into Microsoft Excel (XLSX

  3. Interactions between parents of technology-dependent children and providers: an integrative review.

    Science.gov (United States)

    Jachimiec, Jennifer A; Obrecht, Jennifer; Kavanaugh, Karen

    2015-03-01

    This article is a review of the literature on the experiences of parents and their interactions with healthcare providers while caring for their technology-dependent child(ren) in their homes. Results are presented in the following themes: information needs, respect and partnership with healthcare providers, care coordination, and experiences with home healthcare nurses. Parents needed information and guidance and felt supported when providers recognized parents' expertise with the child's care, and offered reassurance and confirmation about their practices. Home healthcare clinicians provided supportive care in the home, but their presence created challenges for the family. By acknowledging and valuing the parents' expertise, healthcare providers can empower parents to confidently care for their child.

  4. Does Environmental Enrichment Reduce Stress? An Integrated Measure of Corticosterone from Feathers Provides a Novel Perspective

    Science.gov (United States)

    Fairhurst, Graham D.; Frey, Matthew D.; Reichert, James F.; Szelest, Izabela; Kelly, Debbie M.; Bortolotti, Gary R.

    2011-01-01

    Enrichment is widely used as a tool for managing fearfulness, undesirable behaviors, and stress in captive animals, and for studying exploration and personality. Inconsistencies in previous studies of physiological and behavioral responses to enrichment led us to hypothesize that enrichment and its removal are stressful environmental changes to which the hormone corticosterone and fearfulness, activity, and exploration behaviors ought to be sensitive. We conducted two experiments with a captive population of wild-caught Clark's nutcrackers (Nucifraga columbiana) to assess responses to short- (10-d) and long-term (3-mo) enrichment, their removal, and the influence of novelty, within the same animal. Variation in an integrated measure of corticosterone from feathers, combined with video recordings of behaviors, suggests that how individuals perceive enrichment and its removal depends on the duration of exposure. Short- and long-term enrichment elicited different physiological responses, with the former acting as a stressor and birds exhibiting acclimation to the latter. Non-novel enrichment evoked the strongest corticosterone responses of all the treatments, suggesting that the second exposure to the same objects acted as a physiological cue, and that acclimation was overridden by negative past experience. Birds showed weak behavioral responses that were not related to corticosterone. By demonstrating that an integrated measure of glucocorticoid physiology varies significantly with changes to enrichment in the absence of agonistic interactions, our study sheds light on potential mechanisms driving physiological and behavioral responses to environmental change. PMID:21412426

  5. DOT Online Database

    Science.gov (United States)

    Document database website provided by MicroSearch, offering full-text search of databases including Advisory Circulars (2092 records) and data collection and distribution policies.

  6. Proteomic biomarkers for ovarian cancer risk in women with polycystic ovary syndrome: a systematic review and biomarker database integration.

    Science.gov (United States)

    Galazis, Nicolas; Olaleye, Olalekan; Haoula, Zeina; Layfield, Robert; Atiomo, William

    2012-12-01

    To review and identify possible biomarkers for ovarian cancer (OC) in women with polycystic ovary syndrome (PCOS). Systematic literature searches of MEDLINE, EMBASE, and Cochrane using the search terms "proteomics," "proteomic," and "ovarian cancer" or "ovarian carcinoma." Proteomic biomarkers for OC were then integrated with an updated previously published database of all proteomic biomarkers identified to date in patients with PCOS. Academic department of obstetrics and gynecology in the United Kingdom. A total of 180 women identified in the six studies. Tissue samples from women with OC vs. tissue samples from women without OC. Proteomic biomarkers, proteomic technique used, and methodologic quality score. A panel of six biomarkers was overexpressed both in women with OC and in women with PCOS. These biomarkers include calreticulin, fibrinogen-γ, superoxide dismutase, vimentin, malate dehydrogenase, and lamin B2. These biomarkers could help improve our understanding of the links between PCOS and OC and could potentially be used to identify subgroups of women with PCOS at increased risk of OC. More studies are required to further evaluate the role these biomarkers play in women with PCOS and OC. Copyright © 2012 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
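    The "biomarker database integration" step described above amounts to finding the proteins reported in both conditions, i.e. a set intersection. The sketch below uses the six shared biomarkers named in the abstract; the extra entries padding each list (CA-125, transferrin) are illustrative placeholders, not taken from the published lists.

    ```python
    # Set intersection of proteomic biomarker lists: the overlap panel is
    # the biomarkers reported in both ovarian cancer (OC) and PCOS studies.

    oc_biomarkers = {"calreticulin", "fibrinogen-gamma", "superoxide dismutase",
                     "vimentin", "malate dehydrogenase", "lamin B2",
                     "CA-125"}          # illustrative extra entry
    pcos_biomarkers = {"calreticulin", "fibrinogen-gamma", "superoxide dismutase",
                       "vimentin", "malate dehydrogenase", "lamin B2",
                       "transferrin"}   # illustrative extra entry

    shared = oc_biomarkers & pcos_biomarkers
    print(sorted(shared))  # the six-biomarker overlap panel from the abstract
    ```

    In the actual review this comparison was done against an updated database of all proteomic biomarkers previously identified in PCOS patients, yielding the six-protein panel listed in the abstract.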

  7. KALIMER database development

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

    The KALIMER database is an advanced database using Web applications for integrated management of liquid metal reactor design technology development. The KALIMER design database is composed of a results database, Inter-Office Communication (IOC), a 3D CAD database, and a reserved documents database. The results database holds research results from all phases of liquid metal reactor design technology development under the mid-term and long-term nuclear R and D program. IOC is a linkage control system between sub-projects, used to share and integrate the research results for KALIMER. The 3D CAD database is a schematic overview of the KALIMER design structure. And the reserved documents database was developed to manage the documents and reports produced since project accomplishment.

  8. KALIMER database development

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

    The KALIMER database is an advanced database using Web applications for integrated management of liquid metal reactor design technology development. The KALIMER design database is composed of a results database, Inter-Office Communication (IOC), a 3D CAD database, and a reserved documents database. The results database holds research results from all phases of liquid metal reactor design technology development under the mid-term and long-term nuclear R and D program. IOC is a linkage control system between sub-projects, used to share and integrate the research results for KALIMER. The 3D CAD database is a schematic overview of the KALIMER design structure. And the reserved documents database was developed to manage the documents and reports produced since project accomplishment

  9. RODOS database adapter

    International Nuclear Information System (INIS)

    Xie Gang

    1995-11-01

    Integrated data management is an essential aspect of many automated information systems such as RODOS, a real-time on-line decision support system for nuclear emergency management. In particular, the application software must provide access management to different commercial database systems. This report presents the tools necessary for adapting embedded SQL applications to both HP-ALLBASE/SQL and CA-Ingres/SQL databases. The design of the database adapter and the concept of the RODOS embedded SQL syntax are discussed by considering some of the most important features of SQL functions and identifying significant differences between SQL implementations. Finally, the software developed and the administrator's and installation guides are described. (orig.) [de
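    The adapter described above hides vendor differences behind a single access interface, so application code never talks to a specific SQL implementation directly. The sketch below shows the same idea in miniature, with sqlite3 standing in for HP-ALLBASE/SQL or CA-Ingres/SQL; the class and method names are our own, and a real adapter would also translate dialect differences in the SQL text itself.

    ```python
    import sqlite3

    # Adapter pattern: application code depends only on DatabaseAdapter;
    # only the adapter knows which backend is actually in use.

    class DatabaseAdapter:
        def __init__(self, conn):
            self.conn = conn

        def execute(self, sql, params=()):
            # A vendor-specific adapter would rewrite dialect differences here
            # before handing the statement to the backend.
            cur = self.conn.execute(sql, params)
            self.conn.commit()
            return cur.fetchall()

    adapter = DatabaseAdapter(sqlite3.connect(":memory:"))
    adapter.execute("CREATE TABLE readings (station TEXT, dose REAL)")
    adapter.execute("INSERT INTO readings VALUES (?, ?)", ("A1", 0.12))
    rows = adapter.execute("SELECT station, dose FROM readings")
    print(rows)  # [('A1', 0.12)]
    ```

    Swapping databases then means changing only the connection handed to the adapter, which is the portability goal the RODOS report pursues for its embedded SQL applications.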

  10. The UCSC Genome Browser Database: 2008 update

    DEFF Research Database (Denmark)

    Karolchik, D; Kuhn, R M; Baertsch, R

    2007-01-01

    The University of California, Santa Cruz, Genome Browser Database (GBD) provides integrated sequence and annotation data for a large collection of vertebrate and model organism genomes. Seventeen new assemblies have been added to the database in the past year, for a total coverage of 19 vertebrat...

  11. An Interoperable Cartographic Database

    OpenAIRE

    Slobodanka Ključanin; Zdravko Galić

    2007-01-01

    The concept of producing a prototype of interoperable cartographic database is explored in this paper, including the possibilities of integration of different geospatial data into the database management system and their visualization on the Internet. The implementation includes vectorization of the concept of a single map page, creation of the cartographic database in an object-relation database, spatial analysis, definition and visualization of the database content in the form of a map on t...

  12. An integrated decision making model for the selection of sustainable forward and reverse logistic providers

    DEFF Research Database (Denmark)

    Govindan, Kannan; Agarwal, Vernika; Darbari, Jyoti Dhingra

    2017-01-01

    Due to rising concerns for environmental sustainability, the Indian electronic industry faces immense pressure to incorporate effective sustainable practices into the supply chain (SC) planning. Consequently, manufacturing enterprises (ME) are exploring the option of re-examining their SC...... strategies and taking a formalized approach towards a sustainable partnership with logistics providers. To begin with, it is imperative to associate with sustainable forward and reverse logistics providers to manage effectively the upward and downstream flows simultaneously. In this context, this paper...... improve the sustainable performance value of the SC network and secure reasonable profits. The managerial implications drawn from the result analysis provide a sustainable framework to the ME for enhancing its corporate image....

  13. The CUTLASS database facilities

    International Nuclear Information System (INIS)

    Jervis, P.; Rutter, P.

    1988-09-01

    The enhancement of the CUTLASS database management system to provide improved facilities for data handling is seen as a prerequisite to its effective use for future power station data processing and control applications. This particularly applies to the larger projects such as AGR data processing system refurbishments, and the data processing systems required for the new Coal Fired Reference Design stations. In anticipation of the need for improved data handling facilities in CUTLASS, the CEGB established a User Sub-Group in the early 1980's to define the database facilities required by users. Following the endorsement of the resulting specification and a detailed design study, the database facilities have been implemented as an integral part of the CUTLASS system. This paper provides an introduction to the range of CUTLASS Database facilities, and emphasises the role of Database as the central facility around which future Kit 1 and (particularly) Kit 6 CUTLASS based data processing and control systems will be designed and implemented. (author)

  14. Science in Sync: Integrating Science with Literacy Provides Rewarding Learning Opportunities in Both Subjects

    Science.gov (United States)

    Wallace, Carolyn S.; Coffey, Debra

    2016-01-01

    The "Next Generation Science Standards'" ("NGSS") eight scientific and engineering practices invite teachers to develop key investigative skills while addressing important disciplinary science ideas (NGSS Lead States 2013). The "NGSS" can also provide direct links to "Common Core English Language Arts…

  15. Integrating inventory control and capacity management at a maintenance service provider

    NARCIS (Netherlands)

    Buyukkaramikli, N.C.; Ooijen, van H.P.G.; Bertrand, J.W.M.

    2015-01-01

    In this paper, we study the capacity flexibility problem of a maintenance service provider, who is running a repair shop and is responsible for the availability of numerous specialized systems which contain a critical component that is prone to failure. Upon a critical component failure, the

  16. Integrating Family as a Discipline by Providing Parent Led Curricula: Impact on LEND Trainees' Leadership Competency.

    Science.gov (United States)

    Keisling, Bruce L; Bishop, Elizabeth A; Roth, Jenness M

    2017-05-01

    Background: While the MCH Leadership Competencies and family as a discipline have been required elements of Leadership Education in Neurodevelopmental and related Disabilities (LEND) programs for over a decade, little research has been published on the efficacy of either programmatic component in the development of the next generation of leaders who can advocate and care for Maternal and Child Health (MCH) populations. Objective: To test the effectiveness of integrating the family discipline through implementation of parent led curricula on trainees' content knowledge, skills, and leadership development in family-centered care, according to the MCH Leadership Competencies. Methods: One hundred and two long-term (≥ 300 h) LEND trainees completed a clinical and leadership training program which featured intensive parent led curricula supported by a full-time family faculty member. Trainees rated themselves on the five Basic and Advanced skill items that comprise MCH Leadership Competency 8: Family-centered Care at the beginning and conclusion of their LEND traineeship. Results: When compared to their initial scores, trainees rated themselves significantly higher across all family-centered leadership competency items at the completion of their LEND traineeship. Conclusions: The intentional engagement of a full-time family faculty member and parent led curricula that include didactic and experiential components are associated with greater identification and adoption by trainees of family-centered attitudes, skills, and practices. However, the use of the MCH Leadership Competencies as a quantifiable measure of program evaluation, particularly leadership development, is limited.

  17. Nuclear reactors in de-regulated markets: Integration between providers and customers?

    International Nuclear Information System (INIS)

    Girard, Philippe

    2006-01-01

    The deregulation of electricity markets has in most cases coincided with the end of state monopolies, where financial risks were borne by customers/citizens. Today, despite an economic advantage, nuclear power development faces two main problems: public acceptance and reticence of investors (banks, utilities shareholders). The development of electricity markets provides different financial instruments in order to hedge financial risks, but it is currently difficult to fix forward contracts for more than three to four years, and this period is insufficient for the financing of a nuclear reactor. A solution could be the evolution of nuclear providers into nuclear operators selling electricity (MWh) rather than selling nuclear capacity (MW), nuclear fuel and services. In this case, their customers would be utilities and big customers aiming to hedge a part of their supplies with long-term contracts or stakes in nuclear reactors without some nuclear constraints. (author)

  18. Falls prevention among older people and care providers: protocol for an integrative review

    OpenAIRE

    Cuesta Benjumea, Carmen de la; Henriques, Maria Adriana; Abad Corpa, Eva; Roe, Brenda; Orts-Cortés, María Isabel; Lidón-Cerezuela, Beatriz; Avendaño-Céspedes, Almudena; Oliver-Carbonell, José Luis; Sánchez Ardila, Carmen

    2017-01-01

    Aim. To review the evidence about the role of care providers in fall prevention in older adults aged ≥ 65 years; this includes their views, strategies, and approaches on falls prevention and the effectiveness of nursing interventions. Background. Some fall prevention programmes are successfully implemented and led by nurses, and the vital role they play in developing plans for fall prevention is acknowledged. Nevertheless, there has not been a systematic review of the literature that describes ...

  19. A payer-provider partnership for integrated care of patients receiving dialysis.

    Science.gov (United States)

    Kindy, Justin; Roer, David; Wanovich, Robert; McMurray, Stephen

    2018-04-01

    Patients with end-stage renal disease (ESRD) are clinically complex, requiring intensive and costly care. Coordinated care may improve outcomes and reduce costs. The objective of this study was to determine the impact of a payer-provider care partnership on key clinical and economic outcomes in enrolled patients with ESRD.  Retrospective observational study. Data on patient demographics and clinical outcomes were abstracted from the electronic health records of the dialysis provider. Data on healthcare costs were collected from payer claims. Data were collected for a baseline period prior to initiation of the partnership (July 2011-June 2012) and for two 12-month periods following initiation (April 2013-March 2014 and April 2014-March 2015). Among both Medicare Advantage and commercial insurance program members, the rate of central venous catheter use for vascular access was lower following initiation of the partnership compared with the baseline period. Likewise, hospital admission rates, emergency department visit rates, and readmission rates were lower following partnership initiation. Rates of influenza and pneumococcal vaccination were higher than 95% throughout all 3 time periods. Total medical costs were lower for both cohorts of members in the second 12-month period following partnership initiation compared with the baseline period. Promising trends were observed among members participating in this payer-provider care partnership with respect to both clinical and economic outcomes. This suggests that collaborations with shared incentives may be a valuable approach for patients with ESRD.

  20. Integrating Self-Determination and Job Demands-Resources Theory in Predicting Mental Health Provider Burnout.

    Science.gov (United States)

    Dreison, Kimberly C; White, Dominique A; Bauer, Sarah M; Salyers, Michelle P; McGuire, Alan B

    2018-01-01

    Limited progress has been made in reducing burnout in mental health professionals. Accordingly, we identified factors that might protect against burnout and could be productive focal areas for future interventions. Guided by self-determination theory, we examined whether supervisor autonomy support, self-efficacy, and staff cohesion predict provider burnout. 358 staff from 13 agencies completed surveys. Higher levels of supervisor autonomy support, self-efficacy, and staff cohesion were predictive of lower burnout, even after accounting for job demands. Although administrators may be limited in their ability to reduce job demands, our findings suggest that increasing core job resources may be a viable alternative.

  1. Integrating Project Management, Product Design with Industry Sponsored Projects provides Stimulating Senior Capstone Experiences

    Directory of Open Access Journals (Sweden)

    Phillip A. Sanger

    2011-07-01

    Full Text Available

    Abstract — Many students are uncomfortable with real world engineering problems where needs and requirements must be concretely defined and the selection of design solutions is not black and white. This paper describes a two semester, multi-disciplinary senior capstone project for students from three Engineering and Technology Department programs (electrical engineering, electrical and computer engineering technology, and engineering technology) that brings together the tools of project management and the creative product development process into industry sponsored projects. The projects are fully integrated with the Center for Rapid Product Realization, with its dual goals of economic development and enhanced learning. The stage/gate development process is used, with six formal reviews covering the development of the proposal through to the fabrication and testing of the project's output. Over the past four years thirty-five (35) projects have been undertaken with students getting an exciting

  2. Facilitating quality control for spectra assignments of small organic molecules: nmrshiftdb2--a free in-house NMR database with integrated LIMS for academic service laboratories.

    Science.gov (United States)

    Kuhn, Stefan; Schlörer, Nils E

    2015-08-01

    With its laboratory information management system, nmrshiftdb2 supports the integration of electronic lab administration and management into academic NMR facilities. It also offers the setup of a local database, while full access to nmrshiftdb2's World Wide Web database is granted. For lab users, this freely available system allows the submission of orders for measurement, transfers recorded data automatically or manually, and enables download of spectra via the web interface, as well as integrated access to the prediction, search, and assignment tools of the NMR database. For the staff and lab administration, the flow of all orders can be supervised; administrative tools also include user and hardware management, a statistics function for accounting purposes, and a 'QuickCheck' function for assignment control, to facilitate quality control of assignments submitted to the (local) database. The laboratory information management system and database are based on a web interface as front end and are therefore independent of the operating system in use. Copyright © 2015 John Wiley & Sons, Ltd.

  3. Issues in Big-Data Database Systems

    Science.gov (United States)

    2014-06-01

    that big data will not be manageable using conventional relational database technology, and it is true that alternative paradigms, such as NoSQL systems and search engines, have much to offer...scale well, and because integration with external data sources is so difficult. NoSQL systems are more open to this integration, and provide excellent

  4. Implementing Information and Communication Technology to Support Community Aged Care Service Integration: Lessons from an Australian Aged Care Provider

    Directory of Open Access Journals (Sweden)

    Heather E Douglas

    2017-04-01

    Full Text Available Introduction: There is limited evidence of the benefits of information and communication technology (ICT) to support integrated aged care services. Objectives: We undertook a case study to describe carelink+, a centralised client service management ICT system implemented by a large aged and community care service provider, Uniting. We sought to explicate the care-related information exchange processes associated with carelink+ and identify lessons for organisations attempting to use ICT to support service integration. Methods: Our case study included seventeen interviews and eleven observation sessions with a purposive sample of staff within the organisation. Inductive analysis was used to develop a model of ICT-supported information exchange. Results: Management staff described the integrated care model designed to underpin carelink+. Frontline staff described complex information exchange processes supporting coordination of client services. Mismatches between the data quality and the functions carelink+ was designed to support necessitated the evolution of new work processes associated with the system. Conclusions: There is value in explicitly modelling the work processes that emerge as a consequence of ICT. Continuous evaluation of the match between ICT and work processes will help aged care organisations to achieve higher levels of ICT maturity that support their efforts to provide integrated care to clients.

  5. Implementing Information and Communication Technology to Support Community Aged Care Service Integration: Lessons from an Australian Aged Care Provider.

    Science.gov (United States)

    Douglas, Heather E; Georgiou, Andrew; Tariq, Amina; Prgomet, Mirela; Warland, Andrew; Armour, Pauline; Westbrook, Johanna I

    2017-04-10

    There is limited evidence of the benefits of information and communication technology (ICT) to support integrated aged care services. We undertook a case study to describe carelink+, a centralised client service management ICT system implemented by a large aged and community care service provider, Uniting. We sought to explicate the care-related information exchange processes associated with carelink+ and identify lessons for organisations attempting to use ICT to support service integration. Our case study included seventeen interviews and eleven observation sessions with a purposive sample of staff within the organisation. Inductive analysis was used to develop a model of ICT-supported information exchange. Management staff described the integrated care model designed to underpin carelink+. Frontline staff described complex information exchange processes supporting coordination of client services. Mismatches between the data quality and the functions carelink+ was designed to support necessitated the evolution of new work processes associated with the system. There is value in explicitly modelling the work processes that emerge as a consequence of ICT. Continuous evaluation of the match between ICT and work processes will help aged care organisations to achieve higher levels of ICT maturity that support their efforts to provide integrated care to clients.

  6. Implementing Information and Communication Technology to Support Community Aged Care Service Integration: Lessons from an Australian Aged Care Provider

    Science.gov (United States)

    Georgiou, Andrew; Tariq, Amina; Prgomet, Mirela; Warland, Andrew; Armour, Pauline; Westbrook, Johanna I

    2017-01-01

    Introduction: There is limited evidence of the benefits of information and communication technology (ICT) to support integrated aged care services. Objectives: We undertook a case study to describe carelink+, a centralised client service management ICT system implemented by a large aged and community care service provider, Uniting. We sought to explicate the care-related information exchange processes associated with carelink+ and identify lessons for organisations attempting to use ICT to support service integration. Methods: Our case study included seventeen interviews and eleven observation sessions with a purposive sample of staff within the organisation. Inductive analysis was used to develop a model of ICT-supported information exchange. Results: Management staff described the integrated care model designed to underpin carelink+. Frontline staff described complex information exchange processes supporting coordination of client services. Mismatches between the data quality and the functions carelink+ was designed to support necessitated the evolution of new work processes associated with the system. Conclusions: There is value in explicitly modelling the work processes that emerge as a consequence of ICT. Continuous evaluation of the match between ICT and work processes will help aged care organisations to achieve higher levels of ICT maturity that support their efforts to provide integrated care to clients. PMID:29042851

  7. Geobiology of the Critical Zone: the Hierarchies of Process, Form and Life provide an Integrated Ontology

    Science.gov (United States)

    Cotterill, Fenton P. D.

    2016-04-01

    geomorphology characterize Africa's older surfaces, many of which qualify as palimpsests: overwritten and reshaped repeatedly over timescales of 10 000-100 000 000 yr. Inheritance, equifinality, and exhumation are commonly invoked to explain such landscape patterns, but are difficult to measure and thus test; here Africa's vast, deep regoliths epitomize the starkness of these challenges facing researchers across much of the continent. These deficiencies and problems are magnified when we consider the knowledge we seek of African landscape evolution toward resolving the complex history of the African plate since its individuation. The credentials of this knowledge are prescribed by the evidence needed to test competing hypotheses, especially invoking first order determinants of landscape dynamics e.g. membrane tectonics (Oxburgh ER & Turcotte DL 1974. Earth Planet. Sci. Lett. 22:133-140) versus plumes (Foulger G 2013. Plates vs Plumes: A Geological Controversy. Wiley Blackwell). The evidence needed to test such competing hypotheses demands robust reconstructions of the individuated histories of landforms; in the African context, robustness pertains to the representativeness of events reconstructed in form and space (up to continental scales) and back through time from the Neogene into the Late Mesozoic. The ideal map of quantitative evidence must aim to integrate salient details in the trajectories of individuated landforms representing the principal landscapes of all Africa's margins, basins and watersheds. This in turn demands measurements - in mesoscale detail - of relief, drainage and regolith back though time, wherever keystone packages of evidence have survived Gondwana break up and its aftermath. Such a strategy is indeed ambitious, and it may well be dismissed as impractical. Nevertheless, the alternatives fall short. 
If it is to be representative of the history it purports to explain, we need the mesoscale facts to inform any narrative of a larger landscape (regional

  8. The Hydrograph Analyst, an Arcview GIS Extension That Integrates Point, Spatial, and Temporal Data Provides A Graphical User Interface for Hydrograph Analysis

    International Nuclear Information System (INIS)

    Jones, M.L.; O'Brien, G.M.; Jones, M.L.

    2000-01-01

    The Hydrograph Analyst (HA) is an ArcView GIS 3.2 extension developed by the authors to analyze hydrographs from a network of ground-water wells and springs in a regional ground-water flow model. ArcView GIS integrates geographic, hydrologic, and descriptive information and provides the base functionality needed for hydrograph analysis. The HA extends ArcView's base functionality by automating data integration procedures and by adding capabilities to visualize and analyze hydrologic data. Data integration procedures were automated by adding functionality to the View document's Document Graphical User Interface (DocGUI). A menu allows the user to query a relational database and select sites, which are displayed as a point theme in a View document. An "Identify One to Many" tool is provided within the View DocGUI to retrieve all hydrologic information for a selected site and display it in a simple and concise tabular format; for example, the display could contain various records from many tables storing data for one site. Another HA menu allows the user to generate a hydrograph for sites selected from the point theme. Hydrographs generated by the HA are added as hydrograph documents and accessed by the user with the Hydrograph DocGUI, which contains tools and buttons for hydrograph analysis. The Hydrograph DocGUI has a "Select By Polygon" tool for isolating particular points on the hydrograph inside a user-drawn polygon; alternatively, the user can isolate the same points by constructing a logical expression with the ArcView GIS "Query Builder" dialog, which is also accessible in the Hydrograph DocGUI. Other buttons can be selected to alter the query applied to the active hydrograph. The selected points on the active hydrograph can be attributed (or flagged) individually or as a group using the "Flag" tool found on the Hydrograph DocGUI. 
The "Flag" tool activates a dialog box that prompts the user to select an attribute and "methods" or "conditions" that qualify
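The "Select By Polygon" and "Flag" workflow described above can be sketched in code. The following is a minimal Python illustration with invented point and polygon structures; the actual extension implements this through ArcView GIS 3.2 dialogs and Avenue scripts, not Python.

```python
# Hypothetical sketch: choose hydrograph points (time, level) that fall
# inside a user-drawn polygon, then attach a flag attribute to the selection.

def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside the polygon (list of vertices)?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def flag_points(points, polygon, attribute, value):
    """Flag every hydrograph point inside the polygon with attribute=value."""
    for p in points:
        if point_in_polygon(p["t"], p["level"], polygon):
            p[attribute] = value
    return points

# Hydrograph as (time, level) pairs; flag an anomalous cluster.
hydrograph = [{"t": t, "level": lv} for t, lv in
              [(1, 10.2), (2, 10.1), (3, 14.9), (4, 15.1), (5, 10.0)]]
box = [(2.5, 14.0), (4.5, 14.0), (4.5, 16.0), (2.5, 16.0)]  # user-drawn polygon
flagged = flag_points(hydrograph, box, "quality", "suspect")
print([p.get("quality") for p in flagged])
# → [None, None, 'suspect', 'suspect', None]
```

The query-based alternative ("Query Builder") would amount to a logical expression over the same attributes, e.g. selecting points where `level > 14`.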

  9. NASA's Functional Task Test: Providing Information for an Integrated Countermeasure System

    Science.gov (United States)

    Bloomberg, J. J.; Feiveson, A. H.; Laurie, S. S.; Lee, S. M. C.; Mulavara, A. P.; Peters, B. T.; Platts, S. H.; Ploutz-Snyder, L. L.; Reschke, M. F.; Ryder, J. W.; hide

    2015-01-01

    postural stability (i.e. hatch opening, ladder climb, manual manipulation of objects and tool use) showed little reduction in performance. These changes in functional performance were paralleled by similar decrements in sensorimotor tests designed to specifically assess postural equilibrium and dynamic gait control. Bed rest subjects experienced deficits similar to those of spaceflight subjects, both in functional tests with balance challenges and in sensorimotor tests designed to evaluate postural and gait control, indicating that body-support unloading experienced during spaceflight plays a central role in post-flight alteration of functional task performance. To determine how differences in body-support loading experienced during in-flight treadmill exercise affect post-flight functional performance, the loading history for each subject during in-flight treadmill (T2) exercise was correlated with post-flight measures of performance. ISS crewmembers who walked on the treadmill with higher pull-down loads had enhanced post-flight performance on tests requiring mobility. Taken together, the spaceflight and bed rest data point to the importance of supplementing in-flight exercise countermeasures with balance and sensorimotor adaptability training. These data also support the notion that in-flight treadmill exercise performed with higher body loading provides sensorimotor benefits, leading to improved performance on functional tasks that require dynamic postural stability and mobility.

  10. Integration of mental health resources in a primary care setting leads to increased provider satisfaction and patient access.

    Science.gov (United States)

    Vickers, Kristin S; Ridgeway, Jennifer L; Hathaway, Julie C; Egginton, Jason S; Kaderlik, Angela B; Katzelnick, David J

    2013-01-01

    This evaluation assessed the opinions and experiences of primary care providers and their support staff before and after implementation of expanded on-site mental health services and related system changes in a primary care clinic. Individual semistructured interviews, which contained a combination of open-ended questions and rating scales, were used to elicit opinions about mental health services before on-site system and resource changes occurred and repeated following changes that were intended to improve access to on-site mental health care. In the first set of interviews, prior to expanding mental health services, primary care providers and support staff were generally dissatisfied with the availability and scheduling of on-site mental health care. Patients were often referred outside the primary care clinic for mental health treatment, to the detriment of communication and coordinated care. Follow-up interviews conducted after expansion of mental health services, scheduling refinements and other system changes revealed improved provider satisfaction in treatment access and coordination of care. Providers appreciated immediate and on-site social worker availability to triage mental health needs and help access care, and on-site treatment was viewed as important for remaining informed about patient care the primary care providers are not delivering directly. Expanding integrated mental health services resulted in increased staff and provider satisfaction. Our evaluation identified key components of satisfaction, including on-site collaboration and assistance triaging patient needs. The sustainability of integrated models of care requires additional study. © 2013.

  11. Partial valuation of the goods and services provided by the mangrove ecosystem: An integrated ecological-economic analysis

    International Nuclear Information System (INIS)

    Castiblanco R, Carmenza

    2002-01-01

    The article presents a methodology to value the economic benefits of the use of some goods and services provided by the mangrove ecosystem located in the municipality of Tumaco. An ecological analysis is developed and integrated with an economic evaluation, allowing some use values of the mangrove to be expressed in monetary terms; these values are compared with the profitability reported by camaroniculture (shrimp farming), the productive activity that currently constitutes the most profitable alternative use

  12. Implementing Information and Communication Technology to Support Community Aged Care Service Integration: Lessons from an Australian Aged Care Provider

    OpenAIRE

    Douglas, Heather E; Georgiou, Andrew; Tariq, Amina; Prgomet, Mirela; Warland, Andrew; Armour, Pauline; Westbrook, Johanna I

    2017-01-01

    Introduction: There is limited evidence of the benefits of information and communication technology (ICT) to support integrated aged care services. Objectives: We undertook a case study to describe carelink+, a centralised client service management ICT system implemented by a large aged and community care service provider, Uniting. We sought to explicate the care-related information exchange processes associated with carelink+ and identify lessons for organisations attempting to use ICT to su...

  13. An Integral Model to Provide Reactive and Proactive Services in an Academic CSIRT Based on Business Intelligence

    OpenAIRE

    Walter Fuertes; Francisco Reyes; Paúl Valladares; Freddy Tapia; Theofilos Toulkeridis; Ernesto Pérez

    2017-01-01

    Cyber-attacks have increased in severity and complexity, requiring CERTs/CSIRTs to research and develop new security tools. Our study therefore focuses on the design of an integral model based on Business Intelligence (BI), which provides reactive and proactive services in a CSIRT, in order to alert on and reduce any suspicious or malicious activity on information systems and data networks. To achieve this purpose, a solution has been assembled that generates information stores, bein...

  14. Integration for coexistence? Implementation of intercultural health care policy in Ghana from the perspective of service users and providers.

    Science.gov (United States)

    Gyasi, Razak Mohammed; Poku, Adjoa Afriyie; Boateng, Simon; Amoah, Padmore Adusei; Mumin, Alhassan Abdul; Obodai, Jacob; Agyemang-Duah, Williams

    2017-01-01

    In spite of the World Health Organization's recommendations over the past decades, Ghana features a pluralistic rather than a truly integrated medical system. Policies on integrating complementary medicine into the national health care delivery system need to account for individual-level involvement and the cultural acceptability of care rendered by health care providers. Studies in Ghana, however, have glossed over the standpoint of those experiencing the illness episode regarding the intercultural health care policy framework. This paper explores health care users' and providers' experiences of, and attitudes towards, the implementation of intercultural health care policy in Ghana. In-depth interviews, augmented with informal conversations, were conducted with 16 health service users, 7 traditional healers and 6 health professionals in the Sekyere South District and Kumasi Metropolis in the Ashanti Region of Ghana. Data were thematically analysed and presented based on an a posteriori inductive reduction approach. Findings reveal a widespread positive attitude to, and support for, integrative medical care in Ghana. However, inter-provider communication in the form of cross-referrals and collaborative mechanisms between healers and health professionals seldom occurs and remains unofficially sanctioned. Traditional healers and health care professionals are sceptical about intercultural health care policy, mainly due to inadequate political commitment to provider education. Medical practitioners have limited opportunity to undergo training for integrative medical practice. We also find serious mistrust between the practitioners due to the "diversity of healing approaches and techniques." Weak institutional support, lack of training to meet standards of practice, poor registration and regulatory measures, as well as negative perceptions of the integrative medical policy, inhibit its implementation in Ghana. 
In order to advance any useful intercultural health care policy in

  15. Software listing: CHEMTOX database

    International Nuclear Information System (INIS)

    Moskowitz, P.D.

    1993-01-01

    Initially launched in 1983, the CHEMTOX Database was among the first microcomputer databases containing hazardous chemical information. The database is used in many industries and government agencies in more than 17 countries. Updated quarterly, the CHEMTOX Database provides detailed environmental and safety information on 7500-plus hazardous substances covered by dozens of regulatory and advisory sources. This brief listing describes the method of accessing data and provides ordering information for those wishing to obtain the CHEMTOX Database

  16. Tight-coupling of groundwater flow and transport modelling engines with spatial databases and GIS technology: a new approach integrating Feflow and ArcGIS

    Directory of Open Access Journals (Sweden)

    Ezio Crestaz

    2012-09-01

    Full Text Available The implementation of groundwater flow and transport numerical models is generally a challenging, time-consuming and financially demanding task, in the charge of specialized modelers and consulting firms. At a later stage, within clearly stated limits of applicability, these models are often expected to be made available to less knowledgeable personnel to support the design and running of predictive simulations within environments more familiar than specialized simulation systems. GIS systems coupled with spatial databases appear to be ideal candidates to address this problem, due to their much wider diffusion and the availability of expertise. The current paper discusses the issue from a tight-coupling architecture perspective, aimed at the integration of spatial databases, GIS and numerical simulation engines, addressing both observed- and computed-data management, retrieval and spatio-temporal analysis issues. Observed data can be migrated to the central database repository and then used to set up transient simulation conditions in the background, at run time, while limiting additional complexity and integrity-failure risks such as data duplication during data transfer through proprietary file formats. Similarly, simulation scenarios can be set up in a familiar GIS system and stored in the spatial database for later reference. As the numerical engine is tightly coupled with the GIS, simulations can be run within the environment and the results themselves saved to the database. Further tasks, such as spatio-temporal analysis (e.g. for post-calibration auditing, cartography production and geovisualization, can then be addressed using traditional GIS tools. Benefits of such an approach include more effective data-management practices, the integration and availability of modeling facilities in a familiar environment, and the streamlining of spatial analysis processes and geovisualization requirements for the non-modeler community. Major drawbacks include limited 3D and time-dependent support in
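The tight-coupling pattern described in this abstract (observed and computed data living in one central repository, queried at run time) can be illustrated with a toy sketch. The schema, table and column names below are invented; a real deployment would use Feflow's programming interface and an enterprise spatial database rather than in-memory SQLite.

```python
import sqlite3

# Hypothetical central repository: observed and simulated heads share one
# table keyed by well, timestamp and scenario, so post-calibration auditing
# becomes a query instead of a file transfer through proprietary formats.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE head (
    well TEXT, x REAL, y REAL, t TEXT,
    kind TEXT,          -- 'observed' or 'computed'
    scenario TEXT,      -- NULL for observations
    value REAL)""")
rows = [
    ("W1", 10.0, 20.0, "2012-01-01", "observed", None,   102.3),
    ("W1", 10.0, 20.0, "2012-01-01", "computed", "base", 101.9),
    ("W2", 14.0, 22.0, "2012-01-01", "observed", None,    98.7),
    ("W2", 14.0, 22.0, "2012-01-01", "computed", "base",  99.5),
]
db.executemany("INSERT INTO head VALUES (?,?,?,?,?,?,?)", rows)

# Post-calibration audit: observed-minus-computed residual per well/time.
audit = db.execute("""
    SELECT o.well, o.t, ROUND(o.value - c.value, 2) AS residual
    FROM head o JOIN head c
      ON o.well = c.well AND o.t = c.t
    WHERE o.kind = 'observed' AND c.kind = 'computed'
      AND c.scenario = 'base'
    ORDER BY o.well""").fetchall()
print(audit)  # → [('W1', '2012-01-01', 0.4), ('W2', '2012-01-01', -0.8)]
```

Because the simulation engine writes into the same store, a new scenario simply adds rows under another `scenario` label and the same audit query applies unchanged.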

  17. Experiment Databases

    Science.gov (United States)

    Vanschoren, Joaquin; Blockeel, Hendrik

    Next to running machine learning algorithms based on inductive queries, much can be learned by immediately querying the combined results of many prior studies. Indeed, all around the globe, thousands of machine learning experiments are being executed on a daily basis, generating a constant stream of empirical information on machine learning techniques. While the information contained in these experiments might have many uses beyond their original intent, results are typically described very concisely in papers and discarded afterwards. If we properly store and organize these results in central databases, they can be immediately reused for further analysis, thus boosting future research. In this chapter, we propose the use of experiment databases: databases designed to collect all the necessary details of these experiments, and to intelligently organize them in online repositories to enable fast and thorough analysis of a myriad of collected results. They constitute an additional, queryable source of empirical meta-data based on principled descriptions of algorithm executions, without reimplementing the algorithms in an inductive database. As such, they engender a very dynamic, collaborative approach to experimentation, in which experiments can be freely shared, linked together, and immediately reused by researchers all over the world. They can be set up for personal use, to share results within a lab or to create open, community-wide repositories. Here, we provide a high-level overview of their design, and use an existing experiment database to answer various interesting research questions about machine learning algorithms and to verify a number of recent studies.
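The kind of meta-level question the chapter has in mind ("which algorithm wins on average across all stored runs?") becomes a one-line query once experiments are stored with principled descriptions. The schema and numbers below are invented for illustration and are not drawn from any actual experiment repository.

```python
import sqlite3

# Toy experiment database: each row is one algorithm execution with its
# dataset, parameter settings, and measured result.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE run (
    algorithm TEXT, dataset TEXT, params TEXT, accuracy REAL)""")
db.executemany("INSERT INTO run VALUES (?,?,?,?)", [
    ("C4.5", "iris",   "default", 0.94),
    ("C4.5", "letter", "default", 0.88),
    ("kNN",  "iris",   "k=5",     0.96),
    ("kNN",  "letter", "k=5",     0.95),
])

# Meta-analysis as a query: mean accuracy per algorithm, best first.
best = db.execute("""
    SELECT algorithm, ROUND(AVG(accuracy), 3) AS mean_acc
    FROM run GROUP BY algorithm
    ORDER BY mean_acc DESC""").fetchall()
print(best)  # → [('kNN', 0.955), ('C4.5', 0.91)]
```

Richer repositories would also record hardware, software versions and evaluation procedures, so that such comparisons can be restricted to runs that are actually commensurable.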

  18. Forming and sustaining partnerships to provide integrated services for young people: an overview based on the headspace Geelong experience.

    Science.gov (United States)

    Callaly, Tom; von Treuer, Kathryn; van Hamond, Toni; Windle, Kelly

    2011-02-01

    To discuss critical considerations in the formation and maintenance of agency partnerships designed to provide integrated care for young people. Two years after its establishment, an evaluation of the headspace Barwon collaboration and a review of the health-care and management literature on agency collaboration were conducted. The principal findings together with the authors' experience working at establishing and maintaining the partnership are used to discuss critical issues in forming and maintaining inter-agency partnerships. Structural and process considerations are necessary but not sufficient for the successful formation and maintenance of inter-agency partnerships and integrated care provision. Specifically, organizational culture change and staff engagement is a significant challenge and planning for this is essential and often neglected. Although agreeing on common goals and objectives is an essential first step in forming partnerships designed to provide integrated care, goodwill is not enough, and the literature consistently shows that most collaborations fail to meet their objectives. Principles and lessons of organizational behaviour and management practices in the business sector can contribute a great deal to partnership planning. © 2011 Blackwell Publishing Asia Pty Ltd.

  19. Improving sexual health communication between older women and their providers: how the integrative model of behavioral prediction can help.

    Science.gov (United States)

    Hughes, Anne K; Rostant, Ola S; Curran, Paul G

    2014-07-01

    Talking about sexual health can be a challenge for some older women. This project was initiated to identify key factors that improve communication between aging women and their primary care providers. A sample of women (aged 60+) completed an online survey regarding their intent to communicate with a provider about sexual health. Using the integrative model of behavioral prediction as a guide, the survey instrument captured data on attitudes, perceived norms, self-efficacy, and intent to communicate with a provider about sexual health. Data were analyzed using structural equation modeling. Self-efficacy and perceived norms were the most important factors predicting intent to communicate for this sample of women. Intent did not vary with race, but mean scores of the predictors of intent varied for African American and White women. Results can guide practice and intervention with ethnically diverse older women who may be struggling to communicate about their sexual health concerns. © The Author(s) 2013.

  20. A coordination language for databases

    DEFF Research Database (Denmark)

    Li, Ximeng; Wu, Xi; Lluch Lafuente, Alberto

    2017-01-01

    We present a coordination language for the modeling of distributed database applications. The language, baptized Klaim-DB, borrows the concepts of localities and nets of the coordination language Klaim but re-incarnates the tuple spaces of Klaim as databases. It provides high-level abstractions and primitives for the access and manipulation of structured data, with integrity and atomicity considerations. We present the formal semantics of Klaim-DB and develop a type system that avoids potential runtime errors such as certain evaluation errors and mismatches of data format in tables, which are monitored in the semantics. The use of the language is illustrated in a scenario where the sales from different branches of a chain of department stores are aggregated from their local databases. Raising the abstraction level and encapsulating integrity checks in the language primitives have benefited the modeling task...

  1. Equipping providers with principles, knowledge and skills to successfully integrate behaviour change counselling into practice: a primary healthcare framework.

    Science.gov (United States)

    Vallis, M; Lee-Baggley, D; Sampalli, T; Ryer, A; Ryan-Carson, S; Kumanan, K; Edwards, L

    2018-01-01

    There is an urgent need for healthcare providers and healthcare systems to support productive interactions with patients that promote sustained health behaviour change in order to improve patient and population health outcomes. Behaviour change theories and interventions have been developed and evaluated in experimental contexts; however, most healthcare providers have little training, and therefore low confidence in, behaviour change counselling. Particularly important is how to integrate theory and method to support healthcare providers to engage in behaviour change counselling competently. In this article, we describe a general training model developed from theory, evidence, experience and stakeholder engagement. This model will set the stage for future evaluation research on training needed to achieve competency, sustainability of competency, as well as effectiveness/cost-effectiveness of training in supporting behaviour change. A framework to support competency based training in behaviour change counselling is described in this article. This framework is designed to be integrative, sustainable, scalable and capable of being evaluated in follow-up studies. Effective training in behaviour change counselling is critical to meet the current and future healthcare needs of patients living with, or at risk of, chronic diseases. Increasing competency in establishing change-based relationships, assessing and promoting readiness to change, implementing behaviour modification and addressing psychosocial issues will be value added to the healthcare system. Copyright © 2017 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

  2. An integrated database on ticks and tick-borne zoonoses in the tropics and subtropics with special reference to developing and emerging countries.

    Science.gov (United States)

    Vesco, Umberto; Knap, Nataša; Labruna, Marcelo B; Avšič-Županc, Tatjana; Estrada-Peña, Agustín; Guglielmone, Alberto A; Bechara, Gervasio H; Gueye, Arona; Lakos, Andras; Grindatto, Anna; Conte, Valeria; De Meneghi, Daniele

    2011-05-01

    Tick-borne zoonoses (TBZ) are emerging diseases worldwide. A large amount of information (e.g. case reports, results of epidemiological surveillance, etc.) is dispersed through various reference sources (ISI and non-ISI journals, conference proceedings, technical reports, etc.). An integrated database, derived from the ICTTD-3 project ( http://www.icttd.nl ), was developed in order to gather TBZ records in the (sub-)tropics, collected both by the authors and collaborators worldwide. A dedicated website ( http://www.tickbornezoonoses.org ) was created to promote collaboration and circulate information. Data collected are made freely available to researchers for analysis by spatial methods, integrating mapped ecological factors for predicting TBZ risk. The authors present the assembly process of the TBZ database: the compilation of an updated list of TBZ relevant for the (sub-)tropics, the database design and its structure, the method of bibliographic search, and the assessment of the spatial precision of geo-referenced records. At the time of writing, 725 records extracted from 337 publications related to 59 countries in the (sub-)tropics have been entered in the database. TBZ distribution maps were also produced. Imported cases have also been accounted for. The most important datasets with geo-referenced records were those on Spotted Fever Group rickettsiosis in Latin America and Crimean-Congo Haemorrhagic Fever in Africa. The authors stress the need for international collaboration in data collection to update and improve the database. Supervision of data entered remains always necessary. Means to foster collaboration are discussed. The paper is also intended to describe the challenges encountered in assembling spatial data from various sources and to help develop similar data collections.
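Spatial analysis over such geo-referenced records can be sketched as follows. The field names, precision codes and example records below are invented for illustration; real TBZ risk mapping would integrate mapped ecological covariates rather than a bare distance filter.

```python
import math

# Hypothetical geo-referenced records: coordinates plus a spatial-precision
# code, so that only records precise enough for mapping are retrieved.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres (mean Earth radius 6371 km)."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

records = [
    {"disease": "SFG rickettsiosis", "lat": -23.55, "lon": -46.63, "precision": "point"},
    {"disease": "CCHF",              "lat": 9.03,   "lon": 38.74,  "precision": "district"},
    {"disease": "SFG rickettsiosis", "lat": -22.91, "lon": -43.17, "precision": "point"},
]

def nearby(records, lat, lon, radius_km, allowed=("point",)):
    """Records with acceptable precision within radius_km of (lat, lon)."""
    return [r for r in records
            if r["precision"] in allowed
            and haversine_km(r["lat"], r["lon"], lat, lon) <= radius_km]

# Point-precision records within 500 km of São Paulo:
hits = nearby(records, -23.55, -46.63, 500)
print([r["disease"] for r in hits])
# → ['SFG rickettsiosis', 'SFG rickettsiosis']
```

In practice such a filter would run inside a spatial database or GIS, with the precision assessment described in the paper deciding which records qualify for a given map scale.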

  3. The STRING database in 2011

    DEFF Research Database (Denmark)

    Szklarczyk, Damian; Franceschini, Andrea; Kuhn, Michael

    2011-01-01

    present an update on the online database resource Search Tool for the Retrieval of Interacting Genes (STRING); it provides uniquely comprehensive coverage and ease of access to both experimental as well as predicted interaction information. Interactions in STRING are provided with a confidence score...... models, extensive data updates and strongly improved connectivity and integration with third-party resources. Version 9.0 of STRING covers more than 1100 completely sequenced organisms; the resource can be reached at http://string-db.org....

  4. The experience of providing end-of-life care to a relative with advanced dementia: an integrative literature review.

    Science.gov (United States)

    Peacock, Shelley C

    2013-04-01

    The number of people with dementia is growing at an alarming rate. An abundance of research over the past two decades has examined the complex aspects of caring for a relative with dementia. However, far less research has been conducted specific to the experiences of family caregivers providing end-of-life care, which is perplexing, as dementia is a terminal illness. This article presents what is known and highlights the gaps in the literature relevant to the experiences of family caregivers of persons with dementia at the end of life. A thorough search of the Cumulative Index to Nursing and Allied Health Literature (CINAHL) and PubMed databases from 1960 to 2011 was conducted. Ten studies were identified that specifically addressed the experience of family caregivers providing end-of-life care to a relative with advanced dementia. Common themes of these studies included: 1) the experience of grief, 2) guilt and burden with decision making, 3) how symptoms of depression may or may not be resolved with death of the care receiver, 4) how caregivers respond to the end-stage of dementia, and 5) expressed needs of family caregivers. It is evident from this literature review that much remains to be done to conceptualize the experience of end-of-life caregiving in dementia.

  5. Irinotecan and Oxaliplatin Might Provide Equal Benefit as Adjuvant Chemotherapy for Patients with Resectable Synchronous Colon Cancer and Liver-confined Metastases: A Nationwide Database Study.

    Science.gov (United States)

    Liang, Yi-Hsin; Shao, Yu-Yun; Chen, Ho-Min; Cheng, Ann-Lii; Lai, Mei-Shu; Yeh, Kun-Huei

    2017-12-01

    Although irinotecan and oxaliplatin are both standard treatments for advanced colon cancer, it remains unknown whether either is effective for patients with resectable synchronous colon cancer and liver-confined metastasis (SCCLM) after curative surgery. A population-based cohort of patients diagnosed with de novo SCCLM between 2004 and 2009 was established by searching the database of the Taiwan Cancer Registry and the National Health Insurance Research Database of Taiwan. Patients who underwent curative surgery as their first therapy followed by chemotherapy doublets were classified into the irinotecan group or oxaliplatin group accordingly. Patients who received radiotherapy or did not receive chemotherapy doublets were excluded. We included 6,533 patients with de novo stage IV colon cancer. Three hundred and nine of them received chemotherapy doublets after surgery; 77 patients received irinotecan and 232 patients received oxaliplatin as adjuvant chemotherapy. The patients in both groups exhibited similar overall survival (median: not reached vs. 40.8 months, p=0.151) and time to the next line of treatment (median: 16.5 vs. 14.3 months, p=0.349) in both univariate and multivariate analyses. Additionally, patients with resectable SCCLM had significantly shorter median overall survival than patients with stage III colon cancer who underwent curative surgery and subsequent adjuvant chemotherapy, but longer median overall survival than patients with de novo stage IV colon cancer who underwent surgery only at the primary site followed by standard systemic chemotherapy (p<0.001). Irinotecan and oxaliplatin exhibited similar efficacy in patients who underwent curative surgery for resectable SCCLM. Copyright© 2017, International Institute of Anticancer Research (Dr. George J. Delinasios), All rights reserved.

  6. Overlap of proteomics biomarkers between women with pre-eclampsia and PCOS: a systematic review and biomarker database integration.

    Science.gov (United States)

    Khan, Gulafshana Hafeez; Galazis, Nicolas; Docheva, Nikolina; Layfield, Robert; Atiomo, William

    2015-01-01

    Do any proteomic biomarkers previously identified for pre-eclampsia (PE) overlap with those identified in women with polycystic ovary syndrome (PCOS)? Five previously identified proteomic biomarkers were found to be common to women with PE and PCOS when compared with controls. Various studies have indicated an association between PCOS and PE; however, the pathophysiological mechanisms supporting this association are not known. A systematic review and update of our PCOS proteomic biomarker database was performed, along with a parallel review of PE biomarkers. The study included papers from 1980 to December 2013. In all the studies analysed, there were a total of 1423 patients and controls. The number of proteomic biomarkers that were catalogued for PE was 192. Five proteomic biomarkers were shown to be differentially expressed in women with PE and PCOS when compared with controls: transferrin; fibrinogen α, β and γ chain variants; kininogen-1; annexin 2; and peroxiredoxin 2. In PE, the biomarkers were identified in serum, plasma and placenta; in PCOS, the biomarkers were identified in serum, follicular fluid, and ovarian and omental biopsies. The techniques employed to detect proteomic biomarkers have limited ability to identify proteins of low abundance, some of which may have diagnostic potential. The sample sizes and number of biomarkers identified in these studies do not exclude the risk of false positives, a limitation of all biomarker studies. The biomarkers common to PE and PCOS were identified from proteomic analyses of different tissues. This amalgamation of the proteomic studies in PE and in PCOS, for the first time, identified a panel of five biomarkers for PE which are common to women with PCOS: transferrin; fibrinogen α, β and γ chain variants; kininogen-1; annexin 2; and peroxiredoxin 2. If validated, these biomarkers could provide a useful framework for the knowledge infrastructure in this area. To accomplish this goal, a

  7. Integrating Medication Therapy Management (MTM) Services Provided by Community Pharmacists into a Community-Based Accountable Care Organization (ACO)

    Directory of Open Access Journals (Sweden)

    Brian Isetts

    2017-10-01

    Full Text Available (1) Background: As the U.S. healthcare system evolves from fee-for-service financing to global population-based payments designed to be accountable for both quality and total cost of care, the effective and safe use of medications is gaining increased importance. The purpose of this project was to determine the feasibility of integrating medication therapy management (MTM) services provided by community pharmacists into the clinical care teams and the health information technology (HIT) infrastructure for Minnesota Medicaid recipients of a 12-county community-based accountable care organization (ACO). (2) Methods: The continuous quality improvement evaluation methodology employed in this project was the context + mechanism = outcome (CMO) model, which accounts for the fact that programs only work insofar as they introduce promising ideas, solutions and opportunities in the appropriate social and cultural contexts. Collaborations between a 12-county ACO and 15 community pharmacies in Southwest Minnesota served as the social context for this feasibility study of MTM referrals to community pharmacists. (3) Results: All 15 community pharmacy sites were integrated into the HIT infrastructure through Direct Secure Messaging, and 32 recipients received MTM services subsequent to referrals from the ACO at 5 of the 15 community pharmacies over a 1-year implementation phase. (4) Conclusion: At the conclusion of this project, an effective electronic communication and MTM referral system was activated, and consideration was given to community pharmacists providing MTM in future ACO shared savings agreements.

  8. Scaling up health knowledge at European level requires sharing integrated data: an approach for collection of database specification

    Directory of Open Access Journals (Sweden)

    Menditto E

    2016-06-01

    Full Text Available Enrica Menditto,1 Angela Bolufer De Gea,2 Caitriona Cahir,3,4 Alessandra Marengoni,5 Salvatore Riegler,1 Giuseppe Fico,6 Elisio Costa,7 Alessandro Monaco,8 Sergio Pecorelli,5 Luca Pani,8 Alexandra Prados-Torres9 1School of Pharmacy, CIRFF/Center of Pharmacoeconomics, University of Naples Federico II, Naples, Italy; 2Directorate-General for Health and Food Safety, European Commission, Brussels, Belgium; 3Division of Population Health Sciences, Royal College of Surgeons in Ireland, 4Department of Pharmacology and Therapeutics, St James’s Hospital, Dublin, Ireland; 5Department of Clinical and Experimental Science, University of Brescia, Brescia; 6Life Supporting Technologies, Photonics Technology and Bioengineering Department, School of Telecommunications Engineering, Polytechnic University of Madrid, Madrid, Spain; 7Faculty of Pharmacy, University of Porto, Porto, Portugal; 8Italian Medicines Agency – AIFA, Rome, Italy; 9EpiChron Research Group on Chronic Diseases, Aragón Health Sciences Institute (IACS), IIS Aragón, REDISSEC ISCIII, Miguel Servet University Hospital, University of Zaragoza, Zaragoza, Spain Abstract: Computerized health care databases have been widely described as an excellent opportunity for research. The availability of “big data” has brought about a wave of innovation in projects conducting health services research. Most of the available secondary data sources are restricted to the geographical scope of a given country and present heterogeneous structure and content. Under the umbrella of the European Innovation Partnership on Active and Healthy Ageing, collaborative work conducted by the partners of the group on “adherence to prescription and medical plans” identified the use of observational and large-population databases to monitor medication-taking behavior in the elderly. This article describes the methodology used to gather the information from available databases among the Adherence Action Group partners

  9. An Interoperable Cartographic Database

    Directory of Open Access Journals (Sweden)

    Slobodanka Ključanin

    2007-05-01

    Full Text Available The concept of producing a prototype of an interoperable cartographic database is explored in this paper, including the possibilities of integrating different geospatial data into the database management system and visualizing them on the Internet. The implementation includes vectorization of the concept of a single map page, creation of the cartographic database in an object-relational database, spatial analysis, and definition and visualization of the database content in the form of a map on the Internet.

  10. Scopus database: a review.

    Science.gov (United States)

    Burnham, Judy F

    2006-03-08

    The Scopus database provides access to STM journal articles and the references included in those articles, allowing the searcher to search both forward and backward in time. The database can be used for collection development as well as for research. This review provides information on the key points of the database and compares it to Web of Science. Neither database is all-inclusive; rather, the two complement each other. If a library can afford only one, the choice must be based on institutional needs.

  11. Integrated Tsunami Database: simulation and identification of seismic tsunami sources, 3D visualization and post-disaster assessment on the shore

    Science.gov (United States)

    Krivorot'ko, Olga; Kabanikhin, Sergey; Marinin, Igor; Karas, Adel; Khidasheli, David

    2013-04-01

    One of the most important problems of tsunami investigation is the problem of seismic tsunami source reconstruction. The non-profit organization WAPMERR (http://wapmerr.org) has provided a historical database of presumed tsunami sources around the world, obtained with the help of information about seaquakes. WAPMERR also has a database of observations of tsunami waves in coastal areas. The main idea of the presentation consists of determining the tsunami source parameters using seismic data and observations of the tsunami waves on the shore, and the expansion and refinement of the database of presupposed tsunami sources for operative and accurate prediction of hazards and assessment of risks and consequences. We also present 3D visualization of real-time tsunami wave propagation and loss assessment, characterizing the nature of the building stock in cities at risk, and monitoring by satellite images using the modern GIS technology ITRIS (Integrated Tsunami Research and Information System) developed by WAPMERR and Informap Ltd. The special scientific plug-in components are embedded in a specially developed GIS-type graphic shell for easy data retrieval, visualization and processing. The most suitable physical models for the simulation of tsunamis are based on shallow water equations. We consider the initial-boundary value problem in Ω := {(x,y) ∈ R² : x ∈ (0,Lx), y ∈ (0,Ly), Lx, Ly > 0} for the well-known linear shallow water equations in the Cartesian coordinate system, in terms of the liquid flow components in dimensional form. Here η(x,y,t) defines the free water surface vertical displacement, i.e. the amplitude of a tsunami wave, and q(x,y) is the initial amplitude of a tsunami wave. The lateral boundary is assumed to be a non-reflecting boundary of the domain, that is, it allows the free passage of the propagating waves. Assume that the free surface oscillation data at points (xm, ym) are given as a measured output data from tsunami records: fm(t) := η(xm, ym, t), (xm
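The shallow water equations referred to in this record were lost in extraction. A plausible reconstruction, assuming the standard linear shallow water system with still-water depth H(x,y) and gravitational acceleration g (the authors' exact notation may differ), is:

```latex
\partial_t \eta + \partial_x\!\left(H u\right) + \partial_y\!\left(H v\right) = 0, \qquad
\partial_t u = -g\,\partial_x \eta, \qquad
\partial_t v = -g\,\partial_y \eta,
```

with initial conditions η(x,y,0) = q(x,y), u = v = 0, and measured output data fm(t) := η(xm, ym, t) at the observation points (xm, ym).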

  12. Integrating the Radiology Information System with Computerised Provider Order Entry: The Impact on Repeat Medical Imaging Investigations.

    Science.gov (United States)

    Vecellio, Elia; Georgiou, Andrew

    2016-01-01

    Repeat and redundant procedures in medical imaging are associated with increases in resource utilisation and labour costs. Unnecessary medical imaging in some modalities, such as X-ray (XR) and computed tomography (CT), is an important safety issue because it exposes patients to ionising radiation, which can be carcinogenic and is associated with higher rates of cancer. The aim of this study was to assess the impact of implementing an integrated Computerised Provider Order Entry (CPOE)/Radiology Information System (RIS)/Picture Archiving and Communications System (PACS) system on the number of XR and CT imaging procedures (including repeat imaging requests) for inpatients at a large metropolitan hospital. The study found that patients had an average of 0.47 fewer XR procedures and 0.07 fewer CT procedures after the implementation of the integrated system. Part of this reduction was driven by a lower rate of repeat procedures: the average inpatient had 0.13 fewer repeat XR procedures within 24 hours of the previous identical XR procedure. A similar decrease was not evident for repeat CT procedures. Reduced utilisation of imaging procedures (especially those within very short intervals of the previous identical procedure, which are more likely to be redundant) has implications for the safety of patients and the cost of medical imaging services.

  13. Event driven software package for the database of Integrated Coastal and Marine Area Management (ICMAM) (Developed in 'C')

    Digital Repository Service at National Institute of Oceanography (India)

    Sadhuram, Y.; Murty, T.V.R.; Chandramouli, P.; Murthy, K.S.R.

    National Institute of Oceanography (NIO, RC, Visakhapatnam, India) had taken up the Integrated Coastal and Marine Area Management (ICMAM) project funded by Department of Ocean Development (DOD), New Delhi, India. The main objective of this project...

  14. DDPC: Dragon database of genes associated with prostate cancer

    KAUST Repository

    Maqungo, Monique; Kaur, Mandeep; Kwofie, Samuel K.; Radovanovic, Aleksandar; Schaefer, Ulf; Schmeier, Sebastian; Oppon, Ekow; Christoffels, Alan; Bajic, Vladimir B.

    2010-01-01

    associated with Prostate Cancer (DDPC) as an integrated knowledgebase of genes experimentally verified as implicated in PC. DDPC is distinctive from other databases in that (i) it provides pre-compiled biomedical text-mining information on PC, which otherwise

  15. Better, Sooner, More Convenient? The reality of pursuing greater integration between primary and secondary healthcare providers in New Zealand.

    Science.gov (United States)

    Lovelock, Kirsten; Martin, Greg; Gauld, Robin; MacRae, Jayden

    2017-01-01

    This article focuses on the results of evaluations of two business plans developed in response to a policy initiative which aimed to achieve greater integration between primary and secondary health providers in New Zealand. We employ the Consolidated Framework for Implementation Research to inform our analysis. The Better, Sooner, More Convenient policy programme involved the development of business plans and, within each business plan, a range of areas of focus and associated work-streams. The evaluations employed a mixed-method, multi-level case study design, involving qualitative face-to-face interviews with front-line staff, clinicians and management in two districts, one in the North Island and the other in the South Island, and an analysis of routine data tracking ambulatory sensitive hospitalisations and emergency department presentations. Two postal surveys were conducted, one focussing on patients' experiences of integration and care co-ordination and the second focussing on the perspectives of health professionals in primary and secondary settings in both districts. Both evaluations revealed non-significant changes in ambulatory sensitive hospitalisation and emergency department presentation rates and slow, uneven progress with areas of focus and their associated work-streams. Our evaluations revealed a range of implementation issues, the barriers and facilitators to greater integration of healthcare services, and the implications for those who were responsible for putting policy into practice. The business plans were shown to be overly ambitious and compromised by their size and scope; dysfunctional governance arrangements and associated accountability issues; organisational inability to implement change quickly with appropriate and timely funding support; an absence of organisational structural change allowing parity with the policy objectives; and barriers that were encountered because of inadequate attention to organisational

  16. Better, Sooner, More Convenient? The reality of pursuing greater integration between primary and secondary healthcare providers in New Zealand

    Directory of Open Access Journals (Sweden)

    Kirsten Lovelock

    2017-03-01

    Full Text Available Objectives: This article focuses on the results of evaluations of two business plans developed in response to a policy initiative which aimed to achieve greater integration between primary and secondary health providers in New Zealand. We employ the Consolidated Framework for Implementation Research to inform our analysis. The Better, Sooner, More Convenient policy programme involved the development of business plans and, within each business plan, a range of areas of focus and associated work-streams. Methods: The evaluations employed a mixed-method, multi-level case study design, involving qualitative face-to-face interviews with front-line staff, clinicians and management in two districts, one in the North Island and the other in the South Island, and an analysis of routine data tracking ambulatory sensitive hospitalisations and emergency department presentations. Two postal surveys were conducted, one focussing on patients' experiences of integration and care co-ordination and the second focussing on the perspectives of health professionals in primary and secondary settings in both districts. Results: Both evaluations revealed non-significant changes in ambulatory sensitive hospitalisation and emergency department presentation rates and slow, uneven progress with areas of focus and their associated work-streams. Our evaluations revealed a range of implementation issues, the barriers and facilitators to greater integration of healthcare services, and the implications for those who were responsible for putting policy into practice. Conclusion: The business plans were shown to be overly ambitious and compromised by their size and scope; dysfunctional governance arrangements and associated accountability issues; organisational inability to implement change quickly with appropriate and timely funding support; an absence of organisational structural change allowing parity with the policy objectives; barriers that were

  17. Comparison of SSS and SRS calculated from normal databases provided by QPS and 4D-MSPECT manufacturers and from identical institutional normals

    International Nuclear Information System (INIS)

    Knollmann, Daniela; Knebel, Ingrid; Gebhard, Michael; Krohn, Thomas; Buell, Ulrich; Schaefer, Wolfgang M.; Koch, Karl-Christian

    2008-01-01

    There is proven evidence for the importance of myocardial perfusion single-photon emission computed tomography (SPECT) with computerised determination of summed stress and rest scores (SSS/SRS) for the diagnosis of coronary artery disease (CAD). SSS and SRS can thereby be calculated semi-quantitatively using a 20-segment model by comparing tracer uptake with values from normal databases (NDB). Four severity degrees for SSS and SRS are normally used: <4, 4-8, 9-13 or ≥14. Manufacturers' NDBs (M-NDBs) often do not fit the institutional (I) settings. Therefore, this study compared SSS and SRS obtained with the algorithms Quantitative Perfusion SPECT (QPS) and 4D-MSPECT using M-NDB and I-NDB. I-NDBs were obtained using QPS and 4D-MSPECT from exercise stress data (450 MBq 99mTc-tetrofosmin, triple-head camera, 30 s/view, 20 views/head) from 36 men with a low post-stress test CAD probability and visually normal SPECT findings. The patient group was 60 men showing the entire CAD spectrum referred for routine perfusion SPECT. Stress/rest results of automatic quantification of the 60 patients were compared to M-NDB and I-NDB. After reclassifying SSS/SRS into the four severity degrees, kappa (κ) values were calculated to objectify agreement. Mean values (vs M-NDB) were 9.4 ± 10.3 (SSS) and 5.8 ± 9.7 (SRS) for QPS and 8.2 ± 8.7 (SSS) and 6.2 ± 7.8 (SRS) for 4D-MSPECT. Thirty-seven of sixty SSS classifications (κ = 0.462) and 40/60 SRS classifications (κ = 0.457) agreed. Compared to I-NDB, mean values were 10.2 ± 11.6 (SSS) and 6.5 ± 10.4 (SRS) for QPS and 9.2 ± 9.3 (SSS) and 7.2 ± 8.6 (SRS) for 4D-MSPECT. Forty-four of sixty patients agreed in SSS and SRS (κ = 0.621 and 0.58, respectively). Considerable differences between SSS/SRS obtained with QPS and 4D-MSPECT were found when using M-NDB. Even using identical patients and an identical I-NDB, the algorithms still gave substantially different results. (orig.)

  18. Comparison of SSS and SRS calculated from normal databases provided by QPS and 4D-MSPECT manufacturers and from identical institutional normals.

    Science.gov (United States)

    Knollmann, Daniela; Knebel, Ingrid; Koch, Karl-Christian; Gebhard, Michael; Krohn, Thomas; Buell, Ulrich; Schaefer, Wolfgang M

    2008-02-01

    There is proven evidence for the importance of myocardial perfusion single-photon emission computed tomography (SPECT) with computerised determination of summed stress and rest scores (SSS/SRS) for the diagnosis of coronary artery disease (CAD). SSS and SRS can thereby be calculated semi-quantitatively using a 20-segment model by comparing tracer uptake with values from normal databases (NDB). Four severity degrees for SSS and SRS are normally used: <4, 4-8, 9-13 or >=14. Manufacturers' NDBs (M-NDBs) often do not fit the institutional (I) settings. Therefore, this study compared SSS and SRS obtained with the algorithms Quantitative Perfusion SPECT (QPS) and 4D-MSPECT using M-NDB and I-NDB. I-NDBs were obtained using QPS and 4D-MSPECT from exercise stress data (450 MBq (99m)Tc-tetrofosmin, triple-head camera, 30 s/view, 20 views/head) from 36 men with a low post-stress test CAD probability and visually normal SPECT findings. The patient group was 60 men showing the entire CAD spectrum referred for routine perfusion SPECT. Stress/rest results of automatic quantification of the 60 patients were compared to M-NDB and I-NDB. After reclassifying SSS/SRS into the four severity degrees, kappa values were calculated to objectify agreement. Mean values (vs M-NDB) were 9.4 +/- 10.3 (SSS) and 5.8 +/- 9.7 (SRS) for QPS and 8.2 +/- 8.7 (SSS) and 6.2 +/- 7.8 (SRS) for 4D-MSPECT. Thirty-seven of sixty SSS classifications (kappa = 0.462) and 40/60 SRS classifications (kappa = 0.457) agreed. Compared to I-NDB, mean values were 10.2 +/- 11.6 (SSS) and 6.5 +/- 10.4 (SRS) for QPS and 9.2 +/- 9.3 (SSS) and 7.2 +/- 8.6 (SRS) for 4D-MSPECT. Forty-four of sixty patients agreed in SSS and SRS (kappa = 0.621 and 0.58, respectively). Considerable differences between SSS/SRS obtained with QPS and 4D-MSPECT were found when using M-NDB. Even using identical patients and an identical I-NDB, the algorithms still gave substantially different results.
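The scoring scheme this record describes (20 segments scored individually, summed into SSS/SRS, then binned into four severity degrees) can be sketched as follows. The cut-offs and function names are illustrative assumptions following the conventional scheme, not the QPS or 4D-MSPECT implementation:

```python
# Hypothetical sketch of the summed-score workflow described in the abstract:
# a 20-segment model is scored per segment, the scores are summed (SSS/SRS),
# and the sum is binned into one of four severity degrees.
# The <4 / 4-8 / 9-13 / >=14 cut-offs follow the conventional scheme (assumption).

def summed_score(segment_scores):
    """Sum per-segment perfusion scores (0 = normal ... 4 = absent uptake)."""
    assert len(segment_scores) == 20, "20-segment model expected"
    return sum(segment_scores)

def severity_degree(score):
    """Map a summed score onto one of the four severity degrees."""
    if score < 4:
        return "normal"
    elif score <= 8:
        return "mildly abnormal"
    elif score <= 13:
        return "moderately abnormal"
    return "severely abnormal"

stress = [0] * 14 + [2, 2, 1, 1, 3, 1]   # illustrative per-segment stress scores
sss = summed_score(stress)
print(sss, severity_degree(sss))          # -> 10 moderately abnormal
```

The kappa statistics in the study then quantify how often two algorithms place the same patient in the same severity bin.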

  19. Simplification of integrity constraints for data integration

    DEFF Research Database (Denmark)

    Christiansen, Henning; Martinenghi, Davide

    2004-01-01

    , because either the global database is known to be consistent or suitable actions have been taken to provide consistent views. The present work generalizes simplification techniques for integrity checking in traditional databases to the combined case. Knowledge of local consistency is employed, perhaps...

  20. Integrated remote sensing and visualization (IRSV) system for transportation infrastructure operations and management, phase two, volume 4 : web-based bridge information database--visualization analytics and distributed sensing.

    Science.gov (United States)

    2012-03-01

    This report introduces the design and implementation of a Web-based bridge information visual analytics system. This project integrates Internet, multiple databases, remote sensing, and other visualization technologies. The result combines a GIS ...

  1. TcruziDB, an Integrated Database, and the WWW Information Server for the Trypanosoma cruzi Genome Project

    Directory of Open Access Journals (Sweden)

    Degrave Wim

    1997-01-01

    Full Text Available Data analysis, presentation and distribution are of utmost importance to a genome project. The public domain software ACeDB has been chosen as the common basis for parasite genome databases, and a first release of TcruziDB, the Trypanosoma cruzi genome database, is available by ftp from ftp://iris.dbbm.fiocruz.br/pub/genomedb/TcruziDB, as are versions of the software for different operating systems (ftp://iris.dbbm.fiocruz.br/pub/unixsoft/). Moreover, data originating from the project are available from the WWW server at http://www.dbbm.fiocruz.br. It contains biological and parasitological data on CL Brener, its karyotype, all available T. cruzi sequences from GenBank, data on the EST-sequencing project and on available libraries, a T. cruzi codon table, and a listing of activities and participating groups in the genome project, as well as meeting reports. T. cruzi discussion lists (tcruzi-l@iris.dbbm.fiocruz.br and tcgenics@iris.dbbm.fiocruz.br) are being maintained for communication and to promote collaboration in the genome project.

  2. CyanoEXpress: A web database for exploration and visualisation of the integrated transcriptome of cyanobacterium Synechocystis sp. PCC6803.

    Science.gov (United States)

    Hernandez-Prieto, Miguel A; Futschik, Matthias E

    2012-01-01

    Synechocystis sp. PCC6803 is one of the best studied cyanobacteria and an important model organism for our understanding of photosynthesis. The early availability of its complete genome sequence initiated numerous transcriptome studies, which have generated a wealth of expression data. Analysis of the accumulated data can be a powerful tool to study transcription in a comprehensive manner and to reveal underlying regulatory mechanisms, as well as to annotate genes whose functions are yet unknown. However, the use of divergent microarray platforms, as well as distributed data storage, makes meta-analyses of Synechocystis expression data highly challenging, especially for researchers with limited bioinformatic expertise and resources. To facilitate utilisation of the accumulated expression data by a wider research community, we have developed CyanoEXpress, a web database for interactive exploration and visualisation of transcriptional response patterns in Synechocystis. CyanoEXpress currently comprises expression data for 3073 genes and 178 environmental and genetic perturbations obtained in 31 independent studies. At present, CyanoEXpress constitutes the most comprehensive collection of expression data available for Synechocystis and can be freely accessed at http://cyanoexpress.sysbiolab.eu.

  3. Leveraging Web Services in Providing Efficient Discovery, Retrieval, and Integration of NASA-Sponsored Observations and Predictions

    Science.gov (United States)

    Bambacus, M.; Alameh, N.; Cole, M.

    2006-12-01

    The Applied Sciences Program at NASA focuses on extending the results of NASA's Earth-Sun system science research beyond the science and research communities to contribute to national priority applications with societal benefits. By employing a systems engineering approach, supporting interoperable data discovery and access, and developing partnerships with federal agencies and national organizations, the Applied Sciences Program facilitates the transition from research to operations in national applications. In particular, the Applied Sciences Program identifies twelve national applications, listed at http://science.hq.nasa.gov/earth-sun/applications/, which can be best served by the results of NASA aerospace research and development of science and technologies. The ability to use and integrate NASA data and science results into these national applications results in enhanced decision support and significant socio-economic benefits for each of the applications. This paper focuses on leveraging the power of interoperability and specifically open standard interfaces in providing efficient discovery, retrieval, and integration of NASA's science research results. Interoperability (the ability to access multiple, heterogeneous geoprocessing environments, either local or remote by means of open and standard software interfaces) can significantly increase the value of NASA-related data by increasing the opportunities to discover, access and integrate that data in the twelve identified national applications (particularly in non-traditional settings). Furthermore, access to data, observations, and analytical models from diverse sources can facilitate interdisciplinary and exploratory research and analysis. To streamline this process, the NASA GeoSciences Interoperability Office (GIO) is developing the NASA Earth-Sun System Gateway (ESG) to enable access to remote geospatial data, imagery, models, and visualizations through open, standard web protocols. 
The gateway (online

  4. SSC lattice database and graphical interface

    International Nuclear Information System (INIS)

    Trahern, C.G.; Zhou, J.

    1991-11-01

    When completed, the Superconducting Super Collider will be the world's largest accelerator complex. In order to build this system on schedule, the use of database technologies will be essential. In this paper we discuss one of the database efforts underway at the SSC, the lattice database. The SSC lattice database provides a centralized source for the design of each major component of the accelerator complex. This includes the two collider rings, the High Energy Booster, Medium Energy Booster, Low Energy Booster, and the LINAC, as well as transfer and test beam lines. These designs have been created using a menagerie of programs such as SYNCH, DIMAD, MAD, TRANSPORT, MAGIC, TRACE3D and TEAPOT. However, once a design has been completed, it is entered into a uniform database schema in the database system. In this paper we discuss the reasons for creating the lattice database and its implementation via the commercial database system SYBASE. Each lattice in the lattice database is composed of a set of tables whose data structure can describe any of the SSC accelerator lattices. In order to allow the user community access to the databases, a programmatic interface known as dbsf (for database to several formats) has been written. Dbsf creates ASCII input files appropriate to the above-mentioned accelerator design programs. In addition, it has a binary dataset output using the Self Describing Standard data discipline provided with the Integrated Scientific Tool Kit software tools. Finally, we discuss the graphical interfaces to the lattice database. The primary interface, known as OZ, is a simulation environment as well as a database browser
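The export path this record describes (uniform lattice tables rendered into ASCII input files for accelerator design programs) can be illustrated with a minimal sketch. The row schema, attribute names, and MAD-like output syntax here are hypothetical, not dbsf's actual format:

```python
# Hypothetical "database to several formats" export in the spirit of dbsf:
# lattice elements stored as uniform table rows are rendered into ASCII
# input lines for an accelerator design program. The row schema and the
# MAD-like syntax are illustrative assumptions, not dbsf's real format.

def to_ascii(element):
    """Render one lattice-element row as a MAD-style definition line."""
    attrs = ", ".join(
        f"{key.upper()}={value}"
        for key, value in element.items()
        if key not in ("name", "type")
    )
    return f'{element["name"]}: {element["type"]}, {attrs};'

lattice = [
    {"name": "QF1", "type": "QUADRUPOLE", "length": 1.2, "k1": 0.03},
    {"name": "B1", "type": "SBEND", "length": 6.0, "angle": 0.005},
]

for element in lattice:
    print(to_ascii(element))
# QF1: QUADRUPOLE, LENGTH=1.2, K1=0.03;
# B1: SBEND, LENGTH=6.0, ANGLE=0.005;
```

Keeping the lattice in one schema and generating each program's input format on demand is what lets a single database serve SYNCH, MAD, TRANSPORT, and the rest.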

  5. Study on safety of a nuclear ship having an integral marine water reactor. Intelligent information database program concerned with thermal-hydraulic characteristics

    International Nuclear Information System (INIS)

    Inasaka, Fujio; Nariai, Hideki; Kobayashi, Michiyuki; Murata, Hiroyuki; Aya, Izuo

    2001-01-01

    As a highly economical marine reactor with sufficient safety functions, an integrated-type marine water reactor has been considered most promising. At the National Maritime Research Institute, a series of experimental studies on the thermal-hydraulic characteristics of an integrated/passive-safety type marine water reactor, such as the flow boiling of a helical-coil type steam generator, natural circulation of primary water under a ship rolling motion, and flashing-condensation oscillation phenomena in pool water, has been conducted. The current study aims to make this experimental knowledge usable for the safety analysis and evaluation of a future marine water reactor by developing an intelligent information database program concerned with the thermal-hydraulic characteristics of an integral/passive-safety reactor. Since the program was created as a Windows application using Visual Basic, it is available to the public and can be easily installed. Main functions of the program are as follows: (1) steady-state flow boiling analysis and determination of the stability limit for any helical-coil type once-through steam generator design, (2) analysis of and comparison with the flow boiling data, (3) reference and graphic display of the experimental data, and (4) indication of knowledge information such as the analysis method and results of the study. The program will be useful for the design of not only the future integrated-type marine water reactor but also small-sized water reactors. (author)

  6. Substrate-Integrated Waveguide PCB Leaky-Wave Antenna Design Providing Multiple Steerable Beams in the V-Band

    Directory of Open Access Journals (Sweden)

    Matthias Steeg

    2017-12-01

    Full Text Available A periodic leaky-wave antenna (LWA) design based on low-loss substrate-integrated waveguide (SIW) technology with inset half-wave microstrip antennas is presented. The developed LWA operates in the V-band between 50 and 70 GHz and has been fabricated using standard printed circuit board (PCB) technology. The presented LWA is highly functional and very compact, supporting 1D beam steering and multibeam operation with only a single radio frequency (RF) feeding port. Within the operational 50–70 GHz bandwidth, the LWA scans through broadside, providing over 40° of H-plane beam steering. When operated within the 57–66 GHz band, the maximum steering angle is 18.2°. The maximum gain of the fabricated LWAs is 15.4 dBi with only a small gain variation of ±1.5 dB across the operational bandwidth. The beam steering and multibeam capability of the fabricated LWA is further utilized to support mobile users in a 60 GHz hot-spot. For a single user, a maximum wireless on-off keying (OOK) data rate of 2.5 Gbit/s is demonstrated. Multibeam operation is achieved using the LWA in combination with multiple dense wavelength division multiplexing (WDM) channels and remote optical heterodyning. Experimentally, multibeam operation supporting three users within a 57–66 GHz hot-spot with a total wireless cell capacity of 3 Gbit/s is achieved.
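The frequency-controlled beam steering this record reports follows from the standard relation for a periodic leaky-wave antenna; the formula below is general background added for clarity, assuming the usual space-harmonic notation rather than the article's own:

```latex
\sin\theta_n(f) \approx \frac{\beta_n(f)}{k_0}
= \frac{\beta_0(f) + \dfrac{2\pi n}{p}}{k_0}, \qquad k_0 = \frac{2\pi f}{c},
```

where θn is the main-beam angle from broadside, β0 the fundamental phase constant, p the unit-cell period, and n the radiating space harmonic (typically n = −1). Sweeping the frequency f moves the beam through broadside at the frequency where βn = 0, which is how a single RF feed yields the 40° H-plane scan described above.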

  7. Veterans Administration Databases

    Science.gov (United States)

    The Veterans Administration Information Resource Center provides database and informatics experts, customer service, expert advice, information products, and web technology to VA researchers and others.

  8. Electronic database of arterial aneurysms

    Directory of Open Access Journals (Sweden)

    Fabiano Luiz Erzinger

    2014-12-01

    Full Text Available Background: The creation of an electronic database facilitates the storage of information and streamlines the exchange of data, making it easier to share knowledge for future research. Objective: To construct an electronic database containing comprehensive and up-to-date clinical and surgical data on the most common arterial aneurysms, to help advance scientific research. Methods: The most important specialist textbooks and articles found in journals and on internet databases were reviewed in order to define the basic structure of the protocol. Data were computerized using the SINPE© system for integrated electronic protocols and tested in a pilot study. Results: The data entered onto the system were first used to create a Master protocol, organized into a structure of top-level directories covering a large proportion of the content on vascular diseases, as follows: patient history; physical examination; supplementary tests and examinations; diagnosis; treatment; and clinical course. By selecting items from the Master protocol, Specific protocols were then created for the 22 arterial sites most often involved by aneurysms. The program provides a method for collection of data on patients, including clinical characteristics (patient history and physical examination), supplementary tests and examinations, treatments received, and follow-up care after treatment. Any information of interest on these patients that is contained in the protocol can then be used to query the database and select data for studies. Conclusions: It proved possible to construct a database of clinical and surgical data on the arterial aneurysms of greatest interest and, by adapting the data to specific software, the database was integrated into the SINPE© system, thereby providing a standardized method for collection of data on these patients and tools for retrieving this information in an organized manner for use in scientific studies.

  9. SU-E-T-571: Newly Emerging Integrated Transmission Detector Systems Provide Online Quality Assurance of External Beam Radiation Therapy

    International Nuclear Information System (INIS)

    Hoffman, D; Chung, E; Hess, C; Stern, R; Benedict, S

    2015-01-01

    Purpose: Two newly emerging transmission detectors positioned upstream from the patient have been evaluated for online quality assurance of external beam radiotherapy. The prototype of the Integral Quality Monitor (IQM), developed by iRT Systems GmbH (Koblenz, Germany), is a large-area ion chamber mounted on the linac accessory tray to monitor photon fluence, energy, beam shape, and gantry position during treatment. The ion chamber utilizes a thickness gradient which produces a variable response dependent on beam position. The prototype of Delta4 Discover™, developed by ScandiDos (Uppsala, Sweden), is a linac accessory tray-mounted 4040-diode array that measures photon fluence during patient treatment. Both systems are employable for patient-specific QA prior to treatment delivery. Methods: Our institution evaluated the reproducibility of measurements using various beam types, including VMAT treatment plans, with both the IQM ion chamber and the Delta4 Discover diode array. Additionally, the IQM’s effect on photon fluence, its dose response, simulated beam error detection, and the accuracy of the integrated barometer, thermometer, and inclinometer were characterized. The evaluated photon beam errors are based on the annual tolerances specified in AAPM TG-142. Results: Repeated VMAT treatments were measured with 0.16% reproducibility by the IQM and 0.55% reproducibility by the Delta4 Discover. The IQM attenuated 6, 10, and 15 MV photon beams by 5.43±0.02%, 4.60±0.02%, and 4.21±0.03% respectively. Photon beam profiles were affected <1.5% in the non-penumbra regions. The dose response of the IQM’s ion chamber was linear, and the thermometer, barometer, and inclinometer agreed with other calibrated devices. The device detected variations in monitor units delivered (1%), field position (3mm), single MLC leaf positions (13mm), and photon energy. Conclusion: We have characterized two new transmission detector systems designed to provide in-vivo like measurements upstream

  10. SU-E-T-571: Newly Emerging Integrated Transmission Detector Systems Provide Online Quality Assurance of External Beam Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, D; Chung, E; Hess, C; Stern, R; Benedict, S [UC Davis Cancer Center, Sacramento, CA (United States)

    2015-06-15

    Purpose: Two newly emerging transmission detectors positioned upstream from the patient have been evaluated for online quality assurance of external beam radiotherapy. The prototype of the Integral Quality Monitor (IQM), developed by iRT Systems GmbH (Koblenz, Germany), is a large-area ion chamber mounted on the linac accessory tray to monitor photon fluence, energy, beam shape, and gantry position during treatment. The ion chamber utilizes a thickness gradient which produces a variable response dependent on beam position. The prototype of Delta4 Discover™, developed by ScandiDos (Uppsala, Sweden), is a linac accessory tray-mounted 4040-diode array that measures photon fluence during patient treatment. Both systems are employable for patient-specific QA prior to treatment delivery. Methods: Our institution evaluated the reproducibility of measurements using various beam types, including VMAT treatment plans, with both the IQM ion chamber and the Delta4 Discover diode array. Additionally, the IQM’s effect on photon fluence, its dose response, simulated beam error detection, and the accuracy of the integrated barometer, thermometer, and inclinometer were characterized. The evaluated photon beam errors are based on the annual tolerances specified in AAPM TG-142. Results: Repeated VMAT treatments were measured with 0.16% reproducibility by the IQM and 0.55% reproducibility by the Delta4 Discover. The IQM attenuated 6, 10, and 15 MV photon beams by 5.43±0.02%, 4.60±0.02%, and 4.21±0.03% respectively. Photon beam profiles were affected <1.5% in the non-penumbra regions. The dose response of the IQM’s ion chamber was linear, and the thermometer, barometer, and inclinometer agreed with other calibrated devices. The device detected variations in monitor units delivered (1%), field position (3mm), single MLC leaf positions (13mm), and photon energy. Conclusion: We have characterized two new transmission detector systems designed to provide in-vivo like measurements upstream
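The two headline figures in these records, percent reproducibility of repeated signals and percent attenuation of the beam, reduce to simple ratios. The sketch below shows one plausible way to compute them; the metric definitions (relative standard deviation for reproducibility, relative dose reduction for attenuation) and the numeric readings are illustrative assumptions, since the abstract does not state the exact formulas used.

```python
from statistics import mean, stdev

def percent_reproducibility(signals):
    """Relative standard deviation (%) of repeated integrated detector signals.
    An assumed definition; the abstract does not specify its exact metric."""
    return 100.0 * stdev(signals) / mean(signals)

def percent_attenuation(dose_open, dose_with_detector):
    """Relative dose reduction (%) caused by placing the detector in the beam."""
    return 100.0 * (dose_open - dose_with_detector) / dose_open

# Hypothetical repeated VMAT delivery signals (arbitrary units):
readings = [100.1, 99.9, 100.2, 99.8, 100.0]
print(round(percent_reproducibility(readings), 2))

# Hypothetical open-beam vs detector-in-beam readings giving ~5.43% at 6 MV:
print(round(percent_attenuation(1.000, 0.9457), 2))
```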

  11. Stackfile Database

    Science.gov (United States)

    deVarvalho, Robert; Desai, Shailen D.; Haines, Bruce J.; Kruizinga, Gerhard L.; Gilmer, Christopher

    2013-01-01

    This software provides storage, retrieval, and analysis functionality for managing satellite altimetry data. It improves on existing database software in efficiency, analysis capability, flexibility, and documentation. It offers flexibility in the type of data that can be stored, and efficient retrieval across either the spatial domain or the time domain. Built-in analysis tools are provided for frequently performed altimetry tasks. This software package is used for storing and manipulating satellite measurement data. It was developed with a focus on handling the requirements of repeat-track altimetry missions such as Topex and Jason. It was, however, designed to work with a wide variety of satellite measurement data (e.g., Gravity Recovery And Climate Experiment, GRACE). The software consists of several command-line tools for importing, retrieving, and analyzing satellite measurement data.
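The key design point above, efficient retrieval across either the time domain or the spatial domain, implies maintaining two indexes over the same records. The toy sketch below illustrates that idea with a sorted time index and lat/lon cell bins; the record schema, the 1-degree binning, and the class name are all assumptions for illustration, not the Stackfile implementation.

```python
from collections import defaultdict
from bisect import bisect_left, bisect_right

class AltimetryStore:
    """Toy dual-indexed store for satellite measurements: records are
    retrievable by time interval or by spatial (lat/lon) cell."""

    def __init__(self):
        self._by_time = []                 # sorted list of (t, record)
        self._by_cell = defaultdict(list)  # (lat_bin, lon_bin) -> records

    def add(self, t, lat, lon, value):
        rec = {"t": t, "lat": lat, "lon": lon, "value": value}
        keys = [r[0] for r in self._by_time]
        self._by_time.insert(bisect_left(keys, t), (t, rec))
        # int() truncation used as a crude 1-degree binning for simplicity.
        self._by_cell[(int(lat), int(lon))].append(rec)

    def query_time(self, t0, t1):
        """All records with t0 <= t <= t1, via binary search on the time index."""
        keys = [r[0] for r in self._by_time]
        return [r for _, r in self._by_time[bisect_left(keys, t0):bisect_right(keys, t1)]]

    def query_cell(self, lat, lon):
        """All records falling in the same 1-degree cell as (lat, lon)."""
        return list(self._by_cell[(int(lat), int(lon))])
```

A production system would back both indexes with on-disk structures, but the two-index trade-off (write twice, read either way cheaply) is the same.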

  12. KALIMER database development (database configuration and design methodology)

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Kwon, Young Min; Lee, Young Bum; Chang, Won Pyo; Hahn, Do Hee

    2001-10-01

    KALIMER Database is an advanced database for the integrated management of Liquid Metal Reactor Design Technology Development using web applications. The KALIMER design database consists of a Results Database, an Inter-Office Communication (IOC) system, a 3D CAD database, a Team Cooperation System, and Reserved Documents. The Results Database holds the research results from phase II of the Liquid Metal Reactor Design Technology Development mid-term and long-term nuclear R and D. IOC is a linkage control system between subprojects, used to share and integrate the research results for KALIMER. The 3D CAD Database is a schematic design overview for KALIMER. The Team Cooperation System informs team members of research cooperation and meetings. Finally, the KALIMER Reserved Documents module was developed to manage collected data and documents accumulated since project accomplishment. This report describes the features of the hardware and software and the database design methodology for KALIMER.

  13. The economic impact of GERD and PUD: examination of direct and indirect costs using a large integrated employer claims database.

    Science.gov (United States)

    Joish, Vijay N; Donaldson, Gary; Stockdale, William; Oderda, Gary M; Crawley, Joseph; Sasane, Rahul; Joshua-Gotlib, Sandra; Brixner, Diana I

    2005-04-01

    The objective of this study was to examine the relationship of work loss associated with gastroesophageal reflux disease (GERD) and peptic ulcer disease (PUD) in a large population of employed individuals in the United States (US) and to quantify the economic impact of these diseases to the employer. A proprietary database that contained workplace absence, disability, and workers' compensation data in addition to prescription drug and medical claims was used to answer the objectives. Employees with a medical claim with an ICD-9 code for GERD or PUD were identified from 1 January 1997 to 31 December 2000. A cohort of controls was identified for the same time period using the method of frequency matching on age, gender, industry type, occupational status, and employment status. Work absence rates and health care costs were compared between the groups after adjusting for demographic and employment differences using analysis of covariance models. Significantly lower rates of adjusted all-cause and sickness-related absenteeism were observed in the controls versus the disease groups. In particular, controls had an average of 1.2 to 1.6 fewer all-cause absence days and 0.4 to 0.6 fewer sickness-related absence days compared to the disease groups. The incremental economic impact projected to a hypothetical employed population was estimated to be $3441 for GERD, $1374 for PUD, and $4803 for GERD + PUD per employee per year compared to employees without these diseases. Direct medical costs and work absence in employees with GERD, PUD and GERD + PUD represent a significant burden to employees and employers.
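The control-selection method named above, frequency matching, draws controls so that their distribution over the stratification variables (age, gender, industry type, etc.) mirrors that of the cases. A minimal sketch of the idea, with hypothetical field names since the study's actual variable coding is not given:

```python
import random
from collections import Counter, defaultdict

def frequency_match(cases, pool, keys, seed=0):
    """Sample controls from `pool` so their joint distribution over `keys`
    matches that of `cases`. A generic sketch of frequency matching; the
    field names used by callers are assumptions."""
    rng = random.Random(seed)
    # How many controls each stratum needs, from the case distribution.
    need = Counter(tuple(c[k] for k in keys) for c in cases)
    # Group the candidate pool by stratum.
    by_stratum = defaultdict(list)
    for p in pool:
        by_stratum[tuple(p[k] for k in keys)].append(p)
    matched = []
    for stratum, n in need.items():
        candidates = by_stratum.get(stratum, [])
        matched.extend(rng.sample(candidates, min(n, len(candidates))))
    return matched

cases = [{"age": "40-49", "sex": "F"}, {"age": "40-49", "sex": "F"},
         {"age": "50-59", "sex": "M"}]
pool = ([{"age": "40-49", "sex": "F", "id": i} for i in range(10)]
        + [{"age": "50-59", "sex": "M", "id": 99}])
print(len(frequency_match(cases, pool, ["age", "sex"])))
```

After matching, the groups differ mainly in disease status, which is what makes the subsequent covariance-adjusted comparison of absence rates meaningful.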

  14. Efficacy of hospital in the home services providing care for patients admitted from emergency departments: an integrative review.

    Science.gov (United States)

    Varney, Jane; Weiland, Tracey J; Jelinek, George

    2014-06-01

    Increases in emergency department (ED) demand may compromise patient outcomes, leading not only to overcrowding in the ED, increased ED waiting times and increased ED length of stay, but also compromising patient safety; the risk of adverse events is known to rise in the presence of overcrowding. Hospital in the home (HiTH) services may offer one means of reducing ED demand. This integrative review sought to assess the efficacy of admission-avoidance HiTH services that admit patients directly from the ED. Papers published between 1995 and 2013 were identified through searches of Medline, CINAHL and Google. English-language studies that assessed the efficacy of a HiTH service and that recruited at least one-third of the participants directly from the ED were included in the review. A HiTH service was considered one that provided health professional support to patients at home for a time-limited period, thus avoiding the need for hospitalization. Twenty-two articles met the inclusion criteria for this review. The interventions were diverse in terms of the clinical interventions delivered, the range and intensity of health professional input and the conditions treated. The studies included in the review found no effect on clinical outcomes, rates of adverse events or complications, although patient satisfaction and costs were consistently and favourably affected by HiTH treatment. Given evidence suggesting that HiTH services which recruit patients directly from the ED contribute to cost savings, greater patient satisfaction, and safety and efficacy outcomes that are at least equivalent to those associated with hospital-based care, the expansion of such programmes might be considered a priority for policy makers.

  15. Integrated application of transcriptomics and metabolomics provides insights into glycogen content regulation in the Pacific oyster Crassostrea gigas.

    Science.gov (United States)

    Li, Busu; Song, Kai; Meng, Jie; Li, Li; Zhang, Guofan

    2017-09-11

    The Pacific oyster Crassostrea gigas is an important marine fishery resource, which contains high levels of glycogen that contribute to the flavor and quality of the oyster. However, little is known about the molecular and chemical mechanisms underlying glycogen content differences in Pacific oysters. Using a homogeneous cultured Pacific oyster family, we explored these regulatory networks at the level of the metabolome and the transcriptome. Oysters with the highest and lowest natural glycogen content were selected for differential transcriptome and metabolome analysis. We identified 1888 differentially expressed genes and seventy-five differentially abundant metabolites; twenty-seven signaling pathways were enriched in an integrated analysis of the interactions between the differentially expressed genes and the differentially abundant metabolites. Based on these results, we found that a high expression of carnitine O-palmitoyltransferase 2 (CPT2), indicative of increased fatty acid degradation, is associated with a lower glycogen content. Together, a high level of expression of phosphoenolpyruvate carboxykinase (PEPCK) and high levels of glucogenic amino acids likely underlie the increased glycogen production in high-glycogen oysters. In addition, the higher levels of the glycolytic enzymes hexokinase (HK) and pyruvate kinase (PK), as well as of the TCA cycle enzymes malate dehydrogenase (MDH) and pyruvate carboxylase (PYC), imply that there is a concomitant up-regulation of energy metabolism in high-glycogen oysters. High-glycogen oysters also appeared to have an increased ability to cope with stress, since the levels of the antioxidant glutathione peroxidase enzyme 5 (GPX5) gene were also increased. Our results suggest that amino acids and free fatty acids are closely related to glycogen content in oysters.
In addition, oysters with a high glycogen content have a greater energy production capacity and a greater ability to cope with
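Pathway enrichment of the kind reported above is commonly tested with a one-sided hypergeometric test: given a gene universe, a pathway, and a list of differentially expressed genes, how surprising is the observed overlap? The abstract does not state which statistic was used, so the sketch below is a generic illustration of the technique:

```python
from math import comb

def hypergeom_enrichment_p(n_genome, n_pathway, n_selected, n_overlap):
    """One-sided hypergeometric p-value: probability that at least
    `n_overlap` of `n_selected` differentially expressed genes fall in a
    pathway of size `n_pathway`, drawn from `n_genome` genes in total.
    A standard enrichment-test sketch, not the paper's exact method."""
    total = comb(n_genome, n_selected)
    p = 0.0
    for k in range(n_overlap, min(n_pathway, n_selected) + 1):
        p += comb(n_pathway, k) * comb(n_genome - n_pathway, n_selected - k) / total
    return p

# Hypothetical numbers: 20,000-gene universe, 150-gene pathway,
# 1888 selected genes, 30 of them in the pathway.
print(hypergeom_enrichment_p(20000, 150, 1888, 30))
```

Pathways whose p-value survives multiple-testing correction (e.g. Benjamini-Hochberg) are the ones reported as enriched.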

  16. DomeHaz, a Global Hazards Database: Understanding Cyclic Dome-forming Eruptions, Contributions to Hazard Assessments, and Potential for Future Use and Integration with Existing Cyberinfrastructure

    Science.gov (United States)

    Ogburn, S. E.; Calder, E.; Loughlin, S.

    2013-12-01

    Dome-forming eruptions can extend for significant periods of time and can be dangerous; nearly all dome-forming eruptions have been associated with some level of explosive activity. Large Plinian explosions with a VEI ≥ 4 sometimes occur in association with dome-forming eruptions. Many of the most significant volcanic events of recent history are in this category. The 1902-1905 eruption of Mt. Pelée, Martinique; the 1980-1986 eruption of Mount St. Helens, USA; and the 1991 eruption of Mt. Pinatubo, Philippines all demonstrate the destructive power of VEI ≥ 4 dome-forming eruptions. Global historical analysis is a powerful tool for decision-making as well as for scientific discovery. In the absence of monitoring data or knowledge of a volcano's eruptive history, global analysis can provide a method of understanding what might be expected based on similar eruptions. This study investigates the relationship between large explosive eruptions and lava dome growth and develops DomeHaz, a global database of dome-forming eruptions from 1000 AD to the present. It is currently hosted on VHub (https://vhub.org/groups/domedatabase/), a community cyberinfrastructure for sharing data, collaborating, and modeling. DomeHaz contains information about 367 dome-forming episodes, including duration of dome growth, duration of pauses in extrusion, extrusion rates, and the timing and magnitude of associated explosions. Data sources include the Smithsonian Institution Global Volcanism Program (GVP), the Bulletin of the Global Volcanism Network, and all relevant published review papers, research papers, and reports. This database builds upon previous work (e.g. Newhall and Melson, 1983) in light of newly available data for lava dome eruptions. There have been 46 new dome-forming eruptions, 13 eruptions that continued past 1982, 151 new dome-growth episodes, and 8 VEI ≥ 4 events since Newhall and Melson's work in 1983. Analysis using DomeHaz provides useful information regarding the

  17. Biofuel Database

    Science.gov (United States)

    Biofuel Database (Web, free access)   This database brings together structural, biological, and thermodynamic data for enzymes that are either in current use or are being considered for use in the production of biofuels.

  18. Community Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This excel spreadsheet is the result of merging at the port level of several of the in-house fisheries databases in combination with other demographic databases such...

  19. Integration of Density Dependence and Concentration Response Models Provides an Ecologically Relevant Assessment of Populations Exposed to Toxicants

    Science.gov (United States)

    The assessment of toxic exposure on wildlife populations involves the integration of organism level effects measured in toxicity tests (e.g., chronic life cycle) and population models. These modeling exercises typically ignore density dependence, primarily because information on ...

  20. Keyword Search in Databases

    CERN Document Server

    Yu, Jeffrey Xu; Chang, Lijun

    2009-01-01

    It has become highly desirable to provide users with flexible ways to query and search information over databases as simply as a Google-style keyword search. This book surveys the recent developments in keyword search over databases, and focuses on finding structural information among objects in a database using a set of keywords. Such structural information to be returned can be either trees or subgraphs representing how the objects that contain the required keywords are interconnected in a relational database or in an XML database. The structural keyword search is completely different from
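The tree answers described above are typically found by modeling tuples as nodes of a graph (edges being foreign-key links) and searching for a root that connects one node per keyword cheaply. A minimal sketch of that classic heuristic, using plain BFS distances; the graph and scoring are illustrative, not any specific algorithm from the book:

```python
from collections import deque

def bfs_dists(graph, start):
    """Unweighted shortest-path distances from `start` via BFS."""
    dist = {start: 0}
    q = deque([start])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def best_answer_root(graph, keyword_nodes):
    """Pick the node minimizing the total shortest-path distance to the
    keyword-containing nodes -- a sketch of how tree-shaped answers are
    scored in keyword search over a tuple graph."""
    all_dists = [bfs_dists(graph, k) for k in keyword_nodes]
    best, best_cost = None, float("inf")
    for node in graph:
        if all(node in d for d in all_dists):  # node reaches every keyword
            cost = sum(d[node] for d in all_dists)
            if cost < best_cost:
                best, best_cost = node, cost
    return best, best_cost

# Toy tuple graph: a chain a-b-c-d-e with keywords matched at a, c, e.
graph = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c", "e"], "e": ["d"]}
print(best_answer_root(graph, ["a", "c", "e"]))
```

Real systems rank many candidate roots and enumerate the top-k connecting trees rather than a single best root, but the distance-sum scoring is the common core.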

  1. Database Administrator

    Science.gov (United States)

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  2. Utilization of a Clinical Trial Management System for the Whole Clinical Trial Process as an Integrated Database: System Development.

    Science.gov (United States)

    Park, Yu Rang; Yoon, Young Jo; Koo, HaYeong; Yoo, Soyoung; Choi, Chang-Min; Beck, Sung-Ho; Kim, Tae Won

    2018-04-24

    Clinical trials pose potential risks in both communications and management due to the various stakeholders involved when performing clinical trials. The academic medical center has a responsibility and obligation to conduct and manage clinical trials while maintaining a sufficiently high level of quality; therefore, it is necessary to build an information technology system to support standardized clinical trial processes and comply with relevant regulations. The objective of the study was to address the challenges identified while performing clinical trials at an academic medical center, Asan Medical Center (AMC) in Korea, by developing and utilizing a clinical trial management system (CTMS) that complies with standardized processes from multiple departments or units, controlled vocabularies, security, and privacy regulations. This study describes the methods, considerations, and recommendations for the development and utilization of the CTMS as a consolidated research database in an academic medical center. A task force was formed to define and standardize the clinical trial performance process at the site level. On the basis of the agreed standardized process, the CTMS was designed and developed as an all-in-one system complying with privacy and security regulations. In this study, the processes and standard mapped vocabularies of a clinical trial were established at the academic medical center. On the basis of these processes and vocabularies, a CTMS was built which interfaces with existing trial systems such as the electronic institutional review board, health information system, enterprise resource planning, and the barcode system. To protect patient data, the CTMS implements data governance and access rules, and excludes 21 personal health identifiers according to the Health Insurance Portability and Accountability Act (HIPAA) privacy rule and Korean privacy laws.
Since December 2014, the CTMS has been successfully implemented and used by 881 internal and
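The identifier-exclusion step described above amounts to filtering protected fields out of each record before it reaches researchers. A minimal sketch of that kind of de-identification filter; the field names are hypothetical stand-ins for HIPAA-style direct identifiers, since the abstract does not list the CTMS's actual exclusion set:

```python
# Hypothetical field names standing in for HIPAA-style direct identifiers.
IDENTIFIER_FIELDS = {
    "name", "address", "phone", "email", "ssn",
    "medical_record_number", "account_number", "birth_date",
}

def deidentify(record):
    """Return a copy of a trial record with direct-identifier fields removed,
    sketching the filtering a CTMS applies before research use."""
    return {k: v for k, v in record.items() if k not in IDENTIFIER_FIELDS}

raw = {"name": "Jane Doe", "birth_date": "1970-01-01",
       "diagnosis": "NSCLC", "visit_count": 4}
print(deidentify(raw))
```

In practice such a filter sits behind the access-control layer, so even authorized research queries never see the excluded columns.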

  3. 600 MW nuclear power database

    International Nuclear Information System (INIS)

    Cao Ruiding; Chen Guorong; Chen Xianfeng; Zhang Yishu

    1996-01-01

    The 600 MW nuclear power database, based on ORACLE 6.0, consists of three parts: a nuclear power plant database, a nuclear power position database, and a nuclear power equipment database. The database contains a great deal of technical data and pictures of nuclear power, provided by engineering design units and individuals. The database can help the designers of nuclear power

  4. Provider and Staff Perceptions and Experiences Implementing Behavioral Health Integration in Six Low-Income Health Care Organizations.

    Science.gov (United States)

    Farb, Heather; Sacca, Katie; Variano, Margaret; Gentry, Lisa; Relle, Meagan; Bertrand, Jane

    2018-01-01

    Behavioral health integration (BHI) is a proven, effective practice for addressing the joint behavioral health and medical health needs of vulnerable populations. As part of the New Orleans Charitable Health Fund (NOCHF) program, this study addressed a gap in literature to better understand factors that impact the implementation of BHI by analyzing perceptions and practices among staff at integrating organizations. Using a mixed-method design, quantitative results from the Levels of Integration Measure (LIM), a survey tool for assessing staff perceptions of BHI in primary care settings (n=86), were analyzed alongside qualitative results from in-depth interviews with staff (n=27). Findings highlighted the roles of strong leadership, training, and process changes on staff collaboration, relationships, and commitment to BHI. This study demonstrates the usefulness of the LIM in conjunction with in-depth interviews as an assessment tool for understanding perceptions and organizational readiness for BHI implementation.

  5. Providing an integrated waste management strategy and operation focused on project end states at the Hanford site

    International Nuclear Information System (INIS)

    Blackford, L.

    2009-01-01

    responsibilities; 3) provides a basis for budgeting; and 4) reflects a concerted goal of achieving full regulatory compliance and remediation (with enforceable milestones) in an aggressive manner. CHPRC's approach to safely accelerating and accomplishing solid waste stabilization and disposition provides an integrated waste management (WM) strategy and operation focused on project end states. CHPRC will present planned approaches for waste stabilization and disposition based on lessons learned at other DOE sites and discuss optimized solutions for accelerating TPA milestone M-91 TRU waste retrieval activities through innovation and increased production, point-of-generation waste management, unique re-usable transport and packaging systems, and in-field waste handling and treatment processes that are generating early-completion cost savings of $66 million on the Plateau Remediation Contract (PRC) that can be redirected to other pressing Hanford PRC projects. (authors)

  6. Database principles programming performance

    CERN Document Server

    O'Neil, Patrick

    2014-01-01

    Database: Principles, Programming, Performance provides an introduction to the fundamental principles of database systems. This book focuses on database programming and the relationships between principles, programming, and performance. Organized into 10 chapters, this book begins with an overview of database design principles and presents a comprehensive introduction to the concepts used by a DBA. This text then provides grounding in many abstract concepts of the relational model. Other chapters introduce SQL, describing its capabilities and covering the statements and functions of the programmi

  7. BioWarehouse: a bioinformatics database warehouse toolkit

    Directory of Open Access Journals (Sweden)

    Stringer-Calvert David WJ

    2006-03-01

    Full Text Available Abstract Background This article addresses the problem of interoperation of heterogeneous bioinformatics databases. Results We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. Conclusion BioWarehouse embodies significant progress on the

  8. BioWarehouse: a bioinformatics database warehouse toolkit.

    Science.gov (United States)

    Lee, Thomas J; Pouliot, Yannick; Wagner, Valerie; Gupta, Priyanka; Stringer-Calvert, David W J; Tenenbaum, Jessica D; Karp, Peter D

    2006-03-23

    This article addresses the problem of interoperation of heterogeneous bioinformatics databases. We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. BioWarehouse embodies significant progress on the database integration problem for bioinformatics.
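The article's example query, finding enzyme activities with no known sequence, is exactly the kind of cross-source join a warehouse enables once the loaders have mapped everything into one relational schema. The sketch below reproduces the idea on a toy in-memory SQLite database; the table and column names are hypothetical, loosely inspired by the example, not BioWarehouse's actual schema:

```python
import sqlite3

# Toy warehouse schema (hypothetical): enzyme activities keyed by EC number,
# and protein sequences linked to an EC number when one is known.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE enzyme_activity (ec TEXT PRIMARY KEY, name TEXT);
CREATE TABLE protein_sequence (id INTEGER PRIMARY KEY, ec TEXT, seq TEXT);
INSERT INTO enzyme_activity VALUES
  ('1.1.1.1', 'alcohol dehydrogenase'),
  ('2.7.1.1', 'hexokinase'),
  ('4.2.1.99', 'obscure dehydratase');
INSERT INTO protein_sequence VALUES
  (1, '1.1.1.1', 'MST...'),
  (2, '2.7.1.1', 'MAA...');
""")

# Activities for which no sequence exists -- the kind of multi-table query
# that becomes trivial once all sources share one schema.
row = conn.execute("""
SELECT COUNT(*) FROM enzyme_activity a
WHERE NOT EXISTS (SELECT 1 FROM protein_sequence s WHERE s.ec = a.ec)
""").fetchone()
print(row[0], "activities lack any sequence")
```

Running the same anti-join over the real integrated sources is what yielded the article's 36% figure.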

  9. International Nuclear Safety Center (INSC) database

    International Nuclear Information System (INIS)

    Sofu, T.; Ley, H.; Turski, R.B.

    1997-01-01

    As an integral part of DOE's International Nuclear Safety Center (INSC) at Argonne National Laboratory, the INSC Database has been established to provide an interactively accessible information resource for the world's nuclear facilities and to promote free and open exchange of nuclear safety information among nations. The INSC Database is a comprehensive resource database aimed at a scope and level of detail suitable for safety analysis and risk evaluation for the world's nuclear power plants and facilities. It also provides an electronic forum for international collaborative safety research for the Department of Energy and its international partners. The database is intended to provide plant design information, material properties, computational tools, and results of safety analysis. Initial emphasis in data gathering is given to Soviet-designed reactors in Russia, the former Soviet Union, and Eastern Europe. The implementation is performed under the Oracle database management system, and the World Wide Web is used to serve as the access path for remote users. An interface between the Oracle database and the Web server is established through a custom-designed Web-Oracle gateway, which is used mainly to perform queries on the stored data in the database tables.

  10. In-situ databases and comparison of ESA Ocean Colour Climate Change Initiative (OC-CCI) products with precursor data, towards an integrated approach for ocean colour validation and climate studies

    Science.gov (United States)

    Brotas, Vanda; Valente, André; Couto, André B.; Grant, Mike; Chuprin, Andrei; Jackson, Thomas; Groom, Steve; Sathyendranath, Shubha

    2014-05-01

    Ocean colour (OC) is an oceanic Essential Climate Variable (ECV), used by climate modellers and researchers. The European Space Agency (ESA) Climate Change Initiative (CCI) project is ESA's response to the need for climate-quality satellite data, with the goal of providing stable, long-term, satellite-based ECV data products. The ESA Ocean Colour CCI focuses on the production of the Ocean Colour ECV, using remote sensing reflectances to derive inherent optical properties and chlorophyll-a concentration from ESA's MERIS (2002-2012) and NASA's SeaWiFS (1997-2010) and MODIS (2002-2012) sensor archives. This work presents an integrated approach: setting up a global database of in situ measurements and inter-comparing OC-CCI products with precursor datasets. The availability of in situ databases is fundamental for the validation of satellite-derived ocean colour products. An in situ database with global distribution was assembled from several pre-existing datasets, with data spanning 1997 to 2012. It includes in situ measurements of remote sensing reflectances, chlorophyll-a concentration, inherent optical properties and the diffuse attenuation coefficient. The database is composed of observations from the following datasets: NOMAD, SeaBASS, MERMAID, AERONET-OC, BOUSSOLE and HOTS. The result was a merged dataset tuned for the validation of satellite-derived ocean colour products: an effort to gather, homogenize and merge a large body of high-quality bio-optical marine in situ data, since using all datasets in a single validation exercise increases the number of matchups and enhances the representativeness of different marine regimes. An inter-comparison analysis between the OC-CCI chlorophyll-a product and satellite precursor datasets was performed for single missions and for merged single-mission products. 
Single mission datasets considered were SeaWiFS, MODIS-Aqua and MERIS; merged mission datasets were obtained from the GlobColour (GC) as well as the Making Earth Science
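    The "matchup" step mentioned above pairs each in situ sample with a satellite observation close in time and space. The sketch below is a simplified, hypothetical illustration of that idea (nearest observation in time within a fixed window); the data values and the 3-hour window are invented, and OC-CCI's actual matchup protocol is more involved (spatial box statistics, quality flags, and so on).

```python
from datetime import datetime, timedelta

def find_matchups(in_situ, satellite, window=timedelta(hours=3)):
    """Pair each in situ (time, chl-a) sample with the nearest-in-time
    satellite (time, chl-a) observation, keeping pairs within the window."""
    matchups = []
    for t_obs, chl_obs in in_situ:
        t_sat, chl_sat = min(satellite, key=lambda s: abs(s[0] - t_obs))
        if abs(t_sat - t_obs) <= window:
            matchups.append((chl_obs, chl_sat))
    return matchups

# Invented example data: two field samples, two satellite overpasses.
in_situ = [(datetime(2005, 6, 1, 10), 0.31), (datetime(2005, 6, 2, 9), 0.45)]
satellite = [(datetime(2005, 6, 1, 12), 0.28), (datetime(2005, 6, 5, 12), 0.50)]
pairs = find_matchups(in_situ, satellite)
print(pairs)  # only the first sample has an overpass within 3 hours
```

Merging many in situ datasets, as the abstract describes, simply enlarges `in_situ` and so increases the number of pairs available for validation statistics.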

  11. ADVANCED VOCATIONAL TRAINING OF ENVIRONMENTAL PROFESSIONALS FOR PROVIDING SUSTAINABLE DEVELOPMENT OF RAILWAYS OF UKRAINE ON THE WAY TO EUROPEAN INTEGRATION

    Directory of Open Access Journals (Sweden)

    Zoriana Dvulit

    2017-12-01

    Full Text Available The subject of the research is the state of advanced training of environmental professionals and specialists on six railways of Ukrzaliznytsia PJSC: the Donetsk, Lviv, Odesa, Pivdenna (Southern), Pivdenno-Zakhidna (Southwestern) and Pridniprovska Railways. The purpose of the article is to study the issue of providing the necessary qualification level of postgraduate education (advanced training) of environmental professionals and specialists at the six Ukrainian railways. The methodology of the research: in order to achieve the goal, the following methods are used in the article: 1) statistical methods and methods of comparative analysis; 2) questionnaires and expert surveys of environmental professionals and specialists; 3) taxonomic methods. The novelty of the research: the state of the issue of ensuring the necessary level of professional development of environmental professionals and specialists at the six railways of Ukrzaliznytsia PJSC is investigated. Namely: 1) the state of the level of professional development of environmental professionals and specialists of the six railways of Ukrzaliznytsia PJSC for the period from 2012 to 2016 is researched and evaluated, and its structural and dynamic analysis is carried out; 2) taxonomic indicators of the level of development of the career development system for environmental professionals and specialists, as the distribution of expenses for advanced training across the six railways for 2012-2016, are calculated; 3) a questionnaire survey is carried out among environmental professionals and specialists (both staffed and part-time workers) whose list of functional responsibilities, in accordance with the job description, includes issues of the use of natural resources and environmental protection, in order to clarify the availability of environmental education, the length of work in the railway, the length of work in positions associated with environmental activities, and the level of satisfaction with the content of their work

  12. Database Optimizing Services

    Directory of Open Access Journals (Sweden)

    Adrian GHENCEA

    2010-12-01

    Full Text Available Almost every organization has a database at its centre. The database provides support for conducting different activities, whether it is production, sales and marketing or internal operations. Every day, a database is accessed for help in strategic decisions. Satisfying such needs therefore requires high-quality security and availability. Those needs can be met using a DBMS (Database Management System), which is, in fact, the software for a database. Technically speaking, it is software which uses a standard method of cataloguing, recovery, and running different data queries. A DBMS manages the input data, organizes it, and provides ways for its users or other programs to modify or extract the data. Managing a database is an operation that requires periodic updates, optimization and monitoring.
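    The routine optimization work mentioned above can be made concrete with a small example. The sketch below uses Python's built-in sqlite3 as the DBMS (the table and data are invented) and shows how adding an index changes the way the engine resolves a query, which is the kind of tuning a database administrator monitors.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, region TEXT, amount REAL)")
cur.executemany("INSERT INTO sales (region, amount) VALUES (?, ?)",
                [("north", 120.0), ("south", 80.0), ("north", 45.5)])

# Ask the engine how it would execute the query before adding an index...
plan_before = cur.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(amount) FROM sales WHERE region = 'north'"
).fetchall()

# ...then add an index on the filtered column and ask again.
cur.execute("CREATE INDEX idx_sales_region ON sales(region)")
plan_after = cur.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(amount) FROM sales WHERE region = 'north'"
).fetchall()

print(plan_before[0][3])  # a full table scan of sales
print(plan_after[0][3])   # a search using idx_sales_region
```

On large tables the difference between a full scan and an index search dominates query latency, which is why periodic optimization and monitoring are part of managing any production database.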

  13. DataBase on Demand

    International Nuclear Information System (INIS)

    Aparicio, R Gaspar; Gomez, D; Wojcik, D; Coz, I Coterillo

    2012-01-01

    At CERN a number of key database applications run on user-managed MySQL database services. The Database on Demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the centralised Oracle-based database services. Database on Demand (DBoD) empowers users to perform certain actions traditionally performed by database administrators (DBAs), providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines, at present the open community version of MySQL and a single-instance Oracle database server. This article describes a technology approach to face this challenge, the service level agreement (SLA) that the project provides, and an evolution of possible scenarios.

  14. Database security in the cloud

    OpenAIRE

    Sakhi, Imal

    2012-01-01

    The aim of the thesis is to get an overview of the database services available in the cloud computing environment, investigate the security risks associated with them, and propose possible countermeasures to minimize the risks. The thesis also analyzes two cloud database service providers, namely Amazon RDS and Xeround. The reason behind choosing these two providers is that they are currently amongst the leading cloud database providers and both provide relational cloud databases, which makes ...

  15. An evaluation of the performance of an integrated solar combined cycle plant provided with air-linear parabolic collectors

    International Nuclear Information System (INIS)

    Amelio, Mario; Ferraro, Vittorio; Marinelli, Valerio; Summaria, Antonio

    2014-01-01

    An evaluation of the performance of an innovative solar system integrated in a combined cycle plant is presented, in which the heat transfer fluid flowing in the linear parabolic collectors is the same oxidant air that is introduced into the combustion chamber of the plant. This peculiarity allows a great simplification of the plant. A 22% saving of fossil fuel results in design conditions, and 15.5% on an annual basis, when the plant works at nominal volumetric flow rate in the daylight hours. The net average annual efficiency is 60.9%, against the value of 51.4% for a reference combined cycle plant without solar integration. Moreover, an economic evaluation of the plant is carried out, which shows that the extra cost of the solar part is recovered in about 5 years. - Highlights: • A model to calculate an innovative ISCCS (Integrated Solar Combined Cycle System) solar plant is presented. • The plant uses air as heat transfer fluid as well as oxidant in the combustor. • The plant presents a very high thermodynamic efficiency. • The plant is very simple in comparison with existing ISCCS

  16. Database Security: A Historical Perspective

    OpenAIRE

    Lesov, Paul

    2010-01-01

    The importance of security in database research has greatly increased over the years as most of the critical functionality of business and military enterprises has become digitized. Databases are an integral part of any information system and they often hold sensitive data. The security of the data depends on physical security, OS security and DBMS security. Database security can be compromised by obtaining sensitive data, changing data or degrading the availability of the database. Over the last 30 ye...

  17. LandIT Database

    DEFF Research Database (Denmark)

    Iftikhar, Nadeem; Pedersen, Torben Bach

    2010-01-01

    and reporting purposes. This paper presents the LandIT database, which is the result of the LandIT project, an industrial collaboration project that developed technologies for communication and data integration between farming devices and systems. The LandIT database in principle is based...... on the ISOBUS standard; however the standard is extended with additional requirements, such as gradual data aggregation and flexible exchange of farming data. This paper describes the conceptual and logical schemas of the proposed database based on a real-life farming case study....

  18. JICST Factual Database(2)

    Science.gov (United States)

    Araki, Keisuke

    A computer programme that builds atom-bond connection tables from nomenclature has been developed. Chemical substances are input together with their nomenclature and a variety of trivial names or experimental code numbers. The chemical structures in the database are stored stereospecifically and can be searched and displayed according to stereochemistry. Source data come from the laws and regulations of Japan, the RTECS of the US, and so on. The database plays a central role within the integrated fact database service of JICST and makes interrelational retrieval possible.
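    The data structures involved can be illustrated with a toy example: an atom-bond connection table for one molecule, reachable from several trivial names. This is a hypothetical sketch only; real nomenclature-to-structure translation of the kind the JICST system performs requires a full grammar for chemical names, and the names and tables below are invented for illustration.

```python
# A connection table lists the atoms of a molecule and the bonds between
# them as (atom_i, atom_j, bond_order) triples. Ethanol (heavy atoms only):
connection_tables = {
    "ethanol": {
        "atoms": ["C", "C", "O"],
        "bonds": [(0, 1, 1), (1, 2, 1)],
    },
}

# Trivial names and code numbers resolve to the same canonical entry.
synonyms = {"ethyl alcohol": "ethanol", "EtOH": "ethanol"}

def lookup(name):
    """Resolve a trivial name or synonym to its connection table."""
    return connection_tables[synonyms.get(name, name)]

table = lookup("EtOH")
print(len(table["atoms"]), "atoms,", len(table["bonds"]), "bonds")  # prints "3 atoms, 2 bonds"
```

Storing structures this way, rather than as names, is what makes substructure and stereochemistry-aware searching possible.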

  19. Integrated social facility location planning for decision support: Accessibility studies provide support to facility location and integration of social service provision

    CSIR Research Space (South Africa)

    Green, Cheri A

    2012-09-01

    Full Text Available for two or more facilities to create an integrated plan for development. Step 6: costing of the development plan. Case study: access norms and threshold guidelines in accessibility analysis. Appropriate norms/provision guidelines facilitate both service... access norms and threshold standards: • test the relationship between service demand and the supply (service capacity) of the facility provision points within a defined catchment area; • promote the "right-sizing" of facilities relative to the demand...

  20. Project of an information integrated system to provide support to the regulatory control of the radioactive waste inventory

    International Nuclear Information System (INIS)

    Christovao, Marilia Tavares

    2005-05-01

    Sources and radioactive waste deriving from industry, medical practice and other areas are collected, received, and stored as waste at the Brazilian Nuclear Energy Commission (CNEN) institutes, which also generate, treat and store their own radioactive waste. The aim of this project is to present an integrated information system named SICORR, guided by the processes related to radioactive waste regulatory control under the responsibility of the Radioactive Waste Division (DIREJ), the General Coordination of Licensing and Control (CGLC), the Directorate of Safety and Radiation Protection (DRS) and CNEN. The main objective of the work was achieved, since the SICORR model covers the radioactive waste inventory control, encompassing the treatment and integration of radioactive waste and radionuclide data and processes; data on the installations that produce, use, transport or store radiation sources; and data on the CNEN institutes responsible for radioactive waste management. The SICORR functions, or essential modules, involve data treatment, integration, standardization and consistency across processes. The SICORR specification and analysis results are recorded in the Software Specification Proposal (PESw) and Software Requirements Specification (ERSw) documents, and are presented as text, diagrams and user interfaces. Use cases have been employed in the SICORR context diagram. The user interfaces for each use case have been detailed, defining the graphical layout, the relationships with other interfaces, the interface properties, the commands, and the product inputs and outputs. State diagrams have been drawn for the radioactive waste and radionuclide objects. The activity diagram represents the business process model. The class diagram represents the static objects and the relationships that exist between them, from the specification point of view. The class diagram has been determined

  1. Integration of copy number and transcriptomics provides risk stratification in prostate cancer: A discovery and validation cohort study

    Science.gov (United States)

    Ross-Adams, H.; Lamb, A.D.; Dunning, M.J.; Halim, S.; Lindberg, J.; Massie, C.M.; Egevad, L.A.; Russell, R.; Ramos-Montoya, A.; Vowler, S.L.; Sharma, N.L.; Kay, J.; Whitaker, H.; Clark, J.; Hurst, R.; Gnanapragasam, V.J.; Shah, N.C.; Warren, A.Y.; Cooper, C.S.; Lynch, A.G.; Stark, R.; Mills, I.G.; Grönberg, H.; Neal, D.E.

    2015-01-01

    Background Understanding the heterogeneous genotypes and phenotypes of prostate cancer is fundamental to improving the way we treat this disease. As yet, there are no validated descriptions of prostate cancer subgroups derived from integrated genomics linked with clinical outcome. Methods In a study of 482 tumour, benign and germline samples from 259 men with primary prostate cancer, we used integrative analysis of copy number alterations (CNA) and array transcriptomics to identify genomic loci that affect expression levels of mRNA in an expression quantitative trait loci (eQTL) approach, to stratify patients into subgroups that we then associated with future clinical behaviour, and compared with either CNA or transcriptomics alone. Findings We identified five separate patient subgroups with distinct genomic alterations and expression profiles based on 100 discriminating genes in our separate discovery and validation sets of 125 and 103 men. These subgroups were able to consistently predict biochemical relapse (p = 0.0017 and p = 0.016 respectively) and were further validated in a third cohort with long-term follow-up (p = 0.027). We show the relative contributions of gene expression and copy number data on phenotype, and demonstrate the improved power gained from integrative analyses. We confirm alterations in six genes previously associated with prostate cancer (MAP3K7, MELK, RCBTB2, ELAC2, TPD52, ZBTB4), and also identify 94 genes not previously linked to prostate cancer progression that would not have been detected using either transcript or copy number data alone. We confirm a number of previously published molecular changes associated with high risk disease, including MYC amplification, and NKX3-1, RB1 and PTEN deletions, as well as over-expression of PCA3 and AMACR, and loss of MSMB in tumour tissue. A subset of the 100 genes outperforms established clinical predictors of poor prognosis (PSA, Gleason score), as well as previously published gene

  2. Federal databases

    International Nuclear Information System (INIS)

    Welch, M.J.; Welles, B.W.

    1988-01-01

    Accident statistics on all modes of transportation are available as risk assessment analytical tools through several federal agencies. This paper reports on the examination of the accident databases by personal contact with the federal staff responsible for administration of the database programs. This activity, sponsored by the Department of Energy through Sandia National Laboratories, is an overview of the national accident data on highway, rail, air, and marine shipping. For each mode, the definition or reporting requirements of an accident are determined and the method of entering the accident data into the database is established. Availability of the database to others, ease of access, costs, and who to contact were prime questions for each of the database program managers. Additionally, how the agency uses the accident data was of major interest.

  3. Elastic-Plastic J-Integral Solutions or Surface Cracks in Tension Using an Interpolation Methodology. Appendix C -- Finite Element Models Solution Database File, Appendix D -- Benchmark Finite Element Models Solution Database File

    Science.gov (United States)

    Allen, Phillip A.; Wells, Douglas N.

    2013-01-01

    No closed-form solutions exist for the elastic-plastic J-integral for surface cracks due to the nonlinear, three-dimensional nature of the problem. Traditionally, each surface crack must be analyzed with a unique and time-consuming nonlinear finite element analysis. To overcome this shortcoming, the authors have developed and analyzed an array of 600 3D nonlinear finite element models for surface cracks in flat plates under tension loading. The solution space covers a wide range of crack shapes and depths (shape: 0.2 ≤ a/c ≤ 1, depth: 0.2 ≤ a/B ≤ 0.8) and material flow properties (elastic modulus-to-yield ratio: 100 ≤ E/ys ≤ 1,000, and hardening exponent: 3 ≤ n ≤ 20). The authors have developed a methodology for interpolating between the geometric and material property variables that allows the user to reliably evaluate the full elastic-plastic J-integral and force versus crack mouth opening displacement solution; thus, a solution can be obtained very rapidly by users without elastic-plastic fracture mechanics modeling experience. Complete solutions for the 600 models and 25 additional benchmark models are provided in tabular format.
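    The interpolation idea can be sketched in one dimension: given J-integral values tabulated at discrete parameter points, estimate J at an intermediate point by piecewise-linear interpolation. The numbers below are invented for illustration (the abstract's actual solution space interpolates across a/c, a/B, the modulus-to-yield ratio, and the hardening exponent simultaneously).

```python
from bisect import bisect_left

def interp1d(xs, ys, x):
    """Piecewise-linear interpolation of y(x) on a sorted grid xs,
    clamping to the endpoints outside the grid."""
    i = bisect_left(xs, x)
    if i == 0:
        return ys[0]
    if i == len(xs):
        return ys[-1]
    x0, x1 = xs[i - 1], xs[i]
    t = (x - x0) / (x1 - x0)
    return ys[i - 1] * (1 - t) + ys[i] * t

a_over_c = [0.2, 0.4, 0.6, 0.8, 1.0]       # crack-shape grid (as in the abstract)
j_values = [10.0, 14.0, 19.0, 25.0, 32.0]  # tabulated J, illustrative units
print(interp1d(a_over_c, j_values, 0.5))   # midway between 14.0 and 19.0
```

Extending this to the four parameters of the solution space means interpolating along each axis in turn (multilinear interpolation over the 600-model grid), which is what lets a user read off a full J solution without running a new finite element analysis.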

  4. Atomic Spectra Database (ASD)

    Science.gov (United States)

    SRD 78 NIST Atomic Spectra Database (ASD) (Web, free access)   This database provides access and search capability for NIST critically evaluated data on atomic energy levels, wavelengths, and transition probabilities that are reasonably up-to-date. The NIST Atomic Spectroscopy Data Center has carried out these critical compilations.

  5. HIV/TB co-infection: perspectives of TB patients and providers on the integrated HIV/TB pilot program in Tamilnadu, India

    OpenAIRE

    Lakshminarayanan, Mahalakshmi

    2009-01-01

    The WHO recommends routine HIV testing among TB patients as a key strategy to combat the dual HIV/TB epidemic. India has integrated its HIV and TB control programs and has offered provider-initiated HIV testing for all TB patients since 2007. Using a mixed-methods approach, this study aims to understand the perspectives of TB patients and providers on the integrated HIV/TB pilot program in Tamilnadu, India. A survey conducted by the Tuberculosis Research Center, India on 300 TB patients is th...

  6. WEB Services Networks and Technological Hybrids — The Integration Challenges of WAN Distributed Computing for ASP Providers

    Science.gov (United States)

    Mroczkiewicz, Pawel

    The need to integrate the information systems and office software used in organizations has a long history. Solutions of this kind date back to the old generation of network protocols called EDI (Electronic Data Interchange) and the EDIFACT standard, which was initiated in 1988 and has dynamically evolved ever since (S. Michalski, M. Suskiewicz, 1995). The protocol was usually used for converting documents into the native formats processed by applications. It caused problems with binary files and, furthermore, the communication mechanisms had to be modified each time new documents or applications were added. Compared with previously used communication mechanisms, EDI was a great step forward, as it was the first large-scale attempt to define standards of data interchange between applications in business transactions (V. Leyland, 1995, p. 47).

  7. The human keratinocyte two-dimensional protein database (update 1994): towards an integrated approach to the study of cell proliferation, differentiation and skin diseases

    DEFF Research Database (Denmark)

    Celis, J E; Rasmussen, H H; Olsen, E

    1994-01-01

    The master two-dimensional (2-D) gel database of human keratinocytes currently lists 3087 cellular proteins (2168 isoelectric focusing, IEF; and 919 nonequilibrium pH gradient electrophoresis, NEPHGE), many of which correspond to posttranslational modifications; 890 polypeptides have been...... in the database. We also report a database of proteins recovered from the medium of noncultured, unfractionated keratinocytes. This database lists 398 polypeptides (309 IEF; 89 NEPHGE), of which 76 have been identified. The aim of the comprehensive databases is to gather, through a systematic study......

  8. Linking the Taiwan Fish Database to the Global Database

    Directory of Open Access Journals (Sweden)

    Kwang-Tsao Shao

    2007-03-01

    Full Text Available Under the support of the National Digital Archive Program (NDAP, basic species information about most Taiwanese fishes, including their morphology, ecology, distribution, specimens with photos, and literatures have been compiled into the "Fish Database of Taiwan" (http://fishdb.sinica.edu.tw. We expect that the all-Taiwan fish species databank (RSD), with 2,800+ species, and the digital "Fish Fauna of Taiwan" will be completed in 2007. Underwater ecological photos and video images for all 2,800+ fishes are quite difficult to achieve but will be collected continuously in the future. In the last year of NDAP, we have successfully integrated all fish specimen data deposited at 7 different institutes in Taiwan as well as their collection maps on the Google Map and Google Earth. Further, the database also provides the pronunciation of Latin scientific names and transliteration of Chinese common names by referring to the Romanization system for all Taiwanese fishes (2,902 species in 292 families so far. The Taiwanese fish species checklist with Chinese common/vernacular names and specimen data has been updated periodically and provided to the global FishBase as well as the Global Biodiversity Information Facility (GBIF through the national portal of the Taiwan Biodiversity Information Facility (TaiBIF. Thus, Taiwanese fish data can be queried and browsed on the WWW. For contributing to the "Barcode of Life" and "All Fishes" international projects, alcohol-preserved specimens of more than 1,800 species and cryobanking tissues of 800 species have been accumulated at RCBAS in the past two years. Through this close collaboration between local and global databases, "The Fish Database of Taiwan" now attracts more than 250,000 visitors and achieves 5 million hits per month. We believe that this local database is becoming an important resource for education, research, conservation, and sustainable use of fish in Taiwan.

  9. Building a comprehensive mill-level database for the Industrial Sectors Integrated Solutions (ISIS) model of the U.S. pulp and paper sector.

    Science.gov (United States)

    Modak, Nabanita; Spence, Kelley; Sood, Saloni; Rosati, Jacky Ann

    2015-01-01

    Air emissions from the U.S. pulp and paper sector have been federally regulated since 1978; however, regulations are periodically reviewed and revised to improve efficiency and effectiveness of existing emission standards. The Industrial Sectors Integrated Solutions (ISIS) model for the pulp and paper sector is currently under development at the U.S. Environmental Protection Agency (EPA), and can be utilized to facilitate multi-pollutant, sector-based analyses that are performed in conjunction with regulatory development. The model utilizes a multi-sector, multi-product dynamic linear modeling framework that evaluates the economic impact of emission reduction strategies for multiple air pollutants. The ISIS model considers facility-level economic, environmental, and technical parameters, as well as sector-level market data, to estimate the impacts of environmental regulations on the pulp and paper industry. Specifically, the model can be used to estimate U.S. and global market impacts of new or more stringent air regulations, such as impacts on product price, exports and imports, market demands, capital investment, and mill closures. One major challenge to developing a representative model is the need for an extensive amount of data. This article discusses the collection and processing of data for use in the model, as well as the methods used for building the ISIS pulp and paper database that facilitates the required analyses to support the air quality management of the pulp and paper sector.

  10. The on scene command and control system (OSC2) : an integrated incident command system (ICS) forms-database management system and oil spill trajectory and fates model

    International Nuclear Information System (INIS)

    Anderson, E.; Galagan, C.; Howlett, E.

    1998-01-01

    The On Scene Command and Control (OSC2) system is an oil spill modeling tool which was developed to combine Incident Command System (ICS) forms, an underlying database, an integrated geographical information system (GIS) and an oil spill trajectory and fate model. The first use of the prototype OSC2 system was at a PREP drill conducted at the U.S. Coast Guard Marine Safety Office, San Diego, in April 1998. The goal of the drill was to simulate a real-time response over a 36-hour period using the Unified Command System. The simulated spill was the result of a collision between two vessels inside San Diego Bay that caused the release of 2,000 barrels of fuel oil. The hardware component of the system which was tested included three notebook computers, two laser printers, and a poster printer. The field test was a success, but it was not a rigorous test of the system's capabilities. The map display was useful in quickly setting up the ICS divisions and groups and in deploying resources. 6 refs., 1 tab., 5 figs

  11. Building a Comprehensive Mill-Level Database for the Industrial Sectors Integrated Solutions (ISIS) Model of the U.S. Pulp and Paper Sector

    Science.gov (United States)

    Modak, Nabanita; Spence, Kelley; Sood, Saloni; Rosati, Jacky Ann

    2015-01-01

    Air emissions from the U.S. pulp and paper sector have been federally regulated since 1978; however, regulations are periodically reviewed and revised to improve efficiency and effectiveness of existing emission standards. The Industrial Sectors Integrated Solutions (ISIS) model for the pulp and paper sector is currently under development at the U.S. Environmental Protection Agency (EPA), and can be utilized to facilitate multi-pollutant, sector-based analyses that are performed in conjunction with regulatory development. The model utilizes a multi-sector, multi-product dynamic linear modeling framework that evaluates the economic impact of emission reduction strategies for multiple air pollutants. The ISIS model considers facility-level economic, environmental, and technical parameters, as well as sector-level market data, to estimate the impacts of environmental regulations on the pulp and paper industry. Specifically, the model can be used to estimate U.S. and global market impacts of new or more stringent air regulations, such as impacts on product price, exports and imports, market demands, capital investment, and mill closures. One major challenge to developing a representative model is the need for an extensive amount of data. This article discusses the collection and processing of data for use in the model, as well as the methods used for building the ISIS pulp and paper database that facilitates the required analyses to support the air quality management of the pulp and paper sector. PMID:25806516

  12. Refactoring databases evolutionary database design

    CERN Document Server

    Ambler, Scott W

    2006-01-01

    Refactoring has proven its value in a wide range of development projects–helping software professionals improve system designs, maintainability, extensibility, and performance. Now, for the first time, leading agile methodologist Scott Ambler and renowned consultant Pramodkumar Sadalage introduce powerful refactoring techniques specifically designed for database systems. Ambler and Sadalage demonstrate how small changes to table structures, data, stored procedures, and triggers can significantly enhance virtually any database design–without changing semantics. You’ll learn how to evolve database schemas in step with source code–and become far more effective in projects relying on iterative, agile methodologies. This comprehensive guide and reference helps you overcome the practical obstacles to refactoring real-world databases by covering every fundamental concept underlying database refactoring. Using start-to-finish examples, the authors walk you through refactoring simple standalone databas...

  13. Integrating field plots, lidar, and landsat time series to provide temporally consistent annual estimates of biomass from 1990 to present

    Science.gov (United States)

    Warren B. Cohen; Hans-Erik Andersen; Sean P. Healey; Gretchen G. Moisen; Todd A. Schroeder; Christopher W. Woodall; Grant M. Domke; Zhiqiang Yang; Robert E. Kennedy; Stephen V. Stehman; Curtis Woodcock; Jim Vogelmann; Zhe Zhu; Chengquan. Huang

    2015-01-01

    We are developing a system that provides temporally consistent biomass estimates for national greenhouse gas inventory reporting to the United Nations Framework Convention on Climate Change. Our model-assisted estimation framework relies on remote sensing to scale from plot measurements to lidar strip samples, to Landsat time series-based maps. As a demonstration, new...

  14. Integrative analysis of kinase networks in TRAIL-induced apoptosis provides a source of potential targets for combination therapy

    DEFF Research Database (Denmark)

    So, Jonathan; Pasculescu, Adrian; Dai, Anna Y.

    2015-01-01

    phosphoproteomics. With these protein interaction maps, we modeled information flow through the networks and identified apoptosis-modifying kinases that are highly connected to regulated substrates downstream of TRAIL. The results of this analysis provide a resource of potential targets for the development of TRAIL...

  15. RDD Databases

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This database was established to oversee documents issued in support of fishery research activities including experimental fishing permits (EFP), letters of...

  16. Snowstorm Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Snowstorm Database is a collection of over 500 snowstorms dating back to 1900 and updated operationally. Only storms having large areas of heavy snowfall (10-20...

  17. Dealer Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The dealer reporting databases contain the primary data reported by federally permitted seafood dealers in the northeast. Electronic reporting was implemented May 1,...

  18. Integrated analysis of epigenomic and genomic changes by DNA methylation dependent mechanisms provides potential novel biomarkers for prostate cancer.

    Science.gov (United States)

    White-Al Habeeb, Nicole M A; Ho, Linh T; Olkhov-Mitsel, Ekaterina; Kron, Ken; Pethe, Vaijayanti; Lehman, Melanie; Jovanovic, Lidija; Fleshner, Neil; van der Kwast, Theodorus; Nelson, Colleen C; Bapat, Bharati

    2014-09-15

Epigenetic silencing mediated by CpG methylation is a common feature of many cancers. Characterizing aberrant DNA methylation changes associated with tumor progression may identify potential prognostic markers for prostate cancer (PCa). We treated two PCa cell lines, 22Rv1 and DU-145, with the demethylating agent 5-Aza-2'-deoxycytidine (DAC), and global methylation status was analyzed using a methylation-sensitive restriction enzyme-based differential methylation hybridization strategy followed by genome-wide CpG methylation array profiling. In addition, we examined gene expression changes using a custom microarray. Gene Set Enrichment Analysis (GSEA) identified the most significantly dysregulated pathways. In addition, we assessed the methylation status of candidate genes that showed reduced CpG methylation and increased gene expression after DAC treatment in Gleason score (GS) 8 vs. GS6 patients, using three independent patient cohorts: the publicly available The Cancer Genome Atlas (TCGA) dataset and two separate patient cohorts. Our analysis, integrating methylation and gene expression in PCa cell lines with patient tumor data, identified novel potential biomarkers for PCa patients. These markers may help elucidate the pathogenesis of PCa and represent potential prognostic markers for PCa patients.

  19. National database

    DEFF Research Database (Denmark)

    Kristensen, Helen Grundtvig; Stjernø, Henrik

    1995-01-01

Article about a national database for nursing research established at Dansk Institut for Sundheds- og Sygeplejeforskning (the Danish Institute for Health and Nursing Research). The aim of the database is to gather knowledge about research and development activities within nursing.

  20. Livestock Anaerobic Digester Database

    Science.gov (United States)

    The Anaerobic Digester Database provides basic information about anaerobic digesters on livestock farms in the United States, organized in Excel spreadsheets. It includes projects that are under construction, operating, or shut down.

  1. Dissolution Methods Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — For a drug product that does not have a dissolution test method in the United States Pharmacopeia (USP), the FDA Dissolution Methods Database provides information on...

  2. Household Products Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — This database links over 4,000 consumer brands to health effects from Material Safety Data Sheets (MSDS) provided by the manufacturers and allows scientists and...

  3. A METHOD AND AN APPARATUS FOR PROVIDING TIMING SIGNALS TO A NUMBER OF CIRCUITS, AN INTEGRATED CIRCUIT AND A NODE

    DEFF Research Database (Denmark)

    2006-01-01

A method of providing or transporting a timing signal between a number of circuits, electrical or optical, where each circuit is fed by a node. The nodes forward timing signals between each other, and at least one node is adapted to not transmit a timing signal before having received a timing signal from at least two nodes. In this manner, the direction of the timing skew between nodes and circuits is known, and data transport between the circuits is made easier.

  4. Integrating a Typhoon Event Database with an Optimal Flood Operation Model on the Real-Time Flood Control of the Tseng-Wen Reservoir

    Science.gov (United States)

    Chen, Y. W.; Chang, L. C.

    2012-04-01

Typhoons, which normally bring large amounts of precipitation, are the primary natural hazard in Taiwan during the flood season. Because the plentiful rainfall brought by typhoons is normally stored for use during the subsequent drought period, determining release strategies for reservoir flood operation is important: such strategies must simultaneously consider reservoir safety, flood damage in the plain area, and the water resources stored in the reservoir after the typhoon. This study proposes a two-step process. First, it develops an optimal flood operation model (OFOM) for flood control planning and applies it to the Tseng-wen reservoir and its downstream plain. Second, integrating a typhoon event database with the OFOM gives the planning model the ability to deal with real-time flood control problems; this extension is named the real-time flood operation model (RTFOM). Three conditions are considered in both proposed models, OFOM and RTFOM: the safety of the reservoir itself, the reservoir storage after typhoons, and the impact of flooding in the plain area. The flood operation guideline announced by the government is also considered. These conditions and the guideline are formulated as an optimization problem, which is solved by a genetic algorithm (GA) in this study. Furthermore, a distributed runoff model, the kinematic-wave geomorphic instantaneous unit hydrograph (KW-GIUH), and a river flow simulation model, HEC-RAS, are used to simulate river water levels in the plain area of the Tseng-wen basin, and the simulated level serves as an index of flood impact. Because the simulated levels must be re-calculated iteratively within the optimization model, replacing the HEC-RAS model with a recursive artificial neural network (recursive ANN) can significantly reduce the computational burden of
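
    The optimization loop the abstract describes, a GA whose fitness evaluation calls a fast recursive-ANN surrogate instead of a full HEC-RAS run, can be sketched as below. Everything here is illustrative: the surrogate, the penalty weights, and the bounds are toy stand-ins, not the authors' calibrated models.

```python
import random

# Toy stand-in for the recursive ANN surrogate of HEC-RAS: maps a sequence of
# hourly release rates [m3/s] to a peak downstream water level [m].
def surrogate_peak_level(releases):
    level, peak = 2.0, 2.0
    for r in releases:
        level = 0.8 * level + 0.002 * r  # simple recursive (auto-regressive) response
        peak = max(peak, level)
    return peak

def fitness(releases, inflow=500.0, flood_level=3.5):
    # Penalize deviation from target storage and any exceedance of the flood level.
    storage_penalty = abs(sum(inflow - r for r in releases)) * 1e-4
    flood_penalty = max(0.0, surrogate_peak_level(releases) - flood_level) * 100.0
    return -(storage_penalty + flood_penalty)  # higher is better

def genetic_algorithm(n_hours=24, pop_size=30, generations=40, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(0, 1000) for _ in range(n_hours)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_hours)      # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_hours)           # point mutation, clamped to bounds
            child[i] = min(1000.0, max(0.0, child[i] + rng.gauss(0, 50)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = genetic_algorithm()
```

    Because the surrogate is a few arithmetic operations per candidate, the GA can evaluate thousands of release schedules in the time one hydraulic simulation would take, which is the point of the ANN substitution described above.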

  5. Product Licenses Database Application

    CERN Document Server

    Tonkovikj, Petar

    2016-01-01

The goal of this project is to organize and centralize the data about software tools available to CERN employees, as well as to provide a system that simplifies the license management process by providing information about the available licenses and their expiry dates. The project development process consists of two steps: modeling the products (software tools), product licenses, legal agreements and other data related to these entities in a relational database, and developing the front-end user interface through which the user interacts with the database. The result is an ASP.NET MVC web application with interactive views for displaying and managing the data in the underlying database.
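
    The core of such a system is the relational model: products, licenses tied to products, and a query over expiry dates. A minimal sketch of that model follows (table layout, product names, and dates are invented for illustration; the real application uses ASP.NET and its own schema).

```python
import sqlite3

# Simplified relational model: products and their licenses with expiry dates.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE product (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE license (
    id         INTEGER PRIMARY KEY,
    product_id INTEGER NOT NULL REFERENCES product(id),
    seats      INTEGER NOT NULL,
    expires_on TEXT NOT NULL          -- ISO date string, sorts chronologically
);
""")
conn.execute("INSERT INTO product VALUES (1, 'MATLAB'), (2, 'LabVIEW')")
conn.execute("INSERT INTO license VALUES (1, 1, 50, '2016-06-30'), (2, 2, 10, '2017-01-31')")

def licenses_expiring_before(conn, cutoff):
    """Return (product, seats, expiry) rows for licenses expiring before cutoff."""
    return conn.execute(
        """SELECT p.name, l.seats, l.expires_on
           FROM license l JOIN product p ON p.id = l.product_id
           WHERE l.expires_on < ?
           ORDER BY l.expires_on""",
        (cutoff,),
    ).fetchall()

rows = licenses_expiring_before(conn, "2016-12-31")
```

    The expiry query is the kind of view the application surfaces to administrators so renewals can be planned before licenses lapse.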

  6. Collaborate across silos: Perceived barriers to integration of care for the elderly from the perspectives of service providers.

    Science.gov (United States)

    Lau, Janice Ying-Chui; Wong, Eliza Lai-Yi; Chung, Roger Y; Law, Stephen C K; Threapleton, Diane; Kiang, Nicole; Chau, Patsy; Wong, Samuel Y S; Woo, Jean; Yeoh, Eng-Kiong

    2018-04-27

To examine the barriers that hinder collaboration between health care and social care services and to report recommendations for effective collaboration to meet the growing support and care needs of our ageing population. Data for this qualitative study were obtained from interviews with 7 key informants (n = 42) and 22 focus groups (n = 117) consisting of service providers from the health care or social care sectors who support elderly patients with multiple chronic diseases or long-term care needs. Data collection was conducted from 2015 to 2016. The data were analysed using an inductive approach on the basis of thematic analysis. Qualitative analysis revealed a number of factors that play a significant role in setting up barriers at the operational level, including fragmentation and lack of sustainability of discharge programmes provided by non-governmental organisations, lack of capacity in homes for the elderly, limited time and resources, and variation in roles in supporting end-of-life care decisions between the medical and social sectors. Other barriers are communication barriers at the structural level and perceptual barriers between professionals. The perceptual barriers affect attitudes and create mistrust, interprofessional stereotypes and a hierarchy between the health care and social care sectors. Health care and social care service providers recognise the need for collaborative work to enhance continuity of care and ageing in place; however, their efforts are hindered by the identified barriers, which need to be dealt with in practical terms and by a change of policy. Copyright © 2018 John Wiley & Sons, Ltd.

  7. Immunization with Hexon modified adenoviral vectors integrated with gp83 epitope provides protection against Trypanosoma cruzi infection.

    Directory of Open Access Journals (Sweden)

    Anitra L Farrow

    2014-08-01

Full Text Available Trypanosoma cruzi is the causative agent of Chagas disease, an endemic infection that affects over 8 million people throughout Latin America and has now become a global challenge. The current pharmacological treatment of patients is unsuccessful in most cases and highly toxic, and no vaccines are available. Inadequate treatment can lead to heart failure resulting in death. Therefore, a vaccine that elicits neutralizing antibodies, mediated by cell-mediated immune responses, and protection against Chagas disease is necessary. The "antigen capsid-incorporation" strategy is based upon the display of the T. cruzi epitope as an integral component of the adenovirus capsid rather than as an encoded transgene. This strategy is predicted to induce a robust humoral immune response to the presented antigen, similar to the response provoked by native Ad capsid proteins. The antigen chosen was T. cruzi gp83, a ligand that T. cruzi uses to attach to host cells to initiate infection. The gp83 epitope, recognized by the neutralizing MAb 4A4, along with His6, was incorporated into the Ad serotype 5 (Ad5) vector to generate the vector Ad5-HVR1-gp83-18 (Ad5-gp83). This vector was evaluated by molecular and immunological analyses. Vectors were injected to elicit immune responses against gp83 in mouse models. Our findings indicate that mice immunized with the vector Ad5-gp83 and challenged with a lethal dose of T. cruzi trypomastigotes show strong immunoprotection, with a significant reduction in parasitemia levels, increased survival rate and induction of neutralizing antibodies. These data demonstrate that immunization with adenovirus containing capsid-incorporated T. cruzi antigen elicits a significant anti-gp83-specific response in two different mouse models, and protection against T. cruzi infection by eliciting neutralizing antibodies mediated by cell-mediated immune responses, as evidenced by the production of several Ig isotypes

  8. Integration Between Mental Health-Care Providers and Traditional Spiritual Healers: Contextualising Islam in the Twenty-First Century.

    Science.gov (United States)

    Chowdhury, Nayeefa

    2016-10-01

    In the United Arab Emirates, neuropsychiatric disorders are estimated to contribute to one-fifth of the global burden of disease. Studies show that the UAE citizens' apathy towards seeking professional mental health services is associated with the 'religious viewpoints' on the issue, societal stigma, lack of awareness of mental health and lack of confidence in mental health-care providers. Mental health expenditures by the UAE government health ministry are not available exclusively. The majority of primary health-care doctors and nurses have not received official in-service training on mental health within the last 5 years. Efforts are to be made at deconstructing the position of mental illness and its treatments in the light of Islamic Jurisprudence; drafting culturally sensitive and relevant models of mental health care for Emirati citizens; liaising between Imams of mosques and professional mental health service providers; launching small-scale pilot programs in collaboration with specialist institutions; facilitating mentoring in line with Science, Technology, Engineering and Math (STEM) outreach programmes for senior school Emirati students concerning mental health; and promoting mental health awareness in the wider community through participation in events open to public.

  9. Physics analysis database for the DIII-D tokamak

    International Nuclear Information System (INIS)

    Schissel, D.P.; Bramson, G.; DeBoo, J.C.

    1986-01-01

    The authors report on a centralized database for handling reduced data for physics analysis implemented for the DIII-D tokamak. Each database record corresponds to a specific snapshot in time for a selected discharge. Features of the database environment include automatic updating, data integrity checks, and data traceability. Reduced data from each diagnostic comprises a dedicated data bank (a subset of the database) with quality assurance provided by a physicist. These data banks will be used to create profile banks which will be input to a transport code to create a transport bank. Access to the database is initially through FORTRAN programs. One user interface, PLOTN, is a command driven program to select and display data subsets. Another user interface, PROF, compares and displays profiles. The database is implemented on a Digital Equipment Corporation VAX 8600 running VMS
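
    The record layout described above, one record per (discharge, time) snapshot, grouped into per-diagnostic data banks with integrity checks and quality-assurance traceability, can be sketched as follows. The field names, the bank name, and the QA rule are hypothetical illustrations, not the actual DIII-D schema.

```python
from dataclasses import dataclass

@dataclass
class SnapshotRecord:
    shot: int               # discharge number
    time_ms: float          # snapshot time within the discharge
    bank: str               # diagnostic data bank, e.g. "thomson"
    values: dict            # reduced quantities, e.g. {"te_keV": 1.8}
    qa_physicist: str = ""  # traceability: who signed off on the bank

class PhysicsDatabase:
    def __init__(self):
        self._records = {}  # keyed by (shot, time_ms, bank)

    def add(self, rec):
        key = (rec.shot, rec.time_ms, rec.bank)
        if key in self._records:
            raise ValueError(f"duplicate record {key}")   # integrity check
        if not rec.qa_physicist:
            raise ValueError("bank not quality-assured")  # traceability check
        self._records[key] = rec

    def select(self, shot):
        """Return all records for one discharge, across data banks."""
        return [r for r in self._records.values() if r.shot == shot]

db = PhysicsDatabase()
db.add(SnapshotRecord(90001, 1500.0, "thomson", {"te_keV": 1.8}, "D. Schissel"))
```

    A profile bank or transport bank, as described in the abstract, would then be derived by selecting and combining records across diagnostics for the same discharge and time.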

  10. Example-based learning: comparing the effects of additionally providing three different integrative learning activities on physiotherapy intervention knowledge.

    Science.gov (United States)

    Dyer, Joseph-Omer; Hudon, Anne; Montpetit-Tourangeau, Katherine; Charlin, Bernard; Mamede, Sílvia; van Gog, Tamara

    2015-03-07

    Example-based learning using worked examples can foster clinical reasoning. Worked examples are instructional tools that learners can use to study the steps needed to solve a problem. Studying worked examples paired with completion examples promotes acquisition of problem-solving skills more than studying worked examples alone. Completion examples are worked examples in which some of the solution steps remain unsolved for learners to complete. Providing learners engaged in example-based learning with self-explanation prompts has been shown to foster increased meaningful learning compared to providing no self-explanation prompts. Concept mapping and concept map study are other instructional activities known to promote meaningful learning. This study compares the effects of self-explaining, completing a concept map and studying a concept map on conceptual knowledge and problem-solving skills among novice learners engaged in example-based learning. Ninety-one physiotherapy students were randomized into three conditions. They performed a pre-test and a post-test to evaluate their gains in conceptual knowledge and problem-solving skills (transfer performance) in intervention selection. They studied three pairs of worked/completion examples in a digital learning environment. Worked examples consisted of a written reasoning process for selecting an optimal physiotherapy intervention for a patient. The completion examples were partially worked out, with the last few problem-solving steps left blank for students to complete. The students then had to engage in additional self-explanation, concept map completion or model concept map study in order to synthesize and deepen their knowledge of the key concepts and problem-solving steps. Pre-test performance did not differ among conditions. Post-test conceptual knowledge was higher (P example and completion example strategies to foster intervention selection.

  11. Database resources for the tuberculosis community.

    Science.gov (United States)

    Lew, Jocelyne M; Mao, Chunhong; Shukla, Maulik; Warren, Andrew; Will, Rebecca; Kuznetsov, Dmitry; Xenarios, Ioannis; Robertson, Brian D; Gordon, Stephen V; Schnappinger, Dirk; Cole, Stewart T; Sobral, Bruno

    2013-01-01

    Access to online repositories for genomic and associated "-omics" datasets is now an essential part of everyday research activity. It is important therefore that the Tuberculosis community is aware of the databases and tools available to them online, as well as for the database hosts to know what the needs of the research community are. One of the goals of the Tuberculosis Annotation Jamboree, held in Washington DC on March 7th-8th 2012, was therefore to provide an overview of the current status of three key Tuberculosis resources, TubercuList (tuberculist.epfl.ch), TB Database (www.tbdb.org), and Pathosystems Resource Integration Center (PATRIC, www.patricbrc.org). Here we summarize some key updates and upcoming features in TubercuList, and provide an overview of the PATRIC site and its online tools for pathogen RNA-Seq analysis. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. Provider Experiences with Prison Care and Aftercare for Women with Co-occurring Mental Health and Substance Use Disorders: Treatment, Resource, and Systems Integration Challenges.

    Science.gov (United States)

    Johnson, Jennifer E; Schonbrun, Yael Chatav; Peabody, Marlanea E; Shefner, Ruth T; Fernandes, Karen M; Rosen, Rochelle K; Zlotnick, Caron

    2015-10-01

    Incarcerated women with co-occurring mental health and substance use disorders (COD) face complex psychosocial challenges at community reentry. This study used qualitative methods to evaluate the perspectives of 14 prison and aftercare providers about service delivery challenges and treatment needs of reentering women with COD. Providers viewed the needs of women prisoners with COD as distinct from those of women with substance use alone and from men with COD. Providers described optimal aftercare for women with COD as including contact with the same provider before and after release, access to services within 24-72 hours after release, assistance with managing multiple social service agencies, assistance with relationship issues, and long-term follow-up. Providers also described larger service system and societal issues, including systems integration and ways in which a lack of prison and community aftercare resources impacted quality of care and reentry outcomes. Practice and policy implications are provided.

  14. Health care providers' perceived barriers to and need for the implementation of a national integrated health care standard on childhood obesity in the Netherlands - a mixed methods approach.

    Science.gov (United States)

    Schalkwijk, Annemarie A H; Nijpels, Giel; Bot, Sandra D M; Elders, Petra J M

    2016-03-08

    In 2010, a national integrated health care standard for (childhood) obesity was published and disseminated in the Netherlands. The aim of this study is to gain insight into the needs of health care providers and the barriers they face in terms of implementing this integrated health care standard. A mixed-methods approach was applied using focus groups, semi-structured, face-to-face interviews and an e-mail-based internet survey. The study's participants included: general practitioners (GPs) (focus groups); health care providers in different professions (face-to-face interviews) and health care providers, including GPs; youth health care workers; pediatricians; dieticians; psychologists and physiotherapists (survey). First, the transcripts from the focus groups were analyzed thematically. The themes identified in this process were then used to analyze the interviews. The results of the analysis of the qualitative data were used to construct the statements used in the e-mail-based internet survey. Responses to items were measured on a 5-point Likert scale and were categorized into three outcomes: 'agree' or 'important' (response categories 1 and 2), 'disagree' or 'not important'. Twenty-seven of the GPs that were invited (51 %) participated in four focus groups. Seven of the nine health care professionals that were invited (78 %) participated in the interviews and 222 questionnaires (17 %) were returned and included in the analysis. The following key barriers were identified with regard to the implementation of the integrated health care standard: reluctance to raise the subject; perceived lack of motivation and knowledge on the part of the parents; previous negative experiences with lifestyle programs; financial constraints and the lack of a structured multidisciplinary approach. The main needs identified were: increased knowledge and awareness on the part of both health care providers and parents/children; a social map of effective intervention; structural

  15. Experiences and meanings of integration of TCAM (Traditional, Complementary and Alternative Medical) providers in three Indian states: results from a cross-sectional, qualitative implementation research study.

    Science.gov (United States)

    Nambiar, D; Narayan, V V; Josyula, L K; Porter, J D H; Sathyanarayana, T N; Sheikh, K

    2014-11-25

    Efforts to engage Traditional, Complementary and Alternative Medical (TCAM) practitioners in the public health workforce have growing relevance for India's path to universal health coverage. We used an action-centred framework to understand how policy prescriptions related to integration were being implemented in three distinct Indian states. Health departments and district-level primary care facilities in the states of Kerala, Meghalaya and Delhi. In each state, two or three districts were chosen that represented a variation in accessibility and distribution across TCAM providers (eg, small or large proportions of local health practitioners, Homoeopaths, Ayurvedic and/or Unani practitioners). Per district, two blocks or geographical units were selected. TCAM and allopathic practitioners, administrators and representatives of the community at the district and state levels were chosen based on publicly available records from state and municipal authorities. A total of 196 interviews were carried out: 74 in Kerala, and 61 each in Delhi and Meghalaya. We sought to understand experiences and meanings associated with integration across stakeholders, as well as barriers and facilitators to implementing policies related to integration of Traditional, Complementary and Alternative (TCA) providers at the systems level. We found that individual and interpersonal attributes tended to facilitate integration, while system features and processes tended to hinder it. Collegiality, recognition of stature, as well as exercise of individual personal initiative among TCA practitioners and of personal experience of TCAM among allopaths enabled integration. The system, on the other hand, was characterised by the fragmentation of jurisdiction and facilities, intersystem isolation, lack of trust in and awareness of TCA systems, and inadequate infrastructure and resources for TCA service delivery. State-tailored strategies that routinise interaction, reward individual and system

  16. XML: James Webb Space Telescope Database Issues, Lessons, and Status

    Science.gov (United States)

    Detter, Ryan; Mooney, Michael; Fatig, Curtis

    2003-01-01

This paper will present the current concept of using Extensible Markup Language (XML) as the underlying structure for the James Webb Space Telescope (JWST) database. The purpose of using XML is to provide a JWST database that is independent of any portion of the ground system, yet still compatible with the various systems using a variety of different structures. The testing of the JWST Flight Software (FSW) started in 2002, yet the launch is scheduled for 2011 with a planned 5-year mission and a 5-year follow-on option. The initial database and ground system elements, including the commands, telemetry, and ground system tools, will be used for 19 years, plus post-mission activities. During the Integration and Test (I&T) phases of the JWST development, 24 distinct laboratories, each geographically dispersed, will have local database tools with an XML database. Each of these laboratories' database tools will be used for exporting and importing data both locally and to a central database system, inputting data to the database certification process, and providing various reports. A centralized certified database repository will be maintained by the Space Telescope Science Institute (STScI) in Baltimore, Maryland, USA. One of the challenges for the database is to be flexible enough to allow individual items to be upgraded, added or changed without affecting the entire ground system. Using XML should also allow the import and export formats needed by the various elements to be altered, the verification/validation of each database item to be tracked, many organizations to provide database inputs, and the many existing database processes to be merged into one central database structure throughout the JWST program. Many National Aeronautics and Space Administration (NASA) projects have attempted to take advantage of open source and commercial technology. Often this causes a greater reliance on the use of Commercial-Off-The-Shelf (COTS) software, which is often limiting
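
    The export/import-and-merge workflow described above, each lab exporting definitions as XML and a central tool combining them while rejecting conflicting redefinitions, can be sketched with the standard library XML parser. The element and attribute names below are invented for illustration and are not the real JWST database schema.

```python
import xml.etree.ElementTree as ET

# Two hypothetical per-lab exports of command definitions.
LAB_A = """<database lab="lab-a">
  <command name="PWR_ON" opcode="0x01"/>
  <command name="PWR_OFF" opcode="0x02"/>
</database>"""

LAB_B = """<database lab="lab-b">
  <command name="PWR_ON" opcode="0x01"/>
  <command name="HEATER_SET" opcode="0x10"/>
</database>"""

def merge(exports):
    """Merge command definitions from several XML exports.

    Identical redefinitions are tolerated; conflicting ones are rejected,
    which is the kind of check a central certification step would perform.
    """
    merged = {}
    for xml_text in exports:
        for cmd in ET.fromstring(xml_text).iter("command"):
            name, opcode = cmd.get("name"), cmd.get("opcode")
            if merged.get(name, opcode) != opcode:
                raise ValueError(f"conflicting definition for {name}")
            merged[name] = opcode
    return merged

commands = merge([LAB_A, LAB_B])
```

    Because the structure is plain XML, each laboratory can extend its local export with extra attributes without breaking the central merge, which reflects the flexibility argument made above.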

  17. Simulator with integrated HW and SW for prediction of thermal comfort to provide feedback to the climate control system

    Science.gov (United States)

Pokorný, Jan; Kopečková, Barbora; Fišer, Jan; Jícha, Miroslav

    2018-06-01

The aim of the paper is to assemble a simulator for evaluation of thermal comfort in car cabins in order to give feedback to the HVAC (heating, ventilation and air conditioning) system. The HW (hardware) part of the simulator is formed by the thermal manikin Newton and RH (relative humidity), velocity and temperature probes. The SW (software) part consists of the Thermal Comfort Analyser (using ISO 14505-2) and the Virtual Testing Stand of Car Cabin, which defines the heat loads of the car cabin. The simulator can provide recommendations to the climate control system on how to improve thermal comfort in the cabin by distributing and directing the air flow, and by adjusting the ventilation power to keep the temperature inside the cabin optimal. The methods of evaluation of thermal comfort were verified by tests with 10 test subjects under summer (summer clothing, ambient air temperature 30 °C, HVAC setup: +24 °C auto) and winter conditions (winter clothing, ambient air temperature -5 °C, HVAC setup: +18 °C auto). The tests confirmed the validity of the thermal comfort evaluation using the thermal manikin and ISO 14505-2.
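
    ISO 14505-2 evaluates cabin comfort through the equivalent temperature of each manikin segment, commonly derived from the segment's surface temperature, its measured dry heat flux, and a calibration coefficient obtained in a homogeneous reference environment. A small sketch of that calculation follows; the calibration coefficients and measurements below are made-up illustrative numbers, not data from the paper.

```python
# Equivalent temperature per manikin segment:
#     t_eq = t_surface - Q / h_cal
# where Q is the dry heat flux [W/m2] and h_cal [W/(m2 K)] comes from
# calibrating the segment in a homogeneous reference environment.

def equivalent_temperature(t_surface_C, heat_flux_W_m2, h_cal_W_m2K):
    return t_surface_C - heat_flux_W_m2 / h_cal_W_m2K

segments = {
    # segment: (surface temp [deg C], heat flux [W/m2], h_cal [W/(m2 K)])
    "head":  (34.0, 60.0, 9.0),
    "chest": (34.5, 45.0, 8.5),
    "hands": (33.0, 90.0, 10.0),
}

t_eq = {name: round(equivalent_temperature(*vals), 1)
        for name, vals in segments.items()}
```

    Comparing each segment's equivalent temperature against comfort-zone diagrams is what lets the simulator tell the HVAC controller which zones of the cabin are too warm or too cool.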

  18. Simulator with integrated HW and SW for prediction of thermal comfort to provide feedback to the climate control system

    Directory of Open Access Journals (Sweden)

    Pokorný Jan

    2018-01-01

Full Text Available The aim of the paper is to assemble a simulator for evaluation of thermal comfort in car cabins in order to give feedback to the HVAC (heating, ventilation and air conditioning) system. The HW (hardware) part of the simulator is formed by the thermal manikin Newton and RH (relative humidity), velocity and temperature probes. The SW (software) part consists of the Thermal Comfort Analyser (using ISO 14505-2) and the Virtual Testing Stand of Car Cabin, which defines the heat loads of the car cabin. The simulator can provide recommendations to the climate control system on how to improve thermal comfort in the cabin by distributing and directing the air flow, and by adjusting the ventilation power to keep the temperature inside the cabin optimal. The methods of evaluation of thermal comfort were verified by tests with 10 test subjects under summer (summer clothing, ambient air temperature 30 °C, HVAC setup: +24 °C auto) and winter conditions (winter clothing, ambient air temperature -5 °C, HVAC setup: +18 °C auto). The tests confirmed the validity of the thermal comfort evaluation using the thermal manikin and ISO 14505-2.

  19. Molecules, morphometrics and new fossils provide an integrated view of the evolutionary history of Rhinopomatidae (Mammalia: Chiroptera).

    Science.gov (United States)

    Hulva, Pavel; Horácek, Ivan; Benda, Petr

    2007-09-14

The Rhinopomatidae, traditionally considered to be one of the most ancient chiropteran clades, remains one of the least known groups of Rhinolophoidea. No relevant fossil record is available for this family. Whereas there have been extensive radiations in the related families Rhinolophidae and Hipposideridae, there are only a few species in the Rhinopomatidae and their phylogenetic relationship and status are not fully understood. Here we present (a) a phylogenetic analysis based on a partial cytochrome b sequence, (b) new fossils from the Upper Miocene site Elaiochoria 2 (Chalkidiki, Greece), which represents the first appearance datum of the family based on the fossil record, and (c) a discussion of the phylogeographic patterns in both molecular and morphological traits. We found deep divergences in the Rhinopoma hardwickii lineage, suggesting that the allopatric populations in (i) Iran and (ii) North Africa and the Middle East should have separate species status. The latter species (R. cystops) exhibits a shallow pattern of isolation by distance (separating the Middle East and the African populations) that contrasts with the pattern of geographic variation in the morphometrical traits. A deep genetic gap was also found in Rhinopoma muscatellum (Iran vs. Yemen). We found only minute genetic distance between R. microphyllum from the Levant and India, which fails to support the sub/species distinctness of the Indian form (R. microphyllum kinneari). The mtDNA survey provided a phylogenetic tree of the family Rhinopomatidae for the first time and revealed an unexpected diversification of the group both within R. hardwickii and R. muscatellum morphospecies. The paleobiogeographic scenario compiled with respect to molecular clock data suggests that the family originated in the region south of the Eocene Western Tethyan seaway or in India, and extended its range during the Early Miocene. The fossil record suggests a Miocene spread into the Mediterranean region, followed by a post

  20. Molecules, morphometrics and new fossils provide an integrated view of the evolutionary history of Rhinopomatidae (Mammalia: Chiroptera)

    Directory of Open Access Journals (Sweden)

    Benda Petr

    2007-09-01

    Full Text Available Abstract Background The Rhinopomatidae, traditionally considered to be one of the most ancient chiropteran clades, remains one of the least known groups of Rhinolophoidea. No relevant fossil record is available for this family. Whereas there have been extensive radiations in related families Rhinolophidae and Hipposideridae, there are only a few species in the Rhinopomatidae and their phylogenetic relationship and status are not fully understood. Results Here we present (a) a phylogenetic analysis based on a partial cytochrome b sequence, (b) new fossils from the Upper Miocene site Elaiochoria 2 (Chalkidiki, Greece), which represent the first appearance datum of the family based on the fossil record, and (c) a discussion of the phylogeographic patterns in both molecular and morphological traits. We found deep divergences in the Rhinopoma hardwickii lineage, suggesting that the allopatric populations in (i) Iran and (ii) North Africa and the Middle East should have separate species status. The latter species (R. cystops) exhibits a shallow pattern of isolation by distance (separating the Middle East and the African populations) that contrasts with the pattern of geographic variation in the morphometrical traits. A deep genetic gap was also found in Rhinopoma muscatellum (Iran vs. Yemen). We found only a minute genetic distance between R. microphyllum from the Levant and India, which fails to support the sub/species distinctness of the Indian form (R. microphyllum kinneari). Conclusion The mtDNA survey provided a phylogenetic tree of the family Rhinopomatidae for the first time and revealed an unexpected diversification of the group both within the R. hardwickii and R. muscatellum morphospecies. The paleobiogeographic scenario compiled with respect to molecular clock data suggests that the family originated in the region south of the Eocene Western Tethyan seaway or in India, and extended its range during the Early Miocene.
The fossil record suggests a Miocene spread

  1. ZAGRADA - A New Radiocarbon Database

    International Nuclear Information System (INIS)

    Portner, A.; Obelic, B.; Krajcar Bornic, I.

    2008-01-01

    In the Radiocarbon and Tritium Laboratory at the Rudjer Boskovic Institute, three different techniques for 14C dating have been used: Gas Proportional Counting (GPC), Liquid Scintillation Counting (LSC), and preparation of milligram-sized samples for Accelerator Mass Spectrometry (AMS) dating. The use of several measurement techniques created the need for a new relational database, ZAGRADA (Zagreb Radiocarbon Database), since the existing software package CARBO could not satisfy the requirements of processing samples by several techniques in parallel. Using SQL procedures and constraints defined by primary and foreign keys, ZAGRADA enforces high data integrity and provides better performance in data filtering and sorting. Additionally, the new database for 14C samples is a multi-user application that can be accessed from remote computers in the work group, thus improving the efficiency of laboratory activities. To facilitate data handling and processing in ZAGRADA, the graphical user interface is designed to be user-friendly and to perform various actions on data, such as input, correction, searching, sorting, and output to a printer. Invalid actions performed in the user interface are registered and reported on screen in message boxes with a short textual description of the error. Unauthorized access is prevented by login control, and each application window implements support for tracking the last changes made by the user. The implementation of a new database for 14C samples contributes significantly to the scientific research performed in the Radiocarbon and Tritium Laboratory and will provide better and easier communication with customers. (author)
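The integrity mechanism described above, primary and foreign keys rejecting inconsistent rows, can be sketched with SQLite in Python. The two-table schema below is a hypothetical illustration, not the actual ZAGRADA schema:

```python
import sqlite3

# Hypothetical two-table schema illustrating the primary/foreign-key
# integrity the abstract describes (not the real ZAGRADA schema).
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite requires opting in per connection
conn.execute("""
    CREATE TABLE technique (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL UNIQUE      -- e.g. 'GPC', 'LSC', 'AMS'
    )""")
conn.execute("""
    CREATE TABLE sample (
        lab_code     TEXT PRIMARY KEY,
        technique_id INTEGER NOT NULL REFERENCES technique(id)
    )""")
conn.executemany("INSERT INTO technique(name) VALUES (?)",
                 [("GPC",), ("LSC",), ("AMS",)])
conn.execute("INSERT INTO sample VALUES ('Z-0001', 1)")  # valid reference

# A row pointing at a nonexistent technique is rejected, keeping data consistent.
try:
    conn.execute("INSERT INTO sample VALUES ('Z-0002', 99)")
except sqlite3.IntegrityError as err:
    print("rejected:", err)
```

The same constraint declarations carry over to any SQL database; only the opt-in pragma is SQLite-specific.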

  2. Principles of data integration

    CERN Document Server

    Doan, AnHai; Ives, Zachary

    2012-01-01

    How do you approach answering queries when your data is stored in multiple databases that were designed independently by different people? This is the first comprehensive book on data integration, written by three of the most respected experts in the field. The book provides an extensive introduction to the theory and concepts underlying today's data integration techniques, with detailed instructions for their application, using concrete examples throughout to explain the concepts. Data integration is the problem of answering queries that span multiple data sources (e.g., databases, web
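The core problem named above, answering one query over independently designed sources, can be illustrated with a toy mediator in Python. The source schemas and field names below are invented for illustration:

```python
# Toy mediator: two independently designed "sources" are mapped onto one
# global schema so a single query can span both (illustrative only).
source_a = [{"cust": "Ada", "ph": "555-0100"}]    # source A's own schema
source_b = [{"name": "Bob", "phone": "555-0199"}] # source B's own schema

def wrap_a(rows):
    # Wrapper translating schema A into the global schema.
    return [{"name": r["cust"], "phone": r["ph"]} for r in rows]

def wrap_b(rows):
    # Wrapper for schema B (already aligned with the global schema).
    return [{"name": r["name"], "phone": r["phone"]} for r in rows]

def query(name):
    """Answer a query over the virtual, integrated view of both sources."""
    unified = wrap_a(source_a) + wrap_b(source_b)
    return [r for r in unified if r["name"] == name]

print(query("Ada"))  # -> [{'name': 'Ada', 'phone': '555-0100'}]
```

Real mediator systems add query rewriting, source descriptions, and entity resolution on top of this basic wrapper idea.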

  3. Integrating GPS, GYRO, vehicle speed sensor, and digital map to provide accurate and real-time position in an intelligent navigation system

    Science.gov (United States)

    Li, Qingquan; Fang, Zhixiang; Li, Hanwu; Xiao, Hui

    2005-10-01

    The global positioning system (GPS) has become the most extensively used positioning and navigation tool in the world. Applications of GPS abound in surveying, mapping, transportation, agriculture, military planning, GIS, and the geosciences. However, the positional and elevation accuracy of any given GPS location is prone to error, due to a number of factors. GPS positioning is increasingly popular; in particular, intelligent navigation systems that rely on GPS and dead reckoning (DR) technology are developing quickly for a large future market in China. In this paper a practical combined GPS/DR/MM positioning model is put forward, which integrates GPS, a gyroscope, a vehicle speed sensor (VSS), and digital navigation maps to provide accurate, real-time position for an intelligent navigation system. The model, designed for automotive navigation systems, uses a Kalman filter to improve position and map-matching accuracy by filtering the raw GPS and DR signals; map-matching technology then provides map coordinates for display. To illustrate the validity of the model, several experiments on integrated GPS/DR positioning in an intelligent navigation system are presented, and their results show that a Kalman-filter-based GPS/DR position-integration approach is necessary, feasible, and efficient for intelligent navigation applications. Like other models, this combined positioning model cannot resolve every situation, so some suggestions are given for further improving the integrated GPS/DR/MM application.
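The GPS/DR fusion idea can be illustrated with a minimal one-dimensional Kalman filter, where dead reckoning supplies the predicted displacement and GPS supplies a noisy position fix. The noise values and tuning constants below are assumptions for illustration, not the paper's actual model:

```python
# Minimal 1-D Kalman filter fusing dead-reckoning (DR) prediction with
# noisy GPS fixes -- an illustrative sketch, not the paper's actual model.
def kalman_step(x, P, u, z, Q=0.01, R=4.0):
    # Predict: advance the position estimate by the DR displacement u.
    x_pred = x + u
    P_pred = P + Q                    # process noise inflates uncertainty
    # Update: blend in the GPS measurement z according to the Kalman gain.
    K = P_pred / (P_pred + R)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0                       # initial estimate and variance
true_pos = 0.0
for gps_error in (1.5, -2.0, 0.5, -1.0):  # canned "noisy" GPS errors (metres)
    true_pos += 1.0                       # vehicle moves 1 m per step (DR input)
    x, P = kalman_step(x, P, u=1.0, z=true_pos + gps_error)
print(round(x, 2), round(P, 3))
```

The filtered estimate tracks the true position more closely than the raw GPS errors would suggest, and the variance P shrinks as fixes accumulate, which is the behaviour the paper relies on before map matching.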

  4. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE), Version 5.0: Models and Results Database (MAR-D) reference manual. Volume 8

    International Nuclear Information System (INIS)

    Russell, K.D.; Skinner, N.L.

    1994-07-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of several microcomputer programs that were developed to create and analyze probabilistic risk assessments (PRAs), primarily for nuclear power plants. The primary function of MAR-D is to create a data repository for completed PRAs and Individual Plant Examinations (IPEs) by providing input, conversion, and output capabilities for data used by IRRAS, SARA, SETS, and FRANTIC software. As probabilistic risk assessments and individual plant examinations are submitted to the NRC for review, MAR-D can be used to convert the models and results from the study for use with IRRAS and SARA. Then, these data can be easily accessed by future studies and will be in a form that will enhance the analysis process. This reference manual provides an overview of the functions available within MAR-D and step-by-step operating instructions

  5. Fire test database

    International Nuclear Information System (INIS)

    Lee, J.A.

    1989-01-01

    This paper describes a project recently completed for EPRI by Impell. The purpose of the project was to develop a reference database of fire tests performed on non-typical fire-rated assemblies. The database is designed for use by utility fire protection engineers to locate test reports for power plant fire-rated assemblies. As utilities prepare to respond to Information Notice 88-04, the database will identify utilities, vendors, or manufacturers who have specific fire test data. The database contains fire test report summaries for 729 tested configurations. For each summary, a contact is identified from whom a copy of the complete fire test report can be obtained. Five types of configurations are included: doors, dampers, seals, wraps, and walls. The database is computerized, with one version for IBM PCs and one for the Mac, each accessed through user-friendly software that allows adding, deleting, and browsing records. There are five major database files, one for each of the five types of tested configurations. The contents of each provide significant information regarding the test method and the physical attributes of the tested configuration. 3 figs

  6. Introducing perennial biomass crops into agricultural landscapes to address water quality challenges and provide other environmental services: Integrating perennial bioenergy crops into agricultural landscapes

    Energy Technology Data Exchange (ETDEWEB)

    Cacho, J. F. [Environmental Science Division, Argonne National Laboratory, Lemont IL USA; Negri, M. C. [Environmental Science Division, Argonne National Laboratory, Lemont IL USA; Zumpf, C. R. [Environmental Science Division, Argonne National Laboratory, Lemont IL USA; Campbell, P. [Environmental Science Division, Argonne National Laboratory, Lemont IL USA

    2017-11-29

    The world is faced with the difficult multiple challenge of meeting the nutritional, energy, and other basic needs of between 9 and 10 billion people in the next three decades under a limited land and water budget, while mitigating the impacts of climate change and making agricultural production resilient. More productivity is expected from agricultural lands, but intensification of production could further impact the integrity of our finite surface water and groundwater resources. Integrating perennial bioenergy crops into agricultural lands could provide biomass for biofuel and potentially improve the sustainability of commodity crop production. This article provides an overview of ways in which research has shown that perennial bioenergy grasses and short-rotation woody crops can be incorporated into agricultural production systems with reduced indirect land-use change, while increasing water quality benefits. Current challenges and opportunities, as well as future directions, are also highlighted.

  7. Evaluation of the performances of wastewater treatment services provided by the metropolitan municipalities in Turkey using Entropy integrated SAW, MOORA and TOPSIS

    OpenAIRE

    Ayyıldız, Ertuğrul; Özçelik, Gökhan

    2018-01-01

    Reusing wastewater is of vital importance because of the limited natural water resources all around the world. Recycled wastewater can be used in many areas such as agriculture, industry, cleaning, etc. Treatment of wastewater is one of the important tasks of metropolitan municipalities. The aim of this study is to evaluate the performances of wastewater treatment services provided by the metropolitan municipalities in Turkey using Entropy-integrated SAW, MOORA and TOPSIS methods. In the scope of ...
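The Entropy-integrated TOPSIS approach named above can be sketched in Python: entropy weighting assigns larger weights to criteria whose values vary more across alternatives, and TOPSIS then ranks alternatives by closeness to the ideal solution. The decision matrix below is invented (three municipalities, two benefit criteria), not the study's data:

```python
import math

# Entropy-weighted TOPSIS on a tiny made-up decision matrix:
# 3 municipalities x 2 benefit criteria (illustrative values only).
X = [[0.7, 30.0],
     [0.9, 20.0],
     [0.6, 40.0]]
m, n = len(X), len(X[0])
cols = list(zip(*X))

# 1) Entropy weights: criteria whose values vary more get more weight.
p = [[v / sum(c) for v in c] for c in cols]
e = [-sum(q * math.log(q) for q in col if q > 0) / math.log(m) for col in p]
w = [(1 - ej) / sum(1 - ek for ek in e) for ej in e]

# 2) TOPSIS: vector-normalize, weight, then measure distance to ideal points.
norm = [math.sqrt(sum(v * v for v in c)) for c in cols]
V = [[w[j] * X[i][j] / norm[j] for j in range(n)] for i in range(m)]
best = [max(col) for col in zip(*V)]    # all criteria treated as benefits
worst = [min(col) for col in zip(*V)]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Closeness coefficient: 1 = ideal, 0 = anti-ideal.
C = [dist(V[i], worst) / (dist(V[i], best) + dist(V[i], worst))
     for i in range(m)]
ranking = sorted(range(m), key=lambda i: -C[i])
print(ranking)  # -> [2, 0, 1]
```

SAW and MOORA differ only in step 2 (weighted sums and ratio analysis, respectively); the entropy weights from step 1 can feed any of the three.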

  8. Towards cloud-centric distributed database evaluation

    OpenAIRE

    Seybold, Daniel

    2016-01-01

    The area of cloud computing has also pushed the evolution of distributed databases, resulting in a variety of distributed database systems, which can be classified into relational, NoSQL, and NewSQL database systems. In general, all representatives of these database system classes claim to provide elasticity and "unlimited" horizontal scalability. As these characteristics comply with the cloud, distributed databases seem to be a perfect match for Database-as-a-Service (DBaaS) systems.

  9. Towards Cloud-centric Distributed Database Evaluation

    OpenAIRE

    Seybold, Daniel

    2016-01-01

    The area of cloud computing has also pushed the evolution of distributed databases, resulting in a variety of distributed database systems, which can be classified into relational, NoSQL, and NewSQL database systems. In general, all representatives of these database system classes claim to provide elasticity and "unlimited" horizontal scalability. As these characteristics comply with the cloud, distributed databases seem to be a perfect match for Database-as-a-Service (DBaaS) systems.

  10. Improving sexual health for HIV patients by providing a combination of integrated public health and hospital care services; a one-group pre- and post test intervention comparison

    Directory of Open Access Journals (Sweden)

    Dukers-Muijrers Nicole HTM

    2012-12-01

    Full Text Available Abstract Background Hospital HIV care and public sexual health care (a Sexual Health Care Centre) services were integrated to provide sexual health counselling and sexually transmitted infection (STI) testing and treatment (sexual health care) to larger numbers of HIV patients. Services, need and usage were assessed using a patient perspective, which is a key factor for the success of service integration. Methods The study design was a one-group pre-test and post-test comparison of 447 HIV-infected heterosexual individuals and men who have sex with men (MSM) attending a hospital-based HIV centre serving the southern region of the Netherlands. The intervention offered comprehensive sexual health care using an integrated care approach. The main outcomes were intervention uptake, patients’ pre-test care needs (n=254), and quality rating. Results Pre intervention, 43% of the patients wanted to discuss sexual health (51% MSM; 30% heterosexuals). Of these patients, 12% to 35% reported regular coverage, and up to 25% never discussed sexual health topics at their HIV care visits. Of the patients, 24% used our intervention. Usage was higher among patients who previously expressed a need to discuss sexual health. Most patients who used the integrated services were new users of public health services. STIs were detected in 13% of MSM and in none of the heterosexuals. The quality of care was rated good. Conclusions The HIV patients in our study generally considered sexual health important, but the regular counselling and testing at the HIV care visit was insufficient. The integration of public health and hospital services benefited both care sectors and their patients by addressing sexual health questions, detecting STIs, and conducting partner notification. Successful sexual health care uptake requires increased awareness among patients about their care options as well as a cultural shift among care providers.

  11. An infrastructure with a unified control plane to integrate IP into optical metro networks to provide flexible and intelligent bandwidth on demand for cloud computing

    Science.gov (United States)

    Yang, Wei; Hall, Trevor

    2012-12-01

    The Internet is entering an era of cloud computing to provide more cost-effective, eco-friendly and reliable services to consumer and business users, and the nature of Internet traffic will undergo a fundamental transformation. Consequently, the current Internet will no longer suffice for serving cloud traffic in metro areas. This work proposes an infrastructure with a unified control plane that integrates simple packet aggregation technology with optical express through the interoperation between IP routers and electrical traffic controllers in optical metro networks. The proposed infrastructure provides flexible, intelligent, and eco-friendly bandwidth on demand for cloud computing in metro areas.

  12. InterAction Database (IADB)

    Science.gov (United States)

    The InterAction Database includes demographic and prescription information for more than 500,000 patients in the northern and middle Netherlands and has been integrated with other systems to enhance data collection and analysis.

  13. Clinical databases in physical therapy.

    NARCIS (Netherlands)

    Swinkels, I.C.S.; Ende, C.H.M. van den; Bakker, D. de; Wees, Ph.J van der; Hart, D.L.; Deutscher, D.; Bosch, W.J.H. van den; Dekker, J.

    2007-01-01

    Clinical databases in physical therapy provide increasing opportunities for research into physical therapy theory and practice. At present, information on the characteristics of existing databases is lacking. The purpose of this study was to identify clinical databases in which physical therapists

  14. The ClaudicatioNet concept: design of a national integrated care network providing active and healthy aging for patients with intermittent claudication.

    Science.gov (United States)

    Lauret, Gert-Jan; Gijsbers, Harm J H; Hendriks, Erik J M; Bartelink, Marie-Louise; de Bie, Rob A; Teijink, Joep A W

    2012-01-01

    Intermittent claudication (IC) is a manifestation of peripheral arterial occlusive disease (PAOD). Besides cardiovascular risk management, supervised exercise therapy (SET) should be offered to all patients with IC. Outdated guidelines, an insufficient number of specialized physiotherapists (PTs), lack of awareness of the importance of SET by referring physicians, and misguided financial incentives all seriously impede the availability of a structured SET program in The Netherlands. By initiating regional care networks, ClaudicatioNet aims to improve the quality of care for patients with IC. Based on the chronic care model as a conceptual framework, these networks should enhance the access, continuity, and (cost) efficiency of the health care system. With the aid of a national database, health care professionals will be able to benchmark patient results while ClaudicatioNet will be able to monitor quality of care by way of functional and patient reported outcome measures. The success of ClaudicatioNet is dependent on several factors. Vascular surgeons, general practitioners and coordinating central caregivers will need to team up and work in close collaboration with specialized PTs. A substantial task in the upcoming years will be to monitor the quality, volume, and distribution of ClaudicatioNet PTs. Finally, misguided financial incentives within the Dutch health care system need to be tackled. With ClaudicatioNet, integrated care pathways are likely to improve in the upcoming years. This should result in the achievement of optimal quality of care for all patients with IC.

  15. Electron Inelastic-Mean-Free-Path Database

    Science.gov (United States)

    SRD 71 NIST Electron Inelastic-Mean-Free-Path Database (PC database, no charge)   This database provides values of electron inelastic mean free paths (IMFPs) for use in quantitative surface analyses by AES and XPS.

  16. Integration

    DEFF Research Database (Denmark)

    Emerek, Ruth

    2004-01-01

    The contribution discusses the different conceptions of integration in Denmark, and what may be understood by successful integration.

  17. DATABASE REPLICATION IN HETEROGENOUS PLATFORM

    OpenAIRE

    Hendro Nindito; Evaristus Didik Madyatmadja; Albert Verasius Dian Sano

    2014-01-01

    The application of diverse database technologies in enterprises today is increasingly common practice. To provide high availability and survivability of real-time information, a database replication technology that is capable of replicating databases across heterogeneous platforms is required. The purpose of this research is to find a technology with such capability. In this research, the data source is stored in an MS SQL database server running on Windows. The data will be replicated to MyS...

  18. The human interactome knowledge base (hint-kb): An integrative human protein interaction database enriched with predicted protein–protein interaction scores using a novel hybrid technique

    KAUST Repository

    Theofilatos, Konstantinos A.; Dimitrakopoulos, Christos M.; Likothanassis, Spiridon D.; Kleftogiannis, Dimitrios A.; Moschopoulos, Charalampos N.; Alexakos, Christos; Papadimitriou, Stergios; Mavroudi, Seferina P.

    2013-01-01

    Proteins are the functional components of many cellular processes and the identification of their physical protein–protein interactions (PPIs) is an area of mature academic research. Various databases have been developed containing information about

  19. Database Vs Data Warehouse

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available Data warehouse technology includes a set of concepts and methods that offer users useful information for decision making. The necessity to build a data warehouse arises from the necessity to improve the quality of information in the organization. Data from different sources, in a variety of forms (both structured and unstructured), are filtered according to business rules and integrated into a single large data collection. Using informatics solutions, managers have understood that data stored in operational systems, including databases, are an informational gold mine that must be exploited. Data warehouses have been developed to answer the increasing demands for complex analysis, which could not be properly achieved with operational databases. The present paper emphasizes some of the criteria that information application developers can use in order to choose between a database solution and a data warehouse one.

  20. Respiratory cancer database: An open access database of respiratory cancer gene and miRNA.

    Science.gov (United States)

    Choubey, Jyotsna; Choudhari, Jyoti Kant; Patel, Ashish; Verma, Mukesh Kumar

    2017-01-01

    Respiratory cancer database (RespCanDB) is a genomic and proteomic database of cancers of the respiratory organs. It also includes information on medicinal plants used for the treatment of various respiratory cancers, with the structures of their active constituents, as well as pharmacological and chemical information on drugs associated with various respiratory cancers. Data in RespCanDB have been manually collected from published research articles and from other databases. Data have been integrated using MySQL, a relational database management system. MySQL manages all data in the back-end and provides commands to retrieve and store the data in the database. The web interface of the database has been built in ASP. RespCanDB is expected to contribute to the scientific community's understanding of respiratory cancer biology, as well as to the development of new ways of diagnosing and treating respiratory cancer. Currently, the database contains the oncogenomic information of lung cancer, laryngeal cancer, and nasopharyngeal cancer. Data for other cancers, such as oral and tracheal cancers, will be added in the near future. The URL of RespCanDB is http://ridb.subdic-bioinformatics-nitrr.in/.

  1. Klaim-DB: A Modeling Language for Distributed Database Applications

    DEFF Research Database (Denmark)

    Wu, Xi; Li, Ximeng; Lluch Lafuente, Alberto

    2015-01-01

    We present the modelling language Klaim-DB for distributed database applications. Klaim-DB borrows the distributed nets of the coordination language Klaim but essentially re-incarnates the tuple spaces of Klaim as databases, and provides high-level language abstractions for the access and manipulation of structured data, with integrity and atomicity considerations. We present the formal semantics of Klaim-DB and illustrate the use of the language in a scenario where the sales from different branches of a chain of department stores are aggregated from their local databases. It can be seen that raising the abstraction level and encapsulating integrity checks (concerning the schema of tables, etc.) in the language primitives for database operations benefit the modelling task considerably.
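The department-store scenario can be paraphrased in plain database terms, here with Python and SQLite standing in for the distributed localities (this is an analogue of the scenario, not Klaim-DB itself):

```python
import sqlite3

# Each branch keeps a local sales table; a coordinator visits every
# branch and aggregates the totals -- a plain Python/SQLite analogue of
# the Klaim-DB department-store scenario.
def make_branch(rows):
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE sales (item TEXT, amount REAL)")
    db.executemany("INSERT INTO sales VALUES (?, ?)", rows)
    return db

branches = [make_branch([("shoes", 120.0), ("hats", 40.0)]),
            make_branch([("shoes", 80.0)])]

totals = {}
for db in branches:  # visit each distributed "locality" in turn
    for item, amt in db.execute(
            "SELECT item, SUM(amount) FROM sales GROUP BY item"):
        totals[item] = totals.get(item, 0.0) + amt
print(sorted(totals.items()))  # -> [('hats', 40.0), ('shoes', 200.0)]
```

In Klaim-DB the cross-locality iteration and the schema/integrity checks would be expressed directly in the language primitives rather than in ad-hoc coordinator code, which is the modelling benefit the abstract claims.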

  2. The Ensembl genome database project.

    Science.gov (United States)

    Hubbard, T; Barker, D; Birney, E; Cameron, G; Chen, Y; Clark, L; Cox, T; Cuff, J; Curwen, V; Down, T; Durbin, R; Eyras, E; Gilbert, J; Hammond, M; Huminiecki, L; Kasprzyk, A; Lehvaslaiho, H; Lijnzaad, P; Melsopp, C; Mongin, E; Pettett, R; Pocock, M; Potter, S; Rust, A; Schmidt, E; Searle, S; Slater, G; Smith, J; Spooner, W; Stabenau, A; Stalker, J; Stupka, E; Ureta-Vidal, A; Vastrik, I; Clamp, M

    2002-01-01

    The Ensembl (http://www.ensembl.org/) database project provides a bioinformatics framework to organise biology around the sequences of large genomes. It is a comprehensive source of stable automatic annotation of the human genome sequence, with confirmed gene predictions that have been integrated with external data sources, and is available as either an interactive web site or as flat files. It is also an open source software engineering project to develop a portable system able to handle very large genomes and associated requirements from sequence analysis to data storage and visualisation. The Ensembl site is one of the leading sources of human genome sequence annotation and provided much of the analysis for publication by the international human genome project of the draft genome. The Ensembl system is being installed around the world in both companies and academic sites on machines ranging from supercomputers to laptops.

  3. [Integrity].

    Science.gov (United States)

    Gómez Rodríguez, Rafael Ángel

    2014-01-01

    To say that someone possesses integrity is to claim that that person is almost predictable in their responses to specific situations, and that he or she can judge prudently and act correctly. There is a close interrelationship between integrity and autonomy, and autonomy rests on the deeper moral claim of all humans to integrity of the person. Integrity has two senses of significance for medical ethics: one refers to the integrity of the person in its bodily, psychosocial and intellectual elements; in the second sense, integrity is a virtue. Another facet of integrity of the person is the integrity of the values we cherish and espouse. The physician must be a person of integrity if the integrity of the patient is to be safeguarded. Autonomy has reduced violations in the past, but the character and virtues of the physician are the ultimate safeguard of the autonomy of the patient. A very important field in medicine is scientific research. It is the character of the investigator that determines the moral quality of research. The problem arises when legitimate self-interests are replaced by selfish ones, particularly when human subjects are involved. The final safeguard of the moral quality of research is the character and conscience of the investigator. Teaching must be relevant in the scientific field, but the most effective way to teach virtue ethics is through the example of a respected scientist.

  4. The ClaudicatioNet concept: design of a national integrated care network providing active and healthy aging for patients with intermittent claudication

    Directory of Open Access Journals (Sweden)

    Lauret GJ

    2012-08-01

    Full Text Available Gert-Jan Lauret,1,2 Harm JH Gijsbers,3 Erik JM Hendriks,2 Marie-Louise Bartelink,4 Rob A de Bie,2 Joep AW Teijink1,2 On behalf of the ClaudicatioNet Study Group members. 1Department of Vascular Surgery, Catharina Hospital, Eindhoven, The Netherlands; 2Caphri Research Institute, Department of Epidemiology, Maastricht University, Maastricht, The Netherlands; 3Dutch Society for Heart, Vascular and Lung Physiotherapy, Meijerslaan PG Heemstede, The Netherlands; 4Julius Center Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, The Netherlands. Introduction: Intermittent claudication (IC) is a manifestation of peripheral arterial occlusive disease (PAOD). Besides cardiovascular risk management, supervised exercise therapy (SET) should be offered to all patients with IC. Outdated guidelines, an insufficient number of specialized physiotherapists (PTs), lack of awareness of the importance of SET by referring physicians, and misguided financial incentives all seriously impede the availability of a structured SET program in The Netherlands. Description of care practice: By initiating regional care networks, ClaudicatioNet aims to improve the quality of care for patients with IC. Based on the chronic care model as a conceptual framework, these networks should enhance the access, continuity, and (cost) efficiency of the health care system. With the aid of a national database, health care professionals will be able to benchmark patient results while ClaudicatioNet will be able to monitor quality of care by way of functional and patient-reported outcome measures. Discussion: The success of ClaudicatioNet is dependent on several factors. Vascular surgeons, general practitioners and coordinating central caregivers will need to team up and work in close collaboration with specialized PTs. A substantial task in the upcoming years will be to monitor the quality, volume, and distribution of ClaudicatioNet PTs.
Finally, misguided financial incentives

  5. Phenol-Explorer 2.0: a major update of the Phenol-Explorer database integrating data on polyphenol metabolism and pharmacokinetics in humans and experimental animals

    Science.gov (United States)

    Rothwell, Joseph A.; Urpi-Sarda, Mireia; Boto-Ordoñez, Maria; Knox, Craig; Llorach, Rafael; Eisner, Roman; Cruz, Joseph; Neveu, Vanessa; Wishart, David; Manach, Claudine; Andres-Lacueva, Cristina; Scalbert, Augustin

    2012-01-01

    Phenol-Explorer, launched in 2009, is the only comprehensive web-based database on the content in foods of polyphenols, a major class of food bioactives that receive considerable attention due to their role in the prevention of diseases. Polyphenols are rarely absorbed and excreted in their ingested forms, but extensively metabolized in the body, and until now, no database has allowed the recall of identities and concentrations of polyphenol metabolites in biofluids after the consumption of polyphenol-rich sources. Knowledge of these metabolites is essential in the planning of experiments whose aim is to elucidate the effects of polyphenols on health. Release 2.0 is the first major update of the database, allowing the rapid retrieval of data on the biotransformations and pharmacokinetics of dietary polyphenols. Data on 375 polyphenol metabolites identified in urine and plasma were collected from 236 peer-reviewed publications on polyphenol metabolism in humans and experimental animals and added to the database by means of an extended relational design. Pharmacokinetic parameters have been collected and can be retrieved in both tabular and graphical form. The web interface has been enhanced and now allows the filtering of information according to various criteria. Phenol-Explorer 2.0, which will be periodically updated, should prove to be an even more useful and capable resource for polyphenol scientists because bioactivities and health effects of polyphenols are dependent on the nature and concentrations of metabolites reaching the target tissues. The Phenol-Explorer database is publicly available and can be found online at http://www.phenol-explorer.eu. Database URL: http://www.phenol-explorer.eu PMID:22879444

  6. The CATH database

    Directory of Open Access Journals (Sweden)

    Knudsen Michael

    2010-02-01

    Full Text Available Abstract The CATH database provides hierarchical classification of protein domains based on their folding patterns. Domains are obtained from protein structures deposited in the Protein Data Bank and both domain identification and subsequent classification use manual as well as automated procedures. The accompanying website http://www.cathdb.info provides an easy-to-use entry to the classification, allowing for both browsing and downloading of data. Here, we give a brief review of the database, its corresponding website and some related tools.

  7. ARTI refrigerant database

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M.

    1998-03-15

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to thermophysical property, compatibility, environmental, safety, application, and other information. It also provides corresponding information on older refrigerants so that manufacturers and users of alternative refrigerants can make comparisons and determine differences. The underlying purpose is to accelerate the phase-out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in the research and design of air conditioning and refrigeration equipment, and it references documents addressing the compatibility of refrigerants and lubricants with other materials.

  8. Towards a Portuguese database of food microbiological occurrence

    OpenAIRE

    Viegas, Silvia; Machado, Claudia; Dantas, M.Ascenção; Oliveira, Luísa

    2011-01-01

    Aims: To expand the Portuguese Food Information Resource Programme (PortFIR) by building the Portuguese Food Microbiological Information Network (RPIMA), including users, stakeholders, and food microbiological data producers who will provide data and information from research, monitoring, epidemiological investigation and disease surveillance. The integration of food data in a national database will improve foodborne risk management. Methods and results: Potential members were identified and...

  9. Developments in diffraction databases

    International Nuclear Information System (INIS)

    Jenkins, R.

    1999-01-01

    Full text: A number of databases are available to the diffraction community. Two of the more important are the Powder Diffraction File (PDF), maintained by the International Centre for Diffraction Data (ICDD), and the Inorganic Crystal Structure Database (ICSD), maintained by Fachinformationszentrum Karlsruhe (FIZ). In application, the PDF is an indispensable tool for phase identification and the identification of unknowns. The ICSD database has extensive and explicit reference to the structures of compounds: atomic coordinates, space group and even thermal vibration parameters. A similar database, but for organic compounds, is maintained by the Cambridge Crystallographic Data Centre. These databases are often used as independent sources of information, but little thought has been given to how the combined properties of structural database tools might be exploited. A recently completed agreement between the ICDD and FIZ, plus the ICDD and Cambridge, provides a first step in the complementary use of the PDF and ICSD databases. The focus of this paper is to examine ways of exploiting the combined properties of both databases. In 1996, there were approximately 76,000 entries in the PDF and approximately 43,000 entries in the ICSD database. The ICSD database has now been used to calculate entries in the PDF. Deriving d-spacing and peak intensity data requires the synthesis of full diffraction patterns, i.e., we use the structural data in the ICSD database and then add instrumental resolution information. The combined data from the PDF and ICSD can be used effectively in many ways. For example, we can calculate PDF data for an ideally random crystal distribution and in the absence of preferred orientation. Again, we can use systematic studies of intermediate members in solid solution series to help produce reliable quantitative phase analyses. In some cases, we can study how solid solution properties vary with composition and
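
The synthesis of a powder pattern from ICSD structural data starts from lattice geometry: d-spacings follow from the Miller indices and cell parameters, and Bragg's law converts them to diffraction angles. A minimal sketch for the cubic case (the cell edge and wavelength below are illustrative values, not database entries):

```python
import math

def cubic_d_spacing(a, h, k, l):
    """d-spacing (angstrom) for reflection (hkl) in a cubic cell of edge a."""
    return a / math.sqrt(h * h + k * k + l * l)

def bragg_two_theta(d, wavelength=1.5406):
    """Diffraction angle 2-theta (degrees) from Bragg's law with n = 1 (Cu K-alpha default)."""
    return 2 * math.degrees(math.asin(wavelength / (2 * d)))

# Illustrative NaCl-like cubic cell, a = 5.64 angstrom
a = 5.64
for hkl in [(1, 1, 1), (2, 0, 0), (2, 2, 0)]:
    d = cubic_d_spacing(a, *hkl)
    print(hkl, round(d, 3), round(bragg_two_theta(d), 2))
```

Intensities would additionally require the atomic coordinates (structure factors) and the instrumental resolution mentioned in the abstract.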

  10. Database Description - tRNADB-CE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information of the database: Database name: tRNADB-CE. Background and funding: MEXT Integrated Database Project. License: CC BY-SA. References: Nucleic Acids Res. 2009 Jan;37(Database issue):D163-8; a later article, "tRNADB-CE 2011: tRNA gene database curat...", is also cited.

  11. Native Pig and Chicken Breed Database: NPCDB

    Directory of Open Access Journals (Sweden)

    Hyeon-Soo Jeong

    2014-10-01

    Full Text Available Indigenous (native) breeds of livestock have higher disease resistance and adaptation to the environment due to their high genetic diversity. Even though their extinction rate is accelerating due to the spread of commercial breeds, natural disasters, and civil war, well-established databases for native breeds are lacking. We therefore constructed the Native Pig and Chicken Breed Database (NPCDB), which integrates available information on these breeds from around the world. It is a nonprofit public database aimed at providing information on the genetic resources of indigenous pig and chicken breeds for their conservation. The NPCDB (http://npcdb.snu.ac.kr/) provides the phenotypic information and population size of each breed as well as its specific habitat, in addition to information on the distribution of genetic resources across each country. The database will contribute to the understanding of breed characteristics such as disease resistance and adaptation to environmental change, as well as to the conservation of indigenous genetic resources.

  12. GMDD: a database of GMO detection methods.

    Science.gov (United States)

    Dong, Wei; Yang, Litao; Shen, Kailin; Kim, Banghyun; Kleter, Gijs A; Marvin, Hans J P; Guo, Rong; Liang, Wanqi; Zhang, Dabing

    2008-06-04

    More than one hundred genetically modified organism (GMO) events have been developed and approved for commercialization worldwide, so GMO analysis methods are essential for the enforcement of GMO labelling regulations. Protein- and nucleic acid-based detection techniques have been developed and utilized for GMO identification and quantification; however, information supporting the harmonization and standardization of GMO analysis methods at the global level is needed. The GMO Detection method Database (GMDD) has collected almost all previously developed and reported GMO detection methods, grouped by strategy (screen-, gene-, construct-, and event-specific), and provides a user-friendly search service over the detection methods by GMO event name, exogenous gene, protein information, etc. In this database, users can obtain the sequences of exogenous integrations, which will facilitate the design of PCR primers and probes. Information on endogenous genes, certified reference materials, reference molecules, and the validation status of developed methods is also included. Furthermore, registered users can submit new detection methods and sequences, and newly submitted information is released soon after being checked. GMDD contains comprehensive information on GMO detection methods and will make GMO analysis much easier.
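
The strategy-based grouping and the search service can be illustrated with a toy in-memory version (the records, field names, and values below are invented for illustration and are not GMDD data):

```python
# Hypothetical detection-method records; fields and values are illustrative only.
methods = [
    {"event": "MON810", "target": "cry1Ab", "strategy": "event-specific", "technique": "real-time PCR"},
    {"event": "MON810", "target": "hsp70/cry1Ab junction", "strategy": "construct-specific", "technique": "PCR"},
    {"event": "GTS 40-3-2", "target": "CP4 epsps", "strategy": "gene-specific", "technique": "real-time PCR"},
    {"event": None, "target": "P-35S", "strategy": "screen-specific", "technique": "PCR"},
]

def search(records, event=None, strategy=None):
    """Filter records the way a search form would: by event name and/or strategy."""
    hits = records
    if event is not None:
        hits = [m for m in hits if m["event"] == event]
    if strategy is not None:
        hits = [m for m in hits if m["strategy"] == strategy]
    return hits

print([m["target"] for m in search(methods, event="MON810")])
```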

  13. Qualitative Comparison of IGRA and ESRL Radiosonde Archived Databases

    Science.gov (United States)

    Walker, John R.

    2014-01-01

    Multiple databases of atmospheric profile information are freely available to individuals and groups such as the Natural Environments group. Two of the primary database archives provided by NOAA that are most frequently used are those from the Earth System Research Laboratory (ESRL) and the Integrated Global Radiosonde Archive (IGRA). Inquiries have been made as to why one database is used as opposed to the other, yet to the best of our knowledge no formal comparison has been performed. The goal of this study is to provide a qualitative comparison of the ESRL and IGRA radiosonde databases. For part of this analysis, 14 upper-air observation sites were selected. These sites share the common attribute of having been used, or being planned for use, in the development of Range Reference Atmospheres (RRAs) in support of NASA's and DOD's current and future goals.

  14. A Generative Approach for Building Database Federations

    Directory of Open Access Journals (Sweden)

    Uwe Hohenstein

    1999-11-01

    Full Text Available A comprehensive, specification-based approach to building database federations is introduced that supports integrated, ODMG 2.0-conforming access to heterogeneous data sources from C++. The approach is centered around several generators. A first set of generators produces ODMG adapters for local sources in order to homogenize them. Each adapter represents an ODMG view and supports ODMG manipulation and querying; the adapters can be plugged into a federation framework. Another generator produces a homogeneous and uniform view by putting an ODMG-conforming federation layer on top of the adapters. Input to these generators are schema specifications, defined in corresponding specification languages. There are languages to homogenize relational and object-oriented databases, as well as ordinary file systems. Any specification defines an ODMG schema and relates it to an existing data source. An integration language is then used to integrate the schemata and to build system-spanning federated views on top of them. The generative nature provides flexibility with respect to schema modification of component databases: any time a schema changes, only the specification has to be adapted, and new adapters are generated automatically.
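
The generated-adapter idea can be sketched language-neutrally. Below is a hedged Python analogue (class names and record fields are my own, not the paper's C++ interfaces) in which each adapter exposes the same query interface over a different kind of source, and a federation layer fans queries out across them:

```python
class RelationalAdapter:
    """Wraps a relational source (here just a list of rows) behind a uniform interface."""
    def __init__(self, rows):
        self.rows = rows
    def query(self, predicate):
        return [r for r in self.rows if predicate(r)]

class FileAdapter:
    """Wraps an ordinary file-like source (here key=value;key=value lines)."""
    def __init__(self, lines):
        self.records = [dict(kv.split("=", 1) for kv in line.split(";")) for line in lines]
    def query(self, predicate):
        return [r for r in self.records if predicate(r)]

class Federation:
    """Uniform view over heterogeneous adapters, like the generated federation layer."""
    def __init__(self, *adapters):
        self.adapters = adapters
    def query(self, predicate):
        return [r for a in self.adapters for r in a.query(predicate)]

fed = Federation(
    RelationalAdapter([{"name": "pump", "site": "A"}, {"name": "valve", "site": "B"}]),
    FileAdapter(["name=sensor;site=A"]),
)
print([r["name"] for r in fed.query(lambda r: r["site"] == "A")])
```

In the paper the adapters are generated from schema specifications rather than written by hand; the sketch only shows the shape of the resulting homogenized access.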

  15. Database reliability engineering designing and operating resilient database systems

    CERN Document Server

    Campbell, Laine

    2018-01-01

    The infrastructure-as-code revolution in IT is also affecting database administration. With this practical book, developers, system administrators, and junior to mid-level DBAs will learn how the modern practice of site reliability engineering applies to the craft of database architecture and operations. Authors Laine Campbell and Charity Majors provide a framework for professionals looking to join the ranks of today’s database reliability engineers (DBRE). You’ll begin by exploring core operational concepts that DBREs need to master. Then you’ll examine a wide range of database persistence options, including how to implement key technologies to provide resilient, scalable, and performant data storage and retrieval. With a firm foundation in database reliability engineering, you’ll be ready to dive into the architecture and operations of any modern database. This book covers: Service-level requirements and risk management Building and evolving an architecture for operational visibility ...

  16. Solving Relational Database Problems with ORDBMS in an Advanced Database Course

    Science.gov (United States)

    Wang, Ming

    2011-01-01

    This paper introduces how to use the object-relational database management system (ORDBMS) to solve relational database (RDB) problems in an advanced database course. The purpose of the paper is to provide a guideline for database instructors who desire to incorporate the ORDB technology in their traditional database courses. The paper presents…

  17. Advanced information technology: Building stronger databases

    Energy Technology Data Exchange (ETDEWEB)

    Price, D. [Lawrence Livermore National Lab., CA (United States)

    1994-12-01

    This paper discusses the attributes of the Advanced Information Technology (AIT) tool set, a database application builder designed at the Lawrence Livermore National Laboratory. AIT consists of a C library and several utilities that provide referential integrity across a database, interactive menu and field level help, and a code generator for building tightly controlled data entry support. AIT also provides for dynamic menu trees, report generation support, and creation of user groups. Composition of the library and utilities is discussed, along with relative strengths and weaknesses. In addition, an instantiation of the AIT tool set is presented using a specific application. Conclusions about the future and value of the tool set are then drawn based on the use of the tool set with that specific application.
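
As a rough, language-neutral sketch of the referential-integrity checking such a library provides (AIT itself is a C library; the function and field names here are invented for illustration):

```python
def check_referential_integrity(child_rows, fk_column, parent_keys):
    """Return child rows whose foreign-key value has no matching parent key."""
    parents = set(parent_keys)
    return [row for row in child_rows if row[fk_column] not in parents]

# Illustrative data: one child row references a parent (7) that does not exist.
parents = [1, 2, 3]
children = [{"id": 10, "parent_id": 1}, {"id": 11, "parent_id": 7}]
orphans = check_referential_integrity(children, "parent_id", parents)
print(orphans)
```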

  18. "With a Touch of a Button": Staff perceptions on integrating technology in an Irish service provider for people with intellectual disabilities.

    Science.gov (United States)

    Clifford Simplican, Stacy; Shivers, Carolyn; Chen, June; Leader, Geraldine

    2018-01-01

    People with intellectual disabilities continue to underutilize technology, in part due to insufficient training. Because support staff professionals provide instructional support, how they perceive integrating new technologies is important for people with intellectual disabilities. The authors conducted a sequential mixed-methods exploratory study (quan→QUAL) including quantitative data from online surveys completed by 46 staff members and qualitative data from five focus groups attended by 39 staff members. Quantitative results show strong support for diverse technologies. In contrast, qualitative results suggest that staff members' support of technology decreases when they perceive that technology may jeopardize service users' safety or independence. Although staff members identified increasing independence as the main reason to use new technologies with service users, they also worried that technologies used to increase the social inclusion of service users may pose undue risk and thus may limit their embrace of technology. © 2017 John Wiley & Sons Ltd.

  19. A hybrid optical switch architecture to integrate IP into optical networks to provide flexible and intelligent bandwidth on demand for cloud computing

    Science.gov (United States)

    Yang, Wei; Hall, Trevor J.

    2013-12-01

    The Internet is entering an era of cloud computing to provide more cost effective, eco-friendly and reliable services to consumer and business users. As a consequence, the nature of the Internet traffic has been fundamentally transformed from a pure packet-based pattern to today's predominantly flow-based pattern. Cloud computing has also brought about an unprecedented growth in the Internet traffic. In this paper, a hybrid optical switch architecture is presented to deal with the flow-based Internet traffic, aiming to offer flexible and intelligent bandwidth on demand to improve fiber capacity utilization. The hybrid optical switch is capable of integrating IP into optical networks for cloud-based traffic with predictable performance, for which the delay performance of the electronic module in the hybrid optical switch architecture is evaluated through simulation.

  20. Yucca Mountain digital database

    International Nuclear Information System (INIS)

    Daudt, C.R.; Hinze, W.J.

    1992-01-01

    This paper discusses the Yucca Mountain Digital Database (DDB) which is a digital, PC-based geographical database of geoscience-related characteristics of the proposed high-level waste (HLW) repository site of Yucca Mountain, Nevada. It was created to provide the US Nuclear Regulatory Commission's (NRC) Advisory Committee on Nuclear Waste (ACNW) and its staff with a visual perspective of geological, geophysical, and hydrological features at the Yucca Mountain site as discussed in the Department of Energy's (DOE) pre-licensing reports

  1. Developmental Anatomy Ontology of Zebrafish: an Integrative semantic framework

    Directory of Open Access Journals (Sweden)

    Belmamoune Mounia

    2007-12-01

    Full Text Available Integration of information is quintessential to making use of the wealth of bioinformatics resources. One aspect of integration is to make databases interoperable through well-annotated information. New databases strive to store complementary information, and this results in collections of heterogeneous information systems. Concepts in these databases need to be connected, and ontologies typically provide a common terminology for sharing information among different resources.

  2. The integration of the treatment for common mental disorders in primary care: experiences of health care providers in the MANAS trial in Goa, India

    Directory of Open Access Journals (Sweden)

    Kirkwood Betty R

    2011-10-01

    Full Text Available Abstract Background: The MANAS trial reported that a Lay Health Counsellor (LHC)-led collaborative stepped-care intervention (the "MANAS intervention") for Common Mental Disorders (CMD) was effective in public sector primary care clinics, but private sector General Practitioners (GPs) did as well with or without the additional counsellor. This paper aims to describe the experiences of integrating the MANAS intervention in primary care. Methods: Qualitative semi-structured interviews with key members (n = 119) of the primary health care teams upon completion of the trial, and additional interviews with control-arm GPs upon completion of the outcome analyses, which revealed non-inferiority of this arm. Results: Several components of the MANAS intervention were reported to have been critically important for facilitating integration, notably: screening and the categorization of the severity of CMD; provision of psychosocial treatments and adherence management; and the support of the visiting psychiatrist. Non-adherence was common, often because symptoms had been controlled or because of doubt that health care interventions could address one's 'life difficulties'. Interpersonal therapy was intended to be provided face to face by the LHC; however, it could not be delivered for most eligible patients because of the cost of travel to the clinic and the time lost from work. The LHCs had particular difficulty working with patients with extreme social difficulties or alcohol-related problems, and with elderly patients, as the intervention seemed unable to address their specific needs. The control-arm GPs adopted practices similar to the principles of the MANAS intervention: they routinely diagnosed CMD and provided psychoeducation, advice on lifestyle changes and problem solving, prescribed antidepressants, and referred to specialists as appropriate.
Conclusion: The key factors which enhance the acceptability and integration of an LHC in primary care are

  3. The LHCb configuration database

    CERN Document Server

    Abadie, Lana; Gaspar, Clara; Jacobsson, Richard; Jost, Beat; Neufeld, Niko

    2005-01-01

    The Experiment Control System (ECS) will handle the monitoring, configuration and operation of all the LHCb experimental equipment. All parameters required to configure electronics equipment under the control of the ECS will reside in a configuration database. The database will contain two kinds of information: 1. Configuration properties of devices, such as hardware addresses, geographical location, and operational parameters associated with particular running modes (dynamic properties). 2. Connectivity between devices: describing the output and input connections of a device (static properties). The representation of these data using tables must be complete, so that it can provide all the required information to the ECS, and must cater for all the subsystems. The design should also guarantee a fast response time, even if a query results in a large volume of data being loaded from the database into the ECS. To fulfil these constraints, we apply the following methodology: Determine from the d...
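
The two kinds of information map naturally onto two tables. The sketch below (using SQLite for convenience; the table and column names are illustrative, not the actual LHCb schema) shows a device-properties table plus a connectivity table describing the links between devices:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE device (
    name TEXT PRIMARY KEY,
    hw_address TEXT,          -- hardware address
    location TEXT,            -- geographical location
    running_mode TEXT         -- operational parameters per running mode
);
CREATE TABLE connectivity (
    src TEXT REFERENCES device(name),   -- output end of the link
    dst TEXT REFERENCES device(name)    -- input end of the link
);
""")
conn.executemany("INSERT INTO device VALUES (?,?,?,?)",
                 [("board_01", "0xA1", "rack 3", "PHYSICS"),
                  ("board_02", "0xB2", "rack 4", "PHYSICS")])
conn.execute("INSERT INTO connectivity VALUES (?,?)", ("board_01", "board_02"))

# Which devices feed board_02?
rows = conn.execute("SELECT src FROM connectivity WHERE dst = ?", ("board_02",)).fetchall()
print(rows)
```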

  4. Database Application Schema Forensics

    Directory of Open Access Journals (Sweden)

    Hector Quintus Beyers

    2014-12-01

    Full Text Available The application schema layer of a Database Management System (DBMS) can be modified to deliver results that may warrant a forensic investigation: table structures can be corrupted by changing the metadata of a database, or operators of the database can be altered to deliver incorrect results when used in queries. This paper discusses categories of possibilities for altering the application schema, with some practical examples. Two forensic environments in which an investigation can take place are introduced, arguments are provided for why these environments are important, and methods are presented for how they can be achieved for the application schema layer of a DBMS. A process is proposed for how forensic evidence should be extracted from the application schema layer; this application schema forensic evidence identification process can be applied to a wide range of forensic settings.
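
A tiny illustration of the kind of application-schema tampering described (all names invented; SQLite used for convenience): a view named like the expected table silently filters rows, so application queries return incorrect results while the base data remains intact:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments_raw (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO payments_raw VALUES (?,?)",
                 [(1, 100.0), (2, 250.0), (3, 99.0)])

# Tampered application schema: a view named like the expected table
# hides every payment above 200.
conn.execute("CREATE VIEW payments AS SELECT * FROM payments_raw WHERE amount <= 200")

visible = conn.execute("SELECT COUNT(*) FROM payments").fetchone()[0]
actual = conn.execute("SELECT COUNT(*) FROM payments_raw").fetchone()[0]
print(visible, actual)  # the discrepancy is the forensic artefact

# A forensic pass inspects the stored schema itself rather than trusting names.
schema = conn.execute("SELECT type, name FROM sqlite_master").fetchall()
print(schema)
```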

  5. Integrated remote sensing and visualization (IRSV) system for transportation infrastructure operations and management, phase one, volume 2 : knowledge modeling and database development.

    Science.gov (United States)

    2009-12-01

    The Integrated Remote Sensing and Visualization System (IRSV) is being designed to accommodate the needs of today's bridge engineers at the state and local level, from several aspects that were documented in Volume One, Summary Report. The followi...

  6. LHCb distributed conditions database

    International Nuclear Information System (INIS)

    Clemencic, M

    2008-01-01

    The LHCb Conditions Database project provides the necessary tools to handle non-event time-varying data. The main users of conditions are reconstruction and analysis processes, which are running on the Grid. To allow efficient access to the data, we need to use a synchronized replica of the content of the database located at the same site as the event data file, i.e. the LHCb Tier1. The replica to be accessed is selected from information stored on LFC (LCG File Catalog) and managed with the interface provided by the LCG developed library CORAL. The plan to limit the submission of jobs to those sites where the required conditions are available will also be presented. LHCb applications are using the Conditions Database framework on a production basis since March 2007. We have been able to collect statistics on the performance and effectiveness of both the LCG library COOL (the library providing conditions handling functionalities) and the distribution framework itself. Stress tests on the CNAF hosted replica of the Conditions Database have been performed and the results will be summarized here

  7. Customer database for Watrec Oy

    OpenAIRE

    Melnichikhina, Ekaterina

    2016-01-01

    This thesis is a development project for Watrec Oy, a Finnish company that specializes in "waste-to-energy" issues. Customer Relationship Management (CRM) strategies are now being applied within the company, and the customer database is the first, trial step towards a CRM strategy at Watrec Oy. The reasons for the database project lie in the lack of clear customer data. The main objectives are: - To integrate the customers' and project data; - To improve the level of sales and mar...

  8. The Neotoma Paleoecology Database

    Science.gov (United States)

    Grimm, E. C.; Ashworth, A. C.; Barnosky, A. D.; Betancourt, J. L.; Bills, B.; Booth, R.; Blois, J.; Charles, D. F.; Graham, R. W.; Goring, S. J.; Hausmann, S.; Smith, A. J.; Williams, J. W.; Buckland, P.

    2015-12-01

    The Neotoma Paleoecology Database (www.neotomadb.org) is a multiproxy, open-access, relational database that includes fossil data for the past 5 million years (the late Neogene and Quaternary Periods). Modern distributional data for various organisms are also being made available for calibration and paleoecological analyses. The project is a collaborative effort among individuals from more than 20 institutions worldwide, including domain scientists representing a spectrum of Pliocene-Quaternary fossil data types, as well as experts in information technology. Working groups are active for diatoms, insects, ostracodes, pollen and plant macroscopic remains, testate amoebae, rodent middens, vertebrates, age models, geochemistry and taphonomy. Groups are also active in developing online tools for data analyses and for developing modules for teaching at different levels. A key design concept of NeotomaDB is that stewards for various data types are able to remotely upload and manage data. Cooperatives for different kinds of paleo data, or from different regions, can appoint their own stewards. Over the past year, much progress has been made on development of the steward software-interface that will enable this capability. The steward interface uses web services that provide access to the database. More generally, these web services enable remote programmatic access to the database, which both desktop and web applications can use and which provide real-time access to the most current data. Use of these services can alleviate the need to download the entire database, which can be out-of-date as soon as new data are entered. In general, the Neotoma web services deliver data either from an entire table or from the results of a view. Upon request, new web services can be quickly generated. Future developments will likely expand the spatial and temporal dimensions of the database. NeotomaDB is open to receiving new datasets and stewards from the global Quaternary community
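
The web-service style of access described above can be sketched without contacting the live API. In the example below the endpoint path, parameter name, and JSON shape are assumptions made for illustration; the current Neotoma API documentation defines the real contract:

```python
import json
from urllib.parse import urlencode

BASE = "https://api.neotomadb.org"  # assumed base URL, for illustration only

def build_sites_url(sitename):
    """Compose a hypothetical site query; real endpoint and parameter names may differ."""
    return f"{BASE}/v2.0/data/sites?{urlencode({'sitename': sitename})}"

def parse_sites(payload):
    """Extract site names from an assumed {'data': [{'sitename': ...}]} response shape."""
    return [rec["sitename"] for rec in json.loads(payload)["data"]]

print(build_sites_url("Devils Lake"))
sample = '{"data": [{"sitename": "Devils Lake", "siteid": 666}]}'
print(parse_sites(sample))
```

The point of the sketch is the division of labour the abstract describes: the service returns current data on request, so clients never need a full, soon-stale download of the database.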

  9. Storing XML Documents in Databases

    OpenAIRE

    Schmidt, A.R.; Manegold, Stefan; Kersten, Martin; Rivero, L.C.; Doorn, J.H.; Ferraggine, V.E.

    2005-01-01

    textabstractThe authors introduce concepts for loading large amounts of XML documents into databases where the documents are stored and maintained. The goal is to make XML databases as unobtrusive in multi-tier systems as possible and at the same time provide as many services defined by the XML standards as possible. The ubiquity of XML has sparked great interest in deploying concepts known from Relational Database Management Systems such as declarative query languages, transactions, indexes ...
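
One common "shredding" scheme stores each XML element as a row carrying a reference to its parent, preserving document order. A minimal sketch using only the standard library (the schema is illustrative, not one proposed by the authors):

```python
import sqlite3
import xml.etree.ElementTree as ET

def shred(xml_text, conn):
    """Store each element as (id, parent, tag, text) rows -- the edge-table approach."""
    conn.execute("CREATE TABLE IF NOT EXISTS node (id INTEGER, parent INTEGER, tag TEXT, text TEXT)")
    counter = 0
    def walk(elem, parent_id):
        nonlocal counter
        counter += 1
        my_id = counter
        conn.execute("INSERT INTO node VALUES (?,?,?,?)",
                     (my_id, parent_id, elem.tag, (elem.text or "").strip()))
        for child in elem:
            walk(child, my_id)
    walk(ET.fromstring(xml_text), None)

conn = sqlite3.connect(":memory:")
shred("<book><title>XML in DBs</title><author>Schmidt</author></book>", conn)
print(conn.execute("SELECT tag, text FROM node ORDER BY id").fetchall())
```

Declarative queries and indexes then come for free from the relational engine, which is the appeal the abstract points to.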

  10. The NCBI BioSystems database.

    Science.gov (United States)

    Geer, Lewis Y; Marchler-Bauer, Aron; Geer, Renata C; Han, Lianyi; He, Jane; He, Siqian; Liu, Chunlei; Shi, Wenyao; Bryant, Stephen H

    2010-01-01

    The NCBI BioSystems database, found at http://www.ncbi.nlm.nih.gov/biosystems/, centralizes and cross-links existing biological systems databases, increasing their utility and target audience by integrating their pathways and systems into NCBI resources. This integration allows users of NCBI's Entrez databases to quickly categorize proteins, genes and small molecules by metabolic pathway, disease state or other BioSystem type, without requiring time-consuming inference of biological relationships from the literature or multiple experimental datasets.
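
Programmatic access to Entrez databases generally goes through the E-utilities. The sketch below only composes an esearch URL and does not contact NCBI; the database name and parameters shown should be checked against current E-utilities documentation:

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def esearch_url(db, term, retmax=20):
    """Compose an Entrez esearch query URL for the given database and search term."""
    return f"{EUTILS}/esearch.fcgi?{urlencode({'db': db, 'term': term, 'retmax': retmax})}"

# e.g. find BioSystems records mentioning a pathway of interest
print(esearch_url("biosystems", "glycolysis"))
```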

  11. Physical database design using Oracle

    CERN Document Server

    Burleson, Donald K

    2004-01-01

    Introduction to Oracle Physical Design: Preface; Relational Databases and Physical Design; Systems Analysis and Physical Database Design; Introduction to Logical Database Design; Entity/Relation Modeling; Bridging between Logical and Physical Models; Physical Design Requirements Validation. Physical Entity Design for Oracle: Data Relationships and Physical Design; Massive De-Normalization: STAR Schema Design; Designing Class Hierarchies; Materialized Views and De-Normalization; Referential Integrity; Conclusion. Oracle Hardware Design: Planning the Server Environment; Designing the Network Infrastructure for Oracle; Oracle Netw

  12. Improving the thermal integrity of new single-family detached residential buildings: Documentation for a regional database of capital costs and space conditioning load savings

    International Nuclear Information System (INIS)

    Koomey, J.G.; McMahon, J.E.; Wodley, C.

    1991-07-01

    This report summarizes the costs and space-conditioning load savings from improving new single-family building shells. It relies on survey data from the National Association of Home Builders (NAHB) to assess current insulation practices for these new buildings, and on NAHB cost data (aggregated to the federal region level) to estimate the costs of improving new single-family buildings beyond current practice. Space-conditioning load savings are estimated using a database of loads for prototype buildings developed at Lawrence Berkeley Laboratory, adjusted to reflect population-weighted average weather in each of the ten federal regions and for the nation as a whole.

  13. JDD, Inc. Database

    Science.gov (United States)

    Miller, David A., Jr.

    2004-01-01

    JDD, Inc. is a maintenance and custodial contracting company whose mission is to provide its clients in the private and government sectors "quality construction, construction management and cleaning services in the most efficient and cost effective manners" (JDD, Inc. Mission Statement). The company provides facilities support for Fort Riley in Fort Riley, Kansas and for the NASA John H. Glenn Research Center at Lewis Field here in Cleveland, Ohio. JDD, Inc. is owned and operated by James Vaughn, who started as a painter at NASA Glenn and has been working here for the past seventeen years. This summer I worked under Devan Anderson, the safety manager for JDD, Inc. in the Logistics and Technical Information Division (LTID) at Glenn Research Center. The LTID provides all transportation, secretarial, and security needs and manages the contracts for these various services at the center. As a safety manager, my mentor provides Occupational Safety and Health Administration (OSHA) compliance for all JDD, Inc. employees and handles all other issues involving job safety (Environmental Protection Agency issues, workers' compensation, and safety and health training). My summer assignment was not considered "groundbreaking research" like that of many other summer interns, but it is just as important and beneficial to JDD, Inc. I initially created a database using Microsoft Excel to classify and categorize data pertaining to the numerous safety training certification courses instructed by our safety manager during the course of the fiscal year. This early portion of the database consisted only of data from the training certification courses (the training field index, and which employees were present at or absent from each course). Once I completed this phase, I decided to expand the database and add as many dimensions to it as possible. Throughout the last seven weeks, I have been compiling more data from day-to-day operations and been adding the

  14. Network Information Management: The Key To Providing High WAN Availability.

    Science.gov (United States)

    Tysdal, Craig

    1996-01-01

    Discusses problems associated with increasing corporate network complexity as a result of the proliferation of client/server applications at remote locations, and suggests the key to providing high WAN (wide area network) availability is relational databases used in an integrated management approach. (LRW)

  15. Storing XML Documents in Databases

    NARCIS (Netherlands)

    A.R. Schmidt; S. Manegold (Stefan); M.L. Kersten (Martin); L.C. Rivero; J.H. Doorn; V.E. Ferraggine

    2005-01-01

    textabstractThe authors introduce concepts for loading large amounts of XML documents into databases where the documents are stored and maintained. The goal is to make XML databases as unobtrusive in multi-tier systems as possible and at the same time provide as many services defined by the XML

  16. XCOM: Photon Cross Sections Database

    Science.gov (United States)

    SRD 8 XCOM: Photon Cross Sections Database (Web, free access)   A web database is provided which can be used to calculate photon cross sections for scattering, photoelectric absorption and pair production, as well as total attenuation coefficients, for any element, compound or mixture (Z <= 100) at energies from 1 keV to 100 GeV.
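
Coefficients retrieved from XCOM are typically applied through the exponential attenuation law I/I0 = exp(-(mu/rho) * rho * x). A small sketch (the mass attenuation coefficient below is a placeholder, not an XCOM value; the density is that of lead):

```python
import math

def transmitted_fraction(mu_over_rho, density, thickness_cm):
    """Beer-Lambert attenuation: I/I0 = exp(-(mu/rho) * rho * x)."""
    return math.exp(-mu_over_rho * density * thickness_cm)

# Placeholder coefficient (cm^2/g) -- look up the real value in XCOM
# for the element/compound and photon energy of interest.
mu_over_rho = 0.15
density = 11.35        # lead, g/cm^3
thickness = 1.0        # cm
print(round(transmitted_fraction(mu_over_rho, density, thickness), 4))
```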

  17. The Danish Sarcoma Database

    Directory of Open Access Journals (Sweden)

    Jorgensen PH

    2016-10-01

    Full Text Available Peter Holmberg Jørgensen,1 Gunnar Schwarz Lausten,2 Alma B Pedersen3 1Tumor Section, Department of Orthopedic Surgery, Aarhus University Hospital, Aarhus, 2Tumor Section, Department of Orthopedic Surgery, Rigshospitalet, Copenhagen, 3Department of Clinical Epidemiology, Aarhus University Hospital, Aarhus, Denmark Aim: The aim of the database is to gather information about sarcomas treated in Denmark in order to continuously monitor and improve the quality of sarcoma treatment from a local, a national, and an international perspective. Study population: Patients in Denmark diagnosed with a sarcoma, both skeletal and extraskeletal, have been registered since 2009. Main variables: The database contains information about appearance of symptoms; date of receiving referral to a sarcoma center; date of first visit; whether surgery has been performed elsewhere before referral, diagnosis, and treatment; tumor characteristics such as location, size, malignancy grade, and growth pattern; details on treatment (kind of surgery, amount of radiation therapy, type and duration of chemotherapy); complications of treatment; local recurrence and metastases; and comorbidity. In addition, several quality indicators are registered in order to measure the quality of care provided by the hospitals and make comparisons between hospitals and with international standards. Descriptive data: Demographic patient-specific data such as age, sex, region of living, comorbidity, World Health Organization's International Classification of Diseases – tenth edition codes and TNM Classification of Malignant Tumours, and date of death (after yearly coupling to the Danish Civil Registration System). Data quality and completeness are currently secured. Conclusion: The Danish Sarcoma Database is population based and includes sarcomas occurring in Denmark since 2009. It is a valuable tool for monitoring sarcoma incidence and quality of treatment and its improvement, postoperative

  18. Active in-database processing to support ambient assisted living systems.

    Science.gov (United States)

    de Morais, Wagner O; Lundström, Jens; Wickström, Nicholas

    2014-08-12

    As an alternative to the existing software architectures that underpin the development of smart homes and ambient assisted living (AAL) systems, this work presents a database-centric architecture that takes advantage of active databases and in-database processing. Current platforms supporting AAL systems use database management systems (DBMSs) exclusively for data storage. Active databases employ database triggers to detect and react to events taking place inside or outside of the database. DBMSs can be extended with stored procedures and functions that enable in-database processing. This means that the data processing is integrated and performed within the DBMS. The feasibility and flexibility of the proposed approach were demonstrated with the implementation of three distinct AAL services. The active database was used to detect bed-exits and to discover common room transitions and deviations during the night. In-database machine learning methods were used to model early night behaviors. Consequently, active in-database processing avoids transferring sensitive data outside the database, and this improves performance, security and privacy. Furthermore, centralizing the computation into the DBMS facilitates code reuse, adaptation and maintenance. These are important system properties that take into account the evolving heterogeneity of users, their needs and the devices that are characteristic of smart homes and AAL systems. Therefore, DBMSs can provide capabilities to address requirements for scalability, security, privacy, dependability and personalization in applications of smart environments in healthcare.
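
    The trigger-based reaction the abstract describes can be sketched with any DBMS that supports triggers; the minimal SQLite example below uses hypothetical table and event names, not the cited system's schema:

    ```python
    import sqlite3

    # Active in-database processing in miniature: a trigger reacts to sensor
    # rows as they arrive, writing alerts without any application-side code.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE sensor_events (sensor TEXT, state TEXT, ts INTEGER);
    CREATE TABLE alerts (message TEXT, ts INTEGER);

    -- The reaction lives inside the DBMS: a bed sensor reporting
    -- 'vacant' raises an alert row the moment the event is inserted.
    CREATE TRIGGER bed_exit AFTER INSERT ON sensor_events
    WHEN NEW.sensor = 'bed' AND NEW.state = 'vacant'
    BEGIN
        INSERT INTO alerts VALUES ('possible bed exit', NEW.ts);
    END;
    """)

    conn.execute("INSERT INTO sensor_events VALUES ('bed', 'occupied', 1)")
    conn.execute("INSERT INTO sensor_events VALUES ('bed', 'vacant', 2)")
    alerts = conn.execute("SELECT message, ts FROM alerts").fetchall()
    print(alerts)  # → [('possible bed exit', 2)]
    ```

    Because the detection logic runs where the data lives, the raw sensor rows never have to leave the database, which is the privacy and performance argument the abstract makes.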

  19. Active In-Database Processing to Support Ambient Assisted Living Systems

    Directory of Open Access Journals (Sweden)

    Wagner O. de Morais

    2014-08-01

    Full Text Available As an alternative to the existing software architectures that underpin the development of smart homes and ambient assisted living (AAL) systems, this work presents a database-centric architecture that takes advantage of active databases and in-database processing. Current platforms supporting AAL systems use database management systems (DBMSs) exclusively for data storage. Active databases employ database triggers to detect and react to events taking place inside or outside of the database. DBMSs can be extended with stored procedures and functions that enable in-database processing. This means that the data processing is integrated and performed within the DBMS. The feasibility and flexibility of the proposed approach were demonstrated with the implementation of three distinct AAL services. The active database was used to detect bed-exits and to discover common room transitions and deviations during the night. In-database machine learning methods were used to model early night behaviors. Consequently, active in-database processing avoids transferring sensitive data outside the database, and this improves performance, security and privacy. Furthermore, centralizing the computation into the DBMS facilitates code reuse, adaptation and maintenance. These are important system properties that take into account the evolving heterogeneity of users, their needs and the devices that are characteristic of smart homes and AAL systems. Therefore, DBMSs can provide capabilities to address requirements for scalability, security, privacy, dependability and personalization in applications of smart environments in healthcare.

  20. Second-Tier Database for Ecosystem Focus, 2000-2001 Annual Report.

    Energy Technology Data Exchange (ETDEWEB)

    Van Holmes, Chris; Muongchanh, Christine; Anderson, James J. (University of Washington, School of Aquatic and Fishery Sciences, Seattle, WA)

    2001-11-01

    The Second-Tier Database for Ecosystem Focus (Contract 00004124) provides direct and timely public access to Columbia Basin environmental, operational, fishery and riverine data resources for federal, state, public and private entities. The Second-Tier Database known as Data Access in Realtime (DART) does not duplicate services provided by other government entities in the region. Rather, it integrates public data for effective access, consideration and application.