WorldWideScience

Sample records for database warehouse toolkit

  1. BioWarehouse: a bioinformatics database warehouse toolkit.

    Science.gov (United States)

    Lee, Thomas J; Pouliot, Yannick; Wagner, Valerie; Gupta, Priyanka; Stringer-Calvert, David W J; Tenenbaum, Jessica D; Karp, Peter D

    2006-03-23

    This article addresses the problem of interoperation of heterogeneous bioinformatics databases. We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and Java languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. BioWarehouse embodies significant progress on the database integration problem for bioinformatics.
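    The multi-database gap query described above can be sketched in SQL. This is a minimal illustration assuming hypothetical warehouse table and column names (enzyme_activity, protein, protein_sequence), not BioWarehouse's actual schema:

    ```sql
    -- Count EC-numbered enzyme activities with no linked protein sequence
    -- anywhere in the warehouse (table/column names are illustrative).
    SELECT COUNT(DISTINCT ea.ec_number) AS unsequenced_activities
    FROM enzyme_activity ea
    WHERE ea.ec_number IS NOT NULL
      AND NOT EXISTS (
            SELECT 1
            FROM protein p
            JOIN protein_sequence ps ON ps.protein_id = p.protein_id
            WHERE p.ec_number = ea.ec_number
          );
    ```

    Dividing this count by the total number of assigned EC numbers would yield the 36% figure reported above.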

  2. BioWarehouse: a bioinformatics database warehouse toolkit

    Directory of Open Access Journals (Sweden)

    Stringer-Calvert David WJ

    2006-03-01

    Full Text Available Abstract Background This article addresses the problem of interoperation of heterogeneous bioinformatics databases. Results We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL), but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and Java languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. Conclusion BioWarehouse embodies significant progress on the database integration problem for bioinformatics.

  3. The Data Warehouse Lifecycle Toolkit

    CERN Document Server

    Kimball, Ralph; Thornthwaite, Warren; Mundy, Joy; Becker, Bob

    2011-01-01

    A thorough update to the industry standard for designing, developing, and deploying data warehouse and business intelligence systems. The world of data warehousing has changed remarkably since the first edition of The Data Warehouse Lifecycle Toolkit was published in 1998. In that time, the data warehouse industry has reached full maturity and acceptance, hardware and software have made staggering advances, and the techniques promoted in the premier edition of this book have been adopted by nearly all data warehouse vendors and practitioners. In addition, the term "business intelligence" emerged...

  4. Database Vs Data Warehouse

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available Data warehouse technology includes a set of concepts and methods that offer users useful information for decision making. The necessity to build a data warehouse arises from the necessity to improve the quality of information in the organization. The data, originating from different sources and taking a variety of forms, both structured and unstructured, are filtered according to business rules and are integrated into a single large data collection. Using informatics solutions, managers have come to understand that the data stored in operational systems, including databases, are an informational gold mine that must be exploited. Data warehouses have been developed to answer the increasing demands for complex analysis, which could not be properly achieved with operational databases. The present paper emphasizes some of the criteria that information application developers can use in order to choose between a database solution and a data warehouse solution.

  5. Optimized Database of Higher Education Management Using Data Warehouse

    Directory of Open Access Journals (Sweden)

    Spits Warnars

    2010-04-01

    Full Text Available The emergence of new higher education institutions has created competition in the higher education market, and a data warehouse can be used as an effective technology tool for increasing competitiveness in that market. A data warehouse produces reliable reports for the institution’s high-level management in a short time, enabling faster and better decision making, not only on increasing the number of admitted students, but also on the possibility of finding extraordinary, unconventional funds for the institution. The efficiency comparison was based on the length and number of processed records, total processed bytes, number of processed tables, time to run a query, and records produced on the OLTP database and the data warehouse. Efficiency percentages were measured with the percentage-increase formula, and the average efficiency of 461,801.04% shows that using a data warehouse is more powerful and efficient than using an OLTP database. The data warehouse was modeled as a hypercube built from the limited set of high-demand reports usually used by high-level management. Fields representing constructive-merge loading are inserted into every fact and dimension table, and the ETL (Extraction, Transformation and Loading) process is run on both the old and new files.
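    The "percentage-increase formula" is presumably the standard one; a hedged rendering, where m_OLTP and m_DW denote the values of one compared metric (query time, records, or bytes processed) on the OLTP database and on the data warehouse:

    ```latex
    % Assumed form of the percentage-increase formula used for the comparison:
    \text{efficiency increase}\,(\%) = \frac{m_{\mathrm{OLTP}} - m_{\mathrm{DW}}}{m_{\mathrm{DW}}} \times 100
    ```

    Averaging this quantity over all compared metrics would give the 461,801.04% figure quoted above.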

  6. Design of a Multi Dimensional Database for the Archimed DataWarehouse.

    Science.gov (United States)

    Bréant, Claudine; Thurler, Gérald; Borst, François; Geissbuhler, Antoine

    2005-01-01

    The Archimed data warehouse project started in 1993 at the Geneva University Hospital. It has progressively integrated seven data marts (or domains of activity) archiving medical data such as Admission/Discharge/Transfer (ADT) data, laboratory results, radiology exams, diagnoses, and procedure codes. The objective of the Archimed data warehouse is to facilitate access to an integrated and coherent view of patient medical data in order to support analytical activities such as medical statistics, clinical studies, retrieval of similar cases and data mining processes. This paper discusses three principal design aspects relative to the conception of the database of the data warehouse: 1) the granularity of the database, which refers to the level of detail or summarization of data, 2) the database model and architecture, describing how data will be presented to end users and how new data is integrated, 3) the life cycle of the database, in order to ensure long term scalability of the environment. Both the organization of patient medical data using a standardized elementary fact representation and the use of the multidimensional model have proved to be powerful design tools to integrate data coming from the multiple heterogeneous database systems that are part of the transactional Hospital Information System (HIS). Concurrently, building the data warehouse in an incremental way has helped to control the evolution of the data content. These three design aspects bring clarity and performance regarding data access. They also provide long term scalability to the system and resilience to further changes that may occur in source systems feeding the data warehouse.
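    Granularity, the first design aspect, is easiest to see in code. A minimal sketch assuming hypothetical table names (not Archimed's actual schema): a fact table at the finest grain, one row per elementary fact (here, a lab result), and a coarser monthly summary derived from it:

    ```sql
    -- Finest grain: one row per elementary fact (one lab result).
    CREATE TABLE lab_result_fact (
      patient_id   INTEGER      NOT NULL,
      test_code    VARCHAR(16)  NOT NULL,
      result_value DECIMAL(10,3),
      result_date  DATE         NOT NULL
    );

    -- Coarser grain: a monthly summary derived from the detailed facts.
    CREATE TABLE lab_result_monthly AS
    SELECT patient_id,
           test_code,
           EXTRACT(YEAR FROM result_date)  AS cal_year,
           EXTRACT(MONTH FROM result_date) AS cal_month,
           AVG(result_value) AS avg_value,
           COUNT(*)          AS n_results
    FROM lab_result_fact
    GROUP BY patient_id, test_code,
             EXTRACT(YEAR FROM result_date),
             EXTRACT(MONTH FROM result_date);
    ```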

  7. Architectural design of a data warehouse to support operational and analytical queries across disparate clinical databases.

    Science.gov (United States)

    Chelico, John D; Wilcox, Adam; Wajngurt, David

    2007-10-11

    As the clinical data warehouse of the New York Presbyterian Hospital has evolved, innovative methods of integrating new data sources and providing more effective and efficient data reporting and analysis need to be explored. We designed and implemented a new clinical data warehouse architecture to handle the integration of disparate clinical databases in the institution. By examining the way downstream systems are populated and streamlining the way data is stored, we create a virtual clinical data warehouse that is adaptable to the future needs of the organization.

  8. Geminivirus data warehouse: a database enriched with machine learning approaches.

    Science.gov (United States)

    Silva, Jose Cleydson F; Carvalho, Thales F M; Basso, Marcos F; Deguchi, Michihito; Pereira, Welison A; Sobrinho, Roberto R; Vidigal, Pedro M P; Brustolini, Otávio J B; Silva, Fabyano F; Dal-Bianco, Maximiller; Fontes, Renildes L F; Santos, Anésia A; Zerbini, Francisco Murilo; Cerqueira, Fabio R; Fontes, Elizabeth P B

    2017-05-05

    The Geminiviridae family encompasses a group of single-stranded DNA viruses with twinned and quasi-isometric virions, which infect a wide range of dicotyledonous and monocotyledonous plants and are responsible for significant economic losses worldwide. Geminiviruses are divided into nine genera, according to their insect vector, host range, genome organization, and phylogeny reconstruction. Using rolling-circle amplification approaches along with high-throughput sequencing technologies, thousands of full-length geminivirus and satellite genome sequences were amplified and have become available in public databases. As a consequence, many important challenges have emerged, namely, how to classify, store, and analyze massive datasets as well as how to extract information or new knowledge. Data mining approaches, mainly supported by machine learning (ML) techniques, are a natural means for high-throughput data analysis in the context of genomics, transcriptomics, proteomics, and metabolomics. Here, we describe the development of a data warehouse enriched with ML approaches, designated geminivirus.org. We implemented search modules, bioinformatics tools, and ML methods to retrieve high precision information, demarcate species, and create classifiers for genera and open reading frames (ORFs) of geminivirus genomes. The use of data mining techniques such as ETL (Extract, Transform, Load) to feed our database, as well as algorithms based on machine learning for knowledge extraction, allowed us to obtain a database with quality data and suitable tools for bioinformatics analysis. The Geminivirus Data Warehouse (geminivirus.org) offers a simple and user-friendly environment for information retrieval and knowledge discovery related to geminiviruses.

  9. An integrative data analysis platform for gene set analysis and knowledge discovery in a data warehouse framework.

    Science.gov (United States)

    Chen, Yi-An; Tripathi, Lokesh P; Mizuguchi, Kenji

    2016-01-01

    Data analysis is one of the most critical and challenging steps in drug discovery and disease biology. A user-friendly resource to visualize and analyse high-throughput data provides a powerful medium for both experimental and computational biologists to understand vastly different biological data types and obtain a concise, simplified and meaningful output for better knowledge discovery. We have previously developed TargetMine, an integrated data warehouse optimized for target prioritization. Here we describe how upgraded and newly modelled data types in TargetMine can now survey the wider biological and chemical data space, relevant to drug discovery and development. To enhance the scope of TargetMine from target prioritization to broad-based knowledge discovery, we have also developed a new auxiliary toolkit to assist with data analysis and visualization in TargetMine. This toolkit features interactive data analysis tools to query and analyse the biological data compiled within the TargetMine data warehouse. The enhanced system enables users to discover new hypotheses interactively by performing complicated searches with no programming and obtaining the results in an easy to comprehend output format. Database URL: http://targetmine.mizuguchilab.org. © The Author(s) 2016. Published by Oxford University Press.

  10. VariVis: a visualisation toolkit for variation databases

    Directory of Open Access Journals (Sweden)

    Smith Timothy D

    2008-04-01

    Full Text Available Abstract Background With the completion of the Human Genome Project and recent advancements in mutation detection technologies, the volume of data available on genetic variations has risen considerably. These data are stored in online variation databases and provide important clues to the cause of diseases and potential side effects or resistance to drugs. However, the data presentation techniques employed by most of these databases make them difficult to use and understand. Results Here we present a visualisation toolkit that can be employed by online variation databases to generate graphical models of gene sequence with corresponding variations and their consequences. The VariVis software package can run on any web server capable of executing Perl CGI scripts and can interface with numerous Database Management Systems and "flat-file" data files. VariVis produces two easily understandable graphical depictions of any gene sequence and matches these with variant data. While developed with the goal of improving the utility of human variation databases, the VariVis package can be used in any variation database to enhance utilisation of, and access to, critical information.

  11. Envirofacts Data Warehouse

    Science.gov (United States)

    The Envirofacts Data Warehouse contains information from selected EPA environmental program office databases and provides access to information about environmental activities that may affect air, water, and land anywhere in the United States. The Envirofacts Warehouse supports its own web-enabled tools as well as a host of other EPA applications.

  12. FaceWarehouse: a 3D facial expression database for visual computing.

    Science.gov (United States)

    Cao, Chen; Weng, Yanlin; Zhou, Shun; Tong, Yiying; Zhou, Kun

    2014-03-01

    We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.
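    The bilinear model admits a compact formulation as a tensor contraction; a sketch in the notation commonly used for such models (symbols assumed here, not quoted from the paper): the reduced core tensor C_r is contracted along its identity and expression modes to generate a mesh.

    ```latex
    % A new face mesh V is generated by contracting the reduced core tensor C_r
    % with an identity weight vector w_id (mode 2) and an expression weight
    % vector w_exp (mode 3):
    V = C_r \times_2 \mathbf{w}_{\mathrm{id}}^{\top} \times_3 \mathbf{w}_{\mathrm{exp}}^{\top}
    ```

    Fixing the identity weights and varying the expression weights animates one person through the expression space, which is what the retargeting applications exploit.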

  13. Design database for quantitative trait loci (QTL) data warehouse, data mining, and meta-analysis.

    Science.gov (United States)

    Hu, Zhi-Liang; Reecy, James M; Wu, Xiao-Lin

    2012-01-01

    A database can be used to warehouse quantitative trait loci (QTL) data from multiple sources for comparison, genomic data mining, and meta-analysis. A robust database design involves sound data structure logistics, meaningful data transformations, normalization, and proper user interface designs. This chapter starts with a brief review of relational database basics and concentrates on issues associated with curation of QTL data into a relational database, with emphasis on the principles of data normalization and structure optimization. In addition, some simple examples of QTL data mining and meta-analysis are included. These examples are provided to help readers better understand the potential and importance of sound database design.

  14. Metadata to Support Data Warehouse Evolution

    Science.gov (United States)

    Solodovnikova, Darja

    The focus of this chapter is the metadata necessary to support data warehouse evolution. We present a data warehouse framework that is able to track the evolution process and adapt data warehouse schemata and data extraction, transformation, and loading (ETL) processes. We discuss the significant part of the framework, the metadata repository that stores information about the data warehouse, the logical and physical schemata, and their versions. We propose a physical implementation of a multiversion data warehouse in a relational DBMS. For each modification of a data warehouse schema, we outline the changes that need to be made to the repository metadata and in the database.
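    A minimal sketch of what such a metadata repository could look like relationally; the table and column names are assumptions for illustration, not the author's schema:

    ```sql
    -- One row per data warehouse schema version.
    CREATE TABLE schema_version (
      version_id  INTEGER PRIMARY KEY,
      valid_from  DATE NOT NULL,
      valid_to    DATE,                    -- NULL marks the current version
      description VARCHAR(255)
    );

    -- One row per change applied in a version; the repository updates
    -- described above would insert rows here for each schema modification.
    CREATE TABLE schema_change (
      change_id   INTEGER PRIMARY KEY,
      version_id  INTEGER NOT NULL REFERENCES schema_version(version_id),
      change_type VARCHAR(32)  NOT NULL,   -- e.g. 'ADD_ATTRIBUTE', 'DROP_DIMENSION'
      object_name VARCHAR(128) NOT NULL,   -- affected table or column
      applied_at  TIMESTAMP    NOT NULL
    );
    ```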

  15. Envirofacts Data Warehouse

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Envirofacts Data Warehouse contains information from select EPA Environmental program office databases and provides access about environmental activities that...

  16. Microsoft Enterprise Consortium: A Resource for Teaching Data Warehouse, Business Intelligence and Database Management Systems

    Science.gov (United States)

    Kreie, Jennifer; Hashemi, Shohreh

    2012-01-01

    Data is a vital resource for businesses; therefore, it is important for businesses to manage and use their data effectively. Because of this, businesses value college graduates with an understanding of and hands-on experience working with databases, data warehouses and data analysis theories and tools. Faculty in many business disciplines try to…

  17. TRUNCATULIX--a data warehouse for the legume community.

    Science.gov (United States)

    Henckel, Kolja; Runte, Kai J; Bekel, Thomas; Dondrup, Michael; Jakobi, Tobias; Küster, Helge; Goesmann, Alexander

    2009-02-11

    Databases for either sequence, annotation, or microarray experiment data are extremely beneficial to the research community, as they centrally gather information from experiments performed by different scientists. However, data from different sources develop their full capacities only when combined. The idea of a data warehouse directly addresses this problem and solves it by integrating all required data into one single database - hence there are already many data warehouses available to genetics. For the model legume Medicago truncatula, there is currently no such single data warehouse that integrates all freely available gene sequences, the corresponding gene expression data, and annotation information. Thus, we created the data warehouse TRUNCATULIX, an integrative database of Medicago truncatula sequence and expression data. The TRUNCATULIX data warehouse integrates five public databases of gene sequences and gene annotations, as well as a database of microarray expression data covering raw data, normalized datasets, and complete expression profiling experiments. It can be accessed via an AJAX-based web interface using a standard web browser. For the first time, users can now quickly search for specific genes and gene expression data in a huge database based on high-quality annotations. The results can be exported as Excel, HTML, or CSV files for further usage. The integration of sequence, annotation, and gene expression data from several Medicago truncatula databases in TRUNCATULIX provides the legume community with access to data and data mining capability not previously available. TRUNCATULIX is freely available at http://www.cebitec.uni-bielefeld.de/truncatulix/.

  18. TRUNCATULIX – a data warehouse for the legume community

    Directory of Open Access Journals (Sweden)

    Runte Kai J

    2009-02-01

    Full Text Available Abstract Background Databases for either sequence, annotation, or microarray experiment data are extremely beneficial to the research community, as they centrally gather information from experiments performed by different scientists. However, data from different sources develop their full capacities only when combined. The idea of a data warehouse directly addresses this problem and solves it by integrating all required data into one single database – hence there are already many data warehouses available to genetics. For the model legume Medicago truncatula, there is currently no such single data warehouse that integrates all freely available gene sequences, the corresponding gene expression data, and annotation information. Thus, we created the data warehouse TRUNCATULIX, an integrative database of Medicago truncatula sequence and expression data. Results The TRUNCATULIX data warehouse integrates five public databases of gene sequences and gene annotations, as well as a database of microarray expression data covering raw data, normalized datasets, and complete expression profiling experiments. It can be accessed via an AJAX-based web interface using a standard web browser. For the first time, users can now quickly search for specific genes and gene expression data in a huge database based on high-quality annotations. The results can be exported as Excel, HTML, or CSV files for further usage. Conclusion The integration of sequence, annotation, and gene expression data from several Medicago truncatula databases in TRUNCATULIX provides the legume community with access to data and data mining capability not previously available. TRUNCATULIX is freely available at http://www.cebitec.uni-bielefeld.de/truncatulix/.

  19. Development of a medical informatics data warehouse.

    Science.gov (United States)

    Wu, Cai

    2006-01-01

    This project built a medical informatics data warehouse (MedInfo DDW) in an Oracle database to analyze medical information collected through the Baylor Family Medicine Clinic (FCM) Logician application. The MedInfo DDW used a star schema with a dimensional model and the FCM database as the operational data store (ODS); the data from on-line transaction processing (OLTP) were extracted and transferred to a knowledge-based data warehouse through SQLLoad, and patient information was analyzed using on-line analytical processing (OLAP) in Crystal Reports.
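    The star schema mentioned above, rendered as a minimal hypothetical sketch (the MedInfo DDW's actual tables are not published in this record): one fact table joined to dimension tables by surrogate keys.

    ```sql
    CREATE TABLE dim_patient (
      patient_key INTEGER PRIMARY KEY,
      sex         CHAR(1),
      birth_year  INTEGER
    );

    CREATE TABLE dim_diagnosis (
      diagnosis_key INTEGER PRIMARY KEY,
      icd_code      VARCHAR(10),
      description   VARCHAR(255)
    );

    CREATE TABLE dim_date (
      date_key  INTEGER PRIMARY KEY,
      full_date DATE,
      cal_year  INTEGER,
      cal_month INTEGER
    );

    -- The fact table: one row per patient encounter, keyed to the dimensions.
    CREATE TABLE fact_encounter (
      patient_key    INTEGER REFERENCES dim_patient(patient_key),
      diagnosis_key  INTEGER REFERENCES dim_diagnosis(diagnosis_key),
      date_key       INTEGER REFERENCES dim_date(date_key),
      encounter_cost DECIMAL(12,2)
    );
    ```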

  20. A clinician friendly data warehouse oriented toward narrative reports: Dr. Warehouse.

    Science.gov (United States)

    Garcelon, Nicolas; Neuraz, Antoine; Salomon, Rémi; Faour, Hassan; Benoit, Vincent; Delapalme, Arthur; Munnich, Arnold; Burgun, Anita; Rance, Bastien

    2018-04-01

    Clinical data warehouses are often oriented toward integration and exploration of coded data. However narrative reports are of crucial importance for translational research. This paper describes Dr. Warehouse®, an open source data warehouse oriented toward clinical narrative reports and designed to support clinicians' day-to-day use. Dr. Warehouse relies on an original database model to focus on documents in addition to facts. Besides classical querying functionalities, the system provides an advanced search engine and Graphical User Interfaces adapted to the exploration of text. Dr. Warehouse is dedicated to translational research with cohort recruitment capabilities, high throughput phenotyping and patient centric views (including similarity metrics among patients). These features leverage Natural Language Processing based on the extraction of UMLS® concepts, as well as negation and family history detection. A survey conducted after 6 months of use at the Necker Children's Hospital shows a high rate of satisfaction among the users (96.6%). During this period, 122 users performed 2837 queries, accessed 4,267 patients' records and included 36,632 patients in 131 cohorts. The source code is available at this github link https://github.com/imagine-bdd/DRWH. A demonstration based on PubMed abstracts is available at https://imagine-plateforme-bdd.fr/dwh_pubmed/. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
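    A cohort-recruitment query over a document-oriented model like the one described can be sketched as follows; the table and column names (document, document_concept, negated, family_history) are assumptions for illustration, not Dr. Warehouse's actual schema:

    ```sql
    -- Recruit patients whose narrative reports mention a UMLS concept,
    -- excluding negated and family-history mentions.
    SELECT DISTINCT d.patient_id
    FROM document d
    JOIN document_concept dc ON dc.document_id = d.document_id
    WHERE dc.umls_cui = 'C0027051'   -- example concept: myocardial infarction
      AND dc.negated = 0             -- skip "no evidence of ..." mentions
      AND dc.family_history = 0;     -- skip "father had ..." mentions
    ```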

  1. Model Data Warehouse dan Business Intelligence untuk Meningkatkan Penjualan pada PT. S

    Directory of Open Access Journals (Sweden)

    Rudy Rudy

    2011-06-01

    Full Text Available Today a lot of companies use information systems in every business activity. Every transaction is stored electronically in a transactional database. The transactional database does little to assist executives in making strategic decisions to improve the company's competitiveness. The objective of this research is to analyze the operational database system and the information needed by management in order to design a data warehouse model that fits the executives' information needs at PT. S. The research method uses Ralph Kimball's nine-step data warehouse design methodology. The result is a data warehouse featuring business intelligence applications that display historical data in tables, graphs, pivot tables, and dashboards, with several points of view for management. This research concludes that a data warehouse which combines multiple transactional databases with a business intelligence application can help executives understand the reports in order to accelerate decision-making processes.

  2. XWeB: The XML Warehouse Benchmark

    Science.gov (United States)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure the feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and features XML-specific structures, together with its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  3. The Microsoft Data Warehouse Toolkit With SQL Server 2008 R2 and the Microsoft Business Intelligence Toolset

    CERN Document Server

    Mundy, Joy; Kimball, Ralph

    2011-01-01

    Best practices and invaluable advice from world-renowned data warehouse experts. In this book, leading data warehouse experts from the Kimball Group share best practices for using the upcoming "Business Intelligence release" of SQL Server, referred to as SQL Server 2008 R2. In this new edition, the authors explain how SQL Server 2008 R2 provides a collection of powerful new tools that extend the power of its BI toolset to Excel and SharePoint users, and they show how to use SQL Server to build a successful data warehouse that supports the business intelligence requirements that are common to most...

  4. Databases Are Not Toasters: A Framework for Comparing Data Warehouse Appliances

    Science.gov (United States)

    Trajman, Omer; Crolotte, Alain; Steinhoff, David; Nambiar, Raghunath Othayoth; Poess, Meikel

    The success of Business Intelligence (BI) applications depends on two factors, the ability to analyze data ever more quickly and the ability to handle ever increasing volumes of data. Data Warehouse (DW) and Data Mart (DM) installations that support BI applications have historically been built using traditional architectures either designed from the ground up or based on customized reference system designs. The advent of Data Warehouse Appliances (DA) brings packaged software and hardware solutions that address performance and scalability requirements for certain market segments. The differences between DAs and custom installations make direct comparisons between them impractical and suggest the need for a targeted DA benchmark. In this paper we review data warehouse appliances by surveying thirteen products offered today. We assess the common characteristics among them and propose a classification for DA offerings. We hope our results will help define a useful benchmark for DAs.

  5. Data Warehouse Emissieregistratie. A new tool on the way to sustainability; Data Warehouse Emissieregistratie. Een nieuw instrument op weg naar duurzaamheid

    Energy Technology Data Exchange (ETDEWEB)

    Van Grootveld, G. [VROM-Inspectie, Den Haag (Netherlands)]; Op den Kamp, A. [OpdenKamp Adviesgroep, Den Haag (Netherlands)]

    2002-12-01

    An overview is given of the possibilities to use and search the title database, which contains data on emissions from pollution sources in different sectors in the Netherlands. This publication illustrates the power of the Data Warehouse by means of seven examples in chapters 3 through 9, each also offering an outlook on sustainable development. Chapter 10 treats two cases with a short manual. Chapter 1 provides background information on the environmental policy chain and the place monitoring takes within it. Chapter 2 briefly describes the three dimensions of the Data Warehouse and the possibilities it offers (www.emissieregistratie.nl).

  6. Usage of data warehouse for analysing software's bugs

    Science.gov (United States)

    Živanov, Danijel; Krstićev, Danijela Boberić; Mirković, Duško

    2017-07-01

    We analysed the database schema of the Bugzilla system and, taking into account users' requirements for reporting, presented a dimensional model for the data warehouse which will be used for reporting software defects. The idea proposed in this paper is not to throw away the Bugzilla system, because it certainly has many strengths, but to integrate Bugzilla and the proposed data warehouse. Bugzilla would continue to be used for recording bugs that occur during the development and maintenance of software, while the data warehouse would be used for storing data on bugs in an appropriate form, which is more suitable for analysis.
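    One report such a dimensional model could serve, sketched against hypothetical fact and dimension tables (not Bugzilla's operational schema): defects counted per product, component, and month.

    ```sql
    SELECT dp.product_name,
           dc.component_name,
           dd.cal_year,
           dd.cal_month,
           COUNT(*) AS bugs_reported
    FROM fact_bug fb
    JOIN dim_product   dp ON dp.product_key   = fb.product_key
    JOIN dim_component dc ON dc.component_key = fb.component_key
    JOIN dim_date      dd ON dd.date_key      = fb.reported_date_key
    GROUP BY dp.product_name, dc.component_name, dd.cal_year, dd.cal_month
    ORDER BY dd.cal_year, dd.cal_month;
    ```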

  7. PEMAHAMAN TEORI DATA WAREHOUSE BAGI MAHASISWA TAHUN AWAL JENJANG STRATA SATU BIDANG ILMU KOMPUTER

    Directory of Open Access Journals (Sweden)

    Harco Leslie Hendric Spits Warnars

    2015-01-01

    Full Text Available As computer scientists, computer science students should have an understanding of database theory as a concept of data maintenance. Databases are needed in virtually every real-life computer implementation, such as information systems, information technology, the internet, games, artificial intelligence, robots and so on. Inevitably, the right data handling and management will produce excellent technology implementations. Data warehousing, one of the specialization subjects offered in the final semesters of computer science study programs, presents a challenge for computer science students. A survey was conducted on 18 early-year students of the computer science study program at Surya University, testing the hypothesis that students who had heard of data warehouses would be interested in learning about them, while students who had never heard of data warehouses would not. It is therefore important that lecturers understand how to deliver the data warehouse subject material, so that students can understand the data warehouse well.

  8. Automated Creation of Datamarts from a Clinical Data Warehouse, Driven by an Active Metadata Repository

    Science.gov (United States)

    Rogerson, Charles L.; Kohlmiller, Paul H.; Stutman, Harris

    1998-01-01

    A methodology and toolkit are described which enable the automated metadata-driven creation of datamarts from clinical data warehouses. The software uses schema-to-schema transformation driven by an active metadata repository. Tools for assessing datamart data quality are described, as well as methods for assessing the feasibility of implementing specific datamarts. A methodology for data remediation and the re-engineering of operational data capture is described.

  9. A Relevance-Extended Multi-dimensional Model for a Data Warehouse Contextualized with Documents

    DEFF Research Database (Denmark)

    Perez, Juan Manuel; Pedersen, Torben Bach; Berlanga, Rafael

    2005-01-01

    Current data warehouse and OLAP technologies can be applied to analyze the structured data that companies store in their databases. The circumstances that describe the context associated with these data can be found in other internal and external sources of documents. In this paper we propose to combine the traditional corporate data warehouse with a document warehouse, resulting in a contextualized warehouse. Thus, contextualized warehouses keep a historical record of the facts and their contexts as described by the documents. In this framework, the user selects an analysis context which...

  10. ¿Why Data warehouse & Business Intelligence at Universidad Simon Bolivar?

    Directory of Open Access Journals (Sweden)

    Kamagate Azoumana

    2013-01-01

    Full Text Available Abstract The data warehouse is supposed to provide storage, functionality and responsiveness to queries beyond the capabilities of today's transactional databases. A data warehouse is also built to improve the data access performance of databases.

  11. [Establishment of data warehouse of needling and moxibustion literature based on data mining].

    Science.gov (United States)

    Wang, Jian-Ling; Li, Ren-Ling; Jia, Chun-Sheng

    2012-02-01

    In order to explore the efficacy specificity and valuable rules of the clinical application of needling and moxibustion methods in a large quantity of information from the literature, a data warehouse needs to be established. On the basis of the original databases of red-hot needle therapy and hydro-acupuncture therapy, and the newly established databases of acupoint catgut embedding therapy, acupoint application therapy, etc., and in accordance with the characteristics of different types of needling-moxibustion literature information, databases on different subjects were established first. These subject databases constitute a general "literature data warehouse on needling-moxibustion methods" composed of multiple subjects and multiple dimensions, so as to discover useful regularities about clinical treatment and trials collected in the literature by using data mining techniques. In the present paper, the authors introduce the design of the data warehouse, the determination of subjects, the establishment of subject relations, the application of the administration platform, and the application of data. This data warehouse will provide a standard data representation mode, enlarge data attributes and create extensive data links among literature information in the network, and may bring considerable convenience and benefit in clinical decision making and scientific research about needling-moxibustion techniques.

  12. PERANCANGAN DAN IMPLEMENTASI DATA WAREHOUSE METEOROLOGI, KLIMATOLOGI, GEOFISIKA DAN BENCANA ALAM

    Directory of Open Access Journals (Sweden)

    Agus Safril

    2014-05-01

    Full Text Available BMKG holds data originating from several historical (legacy) database systems, stored both in database information systems and in worksheets. These legacy data are often abandoned when a new database system is developed. So that the legacy data can still be used, the old and new data must be integrated. The data warehouse is the concept used to integrate data into BMKG's unified database storage system. Data integration is carried out by extracting the required data items from the source systems of the meteorology, climatology, and geophysics groups. The integration process starts with extraction, followed by transformation into the format required for analysis, and ends with loading into the data warehouse. The prototype data warehouse covers data input through extraction of both legacy and new data using data acquisition software, and its output consists of data reports for any requested period.
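    The extract-transform-load flow described above can be sketched as a single INSERT ... SELECT; the staging and warehouse table names here are illustrative assumptions, not BMKG's schema:

    ```sql
    -- Extract rows from a staging copy of a legacy source, transform them to
    -- the warehouse conventions, and load them into the warehouse table.
    INSERT INTO warehouse_daily_climate (station_id, obs_date, temp_c, rain_mm)
    SELECT s.station_code,                 -- extract
           CAST(s.obs_date AS DATE),       -- transform: normalize date format
           (s.temp_f - 32) * 5.0 / 9.0,    -- transform: unify units to Celsius
           COALESCE(s.rain_mm, 0)          -- transform: default missing rainfall
    FROM staging_legacy_weather s
    WHERE s.obs_date IS NOT NULL;          -- filter unusable legacy rows
    ```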

  13. Structuring warehouse management: Exploring the fit between warehouse characteristics and warehouse planning and control structure, and its effect on warehouse performance

    NARCIS (Netherlands)

    N. Faber (Nynke)

    2015-01-01

    This dissertation studies the management processes that plan, control, and optimize warehouse operations. The inventory in warehouses decouples supply from demand. As such, economies of scale can be achieved in production, purchasing, and transport. As warehouses become more and more...

  14. Warehouses information system design and development

    Science.gov (United States)

    Darajatun, R. A.; Sukanta

    2017-12-01

    Materials and goods handling is fundamental for companies to ensure the smooth running of their warehouses. Efficiency and organization within every aspect of the business are essential in order to gain a competitive advantage. The purpose of this research is the design and development of a Kanban-based inventory storage and delivery system. The application aims to make inventory stock checks more efficient and effective. Users can easily record finished goods from the production department, warehouse, customers, and suppliers. The master data is designed to be as complete as possible so that the application can handle a variety of warehouse logistics processes. The author uses the Java programming language to develop the application, which is used for building Java web applications, while the database used is MySQL. The system development methodology used is the Waterfall methodology, which has several stages: analysis, system design, implementation, integration, and operation and maintenance. In the data collection process the author uses observation, interviews, and literature review.

  15. Electronic warehouse receipts registry as a step from paper to electronic warehouse receipts

    Directory of Open Access Journals (Sweden)

    Kovačević Vlado

    2016-01-01

    Full Text Available The aim of this paper is to determine the economic viability of introducing an electronic warehouse receipt registry as a step toward electronic warehouse receipts. Both forms of warehouse receipt, paper and electronic, exist in practice, but paper warehouse receipts are more widespread. In this paper, the dematerialization process is analyzed in two steps. The first step is the dematerialization of the warehouse receipt registry, with warehouse receipts still in paper form. The second step is the introduction of electronic warehouse receipts themselves. Dematerialization of warehouse receipts is more complex than that of financial securities because of the individual characteristics of each warehouse receipt. As a consequence, electronic warehouse receipts are in place for only a handful of commodities, namely cotton and a few grains. Nevertheless, the movement toward electronic warehouse receipts, which began several decades ago with financial securities, is now taking hold in the agricultural sector. This paper analyzes the Serbian electronic registry, since Serbia is the first country in Europe with an electronic warehouse receipt registry, donated by FAO. The analysis shows the considerable impact of establishing an electronic warehouse receipt registry on enhancing the security of the public warehouse system and on advancing trade in warehouse receipts.

  16. [Perianesthetic Anaphylactic Shocks: Contribution of a Clinical Data Warehouse].

    Science.gov (United States)

    Osmont, Marie-Noëlle; Campillo-Gimenez, Boris; Metayer, Lucie; Jantzem, Hélène; Rochefort-Morel, Cécile; Cuggia, Marc; Polard, Elisabeth

    2015-10-16

    To evaluate the performance of the collection of cases of anaphylactic shock during anesthesia by the Regional Pharmacovigilance Center of Rennes, and the contribution of a query in the biomedical data warehouse of the University Hospital of Rennes (France), in 2009. Different sources were evaluated: the French pharmacovigilance database (including spontaneous reports and reports from a query in the database of the programme de médicalisation des systèmes d'information [PMSI]), the records of patients seen in allergo-anesthesia (a source considered as comprehensive as possible), and a query in the data warehouse. Analysis of the allergo-anesthesia records detected all cases identified by the other methods, as well as two additional cases (nine cases in total). The query in the data warehouse enabled detection of seven of the nine cases. Querying full-text reports and structured data extracted from the hospital information system improves the detection of anaphylaxis during anesthesia and facilitates access to data. © 2015 Société Française de Pharmacologie et de Thérapeutique.

  17. Perancangan Data Warehouse Nilai Mahasiswa Dengan Kimball Nine-Step Methodology

    Directory of Open Access Journals (Sweden)

    Ganda Wijaya

    2017-04-01

    Abstract Student grades have many components that can be analyzed to support decision making. On this basis, the authors conducted a study of student grades. The study was conducted on a database in the Bureau of Academic and Student Affairs Administration of Bina Sarana Informatika (BAAK BSI). The focus of this research is: how can a data warehouse be modeled to meet management's need for student grade data in support of evaluation, planning and decision making? A data warehouse of student grades needs to be built in order to obtain information and reports and to perform multidimensional analysis, which in turn can assist management in making policy. The system was developed using the System Development Life Cycle (SDLC) with a Waterfall approach, while the data warehouse was designed using Kimball's nine-step methodology. The results take the form of a star schema and a student-grade data warehouse. The data warehouse can provide summary information that is fast, accurate and continuous, so as to assist management in making policies for the future. In general, this research serves as an additional reference for building a data warehouse using Kimball's nine-step methodology. Keywords: Data Warehouse, Kimball Nine-Step Methodology.

  18. Study and application of data mining and data warehouse in CIMS

    Science.gov (United States)

    Zhou, Lijuan; Liu, Chi; Liu, Daxin

    2003-03-01

    The interest in analyzing data has grown tremendously in recent years. To analyze data, a multitude of technologies is needed, namely technologies from the fields of data warehousing, data mining, and On-line Analytical Processing (OLAP). This paper gives a new architecture for a data warehouse in CIMS based on the CRGC-CIMS application engineering project. The data source of this architecture is the database of the CRGC-CIMS system. The data are placed in a global data set by extraction, filtering and integration, and are then loaded into the data warehouse according to information requests. We describe two advantages of the new model in the CRGC-CIMS application. In addition, a data warehouse contains many materialized views over the data provided by distributed heterogeneous databases for the purpose of efficiently implementing decision support, OLAP queries, or data mining. It is important to select the right views to materialize to answer a given set of queries. In this paper, we have also designed algorithms for selecting a set of views to be materialized in a data warehouse in order to answer the most queries under a given space constraint. First, we give a cost model for selecting materialized views. Then we present algorithms that adopt a gradually recursive, bottom-up method. We give a description and realization of the algorithms. Finally, we discuss the advantages and shortcomings of our approach and future work.
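    A concrete instance of a view such a selection algorithm might choose to materialize, in Oracle syntax (names are illustrative; the CRGC-CIMS schema is not given in this record):

    ```sql
    -- Pre-aggregated summary answering frequent month-level OLAP queries
    -- without touching the detailed fact table.
    CREATE MATERIALIZED VIEW mv_sales_by_month
    BUILD IMMEDIATE
    REFRESH COMPLETE ON DEMAND
    AS
    SELECT product_id,
           TRUNC(sale_date, 'MM') AS sale_month,
           SUM(amount) AS total_amount,
           COUNT(*)    AS n_sales
    FROM sales_fact
    GROUP BY product_id, TRUNC(sale_date, 'MM');
    ```

    The space constraint in the paper's formulation amounts to choosing the subset of such candidate views whose combined size fits a storage budget while maximizing the number of queries they can answer.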

  19. Data warehouse for assessing animal health, welfare, risk management and -communication.

    Science.gov (United States)

    Nielsen, Annette Cleveland

    2011-01-01

    The objective of this paper is to give an overview of existing databases in Denmark and describe some of the most important of these in relation to the establishment of the Danish Veterinary and Food Administration's veterinary data warehouse. The purpose of the data warehouse and possible uses of the data are described. Finally, sharing of data and validity of data are discussed. There are databases in other countries describing animal husbandry and veterinary antimicrobial consumption, but Denmark will be the first country to relate all data concerning animal husbandry, health and welfare in Danish production animals to each other in a data warehouse. Moreover, creating access to these data for researchers and authorities will hopefully result in easier and more substantial risk-based control, risk management and risk communication by the authorities, and access to data for researchers for epidemiological studies in animal health and welfare.

  20. Developing a standardized healthcare cost data warehouse.

    Science.gov (United States)

    Visscher, Sue L; Naessens, James M; Yawn, Barbara P; Reinalda, Megan S; Anderson, Stephanie S; Borah, Bijan J

    2017-06-12

    Research addressing value in healthcare requires a measure of cost. While there are many sources and types of cost data, each has strengths and weaknesses. Many researchers appear to create study-specific cost datasets, but the explanations of their costing methodologies are not always clear, causing their results to be difficult to interpret. Our solution, described in this paper, was to use widely accepted costing methodologies to create a service-level, standardized healthcare cost data warehouse from an institutional perspective that includes all professional and hospital-billed services for our patients. The warehouse is based on a National Institutes of Health-funded research infrastructure containing the linked health records and medical care administrative data of two healthcare providers and their affiliated hospitals. Since all patients are identified in the data warehouse, their costs can be linked to other systems and databases, such as electronic health records, tumor registries, and disease or treatment registries. We describe the two institutions' administrative source data; the reference files, which include Medicare fee schedules and cost reports; the process of creating standardized costs; and the warehouse structure. The costing algorithm can create inflation-adjusted standardized costs at the service line level for defined study cohorts on request. The resulting standardized costs contained in the data warehouse can be used to create detailed, bottom-up analyses of professional and facility costs of procedures, medical conditions, and patient care cycles without revealing business-sensitive information. After its creation, a standardized cost data warehouse is relatively easy to maintain and can be expanded to include data from other providers. Individual investigators who may not have sufficient knowledge about administrative data do not have to try to create their own standardized costs on a project-by-project basis because our data...
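    The standardized-costing step can be sketched as a join of billed services against a fee-schedule reference file; all names here are illustrative assumptions, not the warehouse's actual tables:

    ```sql
    -- Attach a fee-schedule-based standardized cost to each billed service,
    -- matching on procedure code and the fee schedule's validity period.
    SELECT b.patient_id,
           b.service_date,
           b.hcpcs_code,
           b.units,
           b.units * f.medicare_fee AS standardized_cost
    FROM billed_service b
    JOIN fee_schedule f
      ON  f.hcpcs_code = b.hcpcs_code
      AND b.service_date BETWEEN f.effective_from AND f.effective_to;
    ```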

  1. Data Warehouse on the Web for Accelerator Fabrication And Maintenance

    International Nuclear Information System (INIS)

    Chan, A.; Crane, G.; Macgregor, I.; Meyer, S.

    2011-01-01

    A data warehouse grew out of the need for a view of accelerator information from a lab-wide or project-wide standpoint (often needing off-site data access for the multi-lab PEP-II collaborators). A World Wide Web interface is used to link the legacy database systems of the various labs and departments related to the PEP-II Accelerator. In this paper, we describe how links are made via the 'Formal Device Name' field(s) in the disparate databases. We also describe the functionality of a data warehouse in an accelerator environment. One can pick devices from the PEP-II Component List and find the actual components filling the functional slots, any calibration measurements, fabrication history, associated cables and modules, and operational maintenance records for the components. Information on inventory, drawings, publications, and purchasing history is also part of the PEP-II Database. A strategy of relying on a small team, and of linking existing databases rather than rebuilding systems, is outlined.

  2. Conceptual Data Warehouse Structures

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    1998-01-01

    ...changing information needs. We show how the event-entity-relationship model (EVER) can be used for schema design and query formulation in data warehouses. Our work is based on a layered data warehouse architecture in which a global data warehouse is used for flexible long-term organization and storage of all warehouse data, whereas local data warehouses are used for efficient query formulation and answering. In order to support flexible modeling of global warehouses we use a flexible version of EVER for global schema design. In order to support efficient query formulation in local data warehouses we...

  3. Securing Document Warehouses against Brute Force Query Attacks

    Directory of Open Access Journals (Sweden)

    Sergey Vladimirovich Zapechnikov

    2017-04-01

    Full Text Available The paper presents a data management scheme and protocols for securing a document collection against adversarial users who try to abuse their access rights to find out the full content of confidential documents. The configuration of the secure document retrieval system is described and a suite of protocols among the clients, warehouse server, audit server and database management server is specified. The scheme makes it infeasible for clients to establish correspondence between the documents relevant to different search queries until a moderator grants access to these documents. The proposed solution ensures a higher security level for document warehouses.

  4. Automation in Warehouse Development

    CERN Document Server

    Verriet, Jacques

    2012-01-01

    The warehouses of the future will come in a variety of forms, but with a few common ingredients. Firstly, human operational handling of items in warehouses is increasingly being replaced by automated item handling. Extended warehouse automation counteracts the scarcity of human operators and supports the quality of picking processes. Secondly, the development of models to simulate and analyse warehouse designs and their components facilitates the challenging task of developing warehouses that take into account each customer’s individual requirements and logistic processes. Automation in Warehouse Development addresses both types of automation from the innovative perspective of applied science. In particular, it describes the outcomes of the Falcon project, a joint endeavour by a consortium of industrial and academic partners. The results include a model-based approach to automate warehouse control design, analysis models for warehouse design, concepts for robotic item handling and computer vision, and auton...

  5. Warehouse Logistics

    OpenAIRE

    Panibratetc, Anastasiia

    2015-01-01

    This research is a review of warehouse logistics, using the example of Kannustalo Oy, located in Kannus, in the Western region of Finland. Kannustalo is an international company that designs, manufactures and assembles block and turn-key houses. The research subject is the logistics process in the warehouse system of an industrial company. In my work I discussed the theoretical aspects of logistics, logistic functions and processes. Later I considered the warehouse as a part of the logistics system and provided inf...

  6. WEB APPLICATION TO MANAGE DOCUMENTS USING THE GOOGLE WEB TOOLKIT AND APP ENGINE TECHNOLOGIES

    Directory of Open Access Journals (Sweden)

    Velázquez Santana Eugenio César

    2017-12-01

    Full Text Available The application of new information technologies such as Google Web Toolkit and App Engine is making a difference in the academic management of Higher Education Institutions (HEIs), which seek to streamline their processes as well as reduce infrastructure costs. However, they encounter problems with regard to acquisition costs, the infrastructure necessary for their use, and the maintenance of the software; it is for this reason that the present research aims to describe the application of these new technologies in HEIs, as well as to identify their advantages and disadvantages and the key success factors in their implementation. SCRUM was used as the software development methodology, with PMBOK as the project management tool. The main results relate to the application of these technologies in the development of customized software for teachers, students and administrators, as well as the weaknesses and strengths of using them in the cloud. On the other hand, it was also possible to describe the paradigm shift that data warehouses are generating with respect to today's relational databases.

  7. The design and application of data warehouse during modern enterprises environment

    Science.gov (United States)

    Zhou, Lijuan; Liu, Chi; Wang, Chunying

    2006-04-01

    The interest in analyzing data has grown tremendously in recent years. Analyzing data requires a multitude of technologies, namely those from the fields of data warehousing, data mining, and on-line analytical processing (OLAP). This paper proposes a system structure model of the data warehouse for the modern enterprise environment according to the information demands of enterprises and the actual demands of users, analyses the benefits of this kind of model in practical application, and describes the process of setting up the data warehouse model. It also proposes an overall design plan for the data warehouses of modern enterprises. The data warehouse built in this practical application offers high query performance, data efficiency, and independence of logical and physical data. In addition, a data warehouse contains many materialized views over the data provided by distributed heterogeneous databases, for the purpose of efficiently implementing decision support, OLAP queries, or data mining. One of the most important decisions in designing a data warehouse is the selection of the right views to be materialized. This paper also designs algorithms for selecting a set of views to be materialized in a data warehouse. First, we give the algorithms for selecting materialized views; then we use experiments to demonstrate the power of our approach. The results show the proposed algorithm delivers an optimal solution. Finally, we discuss the advantages and shortcomings of our approach and future work.
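
    A minimal Python sketch of the greedy flavour of materialized view selection, assuming each query can be answered by any view in its candidate list at a cost equal to that view's size; the paper's own algorithm and cost model may differ, and the sizes below are made up.

        # Greedy selection of k views to materialize. Each query is answered by
        # the smallest materialized candidate view, or else by the base tables.
        def total_cost(materialized, queries, view_size, base_cost):
            cost = 0.0
            for q, candidates in queries.items():
                usable = [view_size[v] for v in candidates if v in materialized]
                cost += min(usable) if usable else base_cost
            return cost

        def greedy_select(k, queries, view_size, base_cost):
            chosen = set()
            for _ in range(k):
                best, best_cost = None, total_cost(chosen, queries, view_size, base_cost)
                for v in view_size:
                    if v in chosen:
                        continue
                    c = total_cost(chosen | {v}, queries, view_size, base_cost)
                    if c < best_cost:
                        best, best_cost = v, c
                if best is None:
                    break  # no remaining view reduces the total cost
                chosen.add(best)
            return chosen

        view_size = {"sales_by_day": 5e6, "sales_by_month": 2e5, "sales_by_region": 1e5}
        queries = {"q_daily": ["sales_by_day"],
                   "q_monthly": ["sales_by_day", "sales_by_month"],
                   "q_region": ["sales_by_region"]}
        print(greedy_select(2, queries, view_size, base_cost=1e8))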

  8. Warehouse Sanitation Workshop Handbook.

    Science.gov (United States)

    Food and Drug Administration (DHHS/PHS), Washington, DC.

    This workshop handbook contains information and reference materials on proper food warehouse sanitation. The materials have been used at Food and Drug Administration (FDA) food warehouse sanitation workshops, and are selected by the FDA for use by food warehouse operators and for training warehouse sanitation employees. The handbook is divided…

  9. PERANCANGAN SISTEM METADATA UNTUK DATA WAREHOUSE DENGAN STUDI KASUS REVENUE TRACKING PADA PT. TELKOM DIVRE V JAWA TIMUR

    Directory of Open Access Journals (Sweden)

    Yudhi Purwananto

    2004-07-01

    Full Text Available A data warehouse is a company's data storage medium whose data are drawn from various systems and can be used for various purposes such as analysis and reporting. At PT Telkom Divre V Jawa Timur, a data warehouse called the Regional Database has been built. The Regional Database requires an important data warehouse component, namely metadata. A simple definition of metadata is "data about data". In this research, a metadata system is designed with Revenue Tracking as a case study, serving as the analysis and reporting component of the Regional Database. Metadata is essential for managing and providing information about the data warehouse. The processes within the data warehouse and the components related to it must be mutually integrated to realize the data warehouse characteristics of being subject-oriented, integrated, time-variant, and non-volatile. Metadata must therefore also be able to exchange information between the components of the data warehouse. A web service is used as this exchange mechanism; web services communicate using XML technology and the HTTP protocol. With a web service, each component...

  10. PERANCANGAN DATA WAREHOUSE UNTUK MENDUKUNG PERENCANAAN PEMASARAN PERGURUAN TINGGI

    Directory of Open Access Journals (Sweden)

    agung prasetyo

    2017-02-01

    Full Text Available One indication of a large university is the number of students at that institution. Hence, new students are one of the resources that determine the running of a university. Every year STMIK AMIKOM Purwokerto admits prospective students, and the data on these new students are very useful for the marketing department as information for evaluating subsequent marketing activities. By building a data warehouse and an OLAP application, using Pentaho Data Integration/Kettle as the ETL tool and Pentaho Workbench as the Online Analytical Processing (OLAP) database processor, the management of STMIK AMIKOM Purwokerto can obtain information such as the number of applicants per period/wave, per school of origin, per source of information reaching prospective students, and the trend of interest in the programs chosen by prospective new students. The data warehouse is able to analyze transaction data, provide dynamic reports, and provide information in various dimensions about new student admissions at STMIK AMIKOM Purwokerto. Keywords: Data Warehouse, OLAP, Pentaho, Student Admissions.

  11. Automation in Warehouse Development

    NARCIS (Netherlands)

    Hamberg, R.; Verriet, J.

    2012-01-01

    The warehouses of the future will come in a variety of forms, but with a few common ingredients. Firstly, human operational handling of items in warehouses is increasingly being replaced by automated item handling. Extended warehouse automation counteracts the scarcity of human operators and

  12. Efficient data management tools for the heterogeneous big data warehouse

    Science.gov (United States)

    Alekseev, A. A.; Osipova, V. V.; Ivanov, M. A.; Klimentov, A.; Grigorieva, N. V.; Nalamwar, H. S.

    2016-09-01

    Traditional RDBMSs have served normalized data structures well for decades, but the technology is not optimal for data processing and analysis in data-intensive fields like social networks, the oil and gas industry, experiments at the Large Hadron Collider, etc. Several challenges have been raised recently regarding the scalability of data-warehouse-like workloads against the transactional schema, in particular for the analysis of archived data or the aggregation of data for summary and accounting purposes. The paper evaluates new database technologies like HBase, Cassandra, and MongoDB, commonly referred to as NoSQL databases, for handling messy, varied and large amounts of data. The evaluation considers the performance, throughput and scalability of the above technologies for several scientific and industrial use cases. This paper outlines the technologies and architectures needed for processing Big Data, as well as describing the back-end application that implements data migration from an RDBMS to a NoSQL data warehouse, the NoSQL database organization, and how it could be useful for further data analytics.
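
    A minimal Python sketch of the RDBMS-to-NoSQL migration step, assuming a SQLite source file and a local MongoDB instance; the database, table and collection names are hypothetical.

        import sqlite3
        from pymongo import MongoClient

        src = sqlite3.connect("warehouse.db")  # hypothetical relational source
        src.row_factory = sqlite3.Row

        # One document per archived row; column names become document fields.
        docs = [dict(row) for row in src.execute("SELECT * FROM archived_jobs")]

        client = MongoClient("mongodb://localhost:27017")
        collection = client["warehouse"]["archived_jobs"]
        if docs:
            collection.insert_many(docs)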

  13. Pengembangan Data Warehouse Menggunakan Pendekatan Data-Driven untuk Membantu Pengelolaan SDM

    Directory of Open Access Journals (Sweden)

    Mujiono Mujiono

    2016-01-01

    Full Text Available The basis of bureaucratic reform is the reform of human resource management, and one supporting factor is the development of an employee database. Supporting human resource management requires, among other things, a data warehouse and business intelligence tools. The data warehouse is a concept of integrated, reliable data storage that provides support for all data analysis needs. In this study a data warehouse was developed using the data-driven approach, with source data coming from SIMPEG, SAPK and electronic presence records. The data warehouse is designed using the nine-step methodology and unified modeling language (UML) notation. Extract-transform-load (ETL) is done using Pentaho Data Integration by applying transformation maps. Furthermore, to help human resource management, the system performs online analytical processing (OLAP) to deliver web-based information. This study produced a BI application development framework with a Model-View-Controller (MVC) architecture, with OLAP operations built using dynamic query generation, PivotTable, and HighChart to present information about PNS, CPNS, retirement, Kenpa and presence.

  14. A Framework for Designing a Healthcare Outcome Data Warehouse

    Science.gov (United States)

    Parmanto, Bambang; Scotch, Matthew; Ahmad, Sjarif

    2005-01-01

    Many healthcare processes involve a series of patient visits or a series of outcomes. The modeling of outcomes associated with these types of healthcare processes is different from and not as well understood as the modeling of standard industry environments. For this reason, the typical multidimensional data warehouse designs that are frequently seen in other industries are often not a good match for data obtained from healthcare processes. Dimensional modeling is a data warehouse design technique that uses a data structure similar to the easily understood entity-relationship (ER) model but is sophisticated in that it supports high-performance data access. In the context of rehabilitation services, we implemented a slight variation of the dimensional modeling technique to make a data warehouse more appropriate for healthcare. One of the key aspects of designing a healthcare data warehouse is finding the right grain (scope) for different levels of analysis. We propose three levels of grain that enable the analysis of healthcare outcomes from highly summarized reports on episodes of care to fine-grained studies of progress from one treatment visit to the next. These grains allow the database to support multiple levels of analysis, which is imperative for healthcare decision making. PMID:18066371

  15. A framework for designing a healthcare outcome data warehouse.

    Science.gov (United States)

    Parmanto, Bambang; Scotch, Matthew; Ahmad, Sjarif

    2005-09-06

    Many healthcare processes involve a series of patient visits or a series of outcomes. The modeling of outcomes associated with these types of healthcare processes is different from and not as well understood as the modeling of standard industry environments. For this reason, the typical multidimensional data warehouse designs that are frequently seen in other industries are often not a good match for data obtained from healthcare processes. Dimensional modeling is a data warehouse design technique that uses a data structure similar to the easily understood entity-relationship (ER) model but is sophisticated in that it supports high-performance data access. In the context of rehabilitation services, we implemented a slight variation of the dimensional modeling technique to make a data warehouse more appropriate for healthcare. One of the key aspects of designing a healthcare data warehouse is finding the right grain (scope) for different levels of analysis. We propose three levels of grain that enable the analysis of healthcare outcomes from highly summarized reports on episodes of care to fine-grained studies of progress from one treatment visit to the next. These grains allow the database to support multiple levels of analysis, which is imperative for healthcare decision making.
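
    A minimal sketch, via Python's sqlite3, of fact tables at the three grains discussed above (episode summaries, individual visits, and visit-to-visit progress); the column names are illustrative, not taken from the papers.

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
        CREATE TABLE dim_patient (
            patient_id INTEGER PRIMARY KEY, sex TEXT, birth_year INTEGER);
        -- Coarsest grain: one row per episode of care (summary outcome)
        CREATE TABLE fact_episode (
            episode_id INTEGER PRIMARY KEY,
            patient_id INTEGER REFERENCES dim_patient,
            admit_score REAL, discharge_score REAL);
        -- Finest grain: one row per treatment visit
        CREATE TABLE fact_visit (
            visit_id INTEGER PRIMARY KEY,
            episode_id INTEGER REFERENCES fact_episode,
            visit_date TEXT, outcome_score REAL);
        -- Intermediate grain: progress between consecutive visits
        CREATE TABLE fact_progress (
            episode_id INTEGER REFERENCES fact_episode,
            from_visit INTEGER REFERENCES fact_visit,
            to_visit INTEGER REFERENCES fact_visit,
            score_delta REAL);
        """)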

  16. DICOM Data Warehouse: Part 2.

    Science.gov (United States)

    Langer, Steve G

    2016-06-01

    In 2010, the DICOM Data Warehouse (DDW) was launched as a data warehouse for DICOM meta-data. Its chief design goals were to have a flexible database schema that enabled it to index standard patient and study information and modality-specific tags (public and private), and to create a framework to derive computable information (derived tags) from the former items. Furthermore, it was to map the above information to an internally standard lexicon that enables a non-DICOM-savvy programmer to write standard SQL queries and retrieve the equivalent data from a cohort of scanners, regardless of which tag that data element was found in over the changing epochs of DICOM and the ensuing migration of elements from private to public tags. After 5 years, the original design has scaled astonishingly well. Very little has changed in the database schema. The knowledge base is now fluent in over 90 device types. Also, additional stored procedures have been written to compute data that is derivable from standard or mapped tags. Finally, an early concern, that the system would not be able to address the variability of DICOM-SR objects, has been addressed. As of this writing the system is indexing 300 MR, 600 CT, and 2000 other (XA, DR, CR, MG) imaging studies per day. The only remaining issue to be solved is the case of tags that were not prospectively indexed; and indeed, this final challenge may lead to a noSQL, big data, approach in a subsequent version.
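
    A toy Python sketch of the tag-to-lexicon mapping idea: the same logical element may live in different DICOM tags across vendors and epochs, and a mapping table lets a non-DICOM-savvy user query one standard name. The vendor names and the private tag below are made up.

        # (device_type, DICOM tag) -> internal standard name
        LEXICON = {
            ("MR_vendorA", (0x0018, 0x0087)): "magnetic_field_strength",  # public tag
            ("MR_vendorB", (0x0019, 0x100F)): "magnetic_field_strength",  # private tag (made up)
        }

        def standardize(device_type, dataset):
            """Map raw {tag: value} pairs from one device to the internal lexicon."""
            out = {}
            for tag, value in dataset.items():
                name = LEXICON.get((device_type, tag))
                if name:
                    out[name] = value
            return out

        print(standardize("MR_vendorB", {(0x0019, 0x100F): 3.0}))
        # -> {'magnetic_field_strength': 3.0}, queryable without DICOM knowledge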

  17. Análisis de rendimiento académico estudiantil usando data warehouse y redes neuronales Analysis of students' academic performance using data warehouse and neural networks

    Directory of Open Access Journals (Sweden)

    Carolina Zambrano Matamala

    2011-12-01

    Full Text Available Every day organizations hold more information because their systems produce a large number of daily operations that are stored in transactional databases. An interesting alternative for analyzing this historical information is to implement a data warehouse. On the other hand, data warehouses cannot perform predictive analysis by themselves, but machine learning techniques can be used to classify, group and predict based on historical information in order to improve the quality of analysis. This paper describes a data warehouse architecture for analyzing students' academic performance. The data warehouse is used as the input of a neural network architecture in order to analyze historical information and trends over time. The results show the viability of using a data warehouse for academic performance analysis and the feasibility of predicting the number of courses passed by students using only their own historical information.

  18. Warehouse location and freight attraction in the greater El Paso region.

    Science.gov (United States)

    2013-12-01

    This project analyzes current and future warehouse and distribution center locations in the El Paso-Juarez region along the U.S.-Mexico border. The research has developed a comprehensive database to aid in the decision support process for ide...

  19. Perancangan Model Data Warehouse dan Perangkat Analitik untuk Memaksimalkan Proses Pemasaran Hotel: Studi Kasus pada Hotel Abc

    Directory of Open Access Journals (Sweden)

    Eka Miranda

    2013-06-01

    Full Text Available The increasing competition in the hotel business forces every hotel to be equipped with analysis tools that can maximize its marketing performance. This paper discusses the development of a data warehouse model and analytic tools to enhance the company's competitive advantage through the utilization of the variety of data, information and knowledge held by the company as raw material in the decision-making process. A study was done at ABC Hotel, which uses a database to store transactional records. However, the database cannot be directly used to support analysis and decision-making processes. Based on this issue, the company needs a data warehouse model and analytic tools that can be used to store large amounts of data and potentially to gain new perspectives on data distribution, allowing it to provide reporting, answer ad hoc user questions, and assist managers in making decisions. Furthermore, the data warehouse model and analytic tools can be used to help managers formulate plans and marketing strategies. Data were collected through interviews and a literature study, followed by data analysis to examine business processes and to identify the problems and the information needed to support the analysis process. The data warehouse was then designed using an analysis of records related to activities in the hotel's marketing area and a data warehouse model. The results of this paper are a data warehouse model and analytic tools for analyzing external and transactional data and supporting the decision-making process in the marketing area.

  20. Intelligent environmental data warehouse

    International Nuclear Information System (INIS)

    Ekechukwu, B.

    1998-01-01

    Quick and effective decisions in environmental management are based on multiple, complex parameters, and a data warehouse is a powerful tool for the overall management of massive amounts of environmental information. Selecting the right data from a warehouse is an important consideration for end-users. This paper proposes an intelligent environmental data warehouse system. It consists of a data warehouse that feeds environmental researchers and managers with the environmental information needed for their research studies and decisions, in the form of geometric and attribute data for the study area, and metadata for the other sources of environmental information. In addition, the proposed intelligent search engine works according to a set of rules, which enables the system to be aware of the environmental data wanted by the end-user. The system development process passes through four stages: data preparation, warehouse development, intelligent engine development and internet platform development. (author)

  1. Terrain-Toolkit

    DEFF Research Database (Denmark)

    Wang, Qi; Kaul, Manohar; Long, Cheng

    2014-01-01

    Terrain data is becoming increasingly popular both in industry and in academia. Many tools have been developed for visualizing terrain data. However, we find that (1) they usually accept very few data formats of terrain data only; (2) they do not support terrain simplification well, which, as will be shown, is used heavily for query processing in spatial databases; and (3) they do not provide the surface distance operator, which is fundamental for many applications based on terrain data. Motivated by this, we developed a tool called Terrain-Toolkit for terrain data which accepts a comprehensive set...

  2. Web-enabled Data Warehouse and Data Webhouse

    Directory of Open Access Journals (Sweden)

    Cerasela PIRVU

    2008-01-01

    Full Text Available In this paper, our objectives are to understand what a data warehouse means, examine the reasons for building one, appreciate the implications of the convergence of Web technologies with those of the data warehouse, and examine the steps for building a Web-enabled data warehouse. The web revolution has propelled the data warehouse onto the main stage, because in many situations the data warehouse must be the engine that controls or analyzes the web experience. In order to step up to this new responsibility, the data warehouse must adjust; the nature of the data warehouse needs to be somewhat different. As a result, our data warehouses are becoming data webhouses. The data warehouse is becoming the infrastructure that supports customer relationship management (CRM), and it is being asked to make the customer clickstream available for analysis. This rebirth of data warehousing architecture is called the data webhouse.

  3. Contextualizing Data Warehouses with Documents

    DEFF Research Database (Denmark)

    Perez, Juan Manuel; Berlanga, Rafael; Aramburu, Maria Jose

    2008-01-01

    This paper proposes to combine a corporate data warehouse with a document warehouse, resulting in a contextualized warehouse. Thus, the user first selects an analysis context by supplying some keywords. Then, the analysis is performed on a novel type of OLAP cube, called an R-cube, which is materialized by retrieving and ranking the documents...

  4. Study on resources and environmental data integration towards data warehouse construction covering trans-boundary area of China, Russia and Mongolia

    Science.gov (United States)

    Wang, J.; Song, J.; Gao, M.; Zhu, L.

    2014-02-01

    The trans-boundary area between northern China, Mongolia and eastern Siberia in Russia is a continuous geographical area located in north-eastern Asia. Many common issues in this region need to be addressed based on a uniform resources and environmental data warehouse. Based on the practice of a joint scientific expedition, the paper presents a data integration solution comprising 3 steps: drawing up data collection standards and specifications, data reorganization and processing, and data warehouse design and development. A series of data collection standards and specifications covering more than 10 domains was drawn up first. According to this uniform standard, 20 regional-scale resources and environmental survey databases and 11 in-situ observation databases were reorganized and integrated. The North East Asia Resources and Environmental Data Warehouse was designed with 4 layers: a resources layer, a core business logic layer, an internet interoperation layer, and a web portal layer. The data warehouse prototype was developed and initially deployed. All the integrated data for this area can be accessed online.

  5. Study on resources and environmental data integration towards data warehouse construction covering trans-boundary area of China, Russia and Mongolia

    International Nuclear Information System (INIS)

    Wang, J; Song, J; Gao, M; Zhu, L

    2014-01-01

    The trans-boundary area between northern China, Mongolia and eastern Siberia in Russia is a continuous geographical area located in north-eastern Asia. Many common issues in this region need to be addressed based on a uniform resources and environmental data warehouse. Based on the practice of a joint scientific expedition, the paper presents a data integration solution comprising 3 steps: drawing up data collection standards and specifications, data reorganization and processing, and data warehouse design and development. A series of data collection standards and specifications covering more than 10 domains was drawn up first. According to this uniform standard, 20 regional-scale resources and environmental survey databases and 11 in-situ observation databases were reorganized and integrated. The North East Asia Resources and Environmental Data Warehouse was designed with 4 layers: a resources layer, a core business logic layer, an internet interoperation layer, and a web portal layer. The data warehouse prototype was developed and initially deployed. All the integrated data for this area can be accessed online.

  6. Fire detection in warehouse facilities

    CERN Document Server

    Dinaburg, Joshua

    2013-01-01

    Automatic sprinkler systems are the primary fire protection system in warehouse and storage facilities. The effectiveness of this strategy has come into question due to the challenges presented by modern warehouse facilities, including increased storage heights and areas, automated storage and retrieval systems (ASRS), limitations on water supplies, and changes in firefighting strategies. The application of fire detection devices to provide early warning and notification of incipient warehouse fire events is being considered as a component of modern warehouse fire protection. Fire Detection i...

  7. Building a Data Warehouse.

    Science.gov (United States)

    Levine, Elliott

    2002-01-01

    Describes how to build a data warehouse, using the Schools Interoperability Framework (www.sifinfo.org), that supports data-driven decision making and complies with the Freedom of Information Act. Provides several suggestions for building and maintaining a data warehouse. (PKP)

  8. Clinical research data warehouse governance for distributed research networks in the USA: a systematic review of the literature.

    Science.gov (United States)

    Holmes, John H; Elliott, Thomas E; Brown, Jeffrey S; Raebel, Marsha A; Davidson, Arthur; Nelson, Andrew F; Chung, Annie; La Chance, Pierre; Steiner, John F

    2014-01-01

    To review the published, peer-reviewed literature on clinical research data warehouse governance in distributed research networks (DRNs). Medline, PubMed, EMBASE, CINAHL, and INSPEC were searched for relevant documents published through July 31, 2013 using a systematic approach. Only documents relating to DRNs in the USA were included. Documents were analyzed using a classification framework consisting of 10 facets to identify themes. 6641 documents were retrieved. After screening for duplicates and relevance, 38 were included in the final review. A peer-reviewed literature on data warehouse governance is emerging, but is still sparse. Peer-reviewed publications on UK research network governance were more prevalent, although not reviewed for this analysis. All 10 classification facets were used, with some documents falling into two or more classifications. No document addressed costs associated with governance. Even though DRNs are emerging as vehicles for research and public health surveillance, understanding of DRN data governance policies and procedures is limited. This is expected to change as more DRN projects disseminate their governance approaches as publicly available toolkits and peer-reviewed publications. While peer-reviewed, US-based DRN data warehouse governance publications have increased, DRN developers and administrators are encouraged to publish information about these programs.

  9. Modelling of Data Warehouse on Food Distribution Center and Reserves in the Ministry of Agriculture

    Directory of Open Access Journals (Sweden)

    Edi Purnomo Putra

    2015-09-01

    Full Text Available The purpose of this study is to plan a database that supports a prototype data warehouse in the Ministry of Agriculture, especially in the Distribution Center and Reserves, in the fields of distribution, reserves and prices. With the data warehouse prototype, data analysis and decision-making by top management will be easier and more accurate. The research methods used were data collection and design methods. The data warehouse design was carried out using Kimball's nine-step methodology, and the database design using an entity relationship diagram (ERD) and activity diagram. The data used for the analysis were obtained from an interview with the head of Distribution, Reserves and Food Prices. The results obtained through the analysis were incorporated into the data warehouse prototype designed to support decision-making. To conclude, the data warehouse prototype facilitates data analysis, the searching of historical data, and decision-making by top management.

  10. Implementasi Data Warehouse dan Data Mining: Studi Kasus Analisis Peminatan Studi Siswa

    Directory of Open Access Journals (Sweden)

    Eka Miranda

    2011-06-01

    Full Text Available This paper discusses the implementation of data mining and its role in supporting decision-making related to students' selection of a specialization program. Currently, the university uses a database to store transaction records, which cannot directly be used to assist analysis and decision making. To address this, a data warehouse was designed to store large amounts of data, with the potential to provide new perspectives on data distribution and the ability to answer ad hoc questions and perform data analysis. The method consists of analyzing records related to students' academic achievement, and designing the data warehouse and data mining. The results are a data warehouse and data mining design and its implementation with classification techniques and association rules. From these results, the tendencies and background patterns of students in choosing their specialization can be seen, helping them make decisions.
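
    A small Python sketch of the association-rule side of such an analysis, mining one-antecedent rules (background attribute -> chosen specialization) with support and confidence over made-up records:

        from collections import Counter

        records = [
            {"school": "science", "major": "informatics"},
            {"school": "science", "major": "informatics"},
            {"school": "social",  "major": "information_systems"},
            {"school": "science", "major": "information_systems"},
        ]

        n = len(records)
        pair_counts = Counter((r["school"], r["major"]) for r in records)
        antecedent_counts = Counter(r["school"] for r in records)

        # Keep rules "school -> major" passing minimum support and confidence.
        for (school, major), c in pair_counts.items():
            support = c / n
            confidence = c / antecedent_counts[school]
            if support >= 0.25 and confidence >= 0.5:
                print(f"{school} -> {major} "
                      f"(support={support:.2f}, confidence={confidence:.2f})")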

  11. Development of prostate cancer research database with the clinical data warehouse technology for direct linkage with electronic medical record system.

    Science.gov (United States)

    Choi, In Young; Park, Seungho; Park, Bumjoon; Chung, Byung Ha; Kim, Choung-Soo; Lee, Hyun Moo; Byun, Seok-Soo; Lee, Ji Youl

    2013-01-01

    In spite of the increasing number of prostate cancer patients, little is known about the impact of treatments for these patients or the outcomes of different treatments based on nationwide data. In order to obtain more comprehensive information on Korean prostate cancer patients, many professionals have urged the creation of a national system to monitor the quality of prostate cancer care. To achieve this objective, the prostate cancer database system was planned and carefully accommodated different views from various professions. This prostate cancer research database system incorporates information about prostate cancer research, including demographics, medical history, operation information, laboratory results, and quality-of-life surveys. The system includes three different ways of collecting clinical data to produce a comprehensive database: direct data extraction from the electronic medical record (EMR) system, manual data entry after linking EMR documents such as magnetic resonance imaging findings, and paper-based data collection for patient surveys. We implemented clinical data warehouse technology to test the direct EMR link method with the St. Mary's Hospital system. Using this method, the total number of eligible patients was 2,300 from 1997 until 2012. Among them, 538 patients underwent surgery and the others received different treatments. Our database system could provide the infrastructure for collecting error-free data to support various retrospective and prospective studies.

  12. Creation of Warehouse Models for Different Layout Designs

    OpenAIRE

    Köhler, Mirko; Lukić, Ivica; Nenadić, Krešimir

    2014-01-01

    The warehouse is one of the most important components in the logistics of the supply chain network. The efficiency of warehouse operations is influenced by many different factors, one of the key ones being the rack layout configuration. A warehouse with a good rack layout may significantly reduce the cost of warehouse servicing. The objective of this paper is to give a scheme for building warehouse models with one-block and two-block layouts for future research in warehouse optimization. An algorithm ...

  13. University Accreditation using Data Warehouse

    Science.gov (United States)

    Sinaga, A. S.; Girsang, A. S.

    2017-01-01

    Accreditation aims at assuring the quality of an institution's education. The institution needs comprehensive documents to provide accurate information before being reviewed by an assessor. Therefore, academic documents should be stored effectively to ease fulfilling the requirements of accreditation. However, the data are generally derived from various sources and of various types, unstructured and dispersed. This paper proposes designing a data warehouse to integrate all these various data in order to prepare good academic documents for the accreditation of a university. The data warehouse is built using the nine steps introduced by Kimball. This method is applied to produce a data warehouse based on the accreditation assessment, focusing on the academic part. The data warehouse demonstrates that it can analyse the data to prepare the accreditation assessment documents.

  14. Building the Readiness Data Warehouse

    National Research Council Canada - National Science Library

    Tysor, Sue

    2000-01-01

    .... This is the role of the data warehouse. The data warehouse will deliver business intelligence based on operational data, decision support data and external data to all business units in the organization...

  15. Evaluating a healthcare data warehouse for cancer diseases

    OpenAIRE

    Sheta, Dr. Osama E.; Eldeen, Ahmed Nour

    2013-01-01

    This paper presents an evaluation of the architecture of a healthcare data warehouse specific to cancer diseases. This data warehouse contains relevant cancer medical information and patient data, and provides the source for all current and historical health data to help executive managers and doctors improve the decision-making process for cancer patients. An evaluation model based on Bill Inmon's definition of a data warehouse is proposed to evaluate the cancer data warehouse.

  16. Event-Entity-Relationship Modeling in Data Warehouse Environments

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    We use the event-entity-relationship model (EVER) to illustrate the use of entity-based modeling languages for conceptual schema design in data warehouse environments. EVER is a general-purpose information modeling language that supports the specification of both general schema structures and multi-dimensional schemas that are customized to serve specific information needs. EVER is based on an event concept that is very well suited for multi-dimensional modeling because measurement data often represent events in multi-dimensional databases...

  17. Atlas – a data warehouse for integrative bioinformatics

    Directory of Open Access Journals (Sweden)

    Yuen Macaire MS

    2005-02-01

    Full Text Available Abstract Background We present a biological data warehouse called Atlas that locally stores and integrates biological sequences, molecular interactions, homology information, functional annotations of genes, and biological ontologies. The goal of the system is to provide data, as well as a software infrastructure for bioinformatics research and development. Description The Atlas system is based on relational data models that we developed for each of the source data types. Data stored within these relational models are managed through Structured Query Language (SQL calls that are implemented in a set of Application Programming Interfaces (APIs. The APIs include three languages: C++, Java, and Perl. The methods in these API libraries are used to construct a set of loader applications, which parse and load the source datasets into the Atlas database, and a set of toolbox applications which facilitate data retrieval. Atlas stores and integrates local instances of GenBank, RefSeq, UniProt, Human Protein Reference Database (HPRD, Biomolecular Interaction Network Database (BIND, Database of Interacting Proteins (DIP, Molecular Interactions Database (MINT, IntAct, NCBI Taxonomy, Gene Ontology (GO, Online Mendelian Inheritance in Man (OMIM, LocusLink, Entrez Gene and HomoloGene. The retrieval APIs and toolbox applications are critical components that offer end-users flexible, easy, integrated access to this data. We present use cases that use Atlas to integrate these sources for genome annotation, inference of molecular interactions across species, and gene-disease associations. Conclusion The Atlas biological data warehouse serves as data infrastructure for bioinformatics research and development. It forms the backbone of the research activities in our laboratory and facilitates the integration of disparate, heterogeneous biological sources of data enabling new scientific inferences. Atlas achieves integration of diverse data sets at two levels. First

  18. Commercial Building Energy Saver: An energy retrofit analysis toolkit

    International Nuclear Information System (INIS)

    Hong, Tianzhen; Piette, Mary Ann; Chen, Yixing; Lee, Sang Hoon; Taylor-Lange, Sarah C.; Zhang, Rongpeng; Sun, Kaiyu; Price, Phillip

    2015-01-01

    Highlights: • Commercial Building Energy Saver is a powerful toolkit for energy retrofit analysis. • CBES provides benchmarking, load shape analysis, and model-based retrofit assessment. • CBES covers 7 building types, 6 vintages, 16 climates, and 100 energy measures. • CBES includes a web app, API, and a database of energy efficiency performance. • CBES API can be extended and integrated with third party energy software tools. - Abstract: Small commercial buildings in the United States consume 47% of the total primary energy of the buildings sector. Retrofitting small and medium commercial buildings poses a huge challenge for owners because they usually lack the expertise and resources to identify and evaluate cost-effective energy retrofit strategies. This paper presents the Commercial Building Energy Saver (CBES), an energy retrofit analysis toolkit, which calculates the energy use of a building, identifies and evaluates retrofit measures in terms of energy savings, energy cost savings and payback. The CBES Toolkit includes a web app (APP) for end users and the CBES Application Programming Interface (API) for integrating CBES with other energy software tools. The toolkit provides a rich set of features including: (1) Energy Benchmarking providing an Energy Star score, (2) Load Shape Analysis to identify potential building operation improvements, (3) Preliminary Retrofit Analysis which uses a custom developed pre-simulated database and, (4) Detailed Retrofit Analysis which utilizes real-time EnergyPlus simulations. CBES includes 100 configurable energy conservation measures (ECMs) that encompass IAQ, technical performance and cost data, for assessing 7 different prototype buildings in 16 climate zones in California and 6 vintages. A case study of a small office building demonstrates the use of the toolkit for retrofit analysis. The development of CBES provides a new contribution to the field by providing a straightforward and uncomplicated decision

  19. 27 CFR 24.141 - Bonded wine warehouse.

    Science.gov (United States)

    2010-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Bonded wine warehouse. 24..., DEPARTMENT OF THE TREASURY LIQUORS WINE Establishment and Operations Permanent Discontinuance of Operations § 24.141 Bonded wine warehouse. Where all operations at a bonded wine warehouse are to be permanently...

  20. 7 CFR 735.302 - Paper warehouse receipts.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false Paper warehouse receipts. 735.302 Section 735.302... § 735.302 Paper warehouse receipts. Paper warehouse receipts must be issued as follows: (a) On distinctive paper specified by DACO; (b) Printed by a printer authorized by DACO; and (c) Issued, identified...

  1. Tribal Green Building Toolkit

    Science.gov (United States)

    This Tribal Green Building Toolkit (Toolkit) is designed to help tribal officials, community members, planners, developers, and architects develop and adopt building codes to support green building practices. Anyone can use this toolkit!

  2. Protection of warehouses and plants under capacity constraint

    International Nuclear Information System (INIS)

    Bricha, Naji; Nourelfath, Mustapha

    2015-01-01

    While warehouses may be subjected to less protection effort than plants, their unavailability may have a substantial impact on supply chain performance. This paper presents a method for protecting plants and warehouses against intentional attacks in the context of the capacitated plant and warehouse location and capacity acquisition problem. A non-cooperative two-period game is developed to find the equilibrium solution and the optimal defender strategy under capacity constraints. The defender invests in the first period to minimize the expected damage, and the attacker moves in the second period to maximize the expected damage. The extra capacity of neighboring functional plants and warehouses is used after attacks to satisfy all customer demand and avoid backorders. The contest success function is used to evaluate the success probability of an attack on plants and warehouses. A numerical example illustrates an application of the model. The defender strategy obtained by our model is compared to the case where warehouses are subjected to less protection effort than the plants; this comparison measures how much better our method is, and illustrates the effect of direct investments in protection, and of indirect protection by warehouse extra capacity, in reducing the expected damage. - Highlights: • Protection of warehouses and plants against intentional attacks. • Capacitated plant and warehouse location and capacity acquisition problem. • A non-cooperative two-period game between the defender and the attacker. • A method to evaluate the utilities and determine the optimal defender strategy. • Using warehouse extra-capacities to reduce the expected damage
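
    A minimal Python sketch of the contest success function commonly used in such defender-attacker models, v = T^m / (T^m + t^m), with attacker effort T, defender investment t and contest intensity m; the facilities and numbers below are illustrative, not the paper's data.

        def attack_success_prob(attacker_effort, defender_investment, intensity=1.0):
            """Contest success function v = T^m / (T^m + t^m)."""
            a = attacker_effort ** intensity
            d = defender_investment ** intensity
            return a / (a + d) if (a + d) > 0 else 0.0

        def expected_damage(targets):
            """targets: list of (attacker_effort, defender_investment, damage_if_lost)."""
            return sum(attack_success_prob(T, t) * loss for T, t, loss in targets)

        facilities = [(2.0, 5.0, 100.0),  # well-protected plant
                      (2.0, 1.0, 40.0)]   # lightly protected warehouse
        print(expected_damage(facilities))  # the defender chooses investments to minimize this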

  3. System and method for integrating and accessing multiple data sources within a data warehouse architecture

    Science.gov (United States)

    Musick, Charles R [Castro Valley, CA; Critchlow, Terence [Livermore, CA; Ganesh, Madhaven [San Jose, CA; Slezak, Tom [Livermore, CA; Fidelis, Krzysztof [Brentwood, CA

    2006-12-19

    A system and method is disclosed for integrating and accessing multiple data sources within a data warehouse architecture. The metadata formed by the present method provide a way to declaratively present domain specific knowledge, obtained by analyzing data sources, in a consistent and useable way. Four types of information are represented by the metadata: abstract concepts, databases, transformations and mappings. A mediator generator automatically generates data management computer code based on the metadata. The resulting code defines a translation library and a mediator class. The translation library provides a data representation for domain specific knowledge represented in a data warehouse, including "get" and "set" methods for attributes that call transformation methods and derive a value of an attribute if it is missing. The mediator class defines methods that take "distinguished" high-level objects as input and traverse their data structures and enter information into the data warehouse.
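
    A toy Python sketch of the "get"-with-derivation behaviour described above, in which a generated class returns a stored attribute when present and otherwise derives it from a registered transformation; the class and attribute names are hypothetical, not the patent's generated code.

        class TranslatedRecord:
            _derivations = {}  # attribute name -> function(record) -> value

            def __init__(self, **stored):
                self._stored = stored

            def get(self, name):
                if name in self._stored:
                    return self._stored[name]
                if name in self._derivations:
                    value = self._derivations[name](self)  # derive the missing value
                    self._stored[name] = value             # cache it, like a "set"
                    return value
                raise KeyError(name)

        # Example transformation: derive sequence length from the sequence itself.
        TranslatedRecord._derivations["seq_length"] = lambda r: len(r.get("sequence"))

        rec = TranslatedRecord(sequence="ATGGCC")
        print(rec.get("seq_length"))  # 6, derived because it was not stored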

  4. Data Warehouse Discovery Framework: The Foundation

    Science.gov (United States)

    Apanowicz, Cas

    The cost of building an Enterprise Data Warehouse Environment runs usually in millions of dollars and takes years to complete. The cost, as big as it is, is not the primary problem for a given corporation. The risk that all money allocated for planning, design and implementation of the Data Warehouse and Business Intelligence Environment may not bring the result expected, fare out way the cost of entire effort [2,10]. The combination of the two above factors is the main reason that Data Warehouse/Business Intelligence is often single most expensive and most risky IT endeavor for companies [13]. That situation was the main author's inspiration behind founding of Infobright Corp and later on the concept of Data Warehouse Discovery Framework.

  5. Application of XML in real-time data warehouse

    Science.gov (United States)

    Zhao, Yanhong; Wang, Beizhan; Liu, Lizhao; Ye, Su

    2009-07-01

    At present, XML is one of the most widely used technologies for describing and exchanging data, and the need for real-time data has made the real-time data warehouse a popular area of data warehouse research. What effects can we achieve if we apply XML technology to real-time data warehouse research? XML technology solves many technical problems that are impossible to address in a traditional real-time data warehouse, and realizes the integration of the OLAP (On-line Analytical Processing) and OLTP (On-line Transaction Processing) environments. The real-time data warehouse can then truly be called "real time".
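
    A minimal Python sketch of the data-describing and data-exchanging role of XML here: a fresh transaction is shipped as an XML message and parsed on the warehouse side with the standard library; the element names are made up.

        import xml.etree.ElementTree as ET

        message = """<transaction ts="2009-07-01T10:15:00">
          <store>17</store><sku>A-42</sku><qty>3</qty><amount>29.85</amount>
        </transaction>"""

        root = ET.fromstring(message)
        fact = {
            "ts": root.get("ts"),
            "store": int(root.findtext("store")),
            "sku": root.findtext("sku"),
            "qty": int(root.findtext("qty")),
            "amount": float(root.findtext("amount")),
        }
        print(fact)  # ready to be appended to the real-time partition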

  6. Provider perceptions of an integrated primary care quality improvement strategy: The PPAQ toolkit.

    Science.gov (United States)

    Beehler, Gregory P; Lilienthal, Kaitlin R

    2017-02-01

    The Primary Care Behavioral Health (PCBH) model of integrated primary care is challenging to implement with high fidelity. The Primary Care Behavioral Health Provider Adherence Questionnaire (PPAQ) was designed to assess provider adherence to essential model components and has recently been adapted into a quality improvement toolkit. The aim of this pilot project was to gather preliminary feedback on providers' perceptions of the acceptability and utility of the PPAQ toolkit for making beneficial practice changes. Twelve mental health providers working in Department of Veterans Affairs integrated primary care clinics participated in semistructured interviews to gather quantitative and qualitative data. Descriptive statistics and qualitative content analysis were used to analyze data. Providers identified several positive features of the PPAQ toolkit organization and structure that resulted in high ratings of acceptability, while also identifying several toolkit components in need of modification to improve usability. Toolkit content was considered highly representative of the (PCBH) model and therefore could be used as a diagnostic self-assessment of model adherence. The toolkit was considered to be high in applicability to providers regardless of their degree of prior professional preparation or current clinical setting. Additionally, providers identified several system-level contextual factors that could impact the usefulness of the toolkit. These findings suggest that frontline mental health providers working in (PCBH) settings may be receptive to using an adherence-focused toolkit for ongoing quality improvement.

  7. The impact of e-commerce on warehouse operations

    Directory of Open Access Journals (Sweden)

    Wiktor Żuchowski

    2016-03-01

    Full Text Available Background: We often encounter opinions concerning the unusual nature of warehouses used for the purposes of e-commerce, most often spread by providers of modern technological equipment and designers of such solutions. Of course, in the case of newly built facilities, it is advisable to consider innovative technologies, especially in terms of order picking. However, in many cases, the differences between "standard" warehouses, serving, for example, the vehicle spare parts market, and warehouses that are ready to handle retail orders placed electronically (defined as e-commerce) are negligible. The scale of the differences between existing "standard" warehouses and those adapted to handle e-commerce depends on the industry and on the structure of the customers served. Methods: On the basis of experience and examples from enterprises, two cases of the impact of a hypothetical e-commerce implementation on warehouse organization and technology have been analysed. Results: The introduction of e-commerce into warehouses entails corresponding changes to previously handled orders. Warehouses serving the retail market are in principle prepared to process electronic orders; in this case, the introduction of direct electronic sales is justified and feasible with relatively little effort. Conclusions: It cannot be said with certainty that the introduction of e-commerce in the warehouse is a revolution for its employees and managers. It depends on the markets in which the company operates, and on the customers served by the warehouse prior to the introduction of e-commerce.

  8. Warehouse order-picking process. Order-picker routing problem

    Directory of Open Access Journals (Sweden)

    E. V. Korobkov

    2015-01-01

    Full Text Available This article continues the “Warehouse order-picking process” cycle and describes the order-picker routing sub-problem of a warehouse order-picking process. It draws analogies between the order-picker routing problem and the traveling salesman problem, shows the differences between the standard traveling salesman problem statement and the routing problem of warehouse order-pickers, and gives the particular Steiner traveling salesman problem statement. The warehouse layout with a typical order is represented by a graph, with some of its vertices corresponding to mandatory order-picker visits and some other ones being non-compulsory. The paper describes the optimal Ratliff-Rosenthal algorithm for solving the order-picker routing problem for single-block warehouses, i.e. warehouses with only two crossing aisles, and defines seven equivalence classes of partial routing sub-graphs and five transitions used to obtain an optimal routing sub-graph of an order-picker. An extension of the optimal Ratliff-Rosenthal order-picker routing algorithm for multi-block warehouses is presented, and reasons for using routing heuristics instead of exact optimal algorithms are given. The paper offers algorithmic descriptions of the following seven routing heuristics: S-shaped, return, midpoint, largest gap, aisle-by-aisle, composite, and combined, as well as a modification of the combined heuristic. A comparison of order-picker routing heuristics for one- and two-block warehouses will be described in the next article of the “Warehouse order-picking process” cycle.
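
    A minimal Python sketch of the S-shaped heuristic named above, under the simplifying assumption that a route is just the ordered list of aisles traversed end to end with alternating direction:

        def s_shape_route(pick_aisles, n_aisles):
            """pick_aisles: indices (0..n_aisles-1) of aisles containing picks."""
            route, direction = [], "up"
            for aisle in range(n_aisles):
                if aisle in pick_aisles:
                    route.append((aisle, direction))  # traverse the aisle fully
                    direction = "down" if direction == "up" else "up"
            if direction == "down":     # an odd number of aisles was traversed,
                route.append("return")  # so come back along the front cross aisle
            return route

        print(s_shape_route({0, 2, 3}, n_aisles=5))
        # [(0, 'up'), (2, 'down'), (3, 'up'), 'return']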

  9. Congestion-Aware Warehouse Flow Analysis and Optimization

    KAUST Repository

    AlHalawani, Sawsan

    2015-12-18

    Generating realistic configurations of urban models is a vital part of the modeling process, especially if these models are used for evaluation and analysis. In this work, we address the problem of assigning objects to their storage locations inside a warehouse which has a great impact on the quality of operations within a warehouse. Existing storage policies aim to improve the efficiency by minimizing travel time or by classifying the items based on some features. We go beyond existing methods as we analyze warehouse layout network in an attempt to understand the factors that affect traffic within the warehouse. We use simulated annealing based sampling to assign items to their storage locations while reducing traffic congestion and enhancing the speed of order picking processes. The proposed method enables a range of applications including efficient storage assignment, warehouse reliability evaluation and traffic congestion estimation.

  10. Congestion-Aware Warehouse Flow Analysis and Optimization

    KAUST Repository

    AlHalawani, Sawsan; Mitra, Niloy J.

    2015-01-01

    Generating realistic configurations of urban models is a vital part of the modeling process, especially if these models are used for evaluation and analysis. In this work, we address the problem of assigning objects to their storage locations inside a warehouse which has a great impact on the quality of operations within a warehouse. Existing storage policies aim to improve the efficiency by minimizing travel time or by classifying the items based on some features. We go beyond existing methods as we analyze warehouse layout network in an attempt to understand the factors that affect traffic within the warehouse. We use simulated annealing based sampling to assign items to their storage locations while reducing traffic congestion and enhancing the speed of order picking processes. The proposed method enables a range of applications including efficient storage assignment, warehouse reliability evaluation and traffic congestion estimation.
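
    A minimal Python sketch of the simulated-annealing idea described above, with a toy congestion cost (squared per-aisle pick load); the cost model and data are illustrative, not the paper's.

        import math, random

        def congestion_cost(assignment, pick_freq, aisle_of):
            load = {}
            for item, slot in assignment.items():
                a = aisle_of[slot]
                load[a] = load.get(a, 0.0) + pick_freq[item]
            return sum(v * v for v in load.values())  # penalizes concentrated traffic

        def anneal(items, slots, pick_freq, aisle_of, steps=5000, t0=10.0):
            assignment = dict(zip(items, random.sample(slots, len(items))))
            cost = congestion_cost(assignment, pick_freq, aisle_of)
            for step in range(steps):
                t = t0 * (1 - step / steps) + 1e-9  # cooling schedule
                i, j = random.sample(items, 2)      # propose a slot swap
                assignment[i], assignment[j] = assignment[j], assignment[i]
                new_cost = congestion_cost(assignment, pick_freq, aisle_of)
                if new_cost > cost and random.random() > math.exp((cost - new_cost) / t):
                    assignment[i], assignment[j] = assignment[j], assignment[i]  # undo
                else:
                    cost = new_cost
            return assignment, cost

        items = ["a", "b", "c", "d"]
        slots = [0, 1, 2, 3]
        aisle_of = {0: 0, 1: 0, 2: 1, 3: 1}  # two aisles, two slots each
        pick_freq = {"a": 9.0, "b": 8.0, "c": 1.0, "d": 1.0}
        print(anneal(items, slots, pick_freq, aisle_of))  # frequent picks end up split across aisles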

  11. PEMODELAN INTEGRASI NEARLY REAL TIME DATA WAREHOUSE DENGAN SERVICE ORIENTED ARCHITECTURE UNTUK MENUNJANG SISTEM INFORMASI RETAIL

    Directory of Open Access Journals (Sweden)

    I Made Dwi Jendra Sulastra

    2015-12-01

    Full Text Available Traditionally, the data in a data warehouse is not updated with every transaction. Retail information systems, however, require the latest data, accessible from anywhere, for business analysis needs. Therefore, this study builds a data warehouse model that is able to produce information in near real time and can be accessed from anywhere by end-user applications. The design of the integration of a nearly real-time data warehouse (NRTDWH) with a service-oriented architecture (SOA) to support the retail information system is done in two stages. In the first stage, the NRTDWH is modeled using transaction-log-based Change Data Capture (CDC); in the second stage, the integration of the NRTDWH with an SOA-based web service is modeled. Tests were conducted with simulation test applications covering a retail information system and web-based, desktop, and mobile web service clients. The results of this study were: (1) log-based CDC ETL captures changes to the source tables and stores them in the NRTDWH database with the help of a scheduler; (2) the web service middleware exposes 6 services based on the data contained in the NRTDWH database, each of them accessible to and implemented by the web service clients.
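
    A minimal Python sketch of the first stage, transaction-log-based CDC, with SQLite standing in for both the source change log and the NRTDWH table; all table names are hypothetical.

        import sqlite3

        def pull_changes(source, warehouse):
            last = warehouse.execute(
                "SELECT COALESCE(MAX(seq), 0) FROM sales_nrt").fetchone()[0]
            rows = source.execute(
                "SELECT seq, sku, qty, amount FROM change_log WHERE seq > ? ORDER BY seq",
                (last,)).fetchall()
            warehouse.executemany("INSERT INTO sales_nrt VALUES (?, ?, ?, ?)", rows)
            warehouse.commit()
            return len(rows)  # a scheduler would call this every few seconds

        src = sqlite3.connect(":memory:")
        src.execute("CREATE TABLE change_log "
                    "(seq INTEGER PRIMARY KEY, sku TEXT, qty INTEGER, amount REAL)")
        src.executemany("INSERT INTO change_log VALUES (?, ?, ?, ?)",
                        [(1, "A", 2, 10.0), (2, "B", 1, 5.0)])
        wh = sqlite3.connect(":memory:")
        wh.execute("CREATE TABLE sales_nrt (seq INTEGER, sku TEXT, qty INTEGER, amount REAL)")
        print(pull_changes(src, wh))  # -> 2 new rows applied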

  12. CAZymes Analysis Toolkit (CAT): web service for searching and analyzing carbohydrate-active enzymes in a newly sequenced organism using CAZy database.

    Science.gov (United States)

    Park, Byung H; Karpinets, Tatiana V; Syed, Mustafa H; Leuze, Michael R; Uberbacher, Edward C

    2010-12-01

    The Carbohydrate-Active Enzyme (CAZy) database provides a rich set of manually annotated enzymes that degrade, modify, or create glycosidic bonds. Despite rich and invaluable information stored in the database, software tools utilizing this information for annotation of newly sequenced genomes by CAZy families are limited. We have employed two annotation approaches to fill the gap between manually curated high-quality protein sequences collected in the CAZy database and the growing number of other protein sequences produced by genome or metagenome sequencing projects. The first approach is based on a similarity search against the entire nonredundant sequences of the CAZy database. The second approach performs annotation using links or correspondences between the CAZy families and protein family domains. The links were discovered using the association rule learning algorithm applied to sequences from the CAZy database. The approaches complement each other and in combination achieved high specificity and sensitivity when cross-evaluated with the manually curated genomes of Clostridium thermocellum ATCC 27405 and Saccharophagus degradans 2-40. The capability of the proposed framework to predict the function of unknown protein domains and of hypothetical proteins in the genome of Neurospora crassa is demonstrated. The framework is implemented as a Web service, the CAZymes Analysis Toolkit, and is available at http://cricket.ornl.gov/cgi-bin/cat.cgi.
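
    A toy Python sketch of how the two annotation approaches can complement each other: families found by direct similarity hits are united with families implied by mined domain-to-family association rules. All identifiers below are placeholders.

        SIMILARITY_HITS = {"protein1": {"GH5"}}               # e.g. from a sequence search
        DOMAIN_RULES = {"PF00150": "GH5", "PF00734": "CBM1"}  # mined domain -> family rules

        def annotate(protein, domains):
            families = set(SIMILARITY_HITS.get(protein, set()))
            families |= {DOMAIN_RULES[d] for d in domains if d in DOMAIN_RULES}
            return families

        print(annotate("protein2", ["PF00734", "PF00001"]))  # {'CBM1'}, via rules only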

  13. Handling Imprecision in Qualitative Data Warehouse: Urban Building Sites Annoyance Analysis Use Case

    Science.gov (United States)

    Amanzougarene, F.; Chachoua, M.; Zeitouni, K.

    2013-05-01

    A data warehouse is a decision-support database allowing the integration, organization, historization, and management of data from heterogeneous sources, with the aim of exploiting them for decision-making. Data warehouses are essentially based on the multidimensional model. This model organizes data into facts (subjects of analysis) and dimensions (axes of analysis). In classical data warehouses, facts are composed of numerical measures and the dimensions that characterize them. Dimensions are organized into hierarchical levels of detail. Based on the navigation and aggregation mechanisms offered by OLAP (On-Line Analytical Processing) tools, facts can be analyzed at the desired level of detail. In real-world applications, facts are not always numerical and can be of a qualitative nature. In addition, sometimes a human expert or a learned model such as a decision tree provides a qualitative evaluation of a phenomenon based on its different parameters, i.e. dimensions. Conventional data warehouses are thus not adapted to qualitative reasoning and lack the ability to deal with qualitative data. In previous work, we proposed an original approach to qualitative data warehouse modeling, which permits integrating qualitative measures. Based on the computing-with-words methodology, we extended the classical multidimensional data model to allow the aggregation and analysis of qualitative data in an OLAP environment. We implemented this model in a Spatial Decision Support System to help managers of public spaces reduce annoyances and improve citizens' quality of life. In this paper, we focus our study on the representation and management of imprecision in the annoyance analysis process. The main objective of this process is to determine the least harmful scenario for urban building sites, particularly in dense urban environments.

  14. Designing a Clinical Data Warehouse Architecture to Support Quality Improvement Initiatives.

    Science.gov (United States)

    Chelico, John D; Wilcox, Adam B; Vawdrey, David K; Kuperman, Gilad J

    2016-01-01

    Clinical data warehouses, initially directed towards clinical research or financial analyses, are evolving to support quality improvement efforts, and must now address the quality improvement life cycle. In addition, the data needed for quality improvement often do not reside in a single database, requiring easier methods to query data across multiple disparate sources. We created a virtual data warehouse at NewYork-Presbyterian Hospital that allowed us to bring together data from several source systems throughout the organization. We also created a framework to match the maturity of a data request in the quality improvement life cycle to the proper tools needed for each request. As projects progress through the Define, Measure, Analyze, Improve, Control stages of quality improvement, resources are properly matched to the data needs at each step. We describe the analysis and design that created a robust model for applying clinical data warehousing to quality improvement.

  15. Outpatient health care statistics data warehouse--implementation.

    Science.gov (United States)

    Zilli, D

    1999-01-01

    Data warehouse implementation is assumed to be a very knowledge-demanding, expensive and long-lasting process. As such, it is expected to require senior management sponsorship, the involvement of experts, a big budget and probably years of development time. The Outpatient Health Care Statistics Data Warehouse implementation research presented here provides ample evidence against the infallibility of these assumptions. New, inexpensive but powerful technology, which provides an outstanding platform for On-Line Analytical Processing (OLAP), has emerged recently. Presumably, it will be the basis for the estimated future growth of the data warehouse market, both in the medical field and in other business fields. Methods and tools for building, maintaining and exploiting data warehouses are also briefly discussed in the paper.

  16. Practical computational toolkits for dendrimers and dendrons structure design

    Science.gov (United States)

    Martinho, Nuno; Silva, Liana C.; Florindo, Helena F.; Brocchini, Steve; Barata, Teresa; Zloh, Mire

    2017-09-01

    Dendrimers and dendrons offer an excellent platform for developing novel drug delivery systems and medicines. The rational design and further development of these repetitively branched systems are restricted by difficulties in scalable synthesis and structural determination, which can be overcome by judicious use of molecular modelling and molecular simulations. A major difficulty in utilising in silico studies to design dendrimers lies in the laborious generation of their structures. Current modelling tools rely on the automated assembly of simpler dendrimers or the inefficient manual assembly of monomer precursors to generate more complicated dendrimer structures. Herein we describe two novel graphical user interface toolkits, written in Python, that provide an improved degree of automation for the rapid assembly of dendrimers and the generation of their 2D and 3D structures. Our first toolkit uses the RDKit library, the SMILES nomenclature of monomers and SMARTS reaction nomenclature to generate SMILES and mol files of dendrimers without 3D coordinates. These files are used for simple graphical representations and for storing the structures in databases. The second toolkit assembles dendrimers of complex topology from monomers to construct 3D dendrimer structures to be used as starting points for simulation using existing and widely available software and force fields. Both tools were validated for ease of use in prototyping dendrimer structures, and the second toolkit was especially relevant for dendrimers of high complexity and size.
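    The published toolkits are not reproduced here, but the underlying RDKit mechanism they describe, growing a structure by applying a SMARTS-encoded reaction to monomer SMILES, can be sketched as follows (the amide-coupling SMARTS and the monomers are illustrative assumptions, not the published templates):

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# Illustrative amide coupling: carboxylic acid + primary amine -> amide.
rxn = AllChem.ReactionFromSmarts(
    "[C:1](=[O:2])[OH].[NX3;H2:3]>>[C:1](=[O:2])[N:3]"
)

core = Chem.MolFromSmiles("OC(=O)CCC(=O)O")  # diacid core
branch = Chem.MolFromSmiles("NCCN")          # diamine monomer

# One "generation" of growth: couple the monomer onto the core.
products = rxn.RunReactants((core, branch))
for (mol,) in products:
    Chem.SanitizeMol(mol)
    print(Chem.MolToSmiles(mol))
```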

  17. Energy Finance Data Warehouse Manual

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sangkeun [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Chinthavali, Supriya [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Shankar, Mallikarjun [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Zeng, Claire [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Hendrickson, Stephen [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-11-30

    The Office of Energy Policy and Systems Analysis's finance team (EPSA-50) requires a suite of automated applications that can extract specific data from a flexible data warehouse (where datasets characterizing energy-related finance, economics and markets are maintained and integrated), perform relevant operations and creatively visualize them to provide a better understanding of which policy options affect the various operators/sectors of the electricity system. In addition, the underlying data warehouse should be structured in the most effective and efficient way so that it becomes increasingly valuable over time. This report describes the Energy Finance Data Warehouse (EFDW) framework that has been developed to meet the requirement defined above. We also dive specifically into the Sankey generator use-case scenario to explain the components of the EFDW framework and their roles. An Excel-based data warehouse was used in the creation of the energy finance Sankey diagram and other detailed finance visualizations to support energy policy analysis. The framework also captures the methodology, calculations and estimations the analysts used, as well as the relevant sources, so that newer analysts can build on work done previously.

  18. Work prioritization by using data warehouse solution; Priorizacao de obras usando solucao de data warehouse

    Energy Technology Data Exchange (ETDEWEB)

    Grupelli Junior, Fernando Antonio; Azoni, Edivar Garcia [Companhia Paranaense de Energia (COPEL), Curitiba, PR (Brazil)

    2000-07-01

    This work proposes the utilization of data warehouse technology to help gather adequate and reliable information and to allow the calculation of cost-benefit ratios for works on the primary distribution network. The paper also suggests a better integration and utilization of the possibilities of a data warehouse and its future integration with a geoprocessing system.

  19. The Implementation of Data Warehouse and OLAP for Rehabilitation Outcome Evaluation: ReDWinE System

    Science.gov (United States)

    Guo, Fei-Ran; Parmanto, Bambang; Irrgang, James J.; Wang, Jiunjie; Fang, Huaijin

    2000-01-01

    We created a data warehouse and OLAP system on the web for the outcome evaluation of rehabilitation services. Thirteen outcome indicators were used in this research. Efficiency of therapists and clinics, expected utility of treatments and graphic patterns were generated for data exploration, data mining and decision support. Users can retrieve a wealth of graphs and statistical tables without knowing the database structure or attributes. Our experience showed that a multidimensional database and OLAP can serve as a decision support system.

  20. Implementation of Lean Warehouse to Minimize Wastes in Finished Goods Warehouse of PT Charoen Pokphand Indonesia Semarang

    Directory of Open Access Journals (Sweden)

    Nia Budi Puspitasari

    2016-03-01

    PT. Charoen Pokphand Indonesia Semarang is one of the largest poultry feed companies in Indonesia. To store finished products that are ready to be distributed, it needs a finished goods warehouse. To minimize the wastes that occur in the process of warehousing the finished goods, the implementation of a lean warehouse is required. The core processes of the finished goods warehouse are placing bags of feed that have gone through packing onto pallets, transporting the loaded pallets into the finished goods warehouse, and unloading feed from the finished goods warehouse onto the distribution trucks. With the implementation of the lean warehouse, we can determine whether activities add value or not, and then identify which types of waste occur. Stakeholder opinions on which wastes must be eliminated first were collected by questionnaires. Based on the questionnaire results, the three top wastes were selected and their causes identified using a fishbone diagram. They can be remedied through the implementation of 5S, namely Seiri, Seiton, Seiso, Seiketsu, and Shitsuke. Defect waste can be minimized by selecting pallets, placing sacks correctly, clearing the forklift line, applying working procedures, and creating a cleaning schedule. Next, overprocessing waste is minimized by removing unnecessary items, storing based on the date of manufacture, and planning feed manufacture. Inventory waste is minimized by removing junk, storing feed based on the expiry date, and cleaning the barn.

  1. Benchmarking distributed data warehouse solutions for storing genomic variant information

    Science.gov (United States)

    Wiewiórka, Marek S.; Wysakowicz, Dawid P.; Okoniewski, Michał J.

    2017-01-01

    Genomic-based personalized medicine encompasses storing, analysing and interpreting genomic variants as its central issues. At a time when thousands of patients' sequenced exomes and genomes are becoming available, there is a growing need for efficient database storage and querying. The answer could be the application of modern distributed storage systems and query engines. However, the application of large genomic variant databases to this problem has not been sufficiently explored so far in the literature. To investigate the effectiveness of modern columnar storage [column-oriented Database Management Systems (DBMS)] and query engines, we have developed a prototypic genomic variant data warehouse, populated with large generated content of genomic variants and phenotypic data. Next, we have benchmarked the performance of a number of combinations of distributed storage and query engines on a set of SQL queries that address biological questions essential for both research and medical applications. In addition, a non-distributed, analytical database (MonetDB) has been used as a baseline. Comparison of query execution times confirms that distributed data warehousing solutions outperform classic relational DBMSs. Moreover, pre-aggregation and further denormalization of data, which reduce the number of distributed join operations, significantly improve query performance, by several orders of magnitude. Most of the distributed back-ends offer good performance for complex analytical queries, while the Optimized Row Columnar (ORC) format paired with Presto and Parquet paired with Spark 2 provide, on average, the lowest execution times. Apache Kudu, on the other hand, is the only solution that guarantees sub-second performance for simple genome range queries returning a small subset of data, where a low-latency response is expected, while still offering decent performance for analytical queries. In summary, research and clinical applications that require
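    The benchmark queries themselves are not reproduced in this record, but the Parquet-plus-Spark configuration it highlights can be exercised with a few lines of PySpark. A minimal sketch (the file path, column names and range predicate are illustrative assumptions):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("variant-warehouse").getOrCreate()

# Columnar storage: only the columns touched by the query are read.
variants = spark.read.parquet("/data/variants.parquet")
variants.createOrReplaceTempView("variants")

# Simple genome range query, the low-latency case discussed above.
hits = spark.sql("""
    SELECT sample_id, chrom, pos, ref, alt
    FROM variants
    WHERE chrom = '7' AND pos BETWEEN 55019017 AND 55211628
""")
hits.show(10)
```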

  2. Integrating Data Warehouses with Web Data

    DEFF Research Database (Denmark)

    Perez, Juan Manuel; Berlanga, Rafael; Aramburu, Maria Jose

    This paper surveys the most relevant research on combining Data Warehouse (DW) and Web data. It studies the XML technologies that are currently being used to integrate, store, query and retrieve web data, and their application to data warehouses. The paper addresses the problem of integrating...

  3. Information Architecture: The Data Warehouse Foundation.

    Science.gov (United States)

    Thomas, Charles R.

    1997-01-01

    Colleges and universities are initiating data warehouse projects to provide integrated information for planning and reporting purposes. A survey of 40 institutions with active data warehouse projects reveals the kinds of tools, contents, data cycles, and access currently used. Essential elements of an integrated information architecture are…

  4. Development of Auto-Stacking Warehouse Truck

    Directory of Open Access Journals (Sweden)

    Kuo-Hsien Hsia

    2018-03-01

    Warehouse automation is a very important issue for the promotion of traditional industries. For the production of larger and stackable products, a fork-lifter usually has to be operated by a skilled person for the stacking and storage of the products. The typical autonomous warehouse truck does not have the ability to stack objects. In this paper, we develop a prototype of an auto-stacking warehouse truck that can work without direct operation by a skilled person. On a command issued with an RFID card, the stacker truck can take the packaged product to the warehouse along a pre-planned route and store it, stacked, in the designated storage area, or deliver the product from the storage area to the shipping area or into a container. It can significantly reduce the manpower requirements for skilled forklift technicians and improve the safety of the warehousing area.

  5. Establishment of the Integrated Plant Data Warehouse

    International Nuclear Information System (INIS)

    Oota, Yoshimi; Yoshinaga, Toshiaki

    1999-01-01

    This paper presents 'The Establishment of the Integrated Plant Data Warehouse and Verification Tests on Inter-corporate Electronic Commerce based on the Data Warehouse (PDWH)', one of the 'Shared Infrastructure for the Electronic Commerce Consolidation Project' efforts promoted by the Ministry of International Trade and Industry (MITI) through the Information-Technology Promotion Agency (IPA), Japan. A study group called Japan Plant EC (PlantEC) was organized to perform the relevant activities. One of the main activities of PlantEC involves the construction of the Integrated Plant Data Warehouse (covering manufacturers, engineering companies, plant construction companies, machinery and parts manufacturers, etc.), an essential part of the infrastructure necessary for a system to share information across the industrial life cycle, from planning/design to operation/maintenance. Another activity is the utilization of this warehouse to conduct verification tests proving its usefulness. Through these verification tests, PlantEC will endeavor to establish a warehouse with standardized data which can serve as the infrastructure for EC in the process plant industry. (author)

  6. Establishment of the Integrated Plant Data Warehouse

    Energy Technology Data Exchange (ETDEWEB)

    Oota, Yoshimi; Yoshinaga, Toshiaki [Hitachi Works, Hitachi Ltd., Hitachi, Ibaraki (Japan)

    1999-07-01

    This paper presents 'The Establishment of the Integrated Plant Data Warehouse and Verification Tests on Inter-corporate Electronic Commerce based on the Data Warehouse (PDWH)', one of the 'Shared Infrastructure for the Electronic Commerce Consolidation Project' efforts promoted by the Ministry of International Trade and Industry (MITI) through the Information-Technology Promotion Agency (IPA), Japan. A study group called Japan Plant EC (PlantEC) was organized to perform the relevant activities. One of the main activities of PlantEC involves the construction of the Integrated Plant Data Warehouse (covering manufacturers, engineering companies, plant construction companies, machinery and parts manufacturers, etc.), an essential part of the infrastructure necessary for a system to share information across the industrial life cycle, from planning/design to operation/maintenance. Another activity is the utilization of this warehouse to conduct verification tests proving its usefulness. Through these verification tests, PlantEC will endeavor to establish a warehouse with standardized data which can serve as the infrastructure for EC in the process plant industry. (author)

  7. Importance of public warehouse system for financing agribusiness sector

    Directory of Open Access Journals (Sweden)

    Zakić Vladimir

    2014-01-01

    The aim of this study was to determine the economic viability of using warehouse receipts for the storage of wheat and corn, based on an analysis of trends in product prices, storage costs in public warehouses and the interest rates of loans against warehouse receipts. Agricultural producers are urged to sell grain at harvest time, when the price of agricultural products is usually lowest, mostly because of their need for financing. Instead of selling their products, farmers can store them in public warehouses and obtain short-term financing by borrowing against warehouse receipts, usually at the lowest interest rate. In the following months, farmers can sell the products at a higher price and repay the short-term loan. This study showed that the strategy of using public warehouses and postponing the sale of grain until after harvest is profitable for agricultural producers.

  8. Building a Data Warehouse step by step

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Data warehouses have been developed to answer the increasing demands for quality information from the top managers and economic analysts of organizations. Their importance in today's business environment is unanimously recognized, being the foundation for developing business intelligence systems. Data warehouses offer support for the decision-making process, allowing complex analyses which cannot be properly achieved from operational systems. This paper presents the ways in which a data warehouse may be developed and the stages of building it.

  9. The Virtual Physiological Human ToolKit.

    Science.gov (United States)

    Cooper, Jonathan; Cervenansky, Frederic; De Fabritiis, Gianni; Fenner, John; Friboulet, Denis; Giorgino, Toni; Manos, Steven; Martelli, Yves; Villà-Freixa, Jordi; Zasada, Stefan; Lloyd, Sharon; McCormack, Keith; Coveney, Peter V

    2010-08-28

    The Virtual Physiological Human (VPH) is a major European e-Science initiative intended to support the development of patient-specific computer models and their application in personalized and predictive healthcare. The VPH Network of Excellence (VPH-NoE) project is tasked with facilitating interaction between the various VPH projects and addressing issues of common concern. A key deliverable is the 'VPH ToolKit'--a collection of tools, methodologies and services to support and enable VPH research, integrating and extending existing work across Europe towards greater interoperability and sustainability. Owing to the diverse nature of the field, a single monolithic 'toolkit' is incapable of addressing the needs of the VPH. Rather, the VPH ToolKit should be considered more as a 'toolbox' of relevant technologies, interacting around a common set of standards. The latter apply to the information used by tools, including any data and the VPH models themselves, and also to the naming and categorizing of entities and concepts involved. Furthermore, the technologies and methodologies available need to be widely disseminated, and relevant tools and services easily found by researchers. The VPH-NoE has thus created an online resource for the VPH community to meet this need. It consists of a database of tools, methods and services for VPH research, with a Web front-end. This has facilities for searching the database, for adding or updating entries, and for providing user feedback on entries. Anyone is welcome to contribute.

  10. The Data Warehouse: Keeping It Simple. MIT Shares Valuable Lessons Learned from a Successful Data Warehouse Implementation.

    Science.gov (United States)

    Thorne, Scott

    2000-01-01

    Explains why the data warehouse is important to the Massachusetts Institute of Technology community, describing its basic functions and technical design points; sharing some non-technical aspects of the school's data warehouse implementation that have proved to be important; examining the importance of proper training in a successful warehouse…

  11. Designing a Data Warehouse for Cyber Crimes

    Directory of Open Access Journals (Sweden)

    Il-Yeol Song

    2006-09-01

    One of the greatest challenges facing modern society is the rising tide of cyber crimes. These crimes, since they rarely fit the model of conventional crimes, are difficult to investigate, hard to analyze, and difficult to prosecute. Collecting data in a unified framework is a mandatory step that will assist the investigator in sorting through the mountains of data. In this paper, we explore the design of a dimensional model for a data warehouse that can be used in analyzing cyber crime data. We also present some interesting queries and the types of cyber crime analyses that can be performed based on the data warehouse. We discuss several ways of utilizing the data warehouse using OLAP and data mining technologies. We finally discuss legal issues and data population issues for the data warehouse.
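    The paper's dimensional model is not reproduced in this record; a generic star schema for incident analysis, sketched here with Python's built-in sqlite3 module, illustrates the shape such a design takes (all table and column names are illustrative assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Dimension tables: the axes of analysis.
CREATE TABLE dim_date   (date_id INTEGER PRIMARY KEY, day TEXT, month TEXT, year INTEGER);
CREATE TABLE dim_attack (attack_id INTEGER PRIMARY KEY, category TEXT, technique TEXT);
CREATE TABLE dim_victim (victim_id INTEGER PRIMARY KEY, sector TEXT, region TEXT);

-- Fact table: one row per reported incident, with a numeric measure.
CREATE TABLE fact_incident (
    date_id   INTEGER REFERENCES dim_date(date_id),
    attack_id INTEGER REFERENCES dim_attack(attack_id),
    victim_id INTEGER REFERENCES dim_victim(victim_id),
    loss_usd  REAL
);
""")

# A typical OLAP-style query: total losses by attack category and year.
query = """
SELECT a.category, d.year, SUM(f.loss_usd) AS total_loss
FROM fact_incident f
JOIN dim_attack a ON f.attack_id = a.attack_id
JOIN dim_date   d ON f.date_id   = d.date_id
GROUP BY a.category, d.year;
"""
print(conn.execute(query).fetchall())
```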

  12. Perl Template Toolkit

    CERN Document Server

    Chamberlain, Darren; Cross, David; Torkington, Nathan; Diaz, Tatiana Apandi

    2004-01-01

    Among the many different approaches to "templating" with Perl--such as Embperl, Mason, HTML::Template, and hundreds of other lesser known systems--the Template Toolkit is widely recognized as one of the most versatile. Like other templating systems, the Template Toolkit allows programmers to embed Perl code and custom macros into HTML documents in order to create customized documents on the fly. But unlike the others, the Template Toolkit is as facile at producing HTML as it is at producing XML, PDF, or any other output format. And because it has its own simple templating language, templates

  13. Public Refrigerated Warehouses

    Data.gov (United States)

    Department of Homeland Security — The International Association of Refrigerated Warehouses (IARW) came into existence in 1891 when a number of conventional warehousemen took on the demands of storing...

  14. JAVA Stereo Display Toolkit

    Science.gov (United States)

    Edmonds, Karina

    2008-01-01

    This toolkit provides a common interface for displaying graphical user interface (GUI) components in stereo using either specialized stereo display hardware (e.g., liquid crystal shutter or polarized glasses) or anaglyph display (red/blue glasses) on standard workstation displays. An application using this toolkit will work without modification in either environment, allowing stereo software to reach a wider audience without sacrificing high-quality display on dedicated hardware. The toolkit is written in Java for use with the Swing GUI Toolkit and has cross-platform compatibility. It hooks into the graphics system, allowing any standard Swing component to be displayed in stereo. It uses the OpenGL graphics library to control the stereo hardware and to perform the rendering. It also supports anaglyph and special stereo hardware using the same API (application-program interface), and has the ability to simulate color stereo in anaglyph mode by combining the red band of the left image with the green/blue bands of the right image. This is a low-level toolkit that simply accomplishes the display of components (including the JadeDisplay image display component). It does not include higher-level functions such as disparity adjustment, 3D cursor, or overlays, all of which can be built using this toolkit.

  15. Protecting privacy in a clinical data warehouse.

    Science.gov (United States)

    Kong, Guilan; Xiao, Zhichun

    2015-06-01

    Peking University has several prestigious teaching hospitals in China. To make secondary use of massive medical data for research purposes, the construction of a clinical data warehouse is imperative at Peking University. However, a big concern in clinical data warehouse construction is how to protect patient privacy. In this project, we propose to use a combination of symmetric block ciphers, asymmetric ciphers, and cryptographic hashing algorithms to protect patient privacy information. The novelty of our privacy protection approach lies in message-level data encryption, the key caching system, and the cryptographic key management system. The proposed privacy protection approach is scalable to clinical data warehouse construction with any size of medical data. With the composite privacy protection approach, the clinical data warehouse can be secure enough to keep confidential data from leaking to the outside world. © The Author(s) 2014.
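    The authors' key caching and key management systems are not described in enough detail to reproduce, but the combination they name (a symmetric cipher for message-level encryption, an asymmetric cipher to protect the symmetric key, and hashing for identifiers) can be sketched with the Python cryptography library; all data and names here are illustrative assumptions:

```python
import hashlib
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Asymmetric key pair guarding the symmetric keys (key management side).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Message-level symmetric encryption of one clinical record.
record_key = Fernet.generate_key()
ciphertext = Fernet(record_key).encrypt(b"dx=E11.9; note=...")

# The symmetric key itself is stored RSA-encrypted.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(record_key, oaep)

# Patient identifiers are replaced by a one-way hash (pseudonymization).
pseudonym = hashlib.sha256(b"patient-12345").hexdigest()

# Decryption path: unwrap the key, then decrypt the record.
key = private_key.decrypt(wrapped_key, oaep)
plaintext = Fernet(key).decrypt(ciphertext)
```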

  16. Managing dual warehouses with an incentive policy for deteriorating items

    Science.gov (United States)

    Yu, Jonas C. P.; Wang, Kung-Jeng; Lin, Yu-Siang

    2016-02-01

    Distributors in a supply chain usually limit their own warehouse to a finite capacity for cost reduction, and excess stock is held in a rented warehouse. In this study, we examine inventory control for deteriorating items in a two-warehouse setting. Assuming that the rented warehouse offers an incentive that allows the rental fee to decrease over time, the objective of this study is to maximise the joint profit of the manufacturer and the distributor. An optimisation procedure is developed to derive the optimal joint economic lot size policy. Several criteria are identified to select the most appropriate warehouse configuration and inventory policy on the basis of the storage duration of materials in the rented warehouse. Sensitivity analysis is performed to examine the robustness of the model. The proposed model enables a manufacturer with a channel distributor to coordinate the use of alternative warehouses and to maximise the joint profit of the manufacturer and the distributor.

  17. What Academia Can Gain from Building a Data Warehouse.

    Science.gov (United States)

    Wierschem, David; McMillen, Jeremy; McBroom, Randy

    2003-01-01

    Describes how, when used effectively, data warehouses can be a significant component of strategic decision making on campus. Discusses what a data warehouse is and what its informational contents may include, environmental drivers and obstacles, and strategies to justify developing a data warehouse for an academic institution. (EV)

  18. Subcritical calculation of the nuclear material warehouse

    International Nuclear Information System (INIS)

    Garcia M, T.; Mazon R, R.

    2009-01-01

    In this work, the subcritical calculation of the nuclear material warehouse of the TRIGA Mark III reactor labyrinth at the Mexico Nuclear Center is presented. During the adaptation of the nuclear warehouse (vault I), the fuel was temporarily moved to the warehouse (vault II), and the subcritical calculation was also carried out for this temporary arrangement. The code used for the calculation of the effective multiplication factor was the Monte Carlo N-Particle eXtended (MCNPX) particle transport code, developed by the Los Alamos National Laboratory. (Author)

  19. Integrating Brazilian health information systems in order to support the building of data warehouses

    Directory of Open Access Journals (Sweden)

    Sergio Miranda Freire

    Full Text Available AbstractIntroductionThis paper's aim is to develop a data warehouse from the integration of the files of three Brazilian health information systems concerned with the production of ambulatory and hospital procedures for cancer care, and cancer mortality. These systems do not have a unique patient identification, which makes their integration difficult even within a single system.MethodsData from the Brazilian Public Hospital Information System (SIH-SUS, the Oncology Module for the Outpatient Information System (APAC-ONCO and the Mortality Information System (SIM for the State of Rio de Janeiro, in the period from January 2000 to December 2004 were used. Each of the systems has the monthly data production compiled in dbase files (dbf. All the files pertaining to the same system were then read into a corresponding table in a MySQL Server 5.1. The SIH-SUS and APAC-ONCO tables were linked internally and with one another through record linkage methods. The APAC-ONCO table was linked to the SIM table. Afterwards a data warehouse was built using Pentaho and the MySQL database management system.ResultsThe sensitivities and specificities of the linkage processes were above 95% and close to 100% respectively. The data warehouse provided several analytical views that are accessed through the Pentaho Schema Workbench.ConclusionThis study presented a proposal for the integration of Brazilian Health Systems to support the building of data warehouses and provide information beyond those currently available with the individual systems.

  20. The PAZAR database of gene regulatory information coupled to the ORCA toolkit for the study of regulatory sequences

    Science.gov (United States)

    Portales-Casamar, Elodie; Arenillas, David; Lim, Jonathan; Swanson, Magdalena I.; Jiang, Steven; McCallum, Anthony; Kirov, Stefan; Wasserman, Wyeth W.

    2009-01-01

    The PAZAR database unites independently created and maintained data collections of transcription factor and regulatory sequence annotation. The flexible PAZAR schema permits the representation of diverse information derived from experiments ranging from biochemical protein–DNA binding to cellular reporter gene assays. Data collections can be made available to the public, or restricted to specific system users. The data ‘boutiques’ within the shopping-mall-inspired system facilitate the analysis of genomics data and the creation of predictive models of gene regulation. Since its initial release, PAZAR has grown in terms of data, features and through the addition of an associated package of software tools called the ORCA toolkit (ORCAtk). ORCAtk allows users to rapidly develop analyses based on the information stored in the PAZAR system. PAZAR is available at http://www.pazar.info. ORCAtk can be accessed through convenient buttons located in the PAZAR pages or via our website at http://www.cisreg.ca/ORCAtk. PMID:18971253

  1. Using features of local densities, statistics and the HMM toolkit (HTK) for offline Arabic handwriting text recognition

    Directory of Open Access Journals (Sweden)

    El Moubtahij Hicham

    2017-12-01

    This paper presents an analytical approach to an offline handwritten Arabic text recognition system. It is based on the Hidden Markov Model Toolkit (HTK) without explicit segmentation. The first phase is preprocessing, where the data are introduced into the system after quality enhancements. Then, a set of characteristics (features of local densities and statistical features) is extracted using the technique of sliding windows. Subsequently, the resulting feature vectors are fed to the Hidden Markov Model Toolkit (HTK). The simple database "Arabic-Numbers" and the IFN/ENIT database are used to evaluate the performance of this system. Keywords: Hidden Markov Model Toolkit (HTK), sliding windows

  2. Nigerian Concept Of Bonded Warehouses And Dry Ports | Ndikom ...

    African Journals Online (AJOL)

    The bonded warehouse in Nigeria is a strategic expansion of ordinary warehouses that are usually developed in the ports and related cities, strictly meant for safe-keeping of cargoes for owners before final take-over by consignees after payment of some customs duties. A bonded warehouse and ICDs are seen as ...

  3. Automatic generation of warehouse mediators using an ontology engine

    Energy Technology Data Exchange (ETDEWEB)

    Critchlow, T., LLNL

    1998-04-01

    Data warehouses created for dynamic scientific environments, such as genetics, face significant challenges to their long-term feasibility. One of the most significant of these is the high frequency of schema evolution resulting from both technological advances and scientific insight. Failure to quickly incorporate these modifications will quickly render the warehouse obsolete, yet each evolution requires significant effort to ensure the changes are correctly propagated. DataFoundry utilizes a mediated warehouse architecture with an ontology infrastructure to reduce the maintenance requirements of a warehouse. Among other things, the ontology is used as an information source for automatically generating mediators, the methods that transfer data between the data sources and the warehouse. The identification, definition and representation of the metadata required to perform this task is a primary contribution of this work.

  4. An efficiency improvement in warehouse operation using simulation analysis

    Science.gov (United States)

    Samattapapong, N.

    2017-11-01

    In general, industry requires an efficient system for warehouse operation. There are many important factors that must be considered when designing an efficient warehouse system. The most important is an effective warehouse operation system that can help transfer raw material, reduce costs and support transportation. Given all these factors, the researchers were interested in studying work systems and warehouse distribution. We started by collecting the important data for storage, such as information on products, sizes and locations, data collection and production, and used all this information to build a simulation model in the Flexsim® simulation software. The simulation analysis found that the conveyor belt was a bottleneck in the warehouse operation. Therefore, many scenarios to improve that problem were generated and tested through the simulation analysis process. The results showed that the average queuing time was reduced from 89.8% to 48.7% and the product transport capacity increased from 10.2% to 50.9%. Thus, it can be stated that this is an effective method for increasing efficiency in the warehouse operation.
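    The Flexsim® model itself is not included in this record, but the bottleneck experiment it describes can be approximated with a discrete-event simulation in Python's simpy library. A minimal sketch (the arrival rate, transfer time and conveyor capacity are illustrative assumptions):

```python
import random
import simpy

WAITS = []

def pallet(env, conveyor):
    arrive = env.now
    with conveyor.request() as req:       # queue for the conveyor belt
        yield req
        WAITS.append(env.now - arrive)    # time spent queuing
        yield env.timeout(random.expovariate(1 / 4.0))  # transfer time (min)

def source(env, conveyor):
    for _ in range(500):
        env.process(pallet(env, conveyor))
        yield env.timeout(random.expovariate(1 / 5.0))  # inter-arrival time

env = simpy.Environment()
conveyor = simpy.Resource(env, capacity=1)  # raise capacity to test scenarios
env.process(source(env, conveyor))
env.run()
print(f"mean queuing time: {sum(WAITS) / len(WAITS):.2f} min")
```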

  5. HRSA Data Warehouse

    Data.gov (United States)

    U.S. Department of Health & Human Services — The HRSA Data Warehouse is the go-to source for data, maps, reports, locators, and dashboards on HRSA's public health programs. This website provides a wide variety...

  6. Data Warehouse Consolidation for Business Intelligence Applications; Konsolidasi Data Warehouse untuk Aplikasi Business Intelligence

    Directory of Open Access Journals (Sweden)

    Rudy Rudy

    2012-12-01

    As business competition gets stronger, corporate leaders need complete data as a basis for determining future business strategies. The same holds for the management of company "A", a pharmaceutical company which has three distribution companies. Each distribution company already has a data warehouse to generate its own reports. For business operations and corporate strategy, the chairman of PT "A" requires an integrated report, so that analysis of the data owned by the three distribution companies can be done in a single full report to answer the problems faced by the management. Thus, data warehouse consolidation can be used as a solution for company "A". The methodology starts with an analysis of the information that needs to be displayed in the business intelligence application, followed by data warehouse consolidation, ETL (extract, transform and load), data warehousing, OLAP and dashboards. Using data warehouse consolidation, information access by the management of company "A" can be done in a single presentation, which can display data comparisons between the three distribution companies.

  7. Toolkit Design for Interactive Structured Graphics

    National Research Council Canada - National Science Library

    Bederson, Benjamin B; Grosjean, Jesse; Meyer, Jon

    2003-01-01

    .... We compare hand-crafted custom code to polylithic and monolithic toolkit-based solutions. Polylithic toolkits follow a design philosophy similar to 3D scene graphs supported by toolkits including Java3D and OpenInventor...

  8. Analysis and Design of a Data Warehouse at PT Gajah Tunggal Prakarsa; Analisis Dan Perancangan Data Warehouse Pada PT Gajah Tunggal Prakarsa

    Directory of Open Access Journals (Sweden)

    Choirul Huda

    2010-12-01

    The purpose of this study was to analyze the database support that aids decision making, identify needs, and design a data warehouse. With the support of a data warehouse, company leaders can make decisions more quickly and precisely. The research methodology includes analysis of current systems, library research, and the design of a data warehouse using a star schema. The result of this research is the availability of a data warehouse that can generate information quickly and precisely, thus helping the company in making decisions. The conclusion of this research is that the application of a data warehouse can serve as a decision-making aid for the related parties at PT. Gajah Tunggal Prakarsa.

  9. 7 CFR 1421.106 - Warehouse-stored marketing assistance loan collateral.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false Warehouse-stored marketing assistance loan collateral... Marketing Assistance Loans § 1421.106 Warehouse-stored marketing assistance loan collateral. (a) A commodity may be pledged as collateral for a warehouse-stored marketing assistance loan in the quantity...

  10. Automated Data Aggregation for Time-Series Analysis: Study Case on Anaesthesia Data Warehouse.

    Science.gov (United States)

    Lamer, Antoine; Jeanne, Mathieu; Ficheur, Grégoire; Marcilly, Romaric

    2016-01-01

    Data stored in operational databases are not directly reusable. Aggregation modules are necessary to facilitate secondary use: they decrease the volume of data while increasing the amount of available information. In this paper, we present four automated aggregation engines integrated into an anaesthesia data warehouse. Four instances of clinical questions illustrate the use of these engines for various improvements in quality of care: duration of procedure, drug administration, and the assessment of hypotension and its related treatment.
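    The paper's aggregation engines are not reproduced here, but the kind of time-series aggregation it describes, collapsing high-frequency intraoperative measurements into clinically meaningful summaries, is easy to sketch with pandas (the column name and hypotension threshold are illustrative assumptions):

```python
import pandas as pd

# One row per monitor reading: timestamp and mean arterial pressure (mmHg).
readings = pd.DataFrame(
    {"map_mmhg": [78, 72, 63, 58, 61, 70, 82]},
    index=pd.date_range("2016-01-01 08:00", periods=7, freq="30s"),
)

# Aggregate raw readings into 1-minute means.
per_minute = readings["map_mmhg"].resample("1min").mean()

# Derived clinical information: minutes spent in hypotension (< 65 mmHg).
hypotensive_minutes = (per_minute < 65).sum()
print(per_minute)
print(f"hypotensive minutes: {hypotensive_minutes}")
```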

  11. The importance of data warehouses for physician executives.

    Science.gov (United States)

    Ruffin, M

    1994-11-01

    Soon, most physicians will begin to learn about data warehouses and clinical and financial data about their patients stored in them. What is a data warehouse? Why are we seeing their emergence in health care only now? How does a hospital, or group practice, or health plan acquire or create a data warehouse? Who should be responsible for it, and what sort of training is needed by those in charge of using it for the edification of the sponsoring organization? I'll try to answer these questions in this article.

  12. A data warehouse for electric vehicle charging and electricity prices; Data warehouse til elbilers opladning og elpriser

    DEFF Research Database (Denmark)

    Andersen, Ove; Krogh, Benjamin Bjerre; Torp, Kristian

    This report presents how GPS and CAN bus measurements from the charging of electric vehicles are cleaned of typical errors and stored in a data warehouse. In the data warehouse, the GPS and CAN bus measurements are integrated with prices from the North European electricity spot market Nord Pool Spot. This integration makes it possible... measurements of the charging of electric vehicles are, together with the prices from the electricity spot market, loaded into a data warehouse, which is fully implemented. The logical data model for this data warehouse is presented in detail. The handling of the GPS and CAN bus measurements is generic and can be extended to new data sources...

  13. Expanding Post-Harvest Finance Through Warehouse Receipts and Related Instruments

    OpenAIRE

    Baldwin, Marisa; Bryla, Erin; Langenbucher, Anja

    2006-01-01

    Warehouse receipt financing and similar types of collateralized lending provide an alternative to traditional lending requirements of banks and other financiers and could provide opportunities to expand this lending in emerging economies for agricultural trade. The main contents include: what is warehouse receipt financing; what is the value of warehouse receipt financing; other collater...

  14. Warehouse Order-Picking Process. Review

    Directory of Open Access Journals (Sweden)

    E. V. Korobkov

    2015-01-01

    This article describes the basic warehousing activities, namely movement, information storage and transfer, as well as the connections between typical warehouse operations (reception, transfer, assignment of storage positions and put-away, order-picking, hoarding and sorting, cross-docking, shipping). It presents a classification of warehouse order-picking systems in terms of the manual labor involved, as well as the external (marketing channels, consumer demand structure, supplier replenishment structure and inventory level, total production demand, economic situation) and internal (mechanization level, information accessibility, warehouse dimensionality, method of dispatch for shipping, zoning, batching, storage assignment method, routing method) factors affecting the complexity of designing such systems. Basic optimization considerations are described. The literature on the following sub-problems of planning and control of order-picking processes is reviewed. The layout design problem is considered at two levels: external (the facility layout problem) and internal (the aisle configuration problem). For the problem of distributing goods or stock keeping units, the following methods are emphasized: random, nearest open storage position, and dedicated (COI-based, frequency-based) distribution, as well as class-based and family-grouped (complementary- and contact-based) distribution. The batching problem can be solved by two main methods, proximity order batching (seed and savings algorithms) and time-window order batching. There are two strategies for the zoning problem, progressive and synchronized, as well as a special case of zoning, the bucket brigades method. The hoarding/sorting problem is briefly reviewed. The order-picking routing problem will be thoroughly described in the next article of the cycle "Warehouse order-picking process".
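    Among the storage assignment methods listed above, the COI-based (cube-per-order index) rule is simple enough to sketch: each SKU's required storage space is divided by its picking frequency, and SKUs with the lowest COI are slotted closest to the depot. A minimal illustration in Python (the SKU data are invented):

```python
# sku -> (required storage space in pallet slots, picks per day)
skus = {
    "A": (10, 50),   # small space, very frequent -> near the depot
    "B": (40, 20),
    "C": (5, 2),
    "D": (30, 60),
}

def coi(space, frequency):
    """Cube-per-order index: space requirement per unit of picking activity."""
    return space / frequency

# Slot SKUs in increasing COI order: the lowest COI gets the best location.
ranking = sorted(skus, key=lambda s: coi(*skus[s]))
for rank, sku in enumerate(ranking, start=1):
    space, freq = skus[sku]
    print(f"location {rank}: SKU {sku} (COI = {coi(space, freq):.2f})")
```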

  15. Minimizing Warehouse Space through Inventory Reduction at Reckitt Benckiser

    OpenAIRE

    KILINC, IZGI SELEN

    2009-01-01

    This dissertation is the result of a ten-week internship at a Reckitt Benckiser pharmaceutical plant for the Warehouse Stock Reduction Project. Due to foreseeable growth at the factory, there is increasing pressure to utilise the existing warehouse space by reducing the existing stock level by 50%. This study therefore aims to identify opportunities to reduce the physical stock of raw/packaging materials held in the warehouse and free up space for additional manufacturing resources. The analysis demo...

  16. Decision method for optimal selection of warehouse material handling strategies by production companies

    Science.gov (United States)

    Dobos, P.; Tamás, P.; Illés, B.

    2016-11-01

    The adequate establishment and operation of warehouse logistics significantly determines a company's competitiveness, because it greatly affects the quality and the selling price of the goods that production companies produce. In order to implement and manage an adequate warehouse system, an adequate warehouse position, stock management model, warehouse technology, a motivated workforce committed to process improvement, and a material handling strategy are necessary. In practice, companies have paid little attention to selecting the warehouse strategy properly, although it has a major influence on production in the case of a materials warehouse and on smooth customer service in the case of a finished goods warehouse, because a poor choice can incur large material handling losses. Due to dynamically changing production structures, frequent reorganization of warehouse activities is needed, to which the majority of companies essentially do not react. This work presents a simulation test framework for selecting a suitable warehouse material handling strategy, as well as the decision method for the selection.

  17. Design and Applications of a Multimodality Image Data Warehouse Framework

    Science.gov (United States)

    Wong, Stephen T.C.; Hoo, Kent Soo; Knowlton, Robert C.; Laxer, Kenneth D.; Cao, Xinhau; Hawkins, Randall A.; Dillon, William P.; Arenson, Ronald L.

    2002-01-01

    A comprehensive data warehouse framework is needed, which encompasses imaging and non-imaging information in supporting disease management and research. The authors propose such a framework, describe general design principles and system architecture, and illustrate a multimodality neuroimaging data warehouse system implemented for clinical epilepsy research. The data warehouse system is built on top of a picture archiving and communication system (PACS) environment and applies an iterative object-oriented analysis and design (OOAD) approach and recognized data interface and design standards. The implementation is based on a Java CORBA (Common Object Request Broker Architecture) and Web-based architecture that separates the graphical user interface presentation, data warehouse business services, data staging area, and backend source systems into distinct software layers. To illustrate the practicality of the data warehouse system, the authors describe two distinct biomedical applications—namely, clinical diagnostic workup of multimodality neuroimaging cases and research data analysis and decision threshold on seizure foci lateralization. The image data warehouse framework can be modified and generalized for new application domains. PMID:11971885

  18. Multidimensionality in a Data Warehouse Using the Combination Formula; Multidimensi Pada Data Warehouse Dengan Menggunakan Rumus Kombinasi

    OpenAIRE

    Hendric, Spits Warnars Harco Leslie

    2006-01-01

    Multidimensionality in a data warehouse is a necessity and the most important aspect of information delivery; without multidimensionality, a data warehouse is incomplete. Multidimensionality provides the ability to analyze business measurements in many different ways. Multidimensional analysis is also synonymous with online analytical processing (OLAP).

  19. A Multidimensional Data Warehouse for Community Health Centers.

    Science.gov (United States)

    Kunjan, Kislaya; Toscos, Tammy; Turkcan, Ayten; Doebbeling, Brad N

    2015-01-01

    Community health centers (CHCs) play a pivotal role in healthcare delivery to vulnerable populations, but have not yet benefited from a data warehouse that can support improvements in clinical and financial outcomes across the practice. We have developed a multidimensional clinic data warehouse (CDW) by working with 7 CHCs across the state of Indiana and integrating their operational, financial and electronic patient records to support ongoing delivery of care. We describe in detail the rationale for the project, the data architecture employed, the content of the data warehouse, along with a description of the challenges experienced and strategies used in the development of this repository that may help other researchers, managers and leaders in health informatics. The resulting multidimensional data warehouse is highly practical and is designed to provide a foundation for wide-ranging healthcare data analytics over time and across the community health research enterprise.

  20. Chronic Condition Data Warehouse

    Data.gov (United States)

    U.S. Department of Health & Human Services — The CMS Chronic Condition Data Warehouse (CCW) provides researchers with Medicare and Medicaid beneficiary, claims, and assessment data linked by beneficiary across...

  1. Combining information from a clinical data warehouse and a pharmaceutical database to generate a framework to detect comorbidities in electronic health records.

    Science.gov (United States)

    Sylvestre, Emmanuelle; Bouzillé, Guillaume; Chazard, Emmanuel; His-Mahier, Cécil; Riou, Christine; Cuggia, Marc

    2018-01-24

    Medical coding is used for a variety of activities, from observational studies to hospital billing. However, comorbidities tend to be under-reported by medical coders. The aim of this study was to develop an algorithm to detect comorbidities in electronic health records (EHR) by using a clinical data warehouse (CDW) and a knowledge database. We enriched the Theriaque pharmaceutical database with the French national Comorbidities List to identify drugs associated with at least one major comorbid condition and diagnoses associated with a drug indication. Then, we compared the drug indications in the Theriaque database with the ICD-10 billing codes in the EHR to detect potentially missing comorbidities based on drug prescriptions. Finally, we improved comorbidity detection by matching drug prescriptions with laboratory test results. We tested the resulting algorithm on two retrospective datasets extracted from the Rennes University Hospital (RUH) CDW. The first dataset included all adult patients hospitalized in the ear, nose and throat (ENT) surgical ward between October and December 2014 (ENT dataset). The second included all adult patients hospitalized at RUH between January and February 2015 (general dataset). We reviewed medical records to find written evidence of the suggested comorbidities in current or past stays. Among the 22,132 Common Units of Dispensation (CUD) codes present in the Theriaque database, 19,970 drugs (90.2%) were associated with one or several ICD-10 diagnoses based on their indication, and 11,162 (50.4%) with at least one of the 4878 comorbidities from the comorbidity list. Among the 122 patients of the ENT dataset, 75.4% had at least one drug prescription without a corresponding ICD-10 code. The comorbidity diagnoses suggested by the algorithm were confirmed in 44.6% of the cases. Among the 4312 patients of the general dataset, 68.4% had at least one drug prescription without a corresponding ICD-10 code. The comorbidity diagnoses suggested by the
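    The full algorithm is not reproduced in this record, but its core matching step, flagging prescriptions whose indications have no corresponding billed ICD-10 code, can be sketched in a few lines of Python (the drug-to-indication mapping and all codes below are invented for illustration):

```python
# Knowledge base: drug code -> ICD-10 codes for its comorbid indications
# (in the paper this comes from Theriaque plus the national comorbidity list).
DRUG_INDICATIONS = {
    "CUD-0001": {"E11.9"},           # hypothetical oral antidiabetic
    "CUD-0002": {"I10"},             # hypothetical antihypertensive
    "CUD-0003": {"J45.9", "J44.9"},  # hypothetical inhaled bronchodilator
}

def suggest_missing_comorbidities(prescriptions, billed_codes):
    """Return ICD-10 codes suggested by prescriptions but absent from billing."""
    suggested = set()
    for drug in prescriptions:
        indications = DRUG_INDICATIONS.get(drug, set())
        if indications and not indications & billed_codes:  # nothing coded
            suggested |= indications
    return suggested

stay_prescriptions = ["CUD-0001", "CUD-0003"]
stay_billed_codes = {"J45.9"}                # asthma coded, diabetes not
print(suggest_missing_comorbidities(stay_prescriptions, stay_billed_codes))
# -> {'E11.9'}: review the record for an uncoded diabetes comorbidity
```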

  2. CHOmine: an integrated data warehouse for CHO systems biology and modeling.

    Science.gov (United States)

    Gerstl, Matthias P; Hanscho, Michael; Ruckerbauer, David E; Zanghellini, Jürgen; Borth, Nicole

    2017-01-01

    The last decade has seen a surge in published genome-scale information for Chinese hamster ovary (CHO) cells, which are the main production vehicles for therapeutic proteins. While a single access point is available at www.CHOgenome.org, the primary data are distributed over several databases at different institutions. Currently, research is frequently hampered by a plethora of gene names and IDs that vary between published draft genomes and databases, making systems biology analyses cumbersome and elaborate. Here we present CHOmine, an integrative data warehouse connecting data from various databases and providing links to others. Furthermore, we introduce CHOmodel, a web-based resource that provides access to recently published CHO cell line specific metabolic reconstructions. Both resources allow users to query CHO-relevant data and find interconnections between different types of data, and thus provide a simple, standardized entry point to the world of CHO systems biology. http://www.chogenome.org. © The Author(s) 2017. Published by Oxford University Press.

  3. Business/Employers Influenza Toolkit

    Centers for Disease Control (CDC) Podcasts

    This podcast promotes the "Make It Your Business To Fight The Flu" toolkit for Businesses and Employers. The toolkit provides information and recommended strategies to help businesses and employers promote the seasonal flu vaccine. Additionally, employers will find flyers, posters, and other materials to post and distribute in the workplace.

  4. Operational management system for warehouse logistics of metal trading companies

    Directory of Open Access Journals (Sweden)

    Khayrullin Rustam Zinnatullovich

    2014-07-01

    Logistics is an effective tool in business management. The metal trading business is a part of the metal promotion chain from producer to consumer, designed to serve as a link connecting the interests of steel producers and end users. The specifics of warehouse trading must be accounted for. The specificity of warehouse metal trading consists primarily in the fact that purchases are made in large lots while sales are made in medium and small lots. Loading and unloading of railcars and trucks is performed by overhead cranes. Some of the purchased goods are shipped in relatively large lots without presale preparation. Other goods undergo presale preparation. Indoor and outdoor warehouses are used, with an address storage system. During prolonged storage the metal rusts. Some of the goods are subjected to final processing (cutting, welding, painting) in service centers and small factories, usually located at the warehouse. The number of simultaneously shipped cars and the size of the loading brigade can each reach several dozen. It is therefore necessary to supervise the loading workers, to coordinate and monitor the performance of loading and unloading operations, to analyze their work daily, and to evaluate warehouse operations as a whole. There is also a need to manage and control the movement of cars and trucks on the warehouse territory to reduce storage and transport costs and improve customer service. The widely used ERP and WMS systems do not fully cover the functions and processes of warehouse trading and do not effectively manage all logistics processes. In this paper, specialized software is proposed, intended for operational logistics management in warehouse metal products trading. The basic functions and processes of warehouse metal trading are described. Effectiveness indices for logistics processes and key performance indicators of warehouse trading are proposed.

  5. Implementation of a Data Warehouse in the Marketing Department of a Higher Education Institution; Implemetasi Data Warehouse pada Bagian Pemasaran Perguruan Tinggi

    Directory of Open Access Journals (Sweden)

    Eka Miranda

    2012-06-01

    Transactional data are widely held by higher education institutions, but the utilization of these data to support decision making has not functioned maximally. Therefore, higher education institutions need analysis tools to maximize decision-making processes. Based on this issue, a data warehouse design was created to: (1) store large amounts of data; (2) potentially gain new perspectives on distributed data; (3) provide reports and answers to users' ad hoc questions; (4) perform data analysis of external conditions and of transactional data from the marketing activities of universities, since marketing is both a supporting field and the cutting edge of higher education institutions. The methods used to design and implement the data warehouse are an analysis of records related to the marketing activities of higher education institutions and data warehouse design. This study results in a data warehouse design and its implementation to analyze external data and transactional data from the marketing activities of universities to support decision making.

  6. Warehouse receipts functioning to reduce market risk

    Directory of Open Access Journals (Sweden)

    Jovičić Daliborka

    2014-01-01

    Cereal production is exposed to market risk to a great extent due to its inelastic demand. Grain prices move cyclically and decline significantly in harvest periods as a result of supply greatly exceeding demand. The very specificity of agricultural production means that producers are often forced to sell their products on unfavorable terms in order to resume production. The Public Warehouses System allows agricultural producers, who were previously unable to use bank loans to finance the continuation of their production, to acquire the necessary funds efficiently, supported by warehouse receipts which serve as collateral. Based on the results obtained by applying statistical methods (variance and standard deviation) as a measure of market risk, under the assumption that warehouse receipt prices will approximately follow the overall consumer price index, it can be concluded that warehouse receipt trading will have a significant impact on risk reduction in cereal production. Positive effects can be manifested through the stabilization of prices, the reduction of cyclic movements in the production of basic grains and, ultimately, the country's food security.

  7. Information Support of Processes in Warehouse Logistics

    Directory of Open Access Journals (Sweden)

    Gordei Kirill

    2013-11-01

    In the conditions of globalization and worldwide economic links, the role of information support for business processes is increasing in various branches and fields of activity. Warehouse activity is no exception. Such information support is realized in warehouse logistics systems. In relation to a territorial-administrative entity, the warehouse logistics system takes the form of a complex social and economic structure which controls the economic flows covering the intermediary, trade and transport organizations and the enterprises of other branches and spheres. The spatial movement of inventory items places new demands on the participants of merchandising. Warehousing (in the sense of storage) is one of the operations within logistics activity, concerned with the organization of a material flow. Therefore, treating warehousing as the "management of the spatial movement of stocks" is justified. Warehousing, in this understanding, tries to shed the perception of holding stocks as a mere business expense. This aspiration is reflected in logistics systems working on principles such as "just in time" and "lean production". The role of warehouses as mere places of storage is therefore being transformed into an understanding of warehousing as an innovative logistics system.

  8. The Best Ever Alarm System Toolkit

    International Nuclear Information System (INIS)

    Kasemir, Kay; Chen, Xihui; Danilova, Ekaterina N.

    2009-01-01

    Learning from our experience with the standard Experimental Physics and Industrial Control System (EPICS) alarm handler (ALH), as well as a similar intermediate approach based on script-generated operator screens, we developed the Best Ever Alarm System Toolkit (BEAST). It is based on Java and Eclipse on the Control System Studio (CSS) platform, using a relational database (RDB) to store the configuration and log actions. It employs the Java Message Service (JMS) for communication between the modular pieces of the toolkit, which include an Alarm Server to maintain the current alarm state, an arbitrary number of Alarm Client user interfaces (GUIs), and tools to annunciate alarms or log alarm-related actions. Web reports allow us to monitor alarm system performance and spot deficiencies in the alarm configuration. The Alarm Client GUI not only gives end users various ways to view alarms in tree and table form, but also makes it easy to access guidance information, related operator displays and other CSS tools. It also allows the configuration to be modified online, directly from the GUI. Coupled with a good 'alarm philosophy' on how to provide useful alarms, we can continually refine the configuration to achieve an effective alarm system.
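
    The tree view mentioned above implies that alarm severities are aggregated upward, from individual signals to areas and systems. BEAST's actual implementation is in Java on CSS; the following is only a minimal Python sketch of the severity roll-up idea, with all names and the severity ordering invented for illustration:

    ```python
    # Minimal sketch of alarm-severity roll-up in a tree, as used by alarm
    # client GUIs. Names and ordering are illustrative, not BEAST's actual code.
    SEVERITY_ORDER = {"OK": 0, "MINOR": 1, "MAJOR": 2, "INVALID": 3}

    class AlarmNode:
        def __init__(self, name, children=None, severity="OK"):
            self.name = name
            self.children = children or []
            self._severity = severity  # leaf severity; ignored for non-leaves

        @property
        def severity(self):
            if not self.children:
                return self._severity
            # A parent's severity is the highest severity among its children.
            return max((c.severity for c in self.children),
                       key=SEVERITY_ORDER.__getitem__)

    vacuum = AlarmNode("VacuumPump1", severity="MAJOR")
    cooling = AlarmNode("CoolingFlow", severity="OK")
    sector = AlarmNode("Sector3", children=[vacuum, cooling])
    print(sector.name, sector.severity)  # Sector3 MAJOR
    ```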

  9. The MOLGENIS toolkit: rapid prototyping of biosoftware at the push of a button.

    Science.gov (United States)

    Swertz, Morris A; Dijkstra, Martijn; Adamusiak, Tomasz; van der Velde, Joeri K; Kanterakis, Alexandros; Roos, Erik T; Lops, Joris; Thorisson, Gudmundur A; Arends, Danny; Byelas, George; Muilu, Juha; Brookes, Anthony J; de Brock, Engbert O; Jansen, Ritsert C; Parkinson, Helen

    2010-12-21

    There is a huge demand on bioinformaticians to provide their biologists with user-friendly and scalable software infrastructures to capture, exchange, and exploit the unprecedented amounts of new *omics data. Here we present MOLGENIS, a generic, open source software toolkit to quickly produce the bespoke MOLecular GENetics Information Systems needed. The MOLGENIS toolkit provides bioinformaticians with a simple language to model biological data structures and user interfaces. At the push of a button, MOLGENIS' generator suite automatically translates these models into a feature-rich, ready-to-use web application including database, user interfaces, exchange formats, and scriptable interfaces. Each generator is a template of SQL, Java, R, or HTML code that would require much effort to write by hand. This 'model-driven' method ensures reuse of best practices and improves quality, because the modeling language and generators are shared between all MOLGENIS applications, so that errors are found quickly and improvements are shared easily by a re-generation. A plug-in mechanism ensures that both the generator suite and the generated product can be customized just as much as hand-written software. In recent years we have successfully evaluated the MOLGENIS toolkit for the rapid prototyping of many types of biomedical applications, including next-generation sequencing, GWAS, QTL, proteomics and biobanking. Writing 500 lines of model XML typically replaces 15,000 lines of hand-written programming code, which allows for quick adaptation if the information system is not yet to the biologist's satisfaction. Each application generated with MOLGENIS comes with an optimized database back-end, user interfaces for biologists to manage and exploit their data, programming interfaces for bioinformaticians to script analysis tools in R, Java, SOAP, REST/JSON and RDF, a tab-delimited file format to ease upload and exchange of data, and detailed technical documentation. Existing databases
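
    MOLGENIS's generators are far richer than can be shown here, but the core "model-driven" idea — a small declarative model expanded into boilerplate code — can be sketched in a few lines. The miniature XML dialect and the generated SQL below are invented for illustration and are not MOLGENIS's actual model language:

    ```python
    import xml.etree.ElementTree as ET

    # Invented miniature model language; MOLGENIS's real XML dialect is richer.
    model_xml = """
    <model>
      <entity name="Sample">
        <field name="id"     type="int"/>
        <field name="tissue" type="string"/>
      </entity>
    </model>
    """

    TYPE_MAP = {"int": "INTEGER", "string": "VARCHAR(255)"}

    def generate_sql(xml_text):
        """Expand each <entity> element into a CREATE TABLE statement."""
        statements = []
        for entity in ET.fromstring(xml_text).iter("entity"):
            cols = ", ".join(f"{f.get('name')} {TYPE_MAP[f.get('type')]}"
                             for f in entity.iter("field"))
            statements.append(f"CREATE TABLE {entity.get('name')} ({cols});")
        return "\n".join(statements)

    print(generate_sql(model_xml))
    # CREATE TABLE Sample (id INTEGER, tissue VARCHAR(255));
    ```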

  10. Harvesting Information from a Library Data Warehouse

    Directory of Open Access Journals (Sweden)

    Siew-Phek T. Su

    2017-09-01

    Full Text Available Data warehousing technology has been defined by John Ladley as "a set of methods, techniques, and tools that are leveraged together and used to produce a vehicle that delivers data to end users on an integrated platform." (1) This concept has been applied increasingly by industries worldwide to develop data warehouses for decision support and knowledge discovery. In the academic sector, several universities have developed data warehouses containing the universities' financial, payroll, personnel, budget, and student data. (2) These data warehouses across all industries and academia have met with varying degrees of success. Data warehousing technology and its related issues have been widely discussed and published. (3) Little has been done, however, on the application of this cutting-edge technology in the library environment using library data.

  11. 19 CFR 19.1 - Classes of customs warehouses.

    Science.gov (United States)

    2010-04-01

    ... 19 Customs Duties 1 2010-04-01 2010-04-01 false Classes of customs warehouses. 19.1 Section 19.1 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY CUSTOMS WAREHOUSES, CONTAINER STATIONS AND CONTROL OF MERCHANDISE THEREIN § 19.1 Classes of...

  12. NGS QC Toolkit: a toolkit for quality control of next generation sequencing data.

    Directory of Open Access Journals (Sweden)

    Ravi K Patel

    Full Text Available Next generation sequencing (NGS) technologies provide a high-throughput means to generate large amounts of sequence data. However, quality control (QC) of the sequence data generated from these technologies is extremely important for meaningful downstream analysis. Further, highly efficient and fast processing tools are required to handle the large volume of datasets. Here, we have developed an application, NGS QC Toolkit, for quality check and filtering of high-quality data. This toolkit is a standalone and open source application freely available at http://www.nipgr.res.in/ngsqctoolkit.html. All the tools in the application have been implemented in the Perl programming language. The toolkit comprises user-friendly tools for QC of sequencing data generated using Roche 454 and Illumina platforms, additional tools to aid QC (sequence format converter and trimming tools), and analysis tools (statistics tools). A variety of options have been provided to facilitate QC at user-defined parameters. The toolkit is expected to be very useful for the QC of NGS data and to facilitate better downstream analysis.
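
    NGS QC Toolkit itself is implemented in Perl, but the essence of read filtering — keeping only reads whose bases are mostly above a quality cutoff — can be sketched briefly. The Python below is an illustration only, assuming Illumina 1.8+ Phred+33 quality encoding and invented thresholds; it is not the toolkit's actual logic:

    ```python
    # Illustrative high-quality-read filter, not NGS QC Toolkit's actual code.
    # Assumes Illumina 1.8+ FASTQ with Phred+33 quality encoding.
    def phred_scores(quality_line):
        return [ord(c) - 33 for c in quality_line]

    def is_high_quality(quality_line, cutoff=20, min_fraction=0.7):
        """Keep a read if >= 70% of its bases have Phred quality >= 20."""
        scores = phred_scores(quality_line)
        good = sum(1 for q in scores if q >= cutoff)
        return good / len(scores) >= min_fraction

    def filter_fastq(lines):
        """Yield the 4-line records of reads that pass the quality filter."""
        for i in range(0, len(lines), 4):
            record = lines[i:i + 4]          # @id, sequence, +, quality
            if is_high_quality(record[3].rstrip()):
                yield record
    ```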

  13. A study on building data warehouse of hospital information system.

    Science.gov (United States)

    Li, Ping; Wu, Tao; Chen, Mu; Zhou, Bin; Xu, Wei-guo

    2011-08-01

    Existing hospital information systems with simple statistical functions cannot meet current management needs. Hospital resources are distributed among separately owned hospitals, as in the case of the regional coordination of medical services. In this study, to integrate and make full use of medical data effectively, we propose a data warehouse modeling method for the hospital information system; the method can also be employed for a distributed hospital medical service system. To ensure that hospital information supports the diverse needs of health care, the framework of the hospital information system has three layers: a data-center layer, a system-function layer, and a user-interface layer. This paper discusses the role of a data warehouse management system in handling hospital information, from the establishment of the data theme, through the design of a data model, to the establishment of the data warehouse. Online analytical processing tools support user-friendly multidimensional analysis from a number of different angles to extract the required data and information. Use of the data warehouse improves online analytical processing and mitigates deficiencies in the decision support system. The hospital information system based on a data warehouse effectively employs statistical analysis and data mining technology to handle massive quantities of historical data, and draws on clinical and hospital information for decision making. This paper proposes the use of a data warehouse for a hospital information system, specifically a data warehouse built around hospital information themes, covering theme determination, dimensional modeling and so on. The processing of patient information is given as an example that demonstrates the usefulness of this method for hospital information management. Data warehouse technology is an evolving technology, and more and more decision support information extracted by data mining and with decision-making technology is

  14. Warehouse Performance Improvement at Linfox Logistics Indonesia

    OpenAIRE

    Pratama, Riyan Galuh; Simatupang, Togar M

    2013-01-01

    The objective of this research is to provide alternative solutions for Linfox Logistics Indonesia (LLI) in facing warehouse performance issues. The main warehouse performance indicators, called Customer Case Filling on Time (CCFOT) and Case Picking Productivity, failed to achieve their targets. Several analyses were carried out regarding the current dispatch process, value stream mapping, and root cause identification. The results show that much waste occurred in the dispatch process. Proposed improvemen...

  15. Column-Oriented Databases, an Alternative for Analytical Environment

    Directory of Open Access Journals (Sweden)

    Gheorghe MATEI

    2010-12-01

    Full Text Available It is widely accepted that a data warehouse is the central place of a Business Intelligence system. It stores all data that is relevant for the company, acquired both from internal and external sources. Such a repository stores data spanning more years than a transactional system can, and offers its users valuable information for making the best decisions, based on accurate and reliable data. As the volume of data stored in an enterprise data warehouse becomes larger and larger, new approaches are needed to make the analytical system more efficient. This paper presents column-oriented databases, which are considered an element of the new generation of DBMS technology. The paper emphasizes the need for and the advantages of these databases in an analytical environment and gives a short presentation of two DBMSs built on the columnar approach.
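
    The analytical advantage of a columnar layout is that an aggregate over one attribute touches only that attribute's storage instead of whole rows. A toy contrast in Python (the data is invented; real column stores add compression, vectorized execution, and late materialization on top of this):

    ```python
    # Toy contrast of row-oriented vs column-oriented storage for an aggregate.
    rows = [  # row-oriented: each record stored together
        {"region": "EU", "product": "A", "revenue": 120.0},
        {"region": "US", "product": "B", "revenue": 340.0},
        {"region": "EU", "product": "B", "revenue": 75.0},
    ]

    columns = {  # column-oriented: each attribute stored contiguously
        "region":  ["EU", "US", "EU"],
        "product": ["A", "B", "B"],
        "revenue": [120.0, 340.0, 75.0],
    }

    # Row store: SUM(revenue) must walk every full record.
    total_row = sum(r["revenue"] for r in rows)

    # Column store: the same query reads only the one relevant column.
    total_col = sum(columns["revenue"])

    assert total_row == total_col == 535.0
    ```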

  16. STAR: Software Toolkit for Analysis Research

    International Nuclear Information System (INIS)

    Doak, J.E.; Prommel, J.M.; Whiteson, R.; Hoffbauer, B.L.; Thomas, T.R.; Helman, P.

    1993-01-01

    Analyzing vast quantities of data from diverse information sources is an increasingly important element of nonproliferation and arms control analysis. Much of the work in this area has used human analysts to assimilate, integrate, and interpret complex information gathered from various sources. With the advent of fast computers, we now have the capability to automate this process, thereby shifting this burden away from humans. In addition, huge data storage capabilities now exist, which have made it possible to formulate large integrated databases comprising many terabytes of information spanning a variety of subjects. We are currently designing a Software Toolkit for Analysis Research (STAR) to address these issues. The goal of STAR is to produce a research tool that facilitates the development and interchange of algorithms for locating phenomena of interest to nonproliferation and arms control experts. One major component deals with the preparation of information. The ability to manage and effectively transform raw data into a meaningful form is a prerequisite for analysis by any methodology. The relevant information to be analyzed can be unstructured text, structured data, signals, or images. Data can be numerical and/or character, stored in raw data files, databases, or streams of bytes, or compressed into bits, in formats ranging from fixed, to character-delimited, to a count followed by content. The data can be analyzed in real-time or batch mode. Once the data are preprocessed, different analysis techniques can be applied. Some are built using expert knowledge. Others are trained using data collected over a period of time. Currently, we are considering three classes of analyzers for use in our software toolkit: (1) traditional machine learning techniques, (2) purely statistical systems, and (3) expert systems

  17. Deteksi Outlier Transaksi Menggunakan Visualisasi-Olap Pada Data Warehouse Perguruan Tinggi Swasta

    Directory of Open Access Journals (Sweden)

    Gusti Ngurah Mega Nata

    2016-07-01

    Full Text Available Detecting outliers in a data warehouse is important. Data in a data warehouse have already been aggregated and follow a multidimensional model. Aggregation is performed because the data warehouse is used by top-level management for fast data analysis, while the multidimensional data model is used to view data across the various dimensions of business objects. Detecting outliers in a data warehouse therefore requires a technique that can find outliers in aggregated data and can examine them across multiple business-object dimensions, which poses a new challenge. On the other hand, On-Line Analytical Processing (OLAP) visualization is an important task for presenting trend information (reports) from the data warehouse in visual form. In this study, OLAP visualization is used to detect transaction outliers. The OLAP operation used is drill-down. The visualization types used are one-dimensional, two-dimensional and multidimensional visualization, using the Weave Desktop tool. The data warehouse was built bottom-up. A case study was carried out at a private university; the case addressed was detecting outliers in student payment transactions in each semester. Outlier detection on a visualization built from a single dimension table proved easier to analyze than detection on visualizations involving two or more dimension tables; in other words, the more dimension tables involved, the harder the outlier-detection analysis becomes. Keywords — Outlier Detection, OLAP Visualization, Data Warehouse
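
    The drill-down-then-inspect workflow the abstract describes can be emulated with a simple statistical screen: aggregate payments per semester, then flag semesters whose totals deviate strongly from the norm. A hypothetical sketch in Python — the data and the z-score-style threshold are invented, and the paper itself relies on visual inspection of OLAP drill-downs rather than this rule:

    ```python
    import statistics

    # Hypothetical aggregated payment totals per semester (millions of rupiah).
    payments = {"2014-1": 420, "2014-2": 435, "2015-1": 410,
                "2015-2": 180,  # suspiciously low semester
                "2016-1": 425}

    mean = statistics.mean(payments.values())
    stdev = statistics.stdev(payments.values())

    # Flag semesters more than 1.5 standard deviations from the mean; the
    # paper itself finds such outliers by visual inspection of drill-downs.
    outliers = {sem: total for sem, total in payments.items()
                if abs(total - mean) > 1.5 * stdev}
    print(outliers)  # {'2015-2': 180}
    ```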

  18. Data warehouse based decision support system in nuclear power plants

    International Nuclear Information System (INIS)

    Nadinic, B.

    2004-01-01

    Safety is an important element in business decision-making processes in nuclear power plants. Information about component reliability, structures and systems, data recorded during the nuclear power plant's operation and outage periods, as well as experience from other power plants, are located in different database systems throughout the power plant. It would be possible to create a decision support system which would collect this data, transform it into a standardized form and store it in a single location, in a format more suitable for analyses and knowledge discovery. This single location where the data would be stored would be a data warehouse. Such a data warehouse based decision support system could make decision-making processes more efficient by providing more information about business processes and predicting possible consequences of different decisions. The two main functionalities of this decision support system would be an OLAP (On-Line Analytical Processing) system and a data mining system. An OLAP system would enable users to perform fast, simple and efficient multidimensional analysis of existing data and identify trends. Data mining techniques and algorithms would help discover new, previously unknown information from the data, as well as hidden dependencies between various parameters. Data mining would also enable analysts to create prediction models for the behaviour of different systems during operation and for inspection results during outages. The basic characteristics and theoretical foundations of such a decision support system are described, and the reasons for choosing a data warehouse as the underlying structure are explained. The article analyzes the business benefits of such a system as well as potential uses of OLAP and data mining technologies. Possible implementation methodologies and problems that may arise, especially in the field of data integration, are discussed and analyzed. (author)

  19. Logistics Cost Calculation of Implementation Warehouse Management System: A Case Study

    Directory of Open Access Journals (Sweden)

    Kučera Tomáš

    2017-01-01

    Full Text Available A warehouse management system can take full advantage of resources and provide efficient warehousing services. The paper aims to show the advantages and disadvantages of the warehouse management system in a chosen enterprise focused on logistics services and transportation. The paper brings a new, innovative approach to warehousing and shows how a logistics enterprise can reduce its logistics costs. This approach includes cost reduction in establishment and operation, and savings in the overall assessment of the implementation of the warehouse management system. The innovative warehouse management system is demonstrated as a case study, classified as a qualitative scientific method, in the chosen logistics enterprise. The paper is based on research of the world literature and analyses of internal logistics processes, data and enterprise documents. The paper identifies costs related to personnel, handling equipment and material identification. Implementation of the warehouse management system will reduce the overall logistics costs of warehousing and extend the warehouse management system to other parts of the logistics chain.

  20. 27 CFR 46.236 - Articles in a warehouse.

    Science.gov (United States)

    2010-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 2 2010-04-01 2010-04-01 false Articles in a warehouse... Tubes Held for Sale on April 1, 2009 Filing Requirements § 46.236 Articles in a warehouse. (a) Articles... articles will be offered for sale. (b) Articles offered for sale at several locations must be reported on a...

  1. Wetland Resources Action Planning (WRAP) toolkit

    DEFF Research Database (Denmark)

    Bunting, Stuart W.; Smith, Kevin G.; Lund, Søren

    2013-01-01

    The Wetland Resources Action Planning (WRAP) toolkit is a toolkit of research methods and better management practices used in HighARCS (Highland Aquatic Resources Conservation and Sustainable Development), an EU-funded project with field experiences in China, Vietnam and India. It aims to communicate best practices in conserving biodiversity and sustaining ecosystem services to potential users and to promote the wise use of aquatic resources, improve livelihoods and enhance policy information.

  2. Demand Response Opportunities in Industrial Refrigerated Warehouses in California

    Energy Technology Data Exchange (ETDEWEB)

    Goli, Sasank; McKane, Aimee; Olsen, Daniel

    2011-06-14

    Industrial refrigerated warehouses that have implemented energy efficiency measures and have centralized control systems can be excellent candidates for Automated Demand Response (Auto-DR) due to equipment synergies and the receptivity of facility managers to strategies that control energy costs without disrupting facility operations. Auto-DR uses the OpenADR protocol for continuous, open communication signals over the Internet, allowing facilities to automate their Demand Response (DR). Refrigerated warehouses were selected for research because: they have significant power demand, especially during utility peak periods; most processes are not sensitive to short-term (2-4 hour) power reductions, so DR activities are often not disruptive to facility operations; the number of processes is limited and well understood; and past experience with some DR strategies successful in commercial buildings may apply to refrigerated warehouses. This paper presents an overview of the potential for load sheds and shifts from baseline electricity use in response to DR events, along with the physical configurations and operating characteristics of refrigerated warehouses. Analysis of data from two case studies and nine facilities in Pacific Gas and Electric territory confirmed the DR abilities inherent to refrigerated warehouses but showed significant variation across facilities. Further, while the load from California's refrigerated warehouses in 2008 was 360 MW with an estimated DR potential of 45-90 MW, the amount actually achieved was much lower due to low participation. Efforts to overcome barriers to increased participation may include improved marketing and recruitment of potential DR sites, better alignment with and emphasis on the financial benefits of participation, and use of Auto-DR to increase consistency of participation.

  3. Supporting LGBT Communities: Police ToolKit

    OpenAIRE

    Vasquez del Aguila, Ernesto; Franey, Paul

    2013-01-01

    This toolkit provides police forces with practical educational tools, which can be used as part of a comprehensive LGBT strategy centred on diversity, equality, and non-discrimination. These materials are based on lessons learned through real life policing experiences with LGBT persons. The Toolkit is divided into seven scenarios where police awareness of LGBT issues has been identified as important. The toolkit employs a practical, scenario-based, problem-solving approach to help police offi...

  4. Health Claims Data Warehouse (HCDW)

    Data.gov (United States)

    Office of Personnel Management — The Health Claims Data Warehouse (HCDW) will receive and analyze health claims data to support management and administrative purposes. The Federal Employee Health...

  5. THE DEVELOPMENT OF THE APPLICATION OF A DATA WAREHOUSE AT PT JKL

    Directory of Open Access Journals (Sweden)

    Choirul Huda

    2012-05-01

    Full Text Available One rapidly evolving technology today is information technology, which can help decision making in an organization or a company. The data warehouse is one form of information technology that supports those needs and is a fitting solution for companies in decision making. The objective of this research is the development of a data warehouse at PT JKL in order to support executives in analyzing the organization and to support the decision-making process. The methodology of this research comprises interviews with the related units, a literature study and document examination. The research also used the Nine-Step Methodology developed by Kimball to design the data warehouse. The result obtained is a data warehouse application that can summarize, integrate and present historical data multidimensionally. The conclusion from this research is that the data warehouse can help the company analyze data with flexible, fast, and effective data access. Keywords: Data Warehouse; Inventory; Contract Approval; Dashboard

  6. DATA WAREHOUSES SECURITY IMPLEMENTATION

    Directory of Open Access Journals (Sweden)

    Burtescu Emil

    2009-05-01

    Full Text Available Data warehouses were initially implemented and developed by large firms, which used them for working out managerial problems and for making decisions. Later on, because of economic tendencies and technological progress, the data warehou

  7. Evolutionary Multiobjective Query Workload Optimization of Cloud Data Warehouses

    Science.gov (United States)

    Dokeroglu, Tansel; Sert, Seyyit Alper; Cinar, Muhammet Serkan

    2014-01-01

    With the advent of Cloud databases, query optimizers need to find Pareto-optimal solutions in terms of response time and monetary cost. Our novel approach minimizes both objectives by deploying alternative virtual resources and query plans, making use of the virtual resource elasticity of the Cloud. We propose an exact multiobjective branch-and-bound algorithm and a robust multiobjective genetic algorithm for the optimization of distributed data warehouse query workloads on the Cloud. In order to investigate the effectiveness of our approach, we incorporate the devised algorithms into a prototype system. Finally, through several experiments that we have conducted with different workloads and virtual resource configurations, we report notable findings on alternative deployments, as well as the advantages and disadvantages of the multiobjective algorithms we propose. PMID:24892048
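
    For readers unfamiliar with the Pareto-optimality notion used here: a (response time, monetary cost) plan dominates another if it is no worse in both objectives and strictly better in at least one, and the optimizer keeps only non-dominated plans. A minimal Python sketch with invented candidate plans:

    ```python
    # Minimal Pareto-front filter over (response_time, monetary_cost) pairs.
    # The candidate plans below are invented for illustration.
    plans = {
        "2 small VMs": (120.0, 0.40),   # (seconds, dollars)
        "4 small VMs": (70.0,  0.80),
        "1 large VM":  (65.0,  0.90),
        "2 large VMs": (60.0,  1.60),
        "bad plan":    (130.0, 1.00),   # dominated: slower AND costlier
    }

    def dominates(a, b):
        """a dominates b if a is no worse in both objectives and not equal."""
        return all(x <= y for x, y in zip(a, b)) and a != b

    pareto = {name: obj for name, obj in plans.items()
              if not any(dominates(other, obj) for other in plans.values())}
    print(sorted(pareto))  # every plan except 'bad plan' survives
    ```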

  8. Selecting materialized views in a data warehouse

    Science.gov (United States)

    Zhou, Lijuan; Liu, Chi; Liu, Daxin

    2003-01-01

    A data warehouse contains many materialized views over the data provided by distributed heterogeneous databases, for the purpose of efficiently implementing decision-support or OLAP queries. It is important to select the right views to materialize so that a given set of queries can be answered. In this paper, we address this problem and design an algorithm to select a set of views to materialize in order to answer the most queries under the constraint of a given space. The algorithm presented in this paper aims at identifying a minimal set of views with which we can directly answer as many user query requests as possible. We use experiments to demonstrate our approach. The results show that our algorithm performs well. We implemented our algorithms, and a performance study shows that the proposed algorithm offers lower complexity, higher speed and feasible expandability.
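
    The abstract does not spell out the selection strategy, but the classic approach to this problem (the greedy heuristic of Harinarayan, Rajaraman and Ullman) repeatedly picks the view with the best benefit-per-space until the space budget is exhausted. A simplified sketch under that assumption — the view sizes and benefits are invented, and the full heuristic would recompute marginal benefits after each pick:

    ```python
    # Greedy materialized-view selection under a space budget. A simplified
    # sketch in the spirit of the classic greedy heuristic; sizes and query
    # benefits are invented, and benefits are treated as static for brevity.
    views = {            # view: (size in MB, total benefit to the query set)
        "sales_by_day":      (500, 900),
        "sales_by_region":   (120, 400),
        "sales_by_product":  (200, 500),
        "sales_by_customer": (800, 650),
    }

    def select_views(views, space_budget):
        chosen, used = [], 0
        remaining = dict(views)
        while remaining:
            # Pick the view with the highest benefit per unit of space
            # among those that still fit in the budget.
            fitting = {v: (s, b) for v, (s, b) in remaining.items()
                       if used + s <= space_budget}
            if not fitting:
                break
            best = max(fitting, key=lambda v: fitting[v][1] / fitting[v][0])
            chosen.append(best)
            used += remaining.pop(best)[0]
        return chosen, used

    print(select_views(views, space_budget=900))
    # (['sales_by_region', 'sales_by_product', 'sales_by_day'], 820)
    ```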

  9. Toolkit Design for Interactive Structured Graphics

    National Research Council Canada - National Science Library

    Bederson, Benjamin B; Grosjean, Jesse; Meyer, Jon

    2003-01-01

    .... We describe Jazz (a polylithic toolkit) and Piccolo (a monolithic toolkit), each of which we built to support interactive 2D structured graphics applications in general, and Zoomable User Interface applications in particular...

  10. A PROPOSAL OF DATA QUALITY FOR DATA WAREHOUSES ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Leo Willyanto Santoso

    2006-01-01

    Full Text Available The quality of the data provided is critical to the success of data warehousing initiatives. There is strong evidence that many organisations have significant data quality problems, and that these have substantial social and economic impacts. This paper describes a study which explores modeling of the dynamic parts of the data warehouse through a metamodel. This metamodel enables data warehouse management, design and evolution based on a high-level conceptual perspective, which can be linked to the actual structural and physical aspects of the data warehouse architecture. Moreover, the metamodel is capable of modeling complex activities, their interrelationships, the relationship of activities with data sources, and execution details.

  11. Business/Employers Influenza Toolkit

    Centers for Disease Control (CDC) Podcasts

    2011-09-06

    This podcast promotes the "Make It Your Business To Fight The Flu" toolkit for Businesses and Employers. The toolkit provides information and recommended strategies to help businesses and employers promote the seasonal flu vaccine. Additionally, employers will find flyers, posters, and other materials to post and distribute in the workplace.  Created: 9/6/2011 by Office of Infectious Diseases, Office of the Director (OD).   Date Released: 9/7/2011.

  12. Building a data warehouse with examples in SQL server

    CERN Document Server

    Rainardi, Vincent

    2008-01-01

    ""Building a Data Warehouse: With Examples in SQL Server"" describes how to build a data warehouse completely from scratch and shows practical examples on how to do it. Author Rainardi also describes some practical issues that developers are likely to encounter in their first data warehousing project, along with solutions and advice.

  13. Solar Integration National Dataset Toolkit | Grid Modernization | NREL

    Science.gov (United States)

    NREL is working on a Solar Integration National Dataset (SIND) Toolkit to enable researchers to perform U.S. regional solar generation integration studies. It will provide modeled, coherent subhourly solar power data

  14. The Reconstruction Toolkit (RTK), an open-source cone-beam CT reconstruction toolkit based on the Insight Toolkit (ITK)

    International Nuclear Information System (INIS)

    Rit, S; Vila Oliva, M; Sarrut, D; Brousmiche, S; Labarbe, R; Sharp, G C

    2014-01-01

    We propose the Reconstruction Toolkit (RTK, http://www.openrtk.org), an open-source toolkit for fast cone-beam CT reconstruction, based on the Insight Toolkit (ITK) and using GPU code extracted from Plastimatch. RTK is developed by an open consortium (see affiliations) under the non-contaminating Apache 2.0 license. The quality of the platform is checked daily with regression tests in partnership with Kitware, the company supporting ITK. Several features are already available: Elekta, Varian and IBA inputs, multi-threaded Feldkamp-Davis-Kress reconstruction on CPU and GPU, Parker short-scan weighting, multi-threaded CPU and GPU forward projectors, etc. Each feature is accessible either through command line tools or through C++ classes that can be included in independent software. A MIDAS community has been opened to share CatPhan datasets of several vendors (Elekta, Varian and IBA). RTK will be used in the upcoming cone-beam CT scanner developed by IBA for proton therapy rooms. Many features are under development: new input format support, iterative reconstruction, hybrid Monte Carlo / deterministic CBCT simulation, etc. RTK has been built to freely share tomographic reconstruction developments between researchers and is open for new contributions.

  15. Quality assessment of digital annotated ECG data from clinical trials by the FDA ECG Warehouse.

    Science.gov (United States)

    Sarapa, Nenad

    2007-09-01

    The FDA mandates that digital electrocardiograms (ECGs) from 'thorough' QTc trials be submitted to the ECG Warehouse in Health Level 7 Extensible Markup Language (XML) format with annotated onset and offset points of waveforms. The FDA has not disclosed the exact Warehouse metrics and minimal acceptable quality standards. The author describes the Warehouse scoring algorithms and metrics used by the FDA, points out ways to improve FDA review, and suggests Warehouse benefits for pharmaceutical sponsors. The Warehouse ranks individual ECGs according to their score for each quality metric and produces histogram distributions with Warehouse-specific thresholds that identify ECGs of questionable quality. Automatic Warehouse algorithms assess the quality of QT annotation and the duration of manual QT measurement by the central ECG laboratory.

  16. MouseMine: a new data warehouse for MGI.

    Science.gov (United States)

    Motenko, H; Neuhauser, S B; O'Keefe, M; Richardson, J E

    2015-08-01

    MouseMine (www.mousemine.org) is a new data warehouse for accessing mouse data from Mouse Genome Informatics (MGI). Based on the InterMine software framework, MouseMine supports powerful query, reporting, and analysis capabilities, the ability to save and combine results from different queries, easy integration into larger workflows, and a comprehensive Web Services layer. Through MouseMine, users can access a significant portion of MGI data in new and useful ways. Importantly, MouseMine is also a member of a growing community of online data resources based on InterMine, including those established by other model organism databases. Adopting common interfaces and collaborating on data representation standards are critical to fostering cross-species data analysis. This paper presents a general introduction to MouseMine, presents examples of its use, and discusses the potential for further integration into the MGI interface.
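
    As an illustration of the Web Services layer, InterMine instances expose a common client API. A minimal sketch using the intermine Python client (assuming it is installed via pip, and that the Gene view paths shown are available in MouseMine's data model):

    ```python
    # Minimal MouseMine query via the InterMine Python client
    # (pip install intermine). The view paths used here are assumptions
    # about MouseMine's data model.
    from intermine.webservice import Service

    service = Service("http://www.mousemine.org/mousemine/service")

    query = service.new_query("Gene")
    query.add_view("symbol", "name", "primaryIdentifier")
    query.add_constraint("symbol", "=", "Pax6")

    for row in query.rows():
        print(row["symbol"], row["name"], row["primaryIdentifier"])
    ```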

  17. Wind Integration National Dataset Toolkit | Grid Modernization | NREL

    Science.gov (United States)

    The Wind Integration National Dataset (WIND) Toolkit is an update and expansion of the Eastern Wind Integration Data Set and Western Wind Integration Data Set. It supports the next generation of wind integration studies.

  18. The Data Warehouse in Service Oriented Architectures and Network Centric Warfare

    National Research Council Canada - National Science Library

    Lenahan, Jack

    2005-01-01

    ... at all policy and command levels to support superior decision making? Analyzing the anticipated massive amount of GIG data will almost certainly require data warehouses and federated data warehouses...

  19. Optimal time policy for deteriorating items of two-warehouse

    Indian Academy of Sciences (India)

    ... goods in which the first is a rented warehouse and the second is an owned warehouse, with items deteriorating at two different rates. The aim of this study is to determine the optimal order quantity that maximizes the profit of the proposed model. Finally, some numerical examples and a sensitivity analysis of the parameters are given to validate ...

  20. A novel approach for intelligent distribution of data warehouses

    Directory of Open Access Journals (Sweden)

    Abhay Kumar Agarwal

    2016-07-01

    Full Text Available With the continuous growth in the amount of data, data storage systems have come a long way from flat file systems to RDBMS, Data Warehousing (DW) and Distributed Data Warehousing systems. This paper proposes a new distributed data warehouse model, built on a novel approach to the intelligent distribution of a data warehouse; overall, the model is named Intelligent and Distributed Data Warehouse (IDDW). The proposed model has N levels and is based on a top-down hierarchical design approach to building a distributed data warehouse. The building process of IDDW starts with the identification of the various locations where DWs may be built. Initially, a single location at the top-most level of IDDW is considered, where a DW is built; thereafter, a DW at any other location of any level may be built. A method to transfer the concerned data from any upper-level DW to the concerned lower-level DW is also presented in the paper. The paper also presents IDDW modeling, its architecture based on that modeling, and the internal organization of IDDW via which all operations within IDDW are performed.

  1. Computer modeling of commercial refrigerated warehouse facilities

    International Nuclear Information System (INIS)

    Nicoulin, C.V.; Jacobs, P.C.; Tory, S.

    1997-01-01

    The use of computer models to simulate the energy performance of large commercial refrigeration systems typically found in food processing facilities is an area of engineering practice that has seen little development to date. Current techniques for predicting energy consumption by such systems have focused on temperature-bin methods of analysis. Existing simulation tools such as DOE2 are designed to model commercial buildings and grocery store refrigeration systems. The HVAC and refrigeration system performance models in these simulation tools cover equipment common to commercial buildings and groceries, and respond to energy-efficiency measures likely to be applied to these building types. The applicability of traditional building energy simulation tools to modeling refrigerated warehouse performance and analyzing energy-saving options is limited. The paper will present the results of modeling work undertaken to evaluate energy savings resulting from incentives offered by a California utility to its Refrigerated Warehouse Program participants. The TRNSYS general-purpose transient simulation model was used to predict facility performance and estimate program savings. Custom TRNSYS components were developed to address modeling issues specific to refrigerated warehouse systems, including warehouse loading door infiltration calculations, an evaporator model, single-stage and multi-stage compressor models, evaporative condenser models, and defrost energy requirements. The main focus of the paper will be on the modeling approach. The results from the computer simulations, along with overall program impact evaluation results, will also be presented

  2. The SpeX Prism Library Analysis Toolkit: Design Considerations and First Results

    Science.gov (United States)

    Burgasser, Adam J.; Aganze, Christian; Escala, Ivana; Lopez, Mike; Choban, Caleb; Jin, Yuhui; Iyer, Aishwarya; Tallis, Melisa; Suarez, Adrian; Sahi, Maitrayee

    2016-01-01

    Various observational and theoretical spectral libraries now exist for galaxies, stars, planets and other objects, which have proven useful for classification, interpretation, simulation and model development. Effective use of these libraries relies on analysis tools, which are often left to users to develop. In this poster, we describe a program to develop a combined spectral data repository and Python-based analysis toolkit for low-resolution spectra of very low mass dwarfs (late M, L and T dwarfs), which enables visualization, spectral index analysis, classification, atmosphere model comparison, and binary modeling for nearly 2000 library spectra and user-submitted data. The SpeX Prism Library Analysis Toolkit (SPLAT) is being constructed as a collaborative, student-centered, learning-through-research model with high school, undergraduate and graduate students and regional science teachers, who populate the database and build the analysis tools through quarterly challenge exercises and summer research projects. In this poster, I describe the design considerations of the toolkit, its current status and development plan, and report the first published results led by undergraduate students. The combined data and analysis tools are ideal for characterizing cool stellar and exoplanetary atmospheres (including direct exoplanetary spectra observations by Gemini/GPI, VLT/SPHERE, and JWST), and the toolkit design can be readily adapted for other spectral datasets as well.This material is based upon work supported by the National Aeronautics and Space Administration under Grant No. NNX15AI75G. SPLAT code can be found at https://github.com/aburgasser/splat.

  3. The visit-data warehouse: enabling novel secondary use of health information exchange data.

    Science.gov (United States)

    Fleischman, William; Lowry, Tina; Shapiro, Jason

    2014-01-01

    Health Information Exchange (HIE) efforts face challenges with data quality and performance, and this becomes especially problematic when data is leveraged for uses beyond primary clinical use. We describe a secondary data infrastructure focusing on patient-encounter, nonclinical data that was built on top of a functioning HIE platform to support novel secondary data uses and prevent potentially negative impacts these uses might have otherwise had on HIE system performance. HIE efforts have generally formed for the primary clinical use of individual clinical providers searching for data on individual patients under their care, but many secondary uses have been proposed and are being piloted to support care management, quality improvement, and public health. This infrastructure review describes a module built into the Healthix HIE. Healthix, based in the New York metropolitan region, comprises 107 participating organizations with 29,946 acute-care beds in 383 facilities, and includes more than 9.2 million unique patients. The primary infrastructure is based on the InterSystems proprietary Caché data model distributed across servers in multiple locations, and uses a master patient index to link individual patients' records across multiple sites. We built a parallel platform, the "visit data warehouse," of patient encounter data (demographics, date, time, and type of visit) using a relational database model to allow accessibility using standard database tools and flexibility for developing secondary data use cases. These four secondary use cases include the following: (1) tracking encounter-based metrics in a newly established geriatric emergency department (ED), (2) creating a dashboard to provide a visual display as well as a tabular output of near-real-time de-identified encounter data from the data warehouse, (3) tracking frequent ED users as part of a regional-approach to case management intervention, and (4) improving an existing quality improvement program
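
    The "visit data warehouse" the authors describe stores only non-clinical encounter attributes in a conventional relational model. A hypothetical sketch of such a table in Python with sqlite3 — the column names are invented, since the actual Healthix schema is not published in the abstract:

    ```python
    import sqlite3

    # Hypothetical encounter table for a visit-data warehouse; the real
    # Healthix schema is not given in the abstract.
    conn = sqlite3.connect(":memory:")
    conn.execute("""
    CREATE TABLE encounter (
        patient_id  TEXT,    -- from the HIE master patient index
        facility_id TEXT,
        visit_type  TEXT,    -- e.g. 'ED', 'inpatient'
        visit_start TEXT,    -- ISO-8601 timestamp
        age_group   TEXT     -- de-identified demographic attribute
    )
    """)

    # Secondary-use query (use case 3): patients with frequent ED visits.
    frequent_ed = conn.execute("""
    SELECT patient_id, COUNT(*) AS ed_visits
    FROM encounter
    WHERE visit_type = 'ED'
    GROUP BY patient_id
    HAVING COUNT(*) >= 4
    ORDER BY ed_visits DESC
    """).fetchall()
    ```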

  4. 19 CFR 19.14 - Materials for use in manufacturing warehouse.

    Science.gov (United States)

    2010-04-01

    ... warehouse is located under an immediate transportation without appraisement entry or warehouse withdrawal for transportation, whichever is applicable. (b) Bond required. Before the transfer of the merchandise... the manufacture of articles as authorized by law. Port Director (d) Domestic spirits and wines. For...

  5. 27 CFR 28.286 - Receipt in customs bonded warehouse.

    Science.gov (United States)

    2010-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Receipt in customs bonded... in Customs Bonded Warehouse § 28.286 Receipt in customs bonded warehouse. On receipt of the distilled spirits or wine and the related TTB Form 5100.11 or 5110.30 as the case may be, the customs officer in...

  6. A qualitative study of clinic and community member perspectives on intervention toolkits: "Unless the toolkit is used it won't help solve the problem".

    Science.gov (United States)

    Davis, Melinda M; Howk, Sonya; Spurlock, Margaret; McGinnis, Paul B; Cohen, Deborah J; Fagnan, Lyle J

    2017-07-18

    Intervention toolkits are common products of grant-funded research in public health and primary care settings. Toolkits are designed to address the knowledge translation gap by speeding implementation and dissemination of research into practice. However, few studies describe the characteristics of effective intervention toolkits and their implementation. Therefore, we conducted this study to explore what clinic and community-based users want in intervention toolkits and to identify the factors that support application in practice. In this qualitative descriptive study we conducted focus groups and interviews with a purposive sample of community health coalition members, public health experts, and primary care professionals between November 2010 and January 2012. The transdisciplinary research team used thematic analysis to identify themes and a cross-case comparative analysis to explore variation by participant role and toolkit experience. Ninety-six participants representing primary care (n = 54, 56%) and community settings (n = 42, 44%) participated in 18 sessions (13 focus groups, five key informant interviews). Participants ranged from naïve to expert in toolkit development; many reported limited application of toolkits in actual practice. Participants wanted toolkits targeted at the right audience and demonstrated to be effective. Well-organized toolkits, often with a quick-start guide, containing tools that were easy to tailor and apply, were desired. Irrespective of perceived quality, participants experienced with practice change emphasized that leadership, staff buy-in, and facilitative support were essential for intervention toolkits to be translated into changes in clinic or public-health practice. Given the emphasis on toolkits in supporting implementation and dissemination of research and clinical guidelines, studies are warranted to determine when and how toolkits are used. Funders, policy makers, researchers, and leaders in primary care and

  7. Criticality calculation of the nuclear material warehouse of the ININ

    International Nuclear Information System (INIS)

    Garcia, T.; Angeles, A.; Flores C, J.

    2013-10-01

    In this work, the nuclear safety conditions of the nuclear fuel warehouse of the TRIGA Mark III reactor at the Instituto Nacional de Investigaciones Nucleares (ININ) were determined both under normal conditions and in the event of an accident. The warehouse contains standard LEU 8.5/20 fuel elements, a control rod with a follower of standard LEU 8.5/20 fuel type, LEU 30/20 fuel elements, and the SUR-100 reactor fuel. To check the subcritical state of the warehouse, the effective multiplication factor (keff) was calculated. The keff calculation was carried out with the code MCNPX. (Author)

  8. Energy retrofit analysis toolkits for commercial buildings: A review

    International Nuclear Information System (INIS)

    Lee, Sang Hoon; Hong, Tianzhen; Piette, Mary Ann; Taylor-Lange, Sarah C.

    2015-01-01

    Retrofit analysis toolkits can be used to optimize energy or cost savings from retrofit strategies, accelerating the adoption of ECMs (energy conservation measures) in buildings. This paper provides an up-to-date review of the features and capabilities of 18 energy retrofit toolkits, including the ECMs and the calculation engines. The fidelity of the calculation techniques, a driving component of retrofit toolkits, was evaluated. An evaluation of the issues that hinder effective retrofit analysis in terms of accessibility, usability, data requirements, and the application of efficiency measures provides valuable insights into advancing the field. From this review, the following general findings emerged: (1) toolkits developed primarily in the private sector use empirically data-driven methods or benchmarking to provide ease of use, (2) almost all of the toolkits which used EnergyPlus or DOE-2 were freely accessible, but suffered from complexity and longer data input and simulation run times, (3) in general, there appeared to be a fine line between having too much detail, resulting in a long analysis time, and too little detail, which sacrificed modeling fidelity. These insights provide an opportunity to enhance the design and development of existing and new retrofit toolkits in the future. - Highlights: • Retrofit analysis toolkits can accelerate the adoption of energy efficiency measures. • A comprehensive review of 19 retrofit analysis toolkits was conducted. • Retrofit toolkits have diverse features, data requirements and computing methods. • Empirical data-driven, normative and detailed energy modeling methods are used. • Identified immediate areas for improvement for retrofit analysis toolkits

  9. Design and Control of Warehouse Order Picking: a literature review

    NARCIS (Netherlands)

    M.B.M. de Koster (René); T. Le-Duc (Tho); K.J. Roodbergen (Kees-Jan)

    2006-01-01

    textabstractOrder picking has long been identified as the most labour-intensive and costly activity for almost every warehouse; the cost of order picking is estimated to be as much as 55% of the total warehouse operating expense. Any underperformance in order picking can lead to unsatisfactory

  10. Promotion bureau warehouse system design. Case study in University of AA

    Science.gov (United States)

    Parwati, N.; Qibtiyah, M.

    2017-12-01

    The warehouse is one of the important parts of an industry. With a good warehousing system, an industry can improve the effectiveness of its performance, so that profits for the company can continue to increase; with a poorly organized warehouse system, by contrast, the effectiveness of the industry itself is likely to decline. In this research, the object was the warehousing system of the promotion bureau at University AA. To improve the effectiveness of the warehousing system, a warehouse layout design was produced by assigning goods to categories based on the flow of goods in and out of the warehouse, using the ABC analysis method. In addition, an information system was designed to assist in controlling the system and to support the demand from every bureau and department in the university.
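
    ABC analysis ranks items by their share of total movement and splits them into classes — commonly A for roughly the top 80% of volume, B for the next 15%, and C for the rest — so that fast movers get the most accessible locations. A small sketch in Python; the item names, flows and the 80/15/5 cut-offs are illustrative assumptions, not data from the study:

    ```python
    # ABC classification of warehouse items by annual flow (units moved).
    # Item names, flows and the 80/15/5 cut-offs are illustrative assumptions.
    flows = {"brochure": 5000, "banner": 2600, "pen": 1500,
             "mug": 500, "flag": 250, "sticker": 150}

    total = sum(flows.values())
    cumulative = 0.0
    classes = {}
    for item, flow in sorted(flows.items(), key=lambda kv: -kv[1]):
        cumulative += flow / total
        # Top ~80% of movement -> A, next ~15% -> B, remainder -> C.
        classes[item] = ("A" if cumulative <= 0.80
                         else "B" if cumulative <= 0.95 else "C")

    print(classes)
    # {'brochure': 'A', 'banner': 'A', 'pen': 'B',
    #  'mug': 'C', 'flag': 'C', 'sticker': 'C'}
    ```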

  11. SIGKit: Software for Introductory Geophysics Toolkit

    Science.gov (United States)

    Kruse, S.; Bank, C. G.; Esmaeili, S.; Jazayeri, S.; Liu, S.; Stoikopoulos, N.

    2017-12-01

    The Software for Introductory Geophysics Toolkit (SIGKit) affords students the opportunity to create model data and perform simple processing of field data for various geophysical methods. SIGkit provides a graphical user interface built with the MATLAB programming language, but can run even without a MATLAB installation. At this time SIGkit allows students to pick first arrivals and match a two-layer model to seismic refraction data; grid total-field magnetic data, extract a profile, and compare this to a synthetic profile; and perform simple processing steps (subtraction of a mean trace, hyperbola fit) to ground-penetrating radar data. We also have preliminary tools for gravity, resistivity, and EM data representation and analysis. SIGkit is being built by students for students, and the intent of the toolkit is to provide an intuitive interface for simple data analysis and understanding of the methods, and act as an entrance to more sophisticated software. The toolkit has been used in introductory courses as well as field courses. First reactions from students are positive. Think-aloud observations of students using the toolkit have helped identify problems and helped shape it. We are planning to compare the learning outcomes of students who have used the toolkit in a field course to students in a previous course to test its effectiveness.

  12. A toolkit for computerized operating procedure of complex industrial systems with IVI-COM technology

    International Nuclear Information System (INIS)

    Zhou Yangping; Dong Yujie; Huang Xiaojing; Ye Jingliang; Yoshikawa, Hidekazu

    2013-01-01

    A human interface toolkit is proposed to help users develop computerized operating procedures for complex industrial systems such as Nuclear Power Plants (NPPs). Coupled with a friendly graphical interface, this integrated tool includes a database, a procedure editor and a procedure executor. A three-layer hierarchy is adopted to express the complexity of an operating procedure: mission, process and node. There are 10 kinds of node: entrance, exit, hint, manual input, detector, actuator, data treatment, branch, judgment and plug-in. The computerized operating procedure senses and actuates the actual industrial systems through an interface based on IVI-COM (Interchangeable Virtual Instrumentation-Component Object Model) technology. A prototype of this human interface toolkit has been applied to develop a simple computerized operating procedure for a simulated NPP. (author)
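
    The mission/process/node hierarchy with typed nodes suggests a straightforward interpreter: walk a graph of nodes, dispatching on node type. A minimal hypothetical sketch in Python, covering only three of the ten node kinds (hint, judgment, exit); none of this is the toolkit's actual code:

    ```python
    # Minimal interpreter for a procedure graph with typed nodes -- a
    # hypothetical sketch covering 3 of the paper's 10 node kinds.
    procedure = {
        "start": {"type": "hint", "text": "Verify pump A is running",
                  "next": "check"},
        "check": {"type": "judgment", "question": "Pressure > 2.0 MPa?",
                  "yes": "done", "no": "start"},
        "done":  {"type": "exit"},
    }

    def run(procedure, answers):
        """Execute the procedure, feeding canned operator answers to judgments."""
        node_id = "start"
        while True:
            node = procedure[node_id]
            if node["type"] == "hint":
                print("HINT:", node["text"])
                node_id = node["next"]
            elif node["type"] == "judgment":
                answer = answers.pop(0)  # operator's yes/no response
                node_id = node["yes"] if answer else node["no"]
            elif node["type"] == "exit":
                print("Procedure complete.")
                return

    run(procedure, answers=[False, True])  # first check fails, second passes
    ```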

  13. A Performance Evaluation of Online Warehouse Update Algorithms

    Science.gov (United States)

    1998-01-01

    able to present a fully consistent version of the warehouse to the queries while the warehouse is being updated. Multiversioning has been used...LST97]). Specialized multiversion access structures have also been proposed ([LS89, LS90, dBS96, BC97, VV97, MOPW98]). In the context of OLTP systems... collection processes. 2.1 Multiversioning. MVNL supports multiple versions by using Time Travel ([Sto87]). Each row has two extra attributes, Tmin
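
    Time Travel in the Postgres tradition tags each row version with creating and invalidating timestamps (the Tmin above and an implied Tmax); a reader at time t sees exactly the versions with Tmin <= t < Tmax, so queries stay consistent while updates append new versions. A small illustrative sketch of that classic scheme — MVNL's exact details may differ:

    ```python
    # Time-travel row visibility with Tmin/Tmax attributes -- an illustrative
    # sketch of the classic scheme; MVNL's exact details may differ.
    INFINITY = float("inf")

    # Version chain for one logical row: (Tmin, Tmax, value).
    versions = [
        (10, 25, {"qty": 100}),       # created at t=10, superseded at t=25
        (25, 40, {"qty": 80}),
        (40, INFINITY, {"qty": 95}),  # current version
    ]

    def read_as_of(versions, t):
        """Return the row version visible to a query running at time t."""
        for tmin, tmax, value in versions:
            if tmin <= t < tmax:
                return value
        return None  # row did not exist at time t

    assert read_as_of(versions, 30) == {"qty": 80}  # sees the old version
    assert read_as_of(versions, 99) == {"qty": 95}  # current readers see latest
    assert read_as_of(versions, 5) is None
    ```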

  14. Data warehouse governance programs in healthcare settings: a literature review and a call to action.

    Science.gov (United States)

    Elliott, Thomas E; Holmes, John H; Davidson, Arthur J; La Chance, Pierre-Andre; Nelson, Andrew F; Steiner, John F

    2013-01-01

    Given the extensive data stored in healthcare data warehouses, data warehouse governance policies are needed to ensure data integrity and privacy. This review examines the current state of the data warehouse governance literature as it applies to healthcare data warehouses, identifies knowledge gaps, provides recommendations, and suggests approaches for further research. A comprehensive literature search using five databases, journal article title searches, and citation searches was conducted between 1997 and 2012. Data warehouse governance documents from two healthcare systems in the USA were also reviewed. A modified version of nine components from the Data Governance Institute Framework for data warehouse governance guided the qualitative analysis. Fifteen articles were retrieved. Only three were related to healthcare settings, each of which addressed only one of the nine framework components. Of the remaining 12 articles, 10 addressed between one and seven framework components and the remainder addressed none. Each of the two data warehouse governance plans obtained from healthcare systems in the USA addressed a subset of the framework components, and between them they covered all nine. While published data warehouse governance policies are rare, the 15 articles and two healthcare organizational documents reviewed in this study may provide guidance for creating such policies. Additional research is needed in this area to ensure that data warehouse governance policies are feasible and effective. The gap between the development of data warehouses in healthcare settings and formal governance policies is substantial, as evidenced by the sparse literature in this domain.

  15. Financing agribusiness: Insurance coverage as protection against credit risk of warehouse receipt collateral

    Directory of Open Access Journals (Sweden)

    Jovičić Daliborka

    2017-01-01

    Full Text Available Financing agribusiness by warehouse receipts allows agricultural producers to obtain working capital on the basis of agricultural products stored in licensed warehouses as collateral. The implementation of the system of licensed warehouses and the issuance of warehouse receipts as collateral for obtaining a bank loan is supported by the European Bank for Reconstruction and Development and has had positive results in the neighbouring countries. The precondition for financing this project was to establish a Compensation Fund providing insurance coverage for licensed warehouses against professional liability. However, in the absence of an adequate legal framework, operational risk may arise. Bearing in mind that Serbia has a tradition in the insurance industry and a number of operating insurance companies, the issue is that of the economic benefit of insuring against this risk and the method of doing so. The paper presents a detailed analysis of the operation of the Fund, its capital requirement and solvency margin, and a critical review of the Law on Public Warehouses, which regulates the rights and obligations of the Compensation Fund in the case of loss occurrence.

  16. Data Warehouse for support to the electric energy commercialization; Data Warehouse para apoio a comercializacao de energia eletrica

    Energy Technology Data Exchange (ETDEWEB)

    Lanzotti, Carla R.; Correia, Paulo B. [Universidade Estadual de Campinas (UNICAMP), SP (Brazil). Faculdade de Engenharia Mecanica]. E-mails: clanzotti@yahoo.com; pcorreia@fem.unicamp.br

    2006-07-01

    This paper specifies a database, built as a data warehouse, containing energy market, electric system and economic information, allowing the visualization and analysis of the data through tables and dynamic charts. This data warehouse corresponds to the module 'Information Base' of a platform for supporting electric power commercialization. The platform is a computer program intended to help agents interested in commercializing energy and is formed of three modules: Information Base, Contracting Strategies and Contracting Process. It is expected that the use of this database, together with the Platform, establishes favourable conditions for agents interested in electric energy commercialization.

  17. Network Science Research Laboratory (NSRL) Discrete Event Toolkit

    Science.gov (United States)

    2016-01-01

    ARL-TR-7579 ● JAN 2016 ● US Army Research Laboratory. Network Science Research Laboratory (NSRL) Discrete Event Toolkit, by Theron Trout and Andrew J Toth, Computational and Information Sciences Directorate, ARL.

  18. Data Warehouse Architecture for Army Installations

    National Research Council Canada - National Science Library

    Reddy, Prameela

    1999-01-01

    A data warehouse is a single store of information to answer complex queries from management, using cross-functional data to perform advanced data analysis methods and to compare with historical data...

  19. Measuring employee satisfaction in new offices - the WODI toolkit

    NARCIS (Netherlands)

    Maarleveld, M.; Volker, L.; van der Voordt, Theo

    2009-01-01

    Purpose: This paper presents a toolkit to measure employee satisfaction and perceived labour productivity as affected by different workplace strategies. The toolkit is illustrated by a case study of the Dutch Revenue Service.
    Methodology: The toolkit has been developed by a review of

  20. 27 CFR 28.28 - Withdrawal of wine and distilled spirits from customs bonded warehouses.

    Science.gov (United States)

    2010-04-01

    27 CFR, Alcohol, Tobacco Products and Firearms, Miscellaneous Provisions, Customs Bonded Warehouses. § 28.28 Withdrawal of wine and distilled spirits from customs bonded warehouses. Wine and bottled distilled spirits entered into customs bonded warehouses as provided...

  1. Analysis and Design of a Data Warehouse at PT Pelita Tatamas Jaya

    Directory of Open Access Journals (Sweden)

    Choirul Huda

    2010-12-01

    Full Text Available The purpose of this research is to assist in providing information to support decision-making processes in sales, purchasing, and inventory control at PT Pelita Tatamas Jaya. With the support of a data warehouse, business leaders can make decisions more quickly and precisely. The research methodology includes analysis of the current systems, library research, and design of a data warehouse schema using the star schema. The result of this research is the availability of a data warehouse that can generate information quickly and precisely, thus helping the company in making decisions. The conclusion of this research is that the data warehouse application can serve as a decision-making aid for the related parties at PT Pelita Tatamas Jaya.
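
    To make the star-schema design concrete, here is a minimal sketch using Python's built-in sqlite3 module; the table and column names (dim_product, dim_time, fact_sales) are illustrative assumptions, not the schema actually used at PT Pelita Tatamas Jaya.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        cur = conn.cursor()

        # Dimension tables hold descriptive attributes; the fact table holds measures.
        cur.executescript("""
            CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT);
            CREATE TABLE dim_time (time_key INTEGER PRIMARY KEY, day TEXT, month TEXT, year INTEGER);
            CREATE TABLE fact_sales (product_key INTEGER REFERENCES dim_product,
                                     time_key INTEGER REFERENCES dim_time,
                                     quantity INTEGER, revenue REAL);
        """)

        cur.execute("INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware')")
        cur.execute("INSERT INTO dim_time VALUES (1, '2010-11-01', '2010-11', 2010)")
        cur.execute("INSERT INTO fact_sales VALUES (1, 1, 10, 250.0)")

        # A typical decision-support query: revenue per category per year.
        for row in cur.execute("""
                SELECT p.category, t.year, SUM(f.revenue)
                FROM fact_sales f
                JOIN dim_product p ON f.product_key = p.product_key
                JOIN dim_time t ON f.time_key = t.time_key
                GROUP BY p.category, t.year"""):
            print(row)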

  2. Sierra toolkit computational mesh conceptual model

    International Nuclear Information System (INIS)

    Baur, David G.; Edwards, Harold Carter; Cochran, William K.; Williams, Alan B.; Sjaardema, Gregory D.

    2010-01-01

    The Sierra Toolkit computational mesh is a software library intended to support massively parallel multi-physics computations on dynamically changing unstructured meshes. This domain of intended use is inherently complex due to distributed memory parallelism, parallel scalability, heterogeneity of physics, heterogeneous discretization of an unstructured mesh, and runtime adaptation of the mesh. Management of this inherent complexity begins with a conceptual analysis and modeling of this domain of intended use; i.e., development of a domain model. The Sierra Toolkit computational mesh software library is designed and implemented based upon this domain model. Software developers using, maintaining, or extending the Sierra Toolkit computational mesh library must be familiar with the concepts/domain model presented in this report.

  3. Mastering data warehouse design relational and dimensional techniques

    CERN Document Server

    Imhoff, Claudia; Geiger, Jonathan G

    2003-01-01

    A cutting-edge response to Ralph Kimball's challenge to the data warehouse community that answers some tough questions about the effectiveness of the relational approach to data warehousing. Written by one of the best-known exponents of the Bill Inmon approach to data warehousing, it addresses head-on the tough issues raised by Kimball and explains how to choose the best modeling technique for solving common data warehouse design problems. It weighs the pros and cons of relational vs. dimensional modeling techniques and focuses on tough modeling problems, including creating and maintaining keys and modeling c...

  4. Development of a clinical data warehouse from an intensive care clinical information system.

    Science.gov (United States)

    de Mul, Marleen; Alons, Peter; van der Velde, Peter; Konings, Ilse; Bakker, Jan; Hazelzet, Jan

    2012-01-01

    There are relatively few institutions that have developed clinical data warehouses, containing patient data from the point of care. Because of the various care practices, data types and definitions, and the perceived incompleteness of clinical information systems, the development of a clinical data warehouse is a challenge. In order to deal with managerial and clinical information needs, as well as educational and research aims that are important in the setting of a university hospital, Erasmus Medical Center Rotterdam, The Netherlands, developed a data warehouse incrementally. In this paper we report on the in-house development of an integral part of the data warehouse specifically for the intensive care units (ICU-DWH). It was modeled using Atos Origin Metadata Frame method. The paper describes the methodology, the development process and the content of the ICU-DWH, and discusses the need for (clinical) data warehouses in intensive care. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  5. Transportation librarian's toolkit

    Science.gov (United States)

    2007-12-01

    The Transportation Librarian's Toolkit is a product of the Transportation Library Connectivity pooled fund study, TPF-5(105), a collaborative, grass-roots effort by transportation libraries to enhance information accessibility and professional expert...

  6. Selection of Forklift Unit for Warehouse Operation by Applying Multi-Criteria Analysis

    Directory of Open Access Journals (Sweden)

    Predrag Atanasković

    2013-07-01

    Full Text Available This paper presents research related to the choice of criteria that can be used to perform an optimal selection of a forklift unit for warehouse operation. The analysis was conducted with the aim of exploring the requirements and defining the relevant criteria that are important when an investment decision is made for forklift procurement, and of determining, by applying multi-criteria analysis to the conducted research, the appropriate parameters and their relative weights that form the input data and database for selection of the optimal handling unit. The paper presents an example of choosing the optimal forklift based on the selected criteria for the purpose of making the relevant investment decision.

  7. 7 CFR 1427.16 - Movement and protection of warehouse-stored cotton.

    Science.gov (United States)

    2010-01-01

    7 CFR, Agriculture, Regulations of the Department of Agriculture (Continued), Commodity Credit Corporation, Cotton Loan and Loan Deficiency Payments. § 1427.16 Movement and protection of warehouse-stored cotton. (a)...

  8. Warehouse Plan for the Multi-Canister Overpacks (MCO) and Baskets

    International Nuclear Information System (INIS)

    MARTIN, M.K.

    2000-01-01

    The Multi-Canister Overpacks (MCO) will contain spent nuclear fuel (SNF) removed from the K East and West Basins. The SNF will be placed in fuel storage baskets that will be stacked inside the MCOs. Approximately 400 MCOs and 2170 baskets will be fabricated for this purpose. These MCOs, loaded with SNF, will be placed in interim storage in the Canister Storage Building (CSB) located in the 200 Area of the Hanford Site. The MCOs consist of different components/sub-assemblies that will be manufactured by one or more vendors. All components/sub-assemblies will be shipped to the Hanford Site Central Stores Warehouse, 2355 Stevens Drive, Building 1163 in the 1100 Area, for inspection and storage until these components are required at the CSB and K Basins. The MCO fuel storage baskets will be manufactured in the MCO basket fabrication shop located in Building 328 of the Hanford Site 300 Area. The MCO baskets will be inspected at the fabrication shop before shipment to the Central Stores Warehouse for storage. The MCO components and baskets will be stored as received from the manufacturer with specified protective coatings, wrappings, and packaging intact to maintain the mechanical integrity of the components and to prevent corrosion. The components and baskets will be shipped as needed from the warehouse to the CSB and K Basins. This warehouse plan includes the requirements for receipt of MCO components and baskets from the manufacturers and storage at the Hanford Site Central Stores Warehouse. Transportation of the MCO components and baskets from the warehouse, unwrapping, and assembly of the MCOs are the responsibility of SNF Operations and are not included in this plan.

  9. A Statistical Toolkit for Data Analysis

    International Nuclear Information System (INIS)

    Donadio, S.; Guatelli, S.; Mascialino, B.; Pfeiffer, A.; Pia, M.G.; Ribon, A.; Viarengo, P.

    2006-01-01

    The present project aims to develop an open-source and object-oriented software Toolkit for statistical data analysis. Its statistical testing component contains a variety of goodness-of-fit tests, from Chi-squared to Kolmogorov-Smirnov, to less well-known but generally much more powerful tests such as Anderson-Darling, Goodman, Fisz-Cramer-von Mises, Kuiper, and Tiku. Thanks to the component-based design and the usage of standard abstract interfaces for data analysis, this tool can be used by other data analysis systems or integrated in experimental software frameworks. The Toolkit has been released and is downloadable from the web. In this paper we describe the statistical details of the algorithms, the computational features of the Toolkit, and the code validation.
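
    For a flavour of this kind of goodness-of-fit testing, here is a short sketch using SciPy rather than the toolkit itself (whose API the abstract does not show):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        data = rng.normal(loc=0.0, scale=1.0, size=500)

        # Kolmogorov-Smirnov test against a standard normal distribution.
        ks_stat, ks_p = stats.kstest(data, "norm")
        print(f"KS statistic={ks_stat:.4f}, p-value={ks_p:.4f}")

        # Anderson-Darling: generally more powerful, especially in the tails.
        ad = stats.anderson(data, dist="norm")
        print(f"AD statistic={ad.statistic:.4f}, critical values={ad.critical_values}")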

  10. Aspects of Data Warehouse Technologies for Complex Web Data

    DEFF Research Database (Denmark)

    Thomsen, Christian

    This thesis is about aspects of specification and development of data warehouse technologies for complex web data. Today, large amounts of data exist in different web resources and in different formats. But it is often hard to analyze and query the often big and complex data or data about the data (i.e., metadata). It is therefore interesting to apply Data Warehouse (DW) technology to the data. But applying DW technology to complex web data is not straightforward, and the DW community faces new and exciting challenges. This thesis considers some of these challenges. The work leading to this thesis has primarily been done in relation to the project European Internet Accessibility Observatory (EIAO), where a data warehouse for accessibility data (roughly, data about how usable web resources are for disabled users) has been specified and developed. But the results of the thesis can also...

  11. 27 CFR 28.27 - Entry of wine into customs bonded warehouses.

    Science.gov (United States)

    2010-04-01

    27 CFR, Alcohol, Tobacco Products and Firearms, Alcohol and Tobacco Tax and Trade Bureau, Department of the Treasury, Liquors, Exportation of Alcohol, Miscellaneous Provisions, Customs Bonded Warehouses. § 28.27 Entry of wine into customs bonded warehouses. Upon filing of the application or...

  12. A Novel Optimization Method on Logistics Operation for Warehouse & Port Enterprises Based on Game Theory

    Directory of Open Access Journals (Sweden)

    Junyang Li

    2013-09-01

    Full Text Available Purpose: The following investigation aims to deal with the competitive relationship among different warehouses & ports in the same company. Design/methodology/approach: In this paper, game theory is used to construct the optimization model, and a genetic algorithm is used to solve it. Findings: Unnecessary competition will arise if there is little internal communication among different warehouses & ports in one company. This paper proposes a novel optimization method for the warehouse & port logistics operation model. Originality/value: Warehouse & port logistics business is a combination of warehousing services and terminal services, provided by port logistics through the existing port infrastructure. The newly proposed method can help to optimize the logistics operation model for warehouse & port enterprises effectively. We take Sinotrans Guangdong Company as an example to illustrate the newly proposed method. Finally, based on the case study, this paper gives some responses and suggestions on logistics operation in the Sinotrans Guangdong warehouse & port for its future development.
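
    A minimal sketch of the genetic-algorithm machinery used to solve such an assignment model; the cost matrix, capacities, and parameters below are invented placeholders, not data from the Sinotrans case:

        import random

        random.seed(1)
        N_JOBS, N_WAREHOUSES = 20, 3
        cost = [[random.uniform(1, 10) for _ in range(N_WAREHOUSES)] for _ in range(N_JOBS)]
        capacity = [8, 8, 8]  # maximum jobs per warehouse (placeholder)

        def fitness(chrom):
            # Total handling cost plus a penalty for exceeding warehouse capacity.
            total = sum(cost[j][w] for j, w in enumerate(chrom))
            for w in range(N_WAREHOUSES):
                total += 100 * max(0, chrom.count(w) - capacity[w])
            return total

        def crossover(a, b):
            cut = random.randrange(1, N_JOBS)  # one-point crossover
            return a[:cut] + b[cut:]

        def mutate(chrom, rate=0.05):
            return [random.randrange(N_WAREHOUSES) if random.random() < rate else w
                    for w in chrom]

        # Evolve a population of job-to-warehouse assignments.
        pop = [[random.randrange(N_WAREHOUSES) for _ in range(N_JOBS)] for _ in range(50)]
        for _ in range(200):
            pop.sort(key=fitness)
            elite = pop[:10]
            pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(40)]
        print("best cost:", round(fitness(min(pop, key=fitness)), 2))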

  13. Protocol for a national blood transfusion data warehouse from donor to recipient

    NARCIS (Netherlands)

    van Hoeven, Loan R; Hooftman, Babette H; Janssen, Mart P; de Bruijne, Martine C; de Vooght, Karen M K; Kemper, Peter; Koopman, Maria M W

    2016-01-01

    INTRODUCTION: Blood transfusion has health-related, economic and safety implications. In order to optimise the transfusion chain, comprehensive research data are needed. The Dutch Transfusion Data warehouse (DTD) project aims to establish a data warehouse where data from donors and transfusion

  14. 27 CFR 28.244a - Shipment to a customs bonded warehouse.

    Science.gov (United States)

    2010-04-01

    27 CFR, Alcohol, Tobacco Products and Firearms, Export Consignment. § 28.244a Shipment to a customs bonded warehouse. Distilled spirits and wine withdrawn for shipment to a customs bonded warehouse shall be consigned in care of the customs officer in charge...

  15. A Modular Toolkit for Distributed Interactions

    Directory of Open Access Journals (Sweden)

    Julien Lange

    2011-10-01

    Full Text Available We discuss the design, architecture, and implementation of a toolkit which supports some theories for distributed interactions. The main design principles of our architecture are flexibility and modularity. Our main goal is to provide an easily extensible workbench to encompass current algorithms and incorporate future developments of the theories. With the help of some examples, we illustrate the main features of our toolkit.

  16. Study on managing EPICS database using ORACLE

    International Nuclear Information System (INIS)

    Liu Shu; Wang Chunhong; Zhao Jijiu

    2007-01-01

    EPICS is used as the development toolkit of the BEPCII control system. The core of EPICS is a distributed database residing in front-end machines. The distributed database is usually created by tools such as VDCT or a text editor on the host, then loaded into the front-end target IOCs through the network. In the BEPCII control system there are about 20,000 signals, distributed over more than 20 IOCs. All the databases are developed by device control engineers using VDCT or a text editor; there are no uniform tools providing transparent management. The paper first presents the current status of EPICS database management in many labs. Second, it studies the EPICS database and the interface between ORACLE and the EPICS database. Finally, it introduces the software development and its application in the BEPCII control system. (authors)
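
    As a toy illustration of keeping EPICS record definitions in a relational database, the snippet below parses EPICS-style record syntax and loads it into SQLite (standing in here for ORACLE); the record names and fields are invented:

        import re
        import sqlite3

        # A fragment in EPICS database syntax (record names are made up).
        db_text = '''
        record(ai, "BPM01:X") {
            field(DESC, "horizontal position")
            field(SCAN, "1 second")
        }
        record(ai, "BPM01:Y") {
            field(DESC, "vertical position")
            field(SCAN, "1 second")
        }
        '''

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE record_field (rec_type TEXT, rec_name TEXT, field TEXT, value TEXT)")

        rec_re = re.compile(r'record\((\w+),\s*"([^"]+)"\)\s*\{(.*?)\}', re.S)
        fld_re = re.compile(r'field\((\w+),\s*"([^"]*)"\)')
        for rec_type, rec_name, body in rec_re.findall(db_text):
            for field, value in fld_re.findall(body):
                conn.execute("INSERT INTO record_field VALUES (?, ?, ?, ?)",
                             (rec_type, rec_name, field, value))

        # Uniform SQL queries now replace ad hoc text editing of .db files.
        for row in conn.execute("SELECT rec_name, value FROM record_field WHERE field = 'SCAN'"):
            print(row)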

  17. Development of a public health reporting data warehouse: lessons learned.

    Science.gov (United States)

    Rizi, Seyed Ali Mussavi; Roudsari, Abdul

    2013-01-01

    Data warehouse projects are perceived to be risky and prone to failure due to many organizational and technical challenges. However, often iterative and lengthy processes of implementation of data warehouses at an enterprise level provide an opportunity for formative evaluation of these solutions. This paper describes lessons learned from successful development and implementation of the first phase of an enterprise data warehouse to support public health surveillance at British Columbia Centre for Disease Control. Iterative and prototyping approach to development, overcoming technical challenges of extraction and integration of data from large scale clinical and ancillary systems, a novel approach to record linkage, flexible and reusable modeling of clinical data, and securing senior management support at the right time were the main factors that contributed to the success of the data warehousing project.

  18. A simulated annealing approach for redesigning a warehouse network problem

    Science.gov (United States)

    Khairuddin, Rozieana; Marlizawati Zainuddin, Zaitul; Jiun, Gan Jia

    2017-09-01

    Nowadays, several companies consider downsizing their distribution networks in ways that involve consolidation or phase-out of some of their current warehousing facilities, due to increasing competition, mounting cost pressure, and the advantage of economies of scale. Consequently, changes in the economic situation after a certain period of time require an adjustment of the network model in order to obtain the optimal cost under the current economic conditions. This paper develops a mixed-integer linear programming model for a two-echelon warehouse network redesign problem with a capacitated plant and uncapacitated warehouses. The main contribution of this study is considering a capacity constraint for existing warehouses. A simulated annealing algorithm is proposed to tackle the model. The numerical solution showed that the proposed model and solution method were practical.
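
    A compact sketch of the simulated-annealing loop for a warehouse-selection problem; the cost model below (fixed opening costs plus customer assignment costs) is a generic stand-in for the paper's mixed-integer model:

        import math
        import random

        random.seed(0)
        N_CUST, N_WH = 30, 6
        assign_cost = [[random.uniform(1, 20) for _ in range(N_WH)] for _ in range(N_CUST)]
        open_cost = [random.uniform(30, 60) for _ in range(N_WH)]

        def total_cost(open_wh):
            if not any(open_wh):
                return float("inf")  # at least one warehouse must stay open
            served = sum(min(assign_cost[c][w] for w in range(N_WH) if open_wh[w])
                         for c in range(N_CUST))
            return served + sum(f for is_open, f in zip(open_wh, open_cost) if is_open)

        state = [True] * N_WH
        best, best_cost = state[:], total_cost(state)
        T = 100.0
        while T > 0.1:
            cand = state[:]
            cand[random.randrange(N_WH)] ^= True     # open or close one warehouse
            delta = total_cost(cand) - total_cost(state)
            if delta < 0 or random.random() < math.exp(-delta / T):
                state = cand                         # accept improving or uphill moves
                if total_cost(state) < best_cost:
                    best, best_cost = state[:], total_cost(state)
            T *= 0.98                                # geometric cooling schedule

        print("open warehouses:", [i for i, o in enumerate(best) if o],
              "cost:", round(best_cost, 1))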

  19. CHASM and SNVBox: toolkit for detecting biologically important single nucleotide mutations in cancer.

    Science.gov (United States)

    Wong, Wing Chung; Kim, Dewey; Carter, Hannah; Diekhans, Mark; Ryan, Michael C; Karchin, Rachel

    2011-08-01

    Thousands of cancer exomes are currently being sequenced, yielding millions of non-synonymous single nucleotide variants (SNVs) of possible relevance to disease etiology. Here, we provide a software toolkit to prioritize SNVs based on their predicted contribution to tumorigenesis. It includes a database of precomputed, predictive features covering all positions in the annotated human exome and can be used either stand-alone or as part of a larger variant discovery pipeline. The MySQL database, source code, and binaries are freely available for academic/government use at http://wiki.chasmsoftware.org. The source is in Python and C++, and requires a 32- or 64-bit Linux system (tested on Fedora Core 8, 10, 11 and Ubuntu 10), 2.5 ≤ Python < 3.0, 60 GB of available hard disk space (50 MB for software and data files, 40 GB for the MySQL database dump when uncompressed), and 2 GB of RAM.

  20. Integrated Systems Health Management (ISHM) Toolkit

    Science.gov (United States)

    Venkatesh, Meera; Kapadia, Ravi; Walker, Mark; Wilkins, Kim

    2013-01-01

    A framework of software components has been implemented to facilitate the development of ISHM systems according to a methodology based on Reliability Centered Maintenance (RCM). This framework is collectively referred to as the Toolkit and was developed using General Atomics' Health MAP (TM) technology. The toolkit is intended to provide assistance to software developers of mission-critical system health monitoring applications in the specification, implementation, configuration, and deployment of such applications. In addition to software tools designed to facilitate these objectives, the toolkit also provides direction to software developers in accordance with an ISHM specification and development methodology. The development tools are based on an RCM approach for the development of ISHM systems. This approach focuses on defining, detecting, and predicting the likelihood of system functional failures and their undesirable consequences.

  1. Quality controls in integrative approaches to detect errors and inconsistencies in biological databases

    Directory of Open Access Journals (Sweden)

    Ghisalberti Giorgio

    2010-12-01

    Full Text Available Numerous biomolecular data are available, but they are scattered in many databases and only some of them are curated by experts. Most available data are computationally derived and include errors and inconsistencies. Effective use of available data in order to derive new knowledge hence requires data integration and quality improvement. Many approaches for data integration have been proposed. Data warehousing seems to be the most adequate when comprehensive analysis of integrated data is required; this also makes it the most suitable for implementing comprehensive quality controls on integrated data. We previously developed GFINDer (http://www.bioinformatics.polimi.it/GFINDer/), a web system that supports scientists in effectively using available information. It allows comprehensive statistical analysis and mining of functional and phenotypic annotations of gene lists, such as those identified by high-throughput biomolecular experiments. The GFINDer backend is composed of a multi-organism genomic and proteomic data warehouse (GPDW). Within the GPDW, several controlled terminologies and ontologies, which describe gene and gene product related biomolecular processes, functions and phenotypes, are imported and integrated, together with their associations with genes and proteins of several organisms. In order to ease keeping the GPDW updated and to ensure the best possible quality of the data integrated in subsequent updates of the data warehouse, we developed several automatic procedures. Within them, we implemented numerous data quality control techniques to test the integrated data for a variety of possible errors and inconsistencies. Among other features, the implemented controls check data structure and completeness, ontological data consistency, ID format and evolution, unexpected data quantification values, and consistency of data from single and multiple sources. We use the implemented controls to analyze the quality of data available from several
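
    A small illustration of the flavour of such automated quality controls; the rules and the ID pattern below are hypothetical, not those actually used in the GPDW:

        import re

        # Toy integrated records; the "GENE:" ID convention is an assumption.
        rows = [
            {"id": "GENE:0001", "symbol": "TP53", "taxon": 9606},
            {"id": "GENE:002X", "symbol": "", "taxon": 9606},
            {"id": "GENE:0003", "symbol": "BRCA1", "taxon": None},
        ]

        ID_PATTERN = re.compile(r"^GENE:\d{4}$")

        def check(row):
            errors = []
            if not ID_PATTERN.match(row["id"]):
                errors.append("malformed ID")      # ID format control
            if not row["symbol"]:
                errors.append("missing symbol")    # completeness control
            if row["taxon"] is None:
                errors.append("missing taxon")     # completeness control
            return errors

        for row in rows:
            problems = check(row)
            if problems:
                print(row["id"], "->", ", ".join(problems))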

  2. Microsoft BizTalk ESB Toolkit 2.1

    CERN Document Server

    Benito, Andrés Del Río

    2013-01-01

    A practical guide to the architecture and features that make up the services and components of the ESB Toolkit. This book is for experienced BizTalk developers, administrators, and architects, as well as IT managers and BizTalk business analysts. Knowledge of and experience with the Toolkit are not a requirement.

  3. Scale out databases for CERN use cases

    International Nuclear Information System (INIS)

    Baranowski, Zbigniew; Grzybek, Maciej; Canali, Luca; Garcia, Daniel Lanza; Surdy, Kacper

    2015-01-01

    Data generation rates are expected to grow very fast for some database workloads going into LHC run 2 and beyond. In particular this is expected for data coming from controls, logging and monitoring systems. Storing, administering and accessing big data sets in a relational database system can quickly become a very hard technical challenge, as the size of the active data set and the number of concurrent users increase. Scale-out database technologies are a rapidly developing set of solutions for deploying and managing very large data warehouses on commodity hardware and with open source software. In this paper we will describe the architecture and tests on database systems based on Hadoop and the Cloudera Impala engine. We will discuss the results of our tests, including tests of data loading and integration with existing data sources and in particular with relational databases. We will report on query performance tests done with various data sets of interest at CERN, notably data from the accelerator log database. (paper)
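
    For flavour, here is a minimal sketch of querying such a scale-out system from Python, assuming the impyla client; the host, port, and table name are placeholders, not CERN's actual deployment:

        from impala.dbapi import connect  # impyla package (assumed installed)

        # Connect to a (hypothetical) Impala daemon; 21050 is the default HiveServer2 port.
        conn = connect(host="impala.example.org", port=21050)
        cur = conn.cursor()

        # Aggregate a large accelerator-log style table by day (table name invented).
        cur.execute("""
            SELECT to_date(event_time) AS day, COUNT(*) AS n_events
            FROM accelerator_log
            GROUP BY to_date(event_time)
            ORDER BY day
        """)
        for day, n_events in cur.fetchall():
            print(day, n_events)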

  4. Adding Impacts and Mitigation Measures to OpenEI's RAPID Toolkit

    Energy Technology Data Exchange (ETDEWEB)

    Vogel, Erin

    2017-05-01

    The Open Energy Information platform hosts the Regulatory and Permitting Information Desktop (RAPID) Toolkit to provide renewable energy permitting information on federal and state regulatory processes. One of the RAPID Toolkit's functions is to help streamline the geothermal permitting processes outlined in the National Environmental Policy Act (NEPA). This is particularly important in the geothermal energy sector, since each development phase requires separate land analysis to acquire exploration, well field drilling, and power plant construction permits. Using the Environmental Assessment documents included in RAPID's NEPA Database, the RAPID team identified 37 resource categories that a geothermal project may impact. Examples include impacts to geology and minerals, nearby endangered species, or water quality standards. To provide federal regulators, project developers, consultants, and the public with typical impacts and mitigation measures for geothermal projects, the RAPID team has provided overview webpages for each of these 37 resource categories, with a sidebar query to reference related NEPA documents in the NEPA Database. This project is an expansion of a previous project that analyzed the time to complete NEPA environmental review for various geothermal activities. The NEPA review focused not only on geothermal projects within Bureau of Land Management and U.S. Forest Service managed lands, but also on projects funded by the Department of Energy. The timeline barriers found were extensive public comments and involvement, content overlap in NEPA documents, and discovery of impacted resources such as endangered species or cultural sites.

  5. Clinical Use of an Enterprise Data Warehouse

    Science.gov (United States)

    Evans, R. Scott; Lloyd, James F.; Pierce, Lee A.

    2012-01-01

    The enormous amount of data being collected by electronic medical records (EMR) has found additional value when integrated and stored in data warehouses. The enterprise data warehouse (EDW) allows all data from an organization with numerous inpatient and outpatient facilities to be integrated and analyzed. We have found the EDW at Intermountain Healthcare to not only be an essential tool for management and strategic decision making, but also for patient specific clinical decision support. This paper presents the structure and two case studies of a framework that has provided us the ability to create a number of decision support applications that are dependent on the integration of previous enterprise-wide data in addition to a patient’s current information in the EMR. PMID:23304288

  6. An Industrial Physics Toolkit

    Science.gov (United States)

    Cummings, Bill

    2004-03-01

    Physicists possess many skills highly valued in industrial companies. However, with the exception of a decreasing number of positions in long range research at large companies, job openings in industry rarely say "Physicist Required." One key to a successful industrial career is to know what subset of your physics skills is most highly valued by a given industry and to continue to build these skills while working. This combination of skills from both academic and industrial experience becomes your "Industrial Physics Toolkit" and is a transferable resource when you change positions or companies. This presentation will describe how one builds and sells your own "Industrial Physics Toolkit" using concrete examples from the speaker's industrial experience.

  7. 27 CFR 24.126 - Change in proprietorship involving a bonded wine warehouse.

    Science.gov (United States)

    2010-04-01

    27 CFR, Alcohol, Tobacco Products and Firearms, Alcohol and Tobacco Tax and Trade Bureau, Department of the Treasury, Liquors, Wine, Establishment and Operations, Changes Subsequent to Original Establishment. § 24.126 Change in proprietorship involving a bonded wine warehouse...

  8. Ontology-Based Big Dimension Modeling in Data Warehouse Schema Design

    DEFF Research Database (Denmark)

    Iftikhar, Nadeem

    2013-01-01

    During data warehouse schema design, designers often face the question of how to model big dimensions, which typically contain a large number of attributes and records. Investigating effective approaches for modeling big dimensions is necessary in order to achieve better query performance; the approaches considered here are horizontal partitioning, vertical partitioning, and their hybrid. We formalize the design methods and propose an algorithm that describes the modeling process from an OWL ontology to a data warehouse schema. In addition, this paper also presents an effective ontology-based tool to automate the modeling process. The tool can automatically generate the data warehouse schema from the ontology describing the terms and business semantics of the big dimension. In case of any change in the requirements, we only need to modify the ontology and re-generate the schema using the tool. This paper also evaluates the proposed...

  9. CRISPR-Cas9 Toolkit for Actinomycete Genome Editing

    DEFF Research Database (Denmark)

    Tong, Yaojun; Robertsen, Helene Lunde; Blin, Kai

    2018-01-01

    ...engineering approaches for boosting known and discovering novel natural products. In order to facilitate genome editing for actinomycetes, we developed a highly efficient CRISPR-Cas9 toolkit for actinomycete genome editing. This basic toolkit includes software for spacer (sgRNA) identification, a system for in-frame gene/gene cluster knockout, a system for gene loss-of-function study, a system for generating a random-size deletion library, and a system for gene knockdown. For the latter, a uracil-specific excision reagent (USER) cloning technology was adapted to simplify the CRISPR vector construction process. The application of this toolkit was successfully demonstrated by perturbation of the genomes of Streptomyces coelicolor A3(2) and Streptomyces collinus Tü 365. The CRISPR-Cas9 toolkit and related protocol described here can be widely used for metabolic engineering of actinomycetes.

  10. Some Considerations about Modern Database Machines

    Directory of Open Access Journals (Sweden)

    Manole VELICANU

    2010-01-01

    Full Text Available Optimizing the two computing resources of any computing system - time and space - has always been one of the priority objectives of any database. A current and effective solution in this respect is the database machine. Optimizing computer applications by means of database machines has been a steady preoccupation of researchers since the late seventies. Several information technologies have revolutionized the present information framework. Out of these, those which have brought a major contribution to the optimization of databases are: efficient handling of large volumes of data (Data Warehouse, Data Mining, OLAP - On-Line Analytical Processing), the improvement of DBMS (Database Management Systems) facilities through the integration of the new technologies, and the dramatic increase in computing power and its efficient use (computer networks, massive parallel computing, Grid Computing, and so on). All these information technologies, and others, have favored the resumption of research on database machines and the achievement, in the last few years, of some very good practical results concerning the optimization of computing resources.

  11. Reliability in Warehouse-Scale Computing: Why Low Latency Matters

    DEFF Research Database (Denmark)

    Nannarelli, Alberto

    2015-01-01

    Warehouse-sized buildings are nowadays hosting several types of large computing systems: from supercomputers to large clusters of servers providing the infrastructure for the cloud. Although the main target, especially for high-performance computing, is still to achieve high throughput, the limiting factor of these warehouse-scale data centers is the power dissipation. Power is dissipated not only in the computation itself, but also in heat removal (fans, air conditioning, etc.) to keep the temperature of the devices within the operating ranges. The need to keep the temperature low within...

  12. WAREHOUSE PERFORMANCE MEASUREMENT - A CASE STUDY

    Directory of Open Access Journals (Sweden)

    Crisan Emil

    2009-05-01

    Full Text Available Companies can gain a cost advantage through the logistics area of their business. Warehouse management is a possible source of logistics cost improvements that companies could use during this economic crisis. The goal of this article is to expose a few

  13. Development of a Data Warehouse and On-Line Analytical Processing (OLAP) for Information Discovery and Data Analysis (Case Study: New Student Admission Information System of STMIK AMIKOM Purwokerto)

    Directory of Open Access Journals (Sweden)

    Giat Karyono

    2011-08-01

    ...transactional databases (OLTP) of SIPMB. The process of creating the data warehouse is done on a separate machine by replicating the database used by SIPMB. For data analysis needs, the OLAP PMB application was built. This application can present multidimensional data in a grid view. The analysis may include analysis of new student enrollment based on the admission period, enrollment wave, home school, home province and district, registration information, day, month, or year, so that the results of the data analysis can help management determine appropriate marketing strategies to increase the number of freshman applicants, either for the current period or for admissions in the coming year.
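
    The grid-view, multidimensional analysis described can be approximated in a few lines with pandas; the column names and figures below are invented for illustration:

        import pandas as pd

        # Toy admission records (values are invented).
        df = pd.DataFrame({
            "period": ["2010/2011", "2010/2011", "2011/2012", "2011/2012"],
            "province": ["Central Java", "West Java", "Central Java", "West Java"],
            "applicants": [120, 80, 150, 95],
        })

        # Pivot: applicants by province (rows) and admission period (columns).
        cube = pd.pivot_table(df, values="applicants", index="province",
                              columns="period", aggfunc="sum", margins=True)
        print(cube)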

  14. Hydropower Regulatory and Permitting Information Desktop (RAPID) Toolkit

    Energy Technology Data Exchange (ETDEWEB)

    Levine, Aaron L [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-12-19

    Hydropower Regulatory and Permitting Information Desktop (RAPID) Toolkit presentation from the WPTO FY14-FY16 Peer Review. The toolkit is aimed at regulatory agencies, consultants, project developers, the public, and any other party interested in learning more about the hydropower regulatory process.

  15. CRISP methodology for data warehouse implementation

    Directory of Open Access Journals (Sweden)

    Octavio José Salcedo

    2010-06-01

    Full Text Available Currently, the generation of clear and concise reports based on accurate corporate information is a fundamental element in decision making; from this pressing need the data warehouse emerges as an essential resource for conducting the process, founded primarily on OLAP concepts and on EIS and DSS for producing reports. The processes carried out to construct the data warehouse mainly involve the extraction, processing, and handling of information for the subsequent definition of the metadata, which in turn is used to define the data warehouse as an integrated system. The trend in BI points towards the dissemination of information both to management and to all who need it, across different associated dimensions and levels, in order to obtain consolidated or detailed reports that facilitate the synthesis of a given business process and directly impact decision making, which in the end is the very purpose of the data warehouse. To carry out the implementation of this process it is necessary to have an appropriate methodology, so that the project is designed under the structure of international standards, which are the foundation for obtaining excellent results in project implementation.

  16. Improving warehouse responsiveness by job priority management : A European distribution centre field study

    NARCIS (Netherlands)

    T.Y. Kim (Thai Young)

    2018-01-01

    Warehouses employ order cut-off times to ensure sufficient time for fulfilment. To satisfy higher consumer expectations, these cut-off times are gradually postponed to improve order responsiveness. Warehouses therefore have to allocate jobs more efficiently to meet compressed response

  17. Development of a clinical data warehouse for hospital infection control.

    Science.gov (United States)

    Wisniewski, Mary F; Kieszkowski, Piotr; Zagorski, Brandon M; Trick, William E; Sommers, Michael; Weinstein, Robert A

    2003-01-01

    Existing data stored in a hospital's transactional servers have enormous potential to improve performance measurement and health care quality. Accessing, organizing, and using these data to support research and quality improvement projects are evolving challenges for hospital systems. The authors report development of a clinical data warehouse that they created by importing data from the information systems of three affiliated public hospitals. They describe their methodology; difficulties encountered; responses from administrators, computer specialists, and clinicians; and the steps taken to capture and store patient-level data. The authors provide examples of their use of the clinical data warehouse to monitor antimicrobial resistance, to measure antimicrobial use, to detect hospital-acquired bloodstream infections, to measure the cost of infections, and to detect antimicrobial prescribing errors. In addition, they estimate the amount of time and money saved and the increased precision achieved through the practical application of the data warehouse.

  19. A new open-source Python-based Space Weather data access, visualization, and analysis toolkit

    Science.gov (United States)

    de Larquier, S.; Ribeiro, A.; Frissell, N. A.; Spaleta, J.; Kunduri, B.; Thomas, E. G.; Ruohoniemi, J.; Baker, J. B.

    2013-12-01

    Space weather research relies heavily on combining and comparing data from multiple observational platforms. Current frameworks exist to aggregate some of the data sources, most based on file downloads via web or ftp interfaces. Empirical models are mostly fortran based and lack interfaces with more useful scripting languages. In an effort to improve data and model access, the SuperDARN community has been developing a Python-based Space Science Data Visualization Toolkit (DaViTpy). At the center of this development was a redesign of how our data (from 30 years of SuperDARN radars) was made available. Several access solutions are now wrapped into one convenient Python interface which probes local directories, a new remote NoSQL database, and an FTP server to retrieve the requested data based on availability. Motivated by the efficiency of this interface and the inherent need for data from multiple instruments, we implemented similar modules for other space science datasets (POES, OMNI, Kp, AE...), and also included fundamental empirical models with Python interfaces to enhance data analysis (IRI, HWM, MSIS...). All these modules and more are gathered in a single convenient toolkit, which is collaboratively developed and distributed using Github and continues to grow. While still in its early stages, we expect this toolkit will facilitate multi-instrument space weather research and improve scientific productivity.

  20. Scale out databases for CERN use cases

    CERN Document Server

    Baranowski, Zbigniew; Canali, Luca; Garcia, Daniel Lanza; Surdy, Kacper

    2015-01-01

    Data generation rates are expected to grow very fast for some database workloads going into LHC run 2 and beyond. In particular this is expected for data coming from controls, logging and monitoring systems. Storing, administering and accessing big data sets in a relational database system can quickly become a very hard technical challenge, as the size of the active data set and the number of concurrent users increase. Scale-out database technologies are a rapidly developing set of solutions for deploying and managing very large data warehouses on commodity hardware and with open source software. In this paper we will describe the architecture and tests on database systems based on Hadoop and the Cloudera Impala engine. We will discuss the results of our tests, including tests of data loading and integration with existing data sources and in particular with relational databases. We will report on query performance tests done with various data sets of interest at CERN, notably data from the accelerator log dat...

  1. Validating the extract, transform, load process used to populate a large clinical research database.

    Science.gov (United States)

    Denney, Michael J; Long, Dustin M; Armistead, Matthew G; Anderson, Jamie L; Conway, Baqiyyah N

    2016-10-01

    Informaticians at any institution that is developing clinical research support infrastructure are tasked with populating research databases with data extracted and transformed from their institution's operational databases, such as electronic health records (EHRs). These data must be properly extracted from the source systems, transformed into a standard data structure, and then loaded into the data warehouse while maintaining their integrity. We validated the correctness of the extract, transform, load (ETL) process used to populate the West Virginia Clinical and Translational Science Institute's Integrated Data Repository, a clinical data warehouse that includes data extracted from two EHR systems. Four hundred ninety-eight observations were randomly selected from the integrated data repository and compared with the two source EHR systems. Of the 498 observations, there were 479 concordant and 19 discordant observations. The discordant observations fell into three general categories: a) design decision differences between the IDR and source EHRs, b) timing differences, and c) user interface settings. After resolving apparent discordances, our integrated data repository was found to be 100% accurate relative to its source EHR systems. Any institution that uses a clinical data warehouse populated by extraction from operational databases, such as EHRs, employs some form of an ETL process. As secondary use of EHR data begins to transform the research landscape, the importance of basic validation of the extracted EHR data cannot be overstated, and it should start with validation of the extraction process itself. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
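
    A schematic of the validation approach described: sample warehouse observations and compare them field by field with the source system. The fetch functions below are placeholders for queries against the actual warehouse and EHR:

        import random

        random.seed(7)

        def fetch_warehouse_sample(n):
            # Placeholder: in practice, a randomized SELECT from the data warehouse.
            return [{"obs_id": i, "code": "A1c", "value": round(random.uniform(5, 9), 1)}
                    for i in range(n)]

        def fetch_source_record(obs_id):
            # Placeholder: in practice, a lookup of the same observation in the source EHR.
            return {"obs_id": obs_id, "code": "A1c", "value": round(random.uniform(5, 9), 1)}

        sample = fetch_warehouse_sample(498)
        discordant = [o for o in sample
                      if fetch_source_record(o["obs_id"])["value"] != o["value"]]
        print(f"concordant: {len(sample) - len(discordant)}, discordant: {len(discordant)}")
        # Each discordant observation would then be reviewed manually to classify its
        # cause (design decisions, feed timing, or user-interface settings, as in the study).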

  2. Two warehouse inventory model for deteriorating item with exponential demand rate and permissible delay in payment

    Directory of Open Access Journals (Sweden)

    Kaliraman Naresh Kumar

    2017-01-01

    Full Text Available A two-warehouse inventory model for deteriorating items is considered, with an exponential demand rate and permissible delay in payment. Shortages are not allowed and the deterioration rate is constant. In the model, one warehouse is rented and the other is owned. The rented warehouse provides better storage facilities than the owned warehouse, but is charged more. The objective of this model is to find the optimal replenishment policy minimizing the total relevant inventory cost. A numerical illustration and a sensitivity analysis are provided.
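
    For readers unfamiliar with the setup, a representative formulation follows; this is an illustrative reconstruction under common assumptions for such models, not necessarily the authors' exact notation. With demand rate $D(t) = A e^{\lambda t}$ and constant deterioration rate $\theta$, the inventory level $I_r(t)$ in the rented warehouse, depleted by time $t_r$, satisfies

        $$\frac{dI_r(t)}{dt} + \theta\, I_r(t) = -A e^{\lambda t}, \qquad I_r(t_r) = 0,$$

    which integrates to

        $$I_r(t) = \frac{A}{\lambda + \theta}\left(e^{(\lambda+\theta)t_r - \theta t} - e^{\lambda t}\right), \qquad 0 \le t \le t_r.$$

    The total relevant cost then sums ordering, holding (in both warehouses), deterioration, and interest charged minus interest earned under the permissible delay, and is minimized over the replenishment schedule.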

  3. Pyradi: an open-source toolkit for infrared calculation and data processing

    CSIR Research Space (South Africa)

    Willers, CJ

    2012-09-01

    Full Text Available of such a toolkit facilitates and increases productivity during subsequent tool development: “develop once and use many times”. The concept of an extendible toolkit lends itself naturally to the open-source philosophy, where the toolkit user-base develops...

  4. Multidimensional Databases and Data Warehousing

    CERN Document Server

    Jensen, Christian

    2010-01-01

    The present book's subject is multidimensional data models and data modeling concepts as they are applied in real data warehouses. The book aims to present the most important concepts within this subject in a precise and understandable manner. The book's coverage of fundamental concepts includes data cubes and their elements, such as dimensions, facts, and measures and their representation in a relational setting; it includes architecture-related concepts; and it includes the querying of multidimensional databases. The book also covers advanced multidimensional concepts that are considered to b...

  5. The Challenges of Data Quality Evaluation in a Joint Data Warehouse.

    Science.gov (United States)

    Bae, Charles J; Griffith, Sandra; Fan, Youran; Dunphy, Cheryl; Thompson, Nicolas; Urchek, John; Parchman, Alandra; Katzan, Irene L

    2015-01-01

    The use of clinically derived data from electronic health records (EHRs) and other electronic clinical systems can greatly facilitate clinical research as well as operational and quality initiatives. One approach for making these data available is to incorporate data from different sources into a joint data warehouse. When using such a data warehouse, it is important to understand the quality of the data. The primary objective of this study was to determine the completeness and concordance of common types of clinical data available in the Knowledge Program (KP) joint data warehouse, which contains feeds from several electronic systems including the EHR. A manual review was performed of specific data elements for 250 patients from an EHR, and these were compared with corresponding elements in the KP data warehouse. Completeness and concordance were calculated for five categories of data including demographics, vital signs, laboratory results, diagnoses, and medications. In general, data elements for demographics, vital signs, diagnoses, and laboratory results were present in more cases in the source EHR compared to the KP. When data elements were available in both sources, there was a high concordance. In contrast, the KP data warehouse documented a higher prevalence of deaths and medications compared to the EHR. Several factors contributed to the discrepancies between data in the KP and the EHR, including the start date and frequency of data feed updates into the KP, the inability to transfer data located in nonstructured formats (e.g., free text or scanned documents), as well as incomplete and missing data variables in the source EHR. When evaluating the quality of a data warehouse with multiple data sources, assessing completeness and concordance between the data set and the source data may be better than designating one to be a gold standard. This will allow the user to optimize the method and timing of data transfer in order to capture data with better accuracy.

  6. The self-describing data sets file protocol and Toolkit

    International Nuclear Information System (INIS)

    Borland, M.; Emery, L.

    1995-01-01

    The Self-Describing Data Sets (SDDS) file protocol continues to be used extensively in commissioning the Advanced Photon Source (APS) accelerator complex. SDDS protocol has proved useful primarily due to the existence of the SDDS Toolkit, a growing set of about 60 generic commandline programs that read and/or write SDDS files. The SDDS Toolkit is also used extensively for simulation postprocessing, giving physicists a single environment for experiment and simulation. With the Toolkit, new SDDS data is displayed and subjected to complex processing without developing new programs. Data from EPICS, lab instruments, simulation, and other sources are easily integrated. Because the SDDS tools are commandline-based, data processing scripts are readily written using the user's preferred shell language. Since users work within a UNIX shell rather than an application-specific shell or GUI, they may add SDDS-compliant programs and scripts to their personal toolkits without restriction or complication. The SDDS Toolkit has been run under UNIX on SUN OS4, HP-UX, and LINUX. Application of SDDS to accelerator operation is being pursued using Tcl/Tk to provide a GUI

  7. Implementing a user-driven online quality improvement toolkit for cancer care.

    Science.gov (United States)

    Luck, Jeff; York, Laura S; Bowman, Candice; Gale, Randall C; Smith, Nina; Asch, Steven M

    2015-05-01

    Peer-to-peer collaboration within integrated health systems requires a mechanism for sharing quality improvement lessons. The Veterans Health Administration (VA) developed online compendia of tools linked to specific cancer quality indicators. We evaluated awareness and use of the toolkits, variation across facilities, impact of social marketing, and factors influencing toolkit use. A diffusion of innovations conceptual framework guided the collection of user activity data from the Toolkit Series SharePoint site and an online survey of potential Lung Cancer Care Toolkit users. The VA Toolkit Series site had 5,088 unique visitors in its first 22 months; 5% of users accounted for 40% of page views. Social marketing communications were correlated with site usage. Of survey respondents (n = 355), 54% had visited the site, of whom 24% downloaded at least one tool. Respondents' awareness of the lung cancer quality performance of their facility, and facility participation in quality improvement collaboratives, were positively associated with Toolkit Series site use. Facility-level lung cancer tool implementation varied widely across tool types. The VA Toolkit Series achieved widespread use and a high degree of user engagement, although use varied widely across facilities. The most active users were aware of and active in cancer care quality improvement. Toolkit use seemed to be reinforced by other quality improvement activities. A combination of user-driven tool creation and centralized toolkit development seemed to be effective for leveraging health information technology to spread disease-specific quality improvement tools within an integrated health care system. Copyright © 2015 by American Society of Clinical Oncology.

  8. A study of multidimensional modeling approaches for data warehouse

    Science.gov (United States)

    Yusof, Sharmila Mat; Sidi, Fatimah; Ibrahim, Hamidah; Affendey, Lilly Suriani

    2016-08-01

    Data warehouse systems are used to support the process of organizational decision making. Hence, the system must extract and integrate information from heterogeneous data sources in order to uncover relevant knowledge suitable for the decision-making process. However, the development of a data warehouse is a difficult and complex process, especially in its conceptual design (multidimensional modeling). Thus, various approaches have been proposed to overcome the difficulty. This study surveys and compares the approaches to multidimensional modeling and highlights the issues, trends, and solutions proposed to date. The contribution is a state-of-the-art overview of multidimensional modeling design.

  9. To the question about the layout of the racks in the warehouse

    Directory of Open Access Journals (Sweden)

    Ilesaliev D.I.

    2017-03-01

    Full Text Available Warehouses, which are located at points of transshipment of cargo from one type of transport to another, play a significant role in the transformation of cargo for the most effective onward transportation of goods. The location of racks and longitudinal passages is important in the work of a transshipment warehouse. Typically, racks and longitudinal passages are perpendicular to each other; the article proposes a radical change with the "euclidean advantage". This is another way of designing warehouses for efficient transshipment of packaged cargo in the supply chain. The purpose is to reduce the mileage of one cycle of the loader from the loading and unloading areas to the storage areas.

  10. Security Data Warehouse Application

    Science.gov (United States)

    Vernon, Lynn R.; Hennan, Robert; Ortiz, Chris; Gonzalez, Steve; Roane, John

    2012-01-01

    The Security Data Warehouse (SDW) is used to aggregate and correlate all JSC IT security data. This includes IT asset inventory such as operating systems and patch levels, users, user logins, remote access dial-in and VPN, and vulnerability tracking and reporting. The correlation of this data allows for an integrated understanding of current security issues and systems by providing this data in a format that associates it to an individual host. The cornerstone of the SDW is its unique host-mapping algorithm that has undergone extensive field tests, and provides a high degree of accuracy. The algorithm comprises two parts. The first part employs fuzzy logic to derive a best-guess host assignment using incomplete sensor data. The second part is logic to identify and correct errors in the database, based on subsequent, more complete data. Host records are automatically split or merged, as appropriate. The process had to be refined and thoroughly tested before the SDW deployment was feasible. Complexity was increased by adding the dimension of time. The SDW correlates all data with its relationship to time. This lends support to forensic investigations, audits, and overall situational awareness. Another important feature of the SDW architecture is that all of the underlying complexities of the data model and host-mapping algorithm are encapsulated in an easy-to-use and understandable Perl language Application Programming Interface (API). This allows the SDW to be quickly augmented with additional sensors using minimal coding and testing. It also supports rapid generation of ad hoc reports and integration with other information systems.

  11. Development of global data warehouse for beam diagnostics at SSRF

    International Nuclear Information System (INIS)

    Lai Longwei; Leng Yongbin; Yan Yingbing; Chen Zhichu

    2015-01-01

    The beam diagnostic system performs adequately during daily operation and machine studies at the Shanghai Synchrotron Radiation Facility (SSRF). However, without an effective event-detection mechanism, it is difficult to capture and analyze abnormal phenomena such as global orbit disturbances, BPM malfunctions, and DCCT noise. A global beam diagnostic data warehouse was built in order to monitor the status of the accelerator and the beam instruments. The data warehouse was designed as a Soft IOC hosted on an independent server. Once an abnormal phenomenon happens, it is triggered and stores the relevant data for further analysis. The results show that the data warehouse can detect abnormal phenomena of the machine and the beam diagnostic system effectively, and can be used for calculating confidence indicators of the beam instruments. It provides an efficient tool for the improvement of the beam diagnostic system and the accelerator. (authors)
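
    The trigger-and-dump behaviour described can be sketched as a ring buffer that freezes recent samples when a reading crosses a threshold; the threshold, signal, and loop below are invented, and the real system is an EPICS Soft IOC rather than this toy loop:

        import collections
        import random

        random.seed(3)
        BUFFER_LEN = 100   # samples of pre-trigger history to keep
        THRESHOLD = 0.5    # invented orbit-disturbance threshold (mm)

        history = collections.deque(maxlen=BUFFER_LEN)

        def read_orbit_deviation():
            # Placeholder for a read from the BPM system; rare large excursions.
            return random.gauss(0, 0.1) + (2.0 if random.random() < 0.01 else 0.0)

        for tick in range(10_000):
            sample = read_orbit_deviation()
            history.append((tick, sample))
            if abs(sample) > THRESHOLD:
                # Abnormal event: dump the buffered context for offline analysis.
                print(f"trigger at tick {tick}: dumped {len(history)} samples")
                history.clear()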

  12. Semantic integration of medication data into the EHOP Clinical Data Warehouse.

    Science.gov (United States)

    Delamarre, Denis; Bouzille, Guillaume; Dalleau, Kevin; Courtel, Denis; Cuggia, Marc

    2015-01-01

    Reusing medication data is crucial for many medical research domains. Semantic integration of such data in a clinical data warehouse (CDW) is quite challenging. Our objective was to develop a reliable and scalable method for integrating prescription data into EHOP (a French CDW). PN13/PHAST was used as the semantic interoperability standard during the ETL process and to store the prescriptions as documents in the CDW. Theriaque was used as the drug knowledge database (DKDB) to annotate the prescription dataset with the finest granularity and to provide semantic capabilities to the EHOP query workbench. The system was evaluated on a clinical data set. Depending on the use case, precision ranged from 52% to 100%; recall was always 100%. Interoperability standards and a DKDB, a document-based approach, and the finest-granularity approach are the key factors for successful drug data integration in a CDW.

  13. Expiration of Historical Databases

    DEFF Research Database (Denmark)

    Toman, David

    2001-01-01

    We present a technique for automatic expiration of data in a historical data warehouse that preserves answers to a known and fixed set of first-order queries. In addition, we show that for queries with output size bounded by a function of the active data domain size (the number of values that have ever appeared in the warehouse), the size of the portion of the data warehouse history needed to answer the queries is also bounded by a function of the active data domain size and therefore does not depend on the age of the warehouse (the length of the history).

  14. The 2016 ACCP Pharmacotherapy Didactic Curriculum Toolkit.

    Science.gov (United States)

    Schwinghammer, Terry L; Crannage, Andrew J; Boyce, Eric G; Bradley, Bridget; Christensen, Alyssa; Dunnenberger, Henry M; Fravel, Michelle; Gurgle, Holly; Hammond, Drayton A; Kwon, Jennifer; Slain, Douglas; Wargo, Kurt A

    2016-11-01

    The 2016 American College of Clinical Pharmacy (ACCP) Educational Affairs Committee was charged with updating and contemporizing ACCP's 2009 Pharmacotherapy Didactic Curriculum Toolkit. The toolkit has been designed to guide schools and colleges of pharmacy in developing, maintaining, and modifying their curricula. The 2016 committee reviewed the recent medical literature and other documents to identify disease states that are responsive to drug therapy. Diseases and content topics were organized by organ system, when feasible, and grouped into tiers as defined by practice competency. Tier 1 topics should be taught in a manner that prepares all students to provide collaborative, patient-centered care upon graduation and licensure. Tier 2 topics are generally taught in the professional curriculum, but students may require additional knowledge or skills after graduation (e.g., residency training) to achieve competency in providing direct patient care. Tier 3 topics may not be taught in the professional curriculum; thus, graduates will be required to obtain the necessary knowledge and skills on their own to provide direct patient care, if required in their practice. The 2016 toolkit contains 276 diseases and content topics, of which 87 (32%) are categorized as tier 1, 133 (48%) as tier 2, and 56 (20%) as tier 3. The large number of tier 1 topics will require schools and colleges to use creative pedagogical strategies to achieve the necessary practice competencies. Almost half of the topics (48%) are tier 2, highlighting the importance of postgraduate residency training or equivalent practice experience to competently care for patients with these disorders. The Pharmacotherapy Didactic Curriculum Toolkit will continue to be updated to provide guidance to faculty at schools and colleges of pharmacy as these academic pharmacy institutions regularly evaluate and modify their curricula to keep abreast of scientific advances and associated practice changes. Access the

  15. Land surface Verification Toolkit (LVT)

    Science.gov (United States)

    Kumar, Sujay V.

    2017-01-01

    LVT is a framework developed to provide an automated, consolidated environment for systematic land surface model evaluation. It includes support for a range of in-situ, remote-sensing, and other model and reanalysis products, and supports the analysis of outputs from various LIS subsystems, including LIS-DA, LIS-OPT, and LIS-UE. Note: The Land Information System Verification Toolkit (LVT) is a NASA software tool designed to enable the evaluation, analysis and comparison of outputs generated by the Land Information System (LIS). The LVT software is released under the terms and conditions of the NASA Open Source Agreement (NOSA) Version 1.1 or later.

  16. Pybel: a Python wrapper for the OpenBabel cheminformatics toolkit

    Directory of Open Access Journals (Sweden)

    Morley Chris

    2008-03-01

    Background: Scripting languages such as Python are ideally suited to common programming tasks in cheminformatics such as data analysis and parsing information from files. However, for reasons of efficiency, cheminformatics toolkits such as the OpenBabel toolkit are often implemented in compiled languages such as C++. We describe Pybel, a Python module that provides access to the OpenBabel toolkit. Results: Pybel wraps the direct toolkit bindings to simplify common tasks such as reading and writing molecular files and calculating fingerprints. Extensive use is made of Python iterators to simplify loops such as that over all the molecules in a file. A Pybel Molecule can be easily interconverted to an OpenBabel OBMol to access those methods or attributes not wrapped by Pybel. Conclusion: Pybel allows cheminformaticians to rapidly develop Python scripts that manipulate chemical information. It is open source, available cross-platform, and offers the power of the OpenBabel toolkit to Python programmers.
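
    The tasks the abstract names, reading files, calculating fingerprints and interconverting with OBMol, map onto a few lines of Python. A small usage sketch, assuming a classic OpenBabel installation exposing the pybel module (newer releases import it as from openbabel import pybel); the filename is illustrative:

        import pybel  # OpenBabel's Python wrapper

        # Parse a molecule from a SMILES string and inspect a property.
        mol = pybel.readstring("smi", "CCO")  # ethanol
        print(mol.molwt)

        # Fingerprints support Tanimoto similarity via the | operator.
        fp_a = mol.calcfp()
        fp_b = pybel.readstring("smi", "CCN").calcfp()
        print(fp_a | fp_b)

        # Iterate over every molecule in a file (filename is illustrative).
        for m in pybel.readfile("sdf", "library.sdf"):
            print(m.title, m.molwt)

        # Drop down to the underlying OBMol for methods not wrapped by Pybel.
        obmol = mol.OBMol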

  17. An open source toolkit for medical imaging de-identification

    International Nuclear Information System (INIS)

    Rodriguez Gonzalez, David; Carpenter, Trevor; Wardlaw, Joanna; Hemert, Jano I. van

    2010-01-01

    Medical imaging acquired for clinical purposes can have several legitimate secondary uses in research projects and teaching libraries. No commonly accepted solution for anonymising these images exists because the amount of personal data that should be preserved varies case by case. Our objective is to provide a flexible mechanism for anonymising Digital Imaging and Communications in Medicine (DICOM) data that meets the requirements for deployment in multicentre trials. We reviewed our current de-identification practices and defined the relevant use cases to extract the requirements for the de-identification process. We then used these requirements in the design and implementation of the toolkit. Finally, we tested the toolkit taking as a reference those requirements, including a multicentre deployment. The toolkit successfully anonymised DICOM data from various sources. Furthermore, it was shown that it could forward anonymous data to remote destinations, remove burned-in annotations, and add tracking information to the header. The toolkit also implements the DICOM standard confidentiality mechanism. A DICOM de-identification toolkit that facilitates the enforcement of privacy policies was developed. It is highly extensible, provides the necessary flexibility to account for different de-identification requirements and has a low adoption barrier for new users. (orig.)

  18. Warehouse design and product assignment and allocation: A mathematical programming model

    OpenAIRE

    Geraldes, Carla A. S.; Carvalho, Maria Sameiro; Pereira, Guilherme

    2012-01-01

    Warehouses can be considered one of the most important nodes in supply chains. The dynamic nature of today's markets compels organizations to an incessant reassessment in an effort to respond to continuous challenges. Therefore warehouses must be continually re-evaluated to ensure that they are consistent with both market's demands and management's strategies. In this paper we discuss a mathematical programming model aiming to support product assignment and allocation to the functional areas ...

  19. Warehouse hazardous and toxic waste design in Karingau Balikpapan

    Science.gov (United States)

    Pratama, Bayu Rendy; Kencanawati, Martheana

    2017-11-01

    PT. Balikpapan Environmental Services (PT. BES) is a company whose core business is hazardous and toxic waste management services, consisting of storage and transport, at Balikpapan. The research starts with data collection covering the type of waste, quantity of waste, dimensions of the existing building, and waste packaging (drums, IBC tanks, wooden boxes and bulk bags). The data processing comprises a redesign of the warehouse dimensions and waste layout; specification of capacity; specification of the quantity, type and placement of detectors; and specification of the quantity, type and position of fire extinguishers, with reference to Bapedal Regulation No. 01 of 1995, SNI 03-3985-2000, and Employee Minister Regulation RI No. Per-04/Men/1980. Based on the research, the designed warehouse dimensions are 23 m × 22 m × 5 m, with the waste laid out according to waste type. The design requires 56 detectors. The appropriate type of fire extinguisher for this design is dry powder containing sodium carbonate and alkali salts, in 18 units of 12 kg each.

  20. Veterinary Immunology Committee Toolkit Workshop 2010: Progress and plans

    Science.gov (United States)

    The Third Veterinary Immunology Committee (VIC) Toolkit Workshop took place at the Ninth International Veterinary Immunology Symposium (IVIS) in Tokyo, Japan on August 18, 2010. The Workshop built on previous Toolkit Workshops and covered various aspects of reagent development, commercialisation an...

  1. Finding patients using similarity measures in a rare diseases-oriented clinical data warehouse: Dr. Warehouse and the needle in the needle stack.

    Science.gov (United States)

    Garcelon, Nicolas; Neuraz, Antoine; Benoit, Vincent; Salomon, Rémi; Kracker, Sven; Suarez, Felipe; Bahi-Buisson, Nadia; Hadj-Rabia, Smail; Fischer, Alain; Munnich, Arnold; Burgun, Anita

    2017-09-01

    In the context of rare diseases, it may be helpful to detect patients with similar medical histories, diagnoses and outcomes from a large number of cases with automated methods. To reduce the time to find new cases, we developed a method to find patients similar to a given index case, leveraging data from electronic health records. We used the clinical data warehouse of a children's academic hospital in Paris, France (Necker-Enfants Malades), containing about 400,000 patients. Our model was based on a vector space model (VSM) to compute the similarity distance between an index patient and all the patients of the data warehouse. The dimensions of the VSM were built upon Unified Medical Language System concepts extracted from clinical narratives stored in the clinical data warehouse. The VSM was enhanced using three parameters: a pertinence score (TF-IDF of the concepts), the polarity of the concept (negated/not negated) and the minimum number of concepts in common. We evaluated this model by displaying the most similar patients for five different rare diseases: Lowe Syndrome (LOWE), Dystrophic Epidermolysis Bullosa (DEB), Activated PI3K delta Syndrome (APDS), Rett Syndrome (RETT) and Dowling-Meara (EBS-DM), represented in the clinical data warehouse by 18, 103, 21, 84 and 7 patients, respectively. The percentages of index patients returning at least one true positive similar patient among the Top30 similar patients were 94% for LOWE, 97% for DEB, 86% for APDS, 71% for EBS-DM and 99% for RETT. On average, 51% of the 30 returned patients had exactly the same genetic disease as the index case. This tool offers new perspectives in a translational context to identify patients for genetic research. Moreover, when new molecular bases are discovered, our strategy will help to identify additional eligible patients for genetic screening. Copyright © 2017. Published by Elsevier Inc.
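
    A minimal sketch of the similarity computation described above: concepts weighted by TF-IDF, negated concepts assumed filtered out upstream, a minimum number of shared concepts enforced, and cosine-similarity ranking. The concept IDs and weights are toy values, not the Dr. Warehouse implementation:

        import math

        def cosine(u, v):
            """Cosine similarity between two sparse TF-IDF vectors (dicts)."""
            shared = set(u) & set(v)
            num = sum(u[c] * v[c] for c in shared)
            den = (math.sqrt(sum(x * x for x in u.values()))
                   * math.sqrt(sum(x * x for x in v.values())))
            return num / den if den else 0.0

        def similar_patients(index_vec, patients, min_shared=3):
            """Rank patients by cosine similarity to the index case's vector.

            Vectors map non-negated UMLS concept IDs to TF-IDF weights; a
            minimum number of concepts in common is required, as in the paper.
            """
            ranked = [(pid, cosine(index_vec, vec))
                      for pid, vec in patients.items()
                      if len(set(index_vec) & set(vec)) >= min_shared]
            return sorted(ranked, key=lambda p: p[1], reverse=True)[:30]  # Top30

        index = {"C0020538": 2.1, "C0027051": 1.4, "C0011849": 0.8, "C0004096": 0.5}
        cohort = {"p1": {"C0020538": 1.9, "C0027051": 1.1, "C0011849": 0.7, "C0038454": 0.2}}
        print(similar_patients(index, cohort, min_shared=2))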

  2. Refrigerated Warehouse Demand Response Strategy Guide

    Energy Technology Data Exchange (ETDEWEB)

    Scott, Doug [VaCom Technologies, San Luis Obispo, CA (United States); Castillo, Rafael [VaCom Technologies, San Luis Obispo, CA (United States); Larson, Kyle [VaCom Technologies, San Luis Obispo, CA (United States); Dobbs, Brian [VaCom Technologies, San Luis Obispo, CA (United States); Olsen, Daniel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-11-01

    This guide summarizes demand response measures that can be implemented in refrigerated warehouses. In an appendix, it also addresses related energy efficiency opportunities. Reducing overall grid demand and energy consumption during peak periods has benefits for facility operators, grid operators, utility companies, and society. Statewide demand response potential for the refrigerated warehouse sector in California is estimated to be over 22.1 megawatts. Two categories of demand response strategies are described in this guide: load shifting and load shedding. Load shifting can be accomplished via pre-cooling, capacity limiting, and battery charger load management. Load shedding can be achieved by lighting reduction, demand defrost and defrost termination, infiltration reduction, and shutting down miscellaneous equipment. Estimation of the costs and benefits of demand response participation yields simple payback periods of 2-4 years. To improve demand response performance, it is suggested to install air curtains and an additional form of infiltration barrier, such as a roll-up door, for the passageways. Further modifications to increase the efficiency of the refrigeration unit are also analyzed. A larger condenser can maintain the minimum saturated condensing temperature (SCT) for more hours of the day. Lowering the SCT reduces the compressor lift, which results in an overall increase in refrigeration system capacity and energy efficiency. Another way of saving energy in refrigerated warehouses is eliminating the use of under-floor resistance heaters. A more energy-efficient alternative to resistance heaters is to utilize the heat rejected from the condenser through a heat exchanger. These energy efficiency measures improve efficiency by reducing the required electric energy input for the refrigeration system, by helping to curtail the refrigeration load on the system, or by reducing both the load and the required energy input.

  3. Protocol for a national blood transfusion data warehouse from donor to recipient

    Science.gov (United States)

    van Hoeven, Loan R; Hooftman, Babette H; Janssen, Mart P; de Bruijne, Martine C; de Vooght, Karen M K; Kemper, Peter; Koopman, Maria M W

    2016-01-01

    Introduction Blood transfusion has health-related, economic and safety implications. In order to optimise the transfusion chain, comprehensive research data are needed. The Dutch Transfusion Data warehouse (DTD) project aims to establish a data warehouse where data from donors and transfusion recipients are linked. This paper describes the design of the data warehouse, challenges and illustrative applications. Study design and methods Quantitative data on blood donors (eg, age, blood group, antibodies) and products (type of product, processing, storage time) are obtained from the national blood bank. These are linked to data on the transfusion recipients (eg, transfusions administered, patient diagnosis, surgical procedures, laboratory parameters), which are extracted from hospital electronic health records. Applications Expected scientific contributions are illustrated for 4 applications: determine risk factors, predict blood use, benchmark blood use and optimise process efficiency. For each application, examples of research questions are given and analyses planned. Conclusions The DTD project aims to build a national, continuously updated transfusion data warehouse. These data have a wide range of applications, on the donor/production side, recipient studies on blood usage and benchmarking and donor–recipient studies, which ultimately can contribute to the efficiency and safety of blood transfusion. PMID:27491665

  4. Warehouse site selection in an international environment

    Directory of Open Access Journals (Sweden)

    Sebastjan ŠKERLIČ

    2013-01-01

    The changed conditions in the automotive industry, as the market and production move from west to east at both the global and the European level, require constant adjustment from Slovenian companies. The companies strive to remain close to their customers and suppliers, as only by maintaining a high-quality and streamlined supply chain is their existence within the demanding automotive industry guaranteed in the long term. Choosing the right location for a warehouse in an international environment is therefore one of the most important strategic decisions, one that takes into account a number of interrelated factors such as transport networks, transport infrastructure, trade flows and total cost. This paper aims to explore the important aspects of selecting a location for a warehouse and to identify potential international strategic locations, which could have a significant impact on the future operations of Slovenian companies in the global automotive industry.

  5. The ATLAS Distributed Data Management System & Databases

    CERN Document Server

    Garonne, V; The ATLAS collaboration; Barisits, M; Beermann, T; Vigne, R; Serfon, C

    2013-01-01

    The ATLAS Distributed Data Management (DDM) System is responsible for the global management of petabytes of high energy physics data. The current system, DQ2, has a critical dependency on Relational Database Management Systems (RDBMS), like Oracle. RDBMS are well-suited to enforcing data integrity in online transaction processing applications, however, concerns have been raised about the scalability of its data warehouse-like workload. In particular, analysis of archived data or aggregation of transactional data for summary purposes is problematic. Therefore, we have evaluated new approaches to handle vast amounts of data. We have investigated a class of database technologies commonly referred to as NoSQL databases. This includes distributed filesystems, like HDFS, that support parallel execution of computational tasks on distributed data, as well as schema-less approaches via key-value stores, like HBase. In this talk we will describe our use cases in ATLAS, share our experiences with various databases used ...

  6. InterMine: a flexible data warehouse system for the integration and analysis of heterogeneous biological data.

    Science.gov (United States)

    Smith, Richard N; Aleksic, Jelena; Butano, Daniela; Carr, Adrian; Contrino, Sergio; Hu, Fengyuan; Lyne, Mike; Lyne, Rachel; Kalderimis, Alex; Rutherford, Kim; Stepan, Radek; Sullivan, Julie; Wakeling, Matthew; Watkins, Xavier; Micklem, Gos

    2012-12-01

    InterMine is an open-source data warehouse system that facilitates the building of databases with complex data integration requirements and a need for a fast customizable query facility. Using InterMine, large biological databases can be created from a range of heterogeneous data sources, and the extensible data model allows for easy integration of new data types. The analysis tools include a flexible query builder, genomic region search and a library of 'widgets' performing various statistical analyses. The results can be exported in many commonly used formats. InterMine is a fully extensible framework where developers can add new tools and functionality. Additionally, there is a comprehensive set of web services, for which client libraries are provided in five commonly used programming languages. Freely available from http://www.intermine.org under the LGPL license. g.micklem@gen.cam.ac.uk Supplementary data are available at Bioinformatics online.
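
    As an illustration of the web-service client libraries mentioned in the abstract, a query with the Python intermine package against a public InterMine instance might look like the following; the FlyMine URL, class and field names are examples, not prescriptions:

        from intermine.webservice import Service

        # Connect to a public InterMine instance (FlyMine shown as an example).
        service = Service("https://www.flymine.org/flymine/service")

        # Build a query against the extensible data model.
        query = service.new_query("Gene")
        query.add_view("symbol", "organism.name", "length")
        query.add_constraint("symbol", "=", "eve")

        for row in query.rows():
            print(row["symbol"], row["organism.name"], row["length"])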

  7. Reconstruction of biological networks based on life science data integration.

    Science.gov (United States)

    Kormeier, Benjamin; Hippe, Klaus; Arrigo, Patrizio; Töpel, Thoralf; Janowski, Sebastian; Hofestädt, Ralf

    2010-10-27

    For the implementation of the virtual cell, the fundamental question is how to model and simulate complex biological networks. Therefore, based on relevant molecular databases and information systems, biological data integration is an essential step in constructing biological networks. In this paper, we present the applications BioDWH, an integration toolkit for building life science data warehouses; CardioVINEdb, an information system for biological data in cardiovascular disease; and VANESA, a network editor for modeling and simulation of biological networks. Based on this integration process, the system supports the generation of biological network models. A case study of a cardiovascular-disease-related gene-regulated biological network is also presented.

  8. Quantitative performance of E-Scribe warehouse in detecting quality issues with digital annotated ECG data from healthy subjects.

    Science.gov (United States)

    Sarapa, Nenad; Mortara, Justin L; Brown, Barry D; Isola, Lamberto; Badilini, Fabio

    2008-05-01

    The US Food and Drug Administration recommends submission of digital electrocardiograms in the standard HL7 XML format into the electrocardiogram warehouse to support preapproval review of new drug applications. The Food and Drug Administration scrutinizes electrocardiogram quality by viewing the annotated waveforms and scoring electrocardiogram quality by the warehouse algorithms. Part of the Food and Drug Administration warehouse is commercially available to sponsors as the E-Scribe Warehouse. The authors tested the performance of E-Scribe Warehouse algorithms by quantifying electrocardiogram acquisition quality, adherence to QT annotation protocol, and T-wave signal strength in 2 data sets: "reference" (104 digital electrocardiograms from a phase I study with sotalol in 26 healthy subjects with QT annotations by computer-assisted manual adjustment) and "test" (the same electrocardiograms with an intentionally introduced predefined number of quality issues). The E-Scribe Warehouse correctly detected differences between the 2 sets expected from the number and pattern of errors in the "test" set (except for 1 subject with QT misannotated in different leads of serial electrocardiograms) and confirmed the absence of differences where none was expected. E-Scribe Warehouse scores below the threshold value identified individual electrocardiograms with questionable T-wave signal strength. The E-Scribe Warehouse showed satisfactory performance in detecting electrocardiogram quality issues that may impair reliability of QTc assessment in clinical trials in healthy subjects.

  9. Benefits of a clinical data warehouse with data mining tools to collect data for a radiotherapy trial.

    Science.gov (United States)

    Roelofs, Erik; Persoon, Lucas; Nijsten, Sebastiaan; Wiessler, Wolfgang; Dekker, André; Lambin, Philippe

    2013-07-01

    Collecting trial data in a medical environment is at present mostly performed manually and therefore time-consuming, prone to errors and often incomplete with the complex data considered. Faster and more accurate methods are needed to improve the data quality and to shorten data collection times where information is often scattered over multiple data sources. The purpose of this study is to investigate the possible benefit of modern data warehouse technology in the radiation oncology field. In this study, a Computer Aided Theragnostics (CAT) data warehouse combined with automated tools for feature extraction was benchmarked against the regular manual data-collection processes. Two sets of clinical parameters were compiled for non-small cell lung cancer (NSCLC) and rectal cancer, using 27 patients per disease. Data collection times and inconsistencies were compared between the manual and the automated extraction method. The average time per case to collect the NSCLC data manually was 10.4 ± 2.1 min and 4.3 ± 1.1 min when using the automated method (p < 0.001). For rectal cancer, these times were 13.5 ± 4.1 and 6.8 ± 2.4 min, respectively (p < 0.001). In 3.2% of the data collected for NSCLC and 5.3% for rectal cancer, there was a discrepancy between the manual and automated method. Aggregating multiple data sources in a data warehouse combined with tools for extraction of relevant parameters is beneficial for data collection times and offers the ability to improve data quality. The initial investments in digitizing the data are expected to be compensated due to the flexibility of the data analysis. Furthermore, successive investigations can easily select trial candidates and extract new parameters from the existing databases. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  10. Rancang Bangun Data Warehouse Untuk Analisis Kinerja Penjualan Pada Industri Dengan Model Spa-Dw

    Directory of Open Access Journals (Sweden)

    Randy Oktrima Putra

    2014-02-01

    A company, especially one that is profit-oriented, needs to analyze its sales performance; by doing so, it can improve that performance. One method for analyzing sales performance is to collect historical sales-related data and process it to produce information that shows the company's sales performance. A data warehouse is a set of data that is subject-oriented, time-variant, integrated and non-volatile, and that helps company management in decision making. The design of the data warehouse starts with collecting data related to sales, such as product, customer, sales area and sales transactions. The data are then extracted and transformed: extraction selects the data to be loaded into the data warehouse, and transformation makes the extracted data more consistent. After transformation, the data are loaded into the data warehouse and processed with OLAP (On-Line Analytical Processing) to produce information in the form of charts and query reports. The chart reports include sales by cement type, sales by sales area, sales by plant, monthly and yearly sales, and customer feedback; the query reports cover sales by cement type, sales area, plant and customer. Keywords: Data warehouse; OLAP; Sales performance analysis; Ready mix market
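
    The OLAP-style reports described above reduce to aggregations over a star schema. A self-contained sketch using sqlite3, with a hypothetical fact/dimension layout (table and column names are assumptions):

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
        CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, cement_type TEXT);
        CREATE TABLE dim_area (area_id INTEGER PRIMARY KEY, area_name TEXT);
        CREATE TABLE fact_sales (product_id INTEGER, area_id INTEGER,
                                 sale_month TEXT, quantity REAL);
        INSERT INTO dim_product VALUES (1, 'Portland'), (2, 'Ready mix');
        INSERT INTO dim_area VALUES (1, 'North'), (2, 'South');
        INSERT INTO fact_sales VALUES (1, 1, '2014-01', 120), (2, 1, '2014-01', 80),
                                      (2, 2, '2014-02', 200);
        """)

        # A roll-up by cement type and sales area, as in the chart reports above.
        for row in con.execute("""
            SELECT p.cement_type, a.area_name, SUM(f.quantity)
            FROM fact_sales f
            JOIN dim_product p ON p.product_id = f.product_id
            JOIN dim_area a ON a.area_id = f.area_id
            GROUP BY p.cement_type, a.area_name
        """):
            print(row)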

  11. Warehouse operations planning model for Bausch & Lomb

    NARCIS (Netherlands)

    Atilgan, Ceren

    2009-01-01

    Operations planning is a major part of the Sales & Operations Planning (S&OP) process. It provides an overview of the operations capacity requirements by considering the supply and demand plan. However, Bausch & Lomb does not have a structured operations planning process for their warehouse

  12. Unexpected levels and movement of radon in a large warehouse

    International Nuclear Information System (INIS)

    Gammage, R.B.; Espinosa, G.

    2004-01-01

    Alpha-track detectors, used in screening for radon, identified a large warehouse with levels of radon as high as 20 pCi/L. This circumstance was unexpected because large bay doors were left open for much of the day to admit 18-wheeler trucks, and exhaust fans in the roof produced good ventilation. More detailed temporal and spatial investigations of radon and air-flow patterns were made with electret chambers, Lucas-cell flow chambers, tracer gas, smoke pencils and pressure-sensing micrometers. An oval, dome-shaped zone of radon (>4 pCi/L) persisted in the central region of each of the four separate bays composing the warehouse. Detailed studies of air movement in the bay with the highest levels of radon showed clockwise rotation of air near the outer walls with a central dead zone. Sub-slab, radon-laden air enters the building through expansion joints between the floor slabs to produce the measured radon. The likely source of radon is air within the porous karst bedrock that underlies much of north-central Tennessee, where the warehouse is situated.

  13. Clinical Data Warehouse: An Effective Tool to Create Intelligence in Disease Management.

    Science.gov (United States)

    Karami, Mahtab; Rahimi, Azin; Shahmirzadi, Ali Hosseini

    Clinical business intelligence tools such as clinical data warehouse enable health care organizations to objectively assess the disease management programs that affect the quality of patients' life and well-being in public. The purpose of these programs is to reduce disease occurrence, improve patient care, and decrease health care costs. Therefore, applying clinical data warehouse can be effective in generating useful information about aspects of patient care to facilitate budgeting, planning, research, process improvement, external reporting, benchmarking, and trend analysis, as well as to enable the decisions needed to prevent the progression or appearance of the illness aligning with maintaining the health of the population. The aim of this review article is to describe the benefits of clinical data warehouse applications in creating intelligence for disease management programs.

  14. Population dynamics of stored maize insect pests in warehouses in two districts of Ghana

    Science.gov (United States)

    Understanding what insect species are present and their temporal and spatial patterns of distribution is important for developing a successful integrated pest management strategy for food storage in warehouses. Maize in many countries in Africa is stored in bags in warehouses, but little monitoring ...

  15. A Simulation Modeling Approach Method Focused on the Refrigerated Warehouses Using Design of Experiment

    Science.gov (United States)

    Cho, G. S.

    2017-09-01

    For performance optimization of refrigerated warehouses, design parameters are selected based on physical parameters, such as the number of equipment units and aisles and the speeds of forklifts, for ease of modification. This paper provides a comprehensive framework for the system design of refrigerated warehouses. We propose a modeling approach aimed at simulation optimization, so as to meet required design specifications, using Design of Experiments (DOE), and analyze a simulation model using an integrated aspect-oriented modeling approach (i-AOMA). As a result, the suggested method can evaluate the performance of a variety of refrigerated warehouse operations.

  16. Application experiences with the Globus toolkit.

    Energy Technology Data Exchange (ETDEWEB)

    Brunett, S.

    1998-06-09

    The Globus grid toolkit is a collection of software components designed to support the development of applications for high-performance distributed computing environments, or "computational grids" [14]. The Globus toolkit is an implementation of a "bag of services" architecture, which provides application and tool developers not with a monolithic system but rather with a set of stand-alone services. Each Globus component provides a basic service, such as authentication, resource allocation, information, communication, fault detection, and remote data access. Different applications and tools can combine these services in different ways to construct "grid-enabled" systems. The Globus toolkit has been used to construct the Globus Ubiquitous Supercomputing Testbed, or GUSTO: a large-scale testbed spanning 20 sites and including over 4000 compute nodes for a total compute power of over 2 TFLOPS. Over the past six months, we and others have used this testbed to conduct a variety of application experiments, including multi-user collaborative environments (tele-immersion), computational steering, distributed supercomputing, and high-throughput computing. The goal of this paper is to review what has been learned from these experiments regarding the effectiveness of the toolkit approach. To this end, we describe two of the application experiments in detail, noting what worked well and what worked less well. The two applications are a distributed supercomputing application, SF-Express, in which multiple supercomputers are harnessed to perform large distributed interactive simulations; and a tele-immersion application, CAVERNsoft, in which the focus is on connecting multiple people to a distributed simulated world.

  17. A Million Cancer Genome Warehouse

    Science.gov (United States)

    2012-11-20

    …of a national program for Cancer Information Donors, the American Society of Clinical Oncology (ASCO) has proposed a rapid learning system for… or Scala and Spark; "scrum" organization of small programming teams; calculating "velocity" to predict time to develop new features; and Agile…

  18. Contextual snowflake modelling for pattern warehouse logical design

    Indian Academy of Sciences (India)

    …being managed by the pattern warehouse management system (PWMS)… The authors pointed out the necessity of finding the relationship between patterns… (i) Some customer queries can only be satisfied by a specific DM technique.

  19. TRSkit: A Simple Digital Library Toolkit

    Science.gov (United States)

    Nelson, Michael L.; Esler, Sandra L.

    1997-01-01

    This paper introduces TRSkit, a simple and effective toolkit for building digital libraries on the World Wide Web. The toolkit was developed for the creation of the Langley Technical Report Server and the NASA Technical Report Server, but is applicable to most simple distribution paradigms. TRSkit contains a handful of freely available software components designed to be run under the UNIX operating system and served via the World Wide Web. The intended customer is the person who must continuously and synchronously distribute anywhere from 100 to 100,000s of information units and does not have extensive resources to devote to the problem.

  20. IR and OLAP in XML document warehouses

    DEFF Research Database (Denmark)

    Perez, Juan Manuel; Pedersen, Torben Bach; Berlanga, Rafael

    2005-01-01

    In this paper we propose to combine IR and OLAP (On-Line Analytical Processing) technologies to exploit a warehouse of text-rich XML documents. In the system we plan to develop, a multidimensional implementation of a relevance modeling document model will be used for interactively querying...

  1. Configurable Web Warehouses construction through BPM Systems

    Directory of Open Access Journals (Sweden)

    Andrea Delgado

    2016-08-01

    The process of building Data Warehouses (DW) is well known, with well-defined stages, but at the same time it is mostly carried out manually by IT people in conjunction with business people. Web Warehouses (WW) are DW whose data sources are taken from the web. We define a flexible WW that can be configured for different domains through the selection of web sources and the definition of data-processing characteristics. A Business Process Management (BPM) System allows modeling and executing Business Processes (BPs), providing support for the automation of processes. To support the process of building flexible WW we propose two levels of BPs: a configuration process to support the selection of web sources and the definition of schemas and mappings, and a feeding process which takes the defined configuration and loads the data into the WW. In this paper we present a proof of concept of both processes, with a focus on the configuration process and the defined data.

  2. Design of data warehouse in teaching state based on OLAP and data mining

    Science.gov (United States)

    Zhou, Lijuan; Wu, Minhua; Li, Shuang

    2009-04-01

    Data warehouse and data mining technology is one of the hot topics in information technology research. At present it is widely applied in commerce, the financial industry, and enterprises' production and marketing, but relatively little in education. Over the years, universities have accumulated large amounts of teaching and management data that cannot be used effectively. In light of the social needs of university development and the current status of data management, establishing a data warehouse of the university teaching state, making better use of existing data, and on that basis performing higher-level processing, namely data mining, are particularly important. In this paper, starting from decision-making needs, we design the data warehouse structure for the university teaching state, create a data warehouse model through schema design and data extraction, loading and conversion, and finally apply an association rule mining algorithm for data mining, obtaining effective results applied in practice. The data analysis and mining yield much valuable information that can be used to guide teaching management, thereby improving teaching quality and promoting investment in teaching and the teaching infrastructure in universities. At the same time it can provide detailed, multidimensional information for university assessment and higher-education research.
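
    As a hedged illustration of the association-rule step, a tiny Apriori-style pass over toy enrollment transactions; the support and confidence thresholds and the data are invented for the example:

        from itertools import combinations

        transactions = [
            {"databases", "data_mining", "statistics"},
            {"databases", "data_mining"},
            {"databases", "networks"},
            {"data_mining", "statistics"},
        ]
        MIN_SUPPORT, MIN_CONFIDENCE = 0.5, 0.7

        def support(itemset):
            """Fraction of transactions containing every item in the itemset."""
            return sum(itemset <= t for t in transactions) / len(transactions)

        items = {i for t in transactions for i in t}
        frequent_pairs = [set(p) for p in combinations(sorted(items), 2)
                          if support(set(p)) >= MIN_SUPPORT]

        # Derive rules A -> B from frequent pairs and filter by confidence.
        for pair in frequent_pairs:
            for a in pair:
                b = (pair - {a}).pop()
                conf = support(pair) / support({a})
                if conf >= MIN_CONFIDENCE:
                    print("%s -> %s (support %.2f, confidence %.2f)"
                          % (a, b, support(pair), conf))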

  3. Communities and Spontaneous Urban Planning: A Toolkit for Urban ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    State-led urban planning is often absent, which creates unsustainable environments and hinders the integration of migrants. Communities' prospects of ... This toolkit is expected to be a viable alternative for planning urban expansion wherever it cannot be carried out through traditional means. The toolkit will be tested in ...

  4. Medical Big Data Warehouse: Architecture and System Design, a Case Study: Improving Healthcare Resources Distribution.

    Science.gov (United States)

    Sebaa, Abderrazak; Chikh, Fatima; Nouicer, Amina; Tari, AbdelKamel

    2018-02-19

    The huge increase in medical devices and clinical applications generating enormous data has raised a big issue in managing, processing, and mining this massive amount of data. Indeed, traditional data warehousing frameworks cannot be effective when managing the volume, variety, and velocity of current medical applications. As a result, several data warehouses face many issues with medical data, and many challenges need to be addressed. New solutions have emerged, and Hadoop is one of the best examples; it can be used to process these streams of medical data. However, without an efficient system design and architecture, this performance will not be significant and valuable for medical managers. In this paper, we provide a short review of the literature on research issues of traditional data warehouses and present some important Hadoop-based data warehouses. In addition, a Hadoop-based architecture and a conceptual data model for designing a medical Big Data warehouse are given. In our case study, we provide implementation details of a big data warehouse based on the proposed architecture and data model on the Apache Hadoop platform, to ensure an optimal allocation of health resources.
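
    A minimal sketch of the loading side of such a Hadoop-based warehouse, using PySpark with Hive support; the paths, database and table names are assumptions for illustration:

        from pyspark.sql import SparkSession

        # A Hive-enabled Spark session provides SQL access over data stored in HDFS.
        spark = (SparkSession.builder
                 .appName("medical-dw-load")
                 .enableHiveSupport()
                 .getOrCreate())

        spark.sql("CREATE DATABASE IF NOT EXISTS medical_dw")

        # Ingest a raw admissions extract (hypothetical path and schema).
        df = spark.read.csv("hdfs:///staging/admissions.csv", header=True, inferSchema=True)

        # Persist it as a partitioned warehouse table for downstream analysis.
        df.write.mode("overwrite").partitionBy("region").saveAsTable("medical_dw.admissions")

        # Example analytical query: admissions per region.
        spark.sql("SELECT region, COUNT(*) AS n FROM medical_dw.admissions GROUP BY region").show()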

  5. Two-warehouse partial backlogging inventory model for deteriorating items with linear trend in demand under inflationary conditions

    Science.gov (United States)

    Jaggi, Chandra K.; Khanna, Aditi; Verma, Priyanka

    2011-07-01

    In today's business transactions, there are various reasons, namely bulk purchase discounts, re-ordering costs, seasonality of products, inflation-induced demand, etc., which force the buyer to order more than the warehouse capacity. Such situations call for additional storage space to store the excess units purchased; this additional storage space is typically a rented warehouse. Inflation plays a very interesting and significant role here: it increases the cost of goods. To safeguard against rising prices during an inflationary regime, the organisation prefers to keep a higher inventory, thereby increasing aggregate demand. This additional inventory needs additional storage space, which is provided by a rented warehouse. Ignoring the effects of the time value of money and inflation might yield misleading results. In this study, a two-warehouse inventory model with a linear trend in demand under inflationary conditions, with different deterioration rates in the two warehouses, has been developed. Shortages at the owned warehouse are also allowed, subject to partial backlogging. The solution methodology provided in the model helps to decide on the feasibility of renting a warehouse. Finally, the findings are illustrated with numerical examples, and a comprehensive sensitivity analysis is provided.
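
    To fix ideas, a minimal formalization of the ingredients named above, in generic textbook notation rather than the authors' exact symbols: demand grows linearly in time, costs are discounted to present value under inflation, and stock beyond the owned warehouse capacity W overflows to the rented warehouse.

        D(t) = a + bt, \qquad a \ge 0,\ b > 0
            % linear trend in demand
        \mathrm{PV} = \int_{0}^{T} c(t)\, e^{-(r - i)t}\, dt
            % present value of a cost stream c(t), with discount rate r and inflation rate i
        I_{\mathrm{own}}(t) = \min\{ I(t),\, W \}, \qquad I_{\mathrm{rent}}(t) = \max\{ I(t) - W,\, 0 \}
            % split of on-hand inventory I(t) between owned and rented warehouses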

  6. The Pediatrix BabySteps® Data Warehouse--a unique national resource for improving outcomes for neonates.

    Science.gov (United States)

    Spitzer, Alan R; Ellsbury, Dan; Clark, Reese H

    2015-01-01

    The Pediatrix Medical Group Clinical Data Warehouse represents a unique electronic data capture system for the assessment of outcomes, the management of quality improvement (CQI) initiatives, and the resolution of important research questions in the neonatal intensive care unit (NICU). This system is described in detail and the manner in which the Data Warehouse has been used to measure and improve patient outcomes through CQI projects and research is outlined. The Pediatrix Data Warehouse now contains more than 1 million patients, serving as an exceptional tool for evaluating NICU care. Examples are provided of how significant outcome improvement has been achieved and several papers are cited that have used the "Big Data" contained in the Data Warehouse for novel observations that could not be made otherwise.

  7. Warehousing performance improvement using Frazelle Model and per group benchmarking: A case study in retail warehouse in Yogyakarta and Central Java

    Directory of Open Access Journals (Sweden)

    Kusrini Elisa

    2018-01-01

    Warehouse performance management has an important role in improving logistics business activities. Good warehouse management can increase profit, delivery time, quality and customer service. This study assesses the performance of retail warehouses in supermarkets located in Central Java and Yogyakarta. Performance improvement is proposed based on warehouse measurement using the Frazelle model (2002), which measures five indicators, namely financial, productivity, utility, quality and cycle time, across five warehousing business processes: receiving, put-away, storage, order picking and shipping. To obtain a more precise performance measure, the indicators are weighted using the Analytic Hierarchy Process (AHP) method; warehouse performance is then measured and the final score determined using the SNORM method. The study yields the final score of each warehouse and opportunities to improve warehouse performance using peer-group benchmarking
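
    A small sketch of the scoring arithmetic: AHP supplies indicator weights, each metric is normalized to a 0-100 scale with the SNORM (De Boer) formula, and the final score is the weighted sum. The weights, bounds and values below are invented for the example:

        def snorm(value, worst, best):
            """SNORM (De Boer) normalization to a 0-100 scale; works for
            larger-is-better metrics (best > worst) and smaller-is-better
            ones (worst > best) alike."""
            return 100.0 * (value - worst) / (best - worst)

        # Hypothetical AHP weights and measured metrics for one warehouse.
        weights = {"financial": 0.30, "productivity": 0.25, "utility": 0.15,
                   "quality": 0.20, "cycle_time": 0.10}
        metrics = {  # (measured value, worst observed, best observed)
            "financial":    (0.82, 0.50, 1.00),
            "productivity": (140, 90, 180),
            "utility":      (0.75, 0.40, 0.95),
            "quality":      (0.97, 0.90, 1.00),
            "cycle_time":   (36, 72, 24),   # hours: lower is better, so worst > best
        }

        final = sum(weights[k] * snorm(*metrics[k]) for k in weights)
        print("final warehouse score: %.1f / 100" % final)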

  8. Order Picking Process in Warehouse: Case Study of Dairy Industry in Croatia

    Directory of Open Access Journals (Sweden)

    Josip Habazin

    2017-02-01

    The proper functioning of warehouse processes is fundamental for operational improvement and for the improvement of the overall logistic supply chain. Order picking is considered one of the most important of these processes. Order picking in warehouses relies heavily on human work, and the main goal is to reduce the process time to the minimum possible. There are several different order picking methods; the most common ones being developed today depend significantly on the type of goods, the warehouse equipment, etc., and those that stand out are scanning and picking by voice. This paper provides information on the dairy industry in the Republic of Croatia, with an analysis of the order picking process in the observed company. The research highlighted the problem and resulted in proposed solutions.

  9. Design-based learning in classrooms using playful digital toolkits

    NARCIS (Netherlands)

    Scheltenaar, K.J.; van der Poel, J.E.C.; Bekker, Tilde

    2015-01-01

    The goal of this paper is to explore how to implement Design Based Learning (DBL) with digital toolkits to teach 21st century skills in (Dutch) schools. It describes the outcomes of a literature study and two design case studies in which such a DBL approach with digital toolkits was iteratively

  10. The JANA calibrations and conditions database API

    International Nuclear Information System (INIS)

    Lawrence, David

    2010-01-01

    Calibrations and conditions databases can be accessed from within the JANA Event Processing framework through the API defined in its JCalibration base class. The API is designed to support everything from databases to web services to flat files as the backend. A Web Service backend using the gSOAP toolkit has been implemented, which is particularly interesting since it addresses many modern cybersecurity issues, including support for SSL. The API allows constants to be retrieved through a single line of C++ code, with most of the context, including the transport mechanism, being implied by the run currently being analyzed and the environment, relieving developers from implementing such details.

  11. The JANA calibrations and conditions database API

    Energy Technology Data Exchange (ETDEWEB)

    Lawrence, David, E-mail: davidl@jlab.or [12000 Jefferson Ave., Suite 8, Newport News, VA 23601 (United States)

    2010-04-01

    Calibrations and conditions databases can be accessed from within the JANA Event Processing framework through the API defined in its JCalibration base class. The API is designed to support everything from databases to web services to flat files as the backend. A Web Service backend using the gSOAP toolkit has been implemented, which is particularly interesting since it addresses many modern cybersecurity issues, including support for SSL. The API allows constants to be retrieved through a single line of C++ code, with most of the context, including the transport mechanism, being implied by the run currently being analyzed and the environment, relieving developers from implementing such details.

  12. An Overview of the Geant4 Toolkit

    CERN Document Server

    Apostolakis, John

    2007-01-01

    Geant4 is a toolkit for the simulation of the transport of radiation through matter. With a flexible kernel and a choice of physics modeling options, it has been tailored to the requirements of a wide range of applications. With the toolkit a user can describe a setup's or detector's geometry and materials, navigate inside it, simulate the physical interactions using a choice of physics engines, underlying physics cross-sections and models, visualise and store results. Physics models describing electromagnetic and hadronic interactions are provided, as are decays and processes for optical photons. Several models, with different precision and performance, are available for many processes. The toolkit includes coherent physics model configurations, which are called physics lists. Users can choose an existing physics list or create their own, depending on their requirements and the application area. A clear structure and readable code enable the user to investigate the origin of physics results. App...

  13. The Lean and Environment Toolkit

    Science.gov (United States)

    This Lean and Environment Toolkit assembles practical experience collected by the U.S. Environmental Protection Agency (EPA) and partner companies and organizations that have experience with coordinating Lean implementation and environmental management.

  14. The Impact of E-Commerce Development on the Warehouse Space Market in Poland

    Directory of Open Access Journals (Sweden)

    Dembińska Izabela

    2016-12-01

    The subject of discussion in this article is the impact of the e-commerce sector on the warehouse space market. On the basis of available reports, the development of e-commerce in Poland is characterized, showing the dynamics and the type of change. The needs of the e-commerce sector in the field of logistics, in particular in the area of storage, are presented, and how representatives of the warehouse space market are prepared to support companies in the e-commerce sector is also discussed. The considerations are illustrated by the changes occurring on the warehouse space market in Poland as a result of the development of e-commerce.

  15. Worldwide Warehouse: A Customer Perspective

    Science.gov (United States)

    1994-09-01

    …Management Office (PMO) and the customers (returnees and buyers) will be developed or adapted from existing software programs. The hardware could be… Customer requirements and desires are the first aspect to be approached. Sections 4.7 to 4.11 were dedicated to investigating those relationships and…

  16. Minimizing Warehouse Space with a Dedicated Storage Policy

    Directory of Open Access Journals (Sweden)

    Andrea Fumi

    2013-07-01

    …inevitably be supported by warehouse management system software. On the contrary, the proposed methodology relies upon a dedicated storage policy, which is easily implementable by companies of all sizes without the need for investing in expensive IT tools.

  17. Ontology based heterogeneous materials database integration and semantic query

    Science.gov (United States)

    Zhao, Shuai; Qian, Quan

    2017-10-01

    Materials digital data, high-throughput experiments and high-throughput computations are regarded as the three key pillars of materials genome initiatives. With the fast growth of materials data, the integration and sharing of data have become urgent and have gradually become a hot topic in materials informatics. Due to the lack of semantic description, it is difficult to integrate data deeply at the semantic level when adopting conventional heterogeneous database integration approaches such as federated databases or data warehouses. In this paper, a semantic integration method is proposed that creates a semantic ontology by extracting the database schema semi-automatically. Other heterogeneous databases are integrated into the ontology by means of relational algebra and rooted graphs. Based on the integrated ontology, semantic queries can be run using SPARQL. In the experiments, two well-known first-principles computational databases, OQMD and Materials Project, are used as the integration targets, which shows the availability and effectiveness of our method.
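
    A sketch of what the SPARQL-based semantic query step could look like with Python's rdflib; the ontology file, namespace and property names below are hypothetical placeholders, not the paper's actual schema:

        import rdflib

        g = rdflib.Graph()
        g.parse("materials_ontology.ttl", format="turtle")  # integrated ontology (assumed file)

        # SPARQL over the integrated graph: compounds with a band gap above 1 eV.
        results = g.query("""
            PREFIX mat: <http://example.org/materials#>
            SELECT ?compound ?bandgap
            WHERE {
                ?compound mat:bandGap ?bandgap .
                FILTER (?bandgap > 1.0)
            }
            ORDER BY DESC(?bandgap)
        """)
        for compound, bandgap in results:
            print(compound, bandgap)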

  18. Desain Sistem Semantic Data Warehouse dengan Metode Ontology dan Rule Based untuk Mengolah Data Akademik Universitas XYZ di Bali

    Directory of Open Access Journals (Sweden)

    Made Pradnyana Ambara

    2016-06-01

    Conventional data warehouses, commonly known as traditional data warehouses, have several weaknesses that make the resulting data quality unspecific and ineffective. A semantic data warehouse system is a solution for handling the problems of traditional data warehouses, with advantages that include specific data-quality management with a uniform data format to support good OLAP reports, and more effective information-retrieval performance using natural-language keywords. Modeling the semantic data warehouse system with an ontology method produces a resource description framework schema (RDFS) logic model, which is then transformed into a snowflake schema. The required academic reports are produced through Kimball's nine-step method, and semantic search uses a rule-based method. Testing was performed with two methods: black-box testing and a checklist questionnaire. From the results of this research it can be concluded that the semantic data warehouse system can support the processing of academic data and produce high-quality reports to support decision making.

  19. Lean and Information Technology Toolkit

    Science.gov (United States)

    The Lean and Information Technology Toolkit is a how-to guide which provides resources to environmental agencies to help them use Lean Startup, Lean process improvement, and Agile tools to streamline and automate processes.

  20. The Analytic Information Warehouse (AIW): a platform for analytics using electronic health record data.

    Science.gov (United States)

    Post, Andrew R; Kurc, Tahsin; Cholleti, Sharath; Gao, Jingjing; Lin, Xia; Bornstein, William; Cantrell, Dedra; Levine, David; Hohmann, Sam; Saltz, Joel H

    2013-06-01

    To create an analytics platform for specifying and detecting clinical phenotypes and other derived variables in electronic health record (EHR) data for quality improvement investigations. We have developed an architecture for an Analytic Information Warehouse (AIW). It supports transforming data represented in different physical schemas into a common data model, specifying derived variables in terms of the common model to enable their reuse, computing derived variables while enforcing invariants and ensuring correctness and consistency of data transformations, long-term curation of derived data, and export of derived data into standard analysis tools. It includes software that implements these features and a computing environment that enables secure high-performance access to and processing of large datasets extracted from EHRs. We have implemented and deployed the architecture in production locally. The software is available as open source. We have used it as part of hospital operations in a project to reduce rates of hospital readmission within 30 days. The project examined the association of over 100 derived variables representing disease and co-morbidity phenotypes with readmissions in 5 years of data from our institution's clinical data warehouse and the UHC Clinical Database (CDB). The CDB contains administrative data from over 200 hospitals that are in academic medical centers or affiliated with such centers. A widely available platform for managing and detecting phenotypes in EHR data could accelerate the use of such data in quality improvement and comparative effectiveness studies. Copyright © 2013 Elsevier Inc. All rights reserved.
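
    The notion of a derived variable can be made concrete with a short sketch computing a 30-day readmission flag from raw admission records; the field names and data model are assumptions, not the AIW schema:

        from datetime import date
        from collections import defaultdict

        admissions = [  # hypothetical EHR extract: (patient_id, admit_date, discharge_date)
            ("p1", date(2012, 1, 3), date(2012, 1, 9)),
            ("p1", date(2012, 1, 25), date(2012, 2, 1)),   # 16 days after prior discharge
            ("p2", date(2012, 3, 10), date(2012, 3, 14)),
        ]

        def readmitted_within_30_days(records):
            """Derive a per-patient boolean: any admission within 30 days of a prior discharge."""
            by_patient = defaultdict(list)
            for pid, admit, discharge in records:
                by_patient[pid].append((admit, discharge))
            flags = {}
            for pid, stays in by_patient.items():
                stays.sort()
                flags[pid] = any((nxt_admit - prev_discharge).days <= 30
                                 for (_, prev_discharge), (nxt_admit, _) in zip(stays, stays[1:]))
            return flags

        print(readmitted_within_30_days(admissions))  # {'p1': True, 'p2': False}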

  1. Benefits of a clinical data warehouse with data mining tools to collect data for a radiotherapy trial

    International Nuclear Information System (INIS)

    Roelofs, Erik; Persoon, Lucas; Nijsten, Sebastiaan; Wiessler, Wolfgang; Dekker, André; Lambin, Philippe

    2013-01-01

    Introduction: Collecting trial data in a medical environment is at present mostly performed manually and therefore time-consuming, prone to errors and often incomplete with the complex data considered. Faster and more accurate methods are needed to improve the data quality and to shorten data collection times where information is often scattered over multiple data sources. The purpose of this study is to investigate the possible benefit of modern data warehouse technology in the radiation oncology field. Material and methods: In this study, a Computer Aided Theragnostics (CAT) data warehouse combined with automated tools for feature extraction was benchmarked against the regular manual data-collection processes. Two sets of clinical parameters were compiled for non-small cell lung cancer (NSCLC) and rectal cancer, using 27 patients per disease. Data collection times and inconsistencies were compared between the manual and the automated extraction method. Results: The average time per case to collect the NSCLC data manually was 10.4 ± 2.1 min and 4.3 ± 1.1 min when using the automated method (p < 0.001). For rectal cancer, these times were 13.5 ± 4.1 and 6.8 ± 2.4 min, respectively (p < 0.001). In 3.2% of the data collected for NSCLC and 5.3% for rectal cancer, there was a discrepancy between the manual and automated method. Conclusions: Aggregating multiple data sources in a data warehouse combined with tools for extraction of relevant parameters is beneficial for data collection times and offers the ability to improve data quality. The initial investments in digitizing the data are expected to be compensated due to the flexibility of the data analysis. Furthermore, successive investigations can easily select trial candidates and extract new parameters from the existing databases

  2. From capturing nursing knowledge to retrieval of data from a data warehouse.

    Science.gov (United States)

    Thoroddsen, Asta; Guðjónsdóttir, Hanna K; Guðjónsdóttir, Elisabet

    2014-01-01

    The purpose of the project was to capture nursing data and knowledge and to represent it for use and re-use through retrieval from a data warehouse that contains both clinical and financial hospital data. Today, nurses at LUH use standardized nursing terminologies to document information related to patients and nursing care in the EHR at all times. Pre-defined order sets for nursing care have been developed using best practice where available; tacit nursing knowledge has been captured, coded with standardized nursing terminologies and made explicit for dissemination in the EHR. All patient-nursing data are permanently stored in a data repository. Core nursing data elements have been selected for transfer and storage in the data warehouse, where patient-nursing data are now captured, stored, related to other data elements and retrieved for use and re-use.

  3. 78 FR 65300 - Notice of Availability (NOA) for General Purpose Warehouse and Information Technology Center...

    Science.gov (United States)

    2013-10-31

    ... (NOA) for General Purpose Warehouse and Information Technology Center Construction (GPW/IT)--Tracy Site... proposed action to construct a General Purpose Warehouse and Information Technology Center at Defense..., Suite 02G09, Alexandria, VA 22350-3100. FOR FURTHER INFORMATION CONTACT: Ann Engelberger at (703) 767...

  4. Outage Risk Assessment and Management (ORAM) thermal-hydraulics toolkit

    International Nuclear Information System (INIS)

    Denny, V.E.; Wassel, A.T.; Issacci, F.; Pal Kalra, S.

    2004-01-01

    A PC-based thermal-hydraulic toolkit for use in support of outage optimization, management and risk assessment has been developed. This mechanistic toolkit incorporates simple models of key thermal-hydraulic processes which occur during an outage, such as recovery from or mitigation of outage upsets; this includes heat-up of water pools following loss of shutdown cooling, inadvertent drain down of the RCS, boil-off of coolant inventory, heat-up of the uncovered core, and reflux cooling. This paper provides a list of key toolkit elements, briefly describes the technical basis and presents illustrative results for RCS transient behavior during reflux cooling, peak clad temperatures for an uncovered core and RCS response to loss of shutdown cooling. (author)

  5. Migration from relational to NoSQL database

    Science.gov (United States)

    Ghotiya, Sunita; Mandal, Juhi; Kandasamy, Saravanakumar

    2017-11-01

    Data generated by real-time applications, social networking sites, and sensor devices is both very large in volume and unstructured, which makes it difficult for relational database management systems to handle. Data is a precious component of any application and needs to be arranged in some structure before it can be analysed. Relational databases can only deal with structured data, hence the need for NoSQL database management systems, which can also deal with semi-structured data. Relational databases provide the easiest way to manage data, but as the use of NoSQL increases it is becoming necessary to migrate data from relational to NoSQL databases. Various frameworks have been proposed that provide mechanisms for migrating data stored in SQL warehouses, as well as middle-layer solutions that allow data that is not structured to be stored in NoSQL databases. This paper provides a literature review of some of the recent approaches proposed by various researchers to migrate data from relational to NoSQL databases. Some researchers have proposed mechanisms for the co-existence of NoSQL and relational databases. The paper summarises mechanisms that can be used for mapping data stored in relational databases to NoSQL databases, along with various techniques for data transformation and middle-layer solutions.
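
    One common pattern that such migration frameworks implement is denormalization: parent rows and their child rows are folded into self-contained documents that a NoSQL store can load in bulk. The sketch below illustrates the idea with the Python standard library only; the schema, column names, and document layout are invented for illustration and are not taken from any of the reviewed frameworks.

        # Denormalize relational parent/child rows into JSON documents.
        import json
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE orders (id INTEGER PRIMARY KEY,
                                 customer_id INTEGER, total REAL);
            INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
            INSERT INTO orders VALUES (10, 1, 99.5), (11, 1, 12.0),
                                      (12, 2, 5.25);
        """)

        documents = []
        for cid, name in conn.execute("SELECT id, name FROM customers").fetchall():
            orders = [{"order_id": oid, "total": total}
                      for oid, total in conn.execute(
                          "SELECT id, total FROM orders WHERE customer_id = ?",
                          (cid,))]
            # One nested document per customer replaces the join at read time.
            documents.append({"_id": cid, "name": name, "orders": orders})

        print(json.dumps(documents, indent=2))  # ready for a document-store bulk load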

  6. SysBioCube: A Data Warehouse and Integrative Data Analysis Platform Facilitating Systems Biology Studies of Disorders of Military Relevance.

    Science.gov (United States)

    Chowbina, Sudhir; Hammamieh, Rasha; Kumar, Raina; Chakraborty, Nabarun; Yang, Ruoting; Mudunuri, Uma; Jett, Marti; Palma, Joseph M; Stephens, Robert

    2013-01-01

    SysBioCube is an integrated data warehouse and analysis platform for experimental data relating to diseases of military relevance, developed for the US Army Medical Research and Materiel Command Systems Biology Enterprise (SBE). It brings together, under a single database environment, pathophysiological, psychological, molecular and biochemical data from mouse models of post-traumatic stress disorder and (pre-)clinical data from human PTSD patients. SysBioCube will organize, centralize and normalize these data and will provide the SBE with an access portal for subsequent analysis. It provides new or expanded browsing, querying and visualization capabilities that foster a better understanding of the systems biology of PTSD, all brought about through the integrated environment. We employ Oracle database technology to store the data using an integrated hierarchical database schema design. The web interface provides researchers with systematic information and the option to interrogate the profiles of pan-omics components across different data types, experimental designs and other covariates.

  7. Protocol for a national blood transfusion data warehouse from donor to recipient.

    Science.gov (United States)

    van Hoeven, Loan R; Hooftman, Babette H; Janssen, Mart P; de Bruijne, Martine C; de Vooght, Karen M K; Kemper, Peter; Koopman, Maria M W

    2016-08-04

    Blood transfusion has health-related, economic and safety implications. In order to optimise the transfusion chain, comprehensive research data are needed. The Dutch Transfusion Data warehouse (DTD) project aims to establish a data warehouse where data from donors and transfusion recipients are linked. This paper describes the design of the data warehouse, its challenges and illustrative applications. Quantitative data on blood donors (eg, age, blood group, antibodies) and products (type of product, processing, storage time) are obtained from the national blood bank. These are linked to data on the transfusion recipients (eg, transfusions administered, patient diagnosis, surgical procedures, laboratory parameters), which are extracted from hospital electronic health records. Expected scientific contributions are illustrated for 4 applications: determining risk factors, predicting blood use, benchmarking blood use and optimising process efficiency. For each application, examples of research questions are given and the planned analyses described. The DTD project aims to build a national, continuously updated transfusion data warehouse. These data have a wide range of applications, on the donor/production side, in recipient studies on blood usage and benchmarking, and in donor-recipient studies, which ultimately can contribute to the efficiency and safety of blood transfusion. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
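
    The core of such a warehouse is the linkage step: product records from the blood bank joined to transfusion records from hospital EHRs on a shared product identifier. A hedged sketch of that join in pandas follows; every identifier and column below is invented for illustration.

        # Join toy blood-product records to toy transfusion records.
        import pandas as pd

        products = pd.DataFrame({
            "product_id":   ["P1", "P2", "P3"],
            "blood_group":  ["O+", "A-", "O+"],
            "storage_days": [12, 30, 5],
        })
        transfusions = pd.DataFrame({
            "product_id": ["P1", "P3"],
            "patient_id": [101, 102],
            "diagnosis":  ["anaemia", "surgery"],
        })

        # One row per transfusion, enriched with donor-side product data.
        linked = transfusions.merge(products, on="product_id", how="left")
        print(linked)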

  8. On sustainable operation of warehouse order picking systems

    NARCIS (Netherlands)

    Andriansyah, R.; Etman, L.F.P.; Rooda, J.E.

    2009-01-01

    Sustainable development calls for an efficient utilization of natural and human resources. This issue also arises for warehouse systems, where typically extensive capital investment and labor intensive work are involved. It is therefore important to assess and continuously monitor the performance of

  9. Cinfony – combining Open Source cheminformatics toolkits behind a common interface

    Directory of Open Access Journals (Sweden)

    Hutchison Geoffrey R

    2008-12-01

    Background: Open Source cheminformatics toolkits such as OpenBabel, the CDK and the RDKit share the same core functionality but support different sets of file formats and forcefields, and calculate different fingerprints and descriptors. Despite their complementary features, using these toolkits in the same program is difficult as they are implemented in different languages (C++ versus Java), have different underlying chemical models and have different application programming interfaces (APIs). Results: We describe Cinfony, a Python module that presents a common interface to all three of these toolkits, allowing the user to easily combine methods and results from any of the toolkits. In general, the run time of the Cinfony modules is almost as fast as accessing the underlying toolkits directly from C++ or Java, but Cinfony makes it much easier to carry out common tasks in cheminformatics such as reading file formats and calculating descriptors. Conclusion: By providing a simplified interface and improving interoperability, Cinfony makes it easy to combine complementary features of OpenBabel, the CDK and the RDKit.
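
    A minimal usage sketch, assuming a working Cinfony installation with the OpenBabel and RDKit backends configured (method names follow Cinfony's Pybel-style interface): a molecule parsed with one toolkit is handed to another through the common API.

        # Parse with OpenBabel, then fingerprint the same molecule with RDKit.
        from cinfony import pybel, rdk

        mol = pybel.readstring("smi", "CCO")   # ethanol, parsed by OpenBabel
        print(mol.molwt)                       # property computed by OpenBabel

        rdmol = rdk.Molecule(mol)              # convert to an RDKit molecule
        print(rdmol.calcfp().bits)             # RDKit fingerprint, same interface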

  10. Routing Optimization of Intelligent Vehicle in Automated Warehouse

    Directory of Open Access Journals (Sweden)

    Yan-cong Zhou

    2014-01-01

    Routing optimization is a key technology in intelligent warehouse logistics. In order to obtain an optimal route for an intelligent warehouse vehicle, routing optimization in a complex, globally dynamic environment is studied. A new evolutionary ant colony algorithm based on RFID and knowledge refinement is proposed. The new algorithm obtains environmental information in a timely manner through RFID technology and updates the environment map at the same time. It adopts elite-ant retention, fallback, and pheromone-limitation adjustment strategies. The current optimal route in the population space is further optimized based on experiential knowledge. The experimental results show that the new algorithm has a higher convergence speed and can easily escape U-type or V-type obstacle traps. It can also find the global optimal route, or an approximately optimal one, with higher probability in a complex dynamic environment. The new algorithm is proved feasible and effective by simulation results.
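
    The pheromone-limitation strategy mentioned above can be illustrated with a generic, max-min-style update rule: after evaporation and elite-ant reinforcement, every trail is clamped to a fixed interval so that no edge dominates and the search avoids stagnation. This is a hedged sketch of the general idea with invented parameters, not the paper's algorithm.

        # Generic pheromone update with evaporation, elite deposit and clamping.
        RHO, Q = 0.1, 1.0            # evaporation rate, deposit constant
        TAU_MIN, TAU_MAX = 0.05, 5.0 # pheromone-limitation bounds

        def update_pheromones(tau, best_route, best_length):
            """tau maps an edge (i, j) to its current trail level."""
            for edge in tau:
                tau[edge] *= (1.0 - RHO)        # evaporation everywhere
            for i, j in zip(best_route, best_route[1:]):
                tau[(i, j)] += Q / best_length  # elite ant reinforces its route
            for edge in tau:                    # clamp to [TAU_MIN, TAU_MAX]
                tau[edge] = min(max(tau[edge], TAU_MIN), TAU_MAX)
            return tau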

  11. chemf: A purely functional chemistry toolkit.

    Science.gov (United States)

    Höck, Stefan; Riedl, Rainer

    2012-12-20

    Although programming in a type-safe and referentially transparent style offers several advantages over working with mutable data structures and side effects, this style of programming has not seen much use in chemistry-related software. Since functional programming languages were designed with referential transparency in mind, these languages offer a lot of support when writing immutable data structures and side-effect-free code. We therefore started implementing our own toolkit based on the above programming paradigms in a modern, versatile programming language. We present our initial results with functional programming in chemistry by first describing an immutable data structure for molecular graphs together with a couple of simple algorithms to calculate basic molecular properties, before writing a complete SMILES parser in accordance with the OpenSMILES specification. Along the way we show how to deal with input validation, error handling, bulk operations, and parallelization in a purely functional way. At the end we also analyze and improve our algorithms and data structures in terms of performance and compare them to existing toolkits, both object-oriented and purely functional. All code was written in Scala, a modern multi-paradigm programming language with strong support for functional programming and a highly sophisticated type system. We have successfully made the first important steps towards a purely functional chemistry toolkit. The data structures and algorithms presented in this article perform well while at the same time they can be safely used in parallelized applications, such as computer aided drug design experiments, without further adjustments. This stands in contrast to existing object-oriented toolkits where thread safety of data structures and algorithms is a deliberate design decision that can be hard to implement. Finally, the level of type-safety achieved by Scala highly increased the reliability of our code as well as the productivity of

  12. Knowledge information management toolkit and method

    Science.gov (United States)

    Hempstead, Antoinette R.; Brown, Kenneth L.

    2006-08-15

    A system is provided for managing user entry and/or modification of knowledge information into a knowledge base file having an integrator support component and a data source access support component. The system includes processing circuitry, memory, a user interface, and a knowledge base toolkit. The memory communicates with the processing circuitry and is configured to store at least one knowledge base. The user interface communicates with the processing circuitry and is configured for user entry and/or modification of knowledge pieces within a knowledge base. The knowledge base toolkit is configured for converting knowledge in at least one knowledge base from a first knowledge base form into a second knowledge base form. A method is also provided.

  13. Event-Entity-Relationship Modeling in Data Warehouse Environments

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    We use the event-entity-relationship model (EVER) to illustrate the use of entity-based modeling languages for conceptual schema design in data warehouse environments. EVER is a general-purpose information modeling language that supports the specification of both general schema structures and multi...

  14. NBII-SAIN Data Management Toolkit

    Science.gov (United States)

    Burley, Thomas E.; Peine, John D.

    2009-01-01

    percent of the cost of a spatial information system is associated with spatial data collection and management (U.S. General Accounting Office, 2003). These figures indicate that the resources (time, personnel, money) of many agencies and organizations could be used more efficiently and effectively. Dedicated and conscientious data management coordination and documentation is critical for reducing such redundancy. Substantial cost savings and increased efficiency are direct results of a pro-active data management approach. In addition, details of projects as well as data and information are frequently lost as a result of real-world occurrences such as the passing of time, job turnover, and equipment changes and failure. A standardized, well documented database allows resource managers to identify issues, analyze options, and ultimately make better decisions in the context of adaptive management (National Land and Water Resources Audit and the Australia New Zealand Land Information Council on behalf of the Australian National Government, 2003). Many environmentally focused, scientific, or natural resource management organizations collect and create both spatial and non-spatial data in some form. Data management appropriate for those data will be contingent upon the project goal(s) and objectives and thus will vary on a case-by-case basis. This project and the resulting Data Management Toolkit, hereafter referred to as the Toolkit, is therefore not intended to be comprehensive in terms of addressing all of the data management needs of all projects that contain biological, geospatial, and other types of data. The Toolkit emphasizes the idea of connecting a project's data and the related management needs to the defined project goals and objectives from the outset. In that context, the Toolkit presents and describes the fundamental components of sound data and information management that are common to projects involving biological, geospatial, and other related data

  15. Kekule.js: An Open Source JavaScript Chemoinformatics Toolkit.

    Science.gov (United States)

    Jiang, Chen; Jin, Xi; Dong, Ying; Chen, Ming

    2016-06-27

    Kekule.js is an open-source, object-oriented JavaScript toolkit for chemoinformatics. It provides methods for many common tasks in molecular informatics, including chemical data input/output (I/O), two- and three-dimensional (2D/3D) rendering of chemical structure, stereo identification, ring perception, structure comparison, and substructure search. Encapsulated widgets to display and edit chemical structures directly in web context are also supplied. Developed with web standards, the toolkit is ideal for building chemoinformatics applications over the Internet. Moreover, it is highly platform-independent and can also be used in desktop or mobile environments. Some initial applications, such as plugins for inputting chemical structures on the web and uses in chemistry education, have been developed based on the toolkit.

  16. Using Data Warehouses to extract knowledge from Agro-Hydrological simulations

    Science.gov (United States)

    Bouadi, Tassadit; Gascuel-Odoux, Chantal; Cordier, Marie-Odile; Quiniou, René; Moreau, Pierre

    2013-04-01

    In recent years, simulation models have been used more and more in hydrology to test the effect of scenarios and help stakeholders in decision making. Agro-hydrological models have guided agricultural water management by testing the effect of landscape structure and farming system changes on water and chemical emissions in rivers. Such models generate a large amount of data, while only a few outputs, such as daily concentrations at the outlet of the catchment or annual budgets regarding soil, water and atmosphere emissions, are stored and analyzed. Thus, a great amount of information is lost from the simulation process. This is due to the large volumes of simulated data, but also to the difficulties in analyzing and transforming the data into usable information. In this talk we illustrate a data warehouse which has been built to store and manage simulation data coming from the agro-hydrological model TNT (Topography-based Nitrogen Transfer and Transformations; Beaujouan et al., 2002). This model simulates the transfer and transformation of nitrogen in agricultural catchments. TNT was used over 10 years on the Yar catchment (western France), a 50 km2 area for which a detailed data set is available and which faces an environmental issue (coastal eutrophication). 44 key simulated output variables are stored at a daily time step (8 GB of storage), which allows users to explore N emissions in space and time; to quantify all the transfer and transformation processes with respect to the cropping systems, their location within the catchment, and the emissions to water and atmosphere; and finally to gain new knowledge and support specific, detailed decisions in space and time. We present the dimensional modeling process of the nitrogen-in-catchment data warehouse (i.e. the snowflake model). After identifying the set of multileveled dimensions with complex hierarchical structures and relationships among related dimension levels, we chose the snowflake model.
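
    A toy illustration of the snowflake idea for daily simulation outputs, using only the Python standard library: the fact table keys to a date dimension and to a location dimension that is itself normalized into a zone sub-dimension. Every table and column name is invented and does not reproduce the paper's schema.

        # Snowflake layout: fact table plus normalized dimension tables.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE dim_date (date_id INTEGER PRIMARY KEY,
                                   day TEXT, year INTEGER);
            CREATE TABLE dim_zone (zone_id INTEGER PRIMARY KEY, zone_name TEXT);
            CREATE TABLE dim_location (
                loc_id  INTEGER PRIMARY KEY,
                zone_id INTEGER REFERENCES dim_zone(zone_id));
            CREATE TABLE fact_nitrogen (
                date_id INTEGER REFERENCES dim_date(date_id),
                loc_id  INTEGER REFERENCES dim_location(loc_id),
                n_flux_kg REAL);
        """)

        # A typical roll-up: total simulated N flux per zone and year.
        query = """
            SELECT z.zone_name, d.year, SUM(f.n_flux_kg)
            FROM fact_nitrogen f
            JOIN dim_date d     ON f.date_id = d.date_id
            JOIN dim_location l ON f.loc_id  = l.loc_id
            JOIN dim_zone z     ON l.zone_id = z.zone_id
            GROUP BY z.zone_name, d.year
        """
        print(conn.execute(query).fetchall())  # empty until facts are loaded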

  17. An Advanced Data Warehouse for Integrating Large Sets of GPS Data

    DEFF Research Database (Denmark)

    Andersen, Ove; Krogh, Benjamin Bjerre; Thomsen, Christian

    2014-01-01

    GPS data recorded from driving vehicles is available from many sources and is a very good data foundation for answering traffic-related queries. However, most approaches so far have not considered combining GPS data from many sources into a single data warehouse. Further, the integration of GPS data with fuel consumption data (from the so-called CAN bus in the vehicles) and weather data has not been done. In this paper, we propose a data warehouse design for handling GPS data, fuel consumption data, and weather data. The design is fully implemented in a running system using the PostgreSQL database.

  18. [Construction and realization of real world integrated data warehouse from HIS on re-evaluation of post-marketing traditional Chinese medicine].

    Science.gov (United States)

    Zhuang, Yan; Xie, Bangtie; Weng, Shengxin; Xie, Yanming

    2011-10-01

    To construct a real-world integrated data warehouse from HIS data for the re-evaluation of post-marketing traditional Chinese medicine, supporting research on key techniques of clinical re-evaluation, which mainly include indications of traditional Chinese medicine, dosage and usage, course of treatment, unit medication, combined diseases and adverse reactions; this provides data for retrospective research on safety, availability and economy, and provides a foundation for prospective research. The integrated data warehouse extracts and integrates data from HIS through an information collection system and data warehouse techniques, and forms standardized structures and data. Further research proceeds on the basis of these data. A data warehouse and several sub data warehouses were built, which focus on patients' main records, doctor orders, disease diagnoses, laboratory results and economic indicators in hospital. These data warehouses can provide research data for the re-evaluation of post-marketing traditional Chinese medicine and have clinical value. Besides, this work points out the direction for further research.

  19. Validation of Power Output for the WIND Toolkit

    Energy Technology Data Exchange (ETDEWEB)

    King, J.; Clifton, A.; Hodge, B. M.

    2014-09-01

    Renewable energy integration studies require wind data sets of high quality with realistic representations of the variability, ramping characteristics, and forecast performance for current wind power plants. The Wind Integration National Data Set (WIND) Toolkit is meant to be an update for and expansion of the original data sets created for the weather years from 2004 through 2006 during the Western Wind and Solar Integration Study and the Eastern Wind Integration Study. The WIND Toolkit expands these data sets to include the entire continental United States, increasing the total number of sites represented, and it includes the weather years from 2007 through 2012. In addition, the WIND Toolkit has a finer resolution for both the temporal and geographic dimensions. Three separate data sets will be created: a meteorological data set, a wind power data set, and a forecast data set. This report describes the validation of the wind power data set.

  20. [Design, construction and exploitation of a data warehouse for a healthcare institution].

    OpenAIRE

    Castillo Hernández, Iván

    2014-01-01

    Design, construction and exploitation of a data warehouse for a healthcare institution. Bachelor thesis for the Computer Science program on data warehousing.

  1. Reconstruction of biological networks based on life science data integration

    Directory of Open Access Journals (Sweden)

    Kormeier Benjamin

    2010-06-01

    For the implementation of the virtual cell, the fundamental question is how to model and simulate complex biological networks. Therefore, biological data integration based on relevant molecular databases and information systems is an essential step in constructing biological networks. In this paper, we present the applications BioDWH, an integration toolkit for building life science data warehouses; CardioVINEdb, an information system for biological data in cardiovascular disease; and VANESA, a network editor for modeling and simulation of biological networks. Based on this integration process, the system supports the generation of biological network models. A case study of a cardiovascular-disease-related gene-regulated biological network is also presented.

  2. ECG-ViEW II, a freely accessible electrocardiogram database

    Science.gov (United States)

    Park, Man Young; Lee, Sukhoon; Jeon, Min Seok; Yoon, Dukyong; Park, Rae Woong

    2017-01-01

    The Electrocardiogram Vigilance with Electronic data Warehouse II (ECG-ViEW II) is a large, single-center database comprising numeric parameter data of the surface electrocardiograms of all patients who underwent testing from 1 June 1994 to 31 July 2013. The electrocardiographic data include the test date, clinical department, RR interval, PR interval, QRS duration, QT interval, QTc interval, P axis, QRS axis, and T axis. These data are connected with patient age, sex, ethnicity, comorbidities, age-adjusted Charlson comorbidity index, prescribed drugs, and electrolyte levels. This longitudinal observational database contains 979,273 electrocardiograms from 461,178 patients over a 19-year study period. This database can provide an opportunity to study electrocardiographic changes caused by medications, disease, or other demographic variables. ECG-ViEW II is freely available at http://www.ecgview.org. PMID:28437484
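
    The kind of question such a database supports can be sketched in a few lines of pandas: compare QTc intervals (one of the fields listed above) across a drug-exposure flag. The flag and the values below are invented; only the QTc field itself comes from the record.

        # Group toy QTc measurements by a hypothetical drug-exposure flag.
        import pandas as pd

        ecg = pd.DataFrame({                      # stand-in for a database export
            "qtc_interval": [420, 455, 470, 410, 490, 430],
            "on_drug":      [0, 1, 1, 0, 1, 0],   # hypothetical exposure flag
        })
        print(ecg.groupby("on_drug")["qtc_interval"].agg(["count", "mean"]))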

  3. Statewide Transportation Engineering Warehouse for Archived Regional Data (STEWARD).

    Science.gov (United States)

    2009-12-01

    This report documents Phase III of the development and operation of a prototype for the Statewide Transportation : Engineering Warehouse for Archived Regional Data (STEWARD). It reflects the progress on the development and : operation of STEWARD sinc...

  4. Multidimensional Analysis and Location Intelligence Application for Spatial Data Warehouse Hotspot in Indonesia using SpagoBI

    Science.gov (United States)

    Uswatun Hasanah, Gamma; Trisminingsih, Rina

    2016-01-01

    A spatial data warehouse is a data warehouse with a spatial component that represents the geographic location or position of an object on the Earth's surface. A spatial data warehouse can be visualized in the form of crosstab tables, graphs, and maps. A spatial data warehouse of hotspots in Indonesia has been constructed by researchers from NASA FIRMS data for 2006-2015. This research develops a multidimensional analysis module and a location intelligence module using SpagoBI. The multidimensional analysis module is able to visualize online analytical processing (OLAP) operations. The location intelligence module creates dynamic map visualizations as map zones and map points. A map zone can display different colors based on the number of hotspots in each region, and a map point can display points of different sizes to represent the number of hotspots in each region. This research is expected to facilitate users in the presentation of hotspot data as needed.

  5. ARC Code TI: Crisis Mapping Toolkit

    Data.gov (United States)

    National Aeronautics and Space Administration — The Crisis Mapping Toolkit (CMT) is a collection of tools for processing geospatial data (images, satellite data, etc.) into cartographic products that improve...

  6. What Information Does Your EHR Contain? Automatic Generation of a Clinical Metadata Warehouse (CMDW) to Support Identification and Data Access Within Distributed Clinical Research Networks.

    Science.gov (United States)

    Bruland, Philipp; Doods, Justin; Storck, Michael; Dugas, Martin

    2017-01-01

    Data dictionaries provide structural meta-information about data definitions in health information technology (HIT) systems. In this regard, reusing healthcare data for secondary purposes offers several advantages (e.g., reduced documentation times or increased data quality). Prerequisites for data reuse are data quality, availability, and identical meaning. In diverse projects, research data warehouses serve as core components between heterogeneous clinical databases and various research applications. Given the complexity (high number of data elements) and dynamics (regular updates) of electronic health record (EHR) data structures, we propose a clinical metadata warehouse (CMDW) based on a metadata registry standard. Metadata of two large hospitals were automatically inserted into two CMDWs containing 16,230 forms and 310,519 data elements. Automatic updates of metadata are possible, as are semantic annotations. A CMDW allows metadata discovery, data quality assessment and similarity analyses. Common data models for distributed research networks can be established based on similarity analyses.
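
    One plausible building block for such similarity analyses is token-set (Jaccard) similarity between data element names; the sketch below is purely illustrative and does not reproduce the CMDW's actual method.

        # Jaccard similarity over tokenized data-element names.
        def jaccard(a: str, b: str) -> float:
            ta, tb = set(a.lower().split()), set(b.lower().split())
            return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

        e1 = "systolic blood pressure"
        e2 = "blood pressure systolic sitting"
        print(f"{jaccard(e1, e2):.2f}")  # 0.75 -> probably the same concept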

  7. Opportunities for Energy Efficiency and Automated Demand Response in Industrial Refrigerated Warehouses in California

    Energy Technology Data Exchange (ETDEWEB)

    Lekov, Alex; Thompson, Lisa; McKane, Aimee; Rockoff, Alexandra; Piette, Mary Ann

    2009-05-11

    This report summarizes the Lawrence Berkeley National Laboratory's research to date in characterizing energy efficiency and open automated demand response opportunities for industrial refrigerated warehouses in California. The report describes refrigerated warehouses characteristics, energy use and demand, and control systems. It also discusses energy efficiency and open automated demand response opportunities and provides analysis results from three demand response studies. In addition, several energy efficiency, load management, and demand response case studies are provided for refrigerated warehouses. This study shows that refrigerated warehouses can be excellent candidates for open automated demand response and that facilities which have implemented energy efficiency measures and have centralized control systems are well-suited to shift or shed electrical loads in response to financial incentives, utility bill savings, and/or opportunities to enhance reliability of service. Control technologies installed for energy efficiency and load management purposes can often be adapted for open automated demand response (OpenADR) at little additional cost. These improved controls may prepare facilities to be more receptive to OpenADR due to both increased confidence in the opportunities for controlling energy cost/use and access to the real-time data.

  8. Modelling toolkit for simulation of maglev devices

    Science.gov (United States)

    Peña-Roche, J.; Badía-Majós, A.

    2017-01-01

    A stand-alone App has been developed, focused on obtaining information about relevant engineering properties of magnetic levitation systems. Our modelling toolkit provides real-time simulations of 2D magneto-mechanical quantities for superconductor (SC)/permanent magnet structures. The source code is open and may be customised for a variety of configurations. Ultimately, it relies on the variational statement of the critical state model for the superconducting component and has been verified against experimental data for YBaCuO/NdFeB assemblies. On a quantitative basis, the values of the arising forces, induced superconducting currents, as well as a plot of the magnetic field lines are displayed upon selection of an arbitrary trajectory of the magnet in the vicinity of the SC. The stability issues related to the cooling process, as well as the maximum attainable forces for a given material and geometry are immediately observed. Due to the complexity of the problem, a strategy based on cluster computing, database compression, and real-time post-processing on the device has been implemented.

  9. Sealed radioactive sources toolkit

    International Nuclear Information System (INIS)

    Mac Kenzie, C.

    2005-09-01

    The IAEA has developed a Sealed Radioactive Sources Toolkit to provide information to key groups about the safety and security of sealed radioactive sources. The key groups addressed are officials in government agencies, medical users, industrial users and the scrap metal industry. The general public may also benefit from an understanding of the fundamentals of radiation safety.

  10. The ECVET toolkit customization for the nuclear energy sector

    Energy Technology Data Exchange (ETDEWEB)

    Ceclan, Mihail; Ramos, Cesar Chenel; Estorff, Ulrike von [European Commission, Joint Research Centre, Petten (Netherlands). Inst. for Energy and Transport

    2015-04-15

    As part of its support to the introduction of ECVET in the nuclear energy sector, the Institute for Energy and Transport (IET) of the Joint Research Centre (JRC), European Commission (EC), through the ECVET Team of the European Human Resources Observatory for the Nuclear energy sector (EHRO-N), developed in the last six years (2009-2014) a sectorial approach and a road map for ECVET implementation in the nuclear energy sector. In order to follow the road map for ECVET implementation, the toolkit has to be customized for the nuclear energy sector. This article describes the outcomes of the toolkit customization, based on the ECVET approach, for the design of nuclear qualifications. The process of toolkit customization took into account the fact that nuclear qualifications are mostly at the higher levels (five and above) of the European Qualifications Framework.

  11. The ECVET toolkit customization for the nuclear energy sector

    International Nuclear Information System (INIS)

    Ceclan, Mihail; Ramos, Cesar Chenel; Estorff, Ulrike von

    2015-01-01

    As part of its support to the introduction of ECVET in the nuclear energy sector, the Institute for Energy and Transport (IET) of the Joint Research Centre (JRC), European Commission (EC), through the ECVET Team of the European Human Resources Observatory for the Nuclear energy sector (EHRO-N), developed in the last six years (2009-2014) a sectorial approach and a road map for ECVET implementation in the nuclear energy sector. In order to follow the road map for ECVET implementation, the toolkit has to be customized for the nuclear energy sector. This article describes the outcomes of the toolkit customization, based on the ECVET approach, for the design of nuclear qualifications. The process of toolkit customization took into account the fact that nuclear qualifications are mostly at the higher levels (five and above) of the European Qualifications Framework.

  12. Effect of an educational toolkit on quality of care: a pragmatic cluster randomized trial.

    Science.gov (United States)

    Shah, Baiju R; Bhattacharyya, Onil; Yu, Catherine H Y; Mamdani, Muhammad M; Parsons, Janet A; Straus, Sharon E; Zwarenstein, Merrick

    2014-02-01

    Printed educational materials for clinician education are one of the most commonly used approaches for quality improvement. The objective of this pragmatic cluster randomized trial was to evaluate the effectiveness of an educational toolkit focusing on cardiovascular disease screening and risk reduction in people with diabetes. All 933,789 people aged ≥40 years with diagnosed diabetes in Ontario, Canada were studied using population-level administrative databases, with additional clinical outcome data collected from a random sample of 1,592 high risk patients. Family practices were randomly assigned to receive the educational toolkit in June 2009 (intervention group) or May 2010 (control group). The primary outcome in the administrative data study, death or non-fatal myocardial infarction, occurred in 11,736 (2.5%) patients in the intervention group and 11,536 (2.5%) in the control group (p = 0.77). The primary outcome in the clinical data study, use of a statin, occurred in 700 (88.1%) patients in the intervention group and 725 (90.1%) in the control group (p = 0.26). Pre-specified secondary outcomes, including other clinical events, processes of care, and measures of risk factor control, were also not improved by the intervention. A limitation is the high baseline rate of statin prescribing in this population. The educational toolkit did not improve quality of care or cardiovascular outcomes in a population with diabetes. Despite being relatively easy and inexpensive to implement, printed educational materials were not effective. The study highlights the need for a rigorous and scientifically based approach to the development, dissemination, and evaluation of quality improvement interventions. http://www.ClinicalTrials.gov NCT01411865 and NCT01026688.

  13. Multimethod evaluation of the VA's peer-to-peer Toolkit for patient-centered medical home implementation.

    Science.gov (United States)

    Luck, Jeff; Bowman, Candice; York, Laura; Midboe, Amanda; Taylor, Thomas; Gale, Randall; Asch, Steven

    2014-07-01

    Effective implementation of the patient-centered medical home (PCMH) in primary care practices requires training and other resources, such as online toolkits, to share strategies and materials. The Veterans Health Administration (VA) developed an online Toolkit of user-sourced tools to support teams implementing its Patient Aligned Care Team (PACT) medical home model. To present findings from an evaluation of the PACT Toolkit, including use, variation across facilities, effect of social marketing, and factors influencing use. The Toolkit is an online repository of ready-to-use tools created by VA clinic staff that physicians, nurses, and other team members may share, download, and adopt in order to more effectively implement PCMH principles and improve local performance on VA metrics. Multimethod evaluation using: (1) website usage analytics, (2) an online survey of the PACT community of practice's use of the Toolkit, and (3) key informant interviews. Survey respondents were PACT team members and coaches (n = 544) at 136 VA facilities. Interview respondents were Toolkit users and non-users (n = 32). For survey data, multivariable logistic models were used to predict Toolkit awareness and use. Interviews and open-text survey comments were coded using a "common themes" framework. The Consolidated Framework for Implementation Research (CFIR) guided data collection and analyses. The Toolkit was used by 6,745 staff in the first 19 months of availability. Among members of the target audience, 80 % had heard of the Toolkit, and of those, 70 % had visited the website. Tools had been implemented at 65 % of facilities. Qualitative findings revealed a range of user perspectives from enthusiastic support to lack of sufficient time to browse the Toolkit. An online Toolkit to support PCMH implementation was used at VA facilities nationwide. Other complex health care organizations may benefit from adopting similar online peer-to-peer resource libraries.

  14. Location of Farmers Warehouse at Adaklu Traditional Area, Volta Region, Ghana

    Directory of Open Access Journals (Sweden)

    Vincent Tulasi

    2016-01-01

    Postharvest loss is one major problem that farmers in the Adaklu Traditional Area, like most Ghanaian farmers, face. As a result, many farmers wallow in abject poverty. Warehouses are important facilities that help to reduce postharvest loss. In this research, the Beresnev pseudo-Boolean Simple Plant Location Problem (SPLP) model is used to locate a warehouse in the Adaklu Traditional Area, Volta Region, Ghana. This model was used because it gives a straightforward computation and produces no iterations, unlike other models. The SPLP is the problem of selecting a site from candidate sites at which to locate a plant so that customers can be supplied from the plant at minimum cost. The model is made up of fixed costs and transportation costs. A location index ordering matrix was developed from the transportation cost matrix and used, together with the fixed costs and the differences between variable costs, to formulate the Beresnev function. Linear terms developed from the function, which was partial, are pegged to obtain a complete solution. Of the 14 notable communities considered, Adaklu Waya was found most suitable for siting the warehouse. The total cost involved is Gh₵ 78,180.00.
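
    The objective such a model minimizes is the fixed cost of the chosen site plus the total transport cost to every community; for a single warehouse this can be evaluated by brute force, as sketched below with invented costs (the Beresnev pseudo-Boolean formulation itself is more involved).

        # Brute-force single-site SPLP: minimize fixed + transport cost.
        fixed_cost = {"A": 500.0, "B": 420.0, "C": 610.0}
        transport = {  # transport[site][community]
            "A": {"c1": 10, "c2": 40, "c3": 25},
            "B": {"c1": 30, "c2": 15, "c3": 35},
            "C": {"c1": 20, "c2": 25, "c3": 10},
        }

        def total_cost(site: str) -> float:
            return fixed_cost[site] + sum(transport[site].values())

        best = min(fixed_cost, key=total_cost)
        print(best, total_cost(best))  # -> B 500.0 for these toy numbers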

  15. The Analytic Information Warehouse (AIW): a Platform for Analytics using Electronic Health Record Data

    Science.gov (United States)

    Post, Andrew R.; Kurc, Tahsin; Cholleti, Sharath; Gao, Jingjing; Lin, Xia; Bornstein, William; Cantrell, Dedra; Levine, David; Hohmann, Sam; Saltz, Joel H.

    2013-01-01

    Objective To create an analytics platform for specifying and detecting clinical phenotypes and other derived variables in electronic health record (EHR) data for quality improvement investigations. Materials and Methods We have developed an architecture for an Analytic Information Warehouse (AIW). It supports transforming data represented in different physical schemas into a common data model, specifying derived variables in terms of the common model to enable their reuse, computing derived variables while enforcing invariants and ensuring correctness and consistency of data transformations, long-term curation of derived data, and export of derived data into standard analysis tools. It includes software that implements these features and a computing environment that enables secure high-performance access to and processing of large datasets extracted from EHRs. Results We have implemented and deployed the architecture in production locally. The software is available as open source. We have used it as part of hospital operations in a project to reduce rates of hospital readmission within 30 days. The project examined the association of over 100 derived variables representing disease and co-morbidity phenotypes with readmissions in five years of data from our institution’s clinical data warehouse and the UHC Clinical Database (CDB). The CDB contains administrative data from over 200 hospitals that are in academic medical centers or affiliated with such centers. Discussion and Conclusion A widely available platform for managing and detecting phenotypes in EHR data could accelerate the use of such data in quality improvement and comparative effectiveness studies. PMID:23402960
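
    The notion of a derived variable can be sketched with pandas: raw diagnosis codes in a common data model are collapsed into one reusable phenotype flag per encounter. The codes, column names, and the toy rule below are illustrative, not the AIW's actual phenotype definitions.

        # Derive a per-encounter phenotype flag from raw diagnosis codes.
        import pandas as pd

        diagnoses = pd.DataFrame({
            "encounter_id": [1, 1, 2, 3],
            "icd9_code":    ["250.00", "401.9", "428.0", "250.02"],
        })

        DIABETES_PREFIX = "250"  # toy rule: any 250.xx code counts

        derived = (diagnoses
                   .assign(diabetes=diagnoses["icd9_code"]
                           .str.startswith(DIABETES_PREFIX))
                   .groupby("encounter_id")["diabetes"].any()
                   .rename("phenotype_diabetes"))
        print(derived)  # one reusable boolean variable per encounter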

  16. Capturing Complex Multidimensional Data in Location-Based Data Warehouses

    DEFF Research Database (Denmark)

    Timko, Igor; Pedersen, Torben Bach

    2004-01-01

    Motivated by the increasing need to handle complex multidimensional data in location-based data warehouses, this paper proposes a powerful data model that is able to capture the complexities of such data. The model provides a foundation for handling complex transportation infrastructures

  17. TA-60 Warehouse and Salvage SWPPP Rev 2 Jan 2017-Final

    Energy Technology Data Exchange (ETDEWEB)

    Burgin, Jillian Elizabeth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-02-07

    The Stormwater Pollution Prevention Team (PPT) for the TA-60-0002 Salvage and Warehouse Area consists of operations and management personnel from the facility, Multi-Sector General Permitting (MSGP) stormwater personnel from Environmental Compliance Programs (EPC-CP) organization, and Deployed Environmental Professionals. The EPC-CP representative is responsible for Laboratory compliance under the National Pollutant Discharge Elimination System (NPDES) permit regulations. The team members are selected on the basis of their familiarity with the activities at the facility and the potential impacts of those activities on stormwater runoff. The Warehouse and Salvage Yard are a single shift operation; therefore, a member of the PPT is always present during operations.

  18. Antenna toolkit

    CERN Document Server

    Carr, Joseph

    2006-01-01

    Joe Carr has provided radio amateurs and short-wave listeners with the definitive design guide for sending and receiving radio signals with Antenna Toolkit 2nd edition. Together with the powerful suite of CD software, the reader will have a complete solution for constructing or using an antenna - bar the actual hardware! The software provides a simple Windows-based aid to carrying out the design calculations at the heart of successful antenna design. All the user needs to do is select the antenna type and set the frequency - a much more fun and less error-prone method than using a con

  19. An Overview of the GEANT4 Toolkit

    International Nuclear Information System (INIS)

    Apostolakis, John; CERN; Wright, Dennis H.

    2007-01-01

    Geant4 is a toolkit for the simulation of the transport of radiation through matter. With a flexible kernel and a choice between different physics modeling options, it has been tailored to the requirements of a wide range of applications. With the toolkit a user can describe a setup's or detector's geometry and materials, navigate inside it, simulate the physical interactions using a choice of physics engines, underlying physics cross-sections and models, visualize and store results. Physics models describing electromagnetic and hadronic interactions are provided, as are decays and processes for optical photons. Several models, with different precision and performance, are available for many processes. The toolkit includes coherent physics model configurations, which are called physics lists. Users can choose an existing physics list or create their own, depending on their requirements and the application area. A clear structure and readable code enable the user to investigate the origin of physics results. Application areas include detector simulation and background simulation in High Energy Physics experiments, simulation of accelerator setups, studies in medical imaging and treatment, and the study of the effects of solar radiation on spacecraft instruments.

  20. Design Criteria in Revitalizing Old Warehouse District on the Kalimas Riverbank Area of Surabaya City

    Directory of Open Access Journals (Sweden)

    Endang Titi Sunarti Darjosanjoto

    2015-09-01

    Neglected warehouse buildings along the Kalimas River have created a poor urban façade in terms of visual quality. However, the city government is planning to encourage tourism activities that take advantage of the Kalimas River and its surrounding environment. Without a good plan in accordance with the concept of local identity for the old city of Surabaya, the district's value as a tourist attraction will be diminished. In reference to the issue above, design criteria need to be compiled for revitalizing the old warehouse district, which is expected to revive the identity of this district and be able to support the city's tourism. This study was conducted by recording field observations, and the data were analyzed using the character appraisal method. The character appraisal analysis is presented in the form of street picture data, divided into determined segments. The results show that there are five components, including place attachment, sustainable urban design, green open space design, ecological riverfront design, and activity support, that should be considered in the revitalization of the warehouse district. Those components are divided into two parts: buildings and open space at the riverbank. There are 13 design criteria for buildings at the riverbank, and 14 design criteria for open space at the riverbank. These design criteria can enrich the warehouse district's revitalization by improving the visual quality of the urban environment. Keywords: design criteria; warehouse district; riverbank; Surabaya; revitalization.

  1. Developing and Marketing a Client/Server-Based Data Warehouse.

    Science.gov (United States)

    Singleton, Michele; And Others

    1993-01-01

    To provide better access to information, the University of Arizona information technology center has designed a data warehouse accessible from the desktop computer. A team approach has proved successful in introducing and demonstrating a prototype to the campus community. (Author/MSE)

  2. The development of an artificial organic networks toolkit for LabVIEW.

    Science.gov (United States)

    Ponce, Hiram; Ponce, Pedro; Molina, Arturo

    2015-03-15

    Two of the most challenging problems that scientists and researchers face when they want to experiment with new cutting-edge algorithms are the time consumed in encoding them and the difficulty of linking them with other technologies and devices. In that sense, this article introduces the artificial organic networks toolkit for LabVIEW™ (AON-TL) from the implementation point of view. The toolkit is based on the framework provided by the artificial organic networks technique, giving it the potential to add new algorithms in the future based on this technique. Moreover, the toolkit inherits both the rapid prototyping and the easy-to-use characteristics of the LabVIEW™ software (e.g., graphical programming, transparent use of other software and devices, built-in event-driven programming for user interfaces) to make it simple for the end-user. In fact, the article describes the global architecture of the toolkit, with particular emphasis on the software implementation of the so-called artificial hydrocarbon networks algorithm. Lastly, the article includes two case studies, one for engineering purposes (i.e., sensor characterization) and one for chemistry applications (i.e., a blood-brain barrier partitioning data model), to show the usage of the toolkit and the potential scalability of the artificial organic networks technique. © 2015 Wiley Periodicals, Inc.

  3. PsyToolkit: a software package for programming psychological experiments using Linux.

    Science.gov (United States)

    Stoet, Gijsbert

    2010-11-01

    PsyToolkit is a set of software tools for programming psychological experiments on Linux computers. Given that PsyToolkit is freely available under the Gnu Public License, open source, and designed such that it can easily be modified and extended for individual needs, it is suitable not only for technically oriented Linux users, but also for students, researchers on small budgets, and universities in developing countries. The software includes a high-level scripting language, a library for the programming language C, and a questionnaire presenter. The software easily integrates with other open source tools, such as the statistical software package R. PsyToolkit is designed to work with external hardware (including IoLab and Cedrus response keyboards and two common digital input/output boards) and to support millisecond timing precision. Four in-depth examples explain the basic functionality of PsyToolkit. Example 1 demonstrates a stimulus-response compatibility experiment. Example 2 demonstrates a novel mouse-controlled visual search experiment. Example 3 shows how to control light emitting diodes using PsyToolkit, and Example 4 shows how to build a light-detection sensor. The last two examples explain the electronic hardware setup such that they can even be used with other software packages.

  4. NOAA Weather and Climate Toolkit (WCT)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NOAA Weather and Climate Toolkit is an application that provides simple visualization and data export of weather and climatological data archived at NCDC. The...

  5. A User Interface Toolkit for a Small Screen Device.

    OpenAIRE

    UOTILA, ALEKSI

    2000-01-01

    The appearance of different kinds of networked mobile devices and network appliances creates special requirements for user interfaces that are not met by existing widget based user interface creation toolkits. This thesis studies the problem domain of user interface creation toolkits for portable network connected devices. The portable nature of these devices places great restrictions on the user interface capabilities. One main characteristic of the devices is that they have small screens co...

  6. Geant4 - A Simulation Toolkit

    International Nuclear Information System (INIS)

    2002-01-01

    Geant4 is a toolkit for simulating the passage of particles through matter. It includes a complete range of functionality including tracking, geometry, physics models and hits. The physics processes offered cover a comprehensive range, including electromagnetic, hadronic and optical processes, a large set of long-lived particles, materials and elements, over a wide energy range starting, in some cases, from 250 eV and extending in others to the TeV energy range. It has been designed and constructed to expose the physics models utilized, to handle complex geometries, and to enable its easy adaptation for optimal use in different sets of applications. The toolkit is the result of a worldwide collaboration of physicists and software engineers. It has been created exploiting software engineering and object-oriented technology and implemented in the C++ programming language. It has been used in applications in particle physics, nuclear physics, accelerator design, space engineering and medical physics.

  7. GEANT4 A Simulation toolkit

    CERN Document Server

    Agostinelli, S; Amako, K; Apostolakis, John; Araújo, H M; Arce, P; Asai, M; Axen, D A; Banerjee, S; Barrand, G; Behner, F; Bellagamba, L; Boudreau, J; Broglia, L; Brunengo, A; Chauvie, S; Chuma, J; Chytracek, R; Cooperman, G; Cosmo, G; Degtyarenko, P V; Dell'Acqua, A; De Paola, G O; Dietrich, D D; Enami, R; Feliciello, A; Ferguson, C; Fesefeldt, H S; Folger, G; Foppiano, F; Forti, A C; Garelli, S; Giani, S; Giannitrapani, R; Gibin, D; Gómez-Cadenas, J J; González, I; Gracía-Abríl, G; Greeniaus, L G; Greiner, W; Grichine, V M; Grossheim, A; Gumplinger, P; Hamatsu, R; Hashimoto, K; Hasui, H; Heikkinen, A M; Howard, A; Hutton, A M; Ivanchenko, V N; Johnson, A; Jones, F W; Kallenbach, Jeff; Kanaya, N; Kawabata, M; Kawabata, Y; Kawaguti, M; Kelner, S; Kent, P; Kodama, T; Kokoulin, R P; Kossov, M; Kurashige, H; Lamanna, E; Lampen, T; Lara, V; Lefébure, V; Lei, F; Liendl, M; Lockman, W; Longo, F; Magni, S; Maire, M; Mecking, B A; Medernach, E; Minamimoto, K; Mora de Freitas, P; Morita, Y; Murakami, K; Nagamatu, M; Nartallo, R; Nieminen, P; Nishimura, T; Ohtsubo, K; Okamura, M; O'Neale, S W; O'Ohata, Y; Perl, J; Pfeiffer, A; Pia, M G; Ranjard, F; Rybin, A; Sadilov, S; Di Salvo, E; Santin, G; Sasaki, T; Savvas, N; Sawada, Y; Scherer, S; Sei, S; Sirotenko, V I; Smith, D; Starkov, N; Stöcker, H; Sulkimo, J; Takahata, M; Tanaka, S; Chernyaev, E; Safai-Tehrani, F; Tropeano, M; Truscott, P R; Uno, H; Urbàn, L; Urban, P; Verderi, M; Walkden, A; Wander, W; Weber, H; Wellisch, J P; Wenaus, T; Williams, D C; Wright, D; Yamada, T; Yoshida, H; Zschiesche, D

    2003-01-01

    Geant4 is a toolkit for simulating the passage of particles through matter. It includes a complete range of functionality including tracking, geometry, physics models and hits. The physics processes offered cover a comprehensive range, including electromagnetic, hadronic and optical processes, a large set of long-lived particles, materials and elements, over a wide energy range starting, in some cases, from 250 eV and extending in others to the TeV energy range. It has been designed and constructed to expose the physics models utilised, to handle complex geometries, and to enable its easy adaptation for optimal use in different sets of applications. The toolkit is the result of a worldwide collaboration of physicists and software engineers. It has been created exploiting software engineering and object-oriented technology and implemented in the C++ programming language. It has been used in applications in particle physics, nuclear physics, accelerator design, space engineering and medical physics.

  8. Geant4 - A Simulation Toolkit

    Energy Technology Data Exchange (ETDEWEB)

    Wright, Dennis H

    2002-08-09

    GEANT4 is a toolkit for simulating the passage of particles through matter. It includes a complete range of functionality including tracking, geometry, physics models and hits. The physics processes offered cover a comprehensive range, including electromagnetic, hadronic and optical processes, a large set of long-lived particles, materials and elements, over a wide energy range starting, in some cases, from 250 eV and extending in others to the TeV energy range. It has been designed and constructed to expose the physics models utilized, to handle complex geometries, and to enable its easy adaptation for optimal use in different sets of applications. The toolkit is the result of a worldwide collaboration of physicists and software engineers. It has been created exploiting software engineering and object-oriented technology and implemented in the C++ programming language. It has been used in applications in particle physics, nuclear physics, accelerator design, space engineering and medical physics.

  9. Data Delivery and Mapping Over the Web: National Water-Quality Assessment Data Warehouse

    Science.gov (United States)

    Bell, Richard W.; Williamson, Alex K.

    2006-01-01

    The U.S. Geological Survey began its National Water-Quality Assessment (NAWQA) Program in 1991, systematically collecting chemical, biological, and physical water-quality data from study units (basins) across the Nation. In 1999, the NAWQA Program developed a data warehouse to better facilitate national and regional analysis of data from 36 study units started in 1991 and 1994. Data from 15 study units started in 1997 were added to the warehouse in 2001. The warehouse currently contains and links the following data: -- Chemical concentrations in water, sediment, and aquatic-organism tissues and related quality-control data from the USGS National Water Information System (NWIS), -- Biological data for stream-habitat and ecological-community data on fish, algae, and benthic invertebrates, -- Site, well, and basin information associated with thousands of descriptive variables derived from spatial analysis, like land use, soil, and population density, and -- Daily streamflow and temperature information from NWIS for selected sampling sites.

  10. Roadmap to a Comprehensive Clinical Data Warehouse for Precision Medicine Applications in Oncology.

    Science.gov (United States)

    Foran, David J; Chen, Wenjin; Chu, Huiqi; Sadimin, Evita; Loh, Doreen; Riedlinger, Gregory; Goodell, Lauri A; Ganesan, Shridar; Hirshfield, Kim; Rodriguez, Lorna; DiPaola, Robert S

    2017-01-01

    Leading institutions throughout the country have established Precision Medicine programs to support personalized treatment of patients. A cornerstone for these programs is the establishment of enterprise-wide Clinical Data Warehouses. Working shoulder-to-shoulder, a team of physicians, systems biologists, engineers, and scientists at Rutgers Cancer Institute of New Jersey have designed, developed, and implemented the Warehouse with information originating from data sources, including Electronic Medical Records, Clinical Trial Management Systems, Tumor Registries, Biospecimen Repositories, Radiology and Pathology archives, and Next Generation Sequencing services. Innovative solutions were implemented to detect and extract unstructured clinical information that was embedded in paper/text documents, including synoptic pathology reports. Supporting important precision medicine use cases, the growing Warehouse enables physicians to systematically mine and review the molecular, genomic, image-based, and correlated clinical information of patient tumors individually or as part of large cohorts to identify changes and patterns that may influence treatment decisions and potential outcomes.

  11. 19 CFR 19.13 - Requirements for establishment of warehouse.

    Science.gov (United States)

    2010-04-01

    ...; DEPARTMENT OF THE TREASURY CUSTOMS WAREHOUSES, CONTAINER STATIONS AND CONTROL OF MERCHANDISE THEREIN... secured area separated from the remainder of the premises to be used exclusively for the storage of imported merchandise, domestic spirits, and merchandise subject to internal-revenue tax transferred into...

  12. A Clinical Data Warehouse Based on OMOP and i2b2 for Austrian Health Claims Data.

    Science.gov (United States)

    Rinner, Christoph; Gezgin, Deniz; Wendl, Christopher; Gall, Walter

    2018-01-01

    To develop simulation models for healthcare-related questions, clinical data can be reused. The objective was to develop a clinical data warehouse to harmonize different data sources in a standardized manner and obtain a reproducible interface for clinical data reuse. The Kimball life cycle for data warehouse development was used. The development is split into the technical, data, and business intelligence pathways. Sample data were persisted in the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM). The i2b2 clinical data warehouse tools were used to query the OMOP CDM by applying the new i2b2 multi-fact-table feature. A clinical data warehouse was set up, and sample data, data dimensions and ontologies for Austrian health claims data were created. The ability of the standardized data access layer to create and apply simulation models will be evaluated next.
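
    As an illustration of what such a standardized access layer buys, the sketch below runs a small cohort-style query against a toy OMOP CDM fragment in SQLite. The table and column names follow the OMOP CDM; the database, rows, and concept IDs are invented stand-ins, not the Austrian claims data.

        # Minimal sketch, assuming SQLite as the store and OMOP CDM v5 names.
        import sqlite3

        conn = sqlite3.connect(":memory:")  # stand-in for a real CDM database
        conn.executescript("""
        CREATE TABLE person (person_id INTEGER PRIMARY KEY, year_of_birth INTEGER);
        CREATE TABLE condition_occurrence (
            condition_occurrence_id INTEGER PRIMARY KEY,
            person_id INTEGER,
            condition_concept_id INTEGER,
            condition_start_date TEXT);
        INSERT INTO person VALUES (1, 1950), (2, 1972);
        INSERT INTO condition_occurrence VALUES
            (10, 1, 201826, '2016-03-01'),
            (11, 2, 201826, '2016-07-15');
        """)

        # Count distinct patients per condition concept -- the kind of cohort
        # query a common data model makes reproducible across data sources.
        rows = conn.execute("""
            SELECT condition_concept_id, COUNT(DISTINCT person_id)
            FROM condition_occurrence
            GROUP BY condition_concept_id
        """).fetchall()
        print(rows)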

  13. Real Time Business Analytics for Buying or Selling Transaction on Commodity Warehouse Receipt System

    Science.gov (United States)

    Djatna, Taufik; Teniwut, Wellem A.; Hairiyah, Nina; Marimin

    2017-10-01

    Smooth information flow on buying and selling is essential for a commodity warehouse receipt system, such as that for dried seaweed, and for its stakeholders to carry out operational transactions. Buying or selling under a commodity warehouse receipt system is a risky process due to fluctuations in dynamic commodity prices. An integrated system that determines real-time market conditions was needed to support transaction decisions by the owner or a prospective buyer. The primary motivation of this study is to propose computational methods to trace market tendency for either buying or selling processes. The empirical results reveal that gain-ratio feature selection with k-NN outperforms the other forecasting models, implying that the proposed approach is a promising alternative for exploring the market tendency of warehouse receipt documents, with an accuracy rate of 95.03%.
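
    A rough sketch of that pipeline is shown below: feature selection followed by a k-NN classifier predicting buy/sell tendency. scikit-learn offers no gain-ratio selector, so mutual information stands in for it here, and the features and labels are synthetic rather than the paper's warehouse receipt data.

        # Illustrative only: synthetic data, mutual information instead of gain ratio.
        import numpy as np
        from sklearn.feature_selection import SelectKBest, mutual_info_classif
        from sklearn.model_selection import train_test_split
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 8))                  # e.g. price, volume, stock level
        y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # 1 = buy tendency, 0 = sell

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        model = make_pipeline(
            SelectKBest(mutual_info_classif, k=4),     # keep the most informative features
            KNeighborsClassifier(n_neighbors=5),
        )
        model.fit(X_tr, y_tr)
        print("held-out accuracy:", model.score(X_te, y_te))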

  14. Field tests of a participatory ergonomics toolkit for Total Worker Health.

    Science.gov (United States)

    Nobrega, Suzanne; Kernan, Laura; Plaku-Alakbarova, Bora; Robertson, Michelle; Warren, Nicholas; Henning, Robert

    2017-04-01

    Growing interest in Total Worker Health ® (TWH) programs to advance worker safety, health and well-being motivated development of a toolkit to guide their implementation. Iterative design of a program toolkit occurred in which participatory ergonomics (PE) served as the primary basis to plan integrated TWH interventions in four diverse organizations. The toolkit provided start-up guides for committee formation and training, and a structured PE process for generating integrated TWH interventions. Process data from program facilitators and participants throughout program implementation were used for iterative toolkit design. Program success depended on organizational commitment to regular design team meetings with a trained facilitator, the availability of subject matter experts on ergonomics and health to support the design process, and retraining whenever committee turnover occurred. A two committee structure (employee Design Team, management Steering Committee) provided advantages over a single, multilevel committee structure, and enhanced the planning, communication, and teamwork skills of participants. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Field tests of a participatory ergonomics toolkit for Total Worker Health

    Science.gov (United States)

    Kernan, Laura; Plaku-Alakbarova, Bora; Robertson, Michelle; Warren, Nicholas; Henning, Robert

    2018-01-01

    Growing interest in Total Worker Health® (TWH) programs to advance worker safety, health and well-being motivated development of a toolkit to guide their implementation. Iterative design of a program toolkit occurred in which participatory ergonomics (PE) served as the primary basis to plan integrated TWH interventions in four diverse organizations. The toolkit provided start-up guides for committee formation and training, and a structured PE process for generating integrated TWH interventions. Process data from program facilitators and participants throughout program implementation were used for iterative toolkit design. Program success depended on organizational commitment to regular design team meetings with a trained facilitator, the availability of subject matter experts on ergonomics and health to support the design process, and retraining whenever committee turnover occurred. A two committee structure (employee Design Team, management Steering Committee) provided advantages over a single, multilevel committee structure, and enhanced the planning, communication, and team-work skills of participants. PMID:28166897

  16. Newnes electronics toolkit

    CERN Document Server

    Phillips, Geoff

    2013-01-01

    Newnes Electronics Toolkit brings together fundamental facts, concepts, and applications of electronic components and circuits, and presents them in a clear, concise, and unambiguous format, to provide a reference book for engineers. The book contains 10 chapters that discuss the following concepts: resistors, capacitors, inductors, semiconductors, circuit concepts, electromagnetic compatibility, sound, light, heat, and connections. The engineer's job does not end when the circuit diagram is completed; the design for the manufacturing process is just as important if volume production is to be

  17. Managing data quality in an existing medical data warehouse using business intelligence technologies.

    Science.gov (United States)

    Eaton, Scott; Ostrander, Michael; Santangelo, Jennifer; Kamal, Jyoti

    2008-11-06

    The Ohio State University Medical Center (OSUMC) Information Warehouse (IW) is a comprehensive data warehousing facility that provides data integration, management, mining, training, and development services to a diversity of customers across the clinical, education, and research sectors of the OSUMC. Providing accurate and complete data is a must for these purposes. In order to monitor the data quality of targeted data sets, an online scorecard has been developed to allow visualization of the critical measures of data quality in the Information Warehouse.
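
    The scorecard idea reduces to computing per-column quality measures over a target data set. A minimal sketch with pandas, using invented column names and validity rules, might look as follows.

        # Completeness = share of non-missing values; validity = share of
        # values passing a rule (here: parseable dates). Columns are invented.
        import pandas as pd

        df = pd.DataFrame({
            "mrn":        ["A1", "A2", None, "A4"],
            "birth_date": ["1950-01-01", "bad-date", "1972-07-15", None],
        })

        completeness = df.notna().mean()
        date_validity = pd.to_datetime(df["birth_date"], errors="coerce").notna().mean()

        scorecard = pd.DataFrame({
            "completeness": completeness,
            "validity": [None, date_validity],  # rule defined only for birth_date
        })
        print(scorecard)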

  18. ON PROBLEM OF REGIONAL WAREHOUSE AND TRANSPORT INFRASTRUCTURE OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    I. Yu. Miretskiy

    2017-01-01

    Full Text Available The article suggests an approach to solving the problem of warehouse and transport infrastructure optimization in a region. The task is to determine the optimal capacity and location of the support network of warehouses in the region, as well as the capacity, composition and location of motor fleets. Optimization is carried out using mathematical models of a regional warehouse network and a network of motor fleets. These models are presented as mathematical programming problems with separable functions. The process of finding the optimal solution is complicated by high dimensionality, the non-linearity of the functions, and the fact that some variables are constrained to be integer while others can take values only from a discrete set. Given these complications, the search for an exact solution was rejected. The article suggests an approximate approach to solving the problems, employing effective computational schemes for solving multidimensional optimization problems. We use the continuous relaxation of the original problem to obtain its approximate solution: an approximately optimal solution of the continuous relaxation is taken as an approximate solution of the original problem. The suggested solution method implies linearization of the obtained continuous relaxation and the use of the separable programming scheme and the branch-and-bound scheme. We describe the use of the simplex method for solving the linearized continuous relaxation of the original problem and specific aspects of the branch-and-bound implementation. The paper shows the finiteness of the algorithm and recommends how to accelerate the process of finding a solution.
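
    The solution scheme lends itself to a compact sketch: solve the continuous (LP) relaxation, branch on a fractional variable, and prune subproblems whose relaxation bound cannot beat the incumbent. The tiny warehouse-sizing instance below is invented for illustration and is far simpler than the regional model described above.

        # Branch and bound over the LP relaxation; toy instance, not the paper's model.
        from scipy.optimize import linprog
        import math

        c = [4.0, 3.0]                      # cost of opening each warehouse type
        A_ub = [[-30.0, -20.0]]             # capacity: 30*x1 + 20*x2 >= 70
        b_ub = [-70.0]

        best = (math.inf, None)

        def branch_and_bound(bounds):
            global best
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
            if not res.success or res.fun >= best[0]:
                return                      # infeasible, or pruned by the bound
            frac = [i for i, v in enumerate(res.x) if abs(v - round(v)) > 1e-6]
            if not frac:                    # integral solution: new incumbent
                best = (res.fun, [round(v) for v in res.x])
                return
            i, v = frac[0], res.x[frac[0]]
            lo, hi = bounds[i]
            branch_and_bound(bounds[:i] + [(lo, math.floor(v))] + bounds[i + 1:])
            branch_and_bound(bounds[:i] + [(math.ceil(v), hi)] + bounds[i + 1:])

        branch_and_bound([(0, 5), (0, 5)])
        print("optimal cost and warehouse counts:", best)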

  19. A Qualitative Evaluation of Web-Based Cancer Care Quality Improvement Toolkit Use in the Veterans Health Administration.

    Science.gov (United States)

    Bowman, Candice; Luck, Jeff; Gale, Randall C; Smith, Nina; York, Laura S; Asch, Steven

    2015-01-01

    Disease severity, complexity, and patient burden highlight cancer care as a target for quality improvement (QI) interventions. The Veterans Health Administration (VHA) implemented a series of disease-specific online cancer care QI toolkits. To describe characteristics of the toolkits, target users, and VHA cancer care facilities that influenced toolkit access and use and assess whether such resources were beneficial for users. Deductive content analysis of detailed notes from 94 telephone interviews with individuals from 48 VHA facilities. We evaluated toolkit access and use across cancer types, participation in learning collaboratives, and affiliation with VHA cancer care facilities. The presence of champions was identified as a strong facilitator of toolkit use, and learning collaboratives were important for spreading information about toolkit availability. Identified barriers included lack of personnel and financial resources and complicated approval processes to support tool use. Online cancer care toolkits are well received across cancer specialties and provider types. Clinicians, administrators, and QI staff may benefit from the availability of toolkits as they become more reliant on rapid access to strategies that support comprehensive delivery of evidence-based care. Toolkits should be considered as a complement to other QI approaches.

  20. Risk control for staff planning in e-commerce warehouses

    NARCIS (Netherlands)

    Wruck, Susanne; Vis, Iris F A; Boter, Jaap

    2016-01-01

    Internet sale supply chains often need to fulfil quickly small orders for many customers. The resulting high demand and planning uncertainties pose new challenges for e-commerce warehouse operations. Here, we develop a decision support tool to assist managers in selecting appropriate risk policies

  1. Overview and Meteorological Validation of the Wind Integration National Dataset toolkit

    Energy Technology Data Exchange (ETDEWEB)

    Draxl, C. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Hodge, B. M. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Clifton, A. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); McCaa, J. [3TIER by Vaisala, Seattle, WA (United States)

    2015-04-13

    The Wind Integration National Dataset (WIND) Toolkit described in this report fulfills these requirements, and constitutes a state-of-the-art national wind resource data set covering the contiguous United States from 2007 to 2013 for use in a variety of next-generation wind integration analyses and wind power planning. The toolkit is a wind resource data set, wind forecast data set, and wind power production and forecast data set derived from the Weather Research and Forecasting (WRF) numerical weather prediction model. WIND Toolkit data are available online for over 116,000 land-based and 10,000 offshore sites representing existing and potential wind facilities.

  2. Automated realtime data import for the i2b2 clinical data warehouse: introducing the HL7 ETL cell.

    Science.gov (United States)

    Majeed, Raphael W; Röhrig, Rainer

    2012-01-01

    Clinical data warehouses are used to consolidate all available clinical data from one or multiple organizations. They represent an important source for clinical research, quality management and controlling. Since its introduction, the data warehouse i2b2 has gathered a large user base in the research community. Yet, little work has been done on the process of importing clinical data into data warehouses using existing standards. In this article, we present a novel approach of utilizing the clinical integration server, commonly available in most hospitals, as the data source. As information is transmitted through the integration server, the standardized HL7 message is immediately parsed and inserted into the data warehouse. Evaluation of import speeds suggests the feasibility of the provided solution for real-time processing of HL7 messages. By using the presented approach of standardized data import, i2b2 can be used as a plug-and-play data warehouse, without the hurdle of customized import for every clinical information system or electronic medical record. The provided solution is available for download at http://sourceforge.net/projects/histream/.
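
    The core of such an ETL cell is a mapping from HL7 segments to warehouse facts. The sketch below hand-parses a deliberately simplified ORU^R01 message and maps one OBX result to an i2b2-style observation_fact row; the message and the mapping are illustrative, and a production system would use a full HL7 parser.

        # Hand-rolled parse of a toy HL7 v2 message; field positions per HL7 v2.x.
        RAW = "\r".join([
            "MSH|^~\\&|LAB|HOSP|I2B2|HOSP|20120101120000||ORU^R01|42|P|2.5",
            "PID|1||12345^^^HOSP||Doe^John",
            "OBX|1|NM|2345-7^GLUCOSE^LN||98|mg/dL|||||F",
        ])

        segments = {line.split("|")[0]: line.split("|") for line in RAW.split("\r")}
        pid, obx = segments["PID"], segments["OBX"]

        fact = {
            "patient_num": pid[3].split("^")[0],            # PID-3: patient identifier
            "concept_cd": "LOINC:" + obx[3].split("^")[0],  # OBX-3: observation code
            "nval_num": float(obx[5]),                      # OBX-5: numeric value
            "units_cd": obx[6],                             # OBX-6: units
            "start_date": segments["MSH"][6],               # MSH-7: message timestamp
        }
        print(fact)   # row ready for INSERT into i2b2's observation_fact table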

  3. The Populist Toolkit

    OpenAIRE

    Ylä-Anttila, Tuukka Salu Santeri

    2017-01-01

    Populism has often been understood as a description of political parties and politicians, who have been labelled either populist or not. This dissertation argues that it is more useful to conceive of populism in action: as something that is done rather than something that is. I propose that the populist toolkit is a collection of cultural practices, which politicians and citizens use to make sense of and do politics, by claiming that ‘the people’ are opposed by a corrupt elite – a powerful cl...

  4. Data-driven warehouse optimization : Deploying skills of order pickers

    NARCIS (Netherlands)

    M. Matusiak (Marek); M.B.M. de Koster (René); J. Saarinen (Jari)

    2015-01-01

    textabstractBatching orders and routing order pickers is a commonly studied problem in many picker-to-parts warehouses. The impact of individual differences in picking skills on performance has received little attention. In this paper, we show that taking into account differences in the skills of

  5. Can Leader–Member Exchange Contribute to Safety Performance in An Italian Warehouse?

    Directory of Open Access Journals (Sweden)

    Marco G. Mariani

    2017-05-01

    Full Text Available Introduction: The research considers safety climate in a warehouse and analyzes the role of Leader–Member Exchange (LMX) with respect to safety performance. Griffin and Neal's safety model was adopted, and LMX was inserted as a moderator of the relationships between safety climate and the proximal antecedents (motivation and knowledge) of the safety performance constructs (compliance and participation). Materials and Methods: Survey data were collected from a sample of 133 full-time employees in an Italian warehouse. The statistical framework of Hayes (2013) was adopted for moderated mediation analysis. Results: Proximal antecedents partially mediated the relationship between safety climate and safety participation, but not safety compliance. Moreover, the results from the moderation analysis showed that Leader–Member Exchange moderated the influence of safety climate on the proximal antecedents, and the mediation exists only at higher levels of LMX. Conclusion: The study shows that different aspects of leadership processes interact in explaining individual proficiency in safety practices. Practical Implications: Organizations such as warehouses should improve the quality of the relationship between a leader and a subordinate, based upon the dimensions of respect, trust, and obligation, to achieve high levels of safety performance.

  6. A toolkit for promoting healthy ageing

    NARCIS (Netherlands)

    Jeroen Knevel; Aly Gruppen

    2016-01-01

    This toolkit therefore focusses on self-management abilities. That means finding and maintaining effective, positive coping methods in relation to our health. We included many common and frequently discussed topics such as drinking, eating, physical exercise, believing in the future, resilience,

  7. WATCHMAN: A Data Warehouse Intelligent Cache Manager

    Science.gov (United States)

    Scheuermann, Peter; Shim, Junho; Vingralek, Radek

    1996-01-01

    Data warehouses store large volumes of data which are used frequently by decision support applications. Such applications involve complex queries. Query performance in such an environment is critical because decision support applications often require interactive query response time. Because data warehouses are updated infrequently, it becomes possible to improve query performance by caching sets retrieved by queries in addition to query execution plans. In this paper we report on the design of an intelligent cache manager for sets retrieved by queries, called WATCHMAN, which is particularly well suited for the data warehousing environment. Our cache manager employs two novel, complementary algorithms for cache replacement and for cache admission. WATCHMAN aims at minimizing query response time, and its cache replacement policy swaps out entire retrieved sets of queries instead of individual pages. The cache replacement and admission algorithms make use of a profit metric, which considers for each retrieved set its average rate of reference, its size, and the execution cost of the associated query. We report on a performance evaluation based on the TPC-D and Set Query benchmarks. These experiments show that WATCHMAN achieves a substantial performance improvement in a decision support environment when compared to a traditional LRU replacement algorithm.
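
    The profit metric lends itself to a compact sketch: profit combines a retrieved set's average reference rate, its query execution cost, and its size, and replacement evicts whole low-profit sets. The class below is a loose reconstruction under those stated assumptions, not WATCHMAN's actual code.

        # Profit-based replacement for whole retrieved sets; numbers illustrative.
        import time

        class ProfitCache:
            def __init__(self, capacity):
                self.capacity, self.used, self.entries = capacity, 0, {}

            def _profit(self, e):
                rate = e["refs"] / max(time.time() - e["since"], 1e-9)
                return rate * e["cost"] / e["size"]   # reference rate x cost / size

            def reference(self, qid):
                if qid in self.entries:
                    self.entries[qid]["refs"] += 1
                    return self.entries[qid]["rows"]
                return None                   # miss: caller executes the query

            def admit(self, qid, rows, size, cost):
                cand = {"size": size, "cost": cost, "refs": 1,
                        "since": time.time() - 1.0}
                while self.used + size > self.capacity:
                    if not self.entries:
                        return False          # set larger than the whole cache
                    worst = min(self.entries,
                                key=lambda q: self._profit(self.entries[q]))
                    if self._profit(self.entries[worst]) >= self._profit(cand):
                        return False          # admission denied: victims more profitable
                    self.used -= self.entries.pop(worst)["size"]
                cand["rows"] = rows
                self.entries[qid] = cand
                self.used += size
                return True

        cache = ProfitCache(capacity=100)
        print(cache.admit("Q1", [("r1",)], size=60, cost=9.0))  # True: admitted
        print(cache.reference("Q1") is not None)                # True: cache hit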

  8. Storage of hazardous substances in bonded warehouses

    International Nuclear Information System (INIS)

    Villalobos Artavia, Beatriz

    2008-01-01

    A variety of special regulations exist in Costa Rica for the registration and transport of hazardous substances; these set the requirements for entry into the country and for the security of transport units. However, the regulations include no specific rules for storing hazardous substances. Bonded warehouses (tax deposits) are the initial place where substances entering the country are stored. Basic rules regulating the storage of hazardous substances have been drafted through the analysis of national and international regulations and laws governing hazardous substances. The regulatory situation that currently exists was established through field research in bonded warehouses in the metropolitan area. The storage and security measures used by the personnel handling the substances were identified, in order to compare them against the reality of how hazardous substances are handled in bonded warehouses. On this basis a rule for the storage of hazardous substances in bonded warehouses can be made, protecting the safety of the environment in which they are handled and preventing possible accidents. The rule will specify the characteristics of warehouses storing hazardous substances, such as safety standards, labeling standards, infrastructure features, common storage, and the transitional measures that all bonded warehouses must possess and meet in order to store hazardous substances. (author) [es

  9. Fragment Impact Toolkit (FIT)

    Energy Technology Data Exchange (ETDEWEB)

    Shevitz, Daniel Wolf [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Key, Brian P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Garcia, Daniel B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-05

    The Fragment Impact Toolkit (FIT) is a software package used for probabilistic consequence evaluation of fragmenting sources. The typical use case for FIT is to simulate an exploding shell and evaluate the consequence on nearby objects. FIT is written in the programming language Python and is designed as a collection of interacting software modules. Each module has a function that interacts with the other modules to produce desired results.

  10. toxoMine: an integrated omics data warehouse for Toxoplasma gondii systems biology research.

    Science.gov (United States)

    Rhee, David B; Croken, Matthew McKnight; Shieh, Kevin R; Sullivan, Julie; Micklem, Gos; Kim, Kami; Golden, Aaron

    2015-01-01

    Toxoplasma gondii (T. gondii) is an obligate intracellular parasite that must monitor for changes in the host environment and respond accordingly; however, it is still not fully known which genetic or epigenetic factors are involved in regulating virulence traits of T. gondii. There are on-going efforts to elucidate the mechanisms regulating the stage transition process via the application of high-throughput epigenomics, genomics and proteomics techniques. Given the range of experimental conditions and the typical yield from such high-throughput techniques, a new challenge arises: how to effectively collect, organize and disseminate the generated data for subsequent data analysis. Here, we describe toxoMine, which provides a powerful interface to support sophisticated integrative exploration of high-throughput experimental data and metadata, providing researchers with a more tractable means toward understanding how genetic and/or epigenetic factors play a coordinated role in determining pathogenicity of T. gondii. As a data warehouse, toxoMine allows integration of high-throughput data sets with public T. gondii data. toxoMine is also able to execute complex queries involving multiple data sets with straightforward user interaction. Furthermore, toxoMine allows users to define their own parameters during the search process that gives users near-limitless search and query capabilities. The interoperability feature also allows users to query and examine data available in other InterMine systems, which would effectively augment the search scope beyond what is available to toxoMine. toxoMine complements the major community database ToxoDB by providing a data warehouse that enables more extensive integrative studies for T. gondii. Given all these factors, we believe it will become an indispensable resource to the greater infectious disease research community. © The Author(s) 2015. Published by Oxford University Press.

  11. Web-based Toolkit for Dynamic Generation of Data Processors

    Science.gov (United States)

    Patel, J.; Dascalu, S.; Harris, F. C.; Benedict, K. K.; Gollberg, G.; Sheneman, L.

    2011-12-01

    All computation-intensive scientific research uses structured datasets, including hydrology and all other types of climate-related research. When it comes to testing their hypotheses, researchers might use the same dataset differently, and modify, transform, or convert it to meet their research needs. Currently, many researchers spend a good amount of time performing data processing and building tools to speed up this process. They might routinely repeat the same process activities for new research projects, spending precious time that otherwise could be dedicated to analyzing and interpreting the data. Numerous tools are available to run tests on prepared datasets and many of them work with datasets in different formats. However, there is still a significant need for applications that can comprehensively handle data transformation and conversion activities and help prepare the various processed datasets required by the researchers. We propose a web-based application (a software toolkit) that dynamically generates data processors capable of performing data conversions, transformations, and customizations based on user-defined mappings and selections. As a first step, the proposed solution allows the users to define various data structures and, in the next step, can select various file formats and data conversions for their datasets of interest. In a simple scenario, the core of the proposed web-based toolkit allows the users to define direct mappings between input and output data structures. The toolkit will also support defining complex mappings involving the use of pre-defined sets of mathematical, statistical, date/time, and text manipulation functions. Furthermore, the users will be allowed to define logical cases for input data filtering and sampling. At the end of the process, the toolkit is designed to generate reusable source code and executable binary files for download and use by the scientists. The application is also designed to store all data
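
    At its core, such a generated processor applies a user-defined mapping from input to output structures, with optional pre-defined functions per field. A toy sketch of that engine, with invented field names and functions, is given below.

        # User-defined mapping: output field -> (function or None, input fields).
        import csv, io, statistics

        FUNCTIONS = {"mean": statistics.mean, "c_to_f": lambda c: c * 9 / 5 + 32}

        MAPPING = {
            "site":      (None, ["station_id"]),        # direct copy
            "temp_f":    ("c_to_f", ["temp_c"]),        # unit conversion
            "flow_mean": ("mean", ["flow_am", "flow_pm"]),  # aggregation
        }

        def apply_mapping(row, mapping):
            out = {}
            for field, (func, sources) in mapping.items():
                values = [row[s] for s in sources]
                if func is None:
                    out[field] = values[0]
                elif len(sources) == 1:
                    out[field] = FUNCTIONS[func](float(values[0]))
                else:
                    out[field] = FUNCTIONS[func]([float(v) for v in values])
            return out

        data = io.StringIO("station_id,temp_c,flow_am,flow_pm\nS1,20,3.0,5.0\n")
        for row in csv.DictReader(data):
            print(apply_mapping(row, MAPPING))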

  12. User's manual for the two-dimensional transputer graphics toolkit

    Science.gov (United States)

    Ellis, Graham K.

    1988-01-01

    The user manual for the 2-D graphics toolkit for a transputer based parallel processor is presented. The toolkit consists of a package of 2-D display routines that can be used for the simulation visualizations. It supports multiple windows, double buffered screens for animations, and simple graphics transformations such as translation, rotation, and scaling. The display routines are written in occam to take advantage of the multiprocessing features available on transputers. The package is designed to run on a transputer separate from the graphics board.

  13. A toolkit for detecting technical surprise.

    Energy Technology Data Exchange (ETDEWEB)

    Trahan, Michael Wayne; Foehse, Mark C.

    2010-10-01

    The detection of a scientific or technological surprise within a secretive country or institute is very difficult. The ability to detect such surprises would allow analysts to identify the capabilities that could be a military or economic threat to national security. Sandia's current approach utilizing ThreatView has been successful in revealing potential technological surprises. However, as data sets become larger, it becomes critical to use algorithms as filters along with the visualization environments. Our two-year LDRD had two primary goals. First, we developed a tool, a Self-Organizing Map (SOM), to extend ThreatView and improve our understanding of the issues involved in working with textual data sets. Second, we developed a toolkit for detecting indicators of technical surprise in textual data sets. Our toolkit has been successfully used to perform technology assessments for the Science & Technology Intelligence (S&TI) program.
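
    To make the filtering idea concrete, the following is a compact from-scratch Self-Organizing Map sketch (not Sandia's toolkit): documents embedded as feature vectors are mapped onto a 2-D grid, and documents that fit the trained map poorly are flagged for analyst attention. The data are synthetic.

        # Minimal SOM: pull the best-matching unit's neighborhood toward each sample.
        import numpy as np

        rng = np.random.default_rng(0)
        docs = np.vstack([rng.normal(0, 1, (98, 10)),   # ordinary documents
                          rng.normal(6, 1, (2, 10))])   # potential "surprises"

        grid, dim = (8, 8), docs.shape[1]
        W = rng.normal(size=grid + (dim,))              # 8x8 map of weight vectors
        coords = np.stack(np.meshgrid(*map(np.arange, grid), indexing="ij"), axis=-1)

        for t in range(2000):
            lr = 0.5 * (1 - t / 2000)                   # decaying learning rate
            radius = 3.0 * (1 - t / 2000) + 0.5         # shrinking neighborhood
            x = docs[rng.integers(len(docs))]
            bmu = np.unravel_index(np.argmin(((W - x) ** 2).sum(-1)), grid)
            h = np.exp(-((coords - bmu) ** 2).sum(-1) / (2 * radius ** 2))
            W += lr * h[..., None] * (x - W)

        # Documents with the largest quantization error fit the map worst and
        # are candidates for analyst attention.
        dist = [np.min(((W - d) ** 2).sum(-1)) ** 0.5 for d in docs]
        print("least typical document indices:", np.argsort(dist)[-2:])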

  14. Tabu search approaches for the multi-level warehouse layout problem with adjacency constraints

    Science.gov (United States)

    Zhang, G. Q.; Lai, K. K.

    2010-08-01

    A new multi-level warehouse layout problem, the multi-level warehouse layout problem with adjacency constraints (MLWLPAC), is investigated. The same item type is required to be located in adjacent cells, and horizontal and vertical unit travel costs are product dependent. An integer programming model is proposed to formulate the problem, which is NP-hard. Along with a cube-per-order index policy based heuristic, the standard tabu search (TS), greedy TS, and dynamic neighbourhood based TS are presented to solve the problem. The computational results show that the proposed approaches can reduce the transportation cost significantly.
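
    A standard tabu search of the kind used above fits in a few lines: steepest descent over swap moves with a short-term tabu list. The cost function below is a stand-in (demand-weighted cell costs), not the full MLWLPAC model with adjacency constraints.

        # Bare-bones tabu search over swap moves; instance invented for illustration.
        import itertools

        demand = [9, 4, 7, 2, 5]            # orders per item type
        cell_cost = [1, 2, 3, 4, 5]         # unit travel cost of each cell

        def cost(assign):                   # item type assign[i] stored in cell i
            return sum(demand[t] * cell_cost[i] for i, t in enumerate(assign))

        def swapped(a, i, j):
            b = a[:]
            b[i], b[j] = b[j], b[i]
            return b

        current = list(range(5))
        best, best_cost = current[:], cost(current)
        tabu, TENURE = [], 5

        for _ in range(100):
            moves = [m for m in itertools.combinations(range(5), 2) if m not in tabu]
            move = min(moves, key=lambda m: cost(swapped(current, *m)))
            current = swapped(current, *move)   # take the best non-tabu swap
            tabu.append(move)
            if len(tabu) > TENURE:
                tabu.pop(0)                     # short-term memory expires
            if cost(current) < best_cost:
                best, best_cost = current[:], cost(current)

        print("best cell assignment and cost:", best, best_cost)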

  15. Implementación de un piloto del componente comercial del data warehouse de Etapatelecom

    OpenAIRE

    Vélez Iñiguez, Roberto José

    2008-01-01

    The system to be developed is framed within a Data Warehouse architecture, whose objective is to extract information from the transactional systems available at Etapatelecom for use in a decision-oriented database (Data Warehouse) for the company's Commercial Area. The procedure begins with an analysis of the requirements of the strategic users of Etapatelecom's Commercial Area, in which the business indicators (measures) and the dimensions...

  16. Identifying and Prioritizing Cleaner Production Strategies in Raw Materials’ Warehouse of Yazdbaf Textile Company in 2015

    Directory of Open Access Journals (Sweden)

    Mohammad Taghi Ghaneian

    2017-03-01

    Full Text Available Introduction: Cleaner production in the textile industry is achieved by reducing water and chemical consumption, saving energy, reducing the production of air pollution and solid wastes, and reducing toxicity and noise pollution through many solutions. The purpose of the present research was to apply Strengths, Weaknesses, Opportunities, Threats (SWOT) analysis and the Quantitative Strategic Planning Matrix (QSPM) technique to identify and prioritize cleaner production strategies in the raw materials' warehouse of the Yazdbaf textile factory. Materials and Methods: In this research, internal and external factors affecting cleaner production were identified by gathering the required information through field visits and interviews with industry managers and supervisors of the raw materials' warehouse. To form the matrices of internal and external factors, 17 important internal factors and 7 important external factors were identified and selected. Then, the QSPM matrix was formed to determine the attractiveness and priority of the selected strategies, using the results of the internal and external factor matrices and the SWOT matrix. Results: According to the results, the total score of the raw materials' warehouse in the Internal Factor Evaluation (IFE) matrix is 2.90, which indicates a good situation of the warehouse with respect to internal factors. However, the total score in the External Factor Evaluation (EFE) matrix is 2.14, indicating a relatively weak situation with respect to external factors. Conclusion: Based on the obtained results, the continuity, monitoring, and improvement of the general plan of qualitative control (QC) of raw materials and the laboratory, as well as more emphasis on quality indexes according to their importance in the production processes, were selected as the most important strategies.
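
    The IFE/EFE totals quoted above follow a simple recipe: each factor receives an importance weight (weights summing to 1) and a rating from 1 to 4, and the matrix total is the weighted sum, with roughly 2.5 as the customary midpoint. A toy computation with invented factors:

        # Weighted-sum IFE score; factors, weights, and ratings are invented.
        factors = [                      # (factor, weight, rating 1-4)
            ("QC plan for raw materials", 0.30, 4),
            ("staff training",            0.25, 3),
            ("warehouse ventilation",     0.25, 2),
            ("waste segregation",         0.20, 2),
        ]
        total = sum(w * r for _, w, r in factors)
        print("IFE total score:", total)   # above ~2.5 signals internal strength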

  17. Probabilistic Data Modeling and Querying for Location-Based Data Warehouses

    DEFF Research Database (Denmark)

    Timko, Igor; Dyreson, Curtis E.; Pedersen, Torben Bach

    Motivated by the increasing need to handle complex, dynamic, uncertain multidimensional data in location-based warehouses, this paper proposes a novel probabilistic data model that can address the complexities of such data. The model provides a foundation for handling complex hierarchical and unc...

  18. Probabilistic Data Modeling and Querying for Location-Based Data Warehouses

    DEFF Research Database (Denmark)

    Timko, Igor; Dyreson, Curtis E.; Pedersen, Torben Bach

    2005-01-01

    Motivated by the increasing need to handle complex, dynamic, uncertain multidimensional data in location-based warehouses, this paper proposes a novel probabilistic data model that can address the complexities of such data. The model provides a foundation for handling complex hierarchical and unc...

  19. A Foundation for Spatial Data Warehouses on the Semantic Web

    DEFF Research Database (Denmark)

    Gur, Nurefsan; Pedersen, Torben Bach; Zimányi, Esteban

    2017-01-01

    Large volumes of geospatial data are being published on the Semantic Web (SW), yielding a need for advanced analysis of such data. However, existing SW technologies only support advanced analytical concepts such as multidimensional (MD) data warehouses and Online Analytical Processing (OLAP) over...

  20. An interactive toolkit to extract phenological time series data from digital repeat photography

    Science.gov (United States)

    Seyednasrollah, B.; Milliman, T. E.; Hufkens, K.; Kosmala, M.; Richardson, A. D.

    2017-12-01

    Near-surface remote sensing and in situ photography are powerful tools to study how climate change and climate variability influence vegetation phenology and the associated seasonal rhythms of green-up and senescence. The rapidly-growing PhenoCam network has been using in situ digital repeat photography to study phenology in almost 500 locations around the world, with an emphasis on North America. However, extracting time series data from multiple years of half-hourly imagery - while each set of images may contain several regions of interest (ROI's), corresponding to different species or vegetation types - is not always straightforward. Large volumes of data require substantial processing time, and changes (either intentional or accidental) in camera field of view require adjustment of ROI masks. Here, we introduce and present "DrawROI" as an interactive web-based application for imagery from PhenoCam. DrawROI can also be used offline, as a fully independent toolkit that significantly facilitates extraction of phenological data from any stack of digital repeat photography images. DrawROI provides a responsive environment for phenological scientists to interactively a) delineate ROIs, b) handle field of view (FOV) shifts, and c) extract and export time series data characterizing image color (i.e. red, green and blue channel digital numbers for the defined ROI). The application utilizes artificial intelligence and advanced machine learning techniques and gives the user the opportunity to redraw new ROIs every time an FOV shift occurs. DrawROI also offers a quality control flag to indicate noisy data and images with low quality due to the presence of fog or snow. The web-based application significantly accelerates the process of creating new ROIs and modifying pre-existing ROIs in the PhenoCam database. The offline toolkit is presented as an open source R-package that can be used with similar datasets with time-lapse photography to obtain more data for
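
    The quantity such toolkits extract per image is typically the green chromatic coordinate, Gcc = G / (R + G + B), averaged over each ROI. A minimal sketch, with a synthetic array standing in for a PhenoCam photograph and an invented ROI:

        # One time-series point: mean green chromatic coordinate over an ROI.
        import numpy as np

        rng = np.random.default_rng(0)
        image = rng.integers(0, 256, size=(120, 160, 3)).astype(float)  # H x W x RGB

        roi = np.zeros((120, 160), dtype=bool)
        roi[40:90, 30:130] = True          # hypothetical deciduous-canopy ROI

        r, g, b = (image[..., k][roi] for k in range(3))
        gcc = (g / (r + g + b + 1e-9)).mean()
        print(f"mean GCC over ROI: {gcc:.4f}")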

  1. A framework for information warehouse development processes

    OpenAIRE

    Holten, Roland

    1999-01-01

    Since the terms Data Warehouse and On-Line Analytical Processing were proposed by Inmon and by Codd, Codd, and Salley, respectively, the traditional ideas of creating information systems in support of management's decisions became interesting again in theory and practice. Today information warehousing is a strategic market for any database systems vendor. Nevertheless the theoretical discussions of this topic go back to the early years of the 20th century as far as management science and accounting the...

  2. Heuristics for multi-item two-echelon spare parts inventory control problem with batch ordering in the central warehouse

    NARCIS (Netherlands)

    Topan, E.; Bayindir, Z.P.; Tan, T.

    2010-01-01

    We consider a multi-item two-echelon inventory system in which the central warehouse operates under a (Q, R) policy, and each local warehouse implements an (S − 1, S) policy. The objective is to find the policy parameters minimizing expected system-wide inventory holding and fixed ordering costs subject

  3. 7 CFR 735.401 - Electronic warehouse receipt and USWA electronic document providers.

    Science.gov (United States)

    2010-01-01

    ... audit level financial statement prepared according to generally accepted accounting standards as defined... warehouse receipt requirements; (3) Liability; (4) Transfer of records protocol; (5) Records; (6) Conflict...

  4. DESAIN ETL DENGAN CONTOH KASUS PERGURUAN TINGGI

    Directory of Open Access Journals (Sweden)

    Spits Warnars

    2009-01-01

    Full Text Available A data warehouse for higher education serves as a paradigm for helping top management make effective and efficient strategic decisions, based on reliable and trusted reports produced from the data warehouse itself. A data warehouse is not software, hardware, or a tool; it is an environment in which the transactional database is modelled in another view for decision-making purposes. ETL (Extraction, Transformation and Loading) is the bridge used to build the data warehouse and transform data from the transactional database. Every fact and dimension table is given fields that represent the merge-loading construction used by the ETL extraction. ETL requires an ETL table and an ETL process: the ETL table provides the connectivity between tables in the OLTP database and tables in the data warehouse, and the ETL process transforms data from the OLTP tables into the data warehouse tables based on the ETL table. The extraction process is run with a database table that differentiates the ETL process, and an ETL algorithm runs automatically during idle transactional periods, along with the daily transactional database backup, when the information system is not in use.
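
    A stripped-down illustration of one such ETL step, using an invented university schema: rows are extracted from a transactional enrollment table, grade codes are transformed to grade points, and the result is loaded into a fact table (surrogate keys are omitted for brevity).

        # Extract -> transform -> load, all inside one in-memory SQLite database.
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
        CREATE TABLE oltp_enrollment (student_id, course_id, term, grade);
        INSERT INTO oltp_enrollment VALUES
            (1, 'CS101', '2009-1', 'A'),
            (2, 'CS101', '2009-1', 'B');
        CREATE TABLE fact_enrollment (student_key, course_key, term_key, grade_points);
        """)

        GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "E": 0.0}

        for sid, cid, term, grade in db.execute("SELECT * FROM oltp_enrollment"):
            db.execute("INSERT INTO fact_enrollment VALUES (?, ?, ?, ?)",
                       (sid, cid, term, GRADE_POINTS[grade]))  # transform + load

        print(db.execute("SELECT AVG(grade_points) FROM fact_enrollment").fetchone())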

  5. Texas Team: Academic Progression and IOM Toolkit.

    Science.gov (United States)

    Reid, Helen; Tart, Kathryn; Tietze, Mari; Joseph, Nitha Mathew; Easley, Carson

    The Institute of Medicine (IOM) Future of Nursing report identified eight recommendations for nursing to improve health care for all Americans. The Texas Team for Advancing Health Through Nursing embraced the challenge of implementing the recommendations through two diverse projects. One group conducted a broad, online survey of leadership, practice, and academia, focusing on the IOM recommendations. The other focused specifically on academic progression through the use of CABNET (Consortium for Advancing Baccalaureate Nursing Education in Texas) articulation agreements. The survey revealed a lack of knowledge and understanding of the IOM recommendations, prompting development of an online IOM toolkit. The articulation agreements provide a clear pathway for students pursuing the RN-to-BSN degree. The toolkit and articulation agreements provide rich resources for implementation of the IOM recommendations.

  6. LAIT: a local ancestry inference toolkit.

    Science.gov (United States)

    Hui, Daniel; Fang, Zhou; Lin, Jerome; Duan, Qing; Li, Yun; Hu, Ming; Chen, Wei

    2017-09-06

    Inferring local ancestry in individuals of mixed ancestry has many applications, most notably in identifying disease-susceptible loci that vary among different ethnic groups. Many software packages are available for inferring local ancestry in admixed individuals. However, most of these existing software packages require specific formatted input files and generate output files in various types, yielding practical inconvenience. We developed a tool set, Local Ancestry Inference Toolkit (LAIT), which can convert standardized files into software-specific input file formats as well as standardize and summarize inference results for four popular local ancestry inference software: HAPMIX, LAMP, LAMP-LD, and ELAI. We tested LAIT using both simulated and real data sets and demonstrated that LAIT provides convenience to run multiple local ancestry inference software. In addition, we evaluated the performance of local ancestry software among different supported software packages, mainly focusing on inference accuracy and computational resources used. We provided a toolkit to facilitate the use of local ancestry inference software, especially for users with limited bioinformatics background.

  7. ECCE Toolkit: Prototyping Sensor-Based Interaction

    Directory of Open Access Journals (Sweden)

    Andrea Bellucci

    2017-02-01

    Full Text Available Building and exploring physical user interfaces requires high technical skills and hours of specialized work. The behavior of multiple devices with heterogeneous input/output channels and connectivity has to be programmed in a context where not only the software interface matters, but the hardware components are also critical (e.g., sensors and actuators). Prototyping physical interaction is hindered by the challenges of: (1) programming interactions among physical sensors/actuators and digital interfaces; (2) implementing functionality for different platforms in different programming languages; and (3) building custom electronic-incorporated objects. We present ECCE (Entities, Components, Couplings and Ecosystems), a toolkit for non-programmers that copes with these issues by abstracting from low-level implementations, thus lowering the complexity of prototyping small-scale, sensor-based physical interfaces to support the design process. A user evaluation provides insights and use cases of the kind of applications that can be developed with the toolkit.

  8. Data warehouse solution for energy management tasks of a power generation corporation in a competitive market; Data-Warehouse-Loesung fuer die Energiemanagementaufgaben eines Stromerzeugungsbereiches in einem wettbewerblichen Umfeld

    Energy Technology Data Exchange (ETDEWEB)

    Brugger, H. [Siemens AG, Vienna (Austria). Bereich Energieuebertragung und -verteilung; Nobach, U. [Siemens AG, Nuernberg (Germany). Bereich Energieuebertragung und -verteilung; Hoenes, R.; Vetter, T. [Neckarwerke Stuttgart AG (Germany)

    1999-04-19

    Liberalization of the energy market imposes new requirements on the organization of utilities' business units and on their IT tools. Using the example of the new energy management system for Neckarwerke Stuttgart AG, the authors explain how a modern data warehouse platform can be created for the operative and economic tasks of electricity and district-heat procurement in a competitive market. (orig.)

  9. Wind Integration National Dataset (WIND) Toolkit; NREL (National Renewable Energy Laboratory)

    Energy Technology Data Exchange (ETDEWEB)

    Draxl, Caroline; Hodge, Bri-Mathias

    2015-07-14

    A webinar about the Wind Integration National Dataset (WIND) Toolkit was presented by Bri-Mathias Hodge and Caroline Draxl on July 14, 2015. It was hosted by the Southern Alliance for Clean Energy. The toolkit is a grid integration data set that contains meteorological and power data at a 5-minute resolution across the continental United States for 7 years and hourly power forecasts.

  10. Designing a ticket to ride with the Cognitive Work Analysis Design Toolkit.

    Science.gov (United States)

    Read, Gemma J M; Salmon, Paul M; Lenné, Michael G; Jenkins, Daniel P

    2015-01-01

    Cognitive work analysis has been applied in the design of numerous sociotechnical systems. The process used to translate analysis outputs into design concepts, however, is not always clear. Moreover, structured processes for translating the outputs of ergonomics methods into concrete designs are lacking. This paper introduces the Cognitive Work Analysis Design Toolkit (CWA-DT), a design approach which has been developed specifically to provide a structured means of incorporating cognitive work analysis outputs in design using design principles and values derived from sociotechnical systems theory. This paper outlines the CWA-DT and describes its application in a public transport ticketing design case study. Qualitative and quantitative evaluations of the process provide promising early evidence that the toolkit fulfils the evaluation criteria identified for its success, with opportunities for improvement also highlighted. The Cognitive Work Analysis Design Toolkit has been developed to provide ergonomics practitioners with a structured approach for translating the outputs of cognitive work analysis into design solutions. This paper demonstrates an application of the toolkit and provides evaluation findings.

  11. FY17Q4 Ristra project: Release Version 1.0 of a production toolkit

    Energy Technology Data Exchange (ETDEWEB)

    Hungerford, Aimee L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Daniel, David John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-21

    The Next Generation Code project will release Version 1.0 of a production toolkit for multi-physics application development on advanced architectures. Features of this toolkit will include remap and link utilities, control and state manager, setup, visualization and I/O, as well as support for a variety of mesh and particle data representations. Numerical physics packages that operate atop this foundational toolkit will be employed in a multi-physics demonstration problem and released to the community along with results from the demonstration.

  12. Development of a Multimedia Toolkit for Engineering Graphics Education

    Directory of Open Access Journals (Sweden)

    Moudar Zgoul

    2009-09-01

    Full Text Available This paper focuses upon the development of a multimedia toolkit to support the teaching of an Engineering Graphics course. The project used different elements for the toolkit: animations, videos, and presentations, which were then integrated into a dedicated internet website. The purpose of using these elements is to assist students in building and practicing the needed engineering skills at their own pace as part of an e-Learning solution. Furthermore, this kit allows students to repeat and view the processes and techniques of graphical construction and visualization as much as needed, allowing them to follow and practice on their own.

  13. AutoMicromanager: A microscopy scripting toolkit for LABVIEW and other programming environments

    Science.gov (United States)

    Ashcroft, Brian Alan; Oosterkamp, Tjerk

    2010-11-01

    We present a scripting toolkit for the acquisition and analysis of a wide variety of imaging data by integrating the ease of use of various programming environments such as LABVIEW, IGOR PRO, MATLAB, SCILAB, and others. This toolkit is designed to allow the user to quickly program a variety of standard microscopy components for custom microscopy applications allowing much more flexibility than other packages. Included are both programming tools as well as graphical user interface classes allowing a standard, consistent, and easy to maintain scripting environment. This programming toolkit allows easy access to most commonly used cameras, stages, and shutters through the Micromanager project so the scripter can focus on their custom application instead of boilerplate code generation.

  14. AutoMicromanager: a microscopy scripting toolkit for LABVIEW and other programming environments.

    Science.gov (United States)

    Ashcroft, Brian Alan; Oosterkamp, Tjerk

    2010-11-01

    We present a scripting toolkit for the acquisition and analysis of a wide variety of imaging data by integrating the ease of use of various programming environments such as LABVIEW, IGOR PRO, MATLAB, SCILAB, and others. This toolkit is designed to allow the user to quickly program a variety of standard microscopy components for custom microscopy applications allowing much more flexibility than other packages. Included are both programming tools as well as graphical user interface classes allowing a standard, consistent, and easy to maintain scripting environment. This programming toolkit allows easy access to most commonly used cameras, stages, and shutters through the Micromanager project so the scripter can focus on their custom application instead of boilerplate code generation.

  15. A Developing and Maintenance Toolkit for Operating Support System of Nuclear Power Plant with IVI-COM Technology

    International Nuclear Information System (INIS)

    Zhou, Yang Ping; Dong, Yu Jie; Huang, Xiao Jing; Yoshikawa, Hidekazu

    2011-01-01

    Because of the development and maturation of computer and related technologies, digitization is inevitably happening in many fields of complex industrial systems such as the Nuclear Power Plant (NPP). It is believed that the application of these digital operation support systems is able to improve the safety and reliability of complex industrial systems and reduce the workers' workload. However, the design, development and maintenance of operation support systems, such as digital operating procedures under both operational states and accident conditions, require not only a profound understanding of the design, operation and structure of the NPP but also expertise in information technology. For the reasons mentioned above, a human interface toolkit is proposed for helping the user to develop the operation support system of a complex industrial system. With a friendly graphical interface, this integrated tool includes a database, a procedure editor and a procedure executor. In the database, a three-layer hierarchy is adopted to express the complexity of the operation procedure, which includes mission, process and node. There are 10 kinds of node: entrance, exit, hint, manual input, detector, actuator, data treatment, branch, judgement and plug-in. With the procedure editor, the user can easily develop and maintain the procedure, and the finished procedure is stored in the database. Then, the procedure executor can load the procedure from the database for operation support and thus act as a digital operation support system. The operation support system senses and actuates the actual industrial systems through the interface based on IVI-COM (Interchangeable Virtual Instrumentation-Component Object Model) technology embedded in the detector and actuator nodes. With the help of various nodes, processes and missions, the developed digital system can access information from the plant, interact with the operator, call additional applications, and so on. According to the design mentioned

  16. A Developing and Maintenance Toolkit for Operating Support System of Nuclear Power Plant with IVI-COM Technology

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Yang Ping; Dong, Yu Jie; Huang, Xiao Jing [Tsinghua University, Beijing (China); Yoshikawa, Hidekazu [Harbin Engineering University, Harbin (China)

    2011-08-15

    Because of the development and maturation of computer and related technologies, digitization is inevitably happening in many fields of complex industrial systems such as the Nuclear Power Plant (NPP). It is believed that the application of these digital operation support systems is able to improve the safety and reliability of complex industrial systems and reduce the workers' workload. However, the design, development and maintenance of operation support systems, such as digital operating procedures under both operational states and accident conditions, require not only a profound understanding of the design, operation and structure of the NPP but also expertise in information technology. For the reasons mentioned above, a human interface toolkit is proposed for helping the user to develop the operation support system of a complex industrial system. With a friendly graphical interface, this integrated tool includes a database, a procedure editor and a procedure executor. In the database, a three-layer hierarchy is adopted to express the complexity of the operation procedure, which includes mission, process and node. There are 10 kinds of node: entrance, exit, hint, manual input, detector, actuator, data treatment, branch, judgement and plug-in. With the procedure editor, the user can easily develop and maintain the procedure, and the finished procedure is stored in the database. Then, the procedure executor can load the procedure from the database for operation support and thus act as a digital operation support system. The operation support system senses and actuates the actual industrial systems through the interface based on IVI-COM (Interchangeable Virtual Instrumentation-Component Object Model) technology embedded in the detector and actuator nodes. With the help of various nodes, processes and missions, the developed digital system can access information from the plant, interact with the operator, call additional applications, and so on. According to the design

  17. Windows forensic analysis toolkit advanced analysis techniques for Windows 7

    CERN Document Server

    Carvey, Harlan

    2012-01-01

    Now in its third edition, Harlan Carvey has updated "Windows Forensic Analysis Toolkit" to cover Windows 7 systems. The primary focus of this edition is on analyzing Windows 7 systems and on processes using free and open-source tools. The book covers live response, file analysis, malware detection, timeline, and much more. The author presents real-life experiences from the trenches, making the material realistic and showing the why behind the how. New to this edition, the companion and toolkit materials are now hosted online. This material consists of electronic printable checklists, cheat sheets, free custom tools, and walk-through demos. This edition complements "Windows Forensic Analysis Toolkit, 2nd Edition", (ISBN: 9781597494229), which focuses primarily on XP. It includes complete coverage and examples on Windows 7 systems. It contains Lessons from the Field, Case Studies, and War Stories. It features companion online material, including electronic printable checklists, cheat sheets, free custom tools, ...

  18. VIDE: The Void IDentification and Examination toolkit

    Science.gov (United States)

    Sutter, P. M.; Lavaux, G.; Hamaus, N.; Pisani, A.; Wandelt, B. D.; Warren, M.; Villaescusa-Navarro, F.; Zivick, P.; Mao, Q.; Thompson, B. B.

    2015-03-01

    We present VIDE, the Void IDentification and Examination toolkit, an open-source Python/C++ code for finding cosmic voids in galaxy redshift surveys and N-body simulations, characterizing their properties, and providing a platform for more detailed analysis. At its core, VIDE uses a substantially enhanced version of ZOBOV (Neyrinck 2008) to calculate a Voronoi tessellation for estimating the density field and performing a watershed transform to construct voids. Additionally, VIDE provides significant functionality for both pre- and post-processing: for example, VIDE can work with volume- or magnitude-limited galaxy samples with arbitrary survey geometries, or dark matter particles or halo catalogs in a variety of common formats. It can also randomly subsample inputs and includes a Halo Occupation Distribution model for constructing mock galaxy populations. VIDE uses the watershed levels to place voids in a hierarchical tree, outputs a summary of void properties in plain ASCII, and provides a Python API to perform many analysis tasks, such as loading and manipulating void catalogs and particle members, filtering, plotting, computing clustering statistics, stacking, comparing catalogs, and fitting density profiles. While centered around ZOBOV, the toolkit is designed to be as modular as possible and accommodate other void finders. VIDE has been in development for several years and has already been used to produce a wealth of results, which we summarize in this work to highlight the capabilities of the toolkit. VIDE is publicly available at http://bitbucket.org/cosmicvoids/vide_public and http://www.cosmicvoids.net.

  19. The Customer Flow Toolkit: A Framework for Designing High Quality Customer Services.

    Science.gov (United States)

    New York Association of Training and Employment Professionals, Albany.

    This document presents a toolkit to assist staff involved in the design and development of New York's one-stop system. Section 1 describes the preplanning issues to be addressed and the intended outcomes that serve as the framework for creation of the customer flow toolkit. Section 2 outlines the following strategies to assist in designing local…

  20. BAT - The Bayesian Analysis Toolkit

    CERN Document Server

    Caldwell, Allen C; Kröninger, Kevin

    2009-01-01

    We describe the development of a new toolkit for data analysis. The analysis package is based on Bayes' Theorem, and is realized with the use of Markov Chain Monte Carlo. This gives access to the full posterior probability distribution. Parameter estimation, limit setting and uncertainty propagation are implemented in a straightforward manner. A goodness-of-fit criterion is presented which is intuitive and of great practical use.
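
    BAT itself is a C++ package; the following Python sketch only illustrates the core idea the abstract describes, namely accessing the full posterior probability distribution via Markov Chain Monte Carlo, here with a plain Metropolis sampler for the mean of Gaussian data under a flat prior.

        import numpy as np

        rng = np.random.default_rng(0)
        data = rng.normal(loc=2.0, scale=1.0, size=50)  # toy measurements

        def log_posterior(mu):
            # Flat prior on mu; Gaussian likelihood with known sigma = 1.
            return -0.5 * np.sum((data - mu) ** 2)

        chain, mu = [], 0.0
        for _ in range(10000):
            proposal = mu + rng.normal(scale=0.5)
            # Accept with probability min(1, posterior ratio).
            if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(mu):
                mu = proposal
            chain.append(mu)

        burned = np.array(chain[2000:])  # drop burn-in samples
        print(f"posterior mean ~ {burned.mean():.3f}, std ~ {burned.std():.3f}")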

  1. Designing ETL Tools to Feed a Data Warehouse Based on Electronic Healthcare Record Infrastructure.

    Science.gov (United States)

    Pecoraro, Fabrizio; Luzi, Daniela; Ricci, Fabrizio L

    2015-01-01

    The aim of this paper is to propose a methodology to design Extract, Transform and Load (ETL) tools in a clinical data warehouse architecture based on the Electronic Healthcare Record (EHR). This approach takes advantage of this infrastructure as one of the main sources of information to feed the data warehouse, also taking into account that clinical documents produced by heterogeneous legacy systems are structured using the HL7 CDA standard. This paper describes the main activities to be performed to map the information collected in the different types of documents to the dimensional model primitives.
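
    As a rough illustration of the kind of mapping the paper describes, the sketch below pulls one coded observation out of a CDA-like XML fragment and emits a row shaped for a dimensional fact table. The element names, namespace handling, and target columns are simplifications assumed for the example, not the authors' actual mapping rules.

        import xml.etree.ElementTree as ET

        CDA_NS = {"hl7": "urn:hl7-org:v3"}
        doc = ET.fromstring("""
        <ClinicalDocument xmlns="urn:hl7-org:v3">
          <observation>
            <code code="8480-6" displayName="Systolic blood pressure"/>
            <value value="128" unit="mm[Hg]"/>
            <effectiveTime value="20150101"/>
          </observation>
        </ClinicalDocument>
        """)

        fact_rows = []
        for obs in doc.findall(".//hl7:observation", CDA_NS):
            code = obs.find("hl7:code", CDA_NS)
            value = obs.find("hl7:value", CDA_NS)
            time = obs.find("hl7:effectiveTime", CDA_NS)
            fact_rows.append({
                "measure_code": code.get("code"),        # dimension key (LOINC)
                "measure_name": code.get("displayName"),
                "value": float(value.get("value")),      # the numeric fact
                "unit": value.get("unit"),
                "date_key": time.get("value"),           # time-dimension key
            })
        print(fact_rows)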

  2. Combining Data Warehouse and Data Mining Techniques for Web Log Analysis

    DEFF Research Database (Denmark)

    Pedersen, Torben Bach; Jespersen, Søren; Thorhauge, Jesper

    2008-01-01

    a number of approaches that combine data warehousing and data mining techniques in order to analyze Web logs. After introducing the well-known click and session data warehouse (DW) schemas, the chapter presents the subsession schema, which allows fast queries on sequences...

  3. A Geospatial Decision Support System Toolkit, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to build and commercialize a working prototype Geospatial Decision Support Toolkit (GeoKit). GeoKit will enable scientists, agencies, and stakeholders to...

  4. Development of a Data Warehouse for Riverine and Coastal Flood Risk Management

    Science.gov (United States)

    McGrath, H.; Stefanakis, E.; Nastev, M.

    2014-11-01

    In New Brunswick, flooding typically occurs during the spring freshet, though, in recent years, midwinter thaws have led to flooding in January or February. Municipalities are therefore facing a pressing need to perform risk assessments in order to identify communities at risk of flooding. In addition to the identification of communities at risk, quantitative measures of potential structural damage and societal losses are necessary for these identified communities. Furthermore, tools which allow for analysis and processing of possible mitigation plans are needed. Natural Resources Canada is in the process of adapting Hazus-MH to respond to the need for risk management. This requires extensive data from a variety of municipal, provincial, and national agencies in order to provide valid estimates. The aim is to establish a data warehouse to store relevant flood prediction data which may be accessed through Hazus. Additionally, this data warehouse will contain tools for On-Line Analytical Processing (OLAP) and knowledge discovery to quantitatively determine areas at risk and discover unexpected dependencies between datasets. The third application of the data warehouse is to provide data for online visualization capabilities: web-based thematic maps of Hazus results, historical flood visualizations, and mitigation tools; thus making flood hazard information and tools more accessible to emergency responders, planners, and residents. This paper represents the first step of the process: locating and collecting the appropriate datasets.

  5. The Visualization Toolkit (VTK): Rewriting the rendering code for modern graphics cards

    Science.gov (United States)

    Hanwell, Marcus D.; Martin, Kenneth M.; Chaudhary, Aashish; Avila, Lisa S.

    2015-09-01

    The Visualization Toolkit (VTK) is an open source, permissively licensed, cross-platform toolkit for scientific data processing, visualization, and data analysis. It is over two decades old, originally developed for a very different graphics card architecture. Modern graphics cards feature fully programmable, highly parallelized architectures with large core counts. VTK's rendering code was rewritten to take advantage of modern graphics cards, maintaining most of the toolkit's programming interfaces. This offers the opportunity to compare the performance of old and new rendering code on the same systems/cards. Significant improvements in rendering speeds and memory footprints mean that scientific data can be visualized in greater detail than ever before. The widespread use of VTK means that these improvements will reap significant benefits.

  6. Heart Failure: Self-care to Success: Development and evaluation of a program toolkit.

    Science.gov (United States)

    Bryant, Rebecca

    2017-08-17

    The Heart Failure: Self-care to Success toolkit was developed to assist NPs in empowering patients with heart failure (HF) to improve individual self-care behaviors. This article details the evolution of this toolkit for NPs, its effectiveness with patients with HF, and recommendations for future research and dissemination strategies.

  7. Marine Debris and Plastic Source Reduction Toolkit

    Science.gov (United States)

    Many plastic food service ware items originate on college and university campuses—in cafeterias, snack rooms, cafés, and eateries with take-out dining options. This Campus Toolkit is a detailed “how to” guide for reducing plastic waste on college campuses.

  8. Matlab based Toolkits used to Interface with Optical Design Software for NASA's James Webb Space Telescope

    Science.gov (United States)

    Howard, Joseph

    2007-01-01

    The viewgraph presentation provides an introduction to the James Webb Space Telescope (JWST). The first part provides a brief overview of the Matlab toolkits that interface with the CodeV, OSLO, and Zemax optical design programs. The toolkit overview examines purpose, layout, how Matlab gets data from CodeV, function layout, and using cvHELP. The second part provides examples of use with JWST, including wavefront sensitivities and alignment simulations.

  9. Criticality calculation of the nuclear material warehouse of the ININ; Calculo de criticidad del almacen del material nuclear del ININ

    Energy Technology Data Exchange (ETDEWEB)

    Garcia, T.; Angeles, A.; Flores C, J., E-mail: teodoro.garcia@inin.gob.mx [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico)

    2013-10-15

    In this work, the nuclear safety conditions of the nuclear fuel warehouse of the TRIGA Mark III reactor of the Instituto Nacional de Investigaciones Nucleares (ININ) were determined both under normal conditions and for the accident scenario. The warehouse contains standard LEU 8.5/20 fuel elements, a control rod with a follower of standard LEU 8.5/20 fuel type, LEU 30/20 fuel elements, and the SUR-100 reactor fuel. To check the subcritical state of the warehouse, the effective multiplication factor (keff) was calculated. The keff calculation was carried out with the code MCNPX. (Author)

  10. Guide to Using the WIND Toolkit Validation Code

    Energy Technology Data Exchange (ETDEWEB)

    Lieberman-Cribbin, W.; Draxl, C.; Clifton, A.

    2014-12-01

    In response to the U.S. Department of Energy's goal of using 20% wind energy by 2030, the Wind Integration National Dataset (WIND) Toolkit was created to provide information on wind speed, wind direction, temperature, surface air pressure, and air density at more than 126,000 locations across the United States from 2007 to 2013. The numerical weather prediction model output, gridded at a 2-km spatial and 5-minute temporal resolution, was further converted to detail the wind power production time series of existing and potential wind facility sites. For users of the dataset it is important that the information presented in the WIND Toolkit is accurate and that errors are known, so that corrective steps can be taken. Therefore, we provide validation code written in R that will be made public to give users tools to validate data at their own locations. Validation is based on statistical analyses of wind speed, using error metrics such as bias, root-mean-square error, centered root-mean-square error, mean absolute error, and percent error. Plots of diurnal cycles, annual cycles, wind roses, histograms of wind speed, and quantile-quantile plots are created to visualize how well observational data compare with model data. Ideally, validation will confirm beneficial locations to utilize wind energy and encourage regional wind integration studies using the WIND Toolkit.
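
    The published validation code is written in R; the Python sketch below merely restates the listed error metrics so their definitions are unambiguous. The sample arrays (observed versus WIND Toolkit wind speed) are invented for the example.

        import numpy as np

        def validation_metrics(observed, modeled):
            obs, mod = np.asarray(observed, float), np.asarray(modeled, float)
            err = mod - obs
            bias = err.mean()
            rmse = np.sqrt((err ** 2).mean())
            # Centered RMSE removes the mean bias from each series first.
            crmse = np.sqrt((((mod - mod.mean()) - (obs - obs.mean())) ** 2).mean())
            mae = np.abs(err).mean()
            percent_error = 100.0 * bias / obs.mean()
            return {"bias": bias, "rmse": rmse, "crmse": crmse,
                    "mae": mae, "percent_error": percent_error}

        obs = np.array([5.1, 6.3, 7.8, 4.2, 9.0])   # measured wind speed (m/s)
        mod = np.array([5.5, 6.0, 8.1, 4.8, 8.6])   # modeled wind speed (m/s)
        print(validation_metrics(obs, mod))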

  11. Innovations and Challenges of Implementing a Glucose Gel Toolkit for Neonatal Hypoglycemia.

    Science.gov (United States)

    Hammer, Denise; Pohl, Carla; Jacobs, Peggy J; Kaufman, Susan; Drury, Brenda

    2018-05-24

    Transient neonatal hypoglycemia occurs most commonly in newborns who are small for gestational age, large for gestational age, infants of diabetic mothers, and late preterm infants. An exact blood glucose value has not been determined for neonatal hypoglycemia, and it is important to note that poor neurologic outcomes can occur if hypoglycemia is left untreated. Interventions that separate mothers and newborns, as well as use of formula to treat hypoglycemia, have the potential to disrupt exclusive breastfeeding. The aim was to determine whether implementation of a toolkit that includes 40% glucose gel, designed to support staff in adopting the practice change for management of newborns at risk for hypoglycemia in an obstetric unit with a level 2 nursery, would decrease admissions to the Intermediate Care Nursery and increase exclusive breastfeeding. This descriptive study used a retrospective chart review for pre/postimplementation of the Management of Newborns at Risk for Hypoglycemia Toolkit (Toolkit) using a convenience sample of at-risk newborns in the first 2 days of life to evaluate the proposed outcomes. Following implementation of the Toolkit, at-risk newborns had a clinically but not statistically significant 6.5% increase in exclusive breastfeeding and a clinically but not statistically significant 5% decrease in admissions to the Intermediate Care Nursery. The Toolkit was designed for ease of staff use and to improve outcomes for the at-risk newborn. Future research includes replication at other level 2 and level 1 obstetric centers and investigation into the number of 40% glucose gel doses that can safely be administered.

  12. Characteristics desired in clinical data warehouse for biomedical research.

    Science.gov (United States)

    Shin, Soo-Yong; Kim, Woo Sung; Lee, Jae-Ho

    2014-04-01

    Due to the unique characteristics of clinical data, clinical data warehouses (CDWs) have not been successful so far. Specifically, the use of CDWs for biomedical research has been relatively unsuccessful thus far. The characteristics necessary for the successful implementation and operation of a CDW for biomedical research have not been clearly defined yet. Three examples of CDWs were reviewed: a multipurpose CDW in a hospital, a CDW for independent multi-institutional research, and a CDW for research use in an institution. After reviewing the three CDW examples, we propose some key characteristics needed in a CDW for biomedical research. A CDW for research should include an honest broker system and an Institutional Review Board approval interface to comply with governmental regulations. It should also include a simple query interface, an anonymized data review tool, and a data extraction tool. Also, it should be a biomedical research platform for data repository use as well as data analysis. The proposed characteristics desired in a CDW may have limited transfer value to organizations in other countries. However, these analysis results are still valid in Korea, and we have developed a clinical research data warehouse based on these desiderata.

  13. Integrated System Health Management Development Toolkit

    Science.gov (United States)

    Figueroa, Jorge; Smith, Harvey; Morris, Jon

    2009-01-01

    This software toolkit is designed to model complex systems for the implementation of embedded Integrated System Health Management (ISHM) capability, which focuses on determining the condition (health) of every element in a complex system (detect anomalies, diagnose causes, and predict future anomalies), and to provide data, information, and knowledge (DIaK) to control systems for safe and effective operation.

  14. Development of hospital data warehouse for cost analysis of DPC based on medical costs.

    Science.gov (United States)

    Muranaga, F; Kumamoto, I; Uto, Y

    2007-01-01

    To develop a data warehouse system for cost analysis, based on the categories of the diagnosis procedure combination (DPC) system, in which medical costs were estimated by DPC category and factors influencing the balance between costs and fees. We developed a data warehouse system for cost analysis using data from the hospital central data warehouse system. The balance data of patients who were discharged from Kagoshima University Hospital from April 2003 to March 2005 were determined in terms of medical procedure, cost per day and patient admission in order to conduct a drill-down analysis. To evaluate this system, we analyzed cash flow by DPC category of patients who were categorized as having malignant tumors and whose DPC category was reevaluated in 2004. The percentages of medical expenses were highest in patients with acute leukemia, non-Hodgkin's lymphoma, and particularly in patients with malignant tumors of the liver and intrahepatic bile duct. Imaging tests degraded the percentages of medical expenses in Kagoshima University Hospital. These results suggested that cost analysis by patient is important for hospital administration in the inclusive evaluation system using a case-mix index such as DPC.

  15. pypet: A Python Toolkit for Data Management of Parameter Explorations.

    Science.gov (United States)

    Meyer, Robert; Obermayer, Klaus

    2016-01-01

    pypet (Python parameter exploration toolkit) is a new multi-platform Python toolkit for managing numerical simulations. Sampling the space of model parameters is a key aspect of simulations and numerical experiments. pypet is designed to allow easy and arbitrary sampling of trajectories through a parameter space beyond simple grid searches. pypet collects and stores both simulation parameters and results in a single HDF5 file. This collective storage allows fast and convenient loading of data for further analyses. pypet provides various additional features such as multiprocessing and parallelization of simulations, dynamic loading of data, integration of git version control, and supervision of experiments via the electronic lab notebook Sumatra. pypet supports a rich set of data formats, including native Python types, Numpy and Scipy data, Pandas DataFrames, and BRIAN(2) quantities. Besides these formats, users can easily extend the toolkit to allow customized data types. pypet is a flexible tool suited for both short Python scripts and large scale projects. pypet's various features, especially the tight link between parameters and results, promote reproducible research in computational neuroscience and simulation-based disciplines.
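
    A minimal pypet session, following the toolkit's documented usage pattern, looks roughly like the sketch below; exact API names may differ between versions (for example, env.run was env.f_run in older releases).

        from pypet import Environment, cartesian_product

        def simulate(traj):
            # Parameters are accessed as attributes of the trajectory.
            z = traj.x ** 2 + traj.y
            traj.f_add_result('z', z, comment='toy result')

        env = Environment(trajectory='example', filename='./example.hdf5')
        traj = env.trajectory
        traj.f_add_parameter('x', 1.0)
        traj.f_add_parameter('y', 1.0)
        # Explore the full 2x2 grid of parameter combinations.
        traj.f_explore(cartesian_product({'x': [1.0, 2.0], 'y': [0.0, 1.0]}))
        env.run(simulate)  # parameters and results land in a single HDF5 file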

  16. The Liquid Argon Software Toolkit (LArSoft): Goals, Status and Plan

    Energy Technology Data Exchange (ETDEWEB)

    Pordes, Rush [Fermilab; Snider, Erica [Fermilab

    2016-08-17

    LArSoft is a toolkit that provides a software infrastructure and algorithms for the simulation, reconstruction and analysis of events in Liquid Argon Time Projection Chambers (LArTPCs). It is used by the ArgoNeuT, LArIAT, MicroBooNE, DUNE (including 35ton prototype and ProtoDUNE) and SBND experiments. The LArSoft collaboration provides an environment for the development, use, and sharing of code across experiments. The ultimate goal is to develop fully automatic processes for reconstruction and analysis of LArTPC events. The toolkit is based on the art framework and has a well-defined architecture to interface to other packages, including to GEANT4 and GENIE simulation software and the Pandora software development kit for pattern recognition. It is designed to facilitate and support the evolution of algorithms including their transition to new computing platforms. The development of the toolkit is driven by the scientific stakeholders involved. The core infrastructure includes standard definitions of types and constants, means to input experiment geometries as well as meta and event- data in several formats, and relevant general utilities. Examples of algorithms experiments have contributed to date are: photon-propagation; particle identification; hit finding, track finding and fitting; electromagnetic shower identification and reconstruction. We report on the status of the toolkit and plans for future work.

  17. Implementation of a metadata architecture and knowledge collection to support semantic interoperability in an enterprise data warehouse.

    Science.gov (United States)

    Dhaval, Rakesh; Borlawsky, Tara; Ostrander, Michael; Santangelo, Jennifer; Kamal, Jyoti; Payne, Philip R O

    2008-11-06

    In order to enhance interoperability between enterprise systems, and improve data validity and reliability throughout The Ohio State University Medical Center (OSUMC), we have initiated the development of an ontology-anchored metadata architecture and knowledge collection for our enterprise data warehouse. The metadata and corresponding semantic relationships stored in the OSUMC knowledge collection are intended to promote consistency and interoperability across the heterogeneous clinical, research, business and education information managed within the data warehouse.

  18. Automated prototyping tool-kit (APT)

    OpenAIRE

    Nada, Nader; Shing, M.; Berzins, V.; Luqi

    2002-01-01

    Automated prototyping tool-kit (APT) is an integrated set of software tools that generate source programs directly from real-time requirements. The APT system uses a fifth-generation prototyping language to model the communication structure, timing constraints, I/O control, and data buffering that comprise the requirements for an embedded software system. The language supports the specification of hard real-time systems with reusable components from domain specific component libraries. APT ha...

  19. Comparison of Diarization Tools for Building Speaker Database

    Directory of Open Access Journals (Sweden)

    Eva Kiktova

    2015-01-01

    This paper compares open source diarization toolkits (LIUM, DiarTK, ALIZE-Lia_Ral), which were designed for extraction of speaker identity from audio records without any prior information about the analysed data. The comparative study of the diarization tools was performed for three different types of analysed data (broadcast news - BN - and TV shows). Corresponding values of the achieved DER measure are presented here. The automatic speaker diarization system developed by LIUM was able to identify speech segments belonging to speakers at a very good level. Its segmentation outputs can be used to build a speaker database.

  20. 76 FR 13972 - United States Warehouse Act; Export Food Aid Commodities Licensing Agreement

    Science.gov (United States)

    2011-03-15

    ..., nuts, cottonseed, and dry beans. Warehouse operators that apply voluntarily agree to be licensed... program for port and transload facility operators storing EFAC. This proposal is in response to the...

  1. Monitoring the grid with the Globus Toolkit MDS4

    International Nuclear Information System (INIS)

    Schopf, Jennifer M; Pearlman, Laura; Miller, Neill; Kesselman, Carl; Foster, Ian; D'Arcy, Mike; Chervenak, Ann

    2006-01-01

    The Globus Toolkit Monitoring and Discovery System (MDS4) defines and implements mechanisms for service and resource discovery and monitoring in distributed environments. MDS4 is distinguished from previous similar systems by its extensive use of interfaces and behaviors defined in the WS-Resource Framework and WS-Notification specifications, and by its deep integration into essentially every component of the Globus Toolkit. We describe the MDS4 architecture and the Web service interfaces and behaviors that allow users to discover resources and services, monitor resource and service states, receive updates on current status, and visualize monitoring results. We present two current deployments to provide insights into the functionality that can be achieved via the use of these mechanisms

  2. Integrating existing software toolkits into VO system

    Science.gov (United States)

    Cui, Chenzhou; Zhao, Yong-Heng; Wang, Xiaoqian; Sang, Jian; Luo, Ze

    2004-09-01

    Virtual Observatory (VO) is a collection of interoperating data archives and software tools. Taking advantage of the latest information technologies, it aims to provide a data-intensive online research environment for astronomers all around the world. A large number of high-quality astronomical software packages and libraries are powerful and easy to use, and have been widely used by astronomers for many years. Integrating those toolkits into the VO system is a necessary and important task for the VO developers. VO architecture greatly depends on Grid and Web services, so the general VO integration route is "Java Ready - Grid Ready - VO Ready". In the paper, we discuss the importance of VO integration for existing toolkits and possible solutions. We introduce two efforts in this field from the China-VO project, "gImageMagick" and "Galactic abundance gradients statistical research under grid environment". We also discuss what additional work should be done to convert Grid services to VO services.

  3. Using Fuzzy Linguistic Representations to Provide Explanatory Semantics for Data Warehouses

    NARCIS (Netherlands)

    Feng, L.; Dillon, Tharam S.

    A data warehouse integrates large amounts of extracted and summarized data from multiple sources for direct querying and analysis. While it provides decision makers with easy access to such historical and aggregate data, the real meaning of the data has been ignored. For example, "whether a total

  4. Managing warehouse efficiency and worker discomfort through enhanced storage assignment decisions

    NARCIS (Netherlands)

    Larco, José Antonio; De Koster, René; Roodbergen, Kees Jan; Dul, Jan

    2017-01-01

    Humans are at the heart of crucial processes in warehouses. Besides the common economic goal of minimising cycle times, we therefore add in this paper the human well-being goal of minimising workers' discomfort in the context of order picking. We propose a methodology for identifying the most

  5. Energy Savings Performance Contract Energy Sales Agreement Toolkit

    Energy Technology Data Exchange (ETDEWEB)

    None

    2017-08-14

    FEMP developed the Energy Savings Performance Contracting Energy Sales Agreement (ESPC ESA) Toolkit to provide federal agency contracting officers and other acquisition team members with information that will facilitate the timely execution of ESPC ESA projects.

  6. A clinical data warehouse-based process for refining medication orders alerts.

    Science.gov (United States)

    Boussadi, Abdelali; Caruba, Thibaut; Zapletal, Eric; Sabatier, Brigitte; Durieux, Pierre; Degoulet, Patrice

    2012-01-01

    The objective of this case report is to evaluate the use of a clinical data warehouse coupled with a clinical information system to test and refine alerts for medication order control before they were fully implemented. A clinical decision rule refinement process was used to assess alerts. The criteria assessed were the frequencies of alerts for initial prescriptions of 10 medications whose dosage levels depend on renal function thresholds. In the first iteration of the process, the frequency of the 'exceeds maximum daily dose' alerts was 7.10% (617/8692), while that of the 'under dose' alerts was 3.14% (273/8692). Indicators were presented to the experts. During the different iterations of the process, 45 (16.07%) decision rules were removed, 105 (37.5%) were changed and 136 new rules were introduced. Extensive retrospective analysis of physicians' medication orders stored in a clinical data warehouse facilitates alert optimization toward the goal of maximizing patient safety and minimizing overridden alerts.
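
    The kind of indicator reported above (the frequency of a given alert over replayed orders) can be illustrated with a toy computation. The column names and the renal-function rule below are invented for the example; the study's actual decision rules are far more detailed.

        import pandas as pd

        orders = pd.DataFrame({
            "drug": ["A", "A", "B", "B", "B"],
            "daily_dose_mg": [500, 1200, 20, 5, 60],
            "creatinine_clearance": [80, 35, 90, 60, 25],
        })
        # Toy decision rule: lower the maximum dose when renal function is impaired.
        max_dose = orders["creatinine_clearance"].apply(lambda cc: 1000 if cc >= 60 else 600)
        alerts = orders["daily_dose_mg"] > max_dose
        print(f"'exceeds maximum daily dose' alert rate: {alerts.mean():.1%}")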

  7. XPIWIT - an XML pipeline wrapper for the Insight Toolkit.

    Science.gov (United States)

    Bartschat, Andreas; Hübner, Eduard; Reischl, Markus; Mikut, Ralf; Stegmaier, Johannes

    2016-01-15

    The Insight Toolkit offers plenty of features for multidimensional image analysis. Current implementations, however, often suffer either from a lack of flexibility due to hard-coded C++ pipelines for a certain task or from slow execution times, e.g. caused by inefficient implementations or multiple read/write operations for separate filter execution. We present an XML-based wrapper application for the Insight Toolkit that combines the performance of a pure C++ implementation with an easy-to-use graphical setup of dynamic image analysis pipelines. Created XML pipelines can be interpreted and executed by XPIWIT in console mode either locally or on large clusters. We successfully applied the software tool for the automated analysis of terabyte-scale, time-resolved 3D image data of zebrafish embryos. XPIWIT is implemented in C++ using the Insight Toolkit and the Qt SDK. It has been successfully compiled and tested under Windows and Unix-based systems. Software and documentation are distributed under Apache 2.0 license and are publicly available for download at https://bitbucket.org/jstegmaier/xpiwit/downloads/. johannes.stegmaier@kit.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  8. Development and formative evaluation of the e-Health Implementation Toolkit (e-HIT)

    Directory of Open Access Journals (Sweden)

    Mair Frances

    2010-10-01

    Abstract Background The use of Information and Communication Technology (ICT) or e-Health is seen as essential for a modern, cost-effective health service. However, there are well documented problems with implementation of e-Health initiatives, despite the existence of a great deal of research into how best to implement e-Health (an example of the gap between research and practice). This paper reports on the development and formative evaluation of an e-Health Implementation Toolkit (e-HIT), which aims to summarise and synthesise new and existing research on implementation of e-Health initiatives, and present it to senior managers in a user-friendly format. Results The content of the e-HIT was derived by combining data from a systematic review of reviews of barriers and facilitators to implementation of e-Health initiatives with qualitative data derived from interviews of "implementers", that is people who had been charged with implementing an e-Health initiative. These data were summarised, synthesised and combined with the constructs from the Normalisation Process Model. The software for the toolkit was developed by a commercial company (RocketScience). Formative evaluation was undertaken by obtaining user feedback. There are three components to the toolkit - a section on background and instructions for use aimed at novice users; the toolkit itself; and the report generated by completing the toolkit. It is available to download from http://www.ucl.ac.uk/pcph/research/ehealth/documents/e-HIT.xls Conclusions The e-HIT shows potential as a tool for enhancing future e-Health implementations. Further work is needed to make it fully web-enabled, and to determine its predictive potential for future implementations.

  9. PAGANI Toolkit: Parallel graph-theoretical analysis package for brain network big data.

    Science.gov (United States)

    Du, Haixiao; Xia, Mingrui; Zhao, Kang; Liao, Xuhong; Yang, Huazhong; Wang, Yu; He, Yong

    2018-05-01

    The recent collection of unprecedented quantities of neuroimaging data with high spatial resolution has led to brain network big data. However, a toolkit for fast and scalable computational solutions is still lacking. Here, we developed the PArallel Graph-theoretical ANalysIs (PAGANI) Toolkit based on a hybrid central processing unit-graphics processing unit (CPU-GPU) framework with a graphical user interface to facilitate the mapping and characterization of high-resolution brain networks. Specifically, the toolkit provides flexible parameters for users to customize computations of graph metrics in brain network analyses. As an empirical example, the PAGANI Toolkit was applied to individual voxel-based brain networks with ∼200,000 nodes that were derived from a resting-state fMRI dataset of 624 healthy young adults from the Human Connectome Project. Using a personal computer, this toolbox completed all computations in ∼27 h for one subject, which is markedly less than the 118 h required with a single-thread implementation. The voxel-based functional brain networks exhibited prominent small-world characteristics and densely connected hubs, which were mainly located in the medial and lateral fronto-parietal cortices. Moreover, the female group had significantly higher modularity and nodal betweenness centrality mainly in the medial/lateral fronto-parietal and occipital cortices than the male group. Significant correlations between the intelligence quotient and nodal metrics were also observed in several frontal regions. Collectively, the PAGANI Toolkit shows high computational performance and good scalability for analyzing connectome big data and provides a friendly interface without the complicated configuration of computing environments, thereby facilitating high-resolution connectomics research in health and disease. © 2018 Wiley Periodicals, Inc.
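
    At toy scale, the graph metrics PAGANI parallelizes can be computed with networkx, as in the sketch below; the point of the toolkit is that the same measures become intractable on a single CPU thread for voxel-wise networks with hundreds of thousands of nodes.

        import networkx as nx
        from networkx.algorithms.community import greedy_modularity_communities, modularity

        g = nx.watts_strogatz_graph(n=200, k=8, p=0.1, seed=1)  # small-world toy network
        bc = nx.betweenness_centrality(g)                       # nodal betweenness
        communities = greedy_modularity_communities(g)
        q = modularity(g, communities)
        print(f"max nodal betweenness: {max(bc.values()):.4f}")
        print(f"modularity Q over {len(communities)} communities: {q:.3f}")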

  10. Toolkit for data reduction to tuples for the ATLAS experiment

    International Nuclear Information System (INIS)

    Snyder, Scott; Krasznahorkay, Attila

    2012-01-01

    The final step in a HEP data-processing chain is usually to reduce the data to a ‘tuple’ form which can be efficiently read by interactive analysis tools such as ROOT. Often, this is implemented independently by each group analyzing the data, leading to duplicated effort and needless divergence in the format of the reduced data. ATLAS has implemented a common toolkit for performing this processing step. By using tools from this package, physics analysis groups can produce tuples customized for a particular analysis but which are still consistent in format and vocabulary with those produced by other physics groups. The package is designed so that almost all the code is independent of the specific form used to store the tuple. The code that does depend on this is grouped into a set of small backend packages. While the ROOT backend is the most used, backends also exist for HDF5 and for specialized databases. By now, the majority of ATLAS analyses rely on this package, and it is an important contributor to the ability of ATLAS to rapidly analyze physics data.
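
    The backend separation described above can be sketched generically: analysis code fills named columns and a small backend object decides how they are persisted. The class and method names below are hypothetical, shown with an HDF5 backend via h5py; a ROOT backend would expose the same interface.

        import h5py
        import numpy as np

        class H5Backend:
            """Toy storage backend: one HDF5 group per tuple, one dataset per column."""
            def __init__(self, path):
                self.file = h5py.File(path, "w")
            def write(self, name, columns):
                grp = self.file.create_group(name)
                for col, values in columns.items():
                    grp.create_dataset(col, data=np.asarray(values))
            def close(self):
                self.file.close()

        backend = H5Backend("tuple.h5")
        backend.write("electrons", {"pt": [41.2, 23.7], "eta": [0.5, -1.1]})
        backend.close()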

  11. C-A1-03: Considerations in the Design and Use of an Oracle-based Virtual Data Warehouse

    Science.gov (United States)

    Bredfeldt, Christine; McFarland, Lela

    2011-01-01

    Background/Aims The amount of clinical data available for research is growing exponentially. As it grows, increasing the efficiency of both data storage and data access becomes critical. Relational database management systems (rDBMS) such as Oracle are ideal solutions for managing longitudinal clinical data because they support large-scale data storage and highly efficient data retrieval. In addition, they can greatly simplify the management of large data warehouses, including security management and regular data refreshes. However, the HMORN Virtual Data Warehouse (VDW) was originally designed based on SAS datasets, and this design choice has a number of implications for both the design and use of an Oracle-based VDW. From a design standpoint, VDW tables are designed as flat SAS datasets, which do not take full advantage of Oracle indexing capabilities. From a data retrieval standpoint, standard VDW SAS scripts do not take advantage of SAS pass-through SQL capabilities to enable Oracle to perform the processing required to narrow datasets to the population of interest. Methods Beginning in 2009, the research department at Kaiser Permanente in the Mid-Atlantic States (KPMA) has developed an Oracle-based VDW according to the HMORN v3 specifications. In order to take advantage of the strengths of relational databases, KPMA introduced an interface layer to the VDW data, using views to provide access to standardized VDW variables. In addition, KPMA has developed SAS programs that provide access to SQL pass-through processing for first-pass data extraction into SAS VDW datasets for processing by standard VDW scripts. Results We discuss both the design and performance considerations specific to the KPMA Oracle-based VDW. We benchmarked performance of the Oracle-based VDW using both standard VDW scripts and an initial pre-processing layer to evaluate speed and accuracy of data return. Conclusions Adapting the VDW for deployment in an Oracle environment required minor
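
    The 'interface layer' of views can be illustrated with any relational engine; the sketch below uses sqlite3 only for self-containment (the KPMA warehouse is Oracle), and the site-specific column names are hypothetical. The view renames local columns to the standardized VDW vocabulary so shared scripts can run unchanged against it.

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE site_rx (pat_id TEXT, ndc_code TEXT, fill_dt TEXT)")
        con.execute("INSERT INTO site_rx VALUES ('p1', '0002-1433-80', '2009-04-01')")
        # The view exposes the standard VDW variable names over local columns.
        con.execute("""
        CREATE VIEW vdw_pharmacy AS
        SELECT pat_id AS mrn, ndc_code AS ndc, fill_dt AS rxdate FROM site_rx
        """)
        for row in con.execute("SELECT mrn, ndc, rxdate FROM vdw_pharmacy"):
            print(row)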

  12. NNCTRL - a CANCSD toolkit for MATLAB(R)

    DEFF Research Database (Denmark)

    Nørgård, Peter Magnus; Ravn, Ole; Poulsen, Niels Kjølstad

    1996-01-01

    A set of tools for computer-aided neuro-control system design (CANCSD) has been developed for the MATLAB environment. The tools can be used for construction and simulation of a variety of neural network based control systems. The design methods featured in the toolkit are: direct inverse control...

  13. 31 CFR 593.412 - Release of any round log or timber product originating in Liberia from a bonded warehouse or...

    Science.gov (United States)

    2010-07-01

    ... product originating in Liberia from a bonded warehouse or foreign trade zone. 593.412 Section 593.412... Interpretations § 593.412 Release of any round log or timber product originating in Liberia from a bonded... from a bonded warehouse or foreign trade zone of any round log or timber product originating in Liberia...

  14. Data Warehouse for Professional Skills Required on the IT Labor Market

    Directory of Open Access Journals (Sweden)

    Cristian GEORGESCU

    2012-11-01

    This paper presents research on how well the professional level of informatics graduates matches the specific requirements of the IT labor market. It uses data warehouse techniques and models to allow a comparative analysis between the competencies supplied and the skills demanded on the IT labor market.

  15. Determining The Optimal Order Picking Batch Size In Single Aisle Warehouses

    NARCIS (Netherlands)

    T. Le-Duc (Tho); M.B.M. de Koster (René)

    2002-01-01

    This work aims at investigating the influence of picking batch size on the average time in system of orders in a one-aisle warehouse, under the assumption that order arrivals follow a Poisson process and items are uniformly distributed over the aisle's length. We model this problem as an
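
    A crude simulation conveys the trade-off being modelled: larger batches shorten travel per order but make orders wait for their batch to fill. The sketch below, with assumed arrival rate, aisle length, and picker speed, is purely illustrative; the paper treats the problem analytically.

        import numpy as np

        rng = np.random.default_rng(3)

        def avg_time_in_system(k, n_orders=20000, lam=1.0, speed=4.0, aisle=1.0):
            arrivals = np.cumsum(rng.exponential(1.0 / lam, n_orders))  # Poisson arrivals
            locations = rng.uniform(0.0, aisle, n_orders)               # uniform pick positions
            picker_free, total_time = 0.0, 0.0
            n_served = n_orders - n_orders % k
            for i in range(0, n_served, k):
                batch_arrivals = arrivals[i:i + k]
                start = max(picker_free, batch_arrivals[-1])   # wait until the batch is full
                tour = 2.0 * locations[i:i + k].max() / speed  # to the farthest item and back
                picker_free = start + tour
                total_time += (picker_free - batch_arrivals).sum()
            return total_time / n_served

        for k in (1, 2, 4, 8):
            print(f"batch size {k}: avg time in system {avg_time_in_system(k):.3f}")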

  16. Ethnography in design: Tool-kit or analytic science?

    DEFF Research Database (Denmark)

    Bossen, Claus

    2002-01-01

    The role of ethograpyh in system development is discussed through the selective application of an ethnographic easy-to-use toolkit, Contextual design, by a computer firm in the initial stages of the development of a health care system....

  17. An exact solution procedure for multi-item two-echelon spare parts inventory control problem with batch ordering in the central warehouse

    NARCIS (Netherlands)

    Topan, E.; Bayindir, Z.P.; Tan, T.

    2009-01-01

    We consider a multi-item two-echelon inventory system in which the central warehouse operates under a (Q, R) policy, and the local warehouses implement a base-stock policy. An exact solution procedure is proposed to find the inventory control policy parameters that minimize the system-wide inventory

  18. 77 FR 20353 - United States Warehouse Act; Export Food Aid Commodities Licensing Agreement

    Science.gov (United States)

    2012-04-04

    ... licensing agreement include, but are not limited to, corn soy blend, vegetable oil, and pulses such as peas, beans, and lentils. USWA licensing is a voluntary program. Warehouse operators that apply for USWA...

  19. The nursing human resource planning best practice toolkit: creating a best practice resource for nursing managers.

    Science.gov (United States)

    Vincent, Leslie; Beduz, Mary Agnes

    2010-05-01

    Evidence of acute nursing shortages in urban hospitals has been surfacing since 2000. Further, new graduate nurses account for more than 50% of total nurse turnover in some hospitals and between 35% and 60% of new graduates change workplace during the first year. Critical to organizational success, first line nurse managers must have the knowledge and skills to ensure the accurate projection of nursing resource requirements and to develop proactive recruitment and retention programs that are effective, promote positive nursing socialization, and provide early exposure to the clinical setting. The Nursing Human Resource Planning Best Practice Toolkit project supported the creation of a network of teaching and community hospitals to develop a best practice toolkit in nursing human resource planning targeted at first line nursing managers. The toolkit includes the development of a framework including the conceptual building blocks of planning tools, manager interventions, retention and recruitment and professional practice models. The development of the toolkit involved conducting a review of the literature for best practices in nursing human resource planning, using a mixed method approach to data collection including a survey and extensive interviews of managers and completing a comprehensive scan of human resource practices in the participating organizations. This paper will provide an overview of the process used to develop the toolkit, a description of the toolkit contents and a reflection on the outcomes of the project.

  20. Research standardization tools: pregnancy measures in the PhenX Toolkit.

    Science.gov (United States)

    Malinowski, Ann Kinga; Ananth, Cande V; Catalano, Patrick; Hines, Erin P; Kirby, Russell S; Klebanoff, Mark A; Mulvihill, John J; Simhan, Hyagriv; Hamilton, Carol M; Hendershot, Tabitha P; Phillips, Michael J; Kilpatrick, Lisa A; Maiese, Deborah R; Ramos, Erin M; Wright, Rosalind J; Dolan, Siobhan M

    2017-09-01

    Only through concerted and well-executed research endeavors can we gain the requisite knowledge to advance pregnancy care and have a positive impact on maternal and newborn health. Yet the heterogeneity inherent in individual studies limits our ability to compare and synthesize study results, thus impeding the capacity to draw meaningful conclusions that can be trusted to inform clinical care. The PhenX Toolkit (http://www.phenxtoolkit.org), supported since 2007 by the National Institutes of Health, is a web-based catalog of standardized protocols for measuring phenotypes and exposures relevant for clinical research. In 2016, a working group of pregnancy experts recommended 15 measures for the PhenX Toolkit that are highly relevant to pregnancy research. The working group followed the established PhenX consensus process to recommend protocols that are broadly validated, well established, nonproprietary, and have a relatively low burden for investigators and participants. The working group considered input from the pregnancy experts and the broader research community and included measures addressing the mode of conception, gestational age, fetal growth assessment, prenatal care, the mode of delivery, gestational diabetes, behavioral and mental health, and environmental exposure biomarkers. These pregnancy measures complement the existing measures for other established domains in the PhenX Toolkit, including reproductive health, anthropometrics, demographic characteristics, and alcohol, tobacco, and other substances. The preceding domains influence a woman's health during pregnancy. For each measure, the PhenX Toolkit includes data dictionaries and data collection worksheets that facilitate incorporation of the protocol into new or existing studies. The measures within the pregnancy domain offer a valuable resource to investigators and clinicians and are well poised to facilitate collaborative pregnancy research with the goal to improve patient care. To achieve this

  1. pypet: A Python Toolkit for Data Management of Parameter Explorations

    Directory of Open Access Journals (Sweden)

    Robert Meyer

    2016-08-01

    pypet (Python parameter exploration toolkit) is a new multi-platform Python toolkit for managing numerical simulations. Sampling the space of model parameters is a key aspect of simulations and numerical experiments. pypet is designed to allow easy and arbitrary sampling of trajectories through a parameter space beyond simple grid searches. pypet collects and stores both simulation parameters and results in a single HDF5 file. This collective storage allows fast and convenient loading of data for further analyses. pypet provides various additional features such as multiprocessing and parallelization of simulations, dynamic loading of data, integration of git version control, and supervision of experiments via the electronic lab notebook Sumatra. pypet supports a rich set of data formats, including native Python types, Numpy and Scipy data, Pandas DataFrames, and BRIAN(2) quantities. Besides these formats, users can easily extend the toolkit to allow customized data types. pypet is a flexible tool suited for both short Python scripts and large scale projects. pypet's various features, especially the tight link between parameters and results, promote reproducible research in computational neuroscience and simulation-based disciplines.

  2. Real-time high-level video understanding using data warehouse

    Science.gov (United States)

    Lienard, Bruno; Desurmont, Xavier; Barrie, Bertrand; Delaigle, Jean-Francois

    2006-02-01

    High-level video content analysis such as video-surveillance is often limited by computational aspects of automatic image understanding, i.e. it requires huge computing resources for reasoning processes like categorization and huge amounts of data to represent knowledge of objects, scenarios and other models. This article explains how to design and develop a "near real-time adaptive image datamart", used first as a decision-support system for vision algorithms and then as a mass storage system. Using the RDF specification as the storage format for vision-algorithm metadata, we can optimise the data warehouse concepts for video analysis, and add processes able to adapt the current model and pre-process data to speed up queries. In this way, when new data is sent from a sensor to the data warehouse for long-term storage, using remote procedure calls embedded in object-oriented interfaces to simplify queries, it is processed and the in-memory data model is updated. After some processing, possible interpretations of this data can be returned to the sensor. To demonstrate this new approach, we present typical scenarios applied to this architecture, such as people tracking and event detection in a multi-camera network. Finally we show how this system becomes a high-semantic data container for external data mining.
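
    The RDF storage idea can be sketched with rdflib: algorithm metadata goes in as triples and comes back out via SPARQL. The vocabulary and event fields below are invented for illustration, not the authors' schema.

        from rdflib import Graph, Literal, Namespace, URIRef
        from rdflib.namespace import XSD

        VS = Namespace("http://example.org/videosurveillance#")
        g = Graph()
        event = URIRef("http://example.org/events/evt-001")
        # Store one detection event produced by a vision algorithm as triples.
        g.add((event, VS.detectedBy, Literal("tracker-camera-12")))
        g.add((event, VS.eventType, Literal("person-entered-zone")))
        g.add((event, VS.confidence, Literal(0.92, datatype=XSD.double)))

        results = g.query("""
        PREFIX vs: <http://example.org/videosurveillance#>
        SELECT ?event ?conf WHERE {
          ?event vs:eventType "person-entered-zone" ;
                 vs:confidence ?conf .
        }""")
        for row in results:
            print(row.event, row.conf)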

  3. How Can My State Benefit from an Educational Data Warehouse?

    Science.gov (United States)

    Bergner, Terry; Smith, Nancy J.

    2007-01-01

    Imagine if, at the start of the school year, a teacher could have detailed information about the academic history of every student in her or his classroom. This is possible if the teacher can log on to a Web site that provides access to an educational data warehouse. The teacher would see not only several years of state assessment results, but…

  4. BRDF profile of Tyvek and its implementation in the Geant4 simulation toolkit.

    Science.gov (United States)

    Nozka, Libor; Pech, Miroslav; Hiklova, Helena; Mandat, Dusan; Hrabovsky, Miroslav; Schovanek, Petr; Palatka, Miroslav

    2011-02-28

    Diffuse and specular characteristics of the Tyvek 1025-BL material are reported with respect to their implementation in the Geant4 Monte Carlo simulation toolkit. This toolkit incorporates the UNIFIED model. Coefficients defined by the UNIFIED model were calculated from the bidirectional reflectance distribution function (BRDF) profiles measured with a scatterometer for several angles of incidence. Results were amended with profile measurements made by a profilometer.

  5. Building Emergency Contraception Awareness among Adolescents. A Toolkit for Schools and Community-Based Organizations.

    Science.gov (United States)

    Simkin, Linda; Radosh, Alice; Nelsesteun, Kari; Silverstein, Stacy

    This toolkit presents emergency contraception (EC) as a method to help adolescent women avoid pregnancy and abortion after unprotected sexual intercourse. The sections of this toolkit are designed to help increase your knowledge of EC and stay up to date. They provide suggestions for increasing EC awareness in the workplace, whether it is a school…

  6. Requests for post-registration studies (PRS), patients follow-up in actual practice: Changes in the role of databases.

    Science.gov (United States)

    Berdaï, Driss; Thomas-Delecourt, Florence; Szwarcensztein, Karine; d'Andon, Anne; Collignon, Cécile; Comet, Denis; Déal, Cécile; Dervaux, Benoît; Gaudin, Anne-Françoise; Lamarque-Garnier, Véronique; Lechat, Philippe; Marque, Sébastien; Maugendre, Philippe; Méchin, Hubert; Moore, Nicholas; Nachbaur, Gaëlle; Robain, Mathieu; Roussel, Christophe; Tanti, André; Thiessard, Frantz

    2018-02-01

    Early market access of health products is associated with a larger number of requests for information by the health authorities. Compared with these expectations, the growing expansion of health databases represents an opportunity for responding to questions raised by the authorities. The computerised nature of the health system provides numerous sources of data, and first and foremost medical/administrative databases such as the French National Inter-Scheme Health Insurance Information System (SNIIRAM) database. These databases, although developed for other purposes, have already been used for many years with regard to post-registration studies (PRS). The use thereof will continue to increase with the recent creation of the French National Health Data System (SNDS [2016 health system reform law]). At the same time, other databases are available in France, offering an illustration of "product use under actual practice conditions" by patients and health professionals (cohorts, specific registries, data warehouses, etc.). Based on a preliminary analysis of requests for PRS, approximately two-thirds appeared to have found at least a partial response in existing databases. Using these databases has a number of disadvantages, but also numerous advantages, which are listed. In order to facilitate access and optimise their use, it seemed important to draw up recommendations aiming to facilitate these developments and guarantee the conditions for their technical validity. The recommendations drawn up notably include the need for measures aiming to promote the visibility of research conducted on databases in the field of PRS. Moreover, it seemed worthwhile to promote the interoperability of health data warehouses, to make it possible to match information originating from field studies with information originating from databases, and to develop and share algorithms aiming to identify criteria of interest (proxies). Methodological documents, such as the French National

  7. Design and implementation of the data warehouse for a clothing store chain

    OpenAIRE

    Camba Fuentes, Jesús Antonio

    2016-01-01

    A major clothing chain is requesting proposals to create a database system that acts as a centralized data warehouse. The goal is to be able to make decisions by analysing how the business operates. Throughout the document, the stages carried out to deliver a solution matching the client's needs are explained.

  8. Guest editors' introduction to the 4th issue of Experimental Software and Toolkits (EST-4)

    NARCIS (Netherlands)

    Brand, van den M.G.J.; Kienle, H.M.; Mens, K.

    2014-01-01

    Experimental software and toolkits play a crucial role in computer science. Elsevier’s Science of Computer Programming special issues on Experimental Software and Toolkits (EST) provide a means for academic tool builders to get more visibility and credit for their work, by publishing a paper along

  9. Numerical relativity in spherical coordinates with the Einstein Toolkit

    Science.gov (United States)

    Mewes, Vassilios; Zlochower, Yosef; Campanelli, Manuela; Ruchlin, Ian; Etienne, Zachariah B.; Baumgarte, Thomas W.

    2018-04-01

    Numerical relativity codes that do not make assumptions on spatial symmetries most commonly adopt Cartesian coordinates. While these coordinates have many attractive features, spherical coordinates are much better suited to take advantage of approximate symmetries in a number of astrophysical objects, including single stars, black holes, and accretion disks. While the appearance of coordinate singularities often spoils numerical relativity simulations in spherical coordinates, especially in the absence of any symmetry assumptions, it has recently been demonstrated that these problems can be avoided if the coordinate singularities are handled analytically. This is possible with the help of a reference-metric version of the Baumgarte-Shapiro-Shibata-Nakamura formulation together with a proper rescaling of tensorial quantities. In this paper we report on an implementation of this formalism in the Einstein Toolkit. We adapt the Einstein Toolkit infrastructure, originally designed for Cartesian coordinates, to handle spherical coordinates, by providing appropriate boundary conditions at both inner and outer boundaries. We perform numerical simulations for a disturbed Kerr black hole, extract the gravitational wave signal, and demonstrate that the noise in these signals is orders of magnitude smaller when computed on spherical grids rather than Cartesian grids. With the public release of our new Einstein Toolkit thorns, our methods for numerical relativity in spherical coordinates will become available to the entire numerical relativity community.

  10. 19 CFR 19.4 - CBP and proprietor responsibility and supervision over warehouses.

    Science.gov (United States)

    2010-04-01

    ... or their authorized agent. (4) Records maintenance—(i) Maintenance. The proprietor shall: (A... proprietor shall maintain the warehouse facility in a safe and sanitary condition and establish procedures... seals. (7) Storage conditions. Merchandise in the bonded area shall be stored in a safe and sanitary...

  11. Estimation of warehouse throughput in freight transport demand model for the Netherlands

    NARCIS (Netherlands)

    Davydenko, I.; Tavasszy, L.

    2013-01-01

    This paper presents an extension of the classical four-step freight modeling framework with a logistics chain model. Modeling logistics at the regional level establishes a link between trade flow and transport flow, allows the warehouse and distribution center locations and throughput volumes to be

  12. Ten years of maintaining and expanding a microbial genome and metagenome analysis system.

    Science.gov (United States)

    Markowitz, Victor M; Chen, I-Min A; Chu, Ken; Pati, Amrita; Ivanova, Natalia N; Kyrpides, Nikos C

    2015-11-01

    Launched in March 2005, the Integrated Microbial Genomes (IMG) system is a comprehensive data management system that supports multidimensional comparative analysis of genomic data. At the core of the IMG system is a data warehouse that contains genome and metagenome datasets sequenced at the Joint Genome Institute or provided by scientific users, as well as public genome datasets available at the National Center for Biotechnology Information Genbank sequence data archive. Genomes and metagenome datasets are processed using IMG's microbial genome and metagenome sequence data processing pipelines and are integrated into the data warehouse using IMG's data integration toolkits. Microbial genome and metagenome application specific data marts and user interfaces provide access to different subsets of IMG's data and analysis toolkits. This review article revisits IMG's original aims, highlights key milestones reached by the system during the past 10 years, and discusses the main challenges faced by a rapidly expanding system, in particular the complexity of maintaining such a system in an academic setting with limited budgets and computing and data management infrastructure. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. A toolkit for promoting healthy ageing

    OpenAIRE

    Knevel, Jeroen; Gruppen, Aly

    2016-01-01

    This toolkit therefore focusses on self-management abilities. That means finding and maintaining effective, positive coping methods in relation to our health. We included many common and frequently discussed topics such as drinking, eating, physical exercise, believing in the future, resilience, preventing loneliness and social participation. Besides some concise background information, we offer you a great diversity of exercises per theme which can help you discuss, assess, change or strengt...

  14. Using stakeholder perspectives to develop an ePrescribing toolkit for NHS Hospitals: a questionnaire study.

    Science.gov (United States)

    Lee, Lisa; Cresswell, Kathrin; Slee, Ann; Slight, Sarah P; Coleman, Jamie; Sheikh, Aziz

    2014-10-01

    To evaluate how an online toolkit may support ePrescribing deployments in National Health Service hospitals, by assessing the type of knowledge-based resources currently sought by key stakeholders. Questionnaire-based survey of attendees at a national ePrescribing symposium. 2013 National ePrescribing Symposium in London, UK. Eighty-four delegates were eligible for inclusion in the survey, of whom 70 completed and returned the questionnaire. Estimate of the usefulness and type of content to be included in an ePrescribing toolkit. Interest in a toolkit designed to support the implementation and use of ePrescribing systems was high (n = 64; 91.4%). As could be expected given the current dearth of such a resource, few respondents (n = 2; 2.9%) had access or used an ePrescribing toolkit at the time of the survey. Anticipated users for the toolkit included implementation (n = 62; 88.6%) and information technology (n = 61; 87.1%) teams, pharmacists (n = 61; 87.1%), doctors (n = 58; 82.9%) and nurses (n = 56; 80.0%). Summary guidance for every stage of the implementation (n = 48; 68.6%), planning and monitoring tools (n = 47; 67.1%) and case studies of hospitals' experiences (n = 45; 64.3%) were considered the most useful types of content. There is a clear need for reliable and up-to-date knowledge to support ePrescribing system deployments and longer term use. The findings highlight how a toolkit may become a useful instrument for the management of knowledge in the field, not least by allowing the exchange of ideas and shared learning.

  15. Development of a Human Physiologically Based Pharmacokinetic (PBPK) Toolkit for Environmental Pollutants

    Directory of Open Access Journals (Sweden)

    Patricia Ruiz

    2011-10-01

    Physiologically Based Pharmacokinetic (PBPK) models can be used to determine the internal dose and strengthen exposure assessment. Many PBPK models are available, but they are not easily accessible for field use. The Agency for Toxic Substances and Disease Registry (ATSDR) has conducted translational research to develop a human PBPK model toolkit by recoding published PBPK models. This toolkit, when fully developed, will provide a platform that consists of a series of priority PBPK models of environmental pollutants. Presented here is work on recoded PBPK models for volatile organic compounds (VOCs) and metals. Good agreement was generally obtained between the original and the recoded models. This toolkit will be available for ATSDR scientists and public health assessors to perform simulations of exposures from contaminated environmental media at sites of concern and to help interpret biomonitoring data. It can be used as a screening tool that can provide useful information for the protection of the public.
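
    As a drastically simplified stand-in for the recoded models, the sketch below integrates a one-compartment kinetic model with scipy; real PBPK models in the toolkit track many coupled, physiologically parameterized compartments, and all rate constants here are invented.

        import numpy as np
        from scipy.integrate import odeint

        def one_compartment(c, t, k_abs, k_elim, dose_rate):
            # dC/dt = first-order absorption input - first-order elimination
            return dose_rate * k_abs * np.exp(-k_abs * t) - k_elim * c

        t = np.linspace(0, 24, 200)                      # hours
        conc = odeint(one_compartment, y0=0.0, t=t,
                      args=(0.8, 0.15, 1.0)).ravel()     # hypothetical rates
        print(f"peak internal dose ~ {conc.max():.3f} at t = {t[conc.argmax()]:.1f} h")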

  16. 77 FR 73022 - U.S. Environmental Solutions Toolkit

    Science.gov (United States)

    2012-12-07

    ... relevant to reducing air pollution from oil and natural gas production and processing. The Department of... environmental officials and foreign end-users of environmental technologies that will outline U.S. approaches to.... technologies. The Toolkit will support the President's National Export Initiative by fostering export...

  17. Data warehouse de soporte a datos de GSA

    OpenAIRE

    Arribas López, Iván

    2008-01-01

    This document describes the processes for extracting, transforming and loading GSA logs into a data warehouse. GSA (Google Search Appliance) is a Google application that uses Google's query engine to search the indexed content of a given website. As a consequence, the application keeps a log of users' queries to that website in a modified standard CLF format. Analyzing this log would let the site's owner know the infor...

  18. A Framework for a Clinical Reasoning Knowledge Warehouse

    DEFF Research Database (Denmark)

    Vilstrup Pedersen, Klaus; Boye, Niels

    2004-01-01

    In many areas of the medical domain, the decision process, i.e. reasoning, involving health care professionals is distributed, cooperative and complex. This paper presents a framework for a Clinical Reasoning Knowledge Warehouse that combines theories and models from Artificial Intelligence... is stored and made accessible when relevant to the reasoning context and the specific patient case. Furthermore, the information structure supports the creation of new generalized knowledge using data mining tools. The patient case is divided into an observation level and an opinion level. At the opinion...

  19. Developing Mixed Reality Educational Applications: The Virtual Touch Toolkit.

    Science.gov (United States)

    Mateu, Juan; Lasala, María José; Alamán, Xavier

    2015-08-31

    In this paper, we present Virtual Touch, a toolkit that allows the development of educational activities through a mixed reality environment such that, using various tangible elements, the interconnection of a virtual world with the real world is enabled. The main goal of Virtual Touch is to facilitate the installation, configuration and programming of different types of technologies, abstracting the creator of educational applications from the technical details involving the use of tangible interfaces and virtual worlds. Therefore, it is specially designed to enable teachers to themselves create educational activities for their students in a simple way, taking into account that teachers generally lack advanced knowledge in computer programming and electronics. The toolkit has been used to develop various educational applications that have been tested in two secondary education high schools in Spain.

  20. Developing Mixed Reality Educational Applications: The Virtual Touch Toolkit

    Science.gov (United States)

    Mateu, Juan; Lasala, María José; Alamán, Xavier

    2015-01-01

    In this paper, we present Virtual Touch, a toolkit that allows the development of educational activities through a mixed reality environment such that, using various tangible elements, the interconnection of a virtual world with the real world is enabled. The main goal of Virtual Touch is to facilitate the installation, configuration and programming of different types of technologies, abstracting the creator of educational applications from the technical details involving the use of tangible interfaces and virtual worlds. Therefore, it is specially designed to enable teachers to themselves create educational activities for their students in a simple way, taking into account that teachers generally lack advanced knowledge in computer programming and electronics. The toolkit has been used to develop various educational applications that have been tested in two secondary education high schools in Spain. PMID:26334275

  1. 76 FR 13973 - United States Warehouse Act; Processed Agricultural Products Licensing Agreement

    Science.gov (United States)

    2011-03-15

    ... security of goods in the care and custody of the licensee. The personnel conducting the examinations will..., Warehouse Operations Program Manager, FSA, United States Department of Agriculture, Mail Stop 0553, 1400... continuing compliance with the standards of approval and operation. FSA will conduct examinations of licensed...

  2. The Automatic Parallelisation of Scientific Application Codes Using a Computer Aided Parallelisation Toolkit

    Science.gov (United States)

    Ierotheou, C.; Johnson, S.; Leggett, P.; Cross, M.; Evans, E.; Jin, Hao-Qiang; Frumkin, M.; Yan, J.; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The shared-memory programming model is a very effective way to achieve parallelism on shared memory parallel computers. Historically, the lack of a programming standard for using directives and the rather limited performance caused by poor scalability affected the take-up of this programming model. Significant progress has since been made in hardware and software technologies; as a result, the performance of parallel programs with compiler directives has also improved. The introduction of an industrial standard for shared-memory programming with directives, OpenMP, has also addressed the issue of portability. In this study, we have extended the computer aided parallelization toolkit (developed at the University of Greenwich) to automatically generate OpenMP-based parallel programs with nominal user assistance. We outline the way in which loop types are categorized and how efficient OpenMP directives can be defined and placed using the in-depth interprocedural analysis that is carried out by the toolkit. We also discuss the application of the toolkit to the NAS Parallel Benchmarks and a number of real-world application codes. This work not only demonstrates the great potential of using the toolkit to quickly parallelize serial programs but also the good performance achievable on up to 300 processors for hybrid message passing and directive-based parallelizations.

  3. Data warehouse model for monitoring key performance indicators (KPIs) using goal oriented approach

    Science.gov (United States)

    Abdullah, Mohammed Thajeel; Ta'a, Azman; Bakar, Muhamad Shahbani Abu

    2016-08-01

    The growth and development of universities, just as of other organizations, depend on their abilities to strategically plan and implement development blueprints in line with their vision and mission statements. The actualization of these statements, which are often designed into goals and sub-goals and linked to their respective actors, is better measured by defining key performance indicators (KPIs) of the university. This paper proposes ReGADaK, an extension of the GRAnD approach, which highlights the facts, dimensions, attributes, measures and KPIs of the organization. The measures from the goal analysis of this unit serve as the basis for developing the related university's KPIs. The proposed data warehouse schema is evaluated through expert review, prototyping and usability evaluation. The findings from the evaluation processes suggest that the proposed data warehouse schema is suitable for monitoring the university's KPIs.
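
    The paper's full ReGADaK schema is not reproduced in the abstract, so the following is only a generic sketch of how a fact table with measures and dimensions can back a KPI, using sqlite3; all table, column and KPI names are hypothetical.

        # Hypothetical star schema for one university KPI (graduation rate).
        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
        CREATE TABLE dim_faculty (faculty_id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE dim_year    (year_id INTEGER PRIMARY KEY, year INTEGER);
        CREATE TABLE fact_enrolment (
            faculty_id INTEGER REFERENCES dim_faculty,
            year_id    INTEGER REFERENCES dim_year,
            enrolled   INTEGER,   -- measure
            graduated  INTEGER    -- measure
        );
        """)
        con.execute("INSERT INTO dim_faculty VALUES (1, 'Engineering')")
        con.execute("INSERT INTO dim_year VALUES (1, 2016)")
        con.execute("INSERT INTO fact_enrolment VALUES (1, 1, 400, 310)")

        # KPI = graduated / enrolled, rolled up per faculty and year
        for row in con.execute("""
            SELECT f.name, y.year, 1.0 * SUM(e.graduated) / SUM(e.enrolled)
            FROM fact_enrolment e
            JOIN dim_faculty f USING (faculty_id)
            JOIN dim_year y USING (year_id)
            GROUP BY f.name, y.year"""):
            print(row)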

  4. Developing Access Control Model of Web OLAP over Trusted and Collaborative Data Warehouses

    Science.gov (United States)

    Fugkeaw, Somchart; Mitrpanont, Jarernsri L.; Manpanpanich, Piyawit; Juntapremjitt, Sekpon

    This paper proposes the design and development of a Role-based Access Control (RBAC) model for the Single Sign-On (SSO) Web-OLAP query spanning over multiple data warehouses (DWs). The model is based on PKI Authentication and Privilege Management Infrastructure (PMI); it presents a binding model of RBAC authorization based on dimension privilege specified in attribute certificate (AC) and user identification. Particularly, the way of attribute mapping between DW user authentication and privilege of dimensional access is illustrated. In our approach, we apply the multi-agent system to automate flexible and effective management of user authentication, role delegation as well as system accountability. Finally, the paper culminates in the prototype system A-COLD (Access Control of web-OLAP over multiple DWs) that incorporates the OLAP features and authentication and authorization enforcement in the multi-user and multi-data warehouse environment.
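
    A minimal sketch of the core idea, dimension-level privileges bound to roles; the PKI, attribute-certificate and multi-agent layers of the actual system are omitted, and all names here are hypothetical.

        # Dimension-level RBAC: a role grants read access to specific
        # dimensions of specific data warehouses.
        ROLE_PRIVILEGES = {
            "analyst": {("sales_dw", "product"), ("sales_dw", "time")},
            "manager": {("sales_dw", "product"), ("sales_dw", "time"),
                        ("sales_dw", "customer"), ("hr_dw", "employee")},
        }

        def can_query(role, warehouse, dimensions):
            """True if the role holds a privilege for every requested dimension."""
            granted = ROLE_PRIVILEGES.get(role, set())
            return all((warehouse, d) in granted for d in dimensions)

        print(can_query("analyst", "sales_dw", ["product", "time"]))   # True
        print(can_query("analyst", "sales_dw", ["customer"]))          # False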

  5. A Genetic Toolkit for Dissecting Dopamine Circuit Function in Drosophila

    Directory of Open Access Journals (Sweden)

    Tingting Xie

    2018-04-01

    Full Text Available Summary: The neuromodulator dopamine (DA) plays a key role in motor control, motivated behaviors, and higher-order cognitive processes. Dissecting how these DA neural networks tune the activity of local neural circuits to regulate behavior requires tools for manipulating small groups of DA neurons. To address this need, we assembled a genetic toolkit that allows for an exquisite level of control over the DA neural network in Drosophila. To further refine targeting of specific DA neurons, we also created reagents that allow for the conversion of any existing GAL4 line into Split GAL4 or GAL80 lines. We demonstrated how this toolkit can be used with recently developed computational methods to rapidly generate additional reagents for manipulating small subsets or individual DA neurons. Finally, we used the toolkit to reveal a dynamic interaction between a small subset of DA neurons and rearing conditions in a social space behavioral assay. The rapid analysis of how dopaminergic circuits regulate behavior is limited by the genetic tools available to target and manipulate small numbers of these neurons. Xie et al. present genetic tools in Drosophila that allow rational targeting of sparse dopaminergic neuronal subsets and selective knockdown of dopamine signaling. Keywords: dopamine, genetics, behavior, neural circuits, neuromodulation, Drosophila

  6. X-CSIT: a toolkit for simulating 2D pixel detectors

    Science.gov (United States)

    Joy, A.; Wing, M.; Hauf, S.; Kuster, M.; Rüter, T.

    2015-04-01

    A new, modular toolkit for creating simulations of 2D X-ray pixel detectors, X-CSIT (X-ray Camera SImulation Toolkit), is being developed. The toolkit uses three sequential simulations of detector processes which model photon interactions, electron charge cloud spreading with a high charge density plasma model and common electronic components used in detector readout. In addition, because of the wide variety in pixel detector design, X-CSIT has been designed as a modular platform so that existing functions can be modified or additional functionality added if the specific design of a detector demands it. X-CSIT will be used to create simulations of the detectors at the European XFEL, including three bespoke 2D detectors: the Adaptive Gain Integrating Pixel Detector (AGIPD), Large Pixel Detector (LPD) and DePFET Sensor with Signal Compression (DSSC). These simulations will be used by the detector group at the European XFEL for detector characterisation and calibration. For this purpose, X-CSIT has been integrated into the European XFEL's software framework, Karabo. This will further make it available to users to aid with the planning of experiments and analysis of data. In addition, X-CSIT will be released as a standalone, open source version for other users, collaborations and groups intending to create simulations of their own detectors.
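
    The three-stage, modular structure described above can be sketched as a pipeline of interchangeable stage functions; the stage bodies below are crude placeholders, not X-CSIT's physics models.

        # Sketch: photon interactions -> charge-cloud spreading -> readout.
        import numpy as np

        def photon_interactions(photons):
            return photons * 3.6          # placeholder: photons to deposited charge

        def charge_spreading(charge):
            return charge * 0.95          # placeholder: losses while drifting

        def readout_electronics(charge, gain=0.1, noise=1.0):
            rng = np.random.default_rng(0)
            return gain * charge + rng.normal(0.0, noise, charge.shape)

        def simulate(photons, stages):
            signal = photons
            for stage in stages:          # modularity: any stage can be swapped
                signal = stage(signal)
            return signal

        frame = np.full((4, 4), 100.0)    # toy 4x4 pixel detector
        print(simulate(frame, [photon_interactions, charge_spreading,
                               readout_electronics]))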

  7. Integração da lógica nebulosa à recuperação de informação em data warehouse

    OpenAIRE

    Luz, Robinson

    2005-01-01

    This research studies the integration of fuzzy logic with data warehouse technologies. Specifically, it aims to propose, based on the theories and practices of Information Science, an alternative conceptual model for organizing and retrieving information. To develop the model, several types of databases and their history are described, from their creation up to the data stores known as data warehouses. Regarding data warehouses, the text presents their su...

  8. Local Safety Toolkit: Enabling safe communities of opportunity

    CSIR Research Space (South Africa)

    Holtmann, B

    2010-08-31

    Full Text Available remain inadequate to achieve safety. The Local Safety Toolkit supports a strategy for a Safe South Africa through the implementation of a model for a Safe Community of Opportunity. The model is the outcome of work undertaken over the course of the past...

  9. 19 CFR 19.35 - Establishment of duty-free stores (Class 9 warehouses).

    Science.gov (United States)

    2010-04-01

    ... SECURITY; DEPARTMENT OF THE TREASURY CUSTOMS WAREHOUSES, CONTAINER STATIONS AND CONTROL OF MERCHANDISE... and/or internal revenue taxes (where applicable) have not been paid. Except insofar as the provisions... paragraph (b) of this section means an area in close proximity to an actual exit for departing from the...

  10. The connectome viewer toolkit: an open source framework to manage, analyze, and visualize connectomes.

    Science.gov (United States)

    Gerhard, Stephan; Daducci, Alessandro; Lemkaddem, Alia; Meuli, Reto; Thiran, Jean-Philippe; Hagmann, Patric

    2011-01-01

    Advanced neuroinformatics tools are required for methods of connectome mapping, analysis, and visualization. The inherent multi-modality of connectome datasets poses new challenges for data organization, integration, and sharing. We have designed and implemented the Connectome Viewer Toolkit - a set of free and extensible open source neuroimaging tools written in Python. The key components of the toolkit are as follows: (1) The Connectome File Format is an XML-based container format to standardize multi-modal data integration and structured metadata annotation. (2) The Connectome File Format Library enables management and sharing of connectome files. (3) The Connectome Viewer is an integrated research and development environment for visualization and analysis of multi-modal connectome data. The Connectome Viewer's plugin architecture supports extensions with network analysis packages and an interactive scripting shell, to enable easy development and community contributions. Integration with tools from the scientific Python community allows the leveraging of numerous existing libraries for powerful connectome data mining, exploration, and comparison. We demonstrate the applicability of the Connectome Viewer Toolkit using Diffusion MRI datasets processed by the Connectome Mapper. The Connectome Viewer Toolkit is available from http://www.cmtk.org/

  11. The Connectome Viewer Toolkit: an open source framework to manage, analyze and visualize connectomes

    Directory of Open Access Journals (Sweden)

    Stephan eGerhard

    2011-06-01

    Full Text Available Abstract Advanced neuroinformatics tools are required for methods of connectome mapping, analysis and visualization. The inherent multi-modality of connectome datasets poses new challenges for data organization, integration and sharing. We have designed and implemented the Connectome Viewer Toolkit --- a set of free and extensible open-source neuroimaging tools written in Python. The key components of the toolkit are as follows: 1. The Connectome File Format is an XML-based container format to standardize multi-modal data integration and structured metadata annotation. 2. The Connectome File Format Library enables management and sharing of connectome files. 3. The Connectome Viewer is an integrated research and development environment for visualization and analysis of multi-modal connectome data. The Connectome Viewer's plugin architecture supports extensions with network analysis packages and an interactive scripting shell, to enable easy development and community contributions. Integration with tools from the scientific Python community allows the leveraging of numerous existing libraries for powerful connectome data mining, exploration and comparison. We demonstrate the applicability of the Connectome Viewer Toolkit using Diffusion MRI datasets processed by the Connectome Mapper. The Connectome Viewer Toolkit is available from http://www.cmtk.org/.
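
    As a small illustration of the style of graph analysis such a toolkit enables, the sketch below runs two standard network measures on a toy connectome using networkx; it does not use the Connectome File Format API, and the region names and weights are invented.

        # Toy connectome: nodes are brain regions, weights are fiber counts.
        import networkx as nx

        G = nx.Graph()
        G.add_weighted_edges_from([
            ("precentral_L", "postcentral_L", 120),
            ("precentral_L", "superiorfrontal_L", 80),
            ("postcentral_L", "superiorparietal_L", 95),
            ("superiorfrontal_L", "superiorparietal_L", 40),
        ])
        print(nx.degree_centrality(G))
        print(nx.shortest_path(G, "precentral_L", "superiorparietal_L"))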

  12. MitoMiner: a data warehouse for mitochondrial proteomics data.

    Science.gov (United States)

    Smith, Anthony C; Blackshaw, James A; Robinson, Alan J

    2012-01-01

    MitoMiner (http://mitominer.mrc-mbu.cam.ac.uk/) is a data warehouse for the storage and analysis of mitochondrial proteomics data gathered from publications of mass spectrometry and green fluorescent protein tagging studies. In MitoMiner, these data are integrated with data from UniProt, Gene Ontology, Online Mendelian Inheritance in Man, HomoloGene, Kyoto Encyclopaedia of Genes and Genomes and PubMed. The latest release of MitoMiner stores proteomics data sets from 46 studies covering 11 different species from eumetazoa, viridiplantae, fungi and protista. MitoMiner is implemented by using the open source InterMine data warehouse system, which provides a user interface allowing users to upload data for analysis, personal accounts to store queries and results and enables queries of any data in the data model. MitoMiner also provides lists of proteins for use in analyses, including the new MitoMiner mitochondrial proteome reference sets that specify proteins with substantial experimental evidence for mitochondrial localization. As further mitochondrial proteomics data sets from normal and diseased tissue are published, MitoMiner can be used to characterize the variability of the mitochondrial proteome between tissues and investigate how changes in the proteome may contribute to mitochondrial dysfunction and mitochondrial-associated diseases such as cancer, neurodegenerative diseases, obesity, diabetes, heart failure and the ageing process.
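
    Because MitoMiner is built on InterMine, it can also be queried programmatically with the InterMine Python client; in the sketch below, the service URL and data-model paths are assumptions to be checked against the live instance.

        # Query sketch against an InterMine-based warehouse such as MitoMiner.
        from intermine.webservice import Service

        # URL and model paths are assumptions; verify against the instance.
        service = Service("http://mitominer.mrc-mbu.cam.ac.uk/mitominer/service")
        query = service.new_query("Protein")
        query.add_view("primaryAccession", "name")
        query.add_constraint("organism.name", "=", "Homo sapiens")
        for row in query.rows(size=10):
            print(row["primaryAccession"], row["name"])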

  13. Gossip Management at Universities Using Big Data Warehouse Model Integrated with a Decision Support System

    Directory of Open Access Journals (Sweden)

    Pelin Vardarlier

    2016-01-01

    Full Text Available Big Data has recently been used for many purposes, such as medicine, marketing and sports, and it has helped improve management decisions. However, in almost every case a unique data warehouse must be built to benefit from the merits of data mining and Big Data; hence, each time we start from scratch to form and build a Big Data warehouse. In this study, we propose a Big Data warehouse and a model for universities to be used for information management, or more specifically gossip management. The overall model is a decision support system that may help university administrations when they are making decisions and also provide them with the information or gossip circulating among students and staff. In the model, unsupervised machine learning algorithms have been employed. A prototype of the proposed system is also presented in the study. User-generated data were collected from students in order to learn the gossip and the students' problems related to school, classes, staff and instructors. The findings and results of the pilot study suggest that social media messages among students may give important clues about happenings at school, and this information may be used for management purposes. The model may be developed and implemented not only by universities but also by other organisations.
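
    The abstract does not name the specific algorithms, so the following is only a plausible sketch of the unsupervised step: clustering student messages with TF-IDF and k-means to surface recurring topics; the messages are invented.

        # Cluster short messages into topic groups with TF-IDF + k-means.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.cluster import KMeans

        messages = [
            "exam dates moved again for the statistics course",
            "cafeteria prices went up this week",
            "statistics exam is rumoured to be postponed",
            "the new cafeteria menu is terrible",
        ]
        X = TfidfVectorizer(stop_words="english").fit_transform(messages)
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
        for msg, lab in zip(messages, labels):
            print(lab, msg)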

  14. Archetype-based data warehouse environment to enable the reuse of electronic health record data.

    Science.gov (United States)

    Marco-Ruiz, Luis; Moner, David; Maldonado, José A; Kolstrup, Nils; Bellika, Johan G

    2015-09-01

    The reuse of data captured during health care delivery is essential to satisfy the demands of clinical research and clinical decision support systems. Major barriers to reuse are the legacy formats of data and their high granularity when stored in an electronic health record (EHR) system. Thus, we need mechanisms to standardize, aggregate and query the data concealed in the EHRs, to allow their reuse whenever they are needed. Our objective was to create a data warehouse infrastructure using archetype-based technologies, standards and query languages to enable the interoperability needed for data reuse. The work presented makes use of best-of-breed archetype-based data transformation and storage technologies to create a workflow for the modeling, extraction, transformation and loading of EHR proprietary data into standardized data repositories. We converted legacy data and performed patient-centered aggregations via archetype-based transformations. Later, specific-purpose aggregations were performed at query level for particular use cases. Laboratory test results of a population of 230,000 patients belonging to Troms and Finnmark counties in Norway, requested between January 2013 and November 2014, have been standardized. Normalization of the test records was performed by defining transformation and aggregation functions between the laboratory records and an archetype. These mappings were used to automatically generate openEHR-compliant data. These data were loaded into an archetype-based data warehouse. Once loaded, we defined indicators linked to the data in the warehouse to monitor test activity of Salmonella and Pertussis using the Archetype Query Language. Archetype-based standards and technologies can be used to create a data warehouse environment that enables data from EHR systems to be reused in clinical research and decision support systems. With this approach, existing EHR data becomes available in a standardized and interoperable format, thus opening a world
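
    A hedged sketch of what an indicator query over such a warehouse might look like in the Archetype Query Language; the archetype identifier and node codes are illustrative assumptions, not the study's actual definitions.

        # Illustrative AQL held as a string; a client would POST it to the
        # warehouse's query endpoint and page through the result set.
        AQL = """
        SELECT o/data[at0001]/events[at0002]/data[at0003]/items[at0005]/value
        FROM EHR e
          CONTAINS COMPOSITION c
            CONTAINS OBSERVATION o [openEHR-EHR-OBSERVATION.laboratory_test.v0]
        WHERE o/data[at0001]/events[at0002]/data[at0003]/items[at0004]/value/value
              MATCHES {'Salmonella'}
        """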

  15. National eHealth strategy toolkit

    CERN Document Server

    2012-01-01

    Worldwide, the application of information and communication technologies to support national health-care services is rapidly expanding and increasingly important. This is especially so at a time when all health systems face stringent economic challenges and greater demands to provide more and better care, especially to those most in need. The National eHealth Strategy Toolkit is an expert, practical guide that provides governments, their ministries and stakeholders with a solid foundation and method for the development and implementation of a national eHealth vision, action plan and monitoring fram...

  16. Effects of a Short Video-Based Resident-as-Teacher Training Toolkit on Resident Teaching.

    Science.gov (United States)

    Ricciotti, Hope A; Freret, Taylor S; Aluko, Ashley; McKeon, Bri Anne; Haviland, Miriam J; Newman, Lori R

    2017-10-01

    To pilot a short video-based resident-as-teacher training toolkit and assess its effect on resident teaching skills in clinical settings. A video-based resident-as-teacher training toolkit was previously developed by educational experts at Beth Israel Deaconess Medical Center, Harvard Medical School. Residents were recruited from two academic hospitals, watched two videos from the toolkit ("Clinical Teaching Skills" and "Effective Clinical Supervision"), and completed an accompanying self-study guide. A novel assessment instrument for evaluating the effect of the toolkit on teaching was created through a modified Delphi process. Before and after the intervention, residents were observed leading a clinical teaching encounter and scored using the 15-item assessment instrument. The primary outcome of interest was the change in number of skills exhibited, which was assessed using the Wilcoxon signed-rank test. Twenty-eight residents from two academic hospitals were enrolled, and 20 (71%) completed all phases of the study. More than one third of residents who volunteered to participate reported no prior formal teacher training. After completing two training modules, residents demonstrated a significant increase in the median number of teaching skills exhibited in a clinical teaching encounter, from 7.5 (interquartile range 6.5-9.5) to 10.0 (interquartile range 9.0-11.5; P<.001). Of the 15 teaching skills assessed, there were significant improvements in asking for the learner's perspective (P=.01), providing feedback (P=.005), and encouraging questions (P=.046). Using a resident-as-teacher video-based toolkit was associated with improvements in teaching skills in residents from multiple specialties.

  17. Educator Toolkits on Second Victim Syndrome, Mindfulness and Meditation, and Positive Psychology: The 2017 Resident Wellness Consensus Summit

    Directory of Open Access Journals (Sweden)

    Jon Smart

    2018-02-01

    Full Text Available Introduction: Burnout, depression, and suicidality among residents of all specialties have become a critical focus of attention for the medical education community. Methods: As part of the 2017 Resident Wellness Consensus Summit in Las Vegas, Nevada, resident participants from 31 programs collaborated in the Educator Toolkit workgroup. Over a seven-month period leading up to the summit, this workgroup convened virtually in the Wellness Think Tank, an online resident community, to perform a literature review and draft curricular plans on three core wellness topics. These topics were second victim syndrome, mindfulness and meditation, and positive psychology. At the live summit event, the workgroup expanded to include residents outside the Wellness Think Tank to obtain a broader consensus of the evidence-based toolkits for these three topics. Results: Three educator toolkits were developed. The second victim syndrome toolkit has four modules, each with a pre-reading material and a leader (educator) guide. In the mindfulness and meditation toolkit, there are three modules with a leader guide in addition to a longitudinal, guided meditation plan. The positive psychology toolkit has two modules, each with a leader guide and a PowerPoint slide set. These toolkits provide educators the necessary resources, reading materials, and lesson plans to implement didactic sessions in their residency curriculum. Conclusion: Residents from across the world collaborated and convened to reach a consensus on high-yield (and potentially high-impact) lesson plans that programs can use to promote and improve resident wellness. These lesson plans may stand alone or be incorporated into a larger wellness curriculum.

  18. Educator Toolkits on Second Victim Syndrome, Mindfulness and Meditation, and Positive Psychology: The 2017 Resident Wellness Consensus Summit.

    Science.gov (United States)

    Chung, Arlene S; Smart, Jon; Zdradzinski, Michael; Roth, Sarah; Gende, Alecia; Conroy, Kylie; Battaglioli, Nicole

    2018-03-01

    Burnout, depression, and suicidality among residents of all specialties have become a critical focus of attention for the medical education community. As part of the 2017 Resident Wellness Consensus Summit in Las Vegas, Nevada, resident participants from 31 programs collaborated in the Educator Toolkit workgroup. Over a seven-month period leading up to the summit, this workgroup convened virtually in the Wellness Think Tank, an online resident community, to perform a literature review and draft curricular plans on three core wellness topics. These topics were second victim syndrome, mindfulness and meditation, and positive psychology. At the live summit event, the workgroup expanded to include residents outside the Wellness Think Tank to obtain a broader consensus of the evidence-based toolkits for these three topics. Three educator toolkits were developed. The second victim syndrome toolkit has four modules, each with a pre-reading material and a leader (educator) guide. In the mindfulness and meditation toolkit, there are three modules with a leader guide in addition to a longitudinal, guided meditation plan. The positive psychology toolkit has two modules, each with a leader guide and a PowerPoint slide set. These toolkits provide educators the necessary resources, reading materials, and lesson plans to implement didactic sessions in their residency curriculum. Residents from across the world collaborated and convened to reach a consensus on high-yield (and potentially high-impact) lesson plans that programs can use to promote and improve resident wellness. These lesson plans may stand alone or be incorporated into a larger wellness curriculum.

  19. Data Warehouse Design from HL7 Clinical Document Architecture Schema.

    Science.gov (United States)

    Pecoraro, Fabrizio; Luzi, Daniela; Ricci, Fabrizio L

    2015-01-01

    This paper proposes a semi-automatic approach to extract clinical information structured in an HL7 Clinical Document Architecture (CDA) and transform it into a data warehouse dimensional model schema. It is based on a conceptual framework, published in a previous work, that maps the dimensional model primitives to CDA elements. Its feasibility is demonstrated through a case study based on the analysis of vital signs gathered during laboratory tests.
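
    A toy version of the mapping idea: pull a coded observation from a CDA-like XML fragment and emit one dimensional-model row of dimensions plus a measure. The fragment is simplified (real CDA uses namespaces) and the target row layout is hypothetical.

        # Map a CDA-style observation element onto a fact-table row.
        import xml.etree.ElementTree as ET

        cda_fragment = """
        <observation>
          <code code="8480-6" displayName="Systolic blood pressure"/>
          <effectiveTime value="20150316"/>
          <value value="128" unit="mm[Hg]"/>
        </observation>
        """

        obs = ET.fromstring(cda_fragment)
        fact_row = {
            "dim_concept": obs.find("code").get("code"),             # dimension
            "dim_date":    obs.find("effectiveTime").get("value"),   # dimension
            "measure":     float(obs.find("value").get("value")),    # measure
            "unit":        obs.find("value").get("unit"),
        }
        print(fact_row)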

  20. Aspects of Data Warehouse Technologies for Complex Web Data

    OpenAIRE

    Thomsen, Christian

    2008-01-01

    This thesis is about aspects of specification and development of data warehouse technologies for complex web data. Today, large amounts of data exist in different web resources and in different formats. But it is often hard to analyze and query the often big and complex data or data about the data (i.e., metadata). It is therefore interesting to apply Data Warehouse (DW) technology to the data. But to apply DW technology to complex web data is not straightforward and the DW community faces new and ...

  1. Developing Mixed Reality Educational Applications: The Virtual Touch Toolkit

    Directory of Open Access Journals (Sweden)

    Juan Mateu

    2015-08-01

    Full Text Available In this paper, we present Virtual Touch, a toolkit that allows the development of educational activities through a mixed reality environment such that, using various tangible elements, the interconnection of a virtual world with the real world is enabled. The main goal of Virtual Touch is to facilitate the installation, configuration and programming of different types of technologies, abstracting the creator of educational applications from the technical details involving the use of tangible interfaces and virtual worlds. Therefore, it is specially designed to enable teachers to themselves create educational activities for their students in a simple way, taking into account that teachers generally lack advanced knowledge in computer programming and electronics. The toolkit has been used to develop various educational applications that have been tested in two secondary education high schools in Spain.

  2. ProtoMD: A prototyping toolkit for multiscale molecular dynamics

    Science.gov (United States)

    Somogyi, Endre; Mansour, Andrew Abi; Ortoleva, Peter J.

    2016-05-01

    ProtoMD is a toolkit that facilitates the development of algorithms for multiscale molecular dynamics (MD) simulations. It is designed for multiscale methods which capture the dynamic transfer of information across multiple spatial scales, such as the atomic to the mesoscopic scale, via coevolving microscopic and coarse-grained (CG) variables. ProtoMD can also be used to calibrate parameters needed in traditional CG-MD methods. The toolkit integrates 'GROMACS wrapper' to initiate MD simulations, and 'MDAnalysis' to analyze and manipulate trajectory files. It facilitates experimentation with a spectrum of coarse-grained variables, prototyping rare events (such as chemical reactions), or simulating nanocharacterization experiments such as terahertz spectroscopy, AFM, nanopore, and time-of-flight mass spectroscopy. ProtoMD is written in Python and is freely available under the GNU General Public License from github.com/CTCNano/proto_md.
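
    A small sketch of the kind of coarse-graining step such a toolkit automates, written directly against MDAnalysis, which ProtoMD integrates: reduce each protein residue to one bead at its center of mass. The file names are placeholders.

        # Residue-level coarse graining over the first frames of a trajectory.
        import MDAnalysis as mda

        u = mda.Universe("topology.gro", "trajectory.xtc")  # placeholder files
        protein = u.select_atoms("protein")

        for ts in u.trajectory[:10]:            # first 10 frames
            beads = [res.atoms.center_of_mass() for res in protein.residues]
            print("frame %d -> %d CG beads" % (ts.frame, len(beads)))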

  3. SlideToolkit: an assistive toolset for the histological quantification of whole slide images.

    Directory of Open Access Journals (Sweden)

    Bastiaan G L Nelissen

    Full Text Available The demand for accurate and reproducible phenotyping of a disease trait increases with the rising number of biobanks and genome-wide association studies. Detailed analysis of histology is a powerful way of phenotyping human tissues. Nonetheless, purely visual assessment of histological slides is time-consuming and liable to sampling variation, optical illusions and thereby observer variation, and external validation may be cumbersome. Therefore, within our own biobank, computerized quantification of digitized histological slides is often preferred as a more precise and reproducible, and sometimes more sensitive, approach. Relatively few free toolkits are, however, available for fully digitized microscopic slides, usually known as whole slide images. In order to meet this need, we developed the slideToolkit as a fast method to handle large quantities of low-contrast whole slide images using advanced cell-detecting algorithms. The slideToolkit has been developed for modern personal computers and high-performance clusters (HPCs) and is available as an open-source project on github.com. We here illustrate the power of slideToolkit by a repeated measurement of 303 digital slides containing CD3-stained (DAB) abdominal aortic aneurysm tissue from a tissue biobank. Our workflow consists of four consecutive steps. In the first step (acquisition), whole slide images are collected and converted to TIFF files. In the second step (preparation), files are organized. The third step (tiles) creates multiple manageable tiles to count. In the fourth step (analysis), tissue is analyzed and results are stored in a data set. Using this method, two consecutive measurements of 303 slides showed an intraclass correlation of 0.99. In conclusion, slideToolkit provides a free, powerful and versatile collection of tools for automated feature analysis of whole slide images to create reproducible and meaningful phenotypic data sets.
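
    Step three of the workflow can be illustrated with a few lines of Python; this sketch tiles a plain TIFF with Pillow, whereas the real slideToolkit handles pyramidal whole slide images and adds the cell-detection analysis. The file name is a placeholder.

        # Split a large image into fixed-size tiles for independent analysis.
        from PIL import Image

        def make_tiles(path, tile=2048):
            img = Image.open(path)
            w, h = img.size
            for x in range(0, w, tile):
                for y in range(0, h, tile):
                    box = (x, y, min(x + tile, w), min(y + tile, h))
                    yield img.crop(box)

        # e.g. count the tiles of a converted slide (placeholder file name)
        print(sum(1 for _ in make_tiles("slide_0001.tif")))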

  4. An order batching algorithm for wave picking in a parallel-aisle warehouse

    NARCIS (Netherlands)

    Gademann, A.J.R.M.; Berg, van den J.P.; Hoff, van der H.H.

    2001-01-01

    In this paper we address the problem of batching orders in a parallel-aisle warehouse, with the objective to minimize the maximum lead time of any of the batches. This is a typical objective for a wave picking operation. Many heuristics have been suggested to solve order batching problems. We
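
    The paper's own algorithm is not given in the abstract; as an illustration of the min-max objective, here is a generic greedy heuristic that assigns the longest orders first to the currently least-loaded batch, assuming the orders fit within the total batch capacity.

        # Greedy longest-first assignment to the least-loaded open batch.
        import heapq

        def batch_orders(pick_times, n_batches, capacity):
            heap = [(0.0, b) for b in range(n_batches)]   # (lead time, batch)
            heapq.heapify(heap)
            batches = [[] for _ in range(n_batches)]
            for oid in sorted(range(len(pick_times)),
                              key=lambda i: -pick_times[i]):
                skipped = []
                load, b = heapq.heappop(heap)
                while len(batches[b]) >= capacity:        # batch full: try next
                    skipped.append((load, b))
                    load, b = heapq.heappop(heap)
                batches[b].append(oid)
                heapq.heappush(heap, (load + pick_times[oid], b))
                for item in skipped:                      # restore full batches
                    heapq.heappush(heap, item)
            return batches, max(load for load, _ in heap)

        batches, makespan = batch_orders([4, 2, 7, 5, 3, 6],
                                         n_batches=2, capacity=4)
        print(batches, makespan)   # makespan = maximum batch lead time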

  5. BI Reporting, Data Warehouse Systems, and Beyond. CDS Spotlight Report. Research Bulletin

    Science.gov (United States)

    Lang, Leah; Pirani, Judith A.

    2014-01-01

    This Spotlight focuses on data from the 2013 Core Data Service [CDS] to better understand how higher education institutions approach business intelligence (BI) reporting and data warehouse systems (see the Sidebar for definitions). Information provided for this Spotlight was derived from Module 8 of CDS, which contains several questions regarding…

  6. Using Electronic Health Records to Build an Ophthalmologic Data Warehouse and Visualize Patients' Data.

    Science.gov (United States)

    Kortüm, Karsten U; Müller, Michael; Kern, Christoph; Babenko, Alexander; Mayer, Wolfgang J; Kampik, Anselm; Kreutzer, Thomas C; Priglinger, Siegfried; Hirneiss, Christoph

    2017-06-01

    To develop a near-real-time data warehouse (DW) in an academic ophthalmologic center to gain scientific use of increasing digital data from electronic medical records (EMR) and diagnostic devices. Database development. Specific macular clinic user interfaces within the institutional hospital information system were created. Orders for imaging modalities were sent by an EMR-linked picture-archiving and communications system to the respective devices. All data of 325 767 patients since 2002 were gathered in a DW running on an SQL database. A data discovery tool was developed. An exemplary search for patients with age-related macular degeneration, performed cataract surgery, and at least 10 intravitreal (excluding bevacizumab) injections was conducted. Data related to those patients (3 142 204 diagnoses [including diagnoses from other fields of medicine], 720 721 procedures [eg, surgery], and 45 416 intravitreal injections) were stored, including 81 274 optical coherence tomography measurements. A web-based browsing tool was successfully developed for data visualization and filtering data by several linked criteria, for example, minimum number of intravitreal injections of a specific drug and visual acuity interval. The exemplary search identified 450 patients with 516 eyes meeting all criteria. A DW was successfully implemented in an ophthalmologic academic environment to support and facilitate research by using increasing EMR and measurement data. The identification of eligible patients for studies was simplified. In future, software for decision support can be developed based on the DW and its structured data. The improved classification of diseases and semiautomatic validation of data via machine learning are warranted. Copyright © 2017 Elsevier Inc. All rights reserved.
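
    A hedged reconstruction of the exemplary cohort search as SQL over a deliberately simplified schema; the table and column names are hypothetical, not the warehouse's actual model.

        # AMD diagnosis + cataract surgery + >= 10 non-bevacizumab injections.
        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
        CREATE TABLE diagnoses  (patient_id, icd10);
        CREATE TABLE procedures (patient_id, code);
        CREATE TABLE injections (injection_id, patient_id, drug);
        """)
        con.execute("INSERT INTO diagnoses VALUES (1, 'H35.31')")   # AMD
        con.execute("INSERT INTO procedures VALUES (1, 'cataract_surgery')")
        con.executemany("INSERT INTO injections VALUES (?, 1, 'ranibizumab')",
                        [(i,) for i in range(12)])                  # 12 injections

        SQL = """
        SELECT d.patient_id
        FROM diagnoses d
        JOIN procedures p ON p.patient_id = d.patient_id
        JOIN injections i ON i.patient_id = d.patient_id
        WHERE d.icd10 LIKE 'H35.3%'         -- age-related macular degeneration
          AND p.code = 'cataract_surgery'
          AND i.drug <> 'bevacizumab'
        GROUP BY d.patient_id
        HAVING COUNT(DISTINCT i.injection_id) >= 10
        """
        print(con.execute(SQL).fetchall())   # -> [(1,)]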

  7. Graph algorithms in the titan toolkit.

    Energy Technology Data Exchange (ETDEWEB)

    McLendon, William Clarence, III; Wylie, Brian Neil

    2009-10-01

    Graph algorithms are a key component in a wide variety of intelligence analysis activities. The Graph-Based Informatics for Non-Proliferation and Counter-Terrorism project addresses the critical need of making these graph algorithms accessible to Sandia analysts in a manner that is both intuitive and effective. Specifically we describe the design and implementation of an open source toolkit for doing graph analysis, informatics, and visualization that provides Sandia with novel analysis capability for non-proliferation and counter-terrorism.

  8. Applications toolkit for accelerator control and analysis

    International Nuclear Information System (INIS)

    Borland, M.

    1997-01-01

    The Advanced Photon Source (APS) has taken a unique approach to creating high-level software applications for accelerator operation and analysis. The approach is based on self-describing data, modular program toolkits, and scripts. Self-describing data provide a communication standard that aids the creation of modular program toolkits by allowing compliant programs to be used in essentially arbitrary combinations. These modular programs can be used as part of an arbitrary number of high-level applications. At APS, a group of about 70 data analysis, manipulation, and display tools is used in concert with about 20 control-system-specific tools to implement applications for commissioning and operations. High-level applications are created using scripts, which are relatively simple interpreted programs. The Tcl/Tk script language is used, allowing creation of graphical user interfaces (GUIs) and a library of algorithms that are separate from the interface. This last factor allows greater automation of control by making it easy to take the human out of the loop. Applications of this methodology to operational tasks such as orbit correction, configuration management, and data review will be discussed.

  9. INNOVATIVE DEVELOPMENT OF WAREHOUSE TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    Judit OLÁH

    2017-12-01

    Full Text Available The smooth operation of stocking and the warehouse plays a very important role in all manufacturing companies; therefore, ongoing monitoring and the application of new techniques are essential to increase efficiency. The aim of our research is twofold: to assess the utilization of the pallet shuttle racking system, and to introduce a development opportunity by merging storage and order-picking operations in the pallet shuttle system. It can be concluded that it is beneficial for the company to purchase two mobile cars in order to increase the utilization of the pallet shuttle racking system from 60% to 72% and that of the storage from 74% to 76%. We established that, after merging the storage and order-picking activities within the pallet shuttle system, the forklift driver can also complete the selection activities immediately after storage. By merging the two operations and saving time, the number of forklift drivers can be reduced from 4 to 3 per shift.

  10. Harnessing the Power of Scientific Data Warehouses

    Directory of Open Access Journals (Sweden)

    Kevin Deeb

    2005-04-01

    Full Text Available Data warehousing architecture should generally protect the confidentiality of data before it can be published, provide sufficient granularity to enable scientists to variously manipulate data, support robust metadata services, and define standardized spatial components. Data can then be transformed into information that is readily available in a common format that is easily accessible and fast, and that bridges the islands of dispersed information. The benefits of the warehouse can be further enhanced by adding a spatial component so that the data can be brought to life, overlapping layers of information in a format that is easily grasped by management, enabling them to tease out trends in their areas of expertise.

  11. EasyInterface: A toolkit for rapid development of GUIs for research prototype tools

    OpenAIRE

    Doménech, Jesús; Genaim, Samir; Johnsen, Einar Broch; Schlatte, Rudolf

    2017-01-01

    In this paper we describe EasyInterface, an open-source toolkit for rapid development of web-based graphical user interfaces (GUIs). This toolkit addresses the need of researchers to make their research prototype tools available to the community, and to integrate them in a common environment, rapidly and without being familiar with web programming or GUI libraries in general. If a tool can be executed from a command line and its output goes to the standard output, then in a few minutes one can m...
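
    The core idea can be sketched in a few lines: wrap a command-line tool in a minimal web endpoint that runs it and returns its standard output. This illustrates the concept only and is not EasyInterface's actual API; the command and port are placeholders.

        # Serve a CLI tool's stdout over HTTP with the standard library.
        import subprocess
        from http.server import BaseHTTPRequestHandler, HTTPServer

        TOOL = ["echo", "hello from a research prototype"]  # placeholder command

        class ToolHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                out = subprocess.run(TOOL, capture_output=True, text=True).stdout
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.end_headers()
                self.wfile.write(out.encode())

        if __name__ == "__main__":
            HTTPServer(("localhost", 8000), ToolHandler).serve_forever()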

  12. NeuroPigPen: A Scalable Toolkit for Processing Electrophysiological Signal Data in Neuroscience Applications Using Apache Pig.

    Science.gov (United States)

    Sahoo, Satya S; Wei, Annan; Valdez, Joshua; Wang, Li; Zonjy, Bilal; Tatsuoka, Curtis; Loparo, Kenneth A; Lhatoo, Samden D

    2016-01-01

    The recent advances in neurological imaging and sensing technologies have led to rapid increase in the volume, rate of data generation, and variety of neuroscience data. This "neuroscience Big data" represents a significant opportunity for the biomedical research community to design experiments using data with greater timescale, large number of attributes, and statistically significant data size. The results from these new data-driven research techniques can advance our understanding of complex neurological disorders, help model long-term effects of brain injuries, and provide new insights into dynamics of brain networks. However, many existing neuroinformatics data processing and analysis tools were not built to manage large volume of data, which makes it difficult for researchers to effectively leverage this available data to advance their research. We introduce a new toolkit called NeuroPigPen that was developed using Apache Hadoop and Pig data flow language to address the challenges posed by large-scale electrophysiological signal data. NeuroPigPen is a modular toolkit that can process large volumes of electrophysiological signal data, such as Electroencephalogram (EEG), Electrocardiogram (ECG), and blood oxygen levels (SpO2), using a new distributed storage model called Cloudwave Signal Format (CSF) that supports easy partitioning and storage of signal data on commodity hardware. NeuroPigPen was developed with three design principles: (a) Scalability-the ability to efficiently process increasing volumes of data; (b) Adaptability-the toolkit can be deployed across different computing configurations; and (c) Ease of programming-the toolkit can be easily used to compose multi-step data processing pipelines using high-level programming constructs. The NeuroPigPen toolkit was evaluated using 750 GB of electrophysiological signal data over a variety of Hadoop cluster configurations ranging from 3 to 30 Data nodes. The evaluation results demonstrate that the toolkit

  13. A flexible open-source toolkit for lava flow simulations

    Science.gov (United States)

    Mossoux, Sophie; Feltz, Adelin; Poppe, Sam; Canters, Frank; Kervyn, Matthieu

    2014-05-01

    Lava flow hazard modeling is a useful tool for scientists and stakeholders confronted with imminent or long term hazard from basaltic volcanoes. It can improve their understanding of the spatial distribution of volcanic hazard, influence their land use decisions and improve the city evacuation during a volcanic crisis. Although a range of empirical, stochastic and physically-based lava flow models exists, these models are rarely available or require a large amount of physical constraints. We present a GIS toolkit which models lava flow propagation from one or multiple eruptive vents, defined interactively on a Digital Elevation Model (DEM). It combines existing probabilistic (VORIS) and deterministic (FLOWGO) models in order to improve the simulation of lava flow spatial spread and terminal length. Not only is this toolkit open-source, running in Python, which allows users to adapt the code to their needs, but it also allows users to combine the models included in different ways. The lava flow paths are determined based on the probabilistic steepest slope (VORIS model - Felpeto et al., 2001) which can be constrained in order to favour concentrated or dispersed flow fields. Moreover, the toolkit allows including a corrective factor in order for the lava to overcome small topographical obstacles or pits. The lava flow terminal length can be constrained using a fixed length value, a Gaussian probability density function or can be calculated based on the thermo-rheological properties of the open-channel lava flow (FLOWGO model - Harris and Rowland, 2001). These slope-constrained properties allow estimating the velocity of the flow and its heat losses. The lava flow stops when its velocity is zero or the lava temperature reaches the solidus. Recent lava flows of Karthala volcano (Comoros islands) are here used to demonstrate the quality of lava flow simulations with the toolkit, using a quantitative assessment of the match of the simulation with the real lava flows. The
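
    The probabilistic steepest-slope step can be sketched as a weighted random walk on a DEM; the DEM and parameters below are toy values, not the VORIS or FLOWGO implementations.

        # Random walk downhill: each lower neighbour is chosen with
        # probability proportional to its height drop.
        import numpy as np

        rng = np.random.default_rng(1)
        x = np.linspace(1.0, 0.0, 50)                 # west high, east low
        dem = x[None, :] + 0.01 * rng.random((50, 50))

        def lava_path(dem, start, max_len=200):
            path, (r, c) = [start], start
            rows, cols = dem.shape
            for _ in range(max_len):
                nbrs = [(r + dr, c + dc)
                        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                        if (dr or dc) and 0 <= r + dr < rows and 0 <= c + dc < cols]
                drops = np.array([max(dem[r, c] - dem[n], 0.0) for n in nbrs])
                if drops.sum() == 0:                  # local pit: the flow stops
                    break
                r, c = nbrs[rng.choice(len(nbrs), p=drops / drops.sum())]
                path.append((r, c))
            return path

        print(len(lava_path(dem, (25, 0))))           # cells the flow visited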

  14. Avaliação de Desempenho de Projetos de Data Warehouse

    Directory of Open Access Journals (Sweden)

    Diogo Everson Santos

    2006-08-01

    Full Text Available This article proposes a methodology for evaluating the performance of the Data Warehouse (DW) project of Sebrae MG. A DW is a large corporate database fed by the organization's business information systems, often in distinct formats and architectures, growing over time, integrated and business-oriented, whose function is to support management decisions (INMON, 1996). The proposed evaluation methodology is based on three elements - people, processes and techniques (RUBIO and BIRTH, 2005) - and uses Shenhar's (2001) conceptualization of project performance. Thus, the judgment on the degree of success (or failure) of a DW project will depend on the perception of those involved (the stakeholders) and on the different moments at which the evaluation is made. The performance evaluation of DW projects begins during the development of the project and extends until after its conclusion. In the initial phases, the aim is to identify needed course corrections, in order to secure the short-term benefits and, by means of estimates of future impacts, the medium- and long-term benefits as well.

  15. Toolkit for healthcare facility design evaluation - some case studies

    CSIR Research Space (South Africa)

    De Jager, Peta

    2007-10-01

    Full Text Available themes in approach. Further study is indicated, but preliminary research shows that, whilst these toolkits can be applied to the South African context, there are compelling reasons for them to be adapted. This paper briefly outlines these three case...

  16. Toolkit for healthcare facility design evaluation - some case studies.

    CSIR Research Space (South Africa)

    De Jager, Peta

    2007-10-01

    Full Text Available themes in approach. Further study is indicated, but preliminary research shows that, whilst these toolkits can be applied to the South African context, there are compelling reasons for them to be adapted. This paper briefly outlines these three case...

  17. New Careers in Nursing Scholar Alumni Toolkit: Development of an Innovative Resource for Transition to Practice.

    Science.gov (United States)

    Mauro, Ann Marie P; Escallier, Lori A; Rosario-Sim, Maria G

    2016-01-01

    The transition from student to professional nurse is challenging and may be more difficult for underrepresented minority nurses. The Robert Wood Johnson Foundation New Careers in Nursing (NCIN) program supported development of a toolkit that would serve as a transition-to-practice resource to promote retention of NCIN alumni and other new nurses. Thirteen recent NCIN alumni (54% male, 23% Hispanic/Latino, 23% African Americans) from 3 schools gave preliminary content feedback. An e-mail survey was sent to a convenience sample of 29 recent NCIN alumni who evaluated the draft toolkit using a Likert scale (poor = 1; excellent = 5). Twenty NCIN alumni draft toolkit reviewers (response rate 69%) were primarily female (80%) and Hispanic/Latino (40%). Individual chapters' mean overall rating of 4.67 demonstrated strong validation. Mean scores for overall toolkit content (4.57), usability (4.5), relevance (4.79), and quality (4.71) were also excellent. Qualitative comments were analyzed using thematic content analysis and supported the toolkit's relevance and utility. A multilevel peer review process was also conducted. Peer reviewer feedback resulted in a 6-chapter document that offers resources for successful transition to practice and lays the groundwork for continued professional growth. Future research is needed to determine the ideal time to introduce this resource. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Analysis of Relationship between Levofloxacin and Corrected QT Prolongation Using a Clinical Data Warehouse.

    Science.gov (United States)

    Park, Man Young; Kim, Eun Yeob; Lee, Young Ho; Kim, Woojae; Kim, Ku Sang; Sheen, Seung Soo; Lim, Hong Seok; Park, Rae Woong

    2011-03-01

    The aim of this study was to examine whether or not levofloxacin has any relationship with QT prolongation in a real clinical setting by analyzing a clinical data warehouse of data collected from different hospital information systems. Electronic prescription data and medical charts from 3 different hospitals spanning the past 9 years were reviewed, and a clinical data warehouse was constructed. Patients who were both administered levofloxacin and given electrocardiograms (ECG) were selected. The correlations between various patient characteristics, concomitant drugs, corrected QT (QTc) prolongation, and the interval difference in QTc before and after levofloxacin administration were analyzed. A total of 2,176 patients from 3 different hospitals were included in the study. QTc prolongation was found in 364 patients (16.7%). The study revealed that age (OR 1.026, p ... Such data seem to be essential to adverse drug reaction surveillance in the future.

  19. RGtk2: A Graphical User Interface Toolkit for R

    Directory of Open Access Journals (Sweden)

    Duncan Temple Lang

    2011-01-01

    Full Text Available Graphical user interfaces (GUIs) are growing in popularity as a complement or alternative to the traditional command-line interfaces to R. RGtk2 is an R package for creating GUIs in R. The package provides programmatic access to GTK+ 2.0, an open-source GUI toolkit written in C. To construct a GUI, the R programmer calls RGtk2 functions that map to functions in the underlying GTK+ library. This paper introduces the basic concepts underlying GTK+ and explains how to use RGtk2 to construct GUIs from R. The tutorial is based on simple and practical programming examples. We also provide more complex examples illustrating the advanced features of the package. The design of the RGtk2 API and the low-level interface from R to GTK+ are discussed at length. We compare RGtk2 to alternative GUI toolkits for R.

  20. A methodological toolkit for field assessments of artisanally mined alluvial diamond deposits

    Science.gov (United States)

    Chirico, Peter G.; Malpeli, Katherine C.

    2014-01-01

    This toolkit provides a standardized checklist of critical issues relevant to artisanal mining-related field research. An integrated sociophysical geographic approach to collecting data at artisanal mine sites is outlined. The implementation and results of a multistakeholder approach to data collection, carried out in the assessment of Guinea’s artisanally mined diamond deposits, also are summarized. This toolkit, based on recent and successful field campaigns in West Africa, has been developed as a reference document to assist other government agencies or organizations in collecting the data necessary for artisanal diamond mining or similar natural resource assessments.